https://www.computeralgebra.de/sfb/our-news/fourth-annual-conference-of-the-sfb-trr-195/
Fourth annual conference of the SFB-TRR 195

The Fourth annual conference of the SFB-TRR 195 took place virtually on Sept. 22–24, 2020. The goal of the conference was to bring together people from different areas in algebra who apply computational methods in their research. The main organizer was Ulrich Thiel (TU Kaiserslautern), see contact details below. Slides and recordings can be found in the list of talks below.

Speakers

From outside the SFB-TRR:
Philippe Biane, Université Paris-Est
Tim Dokchitser, University of Bristol
Olivier Dudas, Université Paris Diderot
Gavril Farkas, Humboldt-Universität zu Berlin
Lars Thorge Jensen, University of Clermont Auvergne
Eric Katz, The Ohio State University
Yue Ren, Swansea University
Michael Stillman, Cornell University
Nicolas Thiéry, Université Paris Sud

From within the SFB-TRR:
Simon Brandhorst, Universität des Saarlandes
Claus Fieker, TU Kaiserslautern
Anne Frühbis-Krüger, Universität Oldenburg
Johannes Flake, RWTH Aachen
Laura Maaßen, RWTH Aachen
Pascal Schweitzer, TU Kaiserslautern
Andrea Thevis, Universität des Saarlandes
Gabriela Weitze-Schmithüsen, Universität des Saarlandes

Schedule

You can find the schedule here (PDF).

Talks

Philippe Biane, Université Paris-Est
Free Lévy processes
There is a notion of processes with free increments in free probability theory which parallels that of processes with independent increments in classical probability. When one tries to characterize those which are time homogeneous (like Lévy processes in classical probability theory) one finds that there are two natural classes: those with homogeneous increments and those with homogeneous transition probabilities. I will explain how these classes are parameterized by convex sets of analytic functions on the upper half-plane.
Tim Dokchitser, University of Bristol
I will review available packages for p-adic numbers and their extensions in computer algebra systems, focussing on the recent one in Magma by Christopher Doris. It provides lazy exact arithmetic, and is the first native implementation for provable p-adic computations.

Olivier Dudas, Université Paris Diderot
Computing decomposition numbers for finite unitary groups (work in progress with R. Rouquier)
In this talk I will present a computational (yet conjectural) method to determine some decomposition matrices for finite groups of Lie type. I will first explain how one can produce a “natural” self-equivalence in the case of $\mathrm{GL}_n(q)$ coming from the topology of the Hilbert scheme of $\mathbb{C}^2$. The combinatorial part of this equivalence is related to Macdonald’s theory of symmetric functions and gives $(q,t)$-decomposition numbers. The evidence suggests that the case of finite unitary groups is obtained by taking a suitable square root of that equivalence.

Gavril Farkas, Humboldt-Universität zu Berlin
Green’s Conjecture via Koszul modules
Using ideas from geometric group theory we provide a novel approach to Green’s Conjecture on syzygies of canonical curves. Via a strong vanishing result for Koszul modules we deduce that a general canonical curve of genus g satisfies Green’s Conjecture when the characteristic is zero or at least (g+2)/2. Our results are new in positive characteristic (and answer positively a conjecture of Eisenbud and Schreyer), whereas in characteristic zero they provide a different proof for theorems first obtained in two landmark papers by Voisin. Joint work with Aprodu, Papadima, Raicu and Weyman.

Lars Thorge Jensen, University of Clermont Auvergne
How to teach a computer the Elias-Williamson graphical calculus?
[Slides]
Based on recent work of Achar-Makisumi-Riche-Williamson, one can calculate tilting characters of a reductive algebraic group in positive characteristic p using the p-canonical/p-Kazhdan-Lusztig basis of the anti-spherical module. To calculate the p-Kazhdan-Lusztig basis one needs to calculate ranks of certain Hom-pairings (called intersection forms) in the diagrammatic Hecke category. Unfortunately, string diagrams are well suited for calculations by hand, but how does one approach this problem using a computer? I will explain an algorithm to calculate the p-canonical basis of the anti-spherical module (joint work with Geordie Williamson), which allows one to replicate in a couple of days the 10-month calculations Williamson performed for his Billiards conjecture.

Eric Katz, The Ohio State University
Coleman’s theory of p-adic integration is important for finding rational and torsion points on curves. An algorithm for computing Coleman integrals on good reduction hyperelliptic curves was introduced in work of Balakrishnan, Bradshaw, and Kedlaya and was refined by many others. We discuss work with Enis Kaya on extending this algorithm to bad reduction hyperelliptic curves. For bad reduction curves, there are two notions of p-adic integration: Berkovich–Coleman integrals which can be performed locally; and abelian integrals with desirable number-theoretic properties. We discuss how to compute Berkovich–Coleman integrals and then relate them to abelian integrals by using tropical geometric techniques.

Yue Ren, Swansea University
Tropical varieties of neural networks [Slides]
In this talk, we introduce tropical varieties arising from neural networks with piecewise linear activations, and discuss how their geometry affects their expressivity. In particular, we will use Weibel’s f-vector theorem to derive optimal bounds for single-layered maxout networks, and Speyer’s f-vector theorem to analyse networks with heavily restricted weights.
We conclude with an initializing strategy for maxout networks based on our results.

Nicolas Thiéry, Université Paris Sud
Categories, axioms, constructions in SageMath: Modeling mathematics for fun and profit
The SageMath system provides thousands of mathematical objects and tens of thousands of operations to compute with them. A system of this scale requires some infrastructure for writing and structuring generic code, documentation, and tests that apply uniformly to all objects within certain realms. In this talk, we describe the infrastructure implemented in SageMath. It is based on the standard object oriented features of Python, together with mechanisms to scale (dynamic classes, mixins, …) thanks to the rich available semantics (categories, axioms, constructions). We compare the approach taken with that of other systems (e.g. GAP), and discuss open problems. This is meant as a basis for discussions: how are the equivalent challenges tackled in OSCAR? Is there ground for cross-fertilization?

Michael Stillman, Cornell University
Computational algebraic geometry, applications in string theory, and Macaulay2
In this talk we describe some open problems and interesting questions in computational algebraic geometry motivated by investigations in string theory, whose solutions would be useful to researchers in string theory. We describe recent work done in collaboration with Liam McAllister, Cody Long, Andreas Braun and others in this domain, and we keep it concrete by giving examples using Macaulay2.

Simon Brandhorst, Universität des Saarlandes
Equations for the K3-Lehmer map [Slides]
The dynamical complexity of an automorphism of a complex surface is measured by its topological entropy. The entropy is the logarithm of a Salem number, that is, a real algebraic integer $\lambda>1$ which is conjugate to $1/\lambda$ and all of whose other conjugates lie on the unit circle. Conjecturally the smallest Salem number is Lehmer’s number $\lambda_{10}$.
Lehmer’s conjecture is true for entropies: $\log(\lambda_{10})$ is the minimum among entropies of automorphisms of complex surfaces. In a series of papers McMullen proved the existence of such an automorphism on a complex projective K3 surface. His strategy combines ideas from integer programming with the theory of lattices, number fields and reflection groups. The final step of the proof relies on the Torelli-type Theorem for K3 surfaces, which is non-constructive. In this talk I present equations for this automorphism. To find it we used Kneser’s neighbor method, elliptic fibrations and their linear systems, finite non-symplectic automorphisms and p-adic lifting. This is joint work in progress with Noam D. Elkies.

Claus Fieker, TU Kaiserslautern
OSCAR: current, plans and dreams [Slides]
OSCAR is the next generation computer algebra system developed in the central software project of the SFB-TRR 195. Written in the (rather young) Julia language, it combines the features and capabilities of Singular, GAP and Polymake with the newly created number theory package Hecke. In this talk I will showcase some achievements, give an overview of the current status and indicate our current (immediate) plans and projects. I will conclude with some dreams about the long-term prospects.

Johannes Flake, RWTH Aachen
PBW deformations of smash products and computer algebra
PBW deformations of smash products form large classes of algebra deformations which include graded affine Hecke algebras, rational Cherednik algebras, symplectic reflection algebras, current Lie algebras, and many more of your favorite algebraic objects as special cases. I will explain how they can be studied from a general point of view, why some people find them interesting, why I find them interesting, and how I think computer algebra can help everybody to understand them better.
Anne Frühbis-Krüger, Universität Oldenburg
Zeta-functions, p-adic integrals and simultaneous monomialization
In this talk, we will discuss an approach to a class of order-zeta-functions through computation of p-adic integrals, whose domain of integration is rather far from being monomial. To render these integrals accessible to explicit computation, they need to be split up into a sum of simpler integrals. In our case, the case distinction and simplification rely on a specific variant of resolution of singularities to obtain monomial conditions on the domains of integration.

Laura Maaßen, RWTH Aachen
Interpolating Partition Categories [Slides]
In this talk we introduce tensor categories which interpolate the representation categories of partition quantum groups, which we view as subcategories of Deligne’s interpolation categories $\underline{\mathrm{Rep}(S_t)}$ for the symmetric groups. We compute the set of interpolation parameters yielding semisimple interpolation categories for all group-theoretical quantum groups, an uncountable family containing all but countably many partition quantum groups. A crucial ingredient is an abstract analysis of certain subobject lattices developed by Knop, which we adapt to categories of partitions. We go on to present a parametrisation of the indecomposable objects for non-zero interpolation parameter using the representation theory for partition quantum groups developed by Freslon and Weber. This yields a description of the associated graded rings of the Grothendieck rings. This is joint work with Johannes Flake.

Pascal Schweitzer, TU Kaiserslautern
Computing Symmetries: Isomorphism, Automorphism and Canonization of Graphs versus other Combinatorial Objects
The graph isomorphism problem, which asks for the existence of an isomorphism between two given finite input graphs, is known to be equivalent to the problem of computing automorphisms of graphs. One commonly applied method to solve this problem is via canonization.
In this talk I will relate the isomorphism problem of graphs to that of computing symmetries of other combinatorial objects. In fact, there are general techniques to reinterpret finite combinatorial objects as graphs, while preserving their symmetry structure. This makes the graph isomorphism problem universal for isomorphism and automorphism problems of explicitly given combinatorial objects. However, for implicitly given objects, such as those given by generating sets, the matter is different. Describing joint work with Daniel Wiebking, the talk explains how a unified view of implicit combinatorial objects gives improved algorithms. This is in particular the case for canonization algorithms, and this view has subsequently also found applications in the computation of normalizers for permutation groups.

Andrea Thevis, Universität des Saarlandes
$p$-Origamis: Strata, Veech Groups and Sums of Lyapunov Exponents [Slides]
In this talk, we study a certain class of translation surfaces called $p$-origamis. These surfaces arise as normal covers of the torus with $p$-groups as deck transformation group. We classify the types of singularities of $p$-origamis and show that these depend in most cases only on the isomorphism class of the deck transformation group. For this, we use the rich theory of $p$-groups. Veech groups of $p$-origamis are finite index subgroups of $\mathrm{SL}(2,\mathbb{Z})$ and capture a lot of information about the respective surfaces. We describe first results regarding Veech groups of $p$-origamis. Using these results, we compute the sum of Lyapunov exponents for certain example series of $p$-origamis. This is partially joint work with Johannes Flake.

Gabriela Weitze-Schmithüsen, Universität des Saarlandes
Systoles on Origami Translation Surfaces [Slides]
A finite translation surface is a closed surface X together with the choice of a holomorphic differential.
The moduli space of translation surfaces of genus g is stratified by the orders of the zeroes of the differentials. Although translation surfaces have been intensively studied since the 1980s, there are natural questions which are still wide open. One of these questions is: does there exist in each stratum a translation surface with a maximal systolic ratio, i.e. with a maximal shortest curve relative to the area? We study this question in the stratum $H_2(1,1)$ of genus 2 surfaces with two zeroes of order 1. This is joint work with Columbus, Herrlich and Mützel.

Confirmed Participants
Total number: 107
Firoozeh Aga, Saarland University Aslam Ali, TU Kaiserslautern George Balla, RWTH Aachen Mohamed Barakat, University of Siegen Reimer Behrends, TU Kaiserslautern Marc Bellon, CNRS et Sorbonne Université Dominik Bernhardt, RWTH Aachen Philippe Biane, CNRS, LIGM Université Paris Est Janko Böhm, TU Kaiserslautern Jendrik Brachter, TU Kaiserslautern Simon Brandhorst, Saarland University Jens Brandt, RWTH Aachen Sofia Brenner, Friedrich-Schiller-Universität Jena Thomas Breuer, RWTH Aachen University Eirini Chavli, Universität Stuttgart Michael Cuntz, Leibniz Universität Hannover Wolfram Decker, TU Kaiserslautern Tim Dokchitser, University of Bristol Gérard Duchamp, Université Paris 13 Olivier Dudas, CNRS and Université de Paris Holger Eble, TU Berlin Kurusch Ebrahimi-Fard, Norwegian University of Science and Technology Gavril Farkas, Humboldt-Universität zu Berlin Claus Fieker, TU Kaiserslautern Johannes Flake, RWTH Aachen Anne Frühbis-Krüger, Universität Oldenburg Nan Gao, Shanghai University Sabrina Gaube, UOL / LUH Meinolf Geck, University of Stuttgart Christoph Goldner, Tübingen University Pierre Guillot, Université de Strasbourg Melanie Harms, RWTH Aachen University William Hart, TU Kaiserslautern Jonas Hetz, University of Stuttgart Tommy Hofmann, Universität des Saarlandes Johannes Hoffmann, Universität des Saarlandes Max Horn, TU Kaiserslautern Jens Hubrich, TU
Kaiserslautern Lars Thorge Jensen, Université Clermont Auvergne Birte Johansson, TU Kaiserslautern Pooja Joshi, IISER Bhopal Kunda Kambaso, RWTH Aachen Lars Kastner, TU Berlin Eric Katz, The Ohio State University Enis Kaya, University of Groningen Hanieh Keneshlou, IMPAN Matthias Klupsch, RWTH Aachen Michael Kunte, TUK Caroline Lassueur, TU Kaiserslautern Felix Leid, Saarland University Viktor Levandovskyy, RWTH Aachen Benjamin Lorenz, TU Berlin Frank Lübeck, RWTH Aachen Laura Maaßen, RWTH Aachen University Antonio Macchia, Freie Universität Berlin Verity Mackscheidt, RWTH Aachen Gunter Malle, TU Kaiserslautern Dario Mathiä, University of Kaiserslautern Aleksander Morgan, RWTH Aachen Gabriele Nebe, RWTH Aachen University Alice Niemeyer, RWTH Aachen Emily Norton, TU Kaiserslautern Gyan Datt Panday, PRSU, Prayagraj Pierre-Guy Plamondon, Université Paris-Saclay Sebastian Posur, RWTH Aachen Ludwig Rahm, Norwegian University of Science and Technology NTNU Iryna Raievska, Institute of Mathematics of National Academy of Sciences of Ukraine Maryna Raievska, Institute of Mathematics of National Academy of Sciences of Ukraine Yue Ren, Swansea University Lukas Ristau, TU Kaiserslautern Liam Rogel, TU Kaiserslautern Emil Rotilio, TUK Mahsa Sayyary Namin, MPI Leipzig Johannes Schmitt, TU Kaiserslautern Leonard Schmitz, RWTH Aachen Hans Schönemann, TU Kaiserslautern Mathias Schulze, TU Kaiserslautern Pascal Schweitzer, TU Kaiserslautern Farideh Shafiei, Institute for Research in Fundamental Sciences (IPM) Vishal Shankhaval, Ganpat University Farrokh Shirjian, Tarbiat Modares University Carlo Sircana, TU Kaiserslautern Roland Speicher, Saarland University Mima Stanojkovski, Max-Planck-Institut fuer Mathematik in den Naturwissenschaften Michael Stillman, Cornell University Bernd Sturmfels, MPI Leipzig Jayantha Suranimalee, TU Kaiserslautern Melis Tekin Akcin, Hacettepe University Nicolas Thiéry, Université Paris Sud Ayush Kumar Tewari, TU Berlin Andrea Thevis, Universität des 
Saarlandes Ulrich Thiel, TU Kaiserslautern Paul Vater, MPI MiS Leipzig Claude Viallet, Centre National de la Recherche Scientifique / Sorbonne Université Laura Voggesberger, TU Kaiserslautern Maria Walch, DFKI/IAV/TUK Moritz Weber, Saarland University Yvonne Weber, Technische Universität Kaiserslautern Gabriela Weitze-Schmithüsen, Universität des Saarlandes Oguzhan Yürük, TU Braunschweig Eva Zerz, RWTH Aachen Yinan Zhang, Australian National University

Contact
If you have any questions, please send an email to trr195-conference at mathematik.uni-kl.de.
http://mathhelpforum.com/algebra/66979-solved-partial-fraction-decomposition.html
# Math Help - [SOLVED] partial fraction decomposition

1. ## [SOLVED] partial fraction decomposition

i have to write the partial fraction decomposition of this and i really don't know how to begin. (x^2 - x)/(x^2 + x + 1)

2. Originally Posted by jbeybey
i have to write the partial fraction decomposition of this and i really don't know how to begin. (x^2 - x)/(x^2 + x + 1)

Start by noting that:

1. $\frac{x^2 - x}{x^2 + x + 1} = 1 - \frac{2x + 1}{x^2 + x + 1}$.

2. $x^2 + x + 1$ is an irreducible quadratic (what do your class notes or textbook say to do in this case?)

3. Originally Posted by jbeybey
i have to write the partial fraction decomposition of this and i really don't know how to begin. (x^2 - x)/(x^2 + x + 1)

Since the numerator has the same degree as the denominator, the first thing you will want to do is divide it out. $\frac{x^2 - x}{x^2 + x + 1} = 1 - \frac{2x + 1}{x^2 + x + 1}$ Since the denominator cannot be factored (by the quadratic formula it has no real roots), that is the "partial fraction decomposition".
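The polynomial division both replies describe can be checked with a computer algebra system. Here is a quick sketch using SymPy (just one convenient choice of CAS, not part of the original thread):

```python
# Check the thread's partial fraction decomposition with SymPy.
import sympy as sp

x = sp.symbols('x')
f = (x**2 - x) / (x**2 + x + 1)

# Numerator and denominator have the same degree, so divide first:
q, r = sp.div(x**2 - x, x**2 + x + 1, x)  # quotient 1, remainder -(2x + 1)
decomp = sp.apart(f, x)                   # equals 1 - (2x + 1)/(x^2 + x + 1)

print(q, r)
print(decomp)
assert sp.simplify(decomp - f) == 0  # the decomposition agrees with f
```

Since the discriminant of $x^2+x+1$ is $1-4=-3<0$, the quadratic is irreducible over the reals and `apart` cannot split it any further, confirming the answers above.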
http://djalil.chafai.net/blog/
# Libres pensées d'un mathématicien ordinaire

This tiny post is adapted from the introduction of a recent work with David García-Zelada and Paul Jung on the macroscopics and edge of a planar jellium seen as a Coulomb gas.

Potential theory. The Coulomb kernel $g$ in $\mathbb{R}^d$, $d\geq1$, is given for all $x\in\mathbb{R}^d$ by $g(x)=\begin{cases}\displaystyle\log\frac{1}{|x|}&\text{if }d=2\\[1em]\displaystyle\frac{1}{(d-2)|x|^{d-2}}&\text{if }d\neq2\end{cases}.$ The Coulomb potential at point $x$ generated by a distribution of charges, say electrons, modeled by a probability measure $\mu$ on $\mathbb{R}^d$ is defined by $U_\mu(x)=(g*\mu)(x)=\int g(x-y)\mathrm{d}\mu(y)\in(-\infty,+\infty].$ We have $U_\mu\in\mathrm{L}^1_{\mathrm{loc}}(\mathrm{d}x)$, and the identity $-\Delta g=c_d\delta_0$, where $c_d=d\omega_d$ is the surface of the unit ball and $\omega_d$ its volume, gives the inversion formula $-\Delta U_\mu=c_d\mu.$ In particular $U_\mu$ is super-harmonic in the sense that $\Delta U_\mu\leq0$ since $g$ is super-harmonic. The Coulomb self-interaction energy of the distribution of charges $\mu$ is defined when it makes sense by $\mathcal{E}(\mu)=\frac{1}{2}\iint g(x-y)\mathrm{d}\mu(x)\mathrm{d}\mu(y)=\frac{1}{2}\int U_\mu(x)\mathrm{d}\mu(x).$ Let $V:\mathbb{R}^d\to\mathbb{R}\cup\{+\infty\}$ be a lower semi-continuous function playing the role of an external potential, producing an external electric field $-\nabla V$. If $V$ grows faster than $g$ at infinity, the Coulomb energy $\mathcal{E}_V$ with external field is defined by $\mu\mapsto\mathcal{E}_V(\mu)=\mathcal{E}(\mu)+\int V\mathrm{d}\mu.$

Coulomb gases.
A Coulomb gas in $\mathbb{R}^d$ with $n$ particles, potential $V$, and inverse temperature $\beta\geq0$ is the exchangeable Boltzmann–Gibbs law on $(\mathbb{R}^d)^n$ with density proportional to $\mathrm{e}^{-\beta E_n(x_1,\ldots,x_n)}\quad\text{where}\quad E_n(x_1,\ldots,x_n)=\sum_{i<j}g(x_i-x_j)+n\sum_{i=1}^nV(x_i).$ It models a gas of unit charged particles, or more precisely a random configuration of unit charged particles. We should keep in mind that we play here with electrostatics rather than with electrodynamics, and that thus we do not have a magnetic field.

Wigner jellium. Let us consider $n$ unit negatively charged particles (electrons) at positions $x_1,\ldots,x_n$ in $\mathbb{C}$, lying in a positive background of total charge $\alpha>0$ smeared according to a probability measure $\rho$ on $\mathbb{C}$ with finite Coulomb energy $c=\mathcal{E}(\rho)$. We could alternatively suppose that the particles are positively charged (ions) and the background is negatively charged (electrons); this reversed choice would not affect the analysis of the model. The total energy of the system, counting each pair a single time, is given by $\sum_{i<j}g(x_i-x_j)-\alpha\sum_{i=1}^nU_\rho(x_i)+\alpha^2c.$ This matches the Coulomb energy of a Coulomb gas with $V=-\frac{\alpha}{n}U_\rho$. This observation leads us to define the jellium model on $S\subset\mathbb{R}^d$ with background charge $\alpha>0$ and background distribution $\rho$ with $\mathrm{supp}\rho\subset S$ as being the Coulomb gas on $\mathbb{R}^d$, with potential $V$ given by $V=\begin{cases}-\frac{\alpha}{n}U_\rho&\text{on }S\\+\infty&\text{on }S^c.\end{cases}$ We say that the system is charge neutral when $\alpha=n$. We say that it is uniform when $\rho$ is the uniform distribution on some compact subset of $\mathbb{R}^d$. The great majority of jellium models studied in the literature are charge neutral and satisfy $S=\mathrm{supp}\rho$.
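For readers who want to experiment with such a gas numerically, here is a minimal Metropolis sketch (my own illustration, not taken from the paper) for the planar case $d=2$ with $g=\log\frac{1}{|\cdot|}$ and quadratic confinement $V=|\cdot|^2$; the particle number, temperature, and step size are arbitrary choices:

```python
# Metropolis sampling of the Boltzmann-Gibbs law e^{-beta E_n} with
# E_n(x_1,...,x_n) = sum_{i<j} g(x_i - x_j) + n sum_i V(x_i),
# g = log(1/|.|) in d = 2 and V = |.|^2 (all parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def energy(pts, n):
    # pts: complex array of the n particle positions, identifying R^2 with C
    dist = np.abs(pts[:, None] - pts[None, :])
    iu = np.triu_indices(n, k=1)
    interaction = -np.log(dist[iu]).sum()       # sum_{i<j} log(1/|x_i - x_j|)
    confinement = n * (np.abs(pts) ** 2).sum()  # n sum_i V(x_i)
    return interaction + confinement

def metropolis(n=30, beta=2.0, steps=20000, step=0.1):
    pts = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    e = energy(pts, n)
    for _ in range(steps):
        i = rng.integers(n)
        prop = pts.copy()
        prop[i] += step * (rng.standard_normal() + 1j * rng.standard_normal())
        e_new = energy(prop, n)
        # accept with probability min(1, e^{-beta (e_new - e)})
        if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
            pts, e = prop, e_new
    return pts

pts = metropolis()
print(np.abs(pts).max())  # the confinement keeps all particles near the origin
```

Recomputing the full energy at each step is wasteful; updating only the terms involving the moved particle would be the natural optimization for larger $n$.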
Conversely, a Coulomb gas with sub-harmonic potential $V$ (meaning $\Delta V\geq0$) can be seen as a jellium with background $\rho=\frac{\Delta V}{c_d\alpha}\mathrm{d}x$ on $S=\mathbb{R}^d$. When $V$ is not sub-harmonic then $\rho$ is no longer a positive measure, but we can still interpret it as a background with opposite charge on $\{\Delta V<0\}$. The complex Ginibre ensemble is a famous Coulomb gas with $d=2\quad\text{and}\quad V=\left|\cdot\right|^2,$ for which $\Delta V$ is constant, leading to an interpretation of this Coulomb gas as a degenerate jellium on the full space with Lebesgue background. This Coulomb gas or Wigner jellium describes the eigenvalues of a Gaussian random complex $n\times n$ matrix $A$ with density proportional to $\exp(-n\,\mathrm{Trace}(A\bar{A}^\top))$. Equivalently, the entries of $A$ are independent and identically distributed with independent real and imaginary parts having a Gaussian law of mean $0$ and variance $1/(2n)$. This gas or jellium also appears in various other places in the mathematical physics literature, for instance as the modulus of the wave function in Laughlin’s model of the fractional quantum Hall effect, in the description of the vortices in the Ginzburg-Landau model of superconductivity, and in a model of rotating trapped fermions. The Forrester-Krishnapur spherical ensemble is another remarkable Coulomb gas with $d=2\quad\text{and}\quad V=\Bigl(1+\frac{1}{n}\Bigr)\log(1+\left|\cdot\right|^2),$ for which $\Delta V=4(1+1/n)/(1+\left|\cdot\right|^2)^2$, leading to an interpretation of this Coulomb gas as a jellium on the full space with a heavy tailed background. The name of this gas comes from the fact that it is the image by the stereographic projection of the Coulomb gas on the sphere, with constant potential, onto the complex plane. This Coulomb gas describes the eigenvalues of $AB^{-1}$ where $A$ and $B$ are two independent copies of complex Ginibre random matrices.
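The last statement is easy to try numerically; the sketch below (my illustration, with arbitrary seed and matrix size) samples the spherical ensemble as the eigenvalues of $AB^{-1}$:

```python
# Sampling the spherical ensemble as eigenvalues of A B^{-1}, with A and B
# independent complex Ginibre matrices (the common entry variance cancels in A B^{-1}).
import numpy as np

rng = np.random.default_rng(1)
n = 200

def ginibre(n):
    # i.i.d. complex Gaussian entries: real and imaginary parts of variance 1/(2n)
    return (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

A, B = ginibre(n), ginibre(n)
eig = np.linalg.eigvals(A @ np.linalg.inv(B))

# Under stereographic projection of the uniform law on the sphere, half of the
# mass lands in the unit disk, so the median modulus should be close to 1,
# while the radial distribution is heavy tailed.
print(np.median(np.abs(eig)))
```

The heavy tail is visible in practice: unlike for Ginibre, a fixed fraction of the eigenvalues lies outside any given disk, however large.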
We can loosely interpret $AB^{-1}$ as a sort of matrix analogue of the Cauchy distribution, since when $A$ and $B$ are $1\times 1$ matrices this is precisely a Cauchy distribution. We can also consider such a background-potential inverse problem for the one-dimensional log-gases of random matrix theory, which can be seen as two-dimensional Coulomb gases confined to the real line, such as the Gaussian Unitary Ensemble. For instance it follows from the discussion in Peter Forrester’s book that the logarithmic potential of the density $x\mapsto\frac{\sqrt{2n}}{\pi}\sqrt{1-\frac{x^2}{2n}}\mathbf{1}_{|x|<\sqrt{2n}}$ is given on the interval $S=[-\sqrt{2n},\sqrt{2n}]$ by $x\mapsto \frac{x^2}{2}+\frac{n}{2}\Bigl(\log\frac{n}{2}-1\Bigr).$

A bit of history. The jellium model was used around 1938 by Eugene P. Wigner in a famous article for the modeling of electrons in metals, more than ten years before his renowned works on random matrices. This model was inspired from the Hartree-Fock model of quantum mechanics, see GV, LN, LS, S1, and LLS1, LLS2. The term jellium was apparently coined by Conyers Herring, since the smeared charge could be viewed as a positive jelly, see H1. The model is also known as a one-component plasma with background. As already mentioned, usually charge-neutral jellium models are studied, and this is done typically after restricting the electrons to live on some compact support of positive background. The restriction ensures integrability of the energy, and the interest is usually focused on the distribution/behavior of electrons in the bulk of the limiting system when the volume of the compact set goes to infinity (thermodynamic limit). There are some exceptions where the edge has been considered, for instance in CFTW. Also, the edge of Laughlin states has been considered in CFTW and GJA.
The case $d=3$ is considered by Lieb and Narnhofer, and quoting them: “It is also possible to consider the one- and two-dimensional versions of this problem, where the Coulomb potential $|x|^{-1}$ is replaced by $-|x|$ and $-\log|x|$, respectively. In the one-dimensional, classical case, Baxter calculated the partition function exactly. For that case, Kunz showed that the one-particle distribution function exists and that it has crystalline ordering, i.e., the Wigner lattice exists for all temperatures. Brascamp and Lieb showed the same to be true in the quantum mechanical case for one-component fermions when $\beta$ is large enough. Although we do not deal with the one-dimensional problem here, our methods would apply in that case. In two dimensions there are difficulties connected with the long-range nature of the $-\log|x|$ potential, and we shall not discuss this here.” For more background literature on the jellium, see also AJ, JLM, F, AJJ, JJ, CDFLV. See in particular LWL for the fluctuations of non-neutral jelliums. Coulomb gas models appeared naturally in statistics around 1920-1930 in the study of the spectrum of empirical covariance matrices of Gaussian samples. Nowadays we speak about the Laguerre ensemble and Wishart random matrices. This was almost ten years before the introduction of the jellium model by Wigner. In the 1950’s, Wigner rediscovered, by accident, these works by reading a statistics textbook, and this motivated him to use random matrices for the modeling of energy levels of heavy nuclei in atomic physics, see this former blog-post. We refer to Bohigas and Weidenmüller for these historical aspects. The work of Wigner was amazingly successful, and he received in 1963 a Nobel prize in Physics “for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles”. 
The term Coulomb gas is explicitly used by Dyson in his first seminal 1962 paper and by Ginibre in his 1965 paper. The term Fermi-gas is also used.

Regarding the funding models of academic publishing, things are moving forward, slowly, in fits and starts. The INSMI of the CNRS has worked, for instance, toward the creation of the Centre Mersenne in Grenoble. The ramp-up and the management of change are not simple. The same goes for the MathOA project. More recently, among large-scale structuring actions, France is committing to open science, while Europe is putting forward diamond open access in its Plan S for 2021! The war continues on other fronts. In France, in March 2019, the tribunal de grande instance of Paris ordered French Internet service providers to block access to the Sci-Hub and Library Genesis sites. The decision followed a complaint filed by Elsevier and Springer Nature. But this blocking can be circumvented with a simple VPN or an alternative DNS server. This court ruling is only one more episode in a long international legal saga. Still in France, Elsevier is also a party to a controversial draft agreement with the Couperin consortium. When it comes to publications, academics are at once the producers, the evaluators, and the consumers, ultimately racketeered and held hostage by the behemoths Elsevier and Springer. The digital age has only accentuated this. One does not really need to be left-wing to oppose the position of Elsevier and Springer: they have become parasites, and they should at the very least be put back in their proper place as competing service providers.
The Coulomb or Newton kernel in ${\mathbb{R}^d}$, ${d\geq1}$, is often defined for all ${x\neq0}$ as $g(x):=\begin{cases} -|x| & \mbox{if }d=1,\\ \log\frac{1}{|x|} & \mbox{if }d=2,\\ \frac{1}{|x|^{d-2}} & \mbox{if }d\geq3, \end{cases}$ where ${|x|:=\sqrt{x_1^2+\cdots+x_d^2}}$ is the Euclidean norm. It is the fundamental solution of the Laplace or Poisson equation, in the sense that $\Delta g=-c_d\delta_0 \quad\mbox{where}\quad c_d:= \begin{cases} 2 & \mbox{if }d=1,\\ 2\pi & \mbox{if }d=2,\\ (d-2)|\mathbb{S}^{d-1}| & \mbox{if }d\geq 3, \end{cases}$ where ${\mathbb{S}^{d-1}:=\{x\in\mathbb{R}^d:|x|=1\}}$, and, denoting ${\Gamma}$ the Euler Gamma function, $|\mathbb{S}^{d-1}| =2\frac{\pi^{d/2}}{\Gamma(d/2)}.$ This partial differential equation holds in the sense of Schwartz distributions ${\mathcal{D}'(\mathbb{R}^d)}$. Note that on ${\mathbb{R}^d\setminus\{0\}}$, the function ${g}$ is ${\mathcal{C}^\infty}$ and harmonic, in the sense that ${\Delta g(x)=\partial_1^2g(x)+\cdots+\partial_d^2g(x)=0}$ pointwise for all ${x\neq0}$. The behavior at the origin makes ${g}$ super-harmonic (its Laplacian is ${\leq0}$), an analogue of concavity. The case ${d=1}$ is intuitive: the derivative of ${|x|}$ in the sense of distributions is the sign function ${-\mathbf{1}_{x<0}+\mathbf{1}_{x>0}}$ (essentially a jump at zero of height ${2}$), and the second derivative is twice the Dirac mass, ${2\delta_0}$. The case ${d=2}$ is also special: the kernel blows up at infinity. The physical interpretation of ${g(x)}$, up to physical constants, is the potential generated at point ${x}$ by a unit charge at the origin, the field being ${-\nabla g(x)}$: the electric field if we model electrostatics (Coulomb), and the gravitational field if we model gravity (Newton). Note that ${-g(x)\nabla g(x)}$ vanishes at infinity if ${d\geq2}$, but not if ${d=1}$, which makes a difference for integration by parts.
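As a quick numerical sanity check (an added sketch, not part of the original post), one can verify by finite differences that ${g(x)=1/|x|}$ is harmonic away from the origin in ${d=3}$, and check the identity ${|\mathbb{S}^{d-1}|=2\pi^{d/2}/\Gamma(d/2)}$ together with the relation ${|\mathbb{S}^{d-1}|=d|\mathbb{B}^d|}$ used later for the equilibrium measure:

```python
import math

def laplacian_newton(x, h=1e-3):
    """Finite-difference Laplacian of g(x) = 1/|x| in R^3, valid away from 0."""
    def g(p):
        return 1.0 / math.sqrt(sum(c * c for c in p))
    lap = 0.0
    for i in range(3):
        step = [0.0, 0.0, 0.0]
        step[i] = h
        plus = [a + b for a, b in zip(x, step)]
        minus = [a - b for a, b in zip(x, step)]
        lap += (g(plus) - 2.0 * g(x) + g(minus)) / h ** 2
    return lap

# Harmonicity away from the origin: the Laplacian vanishes at x != 0.
print(abs(laplacian_newton([1.0, 0.5, -0.3])))  # tiny (finite-difference error only)

# The constant c_d: |S^{d-1}| = 2 pi^{d/2} / Gamma(d/2), and s(1) = d v(1).
for d in range(1, 8):
    sphere = 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)
    ball = math.pi ** (d / 2.0) / math.gamma(d / 2.0 + 1.0)
    assert abs(sphere - d * ball) < 1e-12 * max(1.0, sphere)
```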
For a probability measure ${\mu}$ on ${\mathbb{R}^d}$, the potential generated by ${\mu}$ at point ${x}$ is $U_\mu(x)=(g*\mu)(x)=\int g(x-y)\mu(\mathrm{d}y).$ In dimension ${d=1}$ or ${d=2}$, this is well defined as soon as ${\mu}$ integrates ${g}$ at infinity; note that ${g}$ is always Lebesgue locally integrable. The convolution operator ${\mu\mapsto U_\mu}$ inverts the Laplacian up to the constant ${-c_d}$, and we have the inversion formula $\Delta U_\mu=(\Delta g)*\mu=-c_d(\delta_0*\mu)=-c_d\mu$ in ${\mathcal{D}'(\mathbb{R}^d)}$. The potential ${U_\mu}$ is harmonic outside the support of ${\mu}$. Alternative formulation. The following alternative definition is simpler: $g(x):=\begin{cases} \frac{1}{(d-2)|x|^{d-2}} & \mbox{if }d=1\mbox{ or } d\geq3,\\ \log\frac{1}{|x|} & \mbox{if }d=2, \end{cases}$ which satisfies $\Delta g=-c_d\delta_0 \quad\mbox{where}\quad c_d:=|\mathbb{S}^{d-1}|$ with the convention ${|\mathbb{S}^0|:=|\{-1,1\}|=2}$. Indeed, if ${d=1}$ then ${\frac{1}{(d-2)|x|^{d-2}}=-|x|}$. This alternative formulation makes the equilibrium measure nicer in the quadratic confinement case. Namely, the electrostatic energy of a distribution of charges modeled by a probability measure ${\mu}$ on ${\mathbb{R}^d}$, with external field generated by a potential ${V:\mathbb{R}^d\rightarrow\mathbb{R}}$, is $\mathcal{E}_V(\mu) :=\iint g(x-y)\mu(\mathrm{d}x)\mu(\mathrm{d}y)+\int V(x)\mu(\mathrm{d}x).$ This functional is strictly convex and lower semi-continuous with respect to the narrow convergence of probability measures. Its minimizer, ${\mu_*=\arg\inf\mathcal{E}_V}$, is called the equilibrium measure. The Euler-Lagrange equation gives that ${\mu_*}$ has density ${\frac{\Delta V}{2c_d}}$ on its support. When ${V}$ grows fast enough at infinity, this support is compact. For instance, when ${V(x)=|x|^2}$ we find that the equilibrium measure ${\mu_*}$ is the uniform distribution on the unit ball of ${\mathbb{R}^d}$.
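This last claim can be illustrated numerically (an added sketch, not part of the original post): gradient descent on the discrete energy of ${N}$ particles in ${d=2}$ with the logarithmic kernel and quadratic confinement should spread the particles uniformly over the unit disk:

```python
import numpy as np

# Discrete Coulomb gas in d = 2 with quadratic confinement V(x) = |x|^2:
# minimize (1/N) sum_i |x_i|^2 - (1/N^2) sum_{i != j} log|x_i - x_j|
# by plain gradient descent; the minimizer approximates the equilibrium
# measure, here the uniform distribution on the unit disk.
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal((N, 2))

eta = 0.02  # step size
for _ in range(5000):
    diff = x[:, None, :] - x[None, :, :]   # pairwise differences, shape (N, N, 2)
    r2 = np.sum(diff ** 2, axis=-1)
    np.fill_diagonal(r2, np.inf)           # no self-interaction
    r2 = np.maximum(r2, 1e-4)              # cap tiny distances for numerical safety
    repulsion = np.sum(diff / r2[..., None], axis=1) / N
    grad = 2.0 * x - 2.0 * repulsion       # gradient of the energy per particle
    x -= eta * grad

radii = np.linalg.norm(x, axis=1)
print(radii.max())  # close to 1: the particles fill the unit disk
```

The maximal radius stabilizes near ${1}$ and the empirical mean radius near ${2/3}$, the mean radius of the uniform law on the unit disk.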
Indeed, in this case $\frac{\Delta V}{2c_d} =\frac{2d}{2|\mathbb{S}^{d-1}|} =\frac{2d}{2d|\mathbb{B}^d|} =\frac{1}{|\mathbb{B}^d|}$ where ${\mathbb{B}^d=\{x\in\mathbb{R}^d:|x|\leq1\}}$ is the unit ball of ${\mathbb{R}^d}$. Recall that if ${s(r)}$ and ${v(r)}$ are respectively the surface of the sphere of radius ${r}$ and the volume of the ball of radius ${r}$ in ${\mathbb{R}^d}$, then ${v(r)=r^dv(1)}$ and ${s(r)=v'(r)=dr^{d-1}v(1)}$, hence ${s(1)=dv(1)}$. An even more compact definition. If we think of the dimension ${d}$ as a positive real number, we may observe that for all ${x\neq0}$, $\lim_{d\rightarrow2}\frac{\frac{1}{|x|^{d-2}}-1}{d-2} =\partial_{s=0}\frac{1}{|x|^{s}} =\partial_{s=0}\mathrm{e}^{-s\log|x|} =-\log|x|.$ This means that the formula ${\frac{1}{(d-2)|x|^{d-2}}}$, already valid for ${d=1}$ and ${d\geq3}$, is actually also valid for ${d=2}$ provided that we remove a singularity. This suggests defining the kernel, for all real numbers ${d\geq1}$ and all ${x\neq0}$, as $g(x):=\frac{1}{(d-2)|x|^{d-2}}-\frac{1}{d-2},$ which satisfies $\Delta g=-2\frac{\pi^{d/2}}{\Gamma(d/2)}\delta_0.$ Note that physically, the potential matters only through the field it defines, and is therefore defined up to an additive constant: what matters is differences of potential values rather than the values themselves. Riesz kernels. The Riesz kernel in dimension ${d\geq2}$ with parameter ${s}$, ${0<s<d}$, is defined for all ${x\neq0}$ by the formula $g(x):=\frac{1}{|x|^{d-s}}.$ We recover the Coulomb/Newton kernel when ${s=2}$. The Riesz kernel is the fundamental solution of the fractional Laplacian, which is a Fourier multiplier, and a non-local operator when ${s\neq2}$. Its inverse is a convolution operator. Conclusion.
All in all, if we would like to incorporate all cases in a compact formula, we could consider in ${\mathbb{R}^d}$, ${d\geq1}$, for all ${s\in\mathbb{R}}$ and ${x\neq0}$, the kernel $g(x):=\frac{1}{s|x|^s},$ with the convention ${g(x)=-\log|x|}$ if ${s=0}$. The Coulomb/Newton case corresponds to taking ${s=d-2}$, as we have seen. This is more or less the choice made by my colleague Mathieu Lewin, see for example arXiv:1905.09138. Entropy. The logarithm appears as a derivative of power functions also in relation with entropy and hypercontractivity, as explained in a previous post. The same goes for the derivation of the logarithmic Sobolev inequality from the Beckner inequalities. Final word. Mathematics is also about revealing common structures among apparently different things. In Physics, going beyond integers and the apparent physical meaning has advantages, as in the replica trick. You may already know this aphorism by Henri Bouasse, French physicist from Toulouse of the 19th century (translated from the French): "The physicist treats the problems of the vehicle with one wheel (the wheelbarrow), with two wheels (tilbury or bicycle), with three, with four wheels. The mathematician treats the general problem of the vehicle with ${n}$ wheels, ${n}$ being integer or fractional, positive or negative, real or imaginary." Three basic ideas for teaching differently: • Ask the department to set up a JupyterHub server for Julia/Python/R • Use Google Colaboratory or rstudio.cloud to run Python or R online (1) • Ask students to submit their projects on GitHub so that they learn to use it (2) (1) Pointed out by my colleague Jamal Atif. The Microsoft analogue is Azure Notebooks. (2) Pointed out by my colleague Robin Ryder.
https://quizplus.com/quiz/153051-quiz-9-compound-interest-further-topics-and-applications
# Business Mathematics Study Set 1

## Quiz 9: Compound Interest: Further Topics and Applications

- A 30-year, $1,000 strip bond was traded for $167, four years after it was issued. What was the semi-annually compounded nominal rate at that time? (Answer: B)
- If the population of Green City is growing at a rate of 5% per year, how long will it take to grow from 2,300 to 10,000? (Answer: E)
- Twenty years ago the population of a village in Newfoundland was 964. Now it is 612. At what average annual rate has the population of the village declined over the last 20 years? (Answer: C)
- Albert Greco paid $1,974 for a $10,000 strip bond 16 years before it reached maturity. What semi-annually compounded nominal rate will Albert earn on his investment?
- A demand loan for $8,000 with interest at 16% compounded quarterly was repaid after two years and eight months. What was the amount of interest paid?
- If the population of Dodge City is decreasing at a rate of 19% per year, how long will it take to decrease from 7,700 to 2,000?
- At what quarterly compounded nominal interest rate will money double in 75 months?
- At what annually compounded interest rate will an investment of $71,294.69 double in 90 months?
- Sollozo just made a single payment to repay a loan he had with the Corleone Finance Company. He paid a total of $86,500, which included interest of $56,500, at 48% compounded monthly. How long ago was the money borrowed?
- A $50,000 strip bond was discounted to $21,680. The market rate was 7.4% compounded semi-annually. How much time was left before the bond reached maturity?
- Maury invested $5,000 in a selection of high-tech stocks. After six years of careful trading, his investments were worth $79,700. At what quarterly compounded nominal rate did his investments grow?
- A six-year, $20,000 GIC has a maturity value of $29,625. Calculate the semi-annually compounded nominal interest rate.
- At 11.4% compounded quarterly, how long will it take for money to double?
- A $50,000 GIC will earn $70,000 of interest over its 10-year term. What is the monthly compounded nominal rate of interest?
- The Wilsons bought their home 16 years ago for $128,000. Its value now is $141,000. At what annual rate has the value of their home appreciated since they bought it?
- Rounded to the nearest month, how long will it take for $25,000 to grow to $35,000 at 9% compounded quarterly?
- What amount invested at 10% compounded semiannually will be worth $6,380.00 after 38 months?
- Seven years before it matures, the value of a $1,000 strip bond is $672. What is the semi-annually compounded nominal interest rate?
- At 14% compounded annually, an investment of $50,000 will grow to $1,000,000 in 22.86 years. How much longer will it take at 11% compounded annually?
- What is the term of a compound-interest Guaranteed Investment Certificate if $8,500 invested at 6.1% compounded annually will earn interest totaling $4,365.50?
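Most of these questions reduce to the single compound-interest relation FV = PV(1 + i)^n. As an illustration (not part of the quiz site; the helper name `nominal_rate` is ours), here is a Python sketch solving the first strip-bond question:

```python
# Strip bond pricing: a price P grows to the face value F over n compounding
# periods, F = P (1 + i)^n, so the periodic rate is i = (F/P)^(1/n) - 1.
# The helper name `nominal_rate` is ours, not from the quiz site.
def nominal_rate(P, F, years, periods_per_year=2):
    n = years * periods_per_year
    i = (F / P) ** (1.0 / n) - 1.0       # periodic rate
    return periods_per_year * i          # nominal annual rate

# First question: a $1,000 strip bond traded for $167 four years after issue,
# hence 26 years (52 half-years) before maturity, semi-annual compounding.
rate = nominal_rate(167.0, 1000.0, 26)
print(round(100 * rate, 2))  # approximately 7.0 (percent)
```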
http://www.irriapps.com/kmltool/
# KML Tool

## Description

KML Tool is a program designed to retrieve coordinates from Google Earth's Keyhole Markup Language (KML) files. The program has a very minimalistic interface, as shown in the figure: at the top of the window is the menu bar, followed by a field to select a KML file. Below it there is a button to retrieve coordinates, followed by a text field that shows the coordinates.

## Installation and requirements

At the moment no installer is provided. This program does not have any special requirements, but it is designed to work on any modern computer with a graphical interface, so you should have at least:

• OS: Windows 10, macOS Sierra 10.12, Linux 4.4 (64-bit recommended)
• 50 MB of free hard drive space
• 1.2 GHz processor
• 1 GB of RAM
• Keyboard, mouse and screen

## Features

Some technical features are:

• This program is coded in the C++ programming language.
• The Qt5 libraries are used for the graphical interface.
• The program can retrieve coordinate text from KML files.
• Supported place marks are: Point, LineString and Polygon.

## Usage

The procedure to retrieve coordinates from a KML file is very simple.

### Retrieving coordinates

1. Use the browse button (the one with the three dots "…") to locate a file on your computer; only KML files can be selected.
2. You can also use the menu File > Open.
3. The path and name of the selected file will appear in the text field.
4. Use the Retrieve button to extract the coordinates from the selected KML file.
5. The text of the coordinates should appear in the text field below.
6. You can copy, edit or save the coordinates as desired.
7. To save the coordinates to a text file, use the menu File > Save.

## Downloads

• Windows binaries (64-bit)
• macOS (64-bit)
• Source code: GitHub repository

KML Tool: retrieve coordinates from a KML file. Copyright (C) 2018 Eduardo Jiménez <ecoslacker@irriapps.com>. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License; see the license for more details.
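The retrieval step can be sketched in a few lines. The actual tool is written in C++/Qt; the following is an illustrative Python sketch (the helper name `extract_coordinates` is ours) using only the standard library:

```python
import xml.etree.ElementTree as ET

# KML 2.2 default namespace, required to match tags with ElementTree.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def extract_coordinates(kml_text):
    """Return (lon, lat) tuples from every <coordinates> element in the KML.

    This covers Point, LineString and Polygon place marks, since all of
    them store their geometry inside <coordinates> elements.
    """
    root = ET.fromstring(kml_text)
    points = []
    for elem in root.iter(KML_NS + "coordinates"):
        for token in elem.text.split():          # tokens are "lon,lat[,alt]"
            lon, lat = token.split(",")[:2]      # altitude is optional; drop it
            points.append((float(lon), float(lat)))
    return points

sample = """<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <Point><coordinates>-99.1,19.4,0</coordinates></Point>
  </Placemark>
</kml>"""

print(extract_coordinates(sample))  # [(-99.1, 19.4)]
```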
https://www.physicsforums.com/threads/rlc-circuit-w-switch.395290/
# RLC Circuit w/ switch

1. Apr 14, 2010

### phufool

1. The problem statement, all variables and given/known data

Provided in the picture below.

2. Relevant equations

3. The attempt at a solution

So what I've done so far is use KVL to obtain Vc(t) + Vr(t) + Vo(t) = (1/2)Vin(t). Could someone tell me if I started this problem correctly? Would I just take the derivative of the equation next?

#### Attached Files:

• ###### ECE #2.jpg
File size: 28.3 KB
Views: 194

2. Apr 14, 2010

### Staff: Mentor

Welcome to the PF. I think the input Vi is meant to have the value of 1/2 V, but it's hard to tell for sure from the drawing. And yes, writing the KVL around the loop is a good approach, but you need to use the differential equations that relate I and V for the capacitor and inductor. That's where you end up with one differential and one integral term. And yes, you then differentiate that equation to get a 2nd-order DE, which you then solve and apply the initial conditions to.

3. Apr 14, 2010

### phufool

Thanks for the reply! And yes I think you're right, it is probably 1/2 V. So then the differential equation would look like: Vc(t) + VL(t) + Vo(t) = 1/2.

Vo(t) = R i(t)
VL(t) = L di(t)/dt
Vc(t) = (1/C) times the integral from -infinity to t of i(a) da

So if I plug these into the equation and take the derivative, would I get: (1/C) i(t) + L d^2i(t)/dt^2 + R di(t)/dt = 0? So the 1/2 would just be irrelevant? And for the C, L, and R variables, would I just substitute the numbers given in the problem? Say C = 1/2, L = 1/4, and R = 1?

PS: Is there a program/website I can use to make these equations look nicer? I'm sure what I'm typing must be hard to read lol.

4. Apr 15, 2010

### phufool

So I was hoping someone could still help me with this equation? From the above equation: (1/C) i(t) + L d^2i(t)/dt^2 + R di(t)/dt = 0?
Since i(t) = (1/R) Vo(t), I would substitute i(t) and the values of C, L and R and get: (1/4) d^2Vo(t)/dt^2 + dVo(t)/dt + 2Vo(t) = 0. So I assume Vo(t) = Ae^(st) and get the roots: s = -2 ± 2i. Can anyone tell me what to do next or if I'm doing this correctly? Thanks

5. Apr 16, 2010

### Staff: Mentor

The response will be a damped sinusoid, so your exponential term should have both sigma and j*omega in it. Something like:

$$V_o(t) = A e^{B(\sigma + j\omega)t}$$

Use that, differentiate, substitute back, and apply initial conditions to solve for the constants.

6. Apr 16, 2010

### phufool

Thanks so much! Now the trouble I'm having is knowing what the initial conditions are. I can't seem to understand it based on the circuit given. Do you think you could help explain it to me?

7. Apr 16, 2010

### Staff: Mentor

Glad that helped. BTW, the coefficient B may just be 1, but I'm not sure. You should be able to tell as you apply the ICs. At time t=0-, the switch is open, so there is zero current, and zero voltage across the elements to the right of the switch. At t=0, the switch is closed, so all of a sudden you have the supply voltage across the series RLC combination. That will let the current start to build, and the voltage division between those 3 components will start to change with time...
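(Added note, not part of the thread.) The characteristic roots quoted above can be double-checked numerically, assuming the values C = 1/2 F, L = 1/4 H, R = 1 Ω used in the thread:

```python
import numpy as np

# Characteristic polynomial of (1/4) Vo'' + Vo' + 2 Vo = 0,
# i.e. 0.25 s^2 + s + 2 = 0, equivalently s^2 + 4 s + 8 = 0.
roots = np.roots([0.25, 1.0, 2.0])
print(roots)  # the complex-conjugate pair -2 ± 2j
```

The complex-conjugate pair confirms an underdamped (damped-sinusoid) response.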
https://www.gamedev.net/forums/topic/495317-calculating-viewing-cube/
# Calculating viewing "cube"

## Recommended Posts

Using gluPerspective(45.0f, (GLfloat)width/(GLfloat)height, 0.1f, 100.0f);, I need to find the coordinates of all 8 points of the viewing "cube" (yes, I know it's not a cube, but a distorted prism), but I can't find any resources to help me do this using the width and height like I'm using. Yes, I've tried gluUnProject, and it just doesn't work. I don't need it to be dynamic based on a click, I just need to be able to calculate it when I move the window (I want to render a UI directly in front of the screen and a background directly behind it).

##### Share on other sites

Your "viewing cube" is called the (view/camera) frustum. Searching on Google for frustum should give you lots of hits for constructing it.

##### Share on other sites

BTW, there is no necessity to render the UI with the same set-up as the scene. Assuming you speak of a usual 2D UI layer on top of the screen, you can simply switch to glOrtho after rendering the scene and then render the UI.

To answer your question: with the vertical field-of-view (fovy) angle (your 45°), the aspect ratio a := width/height, near clipping plane distance n (your 0.1) and far clipping plane distance f (your 100.0), the projection matrix made by gluPerspective will be

[ v/a  0    0            0         ]
[ 0    v    0            0         ]
[ 0    0    (n+f)/(n-f)  2nf/(n-f) ]
[ 0    0    -1           0         ]

where v := cotangent( fovy/2 ). Using the abbreviations

hn := n * tan( fovy/2 )
hf := f * tan( fovy/2 )
wn := a * hn
wf := a * hf

I think that the points are

[ wn hn n ]T, [ -wn hn n ]T, [ -wn -hn n ]T, [ wn -hn n ]T

for the near plane, and

[ wf hf f ]T, [ -wf hf f ]T, [ -wf -hf f ]T, [ wf -hf f ]T

for the far plane. These, of course, are given in the camera co-ordinate frame. To get global co-ordinates (if you want that) you'll need to apply the inverse VIEW transformation.
##### Share on other sites

```c
void gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar)
{
    GLdouble xmin, xmax, ymin, ymax;

    ymax = zNear * tan(fovy * M_PI / 360.0);
    ymin = -ymax;
    xmin = ymin * aspect;
    xmax = ymax * aspect;

    glFrustum(xmin, xmax, ymin, ymax, zNear, zFar);
}
```

Not my original code; I think it is from Mesa 3D originally.

##### Share on other sites

Ahh, thanks, but I'm using selection with my UI and it refuses to play nicely with ortho. Nothing seems to work, and this seems like an easier solution to the problem. Though all the matrices are confusing me pretty badly.

Quote: Your "viewing cube" is called the (view/camera) frustum. Searching on Google for frustum should give you lots of hits for constructing it.

Thanks, I'll give that a shot.

##### Share on other sites

Yep, that implementation (shown in lex's post above) is reasonable and matches the posted matrix when the parameters are inserted in glFrustum's matrix. However, just for further clarification :) The matrix isn't really needed for calculating the points but at most for determining the meaning of the parameters. The points can be derived from the ratio formula of the tangent function:

tan( angle ) == (length of opposite) / (length of adjacent)

where the adjacent is given by the near and far clipping plane distances, and the opposites are the half heights of the particular planes, since gluPerspective uses the vertical field-of-view angle. The widths are then computed by the rule of three using the aspect ratio. The points are then given by proper combinations of those values. (See my previous post above, "Using the abbreviations", and simply ignore the matrix.)

##### Share on other sites

Your calculations are perfect. Thanks a ton. This is exactly what I was after.
https://noa.gwlb.de/receive/cop_mods_00006891
# Three-dimensional density and compressible magnetic structure in solar wind turbulence

The three-dimensional structure of both compressible and incompressible components of turbulence is investigated at proton characteristic scales in the solar wind. Measurements of the three-dimensional structure are typically difficult, since the majority of measurements are performed by a single spacecraft. However, the Cluster mission, consisting of four spacecraft in a tetrahedral formation, allows for a fully three-dimensional investigation of turbulence. Incompressible turbulence is investigated by using the three vector components of the magnetic field. Meanwhile, compressible turbulence is investigated by considering the magnitude of the magnetic field as a proxy for the compressible fluctuations, together with electron density data deduced from the spacecraft potential. Application of the multi-point signal resonator technique to intervals of fast and slow wind shows that both compressible and incompressible turbulence are anisotropic with respect to the mean magnetic field direction ($P_{\perp}\gg P_{\parallel}$) and are sensitive to the value of the plasma beta ($\beta$; ratio of thermal to magnetic pressure) and the wind type. Moreover, the incompressible fluctuations of the fast and slow solar wind are revealed to be different, with enhancements along the background magnetic field direction present in the fast wind intervals. The differences between the fast and slow wind and the implications for the presence of different wave modes in the plasma are discussed.

Keywords: Interplanetary physics (MHD waves and turbulence)

### Cite

Roberts, Owen W. / Narita, Yasuhito / Escoubet, C.-Philippe: Three-dimensional density and compressible magnetic structure in solar wind turbulence. 2018.
Copernicus Publications.
https://settheory.mathtalks.org/this-week-in-logic-at-cuny-38/
# This Week in Logic at CUNY

Computational Logic Seminar
October 16, 2:00 – 4:00 PM, Room 3309
Speaker: Giorgi Japaridze, Villanova
Title: Give Caesar what belongs to Caesar
Abstract: In this talk I will discuss the possibility and advantages of basing applied theories (e.g. Peano Arithmetic) on Computability or Intuitionistic Logics.

Set Theory Seminar
Friday, October 19, 2012, 10:00am, GC 6417
Speaker: Thomas Johnstone
Title: Definability of the ground model in forcing extensions of ZF-models, I
Abstract: Richard Laver [2007] showed that if M satisfies ZFC and G is any M-generic filter for forcing P of size less than delta, then M is definable in M[G] from parameter P(delta)^M. I will discuss a generalization of this result for models M that satisfy ZF but only a small fragment of the axiom of choice. This is joint work with Victoria Gitman.

Definition (ZF). P*Q has closure point delta if P is well-orderable of size at most delta and Q is ${\leq}\delta$-closed (Q need not be well-orderable here).

Theorem: If M models ZF+DC_delta and P is forcing with closure point delta, then M is definable in M[G] from parameter P(delta)^M.

Model Theory Seminar
Friday, October 19, 2012, 12:30pm-1:45pm, GC 6417
Koushik Pal (University of Maryland)
Unstable Theories with an Automorphism
Abstract: Kikyo and Shelah showed that if T is a first-order theory in some language L with the strict-order property, then the theory T_\sigma, which is the old theory T together with an L-automorphism \sigma, does not have a model companion in L_\sigma, which is the old language L together with a new unary function symbol \sigma. However, it turns out that if we add more restrictions on the automorphism, then T_\sigma can have a model companion in L_\sigma. I will show some examples of this phenomenon in two different contexts: the linear orders and the ordered abelian groups. In the context of the linear orders, we even have a complete characterization of all model complete theories extending T_\sigma in L_\sigma.
This is a joint work with

Logic Workshop
Friday, October 19, 2012, 2:00 pm, GC 6417
Prof. Andrej Bauer (University of Ljubljana)
Synthetic computability

Synthetic computability is a formulation of computability theory in the style of synthetic differential geometry and synthetic domain theory: we first "synthesise" a world of mathematics tailored for computability, namely the effective topos, and then we work in it. Computability theory becomes just ordinary mathematics in an extraordinary world. Thus the c.e. sets are just the computable sets, the computable functions are just functions, the Kreisel-Lacombe-Shoenfield theorem is the Brouwerian continuity principle, etc. Many classical theorems in computability theory can be formulated and proved elegantly from simple, but unusual axioms, such as "there are countably many countable subsets of natural numbers." After an excursion into synthetic computability we build another synthetic world, based on infinite-time Turing machines. This one is even stranger on the inside, as in it the real numbers form a subset of the natural numbers. Continuity principles are invalid in the new world, but we can look for their substitutes in the higher levels of the hierarchies of sets.
https://mathoverflow.net/questions/202462/rigorous-justification-that-overdetermined-systems-do-not-have-a-solution?noredirect=1
# Rigorous justification that overdetermined systems do not have a solution

There is the following well known and very useful heuristic principle: Assume one has a natural map from the space of $k$-tuples of functions in $n$ variables into the space of $K$-tuples of functions in $N$ variables such that either (a) $n<N$ or (b) $n=N$ and $k<K$. Then this map cannot be onto. Also, natural maps in the opposite direction cannot be injective. I would like to have rigorous theorems making this principle precise. Let me state a few examples of precise statements for which I would like to have a rigorous proof.

Example 1. On an $n$-dimensional manifold $M$, $n>2$, there exists a Riemannian metric which cannot be realized isometrically as a hypersurface in $\mathbb{R}^{n+1}$. (Here we have the obvious map from the space of embeddings of $M$ into $\mathbb{R}^{n+1}$ to the space of metrics on $M$. The former space is an open subset in the space of $(n+1)$-tuples of functions in $n$ variables; the latter space is the space of $n(n+1)/2$-tuples of functions in $n$ variables. But for $n>2$ one has $n(n+1)/2>n+1$.)

Example 2. Let $n>3$. Consider the map from metrics on $\mathbb{R}^n$ to sections of the tensor bundle $Sym^2(\wedge^2T^*\mathbb{R}^n)$ sending a metric to its Riemann curvature tensor. Then this map cannot be onto. (This example was discussed in the post “Equations satisfied by the Riemann curvature tensor”.)

Example 3. There exist systems of ordinary differential equations of second order on $\mathbb{R}^n$ which cannot be realized as Euler-Lagrange equations for any Lagrangian.

Example 4. Consider the Radon transform between spaces of functions on two real Grassmannians $Gr$ and $Gr'$. If $\dim Gr> \dim Gr'$ then it must have a nontrivial kernel.

Finally, let me state a few rigorous results contradicting the principle stated at the beginning of this post.

1) The Hilbert spaces of $L^2$-functions on any two Riemannian manifolds are isomorphic.
2) The Banach spaces of continuous functions on any two compact manifolds of any positive dimension are isomorphic (in fact, for any uncountable compact metric spaces). (Milyutin's theorem.)

3) The Fréchet spaces of infinitely smooth functions on tori of any positive dimension are isomorphic.

There is probably no single proof that would provide a rigorous justification of the OP's principle in all cases. Moreover, without specifying more clearly what is meant by a 'natural map', the principle itself turns out not to hold in general. For example, every smooth (complex-valued) function $f$ on the unit circle $S^1\subset\mathbb{C}$ can be written uniquely in the form $$f(e^{i\theta}) = f_0(e^{2i\theta}) + e^{i\theta}\,f_1(e^{2i\theta}),$$ and the (natural?) 'even-odd' decomposition mapping $f(e^{i\theta})\mapsto \bigl(f_0(e^{i\theta}),f_1(e^{i\theta})\bigr)$ is one-to-one and onto.

As a (perhaps) more serious example, by the Nash-Kuiper $C^1$-isometric embedding theorem, any smooth Riemannian metric $g$ in dimension $n$ can be locally isometrically embedded into $\mathbb{R}^{n+1}$ by a $C^1$-mapping. Since metrics in dimension $n$ depend on $\tfrac12n(n{+}1)$ functions of $n$ variables while maps into $\mathbb{R}^{n+1}$ depend on $n{+}1$ functions of $n$ variables, this violates your heuristic principle when $n>2$. (To be sure, the Nash-Kuiper theorem was greeted with astonishment when it first appeared in 1954.)

It would be hard to argue that the mapping $\Phi(f) = \mathrm{d}f\cdot\mathrm{d}f = g$, where $f:M^n\to\mathbb{R}^{n+1}$ and $g$ is a metric on $M$, is not natural, but, of course, one can object that the differing degrees of differentiability of $f$ and $g$ should be taken into account in any careful formulation of a heuristic principle along the lines that the OP wants.
Now, there is a wide class of equations that includes the OP's Examples 1, 2, and 3 and explains the failure of surjectivity of each of them, namely the class of smooth nonlinear differential operators, $\Phi:C^\infty(E)\to C^\infty(F)$, where $E$ and $F$ are bundles over a common base manifold $M$. To formulate this notion precisely, recall that, given a smooth bundle $E\to M$ of fiber rank $p$, say, and where $M$ is a smooth manifold of dimension $n$, one has the bundle $J^k(E)\to M$ of $k$-jets of sections of $E$, which is a smooth bundle of fiber rank $p{{n+k}\choose n}$. Given another smooth bundle $F\to M$ of fiber rank $q$, say, a (smooth) nonlinear differential operator of order at most $s$, say $\Phi:C^\infty(E)\to C^\infty(F)$ is a mapping of the form $\Phi(u) = \Phi^s\bigl(j^s(u)\bigr)$ for all $u\in C^\infty(E)$ where $\Phi^s: J^s(E)\to F$ is a smooth bundle mapping. (In practice, one often only has $\Phi^s$ defined on an open subbundle $A^s\subset J^s(E)$, in which case one says that a section $u:M\to E$ is $A^s$-admissible if its associated $s$-jet section $j^s(u):M\to J^s(E)$ has its image lying in $A^s$. Then, one only gets a mapping $\Phi:C^\infty(E,A)\to C^\infty(F)$, where $C^\infty(E,A)\subset C^\infty(E)$ is the subset of $A^s$-admissible sections. The reader can deal with the details of that situation.) In this case, the OP's heuristic principle would suggest that $\Phi$ cannot be surjective, even locally, if the rank of $F$ is greater than the rank of $E$. Indeed, this turns out to be the case, and can be proved rigorously by the argument below. First, though, note that the OP's Example 1 is a nonlinear differential operator of order $1$, where $E = M\times \mathbb{R}^{n+1}$ and $F = S^2(T^*M)$ and $\Phi^1(u) = \mathrm{d}u\cdot\mathrm{d}u$. Here, $(p,q) = \bigl(n{+}1, {{n+1}\choose2}\bigr)$. 
Similarly, the OP's Examples 2 and 3 are nonlinear differential operators of order $2$, with $(p,q) = \bigl({{n+1}\choose2}, \frac{n^2(n^2-1)}{12}\bigr)$ in the case of Example 2, and $(p,q) = \bigl(1, \frac{n}2\bigr)$ in the case of Example 3 (here, the underlying manifold is $T\mathbb{R}^{n/2} = \mathbb{R}^n$, on which the Lagrangian for curves would be defined).

To prove the promised nonsurjectivity, suppose given such a smooth differential operator. One then has canonical prolongations $\Phi^k:J^k(E)\to J^{k-s}(F)$ for $k\ge s$, which are defined by the property that $$\Phi^k\bigl(j^k(u)\bigr) = j^{k-s}\bigl(\Phi^s(j^s(u))\bigr)$$ for all $k\ge s$ and all (local) sections $u\in C^\infty(E)$. The crucial consequence of this equation is that the $(k{-}s)$-jets of sections $v:M\to F$ that are of the form $v = \Phi(u) = \Phi^s(j^s(u))$ for some $u\in C^\infty(E)$ have to lie in the image of $\Phi^k$.

Now, if $p<q$, then for all $k$ sufficiently large, one has $$\dim J^k(E) = n + p{{n+k}\choose n} < n+ q{{n+k-s}\choose n} = \dim J^{k-s}(F).$$ Thus, the prolongation map $\Phi^k:J^k(E)\to J^{k-s}(F)$ cannot be surjective for $k$ sufficiently large, and this implies that the equation $v = \Phi^s(j^s(u))$ has no solution, even locally, for the generic section $v:M\to F$; i.e., $\Phi:C^\infty(E)\to C^\infty(F)$ is not surjective.

Finally, note that this argument heavily uses the assumption of infinite differentiability. This is not surprising because, as the Nash-Kuiper theorem shows, this non-surjectivity can indeed fail spectacularly without such assumptions.

• Thanks for the interesting answer. However, I still do not understand one point. We can consider the prolongation $\Phi^{l,k}\colon J^l(E)\to J^{k-s}(F)$ for $l\geq k$. As you explained, $\Phi^{k,k}$ cannot be surjective for $k\gg 1$ for dimensional reasons. But it still might be possible that $\Phi^{l,k}$ is surjective for $l\gg k$. Thus formally we would not get a contradiction.
– makt Apr 10 '15 at 8:01

• @MKO: I don't know what you mean by $\Phi^{l,k}$; perhaps you can explain it. However, it doesn't matter: If $v = \Phi^s(j^s(u))$ then $j^{k-s}(v) = \Phi^k(j^k(u))$ by the above formula, so the $(k{-}s)$-jets of those $v$ in the image of the operator $\Phi:C^\infty(E)\to C^\infty(F)$ must lie in the image of $\Phi^k$. Since $\Phi^k$ is not onto for $k$ sufficiently large, and since the $(k{-}s)$-jets of $v\in C^\infty(F)$ reach all of $J^{k-s}(F)$, it follows that there are $v\in C^\infty(F)$ that are not in the image of $\Phi$. – Robert Bryant Apr 10 '15 at 8:30

• Your example with functions on the circle is great. It is the simplest counter-example I know to the principle. It looks to me more natural than any other example. – makt Apr 11 '15 at 13:09

The principle you mention is not always true! V. Arnold proved that every continuous function of $N$ real variables is a composition of continuous functions of two variables only. More precisely, there exist $N(2N+1)$ universal functions $\phi_{ij}:[0,1]\rightarrow[0,1]$ such that the map $$(g_1,\ldots,g_{2N+1})\mapsto\sum_jg_j\left(\sum_i\phi_{ij}(x_i)\right)$$ is onto when acting from $C([0,1])^{2N+1}$ to $C([0,1]^N)$. Here $$n=1,\quad k=2N+1,\quad K=1.$$ This disproved the conjecture expressed in Hilbert's XIIIth problem. By the way, is there any detailed proof published in English (I also accept French)?

• Another good example of why you need some hypothesis, such as smoothness (though that is not the only one, of course), in order to prove such statements. – Robert Bryant Apr 10 '15 at 11:11

Other examples: The Fréchet spaces $C^\infty(M)$ of compact smooth manifolds are all linearly isomorphic to the space $s$ of rapidly decreasing sequences, see here. The same is true for the DF-spaces of real analytic functions on compact real analytic manifolds, see [Seeley, R. T., Eigenfunction expansions of analytic functions, Proc. Amer. Math. Soc.
21 (1969), 734–738], and even for classes of Denjoy–Carleman ultradifferentiable functions, see arXiv:1410.2637.

You wrote: Assume that one has a natural map $\dots$ What is a natural map? In this context, one can list the following kinds, in increasing complexity:

• Algebra homomorphisms $C^\infty(M)\to C^\infty(N)$ are exactly of the form $\phi^*: f\mapsto f\circ\phi$ for smooth maps $\phi:N\to M$. So they satisfy the 'overdetermination principle' of the OP.
• Spaces of sections $\Gamma(E\to M)$ of vector bundles $E$ over a manifold $M$ are precisely the finitely generated projective modules over the algebra of smooth functions. Linear differential operators of order $k$ on $\Gamma(E\to M)$ are exactly those $A$ such that each commutator with a multiplication operator by an $f\in C^\infty(M)$ is of order $k-1$, which determines them iteratively from the module structure alone.
• One can even define linear differential operators $\Gamma(E\to M)\to \Gamma(F\to N)$ over a fixed algebra homomorphism $C^\infty(M)\to C^\infty(N)$ in a similar way. All these operators are tied to the algebra structure on $C^\infty(M)$, and they satisfy the overdetermination principle. I am sure that one can make a proof of the 'overdetermination principle' out of these facts.
• Pseudodifferential operators: here I am not so sure; maybe because some of them are parametrices of differential operators? Fourier integral operators?
• Nonlinear differential operators are such that their derivatives are linear differential operators. Then they are also tied to the algebra structure.
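The jet-bundle dimension count behind the non-surjectivity argument is easy to check numerically. A minimal sketch in plain Python, using the ranks from Example 1 with $n=3$ (embeddings $M^3\to\mathbb{R}^4$, so $p=4$, $q={4\choose2}=6$, operator order $s=1$):

```python
from math import comb

def dim_jet(n, rank, k):
    """Dimension of the total space of J^k(E) for a rank-`rank` bundle
    over an n-dimensional base: n + rank * C(n+k, n)."""
    return n + rank * comb(n + k, n)

# Example 1: E = M x R^4 (p = 4), F = Sym^2 T*M (q = 6), operator order s = 1.
n, p, q, s = 3, 4, 6, 1

# Find the first prolongation order k with dim J^k(E) < dim J^{k-s}(F),
# which is where surjectivity of the prolongation Phi^k must fail.
k = s
while dim_jet(n, p, k) >= dim_jet(n, q, k - s):
    k += 1
print(k, dim_jet(n, p, k), dim_jet(n, q, k - s))  # -> 7 483 507
```

So for the isometric-embedding operator in three variables the count first tips at the seventh prolongation; since $p<q$, the asymptotics of $p{n+k\choose n}$ versus $q{n+k-s\choose n}$ guarantee it tips eventually.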
https://blog.givewell.org/category/helen-keller-international/page/2/
# Announcing our 2019 top charities

We’re excited to announce our top charities for 2019. After thousands of hours of vetting and review, eight charities stood out as excellent.

• The decision to allocate the $2.96 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
• Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

# Key questions about Helen Keller International’s vitamin A supplementation program

One of our two new top charities this year is Helen Keller International (HKI)’s vitamin A supplementation program. We named HKI’s vitamin A supplementation program a top charity this year because:

• There is strong evidence from many randomized controlled trials of vitamin A supplementation that the program leads to substantial reductions in child deaths.
• HKI-supported vitamin A supplementation programs are inexpensive (we estimate around $0.75 in total costs per supplement delivered) and highly cost-effective at preventing child deaths in countries where HKI plans to work using GiveWell-directed funds.
• HKI is transparent—it has shared significant, detailed information about its programs with us, including the results and methodology of monitoring surveys HKI conducted to determine whether its vitamin A supplementation programs reach a large proportion of targeted children.
• HKI has a funding gap—we believe it is highly likely that its vitamin A supplementation programs will be constrained by funding next year.
HKI’s vitamin A supplementation program is an exceptional giving opportunity but, as with donating to any of our other top charities, not a “sure thing.” I’m the Research Analyst who has led our work on HKI this year. In this post, I discuss some key questions about the impact of Helen Keller International’s vitamin A supplementation program and what we’ve learned so far. I also discuss GiveWell’s plans for learning more about these issues in the future.
https://physics.stackexchange.com/tags/electrostatics/hot?filter=week
# Tag Info

7

If there are charges left in any of the terminals, why does approaching one terminal with a piece of paper or an electroscope not show any kind of static electricity?

It is simply a matter of scale. A battery would have 1.5 V to 12 V worth of static electricity, but the minimum detection threshold for a human is about 3 kV of static electricity. So there is ...

4

Both earthed points are different (physically). I want to learn how this capacitor is getting charged

The fact that the power supply and one plate of the capacitor are earth grounded at different locations simply potentially introduces additional resistance through which charging occurs. That resistance increases the charging time constant ($t=RC$), slowing ...

3

In Figure V.16 the equipotential surfaces between the two plates are all parallel to the plates. So the surface labelled $V$ is an equipotential. Putting in an infinitesimally thin sheet of conductor along that equipotential surface makes no difference to the electrical properties of the capacitor. Now increasing the thickness of the conductor again ...

1

Is $L$ along the $z$-direction? Not that it matters. Since the correct answer goes as $1/L^2$, you are looking at a pure monopole field (or the cylindrical field equivalent). A monopole field is radially outward with magnitude: $$E = k\frac{q}{r^2},$$ where $q$ is the total charge. It suffices to show that: $$q=\int\sigma(r,\phi)\,dr\, d\phi = \frac{1}{3}\sigma_0\,\ldots$$

1

I think you are being confused by charge and net charge. An uncharged sphere does have positive and negative charges; the only thing is that the magnitudes of positive and negative charges are equal and so there is no net charge. Another way to think: there are both electrons and protons in a metallic sphere, right? Aren't they both charged?
1

The electrons in fur are much less tightly bound than electrons in ebonite (very strong relative bond; ebonite is at the bottom of the negative triboelectric series, see [1]), and hence ebonite gets a strong relative negative charge [1]. "A material towards the bottom of the Triboelectric series table, when touched to a material near the top of the ...

1

There are a few issues to unpack here. First, note that what you have defined is not really the charge density. Let's write out this quantity explicitly: $$\hat{\rho}(x',y',z', \theta) = \lambda\, \delta(x'-x(\theta))\, \delta(y'-y(\theta))\, \delta(z'-z(\theta)),$$ Note that I have defined $\hat\rho$ with an explicit dependence on ...

1

Instead of a cube, use an infinite slab (so the transverse coordinates don't matter). We have: $$E(x) = E_0$$ Then $\nabla \cdot \vec E \rightarrow dE/dx$, so with no conductor: $$\frac{dE}{dx} = \frac{1}{\epsilon_0}\rho(x)= 0$$ which of course is solved by $E(x)=E_0$. Now place a conductor with a surface at $x=0$. A charge is induced: $$\rho(x) = E_0\,\ldots$$

1

As long as the hollow conductor has thickness, the total amount of charge $+q$ on the outer surface migrates to the inner surface after $-q$ has been inserted in the cavity. That'll still be consistent with Gauss' Law.

1

The external force is the negative of the field force. $\vec f\cdot d\vec r$ is the work done by the field; $-\vec f\cdot d\vec r$ is the work done against ...
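One answer above notes that extra grounding resistance increases the charging time constant $t=RC$ and slows charging. A quick illustration of that scaling for an idealized RC charging circuit (the component values here are made up for the example):

```python
import math

def capacitor_voltage(t, v_source, r_ohms, c_farads):
    """Voltage across a charging capacitor at time t in an ideal RC circuit:
    V(t) = V0 * (1 - exp(-t / RC))."""
    tau = r_ohms * c_farads
    return v_source * (1.0 - math.exp(-t / tau))

tau = 1e3 * 1e-6             # R = 1 kOhm, C = 1 uF  ->  tau = 1 ms
t99 = -tau * math.log(0.01)  # time to reach 99% of the supply voltage
print(f"tau = {tau*1e3:.1f} ms, 99% charge after {t99*1e3:.2f} ms")

# Doubling the resistance in the charging path doubles tau, so reaching any
# given fraction of the supply voltage takes exactly twice as long.
```

The point of the comment in the answer: grounding the supply and the plate at physically different points just inserts extra series resistance, which enlarges $\tau=RC$ without changing the final voltage.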
https://nkrafa.rtaf.mi.th/corus-entertainment-dsqp/4ee14d-frequency-schedule-means
Parameterised Schedule

When you have employees, you need to run payroll so they can receive their wages, and before paying employees you need to decide on a pay frequency. Pay frequency is the amount of time between an employee's paydays, and the pay frequency you choose will determine the number of paychecks an employee receives. There are four pay frequency options: weekly, biweekly, semimonthly, and monthly. The answer isn't so cut and dry: it depends. Each state regulates employee pay differently, so it's no surprise that there are rules on pay frequencies; before deciding on a frequency, check with your state laws. Keep in mind that different industries and company sizes can also impact this.

With a weekly pay frequency, employees receive their wages the same day each pay period, like on a Friday of each week. Their paychecks are less money and more frequent. You will need to run payroll more often than with any of the other frequencies, which takes up more time and energy, and since some payroll software companies charge you based on the number of payrolls you run each month, you could end up paying more to run weekly payrolls than running biweekly, semimonthly, or monthly payrolls. You also need to keep in mind things like how long payroll will take you and hidden fees associated with some frequencies. Businesses with fewer employees might choose to use weekly pay frequencies more than companies with many employees. In some cases, hourly workers are the ones who are paid weekly because they do not follow a set schedule like salary employees.

A biweekly pay frequency is a happy medium between weekly and monthly pay frequencies. According to the BLS, 36.5% of employees are paid biweekly, making it the most popular pay frequency. Again, different industries and company sizes differ from this statistic, however.

With a semimonthly pay frequency, employees receive two paychecks each month, although some months differ. It can be easy to confuse semimonthly pay frequencies with biweekly schedules because employees receive wages twice per month with both (for the most part), and a semimonthly pay frequency can be difficult for employers and employees to keep track of. With a monthly frequency, paychecks are more money but less frequent.

You can establish different pay frequencies for salary vs. hourly employees, although this might get confusing if you run payroll by hand. With payroll software, you can significantly cut back the time it takes you to run payroll. With Patriot's online payroll services, you pay per employee, not per paycheck. Get your free trial today! Download our guide, "Pay Schedules: The Cornerstone of Running Payroll."

On backup scheduling: understanding the basic test logic of frequency-based scheduling is key. Once Backup Administrators get to know and understand the frequency based scheduler, it becomes the no. Every 1 day means 24 hours after the last successful start time. If the previous backup was queued for any reason, the start time for future backups will be moved to a later time. What load does it put on the nbpem's worklist?

Frequency and schedules also appear in several other contexts:

- Grammar: Adverbs of frequency are often used to indicate routine or repeated activities, so they are often used with the present simple tense. Capitalization and the use of periods are a matter of style. You probably know the word frequent, a synonym for often.
- Advertising (the definition of Reach and Frequency): Reach measures the number of potential customers who see/hear the advertising campaign.
- Finance: The payment schedule of financial instruments defines the dates at which payments are made by one party to another on, for example, a bond or derivative; it can be either customised or parameterised. Dividend frequency is how often a stock or fund pays a dividend. The monthly payment schedule clearly favors the interest-only loan, but the interest-only borrower faces a bullet repayment of $320,000.
- Databases: If 0, the schedule is not enabled; enabled_flag is tinyint, with a default of 1 (enabled).
- Physics and radio: 1 MegaHertz (MHz) is equal to 1000 kHz. Don't confuse this with the short wave radio band, which is much lower in frequency. C band: 4 to 8 GHz ("C" for "compromise" between S and X band). City buses often reach stops at a frequency of every 15 minutes, unless it's snowing or raining really hard.
- Behavior: A schedule of reinforcement is a protocol or set of rules that a teacher will follow when delivering reinforcers (e.g. tokens when using a token economy).
- Medicine: Frequency of dosage abbreviations, also known as "Sig Codes": prescription abbreviations are basically coded instructions from a health-care professional. Research has shown that the timing of the first nursing (within an hour after birth) and the frequency of nursing on the second day of your baby's life after birth are correlated with the amount of milk you will produce by the fifth day after birth, though this isn't a "hard-and-fast rule" by any means.
- Operations: The housekeeping department should implement a routine cleaning cycle as part of their standard operational procedures.
- Statistics: Simple examples of frequency distributions are election returns and test scores listed by percentile.
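The frequency-based backup scheduling behavior described above ("every 1 day" meaning 24 hours after the last successful start time, with queued jobs pushing future start times later) can be sketched as follows. This is an illustrative toy model, not any product's actual scheduler implementation, and all names here are invented:

```python
from datetime import datetime, timedelta

def next_due(last_successful_start: datetime, frequency: timedelta) -> datetime:
    """A frequency-based schedule measures from the last *successful* start
    time, so any delay (e.g. the job sitting in a queue) pushes all future
    runs later rather than keeping fixed calendar slots."""
    return last_successful_start + frequency

freq = timedelta(days=1)  # "every 1 day" = 24 hours
on_time = datetime(2024, 1, 1, 22, 0)
due = next_due(on_time, freq)            # 2024-01-02 22:00

# If the backup sat in a queue for 3 hours before it actually started,
# the next run is measured from the late start, and the schedule drifts.
late_start = on_time + timedelta(hours=3)
drifted = next_due(late_start, freq)     # 2024-01-03 01:00
print(due, drifted)
```

This is why a queued backup shifts every subsequent start time, unlike a calendar-based schedule that would keep aiming at the same wall-clock slot.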
The frequency interval supplied with the recurrence constructor is an integer that acts as a multiplier for the supplied frequency. The schedule is generated based on a daily, weekly, or monthly frequency, which may include options such as every 6 months, and these dates can fall on any of the seven days of the week. For SLP backups, select the relevant SLP as the STU in the schedule. Coordinated Universal Time (UTC) is the same as Greenwich Mean Time (GMT). The frequency of a wave is its velocity v divided by its wavelength λ: f = v / λ. Frequency division duplex (FDD) LTE allows simultaneous transmission on two frequencies. A model is developed to evaluate the sensitivity of expected passenger wait time at transit stops to service frequency and schedule reliability. In prescription sig codes, abbreviations are used for the words, while Roman numerals are sometimes used for the numbers.

On pay frequency statistics: weekly pay is the second most common option, with 32.4% of employees paid weekly; 19.8% of employees are paid semimonthly, making it the third most popular payment option; and monthly pay, which gives employees 12 paychecks per year, is the least common pay frequency. Semimonthly pay gives employees 24 paychecks each year, compared to 26 with a biweekly pay frequency. Over time, the employee takes home the same amount of pay regardless of frequency. There might not be a federal law regarding pay frequency, but state rules apply, and employees can receive their wages on a Sunday or a Friday, all depending on the pay frequency.
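The paycheck counts quoted for each frequency are simple calendar arithmetic; a quick check in plain Python:

```python
# Number of paychecks an employee receives per year under each pay frequency.
paychecks_per_year = {
    "weekly": 52,           # one payday every week
    "biweekly": 52 // 2,    # every other week -> 26 paychecks
    "semimonthly": 2 * 12,  # twice a month -> 24 paychecks
    "monthly": 12,          # least frequent: 12 larger paychecks
}

for frequency, count in sorted(paychecks_per_year.items(), key=lambda kv: -kv[1]):
    print(f"{frequency:12s} {count:2d} paychecks/year")
```

The totals also show why semimonthly and biweekly are easy to confuse: both pay roughly twice a month, but they differ by two paychecks a year.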
http://tailieu.vn/doc/using-a-digital-multimeter-67042.html
# Using a Digital Multimeter

Shared by: La Quang | File type: PDF | Pages: 4

For this lab, you will need to compile power supply information as well as test procedures. Observe various types of power supply form factors and characteristics. If at any time you are unsure of the procedure, ask your instructor.
https://www.researchgate.net/publication/352152980_LCS-manifolds_and_Ricci_solitons
Article

# LCS-manifolds and Ricci solitons

## Abstract

This paper is concerned with the study of [Formula: see text]-manifolds and Ricci solitons. It is shown that in a [Formula: see text]-spacetime, the fluid has vanishing vorticity and vanishing shear. It is found that in an [Formula: see text]-manifold, [Formula: see text] is an irrotational vector field, where [Formula: see text] is a non-zero smooth scalar function. It is proved that in a [Formula: see text]-spacetime with generator vector field [Formula: see text] obeying the Einstein equation, [Formula: see text] or [Formula: see text] according to [Formula: see text] or [Formula: see text], where [Formula: see text] is a scalar function and [Formula: see text] is the energy momentum tensor. Also, it is shown that if [Formula: see text] is a non-null spacelike (respectively, timelike) vector field on a [Formula: see text]-spacetime with scalar curvature [Formula: see text] and cosmological constant [Formula: see text], then [Formula: see text] if and only if [Formula: see text] (respectively, [Formula: see text]), and [Formula: see text] if and only if [Formula: see text] (respectively, [Formula: see text]), and further [Formula: see text] if and only if [Formula: see text]. The nature of the scalar curvature of an [Formula: see text]-manifold admitting a Yamabe soliton is obtained. Also, it is proved that an [Formula: see text]-manifold admitting an [Formula: see text]-Ricci soliton is [Formula: see text]-Einstein and its scalar curvature is constant if and only if [Formula: see text] is constant. Further, it is shown that if [Formula: see text] is a scalar function with [Formula: see text] and [Formula: see text] vanishes, then the gradients of [Formula: see text], [Formula: see text], [Formula: see text] are co-directional with the generator [Formula: see text]. 
In a perfect fluid [Formula: see text]-spacetime admitting [Formula: see text]-Ricci soliton, it is proved that the pressure density [Formula: see text] and energy density [Formula: see text] are constants, and if it agrees with the Einstein field equation, then we obtain a necessary and sufficient condition for the scalar curvature to be constant. If such a spacetime possesses Ricci collineation, then it must admit an almost [Formula: see text]-Yamabe soliton and the converse holds when the Ricci operator is of constant norm. Also, in a perfect fluid [Formula: see text]-spacetime satisfying the Einstein equation, it is shown that if Ricci collineation is admitted with respect to the generator [Formula: see text], then the matter content cannot be perfect fluid, and further [Formula: see text] with gravitational constant [Formula: see text] implies that [Formula: see text] is a Killing vector field. Finally, in an [Formula: see text]-manifold, it is proved that if the [Formula: see text]-curvature tensor is conservative, then the scalar potential and the generator vector field are co-directional, and if the manifold possesses pseudosymmetry due to the [Formula: see text]-curvature tensor, then it is an [Formula: see text]-Einstein manifold.

## No full-text available

... The principal motive of this article is to investigate the curvature inheritance, Ricci solitons and collineations with different curvature tensors such as the Ricci, conharmonic and projective curvature tensors. The different aspects of Ricci solitons have been recently studied by Ahsan et al. [1,13,14], Shaikh et al. [62][63][64][65][66][67], Blaga [17]. But Ricci solitons in various spacetimes remain to be investigated. ... Preprint Full-text available The purpose of the article is to investigate the existence of Ricci solitons and the nature of curvature inheritance as well as collineations on the Robinson-Trautman (briefly, RT) spacetime. 
It is shown that under certain conditions RT spacetime admits almost Ricci soliton, almost $\eta$-Ricci soliton, almost gradient $\eta$-Ricci soliton. As a generalization of curvature inheritance \cite{Duggal1992} and curvature collineation \cite{KLD1969}, in this paper, we introduce the notion of \textit{generalized curvature inheritance} and examine if RT spacetime admits such a notion. It is shown that RT spacetime also realizes the generalized curvature (resp. Ricci, Weyl conformal, concircular, conharmonic, Weyl projective) inheritance. Finally, several conditions are obtained, under which RT spacetime possesses curvature (resp. Ricci, conharmonic, Weyl projective) inheritance as well as curvature (resp. Ricci, Weyl conformal, concircular, conharmonic, Weyl projective) collineation. Article The main goal of this paper is to study the properties of generalized Ricci recurrent perfect fluid spacetimes and the generalized Ricci recurrent (generalized Robertson–Walker (GRW)) spacetimes. It is proven that if the generalized Ricci recurrent perfect fluid spacetimes satisfy the Einstein’s field equations without cosmological constant, then the isotropic pressure and the energy density of the perfect fluid spacetime are invariant along the velocity vector field of the perfect fluid spacetime. In this series, we show that a generalized Ricci recurrent perfect fluid spacetime satisfying the Einstein’s field equations without cosmological constant is either Ricci recurrent or Ricci symmetric. An n-dimensional compact generalized Ricci recurrent GRW spacetime with almost Ricci soliton is geodesically complete, provided the soliton vector field of almost Ricci soliton is timelike. Also, we prove that a (GR)n GRW spacetime is Einstein. The properties of (GR)n GRW spacetimes equipped with almost Ricci soliton are studied. Article Full-text available In this paper, we consider the Ricci curvature of a Ricci soliton. 
In particular, we have shown that a complete gradient Ricci soliton with non-negative Ricci curvature possessing a non-constant convex potential function having finite weighted Dirichlet integral satisfying an integral condition is Ricci flat and also it isometrically splits a line. We have also proved that a gradient Ricci soliton with non-constant concave potential function and bounded Ricci curvature is non-shrinking and hence the scalar curvature has at most one critical point. Article Full-text available We prove that in Robertson–Walker space-times (and in generalized Robertson–Walker spacetimes of dimension greater than 3 with divergence-free Weyl tensor) all higher-order gravitational corrections of the Hilbert–Einstein Lagrangian density $F(R,\square R, \ldots, \square^k R)$ have the form of perfect fluids in the field equations. This statement definitively allows one to deal with dark energy fluids as curvature effects. Article Full-text available The present paper deals with the study of CR-submanifolds of (LCS)n-manifolds with respect to quarter symmetric non-metric connection. We investigate integrability of the distributions and the geometry of foliations. The totally umbilical CR-submanifolds of said ambient manifolds are also studied. An example is presented to illustrate the results. Article Full-text available In an n-dimensional Friedmann–Robertson–Walker metric, it is rigorously shown that any analytical theory of gravity f(R,G), where R is the curvature scalar and G is the Gauss-Bonnet topological invariant, can be associated to a perfect-fluid stress-energy tensor. In this perspective, dark components of the cosmological Hubble flow can be geometrically interpreted. 
Article Full-text available We show that an n-dimensional generalized Robertson-Walker (GRW) space-time with divergence-free conformal curvature tensor exhibits a perfect fluid stress-energy tensor for any f(R) gravity model. Furthermore we prove that a conformally flat GRW space-time is still a perfect fluid in both f(R) and quadratic gravity where other curvature invariants are considered. Article Full-text available Generalized Robertson-Walker spacetimes extend the notion of Robertson-Walker spacetimes, by allowing for spatial non-homogeneity. A survey is presented, with main focus on Chen's characterization in terms of a timelike concircular vector. Together with their most important properties, some new results are presented. Article Full-text available We prove theorems about the Ricci and the Weyl tensors on generalized Robertson-Walker space-times of dimension $n\ge 3$. In particular, we show that the concircular vector introduced by Chen decomposes the Ricci tensor as a perfect fluid term plus a term linear in the contracted Weyl tensor. The Weyl tensor is harmonic if and only if it is annihilated by Chen's vector, and any of the two conditions is necessary and sufficient for the GRW space-time to be a quasi-Einstein (perfect fluid) manifold. Finally, the general structure of the Riemann tensor for Robertson-Walker space-times is given, in terms of Chen's vector. A GRW space-time in n = 4 with null conformal divergence is a Robertson-Walker space-time. Article Full-text available The object of the present paper is to introduce a new curvature tensor, named generalized quasi-conformal curvature tensor which bridges conformal curvature tensor, concircular curvature tensor, projective curvature tensor and conharmonic curvature tensor. Flatness and symmetric properties of generalized quasi-conformal curvature tensor are studied in the frame of (k, μ)-contact metric manifolds. 
Article Full-text available The object of the present paper is to study the invariant submanifolds of (LCS)n-manifolds. We study semiparallel and 2-semiparallel invariant submanifolds of (LCS)n-manifolds. Among others we study 3-dimensional invariant submanifolds of (LCS)n-manifolds. It is shown that every 3-dimensional invariant submanifold of a (LCS)n-manifold is totally geodesic. Article Full-text available A perfect-fluid space-time of dimension n ≥ 4, with (1) irrotational velocity vector field and (2) null divergence of the Weyl tensor, is a generalised Robertson-Walker space-time with an Einstein fiber. Condition (1) is verified whenever pressure and energy density are related by an equation of state. The contraction of the Weyl tensor with the velocity vector field is zero. Conversely, a generalized Robertson-Walker space-time with null divergence of the Weyl tensor is a perfect-fluid space-time. Article Full-text available The object of the present paper is to study the second order parallel symmetric tensors and Ricci solitons on (LCS)n-manifolds. We found the conditions of Ricci soliton on (LCS)n-manifolds to be shrinking, steady and expanding respectively. Article Full-text available The present paper deals with a study of -pseudo symmetric and -pseudo Ricci symmetric -manifolds. It is shown that every -pseudo symmetric -manifold and -pseudo Ricci symmetric -manifold are -Einstein manifold. Article Full-text available We show new results on when a pseudo-slant submanifold is a LCS-manifold. Necessary and sufficient conditions for a submanifold to be pseudo-slant are given. We obtain necessary and sufficient conditions for the integrability of distributions which are involved in the definition of the pseudo-slant submanifold. We characterize the pseudo-slant product and give necessary and sufficient conditions for a pseudo-slant submanifold to be the pseudo-slant product. 
Also we give an example of a slant submanifold in an LCS-manifold to illustrate the subject. Article Full-text available The object of the present paper is to study -manifolds. Several interesting results on a -manifold are obtained. Also the generalized Ricci recurrent -manifolds are studied. The existence of such a manifold is ensured by several non-trivial new examples. Article Full-text available We prove that a real hypersurface in a non-flat complex space form does not admit a Ricci soliton whose potential vector field is the Reeb vector field. Moreover, we classify a real hypersurface admitting so-called "$\eta$-Ricci soliton" in a non-flat complex space form. Article In this paper, we have proved that if a complete conformally flat gradient shrinking Ricci soliton has linear volume growth or the scalar curvature is finitely integrable and also the reciprocal of the potential function is subharmonic, then the manifold is isometric to the Euclidean sphere. As a consequence, we have shown that a four dimensional gradient shrinking Ricci soliton satisfying some conditions is isometric to $S^4$ or $\mathbb{RP}^4$ or $\mathbb{CP}^2$. We have also deduced a condition for the shrinking Ricci soliton to be compact with quadratic volume growth. Article The objective, in this paper, is to obtain the curvature properties of (t−z)-type plane wave metric studied by Bondi et al. (1959). For this a general (t−z)-type wave metric is considered and the condition for which it obeys Einstein’s empty spacetime field equations is obtained. It is found that the rank of the Ricci tensor of (t−z)-type plane wave metric is 1 and is of Codazzi type. Also it is proved that it is not recurrent but Ricci recurrent, conformally recurrent and hyper generalized recurrent. Moreover, it is semisymmetric and satisfies the Ricci generalized pseudosymmetric type condition $P\cdot P=-\frac{1}{3}Q(\mathrm{Ric},P)$. 
It is interesting to note that, physically, the energy momentum tensor describes a radiation field with parallel rays and geometrically it is a Codazzi tensor and semisymmetric. As special case, the geometric structures of Taub’s plane symmetric spacetime metric are deduced. Comparisons between (t−z)-type plane wave metric and pp-wave metric with respect to their geometric structures are viewed. Article In this paper we have investigated the curvature restricted geometric properties of the generalized Kantowski–Sachs (briefly, GK–S) spacetime metric, a warped product of 2-dimensional base and 2-dimensional fibre. It is proved that GK–S metric describes a generalized Roter type, 2-quasi Einstein and Ein(3) manifold. It also has pseudosymmetric Weyl conformal tensor as well as conharmonic tensor and its conformal 2-forms are recurrent. Further, it realizes the curvature condition R⋅R=Q(S,R)+L(t,θ)Q(g,C) (see, Theorem 4.1). We have also determined the curvature properties of Kantowski–Sachs (briefly, K–S), Bianchi type-III and Bianchi type-I metrics which are the special cases of GK–S spacetime metric. The sufficient condition under which GK–S metric represents a perfect fluid spacetime has also been obtained. Article This paper aims to investigate the curvature restricted geometric properties admitted by Melvin magnetic spacetime metric, a warped product metric with 1-dimensional fibre. For this, we have considered a Melvin type static, cylindrically symmetric spacetime metric in Weyl form and it is found that such metric, in general, is generalized Roter type, Ein(3) and has pseudosymmetric Weyl conformal tensor satisfying the pseudosymmetric type condition R⋅R−Q(S,R)=L′Q(g,C). The condition for which it satisfies the Roter type condition has been obtained. It is interesting to note that Melvin magnetic metric is pseudosymmetric and pseudosymmetric due to conformal tensor. 
Moreover such metric is 2-quasi-Einstein, its Ricci tensor is Riemann compatible and Weyl conformal 2-forms are recurrent. The Maxwell tensor is also of pseudosymmetric type. Article This paper is concerned with the study of the geometry of (charged) Nariai spacetime, a topological product spacetime, by means of covariant derivative(s) of its various curvature tensors. It is found that on this spacetime the condition [Formula: see text] is satisfied and it also admits the pseudosymmetric type curvature conditions [Formula: see text] and [Formula: see text]. Moreover, it is [Formula: see text]-dimensional Roter type, [Formula: see text]-quasi-Einstein and generalized quasi-Einstein spacetime. The energy–momentum tensor is expressed explicitly by some [Formula: see text]-forms. It is worth noting that a generalization of such a topological product spacetime is proposed to exist within a class of generalized recurrent type manifolds which is semisymmetric. It is observed that the rank of [Formula: see text], [Formula: see text], of Nariai spacetime (NS) is [Formula: see text] whereas in case of charged Nariai spacetime (CNS) it is [Formula: see text], which shows that the effect of charge increases the rank of the Ricci tensor. Also, due to the presence of charge in CNS, it gives rise to the proper pseudosymmetric type geometric structures. Article The objective of this paper is to study the curvature restricted geometric properties of anisotropic nonrelativistic scale invariant metrics, namely, Lifshitz and Schrödinger spacetime metrics. It is found that the Lifshitz spacetime metric admits two important pseudosymmetric type curvature conditions [Formula: see text] and [Formula: see text]. Also, it is [Formula: see text]-quasi Einstein and generalized Roter type manifold. Finally, Lifshitz spacetime is compared with Schrödinger spacetime. 
Article (CS)4-spacetimes with Einstein field equations under some curvature restriction named generalized weak Ricci-symmetry have been studied. We have proved that if the characteristic vector field ξ of a generalized weakly Ricci-symmetric (CS)4-spacetime obeying the Einstein equation is a Killing vector field, then such a spacetime admits (i) curvature collineation, (ii) conformal collineation, (iii) conharmonic collineation, (iv) concircular collineation, (v) projective collineation, (vi) m-projective collineation. It is further proved that each of conformally flat, conharmonically flat, concircularly flat, projectively flat and m-projectively flat generalized weakly Ricci-symmetric (CS)4-spacetime is infinitesimally spatially isotropic relative to the unit timelike vector field ξ. Article The aim of this note is to define almost Yamabe solitons as special conformal solutions of the Yamabe flow. Moreover, we shall obtain some rigidity results concerning Yamabe almost solitons. Finally, we shall give some characterizations for homogeneous gradient Yamabe almost solitons. Article The object of the present paper is to introduce the notion of generalized φ-recurrent (LCS)n-manifolds, to study their various geometric properties, and to establish existence by an interesting example.
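For reference — these equations are not stated on the page above, and conventions vary by author, so take them as the standard forms the abstracts most likely assume — the solitons invoked throughout are defined, for a metric $g$, potential vector field $V$ (or generator $\xi$ with dual 1-form $\eta$), and constants $\lambda,\mu$, by:

```latex
% Ricci soliton:
\mathcal{L}_V g + 2\,\mathrm{Ric} = 2\lambda\, g
% \eta-Ricci soliton (Cho--Kimura convention):
\mathcal{L}_\xi g + 2\,\mathrm{Ric} + 2\lambda\, g + 2\mu\, \eta\otimes\eta = 0
% Yamabe soliton, with r the scalar curvature:
\tfrac{1}{2}\,\mathcal{L}_V g = (r - \lambda)\, g
```

In the gradient cases $V=\nabla f$ for a potential function $f$, so $\mathcal{L}_{\nabla f}\, g = 2\,\mathrm{Hess}\, f$; the "almost" variants allow $\lambda$ (and $\mu$) to be smooth functions rather than constants.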
https://math.stackexchange.com/questions/848305/fixed-point-method-where-the-derivative-is-one-does-it-converge
# Fixed point method where the derivative is one - does it converge

I'm trying to see if the iterative method $x_n=g(x_{n-1})$ where $g(x)=2\sqrt{x-1}$ will converge to $2$ if I take $x_0$ that is sufficiently close to $2$. Indeed, notice that $g(2)=2$, and we have a theorem that states that if $|g'(2)|<1$ then there is a neighborhood of $2$ such that if we take an $x_0$ from that neighborhood, the method will converge to $2$. We also know that if $|g'(2)|>1$ then there is no such neighborhood. But notice now that $g'(x)=\frac{1}{\sqrt{x-1}}$, and so $g'(2)=1$. What can we say about this situation? Will it converge?

• That should diverge, the absolute value needs to be less than 1. – bobbym Jun 26 '14 at 13:16
• @bobbym, that's only a sufficient condition for convergence. Oria, try looking at a plot of the function. It converges if $2 \leq x_0 < 2+\epsilon$ but diverges if $2-\epsilon < x_0 < 2$. – Antonio Vargas Jun 26 '14 at 13:19
• I think you are correct. If we take a number that is very very close to $2$, but smaller than $2$, then the derivative will be larger than one. There can be no neighborhood of $2$, since any value smaller than $2$ will yield a derivative larger than 1. – Oria Gruber Jun 26 '14 at 13:19
• Isn't it mandatory for convergence, Antonio? – Oria Gruber Jun 26 '14 at 13:20
• No, take the system given by $g(x) = 2 + (x-2) - (x-2)^3$. Here $g'(2) = 1$ but the dynamical system converges in a neighborhood of $x=2$. If $|g'(x_\infty)| = 1$ it usually only means that the first derivative doesn't give you enough information to determine whether the system converges or diverges. – Antonio Vargas Jun 26 '14 at 13:22
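The one-sided behavior described in the comments is easy to check numerically. The sketch below (my own illustration, not from the thread) iterates $x_{n+1}=2\sqrt{x_n-1}$ from both sides of the fixed point $2$: starting above, the error shrinks — slowly, since $g'(2)=1$ — while starting below, the orbit drifts away and eventually leaves the domain $x>1$ on which $g$ is defined.

```python
import math

def g(x):
    return 2.0 * math.sqrt(x - 1.0)

def iterate(x0, steps):
    """Run x_{n+1} = g(x_n); return None if the orbit leaves the domain x > 1."""
    x = x0
    for _ in range(steps):
        if x <= 1.0:
            return None  # g is undefined here: the orbit has escaped
        x = g(x)
    return x

above = iterate(2.05, 5000)  # starts above the fixed point: creeps toward 2
below = iterate(1.95, 5000)  # starts below: drifts away and escapes the domain
print(above, below)
```

The slow (roughly $1/n$) convergence from above is typical of a neutral fixed point: expanding $g(2+\epsilon)\approx 2+\epsilon-\epsilon^2/4$ shows each step removes only an $O(\epsilon^2)$ piece of the error, which also explains the growth below $2$.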
https://par.nsf.gov/biblio/10172396-search-supersymmetry-compressed-mass-spectrum-vector-boson-fusion-topology-lepton-lepton-final-states-proton-proton-collisions-sqrt-tev
Search for supersymmetry with a compressed mass spectrum in the vector boson fusion topology with 1-lepton and 0-lepton final states in proton-proton collisions at $\sqrt{s}$ = 13 TeV

NSF-PAR ID: 10172396
Journal Name: Journal of High Energy Physics
Volume: 2019
Issue: 8
ISSN: 1029-8479
Sponsoring Org: National Science Foundation
http://openstudy.com/updates/559c298ce4b0564dd2d3a723
## just_one_last_goodbye one year ago Suggestion: Report button for profiles. 1. just_one_last_goodbye So we can report when the pic is inappropriate or the username is using profanity or sexual content 2. TheSmartOne I agree, and it would come in handy for reporting medal spamming. But for now, all we can do is message a moderator... 3. GreenCat Or if the profile page is vulgar. 4. GreenCat @Preetha @just_one_last_goodbye has a (very) good suggestion. 5. just_one_last_goodbye agreed such as the biography correct? 6. GreenCat Yes. 7. just_one_last_goodbye Thank you for adding that ^_^ 8. just_one_last_goodbye Maybe also Medal spamming. 9. TheSmartOne 10. TheSmartOne But then again, moderators don't even have enough powers to be able to reset a profile page's content and profile picture... 11. just_one_last_goodbye 12. horsegirl27 Maybe they could delete the pic/bio if it's inappropriate. I hate reading vulgar and bad bio's and wish there were some way to report it. 13. TheSmartOne Moderators here have barely any powers... (studied the whole system ;) ) they really need more features to take care of trolls and spammers, etc. 14. UsukiDoll Moderators barely have any powers? Wow! We need to give them more...on the other hand those mods might abuse them and give false warnings or suspensions. 15. TheSmartOne if they abused it they would get demodded. When a moderator gives someone a warning/suspension, all moderators are notified about it. 16. Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @TheSmartOne Moderators here have barely any powers... (studied the whole system ;) ) they really need more features to take care of trolls and spammers, etc. $$\color{#0cbb34}{\text{End of Quote}}$$ Theres a secret code ive been counting on thomaster finding that adds 17 more features... Somehow he hasnt found it yet x_x @thomaster 17. thomaster @Jaynator495 I have a job, and not much time to browse the source code of the site for some codehunt. 
You can also just pm this code so I can try it? 17 features is a lot and I'm very curious what they do. 18. Jaynator495 $$\color{#0cbb34}{\text{Originally Posted by}}$$ @thomaster @Jaynator495 I have a job, and not much time to browse the source code of the site for some codehunt. You can also just pm this code so I can try it? 17 features is a lot and I'm very curious what they do. $$\color{#0cbb34}{\text{End of Quote}}$$ As much as i want to (trust me i want to so badly), I made a promise to one of my team members that i would not reveal codes that they wanted people to find... If you like i can pm you all the features of the codes... :P (btw, you wont find it in the source code, only by scrounging about the console for a long time, this is due to a very sneaky way i had to go about doing this to make sure nobody could cheat...) In the mean time because im excited... 1. More Auto Suspend Reasons... 2. Limit someones MAX time inbetween posts (this is individual, it could be for the chat... or soley posts... maybe questions?) I figured this would be a good way to prevent people from spamming, without suspending them if the spam is not that bad. 3. No Bump Times (so you can bump your own and others posts as much as you want, i added this feature when i learned you can bump others posts...) 4. Mass Delete, So you can delete more then one post at a time. (or chat messages... still working on posts) Do keep in mind that im not a moderator or administrator on any OS site anymore, so this makes it very difficult to add these features, if you do find the code please remember they will be rough around the edges unless i get the oportunity to improve them... Some may not even work... Not like i can test them LOL Also let me know if you want to know the other features... but like i said i want to share the code, but im bound by the fact im a man of my word and dont want to lose my ability to say that.
https://electrichandleslide.wordpress.com/2014/06/08/contact-hamiltonians-part-i/
# Contact Hamiltonians (Part I)

This entry follows the post Contact Hamiltonians (Introduction), where we discussed normal forms for contact forms and the appearance of contact Hamiltonians. In this entry we will focus on the 3–dimensional situation and hence we will be able to write formulas and draw (realistic) pictures.

Consider a 2–sphere of radius 1 in the standard tight contact Euclidean space $(\mathbb{R}^3,\lambda_{st}=dz+r^2d\theta)$. Its characteristic foliation (defined by the intersection of the tangent space and the contact distribution) has two elliptic singular points in the north and south poles and all the leaves are open intervals connecting the north and the south pole. Take a transversal segment I=[0,1] connecting the poles (a vertical segment will do). Given a point in the segment we can consider the unique leaf through that point and move around the leaf until we hit the interval I=[0,1] again. This defines a diffeomorphism of the interval [0,1] fixed at the endpoints. We will call this diffeomorphism the monodromy of the foliation (and note that conversely any diffeomorphism will give a foliation on the 2–sphere via a mapping torus construction and collapsing the boundary). This is drawn in the following figure: In the figure the monodromy map is represented by the orange arrow. This monodromy does not have fixed points (this is crucial).

Let us look at the monodromy in the sphere of radius $\pi+c$, where c is a small positive constant, in the overtwisted contact manifold $(\mathbb{R}^3,dz+r\tan(r)d\theta)$. The overtwisted monodromy is drawn in the next figure: There are 3 types of points in the vertical transverse interval I=[0,1]. The Type 1 points belong to a leaf, Leaf I in the figure, such that the points move down in the segment. The Type 2 points are the points between the unique pair of closed leaves, these belong to Leaf II and move up. The Type 3 points are fixed points, there are two leaves of this type (Leaf III). 
The monodromy is represented by the blue arrows. Hence, we can encode the tight and the overtwisted foliations on the 2–sphere in terms of their monodromies in the following figure:

In the last entry we explained a relation between monodromies and contact Hamiltonians. Consider a contact form $dz-H(x,y,z)dx$ in $\mathbb{R}^3$; this is a fairly general normal form (which we can obtain by trivializing along the y–lines of $\mathbb{D}^2(x,y)$). If we restrict to the sphere $x^2+y^2+z^2=R^2$ we can write H in terms of $H=H(x,z)$ at points where the implicit function theorem applies. Then the characteristic foliation is nothing other than the solution of the time–dependent (x is the time) differential equation $dz-Hdx=0$ on the interval I=[-1,1] given by the coordinate z. Hence the contact Hamiltonian yields the ODE $\frac{dz}{dx}=H(x,z)$, to which the monodromy is a solution.

Tool: How do we obtain a piece of a disk in standard contact $(\mathbb{R}^3,dz-ydx)$ with a given characteristic foliation?

Answer: Consider a disk in the (z,x)–plane and a function H(z,x). The standard contact structure $dz-ydx$ restricts to the graph of H in $\mathbb{R}^2(z,x)\times\mathbb{R}(y)$ as $dz-ydx|_{\{y=H\}}=dz-Hdx$. For instance, let us consider the following function H(z) for $z\in[-1,1]$:

This function H can be considered as a function on the polydisk (x,z), which is represented by the lower square in the third figure (the whole figure is PL immersed in the standard contact 3–space). Its image is the bumped square drawn above it, and we may consider the PL sphere obtained by adding the vertical annulus connecting the domain and the graph. The characteristic foliation on the bottom piece is by the horizontal z–lines, on the annulus the foliation is vertical, and on the top piece the foliation is drawn on the left. Note that the characteristic foliation on this immersed PL sphere has a closed leaf (in red) coming from the fixed point (or zero, if we look at it horizontally) of H.
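The monodromy can also be computed numerically from this ODE. The sketch below is my own illustration (not from the post): it integrates $dz/dx = H(x,z)$ over I=[-1,1] with a simple Euler scheme, and the sample Hamiltonians are hypothetical.

```python
# Numerical sketch: the monodromy of a characteristic foliation as the
# map obtained by integrating dz/dx = H(x, z), with x playing the role
# of time. The sample Hamiltonians below are illustrative only.

def monodromy(H, z0, x_start=-1.0, x_end=1.0, steps=10000):
    """Integrate dz/dx = H(x, z) from x_start to x_end with Euler steps."""
    dx = (x_end - x_start) / steps
    x, z = x_start, z0
    for _ in range(steps):
        z += H(x, z) * dx
        x += dx
    return z

# For H == 0 the foliation is by horizontal lines and the monodromy
# is the identity map of the interval.
identity_check = monodromy(lambda x, z: 0.0, 0.3)

# A Hamiltonian vanishing at z = 0 produces a fixed point of the
# monodromy there, i.e. a closed leaf of the foliation.
fixed_point_check = monodromy(lambda x, z: z, 0.0)
```

Fixed points of H give closed leaves, exactly as in the red leaf of the PL sphere discussed above.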
Let us briefly focus on the existence of a contact structure in a region bounded by a domain and a graph as in the previous paragraph.

Exercise: Does there exist a contact structure filling the following pink region? (The contact structure should restrict to the germs (in purple) already defined on the boundary.)

Answer: Yes. The region is already embedded in $\mathbb{R}^3$, hence we just need to restrict the ambient contact structure. (This should be compared with the previous post, where this question was also formulated and answered in terms of the positivity of the function H.)

The second exercise we need to solve is as simple as the previous one; let us however draw the figures in order to keep them in mind.

Annulus Problem (weak): Does there exist a contact structure on the (yellow) annulus? The contact structure should also restrict to the germs (in purple and green) already defined on the boundary.

Answer: Yes; again this is already embedded in standard contact Euclidean space. This is yet another instance of the relevance of order: if one Hamiltonian is less than another one, then we can obtain a contact structure on the annulus. This will be formalized in subsequent posts using the notion of domination of Hamiltonians and their corresponding contact shells. We shall not use this language right now.

We are now going to prove Eliashberg's existence theorem in dimension 3 from the contact Hamiltonian perspective (i.e. from the monodromy viewpoint). The fundamental fact is that we only need to extend contact structures up to contactomorphism, and this translates to the fact that Hamiltonians can be conjugated.

Annulus Problem (strong): Does there exist a contact structure on the following region?

Answer: If we are able to conjugate the bottom Hamiltonian (in green) strictly below the upper one (in purple), then we can use the contact structure of the embedded annulus (weak version of the annulus problem).
Hence, it all reduces to the order (or rather, the lack thereof).

Fundamental Fact: There exists a conjugation of the bottom Hamiltonian such that it is strictly less than the upper one. In general, given two Hamiltonians with fixed points which are positive at the endpoints of the interval, there exists a conjugation bringing one of them below the other. (This is an exercise with functions in one variable; in higher dimensions this is no longer simple, and this is precisely the main point that M.S. Borman, Y. Eliashberg and E. Murphy have understood.)

Let us prove Eliashberg's 3–dimensional existence theorem; we focus on the extension part (part 2 according to the post three entries ago).

Extension Problem (Version I): Suppose that there exists a contact structure on the complement of a ball $B^3$ in a 3–fold (which is given by Gromov's h–principle, see previous posts) and that the characteristic foliation on the boundary $S_h^2$ has monodromy with fixed points (h stands for hole). Can we extend the contact structure?

Suppose that there exists a sphere $S_{ot}^2$ somewhere inside the manifold with an overtwisted monodromy (in blue, see above) in its characteristic foliation. Consider the annulus $A_{ot}=S_{ot}^2\times(-\tau,\tau)$. Use the south poles of $S_{ot}^2\times\{\tau\}$ and $S_h^2$ to connect both and obtain an annulus $A$ such that the monodromy on the exterior boundary sphere is the concatenation of the contactomorphisms of the intervals (green#pink). Hopefully this figure helps:

The monodromies of the foliations on the two spheres bounding the annulus $A_{ot}$ are drawn in pink (exterior boundary) and blue (interior boundary). The monodromy in green is that of $S_h^2$. Connecting the spheres $S_h^2$ and $S_{ot}^2\times\{\tau\}$ yields a sphere with the monodromy green#pink (the transition area is purple; this has some relevance but it is not essential). Consider the annulus A bounded by $S_h^2\#(S_{ot}^2\times\{\tau\})$ and $S_{ot}^2\times\{-\tau\}$.
We have reduced the problem of extending the contact structure to the interior of $S_h^2$ to the problem of extending the contact structure on the annulus A. On the exterior boundary of A the characteristic foliation is green#pink, and on the interior it is red (which comes from moving blue).

Extension Problem (Version II): Does there exist a conjugation such that (the graph of) any contactomorphism can be conjugated to lie beneath any other (graph)?

Answer: No. Fixed points are an obstruction. However, if we restrict ourselves to the same question in the class of contactomorphisms with fixed points, the answer is yes. This is exactly the Fundamental Fact stated above.

How do we conclude the proof? Conjugate the red Hamiltonian to lie beneath the green#pink Hamiltonian and use the contact structure on the resulting annulus (as embedded in standard contact space). Assuming Gromov's h–principle and the technical work needed for the foliation to be controlled, this argument concludes the theorem. (We have disregarded some details, but the idea of the argument is the one described above. Observe that the parametric version of the existence problem in dimension 3 is quite immediate from the Hamiltonian perspective.)

Note also that we do not need the whole sphere $S^2_{ot}$: in order to use the argument with the Hamiltonians we can cut the north pole of $S^2_{ot}$ and retain just the remaining disk, which is an overtwisted disk.

There is a substantial advantage in this proof of the 3–dimensional case: we can define an overtwisted disk $\mathbb{D}^{2n}$ in higher dimensions 2n+1 to be the object that appears when using a certain contact Hamiltonian on a simplex $\Delta^{2n-1}$. (We will give precise definitions in the subsequent entries.) The strategy of the argument works in higher dimensions if we can prove the Fundamental Fact stating that there is enough disorder for contact Hamiltonians.
In the next entries we will focus on this crucial step in higher dimensions and conclude existence.
https://terrilila.web.app/242.html
# Derivative of the standard normal pdf

Nov 24, 2011: I was wondering how I can find the derivative of a normal cdf with respect to a boundary parameter. I don't know how the fundamental theorem of calculus can be applied. I can get an answer with Mathematica or something, but I have no idea how to actually do this.

The standard normal distribution is a normal distribution with a mean of zero and a standard deviation of 1. It is a normal distribution of standardized values called z-scores. Because the normal distribution approximates many natural phenomena so well, it has developed into a standard of reference for many probability problems. Most scores are within a few standard deviations from the mean, although a skewed distribution can also be representative of the population under study. (As a separate, chemical usage: "normality" is the number of gram or mole equivalents of solute present in one litre of a solution.)

Theorem: if X1 and X2 are independent standard normal random variables, then Y = X1/X2 has the standard Cauchy distribution.

We'll conclude by using the moment generating function to prove the values of the mean and standard deviation of a normal random variable X. As Gary Schurman (MBE, CFA, August 2016) notes in "Derivatives of the cumulative normal distribution function", there are times in mathematical finance when we need the derivatives of the cumulative normal distribution function.
That is, if you subtract the mean of the normal and divide by the standard deviation, you obtain a standard normal variable. The standard normal probability density function has the famous bell shape that is known to just about everyone. In one paper, new approximations to the cumulative distribution function of the standard normal distribution via He's homotopy perturbation method are proposed.

A standard normal distribution is a normal distribution with zero mean (μ = 0) and unit variance. Asking how much of the distribution is more than 2 standard deviations above the mean is equivalent to asking for the probability that X exceeds the mean by more than 2 standard deviations. A calculator can compute the probability density function for the normal distribution, given the mean, the standard deviation, and the point x at which to evaluate the function. In rescaled and recentered barplots (Chapter 7, Normal distribution, page 2), the bars are rescaled by the standard deviation and recentered at the expected value.

On the document-format side of "normal": a normalized PDF may have external references, a different color space, document-level metadata, and object-level metadata compared with a generic PDF document. The PDF/A-1 format does not preclude creating documents from scanned page images using the PDF/A-1b conformance profile.
For the normal distribution, the values less than one standard deviation away from the mean account for about 68% of the set. This says that approximately 68% of the scores are within one standard deviation of the mean, 95% are within two standard deviations, and 99.7% are within three. The normal distribution holds an honored role in probability and statistics, mostly because of the central limit theorem, one of the fundamental theorems that forms a bridge between the two subjects. The z-score provides a standard way to compare statistics based on different normal distributions.

The general formula for the probability density function of the normal distribution is

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}$

where μ is the mean and σ the standard deviation. A normal distribution with mean zero and variance one is the simplest case, known as the standard normal distribution. The letter Z is often used to denote a random variable that follows this standard normal distribution, and table values represent the area to the left of the z-score. A normal distribution has some interesting properties; for instance, for a standard normal random variable Z we have E[Z] = E[Z³] = E[Z⁵] = 0 (we will verify that this holds in the solved problems section).
We say that a random variable X follows the normal distribution if the probability density function of X is given by

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}$

By symmetry, the probability of a score between 0 and 1 is the same as the probability of a score between 0 and −1. A standard normal distribution is the case with zero mean (μ = 0) and unit variance (σ² = 1). It is often said that the standard normal pdf has no elementary antiderivative; while this is true, there is an expression for this antiderivative as an infinite series. Then, we'll derive the moment-generating function M(t) of a normal random variable X, and use it to show that the mean and standard deviation of X are indeed μ and σ, respectively. For the bivariate normal, zero correlation implies independence: if X and Y have a bivariate normal distribution (so we know the shape of the joint distribution), then zero correlation gives independence.

Normal percentile: this value, corresponding to a z-score, gives the percentage of values in a standard normal model found at that z-score or below. (As an adjective, "standard" means falling within an accepted range of size, amount, power, quality, etc.) Appendix tables: Table 1 gives the cumulative distribution function, Φ(z), of the standard normal distribution. Normal distribution of data can be ascertained by certain statistical tests.
The standard normal density function φ satisfies several basic properties. A standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of Φ; cumulative probabilities for negative z-values are shown in a separate table. For the standard normal distribution, 68% of the observations lie within 1 standard deviation of the mean. The standard normal distribution is centered at zero, and the degree to which a given measurement deviates from the mean is given by the standard deviation. In particular, the standard normal distribution has zero mean.

How do we get the derivative of a normal distribution? We normally calculate the derivative of the normal density with respect to the variable; but can we calculate the derivative of the normal distribution with respect to the parameters rather than the variable? (The derivative of the cdf with respect to the variable gives the density.) How can I convert the pdf of a normal distribution N(t,1), integrated from 0 to infinity, to the standard normal? By "convert" I want to represent N(t,1) in terms of the cdf of N(0,1).

On formats once more: you can't just say "standard PDF", since you don't know which standard is meant (PDF/X, PDF/A, PDF 1.x). PDF/A-1 is PDF for long-term preservation, use of PDF 1.4. The normal distributions shown in figures 1 and 2 are specific examples. Figure 4: standard normal probability density function.
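The original question (the derivative of a normal cdf with respect to its boundary) has a direct answer via the fundamental theorem of calculus: the derivative of the cdf Φ is the density φ. A small illustrative sketch (my own, using only the Python standard library) checks this with a finite difference:

```python
import math

def std_normal_pdf(x):
    """phi(x) = exp(-x^2/2) / sqrt(2*pi), the standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def std_normal_cdf(x):
    """Phi(x) via the error function: 0.5 * (1 + erf(x / sqrt(2)))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# By the fundamental theorem of calculus, d/dx Phi(x) = phi(x).
# Check with a central finite difference at a sample point.
h = 1e-6
x = 0.7
numeric_derivative = (std_normal_cdf(x + h) - std_normal_cdf(x - h)) / (2 * h)
```

Here `numeric_derivative` agrees with `std_normal_pdf(0.7)` to many decimal places, which is exactly the statement that the pdf is the derivative of the cdf.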
The distribution of lengths follows a certain pattern that is described by the normal distribution. The parameters of the normal distribution are the mean and the standard deviation. As nouns, the difference between "norm" and "standard" is that a norm is that which is regarded as normal or typical, while a standard is a principle, example, or measure used for comparison. One way to compute probabilities for a normal distribution is to use tables that give probabilities for the standard one, since it would be impossible to keep a different table for each pair of parameters. A white paper on derivatives of the cumulative normal distribution function develops the mathematics to calculate the first and second derivatives of this function. The normal distribution is implemented in the Wolfram Language as NormalDistribution[mu, sigma]. In addition, as we will see, the normal distribution has many nice mathematical properties.

(A PDF can refer to images or other PDFs that are not embedded in the PDF itself. Good support is possible, particularly for files complying with the PDF/A-1a profile, but it is not guaranteed.)

A z-score is measured in units of the standard deviation. The normal density has two inflection points, where the second derivative of f changes sign. The probability density function (upper plot) is the derivative of the cumulative distribution function (lower plot). Proof: let X1 and X2 be independent standard normal random variables. The standard normal is the special case when μ = 0 and σ = 1, and it is described by the probability density function above.
Characteristics of the normal distribution: symmetric, bell shaped. Table values represent the area to the left of the z-score. Projection to standard normal: for any normal random variable X we can standardize. When x is equal to the mean, e is raised to the power of 0 and the pdf is maximized. The simplest case of a normal distribution is known as the standard normal distribution, which can be used to compute cumulative distribution function values. When we say "equivalent" (in the chemical sense of normality), it is the number of moles of reactive units in a compound. In a normal model, 68% of the data lies within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3. For example, if the mean of a normal distribution is five and the standard deviation is two, the value 11 is three standard deviations above (to the right of) the mean. Let us find the mean and variance of the standard normal distribution; a tool in calculus known as the derivative is used to answer such questions. The table below contains the area under the standard normal curve from 0 to z; dark blue is one standard deviation on either side of the mean. The standard normal distribution is symmetric, with a mean of zero and standard deviation of 1. The binomial is approximated by the normal distribution as long as n ≥ 30, or when np(1−p) ≥ 5; for smaller values of n it is wise to use a table giving exact values for the binomial distribution.
About 68% of values drawn from a normal distribution are within one standard deviation.
http://mathhelpforum.com/differential-geometry/95946-taylor-function-null.html
# Math Help - Taylor: showing a function is null

1. ## Taylor: showing a function is null

Hello, f is a $C^{\infty}$ function from [a,b] to $\mathbb{R}$. We suppose:

$\exists x_0 \in ]a,b[$ and a real $k>0$ such that $\forall n\in \mathbb{N},\ f^{(n)}(x_0)=0$ and $\sup_{[a,b]}\{|f^{(n)}(x)|\}\le k^nn!$

1) Show that f is the null function on $]x_0-\frac{1}{k},x_0+\frac{1}{k}[$.

I think that we must use Taylor's inequality, which says (I don't know if it is the same name in English): for $u \in [a,b]$,

$|f(b)-\sum_{j=0}^n\frac{(b-u)^j}{j!}f^{(j)}(u)|\le M\frac{|b-u|^{n+1}}{(n+1)!}$

where M bounds $|f^{(n+1)}|$ on [a,b]. So here we have, for $x\in ]x_0-\frac{1}{k},x_0+\frac{1}{k}[$:

$|f(x)|\le \frac{|x-x_0|^{n+1}}{(n+1)!}M$

However we can take $M=k^{n+1}(n+1)!$ and $|x-x_0|\le \frac{1}{k}$, so we have $|f(x)|\le 1$... and I have no idea how to continue the demonstration. Thanks. Do you understand what I said?

2. If I am mistaken, tell me (first of all, I want to improve my English and do maths)... thanks

3. Originally Posted by J.R [the question above]

Your argument is correct, and it shows that $|f(x)|\leqslant k^{n+1}|x-x_0|^{n+1}$.
But you are told that x belongs to the open interval $(x_0-\tfrac1k,x_0+\tfrac1k)$ (or in the horrible French notation $]x_0-\tfrac1k,x_0+\tfrac1k[$ ), so you know that $k|x-x_0|$ is strictly less than 1, so its (n+1)-th power can be made arbitrarily small, and hence $f(x)=0$.

4. ## Problem solved

message removed

5. ## Problem solved

Sincerely, thank you very much. @+
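The quantitative heart of that answer can be illustrated numerically: writing q = k|x − x0| < 1, the Taylor bound |f(x)| ≤ q^(n+1) decays geometrically to 0 as n grows, forcing f(x) = 0. A small illustrative sketch (mine, not from the thread; the sample values of k and the distance are arbitrary):

```python
# Numeric illustration of the key estimate from the thread:
# if q = k * |x - x0| < 1, then |f(x)| <= q**(n+1) for every n,
# and the right-hand side can be made arbitrarily small.

def taylor_bound(k, dist, n):
    """The bound k**(n+1) * dist**(n+1) = (k*dist)**(n+1)."""
    return (k * dist) ** (n + 1)

k, dist = 2.0, 0.4          # dist < 1/k, so q = 0.8 < 1
bounds = [taylor_bound(k, dist, n) for n in range(0, 100, 10)]

# The bounds decrease monotonically toward 0; since |f(x)| is below
# every one of them, f(x) must be 0.
decreasing = all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
```

The same check fails exactly when q ≥ 1, which is why the conclusion only holds on the open interval of radius 1/k around x0.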
https://algebra-calculators.com/markup-meaning/
# Markup

## Markup Meaning

Markup is the gross profit earned on a specific commodity or service, usually expressed as a percentage over the cost price. For example, if the cost of a good is Rs. 100 and the good is sold for Rs. 150, the markup is 50%.

The cost price (CP) of a commodity is the price at which the shopkeeper purchases the goods. The selling price (SP) is the price at which the shopkeeper sells the goods to the buyer. The term markup is widely used in business studies: markup is defined as the difference between the selling price and the cost price of a good, and the profit or loss of a business is easily determined through it.

## Markup Formula

Since markup is the difference between the selling price and the cost price of the product, the markup formula is:

Markup = Selling price (SP) − Cost price (CP)

## What is the Markup Price?

Markup pricing is the method of adding a certain percentage of markup to the cost price of a product in order to arrive at its selling price. To use markup, a company first determines the cost price of the product, then decides the amount of profit to be earned over the cost of the goods sold, and adds that markup to the cost. Let us understand the concept through a markup pricing example.

## Markup Pricing Example

Suppose a mobile manufacturing company has the following cost and sales expectations.

Variable cost per unit – Rs. 30
Fixed cost – Rs. 5,00,000
Expected unit sales – 50,000 units

Unit cost = Variable cost + Fixed cost / Unit sales

Hence, the unit cost = 30 + 5,00,000/50,000 = Rs. 40

Once the cost is estimated, the manufacturer decides to add a 20% markup on sales.
The markup price formula for the above markup pricing example is:

Markup price = Unit cost / (1 − desired return on sales) = 40 / (1 − 0.2) = Rs. 50

Hence, the manufacturer should ask Rs. 50 from a buyer to earn the desired profit of Rs. 10 per unit.

## Markup Price Formula

The markup price is the additional price or profit earned by the seller over and above the total cost of the product or service. It is also the difference between the average selling price per unit and the average cost price per unit. Hence:

Markup price = (Sales revenue − Cost of goods sold) / Number of units sold

## Markup Percentage

Markup percentage is the percentage markup over the cost price used to determine the selling price of a product. It is calculated as the ratio of gross profit to the cost price of the unit. When deciding the selling price, companies commonly take the cost price, apply a markup (generally a percentage of the cost price) as the profit margin, and so arrive at the selling price.

## Markup Percentage Formula

Selling price = Cost price × (1 + Markup)

Markup = (Selling price / Cost price) − 1 = (Selling price − Cost price) / Cost price

Markup percentage = 100 × (Selling price − Cost price) / Cost price

## Difference Between Margin and Markup

The difference between margin and markup is that margin (also known as gross margin) is the difference between sales and the cost of goods sold, while markup is the amount by which the cost of a good is increased to determine the selling price. Confusing markup and margin can set a price substantially too low or too high, resulting in lost sales or lost profit.
It can also have adverse effects on market share, as an excessively high or low price may be out of line with competitors' prices. If we know the markup, we can easily calculate the profit margin of a product:

Selling price − Cost price = Selling price × Profit margin

Hence, Profit margin = (Selling price − Cost price) / Selling price

Margin = 1 − 1/(markup + 1), i.e. Margin = markup/(1 + markup)

For example, if the markup is 50%, then the profit margin is Margin = 0.5/(1 + 0.5) = 33.33%.

The difference between markup and margin can also be seen from the following points:

• To achieve a gross margin of 10%, the markup percentage should be 11.1%
• To achieve a gross margin of 40%, the markup percentage should be 80%
• To achieve a gross margin of 50%, the markup percentage should be 100%

## Solved Examples

1. If the selling price of a chocolate box is Rs. 500 and its cost price is Rs. 150, find the markup percentage.

Solution: Markup percentage = 100 × (Selling price − Cost price)/Cost price = 100 × (500 − 150)/150 = 100 × 350/150 = 233.33%

2. A shopkeeper uses a markup rate of 50% on a toy car. If the cost price of the toy car is Rs. 1000, find its selling price.

Solution: Markup = 50% of the cost price = 50/100 × 1000 = Rs. 500
Selling price = Cost price + Markup = 1000 + 500 = Rs. 1500

3. The overall sales revenue of a company X is $20000. The cost of the goods sold by the company is $10000. The number of units sold is 1000. Find the markup price per unit for company X.

Solution: Let us use the markup price formula to calculate the markup price for company X.
Markup price = (Sales revenue - Cost of goods sold) / Number of units sold
Markup price = ($20,000 - $10,000) / 1,000 = $10,000 / 1,000 = $10 per unit

Quiz Time

1. Which of the following is the type of answer most likely expected for the question "What is the markup on this item?"
1. 3 bits
2. $1000
3. It depends
4. 50%

2. A shopkeeper pays its wholesaler $40 for a certain item and sells the item for $75. What is the markup rate?
1. 81%
2. 55%
3. 60%
4. 87.5%

3. An item originally priced at Rs. 55 is marked 25% off. Find the selling price.
1. Rs. 42
2. Rs. 60
3. Rs. 76
4. Rs. 41.25
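The formulas and worked examples above can be checked with a short script (the function names are mine, not from the text):

```python
# Helper functions for the markup formulas above (function names are mine).
def markup_price(unit_cost, desired_return):
    """Markup price = unit cost / (1 - desired return on sales)."""
    return unit_cost / (1 - desired_return)

def markup_percentage(selling_price, cost_price):
    """Markup percentage = 100 * (selling price - cost price) / cost price."""
    return 100 * (selling_price - cost_price) / cost_price

def margin_from_markup(markup):
    """Convert a markup ratio into a profit-margin ratio: markup / (1 + markup)."""
    return markup / (1 + markup)

print(markup_price(40, 0.2))                    # 50.0, the worked example above
print(round(markup_percentage(500, 150), 2))    # 233.33, solved example 1
print(round(margin_from_markup(0.5) * 100, 2))  # 33.33, margin for a 50% markup
```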
https://tex.stackexchange.com/questions/438525/spacing-before-and-after-titles
# spacing before and after titles

I am making a custom .sty file in which I intend to change the spacing before and after sections, subsections and subsubsections. I use the command

\titlespacing{<command>}{<left>}{<before-sep>}{<after-sep>}

but it doesn't have any effect. Can someone help me, please?

My MWE for the .sty file is the following:

\NeedsTeXFormat{LaTeX2e}[1994/06/01]
\ProvidesPackage{body}
\RequirePackage[spanish]{babel}
\RequirePackage{xcolor}
\RequirePackage{titlesec}
\RequirePackage{sectsty}
\RequirePackage{chngcntr}
\RequirePackage{etoolbox}
\definecolor{coolblack}{rgb}{0.0, 0.18, 0.39}
\definecolor{darkcerulean}{rgb}{0.03, 0.27, 0.49}
\definecolor{blue(ncs)}{rgb}{0.0, 0.53, 0.74}
\sectionfont{\color{darkcerulean}}
\subsectionfont{\color{blue(ncs)}}
\subsubsectionfont{\color{blue(ncs)}}
\titlespacing{\section}{0pt}{2cm}{15cm}
\titlespacing{\subsection}{0pt}{2cm}{5cm}
\titlespacing{\subsubsection}{0pt}{2cm}{5cm}
\renewcommand{\labelitemi}{$\bullet$}
\setlength\parindent{0pt}

And the test document is the following:

\documentclass[12pt,twoside,titlepage]{article}
\usepackage{body}
\usepackage{lipsum}
\begin{document}
\section{Título nivel 1}
\lipsum[2]
\subsection{Titulo nivel 2}
\lipsum[2]
\subsubsection{Titulo nivel 3}
\lipsum[2]
\end{document}

The titlesec and sectsty packages don't work well together. You could consider just using titlesec with its \titleformat* command instead of \sectionfont etc. from sectsty:

\titleformat*{\section}{\Large\bfseries\color{darkcerulean}}
\titleformat*{\subsection}{\large\bfseries\color{blue(ncs)}}
\titleformat*{\subsubsection}{\bfseries\color{blue(ncs)}}

This does exactly what you want. Alternatively you could try to use both (not recommended) and just change the order of calling them. Call sectsty first and it works.
\titleformat*{\section}{\Large\bfseries\color{darkcerulean}}
\titleformat*{\subsection}{\large\bfseries\color{blue(ncs)}}
\titleformat*{\subsubsection}{\bfseries\color{blue(ncs)}}
\titlespacing{\section}{0pt}{1cm}{.5cm}
\titlespacing{\subsection}{0pt}{1cm}{.4cm}
\titlespacing{\subsubsection}{0pt}{1cm}{.3cm}

produces:

• Thank you! I followed your advice of not using titlesec and sectsty together and it worked! – user151562 Jun 29 '18 at 12:08
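Pulling the answer together, a minimal self-contained document (a sketch that reuses the color definitions from the question and drops sectsty) would be:

```latex
\documentclass[12pt,twoside,titlepage]{article}
\usepackage{xcolor}
\usepackage{titlesec} % sectsty deliberately omitted
\usepackage{lipsum}
\definecolor{darkcerulean}{rgb}{0.03, 0.27, 0.49}
\definecolor{blue(ncs)}{rgb}{0.0, 0.53, 0.74}
% Formatting via titlesec only:
\titleformat*{\section}{\Large\bfseries\color{darkcerulean}}
\titleformat*{\subsection}{\large\bfseries\color{blue(ncs)}}
\titleformat*{\subsubsection}{\bfseries\color{blue(ncs)}}
% Spacing now takes effect:
\titlespacing{\section}{0pt}{1cm}{.5cm}
\titlespacing{\subsection}{0pt}{1cm}{.4cm}
\titlespacing{\subsubsection}{0pt}{1cm}{.3cm}
\begin{document}
\section{Título nivel 1}
\lipsum[2]
\subsection{Titulo nivel 2}
\lipsum[2]
\subsubsection{Titulo nivel 3}
\lipsum[2]
\end{document}
```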
https://www.physicsforums.com/threads/the-average-of-a-function.67906/
# The average of a function

#### ribod

I have this function: y = x/(x − 1/x), and I want to find the average value of y between two values of x. Is there some mathematical way to do this?

#### cronxeh, Gold Member

$$\frac{1}{x_{2}-x_{1}} \int_{x_{1}}^{x_{2}} {\frac{x}{x-\frac{1}{x}} dx }$$

$$a = x_{1}, b = x_{2}$$

$$\frac{1}{b-a} \left( b + \frac{1}{2}\log(b - 1) - \frac{1}{2}\log(b+1) - a - \frac{1}{2}\log(a-1) + \frac{1}{2}\log(a+1) \right)$$

#### Zurtex, Homework Helper

Correct me if I am wrong, but I think that nicely cancels down to:

$$\frac{1}{b - a}\left(b - a + \text{tanh}^{-1}(a) - \text{tanh}^{-1}(b) \right)$$

#### cronxeh, Gold Member

And you assume that if the OP doesn't know the average of a function, then he'll know what a hyperbolic tangent is :rofl:

I know what tanh is but not the average of a function.

#### Data

Ah, but do you know what $$\tanh^{-1}$$ is?!?!

#### HallsofIvy, Homework Helper

The average of a function f(x) between x = x1 and x = x2 is:

$$\frac{1}{x_2-x_1}\int_{x_1}^{x_2}f(x)dx$$
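HallsofIvy's definition and cronxeh's closed form can be cross-checked numerically (a sketch; valid for intervals with b > a > 1, where the logarithms are defined):

```python
# Numerical cross-check of the thread's closed-form answer.
import math

def f(x):
    return x / (x - 1 / x)

def average_numeric(a, b, n=100_000):
    # Midpoint-rule approximation of (1/(b-a)) * integral of f over [a, b]
    h = (b - a) / n
    total = sum(f(a + (i + 0.5) * h) for i in range(n))
    return total * h / (b - a)

def average_closed_form(a, b):
    # cronxeh's antiderivative: F(x) = x + (1/2)log(x-1) - (1/2)log(x+1)
    F = lambda x: x + 0.5 * math.log(x - 1) - 0.5 * math.log(x + 1)
    return (F(b) - F(a)) / (b - a)

print(abs(average_numeric(2, 5) - average_closed_form(2, 5)) < 1e-6)  # True
```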
https://lexique.netmath.ca/en/platonic-solid/
# Platonic Solid

Name given to each of the five regular convex polyhedra, named after Plato, who linked them to the four elements in his treatise Timaeus.

### Formulas

The variable a corresponds to the edge length of each solid.

• For a regular tetrahedron: $$A=\sqrt{3}a^{2}$$ and $$V=\frac{\sqrt{2}}{12}a^{3}$$
• For a cube: $$A=6a^{2}$$ and $$V=a^{3}$$
• For an octahedron: $$A=2\sqrt{3}a^{2}$$ and $$V=\frac{\sqrt{2}}{3}a^{3}$$
• For a dodecahedron: $$A=3\sqrt{5\left ( 5+2\sqrt{5} \right )}a^{2}$$ and $$V=\frac{15+7\sqrt{5}}{4}a^{3}$$
• For an icosahedron: $$A=5\sqrt{3}a^{2}$$ and $$V=\frac{5\sqrt{14+6\sqrt{5}}}{12}a^{3}$$

### Examples

The 5 Platonic solids:

• Regular tetrahedron
• Cube (regular hexahedron)
• Regular octahedron
• Regular dodecahedron
• Regular icosahedron

All the faces of a Platonic solid are congruent regular polygons.
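These formulas can be evaluated directly; for instance (a sketch, tabulating area and volume for edge length a = 1):

```python
# Evaluate the surface-area and volume formulas above for edge length a.
from math import sqrt

def tetrahedron(a):  return sqrt(3) * a**2, sqrt(2) / 12 * a**3
def cube(a):         return 6 * a**2, a**3
def octahedron(a):   return 2 * sqrt(3) * a**2, sqrt(2) / 3 * a**3
def dodecahedron(a): return 3 * sqrt(5 * (5 + 2 * sqrt(5))) * a**2, (15 + 7 * sqrt(5)) / 4 * a**3
def icosahedron(a):  return 5 * sqrt(3) * a**2, 5 * sqrt(14 + 6 * sqrt(5)) / 12 * a**3

for name, solid in [("tetrahedron", tetrahedron), ("cube", cube),
                    ("octahedron", octahedron), ("dodecahedron", dodecahedron),
                    ("icosahedron", icosahedron)]:
    A, V = solid(1)
    print(f"{name:>12}: A = {A:.4f}, V = {V:.4f}")
```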
https://webdesign.tutsplus.com/articles/why-i-choose-stylus-and-you-should-too--webdesign-18412?utm_source=CSS-Weekly&utm_campaign=Issue-99&utm_medium=web
# Why I Choose Stylus (And You Should Too)

The world of front end web development has been steadily increasing its uptake of what we call "CSS Preprocessors", which extend the functionality of regular CSS. Arguably the two most well known, with the greatest user base, are LESS and Sass/SCSS. However there is a third preprocessor that hasn't received quite as much attention, and that's Stylus. Today we'll be discussing why Stylus is awesome, why I choose it, and why it might just become your new CSS hero. ## Why LESS and Sass Are Awesome Before we get into the specifics of how Stylus works, I'm going to start with my own take on the predominant strengths of LESS and Sass / SCSS, and why I choose neither even though they both rock. ### All Three Rock Each of the three preprocessors includes the use of variables, mixins, nesting and extending, along with varying degrees of logical operations and functions. So all three are the same, in that they let you abstract key design elements, use logic and write less code, which makes all of them able to give you great gains over raw CSS when used well. However, with all three being the same in this basic sense, it's the ways they are different that will ultimately lead to your choice on which to use. ### LESS: Other Reasons It's Great To me, the greatest strength outside of the aspects common to all three preprocessors is the community around LESS and the offerings created by them. The most well known project incorporating LESS is the Twitter Bootstrap framework, and my guess is the desire to work with it is a big part of what leads many people in turn to LESS. Another stand out is the LESShat mixin library, which provides an excellent array of mixins for CSS3 effects and more, and its partner the CSSHat plugin for Photoshop, which generates copy & paste LESS code from PSD elements.
In particular these two items in tandem create a very powerful workflow which is fantastic if you do a lot of your design process within Photoshop. And one more big plus for LESS is the fact that most people find it quite accessible to use. You can use a simple JavaScript file to compile it on the fly, you can use an IDE with in-built compilation like CrunchApp (or CodeKit on Mac only), or you can use Node.js on your local machine for a more robust / flexible compiling solution. ### LESS: Why I Still Don't Use It I always prefer my own code over third party frameworks, and I also tend to do minimal Photoshop design these days, preferring to design dynamically in the browser as much as possible. (CSSHat can output in multiple languages too). So for me, as great as the projects I described are, they alone aren't enough to compel me to choose LESS as my go-to preprocessor. But the biggest reason I don't use LESS is actually the significant gulf in available logic processing features between it and the other two major preprocessors. Unfortunately, the fewer logic-based features that are available for use, the less we're able to create minimal, clean code, and the slower development and subsequent modification will be. LESS does allow for some logic, but it is really quite limiting compared to Stylus and Sass / SCSS. You'll see why in my description of what's awesome about Sass. ### Sass: Other Reasons It's Great and Powerful Sass also has a great community with many great projects available to use. Where LESS has Twitter Bootstrap, Sass has frameworks like Gumby and Foundation. Where LESS has LESShat, Sass has mixin libraries like Compass and Bourbon. However, where it really comes into its own compared with LESS is its powerful ability to handle logic. Where LESS is what you might call enhanced CSS, Sass behaves much more like a complete programming language. 
For example, Sass lets you create efficiently written conditional checks, which is particularly useful within mixins. In Sass you could do the following: This mixin checks to see if $border_on is set to true, and if so it uses the $border_color value in the output for the border property. If not, it falls back on setting the border property to 0. It then also checks to see if $bg_on is set to true, and if so it uses the $bg_color value in the output for the background-color property. If not, it sets the background-color property to transparent. This means that, depending on the values passed, up to four different types of output could be generated from a single mixin, i.e. border and background both on, border and background both off, border on and background off, border off and background on. However in LESS there are no "if / else" checks, so the above would not be possible. The most you can do with LESS is use what is called "Guarded Mixins", where a given mixin is output differently based on a check against a single expression. So your mixin could check if the @border_on parameter was set to true like so: However because it's missing "if / else" functionality it neither has the ability to subsequently check the value of @bg_on, nor to give an alternative value for the border property within the same mixin. In order to achieve the same logic that was handled with a single Sass mixin, you would need to create four different guarded mixins with LESS, i.e. one for each of the possible combinations of @border_on and @bg_on values, like so: And that's just with two values to check; the number increases with every value on which you want to run logic, which can become very unwieldy as you want to create more sophisticated mixins. It's also an arduous process to consider all the possible permutations of variable combinations in order to account for them all. That's just one example of where enhanced logic makes life much easier with Sass vs.
LESS, but there are many more. Notably, Sass also offers excellent iteration abilities through its @for, @each and @while directives. And finally, very importantly, while LESS has some excellent in-built functions, Sass makes it very easy to write your own. They're simply written as: These logical functions open up a world of possibility for things like creating your own layout engines, px to em handling, color modifiers, and shortcuts for an infinite number of things you might find yourself needing from project to project. From everything I've read and heard people chatting about, and from my own experience, it's this greatly enhanced logical power that is the main driver for people choosing Sass over LESS. ### Sass: Why I Don't Use It Even Though It's Amazing With LESS ruled out for most projects due to its limited logical operations, it comes down to a choice between Sass and Stylus, both of which have a powerful array of features available. And between the two, I choose Stylus. Stylus has the power of Sass, with the accessibility of LESS. Stylus.js does everything I need that Sass does, but it only requires JavaScript or Node.js to compile. Plus, it has a particular way of operating that is smooth and easy to work with, and it has a beautiful clean syntax that I prefer. For me, the requirement to run Ruby on Rails and deal with gems is a major roadblock to wanting to work with Sass. Ruby isn't a part of any of the projects I develop, so the only reason I ever have to deal with installing it and any gems is solely to handle Sass. That's a set of connection errors and installation issues I don't need if I can avoid it. I suspect many other people are also in the same boat of not otherwise using Ruby, and not especially wanting to in order to use a CSS preprocessor. 
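To make the logic comparison concrete, the Sass mixin and custom function described above might look something like this sketch (the parameter names come from the surrounding prose; the border width and the function body are illustrative assumptions):

```scss
// Sketch only: $border_on / $bg_on etc. are the names from the article's prose
@mixin border_and_bg($border_on, $border_color, $bg_on, $bg_color) {
  @if $border_on == true {
    border: 1px solid $border_color; // 1px width is an assumption
  } @else {
    border: 0;
  }
  @if $bg_on == true {
    background-color: $bg_color;
  } @else {
    background-color: transparent;
  }
}

// A custom Sass function: declared with @function, returning via @return
@function double($n) {
  @return $n * 2;
}

.box {
  @include border_and_bg(true, #888, false, #eee);
  width: double(150px);
}
```

In LESS, by contrast, a guarded mixin can only branch on its single guard expression, which is why four separate guarded mixins would be needed to cover the same four combinations.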
Additionally, even though I need to install Ruby in order to use Sass, I still find myself needing to work with Node.js and NPM in order to use Grunt to handle other aspects of my projects, such as watching for changes, combining and minifying JavaScript and so on, as well as Bower for other package management. Note: there is a program called Scout for Mac and Windows that will handle compilation for you, but again where possible I prefer to avoid installing something for a single purpose only, rather than working with tools I can use for multiple purposes. There is also CodeKit, but that's Mac only. So when there's a preprocessor like Stylus that has all the logical power I need, but can be used easily with my preferred IDE and existing Node.js setup or pure JavaScript, it just makes sense to choose it. Many people find the setup process for Sass intimidating because of the Ruby factor and choose LESS for that reason. However I find that the ease of setup for Stylus is essentially on par with LESS, while giving me the full gamut of logical functionality. But it's not just about Ruby, or even just the logical functionality available. It's also about the specific way Stylus works and the syntax it uses, which I find to be incredibly clean, flexible and smooth in comparison to both LESS and Sass. So now, let me tell you why I choose Stylus, and why it might be your new CSS hero. ## Why I Choose Stylus As I've touched on above, I choose Stylus for its: • Powerful logical functionality • Ability to run via Node.js / JavaScript, (no Ruby) • Ability to run as part of the Node.js setup I'd have anyway in order to use Grunt and Bower. • Clean and minimal yet flexible syntax • A general smoothness in the way Stylus approaches its various features To really show you why all of the above make me choose Stylus, we need to jump in and start using it a little so I can show you exactly what I'm talking about. 
Let's start with the biggest hurdle people run into with CSS preprocessors, whichever one they choose, and that's setup and compilation. A big part of why I choose Stylus is that I can set it up as part of my regular project creation methods, and through that I can use it with my preferred IDE. Let me show you how. ## Stylus Setup and Compilation Yes, there are some command line processes that are involved, however take it from someone who'd never used command line for a thing before preprocessors required it - it's nowhere near as difficult as you think, and using the command line will make you feel ten percent smarter than you did before. :) That said, I've put together a package, which you can grab by hitting the "Download" button at the top of this article, which will mean you'll barely have to think about the command line if you're on Windows. Just a few double clicks and you'll be up and running. If you're on Mac or Linux, fear not, as there are only three commands you'll have to run to use the package, I'll walk you through how, and they're super easy. This package will watch your source Stylus files for changes, and it will compile them into CSS files for you. You can use it with any IDE you want, which is a big perk of this particular approach. For me personally, it's the epically awesome Sublime Text 2. It's also the IDE I recommend for using with Stylus due to the excellent Stylus syntax highlight package available for it, which I'll cover below. ### Step 1: Install Node.js Node.js is pretty much a must have these days for front end web development. There are so many amazing tools that work on top of it, so installing will get you established not just for Stylus but for plenty of other things too. Go to http://nodejs.org/download/ and download the installer for your OS. Run the installer as you would any other to put Node.js onto your system. ### Step 2: Install Grunt Grunt is an incredible tool for running JavaScript tasks. 
You can use it for over two thousand different purposes via its plugins, listed here: http://gruntjs.com/plugins In our case, we're going to be using it to watch our Stylus files and compile them into CSS whenever they change. Prepare for your first taste of command line, so open up a command window / terminal. On Windows I find the easiest way is just to open up Windows Explorer, then inside any folder hold down the SHIFT key and right-click. In the context menu you'll then see "Open command window here", which you should click: Alternatively you can click the "Start" button, then search for "cmd" and press ENTER to bring up the command window. If you're on Linux, I'm guessing you probably already know how to open a terminal, but if not here is a guide to how on the various distros: https://help.ubuntu.com/community/UsingTheTerminal And if you're on Mac, take a look at A Designer's Introduction to the Command Line. Now, type the following command and press ENTER: You'll see a load of text like this appear in the window: Wait until that all finishes and a new command prompt appears. That will mean the installation is complete, and you can then close the command window / terminal. You only need to do this once as it will install Grunt on your system with global access so you can use it from any future project folder you setup. Now you're ready to setup an actual project using the StylusCompiler package I've provided. This is the process you'll repeat for each new design project you begin. ## A Stylus Project Let's take this step by step. ### Step 1: Setup a Project Folder Create a folder to house your project. For this demo, we'll just call it EGProject. Inside that, create a second folder named css. This is the folder your CSS files will be written into. Now extract the StylusCompiler.zip file into this folder. 
You should end up with a structure that looks like this: ### Step 2: Install StylusCompiler Go into the StylusCompiler folder and, if you're on Windows, double-click the file named double_click_to_install.bat. If you're not on Windows, open up a terminal in the StylusCompiler folder, (or open a terminal and then navigate / cd to the folder). Type the following then press ENTER: This will install the compiler inside your project folder. Again you'll see a bunch of stuff like this come up in the window: If you're on Windows and double-clicked the .bat file, the window will close once the installation has completed. If not, wait till the text stops moving and a new command prompt appears. Keep your terminal open for the next step. ### Step 3: Aaaaaand Engage! Now all you need to do is initiate the "watch" function the project has you setup to use via Grunt. This will watch the stylus folder inside the StylusCompiler folder for changes to any .styl files within it. Just create all the Stylus files you need in that stylus folder and you're good to go. Note: All your Stylus files must have the .styl file extension. When changes are saved to any files in that folder, the package will then compile your Stylus code into CSS, and write it into a file named style.css in the css folder of your project. Still in the StylusCompiler folder, if you're on Windows, double-click the file named watch_and_compile.bat If you're not on Windows, with your terminal still in the StylusCompiler folder, type the following then press ENTER: You should see this in the command window / terminal: Now if you save changes to any file in the StylusCompiler > stylus folder, (as long as you haven't made any mistakes in your code), you'll see the following: When you're done working on your Stylus files you can just close the command window / terminal, or if you need to run another command, you can press CTRL + C to stop the "watch" task. 
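For reference, the commands referred to in the steps above are presumably the standard npm and Grunt ones (treat this as a sketch; the folder layout is the one described in the article):

```shell
# Step 2 of the setup: install the Grunt command-line interface globally
npm install -g grunt-cli

# Inside the StylusCompiler folder: install the project's local dependencies
npm install

# Step 3: start watching the stylus folder and compiling to ../css on change
grunt watch
```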
## Optional Steps ### Changing Project Options One of the reasons I love working with this type of project setup is you're in complete control, so you can set your project up however you like it, and change at any time. If you want to change things like the output folder for your css, the output file name, whether the CSS is compressed or not, and so on, you can do so in the file named Gruntfile.js in the StylusCompiler folder. We're using the grunt-contrib-stylus plugin for Grunt to handle compilation, so you can get a full rundown on all the possible configurations for it here: https://github.com/gruntjs/grunt-contrib-stylus. However, here are the main options you're likely to want. • Line 20, compress CSS output or not Set the compress option to true for production ready minified CSS, or to false for expanded CSS while you're still in development. • Line 27, set CSS output file name The default filename that will be written to is "style.css". If you wish the file to be named something else, replace "style.css" with your choice on this line. • Line 32, CSS output location By default the compiler will look up one level from the StylusCompiler folder, and write into the css folder therein. If you want your CSS files to be written somewhere else, change the value on this line from '../css' to your preferred location. ### Working With Sublime Text 2 and Stylus As I mentioned above, the beauty of this approach is you can use any IDE at all to edit your Stylus files and they'll compile just the same. However I strongly recommend using Sublime Text 2 as the Stylus syntax highlighting package available for it makes working with Stylus a delight. You can download Sublime Text 2 here: http://www.sublimetext.com/2. After downloading and installing, visit this page and follow the instructions for installing "Package Control", the brilliant package manager for Sublime Text: https://sublime.wbond.net/installation#st2 Finally, install the Stylus syntax highlight package. 
Open up Package Control by going to Preferences > Package Control like so: In the list that appears, click the "Install Package" option, and wait a few seconds while a list of available packages is retrieved: Type "stylus" in the field above the list of packages in order to search for it, then click the result titled "Stylus" in order to install it: This package will now turn tricky to read, regular CSS formatting like this: …into easily differentiated Stylus formatted code like this: ## Stylus Syntax One of the things I absolutely LOVE about Stylus is its total flexibility on syntax. With LESS, all code must be written in the same way you would write regular CSS, i.e. you must include curly braces, colons and semicolons in the same way you would in CSS. With Sass / SCSS you have a choice: • You can set a compilation option in your project to use SCSS syntax, in which case you write as you would regular CSS, or... • You can choose the Sass syntax, in which case you can omit curly braces and semicolons in favor of using tab indentations and new lines, but you won't be able to use regular CSS syntax in the same file. Stylus on the other hand is totally flexible, and you don't have to set any compilation options to handle the way you want to write. • You can write in regular CSS syntax with curly brackets and the works if that's how you feel comfortable. • Or, you can drop curly braces, colons and semicolons all together. Where curly braces would normally be, a tab indentation is used instead. Where a semicolon would normally be, a new line is used. And where a colon would normally be, a simple space does the job. • And, not only can you use either approach, but you can even combine them in the same file. 
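For illustration, here are the three equivalent ways of writing the same rule in Stylus (a minimal sketch; the rule contents are arbitrary):

```stylus
/* 1. Regular CSS syntax works as-is */
body {
  color: #333;
  font-size: 14px;
}

/* 2. Drop the braces and semicolons; indentation and new lines take over */
body
  color: #333
  font-size: 14px

/* 3. Drop the colons too; a space does the job */
body
  color #333
  font-size 14px
```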
All these examples will compile in Stylus, and the approaches to syntax can be used together in the same document: Only Stylus allows omission of all these syntax elements, to varying degrees, and the 'on the fly' combination of these approaches so you can do whatever you feel like as your project moves along. This functionality is amazing for development. You'll be surprised to find just how much greater your flow is when you omit all the syntax "punctuation" you can. Your coding and your thought process as you move along will become so much smoother. And with the syntax highlighting provided by the package we installed earlier, you'll find your code will be every bit as readable. But at the same time compilation is very forgiving. If you decide for one reason or another that regular CSS syntax will make part of your document better organized you can go right ahead and use it whenever you want. And if you accidentally miss out a semicolon here or there, nobody minds. ## Stylus Variables, Mixins, Conditionals and Functions You saw above some examples of how variables, mixins, conditional checks and functions look in LESS and Sass. To my eye, I find the Stylus approach to these easier to look at, read, and work with in general. In LESS, variables must be prepended with the @ symbol. In Sass, they must be prepended with the $ symbol. However in Stylus, a variable doesn't have to be prepended with anything at all. Note: You can optionally use the \$ symbol if you prefer, but not the @ symbol as this is reserved for other purposes in Stylus. Similarly, mixins, conditional checks and functions needn't be prepended with anything in Stylus. In LESS, a mixin must be written in the same way you would write a regular CSS class, and there are no conditional checks or custom functions. 
In Sass, mixins must be prepended with @mixin and called with @include, conditional checks are written as @if and @else, and functions must be prepended with @function and include a line prepended with @return. None of these things are required in Stylus. You can simply write naturally as you might in regular language. For example, earlier we used this example Sass mixin and function: This mixin and function would be called like so: In Stylus, these could be written and called as follows: This to me is very neat, easy to read and write, and in-keeping with the preprocessor goal of making code clean and minimal. Bear in mind also that while in the example above I have omitted all the syntax "punctuation" that can be left out, it's totally optional how much of it you want to leave out in your development. For example, I have called the border_and_bg mixin seamlessly, writing it in essentially the same way I would a regular CSS property, with no brackets around the parameters or commas in between them. However if you prefer to you can include brackets and commas when you call mixins, it's completely up to you. ## The Nib Mixin Library One of the best things about working with Sass and LESS are the Compass / Bourbon and LESShat mixin libraries, respectively. But you won't miss out on an amazing library of mixins with Stylus, thanks to Nib. The "StylusCompiler" package I provided you with automatically installs (thanks to grunt-contrib-stylus) and includes Nib in your project, so you don't have to take any further steps in order to use it. Nib provides mixins for all the CSS3 effects you would expect, each of which can be called seamlessly as though using a regular CSS property. It also includes an impressive array of mixins for other functions like positioning, resetting / normalizing, clearfix-ing, responsive images and more. Check out the docs for a full rundown here: http://visionmedia.github.io/nib/ Note: A second mixin library option for Stylus is Axis. 
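A sketch of how the mixin and function discussed earlier could be written and called in Stylus, with no @mixin, @include, @function or @return required (names from the prose; the border width and function body are illustrative):

```stylus
border_and_bg(border_on, border_color, bg_on, bg_color)
  if border_on == true
    border 1px solid border_color
  else
    border 0
  if bg_on == true
    background-color bg_color
  else
    background-color transparent

double(n)
  n * 2

.box
  border_and_bg true #888 false #eee   // called like a regular CSS property
  width double(150px)
```

Note the mixin call: no brackets or commas are needed around the arguments, though you can include them if you prefer.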
## Other Loveable Stylus Goodness

Stylus has loads of other awesome features, done in its own unique and super clean way, and you really should check out the whole lot here: http://learnboost.github.io/stylus/

However, there are a couple in particular that I really love.

### Rest Parameters

Rest parameters allow you to pass an undetermined number of values to a mixin without having to explicitly map them out when you create the mixin. You can pull out a particular value and then pass the "rest" along by using args... and args.

### Property Lookup

Sometimes you might repeat a certain value a couple of times, but only within a single style, so declaring a variable to hold it can be overkill. With the property lookup feature, you can look up the value of any property you've declared in the same style or a parent style.

All you have to do is use the @ symbol before the property you want to look up. Stylus will look first in the same style; if it finds no match it will check the parent, and it will continue to bubble up until it either gets a match or reaches the document root and returns "null".

## Wrapping Up & Some Final Stylus Goodies

Hopefully you now feel ready to tackle setting up Stylus if you've been wary of the command line before, and you're curious enough to investigate it if you love the power of Sass but would prefer working with Node.js over Ruby. And even if neither of those is particularly relevant to you, I hope you're intrigued enough by some of the unique approaches taken by Stylus to spin it up and have a play with it.

To wrap up, I'd like to leave you with a list of interesting Stylus-related goodies to look through, some mentioned above, as well as some extras. Enjoy!
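And as one last illustration of the two features described above, here are minimal sketches; the mixin and selector names are invented for the example:

```stylus
// Rest parameters: pull out the first value, forward the "rest" with rest...
shadow-stack(base, rest...)
  if rest
    box-shadow base, rest
  else
    box-shadow base

// Property lookup: @width re-uses the width declared in this same style.
.logo
  width 150px
  position absolute
  left 50%
  margin-left -(@width / 2)   // centers the element with no variable needed
```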
# Finite size effect on the structural and magnetic properties of MnAs/GaAs(001) patterned microstructures thin films

## Abstract

MnAs epitaxial thin films on GaAs(001) single crystalline substrates crystallize at room temperature (RT) in a mixture of two crystalline phases with distinct magnetic properties, organized as stripes along the MnAs [0001] direction. This particular morphology is driven by anisotropic epitaxial strain. We elucidate here the physical mechanisms at the origin of the size reduction effect on the MnAs crystalline phase transition. We investigated the structural and magnetic changes in MnAs patterned microstructures (confined geometry) when the lateral dimension is reduced to values close to the periodicity and width of the stripes observed in continuous films. The effects of the microstructure's lateral size, shape and orientation (with respect to the MnAs $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction) were characterized by local probe synchrotron X-ray diffraction (μ-XRD) using a focused X-ray beam, X-ray Magnetic Circular Dichroism - Photo Emission Electron Microscopy (XMCD-PEEM) and Low Energy Electron Microscopy (LEEM). Changes in the transition temperature and the crystalline phase distribution inside the microstructures are evidenced and quantitatively measured. The effect of finite size and strain relaxation on the magnetic domain structure is also discussed. Counter-intuitively, we demonstrate that below a critical microstructure size, bulk MnAs structural and magnetic properties are restored. To support our observations we developed, tested and validated a model based on the size dependence of the elastic energy and strain relaxation to explain this phase redistribution in laterally confined geometry.

## Introduction

MnAs is a promising candidate for electrical spin injection into GaAs- and Si-based semiconductors1,2,3.
Indeed, it has a large carrier spin polarization, a small coercive field, and relatively high saturation magnetization and Curie temperature. Bulk MnAs crystals are known to exhibit a hexagonal (α-phase) ferromagnetic structure at low temperature and to undergo a first-order phase transition to the paramagnetic, distorted orthorhombic β-phase above a critical temperature $$T_c$$ of around 45 °C4,5. Epitaxial MnAs films on GaAs single crystalline substrates6,7,8,9,10, which are more appropriate for spin injection applications, show the coexistence of both of the aforementioned phases (α and β) at RT and over a more or less extended temperature range that depends on the film characteristics (thickness and orientation) and manufacturing conditions. The equilibrium coexistence observed in this case11,12 was shown to result from the large mismatch between the α-MnAs and GaAs lattice spacing along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction of MnAs (a-axis), which is parallel to the GaAs $$[\bar{1}\mathrm{10]}$$ direction. This large anisotropic lattice mismatch yields epitaxial strain that the system relieves, over a large temperature range around RT, by inserting β-MnAs domains, which have a smaller lattice mismatch with the substrate. The lattice parameter discontinuity between the α and β phases translates into the onset of a characteristic stripe pattern of alternating ridges and grooves along the MnAs $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction5,13 (see Fig. 1a). Several ways of imaging and characterizing these structures have been used, including Magnetic Force Microscopy (MFM)6,14, XMCD-PEEM and LEEM microscopies7,15,16,17,18,19. A potential use of such a system in microelectronics and device manufacturing requires miniaturization down to micron and sub-micron sizes. Size reduction can be beneficial, through the occurrence of genuine material properties that can be exploited in new devices, or detrimental, by altering or suppressing desirable bulk properties.
It is thus necessary to characterize and quantify the MnAs film properties in laterally confined geometries, namely when the lateral size approaches the stripe periodicity p. Generally speaking, patterned samples can have very different micro-crystalline and micromagnetic behavior than continuous thin films, and therefore one cannot simply extrapolate thin-film results to microstructured patterns, due to finite size effects20,21,22,23. In the specific case of the MnAs system, several important questions remain open: How will the crystalline phases (and consequently the magnetic domains) organize when the associated energies become smaller than the thermal energy? What is the effect of the lateral finite size on the strain release and, consequently, on the coexistence regime of the α/β crystalline phases? Moreover, the high uniaxial anisotropy present in this system along the MnAs $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction also requires an investigation of a possible influence of the orientation of the confined microstructures with respect to the crystallographic directions. This issue remains challenging and has already been raised in studies of MnAs ribbons by Tortarolo et al.14 and disks by Takagaki et al.22, both using MFM, and by Steren et al.24 on thin MnAs ribbons using XMCD-PEEM. We report here on our recent investigation of the size effect on the magnetic and structural properties of microstructured MnAs patterned thin films. The MnAs system, even in bulk form, is known to show a strong structural and magnetic correlation. A dual approach giving simultaneous access to both physical properties on the very same objects is mandatory to get solid insight into the underlying physical mechanisms. Therefore, we have used an original combination of two local probe X-ray methods, μ-probe X-Ray Diffraction (μ-XRD) and XMCD-PEEM, to investigate the finite size effect on the magneto-structural properties.
μ-XRD gives direct access to the local lattice parameters and unambiguously identifies and quantifies the presence of the two crystalline phases in microstructured MnAs thin films. Direct quantitative access to the strain is also granted by this technique. XMCD-PEEM microscopy has been used to evidence the effect of finite size and lateral confinement on the magnetic properties of the MnAs α-phase. To corroborate our experimental observations, we propose and validate a model originally inspired by that of Kaganer et al.11,12, which consists in taking into account the size dependence of the elastic energy stored globally in the microstructure to describe the α/β phase coexistence diagram in a 300 nm thick patterned MnAs thin film. This model is found to reproduce our observations fairly well.

## Results

### MnAs Thin film case

We begin with a brief overview of some of the results from LEEM and XMCD-PEEM studies of continuous MnAs thin films that are relevant for the present report7,25,26. As stressed in the introduction, two magnetic and structural phases coexist in MnAs thin films: ferromagnetic hexagonal α-MnAs and orthorhombic β-MnAs. The coexistence is due to the anisotropic strain caused by the strong expansion of the MnAs basal plane during the phase transition from the high temperature β-phase to the low temperature α-phase, which in the bulk occurs with a small hysteresis around 45 °C. The strain is predominantly uniaxial, leading to the formation of alternating stripes of α and β phases, with the stripe direction perpendicular to the basal plane, which is itself perpendicular to the film surface (see Fig. 1). The widths of the α and β stripes ($$w_\alpha$$ and $$w_\beta$$) depend both on the temperature and the film thickness (t), while the period of the stripes ($$p = w_\alpha + w_\beta$$) increases linearly with the thickness ($$p \simeq 4.8t$$) and does not depend on the temperature.
The strain induced by the strong lattice mismatch with the GaAs substrate is strongly temperature dependent. Thus, the MnAs film adopts a pure β-phase at elevated temperature (T $$\gtrsim$$ 100 °C) and a pure α-phase at low temperature (T $$\lesssim$$ 10 °C). The phase coexistence range is directly linked to the strain relaxation and thus to the film thickness and the growth conditions. Finally, due to the large lattice parameter difference between the hexagonal and the orthorhombic phases, a corrugation of 1.7% with respect to the film thickness is reported within the phase coexistence temperature range. From the magnetic point of view, α-MnAs has a large negative magnetocrystalline anisotropy with three easy $$\mathrm{ < 11}\bar{2}\mathrm{0 > }$$ axes in the basal plane and a hard axis perpendicular to it (c-axis). As a consequence, the magnetization points not along the stripe direction, as expected from shape anisotropy considerations, but rather perpendicular to it, predominantly in-plane. This leads to a complicated, thickness-dependent magnetic domain structure in the interior of the α-phase27, which is reflected in the complexity of the magnetic images of the film surface (see Fig. 1c). The key point in understanding the magnetic domain onset in MnAs thin films is to consider the stability of the three-dimensional magnetization distribution. Indeed, starting from a critical thickness ($$\gtrsim$$100 nm), the demagnetization energy induced by the lateral confinement of the α-phase is reduced by the formation of 3D flux-closure domains at the cost of the exchange energy. Three magnetic configurations are generally observed: type I (S and/or Landau states), type II (diamond state) and type III (double diamond state). The prevalence of the three domain configurations depends on the film thickness and the temperature. The XMCD-PEEM (see Methods) image reported in Fig. 1c shows the micromagnetic structure of such a film.
The ferromagnetic domain structure of the α-MnAs phase is revealed by circular dichroism imaging28. In Fig. 1c, the black/white contrast results from ferromagnetic domains with opposite magnetization, perpendicular to the direction of the α stripes and parallel to the direction of the incoming photon light, while the gray stripe contrast corresponds to the paramagnetic β-MnAs.

### α-β structural phase coexistence in MnAs microstructures

The films used in this study were prepared by solid-source molecular beam epitaxy26 (MBE, see Methods). MnAs films of 300 nm thickness were epitaxially grown on a GaAs(001) single crystalline substrate. The samples were patterned by electron beam lithography with rectangular and elliptically shaped microstructures; the 2D lateral sizes of the microstructures range from 12 μm down to 0.75 μm, the latter being half of the α/β stripe period p (lithography, see Methods). The aspect ratio and the orientation with respect to the MnAs $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction were also varied (Fig. 2). $$L_a$$ refers to the size of the microstructure along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction (a-axis), while $$L_c$$ corresponds to the size in the orthogonal direction (c-axis). This particular arrangement ensures having on the same sample rectangles and/or ellipses with the long dimension either parallel or perpendicular to the large strain direction. Before any measurements, the sample was first annealed at a temperature high enough to pass through the phase transition to the pure β-MnAs phase for all lithographed MnAs microstructures. This procedure resets the system and avoids measuring particular "frozen-in" configurations resulting from the lithography process. The sample is then gently cooled down to RT and μ-XRD measurements are performed. Figure 3 shows a θ − 2θ scan across the Bragg peaks characteristic of the α and β MnAs phases, using a hybrid pixel detector (X-ray energy E = 9.5 keV).
The very sharp and intense GaAs(001) substrate Bragg peak (not shown) was found at 2θ ≈ 54.94°, as expected from the bulk GaAs lattice parameter value. The quantities of the α and β phases can be estimated by calculating the integrated areas of the peaks (hatched regions in red and green for the α and β phases, respectively) and correcting the result with the appropriate structure factor. This simplified approach allows a rapid estimation, but does not take into account a possible broadening of the peaks in the θ direction. For the refined data acquisition, the approach is extended in the following way: each MnAs microstructure is centered in the X-ray beam by laterally scanning its position in the x and y directions (Fig. 3a), using the XRD signal as contrast. Then the incident angle ($$\theta_i = \theta_B$$) is scanned around the corresponding Bragg value, recording the full image of the area detector. For each illuminated MnAs microstructure, the data set thus corresponds to a volume in reciprocal space close to the Bragg peaks. Figure 3b,c show representations of such volumes around the Bragg peaks of α- and β-MnAs for two different microstructure sizes, 12 × 12 and 4.5 × 4.5 μm2, respectively, measured at RT. The same absolute color scale is used for the scattered intensity. The representation consists of an iso-intensity surface and three planar cuts through the peaks, along high symmetry planes. The expected positions of the Bragg peaks of the α and β phases are indicated by red and green arrows, respectively. One can note the much lower absolute maximum intensity in panel c, a natural consequence of a smaller microstructure (volume) illuminated by the X-ray beam. Also note the change of the fraction of the α-MnAs phase (from 89% to 98%) when the lateral size of the microstructure is reduced. A faint but negligible trace of the β-MnAs Bragg peak can still be detected in Fig. 3c for the small microstructure; see the vertical plane cuts.
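The simplified area-based estimate described above can be sketched numerically. This is a minimal illustration, not the authors' analysis code; the peak areas and structure-factor values are placeholders:

```python
def phase_fraction_alpha(area_alpha, area_beta, f2_alpha=1.0, f2_beta=1.0):
    """Alpha-phase volume fraction from integrated Bragg-peak areas.

    Diffracted intensity scales with the diffracting volume times the squared
    structure factor |F|^2 of the reflection, so each integrated area is
    divided by its |F|^2 before taking the volume ratio.
    """
    v_alpha = area_alpha / f2_alpha   # quantity proportional to the alpha volume
    v_beta = area_beta / f2_beta      # quantity proportional to the beta volume
    return v_alpha / (v_alpha + v_beta)

# Illustrative numbers only: with equal structure factors, peak areas of
# 89 and 11 (arbitrary units) give an alpha fraction of 0.89.
sigma_alpha = phase_fraction_alpha(89.0, 11.0)
```

As the text notes, this ignores any peak broadening in the θ direction; the refined reciprocal-space-volume acquisition addresses that limitation.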
This approach allows not only an estimation of the α/β phase ratio, but also the extraction of the 2θ positions of the characteristic Bragg peaks and their corresponding widths, for each MnAs microstructure. The Bragg peak position is related to the lattice spacing (here in the direction perpendicular to the surface, i.e. the MnAs $$\mathrm{[1}\bar{1}\mathrm{00]}$$ direction) and therefore also to the strain. The width of the peak can be related to the presence of local crystalline defects in the probed volume. These values are reported, at RT, in Fig. 4a,c as color graphics: each bar represents the shape and size of the lithographed microstructure, while the color (from blue to red) encodes the amplitude of the reported quantity: the α-phase fraction ($$\sigma_\alpha$$), and the relative position ($$\theta_\alpha$$) and full width at half maximum (FWHM) of the α-phase Bragg peak. The results reported and discussed here concern several sets of measurements and samples (patterns), including rectangular and elliptical shapes. The elliptical microstructures were intended to investigate the effect of corner-induced magnetic stray fields on the magnetic domain structure. From our XRD structural measurements we did not observe any effect related to the shape of the microstructures. Therefore, we will discuss here only the rectangular patterns.

### Effect of size reduction on the α-β phase repartition at RT

The reference α-phase fraction σ for a continuous MnAs film (non-lithographed part of the sample) was measured by XRD to be around 72.7% at RT, which is close to the value found for the 12 × 12 μm2 microstructure. Thus, the former may be quite representative of infinite microstructures. When exploring microstructures with the long dimension $$L_c$$ perpendicular to the α/β stripes (i.e. parallel to the X-ray direction, corresponding to the low strain direction), a significant quantity of the β-phase is still present.
We can understand this finding phenomenologically by the fact that a 'long dimension' allows several stripes (i.e. alternating α/β phases) to be accommodated, and thus includes the β-phase. When the lateral dimension of the MnAs microstructure is reduced in the direction parallel to the α/β stripes ($$L_a$$), the α/β phase ratio increases. The pure α-phase is found for microstructures with a lateral size smaller than the stripe period p ($${L}_{a}^{c}\,\simeq \,4.8t$$). This value is in agreement with the conclusions of Tortarolo et al.14 in the case of MnAs ribbons ($${L}_{a}^{c}\, > \,$$3.2t).

### Effect of size reduction on the α-β structure at RT

The shift in the 2θ position is calculated with respect to the position found for the continuous MnAs film, for which we can confidently assume that the α/β domains are unaffected by any lateral-confinement-like effect (infinite film case). A non-lithographed part of the sample of 200 × 200 μm2 was used for this purpose. The largest shift of the peak position is obtained for the rectangular shapes (smaller width, large $$L_c$$) oriented along the X-ray beam (i.e. those crossed by many stripes). Combined with the result reported for the α/β phase ratio, the shift can be explained phenomenologically as follows: these microstructures can accommodate many α/β stripes, so their crystalline structure will be affected. The 'average' lattice parameter of the α-phase tends to adapt to that of the β-phase, thus yielding a significant shift in the 2θ position of the Bragg peak. A similar situation occurs for the characteristic peak of the β-phase (not shown here). These microstructures exhibit more crystalline defects, causing a mosaicity-like broadening of the Bragg peak. A similar effect can be seen on the characteristic peak of the β-phase. Due to the low fraction of the β-phase, and consequently peaks with lower intensities, the corresponding data are noisier (see e.g. Fig. 3a).
In the case of the rectangles oriented with the long axis along the stripe direction (small $$L_a$$), they will simply accommodate a single phase, with a clear preference for the α-phase (at RT), as shown by the graph in Fig. 4a. Although we have focused in this paper on size effects when varying the lateral dimension of the objects along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction ($$L_a$$), it is interesting to note that differences are also detected when the object size varies along the [0001] direction ($$L_c$$). Both the position of the α-phase Bragg peak ($$\theta_\alpha$$) and its FWHM (Fig. 4b,c) increase significantly when $$L_c$$ decreases. These results suggest that the α-phase exhibits more local defects when $$L_c$$ is reduced. We propose the following possible explanation: it has already been shown, for a sample made of epitaxial layers, that significant changes in crystalline lattice parameter and lattice plane orientation can appear in the vicinity of the edges of a lithographed object29,30,31,32. For the objects considered here, when $$L_c$$ decreases (at large $$L_a$$), the weight of the border area (i.e. of the significantly perturbed lattice) becomes increasingly important within the sample area. This translates into the shift of $$\theta_\alpha$$ and the increase of the FWHM reflected by the variations reported in Fig. 4b,c. Note that when $$L_a$$ becomes smaller (comparable to the stripe period), the above-mentioned effect might have a lower amplitude: since only one or two stripes can be accommodated inside the object, this may somehow lock the crystalline phase and leave less room for introducing relaxation and/or defects.
### Effect of size reduction on the α-β phase transition

We performed local probe μ-XRD experiments at various temperatures across the α/β phase transition, in order to access the possible dependence of the critical transition temperature $$T_c$$ and the phase coexistence temperature range on the size and shape of the MnAs microstructures. All the microstructures were measured at about 40 temperatures (in the range of 8 to 55 °C) in order to cover the full phase transition region. Quantities similar to the ones reported at RT in Fig. 4a,c are extracted, and their change with temperature is shown in Fig. 5. These results show, first, the presence of a thermal hysteresis loop: the phase transition does not happen at the same temperature when heating or cooling the sample. A significant change of $$T_c$$ with microstructure size is also noticed; however, it is difficult to extract a clear quantitative dependence on the lateral size. This is most likely due to the initial growth conditions and to the lithography process. As a matter of fact, because of the large anisotropic lattice mismatch between GaAs and MnAs, a large stress field, forming an array of misfit dislocations33, is created at the interface during the initial stage of growth and propagates through the whole thickness of the MnAs sample. This stress field is not released by further thermal processing. In the case of the continuous film, and because of this stress field, the α/β stripes always appear at the same position, whatever the thermal history, i.e. cooling down from the pure β-phase to the pure α-phase and vice versa. However, the microstructures have been lithographed at random positions on the sample surface with respect to the strain field in the film, cracks and defects. Therefore, depending on the nucleation position of the α/β boundary with respect to the position of the microstructure and to defects, the thermal hysteresis can exhibit different behavior.
Nevertheless, and without disregarding the above-mentioned influence of the particular sample history and preparation, an important effect is highlighted for rectangular microstructures of the same size but oriented parallel and perpendicular to the α/β stripes (low and large strain directions, respectively). For the microstructures oriented with the long dimension perpendicular to the stripe direction, the β-phase appears earlier during cooling (e.g. the 0.75 × 2.25 μm2 curve) than when the long dimension is parallel to the stripe direction (e.g. the 2.25 × 0.75 μm2 curve). We can understand this by noting that a microstructure with the long dimension along the stripes can contain less β-phase at RT and, consequently, the β-phase appears at lower temperature. As mentioned above, the presence of crystalline defects, especially close to the edges of the objects (tilts of crystalline planes and relaxation29,30,31), is likely. We can therefore assume that they constitute nucleation/pinning centers for the alternating α/β stripes, provided that the lateral size of the object along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction is larger than a critical one (see below). It is known that epitaxial clamping effects can substantially change transition temperatures34 in thin films. If, in addition, only one phase is accommodated inside the object, the sharpness of the transition may also be affected, as can be seen, for example, in Fig. 5a (bottom panel) and 5b (top panel). The most important result of the temperature study is the reduction of the temperature range of α/β phase coexistence. While in the continuous film this temperature range extends over several tens of degrees, it is reduced to only a few degrees for microstructures with a lateral size below the critical length $${L}_{a}^{c}\le 1.5\,\mu m$$ along the large strain direction.
This result is very important because it demonstrates, as will be further confirmed by the theoretical modeling, that size reduction along the high strain direction relaxes the large anisotropic strain which is the main cause of the phase coexistence in the continuous film. The shift of the position of the Bragg peak (Fig. 5b, bottom panels) is representative of the strain variation in the MnAs films; the thermal expansion is much smaller in this temperature range and is neglected. The curves corresponding to the α and β phases (Bragg peaks) cross at zero shift (with respect to the values found for the reference continuous MnAs film), which is almost always centered with respect to the hysteresis loops discussed above. These temperature- and size-dependent results confirm that the phase transition in laterally confined MnAs structures is governed by uniaxial anisotropic strain.

### Magnetic domain structure of MnAs microstructures

The μ-XRD results have been further confirmed by RT LEEM and XMCD-PEEM measurements of rectangular, disk and elliptical shapes, performed on the same sample. As observed with μ-XRD, and as will be discussed hereafter, the stabilization of the α/β stripes in laterally confined MnAs microstructures differs from that in continuous MnAs films of the same thickness. First of all, it is worth stressing that the two methods probe different sample depths. XPEEM is essentially surface sensitive and thus allows investigation of the domain structure of the first few nanometers of the sample. μ-XRD has a larger penetration depth (a few hundred nm) and thus provides more bulk information. Therefore, the combination of these two local probe methods allows investigation of the strain relaxation effect and its close relationship to the finite size effect, both in the bulk and at the surface of the MnAs microstructures.
At the surface, strain relaxation can occur due to the broken bonds and thus modify the surface elastic energy of the film. Consequently, the surface vs. bulk α/β phase repartition can be very different. In addition, the XMCD-PEEM measurements give access to the magnetic configuration of the α-phase and thus allow the structural and magnetic properties to be correlated. For the XMCD-PEEM measurements, the samples were prepared at high temperature before cooling down to RT (see LEEM-PEEM microscopy, Methods). In Fig. 6 we present XMCD-PEEM images of selected isotropic MnAs microstructures of different sizes and shapes. First, one may note that the easy magnetic direction remains along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction for all the investigated microstructure sizes and shapes. This finding confirms that the magnetocrystalline anisotropy is strong enough to overcome the magnetic shape anisotropy when the lateral size of the microstructure is reduced, as observed in the thin film case. It is also worth noticing that, as far as the α/β phase repartition is concerned, we did not observe any effect related to the shape of the isotropic microstructures (i.e. disk or square). Nevertheless, looking more closely at the shape of the stripe boundaries, we may note that in disk-shaped microstructures the stripe boundaries are strongly influenced by the shape, especially where the edge curvature is large. The stripe boundaries are no longer aligned parallel to the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction, but tend to rotate to meet the edge of the microstructures perpendicularly (see Fig. 6h). This effect can be easily understood if we consider edge effects: in general, and as will be discussed in the modeling section, the stress vector always tends to cross a free surface perpendicularly.
As deduced from the μ-XRD measurements, the XMCD-PEEM images confirm that below a critical lateral size $${L}_{a}^{c}=1.5\,\mu m$$ along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction, the microstructures adopt exclusively the α-MnAs phase at RT. Importantly, this corresponds to the bulk behavior, which can here, unexpectedly, be restored by lateral size reduction. Finally, comparing the μ-XRD and XMCD-PEEM results allows us to conclude that there are no surface vs. bulk effects and that the microstructures are homogeneous through their whole thickness. From the magnetic point of view, a similar behavior with respect to the size effect is evidenced. Below 2.25 μm the magnetic domains start to differ from those observed in the continuous thin film and do not show the characteristic zig-zag domain structure25. The microstructures develop a magnetic domain structure similar to the one observed in the low temperature pure α-phase26. The 1.5 μm microstructures develop single and head-on domains, while the smallest microstructures (0.75 μm) show almost exclusively single magnetic domains. Figure 7 shows the domain configuration of two microstructures, 2.25 × 4.5 μm2 and 0.75 × 12 μm2, with two different orientations with respect to the large strain direction. As expected, the rectangular structures with the long axis oriented along the [0001] direction, hereafter denoted rectangle-A, are predominantly α-phase, while for the orthogonal microstructures (rectangle-B) the two α/β phases coexist in the form of alternating stripes. This result is in remarkable agreement with the μ-XRD measurements. Looking more closely at the onset of the magnetic domain structure, one may see that rectangle-A develops elliptically shaped domain structures. Such domain structures have already been observed in continuous films and correspond to a mixture of two magnetic domain types: I (S state or Landau state) and III (double diamond state)25,27.
Their occurrence in continuous thin films is very limited, while their prevalence in microstructures is quite large. The fact that this domain state has been observed also in very large microstructures (Fig. 8) suggests that their predominance is probably not correlated with size reduction. From the statistics of the domain structure over several microstructures, the I-III domains are always observed when the ferromagnetic α-phase is located at the edge of the microstructures. It is therefore most likely that their stability is connected to edge effects and to the minimization of the magnetic charges at the microstructure edges via the formation of a three-dimensional flux-closure pattern at the cost of exchange energy. Unfortunately, XMCD-PEEM essentially probes only the surface contribution of the magnetic domains, and there is a real lack of methods capable of characterizing the complex 3D magnetic structure of such magnetic domains. Complex 3D micromagnetic simulations are obviously needed to fully understand the origin and the stability of this particular magnetic domain structure. A detailed analysis of the micromagnetic domain structure in MnAs microstructures will be the subject of a specific publication.

### Model and theoretical interpretation

As mentioned above, the large mismatch between the hexagonal α-MnAs phase and the GaAs substrate lattice spacing along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction yields epitaxial strain that the system relaxes by forming β-MnAs domains, which exhibit a smaller mismatch with GaAs. The fraction σ of the α-phase is then selected so that it minimizes the total free energy F of the system, which can be written as:

$$F={L}_{a}{L}_{c}t(\sigma {f}_{\alpha }+\mathrm{(1}-\sigma ){f}_{\beta })+{E}_{elastic}$$ (1)

where $$L_a$$, $$L_c$$, and t refer to the layer dimensions along the $$\mathrm{[11}\bar{2}\mathrm{0]}$$, [0001], and $$\mathrm{[1}\bar{1}\mathrm{00]}$$ directions, respectively.
f α and f β refer to the free energy of the bulk α and β phases, respectively. E elastic denotes the elastic energy stored within the MnAs layer. In the model developed here we disregard any plastic deformation and non-linear elastic phenomena. This is justified in the present system since the critical thickness for cracking is larger than the thickness of our MnAs layers35 (500 nm vs. 300 nm, respectively). The difference in bulk free energies f α  − f β vanishes at the bulk transition temperature T c and is expected to be a linear function of the temperature T close to the transition: f α  − f β  = −Q(T − T c )/T c , where Q is the latent heat. Evaluation of E elastic is more complex and a priori calls for a full three-dimensional stress analysis. However, since the transition from α-MnAs to β-MnAs leads to a shrinking of the prism hexagon (i.e. a change in the mismatch along the a-axis) without modification of the prism height (i.e. without change in the mismatch along the c-axis), we choose to reduce the problem using a plane strain approximation to the simpler two-dimensional problem for the stress analysis, as sketched in Fig. 9 (inset). This permits a simple semi-analytical derivation of the variations of σ with L a and its subsequent comparison with the experimental measurements, provided that L c remains large with respect to both L a and t. The limits raised by this approximation and the effect of L c will be discussed at the end of the section. Let us first consider a MnAs film with infinite lateral dimensions. This situation is the one originally investigated by Kaganer et al.11. Then, the elastic strains are homogeneous within the film thickness and the only contribution arises from the discontinuity of the lattice parameters at the α/β MnAs interfaces. 
Calling ε α and ε β the epitaxial strains at the α-MnAs/GaAs and β-MnAs/GaAs interfaces, respectively, the elastic energy densities in the α and β phases write: $$Y{\varepsilon }_{\alpha }^{2}$$ and Y(ε β  − η)2, respectively, where Y is the relevant elastic modulus (given, in first approximation, by the MnAs Young modulus). η is the relative change of lattice spacing along the a-axis between the α and β phases. The elastic energy in Eq. 1 writes: $${E}_{elastic}={L}_{a}{L}_{c}t(\sigma Y{\varepsilon }_{\alpha }^{2}+\mathrm{(1}-\sigma )Y{({\varepsilon }_{\beta }-\eta )}^{2})$$. Then, the strains ε α and ε β are selected in order to minimize E elastic under the constraint that the total length of the MnAs film is imposed by the GaAs substrate: σε α  + (1 − σ)ε β  = ε 0, where ε 0 is a constant set by the substrate length. This minimization process yields ε α  = −(1 − σ)η + ε 0, hence $${E}_{elastic}={L}_{a}{L}_{c}tY({\mathrm{(1}-\sigma )}^{2}{\eta }^{2}-{\varepsilon }_{0}^{2})$$ and finally Eq. 1 writes: $$\frac{{F}^{\infty }}{{L}_{a}{L}_{c}t}=(\sigma {f}_{\alpha }+\mathrm{(1}-\sigma ){f}_{\beta })+Y\mathrm{((1}-\sigma {)}^{2}{\eta }^{2}-{\varepsilon }_{0}^{2})$$ (2) where the index ∞ has been added to F to recall a lateral dimension L a infinitely large with respect to the pillar height t. The equilibrium fraction σ is the one that minimizes $${F}^{\infty }$$: $$\begin{array}{lllll}{\sigma }^{\infty }(T) & = & 1 & {\rm{for}} & T < {T}_{c}^{\ast }-{\rm{\Delta }}{T}^{\infty }\\ {\sigma }^{\infty }(T) & = & \frac{Q}{2Y{\eta }^{2}}\frac{{T}_{c}^{\ast }-T}{{T}_{c}^{\ast }} & {\rm{for}} & {T}_{c}^{\ast }-{\rm{\Delta }}{T}^{\infty }\le T\le {T}_{c}^{\ast }\,{\rm{with}}\,{\rm{\Delta }}{T}^{\infty }=\frac{2Y{\eta }^{2}}{Q}{T}_{c}^{\ast }\\ {\sigma }^{\infty }(T) & = & 0 & {\rm{for}} & {T}_{c}^{\ast } < T\end{array}$$ (3) The effect of ε 0 is to shift the temperature $${T}_{c}^{\ast }$$ above which the α-phase is no longer observed with respect to the bulk phase transition, T c . 
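As a hedged illustration (not the authors' code), the piecewise law of Eq. 3 can be sketched as a short function; the constants Q, Y, η and T_c* passed in below are placeholders, not the fitted MnAs values:

```python
def sigma_infinity(T, Tc_star, Q, Y, eta):
    """Alpha-phase fraction sigma-infinity(T) of Eq. 3 for a laterally
    infinite film.  Q: latent heat, Y: relevant elastic modulus,
    eta: relative a-axis lattice-spacing change between alpha and beta."""
    dT_inf = (2.0 * Y * eta**2 / Q) * Tc_star  # coexistence range (Eq. 3)
    if T < Tc_star - dT_inf:
        return 1.0  # pure alpha phase
    if T > Tc_star:
        return 0.0  # pure beta phase
    # linear decrease across the coexistence window
    return (Q / (2.0 * Y * eta**2)) * (Tc_star - T) / Tc_star
```

With these definitions σ∞ falls linearly from 1 at T = T_c* − ΔT∞ to 0 at T = T_c*, matching the three branches of Eq. 3.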
This simple model, combined with the proper values of the material constants Y, Q, η, $${T}_{c}^{\ast }$$, was shown11 to reproduce quantitatively the evolution of σ with temperature in the case of continuous MnAs thin films. Let us now consider the effect of a finite lateral size L a , still keeping at this point the assumption of an arbitrarily large L c . In this case, the elastic strains induced by the lattice mismatch between film and substrate are no longer homogeneous within the thickness, but decrease as the distance from the film/substrate interface increases, with a characteristic decay length of the order of L a (Fig. 9, inset). Two limiting cases are expected: • In the limit of pillars of height $$t\ll {L}_{a}$$, the elastic energy is proportional to a volume $$\simeq {L}_{a}{L}_{c}t$$ of the epitaxial layer and its expression is that obtained by Kaganer et al.11 in the limit of infinite lateral dimensions. • In the limit $$t\gg {L}_{a}$$, the elastically strained zone is confined in a layer of thickness $$\simeq {L}_{a}$$ above the interface. Hence, the elastic energy is proportional to a volume $$\simeq {L}_{a}^{2}{L}_{c}$$ and its relative importance with respect to the bulk (volume) free energy vanishes as L a /t. One then expects to observe the same transition behavior as that observed in bulk MnAs, namely α-phase below T c (in particular at ambient temperature T 0 = RT < T c ), and β-phase above. In Eq. 
1, the term of elastic energy should be modified to: $$\begin{array}{lll}{E}_{elastic}={L}_{a}^{2}{L}_{c}Y\varepsilon \,{(y=\mathrm{0)}}^{2}f(u=\frac{t}{{L}_{a}}) & {\rm{with}} & f(u)\approx u\,{\rm{if}}\,u\ll 1\\ & & f(u)\approx {u}_{c}\,{\rm{if}}\,u\gg 1\end{array}$$ (4) where y is the direction perpendicular to the surface plane (y = 0 describes the MnAs/GaAs interface), ε(y = 0) denotes the amount of elastic strain within MnAs at the interface, and f(u) is a dimensionless function that only (and slightly) depends on the material Poisson ratio, which is well approximated by f(u) ≈ u c (1 − exp(−u/u c )) with u c  ≈ 0.17 (see Methods for its determination). The elastic term in Eq. 2 should be modified accordingly: $$\frac{F}{{L}_{a}{L}_{c}t}=(\sigma {f}_{\alpha }+\mathrm{(1}-\sigma ){f}_{\beta })+\frac{{L}_{a}}{t}f(\frac{t}{{L}_{a}})Y({\mathrm{(1}-\sigma )}^{2}{\eta }^{2}-{\varepsilon }_{0}^{2})$$ (5) and the fraction of the α-phase now writes: $$\sigma (T,\frac{t}{{L}_{a}})=\frac{t}{{L}_{a}}\frac{1}{f(t/{L}_{a})}{\sigma }^{\infty }(T)\,{\rm{for}}\,{T}_{c}^{\ast }-{\rm{\Delta }}T\le T\le {T}_{c}^{\ast }\,{\rm{with}}\,{\rm{\Delta }}T=\frac{{L}_{a}}{t}f(t/{L}_{a}){\rm{\Delta }}{T}^{\infty }$$ (6) As shown in Fig. 9, this expression, completed with the proper dimensionless function f(u) (see Methods), reproduces fairly well the experimental data, without any fitting parameter. The above derivations considered that L c was large with respect to both t and L a . Taking its effect quantitatively into account would require a full three-dimensional analysis of the elastic problem, beyond the scope of this paper. A qualitative picture can however be proposed by noting that, in a layer of size L c  × L a  × t, the elastic strains induced by the lattice mismatch decrease with the distance from the film/substrate interface with a characteristic decay length of the order of min(L a , L c ). 
The two limiting cases now read: • In the limit $$t\ll min({L}_{a},{L}_{c})$$, the elastic energy is proportional to a volume $$\sim {L}_{a}{L}_{c}t$$ of the epitaxial layer and its expression is that obtained by Kaganer et al.11 in the limit of infinite lateral dimensions. • In the limit $$t\gg min({L}_{a},{L}_{c})$$, the elastically strained layer is confined in a layer of thickness $$\sim min({L}_{a},{L}_{c})$$ above the interface. Hence, the elastic energy is proportional to a volume $$\sim {L}_{a}{L}_{c}min({L}_{a},{L}_{c})$$ and its relative importance with respect to the bulk (volume) free energy vanishes as min(L a , L c )/t. One then expects to observe the same transition behavior as that observed in bulk MnAs, namely α-phase below T c (in particular at ambient temperature T 0 = RT < T c ), and β-phase above. Then, Eq. 6 is to be modified into: $$\begin{array}{rcl}\sigma (T,\frac{t}{{L}_{min}}) & = & \frac{t}{{L}_{min}}\frac{1}{g(t/{L}_{min})}{\sigma }^{\infty }(T)\,{\rm{for}}\,{T}_{c}^{\ast }-{\rm{\Delta }}T\le T\le {T}_{c}^{\ast }\,{\rm{with}}\,\\ {\rm{\Delta }}T & = & \frac{{L}_{min}}{t}g(t/{L}_{min}){\rm{\Delta }}{T}^{\infty }\end{array}$$ (7) where L min  = min(L a , L c ) and g(u) is a dimensionless function presenting the same asymptotic limits as f(u). This allows us to interpret the observed effects of L c on σ in Fig. 3(a) and the fact that larger L c tends to yield smaller σ, provided that L c remains significantly smaller than L a . Back to the situation where $${L}_{a}\ll {L}_{c}$$, Eq. 6 also permits us to rationalize the effect of size reduction on the α/β phase coexistence temperature range (Fig. 5 top): • As long as the lateral size along the high strain direction, L a , is large compared to the critical value $${L}_{a}^{c}=t/{u}_{c}\approx 1.8\,\mu m$$, the behavior is that of a film with infinite lateral dimensions and the coexistence temperature range ΔT extends over 10 °C. This is e.g. the case when L a  = 12 μm. 
• When L a becomes small with respect to $${L}_{a}^{c}$$, the coexistence temperature range is reduced and the behavior gets closer to that of bulk MnAs. This is e.g. the case when L a  = 0.75 μm. The coexistence temperature range is then predicted to be $$\sim {L}_{a}{u}_{c}\Delta {T}^{\infty }/t$$, which yields a value of about 4 °C for L a  = 0.75 μm, in good agreement with the observations. The above theoretical approach allows us to interpret the structure of the α and β domains observed in Fig. 6. As the lateral size along the high strain direction is reduced, the behavior gets closer to that of bulk MnAs and pure α domains are observed at RT. As this lateral size is increased, β stripes start to develop and consequently the magnetic domains in the α phase tend to adopt the multidomain magnetic structure observed in the case of continuous MnAs thin films. It also provides a simple, qualitative interpretation of the effect of the shape (rectangles vs. disks) on the domain structure (Fig. 6h): to optimize stress relaxation, the α/β wall boundaries are expected to cross the pattern edge perpendicularly. This generates inclined walls in disks, different from the parallel walls in rectangles (Fig. 5). ## Discussion By combining local probe μ-XRD, LEEM and XMCD-PEEM measurements, we have shown that patterned MnAs/GaAs(001) samples can have very different microcrystalline, and consequently micromagnetic, behaviors; thin film results cannot be straightforwardly extrapolated to patterned microstructures. We demonstrated that the α and β phases coexist also for laterally confined geometries. The presence of the α and β MnAs phases was quantified, and the influence of parameters like the microstructure shape, size, aspect ratio and orientation was studied as a function of temperature, during the α/β phase transition. Reducing the lateral dimensions of the MnAs microstructures along the large strain direction ($$\mathrm{[11}\bar{2}\mathrm{0]}$$) tends to stabilize the α-phase at RT. 
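The finite-size law of Eq. 6, with the approximation f(u) ≈ u_c(1 − exp(−u/u_c)) given earlier, can be checked with a minimal numerical sketch. The values t = 0.3 μm, u_c ≈ 0.17 and ΔT∞ = 10 °C are taken from the text; the code itself is an illustration, not the authors' implementation:

```python
import math

U_C = 0.17  # dimensionless constant of the f(u) approximation

def f(u, u_c=U_C):
    """f(u) = u_c * (1 - exp(-u / u_c)); behaves as u for u << 1
    and saturates at u_c for u >> 1, as required by Eq. 4."""
    return u_c * (1.0 - math.exp(-u / u_c))

def coexistence_range(L_a, t, dT_inf):
    """Coexistence temperature range of Eq. 6:
    Delta-T = (L_a / t) * f(t / L_a) * Delta-T-infinity."""
    return (L_a / t) * f(t / L_a) * dT_inf

# Illustrative check with t = 0.3 um and Delta-T-infinity = 10 C:
# L_a = 12 um recovers nearly the thin-film value (~9.3 C),
# while L_a = 0.75 um gives ~3.8 C, close to the observed ~4 C range.
```

The same functions also reproduce the large-L_a limit, where ΔT tends to ΔT∞ as f(u) ≈ u for u ≪ 1.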
The temperature-dependent measurements allow us to unambiguously evidence strong variations of T c and of the temperature range of the α/β phase coexistence as a function of the lateral confinement. We highlight the important effect of the microstructure orientation parallel or perpendicular to the α/β stripes and the presence of a lateral critical size ($${L}_{a}^{c}\simeq p\simeq 4.8t$$). A theoretical (elasticity-based) model was developed, extending the model proposed by Kaganer et al.11,12. The model predicts the ratio of α/β phases for microstructures with the lateral dimension varying in the direction parallel to the stripes and is in good agreement with the experimental results shown above (including at various temperatures). This model is simple and purely two-dimensional, but it gives a very accurate description of finite size effects on the structural properties of MnAs microstructures. The present model could be further extended by taking into account the interfacial energies between α and β domains to explain size effects in both directions, i.e. parallel and perpendicular to the α/β stripes. Indeed, work still has to be done to extend it in the perpendicular direction, where border effects and the presence of the α/β interfaces are expected to play a major role, as evidenced in disk-shaped microstructures. A possible direction for investigation would be the accurate determination of the f(u) function, for example via finite element simulations, for an accurate modeling of the strain relaxation inside the MnAs lithographed microstructures. From the micromagnetic point of view, we found only an indirect effect of finite size reduction on the magnetic properties. The microstructures evidence the well-known MnAs magnetic domain types (I, II and III). As deduced from the μ-XRD measurements, below a critical size ($${L}_{a}^{c}\mathrm{=1.5}\,\mu m$$), the microstructures adopt predominantly the ferromagnetic α-phase. 
For the smallest microstructure size the α-phase develops a single domain state, similar to the one observed at low temperature in continuous thin films or in bulk MnAs single crystals. Nevertheless, it is worth noting that the microstructure magnetic domains are mostly influenced by edge effects rather than by finite size effects. Type I-III 3D flux-closure magnetic domains are formed at the edges of the microstructures, most probably to reduce the demagnetization energy. A possible extension of this work will be to perform complex 3D micromagnetic simulations to fully understand the edge effect on the stability of flux-closure 3D domain structures. The presented structural, magnetic and theoretical modeling results are coherent and in perfect agreement. It is worth noting that our structural results are in quite good agreement with those published by Tortarolo et al.14. However, we found a large disagreement with their micromagnetic results. A possible explanation of this disagreement could be the influence of the lithography processes on the magnetic properties of the MnAs microstructures (roughness, reactive and selective etching, mask influence...), especially as MnAs can be very chemically reactive36,37. In the present work, to overcome such limitations, we have always left a large area of the sample unpatterned (200 × 200 μm²), which nevertheless undergoes the full lithography processing. This large area serves as a reference for the continuous film and allows checking that the MnAs microstructures remain unaffected by the full patterning process. μ-XRD, μ-LEED (Low Energy Electron Diffraction), XAS (X-ray Absorption Spectroscopy), LEEM and XMCD-PEEM measurements have been simultaneously performed on these large areas under the same measurement conditions. The results of these measurements are in complete agreement with the published results for continuous MnAs thin films. 
Therefore we can confidently assert that our measurements are accurate and unaffected by the sample preparation methods. Finally, Tortarolo et al.14 have exclusively used MFM to characterize the magnetic domain structure. MFM is mainly sensitive to the out-of-plane stray field and does not allow one to straightforwardly determine the MnAs in-plane magnetic structure, as has been demonstrated in the case of the continuous MnAs film. Recently, Steren et al.24 have used XMCD-PEEM to study the micromagnetic changes in thin MnAs (30–50 nm) nano-ribbons, but they have addressed neither the structural aspect nor the α − β phase coexistence. With respect to our conclusions, the magnetic changes observed in these nano-ribbons are more probably induced by edge effects than by size reduction effects. To summarize, we have clearly demonstrated the strong influence of lateral confinement and size reduction on the structural and magnetic properties of MnAs microstructures. Our general finding is that the smaller the microstructures, the more they resemble the bulk infinite MnAs single crystal in terms of structural and magnetic properties. The microstructures adopt exclusively the α-phase at RT, as in the case of bulk MnAs, and the α − β phase coexistence temperature range is strongly reduced. From the magnetic point of view, the microstructures adopt predominantly ferromagnetic single domains similar to those of bulk MnAs. Finally, these experimental observations have been further confirmed by the elastic model, which demonstrates that when the size of the microstructures is much smaller than the film thickness ($${L}_{a}\ll t$$), the strain is confined to the interface and thus the microstructures behave like bulk MnAs. All these results, obtained in the case of the prototypical MnAs system, confirm that size effects in microstructures can be very challenging to predict and have to be addressed very carefully. 
## Methods ### Sample preparation and lithography methods The samples were prepared at the AIST national institute (Tsukuba, Japan) in the so-called A-orientation, following a well-established procedure using solid-source MBE (Molecular Beam Epitaxy)26. After thermal cleaning of the GaAs(100) substrate at 590 °C, a 40 nm GaAs buffer layer is grown at 570 °C. The MnAs layer is then grown at 210 °C with a growth rate of 5 nm per minute. In this orientation MnAs grows epitaxially, adopting the following epitaxial relationship: MnAs$$\mathrm{[1}\bar{1}\mathrm{00]//}$$GaAs[001] and MnAs$$\mathrm{[11}\bar{2}\mathrm{0]}$$//GaAs$$[\bar{1}\mathrm{10]}$$. The films were post-annealed at 310 °C and As-capped to prevent any oxidation during the sample transfer in air. The samples were decapped in-situ at 350 °C prior to the lithography process in order to remove the thick As protective layer. The samples were further capped with a 3 nm thick Ru layer to prevent any contamination of the surface during the lithography. The MnAs microstructures have been patterned by electron beam lithography using a JEOL 6500F scanning electron microscope, with an Al mask and a subsequent Ar ion beam etching down to the GaAs substrate. ### Local probe X-Ray Diffraction The XRD experiments detailed in this report were performed on the ID-01 beamline at the European Synchrotron Radiation Facility (ESRF), Grenoble, France and on the DiffAbs beamline at Synchrotron SOLEIL, Gif-sur-Yvette, France (see Fig. 10). Various focusing devices were used: Be compound refractive lenses (CRL)38,39,40,41 (at the ESRF), or Kirkpatrick-Baez (KB) optics42,43 and Fresnel zone plates (FZP)44,45,46,47,48 (at Synchrotron SOLEIL). The photon energy was in the 7.5 to 9.5 keV range, and the typical X-ray probe size was about 1 × 3 μm² (vertical × horizontal) and 7 μm full width at half maximum of the intensity (FWHM). 
An X-ray hybrid pixel area detector (XPAD)49,50,51,52 was used to perform 3-dimensional mapping of the reciprocal space around the positions of the Bragg peaks characteristic of the MnAs layer (α and β phases). The sample was mounted on a Peltier cooling/heating device, which allowed accessing the −15 °C to +60 °C temperature range. During the first experiments, the sample was kept under He flow to prevent possible oxidation and X-ray beam damage. It was found in later experiments that at lower photon fluxes the sample can also be placed in air. The experimental setup53,54 is depicted in Fig. 10. The angles of the diffractometer are set so as to fulfill the diffraction (Bragg) condition for the MnAs film (α or β phases). Then, the sample lateral position is scanned, while recording with the detector the scattered intensity at each point. A raster image of the sample surface with crystallographic contrast is obtained53,54,55. The result is shown in Fig. 2-left and compared to the optical microscopy images of precisely the same area of the sample. The lateral resolution of the raster images in this approach is essentially given by the lateral size of the X-ray spot (its footprint on the sample), which was kept relatively large (a few μm) on purpose, to integrate over several α/β stripes on the large objects, in order to obtain a result which is not affected by the presence of possible local defects. Thus the reported structural data (quantity of α phase, strain, etc.) are characteristic of the probed object. It is worth noting that the long crack lines running perpendicular to the [0001] direction are caused by strain release. Along the [0001] direction, there is a large stress accumulation because no stress reduction mechanism exists, unlike for the $$\mathrm{[11}\bar{2}\mathrm{0]}$$ direction. 
For thick MnAs films (300 nm), the stress accumulation is so large that it induces the formation of periodic cracks extending over the whole MnAs film thickness, down to the GaAs interface26. ### LEEM-PEEM microscopy The high-resolution magnetic imaging experiments were performed on the French branch of the Nanospectroscopy beamline at the ELETTRA synchrotron facility (Trieste, Italy), using an Elmitec GmbH commercial LEEM/PEEM microscope (LEEM V). In the Low Energy Electron Microscopy (LEEM) mode56, elastically backscattered low energy electrons are used for imaging the surface. The lateral resolution of LEEM is better than 10 nm, and reveals the structural and morphologic features of the films. In the Photo Emission Electron Microscope (XPEEM) mode the microscope collects the secondary electrons emitted from the sample surface upon illumination by polarized and monochromatic X-rays, which in our case are incident on the sample at an angle of 16° from the surface and form a 10 × 10 μm² beam spot. The spatial resolution of the microscope in the XPEEM mode is limited by the chromatic and spherical aberrations to 25 nm. The probing depth is very small, in general below 10 nm, due to the small inelastic mean free path of the secondary photoelectrons. The micromagnetic spin structure of the α-MnAs surface was determined taking advantage of the large XMCD effect associated with the Mn L 3 edge, using circularly polarized light57. In the XMCD-PEEM method, the electron yield difference between opposite helicities of the photon beam is proportional to the dot product of the magnetization and the direction of the photon beam, which enables the mapping of the essentially in-plane component of the surface magnetization. The samples were mounted with the MnAs magnetic in-plane easy axis aligned in the plane of incidence of the photon beam in order to optimize the magnetic contrast within the α-stripes. 
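The contrast mechanism just described can be illustrated with a minimal sketch (not the beamline's actual processing pipeline): the pixel-wise asymmetry of two PEEM images recorded with opposite photon helicities yields the XMCD-PEEM magnetic map.

```python
import numpy as np

def xmcd_asymmetry(i_plus, i_minus, eps=1e-12):
    """Pixel-wise XMCD asymmetry (I+ - I-) / (I+ + I-).

    The result is proportional to the dot product of the local
    magnetization with the photon propagation direction, i.e. to the
    essentially in-plane magnetization component probed in XMCD-PEEM.
    eps avoids division by zero on dark pixels."""
    i_plus = np.asarray(i_plus, dtype=float)
    i_minus = np.asarray(i_minus, dtype=float)
    return (i_plus - i_minus) / (i_plus + i_minus + eps)
```

Applied to two full PEEM frames, this yields a map whose sign distinguishes magnetic domains with opposite in-plane magnetization projections.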
Prior to the LEEM-PEEM experiments, the sample was Ar-ion etched in order to remove the residual polymer mask from the lithography and the Ru capping layer. The sample was further annealed well below the original deposition temperature (200 °C) in order to recover the MnAs surface crystalline structure, as evidenced by LEEM (see Fig. 6a) and μ-LEED measurements (not shown). ### Determination of the dimensionless function f(u) used in Eqs 4 to 6 The function f(u), characterizing the dependence of the elastic energy embedded in the epitaxial layer on its aspect ratio, has been computed by means of central force networks: nodes connected by springs of unit stiffness are placed on a two-dimensional triangular lattice of horizontal and vertical dimensions L and H, respectively. Such a network indeed obeys Hookean linear elasticity with a Young modulus $$Y=2/\sqrt{3}$$ and a Poisson ratio ν = 1/3. A horizontal strain of unit value is then applied to the nodes at the bottom and the positions equilibrating all the forces are determined at all the nodes. The total elastic energy E tot is finally computed as the sum of the energy stored in all the springs. The plot of E tot /L² as a function of H/L provides the function f(u) (Fig. 11). ## References 1. Garcia, V. Structures hybrides MnAs/GaAs: de la croissance aux propriétés de transport tunnel polarisé en spin. Ph.D. thesis, Université Pierre et Marie Curie, Paris, France (2005), https://tel.archives-ouvertes.fr/tel-00122726. 2. Garcia, V. et al. Resonant tunneling magnetoresistance in MnAs/III-IV/MnAs junctions. Physical Review B 72, 081303, https://doi.org/10.1103/physrevb.72.081303 (2005). 3. Ramsteiner, M. et al. Electrical spin injection from ferromagnetic MnAs metal layers into GaAs. Physical Review B 66, 081304, https://doi.org/10.1103/physrevb.66.081304 (2002). 4. Okamoto, H. The AsMn (Arsenic-Manganese) system. 
Bulletin of Alloy Phase Diagrams 10, 549–554, https://doi.org/10.1007/bf02882414 (1989). 5. Plake, T. et al. Periodic elastic domains of coexisting phases in epitaxial MnAs films on GaAs. Applied Physics Letters 80, 2523, https://doi.org/10.1063/1.1467699 (2002). 6. Däweritz, L. et al. Thickness dependence of the magnetic properties of MnAs films on GaAs(001) and GaAs(113)A: Role of a natural array of ferromagnetic stripes. Journal of Applied Physics 96, 5056, https://doi.org/10.1063/1.1790576 (2004). 7. Bauer, E. et al. Magnetostructure of MnAs on GaAs revisited. Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures 25, 1470, https://doi.org/10.1116/1.2746353 (2007). 8. Tanaka, M. et al. Molecular beam epitaxy of MnAs thin films on GaAs. Journal of Vacuum Science & Technology B: Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena 12, 1091, https://doi.org/10.1116/1.587095 (1994). 9. Tanaka, M. et al. Epitaxial ferromagnetic MnAs thin films grown by molecular beam epitaxy on GaAs: Structure and magnetic properties. Journal of Applied Physics 76, 6278, https://doi.org/10.1063/1.358304 (1994). 10. Tanaka, M. Epitaxial ferromagnetic thin films and heterostructures of Mn-based metallic and semiconducting compounds on GaAs. Physica (Amsterdam) 2E, 372–380, https://doi.org/10.1016/S1386-9477(98)00078-2 (1998). 11. Kaganer, V. M. et al. Strain-mediated phase coexistence in heteroepitaxial films. Physical Review Letters 85, 341–344, https://doi.org/10.1103/physrevlett.85.341 (2000). 12. Kaganer, V. M. et al. Strain-mediated phase coexistence in MnAs heteroepitaxial films on GaAs: An x-ray diffraction study. Physical Review B 66, 045305, https://doi.org/10.1103/physrevb.66.045305 (2002). 13. Kästner, M., Herrmann, C., Däweritz, L. & Ploog, K. H. Atomic scale morphology of self-organized periodic elastic domains in epitaxial ferromagnetic MnAs films. 
Journal of Applied Physics 92, 5711, https://doi.org/10.1063/1.1512692 (2002). 14. Tortarolo, M. et al. Size effects on the phase coexistence in MnAs/GaAs(001) ribbons. Physical Review B 81, 224406, https://doi.org/10.1103/physrevb.81.224406 (2010). 15. Bauer, E. Surface Microscopy with Low Energy Electrons. https://doi.org/10.1007/978-1-4939-0935-3 (Springer Nature, 2014). 16. Wichtendahl, R. et al. SMART: An aberration-corrected XPEEM/LEEM with energy filter. Surface Review and Letters 05, 1249–1256, https://doi.org/10.1142/s0218625x98001584 (1998). 17. Bauer, E. et al. Microscopy of mesoscopic ferromagnetic systems with slow electrons. Surface and Interface Analysis 38, 1622–1627, https://doi.org/10.1002/sia.2430 (2006). 18. Laufenberg, M. et al. Observation of thermally activated domain wall transformations. Applied Physics Letters 88, 052507, https://doi.org/10.1063/1.2168677 (2006). 19. Barbier, A., Mocuta, C. & Belkhou, R. Selected synchrotron radiation techniques. In Encyclopedia of Nanotechnology, 2322–2344, https://doi.org/10.1007/978-90-481-9751-4_47 (Springer Science, 2012). 20. Hehn, M. et al. 360° domain wall generation in the soft layer of magnetic tunnel junctions. Applied Physics Letters 92, 072501, https://doi.org/10.1063/1.2838455 (2008). 21. Lacour, D. et al. Indirect localization of a magnetic domain wall mediated by quasi walls. Sci. Rep. 5, 9815, https://doi.org/10.1038/srep09815 (2015). 22. Takagaki, Y., Wiebicke, E., Däweritz, L. & Ploog, K. Fabrication of MnAs microstructures on substrates and their electrical properties. Journal of Solid State Chemistry 179, 2271–2280, https://doi.org/10.1016/j.jssc.2006.02.008 (2006). 23. Takagaki, Y. et al. First-order phase transition in MnAs disks on GaAs (001). Physical Review B 73, 125324, https://doi.org/10.1103/physrevb.73.125324 (2006). 24. Steren, L. B. et al. 
Combined effects of vertical and lateral confinement on the magnetic properties of MnAs micro and nano-ribbons. J. Appl. Phys. 120, 093905, https://doi.org/10.1063/1.4961501 (2016). 25. Engel-Herbert, R. et al. The nature of charged zig-zag domains in MnAs thin films. Journal of Magnetism and Magnetic Materials 305, 457–463, https://doi.org/10.1016/j.jmmm.2006.02.083 (2006). 26. Däweritz, L. Interplay of stress and magnetic properties in epitaxial MnAs films. Reports on Progress in Physics 69, 2581–2629, https://doi.org/10.1088/0034-4885/69/9/R02 (2006). 27. Engel-Herbert, R., Hesjedal, T. & Schaadt, D. M. Three-dimensional micromagnetic domain structure of MnAs films on GaAs(001): Experimental imaging and simulations. Phys. Rev. B 75, 094430, https://doi.org/10.1103/PhysRevB.75.094430 (2007). 28. Stöhr, J. & Siegmann, H. Magnetism (Springer, 2006). 29. Mocuta, C. et al. X-ray diffraction imaging of metal-oxide epitaxial tunnel junctions made by optical lithography: use of focused and unfocused X-ray beams. Journal of Synchrotron Radiation 20, 355–365, https://doi.org/10.1107/S090904951204856X (2013). 30. Murray, C. E. et al. Submicron mapping of silicon-on-insulator strain distributions induced by stressed liner structures. Journal of Applied Physics 104, 013530, https://doi.org/10.1063/1.2952044 (2008). 31. Murray, C. E. et al. Nanoscale silicon-on-insulator deformation induced by stressed liner structures. Journal of Applied Physics 109, 083543, https://doi.org/10.1063/1.3579421 (2011). 32. Chahine, G. A. et al. Imaging of strain and lattice orientation by quick scanning X-ray microscopy combined with three-dimensional reciprocal space mapping. Journal of Applied Crystallography 47, 762–769, https://doi.org/10.1107/S1600576714004506 (2014). 33. Trampert, A., Schippan, F., Däweritz, L. & Ploog, K. H. Proc. 11th Int. conf. on Microscopy of Semiconducting Materials, vol. 164 of Inst. Phys. Conf. Ser. (A. G. Cullis and R. 
Beanland (Bristol), 1999). 34. Choi, K. J. et al. Enhancement of ferroelectricity in strained BaTiO3 thin films. Science 306(5698), 1005–1009, https://doi.org/10.1126/science.1103218 (2004). 35. Takagaki, Y. et al. Cracking of epitaxial MnAs films on GaAs(001). Journal of Applied Physics 107, 023510, https://doi.org/10.1063/1.3288993 (2010). 36. Mohanty, J., Takagaki, Y., Hesjedal, T., Däweritz, L. & Ploog, K. H. Selective etching of epitaxial MnAs films on GaAs(001): Influence of structure and strain. J. Appl. Phys. 98, 013907, https://doi.org/10.1063/1.1954888 (2005). 37. Takagaki, Y., Wiebicke, E., Däweritz, L. & Ploog, K. H. Nonuniform Reactive Ion Etching of MnAs/GaAs Heterostructures: MnAs Nanodots and GaAs Nanocolumns. Jpn. J. Appl. Phys. 43, 2791–2792, http://stacks.iop.org/1347-4065/43/2791 (2004). 38. Snigirev, A., Kohn, V., Snigireva, I. & Lengeler, B. A compound refractive lens for focusing high-energy x-rays. Nature 384, 49–51, https://doi.org/10.1038/384049a0 (1996). 39. Snigirev, A., Kohn, V., Snigireva, I., Souvorov, A. & Lengeler, B. Focusing high-energy x rays by compound refractive lenses. Applied Optics 37, 653, https://doi.org/10.1364/ao.37.000653 (1998). 40. Lengeler, B. et al. A microscope for hard x rays based on parabolic compound refractive lenses. Applied Physics Letters 74, 3924, https://doi.org/10.1063/1.124225 (1999). 41. Lengeler, B. et al. Refractive x-ray lenses. Journal of Physics D: Applied Physics 38, A218–A222, https://doi.org/10.1088/0022-3727/38/10a/042 (2005). 42. Kirkpatrick, P. & Baez, A. V. Formation of optical images by x-rays. Journal of the Optical Society of America 38, 766, https://doi.org/10.1364/josa.38.000766 (1948). 43. Hignette, O. et al. Submicron focusing of hard x rays with reflecting surfaces at the ESRF. In McNulty, I. (ed.) X-Ray Micro- and Nano-Focusing: Applications and Techniques II, vol. 
4499, 105–116 (SPIE-Intl Soc Optical Eng,. https://doi.org/10.1117/12.450227 2001). 44. 44. David, C., Weitkamp, T., Nöhammer, B. & van der Veen, J. Diffractive and refractive x-ray optics for microanalysis applications. Spectrochimica Acta Part B: Atomic Spectroscopy 59, 1505–1510, https://doi.org/110.1016/j.sab.2004.03.019 (2004). 45. 45. Snigirev, A. & Snigireva, I. High energy x-ray micro-optics. Comptes Rendus Physique 9, 507–516, https://doi.org/10.1016/j.crhy.2008.02.003 (2008). 46. 46. Chao, W., Anderson, E. H., Fischer, P. & Kim, D.-H. Toward sub-10-nm resolution zone plates using the overlay nanofabrication processes. In Suleski, T. J., Schoenfeld, W. V. & Wang, J. J. (eds.) Advanced Fabrication Technologies for Micro/Nano Optics and Photonics, 688309 (SPIE-Intl Soc Optical Eng,. https://doi.org/10.1117/12.768878 2008). 47. 47. Gorelick, S. et al. High-efficiency fresnel zone plates for hard x-rays by 100 keV e-beam lithography and electroplating. Journal of Synchrotron Radiation 18, 442–446, https://doi.org/10.1107/s0909049511002366 (2011). 48. 48. Vila-Comamala, J. et al. Zone-doubled fresnel zone plates for scanning transmission x-ray microscopy. 192–195. https://doi.org/10.1063/1.3625337 (AIP Publishing, 2011). 49. 49. Delpierre, P. et al. XPAD: A photons counting pixel detector for material sciences and small-animal imaging. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 572, 250–253, https://doi.org/10.1016/j.nima.2006.10.315 (2007). 50. 50. Medjoubi, K. et al. Detective quantum efficiency, modulation transfer function and energy resolution comparison between CdTe and silicon sensors bump-bonded to XPAD3S. Journal of Synchrotron Radiation 17, 486–495, https://doi.org/10.1107/S0909049510013257 (2010). 51. 51. Medjoubi, K. et al. Energy resolution of the CdTe-XPAD detector: calibration and potential for Laue diffraction measurements on protein crystals. 
## Acknowledgements

The authors would like to acknowledge the ESRF, SOLEIL and ELETTRA synchrotron facilities for allocating beamtime for the local probe μ-XRD and XPEEM experiments. H. Akinaga and F. Takano (AIST, Japan) are acknowledged for the growth of the epitaxial MnAs films. The teams of the ID-01 (ESRF), DiffAbs (Synchrotron SOLEIL) and Nanospectroscopy French branch (Synchrotron ELETTRA) beamlines are acknowledged for the excellent technical support during the experimental campaigns.

## Author information

### Contributions

C.M. and R.B. conceived the μ-XRD experiments. R.B. and E.B. conceived the XPEEM experiments. C.M., R.B. and D.B. analyzed the results. C.M., A.B., S.S., S.E., F. Mon. and R.B. conducted the μ-XRD experiments. S.E., F. Mac., E.B. and R.B. conducted the XPEEM experiments. D.B. developed the model and the simulation. F. Mon. developed the lithography processes. All authors reviewed the manuscript.

### Corresponding author

Correspondence to Rachid Belkhou.

## Ethics declarations

### Competing Interests

The authors declare that they have no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Mocuta, C., Bonamy, D., Stanescu, S. et al. Finite size effect on the structural and magnetic properties of MnAs/GaAs(001) patterned microstructures thin films. Sci Rep 7, 16970 (2017). https://doi.org/10.1038/s41598-017-17251-y
https://chemistry.stackexchange.com/questions/154089/quantifying-soapiness-theres-ph-pka-and-po2-is-there-a-p-soap-or-p-surfactan/154092
# Quantifying soapiness; there's pH, pKa and pO2, is there a p_soap or p_surfactance?

Yes, $$\mathrm{pH}$$ is a concentration, $$\mathrm{p}K_\mathrm{a}$$ is a dissociation constant, and $$\mathrm{pO_2}$$ is a partial pressure. These are (roughly speaking) ways to indicate how much of a key ingredient is in a mixture or how active it is.

When I go away for a long weekend and have unwashed dishes, I just soak them in extra-soapy water; arguably to do some pre-cleaning but mostly as a lame attempt at preventing life from taking hold and multiplying exponentially in the hot summer weather before I get back.

### All soaps are not created equal

I want to ask in Biology SE about the level of soapiness necessary to prevent this from happening, but first I want to ask here if there is a way to quantify the soapiness of soapy water on some recognized or at least recognizable scale. There is a wide range of soaps available in a household, and when there's no dish detergent per se available and nobody is looking I've been known to use other products. A gram of laundry powder, window cleaner, bar soap, and dish detergent could potentially have very different levels of soapiness and therefore ability to strip living cells of their protective lipid membranes, either in bulk or just key, vulnerable constituents.

### Question

For purposes of quantifying soapiness, is there a recognized, or at least recognizable, parameter, something like p_soap or p_surfactance, like there's $$\mathrm{pH},$$ $$\mathrm{p}K_\mathrm{a}$$ and $$\mathrm{pO_2}$$ for other situations?

• Does "p" in "p_soap" refer to the same operator as in $\mathrm{pH}$ $(-\log_{10}a(\ce{H+}))?$ What is "p_surfactance"? Jul 18 at 7:00
• "mostly as a lame attempt at preventing life from taking hold and multiplying exponentially" - a lot of detergents are perfectly acceptable food for a wide variety of bacteria. Jul 18 at 12:11
• @EdV yes I remember! 1, 2 – uhoh Jul 18 at 23:55
• @fraxinus good point.
Many of those detergents contain preservatives for that reason, which I guess wouldn't be effective once the detergent is dissolved. Jul 19 at 0:01
• If you just want to stop life from proliferating, the easiest solution is to do the dishes! I guess if you don't mind wearing gloves when you get back, you could soak your dishes in lye. No aluminum though. Not sure what would happen to stainless steel if you keep doing this. Jul 19 at 0:05

A concept that captures how effective a detergent is at doing its job is aptly called "detergency." As might be expected, this is a complex property and difficult to describe unambiguously with a single parameter. Quoting Ref. 1:

Detergency is difficult to evaluate because it depends on a multitude of variables that in most cases are elusive to monitor and measure.

Given its practical importance, it should not be surprising that a lot of effort has been expended to characterize this property, which in technical lingo is referred to as "detersive efficiency". Ref. 1 explains that standardized detergency-testing methods have been developed by the American Society for Testing and Materials (ASTM) and the International Organization for Standardization (ISO). The method codes for various tests are listed in Table 1 of that publication. The AISE (International Association for Soaps, Detergents and Maintenance Products) has also developed laundry detergent testing guidelines.

In Ref. 1 detergency is quantified as a parameter $$De$$, the ratio of the mass of soil suspended in the bath after treatment to the total mass of soil in the system.

References

1. E. Jurado Alameda, V. Bravo Rodríguez, R. Bailón Moreno, J. Núñez Olea, and D. Altmajer Vaz. Bath-Substrate-Flow Method for Evaluating the Detersive and Dispersant Performance of Hard-Surface Detergents. Ind. Eng. Chem. Res. 2003, 42, 4303–4310.

Soapiness or anything like that cannot be represented by a single number. Hence there is no point in inventing such a quantity.
Just like we cannot associate a plain number to odors, soapiness is scientifically meaningless because it will be an umbrella term. Just like the term polarity is misused, soapiness could be even worse. The only common property of surfactants is the critical micelle concentration (CMC), but that has nothing to do with how clean our dishes or clothes look after a wash cycle. CMC would be useful if there were a single component in dishwashing or laundry soaps. Alas, our synthetic laundry detergents are a mix of really fancy chemicals.

To give you an idea, I quote from a monograph on how detergents are tested, so one can feel the complexity and understand why quoting soapiness as a single number is not useful. Take the example of a laundry detergent; see where soapiness fits in. You can extend the same ideas to a dishwashing liquid.

• Single wash cycle performance (soil and stain removal and bleaching)
• Multiple wash cycle performance, e.g., after 25 or 50 washes (soil antiredeposition properties, degree of whiteness, buildup of undesirable deposits, fiber damage, stiffness, color change, fluorescent whitening)
• Special characteristics (powder characteristics such as density, free flowability, dispensing in a washing machine, homogeneity, dusting properties, solubility, foaming, rinse behavior, and such storage characteristics as chemical and physical stability, hygroscopicity, color, odor, and tendency to form lumps)

The literature describes numerous methods for testing according to the above criteria, some of which are standardized. Standardization is a concern not only of national bodies (e.g., ANSI, the American National Standards Institute; JISC, the Japanese Industrial Standards Committee; DIN, Deutsches Institut für Normung [559]; AFNOR, Association Française de Normalisation; BSI, British Standards Institution), but also of international groups (e.g., ISO, International Organization for Standardization).
The above national organizations are all members of ISO and can, therefore, exercise influence on questions of international standardization [560]. Another particularly important organization concerned with international standardization of test methods is the CID (Comité International des Dérivés Tensio-Actifs) with its subcommittee, the CIE (Commission Internationale d'Essai). This organization was disbanded in 1978, but in the meantime its activities have been taken over and carried forward by the Working Group TMS (Test Methods for Surfactants) of the CESIO (Comité Européen d'Agents de Surface et Intermédiaires Organiques).

Reference: E. Smulders, Laundry Detergents (Wiley-VCH Verlag GmbH & Co. KGaA).

If detergent properties could be summarized by a single number, all these government and regulatory agencies would have come up with one. To the best of my knowledge, there is none! I would say that synthetic detergent chemists and those who analyze them (analytical chemists) are certainly very smart and creative people.

Consider Nature's detergents, like soapwort seeds. These "fruits", when rubbed or boiled, make a good "detergent" solution. I got a chance to use them a couple of times, as they are pretty common in South Asia. Their detergent action comes from saponins, and even that is not a pure compound but a class of compounds which foam in water naturally; but they are not typical surfactants. Again, soapiness is not useful even for natural detergents.

A small note of caution: soaking dishes for days in a dishwashing liquid solution is not a good idea. Depending on the dishware, small cracks, chips, and poor coatings can lead to absorption of detergent into the dishes. Of course, this should exclude all wooden items and even plastic items.
• I saw a question by uhoh over at Astronomy SE (pertaining to using multiple prisms and having multiple interfaces) and posted some temporary stuff, of possible interest to him, in the Sandbox III: chemistry.meta.stackexchange.com/a/4757/79678. Thanks again for the paper about von Littrow! I also posted my most recent CFL echellegram (you have already had it). – Ed V Jul 19 at 0:48
• @uhoh I am not sure how much practical echellography you have done. We discussed that last year or two years ago. Please do see the link posted by Prof. EdV and his echellogram. We can chat about that. Jul 19 at 0:57
• @M.Farooq yes I'd love to! We'll need an appropriate chat space. My practical experience is one long night forty-ish years ago but I spend a lot of time immersed in optics and imaging systems, math, physics and simulations, and these are beautiful (and colorful!) instruments. – uhoh Jul 19 at 7:38
• @uhoh, Then your experience and optics experience would be great for our echelle spectrograph project. How can we (EdV & I) contact you? Jul 19 at 14:40
• The laundry detergent example appears to be measuring lots of things that are definitely not soapiness. Buildup of undesirable deposits, fiber damage, stiffness, colour change - none of these have anything to do with "soapiness". Jul 19 at 16:07
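As a small illustration of the one quantitative handle the first answer does offer: the detersive efficiency $De$ is just a mass ratio, so it is trivial to compute. A minimal sketch in Python; the function name and the sample masses are hypothetical, not taken from Ref. 1:

```python
def detersive_efficiency(soil_suspended_g: float, soil_total_g: float) -> float:
    """Detersive efficiency De in the sense of Jurado Alameda et al. (2003):
    the ratio of the soil mass suspended in the bath after treatment to the
    total soil mass in the system. Returns a value between 0 and 1."""
    if soil_total_g <= 0:
        raise ValueError("total soil mass must be positive")
    return soil_suspended_g / soil_total_g

# Hypothetical numbers for illustration: 0.8 g of soil ends up suspended
# out of 1.0 g initially deposited on the substrate.
print(detersive_efficiency(0.8, 1.0))  # → 0.8
```

Note that this captures only one wash-performance criterion; as both answers stress, it says nothing about redeposition, fiber damage, or any of the other properties a full test protocol measures.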
https://www.gradesaver.com/textbooks/engineering/computer-science/invitation-to-computer-science/chapter-7-7-3-communication-protocols-practice-problems-page-267/2
## Invitation to Computer Science 8th Edition

Published by Cengage Learning

# Chapter 7 - 7.3 - Communication Protocols - Practice Problems - Page 267: 2

#### Answer

This approach would not work with larger graphs, such as one with 26 nodes and 50 links. The number of possible paths would grow much too large for us to enumerate and evaluate them all in a reasonable amount of time. We must use a more clever algorithm.
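The textbook does not name the "more clever algorithm" at this point, but the standard choice for least-cost routing is Dijkstra's algorithm, which finds shortest paths without enumerating every possible path. A minimal sketch; the tiny network and its link costs are invented for illustration:

```python
import heapq

def dijkstra(graph, start):
    """Dijkstra's algorithm: shortest distance from `start` to every
    reachable node. `graph` maps each node to a list of
    (neighbor, link_cost) pairs. Runs in O(E log V) time, so it scales
    to graphs far larger than 26 nodes and 50 links."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter route was already found
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Tiny illustrative network (hypothetical link costs):
net = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(net, "A"))  # → {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Even in this four-node example, the algorithm discovers that A→C→B→D (cost 4) beats the direct-looking A→C→D (cost 6) without ever listing all paths.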
https://delong.typepad.com/sdj/page/4/
MOAR Right-Wing Animus Against Einstein—Note to Self

I was browsing through Friedrich von Hayek's The Fatal Conceit—although it is not clear to me how much of this very late (1988) Hayek is Hayek, and how much is “editor” William Warren Bartley https://web.archive.org/web/20050308180246/http://libertyunbound.com/archive/2005_03/ebenstein-deceit.html. Why? Because Hayek is playing a larger part in my history of the Long 20th Century, Slouching Towards Utopia?, as it moves toward finality, and I am concerned that I be fair to him. And I ran across his claim that the “socialists” felt:

an urgent need to construct a new, rationally revised and justified morality which… will not be a crippling burden, be alienating, oppressive, or 'unjust', or be associated with trade. Moreover, this is only part of the great task that these new lawgivers—socialists such as Einstein, Monod and Russell, and self-proclaimed 'immoralists' such as Keynes—set for themselves. A new rational language and law must be constructed too, for existing language and law also fail to meet these requirements…. This awesome task may seem the more urgent to them in that they themselves no longer believe in any supernatural sanction for morality (let alone for language, law, and science) and yet remain convinced that some justification is necessary….

I Should Launch My Substack, Shouldn't I?

Grasping Reality Wednesday Newsletter: On My Mind Right Now: The Current State of the Coronavirus Plague https://braddelong.substack.com/p/on-my-mind-right-now-the-current: ‘We do need to pick a day for this. Let’s pick Wednesday… And what is on my mind right now is the scale and economic impact of the coronavirus plague: Reported case numbers for the coronavirus plague are worth little. Deaths—as long as the health-care system is not in collapse—tell us that there were between 100 and 200 times as many new cases three to four weeks before.
It is perhaps fantastical to take Australia, Canada, Japan, Korea, and the United Kingdom—the non-continental-Europe nations of the “global north”—as our “yardstick” nations. But if we do, we must be profoundly depressed both at the situation, and at how badly we have fallen short of what nations with competent governance have managed to accomplish… #highlighted #substack #2020-12-23

Brad DeLong & Om Malik: Is America in Decline?

Pairagraph: Is America in Decline? https://www.pairagraph.com/dialogue/fc2f8d46f10040d080d551c945e7a363/4

I confess I think that this came out very well as an intellectual exercise. I am, however, as I say in it, depressed that Om Malik—for whom I have enormous respect, and whose judgment is very, very good—does not have stronger arguments on his side that America is not "in decline". I had very much hoped to end this debate at least half-convinced to his side. But I am not. Sigh.

I see in my twitter feed right now—the morning of 2020-12-22—that more than 40% of Americans surveyed still "approve" of the job that Donald Trump is doing as president. With the U.S. having had 330,000 coronavirus plague deaths—1 in a thousand people—while Australia has had 908 total—one thirtieth the death rate—with a thousand children kidnapped and permanently separated from their parents, with him and his family trying to steal everything that isn't nailed down, what is to approve? Yet 40%. And 74 million people voted for him.

Om wants to say things like "The sheer number of Americans who participated in our November election should be a source of national pride and renewed optimism" and "it is about taking the steps necessary for moving forward, which we will never do if we insist on dragging our feet while a cloud of gloom swirls above us" and "America has always managed to invent a better tomorrow, even on its most difficult days" and "this is not about pretending". I say: Yes, America has vast strengths.
But we also have 73 million fascists, grifters, asshole racists, assholes, and easily-grifted morons whom the rest of us must carry on our backs as we try to make things better. It would be one thing if they just sat on their hands. But they are trying, actively, to break stuff that we must then fix. Sisyphus just had to roll the rock uphill. He did not have a raving violent madman on his back whom he had to carry while doing so:

Brad DeLong & Om Malik: Pairagraph: Is America in Decline? https://www.pairagraph.com/dialogue/fc2f8d46f10040d080d551c945e7a363/4:

Brad DeLong 2020-09-10: Life expectancy at birth in the United States today is 78.6 years. Life expectancy at birth in Japan today is 84.5; in Singapore, 85.1; in Switzerland, 84.3; France, 83.1; in Germany, 80.9. U.S. life expectancy is on a par with Poland, Tunisia, Cuba, Nicaragua, and Albania; below Peru, Colombia, Chile, Jordan, and Sri Lanka; and only a year greater than China...

...The United States currently has ~300 deaths per hundred million people per day from the coronavirus plague. The United Kingdom, Japan, Italy, Germany, and Canada each have less than 10. The United States has the amazing spectacle not just of Donald Trump as president, but of a huge number of American worthies—from Mitch McConnell in the Senate and Kevin McCarthy in the House, from Paul Ryan to Chris Christie, from Dean Baquet and Maureen Dowd and James Bennet to James Comey, all of them deciding that rather than do their proper jobs they would work to raise the odds that Trump would obtain and maintain power and increase the likelihood that he would do major damage in order to boost their personal positions in various ways.

As one of my friends from a not-rich part of East Asia says: "Students from my country come to the U.S. these days.
They see dirty cities, lousy infrastructure, and the political clown show on TV, and an insular people clinging to their guns and their gods who boast about how they are the greatest people in the world without knowing anything about what is going on outside. They come back and tell me: 'We have nothing to learn from those people! Why did you send me there?’"

This is a very different vibe from what we had twenty years ago, at the end of the Clinton-Gore years, when the U.S. was victorious in the Cold War, trying to build a freer, more integrated, more peaceful, and more prosperous world; riding the wave of the great internet boom; and had—for the first time in a generation—seen eight years in which typical Americans' wages and salaries were rising rapidly. And now it has been another generation since we have seen typical Americans' wages and salaries rise rapidly.

This is a very different vibe from 70 years ago, when we had the U.S. of the great post-WWII boom and the Marshall Plan that was also, finally, turning its attention to advancing Civil Rights.

This is a very different vibe from 100 years ago, when Leon Trotsky would talk about how he regretted leaving New York for Petrograd, for he was "leaving the furnace where the future was being forged.”

This is a very different vibe from 180 years ago, when Alexis de Tocqueville was preaching to one and all that everyone needed to closely examine America, for understanding it was the key to understanding the world's democratic future.

The only argument that America is not in decline is that other countries have worse problems. That may well be true. But that strikes me as too low a bar.

==== Om Malik 2020-10-07: It has been a strange year for the planet, and a particularly challenging one for America. It is as if the universe held up a giant mirror to the country and made us look directly at our most severe and festering troubles.
A virus has undone our broken healthcare system, made our upside-down economy even more fragile, and exacerbated our political and social divisions. Recognizing all that, readers might assume I am pessimistic about the prospects of our great country. But humans, unlike mirrors, can see beyond the surface. Even the most beautiful glimpse the ugliness in themselves. And the imperfect can recognize their own potential. Let me tell you my own story. Over a decade ago, I was an overworked reporter with a three-packs-a-day smoking habit. I didn’t work out and practiced atrocious eating habits. Not surprisingly, I ended up in the hospital fighting for my life. Forced to take a hard look at myself, I didn’t like what I saw. I made a commitment to turn things around — and I followed through. Our country and its citizens are at a similar point of reckoning. Given the historical arc of a nation’s life, we should not rush to judge a nation’s prospects based on a single (and so far, single-term) administration — or even a bungled response to one specific crisis. America is an ongoing project. As a society, we are fighting tooth and nail to protect our democratic traditions from attacks both internal and external. Is our performance perfect? No. But we are a long way from Belarus. In college, I read about the American industry’s decline and the offshoring of jobs to other countries. In the twilight of the last century, it seemed the end was near. And yet, we saw the birth of companies such as Amazon, Google, and Netflix. About a dozen of these large American companies have since become part of the global society and economy. As other American industries have in the past, the modern tech industry provides an ecosystem in which people throughout the world desire to participate and thrive. Even China, our country’s greatest economic rival, takes its technology cues (and intellectual property) from America. What was a little search engine now employs hundreds of thousands. 
This is also where Elon Musk, whether you like him or not, willed a commercial electric vehicle industry into existence through a combination of chutzpah, capital, and yes, government support. Tesla may sell fewer cars than its German rivals, but it has convinced the world to adopt this new approach to transportation. It is true that Tesla, Google, and Amazon are not perfect. Capitalism never is. Our planet is facing an arduous future due to our changing climate. The answers to the myriad problems this creates will emanate from American minds and in the same freethinking, entrepreneurial tradition that allowed Google to be born here. Though we certainly don’t have a monopoly on innovation, we have a track record of doing it better and more frequently than anywhere else. While it is fashionable to be bemused by America, nobody overseas should forget that this is where the necessary ingredients for global prosperity are most likely to be found. There is no shame in admitting that we are in need of self-improvement. We must begin by addressing the horror of this year, which has exposed a range of problems. I am confident that long-term and even permanent solutions to many of these problems exist. We can and will be better. Maybe it is my day job, or perhaps it is the delusion of an immigrant’s mind, but I believe the tradition of dreaming up something from nothing is still alive in this country. And that is what keeps me betting on America. ==== Brad DeLong 2020-10-07: When this was pitched to me, I jumped at the chance: It seemed to me that ranting about American decadence might get it off my chest and improve morale, which was low. And then when I learned that Om Malik was on the other side I was really excited. I have long thought that Om was great. That he was willing to take the non-decline side made me confident there were much stronger arguments for it than I had recognized. 
I looked forward to ending this debate heartened, encouraged, and much more than half-convinced. But after reading Om's response, I find myself worried that his heart is not in it. My précis of it would be: We must imagine that America is not in decline. Why? Because if we recognize that it is in decline we will lose all hope of being able to turn things around. It is an argument along the lines of Camus's "we must imagine Sisyphus happy". Why must we imagine Sisyphus happy? Because we are in his situation, and if we cannot imagine—i.e., "imagine" in the sense of "pretend", not in the sense of entering into his thought-processes—Sisyphus happy, we despair and cannot do our own work, pointless and futile as that own work may be. It is an argument along the lines of Antonio Gramsci, dying of mistreatment in Mussolini's jails, recognizing that the intellect told him to be pessimistic, but that he needed to overcome that with "optimism of the will”. Om's message is that America is not in decline because we might still "take a hard look at [our]sel[ves]... not like what [we] saw... ma[ke] a commitment to turn things around—and... follow... through". Perhaps we will. This is not helping my morale. The facts that America has astonishing land, abundant natural resources, and a long history of welcoming immigrants who feel cramped and constrained and unappreciated elsewhere—all these should make America's greatness a slam-dunk and America's future bright.
But right now, in the world in which we live, I read my friend Dan Wang writing "I’ve spent the past month in Shanghai, which I think is the best place in the world right now: It’s always been the most fun and livable city in China; and there has been no transmission of the virus since April, with restaurants, bars, and museums all open for months..." I think that America has 150,000 new coronavirus cases and 1,000 deaths a day, that that amount of virus risk puts a serious crimp in day-to-day activities, that there is no plan for dealing with it, and that at this caseload we are still... three years from likely herd immunity, which we will reach after 1,000,000 more deaths. It is certainly true we have a long way to fall. Things can still be very comfortable on the way down for a long time. "There is", Adam Smith said in 1776, "much ruin in a nation”. But I had hoped Om would change my mind. ==== Om Malik 2020-12-22: I have had a long time to noodle on Professor DeLong’s response to my continued optimism in America. He certainly didn’t share that hopefulness, and he may have missed the nuance of my argument. So, I will reiterate: If we recognize our problems, we can fix them. This is not about pretending. It is about taking the steps necessary for moving forward, which we will never do if we insist on dragging our feet while a cloud of gloom swirls above us. I’m happy to report that the forecast calls for better conditions ahead. In a matter of months, if not sooner, Professor DeLong will (I hope) be administered a vaccine that will prevent infection from a novel coronavirus. It may come from a company called Moderna, a venture-backed, American biotech company that is redefining the next frontier of medicine. Our handling of COVID-19 is emblematic of what makes America a unique place. Though we absolutely botched our response to the pandemic, this country has also produced one of the vaccines to fight it. 
Our country has many problems, and we are uniquely capable of solving them. In his response, the good professor points to a friend’s comments about Shanghai and how livable it feels. If that friend were a Uighur or a Mongolian, they might think differently. It’s a futuristic place, sure, but one with little room for intellectual freedom and debate. For example, Alibaba founder and CEO Jack Ma paid the price when he spoke bluntly about certain things the ruling party didn’t care to have discussed. The initial public offering of his extremely successful company, Ant Financial, was canceled. It’s also worth noting, as ProPublica recently pointed out, that China’s government-controlled Internet was behind the censorship of coronavirus-related information. Here at home, we currently have politicians making wild and embarrassing claims about our elections. I suppose in places like Shanghai, where voting for the country’s leader isn’t an option, people are spared such unpleasantness — but that hardly seems preferable. The sheer number of Americans who participated in our November election should be a source of national pride and renewed optimism. Soon, we will transition to a new administration. Vaccines will be administered. We will move forward. But we must not forget the failures of 2020 or ignore our many other issues. America needs to rebuild its infrastructure, prepare for a changed climate, address its healthcare crisis, and take a hard look at its education system. Neither self-flagellation nor looking enviously at other countries will solve these problems. Many entrepreneurs I get to interact with are working on solutions. They acknowledge our many shortcomings, rather than wallowing in them, and then they move on to designing and implementing better policies. America has always managed to invent a better tomorrow, even on its most difficult days. Reality is complex. Where there is struggle, there can also be transcendence. 
In order to experience the latter, we must first convince ourselves that it is possible. .#americanexceptionalism #highlighted #orangehairedbaboons #politicaleconomy #2020-12-22 Briefly Noted for 2020-12-22 Matthew Yglesias: The Real Economic Challenge in 2021 https://www.slowboring.com/p/the-real-economic-challenge-in-2021: ‘Back in 2018, there were a lot of articles with headlines like “6 reasons that pay has lagged behind US job growth” and “7 reasons why wage growth is so slow.” In retrospect, this wasn’t that mysterious. The labor market recovery had simply been very slow and 2018 turned out to be a year of accelerating wage growth. Then in 2019, things accelerated further. But the existence of articles puzzling over slow pre-2018 wage growth underscores the dangers of a sluggish recovery. Not only does sluggishness directly reduce wages, it generates complicated explanations for the sluggishness which distract policy attention from the urgent need to simply keep on keeping on with job creation… Duncan Black: The Good Doctor https://www.eschatonblog.com/2020/12/the-good-doctor.html: ‘Birx has had some pals in the media all along, desperate to keep her reputation intact, so this won't hurt at all: "WASHINGTON (AP) — As COVID-19 cases skyrocketed before the Thanksgiving holiday weekend, Dr. Deborah Birx, coordinator of the White House coronavirus response, warned Americans to “be vigilant” and limit celebrations to “your immediate household.” For many Americans that guidance has been difficult to abide, including for Birx herself. The day after Thanksgiving, she traveled to one of her vacation properties on Fenwick Island in Delaware. She was accompanied by three generations of her family from two households. Birx, her husband Paige Reffe, a daughter, son-in-law and two young grandchildren were present..." 
Lives are complicated, but the people who rule us should at least try to pretend to set an example… Tim Miller: This Is Your Brain on Newsmax https://thebulwark.com/this-is-your-brain-on-newsmax/: ‘I would guess with a high level of confidence that all of these gentlemen know that Donald Trump lost. Spicer said as much on November 5 before Newsmax realized just how much juice they could get out of the scam. Ruddy openly told the New Yorker’s Isaac Chotiner that he saw a business opportunity in providing wall-to-wall election fraud fanfic. What these characters are doing is exploiting Trump Nation’s need to believe that their great, nectarine idol is unbreakable and that the only way he could “lose” is if people whom they hate—the Deep State, Big Tech, Antifa, the media, black people—are conspiring against him. So here is the dangerous story they are being told—minute by agonizing minute: Monday, November 30, 11:20 a.m.—National Report: For reference, I am working from bed and live streaming Newsmax via the YouTube TV app. I am armed only with my computer and a pour over coffee in an Ellen Show mug. I’m bracing for pain. First up it’s Trump campaign lawyers, Joe diGenova and Victoria Toensing, together in what appears to be their fancy Washington, D.C. home (Drain the Swamp!). They are praising Jared Kushner’s Middle East genius. The first commercial I see is a Newsmax promo that has Donald Trump saying “Newsmax, you like Newsmax, I like it too” twice in 10 seconds. The next ad is Pat Boone pushing silver. I did not know that Pat Boone was still alive… Continue reading "Briefly Noted for 2020-12-22" » Briefly Noted for 2020-12-21 NASA: The ‘Great’ Conjunction of Jupiter and Saturn https://www.nasa.gov/feature/the-great-conjunction-of-jupiter-and-saturn: ‘What makes this year’s spectacle so rare, then? 
It’s been nearly 400 years since the planets passed this close to each other in the sky, and nearly 800 years since the alignment of Saturn and Jupiter occurred at night… Origins of the Drill Sergeant trope in western literature, and in history: William Shakespeare: Henry V, Act III, Scene 6 https://www.opensourceshakespeare.org/views/plays/play_view.php?WorkID=henry5&Act=3&Scene=6&Scope=scene&LineHighlight=1554#1554... Wikimedia Commons: File:Dishing-the-Whigs-1867.jpeg https://commons.wikimedia.org/wiki/File:Dishing-the-Whigs-1867.jpeg Wikipedia: Anton Cermak https://en.wikipedia.org/wiki/Anton_Cermak: ‘44th Mayor of Chicago. In office: April 7, 1931 – March 6, 1933… Clarence Darrow: The Story of My Life http://clarkcunningham.org/PR/Darrow-Strike.htm: ‘The Railroad Strike... Luke A. L. Reynolds: Who Owned Waterloo? Wellington’s Veterans and the Battle for Relevance https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=4392&context=gc_etds Jason Furman & Lawrence Summers: A Reconsideration of Fiscal Policy in the Era of Low Interest Rates https://www.piie.com/system/files/documents/furman-summers2020-12-01paper.pdf Aristotle: Politics http://classics.mit.edu/Aristotle/politics.1.one.html: ‘Book I… Jules Verne & Michel Verne: In the Year 2889 http://www.gutenberg.org/files/19362/19362-h/19362-h.htm Wikipedia: In the Year 2889 (Short Story) https://en.wikipedia.org/wiki/In_the_Year_2889_(short_story) Christine McCloud: How To Use A Shuttle On A Loom https://www.youtube.com/watch?v=7O98vJ8VEF4: ‘How To Use A Shuttle On A Loom… Anton Howes: Is Innovation in Human Nature? https://www.antonhowes.com/blog/is-innovation-in-human-nature: ‘John Kay’s flying shuttle... an improvement to the loom, which radically increased the productivity of weaving.... Weavers would lift every other warp thread and pass the shuttle from hand to hand, hence passing the weft under the warp threads that were lifted, and over the ones that were not lifted. 
Under and over, under and over. Kay’s innovation was to use two wooden boxes on either side to catch the shuttle. And he attached a string, with a little handle called a picker, so that the shuttle could be jerked across the loom, at great speed. Here’s a video of it in action https://www.youtube.com/watch?v=7O98vJ8VEF4. Kay’s innovation was extraordinary in its simplicity. As the inventor Bennet Woodcroft put it, weaving with an ordinary shuttle had been “performed for upwards of five thousand years, by millions of skilled workmen, without any improvement being made to expedite the operation, until the year 1733”. All Kay added was some wood and some string. And he applied it to weaving wool, which had been England’s main industry since the middle ages. He had no special skill, he required no special understanding of science for it, and he faced no special incentive to do it… Continue reading "Briefly Noted for 2020-12-21" » Smith: Why I'm so Excited About Solar & Batteries—Noted Noah Smith: Why I'm so Excited About Solar & Batteries https://noahpinion.substack.com/p/why-im-so-excited-about-solar-and: ‘In the 19th century we switched to coal... in the 20th century we upgraded to oil.... After World War 2, a global extraction regime and price controls allowed us to keep cheap oil flowing. That ended with the Oil Shocks of the 70s. And though oil became cheaper again in the 80s and 90s, it never attained its former lows, or its low volatility. Then in the 00s it got expensive again.... We didn’t get anything better than oil during this time.... More expensive energy makes physical innovation harder in every way.... This stagnation in energy technology almost certainly contributed to the productivity slowdown of the 1970s.... Why didn’t bits fill the gap?... IT did drive the re-acceleration of productivity that began in the late 80s and continued through the early 00s.... But around 2005... that productivity growth faded.... 
Some have argued that digital services are substantially undervalued in our economic production statistics.... Physical technology is less “skill-biased” than IT, meaning that pretty much anyone can be a factory worker but only a few people can use computers productively and effectively... [or] IT simply touches less of our lives than energy does.... “Bits” innovation sometimes drives fast productivity growth, and sometimes doesn’t.… The cost declines in solar and batteries — and to a lesser extent, in wind and other storage technologies—comprise a true technological revolution.... And there’s no end in sight to this revolution. New fundamental advances like solid state lithium-ion batteries and next-generation solar cells seem within reach, which will kick off another virtuous cycle of deployment, learning curves, and cost decreases… Continue reading "Smith: Why I'm so Excited About Solar & Batteries—Noted" » Phipps: View of Hitler as of 1935—Noted British Ambassador to Germany Eric Phipps looking back after two years at the extraordinary successes inside Germany and in the opinion of Germans of Hitler’s first two years—saving Germany from Versailles, from domination by the Allies, from the Great Depression, and from his own “gangsters” in the form of the SA: Eric Phipps: Diary https://github.com/braddelong/public-files/blob/master/readings/book-phipps-diary.pdf 1935-04-01: ‘Over two years have now elapsed since the electorate of this country, stampeded by the Reichstag fire, voted for the abolition of the Parliamentary régime and the establishment of a National Socialist dictatorship... During these two years, Adolf Hitler, without losing the loyalty of his old followers to any alarming extent, has won over the great mass of the Opposition to himself and his policy both internal and external. He has achieved this by accomplishing in the opinion of the masses not one but several miracles. 
In the first place, he has obtained work (or what amounts to work so far as the individual is concerned) for 3 million people. Secondly, he has torn up Part V of the Treaty of Versailles under the very noses of Germany’s former enemies. And thirdly, he has, as it were, liberated Germany from the clutches of his own National Socialist gangsters who threatened at one time to make life a purgatory for all but a privileged caste. The return to more normal conditions during the last six months has indeed been so rapid and so marked that the great bulk of Hitler’s one-time opponents are now, to say the least of it, reconciled to his rule if not to National Socialism. Furthermore, it is now dawning upon friends and enemies alike that a benevolent despotism has immeasurable advantages over the Parliamentary system in the case of a defeated country. Not only has it an advantage over the travesty of a parliamentary system known as the Weimar Republic but many intelligent Germans are now of opinion that it is preferable to the French and British systems of representative government. It would certainly seem to an unprejudiced observer that a country which is anxious to free itself from the shackles of an oppressive treaty has better prospects if it is prepared to accept a restriction of individual liberty and a concentration of all powers in one hand, provided of course the hand be firm and wise. In the case of Hitler no doubt exists in the German mind that the country’s choice has been fully justified by the history of the last two years…. For years before he came into power Hitler doggedly refused to give any explanation of his mysterious programme for coping with unemployment. Why, he asked should he betray his panacea to his rivals? The mystery is now cleared up and it is evident that Hitler was well-advised to keep his secret to himself. 
As we now realise, his programme consists not merely of public works of the normal kind but of the very important work of rearming Germany. Today military contracts and contracts for public works are almost indistinguishable. The provision for motor roads which serve equally as military roads is a case in point. In addition the expansion of the army and air force has absorbed large masses of men from the labour market. The simplicity of many of Hitler’s basic ideas savours of genius to the public mind. In regard to the rearmament of Germany and her return to the field of international politics on an equal footing, neither the Army, the intelligentsia nor the Ministry for Foreign Affairs conceived that the time was ripe for “calling the allied bluff”. Any attempt on Germany’s part to challenge the Versailles Treaty would lead, they firmly believed, to intervention and possibly to the occupation of the Rhineland. Any parliamentary government in this country would have courted disaster in the Reichstag had it embarked on Hitler’s policy of flouting the Treaty. Even in Hitler’s case the adventure was not devoid of grave personal risk. There was always the chance during the early stages that the signatories of Versailles would pull themselves together and veto German rearmament by the threat of a preventive war. In that case the Hitler régime would have come to an end and Hitler and his chief supporters would have had to choose between suicide and exile. Now that Hitler has put his bold plan into execution his influence is highest in those very quarters where it was at first regarded with most suspicion, namely the Reichswehr Higher Command, the Ministry for Foreign Affairs, permanent officialdom and responsible circles generally. The Germans are not disposed to minimise their difficulties. But they regard Herr Hitler as a prophet and the majority expect with calm obedience that he will find the way to the promised land. 
He, on his side, is more convinced than ever that fate has chosen him as its instrument just as it chose Frederick the Great for the regeneration of the German people. In truth, can we wonder at his conviction? His foreign policy since my arrival at Berlin has been the reverse of that of a “good European”; it has been a crescendo of violence and has hitherto failed to evoke any stronger reaction on the part of the ex-allies than some notes of platonic protest. Having helped himself, in defiance of the Treaty, on land and in the air, Herr Hitler now suggests, with grim humour, that the British Empire may some day be grateful for the protection of the fleet that he intends to build. The size of that fleet at present seems uncertain, but if Herr Hitler adheres to his intention of attaining naval parity with France he will eventually possess a fleet half the size of our own concentrated in an infinitesimal fraction of the waters over which ours is called upon to sail. So far as I can see, only economics and finance can be expected to counter these proud plans, but economics and finance have in the past proved so elastic as to defy all expert prophecy. Stalin, on the other hand, when he pointed at “that little island” to Mr. Eden on the map, seemed to think that we alone could finally prevent the hegemony of Germany by withholding from her certain raw materials without which she would be unable to continue her present orgy of expenditure on armaments. I do not know whether this course be feasible or not. 
In any case let us hope that our pacifists at home may at length realise that the rapidly growing monster of German militarism will not be placated by mere cooings, but will only be restrained from recourse to its idolised “ultima ratio” by the knowledge that the Powers who desire peace are also strong enough to enforce it… .#noted #2020-12-21 Phipps: View of Hitler as of 1933—Noted Here we have Britain's ambassador to Germany writing in 1933 that Britain needs to take Hitler seriously but not literally–for if it took him literally it would have no logical choice but ‘to adopt the policy of a “preventive” war’. Food for thought for modern times: Trump, Bolsonaro, Modi, and Johnson need definitely to be taken literally, and it is only acceptable to not take them seriously if you are dead certain not only of their incompetence but of their inability to pass the baton to anyone both competent and ruthless: Eric Phipps: Diary https://github.com/braddelong/public-files/blob/master/readings/book-phipps-diary.pdf 1933-11-21: ‘In contemplating the present situation arising out of an electoral campaign waged against a practically non-existent adversary and conducted with propaganda methods of unexampled violence and mendacity, one is tempted to put certain far-reaching questions regarding the future of the Hitler movement and the future policy of Hitler. It has been asked, for instance, whether the movement is not a convenient screen behind which the old Prussian Nationalism is weaving its dark web. This may well be, but if so the screen itself is singularly inefficacious and fails to conceal the fact that the youth of Germany is being reared in a purely militarist spirit... 
...When I told the Chancellor that militarism seemed to me to be the Leitmotiv of this country, whereas elsewhere it was merely an incident, that a spark might suffice to kindle the militarist spirit into a war-like flame, I might have added that the above-mentioned campaign of lies, depicting Germany as the one innocent lamb among a pack of wolves, was not calculated to inculcate in German youth that spirit of peace and understanding advocated so inappropriately and so loudly after Germany’s banging of the Geneva door. As regards Hitler, I doubt whether he himself realises how far he is at present the author of Mein Kampf, the full-blown blood-and-thunder book as originally published in Germany, that is to say, and not the recent pale abridged and bowdlerised edition which has been published by his direction and translated into English. Who can tell how far that Hitler resembles the present German Chancellor who has been making the welkin ring with shouts of peace? In some respects it is certain that he remains true to type for he has not varied over the Jewish question or Austria since writing the book; but it would be too simple and even perhaps dangerous to assume that he maintains intact all the views held and expressed with such incredible violence in a work written in a Bavarian prison 10 years ago, though, of course, those views cannot be left out of consideration in any endeavour to gauge the Chancellor’s intentions on any given subject. His hatred of France, Germany’s deadliest enemy, for instance, is written in flaming letters, and certainly seems difficult to reconcile with his recent attempts to wheedle her into a tête-à-tête conversation. Again, the recent no-force agreement with Poland is undoubtedly regarded by my French colleague as an attempt to drive a wedge between that country and France. Yet, though this may have entered into Hitler’s calculations, the fact of German-Polish apaisement should nevertheless facilitate France and Germany. 
In this connection General von Blomberg’s remarks to me are of interest. To revert to Hitler: we cannot regard him solely as the author of Mein Kampf for in such case we should logically be bound to adopt the policy of a “preventive” war, nor can we afford to ignore him. Would it not therefore be advisable soon to try to bind that damnably dynamic man? To bind him, that is, by an agreement bearing his signature freely and proudly given? By some odd kink in his mental make-up he might even feel impelled to honour it. His signature under even a not altogether satisfactory agreement, only partially agreeable to Great Britain and France and not too distasteful to Italy might prevent for a time any further German shots among the International ducks. His signature, moreover, would bind all Germany like no other Germans in all her past. Years might then pass and even Hitler might grow old, and reason might come to this side and fear leave it. New problems would present themselves and old problems, including disarmament, might perhaps have solved themselves through the mere passage of time, and without those Herculean and hitherto vain efforts to satisfy German “honour” and allay French fear… .#noted #2020-12-21 Briefly Noted for 2020-12-18 Josiah Ober: Political Dissent in Democratic Athens: Intellectual Critics of Popular Rule. 
https://www.amazon.com/Political-Dissent-Democratic-Athens-Intellectual-ebook/dp/B00EM2W92E/ Josiah Ober: Mass & Elite in Democratic Athens: Rhetoric, Ideology, & the Power of the People https://www.amazon.com/dp/0691028648 Daily Beast: Pence Plans to Confirm Trump’s Defeat Then Flee the Country, Says Report https://www.thedailybeast.com/pence-plans-to-confirm-trumps-defeat-then-flee-the-country-says-report… James Politi & Colby Smith: Powell Preserves His Dovish Credentials at Tricky Moment for Fed https://www.ft.com/content/2a32037d-612d-43bc-b472-ba124bddf47d Tyler Cowen: The Ideological Shift of the Libertarian Movement on Pandemics https://marginalrevolution.com/marginalrevolution/2020/12/the-ideological-shift-of-the-libertarian-movement-on-pandemics.html Minxin Pei: Totalitarianism’s Long Dark Shadow Over China https://www.ned.org/events/lipset-lecture-minxin-pei-totalitarianism-china/ Richard Setterston & al.: Living on the Edge: An American Generation’s Journey through the Twentieth Century https://uchicago.app.box.com/s/82xshvgproh5xf9sf0np8qby41wszehq/file/722218825412 Amazon.com: Elgato Green Screen https://www.amazon.com/dp/B0743Z892W/: ‘Collapsible chroma key panel for background removal with auto-locking frame, wrinkle-resistant chroma-green fabric, aluminum hard case, ultra-quick setup and breakdown: Computers & Accessories… Amazon.com: Elaro Pop-Up Retractable Green Screen (Self-Contained Case) https://www.amazon.com/gp/product/B07QWXVN9J/ ==== Plus: Charles Sykes: Can We Quit Trump? https://morningshots.thebulwark.com/p/can-we-quit-trump: ‘For the last four years, Vichy Republicans have rationalized their support by insisting that we ignore the tweets and focus on the policies and “accomplishments”. But in his post-presidency there will be no wins, just the rage, narcissism, and tweets.... And that’s all there will be, except for the possible indictments, trials, and bankruptcies. That’s why stoking outrage is so crucial for his post presidency. 
The stab-in-the-back stolen election lie is the wind beneath his wings; grievance is his only real asset. That may be enough to keep his base riled up. But there is also the possibility that rather than consolidate his control of the GOP, he will marginalize himself by continuing to embrace the most deranged elements of his own MAGAverse. His base of operations may drift from Fox News to OAN and his appeal from populism to raw crackpottery… Jonathan V. Last: 'McMaster believed that power in the Trump administration derived from his job https://thetriad.thebulwark.com/p/the-nature-of-power. Sarah Huckabee Sanders realized that power in the Trump administration derived from having the president watch you defend him on TV. And further, SHS seems to have figured out that she could parlay that power into power in another context. Watch and see if she becomes governor of Arkansas purely on the basis of being seen as one of the most loyal Trumpists in the country.... Mitch McConnell is entering into a power struggle with Donald Trump.... Mitch declared Joe Biden president-elect yesterday. And good for him, I guess. Though I’m not sure people should get a ton of credit for admitting that the sky is blue after spending five weeks insisting that it was red. McConnell’s calculation is that power derives from holding elected office because that confers the ability to pass legislation.... Trump believes that the real source of power lies further upstream and derives from the ability to command—totally—a large bloc of voters within a single party. Because... it grants him ownership of the Republican party.... McConnell’s view looks like the safer bet right now, because the next time that large bloc of voters gets to exercise their power is two years from now. But my money’s on Trump here.... And then there’s the January 6 vote. McConnell has pushed a lot of chips into the pot by saying that no Republican Senator should force a vote on the Electoral College.... 
But the dynamics of this are all in the other direction. There will be at least one member of the House who objects and demands a vote, which means that the Senate Republicans will effectively be facing a yes-no vote on supporting Trump, since it will only require one senator to also object. Do you believe that every single Republican senator will be willing to be seen as effectively saying “no” to Trump on what will basically function as a roll-call vote?… Continue reading "Briefly Noted for 2020-12-18" » Simon Schama: Why John le Carré Is a Writer of Substance Simon Schama: What Makes John le Carré a Writer of Substance https://www.ft.com/content/04df988d-9b09-4e6a-b7d8-70b1a5e654dc: ‘Someone, sometime, had to translate Dean Acheson’s famous 1962 characterisation of a Britain that had “lost an empire but has not yet found a role” into literature. But until le Carré came along, no writer had nailed the toxic combination of bad faith and blundering, the confusion of tactical cynicism with strategic wisdom, with such lethal accuracy.... His writing did... have some precedents.... He belonged to the same “lower-upper-middle-class” as George Orwell.... Like Orwell... le Carré had a pitch-perfect ear for the disingenuous hypocrisies sustaining those who mistook “Getting Away with It” for national purpose. Le Carré’s other literary pedigree... came from Anthony Trollope: the shrewd sense that institutions had collective personalities and psychologies, as if they were extended families. As such, they were the theatre of deadly, high-stakes dramas of loyalty and betrayal…. The scene at the beginning of An Honourable Schoolboy in the Hong Kong Foreign Correspondents Club, where “a score of journalists, mainly from former British colonies . . . fooled and drank in a mood of violent idleness, a chorus without a hero” is one of the great set pieces of le Carré writing. 
At its centre is one of his Dickens-Modern creations: the ancient Aussie, “old Craw” based on someone le Carré knew from that field trip to south Asia, and “who had shaken more sand out of his shorts than most of them would walk over”… ============ John le Carré: The Honourable Schoolboy https://github.com/braddelong/public-files/blob/master/readings/book-le-carre-schoolboy.pdf: 'Perhaps a more realistic point of departure is a certain typhoon Saturday in mid-1974, three o’clock in the afternoon, when Hong Kong lay battened down waiting for the next onslaught. In the bar of the Foreign Correspondents’ Club, a score of journalists, mainly from former British colonies - Australian, Canadian, American - fooled and drank in a mood of violent idleness, a chorus without a hero. Thirteen floors below them, the old trams and double deckers were caked in the mud-brown sweat of building dust and smuts from the chimney-stacks in Kowloon. The tiny ponds outside the highrise hotels prickled with slow, subversive rain. And in the men’s room, which provided the Club’s best view of the harbour, young Luke the Californian was ducking his face into the handbasin, washing the blood from his mouth... Continue reading "Simon Schama: Why John le Carré Is a Writer of Substance" » Randall Munroe’s 2020 Election Map—Noted Randall Munroe is an international treasure. This is the best 2020 election map that I have yet seen. It combines geographic fidelity with information accuracy and density. You will learn a lot not just about what and where Biden’s edge was in the 2020 election, but also about who Americans are… Randall Munroe: 2020 Election Map https://twitter.com/xkcd/status/1339341149488746498: ‘http://xkcd.com/2399 ... .#noted #2020-12-17 Briefly Noted for 2020-12-17 Jonathan V. Last: Everyone Trump Touches Dies: The List https://thetriad.thebulwark.com/p/everyone-trump-touches-dies-the-list Edward B. 
Foley (2019): Preparing for a Disputed Presidential Election: An Exercise in Election Risk Assessment and Management https://lawecommons.luc.edu/cgi/viewcontent.cgi?article=2719&context=luclj Jason Snell: ‘I apologize, I forgot to add a label to my Bezos Chart.’ https://twitter.com/jsnell/status/481863414180896769... Simon Schama: What Makes John le Carré a Writer of Substance https://www.ft.com/content/04df988d-9b09-4e6a-b7d8-70b1a5e654dc Clove & Hoof: Oakland Butchery & Restaurant https://cloveandhoofoakland.com/ Sascha Segan: Qualcomm Is a Little Too Unbothered by Apple's M1 Macs https://www.pcmag.com/opinions/qualcomm-is-a-little-too-unbothered-by-apples-m1-macs: ‘Qualcomm execs brushed off the superior performance of Apple's new ARM-based Macs. They shouldn’t… John Gruber: M1 Macs: Truth & Truthiness https://daringfireball.net/2020/12/m1_macs_truth_and_truthiness: ‘M1 Macs embarrass all other PCs—all Intel-based Macs, including automobile-priced Mac Pros, and every single machine running Windows or Linux. 
Those machines are just standing around in their underwear now because the M1 stole all their pants… Nadim Kobeissi: On the Apple Silicon M1 MacBook Pro https://nadim.computer/posts/2020-11-26-macbookm1.html: ‘Five nanometer process, an ARMv8-AArch64 instruction set, unified memory, separate performance and efficiency cores and a ton of accompanying hardware offering acceleration for video decoding, cryptographic operations and more. There’s also a bunch of dedicated silicon for GPU cores that have been shown to rival the Nvidia GTX 1060. This is all on an integrated SoC that consumes a maximum of 15 watts and that generally runs on far less. This is all in a context where Intel is shipping 45W and 65W processors inside laptops, built on 10-14nm transistors, with a dinosaur-age x64 instruction set and integrated graphics that are certainly not even close to competing with a dedicated GTX 1060… Continue reading "Briefly Noted for 2020-12-17" » Briefly Noted for 2020-12-13 Supreme Court: 'The State of Texas’s motion for leave to file a bill of complaint is denied for lack of standing under Article III of the Constitution’ https://www.supremecourt.gov/orders/courtorders/121120zr_p860.pdf... The Hellenistic Age Podcast: Syrian Nights, Macedonian Dreams https://hellenisticagepodcast.wordpress.com/2020/11/26/055-the-seleucid-empire-syrian-nights-macedonian-dreams/ Melissa: My Singing Vegetables https://www.mysingingvegetables.com/ Robert J. Gordon: The Rise & Fall of American Growth: 'The year 1870 represented modern America at dawn. Over the subsequent six decades, every aspect of life experienced a revolution. By 1929, urban America was electrified and almost every urban dwelling was networked, connected to the outside world with electricity, natural gas, telephone, clean running water, and sewers. By 1929, the horse had almost vanished from urban streets, and the ratio of motor vehicles to the number of households reached 90 percent. 
By 1929, the household could enjoy entertainment options that were beyond the 1870 imagination, including phonograph music, radio, and motion pictures exhibited in ornate movie palaces… Noah Smith: Why I'm so Excited About Solar & Batteries https://noahpinion.substack.com/p/why-im-so-excited-about-solar-and: ‘In the 19th century we switched to coal... in the 20th century we upgraded to oil.... After World War 2, a global extraction regime and price controls allowed us to keep cheap oil flowing. That ended with the Oil Shocks of the 70s. And though oil became cheaper again in the 80s and 90s, it never attained its former lows, or its low volatility. Then in the 00s it got expensive again.... We didn’t get anything better than oil during this time.... More expensive energy makes physical innovation harder in every way.... This stagnation in energy technology almost certainly contributed to the productivity slowdown of the 1970s.... Why didn’t bits fill the gap?... IT did drive the re-acceleration of productivity that began in the late 80s and continued through the early 00s.... But around 2005... that productivity growth faded.... Some have argued that digital services are substantially undervalued in our economic production statistics.... Physical technology is less “skill-biased” than IT, meaning that pretty much anyone can be a factory worker but only a few people can use computers productively and effectively... [or] IT simply touches less of our lives than energy does.... “Bits” innovation sometimes drives fast productivity growth, and sometimes doesn’t.… The cost declines in solar and batteries — and to a lesser extent, in wind and other storage technologies—comprise a true technological revolution.... And there’s no end in sight to this revolution. 
New fundamental advances like solid state lithium-ion batteries and next-generation solar cells seem within reach, which will kick off another virtuous cycle of deployment, learning curves, and cost decreases… .#brieflynoted #noted #2020-12-12 Briefly Noted for 2020-12-12 SIEPR Associate's Meeting with Josh Bolten https://www.youtube.com/watch?v=MVLqRr-PlhM&feature=youtu.be Matthew Yglesias: The Real History of Race & the New Deal https://www.slowboring.com/p/new-deal Wikipedia: Martha Gellhorn https://en.wikipedia.org/wiki/Martha_Gellhorn Vowel https://www.vowel.com/ Apple: AirPods Max https://www.apple.com/airpods-max/ Filipe Espósito: iPad Air 4 Benchmark Results https://9to5mac.com/2020/10/04/ipad-air-4-benchmark-results-emerge-on-the-web-as-apple-reportedly-prepares-a14-apple-tv/: ‘First observed by the Twitter user Ice universe, the Geekbench test was performed on an iPad Air 4 running iOS 14.0.1. The Geekbench score reports 1583 for single-core and 4198 for multi-core, compared to 1112 for single-core and 2832 for multi-core of the A12 Bionic chip that powers the previous iPad Air 3. That means the A14 chip has 42% better performance than the A12 chip in single-core and 48% better in multi-core — which can be considered a great improvement for those upgrading from an iPad Air 3. Compared to the iPhone 11’s A13 Bionic chip, the A14 chip is about 20% faster in single-core (1327) and 28% faster in multi-core (3286)… Jessica Price: Do Not Be Daunted...: https://twitter.com/Delafina777/status/1024317315620294657: '"Do not be daunted by the enormity of the world's grief. Do justly, now. Love mercy, now. Walk humbly, now. You are not obligated to complete the work. But neither are you free to abandon it...". The text it's referencing is from Pirkei Avot... part of the Mishnah.... 
Here's the quote that that meme is referencing (Pirkei Avot 2:15-16): "Rabbi Tarfon said: 'The day is short and the work is much, and the workers are lazy and the reward is great, and the Master of the house is pressing'. He used to say: 'It is not your responsibility to finish the work, but neither are you free to desist from it...'" While it's a translation that definitely isn't word-for-word, it's actually a very good interpretive translation and completely in keeping with the text.... The "do justly, now" triad is from Micah 6:8. The rabbis of the Mishnah and Talmud assumed intimate familiarity with the entire Tanakh/Hebrew Bible, so they often make oblique references to verses and assume the reader will know the verse they're hinting at. The passage from Micah is one of the most famous elucidations of what the work of repairing the world, tikkun olam, consists of. So Shapiro adding it here isn't really an interpretive stretch--it's more just making the implicit explicit. And that beautiful opening? "Do not be daunted by the enormity of the world's grief"? It's definitely a bit of poetic license, but I'd say that's the point of "the day is short and the work is much”… Olga San Miguel-Valderrama (2009): Community Mothers & Flower Workers in Colombia https://github.com/braddelong/public-files/blob/master/readings/article-sanmiguel-2009-colombia.pdf Continue reading "Briefly Noted for 2020-12-12" » Briefly Noted for 2020-12-11 Casey Newton: How Microsoft crushed Slack https://www.platformer.news/p/how-microsoft-crushed-slack: ‘And why the era of worker-centered work tools may be over… George Orwell: Nineteen Eighty-Four http://gutenberg.net.au/ebooks01/0100021.txt DeLong COVID Dashboard https://research.stlouisfed.org/useraccount/dashboard/56322 Ellora Derenoncourt & Claire Montialoux: Minimum Wages & Racial Inequality http://www.clairemontialoux.com/files/DM2020.pdf: ‘The earnings difference between white and black workers fell dramatically in the United States in the late 1960s and early 1970s.... The expansio... in this decline. The 1966 Fair Labor Standards Act extended federal minimum wage coverage to agriculture, restaurants, nursing homes, and other services which were previously uncovered and where nearly a third of black workers were employed.... Earnings rose sharply for workers in the newly covered industries. The impact was nearly twice as large for black workers as for white. Within treated industries, the racial gap adjusted for observables fell from 25 log points pre-reform to zero afterwards. We can rule out significant dis-employment effects for black workers....
The 1967 extension of the minimum wage can explain more than 20% of the reduction in the racial earnings and income gap during the Civil Rights Era… Jonah Goldberg: Screwtape Went Down to Georgia https://gfile.thedispatch.com/p/screwtape-went-down-to-georgia: ‘A certain subset of the right has convinced itself that the Democrats aren’t just wrong or even bad, but that they are singularly evil and lethally dangerous enemies of America, hell-bent on destroying all that is sacred by imposing godless socialism on us all. I’ll skip the usual structural reasons for this development—the Big Sort, media balkanization, and, yes, the behavior of some Democrats—and focus instead on the part relevant to my point. The president of the United States said this sort of thing a lot.... The president is a deeply flawed and crude person with a thumbless grasp of the Constitution, the duties of his office, and the most rudimentary tenets of religion and traditional morality. Because this is so incandescently obvious, casting the Democrats as an existential threat to All We Hold Dear makes it a lot easier to overlook these things. Hence all of that “He’s our King David” gibberish from the early days of the Trump presidency. When you’re in a Manichean existential battle with the unholy Forces of Darkness, it’s much easier to overlook the adultery, greed, deceit, and corruption of your anointed champion. Now, normally I’m not one to leap to the defense of Democrats, but I think offering the faint praise that they are not all evil incarnate is literally the least I can do.... For nearly five years now, it has been obvious that Trump was unfit for the job and the arguments marshaled in his defense were cynical rationalizations that, for some, eventually mutated into sincerely held delusions.... For a lot of otherwise decent politicians and commentators, doing the right thing was just too damn hard. 
At every stage, they fed the Trumpian alligator another piece of themselves and said “This much, but no more.” But now all that is left are stumps, and it’s hard to walk in the right direction on stumps or hold your hands up to shout, “Stop!” when you have no hands.... I understand that this all sounds awfully self-righteous. But I’ll tell you, I feel like I deserve my gloating. I’m not alone in my right to it, but I deserve my share. I’ve been saying “don’t do this” for five years and I’ve been mocked and shunned for it. So forgive me if I enjoy my I-told-you-so moment. Or don’t forgive me. I’m used to it… Lisa Bryan: Hollandaise Sauce (Easy and No-Fail) https://downshiftology.com/recipes/hollandaise-sauce/: ‘The key to getting the consistency right all comes down to the hot melted butter. This recipe emulsifies butter into an egg yolk and lemon juice mixture. So you want to make sure you’re streaming in butter that’s hot enough (just melted won’t do). But in the case that your sauce does break and becomes a speckled mess, don’t fret. Below are two methods to try that will help bring your sauce back to life. Blend 1-2 tablespoons of boiling hot water: As you’re blending, slowly add in the hot water and blend until the consistency is right. Add an extra egg yolk: While the blender is on, add an extra egg yolk with a teaspoon of hot water into the blender and blend until it becomes perfectly creamy… Jonah Goldberg: As outrageous as his effort to delegitimize the election is—and it is very outrageous—that outrage pales like a lit candle next to the noonday summer sun when you compare it to an effort to literally overturn the popular and Electoral College vote and steal the election. But because that outcome is so unlikely, and Trump’s effort to pull it off is so comically inept, people are focusing on the more likely outrage rather than the more outrageous outrage. This was the plan.... His goal was always to steal the election if he didn’t win....
He told all of his voters to vote on Election Day. He expected this would give him a “mirage” lead that night, and then, because he had already established the illegitimacy of mail-in ballots, he could pretend to be justified in proclaiming victory on Election Night. Sure, there would be lawsuits and the like later, but Trump would have momentum on his side. He even telegraphed over and over that he expected the Supreme Court to come to his rescue.... That was his primary explanation for why he thought it was important to get Amy Coney Barrett confirmed. But as Grossman points out, there was just one problem: Trump wasn’t actually leading on Election Night.... This, by the way, explains why Trump World was so very, very, very, angry about Fox’s decision to call Arizona.... The Arizona call ruined the pretext. If Pennsylvania had been the tipping point, they thought they could get the election thrown to the court. But the Arizona call combined with the undeclared result in Georgia preempted that… Continue reading "Briefly Noted for 2020-12-11" » Unemployment Insurance Claims Signal Renewed Recession The Macro News: Th 2020-12-10: Starting last June, every week the US economy got better—at least, the number of people continuing to claim unemployment insurance fell when we calculate it on a seasonally adjusted basis. Some of this was people who had been receiving unemployment insurance finding jobs. Some of this was people reaching the end of the benefits to which they were entitled. Nevertheless, fewer people were flowing into the pool of those receiving unemployment insurance payments than were getting out of it. But this past week's numbers are a sign that that period has come to an end. While one frost does not make a winter, both the seasonally adjusted number of people continuing to receive and the number of people newly claiming unemployment insurance benefits jumped up last week.
The natural way to read this is that the third wave of the coronavirus plague is starting to send the economy into renewed recession. It is, as it was before, not because of lockdowns. As before, the principal cause of the economy turning down is people getting scared, and deciding that they will postpone spending that requires close personal contact to next year. The professional Republicans appear to have decided to claim that what the government needs to do is to keep people hungry this winter so that they think they must go to work will-virus or nill-virus, and to block government action to keep spending economy-wide from declining. It’s going to be a bad winter. Continue reading "Unemployment Insurance Claims Signal Renewed Recession" » 12.2.1-6. Lectures: Neoliberalism's Bankruptcy :: Econ 115 F 2020 12.2.1. East Asia’s Miracles 22.00 min 12.2.2. China Stands Up 9.00 min 12.2.3. How Do We Think About the State’s Role Here? 10.75 min 12.2.4. The Business Cycle Background 10.75 min 12.2.5. The Coming of the Near-Second Great Depression: 2001–2009 21.75 min 12.2.6. Where Did the Regulators & Macroeconomic Managers Go? 9.75 min 1:32.00 of audio… ==== Plus 12.2.7. Zoom Lecture & Q&A https://berkeley.zoom.us/j/94569606763?pwd=VjBPSU5DOVlqUkVQZVJuLzVMTDlMdz09 Continue reading "12.2.1-6. Lectures: Neoliberalism's Bankruptcy :: Econ 115 F 2020" » Mark Price: Adam Looney. PhD from Harvard. Undergrad at Dartmouth https://twitter.com/price_laborecon/status/1332651699450810371. Oh boy, can you smell trouble. If you can’t you may want to get tested for COVID-19. He served among other places in Obama’s Treasury. He has an op-ed from a little over a week ago which I’m not going to share again where he argues against the Warren-Schumer proposal to cancel up to $50,000 in student loan debt…. The point of the Schumer-Warren proposal is Biden can act without the Senate.
We all want and need effort to help people cushion the crippling blow of COVID-19 but we have to wait for the Senate. Elevating food stamps as a superior form of stimulus is dishonest at best and deeply hypocritical for a man whose income is facilitated by a Koch-funded enterprise.... Another economist tweeted out the Looney Op-ed invoking their having worked with him in the Obama Administration.... The world is full of really wonderful people who went to Dartmouth and Harvard and work very hard to make sure that millions of poor and middle income children don’t get the same opportunities in life as their own children. That’s a hard lesson for people to learn and it’s an illustration of the way elite networks reinforce and reproduce inequality… Duncan Black: Well Paid Bullshit Artists https://www.eschatonblog.com/2020/11/well-paid-bullshit-artists.html: ‘A standard trick in DC policy circles is to derail any policy by focusing on a "better" policy, which lets you imply the policy's advocates are stupid and/or cruel. One reason to focus on something like debt reduction is that it is something Biden has the power to do, unlike most everything else. This isn't the only reason.
It's good on the merits, too, for a variety of reasons, but unless you have a plan to get Mitch McConnell to pass your fantasy plan, then you are just trolling… Andy Matuschak & Michael Nielsen: Quantum Country https://quantum.country/ Jean-Louis Gassée: PC Life After Apple Silicon https://mondaynote.com/pc-life-after-apple-silicon-a96861f58442: ‘Apple Silicon, in its first incarnation as the M1 System-on-a-Chip, combined with a new macOS version, is about to expand Apple’s share of the PC market — at Intel’s expense… Paul Musgrave: McDonalds Peace Theory Epitomized America's 1990s Hubris https://foreignpolicy.com/2020/11/26/mcdonalds-peace-nagornokarabakh-friedman/: ‘In the rich, lazy, and happy 1990s, Americans imagined a world that could be just like them… Scott Cunningham: Causal Inference: The Mixtape https://scunning.com/cunningham_mixtape.pdf Continue reading "Briefly Noted for 2020-11-28" » Briefly Noted for 2020-11-26 Sully Prudhomme: Le Vase Brisé (The Broken Vase) https://onbeing.org/poetry/le-vase-brise-broken-vase/ Jay Rosen: https://twitter.com/jayrosen_nyu/status/1329582924728037376: The GOP's verified account sent this out. That he won in a landslide. Here, I think, the party made official its break with American democracy. Not saying it wasn't apparent before. It was. Just more official now. As ridiculous as Sydney Powell is, this is a sobering moment. https://twitter.com/GOP/status/1329490975266398210 Kevin Liptak & Devan Cole: Chris Christie Calls Trump's Legal Team a 'National Embarrassment' https://www.cnn.com/2020/11/22/politics/chris-christie-donald-trump-election/index.html: ‘Former New Jersey Gov. Chris Christie said Trump has failed to provide any evidence of fraud, that his legal team was in shambles and that it's time to put the country first. "If you have got the evidence of fraud, present it," Christie said....
He decried efforts by the President's lawyers to smear Republican governors who have not gone along with the President's false claims of voter malfeasance. "Quite frankly, the conduct of the President's legal team has been a national embarrassment," he said, singling out Trump attorney Sidney Powell's accusations against Georgia GOP Gov. Brian Kemp… Lex: Workers vs Robots: A New Kind of Onshoring https://www.ft.com/content/734d7da1-737d-481c-8838-b58b471338ae: ‘Oil rigs have been on the automation march for most of the past decade. Remote control rooms can manage everything from drilling to procurement. The safety advantage of having fewer bodies on rigs is obvious in a pandemic. Benefits to the bottom line are just as clear. Equinor, as Statoil is now known, says the move added more than NKr2bn ($212m) to earnings within a year of its Johan Sverdrup rig going digital. The biggest savings come from shrunken payrolls. In the developed world, robots are set to replace humans in a range of physically tough, repetitive jobs, from order picking in warehouses to lifting the old and infirm… Josh Marshall: Folks, Let’s Get It the F--- Together https://talkingpointsmemo.com/edblog/folks-lets-get-it-the-fuck-together: ‘I really don’t know what the next two years holds. But I’m certain of one thing. It is and will be immeasurably better than Donald Trump having been reelected to a second term in office. No question. You did that. You owe it to yourself to get pumped and rejoice in that. It’s something to savor. It will help sustain you through endless civic work to come… Jodi Enda: Trump’s Unexpected Power Helps Republicans Win Even If He Doesn’t https://washingtonmonthly.com/2020/11/04/trumps-unexpected-power-helps-republicans-win-even-if-he-doesnt/: ‘When Trump’s name is on the top of the ballot, Republicans down the line do better. It feels strange to write that sentence since Trump himself might lose the presidency in this nail-biter of an election.
But it remains true that both times he topped the ticket, Republicans down the ballot out-performed expectations… Wikipedia: Leo Strauss https://en.wikipedia.org/wiki/Leo_Strauss#American_years: Strauss had also been engaged in a discourse with Carl Schmitt. However, after Strauss left Germany, he broke off the discourse when Schmitt failed to respond to his letters…. In 1932, Strauss left his position at the Higher Institute for Jewish Studies in Berlin for Paris… married Marie (Miriam) Bernsohn, a widow with a young child…. Strauss became a lifelong friend of Alexandre Kojève and was on friendly terms with Raymond Aron, Alexandre Koyré, and Étienne Gilson…. Strauss found shelter, after some vicissitudes, in England, where, in 1935 he gained temporary employment at University of Cambridge, with the help of his in-law, David Daube, who was affiliated with Gonville and Caius College. While in England, he became a close friend of R. H. Tawney…. Unable to find permanent employment in England, Strauss moved in 1937 to the United States, under the patronage of Harold Laski, who made introductions and helped him obtain a brief lectureship…. Strauss secured a position at The New School, where, between 1938 and 1948, he worked on the political science faculty and also took on adjunct jobs…. In 1949 he became a professor of political science at the University of Chicago… Om Malik: Why Are We Underestimating Zoom & Its Impact? https://om.co/2020/11/25/zoom-its-long-term-impact/: ‘The prevalence of Zoom has shown us that working from a home office can be better than sitting in traffic for two hours. Even if, at this point, we find ourselves despising Zoom and complaining of persistent Zoom fatigue, we will not be going back to our pre-Zoom ways after the pandemic subsides.
Whether Zoom remains the standard or gets overtaken by some upstart, Bill Gates predicts “that over 50% of business travel and over 30% of days in the office will go away”… Rated: 5: Scott Peterson: ROX Sonoma Coast Chardonnay 2019 https://us.nakedwines.com/products/rox-scott-peterson-sonoma-coast-chardonnay-2019# Alison Roman: Dan Roman's Buttery Roasted Chestnuts in Foil https://www.bonappetit.com/recipe/dan-romans-buttery-roasted-chestnuts-foil Christy Denney: Classic Stuffing Recipe https://www.the-girl-who-ate-everything.com/classic-stuffing-recipe/... Continue reading "Briefly Noted for 2020-11-26" » Boehlert: A Crack in the Noise Machine: How Murdoch Derailed Trump—Noted The thoughtful and insightful Eric Boehlert misses the major moment at which Rupert Murdoch put his press empire at the service of Joe Biden and America in ending the insane clown show that has been the presidential reign of Donald Trump. On the evening of election day, at 23:20 EST, Arnon Mishkin on the Fox News decision desk called Arizona and its 11 electoral votes for Joe Biden. Without Arizona, Trump would need not just Georgia (which Biden won by 0.2%) and Wisconsin (which Biden won by 0.7%) but also at least one of Pennsylvania (which Biden won by 1.2%) or Michigan (which Biden won by 2.8%). Calling Arizona for Biden put out of reach an election close enough that it could be decided for Trump by complaisant judges and a little more voter suppression. Yet Biden won Arizona, in the end, by only 0.3%. https://www.icloud.com/keynote/0yDiAW0blL0iFnRMqyYSAuWIQ Continue reading "Boehlert: A Crack in the Noise Machine: How Murdoch Derailed Trump—Noted" » Briefly Noted for 2020-11-25 Bibliotheca Augustana: Tapetum Bagianum http://www.hs-augsburg.de/~harsch/Chronologia/Lspost11/Bayeux/bay_tama.html: ‘c. 1080… Bret Devereaux: Collections: Bread, How Did They Make It? Part I: Farmers!
https://acoup.blog/2020/07/24/collections-bread-how-did-they-make-it-part-i-farmers/ Bret Devereaux: Collections: Iron, How Did They Make It? Part I, Mining https://acoup.blog/2020/09/18/collections-iron-how-did-they-make-it-part-i-mining/ Wikipedia: Pioneer Hi Bred International https://en.wikipedia.org/wiki/Pioneer_Hi_Bred_International Wikipedia: DeKalb Genetics Corporation https://en.wikipedia.org/wiki/DeKalb_Genetics_Corporation Steven Rattner: God Help Us if Judy Shelton Joins the Fed https://www.nytimes.com/2020/07/22/opinion/federal-reserve-judy-shelton.html?smid=tw-share: ‘Trump’s latest unqualified nominee to the Federal Reserve Board must be rejected... Jeremiah: 22 KJV https://biblehub.com/kjv/jeremiah/22.htm: ‘Thus saith the LORD: "Execute ye judgment and righteousness, and deliver the spoiled out of the hand of the oppressor: and do no wrong, do no violence to the stranger, the fatherless, nor the widow, neither shed innocent blood in this place. For if ye do this thing indeed, then shall there enter in by the gates of this house kings sitting upon the throne of David, riding in chariots and on horses, he, and his servants, and his people. But if ye will not hear these words, I swear by myself", saith the LORD, "that this house shall become a desolation." For thus saith the LORD unto the king's house of Judah: "Thou art Gilead unto me, and the head of Lebanon: yet surely I will make thee a wilderness, and cities which are not inhabited. And I will prepare destroyers against thee, every one with his weapons: and they shall cut down thy choice cedars, and cast them into the fire. And many nations shall pass by this city, and they shall say every man to his neighbour, 'Wherefore hath the LORD done thus unto this great city?' 
Then they shall answer, 'Because they have forsaken the covenant of the LORD their God, and worshipped other gods, and served them'"… Duncan Black: Failed 4th Estate https://www.eschatonblog.com/2020/11/failed-4th-estate.html: ‘I think the very Trump-specific sin of the media (as opposed to their normal sinning) was confusing an understandable decision to "treat this lunatic freakazoid as we would normally treat a president" with "go out of our way to portray this lunatic freakazoid as normal." I get it. It was difficult to do the first part without doing the second part. If the scandal-o-meter goes up to 7 on a tan suit, then anything resembling normal practices can't cope when Trump makes it hit 11 by 7am most days. And, who knows, maybe they didn't even do him any favors. Maybe The People love their lunatic freakazoid president. But the first draft of history has hardly been an accurate one… Utah HERO Project: Covid-Research State Chart https://marriner.eccles.utah.edu/covid-research-state-chart/ Joan Robinson (1962): Economic Philosophy Bret Devereaux: Collections: Iron, How Did They Make It? Part I, Mining https://acoup.blog/2020/09/18/collections-iron-how-did-they-make-it-part-i-mining/ Oliver Wyman: Model Projections for COVID-19 Cases https://pandemicnavigator.oliverwyman.com/forecast?mode=country&region=United%20States&panel=baseline On 2017-12-17, Michael Boskin claimed the TMR tax cut would generate a big boom in equipment investment. Perhaps a full percentage point or so. None of the model-builders agreed with him. And he was wrong: there was none such in 2018 or 2019; spending did not wheel from consumption to investment; “unlocked” foreign earnings were paid out in dividends, not invested in equipment. He has never explained or analyzed why he was wrong. 
Or why he was confident in the first place: Michael Boskin (2017): Another Look at Tax Reform and Economic Growth https://www.project-syndicate.org/commentary/republican-tax-plan-growth-effects-by-michael-boskin-2017-12 100+ Economists (2017): Pass tax reform and watch the economy roar https://www.businessinsider.com/trump-tax-reform-opinion-congress-pass-2017-11 Stan Sakai: Usagi Yojimbo https://www.usagiyojimbo.com/: ‘First published in 1984, [it] continues to this day. Usagi Yojimbo is one of the longest independent serialized comic book series in existence. Stan Sakai, the sole creator, author, and artist, is best known for his series Usagi Yojimbo, the epic saga of Miyamoto Usagi, a samurai rabbit living in late-sixteenth and early-seventeenth-century Japan. Since then, Stan Sakai has received numerous awards for Usagi Yojimbo including the National Cartoonists Society Award, multiple Eisner Awards, the Parents' Choice Award, and the Harvey Award for Best Cartoonist… An unprofessional beat sweetener about a Trumpism lobbyist who could neither plan nor execute a successful trade war: Jim Zarroli (2019): China Trade Talks: USTR Robert Lighthizer Is Trump's Hardball-Playing Negotiator https://www.npr.org/2019/02/21/696277594/expect-change-robert-lighthizer-is-trump-s-hardball-playing-china-trade-negotiat John Gruber: One More Thing: The M1 Macs https://daringfireball.net/2020/11/one_more_thing_the_m1_macs: ‘The M1. The new M1-based MacBook Air, 13-inch MacBook Pro, and Mac Mini are... three different manifestations of the same computer... far faster machines than the Intel-based Macs they’re replacing. But the big win, and clear focus from Apple, isn’t speed but battery life.... This is the sellable bullet point for the mass market consumer.... The M1 really is an entire system on a chip. Everything is on the M1. The various processors, of course: the CPU cores, the GPU cores, the Neural Engine cores.
But everything else is on the M1 too: the storage controller, the Secure Enclave, the memory controller, and, yes, the memory itself. The DRAM for M1-based Macs is on the package (“on the substrate”, I believe, is the technical lingo).... There’s no separate “video memory” and “system memory”—just memory.... Apple’s chip team is really proud of this UMA system and the integrated GPU on the M1. It’s a design that increases performance and power efficiency.... For over a decade, iPhones and iPads have had Apple-designed chips the competition could not and still cannot match. Now the Mac does too… Sara Gibbs: Everything I Never Wanted to Have to Know About Labour & Antisemitism https://medium.com/@sararoseofficial/everything-i-never-wanted-to-have-to-know-about-labour-and-antisemitism-649b5bc1e576: ‘I want the last four years of my life back.... If you’re new to this and listening, thank you. I know a lot of my fellow activists will be annoyed that I’m taking this tone but at this point I am so exhausted from four years of begging people to listen on this subject that I am grateful for any new allies and support. If you are listening, you’re already doing more than most. One of the most devastating aspects of Labour’s antisemitism crisis has been seeing the sheer volume of people I like, respect, even consider friends, denying or minimising this issue which has caused me so much personal devastation… A huge amount—an absolutely huge amount—was lost when Harry Dexter White overrode John Maynard Keynes at Bretton Woods and placed responsibility for closing "fundamental disequilibria" on deficit countries alone. This policy mistake still haunts us. And odds are that it is about to haunt us again.
Jeremy Bulow and Company sound the alarm: Jeremy Bulow & al.: The Debt Pandemic https://www.imf.org/external/pubs/ft/fandd/2020/09/debt-pandemic-reinhart-rogoff-bulow-trebesch.htm: ‘The COVID-19 pandemic has greatly lengthened the list of developing and emerging market economies in debt distress. For some, a crisis is imminent. For many more, only exceptionally low global interest rates may be delaying a reckoning.... Yet new challenges may hamper debt workouts unless governments and multilateral lenders provide better tools to navigate a wave of restructuring… Neil Fligstein & Steven Vogel: Political Economy After Neoliberalism http://bostonreview.net/class-inequality/neil-fligstein-steven-vogel-political-economy-after-neoliberalism: ‘First, then, governments and markets are co-constituted. Government regulation is not an intrusion into the market but rather a prerequisite for a functioning market economy.... Second, real-world political economy hinges on power, both political and market power. Specific forms of market governance—of the kinds we just sketched—do not arise naturally or innocently. They are the product of power struggles between firms, industries, workers, and governments within particular markets and in the political arena.... Third, there is more than one way to organize society to achieve economic growth, equity, and access to valued goods and services. The balance of power between government, workers, and firms differs greatly across countries and time… Rachel Reeves: Best for Britain https://twitter.com/BestForBritain/status/1288825505333030923: ‘The PM said… a trade deal would be secured by the end of July. Well… we don’t have a trade deal. All we have is a blueprint for a giant lorry park in the middle of Kent… Alan S. Blinder & Mark W.
Watson (2016): Presidents & the US Economy: An Econometric Exploration https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.20140913: ‘The US economy has performed better when the president of the United States is a Democrat rather than a Republican, almost regardless of how one measures performance. For many measures, including real GDP growth (our focus), the performance gap is large and significant. This paper asks why. The answer is not found in technical time series matters nor in systematically more expansionary monetary or fiscal policy under Democrats. Rather, it appears that the Democratic edge stems mainly from more benign oil shocks, superior total factor productivity (TFP) performance, a more favorable international environment, and perhaps more optimistic consumer expectations about the near-term future… Wikipedia: Cocoliztli Epidemics https://en.wikipedia.org/wiki/Cocoliztli_epidemics: ‘A mysterious illness characterized by high fevers and bleeding. It ravaged the Mexican highlands in epidemic proportions... often referred to as the worst disease epidemic in the history of Mexico.... Recent bacterial genomic studies have suggested that... a serotype of Salmonella enterica known as Paratyphi C, was at least partially responsible for this initial outbreak. It might have also been an indigenous viral hemorrhagic fever… Paul Krugman: Why Did Trump’s Trade War Fail? https://www.gc.cuny.edu/CUNY_GC/media/CUNY-Graduate-Center/PDF/Programs/Economics/Other%20docs/tradewarfail.pdf: ‘There used to be an extensive literature on “effective protection”.... Tariffs on imported inputs provided negative effective protection to downstream activities. And that’s what seems to have happened with the Trump trade war. Tariffs were largely focused on intermediate rather than final goods.
The net effect, then, may actually have been to discourage manufacturing!… ==== Plus: Kevin Drum: Why Are Republicans Being Such Assholes? https://www.motherjones.com/kevin-drum/2020/07/why-are-republicans-being-such-assholes/: ‘Bonus unemployment payments... expire today.... The main reason for extending them is because there are millions of Americans who are out of work and they desperately need the money.... [Plus] if the payments are cut off it will devastate an already ravaged economy.... So why are Republicans hemming and hawing?... From a purely selfish perspective, Republicans ought to be in favor of doing anything they can to keep the economy in decent shape through the election.... The whole thing is a disgrace.... Why are Republicans acting so contemptibly?… Andrew Edgecliffe-Johnson & Mark Vandevelde: Stephen Schwarzman Defended Donald Trump at CEO Meeting on Election Results https://www.ft.com/content/558f2a68-7d42-4702-b86d-fae5458b3e64: ‘Mr Schwarzman, a Republican donor who has been one of Mr Trump’s most energetic supporters on Wall Street, sought to assuage such fears, saying the president was within his rights to challenge election results and forecasting that the legal process would take its course. He asked whether other participants did not find it surprising that early votes in Pennsylvania had favoured Mr Trump, only for later counts to tip the state in Mr Biden’s favour.
Mr Schwarzman said there had been news reports stating that ballots continued arriving days after the election and that some of them may not have been real—issues, he said, that needed to be resolved by the courts, as the president’s legal team has argued… Continue reading "Briefly Noted for 2020-11-25" » Briefly Noted for 2020-11-24 Jonathan Bernstein (2020-11-19): Senate Republicans, Stop Trump’s Vote Antics Now https://www.bloomberg.com/opinion/articles/2020-11-19/senate-republicans-stop-trump-s-vote-antics-now: ‘It’s no longer enough just to acknowledge the obvious fact that Biden won… Jonathan Bernstein (2020-11-19): Donald Trump’s Antics Show Contempt for His Own Voters https://www.bloomberg.com/opinion/articles/2020-11-19/donald-trump-s-antics-show-contempt-for-his-own-voters: ‘The president’s election challenges have no realistic chance of succeeding. The goal now is to keep the donations flowing… Jonathan Bernstein (2020-11-18): Why Are Republicans Embracing Judy Shelton for the Fed Now?
https://www.bloomberg.com/opinion/articles/2020-11-18/why-are-republicans-embracing-judy-shelton-for-the-fed-now: ‘After months of blocking her nomination to the Fed, suddenly the Senate majority has had a change of heart… Jonathan Bernstein (2020-11-17): Joe Biden Has One Urgent Task Right Now https://www.bloomberg.com/opinion/articles/2020-11-17/joe-biden-has-one-urgent-task-right-now: ‘The president-elect can do something Donald Trump never has: offer a coherent message on Covid-19… Jonathan Schifman: The Entire History of Steel https://www.popularmechanics.com/technology/infrastructure/a20722505/history-of-steel/: ‘From hunks of iron streaking through the sky, to the construction of skyscrapers and megastructures, this is the history of the world's greatest alloy… Wikipedia: David Malpass https://en.wikipedia.org/wiki/David_Malpass Wikipedia: Ferrous Metallurgy https://en.wikipedia.org/wiki/Ferrous_metallurgy | History of the Steel Industry (1850–1970) https://en.wikipedia.org/wiki/History_of_the_steel_industry_(1850%E2%80%931970) | History of the Steel Industry (1970–Present) https://en.wikipedia.org/wiki/History_of_the_steel_industry_(1970%E2%80%93present) Steve Randy Waldman: Social democracy or feudalism https://www.interfluidity.com/v2/8012.html: ‘if we should recognize an echo of empire in contemporary trade imbalances, should we not also recognize an echo of feudalism in contemporary class dynamics? The class wars embedded in trade wars of the past generation have provoked growing chasms of inequality (within societies inscribed by nation-state borders), along with (oh Gatsby curve) declining mobility and dynamism between classes… Geoffrey Chaucer: The Canterbury Tales https://www.gutenberg.org/files/22120/22120-h/22120-h.htm: ‘Now preye I to hem alle that herkne this litel tretis or rede, that if ther be any thing in it that lyketh hem, that ther-of they thanken oure lord Iesu Crist, of whom procedeth al wit and al goodnesse. 
And if ther be any thing that displese hem, I preye hem also that they arrette it to the defaute of myn unconninge, and nat to my wil, that wolde ful fayn have seyd bettre if I hadde had conninge. For oure boke seith, 'al that is writen is writen for oure doctrine'; and that is myn entente… Scott Lemieux: COVID's Been Everywhere, Man https://www.lawyersgunsmoneyblog.com/2020/11/covids-been-everywhere-man: ‘Remember when Bret Stephens assured us that it was unpossible for COVID to spread beyond the densest urban areas? Obviously, he started with the premise that doing anything to stop the pandemic was bad…. Pro-Trump Republicans are no more likely to update their priors despite them being massively wrong all along…. Bret Stephens is, at least in a strictly formal sense, a professional writer… Steve M.: Is This Futile? https://nomoremister.blogspot.com/2020/11/are-we-sure-they-know-this-is-futile.html: ‘47% of the country will have an even darker view of Democrats and cities and black voters and "the Deep State." Then we'll be even more divided and the right will be even angrier and more paranoid…. But these cynics don't care that they're encouraging a state of permanent cold civil war… Continue reading "Briefly Noted for 2020-11-24" » 11.1. The Neoliberal Turn, & Hyperglobalization: Readings: Econ 115 F 2020 The required readings for Module 11 are rather long—but not nearly as long as for Modules 9 & 5, where I wound up taking two weeks per module. There is Skidelsky chapter 6 https://github.com/braddelong/public-files/blob/master/readings/chapter-skidelsky-keynes-6.pdf. The chapters of Skidelsky before this one have been all about how Keynes was smart and right. This chapter is, from Skidelsky’s view as of 1995 as an enthusiastic advocate of the neoliberal turn, about how Keynes’s disciples were dumb and wrong. It is a very good analysis, on the level of events and ideas, of why people decided to take the neoliberal turn. 11.1.
The Neoliberal Turn, & Hyperglobalization: Readings https://www.icloud.com/keynote/0wiN44cBruFAXnvv0weXzgCNw Continue reading "11.1. The Neoliberal Turn, & Hyperglobalization: Readings: Econ 115 F 2020" » Let's Make Matt Yglesias's New Weblog a Success! I very much hope that Matt Yglesias’s new weblog http://slowboring.com becomes the place to see and be seen on the internet. Not, mind you, that I expect Matt to get everything right. Or that I expect all of his quick takes to be sound takes: ==== Matt-- I find myself more with Tom Scocca here than you. You write https://www.slowboring.com/p/whats-wrong-with-the-media: The problem here, to me, is not that Walker ought to “stick to sports.” It’s that the analysis is bad. But because it’s in a video game console review rather than a policy analysis section and conforms to the predominant ideological fads, it just sails through to our screens... And then you say: What actually happened is that starting in March the household savings rate soared.... Middle class people are seeing their homeowners’ equity rise and... their debt payments fall, while cash piles up on their balance sheets… This makes sense as a criticism of Ian Walker only if you think that when Ian Walker wrote 'I’d be remiss to ignore all the reasons not to be excited for the PlayStation 5...', it was meant to be the start of an argument that the PS5 will not sell very well because of the epidemiological-economic-cultural uproar of the plague year. https://www.icloud.com/keynote/0HbVeT91VG7G4lI6FMjrekmQw Continue reading "Let's Make Matt Yglesias's New Weblog a Success!" » DeLong Debt Memo: 2020-11-17 You asked me to think about long-run downside via fiscal drag and higher required tax rates and revenues in the future, after the economy has returned to full employment, from additional debt-financed COVID depression-fighting stimulus expenditures. 
You asked me to think in the context of Larry Summers’s and my “Fiscal Policy in a Depressed Economy” of a decade ago. My conclusion: RIGHT NOW THERE IS NO PROSPECT OF ANY FUTURE FISCAL DRAG FROM ADDITIONAL DEBT-FINANCED FISCAL STIMULUS... https://www.icloud.com/keynote/0UJ2pg4UkvegNtUljgTL-Qhdg Continue reading "DeLong Debt Memo: 2020-11-17" » The Siren Song of Austerity: Project Syndicate J. Bradford DeLong: The Siren Song of Austerity https://www.project-syndicate.org/commentary/return-of-austerity-in-us-by-j-bradford-delong-2020-11: ‘Among the many lessons of the 2008 financial crisis and its aftermath in the United States is that there is no good reason to start worrying about debt when unemployment remains high and interest rates low. The hasty embrace of austerity derailed the last recovery, and it must not be allowed to do so again: BERKELEY–Ten years and ten months ago, US President Barack Obama announced in his 2010 State of the Union address that it was time for austerity. “Families across the country are tightening their belts and making tough decisions,” he explained. “The federal government should do the same.” Signaling his willingness to freeze government spending for three years, Obama argued that, “Like any cash-strapped family, we will work within a budget to invest in what we need and sacrifice what we don’t.” So great was the perceived need for austerity that he even vowed to “enforce this discipline by veto,” just in case congressional Democrats had something else in mind… https://www.icloud.com/keynote/0mfYblEaFhtRC9uF-9OFxB5bw Continue reading "The Siren Song of Austerity: Project Syndicate" » Do I Really Need to Say, Again, That Judy Shelton Does Not Belong on the Federal Reserve's Board of Governors? Apparently I Do... 
And it looks like even without Trump Republican senators will continue to be orange-haired baboons: That any Republican senators at all are thinking of voting for Judy Shelton—a woman whose views Milton Friedman dismissed by saying "it would be hard to pack more error into so few words"—for a Fed Governor position reveals an astonishing lack of spine. Yet the Senate Banking Committee chair appears to be attempting to advance her nomination on Tuesday: Hoisted from the Archives: Shelton the Charlatan https://www.bradford-delong.com/2020/03/shelton-the-charlatan-project-syndicate.html: In 1994 Milton Friedman wrote about Judy Shelton: "In a recent Wall Street Journal op-ed piece (July 15)... Judy Shelton started her concluding paragraph: “Until the U.S. begins standing up once more for stable exchange rates as the starting point for free trade...” It would be hard to pack more error into so few words.... A system of pegged exchange rates, such as the original IMF system or the European Monetary System, is an enemy to free trade. It is no accident that the 1992 collapse of the EMS coincided with the agreement to remove controls on the movement of capital..." https://miltonfriedman.hoover.org/friedman_images/Collections/2016c21/NR_09_12_1994.pdf. To turn monetary policy away from internal balance toward preventing exchange rate movements that market fundamentals wanted to see occur was, in Friedman's view, the road toward disaster. It was simply wrong. And it could be held together only if economies moved from free trade back toward managed trade—and so beggared not just their neighbors but themselves. Two and a half decades later, today's Judy Shelton seems no freer from error, but has added to it an enormous amount of incoherence. There is no consistent thread of argument in what she says. She is, rather, a weathervane pointing in the direction of whatever political wind she thinks likely to get her her next job.
Last year she said that the Federal Reserve should be careful not to do anything to curb stock prices: "More than half of American households are invested through mutual funds or pension funds in this market. I don’t want the Fed to pull the rug out from under them..." https://www.bloomberg.com/news/articles/2019-07-05/trump-fed-pick-shelton-says-central-bank-should-support-markets. But in 2016—when unemployment was higher and the case for easy money stronger—it was the Fed's "appeasing financial markets" that was the thing to be avoided https://www.washingtonpost.com/opinions/yes-trumps-latest-fed-pick-is-that-bad-heres-why/2020/02/10/a13fa1ec-4c44-11ea-9b5c-eac5b16dafaa_story.html. Back then under the Obama administration when there were lots of unemployed workers who could be put to work producing exports, policies to produce a weaker dollar to boost exports were to be shunned: "The obvious quick route to export success for any nation is to depreciate its currency. Dollar depreciation is already being pushed by the Obama administration.... Let's not compromise our currency in a misguided attempt to boost U.S. job growth. America's best future is forged through sound finances and sound money..." https://www.wsj.com/articles/SB10001424052748704698004576104260981772424. These days "compromising the currency" is a plus from the interest-rate cuts she wants to see https://www.marketwatch.com/story/trumps-fed-choice-judy-shelton-says-interest-rate-cut-needed-because-europe-is-set-to-devalue-euro-2019-07-05. Today monetary policy should be made looser "as expeditiously as possible" https://www.washingtonpost.com/business/2019/06/19/fed-meets-trumps-potential-next-pick-wants-see-lower-rates-fast-possible. Back then "loose monetary policy... leads to internal bankruptcy... whole nations have foundered on this path..." https://www.wsj.com/articles/SB123742149749078635. 
Catherine Rampell https://www.washingtonpost.com/opinions/yes-trumps-latest-fed-pick-is-that-bad-heres-why/2020/02/10/a13fa1ec-4c44-11ea-9b5c-eac5b16dafaa_story.html earlier this month correctly called Judy Shelton "an opportunist and a quack", and reported that Republican senators think she is not qualified. Kevin Cramer (R-ND) said: "I wouldn't want five [Fed Board] members like her". Thom Tillis (R-NC) said that her views on the gold standard do not matter because return to the gold standard is off the table. Tim Scott (R-SC) agreed with Tillis, stating that "controversial statements" were "not relevant". Pat Toomey (R-PA) worried about the "very, very dangerous path to go down" she advocated. Richard Shelby (R-AL) was "concerned". John Kennedy (R-LA) said: "Nobody wants anybody on the Federal Reserve that has a fatal attraction to nutty ideas" https://www.wsj.com/articles/republican-senator-raises-concerns-over-sheltons-fed-candidacy-11581608467?mod=hp_major_pos1. But the Wall Street Journal editorial board has decided to back Judy Shelton's "more error packed into so few words" over Milton Friedman by praising her as a believer that "monetary policies that ignore exchange-rate stability wreak political and economic havoc". Trump wants Judy Shelton on the Fed Board so he can threaten to—and possibly actually—replace Jay Powell with her as chair. If we have learned anything over the past three years, it is that furrowed brows of concern from Republican senators are worth precisely nothing. John Kennedy (R-LA) followed his furrowed brow by saying "I’m not saying that’s the case here". Mike Crapo (R-ID) praised her "deep knowledge of democracy, economic theory and monetary policy", and denounced the "war on Judy Shelton". If Republican senators are going to save the country from yet another Trump misstep that makes America less great, first core Republican supporters have to step up and give their senators 53 spine transplants.
Maskell & Rybicki: Counting Electoral Votes: An Overview of Procedures at the Joint Session Note to Self: I very much hope that Pelosi and Schumer are already talking to Collins, Murkowski, Romney, Sasse, and company: the potential for an absolute dog’s breakfast on January 6, 2021 is already remarkably high, and may well increase in probability as things get crazier and crazier over the next two months. It is also not too early for the House of Representatives to be thinking hard about how to maintain their own security—both on the U.S. Capitol grounds, and for members in transit to the Capitol itself. The argument that Trump is not trying to gain support among the Republicans for a coup, and that Republicans are not egging one another on to see if they dare to do it, but rather doing something else seems to me to be overhasty and overconfident. Yes, Trump might be trying to establish an extradition-free bolthole for himself in Abu Dhabi. Yes, Trump might be trying to destroy as much evidence linking him to criminality as he can. Yes, Trump might be trying to show that he can disrupt the system so that he can then strike a deal that will leave him confident he will remain out of jail next year. Yes, Trump might simply be confused. But he might not. And while Giuliani is clearly neither his Göring, his Himmler, nor his Heydrich, that does not mean that nobody else is: Jack Maskell & Elizabeth Rybicki: Counting Electoral Votes: An Overview of Procedures at the Joint Session, Including Objections by Members of Congress https://fas.org/sgp/crs/misc/RL32717.pdf: ‘Basis for Objections: The general grounds for an objection to the counting of an electoral vote or votes would appear from the federal statute and from historical sources to be that such vote was not “regularly given” by an elector, and/or that the elector was not “lawfully certified”...
Continue reading "Maskell & Rybicki: Counting Electoral Votes: An Overview of Procedures at the Joint Session" » Frost: Entrepreneurial Transformation of Socialist China—Noted Adam Frost: Entrepreneurial Transformation of Socialist China https://ysi.ineteconomics.org/project/5f316897689c756fb5c52785/event/5f6b2b94a21037043d0c1458: ‘Generations of scholars argued that beginning with the Communists’ victory over the Nationalists in 1949 and culminating until the establishment of collective economic institutions in 1957, private entrepreneurship was effectively purged from the Chinese economy.... [But] capitalist entrepreneurship was an enduring feature of the modern Chinese economy.... 2,600 cases of “speculation and profiteering” that were prosecuted by local government agencies in the 1960s and 1970s... https://www.icloud.com/keynote/0z0Btdoy-vo9Rm1GKFxXBwoBw Continue reading "Frost: Entrepreneurial Transformation of Socialist China—Noted" » BioNTech 90% Effective Messenger RNA COVID Vaccine? https://www.icloud.com/keynote/0aP8gr2_cfSeHzTQEw9OVC6hg Little additional information seems to be available at this moment: Pfizer and BioNTech (2020-11-09): Announce Vaccine Candidate Against COVID-19 Achieved Success in First Interim Analysis from Phase 3 Study https://www.businesswire.com/news/home/20201109005539/en/: ‘Vaccine candidate was found to be more than 90% effective in preventing COVID-19 in participants without evidence of prior SARS-CoV-2 infection in the first interim efficacy analysis. Analysis evaluated 94 confirmed cases of COVID-19 in trial participants. Study enrolled 43,538 participants, with 42% having diverse backgrounds, and no serious safety concerns have been observed; Safety and additional efficacy data continue to be collected. Submission for Emergency Use Authorization (EUA) to the U.S.
Food and Drug Administration (FDA) planned for soon after the required safety milestone is achieved, which is currently expected to occur in the third week of November. Clinical trial to continue through to final analysis at 164 confirmed cases in order to collect further data and characterize the vaccine candidate’s performance against other study endpoints… Continue reading "BioNTech 90% Effective Messenger RNA COVID Vaccine?" » 9.2.0. Intro Video: Glorious Post-WWII Years in the Global North: Equitable Growth & Inclusion: Econ 115 Interactive Video: https://share.mmhmm.app/ed6bb654a1bb406394e149ce53ecbbd4 https://www.icloud.com/keynote/0veWFCQ04v48yvwHSZVb2i6kA .#economicgrowth #economichistory #highlighted #slouchingtowardsutopia #socialdemocracy #thirtygloriousyears #2020-11-03 Introductory Video for Fall 2020 Instantiation of Econ 115 Module 9: Post-WWII Glorious Years of Equitable Growth & Inclusion in the Global North econ-115-9.2.0-module-intro-video-11.00-2020-10-28 https://share.mmhmm.app/ed6bb654a1bb406394e149ce53ecbbd4 As of 1945, there were not many grounds for optimism, as far as the world economy and the world political economy were concerned. The greater totalitarianism had been squashed. The lesser was flourishing. It used the blood of the Russian people spilled fighting Nazism over 1941 to 1945 as a powerful source of legitimating energy—never mind the active and eager collaboration of Stalin and his acolytes with Hitler whenever it had seemed to be to their even momentary advantage. Market economies continued to fail to deliver Polanyian rights. Their ability to even deliver economic growth at all had been cast into grave doubt by the Great Depression. As for democratic parliamentary politics—it could still be easily dismissed as a swamp that needed to be drained. Yet 1940-1980 saw the victory and secure establishment of the market-heavy mixed economy as an engine for delivering unprecedented economic growth in the global north.
It also saw the overwhelming victory of parliamentary democracy as a system that could generate good economic management and also increasing human freedom. And it saw the world system deliver independence and somewhat increasing prosperity, if neither democracy nor economic convergence, to the global south.

• In the G7 nations, 1870 to 1913 had delivered average measured real-income growth of 1.4% per year, albeit unequally distributed.
• That fell over 1913 to 1938 to 0.7% per year.
• But then 1938 to 1973 saw growth at 3.0% per year, catching up to and leaping ahead of what a continuation of pre-1913 trends would have forecast.

Why did things go so right? And why, given that things went so right up through the 1970s, was that system of mixed-economy social democracy rejected for one of neoliberalism after 1980?

• Some of it was that Keynes was at least half right, and his technical and technocratic adjustments to economic management did a great deal of good.
• Some of it was that the disasters of 1913 to 1938 and the disaster on the other side of the iron curtain that was really-existing socialism that might move west—and east—concentrated everybody’s mind on making an imperfect system work.
• Some of it was that the backlog of potential innovations undeployed over 1913 to 1938 made growth easy.
• Much of it was that with strong and equitable growth the market economies’ failure to vindicate Polanyian rights no longer seemed so salient.
• Some of it was that social insurance systems did, to some extent, manage to vindicate Polanyian rights.
• And much of it was that the plutocrats and the rightists had lost their nerve, after the disasters that their attempts to suppress labor organizations and political majorities had generated.

But why then did things fall apart in 1980? Keep that question in the back of your minds.
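The "catching up to and leaping ahead" claim in the growth bullets above is easy to check by compounding the quoted rates. A minimal back-of-the-envelope sketch (my own arithmetic, not a calculation from the lecture; the base index of 100 in 1913 and the 25/35/60-year spans are my assumptions):

```python
# Index G7 real income at 100 in 1913, then compound the quoted growth rates:
# 0.7%/yr over 1913-38 and 3.0%/yr over 1938-73, versus the pre-1913 trend
# of 1.4%/yr continued for the full sixty years 1913-73.
actual = 100 * 1.007**25 * 1.03**35   # what actually happened, per the lecture's rates
trend = 100 * 1.014**60               # counterfactual: the old trend just continues
print(round(actual), round(trend))    # prints 335 230
```

On these numbers the actual 1973 level ends roughly 45% above where a continuation of the 1870–1913 trend would have put it, which is the "leaping ahead" in the bullet above.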
In addition to strong economic growth that was equitable, in the sense of being divided across economic classes rather than hogged by one, there was the increase in human freedom. The equitable growth of 1938 to 1980 stopped afterwards in the age of neoliberalism. But the forward march of inclusion has continued. Inclusion of people as first-class citizens. Who are the first-class citizens of the civilization that we define as the one in which the Anglo-Saxon language is the lingua franca—the tongue spoken by free people? At the start, in the days of Leader Elf-Wisdom of the west branch of the knife-guys—King Alfred of Wessex—it was his own landholding trained-warrior male-Saxon thains. But even in Alfred’s day, he was reaching to include others, and not just other branches of the Saxon tribe. Alfred called his expanded kingdom “England”, after the name of the neighboring tribe of the Angles that he wanted to include as well. And as history passes it becomes the English, and then the British—but, remember, the WOGS—the worthy oriental gentlemen—begin as soon as you cross the English Channel and set foot on the European continent at the port of Calais. There are still those who hold to that. England, if not Great Britain, is still in its majority in support of its ruler Boris Johnson. Boris at least claims to believe that the most important thing is to keep England pure from European pollution. (You are not supposed to remember his full name: Alexander Boris de Pfeffel Johnson. Alexander is Greek. Boris is Russian. The de is French. The Pfeffel is German. The Johnson is Welsh, half Keltic, not fully Saxon. If you trace his male line of descent, it goes back not to the Saxons or the Vikings but rather to Turkish nomads who came boiling out of Kazakhstan in days gone by.) But I digress. From male, British, and upper-class, the set of “gentlemen” expands. It expands to include the middle-class males who have good manners.
And then Anglo-Saxon is expanded to include by courtesy others of Northern European stock who walk the walk. By the time of Teddy Roosevelt it is anyone white, or mostly white, who is willing to behave like a male Anglo-Saxon Puritan—children of the Mayflower by adoption as well as by birth. And Teddy Roosevelt still believed in the Republican Party’s historical obligation for the freedom and uplift of African-Americans. He was swimming against the tide of the American power structure of his day. But he was swimming. And working class people are people. And women became people too. Next: America is a Protestant nation. With the coming of World War II and the strong need to rally everyone against Nazism, and then with the coming of the age of social democracy, the progress of full inclusion speeds up. Then: America is a Christian nation. Then: America is a Judeo-Christian nation. Republican President Dwight D. Eisenhower says: “It is very important that an American have a strong faith and I don’t care what”. Jimmy Carter as president in the 1970s talks about the “Abrahamic” faiths, bringing Muslims into the religious circle of inclusion. But that does not seem to stick. Women become full people. Freedom is redefined so that your freedom does not include the right to discriminate against African-Americans in public accommodations. Who people love or choose to be in their private lives becomes “not your business” as well. But Senator Rand Paul still believes in his heart of hearts that it does. Only the fact that it is not electorally wise to say that out loud, even in Kentucky, keeps him quiet now. And every year Fox News and company try to remove the Judeo- from “Judeo-Christian nation” and roll back inclusion by demanding an end to the “war on Christmas”.
We know where we are supposed to be now: equal opportunity as a goal, rather than a joke; tolerance and celebration of our diversity because trapping yourself in one point of view is going to make you stupid and narrow. We are not there yet. The cultural DNA of the global north today is still roughly 50% from the North-Atlantic Anglo-Saxons of the Victorian era. But this is a civilization in which people are less limited by what their parents happened to look like and be than any previous one. A story to inspire. But not a story to make us comfortable about where we are right now. 1268 words 11.00 minutes Wolf: Long Economic COVID—Noted Martin Wolf: The Threat of Long Economic Covid Looms https://www.ft.com/content/f9a0c784-712e-4bf9-b994-55f8d63316d9: ‘Covid-19 has left many patients with debilitating symptoms after the initial infection has cleared. This is “long Covid”. What is true of health is likely to be true of the economy, too…. To meet the threat of a “long economic Covid”, policymakers must avoid repeating the mistake of withdrawing support too soon, as they did after the 2008 financial crisis. This danger is real, even if there remains much uncertainty about how the crisis will unfold…. We know that many businesses have been hurt, as demand for their output collapsed or they were locked down. The second waves of the disease now crashing on to many economies will make this worse.... But we also know that things could have been far worse. The world economy has benefited from extraordinary support from central banks and governments.... We know, nevertheless, that what has already happened is going to leave deep scars. The longer the pandemic continues, the bigger those scars will be.... Fiscal policy has to play a central role, as it alone can provide the necessary targeted support... Governments have to spend. But, over time, they must shift their focus from rescue to sustainable growth. If, ultimately, taxes have to rise, they must fall on the winners.
This is a political necessity. It is also right… .#noted #2020-10-30 <https://www.bradford-delong.com/2020/10/wolf-long-economic-covidnoted.html>

Trump: In California, You Have a Special Mask...—Noted

Donald Trump: 'In California https://twitter.com/atrupar/status/1321548174427852801, you have a special mask. You cannot under any circumstances take it off. You have to eat through the mask. Right, right, Charlie? It's a very complex mechanism. And they don't realize those germs, they go through it like nothing…

DeLONGTODAY 2020-10-30: American Republicans Are Bad Economic Managers

Video at: http://delongtoday.com

Today is an economic-analysis day:

• I will start with the large gap between economic performance under Democratic and under Republican presidents, and with Alan Blinder’s and Mark Watson’s puzzlement as to where it comes from
• I will then run through the history of what went wrong with Republican economic policy
• I will then point out the technocratic idiocies of economic policy under the Trump administration
• And I will then set forth my theory of where the performance gap—that American incomes grow 1.8%-points per year faster when Democrats are president than when Republicans are—comes from.

Democratic politicians and legislators are willing, at least sometimes, to listen to their economists. Republican politicians and legislators will not even let their economists into the room where it happens until they have agreed that the politicians and legislators are already pursuing great policies.
.#forecasting #economicgrowth #highlighted #politicaleconomy #politics #2020-10-30

The Economic Incompetence of Republican Presidents: Project Syndicate

J. Bradford DeLong: The Economic Incompetence of Republican Presidents https://www.project-syndicate.org/commentary/democratic-administrations-historically-outperform-on-economy-by-j-bradford-delong-2020-10: In a United States rife with disinformation, one of the most persistent myths is that Republicans are better than Democrats for business and economic growth. In fact, Republicans have consistently under-performed on the economy for almost a century.

One hears many strange things nowadays, not least because “they” (a complicated term) are flooding the zone with misinformation. Without a shared set of facts upon which to base ethical and policy debates, democracy inevitably breaks down. The system’s virtue lies in its unique ability to elevate and consider a broad range of ideas emanating from society.
Ideally, through a good-faith exchange of arguments and a weighing of the alternatives, a majority of voters converges on the best course of action...

We hear many strange things today. They—and it is a complicated “they”—are flooding the zone with misinformation. Why? For lots of reasons. But democracy breaks down under a flood of misinformation. Democracy’s excellences spring from its ability to consider ideas from different places in society, and to converge on the good ones. But that requires that the flow of information into the public sphere be reality-based—or at least that there be confrontations in which the people can watch Lincoln debate Douglas and decide who is trustworthy and correct and who is not. And we have lost that. But we keep on trying. Here Sisyphus. Here rock. Here hill. And, as Camus wrote now long ago, we must imagine Sisyphus happy, with what he meant depending on which of the many possible ways we choose to read the word “must”.

One piece of misinformation I see more and more these days is that on election day America faces a tradeoff. On the one hand, electing a Democrat means that America will no longer have a government that permanently kidnaps children just because it can. On the other hand, electing a Democrat “who will be radical and hurt the economy…”, as the Wall Street Journal columnist Peggy “No Republicans Should Ever Stab Trump in the Back” Noonan puts it, before writing that “[Biden] should not be going out for ice cream in a mask like John Dillinger on the lam…” and that “[Kamala Harris] is embarrassing. Apparently you’re not allowed to say these things because she’s a woman…. I will not sweat it, I will be myself….
If you can’t imitate gravity, could you at least try for seriousness?…”

So let me give the microphone to economists Alan Blinder and Mark Watson, who write that: “The superiority of economic performance under Democrats rather than Republicans is nearly ubiquitous: it holds almost regardless of how you define success…. The performance gap… strains credulity…. 1.8 percentage points [per year]… from Truman through Obama…” And note that if they went back two more presidents—to Hoover-Roosevelt—the gap would be even bigger: about 3%/year.

Note that in this context Trump was an unusually good president as far as economic performance in his first three years was concerned. In the first three years of his presidency the economy matched the 2.4%/year growth it achieved in Obama’s second term. Even matching the previous Democrat is something that Trump’s presidency, and only Trump’s among post-WWII Republican presidencies, has managed.

Blinder and Watson are flummoxed about where this performance gap comes from: greater fixed investment, more consumer optimism and thus spending on durables, fewer unfavorable oil shocks, and perhaps stronger growth abroad. But these can explain less than half of the gap. It is not that Democrats pursue overinflationary policies that borrow growth from the future and move it into the present.

When I first read Blinder and Watson, the oil factor jumped out at me. Both President Bushes—and also Nixon and Ford’s Secretary of State Henry Kissinger—were deeply confused about whether the U.S. wanted a high or a low price of oil as far as boosting real income growth was concerned. Other presidents grabbed for chances to make or keep oil prices lower.

When we look back at history, it seems that Republican presidents and their administrations have little sense of what economic policies are likely to work. It simply never entered George W.
Bush’s mind, or the mind of anyone in his administration, that a financial crisis could be produced by underregulation and would be a bad thing. It simply never entered Ronald Reagan’s mind, or the mind of anyone in his administration, that the big budget deficits they created gave America a choice between seeing investment collapse—slowing growth—or borrowing from abroad and in the process importing lots more manufactures—thus turning the Midwest into a rust belt. And Nixon’s belief that low interest rates plus wage-and-price controls could keep both inflation and unemployment low was hard to fathom even at the time.

Here we can say of Trump that he has played true to type. NAFTA: worst trade deal in American history. TPP: second worst. Add some TPP provisions to NAFTA and call it USMCA, and all of a sudden it makes America great again. A trade war with China: “good, and easy to win”. But the result has been no change in manufacturing employment, a widened manufacturing trade deficit, and U.S. consumers suffering reduced real incomes because they, not China, have paid the tariffs. Why? Because Robert Lighthizer and company had no clue how to plan or fight a trade war.

Republican presidents, with their repeated failures to understand how the economy works, have been hurting it since at least 1928. There is no tradeoff here.

.#economicgrowth #highlighted #macro #politicaleconomy #projectsyndicate #2020-10-30

Continue reading "The Economic Incompetence of Republican Presidents: Project Syndicate" »

Briefly Noted for 2020-10-26

DeLong: COVID Dashboard https://research.stlouisfed.org/dashboard/56322

Ulrike Malmendier, Stefan Nagel, & Zhen Yan: The Making of Hawks & Doves: Inflation Experiences on the FOMC https://www.nber.org/papers/w23228.pdf...

Dan Froomkin: New York Times Nailed for Publishing Republican Propaganda—Yet Again https://www.salon.com/2020/10/23/new-york-times-nailed-for-publishing-republican-propaganda--yet-again/: ‘Two supposedly "average" voters in a Times story turn out to be hardcore Republicans. And it's happened before... It raises serious questions about whether Times editors and reporters, rather than actually trying to determine how voters feel, are setting out to find people to mouth the words they need for predetermined story lines that, not coincidentally, echo the Trump campaign's propaganda…

Fox and Briar: Mediterranean Lamb Bowls https://www.foxandbriar.com/mediterranean-lamb-bowls/

Marc Flandreau: How Vulture Investors Draft Constitutions: North & Weingast 30 Years Later https://us02web.zoom.us/w/86160838299?tk=duPA9Ka5nc9J1jZ0ZeuMESHoiEx_l54-alP8YvwejVE.DQIAAAAUD5YOmxZkLVd6UEtOZFFOU2pXZEJZblV5bHpnAAAAAAAAAAAAAAAAAAAAAAAAAAAA
http://pastem.jp/jti/jti-00512.html
[Japanese IT-terminology dictionary page; the text is mojibake and unrecoverable apart from the glossary entries it defines: FLOPS — FLoating-point Operations Per Second; TFLOPS — Tera Floating-point Operations Per Second.]
https://www.homeworklib.com/answers/890379/draw-the-structure-of-the-product-that-is-formed
# Draw the structure of the product that is formed when the compound shown below undergoes an elimination reaction with NaOCH3

Draw the structure of the product that is formed when the compound shown below undergoes an elimination reaction with NaOCH3. Indicate the stereochemistry of the product.

[The structure drawings and interactive 3D viewers on this page did not survive extraction.]

Concepts and reason: The reaction of a haloalkane with sodium methoxide is given. Sodium methoxide is a strong base and thus favors the elimination product, an alkene. The mode of elimination is E2.

Fundamentals: The reaction takes place via an E2 mechanism because the base is strong and the alkyl halide is secondary. E2 stands for elimination following second-order kinetics. In an E2 reaction, the base abstracts the $\beta$ hydrogen while the leaving group departs, simultaneously in a single step. The elimination proceeds in an anti fashion: the leaving group and the hydrogen abstracted by the base are on opposite sides, i.e. anti-periplanar.

Similar Homework Help Questions

• Draw the structure of the product formed when the compound shown undergoes an elimination reaction with NaOCH3; indicate the stereochemistry of the product (several variants, some asking for the answer in dash-and-wedge notation).
• Draw the structure of the product formed when the compound shown undergoes a reaction with (CH3)2CHMgI and then is treated with water.
• Draw the structure of the product formed when the compound shown undergoes a reaction with H2C=O and then is treated with water.
• Draw the structure of the product formed when the compound shown undergoes a reaction with 1 equivalent of CH3MgI and then is treated with water.
• Draw the product formed when the structure shown undergoes solvolysis in CH3CH2OH with heat.
• Alkyl Halides: substitution reaction of (2S,3R)-2-chloro-3-methylpentane with CH3O−.
• Alkyl Halides: reaction of 3-chloro-2,2-dimethylpentane and ethanol.
• Draw the product formed when the structure shown undergoes a substitution with NaOCH3.
https://convert.ehehdada.com/romertonewton
# Rømer to Newton

Calculates the Newton temperature from a given Rømer-scale value.

## Rømer to Newton

The Rømer temperature scale was proposed by Ole Christensen Rømer in 1701. The scale originally fixed its zero using freezing brine; it was then adjusted so that pure water boils at 60 degrees and freezes at 7.5 degrees. The unit is written °Rø after the value. Beware: it has sometimes appeared as °R, the same symbol used for the Réaumur and Rankine scales.

The Newton temperature scale was defined by Isaac Newton in 1701, setting 0 on the scale as "the heat of air in winter at which water begins to freeze" (that is, 0 as on the Celsius scale) and the value 33 as the "heat at which water begins to boil", around 100 ℃; exactly 100 ℃ is the value commonly used for conversions between the two scales. Newton degrees are written °N after the value.

Newton-scale values are calculated from Rømer using the formula $$\text{Newton} = (\text{Rømer} - 7.5) \times \frac{22}{35}$$
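A minimal sketch of the page's conversion formula in Python (the function names are ours, not from the converter page):

```python
# Rømer → Newton conversion, as given on the page:
#   Newton = (Rømer − 7.5) × 22/35
# The two fixed points line up: water freezes at 7.5 °Rø = 0 °N and
# boils at 60 °Rø = 33 °N, so 52.5 Rømer degrees span 33 Newton
# degrees, which is where the 22/35 scale factor comes from.

def romer_to_newton(romer: float) -> float:
    """Newton-scale value for a Rømer-scale temperature."""
    return (romer - 7.5) * 22 / 35

def newton_to_romer(newton: float) -> float:
    """Inverse conversion, Newton back to Rømer."""
    return newton * 35 / 22 + 7.5

print(romer_to_newton(60))  # boiling point of water, in °N
```

Note that 22/35 is just 33/52.5 reduced to lowest terms, so the round-trip through both functions is exact at the fixed points.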
https://www.ideals.illinois.edu/handle/2142/107607
## Files in this item

4690.pdf (22 kB) — Abstract (PDF)

## Description

Title: THEORETICAL ANALYSIS OF THE LAWRENCIUM FLUORIDE ION AS A PROMISING CANDIDATE FOR ELECTRON ELECTRIC DIPOLE MOMENT SEARCHES

Author(s): Mitra, Ramanuj

Contributor(s): Das, B. P.; Abe, M.; Sahoo, Bijaya Kumar; Prasannaa, Srinivasa

Subject(s): Mini-symposium: Precision Spectroscopy for Fundamental Physics

Abstract: Diatomic heavy polar molecules as well as molecular ions have proved to be very good candidates for table-top experimental searches for the electric dipole moment of the electron (eEDM). These experiments, in combination with relativistic many-body calculations, set an upper bound on the eEDM, thus probing physics beyond the Standard Model (SM) of elementary particles. Theoretically, we can check the suitability of a molecular candidate for eEDM experiments by calculating the effective electric field ($E_{eff}$), permanent dipole moment ($\mu$), and polarizing electric field ($E_{pol}$), as these are among the major factors that determine the experimental sensitivity of the molecule. The production of radioactive elements like lawrencium opens up the possibility of performing eEDM experiments with molecules containing radioactive atoms. In this abstract, we focus on the lawrencium fluoride ion and its potential for probing new physics beyond the SM via the eEDM. To ensure the stability of the molecule, we obtained the equilibrium bond length of LrF$^+$ from the minimum of the potential energy curve (PEC), and we also showed how the PEC depends on the choice of basis sets. As $E_{eff}$ is entirely relativistic in origin, relativistic calculations are required to obtain it. We report the values of $E_{eff}$ and $\mu$ of LrF$^+$ calculated at the Dirac-Fock level, using a quadruple-zeta basis. Our calculated Dirac-Fock level value of $E_{eff}$ for LrF$^+$ is 213.6 GV/cm, which is two times larger than that of HgF [V. S. Prasannaa, A. C. Vutha, M. Abe, and B. P. Das, Phys. Rev. Lett. 114, 183001 (2015)], and approximately nine times larger than that of the ionic candidate HfF$^+$ [W. Cairncross et al., Phys. Rev. Lett. 119, 153001 (2017)], thus suggesting larger experimental sensitivity. Inclusion of correlation effects in these properties through calculations using the relativistic coupled-cluster (RCC) method is underway. We propose a molecular ion trap procedure to confine the molecules for performing eEDM experiments with the LrF$^+$ ion, as trap experiments give larger coherence times than beam experiments, thus improving experimental sensitivity.

Issue Date: 24-Jun-20
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: APS
Genre: CONFERENCE PAPER/PRESENTATION
Type: Text
Language: English
URI: http://hdl.handle.net/2142/107607
Date Available in IDEALS: 2020-06-26
https://ask.sagemath.org/questions/38289/revisions/
# Revision history [back]

### complex norm

As a newbie, I must be missing something, but here is the question. With this setup:

    var('a', domain=CC)
    a.norm()
    a.norm().simplify()

the last line displays as `a^2`, but it should be `|a|^2`. What am I missing?
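For context on why `a^2` cannot be right for a genuinely complex `a`: the norm is `a` times its conjugate, which equals `|a|^2` and differs from `a^2` whenever `a` is not real. A plain-Python check (independent of Sage, purely illustrative):

```python
# For complex a, the norm a * conjugate(a) equals |a|^2, not a^2.
a = 3 + 4j

norm = (a * a.conjugate()).real  # (3+4j)(3-4j) = 9 + 16 = 25
square = a ** 2                  # 9 + 24j - 16 = -7 + 24j

print(norm)    # 25.0
print(square)  # (-7+24j)
assert norm == abs(a) ** 2   # |a|^2 = 5^2 = 25
assert square != norm        # a^2 is something else entirely
```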
https://www.researchgate.net/publication/351823800_Business_Resilience_and_Complex_adaptive_systems
Chapter (PDF available)

Abstract: The complex adaptive system has facilitated advances in artificial intelligence (Moon et al., 2011; Padilla, 2012; Chandrasekaran, 2013; Yagüe & Balmaseda, 2020), but it is also a metaphor for understanding the way in which networks of companies respond to change. In recent decades, the systemic approach has contributed to the advancement of knowledge in the administrative, economic and organizational sciences (Mas, 2008; Jackson, 1994).

Article (full text available)

This paper examines the economic effects of COVID-19 containment measures using daily global data on containment measures, infections, and economic activity indicators, such as nitrogen dioxide ($\mathrm{NO}_2$) emissions, international and domestic flights, energy consumption, maritime trade, and mobility indices. Results suggest that containment measures had a significant impact on economic activity, equivalent to about a 10 percent loss in industrial production over the 30 days following their implementation. Easing containment measures results in an increase in economic activity, but the effect is smaller (in absolute value) than that of tightening. Fiscal measures used to mitigate the crisis were effective in partly offsetting these costs. We also find that school closures and cancellation of public events are among the most effective measures in curbing infections and are associated with low economic costs. Other highly effective measures, like workplace closures and international travel restrictions, are among the costliest in economic terms.

Article (full text available)

The objective of this article is to analyze employment as an essential pillar sustaining social inclusion and democracy. Because of the COVID-19 pandemic, many people were laid off. After years of full employment in the United States, with historically low unemployment rates, in fact at the 3 percent level, society had to adapt to a crisis caused not by a shortage of demand or of liquidity, but by an invisible virus whose nature and mutations, as well as its targets and symptoms, are still imperfectly understood. This article argues that exploring the causes requires a systems approach and in-depth historical studies, as well as "running the numbers".

Article (full text available)

The objective of this article is to determine whether investment in innovation by the agri-food companies listed on the Mexican Stock Exchange is a driver of sustainable value creation. Strategies for the adoption of sustainability in agri-food companies were analyzed in light of the consequences of climate change; consumption trends were then examined as a contingency element for innovation; and finally, structural equation modeling was applied. Among the results, it was found that the companies do invest in innovation. The identification of "investment in innovation" as a critical factor provides evidence that, by designing long-term strategies for sustainable development, an organization can generate value in an integral way.

Article (full text available)

Purpose: The purpose of this study was to advance the knowledge of pharmaceutical supply chain (PSC) resilience using complex adaptive system (CAS) theory.

Design/methodology/approach: An exploratory research design, which adopted a qualitative approach, was used to achieve the study's research objective. Qualitative data were gathered through 23 semi-structured interviews with key supply chain actors across the PSC in the UK.

Findings: The findings demonstrate that CAS, as a theory, provides a systemic approach to understanding PSC resilience by taking into consideration the various elements (environment, PSC characteristics, vulnerabilities and resilience strategies) that make up the entire system. It also provides explanations for key findings, such as the impact of power, conflict and complexity in the PSC, which are influenced by the interactions between supply chain actors and as such increase its susceptibility to the negative impact of disruption. Furthermore, the antecedents for building resilience strategies were the outcome of the decision-making process referred to as co-evolution from a CAS perspective.

Originality/value: Based on the data collected, the study was able to reflect on the relationships, interactions and interfaces between actors in the PSC using CAS theory, which supports the proposition that resilience strategies can be adopted by supply chain actors to enhance this service supply chain. This is a novel empirical study of resilience across multiple levels of the PSC and as such adds valuable new knowledge about the phenomenon and the use of CAS theory as a vehicle for exploration and knowledge construction in other supply chains.
http://mathhelpforum.com/differential-equations/167174-heat-equation.html
1. ## Heat equation

If the boundary conditions are not derivative conditions (i.e. they fix the value of u — Dirichlet type), the solution is a sine Fourier series, and if they are derivative conditions (they fix u_x — Neumann type), it is a cosine Fourier series. Is this correct? Also, how do you know which case to use when lambda is less than, equal to, or greater than zero?

2. It is also possible to have both, i.e. one boundary is fixed while the other is insulated.
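For reference, here is a minimal sketch of the standard separation-of-variables argument for $u_t = k\,u_{xx}$ on $0 \le x \le L$ with homogeneous boundary conditions, which also answers the question about the sign of $\lambda$:

```latex
% Substituting u(x,t) = X(x)T(t) gives X'' + \lambda X = 0.
% Dirichlet (non-derivative) conditions u(0,t) = u(L,t) = 0 force X(0) = X(L) = 0;
% nontrivial solutions exist only for \lambda > 0 (for \lambda \le 0 the general
% solution cannot vanish at both ends), namely
\lambda_n = \left(\frac{n\pi}{L}\right)^2, \qquad X_n(x) = \sin\frac{n\pi x}{L},
\qquad u(x,t) = \sum_{n=1}^{\infty} b_n \, e^{-k(n\pi/L)^2 t} \sin\frac{n\pi x}{L}.
% Neumann (derivative) conditions u_x(0,t) = u_x(L,t) = 0 instead select cosines,
% now including the \lambda = 0 (constant) mode:
u(x,t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \, e^{-k(n\pi/L)^2 t} \cos\frac{n\pi x}{L}.
```

So in practice one checks each sign of $\lambda$ against the boundary conditions and keeps only the cases admitting nontrivial solutions.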
https://www.nature.com/articles/s41598-021-94070-2?error=cookies_not_supported&code=29a9f97f-d567-4045-a3ff-77c6fb667d19
# Skin tolerant inactivation of multiresistant pathogens using far-UVC LEDs

## Abstract

Multiresistant pathogens such as methicillin-resistant Staphylococcus aureus (MRSA) cause serious postoperative infections. A skin tolerant far-UVC (< 240 nm) irradiation system for their inactivation is presented here. It uses UVC LEDs in combination with a spectral filter and provides a peak wavelength of 233 nm, with a full width at half maximum of 12 nm, and an irradiance of 44 µW/cm². MRSA bacteria in different concentrations on blood agar plates were inactivated with irradiation doses in the range of 15–40 mJ/cm². Porcine skin irradiated with a dose of 40 mJ/cm² at 233 nm showed only 3.7% CPD and 2.3% 6-4PP DNA damage. Corresponding irradiation at 254 nm caused 15–30 times higher damage. Thus, the skin damage caused by the disinfectant doses is so small that it can be expected to be compensated by the skin's natural repair mechanisms. LED-based far-UVC lamps could therefore soon be used in everyday clinical practice to eradicate multiresistant pathogens directly on humans.

## Introduction

The increasing resistance of bacteria to antibiotics is one of the major challenges facing mankind in the area of global health [1]. Currently, about 700,000 patients worldwide die every year from an infection with multidrug resistant organisms (MROs). The trend is rising: the number is estimated at ten million for the year 2050 [2]. This means that more people would then die of MROs than of cancer. Furthermore, the current worldwide pandemic with the SARS-CoV-2 virus also calls for a method to eradicate viruses efficiently and sustainably.
This is preferably achieved by a physical method that non-selectively and irreversibly inactivates all microorganisms so that no resistance can be developed. Such a method must consequently attack the microorganism at a vital yet non-specific point. UVC radiation can inactivate microorganisms and viruses by triggering photochemical reactions in the DNA or RNA. As a result, they can no longer replicate and their potential pathogenic efficacy is thus inhibited [3,4,5,6]. This type of disinfection is currently used for drinking water [7], and also for cleaning surfaces [8] and air [9]. The required dose for the inactivation of microorganisms and viruses depends strongly on the individual species, their environment and the wavelength of the radiation [10,11,12,13,14,15,16,17]. Generally, the UV absorption spectrum of DNA and RNA exhibits a maximum at ~ 265 nm and a minimum around ~ 240 nm and increases again for shorter wavelengths. The main advantage of using far-UVC radiation (< 240 nm) instead of near-UVC radiation (250–280 nm) for inactivation is its lower penetration depth in the skin [18]. Far-UVC radiation is mainly absorbed in the uppermost, non-living cornified layer of the skin and potentially causes little damage to the living cells underneath, as previously shown in mice [19]. This gives rise to the vision of antisepsis of skin surfaces by direct UVC irradiation without serious damage to health. This vision is supported by numerous studies conducted in recent years on the skin tolerance of far-UVC radiation [20,21,22]. Investigations into possible in vivo antisepsis with far-UVC radiation have so far mostly been limited to the use of excimer lamps with Kr-Cl emitting at 222 nm, or with Kr-Br emitting at 207 nm [5]. In the clinical environment directly on humans they are of limited use: such lamps are bulky, fragile, emit considerable heat and are operated at high voltages. The same applies to the application of lasers [23].
Only recently, LEDs with a peak emission wavelength in the range of 230–240 nm and milliwatt output powers have been demonstrated for the first time [24,25], after detailed optimization of the design and manufacturing conditions [26,27,28,29]. In the long run, far-UVC LEDs should be superior to excimer lamps in many ways. Due to their compact design, low-voltage operation and flexibly adjustable wavelength, many antisepsis applications in humans could greatly benefit from, or become possible by, the use of far-UVC LEDs. This applies to wound antisepsis during surgery, the decolonization of methicillin-susceptible Staphylococcus aureus (MSSA) in the nasal vestibule as the main source of such infections [30], or the inactivation of corona viruses directly in their habitat of the mucous membrane in the throat [31]. However, in view of the strong absorption of far-UVC radiation, it is unclear to what extent it can reach microorganisms in such an environment at all. Corresponding experimental studies with far-UVC LEDs are, in any case, completely lacking so far. With this study we are taking the first step towards a practical medical application of far-UVC radiation on humans. For this purpose, we designed and manufactured a far-UVC irradiation system based on LEDs. A dense array of 120 LEDs, which is additionally equipped with a spectral filter, delivers spectrally pure far-UVC radiation with a peak wavelength of 233 nm and negligible power components at > 240 nm. The suitability of the irradiation system for in vivo antisepsis applications is demonstrated by the successful inactivation of methicillin-resistant Staphylococcus aureus (MRSA) and the proof that the radiation causes hardly any damage to porcine skin.

## Results

### Design and performance of the far-UVC LED irradiation system

The key components of the irradiation system are far-UVC LEDs, based on AlGaN semiconductor heterostructures grown by metalorganic vapor phase epitaxy on sapphire substrates (see Methods).
At an operating current of 100 mA and a heat sink temperature of 20 °C, the operation voltage of the LEDs is 13 V with a maximum emission power of (1.9 ± 0.3) mW. This corresponds to an external quantum efficiency of (0.36 ± 0.07) % and a wall-plug efficiency (WPE) of (0.14 ± 0.02) % (Fig. 1a). The LEDs have a moderately narrowband emission with a peak wavelength of 233 nm and a full width at half maximum of 12 nm (Fig. 1b). In addition, a weak shoulder around 300 nm and a small side peak at about 400 nm are observed, caused by deep level transitions at Mg defects [27]. The flip-chip packaged far-UVC LEDs typically have a viewing angle of 148°. The far field pattern exhibits four local maxima at 40° inclined to the chip normal, caused by nearly 50% of the light being emitted via the side surfaces of the LED chip [25]. To obtain more directional radiation, a specially designed silicon-based surface mounted device package [32] with an integrated aluminum reflector was used for the packaging of the LEDs (Fig. 1c). In addition, a Zener diode, for protection against damage from electrostatic discharge, is monolithically integrated in the base plate of the package by ion implantation. In this way, the distance between the reflector and the LED chip could be minimized and shadowing effects were avoided. The 54.7° angled reflector surfaces deflect the light emitted from the side surfaces of the LED chip towards the front, resulting in a smaller viewing angle of 110° and a twofold increase in the radiance along the surface normal (Fig. 1d). Despite huge improvements in recent years, the WPE, temperature sensitivity, spectral purity and lifetime of far-UVC LEDs are still inferior to those of near-UVC LEDs. In view of this, special measures have been taken regarding the cooling, radiation guidance and replaceability of the LEDs in the irradiation system, which is shown in Fig. 2(a) and (b).
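The quoted efficiency figures follow directly from the operating point: the WPE is optical power over electrical power, and the external quantum efficiency is emitted photons per injected electron. A quick sanity check with the numbers from the text (the script itself is illustrative, not from the paper):

```python
# Sanity check of the quoted LED efficiencies at 100 mA / 13 V / 1.9 mW.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

current = 0.100          # A
voltage = 13.0           # V
optical_power = 1.9e-3   # W
wavelength = 233e-9      # m

photon_energy = H * C / wavelength           # ~8.5e-19 J (~5.3 eV)
photon_rate = optical_power / photon_energy  # emitted photons per second
electron_rate = current / Q                  # injected electrons per second

eqe = photon_rate / electron_rate            # external quantum efficiency
wpe = optical_power / (current * voltage)    # wall-plug efficiency

print(f"EQE = {eqe * 100:.2f} %")  # ~0.36 %, matching (0.36 ± 0.07) %
print(f"WPE = {wpe * 100:.3f} %")  # ~0.146 %, within (0.14 ± 0.02) %
```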
Furthermore, excellent uniformity of the emission spectra and irradiance over the target area was considered to be of key importance for the irradiation studies of skin samples and surfaces doped with viruses or bacteria. 120 far-UVC LEDs on an area of 80 mm × 80 mm combined with a two-stage aluminum reflector (Fig. 2a and b) were incorporated in the irradiation system to obtain a uniform irradiance distribution over a target area of 70 mm × 70 mm at a distance of 25 mm from the system. Due to the large number of LEDs and the low WPE of the devices, approximately 150 W of heat is generated in the system during operation. It is known that the optical power of the far-UVC LEDs, operated at 100 mA, reduces to a quarter when the temperature is increased from 20 to 80 °C [25]. The heat generated during operation is not only detrimental to the performance of the LEDs but could also heat up the surrounding air, which in turn could damage the irradiated samples. To overcome this challenge, a water-cooled, copper-based heat sink, maintained at a constant temperature of 18 °C, was developed for the radiation unit. Using this cooling system, the temperature of the active region of the LEDs is maintained at 34 °C at the maximum operation current of 100 mA (see supplementary material). The stability of the LED emission is essential for the irradiation studies. In the case of the far-UVC LEDs, the emission power decreases very rapidly at the beginning [33]. At a constant current of 100 mA, corresponding to a nominal current density of 67 A/cm² in the active region of the device, the emission power of the LEDs decreases to 30% of its initial value within the first 100 h of operation [25]. Hence, to ensure stable operation of the irradiation system, the complete system was burned in at 100 mA for 48 h. After this burn-in, a < 0.17%/h change in the irradiance is expected.
Although the LEDs show a single peak emission at 233 nm and FWHM of 12 nm, the weak parasitic luminescence, at wavelengths > 240 nm, could potentially cause damage to the skin as it penetrates much deeper in the skin. A distributed Bragg reflector (DBR), based on quarter wavelength stacks of HfO2 and SiO2 layers, was designed to reflect the emission between 240 and 300 nm and transmit the peak emission wavelength at 233 nm (see Methods). The design was optimized for normal incidence with nominal transmission values of 50% and 0.1% at 235 nm and 244 nm, respectively. At normal incidence, the filter shows a transmission of over 60% around 230 nm and below 0.05% above 245 nm (Fig. 2c). However, the LEDs exhibit a wide far-field of the radiated power with significant contributions emitted at angles > 30° inclined to the chip normal (Fig. 1d). Since the DBR stop band shifts to shorter wavelength with increasing angle of incidence, e.g. by 15 nm for 40°, the limitation to normal incidence considerably underestimates the real reduction of the total optical power by the filter. Calculations based on the far-field pattern, LED spectra and DBR transmission at different angles suggest a reduction of the total optical power down to 20–30%. The emission spectra of the irradiation system, measured across the target area, are uniform both without and with the DBR filter (Fig. 2d). When using the filter, the long wavelength part of the spectra is cut off. The spectral power density at 240 nm is only 10% of that at 233 nm and decreases further at longer wavelengths, while leaving the peak wavelength almost unchanged. A weak parasitic luminescence (three orders of magnitude lower than the peak intensity) can be observed between 360 and 450 nm both with or without the use of the filter. In addition, an excellent uniformity is obtained for the irradiance distribution across the target area (Fig. 2e and f). 
Uniformity factors of 93% and 90% are obtained without and with the DBR filter, respectively. The irradiance decreases nearly linearly when increasing the distance from the system from 3 to 75 mm without negatively affecting the uniformity factor. The mean value of the irradiance at a distance of 25 mm from the system is 170 µW/cm² and reduces to 36 µW/cm² when the filter is introduced.

### Inactivation of multiresistant pathogens and skin tolerance investigations

MRSA was chosen as a model organism for the inactivation studies using far-UVC radiation because, although there is a great clinical need to inactivate it, the process of doing so is difficult. A spot test as qualitative verification of bacterial inactivation was performed on Columbia blood agar plates using Staphylococcus aureus (DSM 11822). The far-UVC irradiation system, with the filter, was used with an irradiance of 44 µW/cm². In addition, UVC radiation from a low-pressure mercury gas-discharge lamp emitting at a peak wavelength of 254 nm with an irradiance of 50 µW/cm² was applied as a positive control. As can be seen in Fig. 3, bacterial growth is visible on control plates that were not irradiated, while growth was inhibited on irradiated plates. More precisely, at 254 nm a UV dose of 0.5 mJ/cm² is sufficient for a visible reduction of bacterial regrowth at small bacteria concentrations (approximately 3 lg-levels). A nearly complete inactivation of the bacteria with a reduction factor of about 6 lg-levels was achieved by a UV dose of 1.5 mJ/cm². At 233 nm, an approximately 20-fold higher UV dose of 11 mJ/cm² to 16 mJ/cm² is effective in inhibiting growth of bacteria at lower concentrations (2 × 10³ cfu/spot to 2 × 10⁴ cfu/spot), displaying reduction factors of 3–4 lg-levels. Higher bacteria concentrations were nearly completely inactivated by applying 233 nm UV doses of 27 mJ/cm² to 40 mJ/cm² (4–6 lg-levels).
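The "lg-levels" above are decadic log reduction factors of the colony counts. As a minimal sketch (the post-irradiation count of 2 cfu is a hypothetical illustration, not a value from the text):

```python
import math

def lg_reduction(cfu_before: float, cfu_after: float) -> float:
    """Reduction factor in lg-levels: the decadic log of the cfu ratio."""
    return math.log10(cfu_before / cfu_after)

# Reducing a 2e6 cfu/spot inoculum to ~2 surviving cfu would correspond to
print(lg_reduction(2e6, 2))  # 6.0 lg-levels
```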
It is commonly known that UVC radiation with wavelengths < 280 nm is very harmful to human skin [34]. However, the shorter the wavelength of the UV radiation, the higher the absorption in the stratum corneum. Consequently, less radiation reaches the cell nuclei of the keratinocytes in the viable epidermis and DNA damage in the skin is reduced. To prove this assumption, the far-UVC irradiation system was tested, in comparison to a low-pressure mercury gas-discharge lamp emitting at a wavelength of 254 nm with an irradiance of 50 µW/cm² as a positive control, on porcine skin, which is a suitable model for human skin [35]. The far-UVC irradiation experiments were performed with and without the filter using an irradiation dose of 40 mJ/cm² (46 µW/cm² for about 14.5 min with the filter and 178 µW/cm² for 3.75 min without it), which is able to inactivate MRSA as explained above. Figure 4(a) shows the results of the skin damage investigations. The cyclobutane pyrimidine dimer (CPD) and pyrimidine (6–4) pyrimidone photoproduct (6-4PP) damage is presented as the percentage of epidermal cells with DNA damage relative to the total number of cells in the microscopic images. CPD stained histologic images of one such sample can be seen in Fig. 4(b–e). The 254 nm UV radiation led to a high amount of (58 ± 11) % CPD and (69 ± 12) % 6-4PP DNA damage. 6-4PP stained histologic images are shown in the supplementary material (Fig. S1). In comparison, untreated skin did not show any damage. Irradiation with 233 nm UV radiation without the filter induced (18.0 ± 3.5) % CPD damage and (13.8 ± 1.9) % 6-4PP damage. When using the filter, these numbers are further reduced to (3.7 ± 1.5) % CPD damage and (2.3 ± 0.8) % 6-4PP damage. These results show that far-UVC radiation at 233 nm strongly reduces the DNA damage compared to near-UVC radiation at 254 nm, which is usually used for the decontamination of surfaces.
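The exposure times quoted above follow from dose = irradiance × time; a quick check of the numbers in the text (the helper function is illustrative):

```python
def exposure_time_s(dose_mj_per_cm2: float, irradiance_uw_per_cm2: float) -> float:
    """Time [s] needed to accumulate a UV dose at a given irradiance.

    dose [mJ/cm^2] = irradiance [uW/cm^2] * time [s] / 1000
    """
    return dose_mj_per_cm2 * 1000.0 / irradiance_uw_per_cm2

# 40 mJ/cm^2 with the DBR filter (46 uW/cm^2) vs. without it (178 uW/cm^2):
print(exposure_time_s(40, 46) / 60)   # ~14.5 min
print(exposure_time_s(40, 178) / 60)  # ~3.75 min
```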
The filter, which significantly suppresses spectral contributions of the LEDs at wavelengths above 240 nm, further reduces the DNA damage by a factor of 5. An important fact is that, unlike with near-UVC radiation at 254 nm, the DNA damage caused by far-UVC radiation at 233 nm appears only superficially below the horny layer of the skin (stratum corneum, SC), in the upper half of the epidermis, and does not reach the sensitive basal cells (Fig. 4b). The CPD damage induced by 254 nm irradiation occurs throughout the entire epidermis. The basal cells in particular must be protected from damage, as the renewal process of the epidermis starts from there every 28 days. The studies will be extended to human skin in the future. Human skin includes melanin and could have a different SC thickness compared to porcine skin. A temperature increase of the skin during irradiation was observed neither at 233 nm nor at 254 nm.

## Discussion

The LED-based far-UVC irradiation system has been demonstrated to be capable of inactivating multiresistant microorganisms without causing significant skin damage. MRSA as a model organism for MROs in concentrations of 2 × 10³ to 2 × 10⁶ cfu on blood agar plates was inactivated within 6 to 15 min. These times are sufficiently short to allow the system to be used in a clinical setting. The observed difference in MRSA inactivation effectiveness between UVC radiation at 233 nm and at 254 nm might be due to the difference in the absorbance of UV radiation by DNA depending on the wavelength [36]. Another factor that influences the efficacy is that on agar plates the bacterial suspension partially permeates into the agar. However, the shorter the wavelength of the radiation, the shorter its penetration depth. Therefore, the radiation might not reach bacteria hidden in deeper areas of the agar.
Additionally, these bacteria are then shielded by sheep erythrocytes, which absorb the radiation with their proteins and DNA [37]. Hence, a higher dose is most likely needed for complete inactivation of the bacteria when using radiation at 233 nm instead of 254 nm. In addition, similar investigations on another strain of MRSA, Staphylococcus aureus DSM 18827, showed complete inactivation with a 233 nm UV dose of 40 mJ/cm². These results on MRSA inactivation on blood agar plates using far-UVC radiation emitted by LEDs are very promising. However, further in vitro experiments have to be performed before a far-UVC LED irradiation system should be used for human skin antisepsis. In particular, tests on carrier discs are necessary, where the use of different soil loads, such as urea, proteins (albumin) or fatty acids, is possible and no permeation will take place. This allows a better imitation of the conditions on human skin, to exclude any influence of the environment on the antibacterial effectiveness. The applied far-UVC radiation doses are skin compatible: at a radiation duration of 14.5 min, damage to porcine skin was only 3.7% (CPD) and 2.3% (6-4PP), respectively. This is a factor of 15 to 30 less damage than the application of an equivalent near-UVC dose at 254 nm, and actually so low that apoptosis [38] and the natural enzymatic repair mechanisms of the skin [39] can compensate for the induced damage. This has been shown using 254 nm irradiation on hairless mice, where the initial 37% CPD damage decreased to 13% within 24 h [19]. The dynamics with which the repair mechanisms occur, and whether a threshold of CPD damage is necessary to trigger such processes, as suggested for UVA irradiation [40], will be the subject of future investigations, in which DNA damage needs to be determined at varying time points after irradiation. The CPD damage is of comparable magnitude to recently published data using 222 nm irradiation.
Narita et al. [41] found no CPD damage in hairless mice after applying a dose of 150 mJ/cm². In a recent in vivo study on human skin, Fukui et al. [21] found a slight but significantly increased amount of CPD damage compared to non-irradiated skin, determined by an ELISA assay. Because different skin models were used, a direct comparison between 233 nm and 222 nm radiation will have to be made in future studies on more viable, metabolically active skin models. The basic principle of skin-safe far-UVC antisepsis has been reported before [17,18,19,20,21,22]. In this respect, the results of this work confirm the previous findings. However, all previous work was carried out using excimer lamps, whereas far-UVC LEDs, with a different wavelength, were used here for the first time. Only far-UVC LEDs might enable the inactivation of MROs in clinically relevant contexts, such as the decolonization of the nasal vestibule. In this work, the feasibility of such an approach has been successfully demonstrated. Nevertheless, the development of appropriate LED irradiation systems is still in its infancy. Shorter irradiation times are desirable, which requires more efficient and longer-lasting LEDs. This primarily entails optimization of the LED itself: in the design and fabrication of the semiconductor layer structure and the chip, as well as in the packaging. The necessary performance improvements of the far-UVC LEDs can be expected, considering the success story of blue LEDs in the lighting sector, but also the progress made for near-UVC LEDs over the last few years. We are therefore convinced that in a few years far-UVC LEDs will outperform their lamp competitors in certain aspects and make their way into clinical in vivo antisepsis in humans. This will be a major step towards solving global health problems due to MROs.
## Methods

### Growth of the far-UVC LED heterostructures

A 720 nm thick AlN base layer was grown in an 11 × 2″ Aixtron Aix2400G3-HT planetary metal–organic vapor phase epitaxy (MOVPE) reactor on 2″ diameter c-plane sapphire substrates (offcut 0.1° towards an m-direction) using trimethylaluminum and ammonia as sources and hydrogen as carrier gas. The growth pressure, temperature and growth rate were 50 hPa, 1180 °C and 1.5 µm/h, respectively [42]. During growth, the V/III ratio was reduced from 450 to 30 under a constant TMAl flow of 350 µmol/min. Parallel ridges of 2.0 µm width, 3.5 µm pitch and 1.3 µm height were processed into these AlN/sapphire templates by means of photolithography and inductively coupled plasma (ICP) etching. For this, a hard mask of 1 µm thick SiNx deposited by plasma-enhanced chemical vapor deposition and patterned by SF6-based ICP etching was used. The mask was then transferred into the AlN/sapphire templates by BCl3-based ICP etching. The patterned wafers were overgrown (ELO, epitaxial lateral overgrowth) with 5.2 µm AlN in the MOVPE reactor as described above [43]. The UV LED heterostructure was grown in a 3 × 2″ close-coupled showerhead reactor using trimethylaluminum, trimethylgallium, triethylgallium, silane, bis-cyclopentadienyl magnesium and ammonia as precursors as well as hydrogen and nitrogen as carrier gases.
The LED heterostructure consists of a 400 nm AlN buffer layer, a 25 nm transition layer from AlN to Al0.87Ga0.13N, 100 nm undoped Al0.87Ga0.13N, a 1.2 µm Al0.87Ga0.13N:Si current spreading layer optimized for high transparency and low resistivity, a 40 nm thick Al0.83Ga0.17N:Si first barrier, a threefold Al0.72Ga0.28N(1 nm)/Al0.83Ga0.17N:Si(5 nm, 2.5 nm central delta Si-doping) multiple quantum well active region, a 6 nm AlN electron blocking layer to minimize electron leakage but facilitate hole injection, a 14-fold Al0.8Ga0.2N:Mg(0.9 nm)/Al0.7Ga0.3N:Mg(0.9 nm) short-period superlattice, a 15-fold Al0.37Ga0.63N:Mg(2.5 nm)/Al0.2Ga0.8N:Mg(2.5 nm) superlattice, and a 40 nm GaN:Mg contact layer. The Mg-doped layers were activated by thermal annealing in the MOVPE reactor at 830 °C under nitrogen atmosphere for 25 min and subsequently in an oven at 600 °C under oxygen for 20 min.

### Processing of the far-UVC LEDs

The chip processing sequence for the fabrication of the n-contacts involved the exposure of the buried n-type current spreading layer by ICP etching using a pure Cl2 chemistry, the thermal deposition of a V(15 nm)/Al(90 nm)/Ni(20 nm)/Au(30 nm) metal stack and its annealing at 800 °C under nitrogen for 40 s. The p-contact was fabricated by electron-beam deposition of Pt(30 nm), which was annealed at 500 °C under nitrogen for 5 min. The metal contacts were enhanced by electron-beam deposition of a Ti(30 nm)/Pt(40 nm)/Au(500 nm)/Ti(30 nm) metal stack. 600 nm SiNx, deposited by plasma-enhanced chemical vapor deposition, was used for passivation, and its structuring was achieved by inductively coupled plasma etching using CHF3. Finally, the bonding pads, consisting of a Ti(30 nm)/Pt(40 nm)/Au(475 nm)/Ti(30 nm)/Pt(120 nm)/Au(300 nm) metal stack, were deposited using electron-beam deposition. The wafers were diced into individual 0.66 mm × 1.06 mm dies using an internally focused laser beam scriber at a wavelength of 533 nm and a chip breaker.
The LED chip design consists of interdigitated p- and n-contact fingers with a total emitting area of ~ 0.15 mm².

### Packaging of the far-UVC LED chips

The far-UVC LED chips were flip-chip mounted on 3.5 mm × 3.5 mm surface-mounted device (SMD) Si-based packages (developed by CiS Forschungsinstitut für Mikrosensorik GmbH) with a thermal conductivity of 150 W/(m·K) using AuSn solder paste [32]. The thickness of the Au–Sn layer was determined to be 15 µm.

### Electro-optical characterization of the far-UVC LEDs

The emission spectrum and optical power of the far-UVC LEDs were measured using a radiometrically and wavelength calibrated compact spectrometer (StellarNet EPP2000-UV–VIS) and a calibrated UV-enhanced Si photodiode (Hamamatsu S2281-01) with an active area of 1 cm², respectively. The far-field emission distribution of the far-UVC LEDs was determined by measuring the radiant intensity in angular steps of 5° on an automated two-axes rotation stage using a calibrated UV-enhanced Si photodiode (Thorlabs FDS010) with an active area of 0.8 mm² at a distance of 37 mm from the LED. In addition, the total optical power of the far-UVC LEDs was determined by spatial integration of the far-field emission distribution over the entire forward hemisphere. All measurements were performed under continuous-wave (cw) operation at room temperature without active cooling.

### Optical simulations of far-UVC LEDs and irradiation system

The required ray file of the far-UVC LED for the optical simulation of the irradiation system was generated using a self-developed Monte-Carlo ray-tracing simulation program based on Mathematica 10 and PureBasic 5.71. The ray-tracing simulations consider the complex refractive indices of AlGaN [44,45,46], sapphire [47] and contact metals [48] at 233 nm, as well as the absorption within the multiple quantum well region (10³ cm⁻¹) [49], the n-AlGaN (50 cm⁻¹) and p-(Al)GaN (1.7 × 10⁵ cm⁻¹) heterostructure layers [50].
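The hemispherical integration used to obtain the total optical power can be sketched numerically. The following Python snippet is an illustrative sketch, not the authors' code: it assumes an azimuthally symmetric, ideal Lambertian far-field profile with an arbitrary on-axis intensity, sampled in the same 5° steps as the goniometer measurement.

```python
import numpy as np

# Illustrative sketch (not the authors' code): total optical power from a
# far-field radiant-intensity profile I(theta) [W/sr], sampled in 5-degree
# steps as in the goniometer measurement, assuming azimuthal symmetry.
theta = np.radians(np.arange(0, 91, 5))   # polar angles 0..90 deg

I0 = 1.0e-3                               # assumed on-axis intensity in W/sr
intensity = I0 * np.cos(theta)            # assumed Lambertian profile

# P = 2*pi * integral of I(theta)*sin(theta) dtheta over the forward hemisphere,
# evaluated with the trapezoidal rule
f = intensity * np.sin(theta)
power = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

print(power)  # for a Lambertian emitter the closed form is P = pi * I0
```

For a Lambertian source the numerical result converges to the closed-form value P = π·I0; a measured, non-Lambertian profile would simply replace the `intensity` array.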
In addition, the radiation pattern of the active region was determined by the degree of polarization ((TE − TM)/(TE + TM) = −0.4) [51]. The light scattering at the ELO AlN/sapphire interface and the rough sapphire sidewalls of the chip, as well as the reflection at the Al reflector of the Si-based SMD package, were taken into account [32,51]. Surface roughness was described by the distribution of the corresponding microscopic slope angles. Ray-tracing simulations of the far-UVC irradiation module using the software ZEMAX-EE [52] were used to optimize the irradiance uniformity and to achieve a maximum irradiance. The model consists of 120 LEDs with an individual optical power of 0.5 mW. The reflector areas (slanted and vertical) were defined with a reflectance of 70%, as measured for the same aluminum-based reflector material (MIRO, Alanod GmbH & Co. KG, Ennepetal) that was used in the far-UVC irradiation system. The fully absorbing target area with a size of 100 mm × 100 mm was placed at a distance of 50 mm from the lower edge of the reflectors.

### Simulation and fabrication of the DBR filter

The DBR filter consists of 18 pairs of HfO2 and SiO2 layers deposited on a 1 mm thick and 100 mm × 100 mm large fused silica substrate. The optimal layer thicknesses were determined by simulations using self-written software based on the transfer-matrix method. The individual layer thicknesses were varied slightly at random to suppress high-order interference oscillations around 233 nm. The dielectric layers were deposited using a "Syrus 710 pro" from Bühler with two electron-beam evaporators and ion support (APS pro). The reduction of the total emission power of the far-UVC LEDs by the DBR filter was estimated using the measured far-field emission distribution together with the angle-dependent transmission spectrum of the DBR filter.
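The transfer-matrix method mentioned above can be sketched as follows. This is a minimal illustration, not the actual filter design: the HfO2 and SiO2 refractive indices at 233 nm are assumed round numbers, and a plain quarter-wave stack is used instead of the randomized layer thicknesses described in the text. As a sanity check, the sketch reproduces near-unity reflectance at the design wavelength and the bare Fresnel reflectance of the substrate when no layers are present.

```python
import numpy as np

def stack_reflectance(indices, thicknesses, n_in, n_out, wavelength):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix (transfer-matrix) formulation."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(indices, thicknesses):
        delta = 2 * np.pi * n * d / wavelength          # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)                 # amplitude reflection coefficient
    return abs(r) ** 2

# Hypothetical 18-pair quarter-wave HfO2/SiO2 stack centered at 233 nm;
# n_h and n_l are assumed values, not measured data.
lam0 = 233e-9
n_h, n_l = 2.2, 1.5
indices = [n_h, n_l] * 18
thicknesses = [lam0 / (4 * n_h), lam0 / (4 * n_l)] * 18

R = stack_reflectance(indices, thicknesses, 1.0, 1.5, lam0)  # air in, fused silica out
print(R)  # close to 1 at the design wavelength
```

The actual filter inverts this behavior around 233 nm (high transmission there, suppression above 240 nm) by optimizing and slightly randomizing the layer thicknesses; the matrix formalism, however, is the same.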
### Electro-optical characterization of the far-UVC irradiation system

The irradiance distribution of the system was measured with a radiometrically calibrated photodetector (UV-Surface-USB, Sglux GmbH). The emission spectrum was measured with a spectrally calibrated compact UV–VIS spectrometer (USB4000, Ocean Optics, Inc.). Both detectors were attached to a motorized xyz-stage to scan the radiation at different positions and distances.

### Investigation of MRSA inactivation by UVC irradiation

Methicillin-resistant strains of the Gram-positive bacterium Staphylococcus aureus (DSM 11822 and DSM 18827) were used to test the antimicrobial efficacy of far-UVC radiation using a qualitative spot test. First, cryopreserved bacteria were inoculated on Columbia blood agar (Becton Dickinson GmbH, Heidelberg, Germany) and incubated at 37 °C for 24 h. For the second subculture, one colony was plated on tryptic soy agar (TSA) and incubated again at 37 °C for 24 h. Next, the microorganisms were harvested by rinsing the agar plate with 2 ml phosphate-buffered saline (PBS). The bacteria were pelleted by centrifugation for 1 min at 2500 × g and washed 3 times with 1 ml PBS. Each washing step was followed by centrifugation for 1 min at 2500 × g. The resulting pellet was resuspended in CASO bouillon, and a final suspension of 1–3 × 10⁸ colony forming units (cfu)/ml was produced by adjusting the optical density (OD) at 620 nm to 0.1–0.15. Afterwards, a dilution series in CASO bouillon was produced. Four different dilutions of the bacterial suspension (1 × 10⁸, 1 × 10⁷, 1 × 10⁶ and 1 × 10⁵ cfu/ml) were used for the spot tests: 20 µl of the suspension were dropped onto the agar plate and dried for 30 min at room temperature in a laminar airflow chamber, resulting in 2 × 10³–2 × 10⁶ cfu/spot.
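The spot-test arithmetic above can be checked in a few lines (a sketch, using only the concentrations and drop volume quoted in the text):

```python
# Consistency check of the spot-test arithmetic from the text: a 20 ul drop
# of each dilution yields the stated number of cfu per spot.
volume_ul = 20
for conc_cfu_per_ml in (1e8, 1e7, 1e6, 1e5):
    cfu_per_spot = conc_cfu_per_ml * volume_ul / 1000.0   # ul -> ml conversion
    print(f"{conc_cfu_per_ml:.0e} cfu/ml -> {cfu_per_spot:.0e} cfu/spot")
```

This reproduces the stated range of 2 × 10³ to 2 × 10⁶ cfu per spot.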
DIN 14561, for testing chemical disinfectants and antiseptics in the medical area using the quantitative carrier test, defines a reduction factor of five lg levels (99.999%) as necessary for an adequate reduction of microorganisms. Therefore, bacterial suspensions with different concentrations were used to ensure the detection of the required reduction factor. The bactericidal reduction factor (RF) can be calculated according to the following equation, where nc is the number of cfu on the control specimen without irradiation and np is the number of cfu on the irradiated test specimen: $$RF = \log_{10}(n_c) - \log_{10}(n_p).$$ An exact calculation is not possible with the presented spot test; however, an estimation of the lg levels is possible, since the initial number of bacteria in each spot was quantified. The irradiance of 44 µW/cm² of the 233 nm far-UVC irradiation system equipped with the DBR filter was checked before the experiments, with the system in a steady state, using the UV radiometer SXL55 with a SiC UVC sensor (Sglux GmbH). The same procedure was conducted with a 254 nm mercury gas-discharge lamp (Sglux GmbH) used as positive control. The 254 nm emitter was dimmed to 50 µW/cm². The agar plates were irradiated for different durations (10 s–15 min), resulting in irradiation doses of 0.5–40 mJ/cm². After irradiation, the plates were incubated at 37 °C for 24 h for the growth of viable bacteria.

### Investigation of skin tolerance to UVC irradiation

Porcine ears were freshly obtained from a local butcher and the experiments were performed within 48 h after slaughter. Hair was removed from the skin using scissors, without damaging the stratum corneum, and the skin was dermatomized to 300 µm thickness. For the UV irradiation experiments, sections of 2 cm × 2 cm were placed on a wet paper tissue to prevent them from drying out. The irradiation experiments were performed under standardized laboratory conditions at 21 °C.
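The dose and reduction-factor arithmetic from the disinfection test above can be sketched as follows. The 44 µW/cm² irradiance and the exposure times are taken from the text; the cfu counts in the RF example are hypothetical illustration values.

```python
import math

def dose_mJ_per_cm2(irradiance_uW_per_cm2, seconds):
    """Radiant exposure H = E * t; uW/cm^2 times s gives uJ/cm^2, i.e. 1/1000 mJ/cm^2."""
    return irradiance_uW_per_cm2 * seconds / 1000.0

def reduction_factor(n_control, n_irradiated):
    """RF = log10(nc) - log10(np); five lg levels correspond to 99.999 % reduction."""
    return math.log10(n_control) - math.log10(n_irradiated)

# 44 uW/cm^2 for 15 min, as used with the filtered 233 nm system:
print(dose_mJ_per_cm2(44.0, 15 * 60))   # ~39.6 mJ/cm^2, the upper end of the 0.5-40 mJ/cm^2 range

# Hypothetical counts: 2e6 cfu on the control, 20 cfu after irradiation:
print(reduction_factor(2e6, 20))        # 5.0, i.e. the required five lg levels
```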
After irradiation, 4 mm punch biopsies were taken and fixed in neutral buffered 4% formalin solution (Sigma # HT501128-4L). Subsequently, the formalin-fixed samples were dehydrated and embedded in paraffin (Histosec, Merck Millipore) overnight. Paraffin blocks were prepared for each sample, and 1–2 µm thick sections were cut from these blocks. Sections were dewaxed and histochemically stained with hematoxylin (Merck Millipore) and eosin (Sigma Aldrich) for the evaluation of histomorphology, or subjected to a heat-induced epitope retrieval step prior to immunohistochemistry. DNA damage was detected immunohistochemically via the expression of cyclobutane pyrimidine dimers (CPD) and pyrimidine (6-4) pyrimidone photoproducts (6-4PP) (see Fig. 4b-e). For this purpose, sections were incubated with either anti-6-4PP (clone 64M-2, Cosmo Bio) or anti-CPD (clone TDM-2, Cosmo Bio). The Dako REAL Detection System, Alkaline Phosphatase/RED, Rabbit/Mouse (Agilent Technologies) was employed for the detection of 6-4PP and CPD. Nuclei were counterstained with hematoxylin (Merck Millipore) and the slides were coverslipped with Kaiser's glycerol gelatine (Merck Millipore). Negative controls were performed by omitting the primary antibody. Histologic images of the stained skin sections were acquired using an AxioImager Z1 microscope (Carl Zeiss MicroImaging, Inc.). The percentage of positively stained cells within the epidermis was evaluated in a blinded manner.

## References

1. Political declaration on antimicrobial resistance, United Nations General Assembly, 71st session, Sept 22, 2016, Agenda item 127, A/71/L.2.
2. O'Neill, J. Antimicrobial resistance: Tackling a crisis for the health and wealth of nations (Review on Antimicrobial Resistance, London, 2014). https://wellcomecollection.org/works/rdpck35v
3. Coohill, T. P. & Sagripanti, J.-L.
Overview of the inactivation by 254 nm ultraviolet radiation of bacteria with particular relevance to biodefense. Photochem. Photobiol. 84, 1084–1090. https://doi.org/10.1111/j.1751-1097.2008.00387.x (2008).
4. Dai, T., Vrahas, M. S., Murray, C. K. & Hamblin, M. R. Ultraviolet C irradiation: an alternative antimicrobial approach to localized infections?. Expert Rev. Anti-Infect. Ther. 10, 185–195. https://doi.org/10.1586/eri.11.166 (2012).
5. Buonanno, M., Welch, D., Shuryak, I. & Brenner, D. J. Far-UVC light (222 nm) efficiently and safely inactivates airborne human coronaviruses. Sci. Rep. 10, 10285. https://doi.org/10.1038/s41598-020-67211-2 (2020).
6. Heßling, M., Hönes, K., Vatter, P. & Lingenfelder, C. Ultraviolet irradiation doses for coronavirus inactivation – review and analysis of coronavirus photoinactivation studies. GMS Hyg. Infect. Control 15, 343. https://doi.org/10.3205/dgkh000343 (2020).
7. Hijnen, W. A. M., Beerendonk, E. F. & Medema, G. J. Inactivation credit of UV radiation for viruses, bacteria and protozoan (oo)cysts in water: A review. Water Res. 40, 3–22. https://doi.org/10.1016/j.watres.2005.10.030 (2006).
8. Anderson, D. J. et al. Decontamination of targeted pathogens from patient rooms using an automated ultraviolet-C-emitting device. Infect. Control Hosp. Epidemiol. 34, 466–471. https://doi.org/10.1086/670215 (2013).
9. Kujundzic, E., Hernandez, M. & Miller, S. L. Ultraviolet germicidal irradiation inactivation of airborne fungal spores and bacteria in upper-room air and HVAC in-duct configurations. J. Environ. Eng. Sci. 6, 1–9. https://doi.org/10.1139/s06-039 (2007).
10. Chang, J. C. H. et al. UV inactivation of pathogenic and indicator microorganisms. Appl. Environ. Microbiol. 49, 1361–1365 (1985).
11. Inagaki, H., Saito, A., Sugiyama, H., Okabayashi, T. & Fujimoto, S. Rapid inactivation of SARS-CoV-2 with deep-UV LED irradiation. Emerg. Microbes Infect. 9, 1744–1747.
https://doi.org/10.1080/22221751.2020.1796529 (2020).
12. Darnell, M. E. R., Subbarao, K., Feinstone, S. M. & Taylor, D. R. Inactivation of the coronavirus that induces severe acute respiratory syndrome, SARS-CoV. J. Virol. Methods 121, 85–91. https://doi.org/10.1016/j.jviromet.2004.06.006 (2004).
13. Eickmann, M. et al. Inactivation of three emerging viruses – severe acute respiratory syndrome coronavirus, Crimean-Congo haemorrhagic fever virus and Nipah virus – in platelet concentrates by ultraviolet C light and in plasma by methylene blue plus visible light. Vox Sang. 115, 146–151. https://doi.org/10.1111/vox.12888 (2020).
14. Tseng, C.-C. & Li, C.-S. Inactivation of virus-containing aerosols by ultraviolet germicidal irradiation. Aerosol Sci. Technol. 39, 1136–1142. https://doi.org/10.1080/02786820500428575 (2005).
15. Jagger, J. Introduction to Research in Ultraviolet Photobiology (Prentice-Hall, 1967).
16. Rauth, A. M. The physical state of viral nucleic acid and sensitivity of viruses to ultraviolet light. Biophys. J. 5, 257–273. https://doi.org/10.1016/s0006-3495(65)86715-7 (1965).
17. Buonanno, M. et al. Germicidal efficacy and mammalian skin safety of 222-nm UV light. Radiat. Res. 187, 493–501. https://doi.org/10.1667/RR0010CC.1 (2017).
18. Buonanno, M. et al. 207-nm UV light – a promising tool for safe low-cost reduction of surgical site infections: In vitro studies. PLoS ONE 8, e76968. https://doi.org/10.1371/journal.pone.0076968 (2013).
19. Narita, K., Asano, K., Morimoto, Y., Igarashi, T. & Nakane, A. Chronic irradiation with 222-nm UVC light induces neither DNA damage nor epidermal lesions in mouse skin, even at high doses. PLoS ONE 13, e0201259. https://doi.org/10.1371/journal.pone.0201259 (2018).
20. Barnard, I. R. M., Eadie, E. & Wood, K. Further evidence that far-UVC for disinfection is unlikely to cause erythema or pre-mutagenic DNA lesions in skin. Photodermatol. Photoimmunol. Photomed. 36, 476.
https://doi.org/10.1111/phpp.12580 (2020).
21. Fukui, T. et al. Exploratory clinical trial on the safety and bactericidal effect of 222-nm ultraviolet C irradiation in healthy humans. PLoS ONE 15, e0235948. https://doi.org/10.1371/journal.pone.0235948 (2020).
22. Hickerson, R. P. et al. Minimal, superficial DNA damage in human skin from filtered far-ultraviolet C. Br. J. Dermatol. https://doi.org/10.1111/bjd.19816 (2021).
23. Welch, D. et al. Effect of far ultraviolet light emitted from an optical diffuser on methicillin-resistant Staphylococcus aureus in vitro. PLoS ONE 13, e0202275. https://doi.org/10.1371/journal.pone.0202275 (2018).
24. Yoshikawa, A. et al. Improve efficiency and long lifetime UVC LEDs with wavelengths between 230 and 237 nm. Appl. Phys. Express 13, 022001. https://doi.org/10.35848/1882-0786/ab65fb (2020).
25. Lobo-Ploch, N. et al. Milliwatt power 233 nm AlGaN-based deep UV-LEDs on sapphire substrates. Appl. Phys. Lett. 117, 111102. https://doi.org/10.1063/5.0015263 (2020).
26. Mehnke, F. et al. Highly conductive n-AlxGa1-xN layers with aluminum mole fractions above 80%. Appl. Phys. Lett. 103, 212109. https://doi.org/10.1063/1.4833247 (2013).
27. Mehnke, F. et al. Efficient charge carrier injection into sub-250 nm AlGaN multiple quantum well light emitting diodes. Appl. Phys. Lett. 105, 051113. https://doi.org/10.1063/1.4892883 (2014).
28. Mehnke, F., Sulmoni, L., Guttmann, M., Wernicke, T. & Kneissl, M. Influence of light absorption on the performance characteristics of UV LEDs with emission between 239 nm and 217 nm. Appl. Phys. Express 12, 012008. https://doi.org/10.7567/1882-0786/aaf788 (2019).
29. Sulmoni, L. et al. Electrical properties and microstructure formation of V/Al-based n-contacts on high Al mole fraction n-AlGaN layers. Photon. Res. 8, 1381–1387. https://doi.org/10.1364/PRJ.391075 (2020).
30. Ruscher, C.
Empfehlungen zur Prävention und Kontrolle von Methicillin-resistenten Staphylococcus aureus Stämmen (MRSA) in medizinischen und pflegerischen Einrichtungen [Recommendations for the prevention and control of methicillin-resistant Staphylococcus aureus strains (MRSA) in medical and nursing facilities]. Bundesgesundheitsbl. 57, 695–732. https://doi.org/10.1007/s00103-014-1980-x (2014).
31. Wölfel, R. et al. Virological assessment of hospitalized patients with COVID-2019. Nature 581, 465–469. https://doi.org/10.1038/s41586-020-2196-x (2020).
32. Käpplinger, I. et al. An innovative Si package for high-performance UV LEDs. Proc. SPIE 10940, 109400A. https://doi.org/10.1117/12.2509395 (2019).
33. Glaab, J. et al. Degradation behavior of AlGaN-based 233 nm deep-ultraviolet light emitting diodes. Semicond. Sci. Technol. 33, 095017. https://doi.org/10.1088/1361-6641/aad765 (2018).
34. Young, A. R. et al. The similarity of action spectra for thymine dimers in human epidermis and erythema suggests that DNA is the chromophore for erythema. J. Invest. Dermatol. 111, 982–988. https://doi.org/10.1046/j.1523-1747.1998.00436.x (1998).
35. Jacobi, U. et al. Porcine ear skin: An in vitro model for human skin. Skin Res. Technol. 13, 19–24. https://doi.org/10.1111/j.1600-0846.2006.00179.x (2007).
36. Chen, R. Z., Craik, S. A. & Bolton, J. R. Comparison of the action spectra and relative DNA absorbance spectra of microorganisms: Information important for the determination of germicidal fluence (UV dose) in an ultraviolet disinfection of water. Water Res. 43, 5087–5096. https://doi.org/10.1016/j.watres.2009.08.032 (2009).
37. Gupta, A., Avci, P., Dai, T., Huang, Y.-Y. & Hamblin, M. R. Ultraviolet radiation in wound care: Sterilization and stimulation. Adv. Wound Care 2, 422–437. https://doi.org/10.1089/wound.2012.0366 (2013).
38. Lee, C.-H., Wu, S.-B., Hong, C.-H., Yu, H.-S. & Wei, Y.-H. Molecular mechanisms of UV-induced apoptosis and its effects on skin residential cells: The implication in UV-based phototherapy. Int. J. Mol. Sci. 14, 6414–6435. https://doi.org/10.3390/ijms14036414 (2013).
39.
Mallet, J. D. et al. Faster DNA repair of ultraviolet-induced cyclobutane pyrimidine dimers and lower sensitivity to apoptosis in human corneal epithelial cells than in epidermal keratinocytes. PLoS ONE 11, e0162212. https://doi.org/10.1371/journal.pone.0162212 (2016).
40. Lawrence, K. P. et al. The UV/visible radiation boundary region (385–405 nm) damages skin cells and induces "dark" cyclobutane pyrimidine dimers in human skin in vivo. Sci. Rep. https://doi.org/10.1038/s41598-018-30738-6 (2018).
41. Narita, K. et al. Disinfection and healing effects of 222-nm UVC light on methicillin-resistant Staphylococcus aureus infection in mouse wounds. J. Photochem. Photobiol. B: Biol. 178, 10–18. https://doi.org/10.1016/j.jphotobiol.2017.10.030 (2018).
42. Brunner, F. et al. High-temperature growth of AlN in a production scale 11 × 2″ MOVPE reactor. Phys. Status Solidi C 5, 1799–1801. https://doi.org/10.1002/pssc.200778658 (2008).
43. Kueller, V. et al. Growth of AlGaN and AlN on patterned AlN/sapphire templates. J. Cryst. Growth 315, 200–203. https://doi.org/10.1016/j.jcrysgro.2010.06.040 (2011).
44. Goldhahn, R. et al. Determination of group III nitride film properties by reflectance and spectroscopic ellipsometry studies. Phys. Status Solidi A 177, 107–115. https://doi.org/10.1002/(SICI)1521-396X(200001)177:1%3C107::AID-PSSA107%3E3.0.CO;2-8 (2000).
45. Shokhovets, S. et al. Determination of the anisotropic dielectric function for wurtzite AlN and GaN by spectroscopic ellipsometry. J. Appl. Phys. 94, 307–312. https://doi.org/10.1063/1.1582369 (2003).
46. Sanford, N. A. et al. Refractive index study of AlxGa1−xN films grown on sapphire substrates. J. Appl. Phys. 94, 2980–2991. https://doi.org/10.1063/1.1598276 (2003).
47. Malitson, I. H. Refractive index and dispersion of synthetic sapphire. J. Opt. Soc. Am. 52, 1377–1379. https://doi.org/10.1364/JOSA.52.001377 (1962).
48. Johnson, P. & Christy, R.
Optical constants of transition metals: Ti, V, Cr, Mn, Fe, Co, Ni, and Pd. Phys. Rev. B 9, 5056–5070. https://doi.org/10.1103/PhysRevB.9.5056 (1974).
49. Ryu, H.-Y., Choi, I.-G., Choi, H.-S. & Shim, J.-I. Investigation of light extraction efficiency in AlGaN deep-ultraviolet light-emitting diodes. Appl. Phys. Express 6, 062101. https://doi.org/10.7567/APEX.6.062101 (2013).
50. Muth, J. F. et al. Absorption coefficient and refractive index of GaN, AlN and AlGaN alloys. MRS Internet J. Nitride Semicond. Res. 4, 502–507. https://doi.org/10.1557/S1092578300002957 (1999).
51. Guttmann, M. et al. Optical light polarization and light extraction efficiency of AlGaN-based LEDs emitting between 264 and 220 nm. Jpn. J. Appl. Phys. 58, SCCB20. https://doi.org/10.7567/1347-4065/ab0d09 (2019).
52. ZEMAX-EE (32-bit), version of October 26, 2010, www.zemax.com

## Acknowledgements

The authors would like to thank CiS Forschungsinstitut für Mikrosensorik GmbH, Konrad-Zuse-Straße 14, 99099 Erfurt, Germany, for the development of the Si packages for the LEDs and the flip-chip mounting of the LED chips. This work was supported by the German Federal Ministry of Education and Research (BMBF) within the program "Zwanzig20 – Partnerschaft für Innovation" (consortium "Advanced UV for Life") under Grants 03ZZ0146A-D.

## Funding

Open Access funding enabled and organized by Projekt DEAL.

## Author information

### Contributions

J.G. designed and optimized the irradiation system and contributed to the electro-optical characterization of the LEDs and the irradiation system; N.L.-P. contributed to the LED packaging and the electro-optical characterization of the LEDs; H.-K.C. contributed to the processing and electro-optical characterization of the LEDs; T.F. designed and constructed the control electronics of the irradiation system and contributed to the electro-optical characterization of the irradiation system; H.G.
simulated, optimized and produced the DBR filter; M.G. simulated the ray files, measured the electro-optical characteristics and far field of single LEDs with and without filter, and contributed to the design of the DBR filter; S.H. optimized and manufactured the AlN/sapphire templates for the LEDs; F.M. designed and optimized the LED heterostructure and performed the epitaxial growth of the LEDs; J.S. and S.B.L. conceived and performed the experiments and analyzed the data on skin damage; L.S. contributed to the processing and electro-optical characterization of the LEDs; T.W. contributed to the design and fabrication of the LEDs and the DBR filter; L.W. designed and constructed the mechanical components of the irradiation system; U.W. contributed to the design of the DBR filter; P.Z. and C.S. contributed to the acquisition and analysis of the data on the inactivation of multiresistant pathogens; A.K. conceived the experiments and analyzed the data on the inactivation of multiresistant pathogens; M.C.M. conceived the experiments and analyzed the data on skin damage; M.K. conceived the basic design of the far-UVC LED heterostructures as well as the dielectric spectral filters and provided input in analyzing the LED data; M.W. contributed to the organization of the preliminary experiments on skin irradiation; U.W. contributed to the development of the irradiation system; S.E. contributed to the development of the far-UVC LEDs and the irradiation system. All co-authors contributed to the preparation of the manuscript.

### Corresponding author

Correspondence to Sven Einfeldt.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Glaab, J., Lobo-Ploch, N., Cho, H.K. et al. Skin tolerant inactivation of multiresistant pathogens using far-UVC LEDs. Sci Rep 11, 14647 (2021). https://doi.org/10.1038/s41598-021-94070-2
https://zbmath.org/?q=an:1067.54028&format=complete
# zbMATH — the first resource for mathematics

Spaces not distinguishing convergences of real-valued functions. (English) Zbl 1067.54028

Summary: In [Topology Appl. 41, 25–40 (1991; Zbl 0768.54025)] we introduced the notion of a wQN-space as a space in which, for every sequence of continuous functions converging pointwise to 0, there is a subsequence converging quasi-normally to 0. In the present paper we continue this investigation and generalize some concepts touched on there. The content is a variety of notions and the relationships among them. The result is another scale in the investigation of smallness, and the question is how this scale fits with other known scales and whether all relations in it are proper.

##### MSC:
54G99 Peculiar topological spaces
54C30 Real-valued functions in general topology
54A20 Convergence in general topology (sequences, filters, limits, convergence spaces, nets, etc.)

Full Text:

##### References:
[1] Bartoszyński, T., Additivity of measure implies additivity of category, Trans. Amer. Math. Soc., 281, 209-213 (1984) · Zbl 0538.03042
[2] Bartoszyński, T.; Scheepers, M., A-sets, Real Anal. Exchange, 19, 521-528 (1993/94) · Zbl 0822.03028
[3] Bukovská, Z., Quasinormal convergence, Math. Slovaca, 41, 137-146 (1991) · Zbl 0757.40004
[4] Bukovská, Z.; Bukovský, L.; Ewert, J., Quasi-uniform convergence and L-spaces, Real Anal. Exchange, 18, 321-329 (1992/93) · Zbl 0873.54018
[5] Bukovský, L.; Kholshchevnikova, N.N.; Repický, M., Thin sets of harmonic analysis and infinite combinatorics, Real Anal. Exchange, 20, 454-509 (1994/95) · Zbl 0835.42001
[6] Bukovský, L.; Recław, I.; Repický, M., Spaces not distinguishing pointwise and quasinormal convergence of real functions, Topology Appl., 41, 25-40 (1991) · Zbl 0768.54025
[7] Császár, Á.; Laczkovich, M., Discrete and equal convergence, Studia Sci. Math. Hungar., 10, 463-472 (1975) · Zbl 0405.26006
[8] Daniels, P., Pixley-Roy spaces over subsets of the reals, Topology Appl., 29, 93-106 (1988) · Zbl 0656.54007
[9] Denjoy, A., Leçons sur le calcul des coefficients d'une série trigonométrique, 2e partie (1941), Paris
[10] van Douwen, E.K., The integers and topology, 111-167
[11] Engelking, R., General Topology (1977), PWN, Warszawa
[12] Erdős, P.; Kunen, K.; Mauldin, R.D., Some additive properties of sets of real numbers, Fund. Math., 113, 187-199 (1981) · Zbl 0482.28001
[13] Fremlin, D.H., Sequential convergence in Cp(X), Comment. Math. Univ. Carolin., 35, 371-382 (1994) · Zbl 0827.54002
[14] Galvin, F.; Miller, A.W., γ-sets and other singular sets of real numbers, Topology Appl., 17, 145-155 (1984) · Zbl 0551.54001
[15] Iséki, K., A characterization of pseudo-compact spaces, Proc. Japan Acad., 33, 320-322 (1957) · Zbl 0082.16003
[16] Iséki, K., Pseudo-compactness and μ-convergence, Proc. Japan Acad., 33, 368-371 (1957) · Zbl 0082.16101
[17] Just, W.; Miller, A.W.; Scheepers, M.; Szeptycki, P.J., The combinatorics of open covers. II, Topology Appl., 73, 3, 241-266 (1996) · Zbl 0870.03021
[18] Kholshchevnikova, N.N., Representation of some functions under the assumption of Martin's axiom, Mat. Zametki, 49, 1-2, 225-227 (1991) (in Russian); transl. in Math. Notes · Zbl 0729.03028
[19] Kliś, C., An example of a noncomplete normed (K)-space, Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys., 26, 415-420 (1978) · Zbl 0393.46017
[20] Kuratowski, K., Topologie I (1958), PWN, Warsaw
[21] Martin, D.A.; Solovay, R.M., Internal Cohen extensions, Ann. Math. Logic, 2, 143-178 (1970) · Zbl 0222.02075
[22] Miller, A.W., Mapping a set of reals onto the reals, J. Symbolic Logic, 48, 3, 575-584 (1982) · Zbl 0527.03031
[23] Miller, A.W., Special subsets of the real line, 201-233
[24] Recław, I., Metric spaces not distinguishing pointwise and quasinormal convergence of real functions, Bull. Polish Acad. Sci. Math., 45, 3, 287-289 (1997) · Zbl 0897.54013
[25] Repický, M., Spaces not distinguishing convergences, Preprint, 1999
[26] Scheepers, M., Combinatorics of open covers I: Ramsey theory, Topology Appl., 69, 31-62 (1996) · Zbl 0848.54018
[27] Scheepers, M., Cp(X) and Arhangel'skiĭ's α_i-spaces, Topology Appl., 89, 265-275 (1998) · Zbl 0930.54017
[28] Scheepers, M., Sequential convergence in Cp(X) and the property S_1(Γ,Γ), Preprint
[29] Todorcevic, S., Partition Problems in Topology, Contemporary Mathematics 84 (1989), Amer. Math. Soc., Providence, RI
[30] Todorcevic, S., Topics in Topology, Lecture Notes in Math. 1652 (1997), Springer, Berlin · Zbl 0953.54001

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible, without claiming completeness or perfect precision of the matching.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7927502989768982, "perplexity": 7049.824981144288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00253.warc.gz"}
http://ilcampanileborgo.it/groi/sgd-momentum.html
# SGD Momentum

Stochastic gradient descent (SGD; Robbins and Monro, 1951) is a simple gradient-based optimization algorithm used in machine learning and deep learning for training artificial neural networks. Plain SGD can in fact be viewed as a special case of Momentum, Adam, or RMSProp. Momentum augments SGD with a velocity term, borrowing the physical law of motion: like a ball rolling downhill, the iterate accumulates speed and can pass through small local optima. The update rule for momentum is basically the same as for SGD, with one extra term, and the payoff is reduced oscillation and faster convergence.

Around a saddle point, SGD, Momentum, and NAG find it challenging to break symmetry and escape only slowly, whereas Adagrad, Adadelta, and RMSprop head straight down the negative slope. In PyTorch, momentum SGD is available as `torch.optim.SGD(params, lr, momentum=0, dampening=0, weight_decay=0, nesterov=False)`, which implements stochastic gradient descent, optionally with momentum.
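As a toy illustration of the oscillation-damping claim (our own example, not taken from any of the sources quoted here), compare plain SGD against momentum SGD on an ill-conditioned quadratic:

```python
# Minimize f(x, y) = 0.5 * (x**2 + 25 * y**2) with plain SGD and with
# momentum SGD, and compare how close each gets to the optimum (0, 0).

def grad(p):
    x, y = p
    return (x, 25.0 * y)          # gradient of 0.5*(x^2 + 25*y^2)

def run(momentum, steps=100, lr=0.02):
    p = [1.0, 1.0]                # starting point
    v = [0.0, 0.0]                # velocity (stays zero when momentum == 0)
    for _ in range(steps):
        g = grad(p)
        for i in range(2):
            v[i] = momentum * v[i] + lr * g[i]   # velocity update
            p[i] -= v[i]                         # parameter update
    return p[0] ** 2 + 25.0 * p[1] ** 2          # 2 * f at the final point

plain = run(momentum=0.0)
heavy = run(momentum=0.9)
print(plain, heavy)  # momentum ends much closer to the optimum
```

On this surface the gradient along y is 25 times steeper than along x, so plain SGD must use a step small enough to stay stable in y and therefore crawls along x; the accumulated velocity lets momentum make faster progress along the shallow direction.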
Minibatch SGD, often referred to simply as SGD in recent literature even though it operates on minibatches, performs the update

w_{t+1} = w_t - (η/n) Σ_{x∈B} ∇l(x; w_t),   (2)

where B is a minibatch sampled from the training set X and n = |B| is the minibatch size. Momentum helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction γ of the update vector of the past time step to the current update vector:

v_t = γ v_{t-1} + η ∇_θ J(θ),
θ = θ - v_t.

The momentum coefficient γ is usually set to 0.9 or a similar value. Intuitively, when the previous momentum points in the same direction as the current negative gradient, the step is amplified, accelerating convergence; when the directions disagree, the step shrinks, damping oscillation. Additional references: "Large Scale Distributed Deep Networks" is a paper from the Google Brain team comparing L-BFGS and SGD variants in large-scale distributed optimization.
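Update (2) can be sketched on a toy model (the model and data here are ours, purely illustrative): one scalar weight fit by least squares, l(x; w) = 0.5 * (w*x[0] - x[1])**2.

```python
# One minibatch-SGD step: w_{t+1} = w_t - (lr / n) * sum of per-example
# gradients over the minibatch B.

def minibatch_sgd_step(w, batch, lr):
    n = len(batch)
    grad_sum = sum((w * xi - yi) * xi for xi, yi in batch)  # dl/dw summed over B
    return w - lr / n * grad_sum

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets are 2 * inputs
w = 0.0
for _ in range(200):
    w = minibatch_sgd_step(w, batch, lr=0.1)
print(w)  # approaches the least-squares solution 2.0
```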
SGD is an optimization technique: a tool used to update the parameters of a model, applied in an iterative fashion so that each run adjusts the parameters a bit further. In a distributed setting, the master can be in charge of adding the momentum before broadcasting each new reference value to the workers. Nesterov's accelerated gradient improves on classical momentum by evaluating the gradient at a look-ahead point; Dozat (2016) asked why this idea could not be incorporated into Adam as well.
In Adam, the first-moment recursion looks a bit like SGD with momentum; the difference, however, is the factor (1 - β1) multiplied with the current gradient:

m_t = β1 m_{t-1} + (1 - β1) ∇_θ J(θ).

For scaling out, popular approaches include distributed synchronous SGD and its momentum variant SGDM, in which the computational load of evaluating a minibatch gradient is distributed among the workers. "Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback" (NeurIPS 2019) compresses this communication while achieving the same testing accuracy as full-precision distributed momentum SGD. Deep learning with Elastic Averaging SGD is another method in this family.
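The first-moment recursion and its standard bias correction can be checked numerically (the values are arbitrary; this sketches only the moment estimate, not a full Adam step):

```python
# With a constant gradient stream, the biased EMA m warms up slowly,
# while the bias-corrected estimate m_hat is exact from the first step.

beta1 = 0.9
m = 0.0
grads = [1.0, 1.0, 1.0, 1.0, 1.0]
for t, g in enumerate(grads, start=1):
    m = beta1 * m + (1.0 - beta1) * g      # biased first moment
    m_hat = m / (1.0 - beta1 ** t)         # bias-corrected estimate
print(m, m_hat)  # m is still warming up; m_hat already equals 1.0
```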
In the case of Adam, m is called the first moment and β1 is just a hyperparameter; Adam also includes bias corrections to the estimates of both the first-order moments (the momentum term) and the uncentered second-order moments. The quasi-hyperbolic momentum algorithm (QHM) is an extremely simple alteration of momentum SGD that averages a plain SGD step with a momentum step. In Keras, momentum SGD is constructed as `SGD(lr=0.01, momentum=0.9, nesterov=False)`, where `momentum` is a float >= 0 that accelerates SGD in the relevant direction and dampens oscillations, and `nesterov` toggles Nesterov momentum.
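A minimal sketch of one QHM step under the usual formulation (variable names are ours: beta is the momentum-buffer decay, nu the interpolation weight between the plain gradient and the buffer):

```python
# QHM: keep an EMA momentum buffer v, then update the parameter with a
# nu-weighted average of the plain gradient step and the momentum step.

def qhm_step(theta, v, g, lr=0.1, beta=0.9, nu=0.7):
    v = beta * v + (1.0 - beta) * g              # momentum buffer (EMA of gradients)
    theta -= lr * ((1.0 - nu) * g + nu * v)      # average of SGD and momentum steps
    return theta, v

theta, v = 1.0, 0.0
for _ in range(50):
    g = theta                # gradient of f(theta) = theta**2 / 2
    theta, v = qhm_step(theta, v, g)
print(theta)  # decays toward the minimizer 0.0
```

Note that nu = 1 recovers (normalized) momentum SGD and nu = 0 recovers plain SGD, which is what makes QHM an interpolation between the two.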
Implementing momentum requires storing a velocity for every parameter and using that velocity when making the updates. On the theory side, for objectives with bounded second derivative, a small tweak to the momentum formula allows normalized SGD with momentum to find an ε-critical point. In CNTK, the SGD configuration block controls the behavior of the stochastic gradient descent algorithm.
Unrolling the momentum recursion shows that the velocity is equivalent to a weighted sum of previous updates, with the contribution from k steps ago scaled by γ^k. As a reproduction study, recreating the training pipeline of Zaremba et al. for their LSTM language model (SGD without momentum) yields validation and test word perplexities that closely match the results of the original authors.
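The unrolling claim is easy to verify numerically with an arbitrary gradient sequence (our own toy values):

```python
# After t steps, the recursively computed velocity equals the
# gamma^k-weighted sum of past gradients, scaled by the step size eta.

gamma, eta = 0.9, 0.5
grads = [3.0, -1.0, 2.0, 0.5]

v = 0.0
for g in grads:
    v = gamma * v + eta * g          # recursive definition

unrolled = sum(eta * g * gamma ** (len(grads) - 1 - k)
               for k, g in enumerate(grads))   # closed-form unrolling
print(v, unrolled)  # the two agree
```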
"Understanding the Role of Momentum in Stochastic Gradient Methods" investigates exactly what the momentum term contributes. To better understand momentum, we can rewrite it as a running (exponentially weighted moving) average of past gradients. In Keras, an optimizer instance such as `SGD(learning_rate=0.01, momentum=0.9)` is first created and configured, then passed as the `optimizer` argument when calling `compile()` on the model; in Chainer the corresponding class is `MomentumSGD(lr=0.01, momentum=0.9)`. A common practical question is how L2 regularization, which is straightforward with basic SGD, interacts with SGD with momentum.
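One common convention for combining L2 regularization with momentum, assumed here for illustration (decoupled weight decay is a well-known alternative), is to add the decay term to the gradient before the velocity update, so the regularizer is accumulated by the momentum buffer too:

```python
# "Coupled" L2: add wd * w to the data gradient, then do the usual
# momentum update. With a zero data gradient, weight decay alone
# shrinks the parameter toward 0.

def momentum_step_with_l2(w, v, g, lr=0.1, mu=0.9, wd=0.01):
    g = g + wd * w              # add the L2 (weight-decay) gradient
    v = mu * v + g              # velocity update
    w = w - lr * v              # parameter update
    return w, v

w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step_with_l2(w, v, g=0.0)  # no data gradient: pure decay
print(w)  # shrinks toward 0 under weight decay alone
```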
Momentum is a key hyperparameter of SGD and its variants. Nesterov SGD is widely used for training modern neural networks and other machine learning models; it is based on the formula from "On the importance of initialization and momentum in deep learning". In Lasagne the corresponding update rule is `nesterov_momentum(loss_or_grads, params, learning_rate, momentum=0.9)`. In trajectory visualizations, RMSprop with momentum overshoots further than Adam because it reaches the opposite slope with much higher speed. Momentum SGD is also the basis of Global Sparse Momentum SGD for pruning very deep neural networks.
A very popular technique used along with SGD is momentum: introducing the extra term with coefficient γ makes the update of θ depend on the last update as well, which helps accelerate SGD in the relevant direction. "Communication Efficient Momentum SGD for Distributed Non-Convex Optimization" (Hao Yu, Rong Jin, Sen Yang; Alibaba Group) analyzes these dynamics in the distributed non-convex setting, and implicit stochastic gradient methods (Toulis et al.) are a related line of work. In scikit-learn, SGD is described as a simple yet very efficient approach to discriminative learning of linear classifiers under convex loss functions such as (linear) support vector machines and logistic regression.
SGD is easy to implement, easy to understand, and gets great results on a wide variety of problems, even when the expectations the method has of your data are violated. Essentially, though, SGD is a myopic algorithm: each step sees only the current minibatch, typically between 10 and 1,000 examples chosen at random, and a simple way to overcome this weakness is to introduce a momentum term into the update iteration. Dozat's "Incorporating Nesterov Momentum into Adam" notes that when attempting to improve a deep learning system there are more or less three approaches one can take, the first being to improve the structure of the model, perhaps adding another layer or switching from simple recurrent units to LSTM cells. Alec Radford created well-known animations comparing SGD, Momentum, NAG, Adagrad, Adadelta, and RMSprop (unfortunately no Adam) on low-dimensional problems; in such comparisons, Adam with annealing gets there very fast, and SGD with momentum more slowly, but more smoothly than vanilla SGD.
The main difference between classical and Nesterov momentum: in classical momentum you first correct your velocity and then take a big step according to that velocity (and repeat), whereas in Nesterov momentum you first take a step in the velocity direction and then correct the velocity vector based on the new location (and repeat). Viewed through control theory, SGD with momentum is equivalent to a PI controller, and PI control is known to overshoot; this is easy to understand, since the I (integral) term accumulates historical gradients. SGD also struggles in ravines, i.e. regions where the surface curves much more steeply in one dimension than in another: the iterate oscillates across the slopes instead of progressing along the bottom, and momentum, which keeps the ball moving in the direction it is already moving in, damps exactly this oscillation. One remaining problem shared by SGD and Nesterov's momentum algorithm is the fixed learning rate.
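The two update orders can be written side by side (a minimal sketch on the 1-D quadratic f(x) = x²/2, with our own function names):

```python
# Classical momentum: correct velocity at x, then step.
# Nesterov momentum: step to the look-ahead point, then correct velocity.

def classical_step(x, v, lr=0.1, mu=0.9):
    v = mu * v + lr * x          # gradient of x**2/2 evaluated at x
    return x - v, v

def nesterov_step(x, v, lr=0.1, mu=0.9):
    look_ahead = x - mu * v      # first move in the velocity direction
    v = mu * v + lr * look_ahead # gradient evaluated at the look-ahead point
    return x - v, v

xc = xn = 1.0
vc = vn = 0.0
for _ in range(100):
    xc, vc = classical_step(xc, vc)
    xn, vn = nesterov_step(xn, vn)
print(xc, xn)  # both decay toward 0; Nesterov contracts faster here
```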
Stochastic gradient descent with momentum (SGDm) is one of the most popular optimization algorithms in deep learning, and tuning its hyperparameters can even be automated: YellowFin (Jian Zhang, Ioannis Mitliagkas, and Chris Ré, 2017) is an automatic tuner for momentum SGD. Variance reduction has also emerged in recent years as a strong competitor to SGD in non-convex problems, providing the first algorithms to improve on SGD's convergence rate for finding first-order critical points, although for non-convex problems reducing the variance is a double-edged idea, since some variance is often necessary to escape local minima and saddle points.
A common learning-rate schedule for momentum SGD is to drop the rate by a factor of 0.2 every 5 epochs. Notably, Chapelle & Erhan (2011) used the random initialization of Glorot & Bengio (2010) and SGD to train the 11-layer autoencoder of Hinton & Salakhutdinov (2006), and were able to surpass the results reported by Hinton & Salakhutdinov. A recurring empirical observation is that although adaptive optimizers have better training performance, this does not imply higher accuracy (better generalization) on validation data. In the R interface to Keras, the constructor is `optimizer_sgd(lr = 0.01, momentum = 0, decay = 0, nesterov = FALSE, clipnorm = NULL, clipvalue = NULL)`, where gradients are clipped when their L2 norm exceeds `clipnorm`.
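The step schedule just described (multiply the rate by 0.2 every 5 epochs) is a one-liner:

```python
# Step decay: the learning rate is multiplied by `drop` once per
# `every` epochs, staying piecewise constant in between.

def step_decay(base_lr, epoch, drop=0.2, every=5):
    return base_lr * drop ** (epoch // every)

print([round(step_decay(0.1, e), 6) for e in range(0, 15, 5)])
# epochs 0, 5, 10 -> 0.1, 0.02, 0.004
```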
Neural networks are often trained stochastically: the stochastic variation comes from the model being trained on different data during each iteration. Momentum also matters for network compression. Deep neural networks are powerful but computationally expensive and memory intensive, which impedes their practical usage on resource-constrained front-end devices; Global Sparse Momentum SGD for Pruning Very Deep Neural Networks (Xiaohan Ding et al., 2019) addresses this by modifying the momentum SGD update so that, given a global compression ratio, training itself drives most parameters to zero.
Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning, and momentum is a very popular technique used along with it. It can be applied with batch gradient descent, mini-batch gradient descent, or stochastic gradient descent. Exactly why momentum helps is arguably still an open problem, but the idea of momentum rather than acceleration is a popular framing; to understand it better, the update can be rewritten as a running average of past gradients. Popular refinements of SGD include Momentum, NAG, Adagrad, Adadelta, RMSprop, and Adam, as well as asynchronous SGD variants; further practical techniques for improving SGD include shuffling and curriculum learning, batch normalization, early stopping, and gradient noise. Related work includes "Deep Learning with Elastic Averaging SGD" and "Global Sparse Momentum SGD for Pruning Very Deep Neural Networks" (Xiaohan Ding et al., 2019), which prunes a ResNet-56 to a global compression ratio of 10x (90% of the parameters are zeros).
In deep learning practice, SGD variants based on (Nesterov's) momentum are standard because they are simple and scale easily. SGD+Momentum builds up "velocity" as a running mean of gradients, with rho acting as "friction" (typically rho = 0.9); see Nesterov, "A method of solving a convex programming problem with convergence rate O(1/k^2)" (1983); Nesterov, "Introductory lectures on convex optimization: a basic course" (2004); and Sutskever et al., "On the importance of initialization and momentum in deep learning" (ICML 2013). RMSprop can be seen as an upgraded version of Momentum, and Adam additionally stores per-parameter momentum changes alongside per-parameter learning rates — though in some experiments Adam does slightly worse than RMSprop. One of the main problems of SGD and Nesterov's momentum algorithm is the fixed learning rate. Momentum reduces oscillation and accelerates convergence. The quasi-hyperbolic momentum algorithm (QHM) has been proposed as an extremely simple alteration of momentum SGD: it averages a plain SGD step with a momentum step.
SGD+momentum and SGD+Nesterov+momentum have similar performance in practice. Momentum is implemented by introducing a velocity component v: instead of using only the current step's gradient to guide the search, momentum accumulates the gradients of past steps to determine the direction to go, adding a fraction γ of the previous update vector to the current one. This accelerates SGD in relevant directions and suppresses oscillation, and it requires storing a velocity for every parameter and using it when making the updates. (Momentum is as "real" in nature as energy — as you would find out were you to apply brakes on an icy road.) In Keras, keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.0, nesterov=False) implements stochastic gradient descent with support for momentum, learning-rate decay, and Nesterov momentum; the nesterov flag controls whether Nesterov momentum is applied. Reading the papers on RProp, it seems like a great algorithm that should tremendously speed up convergence of a large neural network, yet momentum-based SGD remains more standard. A related recipe: RAdam behaves like performing four iterations of momentum SGD, then using Adam with a fixed warmup (Ma & Yarats, 2019). One can also experiment with learning-rate schedules that change over iterations, or even differ across dimensions.
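The difference the nesterov flag makes can be sketched as follows, assuming the common "look-ahead" formulation of Nesterov momentum; the quadratic test problem and function names are my own illustration:

```python
# Classical momentum evaluates the gradient at the current point w;
# Nesterov momentum evaluates it at the look-ahead point w + momentum * v.
def classical_step(w, v, grad, lr=0.1, momentum=0.9):
    v = momentum * v - lr * grad(w)
    return w + v, v

def nesterov_step(w, v, grad, lr=0.1, momentum=0.9):
    v = momentum * v - lr * grad(w + momentum * v)  # peek ahead first
    return w + v, v

grad = lambda w: 2.0 * (w - 3.0)  # gradient of (w - 3)^2
w, v = 0.0, 0.0
for _ in range(300):
    w, v = nesterov_step(w, v, grad)
print(w)  # both variants converge to 3 on this problem
```

On this well-behaved quadratic both variants land in the same place; the look-ahead mainly pays off by correcting the velocity slightly earlier when the gradient is about to change.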
A typical course outline covers lectures on SGD and its convergence proof, stochastic variance-reduced gradient methods (SVRG), and sampling and momentum, with accompanying exercises and a Python notebook on momentum (African Masters of Machine Intelligence, winter 2019); assignments cover implementing SGD in conjunction with backpropagation and a family of first-order methods including the momentum method and its Nesterov variant. SGD can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate calculated from a randomly chosen subset. The optimization process with momentum resembles a heavy ball rolling down a hill. From Lecture 7, "Accelerating SGD with Momentum" (CS4787, Principles of Large-Scale Machine Learning Systems): when we analyzed gradient descent and SGD for strongly convex objectives, the convergence rate depended on the condition number κ = L/µ.
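The "heavy ball" picture above corresponds to applying the momentum update per coordinate; in a badly conditioned (high κ) quadratic the oscillating direction is damped while the flat direction keeps its speed. A small sketch under my own illustrative 2-D loss f(x, y) = 10*x^2 + y^2:

```python
# Heavy-ball momentum on a poorly conditioned 2-D quadratic
# f(x, y) = 10*x^2 + y^2 (condition number 10).
def grad(p):
    x, y = p
    return (20.0 * x, 2.0 * y)

p = [2.0, 2.0]
v = [0.0, 0.0]
lr, mom = 0.02, 0.9
for _ in range(500):
    g = grad(p)
    for i in range(2):
        v[i] = mom * v[i] - lr * g[i]   # steep direction oscillates and is damped
        p[i] = p[i] + v[i]              # flat direction builds up speed
print(p)  # near the minimum (0, 0)
```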
Essentially, SGD is a myopic algorithm: as discussed above, at each iteration it finds the direction in which the objective function can be reduced fastest on the given example. When the gradient changes direction, momentum shrinks the update; in summary, the momentum term accelerates SGD in the relevant direction and suppresses oscillation, thereby speeding convergence. In the physical analogy, m is the velocity and β1 the friction coefficient. Notably, Chapelle & Erhan (2011) used the random initialization of Glorot & Bengio (2010) and SGD to train the 11-layer autoencoder of Hinton & Salakhutdinov (2006), and were able to surpass the results reported by Hinton & Salakhutdinov (2006). Many PyTorch users face the optimizer-selection question, and comparing the four commonly used optimizers — SGD, Momentum, RMSProp, and Adam — is a good way to settle it.
The first equation of Adam looks a bit like SGD with momentum. Although recent works have proved that a variant of SGD with momentum improves the non-dominant terms in the convergence rate on convex stochastic least-squares problems, and Nesterov momentum attains the accelerated convergence rate of the deterministic setting, momentum's advantages over plain SGD are not fully understood. When training with SGD, the main tunable hyperparameters are the learning rate, weight decay, and momentum. Momentum helps accelerate SGD in a relevant direction: it adds a temporal element to the equation for updating the parameters of a neural network, so that past updates influence the current one.
Stochastic gradient descent with momentum (SGDm) is one of the most popular optimization algorithms in deep learning, and YellowFin is an automatic tuner for its hyperparameters. In each iteration of SGD the gradient is calculated on a subset (mini-batch) of the training data; in the extreme case the term "stochastic" indicates that each batch comprises a single randomly chosen example. Momentum soothes SGD's wild updates, but because it effectively amplifies the step size, you can optionally scale the learning rate by 1 - momentum to counter that. In the Adam optimizer the analogous quantity m is called the first momentum, and β1 is its hyperparameter. A common question is whether nesterov=True must be set to use momentum: no — setting momentum alone enables classical momentum, while nesterov selects a second, distinct variant of it.
SGD with momentum can be interpreted as a PI (proportional-integral) controller, and in control theory PI control suffers from overshoot — so SGD-momentum inherits an overshoot problem. This is easy to understand, since the I (integral) term is an accumulation of historical gradients. A classic animation shows SGD with momentum spiraling toward the minimum. Practical notes: momentum is a float >= 0, used in scikit-learn only when solver='sgd'; a decay parameter applies learning-rate decay at each update; and neural-network training is a wildly non-convex problem (cf. Duchi, "Adaptive Subgradient Methods", ISMP 2012).
Empirically, RMSprop with momentum travels much further before it changes direction than plain momentum when both use the same learning rate. Momentum is equivalent to a weighted sum, with fraction γ, of the previous updates. QHAdam is based on QH-Momentum, which introduces an immediate discount factor ν encapsulating plain SGD (ν = 0) and momentum (ν = 1). In scikit-learn, the related nesterovs_momentum option is only used when solver='sgd' and momentum > 0. On the systems side, multiple approaches have been proposed to reduce the communication overhead of distributed training, such as synchronizing only after performing multiple local SGD steps, and decentralized methods (e.g., using gossip algorithms) to decouple communication among workers.
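The QH update described here can be sketched in a few lines: ν = 0 reduces to plain SGD and ν = 1 to (EMA-normalized) momentum. The function name and the quadratic test problem are my own; the rule follows the QHM description quoted above:

```python
# Quasi-hyperbolic momentum: the step is a nu-weighted average of the raw
# gradient (plain SGD step) and the momentum buffer (momentum step).
def qhm(grad, w0, lr=0.3, beta=0.9, nu=0.7, steps=400):
    w, g = w0, 0.0
    for _ in range(steps):
        d = grad(w)
        g = beta * g + (1.0 - beta) * d          # EMA momentum buffer
        w = w - lr * ((1.0 - nu) * d + nu * g)   # average of the two steps
    return w

print(qhm(lambda w: 2.0 * (w - 3.0), w0=0.0))  # approaches the minimizer 3
```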
The advantage of momentum is that it makes only a very small change to SGD but provides a big boost to the speed of learning. Adam combines momentum with an adaptive learning rate; to see how, it helps to first understand momentum itself. Momentum speeds up movement along directions of strong improvement (loss decrease) and also helps the network avoid local minima, minimizing oscillations and giving faster convergence. (In one common visualization, the learning rate for plain SGD is set artificially high — an order of magnitude higher than for the other algorithms — so that the optimization converges in a reasonable amount of time.) One blog post reimplemented in Ruby the parameter-optimization methods introduced in "Deep Learning from Scratch", based on the book's Python sample code.
A Solver class represents a stochastic-gradient-descent-based optimizer for the parameters of a computation graph. Momentum is essentially a small change to the SGD parameter update, so that movement through parameter space is averaged over multiple time steps. Using L2 regularization consists of adding wd*w to the gradients (weight decay) rather than subtracting it from the weights directly. See also "1-Bit Stochastic Gradient Descent and its Application to Data-Parallel Distributed Training of Speech DNNs" (Seide, Fu, Droppo, Li, and Yu; Microsoft Research) and "Communication Efficient Momentum SGD for Distributed Non-Convex Optimization" (Yu, Jin, and Yang; Alibaba Group).
Additional references: "Large Scale Distributed Deep Networks" is a paper from the Google Brain team comparing L-BFGS and SGD variants in large-scale distributed optimization. SGD Momentum is similar to the concept of momentum in physics, and SGD itself can be considered a special case of Momentum, Adam, or RMSProp. A classic slide illustrates SGD with momentum on a quadratic loss with a poorly conditioned Hessian matrix: contour lines depict the loss, and the red path cutting across the contours is the trajectory followed by the momentum learning rule, shown at each step against the path plain SGD would have taken. AdaGrad, by contrast, works on the learning rate, giving every parameter its own individual rate — in the drunken-walker analogy, momentum gives the walker another downhill slope to follow, while AdaGrad gives him ill-fitting shoes that hurt when he wanders, forcing him to walk straight. (RProp once seemed a no-brainer to apply to modern CNNs such as GoogLeNet, VGG, and ResNet.) A typical training configuration: set the maximum number of epochs to 20, use mini-batches of 64 observations at each iteration, turn on the training progress plot, and reduce the learning rate by a factor of 0.2 every 5 epochs.
In Keras, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g., differentiable or subdifferentiable), and most optimization techniques, SGD included, are used iteratively: the first run adjusts the parameters a bit, and consecutive runs keep adjusting them, hopefully improving them. SGD is a simple yet very efficient approach to discriminative learning of linear classifiers under convex loss functions, such as (linear) support vector machines and logistic regression. Intuitively, adding momentum makes convergence faster: since we accumulate speed, each gradient-descent step can be larger than SGD's constant step. A common question when extending a plain SGD implementation is how to add a momentum learning rule of the form velocity = momentum_constant * velocity - learning_rate * gradient; params = params + velocity — and, in particular, how to set up or initialize the velocity.
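One answer, as a sketch (helper names are my own; plain Python scalars stand in for weight arrays): the velocity buffers are simply initialized to zero, one per parameter, and updated in place each step.

```python
# One velocity buffer per parameter, initialized to zero.
def init_velocity(params):
    return [0.0 for _ in params]  # use a zeros-like array for array parameters

def momentum_update(params, grads, velocity, lr=0.01, momentum=0.9):
    for i in range(len(params)):
        velocity[i] = momentum * velocity[i] - lr * grads[i]
        params[i] = params[i] + velocity[i]

params = [0.0, 1.0]
vel = init_velocity(params)
momentum_update(params, [1.0, -2.0], vel)
print(params, vel)
```

Starting from zero velocity makes the very first step identical to plain SGD; the momentum term only kicks in from the second step onward.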
A classic visualization shows the convergence of SGD over time as a path descending into the global minimum over the loss surface parameterized by (w, b), the slope and y-intercept. Hence, in stochastic gradient descent a few samples are selected randomly for each iteration instead of the whole data set. Batch SGD with momentum is an SGD variant that uses momentum for its updates: momentum helps accelerate the gradient vectors in the right directions, leading to faster convergence. So far we have used a uniform learning rate across all dimensions, which can be inadequate in some cases — one motivation for adaptive methods; even so, it remains unclear exactly how and why momentum can be better than plain SGD. Alec Radford has created some great animations comparing the optimization algorithms SGD, Momentum, NAG, Adagrad, Adadelta, and RMSprop (unfortunately no Adam) on low-dimensional problems. Logistic regression is the go-to linear classification algorithm for two-class problems: it is easy to implement, easy to understand, and gets great results on a wide variety of problems, even when the expectations the method has of your data are violated, and it can be trained with stochastic gradient descent.
Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function: we take steps proportional to the negative of the gradient (steps proportional to the positive of the gradient would instead approach a local maximum). Thus, gradient descent is also known as the method of steepest descent. Stochastic gradient descent addresses the cost problem: after running a single training sample, or a small number of them, the parameters are updated along the negative gradient of the objective, approaching a local optimum. Applied to neural networks, the goal of SGD is to relieve the high computational cost of backpropagation over the whole training set while keeping convergence fast. Yet the advantages of momentum over plain SGD have still not been fully clarified theoretically.
Let $X_i\sim \text{Bernoulli}(\frac{\lambda}{n})$ i.i.d. with $n\geq\lambda\geq 0$, and let $Y_i\sim \text{Poisson}(\frac{\lambda}{n})$ i.i.d., where $\{X_i\}$ and $\{Y_i\}$ are independent. Let $T_n=\sum_{i=1}^{n^2} X_i$ and $S_n=\sum_{i=1}^{n^2} Y_i$. Find the limiting distribution of $\frac{T_n}{S_n}$ as $n\to \infty$.
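A quick simulation can build intuition for the answer: both $T_n$ and $S_n$ have mean $n^2\cdot\frac{\lambda}{n}=n\lambda$ and variance of the same order, so by the law of large numbers both $T_n/(n\lambda)$ and $S_n/(n\lambda)$ concentrate at 1, suggesting $T_n/S_n \to 1$. A sketch in Python — the sampler and function names are my own; Knuth's multiplication method is used for the Poisson draws, which is fine for the small mean $\lambda/n$:

```python
import math
import random

random.seed(42)

def poisson(mean):
    # Knuth's algorithm: count uniforms until their product drops below e^(-mean)
    limit = math.exp(-mean)
    k, prod = 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def ratio(n, lam):
    p = lam / n
    t = sum(1 for _ in range(n * n) if random.random() < p)  # T_n: n^2 Bernoulli(p) draws
    s = sum(poisson(p) for _ in range(n * n))                # S_n: n^2 Poisson(p) draws
    return t / s

print(ratio(200, 5.0))  # tends to 1 as n grows
```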
International Skeptics Forum — Help needed with Excel.

12th April 2006, 07:25 AM #1 Interesting Ian:

I only started using spreadsheets a few months ago, so I'm very new at this. I also know virtually nothing about mathematics. Anyway, I have these values whereby y varies with x. I want to find a precise mathematical relationship between x and y so that if I input any x value I get the y value displayed. The values are (2.2, 1.6), (2.4, 1.75), (2.6, 1.92), (2.8, 2.12), (3, 2.35) and (3.2, 2.61) — in each case x is the first number and y the second. These y values don't simply increase uniformly but at a very slightly increasing rate. So how do I obtain a mathematical expression (an equation?) relating the two sets of numbers? I tried the chart function last night and obtained a graph of these points, hoping it might supply the relationship between the two sets of numbers. While messing about I discovered something called "add trendline" and, after experimenting, found the exponential trendline gave the best fit. I ticked the R-squared value and equation options.
I've got on the chart R^2 = 0.999 and y = 0.5405*e^0.49x. So it seems that it is giving me this highly obscure relationship between y and x. I didn't know what "e" meant, but I have a vague feeling it's simply a number like pi with an infinite number of digits after the decimal point. I looked it up on Google and some site said its value is 2.71828. Unfortunately when I tried it, it gave incorrect values. For example when I put x as 3.2 in the equation it gave an answer of 2.82 when it is supposed to be 2.61 (and 2.82 is way above the trend line). So what gives here?

12th April 2006, 07:44 AM #2 CFLarsen Penultimate Amazing Join Date: Aug 2001 Posts: 42,367

Here's how: Turn off your computer. Don't turn it on again. Ever.

12th April 2006, 07:45 AM #3 rats Muse Join Date: Mar 2006 Posts: 608

Originally Posted by Interesting Ian: For example when I put x as 3.2 in the equation it gave an answer of 2.82 when it is supposed to be 2.61

Here's a quick answer... For a beginner with Excel, sounds like you're doing very well! With x = 3.2, I get y = 2.6. For clarity: y = 0.5405 * e ^ (0.49*x). Good luck, rats.

12th April 2006, 08:11 AM #4 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by rats: Here's a quick answer... For a beginner with Excel, sounds like you're doing very well! With x = 3.2, I get y = 2.6. For clarity: y = 0.5405 * e ^ (0.49*x). Good luck, rats.

I don't believe it! I just spent the last hour trying to figure it out, and adding more digits for "e" and right clicking the equation and expanding the digits after the decimal points there, and all the time I had neglected to put those brackets in! BTW is it not possible just to put e directly in rather than some approximation like 2.71828? I tried putting =0.5405*e^(0.49*Input!B10) but it just said "name". (Input!B10 is the cell where I put the value of x in.)
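An aside for readers reproducing this outside Excel: an exponential trendline of the form y = a*e^(b*x) is an ordinary least-squares fit of ln(y) against x. A short Python sketch (an illustration, not part of the thread) recovers the chart's coefficients from Ian's six points:

```python
import math

# Ian's six points: x = pre-match goal expectation, y = quoted value
xs = [2.2, 2.4, 2.6, 2.8, 3.0, 3.2]
ys = [1.60, 1.75, 1.92, 2.12, 2.35, 2.61]

# Fit ln(y) = ln(a) + b*x by ordinary least squares, which is what
# Excel's exponential trendline does under the hood.
lny = [math.log(y) for y in ys]
n = len(xs)
mx, my = sum(xs) / n, sum(lny) / n
b = sum((x - mx) * (l - my) for x, l in zip(xs, lny)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

print(f"y = {a:.4f} * e^({b:.2f}x)")    # y = 0.5405 * e^(0.49x), as on Ian's chart
print(round(a * math.exp(b * 3.2), 2))  # 2.59, close to the observed 2.61
```

Note the fitted curve at x = 3.2 gives about 2.59, not exactly 2.61: a trendline minimizes overall error and is not expected to pass through every point.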
12th April 2006, 08:13 AM #5 Ziggurat Penultimate Amazing Join Date: Jun 2003 Posts: 46,991

Originally Posted by Interesting Ian: So it seems that it is giving me this highly obscure relationship between y and x. I didn't know what "e" meant, but I have a vague feeling it's simply a number like pi with an infinite number of digits after the decimal point. I looked it up on Google and some site said its value is 2.71828.

Yes, it is a special irrational number like pi. In fact, the two happen to be intimately related - you can kind of think of e being the "special" number for exponential functions the same way that pi is the "special" number for trigonometric functions. Here are two little equations which may pique your interest in e:

$e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...$

Seems almost too simple, doesn't it? BTW, since you say you don't know much math, the ! notation means factorial. It's defined like this:

$1! = 1, 2! = 1*2, 3! = 1*2*3, 4! = 1*2*3*4,$

and so on. Factorials get big very fast. The connection to pi comes from plugging in imaginary numbers to these equations, from which one can prove:

$e^{\pi i} = -1$

__________________
"As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law

12th April 2006, 08:20 AM #6 Unnamed Thinker Join Date: Jun 2005 Posts: 230

Originally Posted by Interesting Ian: BTW is it not possible just to put e directly in rather than some approximation like 2.71828? I tried putting =0.5405*e^(0.49*Input!B10) but it just said "name". (Input!B10 is the cell where I put the value of x in.)
Hi Ian,

=0.5405*EXP(0.49*Input!B10)

That should work. Good luck and nice job doing all of this from scratch.

12th April 2006, 08:28 AM #7 Math Maniac Thinker Join Date: Mar 2002 Posts: 161

A shorter exponential regression equation would simply calculate e ^ 0.49 and replace it, resulting in y = 0.5405 * 1.6323 ^ x = 0.5405 * (1 + 0.6323) ^ x. Most people are introduced to this as "Interest = Principal * (1 + r) ^ n", where r is the percentage increase (interest rate per period, when compounded more than once a year). A relationship such as this is not wholly obscure, and using base 1.6323 instead of the e ^ x form indicates that for every full unit of increase in the value of x (e.g., from 2 to 3) the value of y will increase by approximately 63%.

Whether or not your data or results are applicable at values before or after your recorded data is another matter, as you may only be allowed to interpolate (use the equation for values between your highest and lowest x values); extrapolation (using your equation to predict values before and after your recorded x values) poses many other questions and concerns and chances for error in approximation.

This is probably more detail than necessary but I'm in the library bored and thought I'd attempt to help. (I typed this in between teaching classes so hopefully there are not m/any mistakes. Please let me know if you find some.) Either way, considering yourself a 'beginner' on Excel and being able to do what you did is EXCELLENT. Good Luck!

12th April 2006, 08:30 AM #8 Unnamed Thinker Join Date: Jun 2005 Posts: 230

Originally Posted by CFLarsen: Here's how: Turn off your computer. Don't turn it on again. Ever.

CFLarsen, I have great respect for what you've written and I use skepticreport a lot. But cool down for a moment. Ignore the name "Interesting Ian". Is there a problem with the post itself?

12th April 2006, 08:34 AM #9 I less than three logic Graduate Poster Join Date: Dec 2005 Posts: 1,463

I did some quick calculations for you.
I came up with a correlation coefficient of 0.9947, which made my r^2 = 0.98942809. The least-squares line I came up with was y = 1.0071x - 0.6609. This line didn't produce the exact values you asked for, but working with statistics this is expected. The value for x = 3.2 returned y = 2.56182, which is a little closer than the 2.82 you had earlier. The closest value was for x = 2.4, which gave me y = 1.75614.

... Hmm, it seems that while I was working the problem the old-fashioned way I learned in statistics class (some paper and a calculator), some of you have already produced some better answers for him.

__________________
"There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world." - Carl Sagan
"The fact that we live at the bottom of a deep gravity well, on the surface of a gas covered planet going around a nuclear fireball ninety million miles away and think this to be normal is obviously some indication of how skewed our perspective tends to be." - Douglas Adams

Last edited by I less than three logic; 12th April 2006 at 08:48 AM.

12th April 2006, 08:53 AM #10 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by Math Maniac: Whether or not your data or results are applicable at values before or after your recorded data is another matter, as you may only be allowed to interpolate (use the equation for values between your highest and lowest x values); extrapolation (using your equation to predict values before and after your recorded x values) poses many other questions and concerns and chances for error in approximation.

Well, the figures are to do with pre-match probabilities of soccer matches resulting in less than 2.5 goals, i.e. either 0, 1 or 2 goals. Thus the values 2.2, 2.4 etc. represent the average expected number of goals there will be in the match. The values 1.6, 1.75 etc. represent the probabilities.
Thus with an expected average of 2.2 goals there is a 1.6 chance (in decimal odds) that there will be less than 2.5 goals in the entire match. Now, in practice, in professional soccer, the expected average will never dip below 1.9 goals in the match, nor will it exceed 3.4 (but of course there are many 0-0 and 3-2 scorelines etc. -- it's the expected average we're talking about here). So I imagine that extrapolating above or below those values to any significant extent will not be applicable.

12th April 2006, 08:59 AM #11 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

And BTW, I much appreciate everyone's help! Thanks!

12th April 2006, 09:08 AM #12 Ripley Twenty-Nine Muse Join Date: May 2005 Posts: 849

Originally Posted by Unnamed: CFLarsen, I have great respect for what you've written and I use skepticreport a lot. But cool down for a moment. Ignore the name "Interesting Ian". Is there a problem with the post itself?

You've echoed my thoughts as well.

12th April 2006, 10:35 AM #13 LordoftheLeftHand Graduate Poster Join Date: Jun 2005 Posts: 1,188

Quote: Help needed with Excel.

It appears that your floobie index is off on the second plateau. Hope this helps. LLH

12th April 2006, 10:55 AM #14 aggle-rithm Ardent Formulist Join Date: Jun 2005 Location: Austin, TX Posts: 15,334

Originally Posted by Unnamed: CFLarsen, I have great respect for what you've written and I use skepticreport a lot. But cool down for a moment. Ignore the name "Interesting Ian". Is there a problem with the post itself?

It must be a programmer thang. We tend not to be a very couth bunch. I just about spit my drink all over my monitor when I read Claus' post.

__________________
To understand recursion, you must first understand recursion. Woo's razor: Never attribute to stupidity that which can be adequately explained by aliens.
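The correlation coefficient and least-squares line worked out by hand in post #9 can be double-checked with a short Python sketch (again an illustration, not part of the thread):

```python
# The same six points as post #9, fitted with a straight line y = m*x + c,
# plus the Pearson correlation coefficient.
xs = [2.2, 2.4, 2.6, 2.8, 3.0, 3.2]
ys = [1.60, 1.75, 1.92, 2.12, 2.35, 2.61]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)

m = sxy / sxx                    # slope, ~1.0071
c = my - m * mx                  # intercept, ~-0.661
r = sxy / (sxx * syy) ** 0.5     # correlation, ~0.9948

print(m, c, r)
print(m * 3.2 + c)  # ~2.562 at x = 3.2, a bit below the observed 2.61
```

The linear r^2 here is about 0.989, short of the exponential trendline's 0.999, because the data curve upward slightly rather than lying on a straight line, which is exactly what Ian observed.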
12th April 2006, 11:00 AM #15 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by LordoftheLeftHand: It appears that your floobie index is off on the second plateau. Hope this helps. LLH

Eh? No. I have no idea what you mean.

12th April 2006, 11:06 AM #16 alfaniner Penultimate Amazing Join Date: Aug 2001 Posts: 23,316

Originally Posted by Interesting Ian: Eh? No. I have no idea what you mean.

Now that made me laugh. All the funnier as it was obviously a serious comment.

__________________
Science is self-correcting. Woo is self-contradicting.

12th April 2006, 11:16 AM #17 Molinaro Illuminator Join Date: Dec 2005 Posts: 4,687

And now a little aside on the number e. 2.718281828459045... is actually quite easy to remember once someone points out the pattern. My grade 10 math teacher pointed it out and I have not forgotten yet... 21 years later!!

2.7 1828 1828 45-90-45

__________________
100% Cannuck!

12th April 2006, 11:20 AM #18 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by alfaniner: Now that made me laugh. All the funnier as it was obviously a serious comment.

Whose comment? Mine or his? Whether his comment is serious or not, I have absolutely no idea what he means. I have never encountered the word "floobie". Even if I had, I cannot imagine that it would make sense in the context of what he wrote. It's just words strung together meaninglessly so far as I can ascertain. So what's the joke?

12th April 2006, 11:23 AM #19 brodski Tea-Time toad Join Date: Mar 2005 Posts: 15,516

Originally Posted by Interesting Ian: Whose comment? Mine or his? Whether his comment is serious or not, I have absolutely no idea what he means. I have never encountered the word "floobie". Even if I had, I cannot imagine that it would make sense in the context of what he wrote. It's just words strung together meaninglessly so far as I can ascertain. So what's the joke?
The joke was that he was clearly talking nonsense, whilst feigning helpfulness. The "hope this helps" comment was quite funny, but your response just made the joke.

12th April 2006, 11:33 AM #20 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by brodski: The joke was that he was clearly talking nonsense, whilst feigning helpfulness. The "hope this helps" comment was quite funny, but your response just made the joke.

Gosh that was funny. I'm splitting my sides laughing. I fail to see what is amusing about ********* ruining this thread. I suspected it might be too good to last. There's always some people who endeavour to piss me off as much as possible. Damn shame.

12th April 2006, 11:37 AM #21 brodski Tea-Time toad Join Date: Mar 2005 Posts: 15,516

Originally Posted by Interesting Ian: Gosh that was funny. I'm splitting my sides laughing.

It was funny, yes, but clearly not meant in a malicious way; it was just a silly little joke.

Originally Posted by Interesting Ian: I fail to see what is amusing about ********* ruining this thread. I suspected it might be too good to last. There's always some people who endeavour to piss me off as much as possible. Damn shame.

Larsen's comment was completely uncalled for, but LLH's comment was just a little gentle ribbing; get over it.

12th April 2006, 12:21 PM #22 Molinaro Illuminator Join Date: Dec 2005 Posts: 4,687

LLH is the one who needs to 'get over it'. It's all fine and dandy to form an opinion about someone. However, when that opinion leaves you unable to differentiate between valid questions or discussions and nonsense, then it is you who should be shunned as much as any woo. The pettiness of the so-called enlightened ones on these forums is quite pathetic.

__________________
100% Cannuck!
12th April 2006, 12:28 PM #23 LordoftheLeftHand Graduate Poster Join Date: Jun 2005 Posts: 1,188

Originally Posted by Interesting Ian: Whether his comment is serious or not, I have absolutely no idea what he means. I have never encountered the word "floobie". Even if I had, I cannot imagine that it would make sense in the context of what he wrote. It's just words strung together meaninglessly so far as I can ascertain. So what's the joke?

You are right, that is all it was. I saw you were having a serious conversation and I couldn't resist derailing it with random nonsense. Sorry. LLH

12th April 2006, 03:40 PM #24 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by LordoftheLeftHand: You are right, that is all it was. I saw you were having a serious conversation and I couldn't resist derailing it with random nonsense. Sorry. LLH

I'm sorry too. Sometimes I get ridiculously bad tempered over relatively trivial things. I was just a bit touchy after Larsen's unkind words.

12th April 2006, 06:16 PM #25 aggle-rithm Ardent Formulist Join Date: Jun 2005 Location: Austin, TX Posts: 15,334

Originally Posted by Interesting Ian: I'm sorry too. Sometimes I get ridiculously bad tempered over relatively trivial things. I was just a bit touchy after Larsen's unkind words.

Same here. I confess I hadn't actually read Ian's post thoroughly before spewing my drink at the response. Something about the use of whimsical emoticons in the middle of a serious discussion of mathematics and computer software just seemed to be begging for a crass response...

__________________
To understand recursion, you must first understand recursion. Woo's razor: Never attribute to stupidity that which can be adequately explained by aliens.

12th April 2006, 11:51 PM #26 LordoftheLeftHand Graduate Poster Join Date: Jun 2005 Posts: 1,188

Originally Posted by aggle-rithm: Same here. I confess I hadn't actually read Ian's post thoroughly before spewing my drink at the response.
Something about the use of whimsical emoticons in the middle of a serious discussion of mathematics and computer software just seemed to be begging for a crass response... Oh I read it before I posted, I just couldn't resist the urge to turn the tables on him (even though it was kind of childish). LLH 14th April 2006, 03:32 PM #27 Lucky Graduate Poster     Join Date: Apr 2004 Location: Yorkshire Posts: 1,180 Ian, I’m baffled by your account of the soccer match data, for 2 reasons: 1) Your Y-values aren’t probabilities (and they go the wrong way). 2) Whatever they are, I suppose the Y-values were derived from the X-values, so I don’t understand why you want to find the relation by regression analysis. You can’t re-create a defined relationship from the numbers, unless you know the form of the equation and just want to calculate the coefficients (and even then you will have rounding errors). Please give more details! 14th April 2006, 05:12 PM #28 alfaniner Penultimate Amazing     Join Date: Aug 2001 Posts: 23,316 Lucky, before you waste any more time, read this... Originally Posted by Interesting Ian ...I know virtually nothing about mathematics. ... __________________ Science is self-correcting. Woo is self-contradicting. 14th April 2006, 07:36 PM #29 Interesting Ian Banned   Join Date: Feb 2004 Posts: 7,675 Originally Posted by Lucky Ian, I’m baffled by your account of the soccer match data, for 2 reasons: 1) Your Y-values aren’t probabilities (and they go the wrong way). They are probabilities. Perhaps it would help if I pasted in a couple of the tables here. (Oops, just finished my post and discovered that I'm unable to paste in properly. Why the hell can't we paste something simple like a table!) HOW MANY GOALS WILL THERE BE? 
PRE-MATCH EXPECTATION: 2.2

            So far 0            So far 1            So far 2
MINUTE   Under 2.5  Over 2.5  Under 2.5  Over 2.5  Under 2.5  Over 2.5
 1.00      1.60      2.66       -          -         -          -
 6.00      1.55      2.83       2.64       1.61      7.95       1.14
11.00      1.49      3.05       2.48       1.68      7.25       1.16
16.00      1.43      3.33       2.32       1.76      6.57       1.18
21.00      1.37      3.67       2.17       1.86      5.94       1.20
26.00      1.32      4.10       2.02       1.98      5.35       1.23
31.00      1.27      4.65       1.89       2.12      4.81       1.26
36.00      1.23      5.35       1.77       2.31      4.31       1.30
41.00      1.19      6.28       1.65       2.54      3.86       1.35
46.00      1.15      7.86       1.53       2.90      3.37       1.42
51.00      1.12      9.70       1.44       3.30      3.01       1.50
56.00      1.09     12.60       1.35       3.88      2.67       1.60
61.00      1.06     17.34       1.27       4.75      2.36       1.74
66.00      1.04     25.73       1.20       6.12      2.07       1.93
71.00      1.02     42.35       1.13       8.52      1.82       2.22
76.00      1.01     81.58       1.08      13.37      1.59       2.68
81.00      1.00    205.11       1.04      26.21      1.39       3.55
86.00      1.00    895.32       1.01      90.78      1.21       5.66

PRE-MATCH EXPECTATION: 3.2

            So far 0            So far 1            So far 2
MINUTE   Under 2.5  Over 2.5  Under 2.5  Over 2.5  Under 2.5  Over 2.5
 1.00      2.61      1.62       -          -         -          -
 6.00      2.44      1.69       5.24       1.24     20.13       1.05
11.00      2.27      1.79       4.73       1.27     17.61       1.06
16.00      2.11      1.90       4.25       1.31     15.29       1.07
21.00      1.96      2.04       3.81       1.36     13.20       1.08
26.00      1.82      2.22       3.40       1.42     11.34       1.10
31.00      1.69      2.45       3.04       1.49      9.71       1.11
36.00      1.57      2.74       2.72       1.58      8.29       1.14
41.00      1.47      3.12       2.43       1.70      7.06       1.16
46.00      1.36      3.76       2.13       1.88      5.81       1.21
51.00      1.29      4.49       1.92       2.08      4.94       1.25
56.00      1.22      5.63       1.73       2.38      4.15       1.32
61.00      1.15      7.47       1.55       2.81      3.47       1.41
66.00      1.10     10.65       1.40       3.49      2.88       1.53
71.00      1.06     16.83       1.27       4.66      2.38       1.72
76.00      1.03     31.11       1.17       6.94      1.97       2.03
81.00      1.01     75.15       1.09      12.63      1.62       2.62
86.00      1.00    317.69       1.03      36.85      1.33       4.06

I have 6 such tables which I copied from a book (The Definitive Guide to Betting Exchanges). These are the probabilities that the author reckons hold for there either being less or more than 2.5 goals in a football (soccer) match as the match progresses. It lists the probabilities for there being so far 0, 1 or 2 goals. The 6 tables represent the pre-match expectations of 2.2, 2.4, 2.6, 2.8, 3.0, and 3.2 goals, of which I've just pasted in the first and last.
The figures I quoted in my opening post are from the first minute (i.e. right at the beginning of the match) for there being under 2.5 goals, as the pre-match expectation of goals increases from 2.2 to 3.2 goals. Now, the greater the number of goals we expect on average before the match, the less probable it is that there will be less than 2.5 goals. That is why the probability decreases (and the odds figure y rises).

What I'm going to do is to have just one table, and when I enter the pre-match goal expectation in a cell it will tell me all the probabilities for getting less or more than 2.5 goals as the match progresses. I needed to find the relationship between the probability for under and over 2.5 goals for each 5 minute interval in the game as the pre-match goal expectation increases. That way I can enter any value into the cell, e.g. a pre-match expectation of 2.7 goals, and obtain all the appropriate probabilities (the mathematical relationship I obtained told me the probability for a table generated by inputting a pre-match goal expectation isn't simply half way between the values in the 2.6 and 2.8 tables).

I also intend to generate tables for the probabilities for there being under and over 1.5 goals as the match progresses, and the same goes for under and over 3.5 goals, under and over 4.5 goals, under and over 5.5 goals and under and over 6.5 goals. Why am I doing all this? I expect everyone has guessed. It is for the purposes of gambling on the total number of goals in football matches whilst a match progresses.

Quote: 2) Whatever they are, I suppose the Y-values were derived from the X-values, so I don't understand why you want to find the relation by regression analysis.

Regression analysis?? What's that? The thing is, I don't know where the guy who produced the tables got his figures from. He simply says this is what he reckons the probabilities are. Actually I did discover yesterday where he got the figures from for the probabilities before the match, or in the first minute.
They are obtained from applying a Poisson distribution to the average pre-match expected goal total. (I don't know what a Poisson distribution is, but I don't need to, because Excel has it built in!) However I'm unable to determine why the probabilities diminish and increase the way they do as the match progresses. I would have thought that the probabilities would decrease and increase uniformly (linearly?). But they don't.

Quote: You can't re-create a defined relationship from the numbers, unless you know the form of the equation and just want to calculate the coefficients (and even then you will have rounding errors). Please give more details!

You'll need to speak in English. I don't understand what "coefficients" means. Anyway I've provided much more detail, so hopefully you should understand what I'm doing.

14th April 2006, 07:42 PM #30 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by alfaniner: Lucky, before you waste any more time, read this... Originally Posted by Interesting Ian: ...I know virtually nothing about mathematics.

What little mathematics I've done I was much better at than any of my fellow pupils at school. Indeed in my 4th year exam, when I was 14, I got the highest mark ever in the history of the school. But I never did any more mathematics after 16, and I've never done any statistics whatsoever (and certainly have never done any calculus and the like).

14th April 2006, 07:56 PM #31 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by Lucky: Ian, I'm baffled by your account of the soccer match data, for 2 reasons: 1) Your Y-values aren't probabilities (and they go the wrong way).

Why are you saying the y values aren't probabilities? I'm using decimal notation.
1.6 means that there is a 100/1.6 percent chance of the event happening, i.e. 62.5%.

15th April 2006, 05:08 AM #36 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

Originally Posted by Art Vandelay:

Quote (Interesting Ian): However I'm unable to determine why the probabilities diminish and increase the way they do as the match progresses. I would have thought that the probabilities would decrease and increase uniformly (linearly?). But they don't.

Suppose there's a probability p of scoring at least once within a five minute period. If the team has already scored twice, all you care about is whether they score once more. Scoring two more goals, three more goals, etc. isn't any different to you, is it? So in a ten minute period, there are four possibilities: score nothing ((1-p)^2); score in the first five minute period, but not in the second (p(1-p)); score in the second five minute period, but not in the first ((1-p)p); score in both (p^2). So the total probability is the sum of the last three: p((2-2p)+p) = p(2-p). Notice that the last one doesn't count for any more than the middle two; the extra goal(s) are "wasted". You therefore get less than 2p. If you were to count the last one twice because there are twice as many goals, you would get 2p, which is what you (incorrectly) expected. Furthermore, if it were linear, then there would be a point at which the probability is more than 100%, which is absurd, isn't it?

Yes, the probabilities can't decrease uniformly. I'm not thinking there. I had in mind that the average number of goals in any remaining period should decrease uniformly. So with a pre-match expectation of 2.2 goals, the half time expectation with no goals scored so far would be half of that. But if I enter 1.1 and do a Poisson distribution, the figures don't tally. Have to think about this.

16th April 2006, 05:32 AM #37 Interesting Ian Banned Join Date: Feb 2004 Posts: 7,675

How do you link cells both ways?
In other words, changing the value in cell A so that it will automatically change the value in cell B to the same value, and also changing the value in cell B so that it will change the value in cell A to the same value?

Another question. Is it possible to keep changing a value in a cell so that after every 5 minute interval after a specific time that I can specify, the cell will display the value of a succession of differing cells?

16th April 2006, 08:09 AM #38 brodski Tea-Time toad Join Date: Mar 2005 Posts: 15,516

Originally Posted by Interesting Ian: How do you link cells both ways? In other words, changing the value in cell A so that it will automatically change the value in cell B to the same value, and also changing the value in cell B so that it will change the value in cell A to the same value?

I don't think you can; that would be a circular reference, because (for instance) if b=a=b you will never get an answer unless you define either "a" or "b". I don't think I'm explaining myself too clearly here. Right: if Excel needs a value of A to determine a value of B, but needs a value of B to determine a value of A, it cannot define either, as the equation will keep looping back on itself.

ETA - if you can be clear about exactly why you want a=b=a=b etc., maybe I can find a way around it.

16th April 2006, 09:13 AM #40 varwoche Penultimate Amazing Join Date: Feb 2004 Location: Puget Sound Posts: 14,951

Originally Posted by Interesting Ian: How do you link cells both ways? In other words, changing the value in cell A so that it will automatically change the value in cell B to the same value, and also changing the value in cell B so that it will change the value in cell A to the same value?

The Goal Seek feature might be applicable, no pun intended.

__________________
To survive election season on a skeptics forum, one must understand Hymie-the-Robot. My authority is total - Trump
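A footnote on the Poisson distribution mentioned in the thread: if the total number of goals in a match is Poisson-distributed with mean λ (the pre-match expectation), then P(under 2.5 goals) = P(X ≤ 2) = e^(-λ)(1 + λ + λ²/2), and its reciprocal is the decimal odds. A Python sketch (an illustration, not part of the thread) reproduces the first-minute "Under 2.5" column of the book's tables:

```python
import math

def under_25_odds(lam):
    """Decimal odds of at most 2 goals when the match total is Poisson(lam)."""
    p_under = math.exp(-lam) * (1 + lam + lam ** 2 / 2)  # P(X <= 2)
    return 1 / p_under

for lam in (2.2, 2.4, 2.6, 2.8, 3.0, 3.2):
    print(lam, round(under_25_odds(lam), 2))
```

This prints values within about 1% of the book's first-minute column (e.g. 1.61 vs the book's 1.60 at λ = 2.2, and 2.63 vs 2.61 at λ = 3.2); the small residual gap is presumably rounding in the book or a built-in margin.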
http://sasdghub.up.ac.za/en/research/almost-sure-multifractal-spectrum-of-schramm-loewner-evolution/
# Almost sure multifractal spectrum of Schramm-Loewner evolution 19 Jan 2018 Suppose that $\eta$ is a Schramm-Loewner evolution (SLE$_\kappa$) in a smoothly bounded simply connected domain $D \subset {\mathbb C}$ and that $\phi$ is a conformal map from ${\mathbb D}$ to a connected component of $D \setminus \eta([0,t])$ for some $t>0$. The multifractal spectrum of $\eta$ is the function $(-1,1) \to [0,\infty)$ which, for each $s \in (-1,1)$, gives the Hausdorff dimension of the set of points $x \in \partial {\mathbb D}$ such that $|\phi'( (1-\epsilon) x)| = \epsilon^{-s+o(1)}$ as $\epsilon \to 0$. We rigorously compute the a.s. multifractal spectrum of SLE, confirming a prediction due to Duplantier. As corollaries, we confirm a conjecture made by Beliaev and Smirnov for the a.s. bulk integral means spectrum of SLE and we obtain a new derivation of the a.s. Hausdorff dimension of the SLE curve for $\kappa \leq 4$. Our results also hold for the SLE$_\kappa(\underline \rho)$ processes with general vectors of weight $\underline\rho$.
https://stacks.math.columbia.edu/tag/0E7F
Lemma 37.69.1. Let $f : X \to Y$ be a proper morphism of schemes. Let $y \in Y$ be a point with $\dim (X_ y) \leq 1$. If

1. $R^1f_*\mathcal{O}_ X = 0$, or more generally

2. there is a morphism $g : Y' \to Y$ such that $y$ is in the image of $g$ and such that $R^1f'_*\mathcal{O}_{X'} = 0$ where $f' : X' \to Y'$ is the base change of $f$ by $g$.

Then $H^1(X_ y, \mathcal{O}_{X_ y}) = 0$.

Proof. To prove the lemma we may replace $Y$ by an open neighbourhood of $y$. Thus we may assume $Y$ is affine and that all fibres of $f$ have dimension $\leq 1$, see Morphisms, Lemma 29.28.4. In this case $R^1f_*\mathcal{O}_ X$ is a quasi-coherent $\mathcal{O}_ Y$-module of finite type and its formation commutes with arbitrary base change, see Limits, Lemmas 32.18.3 and 32.18.2. The lemma follows immediately. $\square$
http://math.stackexchange.com/questions/181833/proving-that-the-number-of-vertices-of-odd-degree-in-any-graph-g-is-even
# Proving that the number of vertices of odd degree in any graph G is even I'm having a bit of trouble with the question below Given $G$ is an undirected graph, the degree of a vertex $v$, denoted by $\mathrm{deg}(v)$, in graph $G$ is the number of neighbors of $v$. Prove that the number of vertices of odd degree in any graph $G$ is even. - The sum of all the degrees is equal to twice the number of edges. Since the sum of the degrees is even and the sum of the degrees of vertices with even degree is even, the sum of the degrees of vertices with odd degree must be even. If the sum of the degrees of vertices with odd degree is even, there must be an even number of those vertices. –  Mike Aug 12 '12 at 21:24 @Mike: that's an answer, not a comment! –  Ben Millwood Aug 12 '12 at 21:34 @BenMillwood Heh. Not sure how formal of a proof that is. That's why I left it as a comment and not an answer. –  Mike Aug 12 '12 at 21:48 @Mike What's informal about it? Not enough instances of $G$, $v$, and $2n+1$? Don't fall into the trap of thinking that good mathematics has to be riddled with symbols. –  Austin Mohr Aug 13 '12 at 3:26 I'm posting Mike's comment as an answer, since he won't. The sum of all the degrees is equal to twice the number of edges. Since the sum of the degrees is even and the sum of the degrees of vertices with even degree is even, the sum of the degrees of vertices with odd degree must be even. If the sum of the degrees of vertices with odd degree is even, there must be an even number of those vertices. - Hint: What is the sum of the degrees of all vertices? - We represent $G$ by a symmetric relation on the set of points $P$, which we also call $G$, so $$G = \{(a,b), (b,a) : \text{there is an edge between } a \text{ and } b\}$$ Clearly, $2 \mid \#G$ where $\#G$ is the number of elements in $G$, since each edge contributes both $(a,b)$ and $(b,a)$.
Now $$\deg (a) = \# \{(a,x): (a,x) \in G\}$$ and since $$\sum_{a\in P} \deg(a) = \sum_{a\in P} \# \{(a,x): (a,x) \in G\} = \#\{(x,y) : (x,y) \in G\} = \# G,$$ we know that $$2 \mid \sum_{a\in P} \deg (a).$$ From elementary number theory we have $$2 \mid \sum_{j=1}^n a_j \Leftrightarrow 2 \mid \#\{j : 2 \nmid a_j\}$$ (a sum of integers is even iff the number of odd summands is even), and setting $a_j = \deg(b_j)$ with $b_j \in P$ an enumeration of $P$, the statement follows. -
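The handshake-lemma argument above is easy to sanity-check by machine. Here is a small Python sketch (the helper names `odd_degree_count` and `all_graphs` are my own, not from the thread) that counts odd-degree vertices and exhaustively verifies the claim for every simple graph on five vertices.

```python
from collections import Counter
from itertools import combinations

def odd_degree_count(edges):
    """Number of odd-degree vertices of the undirected graph given by `edges`."""
    deg = Counter()
    for a, b in edges:   # each edge raises two degrees by one,
        deg[a] += 1      # so the degree sum is twice the edge count
        deg[b] += 1
    return sum(1 for d in deg.values() if d % 2)

def all_graphs(n):
    """Yield every simple undirected graph on vertices 0..n-1 as a tuple of edges."""
    possible = list(combinations(range(n), 2))
    for k in range(len(possible) + 1):
        yield from combinations(possible, k)

# Exhaustive check over all 2^10 = 1024 graphs on 5 vertices.
assert all(odd_degree_count(g) % 2 == 0 for g in all_graphs(5))
```

For instance, the path graph `[(0, 1), (1, 2)]` has exactly two odd-degree vertices, its endpoints, in line with the lemma.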
https://en.wikisource.org/wiki/Constitutional_Court_Decision_No._15%E2%80%9318/2556
# Constitutional Court Decision No. 15–18/2556 The petitioners submitted four petitions... The first petition The 1st Petitioner claimed... The 1st Petitioner requested... The second petition The 2nd Petitioner claimed... The 2nd Petitioner requested... The third petition The 3rd Petitioner claimed... The 3rd Petitioner requested... The fourth petition The 4th Petitioner claimed... The 4th Petitioner requested... An initial question which the Court needed to handle was whether or not the four petitions meet the criteria A Respondent replied... The Court examined the following witnesses: 1. General Somchet Bunthanom and Wirat Kanlayasiri, Petitioners, testified... 2. Rangsima Rot-ratsami, Democrat Representative, testified... 3. Niphit Inthrasombat, Democrat Representative, testified... 4. Phaibun Nititawan, Senator, testified... 5. Suwichak Nakwatcharachai, National Assembly Secretary General, testified... 6. Atchara Chuyuenyong, Head of Audiovisual Section, testified... The Court will now deal with the following questions: The preliminary question: is the Court competent to address a case concerning a constitutional amendment? The first question: do the constitutional amendment proceedings constitute an attempt to acquire the national government power by the means not recognised by the Constitution? (1) Was the constitutional amendment draft considered by the National Assembly the same as that introduced to it? (2) Is the determination of a period of time for amendment to the constitutional amendment draft constitutional? (3) Are the identification and the casting of votes constitutional? The second question: do the constitutional amendment contents constitute an attempt to acquire the national government power by the means not recognised by the Constitution? For these reasons, the Court hereby decides... (23) Constitutional Court Decision[1] In His Majesty's Name The Constitutional Court Decision No. 15–18/2556 Case No. 36/2556 Case No. 37/2556 Case No. 
41/2556 Case No. 43/2556 The 20th Day of November, Buddhist Era 2556 (2013)

Between

General Somchet Bunthanom and others, 1st Petitioner
Wirat Kanlayasiri, 2nd Petitioner
Sai Kangkawekhin and others, 3rd Petitioner
Phiraphan Saliratthawiphak and others, 4th Petitioner

and

President of National Assembly, 1st Respondent
Vice President of National Assembly, 2nd Respondent
Representatives and Senators, 3rd to 312th Respondents

Re: The petitions for a decision of the Constitutional Court under the Constitution, section 68

General Somchet Bunthanom ("Somchet") and others, Wirat Kanlayasiri ("Wirat"), Sai Kangkawekhin and others, and Phiraphan Saliratthawiphak and others submitted to the Constitutional Court ("Court") four petitions for its decision under the Constitution, section 68. Having considered, the Court found that the four petitions are of proximate matters in issue. The Court then consolidated the Cases Nos. 36/2556, 37/2556, 41/2556 and 43/2556, for the sake of the proceedings. The Court also ordered Somchet and others to be cited as the 1st Petitioner, Wirat, the 2nd Petitioner, Sai Kangkhawekhin and others, the 3rd Petitioner, and Phiraphan Saliratthawiphak and others, the 4th Petitioner, whilst the President of the National Assembly ("NA") is cited as the 1st Respondent, the NA Vice President, the 2nd Respondent, and the Representatives and the Senators, the 3rd to the 312th Respondents. The facts derived from the petitions and the supplementary documents are as follows: The first petition (Case No. 36/2556) The 1st Petitioner presented the following claims: The 3rd to the 310th Respondents jointly introduced to the 1st Respondent a constitutional amendment motion in the form of the Draft Amendment to the Constitution of the Kingdom of Thailand (No.
…), Buddhist Era … (…), [amending section 111, section 112, section 115, section 116, paragraph 2, section 117, section 118, section 120 and section 241, paragraph 1, as well as repealing sections 113 and 114] ("Draft"). The Petitioner deems that the proceedings and contents of the said Draft would result in the change of the democratic regime of government with the King as Head of State and an attempt to acquire the national government power by the means not recognised by the Constitution, pursuant to the Constitution, section 68, paragraph 1. The 1st Petitioner considers that the constitutional draft amendment forwarded to the NA was different from that introduced to the Secretariat General of the House of Representatives ("SGHR") by the 3rd to the 310th Respondents. Moreover, during the constitutional amendment proceedings at the first reading, Acceptance of Principles, the 1st Respondent abused his authority as the presiding officer. That is to say, he ordered the motion amendments to be filed within fifteen days from the date the NA accepted the principles, in breach of the Constitution, sections 3, 26 and 27, as well as the National Assembly Rules of Order, BE 2553 (2010) ("NARO"), rule 96. The 1st and 2nd Respondents also replaced each other as the presiding officer and denied the right of the Members of the National Assembly ("MNA") to hold a discussion following the reservation of the motion amendments and opinions, leading to a contravention of the Constitution, sections 130 and 291 (4) as well as the NARO, rule 99. Furthermore, the circumstances of the 2nd Respondent indicate his partiality and conflict of interest arising from the constitutional amendment, as this constitutional amendment motion would enable him to become a senatorial candidate immediately. His conduct is contrary to the Constitution, sections 89 and 122. 
As regards the contents of the constitutional amendment, the 1st Respondent alleged that the principles and rationales of the Draft which would require the entire Senate to directly be elected by the citizens in the same manner as the House of Representatives would render the Senate incapable of examining the administration of public affairs by the Executive Branch and would deteriorate the principles of check and balance of the state powers. Additionally, the Draft, section 5, which concerns the qualifications and disqualifications of a senatorial candidate, would modify the qualifications and disqualifications set forth in section 115 (5) of the current Constitution of the Kingdom of Thailand and would thereby permit the forebears, spouse or children of a Representative or political position holder to become senatorial candidates. The Draft, section 7, would allow the Senators to remain in offices for several consecutive terms. The Draft, section 11, which deals with the enactment of a draft Organic Act on Elections of Representatives and Senators ("OARS"), would exclude the competence of the Court to review the constitutionality of the draft organic acts named in the Constitution. Moreover, the Draft, section 12, would terminate the memberships of the appointed Senators on the date a senatorial election is held in April 2014 according to the Draft, although their memberships will actually expire in February 2017. 
Finding that all the Respondents exercise the right to amend the Constitution in such a manner conducing to the change of the democratic regime of government with the King as Head of State or the form of state and with the intention to acquire the national government power by the means not recognised by the Constitution, despite the prohibition under the Constitution, section 68, paragraph 1, the 1st Petitioner therefore requested the Court to hand down the following decision and order: (1) An order imposing provisional measures upon the 1st Respondent and the Secretary General of the National Assembly ("SeGNA"), directing them to suspend the constitutional amendment proceedings at the third reading until any decision is given by the Court; (2) A decision directing the 1st to the 310th Respondents to forthwith cancel the proceedings of the Draft on modification of senatorial sources; (3) An order dissolving the For Thais Party ("FTP"), the Thai Nation Development Party, the Nation Development Party, the Chon's Power Party, the People's Majority Party and the New Democracy Party, as well as disfranchising their leaders and executive members for five years from the date the decision or order is rendered by the Court. The 1st Petitioner later modified the petition of 15 October 2013 by adding the following statements thereto: (1) During the proceedings of the constitutional amendment on senatorial sources, the resolutions were passed in such a manner that one MNA used several voting cards. This rendered void the entire NA joint sessions at the second reading. (2) A request is made for an order or decision revoking the NA resolutions adopted at all three readings and cancelling the promulgation of the Draft. The second petition (Case No. 
37/2556) The 2nd Petitioner brought forward the following claims: The 1st to the 310th Respondents exercise the constitutional rights and freedoms to undermine the democratic regime of government with the King as Head of State or to acquire the national government power by the means not recognised by the Constitution. The 3rd to the 310th Respondents jointly initiated a constitutional amendment motion in the form of the Draft by which all the Senators would be elected in the same manner as the Representatives who are directly elected by the citizens. Udomdet Rattanasathian ("Udomdet"), FTP Representative who is the 3rd Respondent, and other Respondents jointly introduced the Draft to the 1st Respondent. Following that, the 1st Respondent summoned the NA on 1 April 2013. At the first reading on 3 April 2013, the NA, by three hundred and sixty seven votes, accepted the principles and, by two hundred and four votes, disapproved the principles and, by thirty four votes, abstained. The NA then set up a forty five person committee to review the Draft. On 1 August 2013, the committee presented the Draft having been revised according to its resolutions as well as the reports thereon to the 1st Respondent to be forwarded to the NA for further consideration. The NA completed the consideration of the Draft on 12 September 2013 and the 2nd Respondent informed the NA that the third reading may be held upon elapse of fifteen days from the completion of the second reading according to the Constitution, section 291 (5). Such elapse occurred on 27 September 2013. The 2nd Petitioner deems that the described constitutional amendment is the change of the Constitution, not merely an amendment thereto pursuant to the Constitution, section 291, because it would by nature result in the change of the form of state and the methods of senatorial instatement. The requirement that all the Senators be elected would enfeeble the check and balance under the parliamentary system. 
The abolition of the subject matters of section 115 (5), the modification of the disqualifications under section 115 (6), (7) and (9), as well as the repeal of section 116 would enable the forebears, spouse or children of a Representative or political position holder to become senatorial candidates and would allow the Senators to stay in offices for several consecutive terms. The amendment governing the OARS enactment would exclude the competence of the Court to review the constitutionality of the draft organic acts specified in the Constitution, section 141. The memberships of the incumbent appointed Senators would also be terminated on the date a new senatorial election takes place in April 2014, in spite of the fact that their memberships will expire in February 2017. The said constitutional amendment, which would permit the incumbent Senators to immediately apply for senatorial candidacies, shows the intention of the Senators partaking in the introduction and consideration of the constitutional amendment motion to remain in the legislative power of the State by unconstitutional means. In addition, the requirement that the memberships of the current Senators be ended on the date of a new senatorial election would abrogate the right of the appointed Senators to hold their offices. The constitutional amendment proceedings are also of other unconstitutional and unlawful characteristics. The constitutional amendment draft introduced to the 1st Respondent was different from that forwarded to the MNAs for consideration. The proceedings were hastened, for instance, the discussions were concluded with the intention to exclude the participation of the MNAs who had reserved the amendments to the motion, and the presiding officer decided upon the period of motion amendment without seeking any resolution of the NA due to its lack of quorum.
The committee proceedings led to the cancellation and alteration of the subject matters of the Draft against the originally accepted principles. Section 11/1 was inserted in section 11 of the Draft during the consideration of the latter and a resolution was passed thereon at the same time without piecemeal discussion. And, during piecemeal discussions, certain MNAs used the identification and voting cards ("Card") on behalf of others. On the mentioned grounds, the 2nd Petitioner views that this constitutional amendment would subject the Senators to the mandate of any specific person, violating the Constitution of the Kingdom of Thailand, Buddhist Era 2550 (2007) ("2007 Constitution"), in consequence. Finding that the constitutional rights and freedoms are exercised for the purpose of undermining the democratic regime of government with the King as Head of State or acquiring the national government power by the means not recognised by the Constitution, section 68, the 2nd Petitioner therefore requested the Court to deliver the following decision and order: (1) An injunction directing the 1st to the 310th Respondents to suspend the constitutional amendment proceedings at the third reading until any decision or order is passed by the Court; (2) A decision directing the 1st to the 310th Respondents to cancel the Draft. The third petition (Case No. 41/2556) The 3rd Petitioner tendered the following claims: In the course of the constitutional amendment proceedings, the 1st and the 2nd Respondents replaced each other as the presiding officer and conducted the NA sessions without good faith and impartiality as required by the NARO. This gave rise to a conflict of interest and indicates their intention to exercise their authority against the provisions of the Constitution. 
They thus exercised the constitutional rights and freedoms to undermine the democratic regime of government with the King as Head of State or to acquire the national government power by the means not recognised by the Constitution. Their conduct is as follows, in summary: In respect of the contents of the Draft, sections 5 and 6 require that all Senators be elected, without prohibiting the forebears, spouse or children of a Representative or political position holder from becoming candidates. This would enhance their opportunity to be elected, having regard to their political bases. For that very reason, the constitutional amendment benefits the Senators. Moreover, the proceedings concerning the Draft, section 11, were carried out in a hasty manner. The OARS enactment is required to be completed within a brief period. And the incumbent Senators are required to lose their memberships on the date of a new senatorial election. These clearly indicate the intention of the 1st and the 2nd Respondents, who are Representative and Senator, to expedite the constitutional amendment in order to enrich themselves with the national government power by the means not recognised by the Constitution as well as to stay in the legislative power of the State. In relation to the proceedings of the Draft, the Constitution, section 291 (4), was not observed. That is to say, during the constitutional amendment sessions at the second reading on 20 August 2013, the two Respondents failed to conduct the proceedings in order of sections. Consequently, the proceedings are unconstitutional and the entire Draft is void. Furthermore, during the discussion on amendment to the Draft, section 5, held on 4 September 2013, the 2nd Respondent denied the participation of the amendment reservers. He then concluded the discussion amidst heavy protests by the MNAs.
When a resolution on conclusion of discussion was required, the 2nd Respondent told the NA that a resolution on the Draft, section 5, was to be adopted and the Respondent immediately dismissed the session. This resulted in a violation of the Constitution, section 291 (4). If the Draft is promulgated, the 2nd Respondent will be entitled to apply for senatorial candidacy without having to resign from his office and he will also be able to hold his office for an unlimited period. In performing his duties, the 2nd Respondent was therefore attacked by a conflict of interest, despite the prohibition under the Constitution, section 122. The Respondent also plotted with the pro-government MNAs to introduce and approve the discussion conclusion motion. As this was incompatible with the administrative procedure, the session became void. The constitutional amendment is also likely to destroy the system of check and balance between the House of Representatives, the Senate, the constitutional independent organs, the constitutional organs and the Court. Finding that these activities give rise to an attempt to acquire the national government power by the means not recognised by the Constitution, in defiance of the Constitution, section 68, paragraph 1, the 3rd Petitioner therefore requested the Court to render the following decision and order: (1) An order imposing provisional measures upon the 1st Respondent and the SeGNA, directing them to suspend the constitutional amendment sessions at the third reading until any decision is conferred by the Court; (2) A decision directing the 1st and the 2nd Respondents to straightaway cancel the Draft on the modification of senatorial sources; (3) A decision declaring that the Draft is void or unconstitutional. 
The 3rd Petitioner subsequently modified the petitions of 30 September 2013 and 2 October 2013 by inserting the following therein: (1) A request for an order imposing provisional measures upon the Prime Minister, directing her to suspend the execution of the Constitution, section 150, until any decision is provided by the Court; (2) A request for a decision declaring that the activities of the Respondents and the related parties fall under the Constitution, section 68; (3) A request for a decision prohibiting the 1st and the 2nd Respondents from contravening the Constitution and from promulgating the Draft concerning the modification of senatorial sources without delay. The fourth petition (Case No. 43/2556) The 4th Petitioner asserted the following claims: The Draft introduced by the 3rd to the 310th Respondents contains the amendment to section 111, section 112, section 115, section 116, paragraph 2, section 117, section 118, section 120 and section 241, paragraph 1, and the repeal of sections 113 and 114, with the intention to abolish the senatorial appointment, leaving only the senatorial election. This is inconsistent with the principles and purposes of the 2007 Constitution. The 311th and the 312th Respondents, who were the committee members forming the majority and favouring the Draft during the committee proceedings, revised the Draft with a view to furnishing themselves and their partisans with certain advantages from the Draft. In this respect, the repeal of section 115 (5) would allow the forebears, spouse or children of a Representative or political position holder to become senatorial candidates, resulting in a breach of the Constitution, section 122.
The amendment to section 115 (6), (7) and (9), would permit the persons having been members of or having held certain positions in the political parties or the persons having been Representatives, Senators, local councilors, local administrators, Ministers or other political position holders to apply for senatorial candidacies without having to abide by the original conditions of the Constitution. Additionally, the Draft, section 6, would entitle the incumbent Senators to compete for senatorial seats again, leading to a conflict of interest as proscribed by the Constitution, section 122. The Draft, section 10, prohibits an election or appointment of a new Senator to fill a vacancy which occurs prior to the promulgation of the Draft. This means the Draft is not an amendment to the Constitution according to the Constitution, section 291. Also, the Draft, section 8, fails to allow the application of section 118 in cases where a senatorial office becomes vacant for any reason other than term expiry and an election of a new Senator is required to be conducted within forty five days. This would bring about certain problems relating to the enforcement of the Constitution. On 4 April 2013, the 1st Respondent summoned a committee to review three constitutional amendment drafts as part of the second reading. However, the Respondent summoned the NA to adopt a resolution regarding a period of motion amendment again on 18 April 2013, causing the first reading to be resumed. This activity was against the Constitution, section 291. Moreover, the 1st and the 2nd Respondents, as NA President and Vice President, failed to conduct themselves in a neutral manner. During the constitutional amendment proceedings at the second reading, the two invoked a majority of votes to conclude the discussions and approve the Draft whilst many MNAs who formed the minority and had reserved the motion amendments and opinions had not yet completed their discussions.
In the course of their performance of duties, the 1st and the 2nd Respondents also quickened the proceedings by plotting with the pro-government MNAs to introduce and approve the discussion conclusion motions. As a result, the Constitution, sections 89, 122, 125 in conjunction with sections 137 and 291, as well as the NARO, rule 47, rule 59, paragraph 2, and rule 99, were infringed. Besides, during the identification for calculating a quorum before passing resolutions on the Draft, certain MNAs inserted the Cards of other MNAs in the readers and pressed the identification buttons. They also inserted the Cards in the readers and pressed the voting buttons on behalf of the true owners of the Cards, notwithstanding that the owners were absent. The result of the quorum calculation and polling thus deviated from the truth. As the described acts were the casting of more than one vote, the acts infringed the Constitution, section 126, paragraph 3, and the actors failed to carry out their duties faithfully and in the interest of all Thais, also breaching the Constitution, section 122 and section 126, paragraph 3. The Petitioner therefore requested the Court to pronounce the following decision and order: (1) A decision that the Draft proceedings are contrary to or inconsistent with the Constitution and are thus void; (2) A decision that the acts of all the Respondents constitute an attempt to acquire the national government power by the means not recognised by the Constitution and that all the Respondents are obliged to discontinue those acts pursuant to the Constitution, section 68; (3) An order that an examination be held for the indication of provisional measures. An initial question which the Court needed to handle was whether or not the petitions of the four Petitioners meet the criteria of the Constitution, section 68.
The Constitution, section 68, paragraph 1, prescribes: "No person may exercise the constitutional rights and freedoms to undermine the democratic regime of government with the King as Head of State under this Constitution or to acquire the national government power by the means not recognised by this Constitution". Paragraph 2 provides: "Where any person or political party commits the act forbidden by paragraph 1, a person aware of the act shall be entitled to refer the matter to the Attorney General for investigation and request the Constitutional Court to order the discontinuation of the act, without prejudice to the criminal prosecutions against the actor". Upon due consideration, the Court was of the following opinion: Section 68, paragraph 2, entitles a person who learns that any person or political party commits the act described in section 68, paragraph 1, to examine such act in two ways. Firstly, he may refer the matter to the Attorney General for investigation. And secondly, he may request the Court to order the discontinuation of the act in question – this is his right to directly make a request to the Court. The present cases are well-founded to the extent that it could preliminarily be heard that all the Respondents, by initiating the Draft, are likely to unfit the system of check and balance between the House of Representatives, the Senate, the constitutional independent organs and other constitutional organs, which forms the authority of the Senate as designed by the 2007 Constitution. 
This is the purpose of the Constitution as can be seen in its Preamble: the Constitution is intended to serve as the guidance for public administration with a view to allowing the extensive public participation in the preparation of the Constitution as well as in the administration and the concrete examination of the exercise of state powers, through the creation of a mechanism for the political institutions, both legislative and executive, which would enhance their balance and effectiveness under the parliamentary system of government and enable the judicial institutions and other independent organs to faithfully discharge their duties. The Court saw an attempt to acquire the national government power by the means not recognised by the Constitution, as mentioned in the Constitution, section 68, paragraph 1. The Court therefore accepted to address the four petitions by virtue of the Constitution, section 68, paragraph 2, and the Constitutional Court Regulations on Procedure and Decision Making, BE 2550 (2007), regulation 17 (2). The Court then ordered the three hundred and twelve Respondents to reply within fifteen days from their receipt of the copied petitions. The other requests were denied. In the course of the Court proceedings, three hundred and eleven Respondents failed to reply to the Court. Only Suradet Chiratthiticharoen, a Senator who is the 293rd Respondent, replied as follows: On 20 March 2013, he signed a motion to introduce the Draft, for he deemed that if the Senators are elected in the same way as the Representatives who are directly elected by the citizens, the principles of democracy and the public participation in politics pursuant to the Constitution will be promoted and the Senators will be attached to the people of each region.
But he did not concur on the amendment to certain sections of the Constitution, that is, the Draft, section 5, which would amend the Constitution, section 115, and the Draft, section 7, which would amend the Constitution, section 117, because such amendment is inconsistent with the purposes of the original draft. He then abstained at the third reading. He acted so in good faith and without the intention to acquire the national government power by the means not recognised by the Constitution or to change the democratic regime of government with the King as Head of State. He therefore requested the Court to dismiss the parts of the petitions which concern him. On 8 November 2013, the Court examined seven witnesses called by the Petitioners and summoned by the Court, namely, Somchet, Wirat, Rangsima Rot-ratsami ("Rangsima"), Niphit Inthrasombat ("Niphit"), Phaibun Nititawan ("Phaibun"), Suwichak Nakwatcharachai ("Suwichak") and Atchara Chuyuenyong ("Atchara"). The witnesses gave the following evidence: Somchet, the 1st Petitioner, and Wirat, the 2nd Petitioner, testified that the introduction and consideration of the constitutional amendment on senatorial sources by the 1st to the 310th Respondents result in an attempt to acquire the national government power by the means not recognised by the Constitution for the following reasons: 1. The constitutional amendment draft considered at the first reading was not that introduced by the 3rd to the 310th Respondents. The draft considered by the NA at the first reading was therefore introduced by no one and the first reading is in opposition to the Constitution, section 291 (2). 2. The constitutional amendment proceedings are unconstitutional. The MNAs were denied the right to amend the motion because the fifteen-day period of motion amendment had been fixed in defiance of the NARO for the purpose of preventing them from amending the motion.
In addition, a majority of votes was invoked to exclude the right of the amendment and opinion reservers as well as to conclude the discussions and to adopt the resolutions. 3. The constitutional amendment sessions were conducted by the persons affected by a conflict of interest and by the Senators and the Representatives who conspired to introduce the constitutional amendment motion to favour each other. 4. During the piecemeal discussions at the second reading, the MNAs who initiated the constitutional amendment used the Cards on behalf of other MNAs and sections 11 and 11/1 of the Draft were simultaneously approved. 5. The constitutional amendment proceedings were carried out in such a cursory manner, because the Respondents and their partisans would benefit if the amendment is successful. The modification of the senatorial qualifications, disqualifications and terms of office is not in line with the purposes of the current Constitution which has been designed for the check and balance between the state powers. Additionally, the memberships of the incumbent Senators, especially those appointed, are required to terminate immediately after the new Senators take office. 6. The OARS enactment processes are required to be completed within one hundred and twenty days. If such period is not observed, the draft passed by the Representatives can be forwarded to the King without delay. This would exclude the competence of the Court to review the constitutionality of the draft organic acts. The Petitioners hence view that the constitutional amendment conducted by the 1st to the 310th Respondents is an attempt to acquire the national government power by the means not recognised by the Constitution, resulting in a violation of the Constitution, section 68, paragraph 1. Rangsima, a witness called by the 2nd and the 4th Petitioners, testified as follows: The witness is a Democrat Representative from Samut Songkhram Province.
She has always combatted and complained about the use of the Cards on behalf of other MNAs. But her complaints have never led to the actual punishment of any wrongdoer. The witness saw the events as appeared in the video clips adduced in the present cases, and filed a protest with the presiding officer, but her protest was in vain. The conduct of the group of persons seen by the witness usually occurs both during and after the NA sessions. They hold the Cards on behalf of each other. If the owner of any Card is absent, the holder of the Card will register the presence and cast a vote on his behalf. The witness has learnt about this conduct but had no evidence to support her complaints. The witness then requested the officers, whose names were concealed by the witness, to take photographs and record videos for her. The witness also confirmed that the person appearing in a photograph is Narit Thongthirat ("Narit"), an FTP Representative for Sakon Nakhon Province, whom the witness knows very well because she and he have both been Representatives for ten years. Being aware that Narit usually uses the Cards on behalf of others, the witness keeps her eye on him. The witness requested the officers to take his photographs and record his videos with photograph and motion picture recorders according to the signals given by the witness whilst being in the auditorium. Having received the videos from the officers, the witness forwarded them to a legal team of the Democrat Party for further actions. After the witness filed complaints and published the said videos, Narit did not condemn the witness or take any action against her. The Court ordered the showing of the video clips adduced as evidence by the 2nd Petitioner in support of his petition and the Court directed the witness to give explanations along with the pictures shown.
The witness confirmed that the events seen were part of the proceedings of the constitutional amendment on senatorial sources and included both the calculation of quorum and the casting of votes. The witness continued to testify that, at each polling, the NA President requires the identification button to be pressed first in order to ensure the constitution of a quorum and he then requires another button to be pressed for the purpose of voting. However, the person seen in the pictures used the Card of another and pressed the buttons on behalf of the latter. In the other two clips, the person was seen to be changing his seat because he learnt that the officers were taking his photographs and recording his videos at the witness's request. The witness knows that, under the system of identification and voting by electronic means, several Cards can be inserted in a reader without the number of the used Cards and the times of such use being checkable. If the original Card of any person is used, his alternative Card cannot be used again, because the system records the code of each person to prevent such use of his alternative Card. Niphit, a witness called by the 2nd and the 4th Petitioners, testified as follows: The constitutional amendment proceedings were not conducted in three readings, because the period of motion amendment was determined in defiance of the correct methods. The determination of a period of motion amendment is an important process which allows the MNAs to amend a motion. When an MNA does not agree with a draft law or constitution, he is entitled to amend the motion and express his opinions during the committee proceedings. If the committee does not concur upon his amendment, he may reserve his amendment and present it to the NA for further discussion. The right to amend a motion and reserve an amendment is a privilege of an MNA.
The witness applied for amending the constitutional amendment motion but his application was turned down by the committee, which claimed that the application was made after a period of fifteen days. The witness had never seen the denial of the right to discuss by the presiding officer who cited the reason that a discussion can be subject to a period of time for the sake of its conciseness. An amendment to a motion bears importance, because an MNA who has not applied for amending a motion is not entitled to object to the proceedings of the committee, save the revisions made by the committee. If the committee confirms the original draft without revision, an MNA is not entitled to initiate a discussion, whether or not he has applied for amending the motion. Phaibun, a witness called by the 2nd and the 4th Petitioners, testified as follows: According to the examination reports of the Subcommittee for Prevention of Corruption and Examination of Exercise of State Powers under the Senatorial Committee for Studies of Corruption and Promotion of Good Governance, as submitted to the Court by the witness, the constitutional amendment draft annexed to the motion was found to be different from that forwarded to the NA. The witness believed that the original draft was replaced in the morning of 27 March 2013. The presentation of the new draft lacked a written motion signed by the initiating Representatives and Senators. The draft originally introduced and signed was submitted on 20 March 2013 and does not contain any amendment to section 115 (9) and section 116, paragraph 2. The witness, as Chairperson of the said Subcommittee, summoned Butsakon Amphonprapha, Director of the SGHR Meeting Affairs Bureau who was directly in charge of this matter.
She appeared on Saturday, 23 March 2013, and explained that she was informed by Nongyao Praphin, a specialised legal officer of the Bureau, that an SGHR government officer, whom Nongyao could not remember, requested the replacement of the constitutional amendment draft without having coordinated with Udomdet. The witness considers that this forgery of motion and use of false motion at NA sessions indicates a joint attempt to acquire the national government power by the means not recognised by the Constitution. Suwichak, the SeGNA, summoned by the Court as a witness, testified as follows: There was only one constitutional amendment draft. The contents of the introduced draft were equivalent to those of the draft actually used at the NA sessions. However, the witness had never compared both drafts with each other. The review of a draft constitution or law is the duty of the preliminary officers. The witness reviewed the constitutional amendment draft after the NA held the first reading, for the witness was absent on the date the motion was introduced. Pursuant to precedents, a motion can be edited before it is placed in the agenda by the NA President and the officers would edit the motion as requested. The witness learnt that a subcommittee examined the relevant officers. But the witness does not know the examination outcome, since the officers have not yet provided any information. A motion may, in general, be introduced outside the working hours or on a holiday if a session is held. With respect to the MNAs using the Cards on behalf of each other, the witness testified as follows: As far as the witness knew, the SGNA provides one Card to each of the MNAs to be used for identification and voting. At the initial period, the MNAs were required to sign a list in front of the auditorium before the commencement of a session and the NA President would commence the session after the constitution of a quorum.
The voting card system has been in use for about ten years in order to prevent certain errors and disputes which may arise from the voting by hand raising. The witness could not confirm if the MNAs used the Cards on behalf of each other. But the witness knew that a committee had been set up by the NA President to look into the matter but no conclusion had yet been reached. The Court ordered the showing of the video clips adduced as evidence by the 2nd Petitioner in support of his petition and the Court ordered the witness to express opinions. The witness testified that he was not certain if an MNA was using a Card on behalf of another. But he said he saw that an MNA might be holding two Cards, one for identification and the other for casting of vote. The witness does, however, not know the number of the Cards which one MNA may possess. The witness agreed that the voices of the presiding officer heard in the video clips belong to Nikhom Wairatchaphanit, the NA Vice President who presided over the session. Atchara, Head of the SGHR Audiovisual Section summoned by the Court as a witness, testified as follows: According to the rules on NA session attendance, an MNA signs his name by placing his Card upon an electronic machine. The processor will calculate the number of the signing MNAs and send the information to the Section. The Records Section oversees the signing and Card placement. The Court ordered the showing of the video clips adduced as evidence by the 2nd Petitioner in support of his petition and the Court ordered the witness to give opinions. The witness testified that the pictures shown were possibly the events of pressing voting buttons. In general, one Card is used by one MNA and one MNA has one vote. The witness saw that the persons appearing in the clips had several Cards in their hands. Under the present system, an MNA can insert a Card in a reader and press the voting button before inserting a Card of another MNA and doing the same.
But no MNA can use his own Card again after the Card is inserted in a reader and the voting button is pressed. The witness cannot tell if the use of Cards and the casting of vote in such manner affects the voting result. Apart from an original Card, each MNA has one alternative Card which is kept by the officers and can be demanded when he fails to carry the original Card, being two Cards per one MNA. The technology currently adopted by the NA cannot check if the voting button is pressed by the Card owner or by another MNA on his behalf. In practice, an MNA can insert a Card and press the voting button before inserting another Card and pressing the voting button again in successive order. The system will count the votes until the polling is concluded by the presiding officer. Having reviewed the petitions and the supplementary documents of the Petitioners, having examined evidence and having required the parties to file written closing arguments, the Court will now address two questions: The first question – Do the proceedings of the Draft constitute an attempt to acquire the national government power by the means not recognised by the Constitution? The second question – Do the contents of the Draft constitute an attempt to acquire the national government power by the means not recognised by the Constitution? Prior to addressing the said questions, the Court needs to handle a preliminary question as to whether the Court is competent to deal with a constitutional amendment. Upon due consideration, the Court entertains the following opinion: All countries which adopt the democratic regime of government aim and intend to design or create a mechanism for protection of rights and freedoms of their people, in order to enable the public roles and participation in the administration and the examination of the exercise of state powers.
They also establish a system of check and balance between the political organs or institutions with a view to balancing the exercise of the sovereign powers which rest with the people, pursuant to the principles of separation of powers into three branches: the legislative power which is exercised through a legislature, the executive power, through a cabinet, and the judicial power, through the courts. These are the purposes of the 2007 Constitution, the supreme law of the Nation, as can be seen in its Preamble: "The subject matters of the newly prepared constitution are to achieve the common will of the Thais to maintain the national independence and security, to maintain and cherish all religions for the sake of their eternity, to uphold the King as Head and idol of the State, to adopt the democratic regime of government with the King as Head of State as the guidance for public administration, to protect the rights and freedoms of the people, to enhance the concrete roles and participation of the public in the administration and the examination of the exercise of state powers, to create a mechanism for the political institutions, both legislative and executive, for the sake of their balance and effectiveness under the parliamentary system of government, and to enable the judicial institutions and other independent organs to faithfully discharge their duties". The described principles clearly indicate that the present Constitution intends to direct the political organs or institutions to perform their duties in a correct, legitimate, independent and faithful manner for the common good of all Thais, without any conflict of interest. The Constitution has no desire to allow any political organ or institution to hold the absolute powers through any form of illegitimacy or to permit it to invoke any legal provision as a basis for enriching itself or its partisans with personal advantages from the exercise of powers. 
Even though a decision under the democratic regime of government is adopted by a majority of votes, the democratic regime of government is not constituted if the minority is ignored or oppressed by the arbitrary exercise of powers without heeding its reasons and guaranteeing a place for it to stand and exist. It will, however, become the tyranny of the majority which is apparently against the regime of government adopted by the Nation. This material fundamental principle has always confirmed that certain measures must be taken to prevent any person or group of persons who come to the sovereign powers of the people from abusing or arbitrarily exercising the powers. Such principle is based upon the separation of sovereign powers which belong to all Thais, so as to place the political organs or institutions exercising those powers in the checkable and balanceable state where they can appropriately warn and counterweigh each other. The principle does not intend to establish an independent space where each party may exercise the powers in any manner as it wishes. Should any party be allowed to possess the absolute powers without check and balance, the wrong notions and blindness of the holder of state powers will likely expose the Nation to wrack and ruin. In this regard, it could be said that all the organs acting on behalf of the sovereign power holders, whether being the NA, the Council of Ministers or the courts, are established by or enjoy the authority derived from the Constitution. The exercise of authority of these organs must therefore be restricted, both in terms of processes and contents. This results in these organs being unable to exercise their authority in a manner contrary to or inconsistent with the Constitution. 
To this end, the 2007 Constitution applies the rule of law to the exercise of authority by all parties, organs and state agencies, subject to the principle that the authority needs to be exercised in line with the generally existing laws and also with the rule of law. One can therefore not merely observe the written laws or the principles of the majority, but he needs to also bear in mind the rule of law. The reference to the majority without having regard to the minority, in order to support the arbitrary exercise of powers to fulfill certain goals or objectives of the power exerciser amidst a conflict of personal benefits or benefits of a group of persons and national benefits or order of the public as a whole, will indeed lead to the downfall and downgrade of the Nation or the serious discord and disharmony amongst the people. This is definitely incompatible with the rule of law recognised by section 3, paragraph 2, of the 2007 Constitution or "this Constitution" as mentioned in section 68. In all cases must the application of laws and the exercise of powers be in good faith and be free of ill will, frauds, conflicts of interest or hidden agendas, lest most of the honest persons in the Nation be deprived of their due benefits by a person or group of persons using powers without legitimacy. The rule of law is regarded as an administrative guidance arising from the principles of natural justice – the pure and equitable justice not beset with personal and concealed advantages. It thus forms the material fundamental legal principle which is above the written laws and is expected to be upheld by the NA, the Council of Ministers, the courts as well as the constitutional organs and the state agencies. Democracy means the government of the people by the people and for the people, not the government according to the opinions of any person or merely relying on the power base acquired from an electoral victory.
The principles of democracy consist of more characteristics than that. As the political organs or institutions which exercise the state powers usually claim that they are elected by the people but subject themselves to the opinions of certain specific persons, they fail to follow the democratic ways of government which aim at the interest of the entire people under the rule of law. Democracy does not merely refer to the elections or electoral triumph of the political factions. The majority obtained from an election only reflects the desire of the eligible voters at such election. It never enables the representatives to exercise the powers without having to be mindful of the correctness and legitimacy pursuant to the rule of law. Now, the present Constitution brings the Court into existence and charges it with the key authority to check and balance the exercise of powers in keeping with the rule of law and the principles of control of constitutionality of laws. It is the philosophy of the democratic regime of government which would concretely protect the rights and freedoms of the people as recognised by the Constitution and would also maintain and uphold the supremacy of the Constitution. This can be seen from the Constitution, section 216, paragraph 5, which prescribes that the decisions of the Court are final and binding upon the NA, the Council of Ministers, the courts and other state organs, and section 27 which provides that the rights and freedoms explicitly or implicitly recognised by the Constitution or the Court are protected and directly binding upon the NA, the Council of Ministers, the courts as well as the constitutional organs and state organs in relation to the enactment, enforcement and interpretation of all laws.
Having thoroughly considered, the Court finds that the four Petitioners exercised the right of constitutional defence under the Constitution, section 68, paragraph 2, to bring the cases before the Court together with the claims that all the Respondents are undermining the democratic regime of government with the King as Head of State or attempting to acquire the national government power by the means not recognised by the Constitution. The Court is therefore competent to address the cases. The first question – Do the proceedings of the Draft constitute an attempt to acquire the national government power by the means not recognised by the Constitution? (1) Was the constitutional amendment draft concerning senatorial sources which had been used at the NA joint sessions the same as that introduced to the SGHR serving as the SGNA? The Petitioners claimed that the contents of the Draft introduced to the NA were greatly different from the constitutional amendment draft forwarded to the MNAs for consideration at the first reading, Acceptance of Principles. In this respect, the Court ordered the SeGNA to present to the Court for the sake of its consideration the constitutional amendment draft originally introduced to the NA. Suwichak, the SeGNA, presented the draft to the Court on 12 November 2013. Having reviewed it, the Court finds that the constitutional amendment draft introduced to the NA and presented to the Court by the SeGNA contains the handwritten page numbers from the pages of the memorandum to the NA President, the list of initiators, the list of joint initiators, to page 33. But the following pages, which are the explanatory note, the constitutional amendment draft and the draft summary, have no handwritten page number or statement. Upon further review, the Court finds that the typography appearing from page 1 to page 33 is different from that in the explanatory note, the constitutional amendment draft and the draft summary.
Having compared the said draft with two supplementary documents of the petitions of the 1st and 2nd Petitioners which were allegedly distributed for use at the NA sessions, the Court finds that the typography and page numbers of the two documents are consistent. A handwritten statement was added at the end of the title of the draft on the draft summary pages. The page numbers appear on every page from the Memorandum to the NA President for introduction of constitutional amendment, the list of initiators, the list of joint initiators, the explanatory note, the constitutional amendment draft and the draft summary, being a total of forty-one pages. The typography appearing from the first page to the last is also invariable. Moreover, having reviewed the motion for amending the Constitution, section 190, as introduced by Prasit Phothasuthon and others and presented to the Court by the SeGNA, the Court finds that the motion contains handwritten page numbers from the pages of the memorandum to the NA President, the list of initiators, the list of joint initiators, the explanatory note, the constitutional amendment draft, to the last page which is the draft summary. The phrases "Amendment (No. …), Buddhist Era … (…)" were also added to the title of the draft in the draft summary and the typography from the first to the final pages is consistent. The manners of page numbering from the first to the last pages, the correction of the title of the draft, as well as the typography from the first to the last pages are the same as those in the documents which the Petitioners alleged to have received for use at the NA sessions.
On account of the evidence so examined, the Court believes that the Draft actually considered by the MNAs at the first reading, Acceptance of Principles, was not that originally introduced to the SGHR by Udomdet on 20 March 2013 and later distributed to the MNAs for use at the sessions, but the Draft was newly drawn up with the contents greatly different from the original. Although Suwichak testified that necessary corrections can be made to a motion before the motion is entered in the agenda, the Court deems that those corrections must merely deal with insignificant mistakes, such as clerical errors, and may not be adverse to the original principles. If the original principles are altered, the alteration must be supported by a number of signatories as required by the law. Upon reviewing the contents of the newly prepared draft, the Court finds that various original principles were altered, including the addition of a new principle to amend section 116, paragraph 2, and section 241, paragraph 1. It should be noted that an amendment to section 116 would allow a retired Senator to become a senatorial candidate without having to wait for the elapse of two years. Moreover, the alteration was done with the intention to conceal the facts, as not all the MNAs were informed of the new draft. It could now be conclusively heard that the constitutional amendment draft on senatorial sources as introduced by Udomdet and others to the SGHR on 20 March 2013 was not used during the NA sessions at the first reading, Acceptance of Principles, but a new draft whose principles are much different from the original draft introduced by Udomdet was used without any introductory motion signed by a sufficient number of MNAs. In consequence, the introduction of the constitutional amendment draft whose principles were accepted by the NA pursuant to the petitions is contrary to the Constitution, section 291 (1), paragraph 1. 
(2) Is the determination of the period of amendment to the Draft constitutional? A legislative discussion is a fundamental right of an MNA. An MNA who has applied for amending a motion or has reserved an amendment to a motion or a committee member who has reserved an opinion is entitled to hold a discussion at which he would express his opinions and reasons regarding the amendment, reserved amendment or reserved opinion. In this matter, the facts derived from the evidence in the files and from the testimonies given by the witnesses to the Court in the course of the examination indicate that, at the first and second readings, the 1st and the 2nd Respondents replaced each other as the presiding officer and denied the right of the discussion requestors during the first reading as well as of fifty-seven MNAs who had applied for amending the motion, had reserved the amendments to the motion and had reserved the opinions. The two Respondents claimed that those opinions were against the principles, even though the opinions had not yet been heard. Following that, the 1st and the 2nd Respondents invoked a majority of votes to conclude the discussion. The Court deems that in spite of the fact that the presiding officer may exercise his discretion to permit a discussion and a majority of votes may conclude a discussion, the exercise of such discretion and majority must not result in the prevention of the MNAs from discharging their duties or the disregard of the opinions of the minority. The conclusion of the discussions and the dismissal of the sessions in order to expedite the polling were thus the abuse of power to unjustly benefit the majority, breaching the rule of law as a result. Besides, the Petitioners alleged that the 1st Respondent incorrectly fixed the period of motion amendment. According to them, after the NA accepted the principles at the first reading on 4 April 2013, some MNAs applied for a fifteen-day period and a sixty-day period in order to amend the motion.
Pursuant to the NARO, the NA needed to decide which period would be adopted. However, no such decision was made because the NA, at that time, lacked a quorum as required by the Constitution. The 1st Respondent then ordered the amendments to be filed within fifteen days from the date the NA accepted the principles. Owing to heavy objections, the 1st Respondent summoned the NA again on 18 April 2013. At such session, the NA approved the period of fifteen days. But the 1st Respondent ordered the fifteen-day period to be calculated retrospectively from 4 April 2013, leaving only one actual day for filing amendments. The amendment period approved by the NA was thus less than fifteen days. The Court deems that since it is the right of the MNAs to express their opinions, the amendments to a motion must therefore be given a sufficient period of time which would allow the MNAs who desire to amend the motion to prepare the amendments. A period of time for motion amendment cannot be calculated retrospectively, but it must be counted from the date the resolution is adopted. As the retrospective calculation which left only one day for filing amendments is repugnant to the NARO and to impartiality, it is contrary to the Constitution, section 125, paragraphs 1 and 2, as well as the rule of law recognised by the Constitution, section 3, paragraph 2. As a consequence, the determination of the period of time for amending the Draft is against the Constitution, section 3, paragraph 2, and section 125, paragraph 1. (3) Are the manners of identification and voting during the proceedings of the constitutional amendment on senatorial sources constitutional? Taking into account the material principles of the parliamentary representative democratic system of government, one will see that the MNAs who represent all people in exercising the legislative power in the NA on behalf of the people through popular elections or through appointment play important roles under this system.
The 2007 Constitution, section 122, clearly prescribes that, under the democratic regime of government with the King as Head of State, the Representatives and the Senators represent all Thais without being under any mandate, commitment or influence, and must, faithfully and without a conflict of interest, carry out their duties for the common good of all Thais. They are also required to conduct themselves in agreement with the rule of law, according to the Constitution, section 3, paragraph 2, which provides that the NA, the Council of Ministers, the courts, as well as the constitutional organs and the state agencies must adhere to the rule of law when rendering their duties. As regards the exercise of powers of the MNAs who represent all Thais, the true owners of the sovereign powers, the Constitution, section 126, paragraph 3, establishes a key principle to regulate the functions concerning the law enactment processes. According to this principle, one MNA has one vote at the polling. Upon a parity of votes, the presiding officer is permitted to issue one additional vote as the casting vote. It is understood that an MNA is required to make his personal appearance to perform his duties at each legislative session and has one vote in respect of each matter. Any act which turns the polling result away from the truth is hostile to the provisions and purposes of the Constitution. Upon due consideration, the Court entertains the following opinion: In the present cases, the Petitioners introduced eyewitnesses and significant evidence, that is, the video clips containing three events in which certain MNAs inserted the Cards of others in the readers during the proceedings of the constitutional amendment regarding senatorial sources. In this respect, Rangsima, a Democrat Representative, was called by the 2nd and the 4th Petitioners to give evidence in conjunction with the three events in the clip.
The witness confirmed that the persons appearing in the clip inserted several Cards in the readers and pressed the voting buttons at the same time, and that this is contrary to the correct principles and methods concerned. Atchara, Head of the SGHR Audiovisual Section, testified that each MNA has, in general, one electronic card for self-identification during quorum calculation and for casting of votes, and has one alternative card which is kept by the officers and may be demanded when he does not carry the former card. Furthermore, the voices of the events heard from the clips are in line with two pieces of evidence produced to the Court by the SeGNA: the voices contained in the NA session live broadcasting recordings and the NA minutes which recorded the same NA joint sessions to consider the constitutional amendment concerning senatorial sources as those mentioned in the petitions. In the course of examining oral evidence, the SeGNA watched and heard the pictures and voices in the clips and testified that he could remember that the voices belong to the NA Vice President who presided over the session at that time. Moreover, Rangsima, a Democrat Representative, was called by the 2nd and the 4th Petitioners to give evidence along with the pictures in the clips submitted to the Court by the 2nd Petitioner. The witness pointed out that the 162nd Respondent [Narit], who is an MNA, inserted several Cards in the readers and pressed the voting buttons in successive order. The witness also testified that she knows this Respondent very well and has no personal conflict with him, and that she and he still have usual conversations after she filed complaints against him. The witness has been a Representative for ten years and has always followed up, combatted and complained about the use of the Cards on behalf of others, especially the conduct of Narit. The witness also ordered her staff to take photographs and record videos to be used as evidence in support of her complaints.
Having reviewed the photographs produced to the Court by the witness, the Court saw a side view of a person who could be confirmed to be Narit, the 162nd Respondent, as the suits they wore bore the same colour. As appeared in the clips, the person had in his hands more than two Cards, exceeding the number one MNA can possess, and the person inserted all of those Cards in the readers and successively pressed the buttons on the readers. Having heard the evidence and the testimonies of the witnesses obtained from the examination, including the obvious motion pictures and the eyewitnesses who testified in conjunction with the live broadcasting of the NA sessions that, during the passage of resolutions on the constitutional amendment as to senatorial sources, many MNAs were absent but authorised other MNAs to exercise the right to vote on their behalf, the Court finds that one MNA used several Cards and that the described conduct is unnatural. Not only contravening the fundamental principles of being MNAs, Representatives of all Thais, who must function in good faith for the common good of every Thai, without being subject to any mandate, commitment or influence, and without any conflict of interest as prescribed by section 122 of the Constitution, the conduct is also against the principles of voting pursuant to section 126, paragraph 3, which entitles one MNA to have only one vote. When the polling processes were in violation of the NARO and the aforementioned provisions of the Constitution, the polling result was fraudulent and was not the true will of the Representatives of all Thais. Consequently, the NA could not be deemed to have adopted lawful resolutions during the constitutional amendment proceedings. The second question – Do the contents of the Draft constitute an attempt to acquire the national government power by the means not recognised by the Constitution?
The subject matters of the constitutional amendment which the Petitioners referred to the Court for decision are to modify the senatorial qualifications in various manners. The Court now needs to determine if the said constitutional amendment would undermine the democratic regime of government with the King as Head of State or would furnish any person with the national government power by the means not recognised by the Constitution. The Court finds that the 2007 Constitution was modelled on the Constitution of the Kingdom of Thailand, Buddhist Era 2540 (1997) ("1997 Constitution"), but many original principles on senatorial qualifications have been modified so as to prevent certain problems which had come to pass under the 1997 Constitution. For such purpose, it is prescribed that there be appointed Senators and elected Senators in equal proportions, with a view to enabling the members of all sectors and professions to jointly function as Senators and make contributions to the Nation in a deliberate manner. Moreover, the senatorial qualifications have been revised in order to liberate the Senators from the politics and the Representatives, for instance, the forebears, spouse and children of a Representative or political position holder are prohibited from being Senators and the senatorial candidates are not allowed to associate with political parties and hold any political position for five years. Under the 2007 Constitution, the NA consists of two chambers: the Senate and the House of Representatives which balance each other. 
The Senate has been given the roles to examine and screen the performance of the Representatives as well as to counterpoise them, whilst the Senators are empowered to scrutinise and remove the Representatives accused of unusual wealth, corruption in office, intentional exercise of authority against the provisions of the Constitution or law, or serious contravention of or failure to comply with moral standards pursuant to the Constitution, section 270. The purposes of the Constitution to concretely free the Senators from the Representatives are shown by the prohibition of their relationships. Should the Senators and the Representatives be permitted to enjoy close relations, the frank scrutiny will become hopeless and the principles of check and balance which form the basis of the present Constitution will be violated. The constitutional amendment pursuant to the petitions is a return to the former defects which are perilous and likely to bring an end to the faith and harmony of the majority of the Thai people. It is an attempt to draw the Nation back into the canal[2], as it would bring the Senate back to the state of being an assembly of relatives, assembly of family members and assembly of husbands and wives. In consequence, the Senate would lose its status and vigour as the source of wisdom for the House of Representatives, but would merely be an echo of the people from the same group. The principles of the bicameral system would be debased, leading to the monopoly of state powers and the exclusion of the participation of the members of various sectors and professions. The amendment is thus an effort of its initiators to regain the national government power by the means not recognised by the Constitution – the 2007 Constitution approved by the majority of the people of Thailand at a referendum. 
Furthermore, the modification of the senatorial qualifications by only requiring the Senators to be elected in the same manner as the Representatives would cause the two chambers to become one, lacking differences and independence. This would put an end to the nature and subject matters of the bicameral system. The modification of the senatorial sources and qualifications which bring the Senators into close relations with the political sectors would considerably impair the principles of check and balance under the bicameral system, as it would allow the political sectors to absolutely overshadow the NA without any check and balance. This would cast an impact upon the democratic regime of government with the King as Head of State and would pave the way for those involved in the proceedings to acquire the national government power by the means not recognised by the Constitution. Besides, the contents of the Draft, sections 11 and 11/1, which govern the OARS enactment processes, are in breach of the Constitution, since they exempt the promulgation of the said draft organic law from the Constitution, section 141, which requires the draft to be forwarded to the Court for review of its constitutionality first. Accordingly, this is against the principles of check and balance which constitute the principles of democratic government and would enable the political sectors to issue the laws at its will by merely invoking a majority of votes, without any examination. 
For these reasons, the Court hereby decides, by six votes to three, that the proceedings and resolutions concerning the Draft as adopted by all the Respondents in the present cases are contrary to the Constitution, section 122, section 125, paragraphs 1 and 2, section 126, paragraph 3, section 291 (1), (2) and (4), and section 3, paragraph 2, and decides, by five votes to four, that the contents of the Draft are inconsistent with the Constitution, section 68, paragraph 1, since they breach the fundamental principles and purposes of the 2007 Constitution and constitute an attempt of all the Respondents to acquire the national government power by the means not recognised by the 2007 Constitution. As the 1st Petitioner requested for the dissolution of the relevant political parties and the disfranchisement of their executive members, the Court holds that the conditions under the 2007 Constitution, section 68, paragraphs 3 and 4, are not met. This request is therefore denied.

Charoon Intachan, President of the Constitutional Court
Jaran Pukditanakul, Judge of the Constitutional Court
Chalermpon Ake-uru, Judge of the Constitutional Court
Chut Chonlavorn, Judge of the Constitutional Court
Taweekiat Meenakanit, Judge of the Constitutional Court
Nurak Marpraneet, Judge of the Constitutional Court
Boonsong Kulbupar, Judge of the Constitutional Court
Suphot Khaimuk, Judge of the Constitutional Court
Udomsak Nitimontree, Judge of the Constitutional Court

## Footnotes

1. Published in the Government Gazette: volume 131/part 5 A/page 1/8 January 2014.
2. In Thai language, "canal" (คลอง) idiomatically refers to something unpleasant. For instance, "ถอยหลังเข้าคลอง" (step back to the canal) means to go back to something unpleasant. Wikisource keeps the term "canal" in this translation. Various translations of the phrases "[นำประเทศชาติให้] ถอยหลังเข้าคลอง" (draw [the Nation] back into the canal) could also be found in foreign media:
1.
Wall Street Journal: "a backward move [for the country]". [Warangkana Chonchuen. (2013-11-20). "Thai Court Rules Against Constitution Amendment". The Wall Street Journal. Retrieved: 2014-02-04.]
https://www.physicsforums.com/threads/formula-for-force-between-two-sheets-of-aluminum-foil-under-voltage.1047961/
# Formula for force between two sheets of aluminum foil under voltage • A andymag Hi guys, I am struggling to find the answer on Google and at forums. I am experimenting with two sheets of aluminum foil that are separated by a thin plastic foil. In theory they should be attracted to each other if I connect one foil to power and the other to ground of a DC voltage source. I am aware that I probably need very high voltages to achieve a visible effect. However, I would love to know the formula to be able to calculate the attraction force between the two sheets. Does anyone know an approach? Thank you very much in advance Bests Andy Gold Member It would probably have something to do with electrostatics. Effectively what you're making is a capacitor, with a plastic foil for the dielectric. I'm not sure off the top of my head what it would be, but you could probably treat each foil sheet as having a constant charge density ##\rho## (one positive, the other negative), then integrate the electrostatic force across the sheet. That should at least get you something reasonable to start with for comparison with your experiments, and would give you an idea of the magnitude of the force. Try looking into capacitance. Edit: It seems like you don't even need to do all that. https://physics.stackexchange.com/q...e-of-attraction-between-plates-of-a-capacitor Gold Member 2022 Award Look for the Maxwell tensor for the electromagnetic field. Another back-of-the-envelope estimate is for capacitors with large plates and small distance between the plates as follows: You can then assume that there's a homogeneous electric field between the plates and neglect the field outside.
Then the total energy is $$\mathcal{E}=\int_V \mathrm{d}^3 x \frac{1}{2} \vec{E} \cdot \vec{D}=\frac{V \epsilon_0 \epsilon_r}{2} E^2 =\frac{Q^2 d}{2 \epsilon_0 \epsilon_r A},$$ where ##V=Ad## is the volume between the plates and ##E=Q/(\epsilon_0 \epsilon_r A)##. If you keep the charge ##Q## constant then $$F=-\partial_d \mathcal{E}=-\frac{Q^2}{2 \epsilon_0 \epsilon_r A},$$ the minus sign indicating that the plates attract. You can think about what changes, when you keep the voltage across the capacitor constant. andymag Hi BiGyElLoWhAt and vanhees71, that was a huge help from both of you. Thank you very much! I understood the formulas. However, I cannot get into my head that the thickness and material (e.g., copper vs alu) of the plates in a parallel plate capacitor do not matter. My logic is that if the plates are thicker then more electrons can be pushed into or drawn from the plate. This is probably a very easy question for you but I would appreciate it if you could help me understand. Andy SDL Let's perform some calculations. We have a conductive plate of size 10x10cm and 1mm thick. Its volume is ##(10^{-1})^2 \cdot 10^{-3}=10^{-5}~\rm{m^3}##. Taking the "typical" size of an atom of ##0.1~\rm{nm}=10^{-10}~\rm{m}##, hence its volume ##10^{-30}~\rm{m^3}##, the plate contains ##10^{-5} / 10^{-30} = 10^{25}## atoms. To simplify our solution, assume that every atom has only one free electron in its outermost shell, taking part in conductivity. Then, if we take away only one electron from each atom, the total charge the plate can have is ##e \cdot 10^{25}=1.6 \cdot 10^{-19}~\rm{C} \cdot 10^{25}=1.6 \cdot 10^{6}~\rm{C}##. This charge is so huge that it's hard to imagine! Calculations show that a conductive body of "common" dimensions, even very thin, would always have enough free electrons to give it any charge needed in practice. andymag Hi SDL, thanks a lot!
If I understand correctly, the thickness of the plates does not matter because there is no practical way to charge a parallel plate capacitor even remotely close to its true ability to contain excess electrons, which makes kinda sense since the more electrons are in a plate the more difficult (more power needed) it becomes to add additional ones. Now I understand why we need such a relatively huge voltage to achieve relatively "low" charges. Or are these thoughts wrong? But what about graphene? Suppose we could build two parallel capacitor plates consisting only of one atom layer of graphene. Would it also be enough? 1.6 * 10^6 C / 10^13 = 1.6 * 10^(-7) C = 160 nC? Thank you very much in advance! Cheers Andy Dilwyn Jones I just did some work on this for a client. With part film, part air in the gap (e.g. Duffin - Electricity and Magnetism): The system could be modelled as a parallel plate capacitor with the gap x between conducting surfaces, area A, partly filled by dielectric, thickness t and dielectric constant k. The capacitance C is given by ##C = \epsilon_0 A / (x - t + t/k)##, and the attractive force between the plates held at a potential difference V is ##F = \epsilon_0 A V^2 / \left[ 2 (x - t + t/k)^2 \right]##. This assumes the plates remain perfectly parallel and they are large compared with the gap. Hope this helps. Mentor The limiting factor in a parallel plate capacitor arrangement is the arc-through electric field level. If you charge the capacitor up to too high of a voltage, you generate an arc through the dielectric (or vacuum) between the plates.
Different dielectric materials have different arc-through E-field values, and low-pressure gas and vacuum arc-through E-field values are described by the Paschen Curves: https://en.wikipedia.org/wiki/Paschen's_law To calculate the E-field value for different dielectrics and different capacitor plate spacings, you would use the basic capacitor equations (which assume as mentioned above that the plate spacing d is much less than the plate dimensions for the area A): $$Q=CV$$ $$C=\epsilon \frac{A}{d}$$ $$E = \frac{V}{d}$$ Where ##\epsilon = \epsilon_0 \epsilon_r## which takes into account the relative permittivity of the dielectric. I haven't worked through the equations to figure out how to get the maximum force (what is the optimum dielectric and spacing, given the constraints imposed by arc-through issues), but you might try using these equations and the previous force equations to work through that... Last edited: Homework Helper Gold Member Assume the dielectric fills the space between the parallel conducting plates: Neglecting edge effects, the force on the left sheet is due just to the electric field produced by the free charge ##-\sigma_f## on the right sheet. The bound charges ##\sigma_b## and ##-\sigma_b## produce no net force on the left sheet. The magnitude of the field produced by the free charge of the right sheet is ##\large \frac{\sigma_f}{2\varepsilon_0}##. Thus, the force on the left sheet is $$F = Q \left(\frac{\sigma_f}{2 \varepsilon_0} \right)= \frac{Q^2}{2 \varepsilon_0 A}.$$ Here, ##A## is the area of a sheet and ##Q## is the free charge stored on a sheet. ##Q## is related to the potential difference between the plates as ##Q = CV = \large \frac{\varepsilon_0 \kappa A}{d} V## where ##d## is the distance between the plates and ##\kappa## is the dielectric constant. 
So, the force on the left plate is $$F = \frac{\varepsilon_0 A}{2}\left(\frac{\kappa V}{d}\right)^2$$ This agrees with @Dilwyn Jones's equation for the case where the dielectric fills the space between the plates. The weight of a plate is ##\rho A t g## where ##\rho## is the mass density of the material of the plate, ##t## is the thickness of the plate, and ##g## is the acceleration due to gravity. So, the ratio ##R## of the electric force on a plate to its weight is $$R = \frac{\varepsilon_0}{2 \rho g t} \left(\frac{\kappa V}{d}\right)^2.$$ Thus, the voltage required to make the electric force match the weight of a plate is $$V = \sqrt{\frac{2\rho g t}{\varepsilon_0}}\frac{d}{\kappa}.$$ As an example, let the plates be thin sheets of aluminum foil of thickness ##t = 0.1## mm and mass density ##\rho = 2.7## g/cm3. Let the dielectric be polystyrene (##\kappa = 2.6##) and the distance between the plates be ##d = 2 ##mm. The voltage required to make the electric force equal the weight of a foil sheet is then about 600 V. This makes the electric field inside the polystyrene equal to about ##3 \times 10^5## V/m which is well below the dielectric strength of polystyrene (approx. ##2 \times 10^7## V/m). berkeman SDL there is no practical way to charge a parallel plate capacitor even remotely close to its true ability to contain excess electrons Yes, this is exactly what I intended to show. the more electrons are in a plate the more difficult (more power needed) Almost true except "voltage" instead of "power", which is a significantly different quantity. relatively huge voltage to achieve relatively "low" charges. Or are these thoughts wrong? This is what capacitance is for: it depends on capacitor geometry, dielectric used, technology and not on electrical parameters like voltage and current (in linear systems). 
And it defines the "ability" of the capacitor to accept charge: since ##C=\frac Q V##, the greater the charge pumped up and the less the voltage required, the greater the capacitance. Dilwyn Jones All good stuff. Just pointing out: The electrons in the conductors are strongly attracted towards or repelled away from the inner surface, so there is zero field within the bulk of the plates. As the voltage is raised, electrons start escaping through breakdown of the air or dielectric, or by field emission in a vacuum, long before all the states in the conduction band are filled. An air gap reduces the attractive force between the plates, if the metal to dielectric contact is not intimate. If free charge gets onto the dielectric surface, the force is reduced (not accounted for in the equations).
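TSny's worked example above is easy to sanity-check numerically. The sketch below (Python, standard library only; the material values are those quoted in the thread) reproduces the roughly 600 V figure and the field in the dielectric:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def levitation_voltage(rho, t, d, kappa, g=9.81):
    """Voltage at which the electrostatic force (eps0*A/2)*(kappa*V/d)**2
    equals the weight rho*A*t*g of one foil sheet (the area A cancels)."""
    return math.sqrt(2.0 * rho * g * t / EPS0) * d / kappa

# Aluminum foil (t = 0.1 mm, rho = 2700 kg/m^3) on polystyrene
# (kappa = 2.6) with plate separation d = 2 mm, as in the example above.
V = levitation_voltage(rho=2700.0, t=1e-4, d=2e-3, kappa=2.6)
E = V / 2e-3  # resulting field across the gap, V/m

print(f"V = {V:.0f} V, E = {E:.2e} V/m")  # about 600 V and 3e5 V/m
```

The field comes out near 3×10⁵ V/m, comfortably below the quoted dielectric strength of polystyrene (about 2×10⁷ V/m).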
https://www.neetprep.com/question/54342-manometer-connected-closed-tap-reads-pascal-When-tap-isopened-reading-manometer-falls-pascal-velocity-offlow-water-isa--msb--msc-ms-d--ms/126-Physics--Mechanical-Properties-Fluids/685-Mechanical-Properties-Fluids
# NEET Physics Mechanical Properties of Fluids Questions Solved

A manometer connected to a closed tap reads $4.5×{10}^{5}$ pascal. When the tap is opened, the reading of the manometer falls to $4.0×{10}^{5}$ pascal. Then the velocity of flow of water is
(a) 7 ${\mathrm{ms}}^{-1}$ (b) 8 ${\mathrm{ms}}^{-1}$ (c) 9 ${\mathrm{ms}}^{-1}$ (d) 10 ${\mathrm{ms}}^{-1}$
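The question is a direct application of Bernoulli's principle: the static-pressure drop on opening the tap goes into kinetic energy, so $v = \sqrt{2\,\Delta p/\rho}$. A minimal check in Python, assuming a pressure drop of 0.5×10⁵ Pa (an open-tap reading of 4.0×10⁵ Pa, which is the value that reproduces option (d)):

```python
import math

RHO_WATER = 1000.0  # kg/m^3

def flow_speed(p_closed, p_open, rho=RHO_WATER):
    """Bernoulli along a streamline: p_closed = p_open + rho*v**2/2,
    so v = sqrt(2*(p_closed - p_open)/rho)."""
    return math.sqrt(2.0 * (p_closed - p_open) / rho)

# Assumed readings: 4.5e5 Pa with the tap closed, 4.0e5 Pa with it open.
print(flow_speed(4.5e5, 4.0e5))  # 10.0 m/s, option (d)
```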
https://ardeleanasm.github.io/blog/page3/
## Monte Carlo Pi Estimation

##### 15 Feb 2018

In this post I'll show how Pi can be computed using a Monte Carlo algorithm in F#. Basically, using the idea of a dartboard, we can obtain the value of Pi by simply calculating the number of darts that land in the dartboard versus those that land outside it. And by increasing the number of throws we will get closer to Pi's value; for example, throwing the dart 1000 times will be closer to Pi...
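The post's implementation is in F#; the same dartboard idea can be sketched in a few lines of Python (an equivalent illustration, not the post's code): sample points uniformly in the unit square and count the fraction landing inside the quarter circle.

```python
import random

def estimate_pi(throws, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square with x^2 + y^2 <= 1 approaches pi/4 as throws grows."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(throws)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / throws

# More throws -> a closer estimate, as the post notes.
for n in (1_000, 100_000):
    print(n, estimate_pi(n))
```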
https://www.greencarcongress.com/2021/06/20210615-doe.html
DOE providing $200M over 5 years for EV, battery and connected vehicle projects

15 June 2021

The US Department of Energy (DOE) announced $200 million in funding over the next five years for electric vehicle, battery, and connected vehicle projects at DOE national labs, and new DOE partnerships to support electric vehicle innovation. The $200 million in funding to national labs, subject to appropriations, seeks to drive electric vehicle innovation in order to decarbonize the transportation sector. The funding is open to DOE's network of 17 national laboratories and is administered by DOE's Vehicle Technologies Office (VTO). This funding complements VTO's funding opportunity of $62 million for reducing emissions and increasing efficiencies for on- and off-road vehicles, announced in April 2021. Projects will require applicants to submit a plan for achieving diversity, equity, and inclusion objectives, including support for people from underrepresented groups in STEM, advancing equity within the project team, and producing benefits for underserved communities.
https://www.gradesaver.com/textbooks/math/statistics-probability/introductory-statistics-9th-edition/chapter-6-supplementary-exercises-page-264/6-60
## Introductory Statistics 9th Edition

We find: $n=200$, $p=0.80$, $q=0.20$, $\mu = np = 200 \times 0.80 = 160$, $\sigma = \sqrt{npq} = \sqrt{200 \times 0.80 \times 0.20} = 5.6569$.

a. $P(x=150) = P(149.5 \leq x \leq 150.5) = P(\frac{149.5-160}{5.6569} \leq z \leq \frac{150.5-160}{5.6569}) = P(-1.856 \leq z \leq -1.679) = 0.0465 - 0.0317 = 0.0148$

b. $P(x \geq 170)$: $z = \frac{170-160}{5.6569} = 1.7678$; $P(z \geq 1.7678) = 0.0386$

c. $P(x \leq 165)$: $z = \frac{165-160}{5.6569} = 0.8839$; $P(z \leq 0.8839) = 0.8116$

d. $P(164 \leq x \leq 172) = P(\frac{164-160}{5.6569} \leq z \leq \frac{172-160}{5.6569}) = P(0.7071 \leq z \leq 2.1213) = 0.9831 - 0.7602 = 0.2229$
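The four probabilities can be verified with the standard normal CDF written in terms of the error function (a quick check in Python; small differences from the values above come from table rounding):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 200, 0.80
mu = n * p                          # 160
sigma = math.sqrt(n * p * (1 - p))  # sqrt(32) = 5.6569

pa = phi((150.5 - mu) / sigma) - phi((149.5 - mu) / sigma)  # ~0.0148
pb = 1.0 - phi((170 - mu) / sigma)                          # ~0.0386
pc = phi((165 - mu) / sigma)                                # ~0.8116
pd = phi((172 - mu) / sigma) - phi((164 - mu) / sigma)      # ~0.2229
print(pa, pb, pc, pd)
```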
http://wiki.stat.ucla.edu/socr/index.php?title=SOCR_EduMaterials_AnalysisActivities_LogisticRegression&oldid=11565
# SOCR EduMaterials AnalysisActivities LogisticRegression

## Logistic Regression Background

Logistic regression is a class of statistical models and procedures which takes several independent variables and one dichotomous dependent variable and models the relationship between them.

Model: $P(Y=1 \mid x_1, \dots, x_k) = \dfrac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}$

In this activity, the students can learn about:

• Interpreting the results of Logistic Regression;
• TBD

## SOCR Logistic Regression Example

Start the SOCR Logistic Regression Applet ... TBD
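As a generic illustration of the model above (plain Python, unrelated to the SOCR applet; the coefficients are hypothetical), fitted coefficients map a linear predictor through the logistic function to a probability:

```python
import math

def logistic(z):
    """Logistic (sigmoid) function mapping R -> (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_prob(beta, x):
    """P(Y = 1 | x) for coefficients beta = [b0, b1, ..., bk]."""
    z = beta[0] + sum(b * xi for b, xi in zip(beta[1:], x))
    return logistic(z)

# Hypothetical fitted coefficients, for illustration only.
beta = [-4.0, 1.5]
print(predict_prob(beta, [2.0]))  # ~0.27
print(predict_prob(beta, [4.0]))  # ~0.88
```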
http://www.mathynomial.com/problem/1692
# Problem #1692

Triangle $ABC$ has $AB=27$, $AC=26$, and $BC=25$. Let $I$ denote the intersection of the internal angle bisectors of $\triangle ABC$. What is $BI$?

$\textbf{(A)}\ 15\qquad\textbf{(B)}\ 5+\sqrt{26}+3\sqrt{3}\qquad\textbf{(C)}\ 3\sqrt{26}\qquad\textbf{(D)}\ \frac{2}{3}\sqrt{546}\qquad\textbf{(E)}\ 9\sqrt{3}$

This problem is copyrighted by the American Mathematics Competitions.
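As an editorial aside, the answer can be checked numerically using the standard identities $r = K/s$ (inradius via Heron's formula) and $BI = \sqrt{r^2 + (s-b)^2}$, where $s-b$ is the length of the tangent from $B$ to the incircle:

```python
import math

a, b, c = 25, 26, 27                             # BC, AC, AB
s = (a + b + c) / 2                              # semiperimeter = 39
K = math.sqrt(s * (s - a) * (s - b) * (s - c))   # area, Heron's formula
r = K / s                                        # inradius
BI = math.hypot(r, s - b)                        # distance from B to the incenter
```

Here $r^2 = 56$ and $s-b = 13$, so $BI = \sqrt{56 + 169} = 15$, matching choice (A).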
http://texgraph.tuxfamily.org/aide/TeXgraphsu76.html
#### 7.5.4 StrListDelKey

• StrListDelKey( <ListName>, <start index>, <number> ).
• Description: this macro removes <number> elements from <ListName>, starting at <start index>. As with the Del command, the argument <start index> may be negative ($-1$ indexes the last element, $-2$ the penultimate, and so on). The elements are read from left to right when <number> is positive, and in the reverse direction when <number> is negative. The macro returns Nil.
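The indexing rules can be illustrated with a short Python analogue (a hypothetical helper, not part of TeXgraph, assuming exactly the semantics described above):

```python
def str_list_del_key(lst, start, number):
    """Python analogue of StrListDelKey's indexing rules.

    Removes |number| elements from lst starting at start.
    A negative start counts from the end (-1 = last element).
    A positive number deletes rightward; a negative number leftward."""
    if start < 0:
        start += len(lst)
    if number >= 0:
        dead = set(range(start, start + number))
    else:
        dead = set(range(start, start + number, -1))
    return [x for i, x in enumerate(lst) if i not in dead]
```

For example, deleting 2 elements starting at index 1 of `['a','b','c','d','e']` leaves `['a','d','e']`, while deleting -2 elements starting at index -1 walks leftward from the last element and leaves `['a','b','c']`.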
https://mathshistory.st-andrews.ac.uk/Extras/Cotlar_elected/
# Mischa Cotlar elected to the National Academy of Argentina

In April 1988 Mischa Cotlar was elected to the National Academy of Exact Sciences of Argentina and Alberto P Calderón gave the introductory address. We give a version of this address below:

Introduction of Professor Mischa Cotlar to the National Academy of Exact Sciences of Argentina

Alberto P Calderón

It is a particular pleasure for me to introduce to this Academy its new member, Dr Mischa Cotlar, who is not only an old friend of mine - our friendship spans more than forty years - but also an admired colleague, as well as a musician, a pianist.

Dr Cotlar was born in Sarny, Ukraine, in 1913, and emigrated to Uruguay in the late twenties. He lived in Montevideo for several years, earning his living as a professional pianist. In 1939, already a mature mathematician, he moved to Buenos Aires, and started his public mathematical career with the presentation of a paper, "Théorie d'Anagènes," at an international congress in Bordeaux. This work was later published, in Spanish, in the Anales de la Sociedad Científica Argentina. In Buenos Aires, Cotlar began a stage of intense and uninterrupted mathematical activity, participating in the meetings of the Unión Matemática Argentina, as well as in the seminar directed by Professor Rey Pastor, and publishing in local and foreign mathematical journals.

Mischa Cotlar was an autodidact. This is why he did not have diplomas of any kind. This lack created bureaucratic problems that blocked his access to teaching positions in our universities. His only two appointments during that period were as "Investigator" at the Universities of La Plata (1946) and Buenos Aires (1948). This obstacle was happily overcome, for the benefit of Argentine mathematics, in 1953. It is interesting to recall how this happened. The famous American mathematician Marshall Harvey Stone, son of the also famous American jurist Harlan Fiske Stone, visited Argentina several times after 1943, and he met Cotlar.
When appointed Chairman of the Department of Mathematics at the University of Chicago, to which he managed to attract several of the greatest luminaries of contemporary mathematics, Professor Stone urged Cotlar to apply for a fellowship and visit Chicago. Cotlar received the Guggenheim Fellowship and went to the United States. After being recommended to the Guggenheim Foundation by Professor George Birkhoff, Cotlar went first to Yale University, where he spent one semester and became acquainted with Professor Kakutani and other mathematicians. It was thanks to Professor Stone's help that he had his fellowship renewed in order to attend the University of Chicago, where he could work towards a doctorate. The University of Chicago, with its characteristic flexibility, discarded all the encumbering formalities and rapidly gave him the Doctorate in Philosophy in Mathematics. This was one of the important services to Argentine mathematics done by that department of the university, where the presence of Argentine mathematicians has since been, but for brief periods, a constant feature.

Cotlar returned to Argentina with his Ph.D. and the country derived the benefits of having him teaching at the University. In 1953 he was appointed Director of the Instituto de Matemática of the Departamento de Investigación (DIC), at the Universidad de Cuyo, until it was closed in 1956. That same year he was appointed Full Professor of Mathematics at the Universidad de La Plata, and in 1957 he joined the faculty of the Department of Mathematics of the School of Sciences of the Universidad de Buenos Aires, where he became Profesor Plenario. In 1966 he resigned his position after the events that disrupted that school, and went to teach at Rutgers University (1967-1971) and the Universities of Nice (1969) and Central de Venezuela (1971), the latter of which he joined permanently in 1974 and where he continues his mathematical activity today.
Dr Cotlar's bibliography includes eight books and monographs and more than eighty articles published in scientific journals. He has been widely acknowledged and rewarded. He has received, among others, the Premio Nacional de Ciencias of the CONICIT (Consejo Nacional de Investigaciones Científicas y Técnicas) of Venezuela, the Premio Waissman of our CONICET and the Premio de la Academia de Ciencias of Madrid.

His contributions are, for the greatest part, in the area of Analysis, and touch a wide variety of chapters of this discipline, such as lattice theory, the theory of semiordered groups, the theory of integration, ergodic theory, Banach algebras, the theory of normal families of functions, potential theory, Toeplitz kernel theory, and many more. Many of these studies were made in collaboration, a fact that underlines his generosity and the pure scientific interest that inspires his work. Among his collaborators are his wife, Yanny Frenkel, and Rodolfo Ricabarra, Cora Sadosky, Rodrigo Arocena, Eduardo Zarantonello, Beppo Levi, Rafael Panzone, and Juan Carlos Vignaux.

Dr Cotlar's mathematical work has very singular characteristics. One is its insight, bringing to light the deep roots and motivations of theories and theorems. The other is the vision that uncovers links and unsuspected relations between subjects that apparently have no connection at all. It is for these characteristics, I believe, that his works have a very definite taste of philosophical essays. Examples of this are the four consecutive papers that appeared in the Revista Matemática Cuyana, volume 1 (1955), Fascículo 2. In one of them, the result now known as Cotlar's Lemma shows the reason why, independently of Fourier theory, the Hilbert Transform is bounded in $L^{2}$. In another of those papers, he gives a unified treatment of the Hilbert Transform and the Ergodic Theorem, which is known as the Principle of Transference in modern ergodic theory.
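For readers unfamiliar with it, the result mentioned above, now usually called the Cotlar-Stein almost-orthogonality lemma, can be stated as follows (this statement is an editorial aside, not part of Calderón's address):

```latex
\textbf{Lemma (Cotlar--Stein).}
Let $\{T_j\}$ be a family of bounded operators on a Hilbert space such that
\[
  \sup_i \sum_j \big\| T_i^* T_j \big\|^{1/2} \le A
  \qquad \text{and} \qquad
  \sup_i \sum_j \big\| T_i T_j^* \big\|^{1/2} \le A .
\]
Then $\big\| \sum_j T_j \big\| \le A$.
```

Applied to a dyadic decomposition of the Hilbert transform, it yields $L^2$-boundedness without any use of the Fourier transform, which is the point Calderón highlights.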
Finally, I want to call attention to the fact that some of Cotlar's results, published in journals of limited accessibility and slow distribution, have been named after other mathematicians who discovered them independently but years after him. One such example is the theorem of representation of $\sigma$-Boolean algebras as a $\sigma$-algebra of sets modulo an ideal of "null" sets, which today is known as the Loomis-Sikorski theorem.

Dr Cotlar, we congratulate ourselves for your incorporation into this Academy. The institution not only is enriched by your presence, but is also strengthened by your prestige. Thank you, Dr Cotlar.

Buenos Aires, April 1988

Last Updated May 2018
https://ineffectivetheory.com/
# Ineffective Theory

A water treatment plant in Florida is attacked — attacker raises sodium hydroxide level in an apparent attempt to poison people. I’m pretty sure “thanks to a vigilant operator” is one of the ten scariest phrases in the English language. We haven’t really explored the long tail of cybersecurity risk: be unsurprised by unwelcome surprises in the next decade.

Andrew Gelman comments on Alvaro de Menard’s review (previously posted here) of the state of the social science literature.

Here’s a 10 terapixel image of the night sky. I don’t think I can reliably tell galaxies from the dimmer stars.

The New York Times finally gets around to publishing their piece on SSC and the rationalists; the word “rancid” comes to mind, among others. (I include this link because it seems culturally important, not because it’s worth your time to read.) Here’s Scott’s reply; if you read anything on this, read that first and last. Variously irate commentary is provided by Scott Sumner, Jason Crawford, Matt Yglesias, Noah Smith, and Scott Aaronson (with a follow-up), among others. Finally, Tanner Greer comments: “Mortals know that when Olympians feud, it is never Olympians who die.”

Rob Rhinehart on the evil octopus — if you’re irate about what happened, this is the piece that Scott Alexander thinks you should read. The title of this article is “The New York Times”, which I think is how we know it’s not really about the NYT. They’re just one of the arms of the Kraken. As Leonard Cohen points out, the Kraken is everything.

When nothing determines status but status itself — when there are no objective outside measures of the quality of one’s work — social dynamics become much more toxic. Greer’s explanation is based on a link between personal uncertainty and personal insecurity. I think the story goes beyond that.
An illegible incentive can drive a group to behave in a certain way without any individual being aware of what’s happening; in “a world where one’s reputation rests on little more than reputation itself”, there’s a pretty strong illegible incentive to fight tooth-and-nail for reputation.

Vitalik Buterin on prediction markets — how modern ones can fail, and lessons for futarchy. Also by Vitalik, here’s a nice introduction to zk-SNARKs.

# The Legendre Transform and the Path Integral

This is yet another post on something I really should have learned long ago, but somehow never quite grasped. Or, in this case, never learned at all.

I have a Hamiltonian — let’s say $H = p^2/2 + V(x)$ for simplicity’s sake — and I’d like to get a Lagrangian for the same system. Classically, this is done through the Legendre transform. Focus on the $p$-dependence of $H(p;x)$ for fixed $x$. Define $h(p,\dot x;x) = p \dot x - H(p;x)$, and the Legendre transform of $H$ is given by the maximum value attained by $h$: $$L(x,\dot x) = \max_{p} h(p,\dot x;x) \text.$$ Of course, that maximum value can be found by requiring that $0 = \frac{\partial h}{\partial p}$. As a result, the Lagrangian can be defined more simply by saying $L = p \dot x - H$, where $p$ is defined such that $\dot x = \frac{\partial H}{\partial p}$.

In quantum mechanics, there’s another way to go from the Hamiltonian to a Lagrangian. This is what we do when we derive the path integral. For the same Hamiltonian as above, we can expand the propagator like so: $$\langle x_f | e^{-i H t} | x_i \rangle = \int d x_n\cdots d x_1\; \langle x_{f} | e^{-i H \Delta t} | x_n\rangle \cdots \langle x_{1} | e^{-i H \Delta t} | x_i\rangle$$ where $\Delta t = t / n$.
When $n$ is large, the operator $e^{-i H \Delta t}$ is close to the identity, and easily approximated: $$\langle x' | e^{-i H \Delta t} | x\rangle = e^{i \big(\frac 1 {2 \Delta t} (x'-x)^2 - V(x)\,\Delta t\big)} + O(\Delta t^2) \text.$$ Hey, look, there’s a naive discretization of the Lagrangian up in the exponential! Putting it all together and treating $x(t)$ as a continuous function (at the very least, the discrete values can be interpolated to construct a piecewise smooth function), we obtain the usual path integral for the propagator. The object in the exponential is by definition the action, or the integral of the Lagrangian. $$\langle x_f | e^{-i H t} | x_i \rangle = \int \mathcal D x(t)\; e^{i \int dt\; L(x,\dot x)}$$

These two procedures give the same answer, at least in “reasonable” cases, so they must be related. How? In particular, the second procedure should be hiding a Legendre transform somewhere. (I’m particularly interested in this question because I have no trouble keeping track of the second procedure, whereas the first is a source of perpetual bafflement.)

Well, let’s look at the quantum case more carefully. In order to approximate the matrix element of $e^{-i H \Delta t}$, one usually expands it with the Suzuki-Trotter decomposition: $$\langle x' | e^{-i H \Delta t} | x\rangle = \int d p\; \langle x' | e^{-i p^2 \Delta t/2} | p\rangle \langle p | e^{-i V(x) \Delta t} | x\rangle = \int d p\; e^{i p (x'-x)} e^{-i p^2 \Delta t/2 - i V(x) \Delta t} \text.$$ Rather than integrating out the momentum, let’s keep it around for a bit and see what happens. Plugging this formula into the expression for the full propagator, and again treating $p$ and $x$ as continuous functions, we find $$\langle x_f | e^{-i H t} | x_i \rangle = \int \mathcal D x(t)\; \mathcal D p(t)\; e^{i \int dt\; (p \dot x - H(p,x))}\text.$$ We’re halfway there! The exponent is just the time integral of the function $h(p,\dot x;x)$ that we use in the definition of the Legendre transform.
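As a quick numerical sanity check (my addition, not from the post): for $H(p) = p^2/2$ at fixed $x$, maximizing $p \dot x - H(p)$ over a grid of momenta recovers $L = \dot x^2/2$, exactly as the Legendre transform prescribes. The `legendre` helper here is a hypothetical name:

```python
def legendre(H, xdot, p_grid):
    """L(xdot) = max over p of [p*xdot - H(p)]: a brute-force Legendre transform."""
    return max(p * xdot - H(p) for p in p_grid)

H = lambda p: p * p / 2                          # free-particle Hamiltonian
p_grid = [i / 1000 for i in range(-5000, 5001)]  # momenta from -5 to 5
L_val = legendre(H, 1.5, p_grid)                 # expect xdot^2 / 2 = 1.125
```

The maximum is attained at $p = \dot x$, which is precisely the stationarity condition $\dot x = \partial H / \partial p$ from the text.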
The last bit is to realize that we want the classical limit, that is, the $\hbar \rightarrow 0$ limit. Rewriting the above expression with $\hbar$ in place: $$\langle x_f | e^{-i H t / \hbar} | x_i \rangle = \int \mathcal D x(t)\; \mathcal D p(t)\; e^{i \int dt\; (p \dot x - H(p,x))/\hbar}\text.$$ Now we can consider integrating out the momentum. This integral is dominated by the region of stationary phase; that is, where $\frac{\partial h}{\partial p} = 0$. So, the Legendre transform reappears in the classical limit of the path integral (just as Lagrange’s equations of motion do).

Here is a related StackExchange post for further reading. It doesn’t quite go all the way (at least along the direction I’m interested in), but most of the important bits are there. There’s also a hint of this story in section 9.1 of Peskin.

# Quantum Ensembles

In classical statistical mechanics, thermal expectation values are computed by averaging over some probability distribution. In the case of the canonical ensemble, this looks like a sum over all states $s$ weighted by the exponential of the energy $E_s$: $$\langle\mathcal O\rangle_{\text{classical}} = Z^{-1} \sum_s e^{-E_s / T} \mathcal O_s \text.$$ The normalization of $Z^{-1}$ is irrelevant to the discussion here — in fact, I’ll leave it off all future expressions. How does this generalize to quantum mechanics? The usual way is to replace the energies with a Hermitian operator (the Hamiltonian), the observable with another Hermitian operator, and the sum with a trace. An expectation value in the quantum canonical ensemble looks like $$\langle \mathcal O\rangle_{\text{quantum}} = \mathrm{Tr}\;e^{-H / T} \mathcal O$$ To see that this is in fact a generalization of the classical ensemble above, consider the case where the Hamiltonian and observable commute. A basis exists where they are both diagonal, and in that basis the quantum expectation value takes exactly the form of the classical expectation value above. So far, so familiar.
But is this the only sensible generalization of classical statistical mechanics? In a sense, yes. First let’s go back to the classical case. The exponential of the energy $e^{-E/T}$ is referred to as the Boltzmann factor. If we know the expectation values of a bunch of operators $\mathcal O_1,\ldots,\mathcal O_n$, then as long as these expectation values are physically consistent, a suitable Boltzmann factor can always be constructed to satisfy them. (Actually, that’s the definition of physically consistent.) Furthermore, if the system is finite and we know the expectation values of enough operators, the Boltzmann factor is actually uniquely determined! (Well, up to a measure-zero set, or something like that.) That should demotivate us from looking for new formulations of classical statistical mechanics. Similar logic holds for the quantum case. In a quantum system, we’re also constrained by linearity, but the structure of the argument is the same. Given enough (physically realizable) expectation values, the density matrix $e^{-H / T}$ is uniquely determined. As a result, there’s not much point looking for different formulations of quantum statistical mechanics, either. Let’s do it anyway. In a system with $Q$ qubits, the density matrix is a bulky object with $2^{2Q}$ entries. I don’t want to deal with that, I want to work with states — nice sleek vectors with a measly $2^{Q}$ entries. The classical ensembles give probability distributions on classical states; why can’t a quantum ensemble give a probability distribution on quantum (pure) states? Well, in fact, the density matrix already does. By diagonalizing $H$, you can see that $e^{-H / T}$ naturally yields a probability distribution on the set of eigenstates of the Hamiltonian. This isn’t very nice, though, because it sort of presupposes that we know the eigenbasis of the Hamiltonian. For any interesting problem, I don’t. Fortunately, it’s pretty clear that this probability distribution is far from unique. 
First, note that every probability distribution on states will induce some density matrix. This follows from the discussion above; alternatively, the density matrix is given explicitly by $$\rho[p] = \int d |\psi\rangle\; p(\psi)\; |\psi\rangle\langle\psi| \text.$$ Now count dimensions. The space of density matrices is $2^{2Q}$ dimensional, give or take. The space of possible probability distributions on the continuous space of states? Ah… many more dimensions there. So, many more probability distributions compatible with the thermal density matrix should exist. As far as I know, finding natural such probability distributions is largely unexplored territory. (There’s one induced by ergodicity and the time-evolution operator $e^{-i H t}$, although I’m not even sure if it’s unique.) Nevertheless, one such (approximate) construction is given by Sugiura and Shimizu. Start with a uniform (i.e., $SU(2^Q)$-symmetric) distribution on the space of states, corresponding to an infinite temperature ensemble. Now take one state, and apply the operator $e^{-H / (2T)}$. Lastly, if you think of states as being vectors in a Hilbert space, then normalize the result; if you think of states as rays, this is of course unnecessary. In the thermodynamic limit, expectation values with respect to this distribution match the canonical quantum ensemble described above. In fact, in that limit, a single quantum state sampled from this ensemble yields (almost certainly) all the correct expectation values. A careful proof is given in the paper; a nice physical argument follows just as it would for a classical system. In the infinite-volume limit, we can consider multiple sub-systems each individually in the thermodynamic limit and all uncoupled from each other. Therefore, what appears to be a single thermodynamic “sample” actually contains infinitely many, equally thermodynamic samples.
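To see concretely why the construction reproduces the canonical ensemble, here is a small numerical check (my sketch, not from the post; a dense random symmetric matrix stands in for a many-qubit Hamiltonian). Averaging the unnormalized expectation values of the states $e^{-H/(2T)}|e_k\rangle$ over a complete orthonormal basis, a stand-in for averaging over the uniform distribution on states, reproduces the canonical trace exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 1.0   # small toy dimension standing in for 2^Q

# Random real-symmetric stand-ins for the Hamiltonian and an observable
A = rng.normal(size=(d, d)); H = (A + A.T) / 2
B = rng.normal(size=(d, d)); O = (B + B.T) / 2

# Matrix exponential of s*H via eigendecomposition
w, V = np.linalg.eigh(H)
def expmH(s):
    return V @ np.diag(np.exp(s * w)) @ V.T

# Canonical ensemble: Tr(e^{-H/T} O) / Tr(e^{-H/T})
rho = expmH(-1.0 / T)
exact = np.trace(rho @ O) / np.trace(rho)

# Pure-state construction: |psi_k> = e^{-H/(2T)} |e_k>, left unnormalized.
# Summing over the basis turns the averages into traces: Tr(M O M) / Tr(M^2).
M = expmH(-1.0 / (2 * T))
num = sum(M[:, k] @ O @ M[:, k] for k in range(d))
den = sum(M[:, k] @ M[:, k] for k in range(d))
tpq = num / den
```

The agreement is exact here because $\sum_k \langle e_k| M O M |e_k\rangle = \mathrm{Tr}(M^2 O) = \mathrm{Tr}(e^{-H/T} O)$; the content of the Sugiura-Shimizu result is that in the thermodynamic limit a *single* sampled state already suffices.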
https://scirate.com/arxiv/cs.LG
# Learning (cs.LG)

• Recently, deep learning (DL) methods have been introduced very successfully into human activity recognition (HAR) scenarios in ubiquitous and wearable computing. Especially the prospect of overcoming the need for manual feature design, combined with superior classification capabilities, renders deep neural networks very attractive for real-life HAR applications. Even though DL-based approaches now outperform the state of the art in a number of recognition tasks in the field, substantial challenges remain. Most prominently, issues with real-life datasets, typically including imbalanced datasets and problematic data quality, still limit the effectiveness of activity recognition using wearables. In this paper we tackle such challenges through Ensembles of deep Long Short Term Memory (LSTM) networks. We have developed modified training procedures for LSTM networks and combine sets of diverse LSTM learners into classifier collectives. We demonstrate, both formally and empirically, that Ensembles of deep LSTM learners outperform the individual LSTM networks. Through an extensive experimental evaluation on three standard benchmarks (Opportunity, PAMAP2, Skoda) we demonstrate the excellent recognition capabilities of our approach and its potential for real-life applications of human activity recognition.

• Mar 29 2017 cs.CV cs.LG arXiv:1703.09470v1 We describe a novel method for blind, single-image spectral super-resolution. While conventional super-resolution aims to increase the spatial resolution of an input image, our goal is to spectrally enhance the input, i.e., generate an image with the same spatial resolution, but a greatly increased number of narrow (hyper-spectral) wave-length bands.
Just like the spatial statistics of natural images have rich structure, which one can exploit as a prior to predict high-frequency content from a low resolution image, the same is also true in the spectral domain: the materials and lighting conditions of the observed world induce structure in the spectrum of wavelengths observed at a given pixel. Surprisingly, very little work exists that attempts to use this insight to achieve blind spectral super-resolution from single images. We start from the conjecture that, just like in the spatial domain, we can learn the statistics of natural image spectra, and with their help generate finely resolved hyper-spectral images from RGB input. Technically, we follow the current best practice and implement a convolutional neural network (CNN), which is trained to carry out the end-to-end mapping from an entire RGB image to the corresponding hyperspectral image of equal size. We demonstrate spectral super-resolution both for conventional RGB images and for multi-spectral satellite data, outperforming the state of the art.

• Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions.
The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm its effectiveness. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance.

• In Imitation Learning, a supervisor's policy is observed and the intended behavior is learned. A known problem with this approach is covariate shift, which occurs because the agent visits different states than the supervisor. Rolling out the current agent's policy, an on-policy method, allows for collecting data along a distribution similar to the updated agent's policy. However, this approach can become less effective as the demonstrations are collected in very large batch sizes, which reduces the relevance of data collected in previous iterations. In this paper, we propose to alleviate the covariate shift via the injection of artificial noise into the supervisor's policy. We prove an improved bound on the loss due to the covariate shift, and introduce an algorithm that leverages our analysis to estimate the level of $\epsilon$-greedy noise to inject. In a driving simulator domain where an agent learns an image-to-action deep network policy, our algorithm Dart achieves better performance than DAgger with 75% fewer demonstrations.

• This work studies how an AI-controlled dog-fighting agent with tunable decision-making parameters can learn to optimize performance against an intelligent adversary, as measured by a stochastic objective function evaluated on simulated combat engagements. Gaussian process Bayesian optimization (GPBO) techniques are developed to automatically learn global Gaussian Process (GP) surrogate models, which provide statistical performance predictions in both explored and unexplored areas of the parameter space.
This allows a learning engine to sample full-combat simulations at parameter values that are most likely to optimize performance and also provide highly informative data points for improving future predictions. However, standard GPBO methods do not provide a reliable surrogate model for the highly volatile objective functions found in aerial combat, and thus do not reliably identify global maxima. These issues are addressed by novel Repeat Sampling (RS) and Hybrid Repeat/Multi-point Sampling (HRMS) techniques. Simulation studies show that HRMS improves the accuracy of GP surrogate models, allowing AI decision-makers to more accurately predict performance and efficiently tune parameters.

• Real-world robots are becoming increasingly complex and commonly act in poorly understood environments where it is extremely challenging to model or learn their true dynamics. Therefore, it might be desirable to take a task-specific approach, wherein the focus is on explicitly learning the dynamics model which achieves the best control performance for the task at hand, rather than learning the true dynamics. In this work, we use Bayesian optimization in an active learning framework where a locally linear dynamics model is learned with the intent of maximizing the control performance, and used in conjunction with optimal control schemes to efficiently design a controller for a given task. This model is updated directly based on the performance observed in experiments on the physical system in an iterative manner until a desired performance is achieved. We demonstrate the efficacy of the proposed approach through simulations and real experiments on a quadrotor testbed.

• This paper presents a real-time, vibration-based identification technique using measured frequency response functions (FRFs) under random vibration loading. Artificial Neural Networks (ANNs) are trained to map damage fingerprints to damage characteristic parameters.
A principal component analysis (PCA) technique is used to tackle the problem of high dimensionality and high noise in the data, which is common for industrial structures. The present study considers cracks, rivet-hole expansion, and redundant uniform mass as damage types on the structure. The frequency response function data, after being reduced in size using PCA, are fed to individual neural networks to localize and predict the severity of damage on the structure. The system of ANNs is trained with both numerical and experimental model data to make it reliable and robust. The methodology is applied to a numerical model of a stiffened panel structure, where damage is confined close to the stiffener. The results showed that, in all the cases considered, it is possible to localize and predict the severity of the damage with very good accuracy and reliability.

• We present a hybrid method for latent information discovery on data sets containing both text content and connection structure, based on constrained low rank approximation. The new method jointly optimizes the Nonnegative Matrix Factorization (NMF) objective function for text clustering and the Symmetric NMF (SymNMF) objective function for graph clustering. We propose an effective algorithm for the joint NMF objective function, based on a block coordinate descent (BCD) framework. The proposed hybrid method discovers content associations via latent connections found using SymNMF. The method can also be applied with a natural conversion of the problem when a hypergraph formulation is used or the content is associated with hypergraph edges. Experimental results show that by simultaneously utilizing both content and connection structure, our hybrid method produces higher quality clustering results compared to other NMF clustering methods that use content alone (standard NMF) or connection structure alone (SymNMF).
We also present some interesting applications to several types of real-world data, such as citation recommendations for papers. The hybrid method proposed in this paper can also be applied to general data expressed with both feature space vectors and pairwise similarities, and can be extended to the case with multiple feature spaces or multiple similarity measures. • Early stopping is a widely used technique to prevent poor generalization performance when training an over-expressive model by means of gradient-based optimization. To find a good point to halt the optimizer, a common practice is to split the dataset into a training and a smaller validation set to obtain an ongoing estimate of the generalization performance. In this paper we propose a novel early stopping criterion which is based on fast-to-compute, local statistics of the computed gradients and entirely removes the need for a held-out validation set. Our experiments show that this is a viable approach in the setting of least-squares and logistic regression as well as neural networks. • In this paper, we address the inverse problem, or the statistical machine learning problem, in Markov random fields with a non-parametric pair-wise energy function with continuous variables. The inverse problem is formulated by maximum likelihood estimation. The exact treatment of maximum likelihood estimation is intractable because of two problems: (1) it includes the evaluation of the partition function, and (2) it is formulated in the form of functional optimization. We avoid Problem (1) by using the Bethe approximation, an approximation technique equivalent to loopy belief propagation. Problem (2) can be solved by using orthonormal function expansion, which reduces a functional optimization problem to a function optimization problem. Our method can provide an analytic form of the solution of the inverse problem within the framework of the Bethe approximation.
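The held-out-validation early stopping that the gradient-statistics criterion above seeks to remove can be sketched in a few lines. The model (least-squares via gradient descent), learning rate, and patience threshold below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def train_with_early_stopping(X, y, val_frac=0.25, lr=0.01, patience=20, max_steps=5000):
    # Hold out a validation split and stop once its loss stops improving.
    n_val = int(len(y) * val_frac)
    Xtr, ytr = X[n_val:], y[n_val:]
    Xva, yva = X[:n_val], y[:n_val]
    w = np.zeros(X.shape[1])
    best_w, best_loss, bad = w.copy(), np.inf, 0
    for _ in range(max_steps):
        grad = 2 * Xtr.T @ (Xtr @ w - ytr) / len(ytr)  # least-squares gradient
        w -= lr * grad
        val_loss = np.mean((Xva @ w - yva) ** 2)
        if val_loss < best_loss - 1e-9:
            best_w, best_loss, bad = w.copy(), val_loss, 0
        else:
            bad += 1
            if bad >= patience:  # no improvement for `patience` steps
                break
    return best_w, best_loss

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
w, loss = train_with_early_stopping(X, y)
```

The point of the paper is precisely that the validation split here costs data; its criterion would monitor gradient statistics on the training set instead.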
• Managers of US National Forests must decide what policy to apply for dealing with lightning-caused wildfires. Conflicts among stakeholders (e.g., timber companies, home owners, and wildlife biologists) have often led to spirited political debates and even violent eco-terrorism. One way to transform these conflicts into multi-stakeholder negotiations is to provide a high-fidelity simulation environment in which stakeholders can explore the space of alternative policies and understand the tradeoffs therein. Such an environment needs to support fast optimization of MDP policies so that users can adjust reward functions and analyze the resulting optimal policies. This paper assesses the suitability of SMAC---a black-box empirical function optimization algorithm---for rapid optimization of MDP policies. The paper describes five reward function components and four stakeholder constituencies. It then introduces a parameterized class of policies that can be easily understood by the stakeholders. SMAC is applied to find the optimal policy in this class for the reward functions of each of the stakeholder constituencies. The results confirm that SMAC is able to rapidly find good policies that make sense from the domain perspective. Because the full-fidelity forest fire simulator is far too expensive to support interactive optimization, SMAC is applied to a surrogate model constructed from a modest number of runs of the full-fidelity simulator. To check the quality of the SMAC-optimized policies, the policies are evaluated on the full-fidelity simulator. The results confirm that the surrogate value estimates are valid. This is the first successful optimization of wildfire management policies using a full-fidelity simulation. The same methodology should be applicable to other contentious natural resource management problems where high-fidelity simulation is extremely expensive.
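The surrogate-based black-box optimization pattern shared by SMAC and the GPBO work above can be sketched with a simple Gaussian-process surrogate and an upper-confidence-bound acquisition rule. Every modeling choice here (RBF kernel, fixed length scale, grid-based acquisition) is an illustrative assumption, not what either paper actually uses.

```python
import numpy as np

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior mean and pointwise variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

def bayes_opt(f, n_iter=10, seed=0):
    # Sample where the upper confidence bound (mean + 2*std) is largest.
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(0, 1, size=2))  # two random initial evaluations
    ys = [f(x) for x in xs]
    grid = np.linspace(0, 1, 201)
    for _ in range(n_iter):
        mean, var = gp_posterior(np.array(xs), np.array(ys), grid)
        ucb = mean + 2.0 * np.sqrt(np.maximum(var, 0))
        xs.append(grid[np.argmax(ucb)])
        ys.append(f(xs[-1]))
    return xs[int(np.argmax(ys))]

best = bayes_opt(lambda x: -(x - 0.3) ** 2)  # true maximum at x = 0.3
```

The loop trades a handful of expensive evaluations of `f` for cheap surrogate queries, which is the whole appeal when `f` is a full-fidelity simulator.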
• Policy analysts wish to visualize a range of policies for large simulator-defined Markov Decision Processes (MDPs). One visualization approach is to invoke the simulator to generate on-policy trajectories and then visualize those trajectories. When the simulator is expensive, this is not practical, and some method is required for generating trajectories for new policies without invoking the simulator. The method of Model-Free Monte Carlo (MFMC) can do this by stitching together state transitions for a new policy based on previously-sampled trajectories from other policies. This "off-policy Monte Carlo simulation" method works well when the state space has low dimension but fails as the dimension grows. This paper describes a method for factoring out some of the state and action variables so that MFMC can work in high-dimensional MDPs. The new method, MFMCi, is evaluated on a very challenging wildfire management MDP. Noon van der Silk Mar 08 2017 04:45 UTC I feel that while the proliferation of GUNs is unquestionable a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent ...(continued) Omar Shehab Sep 12 2016 12:50 UTC I am still trying to understand the following statement from II.A. > This leads to the condition that the first- and second-order moments > of the model and data distributions should be equal for the parameters > to be optimal. Alessandro Dec 09 2015 01:12 UTC Hey, I've already seen this title! http://arxiv.org/abs/1307.0401 Noon van der Silk Jul 13 2015 10:44 UTC There's some code for this here: https://github.com/ryankiros/skip-thoughts anti-plagiarism Jul 09 2015 15:11 UTC This paper "**Tree-based convolution for sentence modeling**" is a deliberate plagiarism. The texts, models and ideas overlap significantly with previous work on arXiv. 
- TBCNN: A **Tree-based Convolutional** Neural Network for Programming Language Processing (arXiv:1409.5718) - **Tree-based ...(continued)
https://tex.stackexchange.com/questions/266450/replace-default-images-in-a-document
# Replace default images in a document I'm creating a template document which contains lots of default images. Someone without LaTeX knowledge should be able to replace these images (the path / name), e.g. via drag and drop. Replacing all names takes too much time, since there really are lots of images. I'm using TeXstudio, which includes a drag and drop function for images. So a second possibility could be a drag and drop template which inserts the images. As shown in the MWE, there are always two of them inside a figure environment. This is a problem, because TeXstudio's drag and drop inserts each image inside its own environment. These were just my ideas. I've also been looking for a macro that could do the job but haven't found anything. Any other method would be just as good, as long as it enables the editor either 1. to replace default images with other images or 2. to insert the images as shown via drag and drop. I hope one can understand what I mean, otherwise I'll try to explain it better ... Thanks a lot for your help!
\documentclass[a4paper,pagesize]{scrartcl}
\usepackage{graphicx}
\begin{document}
\section{Image Test}
\begin{figure}[!ht]
  \centering
  \fbox{
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
  }
  \caption{Images}
\end{figure}
\begin{figure}[!ht]
  \centering
  \fbox{
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
  }
  \caption{Images}
\end{figure}
\begin{figure}[!ht]
  \centering
  \fbox{
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
    \begin{minipage}{.48\textwidth}
      \centering
      \includegraphics[width=.99\textwidth]{example-image-golden}
    \end{minipage}
  }
  \caption{Images}
\end{figure}
\end{document}

• Are the settings always the same, i.e. {figure}[!ht], \fbox, minipage=.48\textwidth, width=.99\textwidth? – touhami Sep 8 '15 at 13:06
• Yes, all settings are the same. Only the images and the captions will differ. – kanra Sep 8 '15 at 13:18
• You can configure TeXstudio to input only \includegraphics[width=.99\textwidth]{exampleimage} when you drag and drop. – touhami Sep 8 '15 at 14:26
• I've tried to do so, but when I drop the image behind the \centering command a dialog pops up saying something like 'the code could not be interpreted, the command \fbox is not supported'.
– kanra Sep 8 '15 at 15:19

I define a command \mtcommand with three arguments:

\documentclass[a4paper,pagesize]{scrartcl}
\usepackage{graphicx}
\newcommand{\mtcommand}[3]{%
  \begin{figure}[!ht]
    \centering
    \fbox{
      \begin{minipage}{.48\textwidth}
        \centering
        #1
      \end{minipage}
      \begin{minipage}{.48\textwidth}
        \centering
        #2
      \end{minipage}
    }
    \caption{#3}
  \end{figure}}
\begin{document}
\section{Image Test}
\mtcommand{\includegraphics[width=0.7\linewidth]{example-image}}
          {\includegraphics[width=0.7\linewidth]{example-image}}
          {images}
\end{document}

• I've tried your idea together with a drag and drop template for the image and it works well. Interesting how the \fbox is no problem when mentioned in the \newcommand. Thank you so much! – kanra Sep 8 '15 at 19:09
https://ask.wireshark.org/answers/4858/revisions/
As mentioned in the wireshark-filter man page, the matches (or ~) operator "is only implemented for protocols and for protocol fields with a text string representation.", which the Ethernet source and destination MAC addresses are not. A slice comparison on the raw bytes can be used instead, e.g. to match on the first three octets: eth.addr[0:3] == 00:0c:29
https://ohiolink.oercommons.org/courseware/lesson/2456/overview
Subject: Calculus Material Type: Module Provider: Ohio Open Ed Collaborative Tags: Cac077, Trigonometric Substitution Language: English Media Formats: eBook, Interactive

# Trigonometric substitution module

## Overview

After completing this section, students should be able to do the following.

• Use Pythagorean identities to simplify expressions.
• Substitute trigonometric functions to simplify integrals.
• Complete the square to change the form of an integral.

# Trigonometric Substitution

## Ximera Module

We integrate by substitution with the appropriate trigonometric function.
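A representative worked example of the kind this module covers (my own illustration, not taken from the module itself):

```latex
\[
\int \frac{dx}{\sqrt{1-x^2}}
\quad\text{with } x = \sin\theta,\; dx = \cos\theta\,d\theta
\]
\[
= \int \frac{\cos\theta\,d\theta}{\sqrt{1-\sin^2\theta}}
= \int \frac{\cos\theta\,d\theta}{\cos\theta}
= \theta + C = \arcsin x + C,
\]
```

where the Pythagorean identity $1-\sin^2\theta = \cos^2\theta$ does the simplification.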
https://www.mathdoubts.com/distributive-property/
Distributive property

The property of distributing the multiplication across the addition or subtraction of the terms is called the distributive property.

Introduction

$a$, $b$ and $c$ are three literals that represent three terms in algebraic form. The sum and difference of the terms $b$ and $c$ are written as $b+c$ and $b-c$ respectively. The distributive property can be written in terms of $a$, $b$ and $c$ in two different forms.

Addition form

$a(b+c) \,=\, ab+ac$

Learn how to distribute the multiplication over addition.

Subtraction form

$a(b-c) \,=\, ab-ac$

Learn how to distribute the multiplication over subtraction.

Usage

The distributive property is mainly used in two different cases.

1. To distribute the multiplication across the sum or difference of the terms.
2. To take the common factor out from the sum or difference of the terms.
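A quick numeric check of both forms, taking $a = 3$, $b = 4$ and $c = 2$:

```latex
3(4+2) = 3 \cdot 4 + 3 \cdot 2 = 12 + 6 = 18
\qquad
3(4-2) = 3 \cdot 4 - 3 \cdot 2 = 12 - 6 = 6
```

Both sides agree: $3 \times 6 = 18$ and $3 \times 2 = 6$.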
https://lists.gnu.org/archive/html/emacs-orgmode/2014-05/msg00986.html
emacs-orgmode

## Re: [O] Beamer export: vertical alignment of columns on a slide

From: Eric S Fraga
Subject: Re: [O] Beamer export: vertical alignment of columns on a slide
Date: Fri, 23 May 2014 09:57:02 +0100
User-agent: Gnus/5.130012 (Ma Gnus v0.12) Emacs/24.4.50 (gnu/linux)

On Friday, 23 May 2014 at 10:02, Vikas Rawal wrote:
> I have a slide with two columns, each having an unordered list of
> items. The two lists are of different height (different number of
> items). Vertically, they are aligned in the middle. How can I align
> them to the top?
>
> Vikas

From the beamer manual, in the section "Structuring a Frame", there is an example of vertically aligning columns. This is done by adding the option [t] to the \begin{columns} directive. By the way, I think the default behaviour is affected by the theme you choose, but I'm not entirely sure whether this is true or how... In any case, I do not know how to get this option passed through from org. You can specify options (with the BEAMER_opt property) for individual elements such as frames and individual columns, but I haven't seen a directive that would allow one to specify an option for an implicitly generated LaTeX directive such as a columns environment. Maybe Nicolas can jump in here...? Secondly, I find that [t] doesn't always work as nicely as it should, so I often end up putting \vspace*{} directives directly in the org and sometimes \vfill.

HTH,
eric
--
: Eric S Fraga (0xFFFCF67D), Emacs 24.4.50.2, Org release_8.2.6-1021-g2ce78e
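For reference, the beamer-level fix being discussed looks like this in plain LaTeX (a minimal sketch; the question is precisely that org would have to emit the [t] itself):

```latex
\begin{columns}[t] % t = align the tops of the columns
  \begin{column}{0.48\textwidth}
    \begin{itemize}
      \item first item
      \item second item
      \item third item
    \end{itemize}
  \end{column}
  \begin{column}{0.48\textwidth}
    \begin{itemize}
      \item only one item, still top-aligned
    \end{itemize}
  \end{column}
\end{columns}
```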
https://www.bmj.com/content/377/bmj.o976
Intended for healthcare professionals

Opinion

# We need more support and less normalcy to stop airborne viruses

BMJ 2022; 377 (Published 13 April 2022) Cite this as: BMJ 2022;377:o976

Abraar Karan, infectious disease doctor, Stanford University. Twitter @AbraarKaran

Governments should be investing in a thorough redesign of how our air is filtered in public spaces, says Abraar Karan

A recent poll by the Kaiser Family Foundation (KFF) assessing public opinion on the covid-19 pandemic after two years found that unvaccinated adults, Republicans, and white adults were most likely to say they never changed the level of activities they engaged in during the pandemic or had returned back to normal.1 Nearly half of those with household incomes greater than $90 000 said the same. In contrast, black adults, those with chronic conditions, and those living in households with incomes less than $40 000 were the most likely to say they were doing very few of the activities they did before the pandemic had started. In the United States and Europe, there have been major pushes to have the public return to “normal,” an unclear gesture suggesting perhaps a world before covid-19. But this is a false promise, and is instead a justification for perpetuating the conditions that left many countries so vulnerable to covid-19 to begin with. Key among these are the inequities that have seen people live through strikingly different experiences of the pandemic. For instance, isolating at home may pose notably lower risks if you are wealthy and health literate.
You may have extra rooms where family members can self-isolate—a study from the US Centers for Disease Control and Prevention showed that having the ability to isolate in a separate room was associated with significantly lower odds of transmitting covid to family members during the omicron surge.2 You may be able to afford HEPA filters—studies have shown that higher levels of infectious aerosols in the air are associated with higher risk of transmission to others.3 You may have the time and knowledge to procure high filtration N95 masks for everyone in your home. And, through these efforts, you may very well avoid the rest of your family getting sick with covid-19. In addition, if you work in a job that can be done from home, you may be right back to getting your paycheck without having to wait in isolation. Now consider if you were living in a small, crowded home or apartment where isolating in a separate room is not possible. Imagine if you did not have the funds to buy a portable HEPA filter; did not have the knowledge about how to construct low cost air filtration devices4; and worked a job in which you were on the frontlines and could not simply continue it from home. Tack on to this the possibility you did not receive paid time off from work and you are left now with the choice of whether to work while sick, or be both sick and unpaid. Many patients who I treated had to make these choices. Many infected their entire families. Many died or were left with ongoing health complications. And the data reflect this as well. In a study of low income, primarily Hispanic families, household transmission of SARS-CoV-2 was found to be much higher than in most other studies examining the general population; and risk of disease spread was associated with lower household incomes.5 The KFF poll was particularly alarming to me because poorer communities bore the brunt of the pandemic during every wave. 
On top of this, they were often blamed in many countries (as they have been throughout history) for spreading a virus that eventually affected the wealthy. Now, we are seeing poorer people, often from ethnic minority communities, trying to protect themselves without the luxury of returning to “normal” for fear that they could get sick again during upcoming waves, some of which have already started, as in the UK. If wealthy people who are unmasked are now the harbingers of disease to the rest of society, will people who’ve been marginalised be able to hold them accountable? I know they will not. The answer is not to mask forever or to fall back on any of the other false, extreme dichotomies that are often presented. With an airborne virus, the impetus lies on the government to protect us all through a thorough redesign of how our air is filtered. In the hospital, when we are dealing with an airborne pathogen, we don’t just wear N95 masks. We also place those patients in rooms with increased air changes, HEPA filters, and in negative pressure rooms as well. Yet, for the general public, our leaders have focused mostly on masking and far less on the engineering controls that they could create to protect us. Just like we had revolutions in clean water or sewage disposal, or how we fundamentally improved roads when automobiles came into existence, so too must we revolutionise the cleaning of the air we share and breathe in public spaces. Anything short of this will fundamentally leave us vulnerable to this and future respiratory viruses. Instead, the governments of many countries have turned responsibility over entirely to their citizens, advising them to mask (if they wish to) and get vaccinated. We know that masking can help reduce the spread of covid but as a strategy it is unsustainable indefinitely. 
We also know that while vaccines can protect us from severe disease, they are less effective in preventing infections,6 which when left unmitigated can cause catastrophic harms even if they have a lower fatality rate overall. So, we are left quibbling among ourselves about whether to mask or not, instead of demanding accountability from our governments to clean the air we breathe; ensure basic worker rights, such as paid time off for those who are sick; and more rigorously protect poorer families from in-home spread, through the provision of tools such as air purifiers/filters, rapid tests, and N95 masks when a family member is sick. Covid-19 is not over. And, if nothing else, it should serve as a warning for how underprepared we are for future respiratory threats. This knowledge should not see us turning against one another, but looking towards our governments and leaders for the support we deserve. ## Footnotes • Competing interests: Abraar Karan had served as a paid research consultant to the Independent Panel on Pandemic Preparedness and Response in 2020. • Provenance and peer review: Not commissioned; not peer reviewed.
2023-03-22 23:36:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21273621916770935, "perplexity": 2376.7081875661056}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00494.warc.gz"}
https://aaronschlegel.me/integration-by-parts.html
## Integration by Parts

Integration by parts is another technique for simplifying integrands. As we saw in previous posts, each differentiation rule has a corresponding integration rule. In the case of integration by parts, the corresponding differentiation rule is the Product Rule. The technique of integration by parts allows us to simplify integrands of the form: $$\int f(x) g(x) dx$$ Examples of this form include: $$\int x \cos{x} \space dx, \qquad \int e^x \cos{x} \space dx, \qquad \int x^2 e^x \space dx$$ As integration by parts is the product rule applied to integrals, it helps to state the Product Rule again. The Product Rule is defined as: $$\frac{d}{dx} \big[ f(x)g(x) \big] = f^{\prime}(x) g(x) + f(x) g^{\prime}(x)$$ When we apply the product rule to indefinite integrals, we can restate the rule as: $$\int \frac{d}{dx} \big[f(x)g(x)\big] \space dx = \int \big[f^{\prime}(x) g(x) + f(x) g^{\prime}(x) \big] \space dx$$ Then, rearranging so we get $f(x)g^{\prime}(x) \space dx$ on the left side of the equation: $$\int f(x)g^{\prime}(x) \space dx = \int \frac{d}{dx} \big[f(x)g(x)\big] \space dx - \int f^{\prime}(x)g(x) \space dx$$ Which gives us the integration by parts formula! The formula is typically written in differential form: $$\int u \space dv = uv - \int v \space du$$

## Examples

The following examples walk through several problems that can be solved using integration by parts. We also employ the wonderful SymPy package for symbolic computation to confirm our answers. To use SymPy later to verify our answers, we load the modules we will require and initialize several variables for use with the SymPy library.
In [1]:

from sympy import symbols, limit, diff, sin, cos, log, tan, sqrt, init_printing, plot, integrate
from mpmath import ln, e, pi, cosh, sinh

init_printing()
x = symbols('x')
y = symbols('y')

Example 1: Evaluate the integrand $\int x \sin{\frac{x}{2}} \space dx$

Recalling the differential form of the integration by parts formula, $\int u \space dv = uv - \int v \space du$, we set $u = x$ and $dv = \sin{\frac{x}{2}} \space dx$. Solving for the derivative of $u$, we arrive at $du = 1 \space dx = dx$. Next, we find the antiderivative of $dv$. To find this antiderivative, we employ the Substitution Rule. $$u = \frac{x}{2}, \qquad du = \frac{1}{2} \space dx, \qquad dx = 2 \space du$$ $$\int \sin{\frac{x}{2}} \space dx = 2 \int \sin{u} \space du = -2 \cos{u} + C$$ Therefore, $v = -2 \cos{\frac{x}{2}}$. Entering these into the integration by parts formula: $$-2x\cos{\frac{x}{2}} - (-2)\int \cos{\frac{x}{2}} \space dx$$ Then, solving for the integrand $\int \cos{\frac{x}{2}} \space dx$, we employ the Substitution Rule again as before to arrive at $2\sin{\frac{x}{2}}$ (the steps in solving this integrand are the same as before when we solved for $\int \sin{\frac{x}{2}} \space dx$). Thus, the integral is evaluated as: $$-2x\cos{\frac{x}{2}} + 4\sin{\frac{x}{2}} + C$$ Using SymPy's integrate, we can verify our answer is correct (SymPy does not include the constant of integration $C$).

In [2]:

integrate(x * sin(x / 2), x)

Out[2]: $$- 2 x \cos{\left (\frac{x}{2} \right )} + 4 \sin{\left (\frac{x}{2} \right )}$$

Example 2: Evaluate $\int t^2 \cos{t} \space dt$

We start by setting $u = t^2$ and $dv = \cos{t} \space dt$. The derivative of $t^2$ is $2t$, thus $du = 2t \space dt$. Integrating $dv = \cos{t} \space dt$ gives us $v = \sin{t}$. Entering these into the integration by parts formula: $$t^2 \sin{t} - 2\int t \sin{t} \space dt$$ Therefore, we must do another round of integration by parts to solve $\int t \sin{t} \space dt$.
$$u = t, \qquad du = dt$$ $$dv = \sin{t} \space dt, \qquad v = -\cos{t}$$ Putting these together into the integration by parts formula with the above: $$t^2 \sin{t} - 2 \big(-t \cos{t} + \int \cos{t} \space dt \big)$$ Which gives us the solution: $$t^2 \sin{t} + 2t \cos{t} - 2 \sin{t} + C$$ As before, we can verify that our answer is correct by leveraging SymPy.

In [6]:

t = symbols('t')
integrate(t ** 2 * cos(t), t)

Out[6]: $$t^{2} \sin{\left (t \right )} + 2 t \cos{\left (t \right )} - 2 \sin{\left (t \right )}$$

Example 3: $\int x e^x \space dx$

Here, we set $u = x$ and $dv = e^x \space dx$. Therefore, $du = dx$ and $v = e^x$. Putting these together in the integration by parts formula: $$xe^x - \int e^x \space dx$$ As the integral of $e^x$ is just $e^x$, our answer is: $$xe^x - e^x + C$$ We can again verify our answer is accurate using SymPy (here $e$ comes from mpmath rather than SymPy, so the base is printed numerically).

In [7]:

integrate(x * e ** x, x)

Out[7]: $$2.71828182845905^{x} \left(1.0 x - 1.0\right)$$
https://socratic.org/questions/how-does-pka-change-with-environment
# How does pKa change with environment? Jan 31, 2016 Good question. $pK_a$'s should DECREASE with INCREASING temperature. I will confine my discussion to the water molecule, but the argument should be generalizable. #### Explanation: Now, the autoprotolysis of water is a well-known (and well-documented) reaction: $2H_2O \rightleftharpoons H_3O^+ + HO^-$ At $298\ K$, $K_w = 10^{-14}$; i.e. $pK_w = 14$. Because the reaction AS GIVEN is BOND-BREAKING (i.e. we have to break a strong $O-H$ bond), at higher temperatures the equilibrium should lie farther to the right. So at $T > 298\ K$, $pK_w < 14$. Does this make sense?
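To put a rough number on this argument, one can combine $pK_w = 14$ at 298 K with the van 't Hoff equation. The enthalpy value below (about +55.8 kJ/mol for the autoprotolysis of water) is an assumed figure pulled in for illustration, and the calculation treats it as temperature-independent, so this is a sketch rather than a definitive computation:

```python
import math

# Assumed value: standard enthalpy of water autoprotolysis, roughly +55.8 kJ/mol
DELTA_H = 55_800.0   # J/mol (assumption for illustration; endothermic, so K grows with T)
R = 8.314            # J/(mol*K)

def pkw(T, T_ref=298.15, pkw_ref=14.0):
    """van 't Hoff estimate of pKw at temperature T, treating the enthalpy as constant."""
    ln_ratio = -(DELTA_H / R) * (1.0 / T - 1.0 / T_ref)   # ln(Kw(T)/Kw(T_ref))
    return pkw_ref - math.log10(math.exp(ln_ratio))

print(round(pkw(323.15), 2))   # about 13.24: pKw < 14 at 50 degrees C, as argued above
```

The positive enthalpy is exactly the "bond-breaking" point made above: it forces $K_w$ up, and hence $pK_w$ down, as temperature rises.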
https://www.proofwiki.org/wiki/Category:Definitions/Naturally_Ordered_Semigroup
# Category:Definitions/Naturally Ordered Semigroup This category contains definitions related to Naturally Ordered Semigroup. Related results can be found in Category:Naturally Ordered Semigroup. The concept of a naturally ordered semigroup is intended to capture the behaviour of the natural numbers $\N$, addition $+$ and the ordering $\le$ as they pertain to $\N$. ### Naturally Ordered Semigroup Axioms A naturally ordered semigroup is a (totally) ordered commutative semigroup $\struct {S, \circ, \preceq}$ satisfying:

$(\text {NO} 1)$: $S$ is well-ordered by $\preceq$: $\forall T \subseteq S: T = \O \lor \exists m \in T: \forall n \in T: m \preceq n$

$(\text {NO} 2)$: $\circ$ is cancellable in $S$: $\forall m, n, p \in S: m \circ p = n \circ p \implies m = n$ and $p \circ m = p \circ n \implies m = n$

$(\text {NO} 3)$: Existence of product: $\forall m, n \in S: m \preceq n \implies \exists p \in S: m \circ p = n$

$(\text {NO} 4)$: $S$ has at least two distinct elements: $\exists m, n \in S: m \ne n$

## Pages in category "Definitions/Naturally Ordered Semigroup" The following 15 pages are in this category, out of 15 total.
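Apart from well-ordering, which quantifies over arbitrary subsets, these axioms can be spot-checked mechanically on a finite fragment of the motivating example $(\N, +, \le)$. A small illustrative sketch:

```python
# Spot-check of axioms NO2-NO4 for the motivating example (N, +, <=),
# restricted to the finite fragment S = {0, ..., 9}.  NO1 (well-ordering)
# quantifies over arbitrary subsets, so it cannot be verified this way.
S = range(10)

# NO2: + is cancellable (one check suffices on both sides since + is commutative)
for m in S:
    for n in S:
        for p in S:
            if m + p == n + p:
                assert m == n

# NO3: existence of product: m <= n implies m + p = n for some p
for m in S:
    for n in S:
        if m <= n:
            assert any(m + p == n for p in S)

# NO4: S has at least two distinct elements
assert len(set(S)) >= 2
print("NO2-NO4 hold on the fragment {0, ..., 9}")
```

This is only a sanity check on a slice of $\N$, of course, not a proof that $(\N, +, \le)$ is a naturally ordered semigroup.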
https://tex.stackexchange.com/questions/469051/topology-diagrams-labelled-edges
# Topology diagrams (labelled edges) What is the best way to create diagrams like these in LaTeX? Is Tikz the way to go? (Code for these specific instances would be useful but is not absolutely required, since I'll be needing to make diagrams similar in spirit but not identical. Also, this has almost certainly been asked before, so I would equally appreciate a link to a previous asking -- I'm just unsure what terms to search to find such a post.) edit: looked at some old code and came up with \begin{tikzpicture} \draw[ultra thick,domain=0:1,samples=100, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\arrow{stealth}}}] (0,1) -- (0,0); \draw[ultra thick,domain=0:1,samples=100, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\arrow{stealth}}}] (1,1) -- (0,1); \draw[ultra thick,domain=0:1,samples=100, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\arrow{stealth}}}] (1,0) -- (1,1); \draw[ultra thick,domain=0:1,samples=100, postaction={decorate}, decoration={markings, mark=at position 0.5 with {\arrow{stealth}}}] (0,0) -- (1,0); \node at (.5,-.2) {$a$}; \end{tikzpicture} although this seems rather clunky. • I thought about doing something with tikzpicture and explicitly stating the parametrization of each length but it seems that there ought to be a more elegant way to do it. – zjs Jan 7 '19 at 21:59 • @zjs Just post what you have got. It will be much easier to see what you want if you post a code example. 
– Henri Menke Jan 7 '19 at 22:04 \documentclass[tikz,border=3.14mm]{standalone} \usetikzlibrary{decorations.markings} \begin{document} \tikzset{lab dis/.store in=\LabDis, lab dis=0.3, ->-/.style args={at #1 with label #2}{decoration={ markings, mark=at position #1 with {\arrow{>}; \node at (0,\LabDis) {#2};}},postaction={decorate}}, -<-/.style args={at #1 with label #2}{decoration={ markings, mark=at position #1 with {\arrow{<}; \node at (0,\LabDis) {#2};}},postaction={decorate}}, -*-/.style={decoration={ markings, mark=at position #1 with {\fill (0,0) circle (1.5pt);}},postaction={decorate}}, } \begin{tikzpicture}[>=latex] \draw[->-=at 0.125 with label {$b$}, ->-=at 0.375 with label {$a$}, -<-=at 0.625 with label {$b$}, -<-=at 0.875 with label {$a$}] (0,0) rectangle (4,4); \draw[lab dis=-0.3, -*-=0,->-=at 0.125 with label {$b$}, -*-=0.25,->-=at 0.375 with label {$a$}, -*-=0.5,-<-=at 0.625 with label {$b$}, -*-=0.75,-<-=at 0.875 with label {$a$}] (2,-4) circle (2.5); \end{tikzpicture} \end{document} • I'm not convinced by the ->- and -*- notation. It's pretty hard to read. Now there are dashes everywhere. – Henri Menke Jan 7 '19 at 22:33 • @HenriMenke Well, everyone can rename these things as they wish. I do not think this is a fair criticism. And if you really feel you need to make this comment, make it here, where this notation has been proposed. This answer got 69 upvotes without anyone complaining about the notation. 
– user121799 Jan 7 '19 at 23:05 This can be an option \documentclass[tikz, border = 10pt]{standalone} \usepackage{pgfplots} \usetikzlibrary{decorations.markings} \def\nframes{30} \def\frame{0} \begin{document} \foreach \frame in {0,0,0,0,1,...,\nframes} { \pgfmathsetmacro{\time}{\frame / \nframes} \pgfmathsetmacro{\c}{20 + (3 - 20) / (1 + exp(-10 * (\time - 0.6)))} \pgfmathsetmacro{\a}{20 + (1 - 20) / (1 + exp(-8 * (\time - 0.3)))} \pgfmathsetmacro{\xrange}{3 + (180 - 3) / (1 + exp(-14 * (\time - 0.6)))} \pgfmathsetmacro{\yrange}{3 + (180 - 3) / (1 + exp(-10 * (\time - 0.3)))} \pgfmathsetmacro{\theta}{90 + (45 - 90) * \time} \pgfmathsetmacro{\phi}{0 + (25 - 0) * \time} \pgfplotsset{ border one/.style={ thick, red, samples y = 0, variable = \t, domain = -\xrange:\xrange, postaction = {decorate}, decoration = {markings, mark = at position 0.48 with {\arrow{stealth}}, mark = at position 0.52 with {\arrow{stealth}}} }, border two/.style={ thick, green, samples y = 0, variable = \t, domain = -\yrange:\yrange, postaction = {decorate}, decoration = {markings, mark = at position 0.5 with {\arrow{stealth}}} } } \begin{tikzpicture} \useasboundingbox (0, 0) rectangle (6, 6); \begin{axis} [ hide axis, view = {\theta}{\phi}, domain = -\xrange:\xrange, y domain = -\yrange:\yrange, samples = 20, samples y = 20, unit vector ratio = 1 1 1, declare function = { u(\x,\y) = (\c + \a * cos(\y)) * cos(\x); v(\x,\y) = (\c + \a * cos(\y)) * sin(\x); w(\x,\y) = \a * sin(\y); } ] \addplot3 [surf, color = blue, opacity = 0.01, faceted color = white, z buffer = sort, fill opacity = 0.5] ({u(\x, \y)}, {v(\x, \y)}, {w(\x, \y)}); \addplot3 [border one] ({u(\t, \yrange)}, {v(\t, \yrange)}, {w(\t, \yrange)}); \addplot3 [border one] ({u(\t, -\yrange)}, {v(\t, -\yrange)}, {w(\t, -\yrange)}); \addplot3 [border two] ({u(\xrange, \t)}, {v(\xrange, \t)}, {w(\xrange, \t)}); \addplot3 [border two] ({u(-\xrange, \t)}, {v(-\xrange, \t)}, {w(-\xrange, \t)}); \end{axis} \end{tikzpicture} } \end{document} DISCLAIMER Just a
fun animation, I'm aware it is not exactly what the OP asked for. You can place nodes on a path which should simplify the node positioning a lot. You might also want to factor out the arrow business into a style. \documentclass{article} \usepackage{tikz} \usetikzlibrary{decorations.markings} \begin{document} \begin{tikzpicture}[ arrow inside/.style = { postaction={decorate}, decoration={markings, mark=at position 0.5 with {\arrow{stealth}}} } ] \draw[arrow inside] (0,0) -- node [below] {$a$} (1,0); \draw[arrow inside] (0,1) -- node [above] {$a$} (1,1); \draw[arrow inside] (0,0) -- node [left] {$b$} (0,1); \draw[arrow inside] (1,0) -- node [left] {$b$} (1,1); \end{tikzpicture} \end{document} • Maybe move right b outside?! :-) – Sigur Jan 7 '19 at 23:05 A PSTricks solution just for fun purposes. \documentclass[pstricks,12pt]{standalone} \begin{document} \pspicture[arrowinset=0,arrowscale=2](-4,-4)(4,4) \foreach \i/\l/\a in {0/a/<,1/b/<,2/a/>,3/b/>}{% \pcline[ArrowInside=-\a](I\i)(I\the\numexpr\i+1)\nbput{$\l$}} \endpspicture \pspicture[arrowinset=0,arrowscale=2](-4,-4)(4,4) \pnode(0,0){O} \foreach \i/\l in {0/a,1/b,2/a,3/b}{% \qdisk([nodesep=3.5,angle=-45]{I\i}O){2pt} \psarc{->}(0,0){3.5}{(I\i)}{(I\the\numexpr\i+1)} \uput{8pt}[{(I\i)}](>I\i){$\l$}} \endpspicture \end{document} Note: ArrowInside is not available for \psarc. I don't know why. Another alternative approach using Metapost. Compile this one with lualatex. \documentclass[border=5mm]{standalone} \usepackage{luatex85} \usepackage{luamplib} \begin{document} \mplibtextextlabel{enable} \begin{mplibcode} beginfig(1); path S, C; S = unitsquare shifted -(1/2, 1/2) scaled 100; C = fullcircle scaled 84 rotated 16 shifted 140 right; interim ahangle := 30; % slimmer arrows...
drawarrow subpath(0, 5/8) of S; drawarrow subpath(5/8, 13/8) of S; drawarrow subpath(4, 4-5/8) of S; drawarrow subpath(4-5/8, 4-13/8) of S; draw subpath(13/8, 4-13/8) of S; label.top("$a$", point 1/2 of S); label.top("$a$", point 5/2 of S); label.lft("$b$", point 3/2 of S); label.lft("$b$", point 7/2 of S); for t=0 upto 3: drawarrow subpath 2(t, t+1) of C; drawdot point 2t+3/4 of C withpen pencircle scaled 3; label(if odd t: "$b$" else: "$a$" fi, 9/8[center C, point 2t+7/4 of C]); endfor endfig; \end{mplibcode} \end{document} A variation of Henri Menke's nice answer, using the quotes library: \documentclass{article} \usepackage{tikz} \usetikzlibrary{decorations.markings, quotes} \begin{document} \begin{tikzpicture}[auto=right, arrow inside/.style = { decoration={markings, mark=at position 0.5 with {\arrow{stealth}}}, postaction={decorate}, } ] \draw[arrow inside] (0,0) to ["$a$"] (1,0); \draw[arrow inside] (0,1) to ["$a$" '] (1,1); \draw[arrow inside] (0,0) to ["$b$" '] (0,1); \draw[arrow inside] (1,0) to ["$b$"] (1,1); \end{tikzpicture} \end{document}
https://socratic.org/questions/how-do-you-combine-like-terms-1
# How do you combine like terms? Dec 14, 2014 You combine like terms by variable(s) and exponent(s). The variables and exponents must match in order to combine. So take ${x}^{2} + x$. The variables (x and x) match, but the exponents (2 and 1) do not, so you cannot combine them. Take ${x}^{2} + {y}^{2}$. The exponents (2 and 2) match, but the variables (x and y) do not, so these cannot be combined either. Now take $3 {x}^{2} + 5 {x}^{2}$. These CAN be combined, since the variable x and the power of 2 match. So we have $3 {x}^{2} + 5 {x}^{2} = \left(3 + 5\right) {x}^{2} = 8 {x}^{2}$
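The same matching rule — identical variables and identical exponents — is easy to express in code. The representation below (a coefficient plus a list of (variable, exponent) pairs) is just one illustrative choice, not a standard library feature:

```python
def combine_like_terms(terms):
    """Combine terms given as (coefficient, [(variable, exponent), ...]) pairs.

    Two terms are 'like' exactly when their variable/exponent pairs match.
    """
    combined = {}
    for coeff, variables in terms:
        key = tuple(sorted(variables))      # x^2 becomes (('x', 2),)
        combined[key] = combined.get(key, 0) + coeff
    return combined

# 3x^2 + 5x^2 + y^2: the x^2 terms merge into 8x^2, while y^2 stays separate
poly = [(3, [('x', 2)]), (5, [('x', 2)]), (1, [('y', 2)])]
print(combine_like_terms(poly))   # {(('x', 2),): 8, (('y', 2),): 1}
```

Note that $x^2$ and $y^2$ land under different keys, mirroring the example above where they could not be combined.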
https://gmatclub.com/forum/the-sum-of-all-integers-from-43-to-107-inclusive-is-divisible-by-whi-254261.html
The sum of all integers from 43 to 107, inclusive, is divisible by which of the following numbers?

A. 17
B. 18
C. 21
D. 24
E. 39

Solution (Retired Moderator, 26 Nov 2017): Total count of numbers from 43 to 107 $$= 107-43+1=65$$ Sum of the numbers from 43 to 107 $$= \frac{65}{2}(43+107) = 65 \cdot 75 = 3 \cdot 5^3 \cdot 13$$ From among the options, only $$39$$ divides it. Option E

Solution (Senior PS Moderator, 26 Nov 2017): The sum of the first n positive integers is $$\frac{n(n+1)}{2}$$ The sum of all integers from 43 to 107 can be found as follows: the sum of all positive integers up to 107 is $$107 \cdot 54$$; similarly, the sum of all positive integers up to 42 is $$43 \cdot 21$$. Therefore, the sum of all integers from 43 to 107 is $$107 \cdot 54 - 43 \cdot 21 = 5778 - 903 = 4875$$ $$4875$$ when prime factorized is $$3 \cdot 5^3 \cdot 13$$, and the only option which divides this is 39 (Option E).
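Both solution paths can be confirmed with a few lines of Python, checking the arithmetic-series sum and the divisibility claim directly:

```python
# Sum of the arithmetic series 43 + 44 + ... + 107 and the divisibility check
total = sum(range(43, 108))                  # range end is exclusive, so 107 is included
assert total == 65 * (43 + 107) // 2         # n/2 * (first + last) = 65 * 75
print(total)                                 # 4875
print([d for d in (17, 18, 21, 24, 39) if total % d == 0])   # [39]
```

Only 39 survives the filter, matching Option E.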
https://myaptitude.in/cat/quant/find-the-minimum-integral-value-of-n-such-that-the-division
Find the minimum integral value of n such that the division 55n/124 leaves no remainder 1. 124 2. 123 3. 31 4. 62
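The page gives no worked answer, so here is a brute-force check of the reasoning: 55 and 124 share no prime factors, so 124 must divide n itself, making 124 the minimum (option 1):

```python
import math

# 55n/124 leaves no remainder exactly when 124 divides 55n.
# Since gcd(55, 124) = 1, the factor of 124 must come entirely from n.
assert math.gcd(55, 124) == 1

n = next(k for k in range(1, 125) if (55 * k) % 124 == 0)
print(n)   # 124, i.e. option 1
```

Note that none of the smaller options work: 62 and 31 both leave 55n short of a full factor of 4 and 124 respectively.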
http://mathoverflow.net/questions/127606/concavity-of-the-function-g-circ-f-f-where-g-is-concave-and-f-is-decrea
concavity of the function $(g\circ f)/f$, where $g$ is concave and $f$ is decreasing and convex Suppose we have functions $g\colon [0,1]\to \mathbb{R}$, which is concave, vanishes at the origin and fulfills the condition $$g(xy)/xy\leq g(x)/x+g(y)/y$$ for any $x,y\in[0,1]$, and $f\colon (0,\infty)\to [0,1]$, which is convex and decreasing. Define the function $h\colon [0,\infty)\to \mathbb{R}$ as $$h(x):=g(f(x))/f(x).$$ Can it fail to be concave? - @FF: Dear FF, please read the FAQ. I think this question is not suitable for MO; you might want to try posting it at math.stackexchange.com – Chandrasekhar Apr 15 '13 at 10:00
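One way to probe the question numerically is to test concavity of $h$ for sample admissible pairs $(g, f)$. The pair below is one such choice, picked purely for illustration: $g(x)=x-x^2/2$ is concave with $g(0)=0$, and the ratio condition reduces to $x+y-xy\le 2$, which always holds on $(0,1]$; $f(x)=e^{-x}$ is convex and decreasing into $(0,1)$. For this pair $h$ comes out concave, so any counterexample would have to look different:

```python
import math

# One admissible pair, chosen purely for illustration (hypotheses checked by hand above)
def g(x):
    return x - x * x / 2        # concave on [0,1], g(0) = 0

def f(x):
    return math.exp(-x)         # convex, decreasing, values in (0,1)

def h(x):
    return g(f(x)) / f(x)       # analytically h(x) = 1 - exp(-x)/2

# Concavity probe: second differences on a grid should all be nonpositive
step = 0.01
xs = [step * i for i in range(2, 499)]
second_diffs = [h(x - step) - 2 * h(x) + h(x + step) for x in xs]
assert all(d <= 1e-12 for d in second_diffs)
print("h is concave on the sampled grid for this (g, f) pair")
```

This is only a finite-grid experiment, not a proof either way, but it makes it cheap to screen candidate counterexamples.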
https://studysoup.com/tsg/21337/calculus-early-transcendentals-1-edition-chapter-9-2-problem-57e
# Series to functions: Find the function | Ch 9.2 - 57E

ISBN: 9780321570567

## Solution for problem 57E Chapter 9.2

Calculus: Early Transcendentals | 1st Edition

Problem 57E. Series to functions: Find the function represented by the following series and find the interval of convergence of the series. $$\sum_{k=0}^{\infty}\left(\frac{x^{2}-1}{3}\right)^{k}$$
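The series is geometric with ratio $r=(x^2-1)/3$, so it converges exactly when $|x^2-1|<3$, i.e. on the interval $(-2,2)$, and there its sum is $1/(1-r)=3/(4-x^2)$. A quick numerical confirmation of that closed form:

```python
# The series is geometric with ratio r = (x^2 - 1)/3, so it converges exactly
# when |x^2 - 1| < 3, i.e. on -2 < x < 2, with sum 1/(1 - r) = 3/(4 - x^2).
def partial_sum(x, terms=200):
    r = (x * x - 1) / 3
    return sum(r ** k for k in range(terms))

for x in (0.0, 1.0, 1.5, -1.9):
    closed_form = 3 / (4 - x * x)
    assert abs(partial_sum(x) - closed_form) < 1e-6, x
print("partial sums match 3/(4 - x^2) on (-2, 2)")
```

Near the endpoints the ratio approaches 1 in absolute value, so convergence slows; 200 terms is comfortably enough at the sample points used here.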
https://codereview.stackexchange.com/questions/223776/checking-if-an-object-meets-certain-criteria-to-amend-to-certain-entity-objects
# Checking if an object meets certain criteria to amend to certain Entity Objects? I am trying to check whether a class has only numeric values in it and then return a bool; however, in some instances there may be non-numeric chars in an object that should only contain numeric values. At present I am using reflection to loop through the object (which in some cases can have up to 200 properties) and then check each property value with Regex to see if it is a numeric value. The reason for the regex instead of Int.TryParse or Int64.TryParse is that every now and again a non-integer value is mixed in with the text, but I know the file is an all-integer file from the way it is set up. This, however, is not the issue, as the Regex is working as expected. The structures of the StringCsv and IntegerCsv entities are different (what I mean is that the structures of the CSVs with strings and ints are fundamentally different), but I need to save the data to an Entity that is consistent, which means I need to check which CSV is being imported to differentiate between the methods needed to save to the database. For example, there is a date column, but it differs between the Integer CSV and the String CSV. Also, I can't add the data directly into the entities StringCsv and IntegerCsv because they need an Id, which is a Guid, and the CSV file does not contain that, which is why I read into the class GovernmentCsvRecord and then convert the object to the Entity. Below are samples of CSV files; sorry, I can't post actual values. The string file The Int file I should also mention that the first property will always be a GUID, which also needs to be taken into consideration. (I should probably be skipping that property anyway in my current code, but obviously don't.) I should also mention I am using a custom library to import the CSV files.
To check that the value is numeric: static bool IsMatch(string str) { return Regex.IsMatch(str, @"^[abcd][\d0-9]{5}$|[\d0-9]", RegexOptions.IgnoreCase | RegexOptions.Multiline); } Loop through each property and use the above code to check the value: static bool IsNumericFile<T>(List<T> list) { foreach (var item in list) { List<string> values = typeof(T).GetProperties() .Select(propertyInfo => propertyInfo.GetValue(item, null)) .Where(s => s != null) .Select(s => s.ToString()) .Where(str => str.Length > 0) .ToList(); foreach (var propertyString in values) { var isNumeric = IsMatch(propertyString); //Jump out if a non-numeric value is found if (!isNumeric) return false; } } return true; } // Get the data from the CSV static List<T> ImportCsv<T>(string file, string delimeter = ",", bool hasHeader = false) { //csv reader construction from the custom CSV library omitted in the original post { csv.Configuration.Delimiter = delimeter; csv.Configuration.MissingFieldFound = null; csv.Configuration.Encoding = Encoding.GetEncoding("Shift_JIS"); return csv.GetRecords<T>().ToList(); } } //Convert the data to the required entities static List<StringCsv> ReadStringCsvFromFile(string filePath) { return ImportCsv<GovernmentCsvRecord>(filePath).Skip(3).Select(GovernmentCsvRecord.StringCsvGovernmentToEntity).ToList(); } static List<IntegerCsv> ReadIntCsvFromFile(string filePath) { return ImportCsv<GovernmentCsvRecord>(filePath).Skip(3).Select(GovernmentCsvRecord.IntCsvGovernmentToEntity).ToList(); }
The GovernmentCsvRecord class, with its conversions to each entity: public class GovernmentCsvRecord { public string Col1 { get; set; } public string Col2 { get; set; } public string Col3 { get; set; } //etc 200 properties in class //Conversion to the StringCsv entity public static StringCsv StringCsvGovernmentToEntity(GovernmentCsvRecord data) { StringCsv entity = new StringCsv(); entity.Id = Guid.NewGuid(); entity.Col1 = data.Col1; entity.Col2 = data.Col2; //etc until 200 properties are filled return entity; } //Conversion to the IntegerCsv entity public static IntegerCsv IntCsvGovernmentToEntity(GovernmentCsvRecord data) { IntegerCsv entity = new IntegerCsv(); entity.Id = Guid.NewGuid(); entity.Col1 = data.Col1; entity.Col2 = data.Col2; //etc until 200 properties are filled return entity; } } Then the usage: List<StringCsv> StringList = new List<StringCsv>(); List<IntegerCsv> IntList = new List<IntegerCsv>(); if (!IsNumericFile(StringList)) { } if (IntList.Count > 0) { //Do stuff with IntList } else { //Do stuff with StringList } At present, it takes about 8 to 10ms to complete a check of one object, so performance-wise it is not a huge issue, but I was wondering if there was perhaps a better approach to something like this? • I voted to reopen, but some things are still unclear. What's the difference between StringCsv and IntegerCsv? The name ReadIntCsvFromFile implies that it reads numeric data, but it's only called when the input contains non-numeric fields, so what does IntCsvGovernmentToEntity do that StringCsvGovernmentToEntity does not? And why read into GovernmentCsvRecord objects only to convert immediately afterwards - why not read directly into a StringCsv object, and convert that to an IntegerCsv when necessary? Jul 10 '19 at 12:49 • @PieterWitvoet, I edited the question to address those issues. Also, I don't know how to get the data into an unknown type and then check values, so I just checked whether the object was a string file or not and proceeded from there.
Jul 10 '19 at 13:03
• Instead of throwing away that list of GovernmentCsvRecord objects after converting it to a StringCsv list, you could keep it around so you don't have to read the csv file again if the IsNumericFile check fails.
• It looks like you can validate the list of GovernmentCsvRecord objects directly instead of first converting them to StringCsv objects. However, with the way you're converting them, and taking the numeric check into account, it's probably better to read each row into an array of strings. Apparently the easiest way to do that is to give GovernmentCsvRecord a single string[] property. That lets you validate fields without having to use reflection.
• That regular expression can be simplified a lot. The first part matches strings like "a12345" - anything that starts with an a, b, c or d, followed by exactly 5 digits. The second part matches anything that contains at least one digit - which covers everything that the first part covers, and more, so just the second part is sufficient. Also note that \d matches decimal digits from a variety of scripts, including 0-9, so [\d0-9] can be simplified to just \d. But you may want to use 0-9 instead, unless you also want to accept digits like '೬' and '६'.
• +1. And you are absolutely correct. As I was re-writing the question, I saw I was doing a fair bit of redundant code. As you say, validate the GovernmentCsvRecord class and then convert to the needed entities. Thanks for the advice on the Regex, I am not very good with it, so that helps enormously as well. Jul 10 '19 at 23:22
• Another optimization is to create and reuse a single (compiled) Regex instance instead of calling the static IsMatch method, so the regex engine doesn't have to parse the pattern each time. Or, for such a simple pattern, string.Any(char.IsDigit) (for \d) or string.Any(c => c >= '0' && c <= '9') (for 0-9) is even faster. Jul 10 '19 at 23:38
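The simplification claimed in the comments - that the whole pattern reduces to "contains at least one digit" - is easy to verify. A quick sketch in Python rather than C# (the pattern and semantics carry over; `re.search` mirrors the unanchored matching of `Regex.IsMatch`):

```python
import re

# The original pattern: the first alternative matches e.g. "a12345",
# the second matches any string containing a digit anywhere.
original = re.compile(r"^[abcd][\d0-9]{5}$|[\d0-9]", re.IGNORECASE)

def simplified(s):
    # "Contains at least one digit" - everything the first
    # alternative accepts necessarily contains a digit too.
    return any(c.isdigit() for c in s)

samples = ["a12345", "B00001", "abc", "x9", "", "hello", "12.5"]
assert all((original.search(s) is not None) == simplified(s) for s in samples)
```

The equivalence holds because any string accepted by the first alternative contains five digits, so it is already accepted by the second.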
https://artofproblemsolving.com/wiki/index.php?title=2019_AMC_12B_Problems/Problem_17&diff=next&oldid=102401
# 2019 AMC 12B Problems/Problem 17

## Problem

How many nonzero complex numbers $z$ have the property that $0, z,$ and $z^3,$ when represented by points in the complex plane, are the three distinct vertices of an equilateral triangle?

$\textbf{(A) }0\qquad\textbf{(B) }1\qquad\textbf{(C) }2\qquad\textbf{(D) }4\qquad\textbf{(E) }\text{infinitely many}$

## Solution

Write $z = r\,\text{cis}\,\theta$, so that $z^3 = r^3\,\text{cis}\,3\theta$. Since the triangle is equilateral, the distances from $0$ to $z$ and from $0$ to $z^3$ must be equal, so $r = r^3$; because $z$ is nonzero, $r = 1$. The angle between $z$ and $z^3$, namely $3\theta - \theta = 2\theta$, must be $\pm 60^\circ$ modulo $360^\circ$. For $\theta \in [0^\circ, 360^\circ)$ this gives $2\theta \in \{60^\circ, 300^\circ, 420^\circ, 660^\circ\}$, that is, $\theta \in \{30^\circ, 150^\circ, 210^\circ, 330^\circ\}$. Each value produces a distinct $z$ with $z \neq z^3$, and no other values of $\theta$ work, so the answer is not "infinitely many": there are exactly $4$ solutions. $\boxed{\textbf{(D) }4}$

-FlatSquare
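The count can also be sanity-checked numerically. The sketch below (an illustration, not part of the official solution) confirms that the four angles found above give equilateral triangles while a nearby angle does not:

```python
import cmath
import math

def is_equilateral(theta, tol=1e-9):
    """True if 0, z, z^3 (with z = e^{i*theta}) form an equilateral triangle."""
    z = cmath.exp(1j * theta)
    sides = (abs(z), abs(z ** 3), abs(z ** 3 - z))
    # all three side lengths equal, and z != z^3 (distinct vertices)
    return max(sides) - min(sides) < tol and abs(z ** 3 - z) > tol

solutions = [d for d in (30, 150, 210, 330) if is_equilateral(math.radians(d))]
assert len(solutions) == 4
assert not is_equilateral(math.radians(45))  # a non-solution for contrast
```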
https://socratic.org/questions/583470667c0149349bfbd6f7
# Question bd6f7

Nov 24, 2016

$2.48 \cdot 10^{22}$

#### Explanation:

Start by converting the sample of nitric acid from grams to moles by using the compound's molar mass

$M_M(\text{HNO}_3) = 63.01\ \text{g mol}^{-1}$

This tells you that 1 mole of nitric acid has a mass of 63.01 g. In your case, the sample of nitric acid will contain

$2.59\ \text{g} \cdot \frac{1\ \text{mole HNO}_3}{63.01\ \text{g}} = 0.0411\ \text{moles HNO}_3$

To convert the number of moles to molecules, use Avogadro's constant

$1\ \text{mole HNO}_3 = 6.022 \cdot 10^{23}\ \text{molecules HNO}_3$

In your case, you will have

$0.0411\ \text{moles HNO}_3 \cdot \frac{6.022 \cdot 10^{23}\ \text{molecules HNO}_3}{1\ \text{mole HNO}_3} = 2.48 \cdot 10^{22}\ \text{molecules HNO}_3$

The answer is rounded to three sig figs.
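The same two-step conversion, written as a small Python check (the constants are the ones used above):

```python
# Grams -> moles -> molecules for the 2.59 g nitric acid sample.
molar_mass_hno3 = 63.01   # g/mol
avogadro = 6.022e23       # molecules/mol

grams = 2.59
moles = grams / molar_mass_hno3    # about 0.0411 mol
molecules = moles * avogadro       # about 2.48e22 molecules
```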
https://mathematica.stackexchange.com/questions/200627/is-there-any-way-to-do-this-integral-without-expanding-the-exponential-power/200630
# Is there any way to do this integral without expanding the exponential power?

    Integrate[
     x^(1/2) Exp[-10.7*c/x] Sqrt[(1 - (125/1000)^2) (1 - (2*4.18/125)^2) - (1 - (2 x/1000))^2],
     {x, 5.03, 994.97}]

Or can I do this indefinite integral without the numerical limits? My input is unable to do it.

• Try something like NIntegrate[x^(1/2) Exp[-10.7*c/x] Sqrt[(1 - (125/1000)^2) (1 - (2*4.18/125)^2) - (1 - (2 x/1000))^2] /. c -> 2.1, {x, 5.03, 994.97}], which produces 15686.7. – user64494 Jun 19 '19 at 4:32
• my c is a variable of another expression – user105697 Jun 19 '19 at 4:55
• So what? I don't see a problem. – user64494 Jun 19 '19 at 5:00
• you can't just put a numerical value to c. It will be used as a variable in further calculation – user105697 Jun 19 '19 at 5:03

If you insist on a "symbolic expression", then the following does the job.

    Integrate[x^(1/2) Rationalize[Exp[-10.7*c/x] Sqrt[(1 - (125/1000)^2)*(1 - (2*4.18/125)^2) - (1 - (2 x/1000))^2], 15],
     {x, Rationalize[5.03], Rationalize[994.97]}]

$$i \left(-\frac{1}{3} 80 \sqrt{10 \pi } c^{3/2} \text{erf}\left(\frac{\sqrt{c}}{5 \sqrt{2}}\right)+\frac{8}{15} \sqrt{\frac{2 \pi }{5}} (c+125) c^{3/2} \text{erf}\left(10 \sqrt{\frac{10}{503}} \sqrt{c}\right)+\frac{8}{15} \sqrt{\frac{2 \pi }{5}} (c+125) c^{3/2} \text{erf}\left(10 \sqrt{\frac{10}{99497}} \sqrt{c}\right)-\frac{16}{15} \sqrt{\frac{2 \pi }{5}} c^{5/2} \text{erf}\left(\frac{\sqrt{c}}{5 \sqrt{2}}\right)+\frac{4}{375} \sqrt{503} e^{-\frac{1000 c}{503}} c^2+\frac{4}{375} \sqrt{99497} e^{-\frac{1000 c}{99497}} c^2-\frac{32 e^{-\frac{c}{50}} c^2}{3 \sqrt{5}}-\frac{640}{3} \sqrt{5} e^{-\frac{c}{50}} c+\frac{249497 \sqrt{503} e^{-\frac{1000 c}{503}} c}{187500}+\frac{150503 \sqrt{99497} e^{-\frac{1000 c}{99497}} c}{187500}+\frac{8000}{3} \sqrt{5} e^{-\frac{c}{50}}+\frac{4824709027 \sqrt{99497} e^{-\frac{1000 c}{99497}}}{375000000}-\frac{124990973 \sqrt{503} e^{-\frac{1000 c}{503}}}{375000000}\right)$$

• what does this 15 inside the integral mean – user105697 Jun 19 '19 at 6:04
• @user105697: This is the same as Rationalize[..,10^(-15)]. See the help to Rationalize for details. – user64494 Jun 19 '19 at 6:19
• ok, you are right, but I am actually stuck doing this: LogLogPlot[1.1183*3.086*10^-12 c^(1/2)*2.598*10^-3 Integrate[x^(1/2) Rationalize[Exp[-10.7*c/x] Sqrt[(1 - (125/1000)^2)*(1 - (2*4.18/125)^2) - (1 - (2 x/1000))^2], 15], {x, Rationalize[5.03], Rationalize[994.97]}], {c, 10^0, 10^6}, PlotRange -> {{10^0, 10^4}, {10^-18, 10^-8}}] and it is giving a blank graph – user105697 Jun 19 '19 at 6:29
• Sorry, my answer is doubtful in view of Integrate[x^(1/2) Rationalize[Exp[-10.7*c/x] Sqrt[(1 - (125/1000)^2)*(1 - (2*4.18/125)^2) - (1 - (2 x/1000))^2], 10^-15], {x, Rationalize[5.03], Rationalize[994.97]}] which performs $$\int_{\frac{503}{100}}^{\frac{99497}{100}} \sqrt{\frac{21272654}{21707411}-\left(1-\frac{x}{500}\right)^2} \sqrt{x} e^{-\frac{107 c}{10 x}} \, dx .$$ – user64494 Jun 19 '19 at 7:18
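The numeric value quoted in the comments (about 15686.7 at c = 2.1) can be reproduced outside Mathematica. A rough midpoint-rule check in stdlib Python (a sketch; the integration limits sit near the zeros of the radicand, so tiny negative round-off values are clamped to zero):

```python
import math

def integrand(x, c):
    radicand = (1 - 0.125 ** 2) * (1 - (2 * 4.18 / 125) ** 2) - (1 - 2 * x / 1000) ** 2
    if radicand <= 0:  # clamp round-off near the endpoints
        return 0.0
    return math.sqrt(x) * math.exp(-10.7 * c / x) * math.sqrt(radicand)

def midpoint_rule(f, a, b, n=200000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

value = midpoint_rule(lambda x: integrand(x, 2.1), 5.03, 994.97)
# value comes out close to the 15686.7 reported by NIntegrate
```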
https://studyofreligion.fas.harvard.edu/pages/thd-program-requirements
Th.D. Program Requirements

Study for the degree of Doctor of Theology extends through four stages: coursework, general examinations, prospectus, and dissertation. Candidates for this degree must fulfill the following:

Residence

Two years of doctoral study in residence are required. During those two years, students must register for and complete at least four credit courses per term. A student must have achieved the minimum grade point average of "B" in each academic year and meet all regulations governing enrollment of incompletes.

Supervision

Student progress will be monitored by the doctoral subcommittee of the Committee on the Study of Religion in cooperation with each student's advisor. Once the prospectus for the dissertation is approved, the dissertation shall be written under the supervision of one or more advisers approved by the Committee.

Language Requirements

All doctoral students must achieve at least intermediate reading competence in two modern languages of secondary scholarship relevant to their course of study (such as French, German, Japanese), in addition to whatever primary source languages are required in their field, either for textual or ethnographic study. The student and advisor shall consult to decide upon the two modern languages. All language requirements must be met before General Examinations are taken.

Seminars

First-year students are required to take the Common Doctoral Seminar (HDS 4599). Its purpose is to introduce major questions and/or problems in the study of religion and to offer an opportunity for critical reflection on the nature and boundaries of religious and theological inquiry. In addition, students must take the graduate seminars required by their field of concentration, as well as other courses and seminars determined in consultation with an academic adviser.

Second Year Review

An oral second year review (one-and-one-half hours) occurs in the third, or at the latest, in the fourth semester of study.
The purpose of this review is to consider and clarify the overall design and progress of a student's academic program and to assess the student's academic progress in general.

General Examinations

After the satisfactory completion of two years of full-time study, all language requirements, required coursework, and the second-year review, a student prepares for the General Examinations. Th.D. students are normally expected to take their General Examinations by the end of their third year, except in special cases (e.g., Hebrew Bible and Comparative Religion) where deferment has been previously formally granted. The examination process includes three written exams and one oral exam arranged according to the student's context of study and specialization. Students in Comparative Religion are required to sit for an additional written examination in Theory and Methods. Students in Hebrew Bible/Old Testament follow the exam rubric required by the Near Eastern Languages and Civilization Department.

Prospectus and Dissertation

Within twelve months of passing the General Examinations, all candidates must submit a written dissertation prospectus of not more than 3000 words (plus bibliography), formulating a project. Upon formal approval of the prospectus, the student commences the writing of the dissertation. The length is limited to 300 pages. Once the dissertation is completed and approved by the adviser, the candidate must pass an oral dissertation defense with a committee of at least three faculty members before the Th.D. degree is awarded.

Thesis Defense

If the thesis is deemed acceptable by the student's advisor and the Director of Th.D. Studies, the Standing Committee will appoint a committee for the oral defense.
If the examining committee accepts the thesis and its defense, and the examination is sustained by the Standing Committee, the original and the first copy of the thesis in bound form, together with their abstracts, and an unbound, boxed copy for University Microfilms International, should be submitted to the Registrar, and a short summary, prepared for publication, should be submitted to the Editor of the Harvard Theological Review, prior to the awarding of the degree.

At the discretion of the Doctoral Subcommittee, the calendar of requirements as noted above may be interrupted by a maximum of one year's leave of absence. The candidate must pay a $100 program fee during a year on leave.

Extensions

A student who has not met degree requirements or an established deadline may, with the endorsement of the Director of Th.D. Studies, be granted an extension, normally for one year.

Satisfactory Progress Requirements for the Th.D. Program (Revised May 2008)

Study for the Degree of Doctor of Theology extends through four stages: General Examinations, Prospectus, Dissertation, and Dissertation Defense. All students in the Th.D. program at Harvard Divinity School must be making satisfactory progress in order to be eligible for any type of financial aid. [Note: Satisfactory progress includes being on "grace", or warning, and students may keep their financial aid. Unsatisfactory progress, commonly known as "probation", would lead to ineligibility for financial aid.] All candidates for this degree must fulfill the following provisions of satisfactory progress to be considered in good standing:

Residence: Two years of doctoral study in residence with payment of full tuition are normally required. During those two years, students are required to register for and complete at least four credit courses per term. A student must have achieved the minimum grade point average of "B" in each academic year and have met the regulations governing enrollment with incompletes.
Following payment of full tuition for two years, the student remains in residence but pays reduced tuition for the next two years. A student subsequently will be charged a Guidance and Facilities Fee for the remainder of his or her studies for the degree. During these periods of residence the student will be considered to be a full-time enrolled student unless she or he is paying an Active File Fee for residence outside the Boston area.

Supervision: During the student's residency up to approval of the Thesis Prospectus, his or her progress will be monitored by the Doctoral Subcommittee of the Standing Committee on the Study of Religion. Once the Prospectus is approved, the Dissertation shall be written under the supervision of one or more advisors approved by the Standing Committee.

Language Requirements: A reading knowledge of two modern languages of scholarship and either Greek, Latin, or Hebrew is required before taking General Examinations. Courses and/or examinations in additional languages may be required by the department or field of concentration or for specific topics of the student's research and thesis. It is possible in some cases, upon petition to the Doctoral Subcommittee, for students to substitute another classical language for one of the above, if it is deemed crucial to the pursuit of their program.

Modern Languages -- All doctoral students must achieve at least intermediate reading competence in two modern languages of secondary scholarship relevant to their course of study (such as French, German, Japanese), in addition to whatever primary source languages are required in their field. The student and adviser shall consult to decide upon the two modern languages.

Classical Languages -- Candidates are expected to demonstrate their reading proficiency at an intermediate level in one of the classical languages of scholarship relevant to the student's course of study (normally Latin, Greek, Hebrew, or Arabic).
Please refer to the Student Handbook for specific language requirements and detailed information about language examinations. Second-Year Review: All students must participate in a Second-Year Review with at least two faculty members, to occur either in the third or, at the latest, in the fourth semester of study. The main purposes of the Second-Year Review are to consider and clarify the overall design and progress of a student’s academic program and to assess the student’s academic progress in general. Students participating in the Second-Year Review must submit the following, two weeks in advance, to the faculty participating in the review: 1) a two-page statement of academic purpose, and 2) two major course papers, one of which should be in the student’s major field and the other in a different field or discipline. Seminars: One graduate seminar in general theological studies (normally DIV 4599: Common Doctor of Theology Seminar), directed by one or more members of the faculty and focusing on the reading and interpretation of theological literature is required before the General Examinations. In addition, candidates must take the one to three graduate seminars required for their field of concentration, as well as other courses and seminars determined in consultation with an academic advisor. General Examinations: By the end of the third year a student will ordinarily have passed general examinations or the departmental equivalent. Candidates are required to take General Examinations as follows: 1) Two three-hour written examinations in their field of concentration. 2) One three-hour written examination in a special topic chosen and defined by the candidate in consultation with faculty members. This special topic may lie within the area of concentration or may engage other fields of disciplines of academic studies. 3) An oral examination before a committee consisting normally of at least three members of the faculty. 
Except in special cases (e.g., Old Testament and Comparative Religion) where deferment has been previously formally granted, a prospective fourth-year student must have passed the General Examinations by the end of the third year. (Students in the field of Comparative Religion are required to take a fourth exam in Theory and Methods. Hebrew Bible students must take their General Exams in the Near Eastern Languages and Civilization Department of the Faculty of Arts and Sciences. See field guidelines for additional information.)

Prospectus: Each candidate's prospectus must be submitted and approved by the Standing Committee within one year after passing the General Examinations. Twenty-five copies of the 3000-word prospectus, stating clearly the argument of the thesis and showing why it gives promise of making a contribution to learning, must be presented to the Standing Committee for its approval. The context of the problem and the student's acquaintance with the literature in the field should be indicated. The Standing Committee may vote to accept the Prospectus, it may vote to accept the Prospectus provisional upon certain additions to be submitted to the full Committee, or it may ask the student to resubmit a drastically revised Prospectus. The Committee, unless it has reason to reject the Prospectus, will then appoint a Prospectus Subcommittee, which will meet with the student and report back to the Standing Committee.

Dissertation: The degree shall be awarded on the basis of the successful completion of a doctoral dissertation and its defense before a committee of the faculty. The dissertation shall be written under the supervision of an advisor approved by the Doctoral Subcommittee. Within twelve months of approval of the prospectus, and each subsequent year during which a student is allowed to register, she or he must have produced at least one acceptable chapter of the dissertation, or the equivalent.
Normally, a thesis should be submitted within two years of approval of the prospectus, but it must be submitted within seven years from the date of admission to the program. After seven years in the program, students may petition the Th.D. director for a one-year extension of time to complete the dissertation. No more than three such petitions for each student will be accepted. The length of the thesis is limited to a maximum of 300 pages, exclusive of bibliography. Three or more unbound copies of the thesis, typed in its final form, must be submitted in spring binders, by August 15 for receipt of the degree in November, by December 1 for its receipt in March, and by April 1 for its receipt at Commencement. A dissertation abstract, with a maximum length of 350 words, must be submitted with each copy.

Thesis Defense: If the thesis is deemed acceptable by the Advisor and the Director of Th.D. Studies, the Standing Committee will appoint a Committee for the oral defense. If the examining committee accepts the thesis and its defense, and the examination is sustained by the Standing Committee, the original and first copy of the thesis in bound form should be submitted to the Registrar of Harvard Divinity School, prior to the awarding of the degree. (Please contact the HDS Registrar regarding requirements for electronic submission of a copy to the UMI Dissertation publishing site.)

Leaves of Absence: At the discretion of the Doctoral Subcommittee, the calendar of requirements as noted above may be interrupted by a maximum of one year's leave of absence. The candidate must pay a $100 program fee during a year on leave.

Extensions: A student who has not met degree requirements or an established deadline may, with the endorsement of the Director of Th.D. Studies, be granted an extension, normally for one year.
https://gamedev.stackexchange.com/questions/102426/2d-camera-sizing-and-movement
2D Camera sizing and movement

Apologies if this is a basic question, or has been asked 1,000 times in the past... I'm new to Unity, and I'm having problems understanding some of the basics of 2D (orthographic) cameras; while I've read a lot, nothing has given me that 'a-ha!' moment yet.

What I'm trying to create is the following (camera position depicted by the box with the pink dashed line) - where the yellow filled boxes are made up of a whole bunch of sprites. Over time, the camera needs to move left to right, and then bottom to top...

My issue seems to be understanding how I can get the camera to display all of the sprites - in other words, resize the camera view to fit the background sprite's height, and know when to stop the camera in order to start moving it up.

What I'd like to understand is how and why to solve this problem - it's the only way I'll learn!

Thanks

Kieron

• Is the idea that you are trying to make the camera automatically move to follow the map (the background images) while the player moves through the level? – Alan Wolfe Jun 15 '15 at 4:18
• This is part of an introduction, so there is no player at the moment. It's purely moving the camera according to the background sprites position and bounds...which may change as the game develops. – Kieron Jun 15 '15 at 4:22
• I agree that making the camera do this automatically could be nice, but another route if you are interested in alternatives is to script the camera by hand to tell it what points to be at, and at what zoom levels, at specific times. I don't know what your final game is going to be like, but there might be times when you want to "break the rules" that an automated system might have. – Alan Wolfe Jun 15 '15 at 4:25
• Sure, in this instance it's part of an animation, designed as an introduction. I'm assuming that this is going to be entirely script based, simply because the position and bounds of the background sprite could change.
– Kieron Jun 15 '15 at 4:29
• oh ok, so your question is about how if you have an image that is (X,Y) in dimensions, and the center is at point P, how do you set up the camera such that it shows the sprite? Basically you are just trying to figure out how to make your orthographic camera show a specific rectangle of world coordinates? – Alan Wolfe Jun 15 '15 at 4:32

I wrote a camera script that auto-zooms based on a transform's position and scale. Attach the script below to your Camera. Then, create a GameObject, and set its position and size. Then link this GameObject to the "Area" public property of this camera script. The "Area" GameObject can be adjusted at runtime, and should provide you with a mechanism to at least get you started on what you are attempting to do. I put the logic inside of Update() to simplify; however, you will most likely want to move this into a function that you would call manually when a recalc needs to be done.

    using UnityEngine;
    using System.Collections;

    public class CameraZoom : MonoBehaviour
    {
        public Transform Area;

        public void Update()
        {
            // Convert the area's scale to a pixel-like measure
            // (a factor of 100 pixels per world unit is assumed here)
            float height = Area.localScale.y * 100;
            float width = Area.localScale.x * 100;

            float w = Screen.width / width;
            float h = Screen.height / height;
            float ratio = w / h;

            // orthographicSize is half the visible height in world units
            float size = (height / 2) / 100f;
            if (w < h)
                size /= ratio;
            Camera.main.orthographicSize = size;

            // Re-center the camera on the area
            Vector2 position = Area.transform.position;
            Vector3 camPosition = position;
            Vector3 point = Camera.main.WorldToViewportPoint(camPosition);
            Vector3 delta = camPosition - Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, point.z));
            Vector3 destination = transform.position + delta;
            transform.position = destination;
        }
    }

• That looks great, thank-you! I'll try it out, hopefully over the weekend. Cheers! – Kieron Jun 19 '15 at 9:42
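The arithmetic behind such a script is small enough to sketch outside Unity. For an orthographic camera, orthographicSize is half the visible height in world units, so the size needed to frame a world-space rectangle is the larger of the rectangle's half-height and its half-width divided by the screen aspect ratio. A hypothetical helper, in Python for illustration:

```python
def ortho_size_to_fit(rect_width, rect_height, aspect):
    """Smallest orthographic half-height that shows the whole rectangle.

    aspect is screen width / screen height; the visible width is
    2 * size * aspect, which yields the (rect_width / 2) / aspect constraint.
    """
    return max(rect_height / 2, (rect_width / 2) / aspect)

# A 20x10 world-unit background on a 16:9 screen: the width constraint
# (5.625) wins over the height constraint (5.0).
size = ortho_size_to_fit(20, 10, 16 / 9)   # -> 5.625
```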
https://txcorp.com/images/docs/usim/latest/in_depth/USimHEDP_Tutorial_Lesson_1.html
# Using USim to Solve the Two-Fluid Plasma Model

In the USimBase tutorials the basic concepts of USim were described. In this first HEDP tutorial we describe how to solve the fully electromagnetic two-fluid plasma equations, using a semi-implicit operator to step over the plasma frequency and cyclotron frequency, and electric field diffusion to minimize errors in the electric field divergence relation.

## Semi-Implicit Solution for the Lorentz Forces and Current Sources

To demonstrate how to use USim to solve a problem using the two-fluid plasma system, we will use the well-known GEM (Geospace Environmental Modeling) reconnection challenge setup to solve fast reconnection of a current layer. The GEM challenge was originally described in Birn, J., et al. "Geospace Environmental Modeling (GEM) magnetic reconnection challenge." Journal of Geophysical Research: Space Physics (1978–2012) 106.A3 (2001): 3715-3719. This tutorial is based on the GEM Challenge template.

To model two fluids, the first thing we need is data structures for the electrons, the ions and the electromagnetic field. In this case the electrons are represented by a 5-moment compressible fluid:

    <DataStruct electrons>
      kind = nodalArray
      onGrid = domain
      numComponents = 5
    </DataStruct>

The ions are also represented by a 5-moment compressible fluid:

    <DataStruct ions>
      kind = nodalArray
      onGrid = domain
      numComponents = 5
    </DataStruct>

The electromagnetic field variable contains the full field vector [Ex,Ey,Ez,Bx,By,Bz,Ep,Bp], where Ep and Bp are the error correction potentials. As such, the electromagnetic field data structure is defined as:

    <DataStruct em>
      kind = nodalArray
      onGrid = domain
      numComponents = 8
    </DataStruct>

Separate initialization using an initialize (1d, 2d, 3d) updater is performed for each variable, electrons, ions and em:

    <Updater initElectrons>
      kind = initialize2d
      onGrid = domain
      out = [electrons]
      <Function func>
        kind = exprFunc
        . . .
```
    preExprs = [ \
      "rhoe = n0*me*(1.0/(cosh(y/lambda)*cosh(y/lambda))+0.2)", \
      "mze = -(me/qe)*b0*(1.0/lambda)*1.0/(cosh(y/lambda)*cosh(y/lambda))", \
      "ee = (1.0/12.0)*(1./(gamma-1.0))*b0*b0*(rhoe/me)+0.5*(mze*mze/rhoe)"]
    exprs = ["rhoe", "0.0", "0.0", "mze", "ee"]
  </Function>
</Updater>
```

```
<Updater initIons>
  kind = initialize2d
  onGrid = domain
  out = [ions]
  <Function func>
    kind = exprFunc
    ...
    preExprs = [ \
      "rhoi = n0*mi*(1.0/(cosh(y/lambda)*cosh(y/lambda))+0.2)", \
      "ei = (5.0/12.0)*(1.0/(gamma-1.0))*b0*b0*(rhoi/mi)"]
    exprs = ["rhoi", "0.0", "0.0", "0.0", "ei"]
  </Function>
</Updater>
```

```
<Updater initEm>
  kind = initialize2d
  onGrid = domain
  out = [em]
  <Function func>
    kind = exprFunc
    ...
    preExprs = [ \
      "bx = b0*tanh(y/lambda)-psi*(pi/Ly)*cos(2.0*pi*x/Lx)*sin(pi*y/Ly)", \
      "by = psi*(2.0*pi/Lx)*sin(2.0*pi*x/Lx)*cos(pi*y/Ly)"]
    exprs = ["0.0", "0.0", "0.0", "bx", "by", "0.0", "0.0", "0.0"]
  </Function>
</Updater>
```

In the above initialization some variables have been omitted for conciseness. In addition, each of the variables electrons, ions and em must have its own classicMusclUpdater (1d, 2d, 3d):

```
<Updater hyperElectrons>
  kind = classicMuscl2d
  onGrid = domain
  timeIntegrationScheme = none
  numericalFlux = FLUID_NUMERICAL_FLUX
  preservePositivity = true
  limiter = [LIMITER, none]
  limiterType = characteristic
  variableForm = conservative
  in = [electrons]
  out = [electronsNew]
  cfl = CFL
  equations = [euler]
  <Equation euler>
    kind = eulerEqn
    gasGamma = GAS_GAMMA
    basementDensity = BASEMENT_DENSITY
    basementPressure = BASEMENT_PRESSURE
  </Equation>
</Updater>
```

```
<Updater hyperIons>
  kind = classicMuscl2d
  onGrid = domain
  timeIntegrationScheme = none
  numericalFlux = FLUID_NUMERICAL_FLUX
  preservePositivity = true
  limiter = [LIMITER]
  limiterType = characteristic
  variableForm = conservative
  in = [ions]
  out = [ionsNew]
  cfl = CFL
  equations = [euler]
  <Equation euler>
    kind = eulerEqn
    basementDensity = BASEMENT_DENSITY
    basementPressure = BASEMENT_PRESSURE
    gasGamma = GAS_GAMMA
  </Equation>
</Updater>
```

```
<Updater hyperEm>
  kind = classicMuscl2d
  onGrid = domain
  timeIntegrationScheme = none
  numericalFlux = fWaveFlux
  limiterType = characteristic
  limiter = [LIMITER]
  variableForm = conservative
  in = [em]
  out = [emNew]
  cfl = CFL
  equations = [maxwell]
  <Equation maxwell>
    kind = maxwellEqn
    c0 = SPEED_OF_LIGHT
    gamma = BP
    chi = 0.0
  </Equation>
</Updater>
```

The coupling between the fields and fluids is provided by Lorentz forces (for the fluid equations) and current sources (for the electromagnetic field). One option is to simply add these to the right-hand side of the flux calculation and then integrate; however, this leads to an explicit scheme in which the plasma frequency and cyclotron frequency must be resolved. Instead we use a semi-implicit operator as defined in Harish Kumar and Siddhartha Mishra. "Entropy Stable Numerical Schemes for Two-Fluid Plasma Equations." Journal of Scientific Computing (2012): 1-25.

The implicit operator twoFluidSrc is a source in USim: it is applied cell by cell and does not require a global implicit solve. The twoFluidSrc is evaluated using the explicit solution for the electron, ion and em variables, and the resulting matrix is multiplied by those same variables to produce the implicit source evaluation with explicit hyperbolic terms. The semi-implicit operator is given below:

```
<Updater twoFluidLorentz>
  kind = equation2d
  onGrid = domain
  in = [electronsNew, ionsNew, emNew]
  out = [electronsNew, ionsNew, emNew]
  <Equation twofluidLorentz>
    kind = twoFluidSrc
    type = 5Moment
    electronCharge = ELECTRON_CHARGE
    electronMass = ELECTRON_MASS
    ionCharge = ION_CHARGE
    ionMass = ION_MASS
    epsilon0 = EPSILON0
  </Equation>
</Updater>
```

The semi-implicit operator is applied in a special location in the multiUpdater (1d, 2d, 3d). The list of updaters in the multiUpdater defines its explicit steps. A second list of updaters is in the operator list. Updaters in the operator list are applied to integrationVariablesOut after a complete update.
In the block below, operators = [twoFluidLorentz] applies the operator after the explicit right-hand side has been calculated. We want to solve the hyperbolic part of the multi-fluid equations explicitly and the source term implicitly. For a first-order scheme the discretization becomes

$$Q^{n+1}=Q^{n}+\Delta t\,\nabla\cdot f^{n}+\Delta t\,\psi^{n+1}$$

and therefore, writing the source as $\psi^{n+1}=A\,Q^{n+1}$,

$$Q^{n+1}=\left(I-\Delta t\,A\right)^{-1}\left(Q^{n}+\Delta t\,\nabla\cdot f^{n}\right)$$

The inverted term in parentheses is applied by the twoFluidLorentz operator defined in the multiUpdater below. As stated previously, the updaters in the operate list are applied to the right-hand side, which is computed in the updaters list:

```
<Updater rkUpdaterMain>
  kind = multiUpdater2d
  onGrid = domain
  in = [em, ions, electrons]
  out = [emNew, ionsNew, electronsNew]
  <TimeIntegrator rkIntegrator>
    kind = rungeKutta2d
    ongrid = domain
    scheme = RK_SCHEME
  </TimeIntegrator>
  loop = [boundaries, hyper, implicit]
  updaters = [periodicEm, periodicIons, periodicElectrons, electronBcTop, electronBcBottom, \
    ionBcTop, ionBcBottom, emBcTop, emBcBottom]
  operation = "integrate"
  updaters = [hyperIons, hyperElectrons, hyperEm, addSource]
  operation = "operate"
  updaters = [twoFluidLorentz]
</Updater>
```

In the multiUpdater (1d, 2d, 3d) above we include 3 in variables and 3 out variables, one pair for each of the integration variables.

The steps described above are sufficient to create an algorithm that steps over the plasma frequency and cyclotron frequency, but they do not show us how to minimize errors in the divergence equations.

## Electric and Magnetic Field Divergence Cleaning

The standard approach to divergence preservation in USim is to use hyperbolic divergence cleaning. Hyperbolic divergence cleaning is described for the MHD equations in Andreas Dedner, et al. "Hyperbolic divergence cleaning for the MHD equations." Journal of Computational Physics 175.2 (2002): 645-673, and for Maxwell's equations in Munz, C.-D., et al. "Divergence correction techniques for Maxwell solvers based on a hyperbolic model." Journal of Computational Physics 161.2 (2000): 484-511.

Unfortunately, for the two-fluid system hyperbolic cleaning of the electric field is often inadequate, or results in larger errors than we started with. Instead we use electric field diffusion. In this section we describe how the divergence cleaning is performed in the GEM challenge problem.

First, the magnetic field can be cleaned using the hyperbolic approach. In the hyperbolic updater of the electromagnetic field, the Hyperbolic Equations block defines the speed of light c0 as well as the wave speeds of the correction potentials: gamma is the magnetic field correction potential factor and chi is the electric field correction factor. The speed of the magnetic field correction potential is gamma*c0, and that of the electric field correction potential is chi*c0. In the case below we give gamma a finite value (typically 1.0) and set chi to 0 so that we can use a different correction method for the electric field:

```
<Updater hyperEm>
  kind = classicMuscl2d
  onGrid = domain
  timeIntegrationScheme = none
  numericalFlux = fWaveFlux
  limiterType = characteristic
  limiter = [LIMITER]
  variableForm = conservative
  in = [em]
  out = [emNew]
  cfl = CFL
  equations = [maxwell]
  <Equation maxwell>
    kind = maxwellEqn
    c0 = SPEED_OF_LIGHT
    gamma = BP
    chi = 0.0
  </Equation>
</Updater>
```

Electric field diffusion is quite simple. We add a diffusion term to the electric field equation:

$$\frac{\partial E}{\partial t}-c^{2}\nabla\times B=-\frac{J}{\epsilon_{0}}+\gamma\nabla\left(\nabla\cdot E-\frac{\rho_{c}}{\epsilon_{0}}\right)$$

where $\gamma$ is the electric field diffusion coefficient. You can see that the diffusion term never kicks in unless there is a numerical error in Gauss's law. How do we go about implementing this in USim?
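To see why the semi-implicit source step lets the scheme step over the plasma and cyclotron frequencies, here is a minimal sketch (plain NumPy, not USim code) of the first-order update $Q^{n+1}=(I-\Delta t A)^{-1}(Q^n+\Delta t\,\nabla\cdot f^n)$ applied to a toy stiff linear source. The 2x2 rotation matrix and the frequency are made-up stand-ins for the Lorentz/current coupling:

```python
import numpy as np

# Toy stiff system dQ/dt = rhs(Q) + A Q, where A carries a fast oscillation
# frequency (a stand-in for the plasma/cyclotron source coupling).
omega = 1.0e6                      # fast source frequency (assumed value)
A = np.array([[0.0, omega],
              [-omega, 0.0]])      # rotation matrix: oscillatory coupling
I = np.eye(2)

def explicit_flux(Q):
    # Slow hyperbolic right-hand side; zero here to isolate the source term.
    return np.zeros_like(Q)

def semi_implicit_step(Q, dt):
    # Q^{n+1} = (I - dt*A)^{-1} (Q^n + dt*rhs^n).
    # This is a small cell-local solve; no global implicit matrix is needed.
    return np.linalg.solve(I - dt * A, Q + dt * explicit_flux(Q))

Q = np.array([1.0, 0.0])
dt = 1.0e-3                        # dt*omega = 1000: far over the source period
for _ in range(10):
    Q = semi_implicit_step(Q, dt)

# The solution stays bounded even though dt >> 1/omega; the explicit update
# Q + dt*(A @ Q) would grow by a factor of roughly dt*omega every step.
print(np.linalg.norm(Q))
```

The same structure appears in the twoFluidSrc operator: the matrix is assembled per cell from the explicitly updated variables, and the inverse is applied to those same variables.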
First of all we define a set of data structures just for electric field divergence cleaning. The first data structure stores the characteristic cell length for the diffusion coefficient:

```
<DataStruct cellDx>
  kind = nodalArray
  onGrid = domain
  numComponents = 1
  writeOut = 0
</DataStruct>
```

We define a data structure for storing the error computed from $\nabla\cdot E-\frac{\rho_{c}}{\epsilon_{0}}$:

```
<DataStruct residual>
  kind = nodalArray
  onGrid = domain
  numComponents = 1
  writeOut = 0
</DataStruct>
```

We define a data structure for storing the divergence of E:

```
<DataStruct divE>
  kind = nodalArray
  onGrid = domain
  numComponents = 1
  writeOut = 0
</DataStruct>
```

We then store the actual diffusion term in the last data structure:

```
<DataStruct gradDiv>
  kind = nodalArray
  onGrid = domain
  numComponents = 3
  writeOut = 0
</DataStruct>
```

Along with the data structures, we have a series of updaters that fill them. The first updater, characteristicCellLength (1d, 2d, 3d), simply computes a characteristic length for each cell. This updater only needs to be called at startup, since the cell length does not change:

```
<Updater computeCellDx>
  kind = characteristicCellLength2d
  onGrid = domain
  out = [cellDx]
  coefficient = 1.0
</Updater>
```

The next updater computes $\nabla\cdot E$ from the electric field. It takes as input the array em. The vectorDivergence operator assumes the 3-vector of interest occupies the first 3 components of em, which happens to be correct in this case, as those components correspond to Ex, Ey, Ez. In addition, a parameter coeffs is provided which multiplies the resulting divergence by the factor 1.0. This is a simple way to reverse the sign of the divergence or multiply by some other factor:

```
<Updater computeDivE>
  kind = vectorDivergence2d
  onGrid = domain
  in = [em]
  out = [divE]
  coeffs = [1.0]
</Updater>
```

The next updater uses a Hyperbolic Equations block to compute the residual $\nabla\cdot E-\rho_{c}/\epsilon_{0}$.
The source computeChargeError expects as input $\nabla\cdot E$, and then expects the remaining variables to be fluid mass densities; any number of species can be added. Along with the species mass densities, we provide lists giving the mass and charge of each species. We also specify the permittivity epsilon0 so that the residual can be calculated:

```
<Updater computeResidual>
  kind = equation2d
  onGrid = domain
  in = [divE, electrons, ions]
  out = [residual]
  <Equation>
    kind = computeChargeError
    speciesCharge = [ELECTRON_CHARGE, ION_CHARGE]
    speciesMass = [ELECTRON_MASS, ION_MASS]
    epsilon0 = EPSILON0
  </Equation>
</Updater>
```

The final diffusion term, including the gamma factor, is computed using a scalar gradient calculator. The scalar gradient takes two inputs: it takes the gradient of the first input (residual in this case) and multiplies that gradient by the second input, cellDx. The result is then multiplied by coefficient, which is constant over all space. This particular form of diffusion is stable under explicit time stepping. If super time stepping or subcycling is used, the diffusion coefficient can be increased to do a better job of error cleaning.

```
<Updater gradient>
  onGrid = domain
  in = [residual, cellDx]
  coefficient = 0.5
</Updater>
```

Once the diffusion term is computed, it needs to be added to the right-hand side of the update equation. The term is added after the hyperbolic explicit terms are computed. We use a combiner2d to add the diffusion term to the equation. In this example the updater takes two input data structures, emNew and gradDiv, and writes the output into emNew. Every input variable requires an indVars_inputName block, which provides a way to access each component of the input variable. These names can then be used in the output expression exprs.
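The cleaning step itself is easy to see in a few lines. The sketch below (plain NumPy, not USim) applies the correction $E \leftarrow E + \Delta t\,\gamma\,\nabla(\nabla\cdot E - \rho_c/\epsilon_0)$ on a 1D periodic grid; the grid size, coefficient and seeded error are all made-up values. The Gauss-law residual decays while a divergence-free field would be untouched:

```python
import numpy as np

# 1D periodic sketch of electric-field-diffusion divergence cleaning:
#   E <- E + dt * gamma * d/dx (dE/dx - rho/eps0)
nx, dx = 64, 1.0
eps0 = 1.0
gamma = 0.5 * dx * dx        # scaled by the cell size so the explicit step is stable
rho = np.zeros(nx)           # no net charge: any div(E) is pure numerical error

def ddx(f):
    # Second-order centered difference on a periodic grid.
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

x = np.arange(nx) * dx
E = 0.1 * np.sin(2.0 * np.pi * x / (nx * dx))   # seeded divergence error

def residual(E):
    # The Gauss-law error: div(E) - rho/eps0.
    return ddx(E) - rho / eps0

r0 = np.max(np.abs(residual(E)))
for _ in range(200):
    E = E + 1.0 * gamma * ddx(residual(E))      # dt = 1.0 in these units
r1 = np.max(np.abs(residual(E)))
print(r1 < r0)   # the Gauss-law error has been diffused away
```

This mirrors the updater chain above: computeDivE plays the role of `ddx(E)`, computeResidual subtracts the charge density, and the scalar gradient plus combiner apply the correction to E.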
In the multiUpdater (1d, 2d, 3d), the addSource block is called after the hyperbolic terms so that its result is not overwritten by updaters that are called earlier:

```
<Updater addSource>
  kind = combiner2d
  onGrid = domain
  out = [emNew]
  indVars_emNew = ["ex","ey","ez","bx","by","bz","phiE","phiB"]
  c = SPEED_OF_LIGHT
  exprs = ["ex+c*dx","ey+c*dy","ez+c*dz","bx","by","bz","phiE","phiB"]
</Updater>
```

Finally, boundary conditions must be provided for the residual. In this problem the residual on the boundary should be 0, so we use a functionBc (1d, 2d, 3d) to explicitly set the residual on the boundary to zero. The variable out specifies the data that the boundary condition will be applied to, while entity tells the boundary condition which boundary it should be applied to. In this case entity = ghost means that the boundary condition is applied to all boundary cells:

```
<Updater residualBc>
  kind = functionBc2d
  onGrid = domain
  <Function func>
    kind = exprFunc
    exprs = ["0.0"]
  </Function>
  out = [residual]
  entity = ghost
</Updater>
```

This simulation has periodic boundaries in the x direction, so we also apply a periodic boundary condition to the residual. This boundary condition is called after residualBc, so it overwrites the boundary conditions in the x direction set by residualBc:

```
<Updater periodicResidual>
  kind = periodicCartBc2d
  onGrid = domain
  in = [residual]
  out = [residual]
</Updater>
```

Now that we have added a number of new updaters, we need to modify the multiUpdater to include these changes. The new updaters added to the updater list are computeDivE, computeResidual, residualBc, periodicResidual and gradient, in the UpdateStep's compute and clean. These updaters are evaluated after the boundary conditions for electrons, ions and em are applied, but before the hyperbolic updaters hyperIons, hyperElectrons and hyperEm are called. Once again, for parallel simulations it is very important to get the synchronization correct. We have added one synchronization.
The new synchronization occurs directly after periodicResidual. periodicResidual is the last updater applied before a gradient is computed on residual, so we need to synchronize residual at this point:

```
<Updater rkUpdaterMain>
  kind = multiUpdater2d
  onGrid = domain
  in = [em, ions, electrons]
  out = [emNew, ionsNew, electronsNew]
  <TimeIntegrator rkIntegrator>
    kind = rungeKutta2d
    ongrid = domain
    scheme = RK_SCHEME
  </TimeIntegrator>
  loop = [boundaries, compute, clean, hyper, implicit]
  updaters = [periodicEm, periodicIons, periodicElectrons, electronBcTop, electronBcBottom, \
    ionBcTop, ionBcBottom, emBcTop, emBcBottom]
  updaters = [computeDivE, computeResidual, residualBc, periodicResidual]
  operation = "integrate"
  updaters = [hyperIons, hyperElectrons, hyperEm, addSource]
  operation = "operate"
  updaters = [twoFluidLorentz]
</Updater>
```

Finally, recall that the characteristic cell lengths only need to be calculated once. As a result we calculate them during the initialization step:

```
<UpdateStep initStep>
  updaters = [initElectrons, initIons, initEm, computeCellDx]
  syncVars = [electrons, ions, em]
```

## Computing the Reconnected Magnetic Flux

In the GEM Challenge simulation we have added a passive diagnostic to compute a line integral across the domain. In order to compute the line integral we add two data structures. The first data structure is called a bin. The bin is a uniform grid that overlays the existing grid. The extents of the bin match the extents of the grid, but a bin is rectangular regardless of the shape of the grid. The bin has two important parameters: the first is onGrid, which specifies which grid the bin will use to define itself; the second is scale, which tells roughly how many bins there are per grid cell in domain. The bin is used for fast lookup, so that USim can quickly tell which cell a particular point in space is in.
The bin is given as:

```
<DataStruct cellBin>
  kind = bin
  onGrid = domain
  scale = 2.0
</DataStruct>
```

In addition we need to fill the bin with data. In this particular case we want to fill each bin with a list of indexes corresponding to the cells that fall inside each cell of the bin. The binCells (1d, 2d, 3d) updater does exactly that. This updater only needs to be called at startup, since the grid does not change with time:

```
<Updater fillBin>
  kind = binCells2d
  onGrid = domain
  out = [cellBin]
</Updater>
```

The second variable is a dynVector. A dynVector is a data structure which is a vector and has the same value on all domains in parallel simulations. dynVectors store a time series of data: the value of the dynVector is dumped at every time step, so it can be used to store integrated quantities. The dynVector has numComponents to specify how long the vector is; in this case it stores only 1 value. writeOut = 1 is set so the dynVector is written:

```
<DataStruct integratedFlux>
  kind = dynVector
  numComponents = 1
  writeOut = 1
</DataStruct>
```

These two data structures are then used to compute the line integral with the lineIntegral (1d, 2d, 3d) updater. The lineIntegral updater behaves like a combiner (1d, 2d, 3d), except that it takes in nodalArrays and fills a dynVector:

```
<Updater computeLineIntegral>
  kind = lineIntegral2d
  onGrid = domain
  startPosition = [XMIN, 0.0]
  endPosition = [XMAX, 0.0]
  numberOfSamples = 1000
  layout = [cellBin]
  in = [em]
  indVars_em = ["ex","ey","ez","bx","by","bz","phiE","phiB"]
  exprs = ["0.5*abs(by)"]
  out = [integratedFlux]
</Updater>
```

This updater should not be called within an rkUpdater; otherwise it will store multiple values per time step. Instead we call this updater in its own UpdateStep:

```
<UpdateStep diagnosticStep>
  updaters = [computeLineIntegral]
```
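Conceptually, the diagnostic samples the integrand at evenly spaced points between startPosition and endPosition and accumulates the result. A minimal NumPy sketch of that sampling is below; the By(x) profile is made up for illustration and is not taken from the GEM template:

```python
import numpy as np

def line_integral(field_fn, start, end, n_samples=1000):
    # Sample the integrand at evenly spaced points along the segment
    # and accumulate with the trapezoidal rule.
    start, end = np.asarray(start, float), np.asarray(end, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    points = start + ts[:, None] * (end - start)
    values = np.array([field_fn(p) for p in points])
    step = np.linalg.norm(end - start) / (n_samples - 1)
    return (0.5 * values[0] + values[1:-1].sum() + 0.5 * values[-1]) * step

# Stand-in for the reconnected-flux integrand 0.5*abs(By) along y = 0,
# with an assumed By(x) profile.
def integrand(p):
    x, _ = p
    return 0.5 * abs(np.sin(2.0 * np.pi * x))

# Integral of 0.5*|sin(2*pi*x)| over [0, 1] is 1/pi ~ 0.3183.
result = line_integral(integrand, [0.0, 0.0], [1.0, 0.0])
print(result)
```

The bin data structure exists so that each of these sample points can be mapped quickly to the grid cell containing it, which is the expensive part on an unstructured grid.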
https://itensor.org/support/3135/exponential-of-matrix-in-julia-itensor
# Exponential of matrix in Julia ITensor

+1 vote

edited

Hi, I am using the Julia version of ITensor and I was trying to compute the expected value of the operator Rz(i) = exp[I pi Sz(i)], where 'i' is a given site and 'I' is the imaginary unit. I did not manage to find a way to build the exponential of an operator using autoMPO. Am I missing something?

Nevertheless, going through the forum and some other documentation, I found a workaround to this. However, it is not working. Below, I show a minimal example code with my attempt. In this code, if I set 'i=1' or 'i=3', I get (Up,Z0,Dn| Rz(i) |Up,Z0,Dn) = -1, as expected. However, if I set 'i=2', I get 0, when I should get 1. Is this a bug?

Thanks a lot,
Gonçalo Catarina

PS: I just verified that removing 'conserve_sz=true' in this code makes it work.

```julia
using ITensors

Nsites = 3
sites = siteinds("S=1", Nsites; conserve_sz=true)

# build the state |Up,Z0,Dn>
statei = [isodd(i) ? "Up" : "Dn" for i in 1:Nsites]
statei[2] = "Z0"
ψi = randomMPS(sites, statei)

i = 2
exponenti = 1im * pi * op("Sz", sites, i)
Rzi = exp(exponenti)
print(inner(ψi, apply(Rzi, ψi)))
```

+1 vote

Thanks for the report, indeed that is a bug. The technical explanation of what is going on is that in the QN case, right now the code only exponentiates the QN blocks that exist, but it fails to treat blocks on the diagonal that don't exist as zero blocks, which should be exponentiated to the identity. Should be a quick fix, thanks for pointing out this issue!

commented by (300 points)
Great, thanks a lot! Please let me know when it is fixed.

commented by (11.1k points)
I fixed it here: https://github.com/ITensor/ITensors.jl/pull/682
It will be included in ITensors.jl v0.2.0, which hopefully we will release today. You can try it out now by doing add ITensors#main in Pkg mode at the Julia REPL (see https://itensor.github.io/ITensors.jl/dev/AdvancedUsageGuide.html#Installing-and-updating-ITensors.jl-1).
commented by (300 points)
I confirm it is now working properly, not only for this minimal example code but also for other codes that I have. Again, thanks a lot!

commented by (11.1k points)
ITensors.jl v0.2.0 is now registered, so you can do:

```julia
julia> using Pkg

julia> Pkg.free("ITensors")

julia> Pkg.update("ITensors")
```

to use the official registered version.
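The bug described in the accepted answer is easy to reproduce outside ITensor. The sketch below (plain NumPy, with a made-up two-sector block structure; the sector sizes and block values are not from the thread) contrasts the correct exponential of a block-diagonal operator, where an absent (zero) diagonal block exponentiates to the identity, with the buggy behavior of leaving the absent block at zero:

```python
import numpy as np

# Block-diagonal operator with blocks labeled by quantum-number sector.
# Sector 0 stores a 1x1 block; sector 1's block is absent, i.e. zero.
dims = [1, 2]                            # assumed sector dimensions
stored = {0: np.array([[1j * np.pi]])}   # only sector 0 has a stored block

def exp_blocks(stored, dims, missing_to_identity):
    n = sum(dims)
    out = np.zeros((n, n), dtype=complex)
    offset = 0
    for sector, d in enumerate(dims):
        sl = slice(offset, offset + d)
        if sector in stored:
            # blocks in this toy example are diagonal, so the matrix
            # exponential reduces to exp on the diagonal entries
            out[sl, sl] = np.diag(np.exp(np.diag(stored[sector])))
        elif missing_to_identity:
            out[sl, sl] = np.eye(d)      # exp(zero block) = I: correct
        # otherwise leave the block as zero: the buggy behavior
        offset += d
    return out

correct = exp_blocks(stored, dims, missing_to_identity=True)
buggy = exp_blocks(stored, dims, missing_to_identity=False)

v = np.array([0.0, 1.0, 0.0])   # a state supported on the missing sector
print(v @ correct @ v)          # acts as the identity on that sector
print(v @ buggy @ v)            # zero: the symptom reported above
```

This matches the reported symptom: a state whose weight lies entirely in a missing block picks up an expectation value of 0 instead of 1.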
https://www.math.ovgu.de/Forschung/Ver%C3%B6ffentlichungen/Preprints_+Technical+Reports+%28alte+Version%29/Preprints/2000/00_01.html
## 00-01

#### An approximation algorithm for problem $P,S1 | s_i = 1 | \sum C_i$

Preprint series: 00-01, Preprints

The paper is published: Information Processing Letters, Vol. 79, 2001, 291 - 296.

MSC: 90B35 Scheduling theory, See also {68M20}

Abstract: In this note we consider the problem of scheduling a set of jobs on m identical parallel machines. For each job, a setup has to be done by a single server. The objective is to minimize the sum of the completion times. For this strongly NP-hard problem, we give an approximation algorithm with an absolute error bounded by the product of the number of short jobs (with processing times less than m - 1) and m - 2.

Keywords: scheduling, parallel machines, single server, unit setup times, total completion time, approximation algorithm

The author(s) agree that this abstract may be stored as full text and distributed as such by abstracting services.
https://aptitude.gateoverflow.in/3026/series
Look at this series: VI, 10, V, 11, __, 12, III, ... What number should fill the blank?

We can divide this series into two alternating series: the Roman numerals VI, V, __, III, which decrease by one, and the numbers 10, 11, 12, which increase by one. From here we can see the missing number clearly, which is $IV$.

by (11k points)
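The two interleaved progressions can be checked mechanically. A small sketch (the Roman-numeral table here is our own, not part of the question):

```python
# The series interleaves Roman numerals counting down (VI, V, IV, III)
# with integers counting up (10, 11, 12).
roman = {6: "VI", 5: "V", 4: "IV", 3: "III"}

series = []
for k in range(4):
    series.append(roman[6 - k])   # descending Roman-numeral subsequence
    if k < 3:
        series.append(10 + k)     # ascending integer subsequence

print(series)   # ['VI', 10, 'V', 11, 'IV', 12, 'III']
```

The fifth element, filling the blank, is 'IV'.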
https://mourafiq.com/2013/03/01/estimating-pi.html
Random

# Estimating Pi with the Monte Carlo method.

## Using the Monte Carlo method to estimate the value of pi.

Monte Carlo is a statistical method based on series of random numbers. Monte Carlo methods are used in a wide range of physical systems, finance and other research fields. In this post, we are going to use the Monte Carlo method to estimate the value of pi.

First, consider a circle inscribed in a square (as in the figure). If we assume that the radius of the circle is $R$, then the area of the circle $= \pi R^2$ and the area of the square $= (2R)^2 = 4R^2$.

If we throw a dart blindly inside of the square, what is the probability (P) that the dart will actually land inside the circle?

P = Area of the circle / Area of the square = Pi / 4

So the chances of hitting the circle are Pi / 4. In other words, $pi = 4 * P$.

Let's try to do this in python:

```python
from __future__ import division
import numpy

NBR_POINTS = 1000000
RADIUS = 1000  # missing from the original listing; any positive integer works

print (len(filter(
    lambda x: numpy.sqrt(numpy.random.randint(0, RADIUS)**2 +
                         numpy.random.randint(0, RADIUS)**2) < RADIUS,
    xrange(NBR_POINTS))) / NBR_POINTS) * 4
```

If you have a problem with the filter, read this!

To go a little bit further with this, we can imagine a map-reduce system to make the estimation faster.
```python
from __future__ import division
import collections
import itertools
import multiprocessing
import numpy

class MapReduce(object):
    """
    The map reduce object, should be initialized with:
      map_fn
      reduce_fn
      nbr_workers
    """
    def __init__(self, map_fn, reduce_fn, num_workers=None):
        """
        Initialize the mapreduce object.
        map_fn : function to map inputs to intermediate data; takes as
            input one arg and returns a tuple (key, value)
        reduce_fn : function to reduce intermediate data to the final
            result; takes as args keys as produced from the map, and the
            values associated with them
        """
        self.map_fn = map_fn
        self.reduce_fn = reduce_fn
        self.pool = multiprocessing.Pool(num_workers)

    def partition(self, mapped_values):
        """
        Returns the mapped_values organised by their keys
        (key, associated values).
        """
        organised_data = collections.defaultdict(list)
        for key, value in mapped_values:
            organised_data[key].append(value)
        return organised_data.items()

    def __call__(self, inputs=None, chunk_size=1):
        """
        Process the data through the map and reduce functions.
        inputs : iterable
        chunk_size : amount of data to hand to each worker
        """
        mapped_data = self.pool.map(self.map_fn, inputs, chunksize=chunk_size)
        partitioned_data = self.partition(itertools.chain(*mapped_data))
        reduced_data = self.pool.map(self.reduce_fn, partitioned_data)
        return reduced_data


NBR_POINTS = 1000000
NBR_WORKERS = 4
NBR_PER_WORKER = int(NBR_POINTS / NBR_WORKERS)


def probability_calculation(item):
    print multiprocessing.current_process().name, 'calculating', item
    output = []
    # ...

print pi
```
https://arxiver.wordpress.com/2017/03/09/statistical-analysis-of-supernova-remnants-in-the-large-magellanic-cloud-heap/
# Statistical Analysis of Supernova Remnants in the Large Magellanic Cloud [HEAP]

We construct the most complete sample of supernova remnants (SNRs) in any galaxy – the Large Magellanic Cloud (LMC) SNR sample. We study their various properties such as spectral index ($\alpha$), size and surface brightness. We suggest an association between the spatial distribution and environment density of LMC SNRs and their tendency to be located around supergiant shells. We find evidence that the 16 known type Ia LMC SNRs are expanding in a lower density environment compared to the core-collapse (CC) type. The mean diameter of our entire population (74) is 41 pc, which is comparable to nearby galaxies. We did not find any correlation between the type of SN explosion, ovality or age. The $N(<D)$ relationship exponent of $a={0.96}$ implies that randomised diameters readily mimic such an exponent. The rate of SNe occurring in the LMC is estimated to be $\sim$1 per 200 yr. The mean $\alpha$ of the entire LMC SNR population is $\alpha=-0.52$, which is typical of most SNRs. However, our estimates show a clear flattening of the synchrotron $\alpha$ as the remnants age. As predicted, our CC SNR sample comprises significantly brighter radio emitters than the type Ia remnants. We also estimate the $\Sigma - D$ relation for the LMC to have a slope of $\sim3.8$, which is comparable with other nearby galaxies. We also find the residency time of electrons in the galaxy ($4.0-14.3$ Myr), implying that SNRs should be the dominant mechanism for the production and acceleration of CRs.

L. Bozzetto, M. Filipovic, B. Vukotic, et al.

Thu, 9 Mar 17

Comments: Accepted for publication in ApJS
http://mathoverflow.net/questions/164380/does-small-forcing-preserve-ch
Does small forcing preserve CH? Suppose CH holds and $\mathbb{P}$ is a poset of size $\omega_1$, such that forcing with $\mathbb{P}$ preserves $\omega_1$. Does forcing with $\mathbb{P}$ preserve CH? If $\mathbb{P}$ is proper then the answer is yes, see this question. Is this true for improper $\omega_1$-preserving posets of size $\omega_1$? - Meanwhile, if $\omega_1$ is not preserved, then the answer is no, not necessarily, since when you collapse $\omega_1$ to become countable, the size of $2^\omega$ becomes the ground model $2^{\omega_1}$, which can be very large. –  Joel David Hamkins Apr 25 '14 at 19:22 Yes. Let $X$ be a name for a subset of $\omega$. It can be described in the following way: for every $i<\omega$ there are a maximal antichain $\{p^i_\alpha:\alpha<\omega_1\}$ and values $\{\varepsilon^i_\alpha:\alpha<\omega_1\}$ with $\varepsilon^i_\alpha\in\{0,1\}$, such that $p^i_\alpha$ forces $i\in X$ iff $\varepsilon^i_\alpha=1$. In $V[G]$, the function $i\mapsto h(i)$ is an $\omega\to\omega_1$ map, where $h(i)$ is the unique index with $p^i_{h(i)}\in G$. As $\mathbb{P}$ preserves $\omega_1$, the range of $h$ is bounded, i.e., some $p$ forces that $h(i)<\gamma$ for all $i$, for some $\gamma<\omega_1$. Now the structure $\langle p,\langle p^i_\alpha,\varepsilon^i_\alpha:\alpha<\gamma\rangle\rangle$ fully describes $X$. Since CH holds in the ground model, there are only $\omega_1$ such structures, so $2^\omega\le\omega_1$ in $V[G]$.
https://cs.stackexchange.com/questions/22905/prove-that-context-free-languages-arent-closed-under-dropmiddle
# Prove that context free languages aren't closed under DropMiddle The question is simple: $\qquad \operatorname{DropMiddle}(L)=\{xy\in\Sigma^* \mid |x|=|y| \land \exists a\in\Sigma\colon xay\in L\}$. Prove that CFLs aren't closed under $\operatorname{DropMiddle}$. I should probably be looking for a counterexample, but I'm coming up short. I know that the language $ww$ ($w$ is a word in some CFL) isn't a CFL, but I can't figure out if I'm on the right track at all. • Welcome to Computer Science! Note that you can use LaTeX here to typeset mathematics in a more readable way. See here for a short introduction. – FrankW Mar 21 '14 at 21:32 • Are you sure that was the problem statement and not for DCFL? Anyway, this is not much more than a problem dump. What have you tried? Does $ww$ work? Why (not)? Which other non-context-free languages have you tried to use? – Raphael Mar 21 '14 at 22:20 First off, you are right about looking for a counterexample. However, $ww$ is a dead end, since no matter what you add to the middle, you'll still have a language that is not context-free. Hint 1: If the middle character is in some way special in $L$, you can essentially modify a PDA for $L$ to nondeterministically guess the middle and the removed letter, and it will accept $\operatorname{DropMiddle}(L)$. So you should try to look for a language where the dropping makes the middle special. Hint 2: If the specialness of the middle is the only feature of $\operatorname{DropMiddle}(L)$, a PDA could just use its stack to determine the middle. So you need to force it to use the stack in a different way. Solution: $L = \{ aw_1aw_2aw_3a\ldots aw_na \mid w_i\in \{(,)\},~ w_1\ldots w_n \text{ is correctly parenthesized} \}$. The $w_i$ part forces any PDA for $L$ or $\operatorname{DropMiddle}(L)$ to use its stack to count open parens. Thus it is impossible to check whether the single missing $a$ after dropping is indeed missing from the exact middle of the word.
Formally proving that $\operatorname{DropMiddle}(L)$ is not context-free should be doable with Ogden's Lemma. Choose a word $(^k)^k(^k)^k$ (plus the $a$s in between) and mark $k$ characters each left and right of the middle. Dropping the middle symbol is usually an action that is not observed. But we can make it special by using a special symbol. Let $L = \{ a^n b^m \# a^n b^\ell \mid n,m,\ell \ge 0 \}$. Then $L$ is context-free. Consider $K = \operatorname{DropMiddle}(L) \cap a^*b^*a^*b^*$. The intersection with the regular language forces us to look at strings where $\#$ was indeed the middle symbol, i.e., strings where $m=\ell$. Thus $K = \{ a^n b^m a^n b^m \mid n,m \ge 0 \}$, which is not context-free.
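The construction in this second answer can be checked by brute force on bounded word lengths (a sketch; `drop_middle`, `is_anbm_anbm` and the size bounds are my own, not from the answer):

```python
import re

def drop_middle(language):
    """DropMiddle(L): all xy with |x| = |y| such that x a y is in L for some symbol a."""
    out = set()
    for w in language:
        if len(w) % 2 == 1:           # only odd-length words have a single middle symbol
            mid = len(w) // 2
            out.add(w[:mid] + w[mid + 1:])
    return out

# L = { a^n b^m # a^n b^l : n, m, l >= 0 }, truncated to n, m, l < 4
L = {"a" * n + "b" * m + "#" + "a" * n + "b" * l
     for n in range(4) for m in range(4) for l in range(4)}

# Intersecting with the regular language a*b*a*b* keeps exactly the words
# where '#' was the dropped middle symbol, which forces m = l.
K = {w for w in drop_middle(L) if re.fullmatch(r"a*b*a*b*", w)}

def is_anbm_anbm(w):
    """Check the form a^n b^m a^n b^m, i.e. uu with u in a*b*."""
    h = len(w) // 2
    return len(w) % 2 == 0 and w[:h] == w[h:] and bool(re.fullmatch(r"a*b*", w[:h]))

print(len(K), all(is_anbm_anbm(w) for w in K))  # → 16 True
```

Every word that survives the intersection indeed has the form $a^n b^m a^n b^m$, matching the claim that $K$ is (a truncation of) the non-context-free language above.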
https://engineering.stackexchange.com/questions/38828/is-using-a-4x4-bolted-to-the-face-of-2x12-enough
# Is using a 4x4 bolted to the face of 2x12 enough? I am working on a project that has three 2x12’s supporting a load of 12,000 lbs., similar (but not identical) to how joists support a house. The 2x12’s are supporting about 6,000 lbs. on each end. The three 2x12’s are: a) Each cut to 90in. b) The load takes up 24in. from each end of the 2x12’s, leaving 42in. of free space in between. c) The three 2x12’s are spaced parallel to each other, with one 2x12 being 24in. in front of the other. d) The 6,000lb. (on each side) load is lying on a thick wooden plank (on each end) that is perpendicular to the three 2x12’s. e) The two thick wooden planks on each end of the 2x12’s (mentioned in “d”) are securely fastened to the 1.5in. part of the 2x12’s, as well as having blocking and other bolted wooden members that leave (as stated in “b”) 42 in. of free space in between. f) The three 2x12’s are only 12in. off the ground. Everything is stable and strong without any issues except for one. I need to lift this so that it is 14in. off the ground (lifting it 2 in. in total). Removing the load, as well as disassembling, is not an option. This will have to be lifted with the three 2x12’s and everything above (including load) remaining intact. My best option is to lift (with a hydraulic jack) the frontmost 2x12 to 14in. and then lift the backmost 2x12 to 14in. I’ve done this before with no problems. The only problem I have now is that directly under the 2x12 are things that I cannot remove and that are not strong enough to jack from. The saddle of the jack is 2in. All wood is Douglas Fir #2 or better. My plan is to bolt on a 4x4 cut to 42in. (to fit in the free space mentioned in “b” and to better distribute the force of the jack). This 4x4 will be bolted (3 bolts spaced almost equally apart) to the outward face of the 11.25in. part of the frontmost and backmost 2x12’s. Questions: 1.
Do you think this is strong enough to support the load plus the force of the jack, or do you think it will just rip the 4x4 off of the 2x12 while also destroying the face of the 2x12’s? 2. Do you think adding a 4x4 on the inward faces of the 2x12’s (these will not have the jack touch them) will provide any more strength to make this process acceptable? • a sketch would be great. – kamran Nov 27 '20 at 1:06 In the absence of detailed information, and assuming that the existing fasteners and connections are strong enough, just as an illustration and with the understanding that this is not advice, I would consider using 4 Simpson Strong-Tie fasteners with an uplift rating of $$12000/4 = 3000~\text{lbs}, \qquad 3000 \times 1.5 = 4500~\text{lbs}$$
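The arithmetic in this answer is just the total load split evenly over four ties with a 1.5 safety factor applied (a sketch using the answer's numbers; the variable names are mine):

```python
# Total load from the question, shared by 4 fasteners as in the answer
total_load_lbs = 12000
n_fasteners = 4

load_per_fastener = total_load_lbs / n_fasteners   # load each tie must carry
required_rating = load_per_fastener * 1.5          # with a 1.5 safety factor

print(load_per_fastener, required_rating)  # → 3000.0 4500.0
```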
https://mathematica.stackexchange.com/questions/linked/119?sort=unanswered
38 questions linked to/from Flatten command: matrix as second argument 132 views ### List extraction Given the following list: ... 255 views ### Inserting rows of one matrix into rows of another matrix sequentially I have two matrices M1 and M2. Both have same number of columns but different number of rows. I want to make a new matrix M which should have 1st row form M1, second row form M2, third row form M1, ... 199 views ### How to convert multidimensional matrix into regular matrix? I have this multidimensional matrix: {{{{a1, a2}, {a3, a4}}, {{b1, b2}, {b3, b4}}}, {{{c1, c2}, {c3, c4}}, {{d1, d2}, {d3, d4}}}} I would like to convert it into ... 422 views ### How can I thread lists of different sizes? What I need is the equivalent of Maple's zip(+, A, B, 0). Sure I can get it with: ... 69 views ### How to assemble matrices? I'm a beginner in Mathematica and have a very basic question. Suppose I have made the following matrices (4x4). $$A,B,C,D,$$ and I want to make the larger matrix \begin{bmatrix} A & B \\ C &... 134 views ### Flatten a four dimensional array into a matrix What is the simplest way to transform an array of the form ... 136 views ### Creating a nested table I have a row of values that I partition into a table. row = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; rowPartitioned = Partition[row, 6] ... 1k views ### Using Transpose with a list as the second argument I'm having difficulty understanding the Transpose function. I know what the transpose of a matrix is, not a problem, and I see that applying the ... 401 views ### Plotting how sunrise times change over the year I created a somewhat lengthy Python code to show how the time of sunrise varies over a year, resulting in the following figure (daylight savings were disregarded): The dataset can be found here. The ... 
362 views ### Flatten a list at some level I wonder if it is possible to flatten a list at the last level without flattening the first level. I think that isn't a good explanation, so here is my code ... 324 views ### Flattening a list [duplicate] I have a list with dimensions {1000, 1000, 1} and I would like it to have dimensions {1000, 1000}. But ... 113 views ### Combining the entries of two lists [closed] I have two lists {a, b, c, d, e} and {1, 2, 3, 4, 5}. I want to create a new list of tuples as follows ... 62 views ### Collecting specific Lists of data in coordinate form Note: This might be trivial for many of the users but I am new to Mathematica so I don't know!!! Question: Suppose I have three lists of data as follows: ...
http://koreascience.or.kr/article/JAKO199828635214930.page
# An Experimental Study of the Effects of Epidermal Growth Factor and Transforming Growth Factor-α on the Growth of Human Oral Squamous Cell Carcinoma Cells • Park, Young-Wook (Department of Oral & Maxillofacial Surgery, College of Dentistry, Kangnung National University) Stimulatory effects of epidermal growth factor (EGF) and transforming growth factor-$\alpha$ (TGF-$\alpha$) on the growth of squamous cancer cell lines established from human oral cancer tissue with moderate differentiation were studied in vitro. After culturing in serum-free media for 24 hours, growth factors (EGF only, TGF-$\alpha$ only, and EGF plus TGF-$\alpha$ together) were added to the media, and numbers of cells were analyzed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and compared with the control at 96 and 144 hours. EGF and TGF-$\alpha$ each showed statistically significant stimulatory effects on the growth of the cells. A dose-dependent relationship of the stimulatory effects was not clearly demonstrated. The effects of EGF were higher than those of TGF-$\alpha$, and combined administration showed higher effects than single use. In conclusion, EGF may play an important and major role in the differentiation and growth of human oral squamous cancer cells. TGF-$\alpha$, produced by cells activated by EGF, can also stimulate cell growth and could be an alternative ligand for the EGF receptor.
http://annals.math.princeton.edu/articles/10847
An equation of Monge-Ampère type in conformal geometry, and four-manifolds of positive Ricci curvature Abstract We formulate natural conformally invariant conditions on a $4$-manifold for the existence of a metric whose Schouten tensor satisfies a quadratic inequality. This inequality implies that the eigenvalues of the Ricci tensor are positively pinched. DOI: 10.2307/3062131 Authors Sun-Yung A. Chang Matthew J. Gursky Paul C. Yang
https://computergraphics.stackexchange.com/questions/10752/how-to-calculate-vertex-normals-on-a-mesh-with-non-planar-polygons
# How to calculate vertex normals on a mesh with non-planar polygons Suppose I have a mesh consisting of polygons that are not necessarily triangles and not necessarily planar. As answered in the previous question I asked, there's no correct answer to calculating normals for such polygons. So I'm wondering instead: no matter how imperfect, what are some of the strategies used in 3d modeling to calculate the vertex normals for such a mesh, specifically for smooth shading? • What is the surface of such a non-planar "polygon"? There are infinitely many surfaces that can pass through the vertices of such a "polygon". If you define the surface then you can compute the normal based on the partial derivatives. – lightxbulb Mar 18 at 22:30 • @lightxbulb I would like to further specify that I'm referring to simple polygons. To my knowledge in 3d modeling the non-planar polygon surface is visualized by triangulating the polygon. So I guess the surface is that of the individual triangles the polygon consists of. – Lenny White Mar 18 at 23:08 • I unfortunately can't use derivatives to calculate the normals. I need to calculate the surface normals on GPU using triangle interpolation (with OpenGL), and for that I need to use vertex normals. – Lenny White Mar 18 at 23:18 • There is more than one way to triangulate a polygon. When it is non-planar the different triangulations result in different surfaces. So first you have to specify what a non-planar polygon means (e.g. specify its surface). Only then can one argue about normals. – lightxbulb Mar 19 at 8:11 • @LennyWhite: Q&A sites are not appropriate for questions of the form "give me a list of all the ways to do X". We generally prefer questions that are more specific in nature.
– Nicol Bolas Mar 19 at 15:46 If you're interested in vertex normals specifically, there's an easy answer even for non-planar polygons that avoids the question of defining what the exact surface is: for each vertex, calculate the normal of the plane formed by the two edges entering and leaving that vertex. More formally, given vertices $$\mathbf{v}_1, \mathbf{v}_2, \ldots \mathbf{v}_n$$ with counterclockwise winding, define the normal at the $$i$$th vertex as: $$\mathbf{n}_i = (\mathbf{v}_i - \mathbf{v}_{i-1}) \times (\mathbf{v}_{i+1} - \mathbf{v}_i)$$ (where the indices wrap around). You can then proceed to accumulate normals calculated this way from all the faces that share a given vertex, as usual for smooth shading. As has been discussed, there are multiple ways to define a smooth surface corresponding to a non-planar polygon, but any reasonable way of defining such a surface must converge to the normals as defined here near the vertices, or else the surface can not both be smooth (locally flat) and meet the straight-line edges between the vertices. A caveat to this approach, though, is that it won't define a normal for collinear vertices (where the entering and exiting edges are parallel), since the cross product goes to zero there. If this is a problem, it might work to patch up such vertices by interpolating normals to them from the surrounding non-collinear vertices. • Thank you so much, this is perfect! Could I ask if there's any literature describing this method so I can read up more on why this works? – Lenny White Mar 19 at 17:44 • Or generally I would like to find more on this topic, but I don't know to google it. – Lenny White Mar 19 at 18:02 • Sorry, I don't really know if there's any literature on this; I'm just going off geometric intuition. – Nathan Reed Mar 19 at 18:10 • I see, appreciate it anyway! Can I ask "or else the surface can not both be smooth (locally flat) and meet the straight-line edges between the vertices". 
This is a bit too advanced for me. Could you explain what this means in simpler terms? – Lenny White Mar 19 at 18:24 • If the surface is smooth, then when you zoom in and look at any very small region of it, that region should appear fairly flat (this is basically what it means to be smooth). And the surface must conform to the given vertices and edges, whatever else it does in the middle. So, if you zoom in on a small region near an edge, the surface there must run parallel to the edge so that it can meet the edge. And if you zoom in on a vertex, the surface has to be parallel to both edges around that vertex. That completely determines the orientation of the surface in that small region. – Nathan Reed Mar 19 at 22:27 For smooth shading you usually store the normal vectors per vertex and not per face (flat shading). The normal vector at a specific point on a face is then calculated by interpolating the normal vectors of the vertices (corners) of the face. For a triangle, for example, you would interpolate the three normal vectors using their barycentric coordinates, as described here. For non-planar polygons, of course, you don't have a specifically defined surface. Therefore it depends on you which kind of interpolation you use. But it should interpolate with respect to the distance from the corners. And don't forget to normalize the vector at the end! • Unfortunately this was not what I was wondering about. It's given that we need to calculate the vertex normals and then these are interpolated across the triangle's surface. The question is about how to calculate the vertex normals in the first place. This involves taking the normalized average of all the faces it belongs to, if all the faces are triangles. However what I'm wondering about is what are the ways to handle the case where the faces are non-planar polygons with more than 3 vertices.
– Lenny White Mar 18 at 18:38 • Maybe you can subdivide the non planar quads/whatever into triangles and use your approach? Or doing something like mesh relaxation that will converge to nice planar results? – Felipe Gutierrez Mar 19 at 14:59 • @FelipeGutierrez Making the polygons planar wouldn't work for me unfortunately. I'm working on a simple 3d modeling application. And I need to be able to have non-planar polygons. Triangulating would work, but at first glance at least it seems like a lot of headache to implement this if I need to be able to work with a dynamic mesh. – Lenny White Mar 19 at 16:51
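The per-vertex cross-product formula from the accepted answer above can be sketched in a few lines (a sketch; the quad below and the helper name are my own, not from the thread):

```python
import numpy as np

def vertex_normals(verts):
    """Normal at each vertex of a (possibly non-planar) polygon: the cross
    product of the edge entering the vertex and the edge leaving it."""
    n = len(verts)
    result = []
    for i in range(n):
        entering = verts[i] - verts[i - 1]       # v_i - v_{i-1}
        leaving = verts[(i + 1) % n] - verts[i]  # v_{i+1} - v_i
        nrm = np.cross(entering, leaving)
        length = np.linalg.norm(nrm)
        # collinear vertex: cross product vanishes, normal undefined here
        result.append(nrm / length if length > 1e-12 else None)
    return result

# A non-planar quad (counterclockwise winding): one corner lifted out of z = 0
quad = [np.array(p, dtype=float) for p in
        [(0, 0, 0), (1, 0, 0), (1, 1, 0.5), (0, 1, 0)]]
normals = vertex_normals(quad)
for i, nv in enumerate(normals):
    print(i, nv)
```

For a full mesh you would then accumulate (sum and renormalize) these per-face vertex normals over all faces sharing each vertex, as the answer describes.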
http://math.stackexchange.com/questions/213387/finding-line-tangent-to-gx2
# Finding Line Tangent to $g(x^2)$ At $x=9$, the equation of the tangent line to the graph of $y=g(x)$ is $$2x+11y=-37$$ What is the equation of the tangent line to $y=g(x^2)$ at $x=3$ and the equation of the tangent line to $y=(g(x))^2$ at $x=9$? Please help guys, I've been trying to figure this out for an hour and no success.. I really appreciate it! Thanks! - If you've been working at it for an hour, then you must have some work to show. What have you tried so far? –  EuYu Oct 14 '12 at 2:35 By the chain rule, the derivative of $y=g(x^2)$ is $y'=2x\,g'(x^2)$, and the derivative of $y=(g(x))^2$ is $y'=2\,g(x)\,g'(x)$. Note that the given tangent line tells you both $g(9)$ (the $y$-value of the line at $x=9$) and $g'(9)$ (its slope).
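As a quick numeric check of these chain-rule formulas (a sketch, not part of the original thread): the given line $2x+11y=-37$ yields $g(9)=-5$ and $g'(9)=-2/11$, from which both tangent lines follow.

```python
from fractions import Fraction

# Data recovered from the given tangent line 2x + 11y = -37 at x = 9:
# the line passes through (9, g(9)) and its slope is g'(9).
g9 = Fraction(-37 - 2 * 9, 11)   # g(9)  = -5
gp9 = Fraction(-2, 11)           # g'(9) = -2/11

# Tangent to y = g(x^2) at x = 3: point (3, g(9)), slope 2*3*g'(9) (chain rule)
y1, m1 = g9, 2 * 3 * gp9

# Tangent to y = (g(x))^2 at x = 9: point (9, g(9)^2), slope 2*g(9)*g'(9)
y2, m2 = g9 ** 2, 2 * g9 * gp9

print(y1, m1)   # → -5 -12/11
print(y2, m2)   # → 25 20/11
```

Using exact fractions avoids floating-point noise when the slopes are elevenths.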
http://cjcp.ustc.edu.cn/html/hxwlxb_en/2018/4/cjcp1805120.htm
Chinese Journal of Chemical Physics  2018, Vol. 31 Issue (4): 485-491 #### The article information Yangyunli Sun, Shuo Zhang, Wen-hua Zhang, Zhen-yu Li Theoretical Study of Adsorption and Dehydrogenation of C2H4 on Cu(410) Chinese Journal of Chemical Physics, 2018, 31(4): 485-491 http://dx.doi.org/10.1063/1674-0068/31/cjcp1805120 ### Article history Accepted on: June 12, 2018 Theoretical Study of Adsorption and Dehydrogenation of C2H4 on Cu(410) Yangyunli Suna, Shuo Zhanga, Wen-hua Zhangb,c,d, Zhen-yu Lia,c Dated: Received on May 28, 2018; Accepted on June 12, 2018 a. Hefei National Laboratory of Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China; b. CAS Key Laboratory of Materials for Energy Conversion, Department of Materials Science and Engineering, University of Science and Technology of China, Hefei 230026, China; c. Synergetic Innovation Center of Quantum Information & Quantum Physics, University of Science and Technology of China, Hefei 230026, China; d. Department of Applied Mathematics, School of Physics and Engineering, Australian National University, Canberra, ACT 2600, Australia *Author to whom correspondence should be addressed. Wen-hua Zhang, E-mail: whhzhang@ustc.edu.cn; Zhen-yu Li, E-mail: zyli@ustc.edu.cn Abstract: Adsorption and dehydrogenation of ethylene on the Cu(410) surface are investigated with first-principles calculations and micro-kinetic analysis. Ethylene dehydrogenation is found to start from the most stable π-bonded state instead of the previously proposed di-σ-bonded state. Our vibrational frequency calculations verify the π-bonded adsorption at step sites at low coverage and low surface temperature, and di-σ-bonded ethylene on a C-C dimer (C2H4-CC) is proposed to be the species contributing to the vibrational peaks experimentally observed at high coverage at 193 K. The presence of C2H4-CC indicates that the dehydrogenation of ethylene on Cu(410) can proceed at temperatures as low as 193 K.
Key words: Dehydrogenation    Catalysis    Surface reaction Ⅰ. INTRODUCTION The adsorption and reaction of hydrocarbons on transition metal surfaces is an interesting topic due to its relevance in heterogeneous catalysis and nanotechnology. Interaction between a hydrocarbon and the metal substrate affects various processes like hydrogenation [1, 2], dissociation [3], and polymerization [4]. Recently, copper has been widely used as a catalyst to grow two-dimensional graphene sheets via chemical vapor deposition of methane, ethane, or other hydrocarbon precursors [7]. Generally, copper has a much lower activity than main-group transition metals such as nickel [5, 6], but a higher one than silver and gold. Understanding the interaction between different copper surfaces and hydrocarbons will not only enrich the knowledge of surface science but also help to understand the mechanism of graphene growth [8, 9]. The interaction of ethylene with different copper surfaces has been investigated with a long history [10, 11]. Various experimental methods, such as X-ray absorption (XAS) and emission (XES) spectroscopies, temperature programmed desorption (TPD), high resolution electron energy loss spectra (HREELS), and infrared reflection-absorption spectroscopy (IRAS), have been adopted to characterize the adsorption configuration, electronic structure, and chemical reactivity of ethylene on Cu(111), Cu(100), Cu(110), and Cu(210) surfaces [10-13]. On these surfaces, the $\pi$-bonded ethylene adsorption configuration at the top site is identified at low temperature. Recently, the adsorption of ethylene on Cu(410) has been intensively investigated [14-16]. At low temperature and low coverage, ethylene binds with the Cu(410) surface in a $\pi$-bonded configuration. At the same time, a di-$\sigma$-bonded adsorption configuration was suggested based on HREELS spectra at higher surface temperature and/or higher surface coverage [14-16].
The di-$\sigma$-bonded ethylene was also proposed to be the precursor of the dehydrogenation reaction [15, 16]. XPS and HREELS experiments suggested that the reactive site of dehydrogenation is located at step edges [15]. The di-$\sigma$-bonded ethylene based dehydrogenation picture, however, was challenged by a recent study combining experimental and computational characterizations [17]. Density functional theory (DFT) calculations suggested that the di-$\sigma$-bonded configuration is a meta-stable state on Cu(410). A different assignment to the experimentally observed TPD peaks was also proposed [17]. Since controversies still exist for ethylene adsorption and dehydrogenation on Cu(410), a systematic study on this topic is desirable. In this work, various adsorption configurations of ethylene on Cu(410) are investigated and dehydrogenation processes via different paths are explored. Microkinetic analysis suggests that the most possible dehydrogenation path of ethylene proceeds via ${\rm CH}_{\rm 2}$${\rm CH}_{\rm 2}$$\rightarrow$${\rm CH}_{\rm 2}CH\rightarrowCHCH\rightarrowCCH\rightarrowCC. On Cu(410), ethylene prefers to dehydrogenate directly from the most stable \pi-bonded configuration rather than the meta-stable long-bridge state. By comparing the calculated vibrational frequencies with experimental results [15], we further confirm the appearance of \pi-bonded ethylene at low coverage and low surface temperature. The experimentally observed HREELS peak at 193 K is assigned to di-\sigma-bonded ethylene on complete dehydrogenated product C-C. Ⅱ. COMPUTATIONAL DETAILS A. Structure models The Cu(410) surface was simulated by a five-layer p(2×1) slab (FIG. 1) with a \sim15 Å vacuum layer, where the two bottom atom layers were fixed to their bulk structure during geometry optimization. Lattice parameters of the supercell are a=7.27 Å, b=7.71 Å, and c=24.26 Å. Terrace planes of Cu(410) have a (100) surface structure and the step has a (110) facet. FIG. 
1 Surface structure of Cu(410). Miller indices of the step facet and terrace plane are marked. Copper atoms at the step edge are highlighted in a darker color.

B. DFT calculations

All DFT calculations were performed with the Vienna ab initio Simulation Package (VASP) [18]. Total energies and residual forces on each atom were converged to $10^{-4}$ eV and 0.02 eV/Å, respectively. A 5×4×1 k-point grid was adopted for energy and frequency calculations. For geometry optimization and transition state searches, a 2×2×1 k-point grid was adopted. Transition states were located using the nudged elastic band (NEB) method [19]. All atoms except those in the two bottom layers were included in the vibrational frequency calculations, which were used to assign experimental peaks, to confirm the transition states, and to calculate the zero-point energy (ZPE). Considering the importance of the van der Waals (vdW) interaction in surface adsorption and reactions [20], the optB86b version of vdW-DF [21], which gives the best agreement with the measured adsorption energy of ethylene on Cu(100) [22], was adopted in this study.

The adsorption energy ($E_{\rm ads}$) is defined as
\begin{eqnarray*} E_{{\rm ads}} = E_{{\rm mol}} + E_{{\rm surf}} - E_{{\rm tot}} \end{eqnarray*}
where $E_{\rm tot}$ is the total energy of the adsorbed system, $E_{\rm mol}$ is the energy of the isolated molecule or fragment, and $E_{\rm surf}$ is the energy of the clean Cu(410) surface.

C.
Micro-kinetic Modeling

The reaction rate of each elementary step depends on its kinetic rate constants and on the surface coverages or partial pressures of its reactants and products:
\begin{eqnarray*} r_i = k_{{\rm f},i}\prod\limits_{j \in {\rm IS}} P_j\,\theta_j - k_{{\rm r},i}\prod\limits_{j \in {\rm FS}} P_j\,\theta_j \end{eqnarray*}
where $k_{{\rm f},i}$ and $k_{{\rm r},i}$ are the forward and backward rate constants for elementary step $i$, respectively, and $P_j$ and $\theta_j$ represent the partial pressure and coverage of the relevant species $j$ in the initial state (IS) or final state (FS). Rate constants are predicted via transition state theory (TST). For example, the forward rate constant for step $i$ is
\begin{eqnarray*} k_{{\rm f},i} = A\exp\left(-\frac{E_{{\rm af},i}}{k_{\rm B}T}\right) \end{eqnarray*}
where $A = k_{\rm B}T/h$ is the pre-exponential factor determined by Boltzmann's constant $k_{\rm B}$, Planck's constant $h$, and the reaction temperature $T$, and $E_{{\rm af},i}$ is the calculated forward activation energy barrier for step $i$. In this study, the ZPE correction was applied in activation energy calculations. With the reaction rate of each elementary step known, the rate of change of the coverage of each species can be obtained by solving
\begin{eqnarray*} \frac{{\rm d}\theta_j}{{\rm d}t} = \sum\limits_{n,\, j \in {\rm FS}} s_n r_n - \sum\limits_{n,\, j \in {\rm IS}} s_n r_n \end{eqnarray*}
where $t$ is time and $s_n$ is the stoichiometric factor of species $j$ in elementary step $n$. The first and second terms on the right-hand side come from elementary steps in which species $j$ acts as a product and as a reactant, respectively.

Ⅲ. RESULTS

A.
Adsorption and the first dehydrogenation step

Different adsorption configurations of ethylene on Cu(410) are investigated, which can be classified into two types: $\pi$-bonded (the two carbon atoms bind to the same Cu atom) and di-$\sigma$-bonded (the two carbon atoms bind to two separate Cu atoms). $\pi$-bonded configurations exist both at the step ($\pi$-s) and on the terrace ($\pi$-t). For $\pi$-s adsorption, there are two configurations, with the C-C bond parallel ($\pi$-s-p) or vertical ($\pi$-s-v) to the step. Since the terrace on Cu(410) is narrow, we also consider two configurations for $\pi$-t adsorption, i.e. $\pi$-t-p and $\pi$-t-v, with ethylene parallel and vertical to the step, respectively. In di-$\sigma$-bonded adsorption, the two carbon atoms can bind to two nearest-neighboring Cu atoms ($\sigma$-sb) or to two next-nearest-neighboring Cu atoms ($\sigma$-lb). For $\sigma$-sb, we identify three configurations, involving either two terrace atoms ($\sigma$-sb-tt) or one terrace atom and one step atom ($\sigma$-sb-st, $\sigma$-sb-st$'$). As for $\sigma$-lb, only the configuration involving two step Cu atoms ($\sigma$-lb-ss) is obtained.

Optimized structures of these adsorption configurations are shown in FIG. 2 and the adsorption energies are listed in Table Ⅰ. The $\pi$-s-v configuration is the most stable one, with an adsorption energy of 0.92 eV and an optimized C-Cu distance of 2.15 Å. Di-$\sigma$-bonded configurations are generally less stable than $\pi$-bonded configurations. Our test calculations with the PBE functional indicate that it significantly underestimates the adsorption energies [22].

FIG. 2 Structures of different ethylene adsorption configurations on Cu(410). (a) $\pi$-s-v, (b) $\pi$-s-p, (c) $\pi$-t-v, (d) $\pi$-t-p, (e) $\sigma$-sb-st, (f) $\sigma$-sb-tt, (g) $\sigma$-sb-st$'$, (h) $\sigma$-lb-ss. The white, grey, and orange spheres represent hydrogen, carbon, and copper, respectively.

Table Ⅰ C-Cu bond length ($d_{\rm Cu-C}$) and adsorption energy ($E_{\rm ads}$) for ethylene adsorption on Cu(410).
Reaction energy ($E_{\rm rxn}$), activation energy ($E_{\rm a}$), and the corresponding rate constants ($k$) at 200 K for the first dehydrogenation step are also listed.

Via the first C-H bond breaking, adsorbed ethylene can be converted to vinyl (CH$_2$CH). The dehydrogenation energy barrier for the $\pi$-s-v ethylene is 1.13 eV (FIG. S1 in supplementary materials). At the transition state, the C-H distance is elongated to 1.90 Å and the shortest C-Cu bond length is 1.97 Å. We also check the possibility of indirect dehydrogenation, i.e. the $\pi$-s-v configuration first transforms to another configuration and then dehydrogenates. The corresponding minimum energy paths (MEP) are shown in FIG. 3(a). From $\pi$-s-v to $\pi$-s-p, the activation barrier is very low (0.03 eV), but this pathway then leads to a higher dehydrogenation barrier (1.35 eV) compared to the other pathways.

FIG. 3 (a) Minimum energy paths for indirect dehydrogenation processes via different metastable ethylene adsorption configurations. (b) Equilibrium product coverage for different indirect dehydrogenation pathways.

To quantitatively evaluate these reaction pathways, an effective activation barrier is assigned to each pathway by fitting its time-dependent product concentration curve to that of direct dehydrogenation (FIG. S4 in supplementary materials). The results are listed in Table Ⅰ, where the pathway via $\sigma$-lb-ss has the lowest effective barrier (1.22 eV). At 200 K, the explicit kinetic network with all indirect dehydrogenation pathways included is constructed. The equilibrium coverage of C$_2$H$_3$ in different adsorption configurations is shown in FIG. 3(b), which clearly shows that the amount of product generated via the $\sigma$-lb-ss pathway is several orders of magnitude larger than via the others.
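To get a feel for what these barrier differences mean kinetically, the sketch below (an illustration written for this discussion, not the paper's actual code) evaluates the TST expression $k = A\exp(-E_{\rm a}/k_{\rm B}T)$ at 200 K. The fixed pre-exponential factor is the typical value quoted in the micro-kinetic section of the text; the barriers 1.13, 1.22, and 1.35 eV are the ones discussed above.

```python
import math

K_B = 8.617333e-5   # Boltzmann constant in eV/K
A = 4.2e12          # typical pre-exponential factor from the text, in s^-1
T = 200.0           # reaction temperature in K

def tst_rate(e_a, temperature=T, prefactor=A):
    """Forward rate constant k = A * exp(-Ea / (kB * T))."""
    return prefactor * math.exp(-e_a / (K_B * temperature))

# Barriers (eV): direct pi-s-v path, sigma-lb-ss path, pi-s-p path
for e_a in (1.13, 1.22, 1.35):
    print(f"Ea = {e_a:.2f} eV -> k = {tst_rate(e_a):.2e} s^-1")

# At 200 K, a 0.09 eV barrier difference already changes the rate
# by roughly two orders of magnitude.
ratio = tst_rate(1.13) / tst_rate(1.22)
```

This makes concrete why the direct 1.13 eV channel dominates: at 200 K the exponential factor heavily penalizes even modest barrier increases.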
Therefore, in the following, we only consider the direct dehydrogenation from $\pi$-s-v (C$_2$H$_4$) and the indirect dehydrogenation via $\sigma$-lb-ss (C$_2$H$_4'$) in the first dehydrogenation step. These two processes actually lead to the same vinyl product structure.

B. Further dehydrogenation steps and the kinetic network

Further dehydrogenation involves more species on the surface (FIG. 4), including vinyl (CH$_2$CH), vinylidene (CCH$_2$), acetylene (CHCH), acetylidene (CCH), and dicarbon (CC). Vinyl from the first dehydrogenation step has an adsorption energy of 2.89 eV. From vinyl, both acetylene and vinylidene are possible dehydrogenation products. Acetylene adsorbs at a di-$\sigma$ long-bridge step site with an adsorption energy of 1.47 eV. Vinylidene adsorbs at the step edge with an adsorption energy of 3.49 eV. Three possible adsorption configurations are found for the next dehydrogenation product, acetylidene. Two of them adopt a di-$\sigma$-bonded configuration, with adsorption energies of 4.70 eV for the one parallel to the step and 4.74 eV for the one vertical to it. The $\pi$-bonded configuration at a step site has an adsorption energy of 4.86 eV. The final dehydrogenation product, dicarbon, can adopt either a long-bridge configuration with an adsorption energy of 7.28 eV or a short-bridge configuration with an adsorption energy of 6.87 eV.

FIG. 4 Possible elementary reaction steps for ethylene dehydrogenation on the Cu(410) surface. White, grey, and orange spheres represent hydrogen, carbon, and copper, respectively.

With these relevant species, the possible elementary steps for ethylene dehydrogenation are marked in FIG. 4 and summarized in Table Ⅱ. Reaction energies and barriers are corrected by ZPE calculations. To understand the dehydrogenation mechanism, we perform a micro-kinetic analysis based on these elementary steps.
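The time propagation behind such a micro-kinetic analysis can be sketched with a toy model. The snippet below (purely schematic: the chain A $\rightarrow$ B $\rightarrow$ C and its rate constants are arbitrary placeholders, not the paper's network or values) integrates the coverage equations ${\rm d}\theta_j/{\rm d}t = \sum s_n r_n$ with simple forward Euler steps.

```python
def integrate_chain(k1, k2, theta_a0=0.01, dt=1e-4, steps=200000):
    """Integrate coverages for the irreversible chain A -> B -> C.

    d(theta_A)/dt = -k1*theta_A
    d(theta_B)/dt =  k1*theta_A - k2*theta_B
    d(theta_C)/dt =  k2*theta_B
    """
    a, b, c = theta_a0, 0.0, 0.0
    for _ in range(steps):
        r1 = k1 * a          # rate of step A -> B
        r2 = k2 * b          # rate of step B -> C
        a += -r1 * dt
        b += (r1 - r2) * dt
        c += r2 * dt
    return a, b, c

# Placeholder rate constants chosen only so the toy network converges quickly
a, b, c = integrate_chain(k1=50.0, k2=5.0)
```

In practice a stiff ODE solver would replace the Euler loop, since real rate constants for different elementary steps can span many orders of magnitude; the structure of the update, production minus consumption for each coverage, is the same.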
Reaction rate constants are obtained by simply using a typical pre-exponential factor ($4.2\times10^{12}$ s$^{-1}$). The reaction temperature is set to an experimentally relevant value of 200 K [14, 15]. Since no hydrogen was flowed into the reaction chamber in the experiments [14, 15] and the hydrogen adsorption energy is low, we assume that hydrogen desorbs from the surface as soon as it is generated and does not re-adsorb.

Table Ⅱ Activation energies for both the forward ($E_{\rm af}$) and reverse ($E_{\rm ar}$) reactions. The shortest C-Cu distance ($d_{\rm C-Cu}$) and the relevant C-H distance ($d_{\rm C-H}$) at the transition state are also listed.

Supposing the initial coverage is 0.01 for ethylene and zero for all other species, we can solve the rate equations and obtain the coverage of every species as a function of time. As shown in FIG. 5, besides the reactant C$_2$H$_4$ and the product CC, the coverage of CH$_2$CH is the highest among all intermediate species. At the same time, from the coverage changes of the relevant species we can obtain the reaction rate of each elementary step. Then, by integrating the reaction rates, we can determine the extent to which each elementary step has proceeded from the beginning to equilibrium. The results are shown in FIG. 6(a), which indicates that C$_2$H$_4$ $\rightarrow$ CH$_2$CH $\rightarrow$ CHCH $\rightarrow$ CCH$'$ $\rightarrow$ CCH$''$ $\rightarrow$ CCH $\rightarrow$ CC is the dominant pathway for ethylene dehydrogenation. The energy profile of this pathway is shown in FIG. 6(b). This conclusion differs from the previous assumption that ethylene dehydrogenation on Cu(410) should proceed through the long-bridge state ($\sigma$-lb-ss) [16, 17]. Notice that the unimportance of the long-bridge intermediate state is consistent with the extremely low barrier from this state to the more stable $\pi$-s-v configuration. FIG.
5 Coverages of all surface species as a function of time during ethylene dehydrogenation on Cu(410).

FIG. 6 (a) Relative importance of each elementary step in the kinetic network. (b) Energy profile for the dominant kinetic pathway of ethylene dehydrogenation on Cu(410).

C. Vibrational analysis

HREELS and IRAS have been used experimentally to characterize surface species. Different vibrational peaks were obtained at different temperatures and surface coverages. For IRAS at 0.5 L@93 K [17] and HREELS at 1 L@145 K [15], the experimental peaks can be largely understood in terms of $\pi$-bonded ethylene adsorption. Two new peaks at 1330 and 2903 cm$^{-1}$ in HREELS at 51 L@193 K [15] were previously assigned to di-$\sigma$-bonded ethylene. For HREELS at 116 L@300 K [15], only a peak at 388 cm$^{-1}$ was observed, and this peak persisted up to 500 K.

Since our kinetic analysis indicates that they are the most probable surface species, the vibrational frequencies of adsorbed ethylene, CH$_2$CH, and C$-$C are calculated (Table S1 in supplementary materials) for comparison with the experimental observations. As listed in Table Ⅲ, the vibrational frequencies of $\pi$-bonded ethylene agree well with the peaks in IRAS at 0.5 L@93 K and HREELS at 1 L@145 K, which confirms that ethylene adopts the $\pi$-bonded configuration at step sites at low coverage and low temperature. As for HREELS at 51 L@193 K, di-$\sigma$-bonded ethylene adsorption seems a possible explanation of the new experimental peaks. However, it is difficult to understand why the less stable di-$\sigma$-bonded configuration should be observed at higher temperature. At the same time, the vibrational frequencies of CH$_2$CH do not match the experimental frequencies very well in this case (1212 vs. 1123 cm$^{-1}$ and 1253 vs. 1330 cm$^{-1}$). Another possibility is ethylene and C$-$C co-adsorption (FIG.
S7 in supplementary materials), where a four-membered ring is formed with an ethylene adsorption energy of 0.85 eV. The calculated vibrational frequencies of this configuration at 2934, 1396, 1120, 950, and 905 cm$^{-1}$ agree well with the HREELS observation at 193 K with 51 L exposure.

Table Ⅲ Calculated vibrational modes of five adsorption states of ethylene and dicarbon compared with HREELS and IRAS measurements: 145 K HREELS 1 L, 193 K HREELS 51 L, and 93 K IRAS 0.5 L.

The vibrational frequency of C$-$C is calculated to be 346 cm$^{-1}$, which corresponds to the only peak (388 cm$^{-1}$) in HREELS at 300 K and indicates that C$_2$H$_4$ can be dehydrogenated on Cu(410) at this temperature. At the same time, since ethylene and C$-$C co-adsorption is used to explain the HREELS spectrum at 51 L@193 K, our vibrational analysis suggests that ethylene dehydrogenation can also proceed at temperatures as low as 193 K.

Ⅳ. DISCUSSION AND CONCLUSIONS

Dehydrogenation of methane on Cu(410) is also investigated. The energy barriers of the four successive dehydrogenation steps are calculated to be 1.20, 1.27, 0.75, and 1.41 eV, respectively. The rate-limiting step is the dehydrogenation of CH, with a barrier higher than that of ethylene decomposition (1.13 eV). The overall reaction barrier for methane dehydrogenation (2.29 eV) is also much higher than that of ethylene. The methane decomposition process is highly endothermic (1.39 eV), while the energy increase upon ethylene decomposition is moderate (0.20 or 0.31 eV). Therefore, higher temperatures will be required for methane dehydrogenation on Cu(410).

In summary, on the basis of vdW-corrected DFT calculations, the adsorption and dehydrogenation of ethylene on Cu(410) have been systematically investigated. The most stable configuration of ethylene adsorption is $\pi$-s-v, which is also confirmed by vibrational analysis.
Micro-kinetic analysis suggests that ethylene prefers to dehydrogenate directly from the most stable configuration rather than proceed through the long-bridge metastable state. Ethylene adsorption on C$-$C may contribute to the peaks in HREELS at 193 K, which implies that dehydrogenation proceeds even at such a low temperature.

Supplementary materials: More detailed information on the optimized geometries, microkinetic models, and results for CH$_4$ is given.

Ⅴ. ACKNOWLEDGMENTS

This work is partially supported by the National Natural Science Foundation of China (No.21473167 and No.21173202), the National Key Research and Development Program of China (No.2016YFA0200600), the Fundamental Research Funds for the Central Universities (WK3430000005), and the China Scholarship Council (No.201706345015). Computational resources from the Supercomputing Center of the University of Science and Technology of China and the Guangzhou and Shanghai Supercomputer Centers are also acknowledged.

References

[1] A. S. Crampton, M. D. Rötzer, F. F. Schweinberger, B. Yoon, U. Landman, and U. Heiz, J. Catal. 333, 51 (2016). DOI:10.1016/j.jcat.2015.10.023
[2] G. T. K. K. Gunasooriya, E. G. Seebauer, and M. Saeys, ACS Catal. 7, 1966 (2017). DOI:10.1021/acscatal.6b02906
[3] K. Shimamura, Y. Shibuta, S. Ohmura, R. Arifin, and F. Shimojo, J. Phys.: Condens. Matter 28, 145001 (2016). DOI:10.1088/0953-8984/28/14/145001
[4] J. Andersin, N. Lopez, and K. Honkala, J. Phys. Chem. C 113, 8278 (2009).
[5] K. Kousi, N. Chourdakis, H. Matralis, D. Kontarides, C. Papadopoulou, and X. Verykios, Appl. Catal. A 518, 129 (2016). DOI:10.1016/j.apcata.2015.11.047
[6] H. S. Bengaard, J. K. Nørskov, J. Sehested, B. S. Clausen, L. P. Nielsen, A. M. Molenbroek, and J. R. Rostrup-Nielsen, J. Catal. 209, 365 (2002). DOI:10.1006/jcat.2002.3579
[7] H. Tetlow, J. P. de Boer, I. J. Ford, D. D. Vvedensky, J. Coraux, and L. Kantorovich, Phys. Rep. 542, 195 (2014). DOI:10.1016/j.physrep.2014.03.003
[8] P. Wu, W. H. Zhang, Z. Y.
Li, and J. L. Yang, Small 10, 2114 (2014). DOI:10.1002/smll.201470064
[9] Z. Y. Qiu, P. Li, Z. Y. Li, and J. L. Yang, Acc. Chem. Res. 51, 728 (2018). DOI:10.1021/acs.accounts.7b00592
[10] D. Arvanitis, L. Wenzel, and K. Baberschke, Phys. Rev. Lett. 59, 2435 (1987). DOI:10.1103/PhysRevLett.59.2435
[11] H. Öström, A. Föhlisch, M. Nyberg, M. Weinelt, C. Heske, L. G. M. Pettersson, and A. Nilsson, Surf. Sci. 559, 85 (2004). DOI:10.1016/j.susc.2004.04.041
[12] R. Linke, C. Becker, T. Pelster, M. Tanemura, and K. Wandelt, Surf. Sci. 377, 655 (1997).
[13] D. Yamazaki, M. Okada, F. C. Franco Jr, and T. Kasai, Surf. Sci. 605, 934 (2011). DOI:10.1016/j.susc.2011.02.010
[14] T. Kravchuk, L. Vattuone, L. Burkholder, W. T. Tysoe, and M. Rocca, J. Am. Chem. Soc. 130, 12552 (2008). DOI:10.1021/ja802105z
[15] T. Kravchuk, V. Venugopal, L. Vattuone, L. Burkholder, W. T. Tysoe, M. Smerieri, and M. Rocca, J. Phys. Chem. C 113, 20881 (2009). DOI:10.1021/jp904794n
[16] V. Venugopal, L. Vattuone, T. Kravchuk, M. Smerieri, L. Savio, J. Jupille, and M. Rocca, J. Phys. Chem. C 113, 20875 (2009). DOI:10.1021/jp9047924
[17] T. Makino, M. Okada, and A. Kokalj, J. Phys. Chem. C 118, 27436 (2014). DOI:10.1021/jp509228v
[18] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996). DOI:10.1103/PhysRevB.54.11169
[19] H. Jónsson, G. Mills, and K. W. Jacobsen, in Classical and Quantum Dynamics in Condensed Phase Simulations, B. J. Berne, G. Ciccotti, and D. F. Coker, Eds., Singapore/River Edge, NJ: World Scientific, 385 (1998).
[20] J. H. Choi, Z. C. Li, P. Cui, X. D. Fan, H. Zhang, C. G. Zeng, and Z. Y. Zhang, Sci. Rep. 3, 1925 (2013). DOI:10.1038/srep01925
[21] J. Klimeš, D. R. Bowler, and A. Michaelides, Phys. Rev. B 83, 195131 (2011). DOI:10.1103/PhysRevB.83.195131
[22] F. Hanke, M. S. Dyer, J. Björk, and M. Persson, J. Phys.: Condens. Matter 24, 424217 (2012). DOI:10.1088/0953-8984/24/42/424217

a. Hefei National Research Center for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China; b. CAS Key Laboratory of Materials for Energy Conversion, University of Science and Technology of China, Hefei 230026, China; c.
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China; d. Department of Applied Mathematics, Research School of Physics and Engineering, Australian National University, Canberra 2600, Australia
https://www.cheenta.com/number-of-factors-of-1800-tomato-95/
Problem: The number of different factors of 1800 equals:

(A) 12; (B) 210; (C) 36; (D) 18;

Discussion: We may factor 1800 as $2^3 \times 3^2 \times 5^2$. Then the number of factors is $(3+1) \times (2+1) \times (2+1) = 36$. Hence the answer is (C) 36.

## Chatuspathi:

• What is this topic: Number Theory
• What are some of the associated concepts: Number Theoretic Functions
• Where can one learn these topics: Cheenta I.S.I. & C.M.I. course discusses these topics in the 'Number Theory' module.
• Book Suggestion: Elementary Number Theory by David Burton
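The divisor-counting rule used above (multiply each prime exponent plus one) is easy to check mechanically. The sketch below is an illustration added here, not part of the original solution:

```python
def prime_factorization(n):
    """Return {prime: exponent} for n >= 2, by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def count_divisors(n):
    """Number of positive divisors: the product of (exponent + 1)."""
    result = 1
    for exp in prime_factorization(n).values():
        result *= exp + 1
    return result

# 1800 = 2^3 * 3^2 * 5^2, so it has (3+1)(2+1)(2+1) = 36 divisors
```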
https://gmatclub.com/forum/a-contest-will-contain-n-questions-each-of-which-is-to-be-104124.html
# A contest will contain n questions each of which is to be answered either true or false

Manager
Joined: 07 Feb 2010
Posts: 136

### Show Tags

02 Nov 2010, 06:31

Difficulty: 35% (medium)
Question Stats: 67% (01:22) correct, 33% (01:28) wrong, based on 205 sessions

A contest will contain n questions, each of which is to be answered either true or false. Anyone who answers all n questions correctly will be a winner. What is the least value of n for which the probability is less than 1/1000 that a person who randomly guesses the answer to each question will be a winner?

A. 5
B. 10
C. 50
D. 100
E. 1000

Math Expert
Joined: 02 Sep 2009
Posts: 46297

### Show Tags

02 Nov 2010, 06:48

anilnandyala wrote:
A contest will contain n questions, each of which is to be answered either true or false. Anyone who answers all n questions correctly will be a winner. What is the least value of n for which the probability is less than 1/1000 that a person who randomly guesses the answer to each question will be a winner?
a) 5 b) 10 c) 50 d) 100 e) 1000

The probability that a person will randomly guess all $$n$$ questions correctly is $$\frac{1}{2^n}$$, so we want the least value of $$n$$ for which $$\frac{1}{2^n}<\frac{1}{1000}$$ --> $$2^n>1000$$ --> as $$n$$ is an integer, $$n_{min}=10$$ ($$2^{10}=1024>1000$$).
_________________

Manager
Joined: 06 Aug 2010
Posts: 192
Location: Boston

### Show Tags

02 Nov 2010, 06:50

The probability of answering all the questions correctly is $$\frac{1}{2^n}$$. We want this probability to be less than $$\frac{1}{1000}$$, so we need the smallest value of n that gives us $$2^n > 1000$$. $$2^{10} = 1024$$, so the answer is 10. B.
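The inequality $$2^n > 1000$$ can also be confirmed by brute force. A quick sketch (added for illustration, not from the original thread):

```python
def least_n(threshold=1000):
    """Smallest n with (1/2)^n < 1/threshold, i.e. 2^n > threshold."""
    n = 1
    while 2 ** n <= threshold:
        n += 1
    return n

# 2^10 = 1024 is the first power of two above 1000, so least_n() is 10
```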
http://hitchhikersgui.de/Factor_base
# Factor base

In computational number theory, a factor base is a small set of prime numbers commonly used as a mathematical tool in algorithms involving extensive sieving for potential factors of a given integer.

## Usage in factoring algorithms

A factor base is a relatively small set of distinct prime numbers P, sometimes together with -1.[1] Say we want to factorize an integer n. We generate, in some way, a large number of integer pairs (x, y) for which $x \neq \pm y$, $x^2 \equiv y^2 \pmod{n}$, and both $x^2 \bmod n$ and $y^2 \bmod n$ can be completely factorized over the chosen factor base, that is, all their prime factors are in P.

In practice, several integers x are found such that $x^2 \bmod n$ has all of its prime factors in the pre-chosen factor base. We represent each such expression as a row of a matrix with integer entries, namely the exponents of the factors in the factor base. Linear combinations of the rows correspond to multiplication of these expressions. A linear dependence relation mod 2 among the rows leads to a desired congruence $x^2 \equiv y^2 \pmod{n}$.[2] This essentially reformulates the problem as a system of linear equations, which can be solved using numerous methods such as Gaussian elimination; in practice, advanced methods like the block Lanczos algorithm are used, which take advantage of certain properties of the system.

This congruence may generate only the trivial factorization $n = 1 \cdot n$; in this case we try to find another suitable congruence. If repeated attempts to factor fail, we can try again using a different factor base.

## Algorithms

Factor bases are used in, for example, Dixon's factorization, the quadratic sieve, and the number field sieve. The difference between these algorithms is essentially the methods used to generate (x, y) candidates.
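The "completely factorized over the chosen factor base" condition is just a smoothness test by trial division. The following sketch (an illustration, not taken from the cited texts) returns the exponent vector of $x^2 \bmod n$ over a factor base, or `None` when the value is not smooth:

```python
def smooth_exponents(value, factor_base):
    """Return {p: exponent} if value factors completely over factor_base, else None."""
    exponents = {}
    for p in factor_base:
        while value % p == 0:
            exponents[p] = exponents.get(p, 0) + 1
            value //= p
    return exponents if value == 1 else None

def relation(x, n, factor_base):
    """Exponent vector of x^2 mod n over the factor base, or None if not smooth."""
    return smooth_exponents(x * x % n, factor_base)

# Example: 505^2 mod 84923 = 256 = 2^8, which is smooth over {2, 3, 5, 7}
```

Rows of such exponent vectors, reduced mod 2, form the matrix on which the linear-algebra step operates.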
Factor bases are also used in the index calculus algorithm for computing discrete logarithms.[3]

## References

1. Koblitz, Neal (1987), A Course in Number Theory and Cryptography, Springer-Verlag, p. 133, ISBN 0-387-96576-9
2. Trappe, Wade; Washington, Lawrence C. (2006), Introduction to Cryptography with Coding Theory (2nd ed.), Prentice-Hall, p. 185, ISBN 978-0-13-186239-5
3. Stinson, Douglas R. (1995), Cryptography: Theory and Practice, CRC Press, p. 171, ISBN 0-8493-8521-0
https://qomarekefywyfojun.holidaysanantonio.com/how-to-write-a-taylor-series-3509nj.html
How to write a taylor series

I could put a 4 up there, but this is really emphasizing-- it's the fourth derivative at 0 times 1 over-- and I'll change the order. However, there is a clear pattern to the evaluations. This will cause the index variable, n, to start at zero and count up to 4. But what's cool about this right here, this polynomial that has a 0 degree term and a first degree term, is now this polynomial is equal to our function at x is equal to 0.

Introduction: Before you start this module, you must know how to find the Taylor polynomials of a given function. We're assuming that we know the third derivative at 0. And it would just be a constant term. It would just be a horizontal line at f of 0. In this example, unlike the previous example, doing this directly would be significantly longer and more difficult.

Taylor Series: In the previous section we started looking at writing down a power series representation of a function. Set it to whatever you like.

Maclaurin series (video transcript): I've drawn an arbitrary function here. The problem for most students is that it may not appear to be that easy or maybe it will appear to be too easy at first glance. No more techniques of integration, if one is satisfied with writing an integral as a power series. Consult the definition of the Taylor series to understand how each term may be computed. For instance, you will see that power series are easy to differentiate and integrate. So you should expect the Taylor series of a function to be found by the same formula as the Taylor polynomials of a function. This will be the final Taylor Series for exponentials in this section.

Step: Type the following command to add the value of each successive term to "sum".

Show Solution: There are two ways to do this problem. So renumbering the terms as we did in the previous example we get the following Taylor Series.
And then the functions should pretty much look like each other. By default, taylor uses an absolute order, which is the truncation order of the computed series. But if you add an infinite number of terms, all of the derivatives should be the same. Well, you have this constant term. For some expressions, a relative truncation order provides more accurate approximations. For simplicity's sake, use 0 for the value of "a" on your first attempt. This special version of the Taylor series is called the Maclaurin series. Try the sine function, since its successive derivatives are easy to determine.

Step: Write down several values of the nth derivative.

A Taylor Series is an expansion of a function into an infinite sum of terms. Example: the Taylor series for $e^x$ is
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots$$
To find the Taylor series for a function we will need to determine a general formula for $${f^{\left(n \right)}}\left(a \right)$$. This is one of the few functions where this is easy to do right from the start. Taylor series expansion also applies to symbolic expressions and functions. Many functions can be written as a power series. The archetypical example is provided by the geometric series
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$$
which is valid for $|x| < 1$. For example, the function $f(x) = 1/x$ has no Taylor series at $x = 0$, since $f(0)$ is undefined. In general, any function for which $f^{(n)}(a)$ is undefined for some $n$ will fail to be analytic at $a$.
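Putting the formula into code: the sketch below (a minimal example written for this page, not tied to any particular source quoted above) sums the first terms of the Maclaurin series for $e^x$ and $\sin x$, exactly as the step-by-step instructions describe:

```python
import math

def taylor_exp(x, n_terms=20):
    """Approximate e^x with the partial sum of x^k / k! for k < n_terms."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

def taylor_sin(x, n_terms=10):
    """Approximate sin(x) with the partial sum of (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# With 20 terms, taylor_exp(1.0) matches math.e to near machine precision,
# since the factorial in the denominator makes the series converge rapidly.
```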
https://iidb.org/threads/but-seriously-those-housing-prices-how-do-you-do-it.25852/
# But seriously. Those housing prices! How do you do it? #### Rhea ##### Cyborg with a Tiara Staff member I chat with my friends who live in high-price housing markets and I am stumped on how this works. Do y’all mind sharing your stories? Take California, with their million-dollar small homes. (Also NYC, Austin, Seattle, etc.) How does a family afford that? Do you never take vacations? Are you able to save for retirement? Is the house the retirement? Do you ever pay off the mortgage? I have friends with similar incomes, and they pay $4K–$5K per month for a 2-3 bedroom house, not huge and small yards (if any). That seems INSANE to me and I can’t figure out how they budget. What’s the deal for you? Context: I live rural and you can buy a 4BR house here with 2,000 sq ft and 10 acres of land for, say, $150,000 ($300K if you want it to be manager-level fancy). You can get one that needs some updating for $60,000. You can get a nice (brand new) singlewide and 2 acres to put it on for $50,000. So I am used to prices that are not higher than 1 year’s income for most professionals. Last edited: #### bigfield ##### the baby-eater Here's what I'm using for a bit of context: Median household income: $78,672 Median value of homes: $538,500 Median monthly owner costs with mortgage: $2,422. $4K a month alone is $48,000 a year, which to me is a staggering amount of money to pay on a mortgage. You could maybe manage that on a median CA household income, but I bet it's unusual. I'm guessing that your friends have a household income much higher than the median, and I'm guessing that there aren't many people who can afford these million-dollar homes. (I've heard that some people do interest-only repayment plans, but I can't see how that is anything other than financial suicide unless you are aggressively building a real estate portfolio.) I've watched a fair bit of HGTV and the variation in house prices from one TV show to another is incredible.
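bigfield's figures can be sanity-checked with the standard fixed-rate amortization formula, M = P·r(1+r)^n / ((1+r)^n − 1), where r is the monthly rate and n the number of payments. A minimal Python sketch; the 20% down payment, 4% rate, and 30-year term are illustrative assumptions, not numbers from the thread:

```python
def monthly_payment(principal, annual_rate, years):
    """Monthly payment on a fixed-rate loan:
    P * r * (1 + r)**n / ((1 + r)**n - 1), with monthly rate r and n payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:  # zero-interest edge case: just divide evenly
        return principal / n
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# Hypothetical: median CA home ($538,500) with 20% down, at 4% over 30 years.
loan = 538_500 * 0.80
print(monthly_payment(loan, 0.04, 30))  # on the order of $2,000/mo, before taxes and insurance
```

Under these assumptions the payment lands near the quoted $2,422 median monthly owner cost once property taxes and insurance are added on top, which is why a $4K-a-month mortgage implies a loan far above the median.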
It looks like you can get some absolute bargains if you are willing to live outside of a big city, although it isn't at all clear whether those are good places to live (and get a job) overall. #### DBT ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, and that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. #### Jimmy Higgins ##### Contributor Here's what I'm using for a bit of context: Median household income: $78,672 Median value of homes: $538,500 Median monthly owner costs with mortgage: $2,422. $4K a month alone is $48,000 a year, which to me is a staggering amount of money to pay on a mortgage. You could maybe manage that on a median CA household income, but I bet it's unusual. I'm guessing that your friends have a household income much higher than the median, and I'm guessing that there aren't many people who can afford these million-dollar homes. (I've heard that some people do interest-only repayment plans, but I can't see how that is anything other than financial suicide unless you are aggressively building a real estate portfolio.) I've watched a fair bit of HGTV and the variation in house prices from one TV show to another is incredible. It looks like you can get some absolute bargains if you are willing to live outside of a big city, although it isn't at all clear whether those are good places to live (and get a job) overall. My understanding is living outside of LA is called "living in Phoenix". HGTV pisses me off. Talk about ungreen. Yeah... we're taking this otherwise decent-looking house apart, spending $200,000 to improve it, and then flipping it for a $100,000 gain. The amount of trash we are producing and resources we are wasting for vanity will be intense! Granite countertop, GONE! 15-year-old cabinets, GONE! Hey, let's needlessly replace this door with a $12,000 door!
SMASH! #### rousseau ##### Contributor For many in my generation, the answer is that you can't afford these homes, and have to put off getting married and having children indefinitely. Some of our friends still live with their parents; many have moved to outlying towns where homes are cheaper (and now face rising costs for fuel). My younger acquaintances who just finished up with school are trying to hold on to a glimmer of hope. Last edited: #### rousseau ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, and that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. #### DBT ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, and that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. They can't, but that's not what I was suggesting. #### Politesse ##### Lux Aeterna I chat with my friends who live in high-price housing markets and I am stumped on how this works. Do y’all mind sharing your stories? Take California, with their million-dollar small homes. (Also NYC, Austin, Seattle, etc.) How does a family afford that? Do you never take vacations? Are you able to save for retirement? Is the house the retirement? Do you ever pay off the mortgage? I have friends with similar incomes, and they pay $4K–$5K per month for a 2-3 bedroom house, not huge and small yards (if any). That seems INSANE to me and I can’t figure out how they budget. What’s the deal for you?
Context: I live rural and you can buy a 4BR house here with 2,000 sq ft and 10 acres of land for, say, $150,000 ($300K if you want it to be manager-level fancy). You can get one that needs some updating for $60,000. You can get a nice (brand new) singlewide and 2 acres to put it on for $50,000. So I am used to prices that are not higher than 1 year’s income for most professionals. I live in an apartment. #### rousseau ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, and that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. They can't, but that's not what I was suggesting. I guess what I was getting at is that economies do work for their members, but a perfect, linear idea of progress isn't entirely realistic. Maybe that's what you meant too? #### rousseau ##### Contributor I have two friends with homes around the million-dollar mark. One is a doctor who makes $350k/year; the other bought before the market was hot, sold at a profit, and now owes $600k on his Toronto home. One way or another the dots have to connect. My wife and I are also in the 'bought at the right time' camp. She and her sister bought our home in 2005, and we got a good deal when buying her sister out in 2017. #### Jimmy Higgins ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, and that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. There is usually the generational whining. I tried to explain to Boomers that the stock market has exploded during their lifetime. College was affordable. Housing wasn't too expensive.
Health care not too nutty. So yeah, you better have done well you lucky fucks! The latest generation has a stock market that just yo-yos like madness, college is through the roof expensive, health care costs continue to increase, and housing gets expensive. I feel like my generation, Late Z'er were the last ones to get out into the old world where you went to college (worked hard... well some did), graduated and got a job, and things work themselves out. I graduated, got a house in '03, and since then the Great Recession and a global pandemic. It isn't WWII, but there were blue collar jobs back then too. #### rousseau ##### Contributor I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, that with progress life becomes easier for all its members.....yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. There is usually the generational whining. I tried to explain to Boomers that the stock market has exploded during their life time. College was affordable. Housing wasn't too expensive. Health care not too nutty. So yeah, you better have done well you lucky fucks! The latest generation has a stock market that just yo-yos like madness, college is through the roof expensive, health care costs continue to increase, and housing gets expensive. I feel like my generation, Late Z'er were the last ones to get out into the old world where you went to college (worked hard... well some did), graduated and got a job, and things work themselves out. I graduated, got a house in '03, and since then the Great Recession and a global pandemic. It isn't WWII, but there were blue collar jobs back then too. I got very lucky as well as I finished up with school in a booming job market, and managed to become a homeowner just before prices exploded. 
A few years ago I thought it was a matter of making good decisions, but these days a lot of my peers are hopelessly buried. My acquaintances mostly fall into two camps: those who are making the best of it and those who play the blame game. It's genuinely awful for some, but blaming doesn't pay the credit card bill. Those getting hit the hardest are friends who rejected modern values outright and never even attempted to find a career. Living paycheque to paycheque and barely getting by. #### Elixir ##### Made in America Real estate is risky. My advice: get lucky. You can’t get lucky without facing risk, but real estate DOES appreciate on average, and the longer you can hold onto it the better your chances of ending up in the black. Right now I am facing the prospect of “losing” on a long term (15 yr) real estate deal. But it’s only a loss in terms of “today dollars” spent on the purchase plus taxes, insurance, upkeep, travel etc. AND a major remodel it will need to be marketable. Most of the purchase price was re-couped in rent over the time I’ve owned it and it would yet be profitable if I held it a few more years… and will be close to break even if I sell it this spring. Every other property I ever got into was a clear winner in the end. So I say go for it, esp if you are young and flush enough to hold for 10+ years. #### rousseau ##### Contributor Real estate is risky. My advice: get lucky. You can’t get lucky without facing risk, but real estate DOES appreciate on average, and the longer you can hold onto it the better your chances of ending up in the black. Right now I am facing the prospect of “losing” on a long term (15 yr) real estate deal. But it’s only a loss in terms of “today dollars” spent on the purchase plus taxes, insurance, upkeep, travel etc. AND a major remodel it will need to be marketable. 
Most of the purchase price was re-couped in rent over the time I’ve owned it and it would yet be profitable if I held it a few more years… and will be close to break even if I sell it this spring. Every other property I ever got into was a clear winner in the end. So I say go for it, esp if you are young and flush enough to hold for 10+ years. The problem for most of us is the initial down payment, and before that our student debt, and before that even finding a well paying job. I have a young cousin who's in this predicament right now. She wants to leave her boyfriend and get her own place (to rent) but is struggling to secure stable employment. Needless to say I'm already thinking about how to help our boys out with property, and one of them isn't born yet. #### bigfield ##### the baby-eater Real estate is risky. My advice: get lucky. You can’t get lucky without facing risk, but real estate DOES appreciate on average, and the longer you can hold onto it the better your chances of ending up in the black. Right now I am facing the prospect of “losing” on a long term (15 yr) real estate deal. But it’s only a loss in terms of “today dollars” spent on the purchase plus taxes, insurance, upkeep, travel etc. AND a major remodel it will need to be marketable. Most of the purchase price was re-couped in rent over the time I’ve owned it and it would yet be profitable if I held it a few more years… and will be close to break even if I sell it this spring. Every other property I ever got into was a clear winner in the end. So I say go for it, esp if you are young and flush enough to hold for 10+ years. I'm not going to get lucky then, because I can't afford risk. I have a responsibility to keep a roof over our heads, and that means buying a house I can afford to pay off, even when interest rates inevitably rise, regardless of its potential to appreciate in value. 
I'll stay here as long as I can, not because I care about the investment or because I particularly like this house, but because I can't afford to re-enter the housing market in the foreseeable future. Any equity I gain is purely a side-effect of having a family home. #### Jimmy Higgins ##### Contributor Real estate is risky. My advice: get lucky. You can’t get lucky without facing risk, but real estate DOES appreciate on average, and the longer you can hold onto it the better your chances of ending up in the black. Right now I am facing the prospect of “losing” on a long term (15 yr) real estate deal. But it’s only a loss in terms of “today dollars” spent on the purchase plus taxes, insurance, upkeep, travel etc. AND a major remodel it will need to be marketable. Most of the purchase price was re-couped in rent over the time I’ve owned it and it would yet be profitable if I held it a few more years… and will be close to break even if I sell it this spring. Every other property I ever got into was a clear winner in the end. So I say go for it, esp if you are young and flush enough to hold for 10+ years. I'm not going to get lucky then, because I can't afford risk. I have a responsibility to keep a roof over our heads, and that means buying a house I can afford to pay off, even when interest rates inevitably rise, regardless of its potential to appreciate in value. I'll stay here as long as I can, not because I care about the investment or because I particularly like this house, but because I can't afford to re-enter the housing market in the foreseeable future. Any equity I gain is purely a side-effect of having a family home. My Dad bought a house in the South Shore of Massachusetts in 1983. The house was sold for 5x what he bought it for in 2003. No intent of his own was he trying to make a ridiculous sale profit. Zillow thinks the house is about 1.5x its sold price today, nearly 20 years later. And oh my goodness has the front yard garden over grown! 
#### Elixir ##### Made in America Real estate is risky. My advice: get lucky. You can’t get lucky without facing risk, but real estate DOES appreciate on average, and the longer you can hold onto it the better your chances of ending up in the black. Right now I am facing the prospect of “losing” on a long term (15 yr) real estate deal. But it’s only a loss in terms of “today dollars” spent on the purchase plus taxes, insurance, upkeep, travel etc. AND a major remodel it will need to be marketable. Most of the purchase price was re-couped in rent over the time I’ve owned it and it would yet be profitable if I held it a few more years… and will be close to break even if I sell it this spring. Every other property I ever got into was a clear winner in the end. So I say go for it, esp if you are young and flush enough to hold for 10+ years. The problem for most of us is the initial down payment, and before that our student debt, and before that even finding a well-paying job. I have a young cousin who's in this predicament right now. She wants to leave her boyfriend and get her own place (to rent) but is struggling to secure stable employment. Needless to say I'm already thinking about how to help our boys out with property, and one of them isn't born yet. The down payment thing has to be overcome by hook or crook. My first house purchase was an owner-carry situation, and the old lady who sold me the property was a real gem. She let me negotiate the down payment in favor of a five-year balloon payment. That gave me time to get it together even if I was going to need a bank to help out with the balloon. As it turned out, I was able to cash her out, having scrimped and saved for five “hard years” (that were really some of the best of my life). The person who let me know about the property had been on the brink of buying it himself, until he discovered a much higher end property that he bought instead.
A few years later, he was upside down on that high end property and ended up taking a big loss, while I was free and clear on a property that eventually sold (with improvements) for 20x what I paid for the initial purchase plus interest. So again… get lucky, don’t be greedy, and lower your sights as needed. Be ready to turn it over when conditions warrant, even if you don’t want to. Also be ready to hold; keep reserves to make sure you can. I like to remind myself that they’re not making any more real estate when things look iffy. That dynamic always prevails eventually! ETA the student debt thing is horrible. That is a societal abomination that needs to be made to go away. Certainly limits your options, and that sucks. Last edited: #### southernhybrid ##### Contributor My sister and her husband bought a very small house in a small town in New Jersey about 10 years ago. Their house payment is about 2500 per month. They paid about 400K for the house but they put about 200k down, money from the equity in their previous home. Their real estate taxes are about 10K per year, which is actually low for northern NJ. She told me recently, that they could get over 500K if they were to sell their home. My sister is 70 and the reason she still works is so they can pay their mortgage. Her husband is retired other than for some occasional work he does doing photos for local realtors. I know that's not nearly as high as the OP is talking about, but it's still difficult for people like my sister and her husband to afford a home like the one they have. They both collect SS so that helps too. I don't know my sister's salary, but it's probably at least in the 80K range. I guess their total income is in the 125K range. Maybe a little bit higher, but most everything is more expensive in NJ compared to most parts of the country. We paid off our mortgage about ten years ago, but it was only about 900 per month. 
My home is twice the size of my sister's home, has a 3/4 acre lot, a large swimming pool and it's all brick. If we were to sell it, it would still be for less than my sister paid for her tiny frame home, which hasn't been updated. But, here's the thing. Even in my small southern city, home prices are rising drastically. Some of it may be due to what we commonly call Atlanta sprawl. We are also seeing many people from the northeast moving here as well, including several of my newer neighbors. A few years ago, one could buy a very nice small home on the town's most sought-after street for about $120K. The same home now would be about $300K. People can probably afford these homes now due to the unusually low interest rates, but as the Fed increases rates this year, these homes will become unaffordable for a lot of people or the values might decrease. I sometimes read the real estate section of the NYTimes. In New York City, a tiny starter condo usually goes for about $600–800K. But, a lot of New Yorkers don't have cars and that is one thing that helps lower their expenses. Still, it's mostly working couples who buy real estate in New York City. I assume they make higher salaries compared to the rest of the country. I can't imagine paying so much for housing, but it's becoming more common all over the country. Condo fees are also becoming insane. I've owned two condos and our fees were never more than $250 a month. I've seen condo fees that are well over $1,000 per month in places like New York City. Even the exurban areas near me are appreciating quite a bit. Plus, homes that have been updated usually sell within a day or two, especially if they are in what is considered the best locations. But, they sell fairly quickly even in areas that were once considered less desirable. I browse homes in Indianapolis, since my son lives there and we sometimes consider relocating. Prices are rising rapidly in Indy as well, although not quite as much as they are here in Georgia.
A lot of retirees are moving to my area, probably due to the climate, lower cost of living, and/or this is where they have children. When people move from larger, more expensive cities, it often pushes up the prices of the area where they relocate. I read yesterday that home prices in Atlanta have risen 25% over the past year! ATL used to be considered a very affordable city, but no more, due to the rise in rents and houses. I know I'm not answering the question in the OP, but as one who has always enjoyed following the real estate market, I do think the most recent increases are due to supply and demand, largely due to the very low interest rates. Demand has increased due to younger people wanting to buy their first home too. We've owned a lot of homes and while they were all affordable for us, what we could afford always depended on the interest rates. When we lived in NC, our mortgage rate was 13%. We refinanced when rates dropped a few years later, but they were still about 9%. The last rate we had was about 4.5% after we refinanced down from a 7.5% rate. We lost money on every home we owned, other than the two homes in Florida. One was just a little vacation condo that we owned for 18 years and we lived in the other one for less than 3 years. We moved too often due to husband's job changes and that is part of the reason why we lost money on real estate. We've been in our current home for 23 years and after not increasing in value for about 20 years, it's now worth more than twice what we paid for it. Still, that's not a great investment considering how long it took to get us here, as well as how much we've spent maintaining and updating it over the years. I love my house and dread the thought of moving, but it is way more than we need, so unless I die suddenly, we will probably eventually have to downsize. It's just so hard to find a replacement right now as homes are selling within days almost everywhere.
I know damn well if we put the house up for sale right now, it would sell within days, possibly even in a bidding war. I'd have to buy another home before we could sell this one. #### TSwizzle ##### Let's Go Brandon! Take California, with their million-dollar small homes. (Also NYC, Austin, Seattle, etc.) How does a family afford that? Do you never take vacations? Are you able to save for retirement? Is the house the retirement? Do you ever pay off the mortgage? What’s the deal for you? Parts of California are very expensive and other parts not so much. But living in the 909 is not for the faint of heart. But it really is a bit of a mystery as to how people manage in the typical LA County home. Houses in my neighborhood are at least $900k for a 3-bed home of about 1,100 sq ft; the lot size varies. A house across the street from me is on the market at $1.2m, I'd be surprised if they get that much but it will be close. Rentals in my area for a two-bedroom apartment are at least $2,800. Both my kids are resigned to leaving California even though they quite like it here. I will try to help them stay if I can. Imagine paying $800k for a modest home plus the property tax every year; you must be sitting on boxes and eating spam six days a week. But somehow people are doing it, and I really don't know how; they must have saved for a significant down payment or something. I bought my house 21 years ago for about $345k. It's almost paid off. I just need the property market to stay hot a couple more years, then I will sell up and move, probably out of California. HGTV pisses me off. Talk about ungreen. Yeah... we're taking this otherwise decent-looking house apart, spending $200,000 to improve it, and then flipping it for a $100,000 gain. {snip} Property investors are very active in LA County just now, you see their ads on TV and hear them on the radio. Parasites. Personally I would never buy a flip because you are overpaying for sure.
But I read recently that there is a bill being drafted to tax house profits an extra 25% if the house sells within three years of initial purchase. I believe this bill is to specifically target house flippers. California politicians bang on about "affordable housing" but I don't think this does anything to address the "affordable housing" aspect of the housing market; it just looks like the state is going to make more money. Anyway, so when the young start to move in to areas where they can actually afford a home and the neighborhood starts to improve, there is much gnashing of teeth over the "gentrification" of the area. #### Tigers! ##### Veteran Member My daughter and her fiance have bought (Nov. 2022) a house/land package in Lara (Melbourne, Australia) for ~$600k. Small block, the house will be nothing flash. When I was growing up (60s-70s) nobody wanted to live in Lara. A similar block now costs >$700k. Stupid. At that price both will need to work for decades to pay it off. If one loses their job or is injured etc. then after about 6 months they will be in strife. My daughter is an ambo so the pay is quite reasonable provided she gets the overtime. But if she gets the overtime then she will not spend much time with fiance/hubby or any kids they might have. I paid $14k for my land in 1991. Installed a kit home for $80k in 1993. Housing prices in Aust. have lost any links to reality. #### Rhea ##### Cyborg with a Tiara Staff member much. But living in the 909 is not for the faint of heart. But it really is a bit of a mystery as to how people manage in the typical LA County home. Houses in my neighborhood are at least $900k for a 3-bed home of about 1,100 sq ft; the lot size varies. A house across the street from me is on the market at $1.2m, I'd be surprised if they get that much but it will be close. Rentals in my area for a two-bedroom apartment are at least $2,800. Yeah, those are the kinds of numbers they talk about. They blow my mind.
Both my kids are resigned to leaving California even though they quite like it here. I will try to help them stay if I can. Yeah, me, too. I hear them talk sometimes about how “they’ll never own a house,” and I remember thinking that myself when I was first looking and interest rates were 16%. It seemed impossible, but it was possible. Starting small. But they have parents who can and will help, so I imagine they’ll have their house at some point. Their age-mates won’t all be able to without help, perhaps. And it’s true that I could not afford to buy my own parents’ house, and moved out of state. But the point of that is that they can move out of state. I hope for them that remote work really takes off. Then they can live anywhere, even here. <3 Imagine paying $800k for a modest home plus the property tax every year; you must be sitting on boxes and eating spam six days a week. But somehow people are doing it, and I really don't know how; they must have saved for a significant down payment or something. Yeah, I just don’t know. #### Hermit ##### Cantankerous grump I chat with my friends who live in high-price housing markets and I am stumped on how this works. Do y’all mind sharing your stories? This boomer basically sleepwalked his way into it with some hard yakka and a truckload of luck. I never intended to become a home owner. When I came to Australia I was still a schoolboy, a couple of months short of 16. Our first abode was a migrant hostel about 2.5 kilometres from Maroubra Beach. The ocean had fascinated me since we went on a holiday to the North Sea in Germany, so I took a walk down to it the day after our arrival. The weather was shit. Cold, and a stormy southeasterly blew rain into my face on my way. What I saw when I got to the north point of the beach was a revelation. A couple of boys were paddling through mushy whitewater on sticks. Then they turned around, stood up and travelled back towards the shore. This I gotta do!
And that is all I was interested in doing for the next 17 years. Jobs were easy to get, but I only ever kept any one of them for long enough to go on surf trips up and down the coast from Sydney, or to build another kneeboard whenever I broke one. I also got fired from a fair few jobs for taking too many sickies when the surf was too good to waste my time at work. I did not want to get chained to years of mortgage payments or paying off fancy cars. Life was too short for that shit. The downside was that I was regularly short of money. Things came to a head when I was about to return to Sydney from the Gold Coast where my father lived with my stepmother at the time. My stepmother took me aside and handed me a 50 dollar note, saying "I think you could do with one of these." At the time that would have been the equivalent of a day's wages. I thanked her and thought "Yes, I'm broke again, aren't I?" It was at that point that I became sick of my hand-to-mouth existence to the point of putting an end to it. Back in Sydney I found another job easily enough. I enjoyed driving and people were not all that fussed about checking your qualifications or work history at the time. All I had to do was check the classifieds and turn up at the site of an advertiser seeking a driver 15 minutes before starting time. Once there, I told them how I drove a ute for Brearley's, a 12-tonner for Bowen's, a semi for Cook's or whatever suited the occasion. I obviously didn't tell them that half of the dozen or more of the employers fired me for taking too many unscheduled days off, or that one of them sacked me for squashing a car parked at the entrance of a wharf with the triaxle of a 41 foot trailer. Easy peasy. Nobody even asked me to show them my license. Two years after my return to Sydney I was still driving for the same company, breaking my record by 18 months or more, and my savings account had grown to a massive eleven grand. I became ambitious.
Surely, subcontracting to someone with one's own truck would pay more than driving someone else's for a wage? I checked the "truck with work" ads and wangled some finance after finding a suitable deal. It was a $12,000 loan, and to this day the most stressful financial commitment of my life. Anyway, after a mishap (the engine blew up three months into the venture, leaving me with the princely sum of 64 dollars cash in the bank at Christmas) things went swimmingly. Two and a half years later my bank balance was burgeoning once more: 30 thousand bucks. What on earth was I going to do with all that? Just leaving it there was stupid. Mhhhh. My landlady was OK, but her husband was a pain, and the rent kept going up. What if...? I lived in Glebe at the time. Lovely, bohemian inner Sydney suburb with lots of second-hand bookshops, sleazy pubs, a cinema showing offbeat movies, a couple of halfway houses with recently released heroin addicts and stuff like that. It suited me down to the ground, but unfortunately it was also in the early stages of becoming yuppiefied. The housing market was out of my price range. But there was nearby Newtown. More affordable, and what's more, more bohemian. Dozens of tiny restaurants with menus from all four corners of the world, opportunity shops, an actual family-owned-and-run greengrocer, and a bunch of daggy pubs with regular pub bands, one of which was particularly favoured by gays, sat cheek by jowl on the narrow main drag, King Street. Best of all, two fast food franchises, McDonald's and KFC, tried to establish themselves there but didn't last long. Wrong demography. Without going into much detail, I finished up buying a tiny single-storey two-bedroom-plus-study terrace on a 5 x 30 metre block of land. Built in 1890, it was in dire need of renovation (which never happened while I owned it). Getting the mortgage involved some bending of the rules, but the manager of the building society branch I dealt with was most accommodating.
In the end I finished up with a $106,000 mortgage on a house 6 kilometres from Sydney's Harbour Bridge that I bought for $154,000 in 1992. Heaven on a stick. All things must change, though. Yuppies bought run-down terraces and spent fortunes upgrading them. One by one the second-hand shops closed down, as did the greengrocer. A special day of mourning was the day Cornstalks closed for good. Then came the return of franchised businesses, KFC and McDonald's, and new ones as well, like Gloria Jeans and some fashion shops, the names of which I can't remember. Time to move out. I looked at moving to Newcastle, a once horrid coal mining city two hours north of Sydney that has scrubbed up quite nicely in recent years, but could not find an area I would feel at home in. Then I got seriously attached to someone, which necessitated my moving to South Australia. In 2005 I sold my tiny Newtown terrace for $499k and bought a rather larger, freestanding home on a 900 square metre block of land in a small city 380 kilometres (by road) from Adelaide's CBD for $280k. The person who bought my terrace spent somewhere between $200k and $300k on doing it up and sold it in 2010 for just under a million bucks. It is now valued at around 2 million. I will never be able to afford to buy anything in inner Sydney ever again. Luckily, I can't think of a reason why I would want to. The ambience I so liked has been destroyed for good.

#### Toni
##### Contributor

I tend to assume that the point of societies and economies is the benefit to be had from the arrangement, individually and collectively, that with progress life becomes easier for all its members... yet considering developments in recent times, that doesn't seem to be the case. Things can't be perfect for everyone, all the time. There is usually the generational whining. I tried to explain to Boomers that the stock market exploded during their lifetime. College was affordable. Housing wasn't too expensive. Health care not too nutty.
So yeah, you better have done well, you lucky fucks! The latest generation has a stock market that just yo-yos like madness, college is through-the-roof expensive, health care costs continue to increase, and housing gets expensive. I feel like my generation, late Z'ers, were the last ones to get out into the old world where you went to college (worked hard... well, some did), graduated and got a job, and things worked themselves out. I graduated, got a house in '03, and since then the Great Recession and a global pandemic. It isn't WWII, but there were blue collar jobs back then too.

Boomer here. We lived through the Vietnam War, the Cold War, nuclear bomb drills, inflation, stagflation, high unemployment and a host of other economic horrors. We also got to experience measles, mumps and chicken pox, and all have these nice scars from smallpox vaccinations. We were thrilled our kids could get vaccinated against so many childhood diseases and get antibiotics for ear infections and strep throat, which is possibly why I have a small heart murmur. The first of our friends who purchased a home got a mortgage rate of 17-18% (I forget the exact figure), or what my kids call usury, but it was a decent rate for the time. We couldn’t afford a house for several years and purchased our first home when we had 3 kids, paying an interest rate that was quite good for the time but 2-3 times what our kids currently pay on their mortgages. Yes, our houses’ purchase prices were much lower, as were our earnings, and our lifestyle. I was almost 50 when I bought my first car. Last fall, my husband bought a brand new vehicle for the first time, because it is so difficult to find decent used cars and the new one was a little cheaper than the only used one he could find.
All of our children purchased homes for significantly more than our first home cost, and one is worth probably twice what our current home is worth now, even though their home is less than half the size of our home, which is much more updated. But they live in a major city and have easy access to loads of restaurants, bars, clubs, concert venues, shopping, etc. They ‘can’t afford kids’ but mostly that’s because they don’t want to give up their 3-4 nights a week out, etc. Three have been to Europe, something I’m giving up on thanks to the pandemic and WW3. The only one who had student debt beyond a couple (under 3) thousand dollars total has debt because of law school, which we could not afford to pay for. I agree that the cost of college and grad school or professional school is prohibitive, and I know because hubby and I paid for our own plus the kids’ undergrad. The cost of daycare is also extremely high, and yet daycare workers are not well paid. We faced different challenges than our parents did or than our children do now. But we did have the good sense to recognize and appreciate the hardships our parents faced and the sacrifices they made for us.

#### Toni
##### Contributor

Daughter is in the process of buying a 3-bed/2.5-bath condo in Niles (another suburb NW of Chicago, ~25 min drive from where I live), after discovering monthly mortgage costs are comparable to monthly apartment rents for similar-sized places here in Chicagoland. The down payment was the tough part for her, but Husband & I helped with that. Youngest son just bought his first home, a townhouse, after making a similar discovery. Heck, years ago, that’s why we bought our first home: rent went up to equal a mortgage payment. I will never forget telling the landlord, when she announced our rent was going up, that it was equal to a mortgage payment, and her saying smugly: ‘If you can get a mortgage.’ Damn straight we could, and did.
#### Toni
##### Contributor

Here's what I'm using for a bit of context:

- Median household income: $78,672
- Median value of homes: $538,500
- Median monthly owner costs with mortgage: $2,422

$4K a month alone is $48,000 a year, which to me is a staggering amount of money to pay on a mortgage. You could maybe manage that on a median CA household income, but I bet it's unusual. I'm guessing that your friends have a household income much higher than the median, and I'm guessing that there aren't many people who can afford these million-dollar homes. (I've heard that some people do interest-only repayment plans, but I can't see how that is anything other than financial suicide unless you are aggressively building a real estate portfolio.) I've watched a fair bit of HGTV and the variation in house prices from one TV show to another is incredible. It looks like you can get some absolute bargains if you are willing to live outside of a big city, although it isn't at all clear whether those are good places to live (and get a job) overall. My understanding is living outside of LA is called "living in Phoenix".

#### Toni
##### Contributor

Hunting I understand. Farmers and other wooded-land owners often allow others to hunt, with permission. Perhaps not so easily as when I was a girl. Even then, only someone you knew and trusted. Logging? That’s a huge Wow. Asking if you want to sell scrap metal? I get that. Asking if they can just have scrap metal? Wow.

#### Rhea
##### Cyborg with a Tiara Staff member

Around here the scrap metal is lying around in yards and fields, so the scrappers are doing you a favor by cleaning up your yard. Actually, I’ve been trying to find one for a while - we have a couple of hundred pounds in a pile waiting for a pickup to drive up and ask, and I haven’t seen one since the pandemic! One year I had stone pickers show up and ask if they could have our pile of rocks in the field.
They stack it up on pallets, wrap it in chicken wire and drive it off to the city for stone walks or something. 100 years ago, the farmer who owned this place made two piles, each 200 feet long. So it’s just sitting there waiting for this guy in his tractor trailer with his skid-steer. Someone got some nice flat slate.

#### spikepipsqueak
##### My Brane Hertz

My daughter and her fiance have bought (Nov. 2022) a house/land package in Lara (Melbourne, Australia) for ~$600k. Small block, and the house will be nothing flash. When I was growing up (60s-70s) nobody wanted to live in Lara. A similar block now costs >$700k. Stupid. At that price both will need to work for decades to pay it off. If one loses their job or is injured etc., then after about 6 months they will be in strife. My daughter is an ambo, so the pay is quite reasonable provided she gets the overtime. But if she gets the overtime then she will not spend much time with fiance/hubby or any kids they might have. I paid $14k for my land in 1991 and installed a kit home for $80k in 1993. Housing prices in Aust. have lost any link to reality.

Quoting Tigers for context. Our market differs from the US. Bought my first house (3BR, weatherboard, tiled, land 50' x 124', in Melbourne) in 1983 for $48k. The mortgage was $33k. I was single, living with my boyfriend (who did not contribute, another story), and had no trouble meeting the mortgage and getting the house restumped, rewired, and replastered where necessary, on a mid-level salary. Sold that for $142k in 1991 to a young couple starting out, an amount that reflected inflation and the increased value of the place but placed no undue burden on them. By comparison, in 1999 I bought a tiny 3BR weatherboard, tin roof, land irregular but 10% smaller, in a rural, coastal area for $57k. Sold that around 2002/3 for $165k with very minor renovations (new water tank, floors sanded and polished, slate surround to the heater). Just heard that same house is again for sale.
It has had a very substantial renovation and small extension, but even 20 years of inflation cannot justify the $895k they are asking for it. They will get it, however. In 1999, small empty blocks were going for $11k in the area. In 2002 that skyrocketed suddenly to $80-100k. Covid, and the exodus out of cities, has seen the remaining land fetch in the region of $250k. I just Googled the median house price for Melbourne: $1,101,612. I don't know how young people are ever going to do it.

#### TSwizzle
##### Let's Go Brandon!

I don't know how young people are ever going to do it.

I bought my first property, a one-bedroom flat/apartment, when I was 20, with a little help from my parents, who basically paid the legal fees. I don’t know how young people are going to get on the property ladder. Maybe young professionals will move into the more “diverse” neighborhoods and make something of it.

#### TSwizzle
##### Let's Go Brandon!

I live in a college town that is also a working class town for everything that isn’t related to colleges. The vast overwhelming majority of the rental market is geared for students, with many many previously single family homes converted to student housing. The net effect is that young families and newcomers to town are generally forced into apartments rather than their preferred choice of single family dwellings. {snip}

Surely you must realize that this is intentional. Our overlords do not like the suburbs, they don't like us owning cars and driving them, they don't like us taking flights. They want us jammed into apartments, working within walking/cycling distance, not eating meat and so on.

#### Hermit
##### Cantankerous grump

I live in a college town that is also a working class town for everything that isn’t related to colleges. The vast overwhelming majority of the rental market is geared for students, with many many previously single family homes converted to student housing.
The net effect is that young families and newcomers to town are generally forced into apartments rather than their preferred choice of single family dwellings. {snip} Surely you must realize that this is intentional. Our overlords do not like the suburbs, they don't like us owning cars and driving them, they don't like us taking flights. They want us jammed into apartments, working within walking/cycling distance, not eating meat and so on.

Yeah, those shapeshifting reptilian aliens are pure evil.

#### Tigers!
##### Veteran Member

I have turned off the ringer on my phone because I get calls from 8am to 8pm from investors/flippers/assholes wanting to make a "cash offer" on my house.

That reminds me, I need to get our 'leave us the hell alone' sign back up on our mailbox. Damn windy days.

We get a few of these offers per week. Funny, the offers we get are: Can I hunt on your land?

Who are they hunting on your land? There are a few pollies here in Australia we could give you.

#### TV and credit cards
##### Veteran Member

People keep talking about greed. I think it's just how capitalism works.

It’s capitalism when it’s my capital gain. I recommend everyone dip their toe in the rental market if the opportunity presents itself. It will let one know if they are the altruistic individual they think they are.

#### Hermit
##### Cantankerous grump

People keep talking about greed. I think it's just how capitalism works.

It’s capitalism when it’s my capital gain. I recommend everyone dip their toe in the rental market if the opportunity presents itself. It will let one know if they are the altruistic individual they think they are.

When I bought my first home I did not have capital gain in mind, but when I sold it I got back 3.24 dollars for every dollar I had paid for it 12 years earlier. That is due to the so-called iron law of capitalism: prices being determined by supply and demand.
But yes, capitalist systems accommodate greed more the more closely they are modelled on the laissez-faire principle. What we need is a mixed economy: capitalism, the excesses of which are leavened by socialist regulations.

#### gmbteach
##### Mrs Frizzle

My daughter and her fiance have bought (Nov. 2022) a house/land package in Lara (Melbourne, Australia) for ~$600k. Small block, and the house will be nothing flash. When I was growing up (60s-70s) nobody wanted to live in Lara. A similar block now costs >$700k. Stupid. At that price both will need to work for decades to pay it off. If one loses their job or is injured etc., then after about 6 months they will be in strife. My daughter is an ambo, so the pay is quite reasonable provided she gets the overtime. But if she gets the overtime then she will not spend much time with fiance/hubby or any kids they might have. I paid $14k for my land in 1991 and installed a kit home for $80k in 1993. Housing prices in Aust. have lost any link to reality.

Quoting Tigers for context. Our market differs from the US. Bought my first house (3BR, weatherboard, tiled, land 50' x 124', in Melbourne) in 1983 for $48k. The mortgage was $33k. I was single, living with my boyfriend (who did not contribute, another story), and had no trouble meeting the mortgage and getting the house restumped, rewired, and replastered where necessary, on a mid-level salary. Sold that for $142k in 1991 to a young couple starting out, an amount that reflected inflation and the increased value of the place but placed no undue burden on them. By comparison, in 1999 I bought a tiny 3BR weatherboard, tin roof, land irregular but 10% smaller, in a rural, coastal area for $57k. Sold that around 2002/3 for $165k with very minor renovations (new water tank, floors sanded and polished, slate surround to the heater). Just heard that same house is again for sale.
It has had a very substantial renovation and small extension, but even 20 years of inflation cannot justify the $895k they are asking for it. They will get it, however. In 1999, small empty blocks were going for $11k in the area. In 2002 that skyrocketed suddenly to $80-100k. Covid, and the exodus out of cities, has seen the remaining land fetch in the region of $250k. I just Googled the median house price for Melbourne: $1,101,612. I don't know how young people are ever going to do it.

Things are nuts here in SEQ too. We paid mid-$300k for our 4 bed, 2 bath house on a 725 m² block 9 years ago. The next door neighbour just sold his 3 bed, 1 bath house on a similar-sized block for $550k. He does have a pool, but it’s on a main road corner, and he gutted and redid his place. I know he would have paid $270k for his house, as he bought his just after I bought my old place across the road. Bilby and I like our home, and while it needs work, we are happy to stay here. I do sometimes want to move to a more rural area, like my brother just has, but we need considerably more cash for that.
# Zillow’s Home Value Prediction (Zestimate)

#### Videos

Each project comes with 2-5 hours of micro-videos explaining the solution.

## What will you learn

- Understanding the problem statement
- Importing the dataset from Amazon AWS
- How to analyze the result of the summary function from R, and basic EDA
- Using ggplot and a correlation plot to find similarities between variables
- Checking for variables with null values and handling them
- Checking skewness of the target variable using a histogram
- Checking the contribution of different variables to the target variable
- Finding the best features and eliminating the least significant ones
- Defining the evaluation metric 'log_error' and understanding its significance
- Selecting the boosting model XGBoost and converting the dataset into a DMatrix
- Applying the XGBoost model to the dataset
- Defining parameters for hyperparameter tuning
- Using cross-fold validation to prevent overfitting
- Visualizing important features for the XGBoost model
- Training the final model using the selected features
- Making final predictions and saving them in CSV format

## Project Description

Zillow is asking you to predict the log error between their Zestimate and the actual sale price, given all the features of a home. The log error is defined as

$$\text{logerror} = \log(\text{Zestimate}) - \log(\text{SalePrice})$$

and it is recorded in the transactions file train.csv.
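The metric itself is a one-liner. The project is built in R, but as an illustration it can be sketched in Python; the prices below are made-up numbers, not actual Kaggle data:

```python
import math

def log_error(zestimate: float, sale_price: float) -> float:
    """Zillow's target: log(Zestimate) - log(SalePrice).

    Positive values mean the Zestimate overshot the sale price;
    negative values mean it undershot. Working on the log scale makes
    a 10% overestimate of a cheap home and of an expensive home count
    the same.
    """
    return math.log(zestimate) - math.log(sale_price)

# A Zestimate 10% above the sale price gives log(1.1), about 0.0953,
# regardless of the absolute price level
print(log_error(330_000, 300_000))
print(log_error(1_100_000, 1_000_000))
```

Because the models are trained to predict this residual rather than the raw price, a perfect prediction corresponds to a log error of exactly zero.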
In this project, you are going to predict the log error for the months in Fall 2017. "Zestimates" are estimated home values based on 7.5 million statistical and machine learning models that analyze hundreds of data points on each property. And, by continually improving the median margin of error (from 14% at the onset to 5% today), Zillow has since become established as one of the largest, most trusted marketplaces for real estate information in the U.S. and a leading example of impactful machine learning. In this data science project, we will develop a machine learning algorithm that makes predictions about the future sale prices of homes. We will also build a model to improve the Zestimate residual error. And finally, we'll build a home valuation algorithm from the ground up, using external data sources.

## Similar Projects

#### Walmart Sales Forecasting Data Science Project

Data Science Project in R: predict the sales for each department using historical markdown data from the Walmart dataset containing data from 45 Walmart stores.

#### Ensemble Machine Learning Project - All State Insurance Claims Severity Prediction

In this ensemble machine learning project, we will predict what kind of claims an insurance company will get. This is implemented in Python using ensemble machine learning algorithms.

#### Prediction or Classification Using Ensemble Methods in R

In this data science project, you will learn to predict churn on a built-in dataset using ensemble methods in R.
## Curriculum For This Mini Project

- Problem Statement (01m)
- Explore Data Set (02m)
- Understand the features (03m)
- Import Libraries (03m)
- Recoding of variables (04m)
- Find transactions by month (12m)
- Distribution of Transactions (01m)
- Distribution of Target variable (15m)
- Represent Missing values (07m)
- Finding relevant features (02m)
- Correlation between features and target variable (14m)
- Shape of Distribution (04m)
- Spread of log error over years (04m)
- Zestimate variable prediction (06m)
- Building Model (10m)
- XGBoost Model (13m)
- Prediction (04m)
- Hyperparameter Tuning (01m)
- Cross Validation (03m)
- Get Best Results (16m)
- Conclusion (01m)
# Solve the following equation with NSolve?

Posted 4 years ago | 5717 Views | 10 Replies | 3 Total Likes

This is the first question that I am posting here, so kindly excuse me for two things: one, the question is a bit long, and two, any rules of this community I may be violating. I have an equation which goes like this:

$$\frac{\omega_p^2 I_0(\lambda_p)e^{-\lambda_p}}{k^2}\left(1+\frac{\omega}{\sqrt{2}\cos\theta}\,Z\!\left(\frac{\omega}{\sqrt{2}\cos\theta}\right)\right)=0,$$

where $\lambda_p = k^2 \sin^2\theta$ and $Z(x) = i\sqrt{\pi}e^{-x^2}(1+\mathrm{erf}(ix))$, erf(ix) being the error function. I want to find the values of $\omega$ for various values of $k$. For this I am using NSolve, and the Mathematica code I have written is attached to this query. I am getting the error message "ReplaceAll::reps". I tried my level best to get rid of this. Any help in this regard will be highly appreciated. Thanks in advance.

Attachments:

10 Replies

Posted 4 years ago

NSolve deals primarily with linear and polynomial equations. Use FindRoot.

Attachments:

Posted 4 years ago

Thanks very much for explaining where and when I have to use the NSolve command. Does the Chop@ that you are using in the program mean that you are finding the roots that are close to zero? What if some roots are far removed from zero? Also, I would like to ask you one more question, if you can help me with it.
Actually, I am trying to find the solution of an equation which goes like this:

$$1+\sum_{L=-10}^{+10}\frac{\omega_p^2 I_L(\lambda_p)e^{-\lambda_p}}{k^2}\left(1+\frac{\omega}{\sqrt{2}\cos\theta}\,Z(\xi_p)\right)+\sum_{L=-10}^{+10}\frac{z^2\,nip\,mpi\,\omega_p^2 I_L(\lambda_i)e^{-\lambda_i}}{k^2\,Tip\,mpi}\left(1+\frac{\omega-k\,ui\cos\theta}{\sqrt{2}\,k\cos\theta}\,Z(\xi_i)\right)$$
$$+\frac{2\,nep\,\omega_p^2}{k^2\,\frac{2\kappa-3}{2\kappa}Tep}\left(\frac{2\kappa-1}{2\kappa}+\frac{\omega}{k\sqrt{\frac{2\kappa-3}{\kappa}Tep\,mpe}}\,Z(\xi_e)\right)=0,$$

where $\omega_p, nip, mpi, Tip, Tep, mpe, ui, \theta, nep, \kappa$ are predefined values and $\lambda_p, \lambda_i, \xi_p, \xi_i, \xi_e$ are as defined in the program, which I am attaching with this reply. Some of $\lambda_p, \lambda_i, \xi_p, \xi_i, \xi_e$ are functions of $k$ and/or $\omega$. What I want is a plot of $\mathrm{Re}\,\omega$ vs $k$ and of $\mathrm{Im}\,\omega$ vs $k$. And many thanks for the first reply...

Attachments:

Posted 4 years ago

I use Chop to remove zeros from the imaginary part of omega. I do not know if my solution is the right one.

Attachments:

Posted 4 years ago

Thanks a lot again for that timely help. One mistake I made was that in et[$\omega$, $k$] it should have been $nep$ instead of $Nep$. Anyway, that doesn't matter, as it's working fine. One question: I am expecting more than one value of $\omega$ for a particular value of $k$, which will be quite evident once I simplify the expression. How can that be done? Also, you are using the command {\[Omega], 1/10, 1} in Table. Does it stand for the initial guess? If so, does 1/10 stand for the real part and 1 for the imaginary part? What are those messages we get if we do not use the // Quiet command? Thank you...
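Outside Mathematica, the same workflow — evaluate the plasma dispersion function and track a complex root across a range of $k$ — can be sketched in Python, assuming SciPy is available: $Z(z) = i\sqrt{\pi}\,w(z)$, where $w$ is the Faddeeva function `scipy.special.wofz`. Everything below is an illustrative assumption, not the thread's actual program: the relation is cut down to a single-species Langmuir-type term rather than the full multi-species sum above, the $k$ range and the Bohm-Gross seed are arbitrary choices, and a plain complex Newton iteration stands in for FindRoot, with each converged root reused as the starting guess for the next $k$ (simple parameter continuation):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z); Z(z) = i*sqrt(pi)*w(z)

def Z(z):
    """Plasma dispersion function Z(z) = i*sqrt(pi)*exp(-z^2)*(1 + erf(iz))."""
    return 1j * np.sqrt(np.pi) * wofz(z)

def newton(f, z0, tol=1e-12, max_iter=200, h=1e-7):
    """Complex Newton iteration with a numerical derivative (FindRoot stand-in)."""
    z = complex(z0)
    for _ in range(max_iter):
        step = f(z) * h / (f(z + h) - f(z))
        z -= step
        if abs(step) < tol:
            break
    return z

# Toy one-species electrostatic (Langmuir) relation standing in for the full sum:
#   eps(w, k) = 1 + (1 + xi * Z(xi)) / k^2 = 0,  with  xi = w / (sqrt(2) * k)
def eps(w, k):
    xi = w / (np.sqrt(2) * k)
    return 1 + (1 + xi * Z(xi)) / k**2

ks = np.linspace(0.3, 0.5, 5)
guess = np.sqrt(1 + 3 * ks[0]**2)   # Bohm-Gross estimate seeds the first k
roots = []
for k in ks:
    guess = newton(lambda w: eps(w, k), guess)  # reuse last root as next seed
    roots.append(guess)

for k, w in zip(ks, roots):
    # Re(w) vs k and Im(w) vs k are exactly the two curves asked for above;
    # the negative imaginary part is the Landau damping rate
    print(f"k = {k:.2f}  Re(w) = {w.real:.4f}  Im(w) = {w.imag:.4f}")
```

For the full multi-species relation, only the function `eps` would change; the continuation loop and the tabulated real/imaginary parts stay the same.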
Posted 4 years ago

How to simplify your equation's expression? I don't know. Real or complex starting points do not change the root finding. The "Secant" method in FindRoot needs 2 starting points. I increased WorkingPrecision to 30, so there are no longer any warning messages. The Quiet function evaluates expr "quietly", without actually outputting any messages that are generated. If you want more about FindRoot, read this.

Attachments:

Posted 4 years ago

What I mean by simplifying is that I can do it mathematically; that's okay in the sense that I can do it. Now just one more question: I have a simple equation which goes like this: $x^2+3x+2=0$. I know that the roots of this equation are -2 and -1. When I use Solve[x^2 + 3 x + 2 == 0, x], I get the roots as expected: -2 and -1. Now if I write the code like this: FindRoot[x^2 + 3 x + 2 == 0, {x, 20}], then no matter what guess value I use, I am not getting the second solution. So in this case, is it better to use the Solve command? That is what is happening in the FindRoot command that we have used in the program...

Posted 4 years ago

Solve and NSolve can't solve your transcendental equation; only FindRoot can. You must use other starting points:

FindRoot[x^2 + 3 x + 2 == 0, {x, -20}]
FindRoot[x^2 + 3 x + 2 == 0, {x, -2}]
FindRoot[x^2 + 3 x + 2 == 0, {x, -1}]
FindRoot[x^2 + 3 x + 2 == 0, {x, 1}]
FindRoot[x^2 + 3 x + 2 == 0, {x, 20}]
FindRoot[x^2 + 3 x + 2 == 0, {x, 1 + I}]
FindRoot[x^2 + 3 x + 2 == 0, {x, -1 + I}]
FindRoot[x^2 + 3 x + 2 == 0, {x, -2 + I}]
FindRoot[x^2 + 3 x + 2 == 0, {x, -20 + I}]

and you find all the roots. Maybe this helps.

Posted 4 years ago

Thanks for the reply... And sorry for the delay...

Posted 4 years ago

You have one Nep in your notebook which has not been assigned a value. Assigning a numeric value to it enables FindRoot to find your solutions. Sometimes in numerically solving equations a value of the form 2.6*10^-17*I or 3.2*10^-16 will appear. Chop will map those values to 0.
This should not stop it from finding solutions which are farther from zero.

Posted 4 years ago

Thanks for looking up the program... Actually, it was a typo. It should have been $nep$ instead of $Nep$. And, as I mentioned earlier in the post, I want to find some roots which are far removed from zero... Thanks...
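The advice in the replies above — seed FindRoot from several different starting points to collect all the roots — carries over to any local root finder. A sketch of the same idea in Python on the thread's own quadratic, using a hand-rolled complex Newton iteration (not FindRoot itself) and deduplicating the converged results:

```python
def newton(f, z0, tol=1e-12, max_iter=100, h=1e-7):
    """Local root finder; like FindRoot, it returns a root near its seed."""
    z = complex(z0)
    for _ in range(max_iter):
        step = f(z) * h / (f(z + h) - f(z))
        z -= step
        if abs(step) < tol:
            break
    return z

f = lambda x: x**2 + 3 * x + 2   # the thread's example; roots are -1 and -2

# Seed from several starting points (mirroring the FindRoot list above)
# and keep only distinct, genuinely converged results
found = []
for seed in (-20, -2.5, -1.2, 1, 20, -1 + 1j):
    r = newton(f, seed)
    if abs(f(r)) < 1e-9 and not any(abs(r - s) < 1e-6 for s in found):
        found.append(r)

print(sorted(found, key=lambda z: z.real))   # the two roots, near -2 and -1
```

The residual check `abs(f(r)) < 1e-9` matters: a local iteration that fails to converge from a bad seed would otherwise pollute the list, which is the same reason the Mathematica replies check which root each starting point actually lands on.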