https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-2-solving-equations-2-5-literal-equations-and-formulas-practice-and-problem-solving-exercises-page-114/57
## Algebra 1: Common Core (15th Edition), published by Prentice Hall

# Chapter 2 - Solving Equations - 2-5 Literal Equations and Formulas - Practice and Problem-Solving Exercises - Page 114: 57

#### Answer

$y=3$

#### Work Step by Step

$2(y-4)=-4y+10$
$2y-8=-4y+10$
$-8=-6y+10$
$-18=-6y$
$y=3$

We first use the distributive property on the left side of the equation to simplify. We then subtract $2y$ from both sides. Next, we subtract $10$ from both sides. Finally, we divide both sides by $-6$ to get $y=3$.
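The worked solution can be checked numerically by substituting $y=3$ back into the original equation:

```python
# Quick numeric check of the worked solution: substituting y = 3
# into 2(y - 4) = -4y + 10 should make both sides equal.
y = 3
lhs = 2 * (y - 4)
rhs = -4 * y + 10
print(lhs, rhs)  # both sides evaluate to -2
```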
2019-12-10 01:54:38
https://gap-packages.github.io/kan/README.html
# kan: including double coset rewriting systems

Version 1.27

This project is maintained by Christopher D. Wensley

README file for the KAN package
===============================

Introduction
------------
This package was conceived as a package for computing induced actions of categories. This version only deals with double coset rewriting systems for finitely presented groups.

History
-------
This package started out as part of Anne Heyworth's thesis in 1999. The first version on general release was 0.91, in July 2005. "Kan" became an accepted package in May 2015. A more detailed history is included as Chapter 3 of the manual, and in the CHANGES file.

Distribution
------------
The "Kan" package is distributed with the accepted GAP packages: see http://www.gap-system.org/Packages/kan.html
It may also be obtained from the GitHub repository at: http://gap-packages.github.io/kan/

Copyright
---------
The "Kan" package is Copyright © Chris Wensley and Anne Heyworth, 1996--2016. "Kan" is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

Installation
------------
1) Unpack `kan-.tar.gz` in the `pkg` subdirectory of the GAP root directory.
2) From within GAP, load the package with:

    gap> LoadPackage( "kan" );

If you have a question relating to "Kan", or encounter any problems, please contact the package maintainer.
2017-03-28 19:47:53
https://en.wikipedia.org/wiki/Probability_of_exceedance
Frequency of exceedance The frequency of exceedance, sometimes called the annual rate of exceedance, is the frequency with which a random process exceeds some critical value. Typically, the critical value is far from the mean. It is usually defined in terms of the number of peaks of the random process that are outside the boundary. It has applications related to predicting extreme events, such as major earthquakes and floods. Definition The frequency of exceedance is the number of times a stochastic process exceeds some critical value, usually a critical value far from the process's mean, per unit time.[1] Counting exceedances of the critical value can be accomplished either by counting peaks of the process that exceed the critical value[1] or by counting upcrossings of the critical value, where an upcrossing is an event where the instantaneous value of the process crosses the critical value with positive slope.[1][2] This article assumes the two methods of counting exceedance are equivalent and that the process has one upcrossing and one peak per exceedance. However, processes, especially continuous processes with high-frequency components to their power spectral densities, may have multiple upcrossings or multiple peaks in rapid succession before the process reverts to its mean (Richardson 2014, pp. 2029–2030). Frequency of exceedance for a Gaussian process Consider a scalar, zero-mean Gaussian process y(t) with variance σy2 and power spectral density Φy(f), where f is a frequency. Over time, this Gaussian process has peaks that exceed some critical value ymax > 0. 
Counting the number of upcrossings of ymax, the frequency of exceedance of ymax is given by[1][2] ${\displaystyle N(y_{\max })=N_{0}e^{-{\tfrac {1}{2}}({\tfrac {y_{\max }}{\sigma _{y}}})^{2}}.}$ N0 is the frequency of upcrossings of 0 and is related to the power spectral density as ${\displaystyle N_{0}={\sqrt {\frac {\int _{0}^{\infty }{f^{2}\Phi _{y}(f)\,df}}{\int _{0}^{\infty }{\Phi _{y}(f)\,df}}}}.}$ For a Gaussian process, the approximation that the number of peaks above the critical value and the number of upcrossings of the critical value are the same is good for ymax/σy > 2 and for narrow band noise.[1] For power spectral densities that decay less steeply than f−3 as f→∞, the integral in the numerator of N0 does not converge. Hoblit gives methods for approximating N0 in such cases with applications aimed at continuous gusts.[3] Time and probability of exceedance As the random process evolves over time, the number of peaks that exceeded the critical value ymax grows and is itself a counting process. For many types of distributions of the underlying random process, including Gaussian processes, the number of peaks above the critical value ymax converges to a Poisson process as the critical value becomes arbitrarily large. The interarrival times of this Poisson process are exponentially distributed with rate of decay equal to the frequency of exceedance N(ymax).[4] Thus, the mean time between peaks, including the residence time or mean time before the very first peak, is the inverse of the frequency of exceedance, N−1(ymax). 
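The exceedance formula above can be sketched numerically. The values of N0, σy, and ymax below are made up for illustration:

```python
import math

# Sketch of the Gaussian-process exceedance formula above, with
# illustrative (assumed) values: zero-upcrossing frequency N0 in Hz,
# process standard deviation sigma_y, and critical value y_max.
N0 = 10.0        # upcrossings of zero per second (assumed)
sigma_y = 1.0    # process standard deviation (assumed)
y_max = 3.0      # critical value, here 3 sigma

# Frequency of exceedance: N(y_max) = N0 * exp(-(y_max / sigma_y)^2 / 2)
N_ymax = N0 * math.exp(-0.5 * (y_max / sigma_y) ** 2)

# Mean time between exceedance peaks is the reciprocal of N(y_max).
mean_time_between_peaks = 1.0 / N_ymax
print(N_ymax, mean_time_between_peaks)
```

With these values a 3-sigma exceedance occurs roughly once every nine seconds, even though zero is crossed ten times per second.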
If the number of peaks exceeding ymax grows as a Poisson process, then the probability that at time t there has not yet been any peak exceeding ymax is e−N(ymax)t.[5] Its complement, ${\displaystyle p_{ex}(t)=1-e^{-N(y_{\max })t},}$ is the probability of exceedance, the probability that ymax has been exceeded at least once by time t.[6][7] This probability can be useful to estimate whether an extreme event will occur during a specified time period, such as the lifespan of a structure or the duration of an operation. If N(ymax)t is small, for example for the frequency of a rare event occurring in a short time period, then ${\displaystyle p_{ex}(t)\approx N(y_{\max })t.}$ Under this assumption, the frequency of exceedance is equal to the probability of exceedance per unit time, pex/t, and the probability of exceedance can be computed by simply multiplying the frequency of exceedance by the specified length of time. Applications • Probability of major earthquakes[8] • Weather forecasting[9] • Hydrology and loads on hydraulic structures[10] Notes 1. Hoblit 1988, pp. 51–54. 2. ^ a b Rice 1945, pp. 54–55. 3. ^ Hoblit 1988, pp. 229–235. 4. ^ Leadbetter 1983, pp. 176, 238, 260. 5. ^ Feller 1968, pp. 446–448. 6. ^ Hoblit 1988, pp. 65–66. 7. ^ Richardson 2014, p. 2027. 8. ^ Earthquake Hazards Program (2016). "Earthquake Hazards 101 – the Basics". U.S. Geological Survey. Retrieved April 26, 2016. 9. ^ Climate Prediction Center (2002). "Understanding the "Probability of Exceedance" Forecast Graphs for Temperature and Precipitation". National Weather Service. Retrieved April 26, 2016. 10. ^ Garcia, Rene (2015). "Section 2: Probability of Exceedance". Hydraulic Design Manual. Texas Department of Transportation. Retrieved April 26, 2016. 11. ^ Hoblit 1988, Chap. 4. References • Hoblit, Frederic M. (1988). Gust Loads on Aircraft: Concepts and Applications. Washington, DC: American Institute of Aeronautics and Astronautics, Inc. ISBN 0930403452. • Feller, William (1968). An Introduction to Probability Theory and Its Applications. Vol. 1 (3rd ed.). New York: John Wiley and Sons. ISBN 9780471257080. • Leadbetter, M. R.; Lindgren, Georg; Rootzén, Holger (1983). Extremes and Related Properties of Random Sequences and Processes. New York: Springer–Verlag. ISBN 9781461254515. • Rice, S. O. (1945). "Mathematical Analysis of Random Noise: Part III Statistical Properties of Random Noise Currents". Bell System Technical Journal. 24 (1): 46–156. doi:10.1002/(ISSN)1538-7305. • Richardson, Johnhenri R.; Atkins, Ella M.; Kabamba, Pierre T.; Girard, Anouck R. (2014). "Safety Margins for Flight Through Stochastic Gusts". Journal of Guidance, Control, and Dynamics. AIAA. 37 (6): 2026–2030. doi:10.2514/1.G000299.
2019-11-20 13:57:53
http://pit-contents.iitp.ru/1-16.html
# PROBLEMS OF INFORMATION TRANSMISSION

A translation of Problemy Peredachi Informatsii

Volume 52, Number 1, January–March, 2016

CONTENTS

- Remark on "On the Capacity of a Multiple-Access Vector Adder Channel" by A.A. Frolov and V.V. Zyablov, by E. A. Bakin and G. S. Evseev, pp. 1–5
- Upper Bound on the Minimum Distance of LDPC Codes over $\mathop{\it GF}(q)$ Based on Counting the Number of Syndromes, by A. A. Frolov, pp. 6–13
- Analysis of Queues with Hyperexponential Arrival Distributions, by V. N. Tarasov, pp. 14–23
- Lattice Flows in Networks, by V. D. Shmatkov, pp. 24–38
- Simple One-Shot Bounds for Various Source Coding Problems Using Smooth Rényi Quantities, by N. A. Warsi, pp. 39–65
- Interactive Function Computation via Polar Coding, by T. C. Gülcü and A. M. Barg, pp. 66–91
- Time Series Prediction Based on Data Compression Methods, by A. S. Lysyak and B. Ya. Ryabko, pp. 92–99
- Erratum to: "On One Method for Constructing a Family of Approximations of Zeta Constants by Rational Fractions" [Problems of Information Transmission 51, 378 (2015)], by E. A. Karatsuba, pp. 100–101
2022-05-26 08:41:06
https://simple.m.wikipedia.org/wiki/Cherenkov
Cherenkov radiation, also known as Vavilov–Cherenkov radiation[1] (/tʃəˈrɛŋkɒf/;[2] Russian: Черенков), is a type of electromagnetic radiation produced by charged particles when they pass through an optically transparent medium at a speed which is greater than the speed of light in that medium.[3] (This does not violate special relativity, because the refractive index reduces the speed of light in the medium, so the particle does not have to travel faster than the speed of light in a vacuum.) It is named after Pavel Alekseyevich Čerenkov, who discovered this phenomenon in 1934 under the supervision of Sergey Vavilov. Igor Tamm and Ilya Frank developed a theory of this effect in 1937. Pavel Čerenkov, Igor Tamm and Ilya Frank shared the 1958 Nobel Prize in Physics for their contributions to Cherenkov radiation.[4] NRC photo of the Cherenkov effect in the Reed Research Reactor Theory According to special relativity, a particle cannot move faster than the speed of light in a vacuum. However, when light travels in a transparent medium (such as water or glass), it moves more slowly than it would in a vacuum. This means that particles can actually move faster than the speed of light in certain media. When a particle with an electric charge moves faster than light in a medium which can be polarized, it causes the medium to emit photons (light particles) and thereby loses energy. The emitted photons can be measured, as they are simply light. Explanation The formation of Cherenkov radiation is analogous to the bow wave caused by a power boat traveling faster than the speed of water waves, or to the shock wave (sonic boom) produced by an airplane traveling faster than the speed of sound in air.[5] (Images: a bow wave in water; a sonic boom cloud; the formation of a sonic boom.) When an aircraft passes through the air, it creates a series of pressure waves in front of the aircraft and behind it. 
These waves travel at the speed of sound and, as the speed of the object increases, the waves get compressed because they cannot get out of each other's way quickly enough. When the aircraft reaches the speed of sound, the pressure waves merge into a single shock wave. As the velocity keeps increasing, the single shock wave extends mostly to the rear and spreads from the craft in a restricted widening cone.[6] We hear this as a sonic boom. Similarly, a bow wave forms when something moves through a fluid at a speed greater than the speed of a wave moving across the fluid.[7] The mechanism of Cherenkov radiation is the same, but it occurs for light waves.[8][9] When a charged particle moves inside a polarizable medium, it excites some of the electrons of that medium. As the excited electrons return to their ground state, they emit electromagnetic radiation. According to the Huygens principle, the emitted waves move out spherically at the phase velocity of the medium. If the particle moves faster than the speed of light in that medium, the emitted waves add up, and radiation is emitted at an angle with respect to the particle's direction; this is known as Cherenkov radiation.[10][11][12] Because nothing can move faster than light in a vacuum, there is no Cherenkov light in a vacuum. However, since light in water moves at only about 75% of its speed in a vacuum, particles with very high energy are able to move faster than light through water and create Cherenkov light. The reason Cherenkov light often appears blue is that its intensity is proportional to frequency: the higher the frequency, the stronger the radiation. Because higher-frequency light has shorter wavelengths, and blue light has one of the shortest wavelengths of visible light, Cherenkov light is usually blue. 
Emission angle (Images: an idealized depiction of Cherenkov radiation; the emission angle with respect to the first image.) In the first image, a charged particle (red) moves at a speed ${\displaystyle v_{\text{p}}}$ where ${\displaystyle c/n<v_{\text{p}}<c}$. The ratio between the speed of the particle and the speed of light is ${\displaystyle \beta =v_{\text{p}}/c}$. In this medium, the velocity of light is (n is the refractive index) ${\displaystyle v_{\text{em}}=c/n}$. The left side of the image is the initial point (t = 0) and the right side is the location of the particle after time t. So, the distance passed by the particle in time t is ${\displaystyle x_{\text{p}}=v_{\text{p}}t=\beta \,ct}$ and the distance passed by the light is ${\displaystyle x_{\text{em}}=v_{\text{em}}t={\frac {c}{n}}t}$. So, the emission angle is (using trigonometry) ${\displaystyle \cos \theta ={\frac {x_{\text{em}}}{x_{\text{p}}}}={\frac {1}{n\beta }}}$. We can also derive the emission angle from the second image. References 1. Cherenkov, P. A. (1934). "Visible emission of clean liquids by action of γ radiation". Doklady Akademii Nauk SSSR. 2: 451. Reprinted in Selected Papers of Soviet Physicists, Usp. Fiz. Nauk 93 (1967) 385. V sbornike: Pavel Alekseyevich Čerenkov: Chelovek i Otkrytie pod redaktsiej A. N. Gorbunova i E. P. Čerenkovoj, M., Nauka, 1999, s. 149-153. (ref Archived 22 October 2007 at the Wayback Machine) 2. "Cherenkov". Dictionary.com Unabridged. Random House. Retrieved 26 May 2020. 3. "Cherenkov radiation". Encyclopedia Britannica. 4 October 2018. 4. "Pavel A. Cherenkov". The Nobel Prize. 5. Gibbs, Philip. "Is there an equivalent of the sonic boom for light?". Archived from the original on 8 November 2020. Retrieved 30 October 2020. 6. "Sonic boom". Britannica. 16 December 2008. 7. "Bow wave". Britannica. 29 November 2016.
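The emission-angle formula can be evaluated numerically. The refractive index of water and the particle speed below are illustrative values, not taken from the article:

```python
import math

# Sketch of the emission-angle formula cos(theta) = 1 / (n * beta),
# using water (n ~ 1.33) and a fast particle (beta = 0.99) as
# illustrative (assumed) values.
n = 1.33      # refractive index of water (approximate)
beta = 0.99   # particle speed as a fraction of c (assumed)

# Cherenkov radiation requires beta > 1/n, i.e. the particle must be
# faster than light in the medium.
assert beta > 1 / n

theta = math.acos(1 / (n * beta))
print(math.degrees(theta))  # roughly 40 degrees for these values
```

For beta closer to 1/n the angle shrinks toward zero, which is why the Cherenkov angle can be used to measure particle speed.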
2022-01-27 18:31:46
http://tex.stackexchange.com/questions/93809/place-two-logo-on-top-right-and-top-left-in-beamer-presentation?answertab=active
# Place two logos at the top right and top left in a beamer presentation

I have a beamer presentation and use the PaloAlto theme. As you can see in the code sample, I have two logos placed over each other. I'd like to place one on the left-hand side and the other on the right. Would you please help?

```latex
\logo{\includegraphics[height=0.8cm]{images/city_magazine_custom_logo}\hspace{-60pt}%
\includegraphics[height=0.8cm]{images/ses50}\hspace{-10pt}%
}
```

- Welcome to TeX.sx! A tip: If you indent lines by 4 spaces, they'll be marked as a code sample. You can also highlight the code and click the "code" button (with "{}" on it). – egreg Jan 14 '13 at 21:54

Here's one possibility, redefining the headline template used by the PaloAlto theme; I defined a command \logoii to be used for the second logo in a fashion completely analogous to the standard \logo command (I used the same image for both logos in my example, but, of course, you can use whatever images you want instead):

```latex
\documentclass{beamer}
\usetheme{PaloAlto}

\makeatletter
\newcommand\insertlogoii{}
\newcommand\logoii[1]{\renewcommand\insertlogoii{#1}}
\setbeamertemplate{headline}
{%
  \begin{beamercolorbox}[wd=\paperwidth]{frametitle}
    \ifx\beamer@sidebarside\beamer@lefttext%
    \else%
      \hfill%
    \fi%
    \ifdim\beamer@sidebarwidth>0pt%
      \usebeamercolor[bg]{logo}%
      \hskip-\beamer@sidebarwidth%
      \hbox to \beamer@sidebarwidth{\hss\vbox to
        \beamer@headheight{\vfil\hbox{\color{fg}\insertlogo}\vfil}\hss}%
      \hfill%
      \hskip-\beamer@sidebarwidth%
      \hbox to \beamer@sidebarwidth{\hss\vbox to
        \beamer@headheight{\vfil\hbox{\color{fg}\insertlogoii}\vfil}\hss}%
    \else%
    \fi%
  \end{beamercolorbox}
}
\makeatother

\title{The Title}
\subtitle{CTAN lion drawing by Duane Bibby}
\author{The Author}
\logo{\includegraphics[height=0.8cm]{ctanlion}}
\logoii{\includegraphics[height=0.8cm]{ctanlion}}

\begin{document}

\begin{frame}
\maketitle
\end{frame}

\end{document}
```

- Thanks a lot, Gonzalo. Thank you very much. I accepted it as the answer, but I don't have enough points to vote up. – user1098135 Jan 15 '13 at 7:18
- @user1098135 You're welcome! I'm glad I could help! Thanks for accepting the answer. – Gonzalo Medina Jan 15 '13 at 20:06

Motivated by "Positioning logo in the front page as well as slides", I decided to post an answer using the tikz package. If one decides to use the tikz package, one can add different logos to different parts of a slide. The code is practically the same as Gonzalo Medina's work, but I added a few lines.

```latex
\documentclass{beamer}
\usetheme{Berkeley}
\usepackage{tikz}

\title{The Title}
\author{The Author}
\institute{The Institute}

\begin{document}

{
\setbeamertemplate{logo}{}
\begin{frame}
\maketitle
\end{frame}
}

\logo{%
\begin{tikzpicture}[remember picture,overlay]
  \node[anchor=north west,yshift=-4pt] at (current page.north west)
    {\includegraphics[height=0.9cm]{example-image-a}};
\end{tikzpicture}%
\begin{tikzpicture}[remember picture,overlay]
  \node[anchor=north east,yshift=-4pt] at (current page.north east)
    {\includegraphics[height=0.9cm]{example-image-b}};
\end{tikzpicture}}

\begin{frame}{Motivation}
Now the logo is visible
\end{frame}

\begin{frame}{Motivation}
\framesubtitle{A}
Now the logo is visible
\end{frame}

\end{document}
```

- This answer was moved from a duplicate using the Berkeley theme. The code is practically the same as Gonzalo Medina's (since both themes use the sidebar outer theme), but maybe it adds some value for the interested reader (no background color on the right, and control over the width/height of the logos).

This might work: I will add a longer explanation of how I came to this solution (and I am by no means a beamer expert), so others can see how to approach problems like this. First, you see that the used theme is "Berkeley", so go into the definition file beamerthemeBerkeley.sty (just google it and you'll find the code). There we can see that the outer theme (the one responsible for the sidebar and title and all) is "sidebar", so next we take a look at beamerouterthemesidebar.sty and search for the keyword logo, and see that it is used in the headline definition. 
What I did then is edit the definition to add another logo (logoright) on the right side (by copy and paste, with an \hfill in between) and define some commands to set the logo. Just adjust the size of the images to your liking. If you want to adjust the height or width (or both) of the images to the header/sidebar, you can set the logos as in the following example:

```latex
\documentclass{beamer}
\usetheme{Berkeley}

\makeatletter
\def\insertlogoright{\usebeamertemplate*{logoright}}
\def\logoright{\setbeamertemplate{logoright}}
\setbeamertemplate{headline}
{%
  \begin{beamercolorbox}[wd=\paperwidth]{frametitle}
    \ifx\beamer@sidebarside\beamer@lefttext%
    \else%
      \hfill%
    \fi%
    \ifdim\beamer@sidebarwidth>0pt%
      \usebeamercolor[bg]{logo}%
      \hskip-\beamer@sidebarwidth%
      \hbox to \beamer@sidebarwidth{%
        \hss%
        \vbox to \beamer@headheight{%
          \vss\hbox{\color{fg}\insertlogo}\vss%
        }%
        \hss}%
      \hfill%
      \hbox to \beamer@sidebarwidth{%
        \hss%
        \vbox to \beamer@headheight{%
          \vss\hbox{\color{fg}\insertlogoright}\vss%
        }%
        \hss}%
    \else%
    \fi%
  \end{beamercolorbox}
}
\makeatother

\logo{\includegraphics[width=1.2cm,keepaspectratio]{example-image-a}}
\logoright{\includegraphics[width=1.2cm,keepaspectratio]{example-image-b}}

\title{The Title}
\author{The Author}
\institute{The Institute}

\begin{document}

{
\setbeamertemplate{logo}{}
\setbeamertemplate{logoright}{}
\begin{frame}
\maketitle
\end{frame}
}

\begin{frame}{this}
test
\end{frame}

\end{document}
```

It's a dirty hack, but one could try something like this:

```latex
\logo{%
  \makebox[1.85\paperwidth]{%
    \hfill%
    \hspace{3em}%
    \includegraphics[width=1cm,keepaspectratio]{example-image-a}%
    \hfill%
    \includegraphics[width=1cm,keepaspectratio]{example-image-b}%
  }%
}
```

The idea: the logo is centered in the top left corner. If you make the box big enough (around 2 times the page width), it will extend to the top right corner. Then place the logos in the center and at the right end of the box.

- If I use different widths for the two logos, they won't align in a horizontal line and one of them goes lower. – user263485 Dec 4 '14 at 11:53
2015-05-26 08:22:15
https://electronics.stackexchange.com/questions/499111/why-any-modern-cpu-masks-5-lower-bits-in-a-cl-register-for-shifting-operations
# Why do modern CPUs mask the lower 5 bits of the CL register for shift operations?

I'm digging into the left and right shift operations in ASM. From the IA-32 Intel Architecture Software Developer's Manual, Volume 3:

> All IA-32 processors (starting with the Intel 286 processor) do mask the shift count to 5 bits, resulting in a maximum count of 31. This masking is done in all operating modes (including the virtual-8086 mode) to reduce the maximum execution time of the instructions.

I'm trying to understand the reasoning behind this logic. Maybe it works this way because, at the hardware level, it is hard to implement a shift by up to 32 (or 64) bit positions in a single cycle? Any detailed explanation would help a lot!

• Why would you want to shift all 32 bits? – Andy aka May 12 at 7:37
• @Andyaka the question is not "why would I want to do so". The question is "why does it work this way?". It just seems weird, since the SSE shift instructions (PSLL* etc.) do not mask the shift count. – No Name QA May 12 at 7:44
• Why would anyone want to shift all 32 bits then? – Andy aka May 12 at 7:57
• @Andyaka because it leads us to an inconsistency in shifting behavior. Please, if you know the answer, tell me. If not, then stop trolling. – No Name QA May 12 at 11:16
• Part of the reason is that C has banned shifting beyond the word length, so there is no need for other behaviors. Also, shifting beyond the word length would probably return 0 and can easily be handled as a conditional. – user3528438 May 12 at 14:27
• Thank you! In your last sentence that allows shifting of 16-bit registers via carry, you specify 16-bit registers. So in the old days we could do x << 16 = 0, right? If so, why did they implement x << 32 = x? – No Name QA May 12 at 9:29
• Thank you for the clarification. I'm sorry, but I'm not a native English speaker and it is very hard for me to understand your first sentence: "Because you have 16 bit registers, you must be able to shift 16 bit positions at minimum to be compatible, for example shifting any bit to carry bit." Could you please rewrite it in a simpler or more detailed manner? – No Name QA May 12 at 16:42
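The 5-bit masking described in the quoted manual text can be emulated directly. Python integers are unbounded, so the 32-bit register behavior is modeled explicitly here:

```python
# Sketch of the 5-bit shift-count masking described above: for 32-bit
# operands, x86 SHL uses only the low 5 bits of CL, so a count of 32
# behaves like a count of 0 and a count of 33 like a count of 1.
def shl32(x, count):
    masked = count & 0x1F              # keep only the low 5 bits of the count
    return (x << masked) & 0xFFFFFFFF  # truncate the result to 32 bits

print(hex(shl32(1, 1)))    # 0x2
print(hex(shl32(1, 32)))   # 32 & 0x1F == 0, so the value is unchanged: 0x1
print(hex(shl32(1, 33)))   # behaves like a shift by 1: 0x2
```

This is exactly the "inconsistency" the asker mentions: a notional shift by 32 does not clear the register, it leaves it untouched, whereas the SSE shifts (PSLL* etc.) would produce zero.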
2020-09-19 09:41:03
https://datascience.stackexchange.com/questions/28650/removing-categorial-features-in-linear-regression
# Removing Categorial Features in Linear Regression This is more of a design question regarding linear regression. Here is some info on our dataset: • Our dataset has 8 features; 3 of them being categorical. We are willing to perform linear regression to fit our target data. • We have tried including all of our 8 features (categorical ones being encoded in integer) and doing the linear regression. • We have also tried taking out the categorical features and running the linear regression algorithm for each possible combination of our 3 categorical features. That of course yields in a lot of regression runs (each category has 4 possible values; so 64 to be exact). The latter approach gave us better results. So, fixing the categorical features and creating a new dataset for each combination turned out to give better estimations. What does this tell us about our data? • Categorical features make the data non-linear, so they should indeed be taken out? For example, there is a complex relationship between categorical feature A and categorical feature B that can't be captured in linear regression? • If I am forced to step out of linear regression, which algorithm should I try to apply? I would prefer to have one dataset with 8 features instead of having 64 datasets with 5 features. There should be an algorithm that can capture this model. • What do you mean by 'better results'? A lower $R^2$? – Elias Strehle Mar 5 '18 at 17:29 • You should think in scope of your research question: what am I trying to solve by using linear regression? Linear regression doesn’t care about the form of the predictor variables as long as your residuals are approximately iid Normally distributed. – Jon Mar 5 '18 at 21:08 • What's wrong with ANOVA? It's considered a subset of multiple regression where the categorical factors are evaluated, depending on the specficition, either against an intercept or against the base value of the factor. 
– DJohnson Mar 8 '18 at 17:08

I think using linear regression is not a good option here because:

1. It performs well only on numeric variables (categorical ones must be converted to binary).
2. It cannot handle missing data (you would have to drop those records).
3. If, by chance, a category never appears in the training data, the model cannot predict for it. For example: a variable has 4 categories, and the random training sample happens to pick up only 3 of them; if the test set contains the 4th category, it will throw an error. So we should be careful about how the data is divided between test and train.

Now, other algorithms that are available:

1. Random Forest (you cannot use this when any categorical variable has more than 52 categories; in your case it shouldn't be an issue)
2. XGBoost

Do let me know if you need any additional explanation.

Sounds like you have a lot of complex categorical variables in your model. Here's what I would do to see which ones are significant and which ones are not. For each of the 3 categorical variables, you only need 3 binary variables to represent the 4 options: if all 3 binary variables are 0, then the fourth category is implied, which simplifies the model a little. Here's what I would do:

1) Run a regression model for each categorical variable using its binary variables. You'll have 3 models in total.

2) Run these models with backwards stepwise regression. Analyze the models and look for similarities or patterns; maybe something will jump out at you.

3) After all the models are run using this selection method, run a final regression with backwards stepwise selection using only the significant variables from the previous runs.

This will leave you with a final model (and results) without cluttering the regression with all 64 combinations of the categorical variables. If the remaining set of variables is still too cumbersome, discuss or highlight only the most significant independent variables, or trim the list some other way. Good luck, let us know how it goes!
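To make the "3 binaries for 4 options" scheme above concrete, here is a minimal hand-rolled sketch; the category labels and level order are made up for illustration:

```python
def dummy_encode(values, levels):
    """Drop-first dummy coding: a k-level category becomes k-1 binaries.

    The dropped first level is represented by all zeros, which keeps the
    binary columns linearly independent once an intercept is in the model.
    """
    keep = levels[1:]  # 3 columns for a 4-level category
    return [[1 if v == lvl else 0 for lvl in keep] for v in values]

levels = ["a", "b", "c", "d"]          # one hypothetical 4-level feature
rows = dummy_encode(["a", "b", "d"], levels)
# "a" (the dropped level) -> [0, 0, 0]; "b" -> [1, 0, 0]; "d" -> [0, 0, 1]
```

In practice a library routine (e.g. a drop-first dummy encoder from your statistics package) does the same thing, but the logic is exactly this lookup.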
Encoding categorical variables as integers is generally bad for linear regression, because the model will interpret that to mean that category 2 has twice the effect of category 1, and so on, which is not necessarily true. It isn't surprising that you got bad results.

A better approach is to encode your categories with dummy variables. Let's say your categorical variables are C1, C2, and C3, each taking values from 1 to 4. Then we can have twelve 0/1 dummy variables, one for each possible value of each categorical variable. For any input, exactly three dummy variables will be 1 and the rest will be zero. Your linear regression now looks like:

$$\hat{y}=a_1 d_{C11}+a_2 d_{C12}+a_3 d_{C13}+a_4 d_{C14} + a_5 d_{C21}+a_6 d_{C22}+a_7 d_{C23}+a_8 d_{C24} + a_9 d_{C31}+a_{10} d_{C32}+a_{11} d_{C33}+a_{12} d_{C34} + a_{13} x_1+a_{14} x_2+a_{15} x_3+a_{16} x_4+a_{17} x_5$$

where $x_1$ through $x_5$ are your numerical inputs. If a given input has C1=1, C2=4, and C3=3, for example, then this reduces to:

$\hat{y}=a_1+a_8+a_{11}+a_{13} x_1+a_{14} x_2+a_{15} x_3+a_{16} x_4+a_{17} x_5$

It's also possible to do the same thing with 64 dummy variables, one for each possible combination of the categorical variables as you were doing, but in a single linear regression as above. If you are still not getting good results with linear regression, then consider using Gradient Boosting Regression Trees.

• What do the $a$ variables stand for? Also, is this one-hot encoding? – mLstudent33 Apr 22 '19 at 10:46
• Also wouldn't there be 3 dummy variables that are "1" and 9 that are "0" for C1, C2 and C3 each with 4 categories? I.e. if there were C1, C2, C3 and C4, we would end up with four dummy variables that are "1" with the rest "0". – mLstudent33 Apr 22 '19 at 11:12
• Yes, the features here are the concatenated one-hot encodings of the categorical variables. The reason I have 17 variables is because the original question specifies 3 categorical variables each taking 4 possible values and 5 numerical variables, so 3*4+5 = 17.
– Imran Apr 23 '19 at 21:54

What I would do to optimise the performance of linear regression:

1. One-hot encode the categorical features
2. Use PCA to reduce the dimensionality of the data
3. Scale the data (subtract the mean, divide by the standard deviation)
4. Train the regression model on the reduced, scaled dataset

If you have enough data (>10k examples), you could even train a neural network on the data to capture the complex relationships between features which linear regression wouldn't capture.

> We have tried including all of our 8 features (categorical ones being encoded in integer) and doing the linear regression.

If that is not dummy encoding and your categories cannot be ranked, that is wrong. For example, mapping [apples, bananas, strawberries] -> [0, 1, 2] will be incorrect for almost all tasks.

> We have also tried taking out the categorical features and running the linear regression algorithm for each possible combination of our 3 categorical features. That of course yields a lot of regression runs (each category has 4 possible values; so 64 to be exact).

This also needs to be revised: if some of your combinations have a very small number of cases, you cannot trust the results. So:

> Our dataset has 8 features; 3 of them being categorical. We are willing to perform linear regression to fit our target data.

Do dummy encoding. If you see some strict relationship between categories you can also add features on top of the one-hot dummy encoding:

category A - category B
1 - 1
2 - 0
2 - 3
1 - 1
...

Possibly category A and category B here are strongly correlated whenever both of them take category 1. Create a new feature for this case.

> If I am forced to step out of linear regression, which algorithm should I try to apply? I would prefer to have one dataset with 8 features instead of having 64 datasets with 5 features. There should be an algorithm that can capture this model.

Forests and XGBoost, as mentioned earlier. For these you do not need one-hot or dummy encoding.
By the way, simple decision trees may give you a beautiful picture of the relationships between categories and their influence on the target variable. Try a simple neural network after dummy and one-hot encoding too.

All the answers mentioned are great, but here is what I (a noob) would do:

• Go with RF first to get the feature importances.
• Then plot hierarchical clustering and the dendrograms using SciPy to see which columns are close to each other, as the model sees them.
• After that, go with CatBoost to give your model the final touch. CatBoost is really effective when you have categorical data; it will handle all of it automatically (try it if you haven't).
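To see the integer-coding pitfall from the answers above in action, here is a self-contained sketch on synthetic data; plain NumPy least squares stands in for a linear-regression fit, and the per-level effects are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# One 4-level categorical feature with non-monotone per-level effects,
# chosen to show why a single 0..3 integer column misleads a linear model.
codes = rng.integers(0, 4, size=400)
effect = np.array([0.0, 5.0, -3.0, 2.0])   # hypothetical level effects
y = effect[codes] + 0.01 * rng.normal(size=400)

# Integer coding: one column 0..3 forces a straight-line effect.
X_int = np.column_stack([np.ones(400), codes.astype(float)])
# One-hot coding: one indicator column per level (intercept absorbed).
X_hot = np.eye(4)[codes]

for name, X in [("integer", X_int), ("one-hot", X_hot)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    print(name, "coding, residual sum of squares:", float(resid @ resid))
```

The one-hot fit recovers the per-level means (residuals near the noise floor), while the integer fit cannot, which matches the "twice the effect" objection above.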
https://anime.stackexchange.com/questions/47611/why-are-titles-in-touhou-often-formatted-like-japanese-title-english-subtitle
# Why are titles in Touhou often formatted like "Japanese title ~ English subtitle"?

I've noticed that a lot of Touhou games, music, and print works seem to have titles that consist of a Japanese part and an English part, where the two are separated with a tilde. For example, most of the main game titles follow this pattern, although some of the .5 games seem to use "Japanese title ~ Japanese subtitle" instead?

- TH06: 東方紅魔郷 ~ the Embodiment of Scarlet Devil
- TH07: 東方妖々夢 ~ Perfect Cherry Blossom
- TH07.5: 東方萃夢想 ~ Immaterial and Missing Power
- TH08: 東方永夜抄 ~ Imperishable Night
- TH09: 東方花映塚 ~ Phantasmagoria of Flower View
- TH09.5: 東方文花帖 ~ Shoot the Bullet
- TH10: 東方風神録 ~ Mountain of Faith
- TH10.5: 東方緋想天 ~ Scarlet Weather Rhapsody
- TH11: 東方地霊殿 ~ Subterranean Animism
- TH12: 東方星蓮船 ~ Undefined Fantastic Object
- TH12.3: 東方非想天則 ~ 超弩級ギニョルの謎を追え
- TH12.5: ダブルスポイラー ~ 東方文花帖
- TH12.8: 妖精大戦争 ~ 東方三月精
- TH13: 東方神霊廟 ~ Ten Desires
- TH13.5: 東方心綺楼 ~ Hopeless Masquerade
- TH14: 東方輝針城 ~ Double Dealing Character
- TH14.3: 弾幕アマノジャク ~ Impossible Spell Card
- TH14.5: 東方深秘録 ~ Urban Legend in Limbo
- TH15: 東方紺珠伝 ~ Legacy of Lunatic Kingdom
- TH15.5: 東方憑依華 ~ Antinomy of Common Flowers
- TH16: 東方天空璋 ~ Hidden Star in Four Seasons

Lots of songs from the main game series have tildes in them as well. There are a number of songs that don't follow the pattern though, so the following is only a subset of all the main game themes:

- TH06 (EoSD):
  - 上海紅茶館 ~ Chinese Tea
  - ラクトガール ~ 少女密室
  - 月時計 ~ ルナ・ダイアル
  - 紅楼 ~ Eastern Dream...
- TH07 (PCB):
  - 妖々夢 ~ Snow or Cherry Petal
  - 無何有の郷 ~ Deep Mountain
  - 人形裁判 ~ 人の形弄びし少女
  - 幽霊楽団 ~ Phantom Ensemble
  - 東方妖々夢 ~ Ancient Temple
  - 広有射怪鳥事 ~ Till When?
  - 幽雅に咲かせ、墨染の桜 ~ Border of Life
  - 少女幻葬 ~ Necro-Fantasy
  - 妖々跋扈 ~ Who done it!
  - さくらさくら ~ Japanize Dream...
- TH08 (IN):
  - 永夜抄 ~ Eastern Night
  - 幻視の夜 ~ Ghostly Eyes
  - 蠢々秋月 ~ Mooned Insect
  - 夜雀の歌声 ~ Night Bird
  - 懐かしき東方の血 ~ Old World
  - 永夜の報い ~ Imperishable Night
  - 少女綺想曲 ~ Dream Battle
  - シンデレラケージ ~ Kagome-Kagome
  - 狂気の瞳 ~ Invisible Full Moon
  - 千年幻想郷 ~ History of the Moon
  - 竹取飛翔 ~ Lunatic Princess
  - エクステンドアッシュ ~ 蓬莱人
  - Eternal Dream ~ 幽玄の槭樹
- TH09 (PoFV):
  - 花映塚 ~ Higan Retour
  - 春色小径 ~ Colorful Path
  - 東方妖々夢 ~ Ancient Temple
  - 狂気の瞳 ~ Invisible Full Moon
  - 幽霊楽団 ~ Phantom Ensemble
  - もう歌しか聞こえない ~ Flower Mix
  - ポイズンボディ ~ Forsaken Doll
  - 今昔幻想郷 ~ Flower Land
  - 彼岸帰航 ~ Riverside View
  - 六十年目の東方裁判 ~ Fate of Sixty Years
  - 魂の花 ~ Another Dream...
- TH10 (MoF):
  - 人恋し神様 ~ Romantic Fall
  - 厄神様の通り道 ~ Dark Road
  - 芥川龍之介の河童 ~ Candid Friend
  - フォールオブフォール ~ 秋めく滝
  - 妖怪の山 ~ Mysterious Mountain
  - 御柱の墓場 ~ Grave of Being
  - 神さびた古戦場 ~ Suwa Foughten Field
  - 神は恵みの雨を降らす ~ Sylphid Dream
- TH11 (SA):
  - 封じられた妖怪 ~ Lost Place
  - 少女さとり ~ 3rd eye
  - 死体旅行 ~ Be of good cheer!
  - 霊知の太陽信仰 ~ Nuclear Fusion
  - エネルギー黎明 ~ Future Dream...
- TH12 (UFO):
  - 感情の摩天楼 ~ Cosmic Mind
  - 空の帰り道 ~ Sky Dream
- TH13 (TD):
  - 聖徳伝説 ~ True Administrator
- TH14 (DDC):
  - 輝く針の小人族 ~ Little Princess
  - 始原のビート ~ Pristine Beat
- TH15 (LoLK):
  - ピュアヒューリーズ ~ 心の在処
- TH16 (HSiFS):
  - 秘神マターラ ~ Hidden Star in All Seasons.

Tilde usage for songs seems to have decreased in more recent games?

The titles of most of ZUN's music albums seem to follow this pattern as well. For example:

- 蓬莱人形 ~ Dolls in Pseudo Paradise
- 蓮台野夜行 ~ Ghostly Field Club
- 夢違科学世紀 ~ Changeability of Strange Dream
- 卯酉東海道 ~ Retrospective 53 minutes
- 大空魔術 ~ Magical Astronomy
- 鳥船遺跡 ~ Trojan Green Asteroid
- 伊弉諾物質 ~ Neo-traditionalism of Japan
- 燕石博物誌 ~ Dr. Latency's Freak Report
- 旧約酒場 ~ Dateless Bar "Old Adam"

Additionally, some of the print works seem to have a binomial nomenclature thing going on:

- 東方儚月抄:
  - 東方儚月抄 ~ Silent Sinner in Blue
  - 東方儚月抄 ~ Cage in Lunatic Runagate
  - 東方儚月抄 ~ 月のイナバと地上の因幡
- 東方三月精:
  - 東方三月精 ~ Eastern and Little Nature Deity
  - 東方三月精 ~ Strange and Bright Nature Deity
  - 東方三月精 ~ Oriental Sacred Place
  - 東方三月精 ~ Visionary Fairies in Shrine
- 東方茨歌仙 ~ Wild and Horned Hermit
- 東方鈴奈庵 ~ Forbidden Scrollery

Spell card names also sometimes seem to have a binomial nomenclature thing going on, though with quotation marks instead of tildes. For example, Flandre's spell cards in EoSD:

- 禁忌:
  - 禁忌「クランベリートラップ」
  - 禁忌「レーヴァテイン」
  - 禁忌「フォーオブアカインド」
  - 禁忌「カゴメカゴメ」
  - 禁忌「恋の迷路」
- 禁弾:
  - 禁弾「スターボウブレイク」
  - 禁弾「カタディオプトリック」
  - 禁弾「過去を刻む時計」
- 秘弾「そして誰もいなくなるか?」
- QED「495年の波紋」

Is there an explanation or system or backstory behind this tilde naming convention? Or am I overthinking this and is the explanation just "ZUN did it because it looks cool"?

• I'm guessing it's mostly because it looks cool and is an easy way to denote a translation/subtitle. It may also be used because one thing that might be used in English, the m-dash, looks a lot like either 一 (one) or ー (the line denoting a long katakana vowel). – kuwaly Jun 30 '18 at 10:43
• The naming scheme for the game titles has been like that since the first game, 東方靈異伝 ~ The Highly Responsive to Prayers. Though, AFAIK, I've never heard the reason why it's done like that. – Aki Tanaka Jun 30 '18 at 10:53

## 2 Answers

First, it should be noted that the Japanese incorporate English text into their media frequently; the question here is why Touhou in particular seems to make such heavy use of it. One thing worth noting about Touhou specifically is that its theming was, at least at some point, supposed to be an east-meets-west kind of thing.
The Wikipedia page for Team Shanghai Alice, which is licensed under CC-BY-SA 3.0 terms, notes:

The name "Team Shanghai Alice" was chosen to fit the overall theme of the Touhou games. "Shanghai", in ZUN's mind, is a multicultural city where the East and the West meet.

There are many examples of western cultural influence in the series. Marisa is very much a western-style witch, Alice Margatroid is supposed to be the same Alice from Alice in Wonderland, and one of her theme songs is "the Doll Maker of Bucuresti", where Bucuresti is an alternative name for Bucharest in Romania. ZUN makes references to the moon landing when the Lunarians are involved: Imperishable Night has a song entitled Voyage 1969, and Clownpiece's costume very much resembles an American flag. Japan has no native folklore regarding vampires as we know them, like Remilia or Kurumi. Recently, in Forbidden Scrollery, there was even a chupacabra.

Something else worth noting is that Touhou means "eastern", and Amusement Makers also made some Seihou Project games, where Seihou means "western". The Windows-era designs of Marisa and Reimu first appeared in the Seihou games, where the characters made cameo appearances as extra stage bosses.

In the music rooms for the games, ZUN sometimes comments on how certain songs seem distinctly Japanese to him, and in Lotus Land Story's music room he comments on how he was trying to make the pieces feel western. In the Music Room description of Song of the Night Sparrow for Imperishable Night, ZUN comments that it was supposed to be a blend of eastern and western styles.

Unfortunately, the closest I can personally find to ZUN himself explaining the format is a comment in Afterword Correspondence Vol. 1 for Embodiment of the Scarlet Devil, where he acknowledges that his music titles used to follow it, but the reason why is omitted from that observation.
However, in consideration of these facts, I think it is reasonable to assume this was done to give the games a somewhat western feel by blending eastern and western languages, matching the original theme.

ZUN hasn't explicitly mentioned this, but it seems natural that the titles should contain Japanese characters, since the development of the Touhou Project was based on the premise that there were few games based on Japanese folklore at the time (I was looking for the interview where he said this but couldn't find it). The kanji on the left act more like "titles": remember that kanji convey a wider range of meaning than individual words, and the romanized subtitle on the right is easier to read, as a video game on a computer; even the Japanese themselves sometimes can't memorize all the kanji. So we first get 東方 telling you "eastern", followed by the title proper, such as:

- 紅魔郷 (koumakyou)
- 妖々夢 (youyoumu)
- 地霊殿 (chireiden)
- 星蓮船 (seirensen)

and so on. A kanji title can give you the whole picture of what the game is going to be about. Take Ten Desires, for example:

- 東方 - eastern
- 神 - god/spirit
- 霊廟 - mausoleum

so you would get something like "eastern story of the spirit mausoleum" ~ Ten Desires. It could be just a format without any special purpose, but either way I think this looks really cool and truly feels eastern.
https://www.nitrc.org/forum/message.php?msg_id=29746
help > RE: JHU.nii and JHU.txt files in C:\Program Files\MRIcron\Resources\templates

May 20, 2020 10:05 AM | Chris Rorden

JHU_MNI_SS_BPM_TypeII_ver2.1
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3358461/

I would contact the first author for more details. The rough notes I have for this are here:

The atlas has a single voxel that is clearly wrong: it is labelled as 'corticospinal tract right' but is located in the left hemisphere (at the boundary of PLIC_L and GP_L). When I next release the software I will relabel this voxel.

102|CST_R|corticospinal tract right|2
133|PLIC_L|posterior limb of internal capsule left|2
81|GP_L|globus pallidus left|1
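A stray voxel like this can be hunted for programmatically. The sketch below runs on a synthetic label array; with real data you would load JHU.nii (e.g. via nibabel) and use the image affine to find the true left/right midline. Only the label code 102 comes from the notes above; every index value here is hypothetical:

```python
import numpy as np

# Synthetic stand-in for a labelled atlas volume. With real data:
#   img = nibabel.load("JHU.nii"); labels = img.get_fdata().astype(int)
# and map voxel indices to MNI coordinates through img.affine.
labels = np.zeros((10, 10, 10), dtype=int)
labels[7, 4, 4] = 102   # CST_R voxels on the right of the midline
labels[8, 4, 4] = 102
labels[2, 5, 5] = 102   # a stray voxel on the left side

midline = 5             # hypothetical midline index; real MNI data uses x = 0

# Every voxel carrying a right-hemisphere label but sitting left of midline:
stray = [tuple(int(i) for i in v)
         for v in np.argwhere(labels == 102) if v[0] < midline]
print(stray)            # [(2, 5, 5)]
```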
https://mathwomen.agnesscott.org/women/abstracts/pell_abstract.htm
Anna Johnson Pell

Biorthogonal Systems of Functions
Transactions of the American Mathematical Society, Vol. 12, No. 2 (April 1911), 135-164
Presented to the Society (Chicago) April 10, 1909

Introduction

In boundary value problems of differential equations which are not self-adjoint, biorthogonal systems of functions play the same role as the orthogonal systems do in the self-adjoint case. Liouville has considered special non-self-adjoint differential equations with real characteristic values of the parameter; Birkhoff has proved the existence of the characteristic values (in general complex) for the differential equation of the nth order and obtained the related expressions.

If the integral equation $u(s) = \lambda \int_a^b L(s,t)u(t)\,dt$ with the unsymmetric kernel $L(s,t)$ has solutions $u(s)$, and therefore the integral equation $v(s)=\lambda \int_a^b L(t,s)v(t)\,dt$ solutions $v(s)$, it has been shown by Plemelj and Goursat that the solutions, or functions closely related to them, form a biorthogonal system. But expansions in terms of these solutions have not been obtained, and no criteria have been given for the existence of real characteristic numbers of an unsymmetric kernel.

The object of this paper is the development of a theory of biorthogonal systems of functions independent of their connection with integral or differential equations. In the theory frequent use is made of the theorems by Riesz, Fischer, and Toeplitz. Necessary and sufficient conditions for the existence of the adjoint system $\{v_i\}$ of any system of linearly independent functions $\{u_i\}$ are deduced. Theorems for biorthogonal systems analogous to those of Riesz and Fischer for orthogonal systems are [given]. The equivalence of two biorthogonal systems is defined and a classification into types is made.

Applications of Biorthogonal Systems of Functions to the Theory of Integral Equations
Transactions of the American Mathematical Society, Vol. 12, No. 2 (April 1911), 165-180
Presented to the Society, September, 1909

Introduction

In this paper we give a sufficient condition that the characteristic numbers of an unsymmetric kernel exist and be real, and prove the expansibility of arbitrary functions in terms of the corresponding characteristic functions. This sufficient condition is stated in terms of a functional transformation $T(f)$ defined by certain general properties, and for the special case $T(f) = f$ we obtain the known theory of the orthogonal integral equations. The method employed is that of infinitely many variables and is based to some extent on an earlier paper.
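For readers coming to these abstracts cold, the central object can be stated in modern notation (this is the standard textbook definition, not a quotation from the paper): two sequences $\{u_i\}$ and $\{v_i\}$ of functions on $[a,b]$ form a biorthogonal system when

$$\int_a^b u_i(s)\,v_j(s)\,ds = \delta_{ij},$$

which reduces to an ordinary orthonormal system in the special case $v_i = u_i$. This is the sense in which solutions $u$ of the kernel $L(s,t)$ pair off with solutions $v$ of the transposed kernel $L(t,s)$.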
https://ask.wireshark.org/questions/19305/revisions/
### How can I use conversations in custom dissectors

I have a custom dissector written in C that dissects a simple client-server protocol. The protocol has one quirk: if an operation is successful, the ACK flag is set; if not, it isn't. But with the ACK bit unset, a response looks exactly like a packet a client might send to a server. My idea was to use conversations to track whether a packet is a response to a query. From reading the README.dissector documentation I came up with the following:

```c
guint* conv_frames;
conversation_t* conv = find_conversation_pinfo(pinfo, 0);
if (conv == NULL) {
    conversation_new(pinfo->num, &pinfo->src, &pinfo->dst,
                     conversation_pt_to_endpoint_type(pinfo->ptype),
                     pinfo->srcport, pinfo->destport, 0);
}
conv_frames = (guint*) conversation_get_proto_data(conv, proto_rnvs);
if (conv_frames == NULL) {
    conv_frames = (guint*) wmem_alloc(wmem_file_scope(), sizeof(guint));
    *conv_frames = 0;
    conversation_add_proto_data(conv, proto_rnvs, conv_frames);
}
*conv_frames = *conv_frames + 1;
/* ... */
if (*conv_frames % 2 == 0) {
    proto_item_append_text(ti, ", %s", val_to_str(flags, server_response, "Unknown (0x%02x)"));
    col_add_fstr(pinfo->cinfo, COL_INFO, "%s", val_to_str(flags, server_response, "Unknown (0x%02x)"));
    conversation_delete_proto_data(conv, proto_rnvs);
} else {
    proto_item_append_text(ti, ", %s", val_to_str(flags, client_ops, "Unknown (0x%02x)"));
    col_add_fstr(pinfo->cinfo, COL_INFO, "%s", val_to_str(flags, client_ops, "Unknown (0x%02x)"));
}
```

This seems to work when I run it in TShark, but in Wireshark, as soon as I enter a filter, it fails and misinterprets the packets. I suspect that this code only works on the first dissection pass and then has some 'leftover' state, but I don't understand the conversation feature well enough to tell what I am missing. Can anybody help me out here?
https://www.physicsforums.com/threads/help-with-vector-kinematics-question.342431/
# Help with vector kinematics question

1. Oct 2, 2009

### hisoko

http://i58.photobucket.com/albums/g280/hisokeee/aa.png

Since the ball is hit 1 m above the ground, I will just ignore that and the 21 m fence height, and treat the problem as a projectile launched from the ground to the top of a 20 m fence. I only know the gravity, the height of the fence, the distance to the fence, and the angle the ball was hit at. I first tried to figure out the initial $v_{0y}$ by using $v_{fy}=0$, $g=-9.8$, and $H=20$, but that didn't work out. What else can I do with this information?

2. Oct 3, 2009

### CompuChip

In this type of problem, you always have two independent directions, namely horizontal and vertical. First decompose the velocity $v$ into a vertical and a horizontal component, using the sine and cosine of the angle. For the vertical direction, you can then relate $v$ to the final time using the data you gave in your post (cf. $\Delta v = - g t$). For the horizontal direction, you have information about the distance, which also gives a relation between $v$ and $t$ (cf. $\Delta s = v t$). This will give you two equations involving $v$ and the time $t$ it takes the ball to reach the fence. You can use one of them to eliminate $t$ and obtain a single equation for $v$.

3. Oct 3, 2009

### dave_baksh

I tried this for funsies and can't get the answer. Any chance of further help? Not sure if the OP got it figured out so I don't want an answer, just a little more guidance.

4. Oct 3, 2009

### hisoko

Yes, I tried it too, so many times, and it doesn't give the answer. With only 2 known things there really is only one way to go with it, because you have to do the vertical part first using gravity, and that answer doesn't work out. z_z lol

5. Oct 3, 2009

### Delphi51

It seems to work for me. From $d=vt$ for the horizontal part, I get an equation with two unknowns, $v$ and $t$, where $v$ is the initial speed. From the distance formula for the vertical part, I get another equation with the same two unknowns.
Two equations, two unknowns, no problem! Have another go and show your work here so we can help you!

6. Oct 4, 2009

### dave_baksh

Success! I get the right answers. That'll teach me to set my working out neatly in future to avoid stupid mistakes! Cheers for the pointers, dudes.

Last edited: Oct 4, 2009

7. Oct 4, 2009

### CompuChip

Dave, did you pay attention that:

* $d$ in $t = d/v$ is the horizontal distance,
* $v$ in $t = d/v$ is the horizontal velocity, and
* $u$ in $s = ut + \frac{1}{2}at^2$ is the vertical velocity?

Both velocities can be expressed in terms of the speed $V$ using the (co)sine of the angle.
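Following the hints in this thread, the two equations can be combined by eliminating $t$. Here is a sketch with made-up numbers, since the actual distance and angle are in the attached image:

```python
import math

def launch_speed(d, h, theta_deg, g=9.8):
    """Speed needed so a projectile launched at theta_deg just clears
    height h (above the launch point) at horizontal distance d.

    From x = v*cos(theta)*t and y = v*sin(theta)*t - 0.5*g*t**2,
    eliminating t gives h = d*tan(theta) - g*d**2 / (2*v**2*cos(theta)**2),
    so v = (d / cos(theta)) * sqrt(g / (2*(d*tan(theta) - h))).
    """
    th = math.radians(theta_deg)
    denom = 2.0 * (d * math.tan(th) - h)
    if denom <= 0:
        raise ValueError("the fence cannot be cleared at this angle")
    return (d / math.cos(th)) * math.sqrt(g / denom)

# Hypothetical numbers (the real ones are in the image):
# 30 m to the fence, 20 m rise, ball hit at 60 degrees.
v = launch_speed(d=30.0, h=20.0, theta_deg=60.0)
print(round(v, 2), "m/s")
```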
https://math.stackexchange.com/questions/1922536/equations-for-the-image-of-a-2-sphere-by-a-differentiable-map
# Equations for the image of a 2-sphere by a differentiable map

My professor gave us this exercise.

Let $S^2=\{(x_1,x_2,x_3)\in\mathbb{R}^3|(x_1)^2+(x_2)^2+(x_3)^2=1\}$ be the unit sphere in $\mathbb{R}^3$, and consider the function $f\colon S^2\to\mathbb{R}^6$ defined by $$f(x_1,x_2,x_3)=((x_1)^2,(x_2)^2,(x_3)^2,x_2x_3,x_1x_3,x_1x_2).$$ Prove that $f$ is an immersion but is not injective. Besides, write the equations for $f(S^2)\subset\mathbb{R}^6$ and prove that $f(S^2)$ is an embedded submanifold of $\mathbb{R}^6$.

Solution

For the first question, it is sufficient to write down the Jacobian matrix $J$ of the function $f$ and note that it is not possible for all the determinants of the $3\times3$ submatrices of $J$ to be zero simultaneously (provided that we consider only points of the 2-dimensional sphere), so the differential is everywhere injective on the sphere and we are done. Besides, $f$ is not injective because, for example, $f(x_1,x_2,x_3)=f(-x_1,-x_2,-x_3)$ (and $(x_1,x_2,x_3)$ lies on the sphere if and only if $(-x_1,-x_2,-x_3)$ does).

But I am stuck on the second question: I don't get what my professor means by "write the equations for $f(S^2)$". The only equation I can see that is satisfied by a point of $f(S^2)$ is $x_1+x_2+x_3=1$. I'm wondering: how many equations should there be for $f(S^2)$? And how can I find them? Only by means of algebraic manipulations of the components of $f$, or is there some "systematic" way to find them?

• A point $(x,y,z,u,v,w)$ is in the image if $x^2+y^2+z^2 = 1$ and $\frac{uv}{w} + \frac{vw}{u} + \frac{uw}{v} = 1$. – Zestylemonzi Sep 11 '16 at 12:24
• Oops, you need to be careful of the cases when $u,v$ or $w$ are $0$. This is easy to take care of though. – Zestylemonzi Sep 11 '16 at 12:47
• Why $x^2+y^2+z^2=1$ and not $x+y+z=1$?
A point in $f(S^2)$ is a point which belongs to $\mathbb{R}^6$ and can be written in the form prescribed by $f$ for some point of the 2-sphere, and so it should satisfy $x+y+z=1$, I think. – Vladimir Sep 11 '16 at 13:30
• Yes, you're right! Sorry, I misread the equation. – Zestylemonzi Sep 11 '16 at 15:28

Note that $f(\mathbb S^2)$ is a closed set in $\mathbb R^6$ and thus there is a smooth function $F :\mathbb R^6 \to \mathbb R$ so that $F^{-1}(0) = f(\mathbb S^2)$. Of course this does not say anything about the geometry of $f(\mathbb S^2)$, so we do something else.

Write the coordinates of $\mathbb R^6$ as $(x, y, z, w, u, v)$. Since $f(\mathbb S^2)$ should be two-dimensional (it is immersed), it should be cut out by four equations. By inspection, we have $$\begin{split} x+y+ z&=1 \\ xy &= v^2 \\ yz &= w^2 \\ zx &= u^2 \end{split}$$ Equations $(2)$-$(4)$ imply that $xy, yz, zx \ge 0$. Together with $(1)$ we have $x, y, z \ge 0$. Write $$(x, y,z) = (x_1^2, x_2^2, x_3^2)$$ for some $x_1, x_2, x_3 \in \mathbb R$. Then $$\begin{split} v &= \pm x_1x_2 \\ w &= \pm x_2x_3 \\ u &= \pm x_3x_1.\end{split}$$ If we require $w, u, v \ge 0$, then we recover $$(x_1^2 , x_2^2, x_3^2 , x_2x_3, x_1x_3, x_1x_2), \text{with }\ x_1^2 + x_2^2 + x_3^2 = 1.$$ Thus $f(\mathbb S^2)$ is the set cut out by the above equations in the quadrant $w, u, v\ge 0$.

Now we show that $f(\mathbb S^2)$ is embedded. Indeed, it is clear that if $x, y\in \mathbb S^2$ and $f(x) = f(y)$, then $x = \pm y$ (check). Then $f$ descends to an injective immersion $\tilde f :\mathbb{RP}^2 \to \mathbb R^6$, which is an embedding as $\mathbb {RP}^2$ is compact. Thus $f(\mathbb S^2)$ is an embedded submanifold of $\mathbb R^6$ homeomorphic to the real projective plane.

I don't know about finding the equations describing the surface in $\mathbb R^6$, but I'll demonstrate how we might apply abstract machinery to answer the second question without explicit equations.
This probably isn't the kind of answer you're looking for (i.e. it may be unnecessarily abstract for the problem at hand), but I think it will be good for you to see the technique anyhow. Hopefully it helps you solve future problems.

You have already shown that $f(x_1,x_2,x_3) = f(-x_1, -x_2, -x_3)$, but we can also note that this is the only nontrivial transformation of coordinates $(x_1,x_2,x_3)$ that preserves the value of $f$. Why? Well, for a start we know that we can only flip the signs of $x_1$, $x_2$ and $x_3$, because otherwise one of the first three coordinates $(x_1^2, x_2^2, x_3^2)$ would change. You can easily see from the remaining coordinates ($x_1 x_2$, $x_2 x_3$, $x_3 x_1$) that flipping the sign of one of these $x_i$ requires changing another, and then the final one. So inversion of $(x_1, x_2, x_3)$ is the only nontrivial transformation under which $f$ is invariant.

This tells us that in some sense, $f(S^2)$ is equivalent to $S^2/\sim$ where $\sim$ is the equivalence relation $\mathbf x \sim -\mathbf x$. So the remainder of the proof consists of two steps:

1. Show that as topological spaces, $S^2/\sim$ is homeomorphic to $f(S^2)$.
2. Show that $S^2/\sim$ is a manifold.

As an aside, this quotient structure is a very common way to represent the manifold $\mathbb {RP}^2$, called the real projective plane.

## $S^2 / \sim$ is homeomorphic to $f(S^2)$

There is an inherited map $g : S^2 / \sim \rightarrow f(S^2)$ that maps $g : [\mathbf x] \mapsto \mathbf x \mapsto f(\mathbf x)$. I'll show that this is a homeomorphism. I'll assume that $f$ is continuous and an open mapping onto its image. By definition of the quotient topology of $S^2/\sim$, the canonical projection from an element $\mathbf x$ in $S^2$ to its equivalence class $[\mathbf x]$, the map denoted $\pi$, is continuous and an open mapping. This is because $U \subseteq S^2/\sim$ is defined to be open iff $\pi^{-1}(U)$ is open.
Consequently, the map $g$ mapping sets in $S^2/\sim$ to sets in $f(S^2)$, given by $g = f \circ \pi^{-1}$, is composed of continuous, open mappings. This means it itself is continuous and an open mapping. Furthermore, $g$ is injective because we ensured that it would be so by quotienting out points of $S^2$ which would give duplicate values of $f$. $g$ is also surjective because we take its codomain to be $f(S^2) = g(S^2/\sim)$. Hence $g$ is a homeomorphism.

## $S^2/\sim$ is a manifold

We are looking to show that every $\mathbf y \in S^2 / \sim$ has an open neighbourhood homeomorphic to $\mathbb R^2$. First, let $\mathbf y = \{\mathbf x, -\mathbf x\}$. Since $S^2$ is a manifold, we can take a neighbourhood $U_{\mathbf x} \subset S^2$ that is homeomorphic to $\mathbb R^2$ (chosen small enough that it contains no pair of antipodal points). This defines a companion neighbourhood $U_{- \mathbf x}$ on the opposite side of the sphere which is related by the inversion operation $\mathbf a \mapsto - \mathbf a$. Note that this operation is a homeomorphism.

An open neighbourhood of $\mathbf y$ is $V_{\mathbf y} = \{\{\mathbf a, -\mathbf a\} : \mathbf a \in U_{\mathbf x}\}$. Basically this is a set of pairs of antipodal points of the sphere, where one point of each pair lies in $U_{\mathbf x}$. The projection from $V_{\mathbf y}$ onto $U_{\mathbf x}$ by taking the point of each pair lying in $U_{\mathbf x}$ is a homeomorphism (I'll leave you to think about this). And so $V_{\mathbf y}$ may be mapped to $\mathbb R^2$ by a pair of composed homeomorphisms (first onto $U_{\mathbf x}$ and then onto $\mathbb R^2$), and so we have constructed a neighbourhood of $\mathbf y$ homeomorphic to $\mathbb R^2$.

This may sound needlessly complicated (maybe it is), but it has some far-reaching consequences. For instance, we now know that if we have an arbitrary continuous and open map between topological spaces $f : X \rightarrow Y$, then $f(X)$ is homeomorphic to $X/\sim$ where $\sim$ identifies elements of $X$ with the same value in $Y$.
Consequently, to determine the topological structure of $\mathrm{range} (f)$, including whether or not it is a manifold, we may look instead at the topological structure of $X / \sim$. This formalises a notion which seems rather obvious when one thinks about it.
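The explicit relations from the first answer can also be spot-checked numerically. This is only an illustrative sketch (the helper names are mine): it samples random points of $S^2$ and verifies the equations $x+y+z=1$, $xy=v^2$, $yz=w^2$, $zx=u^2$ for the image, plus the non-injectivity $f(p)=f(-p)$.

```python
import random

def f(x1, x2, x3):
    # the map S^2 -> R^6 from the exercise
    return (x1 * x1, x2 * x2, x3 * x3, x2 * x3, x1 * x3, x1 * x2)

def random_sphere_point():
    # normalise a random Gaussian vector to land on the unit sphere
    c = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = sum(t * t for t in c) ** 0.5
    return tuple(t / n for t in c)

for _ in range(1000):
    p = random_sphere_point()
    x, y, z, w, u, v = f(*p)
    assert abs(x + y + z - 1) < 1e-12   # trace equation
    assert abs(x * y - v * v) < 1e-12   # xy = v^2
    assert abs(y * z - w * w) < 1e-12   # yz = w^2
    assert abs(z * x - u * u) < 1e-12   # zx = u^2
    # antipodal points have the same image, so f is not injective
    assert f(*p) == f(-p[0], -p[1], -p[2])
```

Of course, a numerical check says nothing about sufficiency of the equations; that is what the compatibility relations and the $\mathbb{RP}^2$ argument above are for.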
https://byjus.com/mean-value-theorem-formula/
# Mean Value Theorem Formula

In mathematics, the mean value theorem states, roughly, that given a planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. The Mean Value Theorem states that if f(x) is continuous on [a, b] and differentiable on (a, b), then there exists a number c between a and b such that:

$\large {f}'(c)=\frac{f(b)-f(a)}{b-a}$

### Solved Example

Question: Evaluate f(x) = x2 + 2 in the interval [1, 2] using the mean value theorem.

Solution:

Given function is: f(x) = x2 + 2

Interval is [1, 2], i.e. a = 1, b = 2

The mean value theorem gives

f'(c) = $\frac{f(b)-f(a)}{b-a}$

f(b) = f(2) = 22 + 2 = 6

f(a) = f(1) = 12 + 2 = 3

So, f'(c) = $\frac{6-3}{2-1}$ = 3
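Since f'(x) = 2x here, the number c can even be found explicitly: 2c = 3 gives c = 3/2, which indeed lies in (1, 2). A small sketch (plain Python; the names are mine) confirming the computation:

```python
def f(x):
    return x ** 2 + 2          # the function from the example

a, b = 1, 2
secant_slope = (f(b) - f(a)) / (b - a)   # (6 - 3) / (2 - 1) = 3

def f_prime(x):
    return 2 * x               # derivative of x^2 + 2

c = secant_slope / 2           # solve f'(c) = 2c = 3
assert a < c < b               # c lies in the open interval (1, 2)
assert f_prime(c) == secant_slope
print(c)                       # 1.5
```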
http://caul.cii.fc.ul.pt/seminar.php?event_date=2012-04-10&sem_id=815&locale=pt
H.P. Sankappanavar (State University of New York, USA) 10/04/2012 Tuesday, 10 April 2012, 14:30, Room 6.2.38, FCUL  Instituto para a Investigação Interdisciplinar da Universidade de Lisboa De Morgan Algebras. New Perspectives and Applications It is well known that Boolean algebras can be defined using only the implication and the constant 0. It is, then, natural to ask whether De Morgan algebras can also be characterized using only a binary operation (implication) $\rightarrow$ and a constant 0. In this lecture, I give an affirmative answer to this question by showing that the variety of De Morgan algebras is term-equivalent to a (2-based) variety of type $\{\rightarrow, 0\}$. As a natural consequence, Kleene algebras can also be described as a variety using only $\rightarrow$ and 0. If time permits, I will introduce a new variety of algebras, called "Implication Groupoids", and mention some open problems. This lecture is based on the following paper: H.P. Sankappanavar, De Morgan Algebras: New Perspectives and Applications, Scientiae Mathematicae Japonicae (22 pages). To appear soon.
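The opening remark of the abstract — that Boolean algebras can be defined using only implication and the constant 0 — can be illustrated by a brute-force check over the two-element Boolean algebra. The term definitions below are the standard ones and are not taken from the announced paper:

```python
# Two-element Boolean algebra {0, 1} with x -> y := (not x) or y.
def imp(x, y):
    return max(1 - x, y)

# Recover the usual operations from -> and 0 alone:
def neg(x):          # negation: ¬x = x -> 0
    return imp(x, 0)

def join(x, y):      # disjunction: x v y = (x -> 0) -> y
    return imp(imp(x, 0), y)

def meet(x, y):      # conjunction via De Morgan: x ^ y = ¬(¬x v ¬y)
    return neg(join(neg(x), neg(y)))

# Verify against the familiar truth tables.
for x in (0, 1):
    assert neg(x) == 1 - x
    for y in (0, 1):
        assert join(x, y) == max(x, y)
        assert meet(x, y) == min(x, y)
```

Extending such a term-equivalence from Boolean to De Morgan (and Kleene) algebras is exactly the content of the lecture.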
https://en.wikipedia.org/wiki/Ideal_gas_law
# Ideal gas law

Isotherms of an ideal gas. The curved lines represent the relationship between pressure (on the vertical, y-axis) and volume (on the horizontal, x-axis) for an ideal gas at different temperatures: lines that are farther away from the origin (that is, lines that are nearer to the top right-hand corner of the diagram) represent higher temperatures.

The ideal gas law is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles' law and Avogadro's law.[1] The ideal gas law is often written as ${\displaystyle PV=nRT}$, where:

• ${\displaystyle P}$ is the pressure of the gas,
• ${\displaystyle V}$ is the volume of the gas,
• ${\displaystyle n}$ is the amount of substance of gas (in moles),
• ${\displaystyle R}$ is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
• ${\displaystyle T}$ is the absolute temperature of the gas.

It can also be derived microscopically from kinetic theory, as was achieved (apparently independently) by August Krönig in 1856[2] and Rudolf Clausius in 1857.[3]

## Equation

The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms.
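As a quick numerical sketch of PV = nRT (illustrative values, with R as quoted later in the article), one mole of an ideal gas at 0 °C and 1 atm occupies the familiar molar volume of about 22.4 L:

```python
R = 8.314        # universal gas constant, J/(mol*K)
n = 1.0          # amount of gas, mol
T = 273.15       # absolute temperature, K (0 degrees Celsius)
P = 101_325.0    # pressure, Pa (1 atm)

V = n * R * T / P             # solve PV = nRT for the volume
print(round(V * 1000, 1))     # ~22.4 litres
```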
The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin.[4]

### Common form

The most frequently introduced form is ${\displaystyle PV=nRT,}$ where:

• ${\displaystyle P}$ is the pressure of the gas,
• ${\displaystyle V}$ is the volume of the gas,
• ${\displaystyle n}$ is the amount of substance of gas (also known as number of moles),
• ${\displaystyle R}$ is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant,
• ${\displaystyle T}$ is the absolute temperature of the gas.

In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J/(K·mol) ≈ 2 cal/(K·mol), or 0.08206 L·atm/(mol·K).

### Molar form

How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (n) (in moles) is equal to the total mass of the gas (m) (in grams) divided by the molar mass (M) (in grams per mole):

${\displaystyle n={\frac {m}{M}}.}$

By replacing n with m/M and subsequently introducing density ρ = m/V, we get:

${\displaystyle PV={\frac {m}{M}}RT,}$

${\displaystyle P=\rho {\frac {R}{M}}T.}$

Defining the specific gas constant Rspecific (sometimes written r) as the ratio R/M,

${\displaystyle P=\rho R_{\text{specific}}T.}$

This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as

${\displaystyle Pv=R_{\text{specific}}T.}$

It is common, especially in engineering applications, to represent the specific gas constant by the symbol R.
In such cases, the universal gas constant is usually given a different symbol such as ${\displaystyle {\bar {R}}}$ to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.[5]

### Statistical mechanics

In statistical mechanics the following molecular equation is derived from first principles:

${\displaystyle PV=Nk_{\text{B}}T,}$

where P is the absolute pressure of the gas, N is the number of molecules in the given volume V (the number density is given by the ratio n = N/V), kB is the Boltzmann constant relating temperature and energy, and T is the absolute temperature.

The number density contrasts with the other formulation, which uses n, the number of moles, and V, the volume. This relation implies that R = NAkB (equivalently, kB = R/NA), where NA is Avogadro's constant, and the consistency of this result with experiment is a good check on the principles of statistical mechanics. In extreme conditions the principles of statistical mechanics may break down, as some of the assumptions relating a real-life example to an ideal gas become untrue.

From this we notice that for a gas with an average particle mass of μ times the atomic mass constant mu (i.e., the mass is μ u) the number of molecules will be given by

${\displaystyle N={\frac {m}{\mu m_{\text{u}}}},}$

and since ρ = m/V = nμmu, we find that the ideal gas law can be rewritten as

${\displaystyle P={\frac {1}{V}}{\frac {m}{\mu m_{\text{u}}}}k_{B}T={\frac {k_{B}}{\mu m_{\text{u}}}}\rho T.}$

In SI units, P is measured in pascals, V in cubic metres, μ is a dimensionless number, and T is measured in kelvins. kB has the value 1.38·10−23 J/K in SI units.

## Energy associated with a gas

According to the assumptions of the kinetic theory of gases, there are no intermolecular attractions between the molecules of an ideal gas, so its potential energy is zero. Hence, all the energy possessed by the gas is kinetic energy.
${\displaystyle E={\frac {3}{2}}RT}$

This is the kinetic energy of one mole of a gas.

| Energy of gas | Mathematical formula |
| --- | --- |
| Energy associated with one mole of a gas | ${\displaystyle E={\frac {3}{2}}RT}$ |
| Energy associated with one gram of a gas | ${\displaystyle E={\frac {3}{2}}rT}$ (r is the specific gas constant) |
| Energy associated with one molecule of a gas | ${\displaystyle E={\frac {3}{2}}k_{B}T}$ |

## Applications to thermodynamic processes

The table below essentially simplifies the ideal gas equation for particular processes, thus making this equation easier to solve using numerical methods. A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, or S) is constant throughout the process. For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (P, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.
| Process | Constant | Known ratio | P2 | V2 | T2 |
| --- | --- | --- | --- | --- | --- |
| Isobaric process | Pressure | V2/V1 | P2 = P1 | V2 = V1(V2/V1) | T2 = T1(V2/V1) |
| | | T2/T1 | P2 = P1 | V2 = V1(T2/T1) | T2 = T1(T2/T1) |
| Isochoric process (isovolumetric / isometric process) | Volume | P2/P1 | P2 = P1(P2/P1) | V2 = V1 | T2 = T1(P2/P1) |
| | | T2/T1 | P2 = P1(T2/T1) | V2 = V1 | T2 = T1(T2/T1) |
| Isothermal process | Temperature | P2/P1 | P2 = P1(P2/P1) | V2 = V1/(P2/P1) | T2 = T1 |
| | | V2/V1 | P2 = P1/(V2/V1) | V2 = V1(V2/V1) | T2 = T1 |
| Isentropic process | Entropy[a] | P2/P1 | P2 = P1(P2/P1) | V2 = V1(P2/P1)^(−1/γ) | T2 = T1(P2/P1)^((γ − 1)/γ) |
| | | V2/V1 | P2 = P1(V2/V1)^(−γ) | V2 = V1(V2/V1) | T2 = T1(V2/V1)^(1 − γ) |
| | | T2/T1 | P2 = P1(T2/T1)^(γ/(γ − 1)) | V2 = V1(T2/T1)^(1/(1 − γ)) | T2 = T1(T2/T1) |
| Polytropic process | P Vⁿ | P2/P1 | P2 = P1(P2/P1) | V2 = V1(P2/P1)^(−1/n) | T2 = T1(P2/P1)^((n − 1)/n) |
| | | V2/V1 | P2 = P1(V2/V1)^(−n) | V2 = V1(V2/V1) | T2 = T1(V2/V1)^(1 − n) |
| | | T2/T1 | P2 = P1(T2/T1)^(n/(n − 1)) | V2 = V1(T2/T1)^(1/(1 − n)) | T2 = T1(T2/T1) |

^ a. In an isentropic process, system entropy (S) is constant. Under these conditions, P1V1^γ = P2V2^γ, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on the constituent gases and temperature.

## Deviations from ideal behavior of real gases

The equation of state given here applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e.
for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. A residual property is defined as the difference between a real gas property and an ideal gas property, both considered at the same pressure, temperature, and composition. ## Derivations ### Empirical The ideal gas law can be derived from combining two empirical gas laws: the combined gas law and Avogadro's law. The combined gas law states that ${\displaystyle {\frac {PV}{T}}=C,}$ where C is a constant that is directly proportional to the amount of gas, n (Avogadro's law). The proportionality factor is the universal gas constant, R, i.e. C = nR. Hence the ideal gas law is ${\displaystyle PV=nRT.}$ ### Theoretical #### Kinetic theory The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved. #### Statistical mechanics Main article: Statistical mechanics Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. 
Then the time average of $\mathbf q \cdot \mathbf F$ (the virial of the particle) is: {\displaystyle {\begin{aligned}\langle \mathbf {q} \cdot \mathbf {F} \rangle &={\Bigl \langle }q_{x}{\frac {dp_{x}}{dt}}{\Bigr \rangle }+{\Bigl \langle }q_{y}{\frac {dp_{y}}{dt}}{\Bigr \rangle }+{\Bigl \langle }q_{z}{\frac {dp_{z}}{dt}}{\Bigr \rangle }\\&=-{\Bigl \langle }q_{x}{\frac {\partial H}{\partial q_{x}}}{\Bigr \rangle }-{\Bigl \langle }q_{y}{\frac {\partial H}{\partial q_{y}}}{\Bigr \rangle }-{\Bigl \langle }q_{z}{\frac {\partial H}{\partial q_{z}}}{\Bigr \rangle }=-3k_{B}T,\end{aligned}}} where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields ${\displaystyle 3Nk_{B}T=-{\biggl \langle }\sum _{k=1}^{N}\mathbf {q} _{k}\cdot \mathbf {F} _{k}{\biggr \rangle }.}$ By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence ${\displaystyle -{\biggl \langle }\sum _{k=1}^{N}\mathbf {q} _{k}\cdot \mathbf {F} _{k}{\biggr \rangle }=P\oint _{\mathrm {surface} }\mathbf {q} \cdot d\mathbf {S} ,}$ where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is ${\displaystyle \nabla \cdot \mathbf {q} ={\frac {\partial q_{x}}{\partial q_{x}}}+{\frac {\partial q_{y}}{\partial q_{y}}}+{\frac {\partial q_{z}}{\partial q_{z}}}=3,}$ the divergence theorem implies that ${\displaystyle P\oint _{\mathrm {surface} }\mathbf {q} \cdot d\mathbf {S} =P\int _{\mathrm {volume} }\left(\nabla \cdot \mathbf {q} \right)dV=3PV,}$ where dV is an infinitesimal volume within the container and V is the total volume of the container.
Putting these equalities together yields ${\displaystyle 3Nk_{B}T=-{\biggl \langle }\sum _{k=1}^{N}\mathbf {q} _{k}\cdot \mathbf {F} _{k}{\biggr \rangle }=3PV,}$ which immediately implies the ideal gas law for N particles: ${\displaystyle PV=Nk_{B}T=nRT,\,}$ where n = N/NA is the number of moles of gas and R = NAkB is the gas constant.
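The identities n = N/NA and R = NAkB at the end of the derivation can be checked numerically (the constants below are the standard CODATA values; the mole count is illustrative):

```python
N_A = 6.02214076e23    # Avogadro constant, 1/mol
k_B = 1.380649e-23     # Boltzmann constant, J/K

R = N_A * k_B          # gas constant from R = N_A * k_B
n = 2.0                # amount of gas in moles (illustrative)
N = n * N_A            # corresponding number of molecules
T = 350.0              # absolute temperature, K

# The two forms of the law agree: P V = N k_B T = n R T.
assert abs(N * k_B * T - n * R * T) < 1e-6
print(round(R, 3))     # 8.314 J/(mol*K)
```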
https://www.examrace.com/Study-Material/Economics/International-Banking-Terminology-L-to-M.html
# Competitive Exams: International Banking Terminology L to M

• LIBOR: Libor stands for London Inter-Bank Offered Rate. This is a favorable interest rate offered for US dollar or Eurodollar deposits between groups of London banks. It is an international interest rate that follows world economic conditions and is defined by the maturity of its deposit term (i.e., 30-day LIBOR, 60-day LIBOR). This market allows banks with liquidity requirements to borrow quickly from other banks with surpluses. The LIBOR is officially fixed once a day by a group of large London banks, but the rate changes throughout the day. The difficulty with some LIBOR-based loans is that the terms can be based upon set dollar amounts to draw down or repay at specific dates.
• Lien: An encumbrance against property for money due, either voluntary or involuntary.
• Line of Credit: A pre-approved loan authorization with a specific borrowing limit based on creditworthiness. A line of credit allows borrowers to obtain a number of loans without re-applying each time, as long as the total amount of funds does not exceed the credit limit.
• Loan to Value (LTV): The unpaid principal balance of a loan on a property divided by the asset's appraised value. Generally, the lower the LTV, the more favorable the terms and interest rate of the loan. For example, on a $100,000 building with a note due of $80,000, the LTV ratio would be 80%.
• Margin: The number of percentage points a lender adds to an index value to calculate the interest rate charged on a loan. Also known as the spread.
• Maturity: The date on which the principal balance of a loan becomes due and payable.
• Money Market Fund: An open-ended mutual fund that invests in short-term debt and monetary instruments such as Treasury bills and pays money market rates of interest. These are usually NOT insured by the FDIC.
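The LTV and margin entries above are simple arithmetic; here is a short sketch using the glossary's own 80% example (the helper names and the index/margin figures are hypothetical):

```python
def loan_to_value(unpaid_principal, appraised_value):
    # LTV = unpaid principal balance / appraised value
    return unpaid_principal / appraised_value

def loan_rate(index_rate, margin):
    # rate charged = index value + margin (the "spread")
    return index_rate + margin

# The $100,000 building with an $80,000 note from the glossary:
ltv = loan_to_value(80_000, 100_000)
assert ltv == 0.80

# Hypothetical: a 2.5% index rate plus a 2-percentage-point margin.
rate = loan_rate(0.025, 0.02)
assert abs(rate - 0.045) < 1e-12
```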
https://tex.stackexchange.com/questions/361989/correct-diameter-symbol-needed-without-stix-package-pdflatex
# Correct diameter symbol needed without stix package (pdflatex)

I have drawn a custom diameter symbol in TikZ and now I need the correct unicode glyph (U+2300 ⌀) in my document (pdftex) to provide the right copy-paste text in the pdf. I've found that the \diameter symbol from the stix package produces the right glyph (testable at fileformat.info, it should say "DIAMETER SIGN" as the only result), but in my document stix produces many errors (Too many symbol fonts declared. ...ont{arrows2} {LS1}{stixsf} {m}{it}) for which I wasn't able to find a fix. So, how can I get the needed unicode glyph in my document with pdftex without using stix? No other symbol package I've tested produces the right glyph.

edit: I've looked into stix.sty and tried to use the definition from there, but this creates the wrong symbol ...

    \documentclass{standalone}
    \DeclareMathSymbol{\diam}{\mathord}{symbols}{"60}
    \begin{document}
    $$\diam$$ % should always produce U+2300
    \end{document}

This answer might apply here, but I couldn't get it to work for me.

• Maybe you can provide a minimal but complete code example. Apr 3 '17 at 22:11
• @Dr. Manuel Kuehner: Done – lblb Apr 3 '17 at 22:12

You don't need to set up a math font for just one symbol.

    \documentclass{standalone}
    \usepackage{amsmath}
    \DeclareFontEncoding{LS1}{}{}
    \DeclareFontSubstitution{LS1}{stix}{m}{n}
    \DeclareRobustCommand{\diameter}{%
      \text{\usefont{LS1}{stixscr}{m}{n}\symbol{"60}}%
    }
    \begin{document}
    $$\diameter$$ % something producing the U+2300 glyph
    \end{document}

If I copy from the PDF and paste it in http://r12a.github.io/apps/conversion/ I get

• Thank you. Time for me to educate myself more on the topic instead of bothering others. – lblb Apr 3 '17 at 22:50
• @lblb Oh, don't worry! It's fun to do these chores! ;-) Apr 3 '17 at 22:51
https://www.transtutors.com/questions/1-the-manangement-of-sharrar-corporation-would-like-toinvestigate-the-possibility-of-3479697.htm
1) The management of Sharrar Corporation would like to investigate the possibility of basing its predetermined overhead rate on activity at capacity rather than on the estimated amount of activity for the year. The company's controller has provided an example to illustrate how this new system would work. In this example, the allocation base is machine-hours and the estimated amount of the allocation base for the upcoming year is 45,000 machine-hours. In addition, capacity is 52,000 machine-hours and the actual activity for the year is 47,100 machine-hours. All of the manufacturing overhead is fixed and is $1,029,600 per year. For simplicity, it is assumed that this is the estimated manufacturing overhead for the year as well as the manufacturing overhead at capacity and the actual amount of manufacturing overhead for the year.

a) Determine the underapplied or overapplied overhead for the year if the predetermined overhead rate is based on the estimated amount of the allocation base.

b) Determine the underapplied or overapplied overhead for the year if the predetermined overhead rate is based on the amount of the allocation base at capacity.

2) Fryer Corporation uses the weighted-average method in its process costing system. This month, the beginning inventory in the first processing department consisted of 700 units. The cost and percentage completion of these units in the beginning inventory were:

Material costs: $12,600 (75% complete)
Conversion costs: $8,900 (60% complete)

A total of 7,300 units were started and 6,200 units were transferred to the second processing department during the month. The following costs were incurred in the first processing department during the month:

Material costs: $132,200
Conversion costs: $117,500

The ending inventory was 80% complete with respect to materials and 45% complete with respect to conversion costs.
NOTE: To reduce rounding error, carry out all computations to at least 3 decimal places.

The total cost transferred from the first processing department to the next processing department during the month is closest to what amount?

3) The following information is available on Company A:

Sales……………………………………$900,000
Net Operating Income………………..$36,000
Stockholders' Equity…………………..$100,000
Average Operating Assets……………$180,000
Minimum Required Rate of Return……15%

What is Company A's residual income amount?
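A hedged sketch of how these questions are usually worked with the standard textbook formulas (the code and intermediate names are mine; the figures come from the problem statements):

```python
# --- Question 1: predetermined overhead rate ---
overhead = 1_029_600.0   # fixed manufacturing overhead, $
estimated_mh = 45_000    # estimated machine-hours
capacity_mh = 52_000     # machine-hours at capacity
actual_mh = 47_100       # actual machine-hours

# (a) rate based on the estimated amount of the allocation base
rate_est = overhead / estimated_mh             # $22.88 per machine-hour
over_applied = rate_est * actual_mh - overhead
assert round(rate_est, 2) == 22.88
assert round(over_applied, 2) == 48_048.0      # overhead is overapplied

# (b) rate based on the allocation base at capacity
rate_cap = overhead / capacity_mh              # $19.80 per machine-hour
under_applied = overhead - rate_cap * actual_mh
assert round(rate_cap, 2) == 19.80
assert round(under_applied, 2) == 97_020.0     # overhead is underapplied

# --- Question 2: weighted-average process costing ---
units_transferred = 6_200
ending_wip = 700 + 7_300 - 6_200               # 1,800 units still in process
eu_materials = units_transferred + ending_wip * 0.80
eu_conversion = units_transferred + ending_wip * 0.45
cost_per_eu = ((12_600 + 132_200) / eu_materials
               + (8_900 + 117_500) / eu_conversion)
transferred_cost = units_transferred * cost_per_eu
assert 229_290 < transferred_cost < 229_310    # closest to about $229,300

# --- Question 3: residual income ---
# residual income = net operating income - min. return * average assets
residual_income = 36_000 - 0.15 * 180_000
assert abs(residual_income - 9_000) < 1e-6
```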
2020-10-22 00:24:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24367740750312805, "perplexity": 14967.102303078995}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878662.15/warc/CC-MAIN-20201021235030-20201022025030-00546.warc.gz"}
https://socratic.org/questions/how-do-you-solve-17-13-8x
# How do you solve 17 = -13 - 8x? $17 = - 13 - 8 x$ $17 + 13 = - 8 x$ $30 = - 8 x$ $- 8 x = 30$ $x = - \frac{30}{8}$ $x = - \frac{15}{4}$ Jul 2, 2018 $x = - \frac{15}{4}$ #### Explanation: $\text{add 13 to both sides of the equation}$ $17 + 13 = - 8 x$ $30 = - 8 x$ $\text{divide both sides by } - 8$ $\frac{30}{- 8} = x \Rightarrow x = - \frac{30}{8} = - \frac{15}{4}$ $\textcolor{b l u e}{\text{As a check}}$ $- 13 - \left(8 \times - \frac{15}{4}\right) = - 13 + 30 = 17 \leftarrow \text{correct}$ Aug 5, 2018 $x = - \frac{15}{4}$ #### Explanation: We have the following: $- 13 - 8 x = 17$ We can get the constants on the right by adding $13$ to both sides. We now have $- 8 x = 30$ Our last step would be to divide both sides by $- 8$. We get $x = - \frac{30}{8}$, which simplifies to $x = - \frac{15}{4}$. Hope this helps!
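The same steps can be verified with exact rational arithmetic — a quick sketch, not part of the original answers:

```python
from fractions import Fraction

# Solve 17 = -13 - 8x: add 13 to both sides, then divide by -8,
# keeping exact fractions throughout.
x = Fraction(17 + 13, -8)
print(x)  # -15/4

# Substitute back into the right-hand side as a check.
assert -13 - 8 * x == 17
```

`Fraction` normalizes the sign and reduces automatically, which is why `Fraction(30, -8)` prints as `-15/4`.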
2020-10-01 18:31:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 21, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642407894134521, "perplexity": 570.177281244632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131986.91/warc/CC-MAIN-20201001174918-20201001204918-00466.warc.gz"}
http://libros.duhnnae.com/2017/jul7/1500958148100-Scaled-limit-and-rate-of-convergence-for-the-largest-eigenvalue-from-the-generalized-Cauchy-random-matrix-ensemble-Mathematics-Probability.php
# Scaled limit and rate of convergence for the largest eigenvalue from the generalized Cauchy random matrix ensemble - Mathematics > Probability

Abstract: In this paper, we are interested in the asymptotic properties of the largest eigenvalue of the Hermitian random matrix ensemble called the Generalized Cauchy ensemble (GCy), whose eigenvalue PDF is proportional to the squared Vandermonde determinant $\prod_{1\leq j<k\leq N}|x_j-x_k|^2$ times an $s$-dependent weight, where $N$ is the size of the matrix ensemble and the parameter $s$ satisfies $\Re(s)>-1/2$. Using results by Borodin and Olshanski \cite{Borodin-Olshanski}, we first prove that for this ensemble, the largest eigenvalue divided by $N$ converges in law to some probability distribution for all $s$ such that $\Re(s)>-1/2$. Using results by Forrester and Witte \cite{Forrester-Witte2} on the distribution of the largest eigenvalue for fixed $N$, we also express the limiting probability distribution in terms of some non-linear second-order differential equation. Eventually, we show that the convergence of the probability distribution function of the re-scaled largest eigenvalue to the limiting one is at least of order $1/N$.

Authors: Joseph Najnudel, Ashkan Nikeghbali, Felix Rubin Source: https://arxiv.org/
2018-04-23 06:08:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637328386306763, "perplexity": 4440.2463366021675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945793.18/warc/CC-MAIN-20180423050940-20180423070940-00589.warc.gz"}
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=8&journalID=27943&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
Computational Methods and Function Theory [SJR: 0.329] [H-I: 5] ISSN (Print) 1617-9447 - ISSN (Online) 2195-3724 Published by Springer-Verlag

• Quotients of Hyperbolic Metrics • Authors: David Minda Pages: 579 - 590 Abstract: In this paper, precise versions of several intuitive properties of quotients of hyperbolic metrics are established. Suppose that $$\Omega _j$$ is a hyperbolic region in $$\mathbb {C}_\infty = \mathbb {C}\cup \{\infty \}$$ with hyperbolic metric $$\lambda _j$$ , $$j=1,2$$ , and $$\Omega _1 \subsetneq \Omega _2$$ . First, it is shown that $$\lambda _1/ \lambda _2 \approx 1$$ on compact subsets of $$\Omega _1$$ that are not too close to $$\partial \Omega _1$$ . Second, $$\lambda _1/ \lambda _2 \approx 1$$ when z is near $$(\partial \Omega _1 \;\cap \; \partial \Omega _2 ) {\setminus } F_b$$ , where $$F = \partial \Omega _1 \;\cap \;\Omega _2$$ and $$F_b = {{\mathrm{cl}}}(F)\;\cap \;\Omega _2$$ . The main tools used in establishing these results are sharp elementary bounds for $$\lambda _1(z)/ \lambda _2(z)$$ in terms of the hyperbolic distance relative to $$\Omega _2$$ from z to $$\partial \Omega _1 \;\cap \;\Omega _2$$ that were first established and employed in complex dynamics. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0195-1 Issue No: Vol. 17, No. 4 (2017)

• A Normality Criterion Corresponding to the Defect Relations • Authors: Andreas Schweizer Pages: 591 - 601 Abstract: Let $$\mathcal{{F}}$$ be a family of meromorphic functions on a domain D. We present a quite general sufficient condition for $$\mathcal{{F}}$$ to be a normal family. This criterion contains many known results as special cases. The overall idea is that certain comparatively weak conditions on $$\mathcal{{F}}$$ by local arguments lead to somewhat stronger conditions, which in turn lead to even stronger conditions on the limit function g in the famous Zalcman Lemma. Ultimately, the defect relations for g force normality of $$\mathcal{{F}}$$ . PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0196-0 Issue No: Vol. 17, No. 4 (2017)

• Criteria for Bounded Valence of Harmonic Mappings • Authors: Juha-Matti Huusko; María J. Martín Pages: 603 - 612 Abstract: In 1984, Gehring and Pommerenke proved that if the Schwarzian derivative S(f) of a locally univalent analytic function f in the unit disk was such that $$\limsup _{|z| \rightarrow 1} |S(f)(z)| (1-|z|^2)^2 < 2$$ , then there would exist a positive integer N such that f takes every value at most N times. Recently, Becker and Pommerenke have shown that the same result holds in those cases when the function f satisfies $$\limsup _{|z| \rightarrow 1} |f''(z)/f'(z)| \, (1-|z|^2)< 1$$ . In this paper, we generalize these two criteria for bounded valence of analytic functions to the cases when f is only locally univalent and harmonic. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0197-z Issue No: Vol. 17, No. 4 (2017)

• Uniqueness Theorems for Differential Polynomials Sharing a Small Function • Authors: Thi Hoai An Ta; Viet Phuong Nguyen Pages: 613 - 634 Abstract: Consider meromorphic functions f, g, and $$\alpha ,$$ where $$\alpha$$ is a small function with respect to f and g. Let Q be a polynomial of one variable.
We give suitable conditions on the degree of Q and on the number of zeros and the multiplicities of the zeros of $$Q'$$ so as to be able to conclude uniqueness results if differential polynomials of the form $$(Q(f))^{(k)}$$ and $$(Q(g))^{(k)}$$ share $$\alpha$$ counting multiplicities. We do not assume that Q has a large order zero, nor do we place restrictions on the zeros and poles of $$\alpha .$$ Thus, our work improves on many prior results that either assume Q has a high order zero or place restrictions on the small function $$\alpha$$ . PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0198-y Issue No: Vol. 17, No. 4 (2017)

• Blaschke Products and Circumscribed Conics • Authors: Masayo Fujimura Pages: 635 - 652 Abstract: We study geometrical properties of finite Blaschke products. For a Blaschke product B of degree d, let $$L_{\lambda }$$ be the set of the lines tangent to the unit circle at the d preimages $$B^{-1}(\lambda )$$ . We show that the trace of the intersection points of each pair of two elements in $$L_{\lambda }$$ as $$\lambda$$ ranges over the unit circle forms an algebraic curve of degree at most $$d-1$$ . In case of low degree, we have more precise results. For instance, for $$d=3$$ , the trace forms a conic section. For $$d=4$$ , we provide a necessary and sufficient condition for Blaschke products whose trace includes a conic section. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0201-7 Issue No: Vol. 17, No. 4 (2017)

• The Inverse of a $$\sigma (z)$$ -Harmonic Diffeomorphism • Authors: Hu Chunying; Shi Qingtian Pages: 653 - 662 Abstract: A new kind of functional, analogous to the Douglas–Dirichlet functional, is defined as \begin{aligned} E'[f]=\displaystyle \iint _{\Omega }\sigma (z)(|f_{z}|^{2}+|f_{\overline{z}}|^{2})\mathrm{d}x\mathrm{d}y \end{aligned} for $$f\in C^{2}$$ on $$\Omega$$ with a conformal metric density $$\sigma (z)$$ . A critical point of this new functional is said to be a $$\sigma (z)$$ -harmonic mapping.
We consider the harmonicity of the inverse function of a $$\sigma (z)$$ -harmonic diffeomorphism and obtain a necessary and sufficient condition, which improves on the corresponding result for Euclidean harmonic mappings. In addition, a property of the inverse function of $$\rho$$ -harmonic mappings is investigated and an example is given. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0202-6 Issue No: Vol. 17, No. 4 (2017)

• A Note on the Kirwan Conjecture • Authors: Yuk-J. Leung Pages: 663 - 678 Abstract: We continue our investigation on a second variation formula of the Koebe function in the class $$\Sigma$$ of functions analytic and univalent in the exterior of the unit disk. Our aim is to give some supporting evidence of a conjecture raised by William Kirwan on the coefficients of functions in this class. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0204-4 Issue No: Vol. 17, No. 4 (2017)

• Bohr Inequality for Odd Analytic Functions • Authors: Ilgiz R. Kayumov; Saminathan Ponnusamy Pages: 679 - 688 Abstract: We determine the Bohr radius for the class of odd functions f satisfying $$|f(z)| \le 1$$ for all $$|z| <1$$ , solving the recent problem of Ali et al. (J Math Anal Appl 449(1):154–167, 2017). In fact, we solve this problem in a more general setting. Then we discuss Bohr's radius for the class of analytic functions g, when g is subordinate to a member of the class of odd univalent functions. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0206-2 Issue No: Vol. 17, No. 4 (2017)

• Hardy-Type Spaces Arising from a Vector-Valued Cauchy Kernel • Authors: Jerry R. Muir Pages: 715 - 733 Abstract: An integral formula of Cauchy type was recently developed that reproduces any continuous $$f:\overline{{\mathbb {B}}} \rightarrow {\mathbb {C}}^n$$ that is holomorphic in the open unit ball $${\mathbb {B}}$$ of $${\mathbb {C}}^n$$ using a fixed vector-valued kernel and the scalar expression $$\langle f(u),u \rangle$$ , where $$u\in \partial {\mathbb {B}}$$ and $$\langle \cdot ,\cdot \rangle$$ is the Hermitian inner product in $${\mathbb {C}}^n$$ , which is key to defining the numerical range of f. We consider Hardy-type spaces associated with this vector-valued kernel. In particular, we introduce spaces of vector-valued holomorphic mappings properly containing the vector-valued Hardy spaces that are reproduced through the process described above and isomorphic spaces of scalar-valued non-holomorphic functions that satisfy many of the familiar properties of Hardy space functions. In the spirit of providing a straightforward introduction to these spaces, proof techniques have been kept as elementary as possible. In particular, the theory of maximal functions and singular integrals is avoided. PubDate: 2017-12-01 DOI: 10.1007/s40315-017-0203-5 Issue No: Vol. 17, No. 4 (2017)

• Quadrature Domains for the Bergman Space in Several Complex Variables • Authors: Alan R. Legg Abstract: We make use of the Bergman kernel function to study quadrature domains whose quadrature identities hold for $$L^2$$ holomorphic functions of several complex variables. We generalize some mapping properties of planar quadrature domains and point out some differences from the planar case. We then show that every smooth bounded convex domain in $${\mathbb {C}}^n$$ is biholomorphic to a quadrature domain. Finally, the possibility of continuous deformations within the class of planar quadrature domains is examined.
PubDate: 2017-11-22 DOI: 10.1007/s40315-017-0224-0

• On the Failure of Bombieri's Conjecture for Univalent Functions • Authors: Iason Efraimidis Abstract: A conjecture of Bombieri (Invent Math 4:26–67, 1967) states that the coefficients of a normalized univalent function f should satisfy \begin{aligned} \liminf _{f\rightarrow K} \frac{n-\mathrm{Re\,}a_n}{m-\mathrm{Re\,}a_m} = \min _{t\in {\mathbb {R}}} \, \frac{n\sin t -\sin (nt)}{m\sin t -\sin (mt)}, \end{aligned} when f approaches the Koebe function $$K(z)=\frac{z}{(1-z)^2}$$ . Recently, Leung [10] disproved this conjecture for $$n=2$$ and for all $$m\ge 3$$ and, also, for $$n=3$$ and for all odd $$m\ge 5$$ . Complementing his work, we disprove it for all $$m>n\ge 2$$ which are simultaneously odd or even and, also, for the case when m is odd, n is even and $$n\le \frac{m+1}{2}$$ . We not only make use of trigonometry but also employ Dieudonné's criterion for the univalence of polynomials. PubDate: 2017-11-07 DOI: 10.1007/s40315-017-0222-2

• On Meromorphic Solutions of Non-linear Difference Equations • Authors: Ran-Ran Zhang; Zhi-Bo Huang Abstract: In this paper, using the theory of linear algebra, we investigate the non-linear difference equation of the following form in the complex plane: \begin{aligned} f(z)^n + p(z)f(z+\eta ) = \beta _1e^{\alpha _1z}+\beta _2e^{\alpha _2z}+\cdots +\beta _se^{\alpha _sz}, \end{aligned} where n, s are positive integers, $$p(z)\not \equiv 0$$ is a polynomial and $$\eta , \beta _1, \ldots , \beta _s, \alpha _1, \ldots , \alpha _s$$ are constants with $$\beta _1 \ldots \beta _s\alpha _1 \ldots \alpha _s\ne 0$$ , and show that this equation only has meromorphic solutions with hyper-order at least one when $$n\ge 2+s$$ . Other cases are also obtained. PubDate: 2017-11-07 DOI: 10.1007/s40315-017-0223-1

• Distribution of Zeros for Random Laurent Rational Functions • Authors: Igor E. Pritsker Abstract: We study the asymptotic distribution of zeros for the random rational functions that can be viewed as partial sums of a random Laurent series. If this series defines a random analytic function in an annulus A, then the zeros accumulate on the boundary circles of A, being equidistributed in the angular sense, with probability 1. We also show that the equidistribution phenomenon holds if the annulus of convergence degenerates to a circle. Moreover, equidistribution of zeros still persists when the Laurent rational functions diverge everywhere, which is new even in the deterministic case. All results hold under two types of general conditions on random coefficients. The first condition is that the random coefficients are non-trivial i.i.d. random variables with finite $$\log ^+$$ moments. The second condition allows random variables that need not be independent or identically distributed, but only requires certain uniform bounds on the tails of their distributions. PubDate: 2017-11-01 DOI: 10.1007/s40315-017-0213-3

• Conformal Mapping of Rectangular Heptagons II • Authors: A. B. Bogatyrev; O. A. Grigor'ev Abstract: A new analytical method for the conformal mapping of rectangular polygons with a straight angle at infinity to a half-plane and back is proposed. The method is based on the observation that the SC integral in this case is an abelian integral on a hyperelliptic curve, so it may be represented in terms of Riemann theta functions. The approach is illustrated by the computation of 2D-flow of ideal fluid above a rectangular underlying surface and the computation of the capacities of multi-component rectangular condensers with axial symmetry. PubDate: 2017-10-31 DOI: 10.1007/s40315-017-0217-z

• Asymmetric Truncated Toeplitz Operators of Rank One • Authors: Bartosz Łanucha Abstract: A truncated Toeplitz operator is a compression of the multiplication operator to a backward shift invariant subspace of the Hardy space $$H^2$$ .
An asymmetric truncated Toeplitz operator is a compression of the multiplication operator that acts between two different backward shift invariant subspaces of $$H^2$$ . All rank-one truncated Toeplitz operators have been described by Sarason. Here, we characterize all rank-one asymmetric truncated Toeplitz operators. This completes the description given by Łanucha for asymmetric truncated Toeplitz operators on finite-dimensional backward shift invariant subspaces. PubDate: 2017-10-30 DOI: 10.1007/s40315-017-0219-x

• Bernstein–Walsh Theory Associated to Convex Bodies and Applications to Multivariate Approximation Theory • Authors: L. Bos; N. Levenberg Abstract: We prove a version of the Bernstein–Walsh theorem on uniform polynomial approximation of holomorphic functions on compact sets in several complex variables. Here we consider subclasses of the full polynomial space associated to a convex body P. As a consequence, we validate and clarify some observations of Trefethen in multivariate approximation theory. PubDate: 2017-10-24 DOI: 10.1007/s40315-017-0220-4

• Orthogonal Polynomials Related to $$g$$ -Fractions with Missing Terms • Authors: Kiran Kumar Behera; A. Swaminathan Abstract: The purpose of the present paper is to investigate some structural and qualitative aspects of two different perturbations of the parameters of g-fractions. In this context, the concept of gap g-fractions is introduced. While tail sequences of a continued fraction play a significant role in the first perturbation, Schur fractions are used in the second perturbation of the g-parameters that is considered. Illustrations are provided using Gaussian hypergeometric functions. Using a particular gap g-fraction, some members of the class of Pick functions are also identified.
PubDate: 2017-10-24 DOI: 10.1007/s40315-017-0218-y

• Erratum to: On the Characterisations of a New Class of Strong Uniqueness Polynomials Generating Unique Range Sets • Authors: Abhijit Banerjee; Sanjay Mallick PubDate: 2017-07-29 DOI: 10.1007/s40315-017-0211-5

• Radially Distributed Values of Holomorphic Curves • Authors: Nan Wu Abstract: Using the spread relation we investigate the growth of transcendental holomorphic curves when they have radially distributed small holomorphic curves. PubDate: 2017-06-26 DOI: 10.1007/s40315-017-0208-0

• Fast and Accurate Computation of the Logarithmic Capacity of Compact Sets • Authors: Jörg Liesen; Olivier Sète; Mohamed M. S. Nasser Abstract: We present a numerical method for computing the logarithmic capacity of compact subsets of $$\mathbb {C}$$ , which are bounded by Jordan curves and have finitely connected complement. The subsets may have several components and need not have any special symmetry. The method relies on the conformal map onto lemniscatic domains and, computationally, on the solution of a boundary integral equation with the Neumann kernel. Our numerical examples indicate that the method is fast and accurate. We apply it to give an estimate of the logarithmic capacity of the Cantor middle third set and generalizations of it. PubDate: 2017-06-22 DOI: 10.1007/s40315-017-0207-1
2017-12-14 10:17:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8591862320899963, "perplexity": 882.5457485567042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948543611.44/warc/CC-MAIN-20171214093947-20171214113947-00638.warc.gz"}
http://www.maths.usyd.edu.au/u/AlgebraSeminar/11abstracts/futorny11.html
# Slava Futorny (University of Sao Paulo)

## Friday 23 September, 12:05-12:55pm, Carslaw 175

### Torsion theories and corresponding representations

We will discuss torsion theories for representations of an important class of associative algebras which includes all finite $$W$$-algebras of type $$A$$, in particular the universal enveloping algebra of $$\mathfrak{gl}(n)$$ for all $$n$$. If $$U$$ is such an algebra that contains a finitely generated commutative subalgebra $$G$$, then the coheight of the associated prime ideals of $$G$$ is an invariant of a given simple $$U$$-module. This implies a stratification of the category of $$U$$-modules controlled by the coheight of associated prime ideals of $$G$$. In particular, this approach allows the study of representations of $$\mathfrak{gl}(n)$$ beyond the classical category of weight or generalized weight modules. The talk is based on recent joint results with S. Ovsienko (Kiev) and M. Saorin (Murcia).
2017-10-24 00:15:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.548865556716919, "perplexity": 394.7263059206392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00847.warc.gz"}
https://iclr.cc/virtual/2021/poster/3374
## RMSprop converges with proper hyper-parameter

### Naichen Shi · Dawei Li · Mingyi Hong · Ruoyu Sun

Keywords: [ convergence ] [ hyperparameter ] [ RMSprop ]

Abstract: Despite the existence of divergence examples, RMSprop remains one of the most popular algorithms in machine learning. Towards closing the gap between theory and practice, we prove that RMSprop converges with a proper choice of hyper-parameters under certain conditions. More specifically, we prove that when the hyper-parameter $\beta_2$ is close enough to $1$, RMSprop and its random shuffling version converge to a bounded region in general, and to critical points in the interpolation regime. It is worth mentioning that our results do not depend on the "bounded gradient" assumption, which is often the key assumption utilized by existing theoretical work for Adam-type adaptive gradient methods. Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSprop. Finally, based on our theory, we conjecture that in practice there is a critical threshold $\sf{\beta_2^*}$, such that RMSprop generates reasonably good results only if $1>\beta_2\ge \sf{\beta_2^*}$. We provide empirical evidence for such a phase transition in our numerical experiments.
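For readers unfamiliar with the update rule being analyzed, here is a minimal sketch of plain (non-shuffled, scalar-parameter) RMSprop; the hyper-parameter names and the toy objective are my own choices, not taken from the paper:

```python
def rmsprop(grad, x0, lr=0.01, beta2=0.9, eps=1e-8, steps=2000):
    """Plain RMSprop on a scalar parameter: v keeps an exponential
    moving average of squared gradients and rescales each step."""
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        v = beta2 * v + (1 - beta2) * g * g
        x -= lr * g / (v ** 0.5 + eps)
    return x

# Minimize f(x) = x^2; the iterate should settle near the minimum at 0.
x_final = rmsprop(lambda x: 2 * x, x0=1.0)
print(abs(x_final) < 0.1)  # True
```

Because the step is normalized by the running gradient magnitude, each update has size roughly `lr` regardless of scale, which is the behavior the paper's $\beta_2$-dependent analysis concerns.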
2021-09-19 08:43:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104877471923828, "perplexity": 609.5266735749454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00226.warc.gz"}
http://nwrm.societastoricacivitavecchiese.it/implicit-euler-method-matlab.html
# Implicit Euler Method Matlab

FD1D_HEAT_IMPLICIT, a MATLAB library which solves the time-dependent 1D heat equation, using the finite element method in space, and an implicit version of the method of lines, using the backward Euler method, to handle integration in time. Examples for Runge-Kutta methods: we will solve the initial value problem $\frac{du}{dx} = -2u$. Calculates the solution $y=f(x)$ of the ordinary differential equation $y'=F(x,y)$ using Euler's method. It is found that this magnitude is always less than one for the Crank-Nicolson and implicit Euler methods of integration, while it can become greater than one for the explicit Euler method. It is called the implicit Euler method because the update equation determines the new value only implicitly. This technique is known as "Euler's Method" or "First Order Runge-Kutta". Euler's (Forward) Method: alternatively, for a given step size we use the Taylor series to approximate the function, taking only the first derivative. This formula is referred to as Euler's forward method, or explicit Euler's method, or the Euler-Cauchy method, or the point-slope method. Implicit methods: Backward. To learn more advanced MATLAB programming and more details about MATLAB, we refer to the references [1] and [2]. Regarding stability of the above discretization scheme, theory says that for θ ∈ [0, 0.5) the scheme is only conditionally stable.
Euler's Method after the famous Leonhard Euler. These methods were developed around 1900 by the German mathematicians C. The solution of the MNA differential-algebraic equations (DAE) is based on the implicit Euler method capable to be easily joined with the Wendroff method, resulting in a sole system of equations in time domain. and they form an initial value problem. Now if the order of the method is better, Improved Euler's relative advantage should be even greater at a smaller step size. Poorey Numerica Corporation, 4850 Hahns Peak Drive, Suite 200, Loveland, Colorado, 80538, USA Accurate and e cient orbital propagators are critical for space situational awareness because they drive uncertainty propagation which is necessary for tracking, conjunction. A related linear multistep formula is the backward Euler, also a one-step formula, defined by (1. It is called the backward Euler method because the difference quotient upon which it is based steps backward in time (from t to t− h). It is found that this magnitude is always less than one for the Crank-Nicholson and Implicit Euler methods of integration while it can become less than one for the Explicit Euler method. (1989) combined both methods and expanded the Euler steps to try to provide a more flexible method. Euler's method is very simple and easy to understand and use in programming to solve initial value problems. Implicit-Explicit (ImEx) Splitting Methods for ODE Systems Math 6321, Fall 2016 Introduction This project will focus on numerical methods for systems of ordinary di erential equations where some terms are sti while others are not. In the last chapter, a discussion of the adaptations to the current model and some recommendations for future research are given. It turns out that Runge-Kutta 4 is of order 4, but it is not much fun to prove that. The Lax and Wendroff non-iterative implicit. 
If you're taking really large time steps with implicit Euler, then using explicit Euler as a predictor might be significantly worse than just taking the last solution value as your initial guess. BE and CN are very expensive: they are implicit methods, and non-linear equations have to be solved at each time step. In the image to the right, the blue circle is being approximated by the red line segments. Why are implicit Euler methods different from explicit ones? Numerical Methods for Ordinary Differential Equations: in this chapter we discuss numerical methods for ODEs. MATLAB M-files accompany each method and are available on the book web site. The implicit analogue of the explicit FE method is the backward Euler (BE) method. I googled for quite some time but was not able to find a proper example. CS3220 Lecture Notes: Backward Euler Method, Steve Marschner, Cornell University, 22 April 2009. These notes are to provide a reference on Backward Euler, which we discussed in class but is not covered in the textbook. With implicit methods, since you're effectively solving giant linear algebra problems, you can either code this completely yourself or, even better, use existing solvers. Euler methods: myeuler.m; Euler's method: modeuler.m. We call this a stable method. It is of first order but is better than the classical Euler method because it is a symplectic integrator, so that it yields better results. Knowing the accuracy of any approximation method is a good thing. A formula must be implicit to some degree, but it is not necessary that every term be treated implicitly. FD1D_HEAT_IMPLICIT is a MATLAB program which solves the time-dependent 1D heat equation, using the finite difference method in space, and an implicit version of the method of lines to handle integration in time. This method is subdivided into three, the first being the forward Euler method.
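To make the explicit/implicit contrast concrete, here is a small sketch (in Python rather than MATLAB) of forward versus backward Euler on the stiff linear test problem y' = λy. For linear f the implicit update can be solved in closed form, so no nonlinear solver is needed; the parameter values are chosen for illustration:

```python
def forward_euler(lam, y0, h, n):
    # Explicit update: y_{k+1} = y_k + h*lam*y_k = (1 + h*lam) * y_k
    y = y0
    for _ in range(n):
        y = (1 + h * lam) * y
    return y

def backward_euler(lam, y0, h, n):
    # Implicit update y_{k+1} = y_k + h*lam*y_{k+1}, solved in closed
    # form for this linear problem: y_{k+1} = y_k / (1 - h*lam)
    y = y0
    for _ in range(n):
        y = y / (1 - h * lam)
    return y

# Stiff decay: the exact solution e^(lam*t) -> 0, but this h is "too
# large" for the explicit method (|1 + h*lam| = 4 > 1, so it blows up),
# while the implicit method damps the solution as it should.
lam, h, n = -50.0, 0.1, 10
print(forward_euler(lam, 1.0, h, n))   # grows like (-4)^n
print(backward_euler(lam, 1.0, h, n))  # decays like 6^(-n)
```

This is exactly the trade-off described above: backward Euler buys unconditional stability at the cost of solving an (in general nonlinear) equation each step.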
Our primary concern with these types of problems is the eigenvalue stability of the resulting numerical integration method. Is there an example somewhere of how to solve a system of ODEs using the backward Euler method? I would want to understand the concept first, so I can implement it in MATLAB. Forward Euler is explicit; backward Euler is an implicit method. Any explicit method will have a step-size requirement of the form $\Delta t \le C$. The implicit Euler method and stiff differential equations: a minor-looking change in the method, already considered by Euler in 1768, makes a big difference: taking as the argument of f the new value instead of the previous one yields $y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})$, from which $y_{n+1}$ is now defined implicitly. Find the first 6 Euler approximations for $x' = 2.3x$ with $x(0) = 5$ and $h = 0.1$. For stability, $\Delta t \sim (\Delta x)^n$ for the $n$-th spatial derivative. We start with the first numerical method for solving initial value problems that bears Euler's name (correct pronunciation: oiler, not uler). The explicit Euler three-point finite difference scheme for the heat equation. As I showed in class, the backward Euler method has better stability properties than the normal Euler method. Solutions of drift-diffusion equations are computed with a fast implicit finite-difference (Euler) method. If anybody can provide an example or point me in the right direction just to get started. Here is another way to view these methods. Modified Euler method: for the numerical solution of the initial value problem $\frac{dY}{dt} = f(t, Y)$, write $Y(t_{n+1}) = Y(t_n) + \int_{t_n}^{t_{n+1}} f(t, Y(t))\,dt$ and approximate the integral using the trapezium rule. The Gauss method now vectorizes the function or expression by default.
In later sections, when a basic understanding has been achieved, computationally efficient methods will be presented. Euler's Method. Implicit Runge-Kutta methods. Euler's method is used for approximating solutions to certain differential equations and works by approximating a solution curve with line segments. While still being first-order, the method is more accurate over a longer time-iteration range. Behind and Beyond the MATLAB ODE Suite: the region of stability of the implicit backward Euler method is the outside of the disk of radius 1 and center (1,0). You can refer to the aforementioned algorithm and flowchart to write a program for Euler's method in any high-level programming language. Taylor series expansion for ODEs, Euler and modified Euler methods, the Runge-Kutta method and adaptive methods, multi-step methods, systems of equations and high-order equations. MATLAB sample code for the Duffing oscillator: duffing.m, which defines the function. Explicit methods can be unstable, i.e. their solutions grow without bound, if the step size is too large. As you can see, the symplectic method works better than the implicit and the explicit methods. Find the value of k. That's what we'll do with Heun's method! We'll use Euler's method to roughly estimate the coordinates of the next point in the solution, and once we have this information, we'll re-predict (or correct) our original estimate of the location of the next solution point by using the method of averaging the slopes of the left and right tangent lines. We obtain the first step of Euler's method: $y_1 = y_0 + h f(t_0, y_0)$, where $y(t_1)$ is replaced by the "numerical solution" $y_1$, etc. Hi guys, I'm trying to create a loop where I have to compare the implicit Euler and RK4 methods for accuracy.
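The predict-then-correct description of Heun's method above translates directly into code; this is a minimal sketch on the test problem $y' = y$ (chosen here purely for illustration), not the MATLAB loop the poster refers to:

```python
import math

def heun_step(f, t, y, h):
    """One step of Heun's (improved Euler) method:
    predict with forward Euler, then correct with the
    average of the slopes at both ends of the step."""
    k1 = f(t, y)                    # slope at the left end
    y_pred = y + h * k1             # Euler predictor
    k2 = f(t + h, y_pred)           # slope at the predicted right end
    return y + h * 0.5 * (k1 + k2)  # trapezoidal corrector

# Test problem y' = y, y(0) = 1, exact solution e^t.
f = lambda t, y: y
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = heun_step(f, t, y, h)
    t += h
# Heun is second order: at t = 1 the error is ~4e-3,
# versus ~0.12 for plain forward Euler with the same step.
print(y, math.e)
```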
Euler's method starting at x equals zero with a step size of one gives the approximation that g of two is approximately 4. AA214B: Numerical Methods for Compressible Flows, Conservative Finite Volume Methods in One Dimension: $u^n_i$ is the spatial cell-integral average value of $u$ at time $t^n$. It is called the implicit Euler method because the defining equation implicitly relates $y_{n+1}$ to $y_n$. The implicit method should stop at 92 iterations to meet the condition that the absolute difference of the solution between the two methods is less than 1e-5. It is an easy method to use when you have a hard time solving a differential equation and are interested in approximating the behavior of the equation in a certain range. This time, do this in MATLAB: develop the Runge-Kutta 4th-order method for solving ordinary differential equations. They would run more quickly if they were coded up in C or Fortran and then compiled on hans. Though MATLAB is primarily a numerics package, it can certainly solve straightforward differential equations symbolically. I do not get the graph in my office, but I get it in the lab. Thus, the Crank-Nicolson and implicit Euler methods are absolutely stable. You might think there is no difference between this method and Euler's method. The accuracy of the estimate can be improved by refining the grid. This site also contains graphical user interfaces for use in experimenting with Euler's method and the backward Euler method. Comparison of Euler and Runge-Kutta 2nd-order methods with exact results. We now commence a survey of one-step methods that are more accurate than Euler's method.
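The absolute-stability claim for Crank-Nicolson and implicit Euler can be checked through the amplification factor each method applies to the test equation $y' = \lambda y$; the factors below are standard, and the sample value of $z = h\lambda$ is an arbitrary illustrative choice:

```python
# Amplification factors for one step on y' = lam*y, with z = h*lam:
#   forward Euler:  1 + z
#   backward Euler: 1 / (1 - z)
#   Crank-Nicolson: (1 + z/2) / (1 - z/2)
def amp_forward(z):  return 1 + z
def amp_backward(z): return 1 / (1 - z)
def amp_cn(z):       return (1 + z / 2) / (1 - z / 2)

# For a strongly decaying mode (say lam = -100) and a coarse step h = 0.1,
# z = -10: forward Euler amplifies, the implicit methods still damp.
z = -10.0
print(abs(amp_forward(z)))   # 9.0   -> unstable
print(abs(amp_backward(z)))  # ~0.091 -> stable
print(abs(amp_cn(z)))        # ~0.667 -> stable
```

For any real $z < 0$ the implicit factors stay below one in magnitude, which is exactly the A-stability property the text attributes to these methods.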
MATLAB will return your answer. The backward Euler method is an implicit method. Implicit Euler Time Discretization and FDM with Newton Method in Nonlinear Heat Transfer Modeling. Euler's Method Numerical Example: as a numerical example of Euler's method, we're going to analyze numerically the above program of Euler's method in MATLAB. All of the implicit formulae are zero-stable, thus principally usable. Suppose, for example, that we want to solve the first-order differential equation y′(x) = xy. A short ad hoc introduction to spectral methods for parabolic PDEs and the Navier-Stokes equations, Hannes Uecker, Carl von Ossietzky Universität Oldenburg, Germany. Solution: choose the step size as h = 1. The higher-order Runge-Kutta methods form a general category, named after the two German mathematicians C. Runge and M. W. Kutta, of which the Euler method and the midpoint method are special cases. However, the results are inconsistent with my textbook results, and sometimes even ridiculously so. Consider $y' = f(t, y)$. Convergence of finite difference schemes: so as to include explicit and implicit schemes, we consider a linear scheme in the following generic matrix form. If a numerical method has no restrictions on the step size in order to have $y_n \to 0$ as $n \to \infty$, we say the numerical method is A-stable. Every method is discussed thoroughly and illustrated with problems involving both hand computation and programming.
Implicit Runge-Kutta (IRK) Method, Lecture 3, Introduction to Numerical Methods for Differential and Differential-Algebraic Equations, TU Ilmenau. This list concerns the application of numerical methods in MATLAB; in this playlist you can find all the topics, methods and rules that you have heard about. What can the problem be? I'm just trying to compare the Runge-Kutta method and the implicit Euler method for a given ODE. The MATLAB tool distmesh can be used for generating a mesh of arbitrary shape that in turn can be used as input into the finite element method. Use of MATLAB built-in functions for solving a system of linear equations. MATLAB files: solving ODEs with MATLAB; solution of first-order problems. Calculates the solution y = f(x) of the ordinary differential equation y' = F(x, y) using Euler's method. The methods to improve the stability of the system are described in chapter 3. Numerical Methods for Differential Equations, Chapter 1: Initial value problems in ODEs, Gustaf Söderlind and Carmen Arévalo, Numerical Analysis, Lund University. Textbooks: A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles, and Introduction to Mathematical Modelling with Differential Equations, by Lennart Edsberg. MATLAB implementation: within MATLAB, we declare matrix A to be sparse by initializing it with the sparse function. From a practical point of view, this is a bit more involved. b(4) Write a general-purpose Backward Euler function (it should take the same inputs as above, but use root finding to solve the implicit equation - you may use the Secant code that I've posted on KSOL to perform this). We will provide details on algorithm development using the Euler method as an example. The implicit Euler method is seldom used to solve differential-algebraic equations (DAEs) of differential index r ≥ 3, since the method in general fails to converge in the first r − 2 steps after a change of stepsize.
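Exercise b(4) above asks for a backward Euler routine that solves the implicit equation with a root finder; the sketch below uses a generic secant solver as a stand-in for the Secant code the assignment mentions (which is not reproduced here), written in Python rather than MATLAB:

```python
def secant(g, x0, x1, tol=1e-12, maxit=50):
    """Generic secant root finder for g(x) = 0."""
    for _ in range(maxit):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

def backward_euler_rootfind(f, t0, y0, h, steps):
    """Backward Euler: at each step solve u - y - h*f(t_new, u) = 0 for u."""
    t, y = t0, y0
    ys = [y0]
    for _ in range(steps):
        t_new = t + h
        g = lambda u: u - y - h * f(t_new, u)
        y = secant(g, y, y + h * f(t, y))  # start from y and an Euler predictor
        t = t_new
        ys.append(y)
    return ys

# Example: y' = -2*y, y(0) = 1; each implicit step equals y/(1 + 2h).
ys = backward_euler_rootfind(lambda t, y: -2.0 * y, 0.0, 1.0, 0.1, 10)
print(ys[-1])   # ~ (1/1.2)**10 ≈ 0.1615
```

Because the test equation is linear, the secant solver converges essentially immediately; on a genuinely nonlinear f the same loop performs a few iterations per step.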
In the image to the right, the blue circle is being approximated by the red line segments. The advantage of forward Euler is that it gives an explicit update equation, so it is easier to implement in practice. This chapter will describe some basic methods and techniques for programming simulations of differential equations. Given a differential equation dy/dx = f(x, y) with initial condition y(x0) = y0. % Matlab Program 4: Step-wave Test for the Lax method to solve the Advection Equation; parameters define the advection equation and the range in space and time. Euler's Method Flowchart: also see the Euler's Method C program and the Euler's Method MATLAB program. We introduce an adaptive numerical method for computing blow-up solutions for ODEs and well-known reaction-diffusion equations. [Convergence plot: error versus step size h for the FE, BE and CN methods - Paola Gervasio, Numerical Methods, 2012.] Keywords: ODE, spring-mass system, Euler, implicit, explicit. Write a MATLAB script that solves a first-order ODE using the implicit backward Euler method by solving the nonlinear problem with Newton's method. Runge-Kutta 4th Order Method for Ordinary Differential Equations. Specifically, errors won't grow when approximating the solution to problems with rapidly decaying solutions.
Introduction to PDEs and Numerical Methods: implicit methods for finite difference approximation. Exercise 1: Backward Euler formula (20 points). The backward Euler formula for the instationary heat equation without source terms is given by $(I - \Delta t\,A)u^{n+1} = u^n$, where $I$ is the identity matrix and $A$ is the matrix coming from the finite difference approximation. Contact-implicit methods have the benefit of generating contact sequences as part of the optimization. Of course, the present problem is not stiff; explicit methods themselves produce accurate results, and implicit methods are not required. The following MATLAB project contains the source code and MATLAB examples used for rigid registration using an implicit interface. Clearly, in this example the improved Euler method is much more accurate than the Euler method: about 18 times more accurate. The accuracy of the estimate can be improved by refining the grid. MATLAB, Numerical Integration, and Simulation: MATLAB tutorial; basic programming skills; visualization; ways to look for help; numerical integration; integration methods (explicit, implicit; one-step, multi-step); accuracy and numerical stability; stiff systems; programming examples; solutions to HW0 using MATLAB; mass-spring-damper. Solving a system of equations using MATLAB's left and right division.
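The backward Euler system $(I - \Delta t\,A)u^{n+1} = u^n$ for the 1D heat equation is tridiagonal, so it can be solved with the Thomas algorithm that the text mentions later; this is a self-contained sketch with an illustrative grid, time step, and zero boundary values:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main-, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def heat_step(u, r):
    """One backward Euler step of u_t = u_xx with r = dt/dx**2 and
    zero values outside the grid: (I - dt*A) u_new = u_old."""
    n = len(u)
    a = [-r] * n          # sub-diagonal
    b = [1 + 2 * r] * n   # main diagonal
    c = [-r] * n          # super-diagonal
    return thomas(a, b, c, u)

u = [0.0, 1.0, 2.0, 1.0, 0.0]   # initial temperature profile
for _ in range(50):
    u = heat_step(u, r=1.0)      # large r is fine: unconditionally stable
print(u)                         # profile decays toward zero
```

The point of the example is the stability claim: with an explicit scheme this value of r would blow up immediately, while the implicit step simply damps the profile.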
The (implicit) backward Euler, Gear order-2 and trapezoidal integration methods are A-stable. Time-stepping techniques: unsteady flows are parabolic in time, so we use 'time-stepping' methods to advance transient solutions step-by-step or to compute stationary solutions. [Space-time diagram: zone of influence and domain of dependence; future, present, past.] Initial-boundary value problem for a time-dependent PDE $u = u(x,t)$: $\partial u/\partial t + Lu = f$ in $\Omega \times (0,T)$. It is a multi-step method in order to achieve higher-order accuracy and stability at the expense of integration speed. Solving a PDE with the implicit Euler method. Comparison of Euler and Runge-Kutta 2nd Order Methods, Figure 4. An initial value problem is a first-order ordinary differential equation. This code computes a steady flow over a bump with the Roe flux by two solution methods: an explicit 2-stage Runge-Kutta scheme and an implicit (defect correction) method with the exact Jacobian for a 1st-order scheme, on irregular triangular grids. Both of the Euler methods can be seen as first-order Taylor expansions of x. In the simpler cases, ordinary differential equations or ODEs, the forward Euler method and the backward Euler method are efficient methods to yield fairly accurate approximations of the actual solutions. Euler's method is an example of an explicit one-step formula; backward Euler is an implicit method. It has the disadvantage that an (in general nonlinear) equation must be solved at each step. For implicit methods, if you look at Euler's backward or implicit method, Crank-Nicolson, or Douglas-Rachford ADI, you can find ways to set up a system of equations to solve directly using MATLAB. Each technique will be taught with follow-up programming in MATLAB. This method is called the implicit Euler or backward Euler method. Euler's Method.
With Euler's method, this region is the set of all complex numbers $z = h\lambda$ for which $|1 + z| < 1$ or, equivalently, $|z - (-1)| < 1$. This is a circle of radius one in the complex plane, centered at the complex number $-1 + 0i$. Numerical Algorithm and Programming in Mathcad. In particular, the fully implicit FD scheme leads to a "tridiagonal" system of linear equations that can be solved efficiently by LU decomposition using the Thomas algorithm. A possible way to follow is to use the basic "while" loop available in MATLAB. We first implement Euler's integration method for one time-step as shown below and then extend it to multiple time-steps. Next, I also need to graph the solution using the backward Euler method, which looks similar but is not quite the same: $$y_n = y_{n+1} - h y'_{n+1}$$ This method is also called the implicit method because $y_{n+1}$ cannot be calculated explicitly by evaluating the right-hand side. This chapter also evaluates the differences in calculation time. Euler's methods for differential equations were the first methods to be discovered. Optimal step size, stiff problems. IMPLICIT EULER METHOD. I've googled and searched C++ forums for this and have been unable to find anything that can help me get started.
1 Two-dimensional heat equation with FD 1. The problems are enjoyable and interesting. Using the Finite Volume Discretization Method, we derive the equations required for an efficient implementation in Matlab. For two sets of initial values (p0,q0) we compute several. That project was approved and implemented in the 2001-2002 academic year. Other variants are the semi-implicit Euler method and the exponential Euler method. Implementation of boundary conditions in the matrix representation of the fully implicit method (Example 1). Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Notice, however, that if time were reversed, it would become explicit; in other words, backward Euler is implicit in forward time and explicit in reverse time. One advantage of explicit method over implicit method is the absence of linear system solution. Why are implicit Eulers different from explicit. I do not get the graph in my office but I get it in the lab. BVP functions Shooting method (Matlab 7): shoot. lternatively, more accurate estimates can be obtained by using higher order implicit methods. This formula, it involves--defines y n plus 1, but doesn't tell us how to compute it. Speciflcally, the method is deflned by the formula. The implicit integration - The implementation In this section I'll outline all of the parts required for implementing a full implicit Euler method for mass-spring systems. The method is based on the implicit midpoint method and the implicit Euler method. In fact, the Wolfram discussion of the Lotka-Volterra Equation actually defines Backward or Implicit Euler, suggesting that it is not an implemented Method:. It uses the mathematical pendulum as an example. m Euler's method for a system: eulersys. 
This particular problem requires the students to program forward Euler, backward Euler and an explicit 2-stage 2nd-order Runge-Kutta scheme for solving an ordinary differential equation (ODE) system by modifying a sample MATLAB code provided by the instructor, to compare and discuss the performance of the three different numerical methods. A linear multistep method is zero-stable for a certain differential equation on a given time interval if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than Kε for some value of K which does not depend on the step size h. An implicit method, by definition, contains the future value (the i+1 term) on both sides of the equation. John Butcher's tutorials: the Euler method is the simplest way of obtaining numerical solutions; attention later moved to implicit methods. In this report, I give some details for implementing the Finite Element Method (FEM) via MATLAB and Python with FEniCS. We integrate the ODE from $t_n$ to $t_{n+1}$: $Y(t_{n+1}) = Y(t_n) + \int_{t_n}^{t_{n+1}} f(t, Y(t))\,dt$. Explicit Runge-Kutta (ERK) methods (introduction of the method in the general case, notations in the general case, derivation of ERK of second order); Runge-Kutta method of fourth order. The symplectic Euler algorithm is semi-implicit. It considers $y_{n+1}$ as an unknown variable.
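The semi-implicit (symplectic) Euler scheme mentioned above updates the velocity first and then uses the new velocity to advance the position; a sketch for the mathematical pendulum (with illustrative constants) shows its characteristic bounded energy error:

```python
import math

def symplectic_euler(theta, omega, h, g_over_l=9.81):
    """Semi-implicit Euler for the pendulum theta'' = -(g/L) sin(theta):
    advance the velocity first, then use the NEW velocity for the angle."""
    omega = omega - h * g_over_l * math.sin(theta)
    theta = theta + h * omega
    return theta, omega

def energy(theta, omega, g_over_l=9.81):
    # (Scaled) total energy: kinetic + potential.
    return 0.5 * omega**2 - g_over_l * math.cos(theta)

theta, omega = 0.5, 0.0
e0 = energy(theta, omega)
for _ in range(10000):
    theta, omega = symplectic_euler(theta, omega, 0.01)
# The energy oscillates near its initial value instead of drifting,
# which is the hallmark of a symplectic integrator; plain explicit
# Euler would show a steady energy growth over the same interval.
print(e0, energy(theta, omega))
```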
Implicit Euler Method: given an initial mapping $\mathbf{f}^{(0)}$. Explicit and implicit methods in integrating differential equations. Numerical methods for parabolic equations: starting from t = 0, we can evaluate point values at grid points from the initial condition and thus obtain $U^0$. To get S2, we use Newton's method as the solver to deal with the implicit nature of the implicit Euler method. Backward Euler method. This does not mean that the Euler method is accurate, only that the method is very stable. Implicit Euler: the implicit Euler method is intended to illustrate methods for stiff equations. Lagrange's differential equation and Clairaut's differential equation are also discussed. Leonhard Euler was born in 1707 in Basel, Switzerland, and passed away in 1783 in Saint Petersburg, Russia. The backward Euler method is only first-order accurate. Replacing the Euler method by the semi-implicit method is explained in the next chapter. Apply explicit and implicit numerical methods and MATLAB functions to integrate single and multiple sets of initial value problems, and implicit methods will be used in place of the exact solution. Implicit methods result in a nonlinear equation to be solved for $y_{n+1}$, so iterative methods must be used.
Introduction: during this semester, you will become very familiar with ordinary differential equations, as the use of Newton's second law to analyze problems almost always produces second time derivatives of position vectors. MATLAB code for the second-order Runge-Kutta method (RK2) for two or more first-order equations. ode45 is MATLAB's general-purpose ODE solver. One method is to plot the solution. Be aware that this method is not the most efficient one from the computational point of view. [Plot: temperature θ (K) versus time t (sec); analytical solution compared with the Ralston, midpoint, Euler and Heun methods.] Examples for Runge-Kutta methods: we will solve the initial value problem $du/dx = -2u$. For complicated problems, often of very high dimension, they are even today important methods in practical use. On the other hand, backward Euler requires solving an implicit equation, so it is more expensive, but in general it has greater stability properties. Implicit Euler method for integration of ODEs (tags: numerical-methods, ode, newtons-method, numerical-stability): for those of you familiar with the method, it is known that one must solve the equation. Hi, I'm trying to write a function to solve ODEs using the backward Euler method, but after the first y value all of the next ones are the same, so I assume something is wrong with the loop where I use NewtonRoot, a root-finding function I wrote previously. (8) MacCormack Method (1969). Predictor step: $u_j^{\overline{n+1}} = u_j^n - c\,\frac{\Delta t}{\Delta x}\left(u_{j+1}^n - u_j^n\right)$. Stability.
With $\lambda = 20$ and a timestep of $h = 0.1$, this demonstrates the instability of the forward Euler method and the stability of the backward Euler and Crank-Nicolson methods. As we can see, $\theta = 0$ corresponds to the explicit Euler method and $\theta = 1$ to the implicit one. Write a MATLAB code for the Crank-Nicolson solution to the heat equation. The same algorithm was implemented in both FORTRAN 77 and MATLAB. Computational Methods for (Quantitative) Finance: this university course focused on numerical solutions for some quantitative finance problems. For two sets of initial values (p0, q0) we compute several trajectories. All these algorithmic components have been integrated into the code. For more details see the book by G. AA214B: Numerical Methods for Compressible Flows, Conservative Finite Difference Methods in One Dimension: like any proper numerical approximation, a proper finite difference approximation becomes exact in the limit $\Delta x \to 0$ and $\Delta t \to 0$; an approximate equation is said to be consistent if it equals the true equations in that limit. Backward Differentiation Formulae (BDF, or Gear methods): different from the above methods, BDF is a multi-step method. We now want to find approximate numerical solutions using Fourier spectral methods. Course Description. A-stable methods exist in these classes. The project was to make MATLAB the universal language for computation on campus.
We take the second starting value from the exact solution. Find the true values and errors also. A couple of questions/notes: the title includes "implicit Euler-Method", but this seems to be explicit Euler. Because the derivative is now evaluated at time $t_{n+1}$ instead of $t_n$, the backward Euler method is implicit. The method of lines (MOL) is a general procedure for the solution of time-dependent partial differential equations (PDEs). This method is based on the implicit Euler method. We can solve only a small collection of special types of differential equations. And not only is this a good way of approximating what the solution to this or any differential equation is, but for this differential equation in particular you can actually even use this to find e with more and more precision. Modified Euler method. As an application of this general theory we show that an implicit variant of Euler-Maruyama converges if the diffusion coefficient is globally Lipschitz, but the drift coefficient satisfies only a one-sided Lipschitz condition; this is achieved by showing that the implicit method has bounded moments and may be viewed as an Euler-Maruyama method.
Trefethen [ ] points to these applications where stiffness comes with the problem. Motivation for implicit methods: stiff ODEs. Stiff ODE example: $y' = -1000y$; clearly an analytical solution to this is $y = e^{-1000t}$. It means this term will drop to zero and become insignificant very quickly. As MATLAB programs, they would run more quickly if they were compiled using the MATLAB compiler and then run within MATLAB. The Web page also contains MATLAB m-files that illustrate how to implement finite difference methods, and that may serve as a starting point for further study. Related articles and code: Modified Euler's Method; program to estimate the differential value of the function using the Euler method. Course Objectives. If we plan to use backward Euler to solve our stiff ODE, we need to address the method of solution of the implicit equation that arises. I would like to see step by step how the method is employed. (Homework) The modified equation and amplification factor are the same as for the original Lax-Wendroff method.
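The stiff example $y' = -1000y$ makes the case for implicit methods concrete: forward Euler is stable here only for $h < 2/1000$, while backward Euler damps the solution for any positive step. A small sketch, with the step size chosen deliberately too large for forward Euler:

```python
# Stiff test problem y' = -1000*y, y(0) = 1, exact solution e^(-1000*t).
def forward_step(y, h):  return y * (1 - 1000 * h)
def backward_step(y, h): return y / (1 + 1000 * h)

h = 0.01                      # ten times larger than forward Euler allows
yf = yb = 1.0
for _ in range(100):          # integrate to t = 1
    yf = forward_step(yf, h)
    yb = backward_step(yb, h)
print(abs(yf))   # astronomically large: |1 - 10| = 9 per step, so 9**100
print(abs(yb))   # astronomically small: backward Euler correctly decays to 0
```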
2019-11-14 05:48:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6277594566345215, "perplexity": 691.2436447409475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668004.65/warc/CC-MAIN-20191114053752-20191114081752-00545.warc.gz"}
http://www.federpetroliitalia.org/1t24gyq8/0ac045-holmium-protons-neutrons-electrons
Holmium is a chemical element with the symbol Ho and atomic number 67: a neutral holmium atom has 67 protons and 67 electrons, and its stable isotope, holmium-165, has 98 neutrons. The isotope holmium-166 is being tested as a diagnostic and treatment tool for liver tumors. (The two-letter symbol Ho also makes holmium a favorite for periodic-table puns.)

Protons and neutrons are packed together in the nucleus, while electrons are distributed in the space around it. Neutrons have roughly the same mass as protons but carry no charge, so they are electrically neutral. Atomic masses are expressed in atomic mass units ($$\text{amu}$$), one amu being defined as one-twelfth of the mass of a carbon-12 atom.

For any isotope, the number of neutrons is the mass number minus the atomic number. Argon, for example, has the symbol Ar and atomic number 18, so its most common isotope, argon-40, has 40 − 18 = 22 neutrons; those neutrons sit in the nucleus and carry zero charge. Likewise, every boron atom has 5 protons (that is what defines its atomic number of 5), and the common isotope boron-11 has 6 neutrons and, in the neutral atom, 5 electrons.

Some other examples: helium has 2 protons, 2 neutrons, and 2 electrons (it is the lightest noble gas, and a helium nucleus stripped of its electrons is called an alpha particle); oxygen has atomic number 8, so 8 protons and 8 electrons; yttrium has atomic number 39; technetium has atomic number 43 and is the lightest element with no stable isotope; praseodymium has atomic number 59; and einsteinium has atomic number 99. Astatine is the rarest naturally occurring element in the Earth's crust. Arsenic is a metalloid, occasionally found in native form as elemental crystals. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry and cupronickel used to make marine hardware and coins.
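The mass-number arithmetic used above can be written as a one-line helper. This is only an illustration; the function name is my own, not from any particular library.

```python
def neutron_count(mass_number, atomic_number):
    """Neutrons = mass number (protons + neutrons) minus atomic number (protons)."""
    return mass_number - atomic_number

# Holmium-165: 165 - 67 = 98 neutrons.
# Argon-40:     40 - 18 = 22 neutrons.
```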
https://pylessons.com/Tensorflow-object-detection-csgo-aim-bot/
TensorFlow Object Detection CS:GO aim bot

Posted November 29, 2018 by Rokas Balsys

##### TensorFlow CSGO aim bot

Welcome to part 7 of our TensorFlow Object Detection API tutorial series. In this part we change our code so that it finds the center of the rectangles around our enemies, moves the mouse to that center, and shoots. We are working with the same files as in the 6th tutorial.

To achieve the goals of this tutorial we need to add several lines to the code. First we import the pyautogui library:

```python
import pyautogui
```

This library will be used to move the mouse in game. Some games may not allow a script to move the mouse; in that case you will need to start the Python script with administrator rights, as I do for CS:GO in my YouTube video tutorial.

Next we change the line defining the monitor size to the following. We will use the window width and height in several places to calculate the right coordinates for the game, so to avoid mistakes and repeated magic numbers we define the window size once:

```python
width = 800
height = 640
monitor = {'top': 80, 'left': 0, 'width': width, 'height': height}
```

Before moving to the main while loop we define a new function, which we'll use to aim at and shoot enemies. As you can see below, y is calculated differently from x: in my YouTube tutorial you can see that when y is calculated the same way as x, we shoot above the head. So we compensate by dividing the window height by 9 and adding that to the standard y coordinate:

```python
def Shoot(mid_x, mid_y):
    x = int(mid_x*width)
    y = int(mid_y*height + height/9)
    pyautogui.moveTo(x, y)
    pyautogui.click()
```

Next we improve the code inside the main while loop with the following for loop. First we initialize the array_ch array, where we will place all our detected ch objects.
Then we go through the boxes[0] array, and when we find one of our classes we check its detection score. In our case, for example, classes[0][i] == 2 corresponds to ch, and if scores[0][i] >= 0.5, i.e. the confidence is at least 50 percent, we assume that we detected the object. In that case we read the numbers in the boxes array, where:

- boxes[0][i][0] – upper y coordinate of the box
- boxes[0][i][1] – left x coordinate of the box
- boxes[0][i][2] – lower y coordinate of the box
- boxes[0][i][3] – right x coordinate of the box

Adding the two coordinates of the same axis and dividing by two gives the center of that axis, so we can calculate the center of the detected rectangle. The last line draws a dot at that center:

```python
array_ch = []
for i,b in enumerate(boxes[0]):
    if classes[0][i] == 2: # ch
        if scores[0][i] >= 0.5:
            mid_x = (boxes[0][i][1]+boxes[0][i][3])/2
            mid_y = (boxes[0][i][0]+boxes[0][i][2])/2
            array_ch.append([mid_x, mid_y])
            cv2.circle(image_np,(int(mid_x*width),int(mid_y*height)), 3, (0,0,255), -1)
```

These few lines of code handle only one object class; we do the same for all four (note that for the full-body classes c and t, mid_y is taken one sixth of the way down from the top of the box rather than at the center):

```python
array_ch, array_c, array_th, array_t = [], [], [], []
for i,b in enumerate(boxes[0]):
    if classes[0][i] == 2: # ch
        if scores[0][i] >= 0.5:
            mid_x = (boxes[0][i][1]+boxes[0][i][3])/2
            mid_y = (boxes[0][i][0]+boxes[0][i][2])/2
            array_ch.append([mid_x, mid_y])
            cv2.circle(image_np,(int(mid_x*width),int(mid_y*height)), 3, (0,0,255), -1)
    if classes[0][i] == 1: # c
        if scores[0][i] >= 0.5:
            mid_x = (boxes[0][i][1]+boxes[0][i][3])/2
            mid_y = boxes[0][i][0] + (boxes[0][i][2]-boxes[0][i][0])/6
            array_c.append([mid_x, mid_y])
            cv2.circle(image_np,(int(mid_x*width),int(mid_y*height)), 3, (50,150,255), -1)
    if classes[0][i] == 4: # th
        if scores[0][i] >= 0.5:
            mid_x = (boxes[0][i][1]+boxes[0][i][3])/2
            mid_y = (boxes[0][i][0]+boxes[0][i][2])/2
            array_th.append([mid_x, mid_y])
            cv2.circle(image_np,(int(mid_x*width),int(mid_y*height)), 3, (0,0,255), -1)
    if classes[0][i] == 3: # t
        if scores[0][i] >= 0.5:
            mid_x = (boxes[0][i][1]+boxes[0][i][3])/2
            mid_y = boxes[0][i][0] + (boxes[0][i][2]-boxes[0][i][0])/6
            array_t.append([mid_x, mid_y])
            cv2.circle(image_np,(int(mid_x*width),int(mid_y*height)), 3, (50,150,255), -1)
```

After this we add the shooting logic. With team = "t" we choose whom to shoot at; in this case we are trying to shoot terrorists. First we check whether we detected any terrorist heads; if at least one head was detected, we call the Shoot(mid_x, mid_y) function with its coordinates. If no heads were detected, we check whether we detected any terrorist bodies and, if so, call the same function. Otherwise we don't shoot:

```python
team = "t"
if team == "c":
    if len(array_ch) > 0:
        Shoot(array_ch[0][0], array_ch[0][1])
    if len(array_ch) == 0 and len(array_c) > 0:
        Shoot(array_c[0][0], array_c[0][1])
if team == "t":
    if len(array_th) > 0:
        Shoot(array_th[0][0], array_th[0][1])
    if len(array_th) == 0 and len(array_t) > 0:
        Shoot(array_t[0][0], array_t[0][1])
```

Note that if we would like to shoot counter-terrorists instead, we change "t" to "c" in the first line. This was only a short explanation of the code; the full code can be downloaded from the files above. In my YouTube video you can see how my CS:GO aim bot model works. For now I am really disappointed with the FPS, because no one can play at these frame rates... But I am glad that our bot can target enemies quite accurately and shoot them. So for the next tutorial I will think about what we could do to make it run faster.
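The rectangle-center arithmetic can be factored into a small pure function for testing outside the game loop. This is only a sketch: the name box_center and the y_offset parameter are my own, not from the tutorial code, but the math mirrors the mid_x/mid_y computation and the height/9 correction used in Shoot.

```python
def box_center(box, width, height, y_offset=0.0):
    """Convert a normalized detection box [y1, x1, y2, x2] (the layout of
    boxes[0][i]) into pixel coordinates of its center, optionally shifted
    down by y_offset, a fraction of the window height (e.g. 1/9)."""
    y1, x1, y2, x2 = box
    mid_x = (x1 + x2) / 2
    mid_y = (y1 + y2) / 2 + y_offset
    return int(mid_x * width), int(mid_y * height)
```

Keeping this logic out of the capture loop makes it easy to unit-test the aim point against known boxes before running anything in game.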
https://physics.stackexchange.com/questions/374310/vertex-factor-for-fracg4-a-nua-nu2-in-qed
# Vertex factor for $\frac{g}{4} (A_{\nu}A^{\nu})^2$ in QED

Suppose we had an interaction term $\frac{g}{4} (A_{\nu}A^{\nu})^2$ in QED. What would the vertex rule be? I believe it would have four vector indices, such as $g^{\mu\rho}g^{\sigma\nu}$, and that it would be symmetric in the four indices; but I do not know how to derive this exactly.

• Funny that this question showed up here precisely during the time when students were working on my take-home final exam that included this question. – Matt Reece Dec 16 '17 at 2:49

You can write the interaction as $$\frac{g}{4} A^\rho A^\nu A^\sigma A^\mu g_{\rho \nu} g_{\sigma \mu}$$ Suppose you have a 4-point function to compute with the following external polarization vectors $$\epsilon^{\mu_1}(p_1),\,\epsilon^{\mu_2}(p_2)\,,\epsilon^{\mu_3}(p_3)\,,\epsilon^{\mu_4}(p_4)$$ The rule is then $$i \frac{g}{4}\epsilon^{\mu_1}(p_1)\,\epsilon^{\mu_2}(p_2)\,\epsilon^{\mu_3}(p_3)\,\epsilon^{\mu_4}(p_4)g_{\rho \nu} g_{\sigma \mu}\left(\delta^{\mu_1}_\rho\delta^{\mu_2}_\nu\delta^{\mu_3}_\sigma\delta^{\mu_4}_\mu + \textit{perm} \right)$$ where perm means all possible permutations of $\mu_1,\mu_2,\mu_3,\mu_4$.

EDIT

This is how I compute the permutations. In the following I try to simplify the notation, and I'll set $$\delta^1_\mu=\delta^{\mu_1}_\mu\\ \delta^{1\,2}_{\sigma\,\mu} = \delta^1_\sigma\delta^2_\mu + \delta^1_\mu\delta^2_\sigma\\ g_{12}=g_{\mu_1\,\mu_2}$$ I fix one leg, do the permutations on the other legs, and then permute the result by permuting the fixed leg. For example, I choose to fix $\mu_1$. The result is $$g_{\rho\nu}g_{\sigma\mu} \delta^1_\rho\left[\delta^2_\nu \delta^{3\,4}_{\sigma\,\mu}+\delta^2_\sigma \delta^{3\,4}_{\nu\,\mu}+\delta^2_\mu \delta^{3\,4}_{\sigma\,\nu}\right]$$ then I have to do the permutations of $2,3,4$, which is equivalent to permuting $\nu,\sigma,\mu$.
The result is then \begin{align} &g_{\rho\nu}g_{\sigma\mu} \left[\delta^{1\,2}_{\rho\nu}\delta^{3\,4}_{\sigma\,\mu}+\delta^{1\,2}_{\rho\sigma}\delta^{3\,4}_{\nu\,\mu}+\delta^{1\,2}_{\rho\mu}\delta^{3\,4}_{\sigma\,\nu}+\delta^{1\,2}_{\nu\sigma}\delta^{3\,4}_{\rho\,\mu}+\delta^{1\,2}_{\mu\nu}\delta^{3\,4}_{\sigma\,\rho}+\delta^{1\,2}_{\sigma\mu}\delta^{3\,4}_{\rho\,\nu}\right]\\ &=8g_{12}g_{34} + g_{\rho\nu}g_{\sigma\mu} \left[\delta^{1\,2}_{\rho\sigma}\delta^{3\,4}_{\nu\,\mu}+\delta^{1\,2}_{\rho\mu}\delta^{3\,4}_{\sigma\,\nu}+\delta^{1\,2}_{\nu\sigma}\delta^{3\,4}_{\rho\,\mu}+\delta^{1\,2}_{\mu\nu}\delta^{3\,4}_{\sigma\,\rho}\right] \end{align} I'll just compute one term inside the brackets (in fact, the other three are equal) $$g_{\rho\nu}g_{\sigma\mu} \delta^{1\,2}_{\rho\sigma}\delta^{3\,4}_{\nu\,\mu} = 2(g_{13}g_{24}+g_{14}g_{23})$$ Then, I end up with $$\frac{ig}{4}8(g_{12}g_{34} + g_{14}g_{32}+g_{13}g_{42})$$

• Okay, given 24 permutations and a factor of $\frac14$ out front, would that yield a vertex rule of $6ig (g^{\rho\nu}g^{\sigma\mu} + g^{\rho\sigma} g^{\nu\mu} + g^{\rho\mu}g^{\sigma\nu})$? Or am I over counting? – user168146 Dec 14 '17 at 14:26
• I meant $2ig(g^{\rho\nu} g^{\sigma\mu} + g^{\rho\sigma} g^{\nu\mu} + g^{\rho\mu} g^{\sigma\nu})$. – user168146 Dec 14 '17 at 14:49
• @AgriculturalEnergy I edited my answer. By matching my convention with yours, I get the same factor of 2. – apt45 Dec 14 '17 at 15:19
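The combinatorics of the answer (24 permutations collapsing into three metric pairings, 8 copies each) can be checked by brute force. The script below is my own sanity check, not part of the answer: it assigns the four external indices to the four slots of $g_{\rho\nu} g_{\sigma\mu}$ in all $4!$ ways; the contraction then pairs whichever externals land in the $(\rho,\nu)$ slots and in the $(\sigma,\mu)$ slots.

```python
from itertools import permutations
from collections import Counter

# Each permutation places externals 1..4 into the slots (rho, nu, sigma, mu).
# Contracting with g_{rho nu} g_{sigma mu} pairs {rho-slot, nu-slot} into one
# metric factor and {sigma-slot, mu-slot} into the other; the result only
# depends on the unordered pairing of external indices.
counts = Counter()
for rho, nu, sigma, mu in permutations((1, 2, 3, 4)):
    pairing = frozenset({frozenset({rho, nu}), frozenset({sigma, mu})})
    counts[pairing] += 1

# Three distinct pairings (12)(34), (13)(24), (14)(23), each appearing 8 times,
# reproducing (ig/4) * 8 * (g12 g34 + g13 g24 + g14 g23) = 2ig (...).
```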
https://rpg.stackexchange.com/questions/160154/are-unusually-long-jumps-possible-by-raw/160169
# Are unusually long jumps possible by RAW? I am slightly confused about how far a PC can jump in combat. On page 182, the PHB defines the mechanics of the long jump: When you make a long jump, you cover a number of feet up to your Strength score if you move at least 10 feet on foot immediately before the jump. When you make a standing long jump, you can leap only half that distance. Either way, each foot you clear on the jump costs a foot of movement. The subsequent description of the high jump is essentially analogous with the Strength score replaced with the Strength modifier. However, it also features the following addition: In some circumstances, your DM might allow you to make a Strength (Athletics) check to jump higher than you normally can. Since this is explicitly spelled out for the high jump and no similar mechanic is mentioned in the context of the long jump, I'd be inclined to infer that being able to jump farther by making a successful Strength check is not intended by the game designers. However, on page 175, the PHB explicitly lists the following as an example of an Athletics check: You try to jump an unusually long distance Note though, that it is not specified whether this pertains to horizontal or vertical distances, so it could be that this is just a re-iteration of the above-mentioned rule about high jumps. Given the vocabulary used in the distinction between long (horizontal) and high (vertical) jumps, it seems reasonable to assume that this sentence is talking about a horizontal distance, though. Still, there is no explicit mention of exceeding your normal maximum length for a long jump. Unlike in this discussion, I do not care about whether or not it is possible to split up the jump over several rounds. For the sake of simplicity, let us only talk about crossing a horizontal gap that is longer than your Strength score but shorter than your Movement minus a 10 feet start. 
Let's use the (vague) nomenclature of the PHB and call a jump over such a distance an unusually long jump. My question then is: Is it, by RAW, indeed possible to perform an unusually long jump by passing a Strength (Athletics) check?

I have the following context in mind: when playing on a grid, the relevant jump lengths are usually multiples of 5 feet. If, for example, a PC has a Strength score of 8, that effectively means they can jump only 5 feet. In my opinion, letting them jump 10 feet instead should obviously not come for free, but it also shouldn't be too big a deal.

If someone could refer me to an official source that offers clarification on the matter, I'd be very happy. Maybe some rulebook even features a table of suitable DCs for given jump lengths. I know that this exists in DnD 3.5, so maybe there is hope for 5e as well.

## 1. Is it possible to exceed your normal maximum jump length by passing a Strength (Athletics) check?

This is a bit situational. Say a PC has a Strength score of only 5 and faces a 10 ft gap where failing the jump means death. By the long-jump rule alone, the PC is going to die. But that rule describes what the PC is normally capable of doing: a PC with a Strength score of 5 is perfectly capable of jumping 5 ft without exerting extra effort. The rules for using an ability check state (emphasis mine):

An ability check tests a character's or monster's innate talent and training in an effort to overcome a challenge. The DM calls for an ability check when a character or monster attempts an action (other than an attack) that has a chance of failure. When the outcome is uncertain, the dice determine the results.

In this situation, jumping beyond the PC's innate ability is possible, but uncertain. Therefore, a Strength ability check to jump twice the distance is required.
Alternatively, the DM could use a passive check instead, which uses 10 plus the relevant modifiers in place of a die roll; the catch is that this is normally used for abilities that are repeated on a regular basis. In this case, if the PC has had plenty of practice jumping long distances, they could use a passive check to clear the distance.

Ability checks and passive checks don't work the same way as the "Long Jump" rule, which calculates the total potential distance a person is capable of jumping; instead they work in a pass/fail manner to see whether or not the PC clears the entire obstacle. This could be translated back into a distance after the roll, if you wish (e.g. the DC to pass was 15 and the PC got a 14 overall, so they just barely failed), in which case you could say that they didn't clear the jump, falling short by a foot instead.

## 2. If so, is there a guideline for the DC of jumping a given number of feet farther than one's Strength score?

As far as I'm aware, there isn't a distance-to-difficulty ratio or table, but the basic rules do provide a table of typical difficulty classes. How difficult the jump would be has to be determined by the DM. In this case, I believe that a 10 ft long jump would likely be somewhere between easy and medium (a DC of 10 to 15). For example, a PC with a Strength score of 10 could do this naturally (by following the innate rules of a long jump with a 10 ft run-up). A PC with a Strength score of 5 has a -3 modifier; the jump is far more difficult for them, so they need a higher roll (a minimum of 13 on the die) to pass.

## 3. Would such an unusually long jump still be simply a part of one's Movement, or would it consume an action? Would it maybe cost extra feet of Movement?
As the rules state (as you provided in your question, emphasis mine):

When you make a long jump, you cover a number of feet up to your Strength score if you move at least 10 feet on foot immediately before the jump. When you make a standing long jump, you can leap only half that distance. Either way, each foot you clear on the jump costs a foot of movement.

In this situation, you are making a long jump, and this is part of your movement, however difficult it might be. The reason it counts against your movement is that, yes, you can move and attack on the same turn, but you can only move a specific distance on your turn: the distance you can normally cover in a 6-second time frame (e.g. 30 feet, without using the Dash action). So if the PC has already moved 10 feet, then needs a 10 ft run-up, and then must cover another 10 feet in the jump, that is as much movement as they are able to traverse on their turn.

Using this (very basic) example, moving into a square requires 5 ft of movement. So moving into the first square containing the obstacle costs 5 ft of movement, and moving into the second square costs another 5 ft. Passing this obstacle, which by definition of the grid is only 10 ft wide, technically requires 15 ft of movement to clear it completely. You could manage this in two ways.

In 5e, the rule of thumb with rounding is usually to round down. When applying this to your grid, however, it is understandable to round to the nearest 5 ft that represents a success: if they have passed the jump, they are placed in the square past the obstacle; if they have failed, they are placed on the square of the obstacle. Using a grid is a tool to help identify distances physically, in a relative or rounded manner.
You could alternatively place them on the same square as the obstacle, and simply mark them as "safe" (standing up) or "unsafe" (lying down) to indicate a pass or fail respectively, if you want to keep to the "rounding down" rule of thumb used throughout the rest of the 5e system.

## It's up to the DM.

Since there are rules for jumping, as well as a spell called jump which modifies that ability, the DM may very well resort to using the rules given for jumping on page 182 of the PHB. However, the DM may call for an ability check when an action has a chance of failure, or when the outcome is uncertain. I think the latter applies here. The section on Strength checks in the PHB reads...

A Strength check can model any attempt ... to force your body through a space, or to otherwise apply brute force to a situation. The Athletics skill reflects aptitude in certain kinds of Strength checks. (pg. 175)

It further gives the example of jumping an unusually long distance. It is completely within the DM's power to allow a check.

Is there a tactical DC guide for long jumping?

The most tactical specifications you will receive RAW are the rules given for jumping on page 182 of the PHB. However, there are guidelines for setting a DC. The DM can judge what the DC should be based on this table found on page 174 of the PHB:

| Task Difficulty | DC |
| --- | --- |
| Easy | 5 |
| Medium | 10 |
| Hard | 15 |
| Very hard | 20 |
| Nearly impossible | 25 |

Finally, will this Ability Check cost an action, or just movement?

• I know D&D isn't a reality simulator, but still, a Strength check doesn't make a lot of sense to me because in real life, long jump ability is mostly a function of speed. – mattdm Nov 29 '19 at 3:05
• It absolutely is, and that is reflected in the rules (run 10 ft before the jump), but also in real life, people who have strong legs can on average jump farther than those who don't. But at your table you can rule it however you'd like. And if you want, you can use the DCs in the table above as a guideline to help set the difficulty.
For every 5 feet of movement above 30, you could subtract 1 from the original DC for the jump. You could require Dex instead of Str. – John Carroll Nov 30 '19 at 5:25 # You can use Athletics to try and jump further than normal, but it still costs movement The Athletics skill says that you can: try to jump an unusually long distance. There is no table in feet but there is a general table. Be aware that a passive skill check is 10 + modifiers, so it is "easy" to jump your normal amount. What exactly constitutes a "medium" or "hard" jump is up to the DM. Making an ability check is, in general, an action. Movement is still consumed as per the normal jump rules. If you try and jump further than you have enough movement to do at once, then the situation is ill-defined. Two options would be to: • Have the player end their turn mid-air • Require a player to use Dash or some other method to increase their max movement.
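The arithmetic of the PHB rule discussed in these answers can be sketched in a few lines. The function names are mine and this is only an illustration of the rule, not official tooling; halves are rounded down, the usual 5e convention.

```python
def long_jump_feet(strength_score, running_start=True):
    """Normal long-jump distance per PHB p. 182: your Strength score in feet
    with a 10 ft running start, half that (rounded down) for a standing jump."""
    return strength_score if running_start else strength_score // 2

def grid_squares_cleared(distance_feet, square_feet=5):
    """Whole 5 ft grid squares cleared, rounding down as 5e usually does."""
    return distance_feet // square_feet
```

For the question's example of a Strength 8 PC: an 8 ft running jump clears only one 5 ft square, which is why jumping two squares would call for a check.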
https://puzzling.stackexchange.com/questions/85755/approximate-this-big-number-using-a-binomial/85757
# Approximate this big number using a binomial [closed]

Mr. Magico is a great believer in this number: $$2^{50}=1,125,899,906,842,624$$ He also likes to play cards, although he isn't fussy about the size of his deck, nor does he care how many cards he pulls. He wishes to find $$n,k$$ such that: $$\binom{n}{k}\approx 2^{50}$$ and wants $$n$$ as small as possible, but also a very small error margin. A pack of $$2^{50}$$ cards gives $$100\%$$ accuracy, but is a very large pack of cards, probably too large for even Mr. Magico to carry around in his pocket! With this in mind, we shall impose an upper limit of $$n\le500$$, although $$n\le100$$ would be better for Mr. Magico's posture!

What is Mr. Magico's ideal pack of cards, and how many cards should he pull?

For a start, $$\dbinom{78}{14}=1,023,729,916,348,425$$, a ratio of $$\sim0.909$$ to the target (roughly a 9% error).

• Clearly he should use a pack of $2^{50}$ cards and pull exactly one of them. More seriously, would you like to be a bit more precise about how you want the tradeoff between accuracy and feasibility to be made? And how do you feel about computer searches? – Gareth McCaughan Jul 2 '19 at 12:17
• Computer searches are fine, I've tried by hand and it's painful! – JMP Jul 2 '19 at 12:22
• Why is it "cIosed"? – Scratch---Cat Sep 23 '19 at 9:57

Choosing 16 cards from a deck of 67 gets to within about 0.2% of the desired answer. I think this is best possible with <= 100 cards.

Found with the help of a computer, but purely as an aid to calculation. My approach was to follow the "boundary" near the number wanted, increasing or decreasing $$k$$ and then adjusting $$n$$ to get as near as possible. I had to try about 30 values.

Out of curiosity, I also ran a more automated search for the larger bound of n=500 mentioned in the OP. For this, choosing 8 cards from 290 yields an error of about 0.03%. The automated search also confirmed that the answer above is best for a maximum of 100 cards.
• Checked all solutions for n<=500 in R, and your (290,8) is optimal (and naturally (290,282) is as well). – Thomas Markov Jul 2 '19 at 14:20
• Checking all smaller values of k shows that the next best ones are (2671,5), (12824,4), (189041,3), (47453133,2), and finally ($2^{50}$,1). – AxiomaticSystem Jul 2 '19 at 14:36

Generalizing my comment on Gareth's solution, we can arrange Pascal's triangle as a right triangular array and ignore the right half ($$n < 2k$$) to obtain something like this:

```
1
1
1 2
1 3
1 4 6
...
```

We then, for any $$N$$, notice that all columns are strictly increasing, with minimal values equal to the central binomial coefficients. This immediately produces an upper bound on $$k$$: the greatest value $$m$$ such that $$\binom{2m}{m} \leq N$$. We can then iterate down the remaining values of $$k$$, finding the values $$n$$ such that $$\binom{n}{k}$$ is closest to $$N$$ (these $$n$$ will also be increasing) and checking each one's relative error. This Python code does this fairly well, yielding the successive approximations $$\binom{53}{26}, \binom{54}{23}, \binom{67}{16}, \binom{290}{8}, \binom{12823}{4}, \binom{189040}{3}, \binom{47453133}{2}, \binom{2^{50}}{1}$$.

• (of course, I should probably convert real roots to integer solutions in a manner more rigorous than rounding, but puzzles are more about method ;)) – AxiomaticSystem Jul 2 '19 at 15:20

Gareth has found the optimal solutions, but here is an R script if anyone wants to mess around with the upper bound for n; just change the value of the variable "UpperBound".

```r
require(pracma)
UpperBound <- 500
n <- rep(1:UpperBound, each = UpperBound)
k <- rep(1:UpperBound, times = UpperBound)
data <- as.data.frame(cbind(n, k))
colnames(data) <- c("n", "k")
data <- data[data$k <= data$n, ]
for (i in 1:length(data$n)) {
  data$combin[i] <- nchoosek(data$n[i], data$k[i])
}
data$check <- 2^50
data$diff <- data$combin - data$check
data$diff <- abs(data$diff)
data[data$diff == min(data$diff), ]
```
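The headline approximations can be double-checked with exact integer binomials in Python; this check is mine, not from the thread.

```python
from math import comb

TARGET = 2 ** 50

def rel_error(n, k):
    """Relative error of C(n, k) as an approximation to 2^50."""
    return abs(comb(n, k) - TARGET) / TARGET

# C(67, 16) is within about 0.2% of 2^50 (best for n <= 100);
# C(290, 8) is within about 0.03% (best for n <= 500);
# the question's starting point C(78, 14) is off by roughly 9%.
```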
https://mail.python.org/pipermail/pypy-commit/2009-April/032428.html
arigo at codespeak.net
Fri Apr 17 18:55:51 CEST 2009

Author: arigo
Date: Fri Apr 17 18:55:51 2009
New Revision: 64295

Log:
Add this file, mostly extracted from the directories icooolps2009*.

==============================================================================
--- (empty file)
+++ pypy/extradoc/talk/bibtex.bib	Fri Apr 17 18:55:51 2009
@@ -0,0 +1,116 @@
+
+@Article{PyPyTracing,
+  author = {Carl Friedrich Bolz and Antonio Cuni and Armin Rigo and Maciej Fijalkowski},
+  title = {Tracing the Meta-Level: PyPy's Tracing JIT Compiler},
+  journal = {\emph{Submitted to} ICOOOLPS'09},
+  }
+
+@Article{antocuni_2009,
+  author = {Antonio Cuni and Davide Ancona and Armin Rigo},
+  title = {Faster than {C}\#: Efficient Implementation of Dynamic Languages on {.NET}},
+  journal = {\emph{Submitted to} ICOOOLPS'09},
+  }
+
+@phdthesis{carl_friedrich_bolz_automatic_2008,
+  type = {Master Thesis},
+  title = {Automatic {JIT} Compiler Generation with Runtime Partial Evaluation},
+  school = {{Heinrich-Heine-Universität} Düsseldorf},
+  author = {Carl Friedrich Bolz},
+  year = {2008}
+},
+
+@inproceedings{ancona_rpython:dls_2007,
+  title = {{RPython:} A Step towards Reconciling Dynamically and Statically Typed {OO} Languages},
+  isbn = {978-1-59593-868-8},
+  url = {http://portal.acm.org/citation.cfm?id=1297091},
+  doi = {10.1145/1297081.1297091},
+  abstract = {Although the C-based interpreter of Python is reasonably fast, implementations on the {CLI} or the {JVM} platforms offers some advantages in terms of robustness and interoperability.
Unfortunately, because the {CLI} and {JVM} are primarily designed to execute statically typed, object-oriented languages, most dynamic language implementations cannot use the native bytecodes for common operations like method calls and exception handling; as a result, they are not able to take full advantage of the power offered by the {CLI} and {JVM.}},
+  booktitle = {Proceedings of the 2007 Symposium on Dynamic Languages},
+  publisher = {{ACM}},
+  author = {Davide Ancona and Massimo Ancona and Antonio Cuni and Nicholas D. Matsakis},
+  year = {2007},
+  pages = {53--64}
+},
+
+@techreport{armin_rigo_jit_2007,
+  title = {{JIT} Compiler Architecture},
+  url = {http://codespeak.net/pypy/dist/pypy/doc/index-report.html},
+  abstract = {{PyPy's} translation tool-chain -- from the interpreter written in {RPython} to generated {VMs} for low-level platforms -- is now able to extend those {VMs} with an automatically generated dynamic compiler, derived from the interpreter. This is achieved by a pragmatic application of partial evaluation techniques guided by a few hints added to the source of the interpreter. Crucial for the effectiveness of dynamic compilation is the use of run-time information to improve compilation results: in our approach, a novel powerful primitive called ``promotion'' that ``promotes'' run-time values to compile-time is used to that effect.
In this report, we describe it along with other novel techniques that allow the approach to scale to something as large as {PyPyâ~@~Ys} Python interpreter.}, + number = {D08.2}, + institution = {{PyPy}}, + author = {Armin Rigo and Samuele Pedroni}, + month = may, + year = {2007} +}, + + at inproceedings{camillo_bruni_pygirl:_2009, + title = {{PyGirl:} Generating {Whole-System} {VMs} from {High-Level} Prototypes using {PyPy}}, + booktitle = {Tools, accepted for publication}, + author = {Camillo Bruni and Toon Verwaest}, + year = {2009}, +}, + + at inproceedings{RiBo07_223, + author = {Armin Rigo and Carl Friedrich Bolz}, + title = {{How to not write Virtual Machines for Dynamic Languages +}}, + booktitle = {Proceeding of Dyla 2007}, + abstract = {Typical modern dynamic languages have a growing number of + implementations. We explore the reasons for this situation, + and the limitations it imposes on open source or academic + communities that lack the resources to fine-tune and + maintain them all. It is sometimes proposed that + implementing dynamic languages on top of a standardized + general-purpose ob ject-oriented virtual machine (like Java + or .NET) would help reduce this burden. We propose a + complementary alternative to writing custom virtual machine + (VMs) by hand, validated by the PyPy pro ject: flexibly + generating VMs from a high-level specification, inserting + features and low-level details automatically – including + good just-in-time compilers tuned to the dynamic language at + hand. We believe this to be ultimately a better investment + of efforts than the development of more and more advanced + general-purpose object oriented VMs. 
In this paper we + compare these two approaches in detail.}, +pages = {--}, + year = {2007}, +} + + at inproceedings{rigo_representation-based_2004, + title = {Representation-Based Just-in-Time Specialization and the Psyco Prototype for Python}, + isbn = {1-58113-835-0}, + url = {http://portal.acm.org/citation.cfm?id=1014010}, + doi = {10.1145/1014007.1014010}, + abstract = {A powerful application of specialization is to remove interpretative overhead: a language can be implemented with an interpreter, whose performance is then improved by specializing it for a given program source. This approach is only moderately successful with very high level languages, where the operation of each single step can be highly dependent on run-time data and context. In the present paper, the Psyco prototype for the Python language is presented. It introduces two novel techniques. The first is just-in-time specialization, or specialization by need, which introduces the "unlifting" ability for a value to be promoted from run-time to compile-time during specialization -- the inverse of the lift operator of partial evaluation. Its presence gives an unusual and powerful perspective on the specialization process. The second technique is representations, a theory of data-oriented specialization generalizing the traditional specialization domains (i.e. the compile-time/run-time dichotomy).}, + booktitle = {Proceedings of the 2004 {ACM} {SIGPLAN} Symposium on Partial Evaluation and Semantics-Based Program Manipulation}, + publisher = {{ACM}}, + author = {Armin Rigo}, + year = {2004}, + pages = {15--26} +}, + + at inbook{bolz_back_2008, + title = {Back to the Future in One Week â~@~T Implementing a Smalltalk {VM} in {PyPy}}, + url = {http://dx.doi.org/10.1007/978-3-540-89275-5_7}, + abstract = {We report on our experiences with the Spy project, including implementation details and benchmark results. Spy is a re-implementation of the Squeak (i.e. Smalltalk-80) {VM} using the {PyPy} toolchain. 
The {PyPy} project allows code written in {RPython,} a subset of Python, to be translated +to a multitude of different backends and architectures. During the translation, many aspects of the implementation can be +independently tuned, such as the garbage collection algorithm or threading implementation. In this way, a whole host of interpreters +can be derived from one abstract interpreter definition. Spy aims to bring these benefits to Squeak, allowing for greater portability and, eventually, improved performance. The current +Spy codebase is able to run a small set of benchmarks that demonstrate performance superior to many similar Smalltalk {VMs,} but +which still run slower than in Squeak itself. Spy was built from scratch over the course of a week during a joint {Squeak-PyPy} Sprint in Bern last autumn. +}, + booktitle = {{Self-Sustaining} Systems}, + author = {Carl Friedrich Bolz and Adrian Kuhn and Adrian Lienhard and Nicholas Matsakis and Oscar Nierstrasz and Lukas Renggli and Armin Rigo and Toon Verwaest}, + year = {2008}, + pages = {123--139} +}, + + at techreport{PyPyJIT09, + title = {Automatic generation of {JIT} compilers for dynamic + languages in .{NET}}, + institution = {{DISI}, University of Genova and Institut f\"ur Informatik, {Heinrich-Heine-Universit\"at D\"usseldorf}}, + author = {Davide Ancona and Carl Friedrich Bolz and Antonio Cuni and Armin Rigo}, + year = {2008}, +},
https://socratic.org/questions/find-the-interval-radius-of-convergence-for-the-power-series-in-1a
# Find the interval & radius of convergence for the power series in 1b?

May 23, 2018

Interval: $1 \le x < 9$; radius: $4$

#### Explanation:

Use the ratio test. [Note: for the series ${\sum}_{n = 1}^{\infty} {a}_{n}$, if ${\lim}_{n \rightarrow \infty} \left\mid {a}_{n + 1} / {a}_{n} \right\mid$

- is $< 1$, the series converges
- is $= 1$, the test is inconclusive
- is $> 1$, the series diverges]

Applying the ratio test to ${\sum}_{n = 1}^{\infty} {\left(x - 5\right)}^{n} / \left(n {4}^{n}\right)$, evaluating

$${\lim}_{n \rightarrow \infty} \left\mid \frac{{\left(x - 5\right)}^{n + 1} / \left(\left(n + 1\right) {4}^{n + 1}\right)}{{\left(x - 5\right)}^{n} / \left(n {4}^{n}\right)} \right\mid$$

will show which $x$-values make the series converge or diverge. Simplifying the limit:

$$= {\lim}_{n \rightarrow \infty} \left\mid \frac{n \left(x - 5\right)}{4 \left(n + 1\right)} \right\mid = {\lim}_{n \rightarrow \infty} \left\mid \frac{n \left(x - 5\right)}{4 n + 4} \right\mid$$

[Note: here you can divide both the numerator and the denominator by $n$, because dividing by $\frac{n}{n} = 1$ leaves the value inside the absolute value signs unchanged.]

$$= {\lim}_{n \rightarrow \infty} \left\mid \frac{\frac{n \left(x - 5\right)}{n}}{\frac{4 n}{n} + \frac{4}{n}} \right\mid = \left\mid \frac{x - 5}{4 + 0} \right\mid = \left\mid \frac{x - 5}{4} \right\mid$$

Back to the ratio test: the series diverges wherever $\left\mid \frac{x - 5}{4} \right\mid > 1$, so it can only converge if $\left\mid \frac{x - 5}{4} \right\mid < 1$ or $\left\mid \frac{x - 5}{4} \right\mid = 1$. In the latter case the test is inconclusive, so those points must be checked individually.

Case 1: $\left\mid \frac{x - 5}{4} \right\mid < 1$, i.e. $\left\mid x - 5 \right\mid < 4$, so $- 4 < x - 5 < 4$ and $1 < x < 9$ (the interval of convergence must include these values).

Case 2: $\left\mid \frac{x - 5}{4} \right\mid = 1$, i.e. $x - 5 = - 4$ or $x - 5 = 4$, so $x = 1$ or $x = 9$.

If $x = 1$, the series becomes

$${\sum}_{n = 1}^{\infty} \frac{{\left(1 - 5\right)}^{n}}{n {4}^{n}} = {\sum}_{n = 1}^{\infty} \frac{{\left(- 4\right)}^{n}}{n {4}^{n}} = {\sum}_{n = 1}^{\infty} \frac{{\left(- 1\right)}^{n}}{n}$$

This is the alternating harmonic series, which converges by the alternating series test.

If $x = 9$, the series becomes

$${\sum}_{n = 1}^{\infty} \frac{{\left(9 - 5\right)}^{n}}{n {4}^{n}} = {\sum}_{n = 1}^{\infty} \frac{{4}^{n}}{n {4}^{n}} = {\sum}_{n = 1}^{\infty} \frac{1}{n}$$

This is the harmonic series, which diverges. So $x = 1$ is included in the interval but $x = 9$ is not:

$$1 \le x < 9$$

The radius of convergence is half the difference between the upper and lower ends of the interval: $\frac{9 - 1}{2} = 4$.
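The ratio-test limit and the two endpoint checks above can be cross-checked mechanically. A minimal SymPy sketch (my own addition, not part of the original answer; the endpoint series are substituted in their simplified forms, and `Sum.is_convergent` is SymPy's built-in convergence test):

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
x = sp.symbols('x', real=True)

# General term of the power series  sum_{n>=1} (x - 5)^n / (n 4^n)
a = (x - 5)**n / (n * 4**n)

# Ratio-test limit: a_{n+1}/a_n  ->  (x - 5)/4  as n -> oo
ratio = sp.simplify(a.subs(n, n + 1) / a)
L = sp.limit(ratio, n, sp.oo)
assert sp.simplify(L - (x - 5) / 4) == 0   # so the series converges for |x - 5| < 4

# Endpoints: at x = 1 the terms reduce to (-1)^n / n (alternating harmonic);
# at x = 9 they reduce to 1/n (harmonic).
left = sp.Sum((-1)**n / n, (n, 1, sp.oo)).is_convergent()
right = sp.Sum(1 / n, (n, 1, sp.oo)).is_convergent()
print(left, right)  # True False
```

This reproduces the conclusion above: the left endpoint converges, the right one does not, giving $1 \le x < 9$.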
https://indico.cern.ch/event/304944/contributions/1672201/
# 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

13-17 April 2015, OIST (Asia/Tokyo timezone)

## Updating the LISE$\textsuperscript{++}$ software and future upgrade plans

Not scheduled, 15m

#### OIST

1919-1 Tancha, Onna-son, Kunigami-gun, Okinawa, Japan 904-0495

Poster presentation, Track2: Offline software

### Speaker

Michelle Kuchera (National Superconducting Cyclotron Laboratory, Michigan State University)

### Description

Production of new isotopes is one of the opportunities at the intensity frontier of nuclear physics. The associated science ranges from tests of the Standard Model to exploration of the origin and evolution of the chemical elements in the universe. Leading facilities in this effort are RIBF at RIKEN, TRIUMF in Canada, and ISOLDE at CERN. New large-scale facilities under development at the nuclear intensity frontier include FAIR in Europe, FRIB in the United States, and others in countries including China, France, Korea, and Italy. This talk describes the capabilities and future upgrade plans of the isotope production and simulation software used at many of these facilities, namely LISE$\textsuperscript{++}$ [1]. As an indication of its wide-scale use, the LISE$\textsuperscript{++}$ website had approximately 3000 unique visitors in 2013, with Japan, the USA, Germany, France, and China as the top five countries by visitors. LISE$\textsuperscript{++}$ is software used to predict the intensity and purity of rare isotope beams produced in-flight by magnetic and electric separators. Its primary use at most facilities is to predict and identify the composition of radioactive nuclear beams [1]. The intensity and purity of a desired beam can be predicted, along with the separator magnet settings.

Included in the LISE$\textsuperscript{++}$ package are models of isotope production mechanisms, ion-optical transport through magnetic and electric systems, and ion interactions in matter. The suite includes a full set of utilities for the simulation of experiments. The talk will highlight the process and methods of updating the software while retaining the computational integrity of the code. To accommodate the diversity of our users, we are extending the software from Windows-only to a cross-platform application. In addition, the code base will be migrated from the Borland C++ dialect to standard C++11. The calculations of beam transport and isotope production are becoming more computationally intense with the new large-scale facilities. For example, the 90 m long FRIB separator will have around fifty magnetic elements and ten points of beam interaction with matter. In order to perform the calculations in acceptable time, numerical optimization and parallel methods are applied. Computational improvements as well as the process of updating this large code will be discussed.

[1] LISE++: Radioactive beam production with in-flight separators. O.B. Tarasov, D. Bazin

### Primary author

Michelle Kuchera (National Superconducting Cyclotron Laboratory, Michigan State University)

### Co-authors

Bradley Sherrill (National Superconducting Cyclotron Laboratory, Michigan State University)
Daniel Bazin (National Superconducting Cyclotron Laboratory)
Oleg Tarasov (National Superconducting Cyclotron Laboratory)

### Presentation Materials

Kuchera_CHEP2015_FINAL.pdf
Kuchera_CHEP2015_FINAL.pptx
http://physics.stackexchange.com/questions/44617/inner-product-of-particle-anti-particle-spinor-components
# Inner product of particle-anti-particle spinor components

Suppose I have four-component spinors $\Psi$ and $\bar \Psi$ satisfying the Dirac equation, with

$$\Psi(\vec x) = \int \frac{\textrm{d}^3 p}{(2\pi)^3} \frac{1}{\sqrt{2 E_{\vec p}}} \sum_{s = \pm \frac{1}{2}} \left[ a^s_p u^s_p e^{i \vec p \cdot \vec x} + \tilde b^s_p v^s_p e^{-i \vec p \cdot \vec x} \right]$$

$$\bar \Psi(\vec x) = \int \frac{\textrm{d}^3 p}{(2\pi)^3} \frac{1}{\sqrt{2 E_{\vec p}}} \sum_{s = \pm \frac{1}{2}} \left[ \tilde a^s_p \tilde u^s_p e^{-i \vec p \cdot \vec x} + b^s_p \tilde v^s_p e^{i \vec p \cdot \vec x} \right] \gamma^0$$

with these definitions:

$$\tilde a \equiv a^\dagger \quad ; \quad \gamma^0 = \begin{pmatrix} 0 & 1_2 \\ 1_2 & 0 \end{pmatrix} \quad ; \quad u^s_p = \begin{pmatrix} \sqrt{p \cdot \sigma} \xi^s \\ \sqrt{p \cdot \bar \sigma} \xi^s \end{pmatrix} \quad ; \quad v^s_p = \begin{pmatrix} \sqrt{p \cdot \sigma} \eta^s \\ -\sqrt{p \cdot \bar \sigma} \eta^s \end{pmatrix}$$

$$\sigma = (1_2, \vec \sigma) \quad ; \quad \bar \sigma = (1_2 , - \vec \sigma) \quad ; \quad p = (p_0 , \vec p)$$

where $\vec \sigma$ is the usual vector of Pauli matrices and $\xi, \eta$ are two-component spinors. The lecturer then goes on to choose an appropriate basis for $\xi$,

$$\xi^1 = \begin{pmatrix}1 \\ 0\end{pmatrix} \quad ; \quad \xi^2 = \begin{pmatrix} 0 \\ 1\end{pmatrix}$$

and similarly for $\eta$, such that $\tilde \xi^r \xi^s = \delta^{rs}$ and $\tilde \eta^r \eta^s = \delta^{rs}$. This appears sensible, and using it we can compute various inner products of $u$ and $v$. In the examples discussed in the notes, mixed terms, that is, those containing products of $\xi^r$ and $\eta^s$, never occur (they are multiplied by 0, hence irrelevant) or cancel out.
In an example assignment, however, we are meant to calculate $\bar \psi(-i \gamma^i \partial_i + m) \psi$, which leads me to terms such as

$$\sum_{s,t} 2 m \tilde a^t_p \tilde b^s_{-p} (p_i \sigma^i) \tilde \xi^t \eta^s$$

These also cancel out in the end, but in the meantime they raise the question of how $\tilde \xi^t \eta^s$ is actually defined and how it can be calculated. The identities gained from an appropriate choice of basis for $\eta$ and $\xi$ 'feel' wrong here, since, after all, $\xi$ comes from a particle spinor and $\eta$ comes from an anti-particle spinor.

I take it my question could be rephrased as whether $\xi$ and $\eta$ live in the same space or belong to two different spaces (which would make their inner product an even more thrilling exercise).

---

In a coordinate-free description, we get two 4-component spinors $u$ and $v$ as projections to orthogonal (and in particular different) eigenspaces of dimension 2. But the above chooses suitable bases and then considers the coefficient vectors $\xi$ and $\eta$, which are vectors in the same space $C^2$. Their physical meaning is not determined by this abstract space but by the way they enter in the 4-component spinors.

- Thank you very much. Can I hence conclude that if I choose a basis for $\xi$ as above and the same for $\eta$, $\eta^1 = (1,0)^T$, $\eta^2 = (0,1)^T$, the expression $\tilde \xi^r \eta^s$ is well-defined and equal to $\delta^{rs}$? – Claudius Nov 19 '12 at 18:06
- @Claudius: Yes. – Arnold Neumaier Nov 19 '12 at 18:09
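The conclusion that $\tilde \xi^r \eta^s = \delta^{rs}$ is well-defined, while mixed particle/anti-particle spinor products still drop out, can be checked numerically from the definitions in the question. The sketch below is my own illustration, not from the lecture: the sample momentum, $m = 1$, and the `herm_sqrt` helper are arbitrary choices, and the identities verified ($\bar u^r_p u^s_p = 2m\,\delta^{rs}$ and $\bar u^r_p v^s_p = 0$, the latter containing exactly the $\tilde \xi^r \eta^s$ products) follow from the conventions above.

```python
import numpy as np

# Pauli matrices; sigma = (1, vec sigma), sigma-bar = (1, -vec sigma)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

def herm_sqrt(M):
    """Square root of a Hermitian positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

m = 1.0
p3 = np.array([0.3, -0.4, 0.5])              # arbitrary sample 3-momentum
p0 = np.sqrt(m**2 + p3 @ p3)                 # on-shell energy

p_sigma = p0 * I2 - sum(c * s for c, s in zip(p3, sig))    # p . sigma
p_sigbar = p0 * I2 + sum(c * s for c, s in zip(p3, sig))   # p . sigma-bar

g0 = np.block([[np.zeros((2, 2)), I2],
               [I2, np.zeros((2, 2))]])      # gamma^0 in this basis

def u(xi):
    return np.concatenate([herm_sqrt(p_sigma) @ xi, herm_sqrt(p_sigbar) @ xi])

def v(eta):
    return np.concatenate([herm_sqrt(p_sigma) @ eta, -herm_sqrt(p_sigbar) @ eta])

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # same C^2 basis for xi and eta

for r in range(2):
    for s in range(2):
        ubar_u = u(basis[r]).conj() @ g0 @ u(basis[s])   # = 2 m delta_rs
        ubar_v = u(basis[r]).conj() @ g0 @ v(basis[s])   # mixed xi/eta term: vanishes
        assert abs(ubar_u - 2 * m * (r == s)) < 1e-10
        assert abs(ubar_v) < 1e-10
```

The mixed product vanishes because $\sqrt{p \cdot \bar\sigma}\sqrt{p \cdot \sigma}$ and $\sqrt{p \cdot \sigma}\sqrt{p \cdot \bar\sigma}$ both equal $m\,1_2$ and enter $\bar u v$ with opposite signs, regardless of which $\tilde \xi^r \eta^s$ multiplies them.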
https://solvedlib.com/n/mcalerne-serotnn-dreseniadto-tr-scnatc-antrnus-stbronmunce,19883453
Mcalerne serotnn Dreseniadto tr Scnatc Antrnus Stbronmunce antrcth (omnanaincrash *unxcotilily (njoocmlic Sonsueacencaet Olaccidcni-intohvcdychilcineehincMosusciGt MunKnstacai cscnousVarclmdan speclicuiazAvcracc Vuceh(X ) Pcrccnt Drcuricncc(V Question: Mcalerne serotnn Dreseniadto tr Scnatc Antrnus Stbronmunce antrcth (omnanaincrash *unxcotilily (njoocmlic Sonsueacencaet Olaccidcni-intohvcdychilcineehincMosusciGt MunKnstacai cscnous Varclmdan speclicuiaz Avcracc Vuceh(X ) Pcrccnt Drcuricncc(V I AcminntuI 40J Ib DonAic Iritumroate Reslar 3/0] Doric FccnotyAezWljt 3400 |L Dem_Ric compjc Forcipn CotancL 790J Ib Hneedoan equabinoijzas *quare} LIne_ Tnr In [ttls inlormaton JYcrape weleht and percent occurtence dolnathe follomir Urite dotti Turicvon undid J(m,b) = Tnaidecrice Aumn alsqubred eois UreA Henterenieuut cnls Cutouc uluoncan condilan Ue cxi-lence o( J Iwinenun sum of sQuured Etr8rs isfor VJlm,D) = Miete Vis Lhe gradient cpcrato Wseth Kretlucemean us0 10 Corjinvautscmaro Thal Minim;r Jlm,b) Scon WC Jh[ tn dciduj Onou mork Alen Il adiamansannntinenntal Wnict alelha duly pronded unothe Susteml on coutont part (bl [o Tira Mland D enjcriouaugtarm andb inun jpmtprovwed ano cnu IubIul Oelo y onrtur Sc Juol Similar Solved Questions Vc _ Vc_ pz2l_ve) Vc_V3 L [email protected] 3 2(vo - Ve)Ez pc Vc _ Vc_ pz 2l_ve) Vc_V3 L Pz Vo @z Vo 3 2(vo - Ve) Ez pc... CALCULATOR FULL SCREEN PRINTER VERSION BACK NEXT Brief Exercise 11-7 Sweet Company purchased a computer for... CALCULATOR FULL SCREEN PRINTER VERSION BACK NEXT Brief Exercise 11-7 Sweet Company purchased a computer for $8,480 on January 1, 2016. Straight-line depreciation is used, based on a 5 year life and a$1,060 salvage value. In 2018, the estimates are revised. Sweet now feels the computer will be used ... An engineer wants to design a circuit that has the following input/output voltage characteristics. The variable... An engineer wants to design a circuit that has the following input/output voltage characteristics. 
The variable input DC voltage source that the engineer uses has 0.5 kOhm internal resistance; in the laboratory various diodes (pn and Zener), resistors and various fixed DC voltages are available. The... Problem $5.61$ A thin glass rod of radius $R$ and length $L$ carries a uniform surface charge $\sigma .$ It is sct spinning about its axis, at an angular velocity $\omega$. Find the magnetic field at a distance $s \gg R$ from the center of the rod (Fig. 5.66). [Hint: treat it as a stack of magnetic dipoles.] Problem $5.61$ A thin glass rod of radius $R$ and length $L$ carries a uniform surface charge $\sigma .$ It is sct spinning about its axis, at an angular velocity $\omega$. Find the magnetic field at a distance $s \gg R$ from the center of the rod (Fig. 5.66). [Hint: treat it as a stack of magnetic ... Here are two employment adds from newspaper: Use them toDrtnt [4rjrdt renhieente Alihate Plont & @Ter: Memactnanr Snosolve the problem:Sleinare Zed241 Gln EteSSEaHyrse neededhor orextel care bality RM 521-sonc LN Stahc PaldHonurJun Watlonattcn mcoalbrnete_ Call Fernn-55545yWhat is the annual salary for the RN working 40 hours week? JzpuounstionRpomHere are two employment adds from newspaper: Use them toDrrr FetJedeoheeete enact Mut hatr hotr u @ntet DectAnenrn Santo an S1entarer 2edmi Ckn Here are two employment adds from newspaper: Use them to Drtnt [4rjrdt renhieente Alihate Plont & @Ter: Memactnanr Sno solve the problem: Sleinare Zed241 Gln EteSSEa Hyrse neededhor orextel care bality RM 521-sonc LN Stahc PaldHonurJun Watlonattcn mcoalbrnete_ Call Fernn-55545y What is the annu... Let X be the total medical expenses (in 1000s of dollars) incurred by particular individual during a given year: Although X is a discrete random variable, suppose its distribution is quite well approximated by a continuous distribution with pdf f(x) = k( 1 for x > 0 2.5(a) What is the value of k? 2.4(c) What is the expected value of total medical expenses? 
Round your answer to the nearest cent:)What is the standard deviation of total medical expenses? (Round your answer to the nearest cent:) Let X be the total medical expenses (in 1000s of dollars) incurred by particular individual during a given year: Although X is a discrete random variable, suppose its distribution is quite well approximated by a continuous distribution with pdf f(x) = k( 1 for x > 0 2.5 (a) What is the value of k... Rojas Corporation's comparative balance sheets are presented below. ROJAS CORPORATION Comparative Balance Sheets December 31 2019... Rojas Corporation's comparative balance sheets are presented below. ROJAS CORPORATION Comparative Balance Sheets December 31 2019 2020 Cash $14,600$10,300 23,900 Accounts receivable 21,600 Land 20,500 25,900 Buildings 70,200 70,200 Accumulated depreciation-buildings (14,500) (10,800) $112,400$... An animal breeder, attempting to cross a llama with an alpaca for finer wool, found that the hybrid offspring rarely lived more than a few weeks. This outcome probably resulted from:a. genetic drift.b. prezygotic reproductive isolation.c. postzygotic reproductive isolation.d. sympatric speciation.e. polyploidy. An animal breeder, attempting to cross a llama with an alpaca for finer wool, found that the hybrid offspring rarely lived more than a few weeks. This outcome probably resulted from: a. genetic drift. b. prezygotic reproductive isolation. c. postzygotic reproductive isolation. d. sympatric speciatio... Question 21ptsConsider the density values of an unknown metal below:Table 1: Density of an Unknown MetalTrialDensity (g/mL) 2.903.602.893.00Would any of the trials be considered an outlier? (answer yes or no)You willuploa] work at the end showing how you know:What is the correct mean of the data set?g/mL Question 2 1pts Consider the density values of an unknown metal below: Table 1: Density of an Unknown Metal Trial Density (g/mL) 2.90 3.60 2.89 3.00 Would any of the trials be considered an outlier? 
(answer yes or no) You will uploa] work at the end showing how you know: What is the correct mean of ... The heights of fully grown trees of a specific species are normally distributed, with a mean... The heights of fully grown trees of a specific species are normally distributed, with a mean of 72 5 feet and a standard imit theorem to find the mean and standand error of the sampling distribution. Than sketch a graph of the sampling distribution rhestardad error of Po sampingdantuon isoiD (Round ... Part A When titrated with a 0.1198 M solution of sodium hydroxide, a 58.00 mL solution... Part A When titrated with a 0.1198 M solution of sodium hydroxide, a 58.00 mL solution of an unknown polyprotic acid required 20.15 mL to reach the first equivalence point. Calculate the molar concentration of the unknown acid. O A¢ * R O ? Submit Request Answer Part B The titration curve was f... Determine whether the point lies on the curve. $r^{2}=\sin 3 \theta ; \quad\left[1,-\frac{5}{6} \pi\right]$. Determine whether the point lies on the curve. $r^{2}=\sin 3 \theta ; \quad\left[1,-\frac{5}{6} \pi\right]$.... Iron-deficiency anemia is the most common form of malnutrition in developing countries, affecting about 50% of... Iron-deficiency anemia is the most common form of malnutrition in developing countries, affecting about 50% of children and women and 25% of men. Iron pots for cooking foods had traditionally been used in many of these countries, but they have been largely replaced by aluminum pots, which are cheape... Distinguish the mechanisms by which uniporters and channels transport ions or molecules across the membrane. Distinguish the mechanisms by which uniporters and channels transport ions or molecules across the membrane.... 3) 15-Q13, that's chap 1 5 conceptual question 13 Which cyclical process represented by the two... 
3) 15-Q13, that's chap 1 5 conceptual question 13 Which cyclical process represented by the two closed loops, ABCFA and ABDEA, on the PV diagram in the figure below produces the greatest net work? Is that process also the one with the smallest work input required to return it to point A? Explai... In this 4LLSLIUIL MLl exulore tht HpOrtAlCL chlertul UIUMZALIUI housunuld LariIES In particular _ BUIL LCUHOIC [heuries suggest collective LaruaMing guch Monzation [atlers Iore lur traditiunalle HrEialLEd Eruupg [emale :Hu HlurIY wurkers Cunsider [udelsFuru LacoraeJAge . B,W _ AyeiFuwlticutHAueUz Au4D-HUTUS Uaufiwlere FumIIcumt [it > Wuil MCDle HAuC; Lhe hus bauel > 484, WAget UL we : HUTut; Jly Tlble luulalug (hil the husUacic UILUEI WUent autntnY MuDAR uppoge erice [iLlI Incurni al -nn In this 4LLSLIUIL MLl exulore tht HpOrtAlCL chlertul UIUMZALIUI housunuld LariIES In particular _ BUIL LCUHOIC [heuries suggest collective LaruaMing guch Monzation [atlers Iore lur traditiunalle HrEialLEd Eruupg [emale :Hu HlurIY wurkers Cunsider [udels Furu Lacorae JAge . B,W _ Ayei Fuwlticut HAue ... 1 1 8 E7 1 L Suecns] 0 5 E| 1 I Question Quenon Z0 0tze= 1 1 8 E7 1 L Suecns ] 0 5 E | 1 I Question Quenon Z0 0tze=... [1 point) 14) Which three of the following statements are true of gammaly particle emission? 13... [1 point) 14) Which three of the following statements are true of gammaly particle emission? 13 points a) The particle produced is a high-energy photon b) The daughter nucleus has an atomic number 4 less than that of the parent nucleus c) The particle produced is a bare helium nucleus with 2 protons... 
1) Briefly explain in no more than two sentences, how each of these factors will likely affect the retention time ofa GC chromatographic process: a) Decrease the length of the column b)Increase carrier gas velocity c)Linear temperature programming d) Setting the column temperature below the boiling point of the analyte e) Significant diffusion of analyte in carrier gas 1) Briefly explain in no more than two sentences, how each of these factors will likely affect the retention time ofa GC chromatographic process: a) Decrease the length of the column b)Increase carrier gas velocity c)Linear temperature programming d) Setting the column temperature below the boiling ... The management Bnckley Cordortion interested and the [ransportation COSI are estimated ollowsMsina simulatiorestimateDrorricDroqua The sellina Griceproducwvill545 DemProbabiln = distributionsduconasthe laborProcurgicnt Cost ($)FaO Cost (5)ProbabilityProbability Transportation Cost ($) ProbabilityConstructslnlaton modeestimane the average profit (in $) per unit ad the Jariancepforit Dermnt (Use0Od tnals Rovco *our Ans 42Decima places-)aycrgevanianceWnal9590 confidence Intenva (n$) arouno this av The management Bnckley Cordortion interested and the [ransportation COSI are estimated ollows Msina simulatior estimate Drorric Droqua The sellina Grice producwvill 545 Dem Probabiln = distributions duconas the labor Procurgicnt Cost ($) FaO Cost (5) Probability Probability Transportation Cost ($) P... Using Newfield et al's (2007) three-part nursing diagnosis format (risk of __, among related to write... Using Newfield et al's (2007) three-part nursing diagnosis format (risk of __, among related to write a nursing diagnosis for this county. Chapter 18: Community As Client: Assessment and Analysis Student Case Studies Alan T. is a public health nurse and a member of a committee assigned to assess... 
In Part No.2,of this experiment; the capacitor was fully charged up to 15V and then allowed discharge through the 4.7Mn resistor: Universal Charging /Discharging Equation calculate the theoretical voltage drop which should appear across the capacitor at- time 90 seconds (approximately time constants) (Show calculations )Compare the capacitor' theoretical discharge voltage - value; determined In Question 7 above, the actual discharge voltage which was measured at t 90 seconds. Is the measur In Part No.2,of this experiment; the capacitor was fully charged up to 15V and then allowed discharge through the 4.7Mn resistor: Universal Charging /Discharging Equation calculate the theoretical voltage drop which should appear across the capacitor at- time 90 seconds (approximately time constants... How much energy does it take to melt a piece of ice of mass 7.6g which is initially at a temperature of -90 %C?We will assume that the specific heat of ice is 2.00 J/g/C and the latent heat of ice is 300 Jg: Your answer should be in Joule with 3 significant digits_Answer: How much energy does it take to melt a piece of ice of mass 7.6g which is initially at a temperature of -90 %C? We will assume that the specific heat of ice is 2.00 J/g/C and the latent heat of ice is 300 Jg: Your answer should be in Joule with 3 significant digits_ Answer:... Suppose that an airline uses a seat width of 16.5 in. Assume men have hip breadths... Suppose that an airline uses a seat width of 16.5 in. Assume men have hip breadths that are normally distributed with a mean of 14.1 in and a standard deviation of 1 in. Complete parts (a) through (c) below. (b) If a plane is filled with 112 randomly selected men, find the probability that these men... Con to which of the following is the major product of the following eliminetion? 17) Which... con to which of the following is the major product of the following eliminetion? 
17) Which of the following is the correct mechanism for the elimination reaction of 2-bromo-2,3-dimethylbutane with methoxide? 18) Which of the following is most reactive in an E1 reaction? 19) ... dehydration? ...

A country has GDP per capita equal to $5,000. If the country's GDP per capita increases at a rate of 3.60% per year, then according to the rule of 70, how many years will it take for GDP per capita to equal $20,000? Round to the nearest whole number.

6. In the year 2000, the population of Florida was about 15.9 million people. In 1990 it was about 12.8 million. a. Find k rounded to the nearest tenth of a percent (a percent rounded to one decimal place). b. Find the population in 2015. c. Find the population in 2025. 7. In the year 2000, the population of a city in the U.S. was about 2.7 million people. In 1990 it was about 2.9 million. a. Find k rounded to the nearest tenth of a percent. b. Find the population ...

A sample of C6H6 released 27.5 kcal as the gas was converted into a liquid. What is the mass of the sample in grams? Use three significant figures. Heat of fusion: 10.0 kJ/mol. Heat of vaporization: 34.1 kJ/mol.

A corporation was organized in January 2021 with authorized capital of $10 par value common stock.
On February 1, 2021, shares were issued at par for cash. On March 1, 2021, the corporation's attorney accepted 6,400 shares of common stock in settlement for legal services with a fair value of $820...

Petty Cash Problem: On September 6, Trimen Industries decided to employ a petty cash fund for small expenses. A check of $150 was issued and cashed by the fund custodian. The $150 cash was given to the fund custodian, who was instructed to obtain documentation for all payments. ...

Effect of Monochromatic Light on Electron Flow: The extent to which an electron carrier is oxidized or reduced during photosynthetic electron transfer can sometimes be observed directly with a spectrophotometer. When chloroplasts are illuminated with 700 nm light, cytochrome $f$, plastocyanin, and plastoquinone are oxidized. When chloroplasts are illuminated with 680 nm light, however, these electron carriers are reduced. Explain.

Where does numbering start? And if it starts from the left, why not from the right? Numbering from either side places a substituent at position 3....
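Several of the numeric questions above can be checked with a few lines of arithmetic. The sketch below works the ice-melting, rule-of-70, and C6H6 condensation problems; the 4.184 J/cal conversion and the 78.11 g/mol molar mass of benzene are assumptions added here, since they are not stated in the (garbled) originals.

```python
# 1) Energy to warm 7.6 g of ice from -90 °C to 0 °C and then melt it.
m, c_ice, latent = 7.6, 2.00, 300.0      # g, J/g/°C, J/g (values from the problem)
E_melt = m * c_ice * 90 + m * latent     # heat to 0 °C, then melt
print(f"{E_melt:.3g} J")                 # -> 3.65e+03 J

# 2) Rule of 70: years for GDP per capita to grow from $5,000 to $20,000 at 3.60%/yr.
doubling_time = 70 / 3.60                # years per doubling
doublings = 2                            # 5,000 -> 10,000 -> 20,000
years = round(doubling_time * doublings)
print(years)                             # -> 39

# 3) Mass of a C6H6 sample that releases 27.5 kcal on condensing (gas -> liquid).
#    Only the heat of vaporization applies, since the phase change is gas -> liquid.
#    Assumed here: 4.184 J/cal and molar mass 78.11 g/mol for benzene.
q_kj = 27.5 * 4.184                      # kcal -> kJ
moles = q_kj / 34.1                      # heat of vaporization 34.1 kJ/mol
mass = moles * 78.11
print(f"{mass:.3g} g")                   # -> 264 g
```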
2023-03-29 15:37:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41536518931388855, "perplexity": 9478.737610521597}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00193.warc.gz"}
http://troaco.com/please-help-me-to-solve-4-and-5/
4. Where in the proof of Theorem 2.3.3 did we use the following? (a) The least upper bound axiom (b) The assumption that $\{x_n\}$ is bounded (c) The assumption that $\{x_n\}$ is increasing. 5. Prove that a bounded decreasing sequence $\{x_n\}$ converges by (a) using the result proved in the text for increasing sequences; (b) using the $\varepsilon$–$n_0$ definition of convergence and the set $A = \{x_n \mid n \in \mathbb{N}\}$.

Theorem 2.3.3. A bounded monotonic sequence converges.

Proof. We prove the theorem for an increasing sequence; the decreasing-sequence case is left for the exercises (Problem 5). Assume that $\{x_n\}$ is bounded and increasing. To show that $\{x_n\}$ is convergent, we use the $\varepsilon$–$n_0$ definition of convergence, and to use this definition, we need to know the limit of the sequence. We determine the limit using the set $A = \{x_n \mid n \in \mathbb{N}\}$, the set of points in $\mathbb{R}$ consisting of the terms of the sequence $\{x_n\}$. Because $\{x_n\}$ is bounded, there is an $M > 0$ such that $|x_n| < M$ for all $n$, and this $M$ is an upper bound for $A$. Hence $A$ is a bounded nonempty set of real numbers and so has a least upper bound. Let $a = \operatorname{lub} A$. This number $a$ is the limit of $\{x_n\}$, which we now show. Take any $\varepsilon > 0$. Since $a = \operatorname{lub} A$, there is an element of $A$ greater than $a - \varepsilon$; that is, there is an integer $n_0$ such that $x_{n_0} > a - \varepsilon$. But $\{x_n\}$ is increasing, so $x_{n_0} \le x_n$ for all $n > n_0$, and $a$ being an upper bound of $A$ implies that $x_n \le a$ for all $n$. Therefore, for $n > n_0$, $a - \varepsilon < x_n \le a$, so $|x_n - a| < \varepsilon$.

Remark. Note that, from the above proof, it follows immediately that, if $a$ is the limit of an increasing sequence $\{x_n\}$, then $x_n \le a$ for all $n \in \mathbb{N}$. (Similarly, the limit of a convergent decreasing sequence is a lower bound for the terms of the decreasing sequence.)
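Theorem 2.3.3 can be illustrated numerically with an example sequence (chosen here for illustration, not from the text): the bounded increasing sequence $x_n = 1 - 1/n$ has least upper bound 1, and for each $\varepsilon$ there is an index $n_0$ beyond which every term lies within $\varepsilon$ of 1, exactly as the proof asserts.

```python
# Illustration of Theorem 2.3.3 with the bounded increasing
# sequence x_n = 1 - 1/n, whose least upper bound (and limit) is 1.
xs = [1 - 1 / n for n in range(1, 10001)]

first_indices = []
for eps in (0.1, 0.01, 0.001):
    # First index n0 with x_{n0} > 1 - eps; monotonicity then keeps
    # every later term inside (1 - eps, 1].
    n0 = next(n for n, x in enumerate(xs, start=1) if x > 1 - eps)
    assert all(abs(x - 1) < eps for x in xs[n0 - 1:])
    first_indices.append(n0)

print(first_indices)   # -> [11, 101, 1001]
```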
2020-09-21 03:28:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432467222213745, "perplexity": 1246.8526781733053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00108.warc.gz"}
https://math.stackexchange.com/questions/691237/proof-sum-i-1pe-i-doteq-bigoplus-i-1p-e-i-leftrightarrow-forall-i-i?noredirect=1
# Proof: $\sum_{i=1}^pE_i \doteq \bigoplus_{i=1}^p E_i \leftrightarrow \forall i\in \{1,…,p\}(E_i \cap \sum_{t \in \{1,…,p\}-\{i\}}E_t=\{0\})$

I am using the following definition:

• Def.: let $E_1,...,E_p$ be $p$ vector subspaces of $V$. The sum $E_1+E_2+...+E_p$ is a direct sum, written $E_1+E_2+...+E_p \doteq E_1\oplus E_2 \oplus ... \oplus E_p$, if $$\forall e_1 \in E_1, e_2 \in E_2,...,e_p \in E_p\;(e_1+e_2+...+e_p=0_V \to e_1=e_2=...=e_p=0_V)$$ and I must prove:

• Prop.: let $E_1,...,E_p$ be $p$ vector subspaces of $V$; then: $$\sum_{i=1}^pE_i \doteq \bigoplus_{i=1}^p E_i \leftrightarrow \forall i\in \{1,...,p\}(E_i \cap \sum_{t \in \{1,...,p\}-\{i\}}E_t=\{0\})$$

• Proof: I thought (by induction): for $p=2$ I have $E_1,E_2$, two vector subspaces of $V$, and I must prove $$E_1+E_2 \doteq E_1 \oplus E_2 \leftrightarrow \forall i \in \{1,2\}(E_i \cap \sum_{t \in \{1,2\}-\{i\}}E_t=\{0\})$$ but $\forall i \in \{1,2\}(E_i \cap \sum_{t \in \{1,2\}-\{i\}}E_t=\{0\})$ means $E_1 \cap E_2=\{0\} \wedge E_2 \cap E_1=\{0\}$, and this is true by CLIC and because $\cap$ is commutative. For the inductive step ($p\to p+1$) I must prove $$[\sum_{i=1}^pE_i \doteq \bigoplus_{i=1}^p E_i \leftrightarrow \forall i\in \{1,...,p\}(E_i \cap \sum_{t \in \{1,...,p\}-\{i\}}E_t=\{0\})]\to [\sum_{i=1}^{p+1}E_i \doteq \bigoplus_{i=1}^{p+1} E_i \leftrightarrow \forall i\in \{1,...,p+1\}(E_i \cap \sum_{t \in \{1,...,p+1\}-\{i\}}E_t=\{0\})]$$ But I don't know how to continue; I am confused. How can I do this?

• Unfortunately, induction won't work here. – DiffeoR Feb 26 '14 at 14:30 • @DiffeoR, ah, ok... by contradiction? – mle Feb 26 '14 at 14:32 • Yes! That's the way to do it. All the best.
– DiffeoR Feb 26 '14 at 14:34

• Proof of $\bf\leftarrow$: Let $e_i\in E_i$ be such that $$e_1+e_2+\cdots+e_n=0.$$ Then for all $i$ we have $$e_i=-\sum_{j\ne i} e_j\in E_i \cap \sum_{j \in \{1,...,n\}-\{i\}}E_j=\{0\}.$$

• Proof of $\bf\rightarrow$ (by contraposition): If there is an $i$ such that $$E_i \cap \sum_{j \in \{1,...,n\}-\{i\}}E_j\ne\{0\},$$ let $x\ne0$ lie in this intersection; then $$x=e_i=\sum_{j\ne i} e_j,$$ so $$e_i-\sum_{j\ne i} e_j=0$$ with $$e_i\ne0,$$ and this means that $E_1+E_2+...+E_p$ is not a direct sum. QED.
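The equivalence can be sanity-checked numerically for concrete subspaces (an illustration constructed here, not part of the thread), representing each $E_i$ by a matrix whose columns span it and using ranks: a sum is direct iff the dimensions add up, and $E_i \cap F = \{0\}$ iff $\dim E_i + \dim F = \dim(E_i + F)$. The second example also shows why the proposition needs the full sum over $j \ne i$ rather than pairwise intersections.

```python
import numpy as np

def dim(span):
    # dimension of the column span of a matrix
    return np.linalg.matrix_rank(span)

def is_direct(subspaces):
    # the sum is direct iff dim(E1 + ... + Ep) = sum of dim(Ei)
    total = np.hstack(subspaces)
    return dim(total) == sum(dim(E) for E in subspaces)

def trivial_intersections(subspaces):
    # E_i ∩ sum_{j≠i} E_j = {0}  iff  dim(E_i) + dim(rest) = dim(E_i + rest)
    ok = True
    for i, E in enumerate(subspaces):
        rest = np.hstack([F for j, F in enumerate(subspaces) if j != i])
        ok &= dim(E) + dim(rest) == dim(np.hstack([E, rest]))
    return ok

# The three coordinate axes of R^3: a direct sum.
axes = [np.eye(3)[:, [i]] for i in range(3)]
print(is_direct(axes), trivial_intersections(axes))

# span(e1), span(e2), span(e1+e2) in R^2 intersect pairwise in {0},
# yet the sum is NOT direct — the criterion with the full sum detects this.
e1, e2 = np.eye(2)[:, [0]], np.eye(2)[:, [1]]
subs = [e1, e2, e1 + e2]
print(is_direct(subs), trivial_intersections(subs))
```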
2020-10-20 16:58:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.937474250793457, "perplexity": 401.8643663481926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874026.22/warc/CC-MAIN-20201020162922-20201020192922-00617.warc.gz"}
https://indico.math.cnrs.fr/event/3430/
People with a PLM-Mathrice account are invited to use it. Séminaire Algèbre ICJ

# Reflection length in affine Coxeter groups

## by Petra Schwer (Karlsruhe Institute of Technology)

Thursday 26 April 2018 (Europe/Paris) at bât. Braconnier (112), ICJ, UCBL - La Doua

Description: Affine Coxeter groups have a natural presentation as reflection groups on some affine space. Hence the set R of all its reflections, that is, all conjugates of its standard generators, is a natural (infinite) set of generators. Computing the reflection length of an element in an affine Coxeter group means determining the length of a minimal presentation of this element with respect to R. In joint work with Joel Brewster Lewis, Jon McCammond and T. Kyle Petersen we were able to provide a simple formula that computes the reflection length of any element in any affine Coxeter group. In this talk I would like to explain this formula, give its simple uniform proof and allude to the geometric intuition behind it.
2018-04-24 14:39:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5161927342414856, "perplexity": 1442.1816744293446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946721.87/warc/CC-MAIN-20180424135408-20180424155408-00285.warc.gz"}
https://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-4-decimals-4-5-dividing-decimal-numbers-4-5-exercises-page-306/60
# Chapter 4 - Decimals - 4.5 Dividing Decimal Numbers - 4.5 Exercises: 60 Each child needs to collect 182 box tops. #### Work Step by Step We know from Exercise #59 that a school would need to collect 100,000 box tops to earn the maximum amount. If a school has 550 children, then each child would need to collect $100000\div 550$ box tops. We divide: $~~~~~~~~~~~~~~181.8$ $550\overline{)100000.0}$ $~~~~~~~~~~\underline{550}$ $~~~~~~~~~~4500$ $~~~~~~~~~~\underline{4400}$ $~~~~~~~~~~~~~1000$ $~~~~~~~~~~~~~~~\underline{550}$ $~~~~~~~~~~~~~~~4500$ $~~~~~~~~~~~~~~~\underline{4400}$ $~~~~~~~~~~~~~~~~~100$ We can stop the division with the remainder of 100 because we only need to round to the nearest whole number: $181.8\approx 182$ box tops Thus each child needs to collect 182 box tops.
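The division and rounding above can be checked in a couple of lines (a quick sketch, not part of the textbook solution):

```python
# 100,000 box tops shared among 550 children, rounded to the nearest whole number.
box_tops_per_child = 100_000 / 550
print(round(box_tops_per_child, 1))   # -> 181.8
print(round(box_tops_per_child))      # -> 182
```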
2018-07-17 04:36:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2423790842294693, "perplexity": 1028.8569491749886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00495.warc.gz"}
http://mathhelpforum.com/advanced-algebra/6840-matrices.html
1. ## Matrices Doing some last-minute homework, and stuck on this one. Although it's due in 4 hours, I thought it's worth a shot. Our assignment is to "Use a property of determinants to show that A and A^T have the same characteristic polynomial." I know that (A - LI)x = 0, where L = lambda. Not sure how to prove that with something like det(A - LI) = 0? No idea. Also, one other: Show that if A and B are similar, then det A = det B 2. Originally Posted by Ideasman Doing some last-minute homework, and stuck on this one. Although it's due in 4 hours, I thought it's worth a shot. Our assignment is to "Use a property of determinants to show that A and A^T have the same characteristic polynomial." I know that (A - LI)x = 0, where L = lambda. Not sure how to prove that with something like det(A - LI) = 0? No idea. Also, one other: Show that if A and B are similar, then det A = det B Put B=A-LI, now det(B)=det(B'), so: det(B')=det((A-LI)')=det(A'-LI)=det(B)=det(A-LI) So the characteristic polynomial of A' (which is det(A'-LI)) is equal to det(B'), which is equal to det(B), which is the characteristic polynomial of A (that is: det(A-LI)). RonL
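Both facts in the thread are easy to sanity-check numerically (a sketch; `np.poly` returns the coefficients of the characteristic polynomial det(xI − M)):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# A and A^T have the same characteristic polynomial.
assert np.allclose(np.poly(A), np.poly(A.T))

# Similar matrices have equal determinants: det(P^-1 A P) = det(A),
# since det(P^-1) det(A) det(P) = det(A).
P = rng.standard_normal((4, 4))          # almost surely invertible
B = np.linalg.inv(P) @ A @ P
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
print("both checks pass")
```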
2017-07-22 17:25:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8653131127357483, "perplexity": 1524.5199968679683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424088.27/warc/CC-MAIN-20170722162708-20170722182708-00717.warc.gz"}
https://aitopics.org/mlt?cdid=arxivorg%3A2F58C12B&dimension=concept-tags
### Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference

Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima: the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated as a log-linear function approximator and learned with temporal difference (TD); MAP inference is performed by executing the resulting policy on held-out test data.

### Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width

Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree-structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.
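As a toy illustration of Gibbs sampling on a factor graph (a minimal example constructed for this note, not taken from either paper above): two binary variables joined by one pairwise factor, where each sweep resamples each variable from its conditional distribution, and the empirical marginal approaches the exact one computed by brute-force enumeration.

```python
import math
import random

# Toy factor graph: x1, x2 in {0,1} with unary factors exp(b_i * x_i)
# and a pairwise agreement factor exp(w * [x1 == x2]).
b1, b2, w = 0.5, -0.3, 1.0

def log_score(x1, x2):
    return b1 * x1 + b2 * x2 + w * (x1 == x2)

# Exact marginal P(x1 = 1) by enumerating the four configurations.
weights = {(x1, x2): math.exp(log_score(x1, x2)) for x1 in (0, 1) for x2 in (0, 1)}
Z = sum(weights.values())
exact = sum(v for (x1, _), v in weights.items() if x1 == 1) / Z

# Gibbs sampling: resample each variable from its conditional in turn.
random.seed(0)
x1, x2 = 0, 0
count = 0
n_sweeps, burn_in = 50_000, 1_000
for t in range(n_sweeps):
    p1 = math.exp(log_score(1, x2))
    p1 /= p1 + math.exp(log_score(0, x2))
    x1 = 1 if random.random() < p1 else 0
    p2 = math.exp(log_score(x1, 1))
    p2 /= p2 + math.exp(log_score(x1, 0))
    x2 = 1 if random.random() < p2 else 0
    if t >= burn_in:
        count += x1
estimate = count / (n_sweeps - burn_in)
print(f"exact={exact:.3f} gibbs={estimate:.3f}")
```

With only two binary variables the chain mixes quickly; the hierarchy-width result above is about when this kind of convergence can be guaranteed for much larger graphs.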
### $\alpha$ Belief Propagation as Fully Factorized Approximation

Belief propagation (BP) can do exact inference in loop-free graphs, but its performance can be poor in graphs with loops, and the understanding of its solution is limited. This work gives an interpretable belief propagation rule that is actually minimization of a localized $\alpha$-divergence. We term this algorithm $\alpha$ belief propagation ($\alpha$-BP). The performance of $\alpha$-BP is tested in MAP (maximum a posteriori) inference problems, where $\alpha$-BP can outperform (loopy) BP by a significant margin even in fully-connected graphs.

### Bounds on marginal probability distributions

We propose a novel bound on single-variable marginal probability distributions in factor graphs with discrete variables. The bound is obtained by propagating bounds (convex sets of probability distributions) over a subtree of the factor graph, rooted in the variable of interest. By construction, the method not only bounds the exact marginal probability distribution of a variable, but also its approximate Belief Propagation marginal ("belief"). Thus, apart from providing a practical means to calculate bounds on marginals, our contribution also lies in providing a better understanding of the error made by Belief Propagation. We show that our bound outperforms the state-of-the-art on some inference problems arising in medical diagnosis.

### Structured Prediction Theory Based on Factor Graph Complexity

We present a general theoretical analysis of structured prediction with a series of new results. We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These are the tightest margin bounds known for both standard multi-class and general structured prediction problems.
Our guarantees are expressed in terms of a data-dependent complexity measure, \emph{factor graph complexity}, which we show can be estimated from data and bounded in terms of familiar quantities for several commonly used hypothesis sets, and a sparsity measure for features and graphs. Our proof techniques include generalizations of Talagrand's contraction lemma that can be of independent interest. We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and show that learning is possible even with complex factor graphs. We present new learning bounds for this advanced setting, which we use to devise two new algorithms, \emph{Voted Conditional Random Field} (VCRF) and \emph{Voted Structured Boosting} (StructBoost). These algorithms can make use of complex features and factor graphs and yet benefit from favorable learning guarantees. We also report the results of experiments with VCRF on several datasets to validate our theory.
2022-05-22 02:11:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5477854013442993, "perplexity": 816.1818789626437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00477.warc.gz"}
https://www.semanticscholar.org/paper/Morita-Invariance-of-the-Filter-Dimension-and-of-of-Bavula-Hinchcliffe/b197138b1860c652ba98baf5867dd66c27928461
# Morita Invariance of the Filter Dimension and of the Inequality of Bernstein @article{Bavula2006MoritaIO, title={Morita Invariance of the Filter Dimension and of the Inequality of Bernstein}, author={V. Bavula and V. Hinchcliffe}, journal={Algebras and Representation Theory}, year={2006}, volume={11}, pages={497-504} } • Published 2006 • Mathematics • Algebras and Representation Theory It is proved that the filter dimension is Morita invariant. A direct consequence of this fact is the Morita invariance of the inequality of Bernstein: if an algebra A is Morita equivalent to the ring ${\cal D} (X)$ of differential operators on a smooth irreducible affine algebraic variety X of dimension n ≥ 1 over a field of characteristic zero then the Gelfand–Kirillov dimension ${\rm GK} (M)\geq n = \frac{{\rm GK} (A)}{2}$ for all nonzero finitely generated A-modules M. In fact, a stronger…
2021-04-19 02:44:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532497525215149, "perplexity": 470.77978271105866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00212.warc.gz"}
http://jmlr.csail.mit.edu/papers/v18/14-453.html
## Two New Approaches to Compressed Sensing Exhibiting Both Robust Sparse Recovery and the Grouping Effect

Mehmet Eren Ahsen, Niharika Challapalli, Mathukumalli Vidyasagar; 18(54):1−24, 2017.

### Abstract

In this paper we introduce a new optimization formulation for sparse regression and compressed sensing, called CLOT (Combined L-One and Two), wherein the regularizer is a convex combination of the $\ell_1$- and $\ell_2$-norms. This formulation differs from the Elastic Net (EN) formulation, in which the regularizer is a convex combination of the $\ell_1$-norm and the squared $\ell_2$-norm. It is shown that, in the context of compressed sensing, the EN formulation does not achieve robust recovery of sparse vectors, whereas the new CLOT formulation achieves robust recovery. Also, like EN but unlike LASSO, the CLOT formulation achieves the grouping effect, wherein coefficients of highly correlated columns of the measurement (or design) matrix are assigned roughly comparable values. It is already known that LASSO does not have the grouping effect. Therefore the CLOT formulation combines the best features of both LASSO (robust sparse recovery) and EN (grouping effect). The CLOT formulation is a special case of another one called SGL (Sparse Group LASSO), which was introduced into the literature previously, but without any analysis of either the grouping effect or robust sparse recovery. It is shown here that SGL achieves robust sparse recovery, and also achieves a version of the grouping effect, in that coefficients of highly correlated columns belonging to the same group of the measurement (or design) matrix are assigned roughly comparable values.
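The difference between the two regularizers is easy to state concretely. The sketch below (an illustration following the abstract's verbal description, with a mixing weight here called `alpha`) contrasts CLOT's unsquared $\ell_2$ term with EN's squared one; one consequence is that the CLOT penalty is positively homogeneous, scaling linearly with the argument.

```python
import numpy as np

def clot_penalty(x, alpha):
    # CLOT: convex combination of the l1 norm and the (unsquared) l2 norm
    return alpha * np.sum(np.abs(x)) + (1 - alpha) * np.linalg.norm(x)

def en_penalty(x, alpha):
    # Elastic Net: convex combination of the l1 norm and the SQUARED l2 norm
    return alpha * np.sum(np.abs(x)) + (1 - alpha) * np.linalg.norm(x) ** 2

x = np.array([3.0, 4.0])          # ||x||_1 = 7, ||x||_2 = 5
print(clot_penalty(x, 0.5))       # 0.5*7 + 0.5*5  = 6.0
print(en_penalty(x, 0.5))         # 0.5*7 + 0.5*25 = 16.0

# CLOT's penalty is positively homogeneous: scaling x by t scales it by t.
t = 2.0
assert np.isclose(clot_penalty(t * x, 0.5), t * clot_penalty(x, 0.5))
```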
2017-08-20 07:48:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8266909122467041, "perplexity": 1493.2294508345112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106358.80/warc/CC-MAIN-20170820073631-20170820093631-00676.warc.gz"}
https://en.wikipedia.org/wiki/Sum_rule_in_quantum_mechanics
# Sum rule in quantum mechanics In quantum mechanics, a sum rule is a formula for transitions between energy levels, in which the sum of the transition strengths is expressed in a simple form. Sum rules are used to describe the properties of many physical systems, including solids, atoms, atomic nuclei, and nuclear constituents such as protons and neutrons. The sum rules are derived from general principles, and are useful in situations where the behavior of individual energy levels is too complex to be described by a precise quantum-mechanical theory. In general, sum rules are derived by using Heisenberg's quantum-mechanical algebra to construct operator equalities, which are then applied to the particles or energy levels of a system. ## Derivation of sum rules[1] Assume that the Hamiltonian ${\displaystyle {\hat {H}}}$ has a complete set of eigenfunctions ${\displaystyle |n\rangle }$ with eigenvalues ${\displaystyle E_{n}}$: ${\displaystyle {\hat {H}}|n\rangle =E_{n}|n\rangle .}$ For the Hermitian operator ${\displaystyle {\hat {A}}}$ we define the repeated commutator ${\displaystyle {\hat {C}}^{(k)}}$ iteratively by: {\displaystyle {\begin{aligned}{\hat {C}}^{(0)}&\equiv {\hat {A}}\\{\hat {C}}^{(1)}&\equiv [{\hat {H}},{\hat {A}}]={\hat {H}}{\hat {A}}-{\hat {A}}{\hat {H}}\\{\hat {C}}^{(k)}&\equiv [{\hat {H}},{\hat {C}}^{(k-1)}],\ \ \ k=1,2,\ldots \end{aligned}}} The operator ${\displaystyle {\hat {C}}^{(0)}}$ is Hermitian since ${\displaystyle {\hat {A}}}$ is defined to be Hermitian. 
The operator ${\displaystyle {\hat {C}}^{(1)}}$ is anti-Hermitian: ${\displaystyle \left({\hat {C}}^{(1)}\right)^{\dagger }=({\hat {H}}{\hat {A}})^{\dagger }-({\hat {A}}{\hat {H}})^{\dagger }={\hat {A}}{\hat {H}}-{\hat {H}}{\hat {A}}=-{\hat {C}}^{(1)}.}$ By induction one finds: ${\displaystyle \left({\hat {C}}^{(k)}\right)^{\dagger }=(-1)^{k}{\hat {C}}^{(k)}}$ and also ${\displaystyle \langle m|{\hat {C}}^{(k)}|n\rangle =(E_{m}-E_{n})^{k}\langle m|{\hat {A}}|n\rangle .}$ For a Hermitian operator we have ${\displaystyle |\langle m|{\hat {A}}|n\rangle |^{2}=\langle m|{\hat {A}}|n\rangle \langle m|{\hat {A}}|n\rangle ^{\ast }=\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle .}$ Using this relation we derive: {\displaystyle {\begin{aligned}\langle m|[{\hat {A}},{\hat {C}}^{(k)}]|m\rangle &=\langle m|{\hat {A}}{\hat {C}}^{(k)}|m\rangle -\langle m|{\hat {C}}^{(k)}{\hat {A}}|m\rangle \\&=\sum _{n}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {C}}^{(k)}|m\rangle -\langle m|{\hat {C}}^{(k)}|n\rangle \langle n|{\hat {A}}|m\rangle \\&=\sum _{n}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle (E_{n}-E_{m})^{k}-(E_{m}-E_{n})^{k}\langle m|{\hat {A}}|n\rangle \langle n|{\hat {A}}|m\rangle \\&=\sum _{n}(1-(-1)^{k})(E_{n}-E_{m})^{k}|\langle m|{\hat {A}}|n\rangle |^{2}.\end{aligned}}} The result can be written as ${\displaystyle \langle m|[{\hat {A}},{\hat {C}}^{(k)}]|m\rangle ={\begin{cases}0,&{\mbox{if }}k{\mbox{ is even}}\\2\sum _{n}(E_{n}-E_{m})^{k}|\langle m|{\hat {A}}|n\rangle |^{2},&{\mbox{if }}k{\mbox{ is odd}}.\end{cases}}}$ For ${\displaystyle k=1}$ this gives: ${\displaystyle \langle m|[{\hat {A}},[{\hat {H}},{\hat {A}}]]|m\rangle =2\sum _{n}(E_{n}-E_{m})|\langle m|{\hat {A}}|n\rangle |^{2}.}$ ## References 1. ^ Sanwu Wang, Generalization of the Thomas-Reiche-Kuhn and the Bethe sum rules, Physical Review A 60, 262 (1999). http://prola.aps.org/abstract/PRA/v60/i1/p262_1
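The final identity (the $k=1$ case) can be verified numerically for a random Hermitian pair. The sketch below works in the eigenbasis of $H$, so $H$ is diagonal with eigenvalues $E_n$ and the matrix elements $\langle m|{\hat A}|n\rangle$ are just the entries of $A$ (the specific sizes and seed are arbitrary choices for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 6, 2

E = rng.standard_normal(N)              # eigenvalues of H, in H's own eigenbasis
H = np.diag(E)
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (X + X.conj().T) / 2                # random Hermitian operator

C1 = H @ A - A @ H                      # C^(1) = [H, A]
lhs = (A @ C1 - C1 @ A)[m, m]           # <m| [A, [H, A]] |m>
rhs = 2 * np.sum((E - E[m]) * np.abs(A[m, :]) ** 2)
assert np.isclose(lhs.real, rhs) and np.isclose(lhs.imag, 0)
print("sum rule verified")
```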
# 3.5.7 Correlations in the Gaia data

Author(s): Lennart Lindegren

Correlations among the astrometric data exist on several different levels, e.g.:

• within-source correlations (between the different astrometric parameters of the same source);
• between-source correlations (between the same astrometric parameter, e.g. parallax, for different sources);
• general correlations (between arbitrary astrometric parameters of different sources).

Within-source correlations are always provided, when relevant, in the Gaia Archive. They are estimated from the $5\times 5$ normal matrix of the individual sources. Because they are computed by neglecting the between-source correlations, they are only approximations of the actual within-source correlations, but probably sufficiently good for all astrophysical applications. They are principally needed when transforming the five astrometric parameters to other representations (e.g. calculation of galactic coordinates, tangential velocity, or epoch transformation).

Between-source correlations are important when calculating quantities that depend on several sources, such as the mean parallax or internal kinematics of a stellar cluster. In the final Gaia data such correlations are expected to be important mainly on small angular scales (less than a few degrees), but in Gaia DR2 they could exist on much larger angular scales (tens of degrees). Between-source correlations and general correlations are much harder to estimate than the within-source correlations, principally because their rigorous calculation would involve the inversion of extremely large matrices. Approximate methods to estimate general correlations exist (e.g. Holl and Lindegren 2012; Holl et al. 2012a) but have not been implemented in the Gaia data processing.
Empirically, the between-source correlations can be estimated by analysing the spatial correlations of the astrometric residuals, or from statistical analysis of the parallaxes and proper motions for distinct groups of sources, such as in stellar clusters and quasars. For Gaia DR2 the between-source correlations have not been extensively studied, but they are believed to be very significant, arising mainly from modelling errors in the attitude or instrument. A limited correlation study of the astrometric residuals in Gaia DR2 is reported in Section 5.4 of Lindegren et al. (2018).
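The epoch-transformation use case mentioned above can be sketched with a toy $5\times 5$ covariance matrix. The uncertainty values and the single correlation coefficient below are invented for illustration; they are not taken from the Gaia Archive.

```python
import numpy as np

# Hypothetical within-source uncertainties for (ra*, dec, parallax, pmra, pmdec),
# in mas and mas/yr -- illustrative values only.
sigma = np.array([0.03, 0.03, 0.04, 0.06, 0.05])
corr = np.eye(5)
corr[0, 3] = corr[3, 0] = 0.2          # assumed ra*-pmra correlation
cov = np.outer(sigma, sigma) * corr    # covariance from sigmas and correlations

# Linear epoch propagation over dt years:
#   ra*(t) = ra* + pmra * dt,  dec(t) = dec + pmdec * dt
dt = 10.0
J = np.eye(5)
J[0, 3] = dt
J[1, 4] = dt
cov_t = J @ cov @ J.T                  # propagated covariance at the new epoch

# The propagated position variance picks up proper-motion and correlation terms:
#   var(ra*_t) = var(ra*) + 2*dt*cov(ra*, pmra) + dt**2 * var(pmra)
assert np.isclose(cov_t[0, 0], cov[0, 0] + 2 * dt * cov[0, 3] + dt ** 2 * cov[3, 3])
```

The cross term shows why the within-source correlations matter: ignoring cov(ra*, pmra) would misstate the propagated position uncertainty even when the individual sigmas are correct.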
# Would Pluto keep an orbit without its moon?

I haven't studied space that much, but I know that Pluto has very little gravity, but just enough to keep a human on its surface. Would Pluto keep a regular orbit around the Sun without some extra supporting gravity, such as its moon's?

## migrated from space.stackexchange.com Dec 24 '14 at 19:14

This question came from our site for spacecraft operators, scientists, engineers, and enthusiasts.

• I would think orbit mechanics (which this question really boils down to) is on topic on Space Exploration. Should this have really been moved? – FraserOfSmeg Dec 24 '14 at 21:02

The orbit of a planet does not in any way depend on its moons, mass, or own gravity. The orbit is the same for a grain of dust as for a giant planet. It only depends on the mass of the Sun and the distance to the Sun. The eccentricity of the orbit is then given by the initial angular momentum of the orbiting object. This might explain it a bit better. This was a really BIG breakthrough in physics, actually the foundation of physics, about 400 years ago. Big because it is irrefutable although it is counter-intuitive. It is really difficult to believe unless one tries to accept the actual observations and logic.

Yes, a planetoid such as Pluto will be able to orbit no matter how small its mass, so long as its angular momentum is within a range determined by its distance from the Sun. In the case of Pluto and its primary moon, Charon, however, things are even more interesting. If Charon were to magically disappear (ejection would be another story), the point tracing the line of Pluto's orbit at any one time would be very near the center of Pluto. There are other moons, but their gravitational tug is minimal. The orbital trace of the Pluto/Charon system, however, is not even inside Pluto. Most planets have satellites that are very much less massive than they are, and thus the center of their systemic rotation is fairly close to the primary's center.
The masses of Pluto and Charon, however, are significantly closer in size.

The Pluto–Charon system is noteworthy for being one of the Solar System's few binary systems, defined as those whose barycenter lies above the primary's surface . . . [108] This and the large size of Charon relative to Pluto has led some astronomers to call it a dwarf double planet.[109] . . . This also means that the rotation period of each is equal to the time it takes the entire system to rotate around its common center of gravity.[66]

Therefore, for practical purposes, with respect to the orbit of Pluto, it's far simpler to determine the trajectory of the Pluto/Charon barycenter -- which is completely above the surface of Pluto.

An oblique view of the Pluto–Charon system showing that Pluto orbits a point outside itself. Pluto's orbit is shown in red and Charon's orbit is shown in green.

"It would" is the short answer. The long answer is that it depends on how you eject the mass of the moon from Pluto's sphere of influence. A good way to think of this is: if we launch a satellite into solar orbit, it can orbit the Sun in the same orbit as the Earth. Now, if we were to launch far too much mass from the Earth, then we could either move closer to, or further from, the Sun. This is the same way rockets propel themselves.
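The claim that the Pluto–Charon barycenter lies outside Pluto is easy to check numerically. The masses, separation, and radius below are rounded published values, assumed for this sketch rather than taken from the answers above.

```python
# Approximate published values (rounded; assumptions for this sketch)
M_PLUTO_KG = 1.303e22
M_CHARON_KG = 1.586e21
SEPARATION_KM = 19_600       # mean Pluto-Charon separation
PLUTO_RADIUS_KM = 1_188

# Distance of the two-body barycenter from Pluto's center:
# r_barycenter = a * m_charon / (m_pluto + m_charon)
barycenter_km = SEPARATION_KM * M_CHARON_KG / (M_PLUTO_KG + M_CHARON_KG)

# The barycenter sits roughly 2,000 km out -- well above Pluto's surface
assert barycenter_km > PLUTO_RADIUS_KM
print(f"Barycenter is {barycenter_km:.0f} km from Pluto's center")
```

With Charon's mass roughly 12% of Pluto's, the barycenter falls well beyond Pluto's radius, which is exactly the binary-system criterion quoted above.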
# Kleiber's law

Kleiber's law, named after Max Kleiber for his biology work in the early 1930s, is the observation that, for the vast majority of animals, an animal's metabolic rate scales to the ¾ power of the animal's mass. Symbolically: if q0 is the animal's metabolic rate, and M the animal's mass, then Kleiber's law states that q0 ~ M¾. Thus, over the same timespan, a cat having a mass 100 times that of a mouse will consume only about 32 times the energy the mouse uses. The exact value of the exponent in Kleiber's law is unclear, in part because there is currently no completely satisfactory theoretical explanation for the law.

Kleiber's plot comparing body size to metabolic rate for a variety of species.[1]

## Proposed explanations for the law

Kleiber's law, like many other biological allometric laws, is a consequence of the physics and/or geometry of animal circulatory systems.[2] Max Kleiber first discovered the law when analyzing a large number of independent studies on respiration within individual species.[3] Kleiber expected to find an exponent of 2⁄3 (for reasons explained below), and was confounded by the exponent of 3⁄4 he discovered.[4]

### Heuristic explanation

One explanation for Kleiber's law lies in the difference between structural and reserve mass. Structural mass involves maintenance costs, reserve mass does not. Hence, small adults of one species respire more per unit of weight than large adults of another species because a larger fraction of their body mass consists of structure rather than reserve. Within each species, young (i.e., small) organisms respire more per unit of weight than old (large) ones of the same species because of the overhead costs of growth.[5]

### Exponent 2⁄3

Explanations for 2⁄3-scaling tend to assume that metabolic rates scale to avoid heat exhaustion.
Because bodies lose heat passively via their surface, but produce heat metabolically throughout their mass, the metabolic rate must scale in such a way as to counteract the square–cube law. The precise exponent to do so is 2⁄3.[6] Such an argument does not address the fact that different organisms exhibit different shapes (and hence have different surface-to-volume ratios, even when scaled to the same size). Reasonable estimates for organisms' surface area do appear to scale linearly with the metabolic rate.[5]

### Exponent 3⁄4

A model due to West, Brown, and Enquist (hereafter WBE) suggests that 3⁄4-scaling arises because of efficiency in nutrient distribution and transport throughout an organism. In most organisms, metabolism is supported by a circulatory system featuring branching tubules (i.e., plant vascular systems, insect tracheae, or the human cardiovascular system). WBE claim that (1) metabolism should scale proportionally to nutrient flow (or, equivalently, total fluid flow) in this circulatory system and (2) in order to minimize the energy dissipated in transport, the volume of fluid used to transport nutrients (i.e., blood volume) is a fixed fraction of body mass.[7] They then proceed by analyzing the consequences of these two claims at the level of the smallest circulatory tubules (capillaries, alveoli, etc.). Experimentally, the volume contained in those smallest tubules is constant across a wide range of masses. Because fluid flow through a tubule is determined by the volume thereof, the total fluid flow is proportional to the total number of smallest tubules. Thus, if B denotes the basal metabolic rate, Q the total fluid flow, and N the number of minimal tubules, ${\displaystyle B\propto Q\propto N}$. Circulatory systems do not grow by simply scaling proportionally larger; they become more deeply nested.
The depth of nesting depends on the self-similarity exponents of the tubule dimensions, and the effects of that depth depend on how many "child" tubules each branching produces. Connecting these values to macroscopic quantities depends (very loosely) on a precise model of tubules. WBE show that, if the tubules are well-approximated by rigid cylinders, then, in order to prevent the fluid from "getting clogged" in small cylinders, the total fluid volume V satisfies ${\displaystyle N^{4}\propto V^{3}}$.[8] Because blood volume is a fixed fraction of body mass, ${\displaystyle B\propto M^{\frac {3}{4}}}$.[7]

### Non-power-law scaling

Closer analysis suggests that Kleiber's law does not hold over a wide variety of scales. Metabolic rates for smaller animals (birds under 10 kg [22 lb], or insects) typically fit to 2⁄3 much better than 3⁄4; for larger animals, the reverse holds.[6] As a result, log-log plots of metabolic rate versus body mass appear to "curve" upward, and fit better to quadratic models.[9] In all cases, local fits exhibit exponents in the [2⁄3, 3⁄4] range.[10]

#### Modified circulatory models

Adjustments to the WBE model that retain assumptions of network shape predict larger scaling exponents, worsening the discrepancy with observed data.[11] But one can retain a similar theory by relaxing WBE's assumption of a nutrient transport network that is both fractal and circulatory.[10] (WBE argued that fractal circulatory networks would necessarily evolve to minimize energy used for transport, but other researchers argue that their derivation contains subtle errors.[6][12]) Different networks are less efficient, in that they exhibit a lower scaling exponent, but a metabolic rate determined by nutrient transport will always exhibit scaling between 2⁄3 and 3⁄4.[10] If larger metabolic rates are evolutionarily favored, then low-mass organisms will prefer to arrange their networks to scale as 2⁄3, but large-mass organisms will prefer to arrange their networks as
3⁄4, which produces the observed curvature.[13]

#### Modified thermodynamic models

An alternative model notes that metabolic rate does not solely serve to generate heat. Metabolic rate contributing solely to useful work should scale with power 1 (linearly), whereas metabolic rate contributing to heat generation should be limited by surface area and scale with power 2⁄3. Basal metabolic rate is then the convex combination of these two effects: if the proportion of useful work is f, then the basal metabolic rate should scale as ${\displaystyle B=f\cdot kM+(1-f)\cdot k'M^{\frac {2}{3}}}$ where k and k′ are constants of proportionality. k′ in particular describes the surface-area ratio of organisms and is approximately 0.1 kJ·h⁻¹·g⁻²⁄³;[4] typical values for f are 15-20%.[14] The theoretical maximum value of f is 21%, because the efficiency of glucose oxidation is only 42%, and half of the ATP so produced is wasted.[4]

## Experimental support

Analyses of variance for a variety of physical variables suggest that although most variation in basal metabolic rate is determined by mass, additional variables with significant effects include body temperature and taxonomic order.[15][16] A 1932 work by Brody calculated that the scaling was approximately 0.73.[5][17] A 2004 analysis of field metabolic rates for mammals concluded that they appear to scale with exponent 0.749.[13]

## Criticism of the law

Kozlowski and Konarzewski (hereafter "K&K") have argued that attempts to explain Kleiber's law via any sort of limiting factor are flawed, because metabolic rates vary by factors of 4-5 between rest and activity.
Hence any limits that affect the scaling of basal metabolic rate would in fact make elevated metabolism — and hence all animal activity — impossible.[18] WBE conversely argue that animals may well optimize for minimal transport energy dissipation during rest, without abandoning the ability for less efficient function at other times.[19] Other researchers have also noted that K&K's criticism of the law tends to focus on precise structural details of the WBE circulatory networks, but that the latter are not essential to the model.[8] Kleiber's law only appears when studying animals as a whole; scaling exponents within taxonomic subgroupings differ substantially.[20][21]

## Generalizations

Kleiber's law only applies to interspecific comparisons; it (usually) does not apply to intraspecific ones.[22]

### In other kingdoms

A 1999 analysis concluded that biomass production in a given plant scaled with the 3⁄4 power of the plant's mass during the plant's growth,[23] but a 2001 paper that included various types of unicellular photosynthetic organisms found scaling exponents intermediate between 0.75 and 1.00.[24] A 2006 paper in Nature argued that the exponent of mass is close to 1 for plant seedlings, but that variation between species, phyla, and growth conditions overwhelms any "Kleiber's law"-like effects.[25]

### Intra-organismal results

Because cell protoplasm appears to have constant density across a range of organism masses, a consequence of Kleiber's law is that, in larger species, less energy is available to each cell volume. Cells appear to cope with this difficulty via choosing one of the following two strategies: a slower cellular metabolic rate, or smaller cells.
The latter strategy is exhibited by neurons and adipocytes; the former by every other type of cell.[26] As a result, different organs exhibit different allometric scalings (see table).[5]

Allometric scalings for BMR-vs.-mass in human tissue:

| Organ    | Scaling exponent |
|----------|------------------|
| Brain    | 0.7              |
| Kidney   | 0.85             |
| Liver    | 0.87             |
| Heart    | 0.98             |
| Muscle   | 1.0              |
| Skeleton | 1.1              |

## References

1. ^ Kleiber M (October 1947). "Body size and metabolic rate". Physiological Reviews. 27 (4): 511–41. doi:10.1152/physrev.1947.27.4.511. PMID 20267758. 2. ^ Schmidt-Nielsen, Knut (1984). Scaling: Why is animal size so important?. NY, NY: Cambridge University Press. ISBN 978-0521266574. 3. ^ Kleiber M (1932). "Body size and metabolism". Hilgardia. 6 (11): 315–351. doi:10.3733/hilg.v06n11p315. 4. ^ a b c Ballesteros FJ, Martinez VJ, Luque B, Lacasa L, Valor E, Moya A (January 2018). "On the thermodynamic origin of metabolic scaling". Scientific Reports. 8 (1): 1448. Bibcode:2018NatSR...8.1448B. doi:10.1038/s41598-018-19853-6. PMC 5780499. PMID 29362491. 5. ^ a b c d Hulbert, A. J. (28 April 2014). "A Sceptics View: "Kleiber's Law" or the "3/4 Rule" is neither a Law nor a Rule but Rather an Empirical Approximation". Systems. 2 (2): 186–202. doi:10.3390/systems2020186. 6. ^ a b c Dodds PS, Rothman DH, Weitz JS (March 2001). "Re-examination of the "3/4-law" of metabolism". Journal of Theoretical Biology. 209 (1): 9–27. arXiv:physics/0007096. doi:10.1006/jtbi.2000.2238. PMID 11237567. 7. ^ a b West GB, Brown JH, Enquist BJ (April 1997). "A general model for the origin of allometric scaling laws in biology". Science. 276 (5309): 122–6. doi:10.1126/science.276.5309.122. PMID 9082983. 8. ^ a b Etienne RS, Apol ME, Olff HA (2006). "Demystifying the West, Brown & Enquist model of the allometry of metabolism". Functional Ecology. 20 (2): 394–399. doi:10.1111/j.1365-2435.2006.01136.x. 9. ^ Kolokotrones T, Deeds EJ, Fontana W (April 2010). "Curvature in metabolic scaling". Nature. 464 (7289): 753–6. Bibcode:2010Natur.464..753K.
doi:10.1038/nature08920. PMID 20360740. But note that a quadratic curve has undesirable theoretical implications; see MacKay NJ (July 2011). "Mass scale and curvature in metabolic scaling. Comment on: T. Kolokotrones et al., curvature in metabolic scaling, Nature 464 (2010) 753-756". Journal of Theoretical Biology. 280 (1): 194–6. doi:10.1016/j.jtbi.2011.02.011. PMID 21335012. 10. ^ a b c Banavar JR, Moses ME, Brown JH, Damuth J, Rinaldo A, Sibly RM, Maritan A (September 2010). "A general basis for quarter-power scaling in animals". Proceedings of the National Academy of Sciences of the United States of America. 107 (36): 15816–20. Bibcode:2010PNAS..10715816B. doi:10.1073/pnas.1009974107. PMC 2936637. PMID 20724663. 11. ^ Savage VM, Deeds EJ, Fontana W (September 2008). "Sizing up allometric scaling theory". PLoS Computational Biology. 4 (9): e1000171. doi:10.1371/journal.pcbi.1000171. PMC 2518954. PMID 18787686. 12. ^ Apol ME, Etienne RS, Olff H (2008). "Revisiting the evolutionary origin of allometric metabolic scaling in biology". Functional Ecology. 22 (6): 1070–1080. doi:10.1111/j.1365-2435.2008.01458.x. 13. ^ a b Savage VM, Gillooly JF, Woodruff WH, West GB, Allen AP, Enquist BJ, Brown JH (April 2004). "The predominance of quarter-power scaling in biology". Functional Ecology. 18 (2): 257–282. doi:10.1111/j.0269-8463.2004.00856.x. The original paper by West et al. (1997), which derives a model for the mammalian arterial system, predicts that smaller mammals should show consistent deviations in the direction of higher metabolic rates than expected from M3⁄4 scaling. Thus, metabolic scaling relationships are predicted to show a slight curvilinearity at the smallest size range. 14. ^ Zotin, A. I. (1990). Thermodynamic Bases of Biological Processes: Physiological Reactions and Adaptations. Walter de Gruyter. ISBN 9783110114010. 15. ^ Clarke A, Rothery P, Isaac NJ (May 2010). "Scaling of basal metabolic rate with body mass and temperature in mammals".
The Journal of Animal Ecology. 79 (3): 610–9. doi:10.1111/j.1365-2656.2010.01672.x. PMID 20180875. 16. ^ Hayssen V, Lacy RC (1985). "Basal metabolic rates in mammals: taxonomic differences in the allometry of BMR and body mass". Comparative Biochemistry and Physiology. A, Comparative Physiology. 81 (4): 741–54. doi:10.1016/0300-9629(85)90904-1. PMID 2863065. 17. ^ Brody, S. (1945). Bioenergetics and Growth. NY, NY: Reinhold. 18. ^ Kozlowski J, Konarzewski M (2004). "Is West, Brown and Enquist's model of allometric scaling mathematically correct and biologically relevant?". Functional Ecology. 18 (2): 283–9. doi:10.1111/j.0269-8463.2004.00830.x. 19. ^ Brown JH, West GB, Enquist BJ (2005). "Yes, West, Brown and Enquist's model of allometric scaling is both mathematically correct and biologically relevant". Functional Ecology. 19 (4): 735–738. doi:10.1111/j.1365-2435.2005.01022.x. 20. ^ White CR, Blackburn TM, Seymour RS (October 2009). "Phylogenetically informed analysis of the allometry of Mammalian Basal metabolic rate supports neither geometric nor quarter-power scaling". Evolution; International Journal of Organic Evolution. 63 (10): 2658–67. doi:10.1111/j.1558-5646.2009.00747.x. PMID 19519636. 21. ^ Sieg AE, O'Connor MP, McNair JN, Grant BW, Agosta SJ, Dunham AE (November 2009). "Mammalian metabolic allometry: do intraspecific variation, phylogeny, and regression models matter?". The American Naturalist. 174 (5): 720–33. doi:10.1086/606023. PMID 19799501. 22. ^ Heusner, A. A. (1982-04-01). "Energy metabolism and body size I. Is the 0.75 mass exponent of Kleiber's equation a statistical artifact?". Respiration Physiology. 48 (1): 1–12. doi:10.1016/0034-5687(82)90046-9. ISSN 0034-5687. PMID 7111915. 23. ^ Enquist BJ, West GB, Charnov EL, Brown JH (28 October 1999). "Allometric scaling of production and life-history variation in vascular plants". Nature. 401 (6756): 907–911. doi:10.1038/44819. ISSN 1476-4687. Corrigendum published 7 December 2000. 24. 
^ Niklas KJ (2006). "A phyletic perspective on the allometry of plant biomass-partitioning patterns and functionally equivalent organ-categories". The New Phytologist. 171 (1): 27–40. doi:10.1111/j.1469-8137.2006.01760.x. PMID 16771980. 25. ^ Reich PB, Tjoelker MG, Machado JL, Oleksyn J (January 2006). "Universal scaling of respiratory metabolism, size and nitrogen in plants". Nature. 439 (7075): 457–61. Bibcode:2006Natur.439..457R. doi:10.1038/nature04282. PMID 16437113. For a contrary view, see Enquist BJ, Allen AP, Brown JH, Gillooly JF, Kerkhoff AJ, Niklas KJ, Price CA, West GB (February 2007). "Biological scaling: does the exception prove the rule?" (PDF). Nature. 445 (7127): E9–10, discussion E10–1. doi:10.1038/nature05548. PMID 17268426. and associated responses. 26. ^ Savage VM, Allen AP, Brown JH, Gillooly JF, Herman AB, Woodruff WH, West GB (March 2007). "Scaling of number, size, and metabolic rate of cells with body size in mammals". Proceedings of the National Academy of Sciences of the United States of America. 104 (11): 4718–23. Bibcode:2007PNAS..104.4718S. doi:10.1073/pnas.0611235104. PMC 1838666. PMID 17360590.
Decomposer Decomposers are organisms that break down dead or decaying organisms, and in doing so, they carry out the natural process of decomposition. Like herbivores and predators, decomposers are heterotrophic, meaning that they use organic substrates to get their energy, carbon and nutrients for growth and development. While the terms decomposer and detritivore are often interchangeably used, detritivores must ingest and digest dead matter via internal processes while decomposers can directly absorb nutrients through chemical and biological processes hence breaking down matter without ingesting it. Thus, invertebrates such as earthworms, woodlice, and sea cucumbers are technically detritivores, not decomposers, since they must ingest nutrients and are unable to absorb them externally. Dominance (ecology) Ecological dominance is the degree to which a taxon is more numerous than its competitors in an ecological community, or makes up more of the biomass. Most ecological communities are defined by their dominant species. In many examples of wet woodland in western Europe, the dominant tree is alder (Alnus glutinosa). In temperate bogs, the dominant vegetation is usually species of Sphagnum moss. Tidal swamps in the tropics are usually dominated by species of mangrove (Rhizophoraceae) Some sea floor communities are dominated by brittle stars. Exposed rocky shorelines are dominated by sessile organisms such as barnacles and limpets. Ecological threshold Ecological threshold is the point at which a relatively small change or disturbance in external conditions causes a rapid change in an ecosystem. When an ecological threshold has been passed, the ecosystem may no longer be able to return to its state by means of its inherent resilience . Crossing an ecological threshold often leads to rapid change of ecosystem health. 
Ecological threshold represent a non-linearity of the responses in ecological or biological systems to pressures caused by human activities or natural processes.Critical load, tipping point and regime shift are examples of other closely related terms. Feeding frenzy In ecology, a feeding frenzy occurs when predators are overwhelmed by the amount of prey available. For example, a large school of fish can cause nearby sharks, such as the lemon shark, to enter into a feeding frenzy. This can cause the sharks to go wild, biting anything that moves, including each other or anything else within biting range. Another functional explanation for feeding frenzy is competition amongst predators. This term is most often used when referring to sharks or piranhas. It has also been used as a term within journalism. Herbivore A herbivore is an animal anatomically and physiologically adapted to eating plant material, for example foliage or marine algae, for the main component of its diet. As a result of their plant diet, herbivorous animals typically have mouthparts adapted to rasping or grinding. Horses and other herbivores have wide flat teeth that are adapted to grinding grass, tree bark, and other tough plant material. A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This flora is made up of cellulose-digesting protozoans or bacteria. Jarman-Bell principle The Jarman-Bell principle, coined by P.J Jarman (1968.) and R.H.V Bell (1971), is a concept in ecology offering a link between a herbivore's diet and their overall size. It operates by observing the allometric (non- linear scaling) properties of herbivores. According to the Jarman-Bell principle, the food quality of a herbivore's intake decreases as the size of the herbivore increases, but the amount of such food increases to counteract the low quality foods.Large herbivores can subsist on low quality food. 
Their gut size is larger than smaller herbivores. The increased size allows for better digestive efficiency, and thus allow viable consumption of low quality food. Small herbivores require more energy per unit of body mass compared to large herbivores. A smaller size, thus smaller gut size and lower efficiency, imply that these animals need to select high quality food to function. Their small gut limits the amount of space for food, so they eat low quantities of high quality diet. Some animals practice coprophagy, where they ingest fecal matter to recycle untapped/ undigested nutrients.However, the Jarman-Bell principle is not without exception. Small herbivorous members of mammals, birds and reptiles were observed to be inconsistent with the trend of small body mass being linked with high-quality food. There have also been disputes over the mechanism behind the Jarman-Bell principle; that larger body sizes does not increase digestive efficiency.The implications of larger herbivores ably subsisting on poor quality food compared smaller herbivores mean that the Jarman-Bell principle may contribute evidence for Cope's rule. Furthermore, the Jarman-Bell principle is also important by providing evidence for the ecological framework of "resource partitioning, competition, habitat use and species packing in environments" and has been applied in several studies. Kleiber Kleiber is a German surname. 
Notable people with the surname include: Erich Kleiber (1890–1956), Austrian-German conductor Max Kleiber (1893–1976), Swiss agricultural biologist, known for Kleiber's law Carlos Kleiber (1930–2004), Austrian conductor Günther Kleiber (born 1931), German communist politician Stanislava Brezovar or Stanislava Kleiber (1937–2003), Slovenian ballerina Jolán Kleiber-Kontsek (born 1939), Hungarian athlete Dávid Kleiber (born 1990), Hungarian football player Max Kleiber Max Kleiber (4 January 1893 – 5 January 1976) was a Swiss agricultural biologist, born and educated in Zurich, Switzerland. Kleiber graduated from the Federal Institute of Technology as an Agricultural Chemist in 1920, earned the ScD degree in 1924, and became a private dozent after publishing his thesis The Energy Concept in the Science of Nutrition. Kleiber joined the Animal Husbandry Department of UC Davis in 1929 to construct respiration chambers and conduct research on energy metabolism in animals. Among his many important achievements, two are especially noteworthy. In 1932 he came to the conclusion that the ¾ power of body weight was the most reliable basis for predicting the basal metabolic rate (BMR) of animals and for comparing nutrient requirements among animals of different size. He also provided the basis for the conclusion that total efficiency of energy utilization is independent of body size. These concepts and several others fundamental for understanding energy metabolism are discussed in Kleiber's book, The Fire of Life published in 1961 and subsequently translated into German, Polish, Spanish, and Japanese. He is credited with the description of the ratio of metabolism to body mass, which became Kleiber's law. Mesotrophic soil Mesotrophic soils are soils with a moderate inherent fertility. 
An indicator of soil fertility is its base status, which is expressed as a ratio relating the major nutrient cations (calcium, magnesium, potassium and sodium) found there to the soil's clay percentage. This is commonly expressed in hundredths of a mole of cations per kilogram of clay, i.e. cmol (+) kg−1 clay.

Metabolic theory of ecology

The metabolic theory of ecology (MTE) is an extension of Kleiber's law and posits that the metabolic rate of organisms is the fundamental biological rate that governs most observed patterns in ecology. MTE is part of a larger set of theory known as metabolic scaling theory that attempts to provide a unified theory for the importance of metabolism in driving pattern and process in biology, from the level of cells all the way to the biosphere. MTE is based on an interpretation of the relationships between body size, body temperature, and metabolic rate across all organisms. Small-bodied organisms tend to have higher mass-specific metabolic rates than larger-bodied organisms. Furthermore, organisms that operate at warm temperatures through endothermy or by living in warm environments tend towards higher metabolic rates than organisms that operate at colder temperatures. This pattern is consistent from the unicellular level up to the level of the largest animals and plants on the planet. In MTE, this relationship is considered to be the single constraint that defines biological processes at all levels of organization (from the individual up to the ecosystem level), and it is a macroecological theory that aims to be universal in scope and application.

Mycotroph

A mycotroph is a plant that gets all or part of its carbon, water, or nutrient supply through symbiotic association with fungi. The term can refer to plants that engage in either of two distinct symbioses with fungi: Many mycotrophs have a mutualistic association with fungi in any of several forms of mycorrhiza. The majority of plant species are mycotrophic in this sense.
Examples include Burmanniaceae. Some mycotrophs are parasitic upon fungi, in an association known as myco-heterotrophy.

Organotroph

An organotroph is an organism that obtains hydrogen or electrons from organic substrates. This term is used in microbiology to classify and describe organisms based on how they obtain electrons for their respiration processes. Some organotrophs, such as animals and many bacteria, are also heterotrophs. Organotrophs can be either anaerobic or aerobic.

Overpopulation

Overpopulation occurs when a species' population exceeds the carrying capacity of its ecological niche. It can result from an increase in births (fertility rate), a decline in the mortality rate, an increase in immigration, or an unsustainable biome and depletion of resources. When overpopulation occurs, individuals compete for the limited resources available to survive. The change in the number of individuals per unit area in a given locality is an important variable that has a significant impact on the entire ecosystem.

Planktivore

A planktivore is an aquatic organism that feeds on planktonic food, including zooplankton and phytoplankton.

Rate-of-living theory

The rate-of-living theory postulates that the faster an organism's metabolism, the shorter its lifespan. The theory was originally created by Max Rubner in 1908 after his observation that larger animals outlived smaller ones, and that the larger animals had slower metabolisms. After its inception by Rubner, it was further expanded upon through the work of Raymond Pearl. As outlined in his book The Rate of Living, published in 1928, Pearl conducted a series of experiments in Drosophila and cantaloupe seeds that corroborated Rubner's initial observation that a slowing of metabolism increased lifespan. Further strength was given to these observations by the discovery of Kleiber's law in 1932.
Colloquially called the "mouse-to-elephant" curve, Kleiber's conclusion was that basal metabolic rate could accurately be predicted by raising body weight to the 3/4 power. This conclusion was especially noteworthy because the inverse of its scaling exponent, between 0.2 and 0.33, matched the observed scaling of lifespan with metabolic rate.

Recruitment (biology)

In biology, especially marine biology, recruitment occurs when a juvenile organism joins a population, whether by birth or immigration, usually at a stage whereby the organisms are settled and able to be detected by an observer. There are two types of recruitment: closed and open. In the study of fisheries, recruitment is "the number of fish surviving to enter the fishery or to some life history stage such as settlement or maturity".

Relative abundance distribution

In the field of ecology, the relative abundance distribution (RAD) or species abundance distribution describes the relationship between the number of species observed in a field study as a function of their observed abundance. The graphs obtained in this manner are typically fitted to a Zipf–Mandelbrot law, the exponent of which serves as an index of biodiversity in the ecosystem under study.

This page is based on a Wikipedia article written by authors (here). Text is available under the CC BY-SA 3.0 license; additional terms may apply. Images, videos and audio are available under their respective licenses.
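The ¾-power scaling discussed in the Kleiber and rate-of-living entries above can be sketched numerically. This is a minimal illustration: the coefficient a ≈ 70 kcal/day (for mass in kilograms) is Kleiber's classic fitted constant, and the species masses below are round illustrative figures, not measured values.

```python
def kleiber_bmr(mass_kg, a=70.0):
    """Basal metabolic rate (kcal/day) from Kleiber's law, BMR = a * M**(3/4).

    a = 70 is Kleiber's classic fitted constant for mass in kilograms.
    """
    return a * mass_kg ** 0.75

# The "mouse-to-elephant" comparison: total BMR rises with mass, but
# mass-specific BMR (per kilogram) falls, so larger animals have slower
# metabolisms per unit of body mass -- the basis of the rate-of-living theory.
for name, mass in [("mouse", 0.03), ("human", 70.0), ("elephant", 5000.0)]:
    bmr = kleiber_bmr(mass)
    print(f"{name:>8}: total {bmr:10.1f} kcal/day, "
          f"per kg {bmr / mass:8.1f} kcal/day/kg")
```

Because the exponent is below 1, doubling body mass raises total metabolic rate by a factor of only 2^0.75 ≈ 1.68, which is why mass-specific rate declines with size.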
https://wikidoc.org/index.php/Machine_learning
# Machine learning

## Overview

For the journal, see Machine Learning (journal). Machine learning is a discipline and artform in which computer programs learn from exposure to data. It uses algorithms to detect patterns, formulate predictions, learn from data, and make decisions. In the case of COVID-19, machine learning is used for diagnosis and for identification of the population that is at greater risk of contagion. It is also used for faster drug development, including the study of the reuse of drugs that have been proven to treat other diseases. To do this, knowledge graphs are constructed and predictive analyses are performed of the interactions between drugs and viral proteins [1] and of virus-host interactomes,[2] protein folding is predicted,[3] and the molecular and cellular dynamics of the virus are modelled, allowing the spread of a disease to be predicted from patterns, and even an upcoming zoonotic pandemic to be anticipated. To conduct AI studies using machine learning (which includes deep learning in some cases), certain algorithms are required, such as decision trees, regression for statistical and predictive analysis, generative adversarial networks, instance-based clustering, Bayesian methods, neural networks, etc. These algorithms use data science, in which various mathematical calculations are performed on information that is dense, complex and varied. For example, machine learning has been used to find antiviral molecules [4] that fight COVID-19 and to identify millions of antibodies for the treatment of secondary infections.[5] Machine learning is defined as "a type of artificial intelligence that enable computers to independently initiate and execute learning when exposed to new data"[6][7]. Machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, machine learning can be classified by:

• Inductive. Inductive machine learning methods extract rules and patterns out of massive data sets.
• Deductive.

Machine learning can also be classified by:

• Supervised machine learning, which is "used to make predictions about future instances based on a given set of labeled paired input-output training (sample) data" (italics added).[8]
• Unsupervised machine learning, which is "used to make predictions about future instances based on a given set of unlabeled paired input-output training (sample) data" (italics added).[9]

The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related to data mining and statistics, but also to theoretical computer science. Machine learning has a wide spectrum of applications including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion. A checklist has been developed for determining whether a clinical algorithm derived from machine learning should be considered clinically useful[10].

## Diagnosis

Machine learning opens up a myriad of research possibilities in various clinical fields. This involves everything from facial scanners that identify symptoms such as fever, and wearables that measure and detect cardiac or respiratory abnormalities, to chatbots that evaluate a patient who mentions symptoms and, based on the answers given, inform the person whether the next recommended action would be staying home, calling the doctor, or going to the hospital.[11]

## Risk Factors

Another type of machine learning application revolves around the prediction of infection risks, based on specific characteristics of a person, such as age, geographical location, socioeconomic level, social and hygiene habits, pre-existing conditions and human interaction, among others.
With these data, a predictive model can be established of the risk that an individual or group of people will contract COVID-19, of the factors associated with developing complications,[12] and even of the likely results of a treatment. With these types of projections, one could, in principle, predict whether a patient lives or dies.

## Treatment

The advantage of using machine learning over other standard techniques that take years is that the identification process can be completed in a matter of weeks, with a considerable cost reduction, coupled with a very high probability of success. For example, Smith and Smith [13] state that the future design of SARS-CoV-2 antiviral drugs is already in the hands of a European team using the AI-equipped IBM Summit supercomputer for work on treatments for COVID-19.

## Human interaction

Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated, since the designer of the system must specify how the data are to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method. Some statistical machine learning researchers create methods within the framework of Bayesian statistics.

## Algorithm types

Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm. Common algorithm types include:[14]

• Supervised learning — in which the algorithm generates a function that maps inputs to desired outputs.
One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate) the behavior of a function which maps a vector ${\displaystyle [X_{1},X_{2},\ldots X_{N}]\,}$ into one of several classes by looking at several input-output examples of the function.
• Support vector machine is a "supervised machine learning algorithm which learns to assign labels to objects from a set of training examples. Examples are learning to recognize fraudulent credit card activity by examining hundreds or thousands of fraudulent and non-fraudulent credit card activity, or learning to make disease diagnosis or prognosis based on automatic classification of microarray gene expression profiles drawn from hundreds or thousands of samples"[15].
• Unsupervised learning — which models a set of inputs: labeled examples are not available.
• Semi-supervised learning — which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
• Reinforcement learning — in which the algorithm learns a policy of how to act given an observation of the world. Every action has some impact in the environment, and the environment provides feedback that guides the learning algorithm.
• Transduction — similar to supervised learning, but does not explicitly construct a function: instead, tries to predict new outputs based on training inputs, training outputs, and test inputs which are available while training.
• Learning to learn — in which the algorithm learns its own inductive bias based on previous experience.

Deep learning, a type of artificial neural network method, comprises "supervised or unsupervised machine learning methods that use multiple layers of data representations generated by nonlinear transformations, instead of individual task-specific algorithms, to build and train neural network models"[16].
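The supervised classification task formulated above — learning a map from feature vectors [X1, …, XN] to class labels from labeled input-output examples — can be sketched with a minimal one-nearest-neighbour rule. This is a toy illustration with hypothetical data, not a method named in this article:

```python
import math

def nearest_neighbour_classify(training, x):
    """Predict the label of x as the label of its closest training example.

    `training` is a list of (feature_vector, class_label) pairs -- the
    labeled input-output (sample) data of the supervised learning task.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    _, label = min(training, key=lambda pair: dist(pair[0], x))
    return label

# Hypothetical labeled training data: (feature vector, class label) pairs.
training = [
    ([1.0, 1.0], "benign"),
    ([1.2, 0.8], "benign"),
    ([6.0, 5.5], "malignant"),
    ([5.8, 6.1], "malignant"),
]

print(nearest_neighbour_classify(training, [1.1, 0.9]))  # benign
print(nearest_neighbour_classify(training, [6.2, 5.8]))  # malignant
```

The learned "function" here is implicit: the classifier memorises the examples and maps any new input to the label of its nearest stored example.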
• Convolutional neural network (CNN; ConvNet), also called shift invariant or space invariant artificial neural networks (SIANN), used for visual imagery such as retinal scans[17].

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.

## Machine learning topics

This list represents the topics covered on a typical machine learning course.

• Approximate inference techniques
• Optimization (most of the methods listed above either use optimization or are instances of optimization algorithms)
• Meta-learning (ensemble methods)
• Inductive transfer and learning to learn

## See also

• List of numerical analysis software
• MLMTA: Machine Learning: Models, Technologies & Applications
• Multi-label classification
• Neural Information Processing Systems (NIPS) (conference)
• Neural network software
• Variable-order Markov models
• Variable-order Bayesian network
• Pattern recognition
• Predictive analytics
• Remote sensing technology
• WEKA: open-source machine learning framework for pattern classification, regression, and clustering

## References

1. doi:10.1186/s12864-018-4924-2.
2. doi:10.1128/mSystems.00303-18.
3. doi:10.3390/biom10020250.
4. doi:10.1016/j.imr.2020.100434.
5. doi:10.1016/j.drudis.2020.04.005.
6. Anonymous (2022), Machine learning (English). Medical Subject Headings. U.S. National Library of Medicine.
7. Liu Y, Chen PC, Krause J, Peng L (2019). "How to Read Articles That Use Machine Learning: Users' Guides to the Medical Literature". JAMA. 322 (18): 1806–1816. doi:10.1001/jama.2019.16489. PMID 31714992.
8. Anonymous (2022), Supervised Machine Learning (English).
Medical Subject Headings. U.S. National Library of Medicine.
9. Anonymous (2022), Unsupervised Machine Learning (English). Medical Subject Headings. U.S. National Library of Medicine.
10. Scott I, Carter S, Coiera E (2021). "Clinician checklist for assessing suitability of machine learning applications in healthcare". BMJ Health Care Inform. 28 (1). doi:10.1136/bmjhci-2020-100251. PMID 33547086.
11. doi:10.32604/cmc.2020.010691.
12. doi:10.32604/cmc.2020.010691.
13. doi:10.26434/chemrxiv.11871402.v4.
14. Sidey-Gibbons JAM, Sidey-Gibbons CJ (2019). "Machine learning in medicine: a practical introduction". BMC Med Res Methodol. 19 (1): 64. doi:10.1186/s12874-019-0681-4. PMC 6425557. PMID 30890124.
15. Anonymous (2022), Deep learning (English). Medical Subject Headings. U.S. National Library of Medicine.
16. Anonymous (2022), Deep learning (English). Medical Subject Headings. U.S. National Library of Medicine.
17. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A; et al. (2016). "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs". JAMA. 316 (22): 2402–2410. doi:10.1001/jama.2016.17216. PMID 27898976.

## Bibliography

• Ethem Alpaydın (2004) Introduction to Machine Learning (Adaptive Computation and Machine Learning), MIT Press, ISBN 0262012111
• Christopher M. Bishop (2007) Pattern Recognition and Machine Learning, Springer, ISBN 0-387-31073-8.
• Ryszard S. Michalski, Jaime G. Carbonell, Tom M. Mitchell (1983), Machine Learning: An Artificial Intelligence Approach, Tioga Publishing Company, ISBN 0-935382-05-4.
• Ryszard S. Michalski, Jaime G. Carbonell, Tom M.
Mitchell (1986), Machine Learning: An Artificial Intelligence Approach, Volume II, Morgan Kaufmann, ISBN 0-934613-00-1.
• Yves Kodratoff, Ryszard S. Michalski (1990), Machine Learning: An Artificial Intelligence Approach, Volume III, Morgan Kaufmann, ISBN 1-55860-119-8.
• Ryszard S. Michalski, George Tecuci (1994), Machine Learning: A Multistrategy Approach, Volume IV, Morgan Kaufmann, ISBN 1-55860-251-8.
• Bhagat, P. M. (2005). Pattern Recognition in Industry, Elsevier. ISBN 0-08-044538-1.
• Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Oxford University Press. ISBN 0-19-853864-2.
• Richard O. Duda, Peter E. Hart, David G. Stork (2001) Pattern Classification (2nd edition), Wiley, New York, ISBN 0-471-05669-3.
• Huang T.-M., Kecman V., Kopriva I. (2006), Kernel Based Algorithms for Mining Huge Data Sets: Supervised, Semi-supervised, and Unsupervised Learning, Springer-Verlag, Berlin, Heidelberg, 260 pp., 96 illus., hardcover, ISBN 3-540-31681-7.
• Kecman, Vojislav (2001), Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, The MIT Press, Cambridge, MA, 608 pp., 268 illus., ISBN 0-262-11255-8.
• MacKay, D. J. C. (2003). Information Theory, Inference, and Learning Algorithms, Cambridge University Press. ISBN 0-521-64298-1.
• Mitchell, T. (1997). Machine Learning, McGraw Hill. ISBN 0-07-042807-7.
• Ian H. Witten and Eibe Frank, "Data Mining: Practical Machine Learning Tools and Techniques", Morgan Kaufmann, ISBN 0-12-088407-0.
• Sholom Weiss and Casimir Kulikowski (1991). Computer Systems That Learn, Morgan Kaufmann. ISBN 1-55860-065-5.
• Mierswa, Ingo; Wurst, Michael; Klinkenberg, Ralf; Scholz, Martin; Euler, Timm: YALE: Rapid Prototyping for Complex Data Mining Tasks, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-06), 2006.
• Trevor Hastie, Robert Tibshirani and Jerome Friedman (2001). The Elements of Statistical Learning, Springer.
ISBN 0387952845 (companion book site).
• Vladimir Vapnik (1998). Statistical Learning Theory. Wiley-Interscience, ISBN 0471030031.
https://en.m.wikipedia.org/wiki/Periodic_travelling_wave
# Periodic travelling wave

[Figure: a periodic travelling wave.]

In mathematics, a periodic travelling wave (or wavetrain) is a periodic function of one-dimensional space that moves with constant speed. Consequently, it is a special type of spatiotemporal oscillation that is a periodic function of both space and time. Periodic travelling waves play a fundamental role in many mathematical equations, including self-oscillatory systems,[1][2] excitable systems[3] and reaction-diffusion-advection systems.[4] Equations of these types are widely used as mathematical models of biology, chemistry and physics, and many examples of phenomena resembling periodic travelling waves have been found empirically. The mathematical theory of periodic travelling waves is most fully developed for partial differential equations, but these solutions also occur in a number of other types of mathematical system, including integrodifferential equations,[5][6] integrodifference equations,[7] coupled map lattices[8] and cellular automata.[9][10] As well as being important in their own right, periodic travelling waves are significant as the one-dimensional equivalent of spiral waves and target patterns in two-dimensional space, and of scroll waves in three-dimensional space.

## History of research on periodic travelling waves

Periodic travelling waves were first studied in the 1970s. A key early research paper was that of Nancy Kopell and Lou Howard,[1] which proved several fundamental results on periodic travelling waves in reaction-diffusion equations. This was followed by significant research activity during the 1970s and early 1980s.
There was then a period of inactivity, before interest in periodic travelling waves was renewed by mathematical work on their generation,[11][12] and by their detection in ecology, in spatiotemporal data sets on cyclic populations.[13][14] Since the mid-2000s, research on periodic travelling waves has benefitted from new computational methods for studying their stability and absolute stability.[15][16]

## Families of periodic travelling waves

The existence of periodic travelling waves usually depends on the parameter values in a mathematical equation. If there is a periodic travelling wave solution, then there is typically a family of such solutions, with different wave speeds. For partial differential equations, periodic travelling waves typically occur for a continuous range of wave speeds.[1]

## Stability of periodic travelling waves

An important question is whether a periodic travelling wave is stable or unstable as a solution of the original mathematical system. For partial differential equations, it is typical that the wave family subdivides into stable and unstable parts.[1][17][18] For unstable periodic travelling waves, an important subsidiary question is whether they are absolutely or convectively unstable, meaning that there are or are not stationary growing linear modes.[19] This issue has only been resolved for a few partial differential equations.[2][15][16]

## Generation of periodic travelling waves

A number of mechanisms of periodic travelling wave generation are now well established. These include:

• Heterogeneity: spatial noise in parameter values can generate a series of bands of periodic travelling waves.[20] This is important in applications to oscillatory chemical reactions, where impurities can cause target patterns or spiral waves, which are two-dimensional generalisations of periodic travelling waves. This process provided the motivation for much of the work on periodic travelling waves in the 1970s and early 1980s.
Landscape heterogeneity has also been proposed as a cause of the periodic travelling waves seen in ecology.[21]
• Invasions, which can leave a periodic travelling wave in their wake.[11][12][22] This is important in the Taylor-Couette system in the presence of through flow,[23] in chemical systems such as the Belousov-Zhabotinsky reaction[24][25] and in predator-prey systems in ecology.[26][27]
• Domain boundaries with Dirichlet or Robin boundary conditions.[28][29][30] [Figure: waves generated by a Dirichlet boundary condition on a central hole.] This is potentially important in ecology, where Robin or Dirichlet conditions correspond to a boundary between habitat and a surrounding hostile environment. However, definitive empirical evidence on the cause of waves is hard to obtain for ecological systems.
• Migration driven by pursuit and evasion.[31] This may be significant in ecology.
• Migration between sub-populations,[32] which again has potential ecological significance.

In all of these cases, a key question is which member of the periodic travelling wave family is selected. For most mathematical systems this remains an open problem.

## Periodic travelling waves and spatiotemporal chaos

[Figure: periodic travelling waves and chaos in simulated invasion of prey by predators.]

It is common that for some parameter values, the periodic travelling waves arising from a wave generation mechanism are unstable. In such cases the solution usually evolves to spatiotemporal chaos.[11][27] Thus the solution involves a spatiotemporal transition to chaos via the periodic travelling wave.

## Lambda-omega systems and the complex Ginzburg-Landau equation

There are two particular mathematical systems that serve as prototypes for periodic travelling waves, and which have been fundamental to the development of mathematical understanding and theory.
These are the "lambda-omega" class of reaction-diffusion equations[1]

${\displaystyle {\frac {\partial u}{\partial t}}={\frac {\partial ^{2}u}{\partial x^{2}}}+\lambda (r)u-\omega (r)v}$

${\displaystyle {\frac {\partial v}{\partial t}}={\frac {\partial ^{2}v}{\partial x^{2}}}+\omega (r)u+\lambda (r)v}$

(with ${\displaystyle r=(u^{2}+v^{2})^{1/2}}$) and the complex Ginzburg-Landau equation.[2]

${\displaystyle {\frac {\partial A}{\partial t}}=A+(1+ib){\frac {\partial ^{2}A}{\partial x^{2}}}-(1+ic)|A|^{2}A}$

(A is complex-valued). Note that these systems are the same if λ(r) = 1 − r², ω(r) = −c r² and b = 0. Both systems can be simplified by rewriting the equations in terms of the amplitude (r or |A|) and the phase (arctan(v/u) or arg A). Once the equations have been rewritten in this way, it is easy to see that solutions with constant amplitude are periodic travelling waves, with the phase being a linear function of space and time. Therefore u and v, or Re(A) and Im(A), are sinusoidal functions of space and time. These exact solutions for the periodic travelling wave families enable a great deal of further analytical study. Exact conditions for the stability of the periodic travelling waves can be found,[1][2] and the condition for absolute stability can be reduced to the solution of a simple polynomial.[15][16] Also, exact solutions have been obtained for the selection problem for waves generated by invasions[22][33] and by zero Dirichlet boundary conditions.[34][35] In the latter case, for the complex Ginzburg-Landau equation, the overall solution is a stationary Nozaki-Bekki hole.[34][36] Much of the work on periodic travelling waves in the complex Ginzburg-Landau equation is in the physics literature, where they are usually known as plane waves.

## Numerical computation of periodic travelling waves and their stability

For most mathematical equations, analytical calculation of periodic travelling wave solutions is not possible, and therefore it is necessary to perform numerical computations.
For partial differential equations, denote by x and t the (one-dimensional) space and time variables, respectively. Then periodic travelling waves are functions of the travelling wave variable z = x − ct. Substituting this solution form into the partial differential equations gives a system of ordinary differential equations known as the travelling wave equations. Periodic travelling waves correspond to limit cycles of these equations, and this provides the basis for numerical computations. The standard computational approach is numerical continuation of the travelling wave equations. One first performs a continuation of a steady state to locate a Hopf bifurcation point. This is the starting point for a branch (family) of periodic travelling wave solutions, which one can follow by numerical continuation. In some (unusual) cases both end points of a branch (family) of periodic travelling wave solutions are homoclinic solutions,[37] in which case one must use an external starting point, such as a numerical solution of the partial differential equations.

Periodic travelling wave stability can also be calculated numerically, by computing the spectrum. This is made easier by the fact that the spectrum of periodic travelling wave solutions of partial differential equations consists entirely of essential spectrum.[38] Possible numerical approaches include Hill's method[39] and numerical continuation of the spectrum.[15] One advantage of the latter approach is that it can be extended to calculate boundaries in parameter space between stable and unstable waves.[40]

Software: The free, open source software package Wavetrain http://www.ma.hw.ac.uk/wavetrain is designed for the numerical study of periodic travelling waves.[41] Using numerical continuation, Wavetrain is able to calculate the form and stability of periodic travelling wave solutions of partial differential equations, and the regions of parameter space in which waves exist and in which they are stable.
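For the lambda-omega system the exact plane-wave solutions can be checked directly by machine, which makes a useful sanity test before attempting numerical continuation. The sketch below verifies, with finite differences, that u = r₀ cos(kx + ωt), v = r₀ sin(kx + ωt) with k² = λ(r₀) and ω = ω(r₀) satisfies the lambda-omega equations for the choice λ(r) = 1 − r², ω(r) = −c r²; the values of c and r₀ are illustrative.

```python
import numpy as np

# Lambda-omega system:  u_t = u_xx + lambda(r)*u - omega(r)*v
#                       v_t = v_xx + omega(r)*u + lambda(r)*v
# with lambda(r) = 1 - r**2, omega(r) = -c*r**2 (illustrative parameters).
c, r0 = 0.5, 0.8
lam = 1.0 - r0**2       # lambda(r0); amplitude is constant, so r = r0 everywhere
om = -c * r0**2         # omega(r0): temporal frequency of the wave
k = np.sqrt(lam)        # wavenumber, from the dispersion relation k**2 = lambda(r0)

u = lambda x, t: r0 * np.cos(k * x + om * t)
v = lambda x, t: r0 * np.sin(k * x + om * t)

x = np.linspace(0.0, 10.0, 401)
t, dt, dx = 0.3, 1e-4, 1e-3

# Centred finite differences for the time derivative and second space derivative.
u_t = (u(x, t + dt) - u(x, t - dt)) / (2 * dt)
v_t = (v(x, t + dt) - v(x, t - dt)) / (2 * dt)
u_xx = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
v_xx = (v(x + dx, t) - 2 * v(x, t) + v(x - dx, t)) / dx**2

# Residuals of both equations should vanish up to truncation error.
res_u = u_t - (u_xx + lam * u(x, t) - om * v(x, t))
res_v = v_t - (v_xx + om * u(x, t) + lam * v(x, t))
print(max(np.max(np.abs(res_u)), np.max(np.abs(res_v))))  # ~0 (truncation error only)
```

The phase kx + ωt is a linear function of space and time, as described above, so this solution is a periodic travelling wave moving with speed −ω/k.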
## Applications of periodic travelling waves

Examples of phenomena resembling periodic travelling waves that have been found empirically include the following.

• Many natural populations undergo multi-year cycles of abundance. In some cases these population cycles are spatially organised into a periodic travelling wave. This behaviour has been found in voles in Fennoscandia[13] and Northern UK,[14] geometrid moths in Northern Fennoscandia,[42] larch budmoths in the European Alps[21] and red grouse in Scotland.[43]
• In semi-deserts, vegetation often self-organises into spatial patterns.[44] On slopes, this typically consists of stripes of vegetation running parallel to the contours, separated by stripes of bare ground; this type of banded vegetation is sometimes known as Tiger bush. Many observational studies have reported slow movement of the stripes in the uphill direction.[45] However, in a number of other cases the data point clearly to stationary patterns,[46] and the question of movement remains controversial. The conclusion that is most consistent with available data is that some banded vegetation patterns move while others do not.[47] Patterns in the former category have the form of periodic travelling waves.
• Travelling bands occur in oscillatory and excitable chemical reactions. They were observed in the 1970s in the Belousov-Zhabotinsky reaction[48] and they formed an important motivation for the mathematical work done on periodic travelling waves at that time. More recent research has also exploited the capacity to link the experimentally observed bands with mathematical theory of periodic travelling waves via detailed modelling.[49]
• Periodic travelling waves occur in the Sun, as part of the solar cycle.[50][51] They are a consequence of the generation of the Sun's magnetic field by the solar dynamo. As such, they are related to sunspots.
• In hydrodynamics, convection patterns often involve periodic travelling waves.
Specific instances include binary fluid convection[52] and heated wire convection.[53]

- Patterns of periodic travelling wave form occur in the "printer's instability", in which the thin gap between two rotating acentric cylinders is filled with oil.[54]

## References

1. N. Kopell, L.N. Howard (1973) "Plane wave solutions to reaction-diffusion equations", Stud. Appl. Math. 52: 291-328.
2. I.S. Aranson, L. Kramer (2002) "The world of the complex Ginzburg-Landau equation", Rev. Mod. Phys. 74: 99-143. DOI:10.1103/RevModPhys.74.99
3. S. Coombes (2001) "From periodic travelling waves to travelling fronts in the spike-diffuse-spike model of dendritic waves", Math. Biosci. 170: 155-172. DOI:10.1016/S0025-5564(00)00070-5
4. J.A. Sherratt, G.J. Lord (2007) "Nonlinear dynamics and pattern bifurcations in a model for vegetation stripes in semi-arid environments", Theor. Popul. Biol. 71: 1-11. DOI:10.1016/j.tpb.2006.07.009
5. S.A. Gourley, N.F. Britton (1993) "Instability of traveling wave solutions of a population model with nonlocal effects", IMA J. Appl. Math. 51: 299-310. DOI:10.1093/imamat/51.3.299
6. P. Ashwin, M.V. Bartuccelli, T.J. Bridges, S.A. Gourley (2002) "Travelling fronts for the KPP equation with spatio-temporal delay", Z. Angew. Math. Phys. 53: 103-122. DOI:0010-2571/02/010103-20
7. M. Kot (1992) "Discrete-time travelling waves: ecological examples", J. Math. Biol. 30: 413-436. DOI:10.1007/BF00173295
8. M.D.S. Herrera, J.S. Martin (2009) "An analytical study in coupled map lattices of synchronized states and traveling waves, and of their period-doubling cascades", Chaos, Solitons & Fractals 42: 901-910. DOI:10.1016/j.chaos.2009.02.040
9. J.A. Sherratt (1996) "Periodic travelling waves in a family of deterministic cellular automata", Physica D 95: 319-335. DOI:10.1016/0167-2789(96)00070-X
10. M. Courbage (1997) "On the abundance of traveling waves in 1D infinite cellular automata", Physica D 103: 133-144. DOI:10.1016/S0167-2789(96)00256-4
11. J.A. Sherratt (1994) "Irregular wakes in reaction-diffusion waves", Physica D 70: 370-382. DOI:10.1016/0167-2789(94)90072-8
12. S.V. Petrovskii, H. Malchow (1999) "A minimal model of pattern formation in a prey-predator system", Math. Comp. Modelling 29: 49-63. DOI:10.1016/S0895-7177(99)00070-9
13. E. Ranta, V. Kaitala (1997) "Travelling waves in vole population dynamics", Nature 390: 456. DOI:10.1038/37261
14. X. Lambin, D.A. Elston, S.J. Petty, J.L. MacKinnon (1998) "Spatial asynchrony and periodic travelling waves in cyclic populations of field voles", Proc. R. Soc. Lond. B 265: 1491-1496. DOI:10.1098/rspb.1998.0462
15. J.D.M. Rademacher, B. Sandstede, A. Scheel (2007) "Computing absolute and essential spectra using continuation", Physica D 229: 166-183. DOI:10.1016/j.physd.2007.03.016
16. M.J. Smith, J.D.M. Rademacher, J.A. Sherratt (2009) "Absolute stability of wavetrains can explain spatiotemporal dynamics in reaction-diffusion systems of lambda-omega type", SIAM J. Appl. Dyn. Systems 8: 1136-1159. DOI:10.1137/090747865
17. K. Maginu (1981) "Stability of periodic travelling wave solutions with large spatial periods in reaction-diffusion systems", J. Diff. Eqns. 39: 73-99. DOI:10.1016/0022-0396(81)90084-X
18. M.J. Smith, J.A. Sherratt (2007) "The effects of unequal diffusion coefficients on periodic travelling waves in oscillatory reaction-diffusion systems", Physica D 236: 90-103. DOI:10.1016/j.physd.2007.07.013
19. B. Sandstede, A. Scheel (2000) "Absolute and convective instabilities of waves on unbounded and large bounded domains", Physica D 145: 233-277. DOI:10.1016/S0167-2789(00)00114-7
20. A.L. Kay, J.A. Sherratt (2000) "Spatial noise stabilizes periodic wave patterns in oscillatory systems on finite domains", SIAM J. Appl. Math. 61: 1013-1041. DOI:10.1137/S0036139999360696
21. D.M. Johnson, O.N. Bjornstad, A.M. Liebhold (2006) "Landscape mosaic induces travelling waves of insect outbreaks", Oecologia 148: 51-60. DOI:10.1007/s00442-005-0349-0
22. K. Nozaki, N. Bekki (1983) "Pattern selection and spatiotemporal transition to chaos in the Ginzburg-Landau equation", Phys. Rev. Lett. 51: 2171-2174. DOI:10.1103/PhysRevLett.51.2171
23. A. Tsameret, V. Steinberg (1994) "Competing states in a Couette-Taylor system with an axial flow", Phys. Rev. E 49: 4077-4086. DOI:10.1103/PhysRevE.49.4077
24. M. Ipsen, L. Kramer, P.G. Sorensen (2000) "Amplitude equations for description of chemical reaction-diffusion systems", Phys. Rep. 337: 193-235. DOI:10.1016/S0370-1573(00)00062-4
25. A.S. Mikhailov, K. Showalter (2006) "Control of waves, patterns and turbulence in chemical systems", Phys. Rep. 425: 79-194. DOI:10.1016/j.physrep.2005.11.003
26. J.A. Sherratt, M.A. Lewis, A.C. Fowler (1995) "Ecological chaos in the wake of invasion", Proc. Natl. Acad. Sci. USA 92: 2524-2528. DOI:10.1073/pnas.92.7.2524
27. S.V. Petrovskii, H. Malchow (2001) "Wave of chaos: new mechanism of pattern formation in spatio-temporal population dynamics", Theor. Pop. Biol. 59: 157-174. DOI:10.1006/tpbi.2000.1509
28. J.A. Sherratt, X. Lambin, C.J. Thomas, T.N. Sherratt (2002) "Generation of periodic waves by landscape features in cyclic predator-prey systems", Proc. R. Soc. Lond. B 269: 327-334. DOI:10.1098/rspb.2001.1890
29. M. Sieber, H. Malchow, S.V. Petrovskii (2010) "Noise-induced suppression of periodic travelling waves in oscillatory reaction-diffusion systems", Proc. R. Soc. Lond. A 466: 1903-1917. DOI:10.1098/rspa.2009.0611
30. J.A. Sherratt (2008) "A comparison of periodic travelling wave generation by Robin and Dirichlet boundary conditions in oscillatory reaction-diffusion equations", IMA J. Appl. Math. 73: 759-781. DOI:10.1093/imamat/hxn015
31. V.N. Biktashev, M.A. Tsyganov (2009) "Spontaneous traveling waves in oscillatory systems with cross diffusion", Phys. Rev. E 80: art. no. 056111. DOI:10.1103/PhysRevE.80.056111
32. M.R. Garvie, M. Golinski (2010) "Metapopulation dynamics for spatially extended predator-prey interactions", Ecological Complexity 7: 55-59. DOI:10.1016/j.ecocom.2009.05.001
33. J.A. Sherratt (1994) "On the evolution of periodic plane waves in reaction-diffusion equations of λ-ω type", SIAM J. Appl. Math. 54: 1374-1385. DOI:10.1137/S0036139993243746
34. N. Bekki, K. Nozaki (1985) "Formations of spatial patterns and holes in the generalized Ginzburg-Landau equation", Phys. Lett. A 110: 133-135. DOI:10.1016/0375-9601(85)90759-5
35. J.A. Sherratt (2003) "Periodic travelling wave selection by Dirichlet boundary conditions in oscillatory reaction-diffusion systems", SIAM J. Appl. Math. 63: 1520-1538. DOI:10.1137/S0036139902392483
36. J. Lega (2001) "Traveling hole solutions of the complex Ginzburg-Landau equation: a review", Physica D 152: 269-287. DOI:10.1016/S0167-2789(01)00174-9
37. E.J. Doedel, J.P. Kernevez (1986) "AUTO: software for continuation and bifurcation problems in ordinary differential equations", Applied Mathematics Report, California Institute of Technology, Pasadena, USA.
38. Section 3.4.2 of B. Sandstede (2002) "Stability of travelling waves". In: B. Fiedler (ed.) "Handbook of Dynamical Systems II", North-Holland, Amsterdam, pp. 983-1055. http://www.dam.brown.edu/people/sandsted/publications/survey-stability-of-waves.pdf
39. B. Deconinck, J.N. Kutz (2006) "Computing spectra of linear operators using the Floquet-Fourier-Hill method", J. Comput. Phys. 219: 296-321. DOI:10.1016/j.jcp.2006.03.020
40. J.A. Sherratt (2013) "Numerical continuation of boundaries in parameter space between stable and unstable periodic travelling wave (wavetrain) solutions of partial differential equations", Adv. Comput. Math., in press. DOI:10.1007/s10444-012-9273-0
41. J.A. Sherratt (2012) "Numerical continuation methods for studying periodic travelling wave (wavetrain) solutions of partial differential equations", Appl. Math. Computation 218: 4684-4694. DOI:10.1016/j.amc.2011.11.005
42. A.C. Nilssen, O. Tenow, H. Bylund (2007) "Waves and synchrony in Epirrita autumnata/Operophtera brumata outbreaks II. Sunspot activity cannot explain cyclic outbreaks", J. Animal Ecol. 76: 269-275. DOI:10.1111/j.1365-2656.2006.01205.x
43. R. Moss, D.A. Elston, A. Watson (2000) "Spatial asynchrony and demographic travelling waves during red grouse population cycles", Ecology 81: 981-989. DOI:10.1890/0012-9658
44. M. Rietkerk, S.C. Dekker, P.C. de Ruiter, J. van de Koppel (2004) "Self-organized patchiness and catastrophic shifts in ecosystems", Science 305: 1926-1929. DOI:10.1126/science.1101867
45. C. Valentin, J.M. d'Herbes, J. Poesen (1999) "Soil and water components of banded vegetation patterns", Catena 37: 1-24. DOI:10.1016/S0341-8162(99)00053-3
46. D.L. Dunkerley, K.J. Brown (2002) "Oblique vegetation banding in the Australian arid zone: implications for theories of pattern evolution and maintenance", J. Arid Environ. 52: 163-181. DOI:10.1006/jare.2001.0940
47. V. Deblauwe (2010) "Modulation des structures de vegetation auto-organisees en milieu aride / Self-organized vegetation pattern modulation in arid climates", PhD thesis, Universite Libre de Bruxelles.
48. N. Kopell, L.N. Howard (1973) "Horizontal bands in Belousov reaction", Science 180: 1171-1173. DOI:10.1126/science.180.4091.1171
49. G. Bordyugov, N. Fischer, H. Engel, N. Manz, O. Steinbock (2010) "Anomalous dispersion in the Belousov-Zhabotinsky reaction: experiments and modeling", Physica D 239: 766-775. DOI:10.1016/j.physd.2009.10.022
50. M.R.E. Proctor (2006) "Dynamo action and the sun". In: M. Rieutord, B. Dubrulle (eds.) Stellar Fluid Dynamics and Numerical Simulations: From the Sun to Neutron Stars, EAS Publications Series 21: 241-273. http://www.damtp.cam.ac.uk/user/mrep/solcyc/paper.pdf
51. M.R.E. Proctor, E.A. Spiegel (1991) "Waves of solar activity". In: The Sun and Cool Stars: Activity, Magnetism, Dynamos (Lecture Notes in Physics 380), pp. 117-128. DOI:10.1007/3-540-53955-7_116
52. E. Kaplan, V. Steinberg (1993) "Phase slippage, nonadiabatic effect, and dynamics of a source of traveling waves", Phys. Rev. Lett. 71: 3291-3294. DOI:10.1103/PhysRevLett.71.3291
53. L. Pastur, M.T. Westra, D. Snouck, W. van de Water, M. van Hecke, C. Storm, W. van Saarloos (2003) "Sources and holes in a one-dimensional traveling-wave convection experiment", Phys. Rev. E 67: art. no. 036305. DOI:10.1103/PhysRevE.67.036305
54. P. Habdas, M.J. Case, J.R. de Bruyn (2001) "Behavior of sink and source defects in a one-dimensional traveling finger pattern", Phys. Rev. E 63: art. no. 066305. DOI:10.1103/PhysRevE.63.066305
http://physics.stackexchange.com/questions/52144/relationship-between-mars-and-earth-rotation?answertab=active
# Relationship between Mars and Earth rotation

Is it by pure random chance that Mars and the Earth have nearly the same day duration (a Mars day is barely 40 minutes longer, only about a 3% difference), or is there some causal relationship between the two?

A good question, and one that I deliberately avoided! :-) As far as I know there are no observations to tell how the length of the day on Mars has changed. However, the main reason the day length changes on Earth is the tidal force from the Moon, and the moons of Mars are too small to raise any significant tides. Tidal forces from the Sun will have some effect, but these are a lot lower on Mars, partly because they vary as $r^{-3}$ and partly because Mars has no oceans, and much less energy is dissipated in tidal movements on land. So ... – John Rennie, Jan 25 '13 at 16:21
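The $r^{-3}$ scaling in the comment can be made quantitative with rough back-of-envelope numbers. This is an illustrative estimate, not part of the original answer; the masses and mean distances are standard values:

```python
# Back-of-envelope numbers for the tidal argument above (illustrative, not from
# the original answer). The tidal acceleration produced by a body of mass M at
# distance d scales as M / d**3.
M_sun, M_moon = 1.989e30, 7.346e22   # kg (standard values)
d_sun, d_moon = 1.496e11, 3.844e8    # m, mean distances from Earth
a_mars = 1.524 * d_sun               # Mars' mean orbital radius, 1.524 AU

lunar_over_solar = (M_moon / d_moon**3) / (M_sun / d_sun**3)
mars_over_earth = (d_sun / a_mars) ** 3   # same Sun, so only distances matter

print(f"Lunar tide / solar tide at Earth: {lunar_over_solar:.2f}")   # ~2.2
print(f"Solar tide at Mars / at Earth:    {mars_over_earth:.2f}")    # ~0.28
```

So the Moon's tidal forcing on Earth is roughly twice the Sun's, while the solar tidal forcing on Mars is only about a quarter of that on Earth, consistent with the comment's argument.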
http://livefreechallenge.org/grfu8o0/xef6-structure-and-hybridization-7f506b
# XeF6 structure and hybridization

Xenon hexafluoride (XeF6) is a noble gas compound and one of the three binary fluorides of xenon, the other two being XeF2 and XeF4. All three are exergonic and stable at normal temperatures. XeF6 is the strongest fluorinating agent of the series and a powerful oxidizing agent. It is a colorless solid that readily sublimes into intensely yellow vapors.

## Background: valence bond theory and hybridization

The valence bond theory was proposed by Heitler and London to explain the formation of the covalent bond quantitatively using quantum mechanics. Linus Pauling later improved this theory by introducing the concept of hybridization: the process in which two or more atomic orbitals of similar energy mix to form new atomic orbitals of equal energy and identical shape. Hybridization was invented to make quantum mechanical bonding theories work better with known empirical geometries. The rules for constructing hybrid orbitals are:

- Add and subtract atomic orbitals to get hybrid orbitals; the same number of orbitals comes out as goes in.
- The energy of a hybrid orbital is the weighted average of the atomic orbitals that make it up.
- The coefficients are determined by the constraints that the hybrid orbitals must be orthogonal and normalized.

A simple example is beryllium. The electronic configuration of Be in the ground state is 1s² 2s². Since there are no unpaired electrons, bond formation requires excitation: one 2s electron is promoted into an empty 2p orbital, so that in the excited state the configuration is 1s² 2s¹ 2p¹, and the 2s and 2p orbitals can then mix into sp hybrids. Similarly, during the formation of the methane molecule (CH4), the carbon atom undergoes sp³ hybridization in the excited state by mixing one 2s and three 2p orbitals to furnish four half-filled sp³ hybrid orbitals, which are oriented tetrahedrally.

## Hybridization of XeF6

The hybridization of a molecule can be estimated from the electron-pair count on the central atom:

Hybridization = ½ [valence electrons of central atom + number of monovalent atoms − charge on cation + charge on anion]

A sum of 4 corresponds to sp³ hybridization, 5 to sp³d, 6 to sp³d², and 7 to sp³d³. In xenon hexafluoride, Xe has 8 valence electrons, there are 6 monovalent fluorine atoms, and the molecule carries no charge:

½ (8 + 6) = 7

Seven electron pairs correspond to sp³d³ hybridization. According to VSEPR theory, XeF6 therefore has seven electron pairs — 6 bonding pairs and 1 lone pair — and a distorted octahedral structure, with one trans position of the octahedron occupied by the lone pair; the seven-pair electron arrangement is often described as pentagonal bipyramidal. In drawing the XeF6 Lewis structure, note that Xe is the least electronegative atom and goes at the center of the structure, and that more than 8 valence electrons must be placed on xenon.

For comparison among the xenon fluorides: XeF2 (sp³d) is linear, the repulsion between its electron groups producing a bond angle of 180°, while XeF4 (sp³d²) is square planar, so the smallest F–Xe–F bond angle is 90°. Other electron-group counts behave the same way: CO2, with two electron groups (two double bonds) around the central atom, is linear, while H2CO, with three electron groups around the central atom, is trigonal planar.

A caveat is in order: this electron counting is a bookkeeping device, not a physical description of the bonding. In modern treatments, xenon in XeF6 is not hybridised at all. Invoking populated core d-orbitals, or energetically remote d-orbitals, does not work — by the aufbau principle, the next shell's s-orbital has a lower energy than the d-orbitals being proposed for hybridisation — and this is one of the many reasons why hybridisation schemes involving d-orbitals fail for main-group elements.

## Structure and reactions

In the solid state XeF6 adopts several crystal phases; at least one of the six known phases involves bridging fluorine atoms. Representative reactions include partial hydrolysis and the reaction with silica, both of which give xenon oxytetrafluoride:

XeF6 + H2O → XeOF4 + 2HF
2XeF6 + SiO2 → 2XeOF4 + SiF4

Complete hydrolysis proceeds further to xenon trioxide (XeO3), an explosive solid that releases xenon and oxygen gas when it detonates. Related oxyfluorides follow the same electron-counting logic: in XeO2F2 xenon is sp³d hybridized (five electron domains: two Xe=O groups, two Xe–F bonds, and one lone pair), and XeOF2 likewise has a trigonal bipyramidal electron-pair arrangement.
Position is occupied by a lone pair giving a distorted octahedral shape colorless solid that readily sublimes into yellow... 2P orbital of orbitals out as we put in structure that shows is called the structure. Shows Sp3d 2XeF6+SiO2→2XeOF4+SiF4 hybridization \PageIndex { 1 } \ ): Geometry of ethene molecule Xe + -... Analysis.Improve your score by 22 % minimum while there is still time and copyrights the! Trademarks and copyrights are the property of their respective owners octahedrally coordinated antimony and four bridging fluorines is.! Other two being xef2 and XeF4 powerful fluorinating as well as an oxidizing agent with known empirical geometries the bond! Xef6 and shows Sp3d 2XeF6+SiO2→2XeOF4+SiF4 hybridization goes at the K points your tough homework and study....$ to $\mathrm { sp^3d }$ constraints that the hybrid must... Xef6 is the chemical compound xenon Difluoride with SiO2 subtract atomic orbitals that make it up Transferable &! 1 Show answers Another question on Chemistry solid that readily sublimes into intensely yellow.. In XeF6 and shows Sp3d 2XeF6+SiO2→2XeOF4+SiF4 hybridization access to the web property involve! It is a colorless solid that readily sublimes into intensely yellow vapors yellow vapors structure...
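To make the electron counting above concrete, here is a small sketch (the function name and the simple counting rule are mine, and it assumes a neutral central atom bonded only to monovalent atoms such as F) that derives the steric number, the lone-pair count, and the textbook hybridization label:

```python
def textbook_hybridization(valence_electrons, n_single_bonds, charge=0):
    # Each single bond to a monovalent atom uses one central-atom electron;
    # every remaining pair of electrons on the central atom is a lone pair.
    lone_pairs = (valence_electrons - charge - n_single_bonds) // 2
    steric_number = n_single_bonds + lone_pairs
    labels = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2", 7: "sp3d3"}
    return steric_number, lone_pairs, labels[steric_number]

# Xe has 8 valence electrons:
print(textbook_hybridization(8, 2))  # XeF2 -> (5, 3, 'sp3d')
print(textbook_hybridization(8, 4))  # XeF4 -> (6, 2, 'sp3d2')
print(textbook_hybridization(8, 6))  # XeF6 -> (7, 1, 'sp3d3')
```

This reproduces the counting for all three xenon fluorides: XeF2 has three lone pairs (linear), XeF4 has two (square planar), and XeF6 has one (distorted octahedral).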
2021-08-04 10:04:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4074362516403198, "perplexity": 6163.861924496727}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00428.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=2773281
MathSciNet bibliographic data MR2773281 (2012c:46031) 46B20 López Pérez, Ginés; Soler Arias, José A. The convex point of continuity property in Banach spaces not containing $\ell_1$. J. Math. Anal. Appl. 378 (2011), no. 2, 734–740. Article For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
2014-03-17 01:39:19
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883389472961426, "perplexity": 6560.881604031202}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678704362/warc/CC-MAIN-20140313024504-00035-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/superpotential-tensor.662040/
# Superpotential tensor?

1. Jan 2, 2013

### Worldline

What is the meaning of the superpotential tensor in teleparallel gravity? It is defined as:

2. Jan 2, 2013

### haushofer

Looking at http://cds.cern.ch/record/478192/files/0011079.pdf it seems that the divergence of this superpotential yields the conserved current due to the Bianchi identities. The divergence of this conserved current is then automatically zero. You should check "Gravity and Strings" by Ortin, section 2.2 for some extra comments on this superpotential :) So in the context of GR I would say that gct-invariance implies conservation of the energy momentum tensor, and this energy momentum tensor can be written as the divergence of the superpotential due to the Bianchi identities.

3. Jan 2, 2013

Thank u dude
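A generic way to phrase the mechanism haushofer describes (schematic, with notation of my choosing rather than the specific tensor of the linked paper): a current obtained as the divergence of an antisymmetric superpotential is conserved identically,

```latex
J^{\mu} = \partial_{\nu} S^{\mu\nu}, \qquad S^{\mu\nu} = -S^{\nu\mu}
\quad\Longrightarrow\quad
\partial_{\mu} J^{\mu} = \partial_{\mu}\partial_{\nu} S^{\mu\nu} = 0,
```

since the symmetric pair of partial derivatives is contracted against an antisymmetric tensor.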
2017-10-19 06:10:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9269963502883911, "perplexity": 1225.4470953646305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00474.warc.gz"}
http://www.physicsforums.com/showthread.php?t=589201
## A Few Good Modal Paradoxes

People most often hear about paradoxes that challenge our notions of truth and falsity, like the Liar Paradox, Curry's Paradox, Russell's Paradox, Berry's Paradox, etc. But just as interesting are the paradoxes that challenge other notions we hold dear, the ones philosophers call "modal" notions: knowledge, possibility, morality. So let me present one of the most famous ones, called Fitch's Paradox of Knowability, and if people find that interesting I can talk about a few other favorites of mine.

The question we're dealing with is: Are all true statements knowable? To put it another way, is it possible for there to be some truth which can never be known, no matter how hard you try? Here's an argument that seems to answer this question. Obviously there are some unknown true statements out there; we don't know everything, do we? For instance, either "The Riemann Hypothesis is true" or "The Riemann Hypothesis is false" is one of these statements. In any case, let P be some unknown true statement. Then consider the statement Q, which says "P is an unknown truth." Then Q is obviously a truth. Is it possible for Q to be known? Well, suppose Q were known. Then we would be able to say "I know that Q is true" or equivalently "I know that P is an unknown truth" or in other words "I know that P is true and that P is unknown." But it's impossible for that to be true, isn't it? Because if you knew that P is true, then P would be known, so it would be impossible to know that P is unknown, because P is not unknown, and you can't know a false statement! Thus it's impossible to know Q, so in other words Q is an unknowable truth.

So to review, we started with the hypothesis that P is an unknown truth and we got to the conclusion that Q is an unknowable truth. So "there exists an unknown truth" implies "there exists an unknowable truth." Turning this around, "all truths are knowable" implies "all truths are known", which is crazy! Clearly it is possible for there to be some truths which happen to be unknown right now, but might be discovered in the future. But Fitch's argument above seems to suggest that if you believe that any truth is within our grasp, you have to believe that we already know everything!

Quote: "So to review, we started with the hypothesis that P is an unknown truth .."

But even at the start, that hypothesis seems a little shaky, and rather a play on, or fluid use of, wording. How would you know it's a truth if it's unknown?

Quote by alt: "But even at the start, that hypothesis seems a little shaky, and rather a play on, or fluid use of, wording. How would you know it's a truth if it's unknown?"

Sorry, maybe I was unclear. We start with the hypothesis that there EXISTS some unknown truth P. Presumably we don't know what that truth is.

IMO; For any proposition to be true, you need a criterion for its truth, and the criterion needs to be satisfied. And it is only upon verification we say that it is satisfied. In this sense you can't have unknowable truth. "P is true, but I don't know it to be true" just doesn't make sense. "P is true" doesn't express more or less than "I know P is true". The paradox arises from abuse of language, just like any other.

Quote by lugita15: "Sorry, maybe I was unclear. We start with the hypothesis that there EXISTS some unknown truth P. Presumably we don't know what that truth is."

No, you were clear. But I'm saying that the hypothesis is nonsensical, imo. May as well start with the hypothesis that there exists a five legged tripod. It's a similar word play to say we have an unknown truth. You can't call it truth if it's unknown. To call it truth you would have to know it as being that.

Quote by disregardthat: "IMO; For any proposition to be true, you need a criterion for its truth, and the criterion needs to be satisfied. And it is only upon verification we say that it is satisfied. In this sense you can't have unknowable truth."

That is a view called verificationism, which states that all truths are knowable. The whole point of Fitch's paradox of knowability is to disprove verificationism.

Quote by disregardthat: ""P is true, but I don't know it to be true" just doesn't make sense."

Knowledge is different than belief. You may believe one thing, but find out later you were wrong. On the other hand, if you know something then by definition it must be true. A common definition of knowledge used in philosophy is justified true belief. In other words, in order to know a statement P, the following three criteria must be met:
1. You believe that P is true.
2. P is true.
3. You are justified in believing that P is true, in the sense that you cannot possibly be wrong about it.

Quote by disregardthat: ""P is true" doesn't express more or less than "I know P is true"."

These two statements are very different. To say "P is true" is the same as saying "I believe P is true", but is very different from saying "I know P is true."

Quote by disregardthat: "The paradox arises from abuse of language, just like any other."

No it doesn't, at least not in the straightforward way you're thinking.

Quote by alt: "No, you were clear. But I'm saying that the hypothesis is nonsensical, imo. May as well start with the hypothesis that there exists a five legged tripod. It's a similar word play to say we have an unknown truth. You can't call it truth if it's unknown. To call it truth you would have to know it as being that."

I think you still don't understand what I'm saying. I'm not saying that there is a particular truth which we know to be unknown. Rather, I'm saying that there EXISTS an unknown truth out there, even if we don't know what it is. Surely you agree that we don't know everything, don't you? Like we don't know whether the number of hairs on Obama's head is even or odd. Yet either "the number of Obama's hairs right now is even" or "the number of Obama's hairs right now is odd" must be true, and yet presumably no one knows which one. But one of these is surely an unknown truth, so we can at least say that there exists an unknown truth, can't we?

Quote by lugita15: "That is a view called verificationism, which states that all truths are knowable. The whole point of Fitch's paradox of knowability is to disprove verificationism."

That's ridiculous. The whole point of requiring a criterion for truth is that one rejects the notion of true statements being true simply in virtue of their meaning. So there "existing unknown truth out there" is meaningless. Propositions require a well-defined criterion for truth. Fitch's paradox doesn't disprove anything in this regard, it is just playing around with words.

Quote by lugita15: "Knowledge is different than belief. You may believe one thing, but find out later you were wrong. On the other hand, if you know something then by definition it must be true."

The point is that by asserting a proposition, you can't deny that you believe it. Saying "P is true and I believe P is false" is simply meaningless. "P is true" and "I believe P is true" has no different criterion for truth, so it's impossible to assert one of them and deny the other. Many paradoxes arise from this kind of abuse. In the same fashion, asserting that "I know P to be true and P is false" is meaningless.

Quote by lugita15: "I think you still don't understand what I'm saying. I'm not saying that there is a particular truth which we know to be unknown. Rather, I'm saying that there EXISTS an unknown truth out there, even if we don't know what it is. Surely you agree that we don't know everything, don't you?"

Well in that case, you can reduce a great many (perhaps all) things to your definition of unknown truth. The number of atoms making up your computer screen for instance. An unknown truth. The number of cells in your left ear. Same. The exact number of cents that flowed through the American economy between 9 AM and 10.29 AM today. The number of raindrops that fell on Tokyo between 1934 and 2011. All unknown truths. This however, is reduction to the ridiculous, as is your example of Obama's hairs. So if reduction to the ridiculous is your thing, then I suppose Fitch's paradox is attractive.

Quote by disregardthat: "That's ridiculous. The whole point of requiring a criterion for truth is that one rejects the notion of true statements being true simply in virtue of their meaning. So there "existing unknown truth out there" is meaningless."

Don't you think that either "The number of hairs on Obama's head is even" or "The number of hairs on Obama's head is odd" is an unknown true statement?

Quote by disregardthat: "Propositions require a well-defined criterion for truth. Fitch's paradox doesn't disprove anything in this regard, it is just playing around with words."

It's not just playing with words, at least not in the sense you're talking about, because it can be formalized symbolically using epistemic logic. See here. (That's a great article, and it has numerous proposed resolutions to Fitch's paradox. If anyone is interested I can discuss my preferred resolution.)

Quote by disregardthat: "The point is that by asserting a proposition, you can't deny that you believe it."

I agree.

Quote by disregardthat: "Saying "P is true and I believe P is false" is simply meaningless."

It's not meaningless, it's just wrong.

Quote by disregardthat: ""P is true" and "I believe P is true" has no different criterion for truth, so it's impossible to assert one of them and deny the other."

I agree, they mean the same thing, so to assert one and deny the other would be wrong.

Quote by disregardthat: "Many paradoxes arise from this kind of abuse."

As I said, Fitch's paradox does not arise from at least that kind of abuse of language, because it can be expressed in symbolic language which avoids all the ambiguities and vagaries of English.

Quote by disregardthat: "In the same fashion, asserting that "I know P to be true and P is false" is meaningless."

It's not meaningless, again it's just contradictory and hence false.

Quote by alt: "Well in that case, you can reduce a great many (perhaps all) things to your definition of unknown truth. The number of atoms making up your computer screen for instance. An unknown truth. The number of cells in your left ear. Same. The exact number of cents that flowed through the American economy between 9 AM and 10.29 AM today. The number of raindrops that fell on Tokyo between 1934 and 2011. All unknown truths."

Yes, we can find a lot of examples of unknown truths.

Quote by alt: "This however, is reduction to the ridiculous, as is your example of Obama's hairs."

I agree that these are silly examples, but there's nothing fundamentally wrong with them. They're just a way to illustrate that there are such things as unknown truths.

Quote by alt: "So if reduction to the ridiculous is your thing, then I suppose Fitch's paradox is attractive."

The reasoning in Fitch's paradox is not as ridiculous as you think. I suggest you examine Fitch's logic more closely.

Quote by lugita15: "Don't you think that either "The number of hairs on Obama's head is even" or "The number of hairs on Obama's head is odd" is an unknown true statement?"

Absolutely not. I personally believe it is a very basic misconception of logic. Let me explain: The logical conjunction "The number of hairs on Obama's head is even OR the number of hairs on Obama's head is odd" is true by virtue of being a logical tautology. There is no need for any criterion here. But either of the statements P: "The number of hairs on Obama's head is even" and Q: "The number of hairs on Obama's head is odd" requires criteria for truthfulness, such as the result of counting the hairs being even or odd. The truth of P is realized by satisfying such a criterion. It's tricky when it comes to time: If the criterion for a proposition P (which does not depend on time) is satisfied tomorrow, it doesn't make it correct to assert "P is true now" today. It would however be correct to assert "P was true yesterday" tomorrow. The statements have a different sense. So we could say "that the truth(-value) of P was unknown yesterday" tomorrow, but it wouldn't be correct to call it an unknown truth now. This form of verificationism is very much like the way we use ordinary language, and the way we treat scientific hypotheses and evidence. It is only in the platonic pits of formal logic or shaky metaphysics one ends up with such silly paradoxes.

Quote by lugita15: "It's not meaningless, again it's just contradictory and hence false."

Contradictory, meaningless, useless. All the same to me. It isn't false in the sense of failing to satisfy its criterion, because there is no criterion, none can be given.

Quote by disregardthat: "It's tricky when it comes to time: If the criterion for a proposition P (which does not depend on time) is satisfied tomorrow, it doesn't make it correct to assert "P is true now" today. It would however be correct to assert "P was true yesterday" tomorrow. The statements have a different sense. So we could say "that the truth(-value) of P was unknown yesterday" tomorrow, but it wouldn't be correct to call it an unknown truth now."

OK, forget about truths that are unknown in general. Do you at least agree that there are truths that you do not know, but perhaps that other people do know? Because even with that assumption we can carry through Fitch's paradox, and use it to disprove the statement "Any truth can be known by you."

Quote by lugita15: "OK, forget about truths that are unknown in general. Do you at least agree that there are truths that you do not know, but perhaps that other people do know? Because even with that assumption we can carry through Fitch's paradox, and use it to disprove the statement "Any truth can be known by you.""

What you are suggesting is that if P is a proposition known to be true by others, but not by me, and I realize that it is known to others and hence true (since it is supposed to be knowable by hypothesis), then upon realization (that it is known to others) I simultaneously can assert that it is unknown to me and known to me at the same time? This time you can't deny you are playing with words, or more specifically you are ignoring the temporal aspect of the situation: When I realize something I didn't know before, I am made aware that I didn't know in the past. Not that I don't know now.

Quote by disregardthat: "What you are suggesting is that if P is a proposition known to be true by others, but not by me, and I realize that it is known to others and hence true (since it is supposed to be knowable by hypothesis), then upon realization (that it is known to others) I simultaneously can assert that it is unknown to me and known to me at the same time?"

No, I'm suggesting something really obvious, namely that there is a statement P known to others and not to you, and that you do not know that P is known to others, but later you can come to know that P is true, at which point it will simply be known to you, not known and unknown at the same time. Or if you prefer, you can later come to know that P is known to others, at which point you can conclude that P is true, so P will be known to you, not simultaneously known and unknown. What I'm saying is just trivial.

Quote by disregardthat: "When I realize something I didn't know before, I am made aware that I didn't know in the past. Not that I don't know now."

You and I are in complete agreement on that point. disregardthat, do you believe there is such a thing as objective truths? Or do you think things can only be true to people? I'm having trouble understanding your objections.
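For readers who want the symbolic version lugita15 alludes to: the core of Fitch's derivation can be sketched in basic epistemic modal logic. This is the standard textbook presentation, not anything specific to this thread; $Kp$ reads "it is known (by someone, at some time) that $p$" and $\Diamond$ is possibility.

```latex
\begin{align*}
&1.\; K(p \wedge \neg Kp) \to (Kp \wedge K\neg Kp) && \text{$K$ distributes over $\wedge$}\\
&2.\; K\neg Kp \to \neg Kp && \text{factivity: only truths are known}\\
&3.\; K(p \wedge \neg Kp) \to (Kp \wedge \neg Kp) && \text{from 1 and 2: a contradiction}\\
&4.\; \neg\Diamond K(p \wedge \neg Kp) && \text{3 is impossible, necessarily}\\
&5.\; (p \wedge \neg Kp) \to \Diamond K(p \wedge \neg Kp) && \text{knowability: every truth is knowable}\\
&6.\; \neg(p \wedge \neg Kp), \text{ i.e. } p \to Kp && \text{from 4 and 5}
\end{align*}
```

Line 6 is the paradoxical conclusion: given the knowability principle, every truth is already known.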
2013-06-20 08:32:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.528323769569397, "perplexity": 606.4890750303546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005723/warc/CC-MAIN-20130516133005-00037-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.numerade.com/questions/a-a-model-for-the-shape-of-the-birds-egg-is-obtained-by-rotating-about-the-x-axis-the-region-under-t/
# (a) A model for the shape of a bird's egg is obtained by rotating about the x-axis the region under the graph of
$$f(x) = (ax^3 + bx^2 + cx + d) \sqrt{1 - x^2}$$
Use a $CAS$ to find the volume of such an egg.
(b) For a red-throated loon, $a = -0.06$, $b = 0.04$, $c = 0.1$, and $d = 0.54$. Graph $f$ and find the volume of an egg of this species.

## a) $V=\pi\left[\frac{4 a^{2}}{63}+\frac{8 a c}{35}+\frac{4 b^{2}}{35}+\frac{8 b d}{15}+\frac{4 c^{2}}{15}+\frac{4 d^{2}}{3}\right]$
b) $V \approx 1.263$

#### Topics
Applications of Integration
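Part (a) actually has a closed form without a CAS: $V = \pi \int_{-1}^{1} (ax^3+bx^2+cx+d)^2 (1-x^2)\,dx$, and the odd powers of $x$ integrate to zero on $[-1, 1]$. The snippet below (function name mine) evaluates that closed form and reproduces the part (b) value:

```python
from math import pi

def egg_volume(a, b, c, d):
    # V = pi * Integral_{-1}^{1} (a x^3 + b x^2 + c x + d)^2 (1 - x^2) dx,
    # evaluated term by term; only even powers of x contribute.
    return pi * (4*a*a/63 + 8*a*c/35 + 4*b*b/35 + 8*b*d/15 + 4*c*c/15 + 4*d*d/3)

# Red-throated loon: a = -0.06, b = 0.04, c = 0.1, d = 0.54
print(round(egg_volume(-0.06, 0.04, 0.1, 0.54), 3))  # 1.263
```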
2021-10-16 21:08:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24924665689468384, "perplexity": 1974.5087094823764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00228.warc.gz"}
https://cirq.readthedocs.io/en/stable/generated/cirq.QubitOrder.html
# cirq.QubitOrder

class cirq.QubitOrder(explicit_func: Callable[[Iterable[cirq.ops.raw_types.Qid]], Tuple[cirq.ops.raw_types.Qid, ...]])

Defines the kronecker product order of qubits.

__init__(explicit_func: Callable[[Iterable[cirq.ops.raw_types.Qid]], Tuple[cirq.ops.raw_types.Qid, ...]]) → None

Initialize self. See help(type(self)) for accurate signature.

Methods

as_qubit_order(val): Converts a value into a basis.
explicit(fixed_qubits, fallback): A basis that contains exactly the given qubits in the given order.
map(internalize, …): Transforms the Basis so that it applies to wrapped qubits.
order_for(qubits): Returns a qubit tuple ordered corresponding to the basis.
sorted_by(key): A basis that orders qubits ascending based on a key function.

Attributes

DEFAULT: A basis that orders qubits in the same way that calling sorted does.
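The methods table is easier to grasp with a toy stand-in. The sketch below is not cirq's actual implementation (class and variable names are mine); it only illustrates the core idea: a QubitOrder wraps an explicit function from an iterable of qubits to an ordered tuple, and `order_for` simply applies it.

```python
class SimpleQubitOrder:
    """Toy stand-in for cirq.QubitOrder: wraps an explicit ordering function."""

    def __init__(self, explicit_func):
        # explicit_func maps an iterable of qubits to an ordered tuple.
        self._explicit_func = explicit_func

    def order_for(self, qubits):
        # Returns a qubit tuple ordered corresponding to the basis.
        return self._explicit_func(qubits)

    @staticmethod
    def sorted_by(key):
        # A basis that orders qubits ascending based on a key function.
        return SimpleQubitOrder(lambda qs: tuple(sorted(qs, key=key)))

# A DEFAULT-like basis: orders qubits the same way that calling sorted does.
DEFAULT = SimpleQubitOrder.sorted_by(key=lambda q: q)

print(DEFAULT.order_for(["q2", "q0", "q1"]))  # ('q0', 'q1', 'q2')
```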
2019-07-23 04:23:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5667746067047119, "perplexity": 6345.277594564819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528687.63/warc/CC-MAIN-20190723022935-20190723044935-00337.warc.gz"}
https://mathoverflow.net/questions/370333/number-of-reduced-decompositions-of-the-longest-element-of-the-weyl-group
# Number of reduced decompositions of the longest element of the Weyl group

Let $$R$$ be a reduced root system, $$W$$ the associated Weyl group, and $$w_0 \in W$$ the longest element of $$W$$. In general $$w_0$$ admits more than one reduced decomposition into a product of simple reflections, a number which we denote by $$d_R$$. Where can one find a list of values of $$d_R$$ for low-dimensional root systems? For example, are the explicit values of $$d_R$$ known for the exceptional root systems?

• For types A and B(=C) there are product formulas for these numbers: see the famous paper math.mit.edu/~rstan/pubs/pubfiles/56.pdf and math.stackexchange.com/questions/2271510/…. I'm pretty sure for type D there is not a product formula (as Stanley mentions, there is a big prime, 193, in the factorization of the number of reduced words of the longest word in type $D_4$). As for the exceptionals, I don't know of a list, but this is in principle something a computer can do. – Sam Hopkins Aug 28 at 16:36
• In the linked stackexchange webpage the B-series formula in the answer gives, for low values of $n$, a product with negative limits. Is this an error or is this an example of some convention I am not familiar with? For example, what is the value for $B_2$? – Bas Winkelman Aug 28 at 17:07
• Maybe the formula Zach wrote is not quite right. For $B_2$ the answer should be 2. It is the same as the number of linear extensions of the root poset (the poset of positive roots whereby $\alpha \leq \beta$ if $\beta-\alpha$ is a nonnegative sum of simple roots). This poset is the same as the shifted trapezoid shape $(2n-1,2n-3,\dots,1)$ poset. This number also happens to be the same as the number of SYTs of $n\times n$ square shape. – Sam Hopkins Aug 28 at 17:14
• You can see Corollary 5.2 of the paper of Stanley linked above for another way of writing the product formula for the type B number of reduced decompositions (he lists it as a conjecture but it has since been proven). – Sam Hopkins Aug 28 at 17:16
• The basic thing that's going on here is that there's an Edelman-Greene style bijection in types A and B (and also the non-Weyl types $I_2(m)$ and $H_3$; sometimes these types are called the "coincidental types"). The other types don't have such a bijection. – Sam Hopkins Aug 28 at 17:18

This is easy to do in SageMath. E.g. the following code

    G = WeylGroup("F4")
    w = G.long_element_hardcoded()
    print(w)
    rw = w.reduced_words()
    len(rw)

outputs 2144892. If you want to look at some of these reduced words just examine the list rw. To create a list for classical types of different rank do

    res = {}
    for n in range(2, 5):
        G = WeylGroup(["A", n])
        w = G.long_element_hardcoded()
        print("Calculating rank ", n)
        res[n] = len(w.reduced_words())

• Note 2144892 = 2^2 x 3 x 47 x 3803, again suggesting lack of a product formula. – Sam Hopkins Aug 28 at 19:34
• I created findstat.org/StatisticsDatabase/St001585. If you have enough power to do E6 that would be great! – Martin Rubey Aug 28 at 19:44
• @MartinRubey I will try some computer in Prague's Charles University to which I have remote access. But I am pessimistic. This algorithm actually generates and stores all the reduced words, so it eats all available memory rather quickly. – Vít Tuček Aug 28 at 19:54
• Counting reduced words is the same as counting directed paths between two vertices in a certain directed graph (the weak order graph), so it's possible you could write code by hand that does this faster than what's in Sage. – Sam Hopkins Aug 28 at 19:59
• @SamHopkins, indeed, this is much faster! I added corresponding code to the findstat entry. $D_5$ is now easy. Unfortunately, FindStat has a limitation on the size of statistic values, so anything beyond $A_6$ is too large. – Martin Rubey Aug 28 at 20:20

Around the time Fomin and I wrote this paper, Tao Kai Lam applied the technique to type $$D_n$$. It emerged that it was "natural" to weight a reduced decomposition $$\rho$$ by $$2^{d(\rho)}$$, where $$d(\rho)$$ is the number of simple reflections in $$\rho$$ that correspond to the $$n-2$$ "nonbranch nodes" in the Coxeter diagram for $$D_n$$. Using this weighting, there is a nice product formula for the number of weighted reduced decompositions of the longest element, which I unfortunately have forgotten. I hope someone can redo this work.

• Is this weighted sum related to linear extensions in some way? – Sam Hopkins Aug 28 at 19:09
• @SamHopkins: I have a hazy recollection that it is a power of 2 times the number of SYT of the shifted shape $(2n-2,2n-4,\dots,4,2)$ (for which there is a nice product formula). – Richard Stanley Aug 28 at 19:41
• That would make a lot of sense! In conservancy.umn.edu/bitstream/handle/11299/159973/… Williams calls the poset you're talking about the "flattened root poset" of $D_n$. – Sam Hopkins Aug 28 at 19:44
• Actually, the result you mention might follow from this paper of Billey and Haiman: math.berkeley.edu/~mhaiman/ftp/schubert/schubert.pdf (see also the discussion on pg. 46 of the thesis of Williams linked above). – Sam Hopkins Aug 28 at 19:46
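The path-counting idea from the comments is easy to try by hand. Below is a small Python sketch (my own, not from the thread) that counts reduced words of the longest element in type $A_{n-1}$, i.e. of the reversal permutation in the symmetric group $S_n$, by memoized recursion over the weak order: every reduced word for $w$ ends in a descent $s_i$, so the counts satisfy a simple recurrence. For type A the results agree with Stanley's theorem that this count is the number of standard Young tableaux of staircase shape.

```python
from functools import lru_cache

def count_reduced_words(n):
    """Number of reduced words of the longest element of S_n (type A_{n-1}),
    counted as directed paths in the weak order from the identity to w0."""
    identity = tuple(range(1, n + 1))

    @lru_cache(maxsize=None)
    def count(perm):
        # Each reduced word for perm ends with some descent s_i;
        # undoing that transposition gives a shorter element, so recurse.
        if perm == identity:
            return 1
        total = 0
        for i in range(n - 1):
            if perm[i] > perm[i + 1]:        # s_i is a descent of perm
                shorter = list(perm)
                shorter[i], shorter[i + 1] = shorter[i + 1], shorter[i]
                total += count(tuple(shorter))
        return total

    w0 = tuple(range(n, 0, -1))              # longest element: the reversal
    return count(w0)

# Matches the SYT counts of staircase shape (n-1, n-2, ..., 1):
# count_reduced_words(3) = 2, count_reduced_words(4) = 16,
# count_reduced_words(5) = 768.
```

This stores one count per group element rather than one entry per reduced word, which is exactly why the path-counting approach scales so much better than enumerating `reduced_words()`.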
https://eprint.iacr.org/2021/447
## Cryptology ePrint Archive: Report 2021/447

An Intimate Analysis of Cuckoo Hashing with a Stash

Daniel Noble

Abstract: Cuckoo Hashing is a dictionary data structure in which a data item is stored in a small constant number of possible locations. It has the appealing property that the data structure size is a small constant times larger than the combined size of all inserted data elements. However, many applications, especially cryptographic applications and Oblivious RAM, require insertions, builds and accesses to have a negligible failure probability, which standard Cuckoo Hashing cannot simultaneously achieve. An alternative proposal introduced by Kirsch et al. is to store elements which cannot be placed in the main table in a "stash", reducing the failure probability to $O(n^{-s})$ where $n$ is the table size and $s$ is any constant stash size. This failure probability is still not negligible. Goodrich and Mitzenmacher showed that the failure probability can be made negligible in some parameter $N$ when $n = \Omega(\log^7 N)$ and $s = \Theta(\log N)$. In this paper, I will explore these analyses, as well as the insightful alternative analysis of Aumüller et al. Following this, I present a tighter analysis which shows failure probability negligible in $N$ for all $n = \omega(\log N)$ (which is asymptotically optimal), and I present explicit constants for the failure probability upper bound.

Category / Keywords: oblivious ram, oblivious hash table
Date: received 6 Apr 2021, last revised 18 May 2021
Contact author: dgnoble at cis upenn edu
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2021/447

[ Cryptology ePrint archive ]
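For readers unfamiliar with the data structure itself, here is a toy Python sketch of cuckoo hashing with a stash: two subtables, each key has one candidate slot per subtable, insertion displaces occupants for a bounded number of evictions, and an element that still cannot be placed goes into the stash. All constants, names, and the eviction policy here are my own illustrative choices, not the construction or parameters analyzed in the paper.

```python
import random

class CuckooHashWithStash:
    """Toy cuckoo hash table with two subtables and a small stash.
    Illustrative sketch only; not the paper's construction."""
    MAX_EVICTIONS = 32

    def __init__(self, n, stash_size=4):
        self.n = n
        self.tables = [[None] * n, [None] * n]
        self.stash = []
        self.stash_size = stash_size
        # Two independent (toy) hash functions, one per subtable.
        self.seeds = (random.randrange(2**32), random.randrange(2**32))

    def _pos(self, key, i):
        return hash((self.seeds[i], key)) % self.n

    def lookup(self, key):
        # A key can live in one of two table slots or in the stash.
        return any(self.tables[i][self._pos(key, i)] == key
                   for i in (0, 1)) or key in self.stash

    def insert(self, key):
        if self.lookup(key):
            return True
        cur, side = key, 0
        for _ in range(self.MAX_EVICTIONS):
            p = self._pos(cur, side)
            cur, self.tables[side][p] = self.tables[side][p], cur
            if cur is None:
                return True          # placed without further evictions
            side ^= 1                # evicted item tries its other table
        if len(self.stash) < self.stash_size:
            self.stash.append(cur)   # overflow element goes to the stash
            return True
        return False                 # build failure: stash is full
```

The paper's question is, in these terms, how large `stash_size` must be (as a function of a security parameter $N$ and the table size $n$) for the probability of reaching the final `return False` to be negligible.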
https://angularquestions.com/category/angular/
# Category: angular

## Checkbox color to be changed and checked in AngularJS data table

I am using the Angular Material data table. I have an issue with the check box in the data table. Here is my data table:

    <md-table-container>
      <table md-table md-row-select="options.rowSelection" multiple="{{options.multiSelect}}" ng-model="selectedItemsToCreate" md-progress="promise" animate-sort-icon="true">
        <thead ng-if="!options.decapitate" md-head md-order="query.order">
          <tr md-row class="md-row-height">
            <th md-column md-order-by="BDSparePartNumber"><span>Part #</span></th>
            <th md-column md-order-by="BDSPCondition"><span>Part Condition</span></th>
            <th md-column md-order-by="BDSPSerialNumber"><span>Part Serial #</span></th>
            <th md-column md-numeric><span>Line #</span></th>
            <th md-column md-numeric><span>Part Quantity</span></th>
            <th md-column md-order-by="BDID" […]

## Server response type is buffer and returns a buffer array rather than actual content

I have an AngularJS application which is deployed in a Node Express server. The API service is written in Python. When I call the API, the response I get is {type: "Buffer", data: Array(33)}. When I try the same endpoint in Angular 4 and Postman I get the correct data. Can anybody help me over here? Is this any […]

## Access Windows-authenticated Web API through Angular 2 without login prompt

I have already developed the front end in Angular 2 and the back end in ASP.NET Web APIs. I enabled Windows authentication because I want to detect the requesting user. Both applications are hosted on an IIS server (Windows Server 2012). When I load the Angular app it shows a login prompt, and when I give correct user credentials, data loading happens correctly. But I want to know a way to load them without a login prompt, authenticating automatically. This is the way […]

## RxJS: invoke two async calls, return the first that passes a condition, or a special value if none of them does

Using RxJS, I want to invoke two different asynchronous operations, which may or may not return null. They may also take indefinite time to complete. I want to achieve the following:

• If operation A returns value 1, then immediately return value 1
• If operation B returns value 2, then immediately return value 2
• If both operation A and B return null, then return null.

I suppose I can achieve the first two simply as follows: […]
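The last question above is a race between two tasks, keeping the first acceptable result. Since it is an RxJS question, a real answer would be written with RxJS operators; purely to illustrate the control flow, here is the same logic sketched with Python's asyncio (the function and helper names are my own):

```python
import asyncio

async def first_non_null(op_a, op_b):
    """Run two awaitable operations concurrently and return the first
    non-None result; return None once both complete with None.
    (asyncio sketch of the control flow, not actual RxJS code.)"""
    pending = {asyncio.ensure_future(op_a), asyncio.ensure_future(op_b)}
    try:
        while pending:
            done, pending = await asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if task.result() is not None:
                    return task.result()
        return None  # both operations returned None
    finally:
        for task in pending:
            task.cancel()  # the losing operation is no longer needed
```

The key point the question is after is the asymmetry between the two exits: any single non-null completion short-circuits immediately, while the null result is only emitted after both operations have finished.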
https://bobsegarini.wordpress.com/tag/prince-albert/
## Pat Blythe – Farewell to 2020…….the flora and fauna of Christmas…..and music

Posted in Family, life, music, Opinion, Review on December 9, 2020 by segarini

This is my final column of 2020, a year we'd all much rather forget but will be permanently etched in our psyches forever. So on that note, I've decided to end on a more upbeat, festive but somewhat educational note.

## JAIMIE VERNON – VICTORIA'S SECRET

Posted in Opinion, Review on May 21, 2016 by segarini

If you're reading this it probably means you're broke and can't afford a weekend off or you have no idea that this is Victoria Day weekend. It's a celebration of our formerly longest reigning Monarch of the British Vampire, er, Empire (June 20, 1837 until her death January 22, 1901). Queen Elizabeth recently usurped that record by tiptoeing past Victoria which opens the door for us to one day celebrate Lizzy's Day instead. Currently the weekend celebrates Vicky's birth on May 24, 1819.

## JAIMIE VERNON – STAMPS OF APPROVAL

Posted in Opinion, Review on May 14, 2016 by segarini

This week Canada Post released a series of Original Series 'Star Trek' postage stamps featuring Captain Kirk, Spock, Bones and Scotty. Kirk, of course, was played by the human punchline and proudly Canadian William Shatner and his enemy in real life James Doohan, also Canadian, was Scotty.

## JAIMIE VERNON: STAMPS & COINS & ROCKS & THINGS

Posted in Opinion on January 5, 2013 by segarini

At least once a week I get an excited phone call from my pal Rob Tyler – one-half of comedy/music duo Two For The Show – whose enthusiasm for everything is infectious. So much so that he always wants to share it.
He loved my Encyclopedias so much he introduced me to the brain-trust at Long & McQuade Musical instruments and got me a distribution deal nationally with the company. My daughter showed an interest in recording a solo song – he offered his studio and production skills to see it through. I’m grateful he accommodated her. His biggest cheerleading is for his kids – a prouder Dad you would never meet.
http://tex.stackexchange.com/questions/97448/escaping-special-latex-characters-in-user-defined-input-text
# Background

Users can provide text, which is transformed using XSL into a LaTeX document. The XSL template that transforms the XML document (containing user-defined text) currently resembles:

    <xsl:template match="user-text">
      <xsl:text>
    \item </xsl:text>
      <xsl:apply-templates />
    </xsl:template>

This is then transformed into:

    \item This is the text the user provided.

# Problem

This allows the user to submit maliciously crafted LaTeX:

    \item This is the \{latex} the user provided.

# Ideas

Some ideas:

• \begin{verbatim} and \end{verbatim} cause the text to appear without formatting (i.e., a monospace font).
• Write an XSLT function that escapes the special characters.

# Question

What is the simplest way to ensure user-defined text does not get interpreted as LaTeX code? Something like \begin{verbatim} would be perfect if it didn't change the font and prevent text from wrapping.

# Related

- If you want to render LaTeX syntax harmless in user input, it either requires balanced braces (so that you can pick up the material as an argument and pass it through \scantokens) or it requires a sequence of "stop" tokens that are known not to be part of the material, e.g., \end{verbatim} or + in \verb+foo+. Neither can be guaranteed if you have no control whatsoever over the user input; e.g., what would happen if your user writes: "This text contains \end{verbatim} and now I'm free to use \LaTeX{} code."? Then it will blow up or at least execute the second part. So if safety is important then XSLT is the way to go in my opinion.

If you can live with that danger, then consider changing \verbatim@font, which is the command that sets up the font used in verbatim and \verb. Its default definition is

    \def\verbatim@font{\normalfont\ttfamily}

So if you change this to do "nothing" then you would get the normal body font.

### Update (sorry, wrong time of the day)

Of course that doesn't solve the fact that "verbatim" does not wrap.
To fix this, what is really needed is to write your own version of a "verbatim" environment that doesn't make spaces active and doesn't add

    \obeylines \verbatim@font \@noligs \hyphenchar\font\m@ne
    \everypar \expandafter{\the\everypar \unpenalty}%

in the first place, as those are your offenders.

- @DaveJarvis no it doesn't, see update. It is 5am here and I'm up since 3 so ... – Frank Mittelbach Feb 9 '13 at 4:40
- @DaveJarvis hope it helps you ... turning into pumpkin now :-) – Frank Mittelbach Feb 9 '13 at 4:46
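A character-escaping pass, as suggested under "Ideas", is straightforward to write. Here is a sketch in Python rather than XSLT (the same mapping can be built in XSLT as a chain of string replacements, as long as the backslash is handled before the replacements that introduce new backslashes):

```python
# Mapping of LaTeX special characters to escaped forms. Escaping
# character-by-character avoids double-escaping the backslashes that
# the replacement strings themselves introduce.
LATEX_SPECIALS = {
    '\\': r'\textbackslash{}',
    '{': r'\{',
    '}': r'\}',
    '#': r'\#',
    '$': r'\$',
    '%': r'\%',
    '&': r'\&',
    '_': r'\_',
    '^': r'\textasciicircum{}',
    '~': r'\textasciitilde{}',
}

def escape_latex(text):
    """Escape user-supplied text so LaTeX renders it literally."""
    return ''.join(LATEX_SPECIALS.get(ch, ch) for ch in text)
```

With this, the malicious input from the question comes out as inert text in the normal body font, and it wraps like any other paragraph, which is exactly what the verbatim-based approaches cannot give you.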
https://www.computer.org/csdl/mags/ds/2007/02/o2001.html
Issue No. 02 - February (2007 vol. 8) ISSN: 1541-4922 pp: 1

In a previous column ( http://csdl2.computer.org/comp/mags/ds/2006/11/oy002.pdf), I gave an overview of a beginner's course on flexible algorithms that my colleagues and I developed at the Jerusalem College of Technology. Here I briefly describe this notation's main features and present fragments from this course.

Our algorithmic notation

The core algorithmic notation of flexible algorithms is based on functions with In parameters and Out parameters (but no In/Out parameters) and conditional statements. As in mathematics, variables and parameters receive a value only once. Blocks and loops (including nested forms) aren't part of the core language but are abbreviations for certain compound forms in the core language. Our notation has an iterative style and includes a once-only assignment statement. Because our notation can't update variables and parameters, it enables parallel execution.

Fragments from the course

The course notes ( http://www.rby.name) present examples of flexible algorithms and their execution by three methods that use the same sets and values form:

• parallel execution,
• sequential execution with immediate execution of function calls or activations, and
• sequential execution with delayed execution of function calls or activations.

To save space, I'll only discuss parallel execution here. Here I present some fragments from the course material.

Reversing five values

The following is a flexible algorithm for reversing five values.

    function a', b', c', d', e' •= rev5(a, b, c, d, e);
    // Specification:
    // In — a, b, c, d, e are any values.
    // Out — The values a', b', c', d', e' are like a, b, c, d, e but in reverse order.
    { a' •= e; b', c', d' •= rev3(b, c, d); e' •= a; } // end rev5

    function a', b', c' •= rev3(a, b, c);
    // Specification:
    // In — values a, b, c
    // Out — values a', b', c' like a, b, c but in reverse order
    { a' •= c; b' •= rev1(b); c' •= a; } // end rev3

    function a' •= rev1(a);
    // Specification:
    // In — value a
    // Out — value a' like a but in reverse order, that is, a' is a
    { a' •= a; } // end rev1

Suppose we wish to execute the statement a', b', c', d', e' •= rev5(1, 2, 3, 4, 5). We begin by writing this as a set of one element, namely {a', b', c', d', e' •= rev5(1, 2, 3, 4, 5)}. The steps of the parallel execution are given below.

    set of statements                                   a', b', c', d', e'
    {a', b', c', d', e' •= rev5(1, 2, 3, 4, 5)}         _, _, _, _, _
    {a' •= 5; b', c', d' •= rev3(2, 3, 4); e' •= 1}     _, _, _, _, _
    {b' •= 4; c' •= rev1(3); d' •= 2}                   5, _, _, _, 1
    {c' •= 3}                                           5, 4, _, 2, 1
    { }                                                 5, 4, 3, 2, 1

For educational reasons, we presented a specific solution for five values before giving a function for reversing part of a vector. We also presented three execution methods for reversing a five-element vector using this general solution. See the course notes ( http://www.rby.name) for details.

Avoiding errors

Two kinds of errors can occur. Syntax errors occur in the flexible algorithm's form or syntax, making executing the algorithm pointless. (In other words, you don't need to execute the algorithm to see that an error exists.) Execution (or runtime) errors occur when a flexible algorithm is executed. The following is an example of a syntax error:

    function x', y' •= bug1(x, y);
    {
      x' •= 3;      // No error here.
      x •= y + 1;   // x may not be assigned here as it is an Input variable.
      y' •= x' + 1; // May not use the value of x' here as it is an Output variable.
    }

Other syntax errors include brackets that don't balance, punctuation errors, and so forth. The following is an example of an execution error:

    function x' •= bug2(x);
    // No error in the form or syntax of the flexible algorithm.
    {
      if (x ≤ 3) x' •= x + 1;
      if (x ≥ 3) x' •= x + 2;
    }

However, if the value of x is 3, an error occurs during execution, because two assignments are made to x' (a conflict). Avoiding these errors has the benefit of encouraging students to write in a style that enables but doesn't force parallel execution.

Summing eight numbers

The statement s' •= a0 + a1 + … + a7 is equivalent to s' •= ((…(a0 + a1) + a2) + a3) + … + a7 when fully bracketed. Bracketing the statement in this way forces sequential evaluation. Rebracketing the expression in the following statement enables parallel evaluation:

    s' •= (((a0 + a1) + (a2 + a3)) + ((a4 + a5) + (a6 + a7)))   (1)

Intuitively, the calculation of s' in this way is performed as in figure 1.

Figure 1. The calculation of s' using statement (1).

In figure 1, it's useful to view each occurrence of ai as a separate variable. This view simplifies writing the calculation as a flexible algorithm in which subexpressions can be computed in parallel. So, for example, there are three variables named a0 and so forth. Here are three functions for carrying out the computation in this way:

    function s' •= add8(a0, a1, a2, a3, a4, a5, a6, a7)
    // Specification: s' = a0 + a1 + … + a7
    {
      a0 •= a0 + a1; a1 •= a2 + a3; a2 •= a4 + a5; a3 •= a6 + a7;
      s' •= add4(a0, a1, a2, a3);
    } // end add8

    function s' •= add4(a0, a1, a2, a3)
    // Specification: s' = a0 + a1 + a2 + a3
    {
      a0 •= a0 + a1; a1 •= a2 + a3;
      s' •= add2(a0, a1);
    } // end add4

    function s' •= add2(a0, a1)
    // Specification: s' = a0 + a1
    { s' •= a0 + a1; } // end add2

Here's the parallel execution of s' •= add8(1, 2, 3, 4, -1, -2, -3, -4).

    set of statements                           s'
    {s' •= add8(1, 2, 3, 4, -1, -2, -3, -4)}    _
    {s' •= add4(3, 7, -3, -7)}                  _
    {s' •= add2(10, -10)}                       _
    { }                                         0

Summing the elements of a vector

Here's how we can generalize the previous example to handle a vector of n elements, where n is a power of 2. (A simple exercise that the students later perform is generalizing this solution so that it works for vectors of any size, not just a power of 2.)
    function s' •= addn(v, n)
    // Specification: v is a vector and n is its size, which must be a power of 2.
    // The function computes s' = v0 + v1 + … + v(n-1)
    {
      if (n = 1) {s' •= v0}
      else addn (
        n •= n/2;
        v0 •= v0 + v1;
        v1 •= v2 + v3;
        . . .
        v(n/2-1) •= v(n-2) + v(n-1)
      );
    } // end addn

(You can replace the lines from v0 •= v0 + v1; to v(n/2-1) •= v(n-2) + v(n-1) with a forall loop; see the course notes http://www.rby.name for details.)

Here's the parallel execution of s' •= addn((1, 2, 3, 4, -1, -2, -3, -4), 8).

    set of statements                                 s'
    {s' •= addn((1, 2, 3, 4, -1, -2, -3, -4), 8)}     _
    {s' •= addn((3, 7, -3, -7), 4)}                   _
    {s' •= addn((10, -10), 2)}                        _
    {s' •= addn((0), 1)}                              _
    { }                                               0

Another simple exercise is to write a flexible algorithm to find a vector's minimum value using the structure of the function addn. The students need only to replace the operator + with a function min for determining the minimum of two values and then write the definition of the function min.

Merge sort

We use the structure of the function addn to develop the bottom-up merge-sort algorithm. This gives students a gentle introduction to a subtle sorting technique that enables parallel sorting of a vector. This technique sorts a vector by repeated merging. For simplicity, we assume that the vector's length is a power of 2. For example, suppose we wish to sort a vector of eight elements. We view this as eight single elements, and each single element is sorted. Here, for example, is a vector of eight elements:

    (8, 1, 7, 2, 6, 3, 5, 4)

We merge pairs of single elements to obtain

    (1,8, 2,7, 3,6, 4,5)

We now have four sorted pairs, and we merge two and two pairs to obtain

    (1,2,7,8, 3,4,5,6)

This gives us two sorted runs of four elements, which we merge to obtain a sorted vector:

    (1, 2, 3, 4, 5, 6, 7, 8)

At each stage, the sorted run in the vector doubles in size from the previous stage. The number of sorted runs in the vector is halved with respect to the previous stage (see Table 1).

Table 1.
Number and size of sorted runs.

The number of sorted runs is like the number of values n to be added in the addn function. They would both progress 8, 4, 2, 1 when there are eight values. Suppose we have a function m2 with specification as follows.

    function v' •= m2(v, size, place);
    // Specification:
    // v is a vector having two ranges which are sorted in nondecreasing order.
    // These ranges are of length "size" and at position "place" onwards in v.
    // These two ranges are merged into a single range of length "2×size"
    // and are put at position "place" onwards in v'.

Let's now use the function m2 to define the function mergesort.

    function v' •= mergesort(v)
    // Specification:
    // v, v' are vectors having the same length, which must be a power of 2.
    // v' will be like v but sorted in nondecreasing order.
    { loop(size •= 1); }

    function v' •= loop(v, size)
    // Specification:
    // v, v' are vectors having the same length, which must be a power of 2.
    // The vector v consists of consecutive sorted ranges of length "size",
    // which must be a power of 2.
    // v' will be like v but sorted.
    {
      if (size = length of v) {v' •= v}
      else loop (
        size •= size×2,
        v •= m2(v, size, 0);
        v •= m2(v, size, 2×size);
        v •= m2(v, size, 4×size);
        v •= m2(v, size, 6×size);
        . . .
        v •= m2(v, size, length of v - 2×size)
      );
    }

(You can replace the lines from v •= m2(v, size, 0); to v •= m2(v, size, length of v - 2×size) with a forall loop; see the course notes ( http://www.rby.name) for details.)

Converting flexible algorithms to hardware block diagrams

The course notes http://www.rby.name contain both a textual (hard to understand) and diagrammatic (easy to understand) development of a hardware block diagram for a serial adder, including its function definition in both textual and diagrammatic forms. (They also include an example of parallel execution in textual form.) Here, we only use the diagrammatic approach. We assume that there's a function a3b for adding three digits that produces two results: a sum digit and a carry digit.
For example, executing c', s' •= a3b(9, 3, 1) would make s' •= 3 and c' •= 1. (Here s' is the sum and c' is the carry, each being a single digit.) Figure 2 shows a diagrammatic representation of a3b.

Figure 2. A diagrammatic representation of a3b.

The function add adds two numbers consisting of several digits and an existing single-digit carry. It gives two results, a sum consisting of several digits and a carry consisting of one digit. We assume that both numbers being added and the sum produced consist of n + 1 digits, where n denotes the position of the least significant digit. The most significant digit has position zero. Figure 3a represents a function call c', s' •= add(u, v, c, n). We can write equivalent diagrams for this function (see figures 3b and c).

Figure 3. (a) A function call c', s' •= add(u, v, c, n) is equivalent to different diagrams (b-c), depending on whether n > 0 or not.

The call for a 4-digit adder is c', s' •= add(u, v, c, 3) (see figure 4a), which is equivalent to several other diagrams (see figures 4b-d) before resulting in the final diagram (see figure 4e).

Figure 4. A 4-digit adder: (a) the call, (b-d) equivalent diagrams, and (e) the final diagram.

Conclusion

These fragments of the course material are all I can present in the space available, but I hope they give you a good idea of our approach. In the future, I hope to report on using this approach in secondary schools. I also hope to develop material for advanced students, such as the conversion of a flexible algorithm into a parallel algorithm where the parallelism is explicit.

This column is based on "Flexible Algorithms: Selections from a Course for Beginners," a presentation I gave at the 4th IEEE International Conference on Information Technology: Research and Education.

R.B. Yehezkael (formerly Haskell) is an independent academic who retired from the Jerusalem College of Technology. Contact him at rafi@jct.ac.il; http://cc.jct.ac.il/~rafi.
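The pairwise reduction of addn and the bottom-up merge sort above can also be expressed in ordinary sequential Python; each pass of the loops below corresponds to one parallel step of the flexible-algorithm execution (the function names are mine, not the course's):

```python
from heapq import merge  # standard-library merge of two sorted iterables

def reduce_pairwise(v, op):
    """Pairwise reduction in the style of addn: each pass halves the
    list, and all combinations within a pass could run in parallel.
    Assumes len(v) is a power of 2."""
    while len(v) > 1:
        v = [op(v[i], v[i + 1]) for i in range(0, len(v), 2)]
    return v[0]

def bottom_up_mergesort(v):
    """Bottom-up merge sort: sorted runs double in size on each pass,
    following the progression shown in Table 1. Assumes len(v) is a
    power of 2."""
    size = 1
    while size < len(v):
        v = [x
             for place in range(0, len(v), 2 * size)
             for x in merge(v[place:place + size],
                            v[place + size:place + 2 * size])]
        size *= 2
    return v
```

Passing `min` instead of addition to `reduce_pairwise` gives the vector-minimum exercise mentioned in the article, with no other change.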
https://delphijustin.biz/category/my-projects/
## Audio over telephone

Here's a simple circuit that takes an audio signal (from ham radios, computers, or CD players) and converts it into a telephone signal. It powers the phone line with 22 VDC (it doesn't have to be 22 VDC; a different voltage means a different resistor is needed). The phones can run as high as 48 V. If you use cordless phones the converter …

## DIY cable tester

In this blog I will show you two ways to make a cable tester. The 1st way uses an Arduino (software controlled). The 2nd way uses a decade counter chip (CD4017) and a flashing LED. I am going to order the dual RJ45 jack to screw terminals from eBay. The other connectors are from parts-express. I …

## 6AL5 low voltage logic circuit

In this blog we will experiment with vacuum tube logic gates. We will be using low voltage so it is totally safe. We will use a 6AL5 twin diode tube. The circuit below shows how to use it like semiconductor diodes, but it's the very same for vacuum tubes. Simply wire positive and negative …

## Capacitance substitution board

In this blog I will show you how to make a DIP-switch capacitor substitution board. It's very simple since capacitors add up in parallel. So if you look at the schematic you will get an idea.

## Simple doorbell/alarm

This is probably the simplest doorbell or alarm circuit that is out there. The buzzer will not be constantly on, since the buzzer gets switched on by a capacitor. The circuit draws hardly anything when off, as you will see in the graph below. As the capacitor gets charged its current lowers down a lot. …

## Arduino Video poker for the TV

Here is a project I have been working on. It hooks up to the TV and allows you to play 5 different types of video poker games. The game is in black and white and some of the suits look funny. The pinouts can be found in the README.MD file. Youtube videos

## Serial Port PWM

In this blog I will show you how to generate PWM with a computer with a serial port.
This tool was originally written by subcooledheatpump from YouTube. I improved it so you can choose which COM port to use (on the command line, or by being prompted to do so). Controlling it is now better since now …

## The simplest in-circuit transistor tester

This is the simplest in-circuit transistor tester. It uses two ammeters to measure the transistor gain. It is much like the Heathkit IT-18, but uses two meters instead of one. I plan on using relays for the polarity switch instead of a 5PDT rotary switch. It uses an LM317 regulator to provide a …

## Discrete transistor op amp

In this blog I will show you how to build a high-powered discrete transistor op amp. The design is based on an allaboutcircuits.com design. The only changes are that the transistors are power transistors and Rprg is lowered to 390 ohms (half watt).

## Capacitor powered LED flashlight

A long time ago QVC sold a shakeable flashlight that had no batteries, just a shake generator and a supercapacitor. This flashlight uses a 5 VDC 1.5 A wall wart to charge the capacitors. To figure out how long it will take to charge the flashlight you use this equation $$T=5R_{currentlimit}C$$ and for total discharge time $$T=5R_{LED}C$$ …
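To make the arithmetic concrete, here is a small sketch of that five-time-constants rule in Python. The resistor and capacitor values below are assumed for illustration only; they are not taken from the actual build.

```python
# An RC circuit is considered essentially fully charged (or discharged)
# after about five time constants: T = 5 * R * C.

def five_time_constants(resistance_ohms, capacitance_farads):
    """Approximate full (dis)charge time of an RC circuit, in seconds."""
    return 5 * resistance_ohms * capacitance_farads

# Assumed example: a 10 ohm current-limit resistor charging a 1 F supercap
print(five_time_constants(10, 1.0))  # 50.0 seconds
```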
https://agaldran.github.io/post/17_daco_prac_lec_1/
# DACO - Practical Lecture 1 - Introduction to Python for Scientific Computing

NOTE: pdf slides for the first part of this lecture can be found here

This year we have decided to move from Matlab to Python for the practical sessions. Some of you may not have worked with this programming language before. This first lecture is intended to guide you through your first steps in it, and to make you aware of the (super-rich) Python ecosystem for scientific computing. I really hope that by the end of this course you will be a Python fan, and consider abandoning Matlab once and forever!

This is an overview of what you will be learning today:

1. Motivation and Goals. What is Python?
2. Python Installation. Accompanying Tools
    1. Anaconda Python Distribution
    2. Executing Python code
3. Introduction to the Python Programming Language
    1. Fundamental Python data types
    2. Everything is an object
    3. Python Modules
    4. Python Basic Operators
    5. Sequences and Object Containers
    6. Python typing
    7. Flow Control
    8. Python Functions
    9. Classes and Object-Oriented Programming
4. Complementary Python Scientific Computing Tools: Numpy
    1. Introduction to NumPy Arrays
    2. Slicing NumPy Arrays
    3. Indexing NumPy Arrays
    4. Other numpy array manipulation techniques
5. Complementary Python Scientific Computing Tools: Matplotlib
    1. Basic Matplotlib usage
    2. Plot Customization
6. Homework 😱
7. Sources and References

So… let's move on.

## 1.- Motivation and Goals. What is Python?

First thing, Python is free. Second, it is simple. Third, it is increasingly becoming the tool of choice for data science projects. Fourth, it is multi-platform: it can run on Windows, Linux, Mac, your mobile phone… And last, there is a huge community of contributors to lots of open-source projects that complement it. This manifests in the form of a large ecosystem of scientific computing tools that grows along with the number of users.
However, to add all this to your tool-belt, the first step is to familiarize yourself with the Python language itself. Today we will quickly review the main notions to get started. But first, let us install Python!

## 2. Python Installation. Accompanying tools

### 2.1 Anaconda Python Distribution

In this course we will use the Python programming language. However, as mentioned above, one of the main strengths of Python is the large number of available scientific libraries. This means that, together with the core Python language, we'll be using several Python packages to perform scientific computations. Instead of installing all these packages manually one at a time, we will use a Python distribution. A distribution consists of the core Python package and several hundred modules, available through a single download and installation.

In this course, we will use the Anaconda Python distribution. Apart from Python itself and several hundred libraries, Anaconda also includes two very useful development environments: Jupyter Notebook and Spyder. Spyder is a Matlab-like IDE (Integrated Development Environment) that we will be using for complex/long programming assignments. On the other hand, Jupyter Notebook lets you run code sequentially in your browser. I will be giving you instructions in class on how to install and use both tools.

The Anaconda Python distribution can be found here. Please install Python 3. For Windows users, if you run into trouble, let me know asap.

### 2.2 Executing Python code

Python has two different modes: interactive and standard. The interactive mode is meant for experimenting with your code one line or one expression at a time. The standard mode is useful for running programs from start to finish. You will probably find yourself alternating between both modes.

#### 2.2.1 Python Interactive Mode

For a first example of interactive mode, open a command console, type python, and you will start using Python.
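For instance, once inside the interpreter you can evaluate small expressions right away; the same statements are shown here as a runnable script:

```python
# Anything you type at the >>> prompt is evaluated immediately
print(2 + 3)       # simple arithmetic
print('ola ' * 3)  # strings support repetition
```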
Experiment a bit. A more convenient way of building code interactively is through Jupyter iPython notebooks. These also allow us to mix text, code, and maths, which is really cool. To start an iPython notebook, open a console, type jupyter notebook, and navigate in your browser to http://localhost:8888/. On Windows, you may need a log-in password, provided automatically after executing jupyter. Here you can find a series of useful tricks and shortcuts.

#### 2.2.2 Python Standard Mode

In this case, we write a text file with Python instructions, say hello_world.py, and run it. Again, there is a more convenient way of building code in standard mode, which is using an IDE. This programming tool will allow you to debug more easily, inspect variables, and quickly modify your code. IDEs typically also include an embedded interactive console. An example of an IDE is Spyder, already contained in the Anaconda distribution.

For the contents of this lecture, starting from Section 3, a notebook version of this lecture is available in a static view here. Please click the download button in the top right corner, download it, and open it!

## 3. Introduction to the Python Programming Language

### 3.1 - Fundamental Python data types

Python contains several typical built-in data types as part of the core language. The same as in e.g. Matlab (and different from e.g. C), you do not need to explicitly declare the type of a variable. Python determines the data type of a variable by how it is used. For instance, to create an integer (int) variable you simply type:

```python
# An integer variable a
a = 5
print(type(a))
```

Other basic/common Python types are for instance float, string, or boolean, exemplified below:

```python
# A float variable f
f = 5.0
# A boolean
b = True
# A string
c = 'bom dia'
```

### 3.2 - Everything is an object

It is important that you start thinking of the above examples a, f, b, c as objects. Each object in Python has three characteristics: object type, object value, and object identity.
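All three characteristics can be inspected directly. As a small sketch (id() returns the identity number discussed next):

```python
c = 'bom dia'
print(type(c))  # object type: <class 'str'>
print(c)        # object value: bom dia
print(id(c))    # object identity: an integer unique to this object
```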
Object type tells Python what kind of object it's dealing with. A type could be a number, or a string, or a list, or something else. Object value is the data value contained by the object. This could be a specific number, for example. Finally, you can think of object identity as an identity number for the object. Each distinct object in the computer's memory has its own identity number.

Most Python objects have either data or functions or both associated with them. These are known as attributes. The name of the attribute follows the name of the object, separated from it by a dot. The two types of attributes are called data attributes and methods. A data attribute is a value that is attached to a specific object. In contrast, a method is a function that is attached to an object, and typically a method performs some operation on that object. Object type always determines the kind of operations that it supports. In other words, depending on the type of the object, different methods may be available to you as a programmer.

Finally, an instance is one occurrence of an object. For example, you could have two strings. They may have different values stored in them, but they nevertheless support the same set of methods.

### 3.3 - Python Modules

Python contains built-in functions, such as print, that can be used by all Python programs. However, while the Python library contains all the core elements, such as data types and built-in functions, the bulk of the Python library consists of modules. In order for you to be able to make use of modules in your own code, you first need to import those modules using the import statement. For example, importing numpy makes specific data types and methods available to us, as we will see in Section 4. In general, Python modules are libraries of code that you import and use through import statements.
Let's go through a simple example. Import the math module. This module gives you access to the pi constant. Print its value. The module also gives you access to several mathematical operations. Find out (print) the value of the sine of pi. If you only need, e.g., the value of pi from the entire math module, you can import it selectively:

```python
from math import pi
print(pi)
```

### 3.4 - Python Basic Operators

Operators are symbols that allow you to use logic and arithmetic in your computations. Python has several operators, the most prominent ones being arithmetic, comparison, and logical.

#### 3.4.1 - Arithmetic operators

They take two variables and perform simple mathematical operations on them. They are addition +, subtraction -, multiplication *, division /, modulus %, floor division //, and exponentiation **:

```python
print(3+2, 3-2, 3*2, 3/2, 3%2, 3//2, 3**2)
```

#### 3.4.2 - Comparison operators

They compare two variables and return a boolean value. They are the usual greater than (>, >=), equal (==), different (!=), and lower than (<, <=) mathematical operators:

```python
print(3>2, 3<2, 3==2, 3!=2, 3>=2, 3<=2)
```

#### 3.4.3 - Logical operators

These operators interpret their inputs as boolean values, and return a boolean value depending on the truth value of both inputs. They are and, or, not:

```python
print(True and True, False or False, not True)
```

#### 3.4.4 - Other operators

There are other operators in Python, such as identity (is, is not) or membership (in, not in). I am 100% sure you will be able to guess their effect on variables/containers! Note this subtle difference between == and is: == tests whether objects have the same value, whereas is tests whether objects have the same identity. Try it with a=[1,2] and b=[1,2].

### 3.5 - Sequences and Object Containers

#### 3.5.1 - Python Sequences

A sequence is an ordered collection of objects. In Python, you can find three types of sequences: Lists, Tuples, and Range objects. Let us focus on the first two of them.
Lists and tuples can be accessed by indexing and can be sliced in several ways:

```python
# A tuple t
t = (0, True)
# A list l
l = [0, True]
```

Note that there is a relevant difference between tuples and lists: tuples are immutable, while lists are not. This means that you won't be able to modify the content stored in t. We will explain this in a second. Note also that both lists and tuples allow you to mix different data types.

#### Note: Mutable and immutable objects

The value of some objects can change in the course of program execution. Objects whose value can change are said to be mutable objects, whereas objects whose value is unchangeable after they've been created are called immutable.

Continuing with sequences, in order to access the elements that a tuple/list holds, you use square brackets, not parentheses:

```python
# the first element of the tuple t
t_first = t[0]
```

Also note that, the same as in e.g. C (and different from e.g. Matlab), in Python indexing starts at 0. Be careful with this, because it is a common source of confusion in the beginning. Indexing also supports negative indexes:

```python
t = (0, 1, 'hola', 3, 4.0)
l = [0, 1, 'hola', 3, 4.0]
print(t[0])
print(t[-1])  # note the behavior of negative indexes
```

Sequences also support a very handy operation called slicing, which enables access to multiple objects at the same time:

```python
print(t[0:2])    # slicing the first two elements; index 2 is excluded
print(l[2:5])    # slicing the last three elements
print(l[2:])     # an empty spot after : means up to the end of the list
print(l[:2])     # an empty spot before : means from the first index
print(l[0:5:2])  # every element, but using a step size of 2
```

Tuples and lists, like every Python object, have several methods that you can use with them. However, since lists are mutable, they have methods that can modify their content:

```python
a = [4, 3, 2, 1]
a.append(5)
print(a)
a.sort()
print(a)
```

Note that list methods are in-place methods: they modify the original object that called them and return nothing.
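That last point is worth verifying yourself: sort() reorders the list in place, and its return value is None, so do not assign it back to a name expecting the sorted list:

```python
a = [4, 3, 2, 1]
result = a.sort()  # sorts a in place
print(a)           # [1, 2, 3, 4]
print(result)      # None -- in-place methods return nothing
```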
For a complete list of these, see here. There are other generic Python functions that work with sequences and provide useful operations:

```python
a = [4, 3, 2, 1]
b = sorted(a)
n = len(b)
s = sum(a)
print(b, n, s)
```

#### 3.5.2 - Object Containers: Dictionaries

Sequences are a particularly simple example of object containers. More generally, Python offers several more advanced object containers. A really useful one is the dictionary. Dictionaries are unordered containers that associate key objects to value objects. This means that a dictionary consists of Key:Value pairs; the keys must be immutable, while the values can be anything. Note that dictionaries themselves are mutable objects. A dictionary is built with curly braces as follows:

```python
ages = {"Me": 33, "You": 22, "Him": 24}
```

If I now want to know your age, then increase it, and finally add a new entry, I would type:

```python
print(ages["You"])
ages["You"] += 1
print(ages["You"])
ages["Her"] = 20
print(ages)
```

Dictionary objects also have their own methods. For instance, you can use the keys method to find out what all the keys in the dictionary are, or the values method to retrieve all of the values in the dictionary:

```python
names = ages.keys()
years = ages.values()
```

### 3.6 - Python typing

Objects in Python are composed of their name, their content, and a reference that points the name to the content. When an object is created, Python tells the computer to reserve memory locations to store the corresponding data. Depending on the data type of a variable, the interpreter allocates a certain amount of memory. As we have seen above, in Python there is no need to declare objects or their type before using them. This is because what you are actually doing is not creating a spot in memory and filling it with an object. Rather, you are creating a pointer (which occupies a first memory spot), and then making that pointer point to an object in a second memory spot.
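You can actually watch a name being re-pointed using the built-in id() function, which returns the identity of the object a name currently points to:

```python
a = [0, 1, 2]
id_before = id(a)  # identity of the list object
a = (-1, 3)        # rebind the name 'a' to a different object
id_after = id(a)
print(id_before != id_after)  # True: the name now points elsewhere
```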
For this reason, you can for instance reassign a variable to a different type of object without errors:

```python
a = [0, 1, 2]
a = (-1, 3)
a = True
```

Note that when you assign a variable to another, you are just creating a second pointer to that same memory spot:

```python
a = [0, 1, 2]
b = a
b[2] = 0
print(a)
```

### 3.7 Flow Control

Typical flow control structures are implemented as usual in Python. Be careful with Python code indentation: the space you leave to the left of your piece of code implicitly delimits code blocks. We will learn the behavior of flow control statements in Python by example, to reinforce the idea of how intuitive Python is.

#### 3.7.1 - if-else statements

Observe the following code:

```python
if 3 > 2:
    print('success')
elif 3 == 2:
    print('failure')
else:
    print('I do not know')
print('This will be printed either way')
```

Do not forget the colon at the end of control statements!

#### 3.7.2 - for loops

Observe the following code and try to predict its output:

```python
for i in [0, 1, 2, 3]:
    print(i)
```

#### 3.7.3 - while loops

Observe the following code and try to predict its output:

```python
i = 0
while i < 4:
    print(i)
    i = i + 1
```

#### 3.7.4 - Other statements: break and continue

Observe the following two pieces of code and try to predict their output:

```python
for i in [1, 2, 3, 4, 5, 6, 7]:
    if i % 3 == 0:
        continue
    print(i)
```

```python
for i in [1, 2, 3, 4, 5, 6, 7]:
    if i % 3 == 0:
        break
    print(i)
```

Can you give a definition of both statements based on these experiments?

### 3.8 Python Functions

Being only able to interactively "play" with variables is boring. To build more complex code, we need functions. Functions are tools for grouping statements so that they can be executed more than once in the same program. They are useful to maximize code reuse and minimize code redundancy, therefore helping to avoid errors. Functions are written using the def statement. You can send objects created inside your function back to where it was called with the return statement.
```python
def compute_sum(a, b):
    c = a + b
    return c
```

To use this function, we simply call it passing appropriate parameters:

```python
a = 5
b = -2
print(compute_sum(a, b))
```

Arguments to Python functions are matched by position. Tuples are typically used to return multiple values. Note that functions are themselves objects:

```python
print(type(compute_sum))
```

In general, variables created or assigned in a function are local to that function and exist only while the function runs:

```python
L = [0, 1, 2]

def modify(my_list):
    c = 3
    my_list[0] += 20

modify(L)
print(L)  # the mutable list was modified in place
print(c)  # raises a NameError: c was local to the function
```

It is also possible to specify a default value for some argument:

```python
def compute_sum(a, b=2):
    c = a + b
    return c
```

Likewise, you can have keyword arguments. A keyword argument is an argument which is supplied to the function by explicitly naming the parameter and specifying its value:

```python
print(1, 2, 3, 4, sep=';')
```

Keyword arguments must always go after non-keyword arguments.

### 3.9 Classes and Object-Oriented Programming

How can you go beyond built-in data types and create new object types, with their associated methods and attributes defined by you? Python allows you to create new classes, and then define (instantiate) new objects of that class and interact with them. This way, you can group data and the functions operating on it in a more abstract way, and then instantiate concrete samples and use them. Classes allow for a simplified modeling of our problems, and enable the creation of cleaner code that will be more easily extended in the future. When dealing with classes, data items are usually called attributes, and functions methods.

#### 3.9.1 - Building a new Class from scratch

Every class needs to have a special method, called the constructor, that initializes its attributes.
```python
class UP_student:
    def __init__(self, name, math_skills, coding_skills, hard_working,
                 theory_mark, practical_mark):
        self.name = name
        self.math_skills = math_skills
        self.coding_skills = coding_skills
        self.hard_working = hard_working
        self.theory_mark = theory_mark
        self.practical_mark = practical_mark
```

You will note the presence of the self parameter: this is a special inner reference to the object's own state. It may take some time to understand the use of self, but do not be afraid, we will see some examples afterwards.

As it stands, an object of the UP_student class has very limited value, as it contains only data (attributes). Let us add some spice by giving a class a function (method). As a small example, here is a class whose value method returns the height at time t of a ball thrown upwards with initial velocity v0:

```python
class Y:
    def __init__(self, v0):
        self.v0 = v0
        self.g = 9.81

    def value(self, t):
        return self.v0 * t - 0.5*self.g*t**2
```

The utility of self starts to become clear now. At this point, you have created a useful class, and you can instantiate an object of this new type easily:

```python
name = 'adrian_galdran'
adrian_student = UP_student(name, 7, 8, True, 15, 17)  # example values
print(type(adrian_student))
```

As you can see, we call our class as if it were a normal Python function, and Python automatically invokes the constructor method. __init__ requires several parameters to be specified at instantiation time. If you do not specify them correctly, you will get an error. Now, attributes are exposed to the user:

```python
print(adrian_student.coding_skills)
```

How can you add new methods to your class? For instance, we can add a print_global_mark method that computes and prints the final mark automatically.
This method only needs the input parameter theory_weight, and prints a message:

```python
class UP_student:
    def __init__(self, name, math_skills, coding_skills, hard_working,
                 theory_mark=5, practical_mark=4):
        self.name = name
        self.math_skills = math_skills
        self.coding_skills = coding_skills
        self.hard_working = hard_working
        self.theory_mark = theory_mark
        self.practical_mark = practical_mark

    def compute_global_mark(self, theory_weight=0.6):
        return theory_weight*self.theory_mark + (1-theory_weight)*self.practical_mark

    def print_global_mark(self, theory_weight=0.6):
        global_mark = self.compute_global_mark(theory_weight)
        print('The final mark of ', self.name, ' is ', global_mark)
```

Note that even if the print_global_mark method only needs the theory_weight argument, we still must add the self parameter so that it can call self.compute_global_mark; self is omitted in the method call itself. Notice also that inside the class, compute_global_mark is reached through self, without passing self explicitly:

```python
name = 'adrian_galdran'
adrian_student = UP_student(name, 7, 8, True)  # example values; the marks take their defaults
adrian_student.print_global_mark()
print('Global Mark: ', adrian_student.compute_global_mark(0.2))  # let us give more weight to the practical part!
```

We know an object consists of both internal data and methods that perform operations on the data. At some point you may find that existing object types do not fully suit your needs. Classes are the tool that allows you to create new types of objects.

#### 3.9.2 - Class Inheritance

Sometimes, even if you need a new type, it may happen that this new object type resembles, in some way, an existing one. Classes have the ability to inherit from other classes, and this is a fundamental aspect of OOP. Let us see an example of how to build a new class, inheriting from the built-in Python list class. We will add more functionality to it. The definition

```python
class MyList(list):
    ...
```

ensures that our new class, derived from list, will inherit the attributes of the base class. However, now we can extend, or redefine, those attributes!
For instance, we are going to build on the built-in remove method, implemented by Python for lists in this way:

```python
L = [0, 1, 2, 5, 5]
L.remove(5)
```

We will add new methods so we can also remove the maximum and minimum elements of a list. For this, we complete the definition of our extended class as follows:

```python
class MyList(list):
    def remove_min(self):
        self.remove(min(self))
    def remove_max(self):
        self.remove(max(self))
```

Now we can make use of our class:

```python
L2 = MyList(L)
dir(L)   # the attributes of a plain list
dir(L2)  # MyList has the same attributes, plus remove_min and remove_max
L2.remove_min()
print(L2)
```

## 4. Complementary Python Scientific Computing Tools: Numpy

As mentioned in the introduction, one of the most important strengths of Python is the large ecosystem of available tools. One of the most important libraries for scientific computing in general (and for this course in particular) is NumPy, which is designed to perform matrix computations. Here you will learn the fundamental concepts related to NumPy.

### 4.1 - Introduction to NumPy Arrays

Python allows you to create nested lists, which you could use to work with n-dimensional arrays:

```python
zero_matrix = [[0,0,0],[0,0,0]]
print(len(zero_matrix), len(zero_matrix[0]))
```

However, you can see this is quite inconvenient, and the complexity will grow a lot in high dimensions. Also, Python lists are not designed for linear algebra. For instance, + acting on lists means concatenation:

```python
print([1,2] + [0,0])
```

NumPy arrays are n-dimensional array objects which form the core component of scientific computing in Python. They are an additional data type provided by NumPy for representing vectors and matrices. Unlike Python lists, the elements of a NumPy array are all of the same data type, and its size is fixed once defined. By default, the elements are floating point numbers. This is a first example of how to build a vector and a matrix with all elements zero:

```python
import numpy as np
zero_vector = np.zeros(4)
zero_matrix = np.zeros((2,3))
```

Note that in the matrix case, we need to specify the dimensions through a tuple.
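A quick check of the claims above: the elements are floating point by default, and the tuple we passed fixes the shape. (The dtype attribute used here is standard NumPy, although it is not formally introduced in this lecture.)

```python
import numpy as np

zero_vector = np.zeros(4)
zero_matrix = np.zeros((2, 3))

print(zero_vector)        # [0. 0. 0. 0.] -- the trailing dots mark floats
print(zero_matrix.dtype)  # float64, the default element type
```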
In order to build an array of ones, you can use the numpy.ones function, with the same syntax:

```python
ones_matrix = np.ones((3,2))
```

Finally, you can initialize an array manually using np.array and a Python list:

```python
my_matrix = np.array([[2,1],[3,2],[5,4]])
```

NumPy supports the usual standard matrix operations, such as matrix transposition:

```python
my_transposed_matrix = my_matrix.transpose()
```

Arithmetic operations also work as expected:

```python
my_matrix + ones_matrix
```

Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. You need to use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:

```python
x = np.array([[1,2],[3,4]])  # 2x2 matrix
y = np.array([[5,6],[7,8]])  # 2x2 matrix
v = np.array([9,10])         # vector of length 2
w = np.array([11, 12])       # vector of length 2

# Matrix / vector product; both produce a vector of length 2
print(x.dot(v))
print(np.dot(x, v))

# Matrix / matrix product; both produce a 2x2 matrix
# [[19 22]
#  [43 50]]
print(x.dot(y))
print(np.dot(x, y))
```

Finally, you can find out the dimensions of a given numpy array through its shape data attribute:

```python
print(my_matrix.shape)
```

### 4.2 - Slicing NumPy Arrays

The same way you can slice Python lists, you can slice numpy arrays. Remember the indexing logic: the start index is included but the stop index is not, so Python stops just before reaching it.

```python
my_first_column = my_matrix[:,0]
my_last_row = my_matrix[-1,:]
print(my_first_column.shape)
```

### 4.3 - Indexing NumPy Arrays

NumPy arrays can also be indexed with other arrays or other sequence-like objects such as lists.
For example:

```python
z1 = np.array([2,4,6,8,10])
z2 = z1 + 1
indexes_arr = np.array([0,1])
indexes_list = [0, 4]
print(z2, z2[indexes_arr], z2[indexes_list])
```

Another way of indexing numpy arrays is with logical indices:

```python
indexes_logical = [True, True, True, False, False]
print(z2[indexes_logical])
```

Note the potential of this operation!

```python
print(z2 > 5)
print(z2[z2 > 5])
```

#### Important difference between slicing and indexing

When you slice an array with the colon operator, you obtain a view of the object. This means that if you modify that view, you will also modify the original array. In contrast, when you index an array, what you obtain is a new object, a copy independent of the original one.

```python
z1 = np.array([2,4,6,8,10])
w_view = z1[0:3]  # sliced z1
print(z1)
print(w_view)
w_view[0] = 50
print(z1)
```

Compare the above code snippet with this one:

```python
z1 = np.array([2,4,6,8,10])
indexes = [0,1,2,3,4]
w_copy = z1[indexes]  # indexed z1
print(z1)
print(w_copy)
w_copy[0] = 50
print(z1)
```

### 4.4 - Other numpy array manipulation techniques

In numpy, if you want to build an array with fixed start and end values, with the other elements uniformly spaced between them, you can do the following:

```python
np.linspace(0, 100, 10)  # the stop point is included
```

We have already seen an example of the shape attribute of a numpy array. You can also check its total size:

```python
my_matrix = np.array([[2,1],[3,2],[5,4]])
print(my_matrix.shape)
print(my_matrix.size)
```

Notice that neither shape nor size is followed by parentheses. This is because they are not methods of the numpy array class, but data attributes. There are a couple of handy logical operations that work on top of numpy arrays. For instance, you will often want to check whether any/all of the elements in an array verify a given condition.
This can be accomplished with the functions any() and all():

```python
x = np.random.random(10)
print(x)
print(np.any(x > 0.5))
print(np.all(x > 0.5))
```

In this case, instead of using the Python random library, we used the numpy.random module.

## 5. Complementary Python Scientific Computing Tools: Matplotlib

Matplotlib is the standard Python plotting library. Matplotlib is a very large library, but it contains a module called pyplot. Pyplot is a collection of functions that make matplotlib work in a way similar to Matlab. In this course, we will use pyplot for our data visualizations.

### 5.1 - Basic Matplotlib usage

Let us import it:

```python
import matplotlib.pyplot as plt
```

The most basic function inside pyplot is plot. Its simplest use case takes only one argument, specifying the y-axis values that are to be plotted. In this case, each y-axis value is plotted against its corresponding index value on the x-axis:

```python
y = np.random.random(10)
plt.plot(y)
```

If you use plot outside the iPython shell, the plot is created but not shown. To tell Python to show the plot, you just need to add plt.show() to your code. When you give plot two arguments, the first argument specifies the x-coordinates and the second the y-coordinates:

```python
x = np.linspace(0, 10, 100)
y = np.cos(x)
plt.plot(x, y)
```

You can also supply a third argument to plot in order to give some cosmetic specifications for your plot, like color, line type or marker; further styling works with keyword arguments:

```python
x = np.linspace(0, 10, 20)
y = np.cos(x)
plt.plot(x, y, 'ro-')
plt.show()
plt.plot(x, y, 'gs-', linewidth=5, markersize=15)
```

Note that in this case plt.show() forces Python to show the first plot, which would otherwise be omitted.

### 5.2 - Plot Customization

Let us see some more advanced plot customization techniques.
To add a legend to an already created (even if not yet shown) plot, you can use legend(), which can take a list of label strings as an argument:

```python
x = np.linspace(-3, 3, 20)
y = x**2
plt.plot(x, y, 'ro-')
plt.legend(['square'])
```

If you want to add information about which quantity is on each axis, you can do it as follows:

```python
x = np.linspace(-3, 3, 20)
y = x**2
plt.plot(x, y, 'ro-')
plt.xlabel('The x axis')
plt.ylabel('The y axis')
```

You can also customize which part of your plot you want to display with axis():

```python
x = np.linspace(-3, 3, 20)
y = x**2
plt.plot(x, y, 'ro-')
plt.axis([-0.5, 2, -2, 4])  # xmin, xmax, ymin, ymax
```

It is also quite easy to overlay several plots:

```python
x = np.linspace(-3, 3, 20)
y1 = x**2
y2 = x**3
plt.plot(x, y1, 'ro-')
plt.plot(x, y2, 'b+-')
```

If you need an independent legend entry for each of these, you label each of them separately:

```python
x = np.linspace(-3, 3, 20)
y1 = x**2
y2 = x**3
plt.plot(x, y1, 'ro-', label='square')
plt.plot(x, y2, 'b+-', label='cubic')
plt.legend(loc='upper left')
```

Note that legend() can take a keyword argument specifying the location. Finally, to save your figure, you simply use savefig. The extension of the file name you choose determines the format of the output:

```python
x = np.linspace(-3, 3, 20)
y1 = x**2
plt.plot(x, y1, 'ro-', label='square')
plt.savefig('my_plot.png')
```

Finally, let us illustrate the use of the histogram plotting tools in matplotlib, as well as how to build several subplots in the same figure. First, we create a normally distributed array of numbers around zero using numpy:

```python
x = np.random.normal(size=1000)
```

To build a histogram plot in pyplot we type:

```python
plt.hist(x)
```

Now, if you want to plot the same histogram with two different colors in two different subplots, you can use the plt.subplot() function. This function takes three arguments: the first two specify the number of rows and columns of the subplot grid, and the third one is the plot number.
    plt.subplot(1, 2, 1)
    plt.hist(x, color='r');
    plt.subplot(1, 2, 2)
    plt.hist(x, color='b');

Note the use of a semicolon after each plot call, in order to avoid printing the value returned by matplotlib.

## 6. Homework

For now, you can access a notebook with an exercise on Python classes here. I will be adding another problem in the coming days.

## 7. Sources and References

Of course, there are tons of wonderful Python resources on the internet. The main sources I used to build this lecture were:

1. Jake VanderPlas' book A Whirlwind Tour of Python, available here in pdf, and here in iPython notebook format.
2. Harvard's online course Using Python for Research, hosted at edX.
3. Stanford's CS231n short Python tutorial here.

In general, most of the material and presentation is shamelessly inspired by 2.

Regarding the numpy part, if you are a Matlab user, you could find this resource very useful:

- Numpy for Matlab users - link

If you need or want more practice with Python and the tools presented today, I would recommend following the free course at DataCamp, and doing all the exercises proposed there. Codecademy exercises, hosted here, can also be very useful to get more experience.
https://docs.clamav.net/manual/Signatures/EncryptedArchives.html
# Passwords for archive files [experimental] ClamAV allows users to specify password attempts for certain password-compatible archives. Passwords will be attempted in order of appearance in the password signature file, which uses the extension .pwdb. If no passwords apply or none are provided, ClamAV will default to the original behavior of parsing the file. Currently, as of ClamAV 0.99 [flevel 81], only .zip archives using the traditional PKWARE encryption are supported. The signature format is: SignatureName;TargetDescriptionBlock;PWStorageType;Password where: • SignatureName: name to be displayed during debug when a password is successful • TargetDescriptionBlock: provides information about the engine and target file with comma-separated Arg:Val pairs • Engine:X-Y: required engine functionality level. See the FLEVEL reference for details. • Container:CL_TYPE_*: file type of applicable containers • PWStorageType: determines how the password field is parsed • 0 = cleartext • 1 = hex • Password: value used in the password attempt The signatures for password attempts are stored inside .pwdb files.
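As a sketch, a .pwdb entry following the format above might look like the line below. The signature name and password are hypothetical; `Engine:81-255` requires at least functionality level 81 (the level this feature was introduced at), and `Container:CL_TYPE_ZIP` restricts the attempt to zip archives, with storage type `0` indicating a cleartext password:

```
EncZip.TestPassword;Engine:81-255,Container:CL_TYPE_ZIP;0;infected
```

A hex-stored password would use storage type `1` instead, with the password field hex-encoded.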
https://tex.stackexchange.com/questions/339699/how-to-get-more-subfigures-than-26
# How to get more subfigures than 26? [duplicate]

Code where having 6 subfigure environments per iteration causes the error, but fewer does not, because I run out of subfigure labels a), b), ..., and I need more:

    \documentclass{article}
    \usepackage[demo]{graphicx}
    \usepackage{subcaption}
    \usepackage{pgffor}
    \begin{document}
    \begin{figure}
    \foreach \ii in {1,...,5}{
    \centering% not \center!
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.bland.male.5}.pdf}
    \caption{\#\ii, ite. 1.}
    \end{subfigure}
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.blandmale.6}.pdf}
    \caption{\#\ii, ite. 2.}
    \end{subfigure}
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.bland.male.7}.pdf}
    \caption{\#\ii, ite. 3.}
    \end{subfigure}
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.bland.8}.pdf}
    \caption{\#\ii, ite. 4.}
    \end{subfigure}
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.bland.9}.pdf}
    \caption{\#\ii, ite. 5.}
    \end{subfigure}
    \begin{subfigure}{0.19\textwidth}
    \centering
    \includegraphics[scale=0.11, page=\ii]{{Rplots.bland.10}.pdf}
    \caption{\#\ii, ite. 6.}
    \end{subfigure}
    }
    \end{figure}
    \end{document}

Error:

    ! LaTeX Error: Counter too large.

    See the LaTeX manual or LaTeX Companion for explanation.
    Type  H <return>  for immediate help.
     ...
    l.1492 }
    ?

TeXLive: 2016
OS: Debian 8.5

- Does the error message also state which counter is too large? – Mico Nov 17 '16 at 15:49
- Do you have 5 pages in all the pdfs? Also, you need \end{figure} but I don't think that is the problem. – StefanH Nov 17 '16 at 15:55
- surely you could have provided a proper example? – David Carlisle Nov 17 '16 at 16:12
- Your subfigs are numbered a, b, c, ... After 26 pictures you run out of letters. That's what the error message tells you.
– Pieter van Oostrum Nov 17 '16 at 16:15

Package alphalph provides some ways to continue numbering with letters, if the number of letters is exhausted, e.g.:

    \usepackage{alphalph}
    \renewcommand*{\thesubfigure}{\alphalph{\value{subfigure}}}

Full example:

    \documentclass{article}
    \usepackage{graphicx}
    \usepackage{subcaption}
    \usepackage{pgffor}
    \usepackage{alphalph}
    \renewcommand*{\thesubfigure}{\alphalph{\value{subfigure}}}
    \begin{document}
    \newcommand*{\img}{%
      \includegraphics[
        width=\linewidth,
        height=20pt,
        keepaspectratio=false,
      ]{example-image-a}%
    }
    \begin{figure}
    \foreach \ii in {1,...,5}{%
      \centering
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 1.}
      \end{subfigure}
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 2.}
      \end{subfigure}
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 3.}
      \end{subfigure}
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 4.}
      \end{subfigure}
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 5.}
      \end{subfigure}
      \begin{subfigure}{0.19\textwidth}
        \centering
        \img
        \caption{\#\ii, ite. 6.}
      \end{subfigure}
    }%
    \lastlinefit=1000 % same inter-image spaces in last line as in previous lines
    \end{figure}
    \end{document}

That error is given if you use \alph or \Alph and have a value more than 26. If you need values bigger than that you need a different display function.

- How can you have more labels in subfigure? - The 26 is too few for me. Now it is like a), b), ..., n) but I really need more those subfigures. – Léo Léopold Hertz 준영 Nov 17 '16 at 16:23
- @masi there are packages such as alphalph that give longer alphabetic sequences or you can use roman numerals or some other scheme – David Carlisle Nov 17 '16 at 16:26
- why the downvote? This does pinpoint the problem doesn't it? – David Carlisle Nov 17 '16 at 17:31
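As the comments suggest, roman numerals are another way past the 26-label limit; a minimal variant of the redefinition (an untested sketch, same idea as the alphalph version) would be:

```latex
% Label subfigures i), ii), iii), ... instead of a), b), c), ...
% \roman (or \Roman) has no 26-item limit, unlike \alph.
\usepackage{subcaption}
\renewcommand*{\thesubfigure}{\roman{subfigure}}
```

The trade-off is purely cosmetic: alphalph continues a, ..., z, aa, ab, ..., while \roman switches to a different numbering scheme entirely.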
https://plainmath.net/differential-equations/3824-verify-given-functions-basis-solutions-equation-initial-problem-solution
beljuA 2021-01-04

Verify that the given functions form a basis of solutions of the given equation, and solve the given initial value problem: $4x^2y''-3y=0$, $y(1)=3$, $y'(1)=2.5$; the claimed basis of solutions is $y_1=x^{-1/2}$ and $y_2=x^{3/2}$.

pattererX Expert

Given: $4x^2y''-3y=0$, $y(1)=3$, $y'(1)=2.5$, with basis of solutions $y_1=x^{-1/2}$ and $y_2=x^{3/2}$.

Let $x=e^t$, so $\frac{dx}{dt}=e^t$ and $\frac{dy}{dx}=\frac{dy/dt}{dx/dt}=e^{-t}\frac{dy}{dt}$, which gives $x\frac{dy}{dx}=\frac{dy}{dt}$. Similarly, $x^2\frac{d^2y}{dx^2}=\frac{d^2y}{dt^2}-\frac{dy}{dt}$.

Substituting into $4x^2y''-3y=0$ yields $4\frac{d^2y}{dt^2}-4\frac{dy}{dt}-3y=0$.

The auxiliary equation is $4m^2-4m-3=0$, so $m=\frac{3}{2},\ -\frac{1}{2}$, and $y(t)=c_1e^{\frac{3}{2}t}+c_2e^{-\frac{1}{2}t}$. Substituting back $x=e^t$ gives $y(x)=c_1x^{3/2}+c_2x^{-1/2}$.

Consequently, every solution is a linear combination of $x^{3/2}$ and $x^{-1/2}$; these functions are linearly independent, so $\{x^{3/2},x^{-1/2}\}$ forms a basis for the solutions of $4x^2y''-3y=0$.

Now apply the initial conditions to $y(x)=c_1x^{3/2}+c_2x^{-1/2}$. From $y(1)=3$: $c_1+c_2=3$. Differentiating, $y'(x)=\frac{3}{2}c_1x^{1/2}-\frac{1}{2}c_2x^{-3/2}$, and $y'(1)=2.5$ gives $\frac{3}{2}c_1-\frac{1}{2}c_2=2.5$, i.e. $3c_1-c_2=5$.

Solving, $c_1=2$ and $c_2=1$, so $y(x)=2x^{3/2}+x^{-1/2}$.
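As a numerical cross-check (my own sketch, independent of the worked solution), plugging $y(x)=2x^{3/2}+x^{-1/2}$ into the equation with finite differences confirms both the ODE and the initial conditions:

```python
# Sanity check of y(x) = 2*x**(3/2) + x**(-1/2) against
# 4*x**2*y'' - 3*y = 0 with y(1) = 3, y'(1) = 2.5.

def y(x):
    return 2 * x**1.5 + x**-0.5

def yp(x, h=1e-6):
    # central-difference approximation of the first derivative
    return (y(x + h) - y(x - h)) / (2 * h)

def ypp(x, h=1e-4):
    # central-difference approximation of the second derivative
    return (y(x + h) - 2 * y(x) + y(x - h)) / h**2

# The ODE should hold at any x > 0, up to discretization error
for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(4 * x**2 * ypp(x) - 3 * y(x)) < 1e-3

# Initial conditions
assert abs(y(1.0) - 3.0) < 1e-12
assert abs(yp(1.0) - 2.5) < 1e-6
```

(In fact $4x^2y''=6x^{3/2}+3x^{-1/2}=3y$ exactly, so only floating-point error remains.)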
https://physics.stackexchange.com/questions/301054/fermion-anti-commutation-relations/301099
Fermion anti-commutation relations The fermion anti-commutation relations are given as $$\{\psi_{\alpha}({\bf x},t),\psi_{\beta}^{\dagger}{(\bf x'},t)\} = \delta_{\alpha,\beta} \, \delta({\bf x} - {\bf x'}).$$ I am interested in determining $\{\psi_{\alpha}({\bf x},t),{\bar \psi}{(\bf x'},t) \psi({\bf x'},t)\}$. Does $\{\psi_{\alpha}({\bf x},t),{\bar \psi}_{\beta} ({\bf x'},t)\}$ simplify to anything? In general you have $\{\psi_{\alpha}({\bf x},t),(\psi^{\dagger}({\bf x'},t) \, \gamma^0)_{\beta}\}$ which is equal to $$\{\psi_{\alpha}({\bf x},t),\psi^{\dagger}_{\rho}({\bf x'},t) \gamma^0_{\rho\beta}\},$$ with the sum over $\rho$ assumed. In the energy representation, for example, it is straightforward to check that the $\gamma^0_{\rho\beta}$ can be taken outside the anti-commutator, but how do you show this in general (if instead of $\gamma^0$ you had, say, $\gamma^1$ then this is not so obvious since the $\gamma_1$ involves the Pauli matrix $\sigma_1$ and the spinor $\psi$ also involves $\sigma$ so it doesn't look easy to see that it would be true in this case)? • What kind of indices are the $\alpha,\beta$? Are there any of them missing in $\{\psi_{\alpha}({\bf x},t),{\bar \psi}{(\bf x'},t) \psi({\bf x'},t)\}$? – coconut Dec 26 '16 at 10:20 • the $\alpha, \beta$ are components of the spinor $\psi$. – jim Dec 26 '16 at 13:55 • $\gamma^\mu$ is a number. The (anti-)commutator is taken between operators. A $c$-number may always be taken out of any (anti-)commutator. – Prahar Dec 26 '16 at 15:25 After explicitly writing indices on everything, we are just dealing with products of (Grassman) numbers. $\gamma^0_{\alpha\beta}$ commutes with any other element, so it can be taken out. The commutation relations between $\psi$ and $\bar{\psi}\psi$ should be expressed as commutators, because $\psi$ is a fermion and $\bar{\psi}\psi$ is a boson. 
Using $\{\psi_\alpha(\mathbf{x},t),\bar{\psi}_\beta(\mathbf{x}',t)\}=\gamma^0_{\alpha\beta}\,\delta(\mathbf{x}-\mathbf{x}')$ (which follows from the relation above, since $\bar\psi_\beta=\psi^\dagger_\rho\gamma^0_{\rho\beta}$) together with $\{\psi_\alpha(\mathbf{x},t),\psi_\beta(\mathbf{x}',t)\}=0$ we get \begin{align} [\psi_{\alpha}({\bf x},t),{\bar \psi}({\bf x'},t) \psi({\bf x'},t)] =& [\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t)] \\ =& \psi_{\alpha}({\bf x},t){\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \{\psi_{\alpha}({\bf x},t),{\bar \psi}_\beta({\bf x'},t)\} \psi_\beta({\bf x'},t) - {\bar \psi}_\beta({\bf x'},t)\psi_{\alpha}({\bf x},t) \psi_\beta({\bf x'},t) \\ &- {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \gamma^0_{\alpha\beta}\, \delta(\mathbf{x}-\mathbf{x}')\psi_\beta({\bf x'},t) + {\bar \psi}_\beta({\bf x'},t) \psi_\beta({\bf x'},t)\psi_{\alpha}({\bf x},t) - {\bar \psi}_\beta({\bf x'},t)\psi_\beta({\bf x'},t) \psi_{\alpha}({\bf x},t) \\ =& \left(\gamma^0\psi({\bf x'},t)\right)_\alpha \delta(\mathbf{x}-\mathbf{x}') \end{align} where in the second-to-last step $-\bar\psi_\beta\psi_\alpha\psi_\beta=+\bar\psi_\beta\psi_\beta\psi_\alpha$ by $\{\psi_\alpha,\psi_\beta\}=0$, so the last two terms cancel.
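A finite-dimensional analogue (my own sketch, not part of the answer) makes the "fermion bilinears are bosonic" point concrete. Building two fermionic modes with a Jordan–Wigner construction, $\{a_i,a_j^\dagger\}=\delta_{ij}$ holds, and the bilinear $\sum_\beta a_\beta^\dagger a_\beta$ closes with $a_\alpha$ as a *commutator*, $[a_\alpha,\sum_\beta a_\beta^\dagger a_\beta]=a_\alpha$ (no $\gamma^0$ structure here, since $a^\dagger$ plays the role of $\bar\psi$):

```python
import numpy as np

# Two fermionic modes via Jordan-Wigner:
# a1 = s (x) I,  a2 = Z (x) s, where s = |0><1| and Z = diag(1, -1).
s = np.array([[0.0, 1.0], [0.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)
a1 = np.kron(s, I)
a2 = np.kron(Z, s)

def anticomm(A, B):
    return A @ B + B @ A

def comm(A, B):
    return A @ B - B @ A

# Canonical anticommutation relations {a_i, a_j^dagger} = delta_ij
assert np.allclose(anticomm(a1, a1.T), np.eye(4))
assert np.allclose(anticomm(a2, a2.T), np.eye(4))
assert np.allclose(anticomm(a1, a2.T), np.zeros((4, 4)))

# The bilinear N = sum_b a_b^dagger a_b satisfies a COMMUTATOR relation:
# [a_alpha, N] = a_alpha
N = a1.T @ a1 + a2.T @ a2
assert np.allclose(comm(a1, N), a1)
assert np.allclose(comm(a2, N), a2)
```

The matrices are real, so `.T` serves as the adjoint; the same algebra that cancels the cubic terms in the continuum derivation cancels them here.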
https://reciprocalsystem.org/paper/subatomic-mass-recalculated
# Subatomic Mass, Recalculated

Having recently received a copy of Physical Review, which contains everything known about subatomic particles, I decided to put the Reciprocal System to the test—to see if Larson's original calculations would still hold up under the scrutiny of today's accurate measurement systems. The results, some of which are related here, have been quite interesting.

All observed particle measurements were taken from Physical Review D, Particles and Fields.1 Values were calculated with "C" language programs, compiled with SAS/C, version 6.51, using standard, double precision floating point with an accuracy of 15 significant digits. The code was executed on an Amiga 3000 computer under AmigaDOS version 2.1.

# Mass Components

The calculated values for subatomic particle mass,2 in terms of natural units, are listed in Table 1. In keeping with Larson's original tabular format, not all the significant digits are shown (though they are used in all computations).

Table 1: Mass Components (natural units)

| Component | Description | Calculated Value |
|---|---|---|
| p | primary mass | 1.000000000000 |
| m | magnetic mass | 0.006392045455 |
| p+m | gravitational mass | 1.006392045455 |
| E | electric mass (3 dim.) | 0.000868055556 |
| e | electric mass (2 dim.) | 0.000578703704 |
| C | mass of normal charge | 0.000044944070 |
| c | mass of electron charge | -0.000029962713 |

# Observed Mass

The observed mass values for the various subatomic particles have changed since the publication of Nothing But Motion, and tentative neutrino and "massless neutron" mass now exist.
The observed neutrino mass is taken from the electron neutrino, which is listed with a "formal upper limit" of 5.1 eV, and a "95% certainty level."3 To maintain consistent units in the table, this value was converted to unified atomic mass units (u) with the conversion factor of 931.49432 MeV/u.4

The mass of the "massless neutron" is taken from the muon neutrino, as suggested by Larson: "…and the logical conclusion is that the particle now called the muon neutrino is the particle required by the theory: the massless neutron."5 The mass of the muon neutrino is inferred from measurements of muon momentum in the decay of a π+ particle, and results in a mass of 0.27 MeV, or (0.00028985683 u).4,6

The observed proton is included in both the charged and uncharged proton entries, for comparison. (The uncharged proton is listed as "unobserved" by Larson in Nothing But Motion.)

Table 2 lists the subatomic mass in natural units, as compared to the unified atomic mass units based on the ¹²C isotope.

Table 2: Calculated Mass (natural) vs Observed Mass (u)

| Component | Particle | Calculated | Observed | Difference |
|---|---|---|---|---|
| e+c | charged electron | 0.00054874099 | 0.00054857990 | 0.00000016109 |
| e+c | charged positron | 0.00054874099 | 0.00054857990 | 0.00000016109 |
| e | electron | 0.00057870370 | massless | |
| e | positron | 0.00057870370 | massless | |
| e | neutrino | 0.00057870370 | 0.00000000548 | 0.00057869823 |
| p+m+e | massless neutron | 1.00697074916 | 0.00028985684 | 1.00668089232 |
| p+m+2e | proton | 1.00754945286 | 1.00727647000 | 0.00027298286 |
| p+m+2e+C | charged proton | 1.00759439693 | 1.00727647000 | 0.00031792693 |
| p+m+3e | hydrogen (¹H) | 1.00812815657 | 1.00794000000 | 0.00018815657 |
| p+m+3e+E | compound neutron | 1.00899621212 | 1.00866490400 | 0.00033130812 |

The values calculated for the neutrino and "massless neutron" are considerably out of line with the observed values. Given that the observed values were deduced indirectly from the decay of other particles, there are undoubtedly numerous factors involved that were not taken into account.
See the section on Rethinking Neutrinos for a possible explanation.

The calculated values for the charged electron/positron, proton, ¹H isotope, and the compound neutron are reasonably close, but not as close as they should be, given the number of significant digits in both the calculations and the observed values. This is due to the measuring system involved, that of the unified atomic mass unit (u). The observed values are based on the ¹²C isotope. Larson uses observed values in the ¹⁶O scale, which are closer to the natural mass units of the Reciprocal System, but still not exact.2

# Applying Conversion Factors

Instead of converting values from the ¹²C to ¹⁶O scales, it may be prudent to avoid both scales and determine a conversion factor from natural mass units to unified atomic mass units based on an isotope-free, easily measured particle—the charged electron. Of all the particles there are mass values for, the charged electron is, in all probability, the most accurate. Also, the charged electron mass is known more precisely in unified atomic mass units than in any other unit.4 Thus, the conversion factor between natural (n) and ¹²C (u) mass units can be determined by the ratio between the measured and calculated charged electron:

$\frac{0.00054857990 u}{0.00054874099 n} = 0.99970644 u/n$ (1)

Applying this factor to Table 1, the mass components in "unified atomic mass units" are obtained:

Table 3: Mass Components (u)

| Component | Description | Calculated Value |
|---|---|---|
| p | primary mass | 0.999706441403 |
| m | magnetic mass | 0.006390169015 |
| p+m | gravitational mass | 1.006096610417 |
| E | electric mass (3 dim.) | 0.000867800730 |
| e | electric mass (2 dim.) | 0.000578533820 |
| C | mass of normal charge | 0.000044930876 |
| c | mass of electron charge | -0.000029953917 |

Recalculating Table 2 with the values in Table 3 results in:

Table 4: Calculated Mass (u) vs Observed Mass (u)

| Composition | Particle | Calculated | Observed | Difference |
|---|---|---|---|---|
| e+c | charged electron | 0.00054857990 | 0.00054857990 | 0.00000000000 |
| e+c | charged positron | 0.00054857990 | 0.00054857990 | 0.00000000000 |
| e | electron | 0.00057853382 | massless | |
| e | positron | 0.00057853382 | massless | |
| e | neutrino | 0.00057853382 | 0.00000000548 | 0.00057852835 |
| p+m+e | massless neutron | 1.00667514424 | 0.00028985684 | 1.0063852874 |
| p+m+2e | proton | 1.00725367806 | 1.00727647000 | -0.00002279194 |
| p+m+2e+C | charged proton | 1.00729860893 | 1.00727647000 | 0.00002213893 |
| p+m+3e | hydrogen (¹H) | 1.00783221188 | 1.00794000000 | -0.00010778812 |
| p+m+3e+E | compound neutron | 1.00870001261 | 1.00866490400 | 0.00003510861 |

With the exception of the neutrinos, the calculated values are now extremely close to the observed values. The error for hydrogen is only 0.011%. The error in the compound neutron is 0.0035%.

Notice, however, the proton. The difference between the calculated and observed mass in the uncharged proton is almost the same as the charged proton, but in the opposite direction. This is rather suspicious, and one could theorize that the observed proton in the laboratory may actually be a 50/50 mix of charged and uncharged protons. Calculating the atomic weight based on a 50/50 mix yields:

Table 5: Mixed Sample Protons

| Comp. | Particle | Calculated | Observed | Difference |
|---|---|---|---|---|
| p+m+2e | proton | 1.00725367806 | 1.00727647000 | -0.00002279194 |
| p+m+2e+C | charged proton | 1.00729860893 | 1.00727647000 | 0.00002213893 |
| | 50/50 mixed protons | 1.00727614350 | 1.00727647000 | -0.00000032650 |

Which is 0.000032% from the observed value (though still outside the stated error of ±0.000000012 u). This calculation indicates that there is a high probability that the values obtained for the observed proton are a mix of both the charged and uncharged states, if the Reciprocal System is correct.
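The conversion factor of equation (1) and the mixed-proton arithmetic above can be reproduced with a short Python sketch, taking the published table values as inputs (tolerances allow for the rounding of the published digits):

```python
# Reproduce the natural-unit -> unified-atomic-mass-unit conversion
# and the 50/50 proton-mix arithmetic from the tables above.

m_e_obs  = 0.00054857990   # observed charged electron mass (u)
m_e_calc = 0.00054874099   # calculated charged electron mass (natural units)

# Eq. (1): conversion factor between natural units (n) and u
factor = m_e_obs / m_e_calc
assert abs(factor - 0.99970644) < 1e-8

proton_nat         = 1.00754945286   # p+m+2e, natural units
charged_proton_nat = 1.00759439693   # p+m+2e+C, natural units
proton_obs         = 1.00727647000   # observed proton mass (u)

proton_u         = proton_nat * factor          # Table 4 value
charged_proton_u = charged_proton_nat * factor  # Table 4 value
assert abs(proton_u - 1.00725367806) < 5e-8
assert abs(charged_proton_u - 1.00729860893) < 5e-8

# Table 5: a 50/50 mix of charged and uncharged protons
mix = (proton_u + charged_proton_u) / 2
assert abs(mix - 1.00727614350) < 5e-8

# Back-calculate the charged fraction reproducing the observed proton mass
frac = (proton_obs - proton_u) / (charged_proton_u - proton_u)
assert 0.50 < frac < 0.52   # the article quotes roughly 50.7% charged
```

The last few digits differ slightly from the published tables because the inputs here are the rounded 11-digit values, whereas the original computations carried 15 significant digits.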
Back calculating for this set of data, the proton sample would be 50.72668125% charged, and 49.27331875% uncharged (which reproduces the observed value exactly).

# Rethinking Neutrinos

Considering how close Larson's calculated values are to the observed values for other subatomic particles, it seems incongruous that both the muon and electron neutrinos should have such enormous error. In checking into the mass measurement procedure, I found that the observed values for both neutrinos should be correct, and concluded that there may be conceptual problems in Larson's interpretation of mass for these two particles.7

## Muon Neutrino (massless neutron) Mass

The logic Larson uses to determine mass is, "The massless neutron [muon neutrino], the M ½ ½ 0 combination, has no effective rotation in the third dimension, but no rotation from the natural standpoint is rotation at unit speed from the standpoint of a fixed reference system. This rotational combination therefore has an initial unit of electric rotation, with a potential mass of 0.00057850, in addition to the mass of the two-dimensional basic rotation, …"8

As I understood the convention, a displacement of zero means a scalar value of unity—uniform motion, the natural datum. If "no rotation from the natural standpoint" is "rotation at unit speed" with potential mass, then every location not occupied by matter should exhibit a mass of "e," that of the electron or positron. This is not observed, and I submit that no rotation in any dimension is exactly that, no rotation, and no potential mass. Thus, since the muon neutrino has no rotation in the 3rd dimension, it contributes no mass to the particle.
Secondly, when Larson adapts the ½ ½ convention over the 1 0 convention for the description of the massless neutron, he states, "If the addition to the rotational base is a magnetic unit rather than an electric unit, …" and "… half units do not exist, but a unit of two-dimensional rotation obviously occupies both dimensions."9 This makes the massless neutron, or muon neutrino, the two-dimensional version of a positron, having a single, two-dimensional temporal rotation instead of a single, one-dimensional temporal rotation, not necessarily occupying both dimensions, but distributed over both dimensions, and resulting in the appropriate ½ ½ 0 notation.

Since 1² = 1, the applicable mass is "e," not "p+m." And because this mass is distributed over two dimensions, the potential mass for the muon neutrino is e/2. The new calculated mass is therefore e/2 times the conversion factor of natural units to unified atomic mass units (nu->u):

$\frac{e}{2} \times \left(nu \rightarrow u \right) = \frac{0.00057870}{2} \times 0.999706441403$ $= 0.00028926691 u$ (2)

Or, approximately 0.26945 MeV. Compared to the observed value of "less than 0.27 MeV (CL = 90%)," this is as close to perfect as can be expected, given the uncertainty of the observed value.

## Electron Neutrino Mass

The electron neutrino, ½ ½ (1), is the muon neutrino with an additional 1D spatial (electric) rotation. This gives the particle no net motion, and hence no potential mass. Larson indicates, "But since the electric mass is independent of the basic rotation, and has its own initial unit, the neutrino has the same potential mass as the uncharged electron or positron, 0.00057870."8 I disagree with this statement for the neutrino. It may be true for the "p+m" mass conditions, but here we have "e-e," akin to a stable positron-electron combination due to the additional rotation in time on the positron component, and hence it is massless. But, the electron neutrino does have an observed mass of 5.1 eV.
The measurement process deals primarily with charged particles, and I believe this observed mass is the mass due to the interaction of a charge on the neutrino with the charge on the atoms of the detector. The charged neutrino has a mass of "c," the normal electron charge. The charge of atoms in the detector has a mass of "C," the mass of normal charge. Their interaction will be "C+c" (where "c" is positive, because we are on the same side of the unit boundary).10 Because charge is an effect of a "third region,"11 the charge needs to be brought across the unit boundary to measure the mass effect. This is a relation similar to "equivalent space," and results in the effect being the square of the value, "(C+c)²." The observed electron neutrino mass, due to charge interaction, is:

$(C + c)^2 \times (nu \rightarrow u) = (0.00004494 + 0.00002996)^2 \times 0.999706$ $= 0.00000000560 u$ (3)

Or, approximately 5.21 eV. The observed value is 5.1 eV, again, extremely close to the calculated value.

Updating Table 2 with these computations, and replacing "massless neutron" with "muon neutrino," yields the results:

| Composition | Particle | Calculated | Observed | Difference |
|---|---|---|---|---|
| (C-c)² | neutrino | 0.00000000561 | 0.00000000548 | 0.00000000014 |
| e/2 | muon neutrino | 0.00028935185 | 0.00028985684 | -0.00000050499 |

Noting that in the calculations "c" is negative, so C+c = C-(-c).

Updating Table 4 with these additions:

| Composition | Particle | Calculated | Observed | Difference |
|---|---|---|---|---|
| (C-c)² | neutrino | 0.00000000561 | 0.00000000548 | 0.00000000013 |
| e/2 | muon neutrino | 0.00028926691 | 0.00028985684 | -0.00000058993 |

# Conclusion

After compensating for differences in measuring systems, the Reciprocal System's 1959 calculations of mass agree quite closely with the 1993 observed values. The observed proton appears to be a near-even mix of charged and uncharged protons. If this is actually the case, other measurements may also be adversely affected, such as electric dipole moments or polarization.
Because physics does not recognize the charged and uncharged states of subatomic particles, observed values may become increasingly unreliable, tending to be more of a statistical distribution than a direct measurement. This will undoubtedly play havoc with any proposed unified system of theory.

The mass effects of the structure of neutrinos appear to be conceptually incorrect as presented in Nothing But Motion:

1. The mass of the muon neutrino (massless neutron) is one-half of the electric mass, being distributed over two dimensions, and having a mass of 0.27 MeV.
2. The mass of the electron neutrino is zero.
3. The observed mass of the electron neutrino is due to the interaction of a charged electron neutrino with a charged atom in the detector instrumentation, producing an apparent mass of 5.2 eV.

It would be interesting, however, if someone familiar with particle measurement techniques could examine the process of determining proton mass, and propose a method to eliminate either the charged or uncharged protons in the sample. The results should precisely match the values obtained from the Reciprocal System, when corrected for unified atomic mass. This could lead to the acceptance of the charged and uncharged states of subatomic particles (of the proton at least), and maybe even an objective look at Larson's work.

1. Physical Review D, Particles and Fields (The American Physical Society through the American Institute of Physics, 1 August 1994).
2. Larson, D.B., Nothing But Motion (North Pacific Publishers, 1979), page 164.
3. Physical Review D, Particles and Fields, op. cit., page 1389.
4. Ibid., page 1396, note on electron mass precision.
5. Larson, D.B., Nothing But Motion, op. cit., page 213.
6. Physical Review D, Particles and Fields, op. cit., page 1392.
7. Text incorporated into this article from "Subatomic Mass, Recalculated, Update," Reciprocity XXV, #2 (Autumn, 1996), page 25.
8. Nothing But Motion, op. cit., page 165.
9. Ibid., page 141.
10. Ibid., page 164.
11. Ibid., page 163.

International Society of Unified Science, Reciprocal System Research Society, Salt Lake City, UT 84106 USA
https://techwhiff.com/learn/can-someone-please-help-me-solve-this-thank-you/175508
Question: (5) (2 points) Predict the products of the following acid/base reactions and provide balanced molecular equations.

a. HNO3(aq) + Ba(OH)2(aq) → ?
b. H3PO4(aq) + NaOH(aq) → ?
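Both reactions in the acid/base question above are straightforward neutralizations. As a worked illustration (the balanced equations below are standard general-chemistry results, not taken from the original page):

2 HNO3(aq) + Ba(OH)2(aq) → Ba(NO3)2(aq) + 2 H2O(l)
H3PO4(aq) + 3 NaOH(aq) → Na3PO4(aq) + 3 H2O(l)

A quick script can confirm that each proposed equation conserves atoms on both sides:

```python
from collections import Counter

def atoms(terms):
    """Sum element counts over a list of (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, counts in terms:
        for element, n in counts.items():
            total[element] += coeff * n
    return total

# a. 2 HNO3 + Ba(OH)2 -> Ba(NO3)2 + 2 H2O
lhs_a = atoms([(2, {"H": 1, "N": 1, "O": 3}), (1, {"Ba": 1, "O": 2, "H": 2})])
rhs_a = atoms([(1, {"Ba": 1, "N": 2, "O": 6}), (2, {"H": 2, "O": 1})])
assert lhs_a == rhs_a  # 4 H, 2 N, 8 O, 1 Ba on each side

# b. H3PO4 + 3 NaOH -> Na3PO4 + 3 H2O
lhs_b = atoms([(1, {"H": 3, "P": 1, "O": 4}), (3, {"Na": 1, "O": 1, "H": 1})])
rhs_b = atoms([(1, {"Na": 3, "P": 1, "O": 4}), (3, {"H": 2, "O": 1})])
assert lhs_b == rhs_b  # 6 H, 1 P, 7 O, 3 Na on each side
```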
2022-12-03 09:39:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.409297913312912, "perplexity": 8151.880537812459}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00003.warc.gz"}
https://daneshresan.com/article/273356/A-numerical-method-consisting-of-Tikhonov-regularization-to-the-matrix-form-of-Duhamel-s-principle-for-solving-a-One-dimensional-IHCP
## Publication venue

International Conference on Nonlinear Modeling and Optimization, 5 pages.

## Abstract

A numerical method consisting of Tikhonov regularization applied to the matrix form of Duhamel's principle for solving a one-dimensional IHCP, using temperature data containing significant noise, is presented in this paper. The measurements ensure that the inverse problem has a unique solution, but this solution is unstable; hence the problem is ill-posed. This instability is overcome using the zeroth-, first-, and second-order Tikhonov regularization method with the GCV criterion for the choice of the regularization parameter.

The download guide for the article "A numerical method consisting of Tikhonov regularization to the matrix form of Duhamel's principle for solving a One-dimensional IHCP" is still being prepared.
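The zeroth-, first-, and second-order Tikhonov schemes mentioned in the abstract all amount to minimizing ||Ax − b||² + λ²||Lx||², where L is the identity or a finite-difference operator. The sketch below is a generic illustration of that idea (not the paper's code); the operator construction and the choice of λ are assumptions for demonstration — in the paper λ would be chosen by the GCV criterion.

```python
import numpy as np

def tikhonov_solve(A, b, lam, order=0):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2, where L is the identity
    (order 0) or a first/second finite-difference operator (order 1 or 2)."""
    n = A.shape[1]
    if order == 0:
        L = np.eye(n)
    elif order == 1:
        L = np.diff(np.eye(n), axis=0)        # rows like [-1, 1, 0, ...]
    else:
        L = np.diff(np.eye(n), n=2, axis=0)   # rows like [1, -2, 1, ...]
    # Stack the data-fit and penalty terms into one ordinary least-squares
    # problem and solve it with a standard solver.
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

With λ = 0 this reduces to plain least squares; increasing λ trades data fit for stability, which is exactly what tames the ill-posedness described above.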
2021-04-14 10:37:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8273298740386963, "perplexity": 5925.019982036927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00590.warc.gz"}
https://math.meta.stackexchange.com/questions/30263/why-does-the-community-seem-to-encourage-technical-cleverness-over-the-actual-an?noredirect=1
# Why does the community seem to encourage technical cleverness over the actual answer to a question?

First post on meta, so here goes nothing. Earlier today, I answered the question linked below:

Evaluating $\int_0^1 \frac{3x}{\sqrt{4-3x}} dx$

I believe it is a good question. The user had a specific concern, and expressed it in detail, with a good-faith effort made to understand. It is perhaps an elementary concern, but a valid one. Three answers were posted. Only one has received any upvotes (two), and it in no way answers the question the user had, but merely presents a slick alternate approach. To be clear, I have no problem at all with the upvoted answer or its poster, and I think it is good to have that easier solution there, but I feel as though it was upvoted for being clever, despite in no way answering the specific question the user had, while the other answers were ignored.

This seems to be a bit of a theme on MSE. It has happened to me before, and I see it in many other answer sections as a lurker as well. Am I grossly misunderstanding how this is all supposed to work? Perhaps I am simply wrong and my answer is lukewarm enough to deserve neither upvotes nor downvotes? Also, I do not necessarily care about getting votes for my own answer, so please don't think I do, but is there some essential element of a quality answer that I have missed here? I hope all is in order. Feel free to school me on meta-etiquette if need be.

• My opinions here: $\LaTeX$ is very visible, and maybe the actual question would have been answered if it had been better highlighted, i.e. using box quotes > – Mohammad Zuhair Khan May 14 at 22:37
• @MohammadZuhairKhan I agree with you. Still, that just sounds like people aren't really carefully reading. – The Count May 14 at 22:49
• In my opinion, you didn't understand what OP was after in Question 1, and thus didn't give OP what was needed. The answer by Viktor Glombik strikes me as being what OP wanted/needed. – Gerry Myerson May 15 at 4:45
• For me cleverness means something other than the standard procedure explained in several textbooks, but to each their own I suppose (and "standard" is a relative term here, with its meaning varying from one person to another). Your main issue, if I got it right, was recently discussed here. – Jyrki Lahtonen May 15 at 7:41
• @GerryMyerson See, perhaps it is all about perception. It seemed to me like that answer just retyped what the OP already had, which he said he understood, but asked where the idea for the manipulation came from. – The Count May 15 at 17:18
• @JyrkiLahtonen I remember that post. I thought "finally!" when it was posted, but then everyone reverted back to their old ways of upvoting such non-answers. – The Count May 15 at 17:19
• My answer to your question (though too short to be a proper answer) is: because the community generally agrees that indirect answers to questions that suggest alternative approaches are perfectly fine and add value to the site. They may be less helpful to the OP, although they certainly can be, but even if they aren't, helping the OP is not the only purpose of the site. – jgon May 16 at 2:33
• A similar concern here: math.meta.stackexchange.com/questions/30076/… – onurcanbektas May 16 at 16:38
• There are a couple of things about technical cleverness, which is not entirely out of place, though I do think that sometimes more effort could be taken to help the person asking the question. There is a concept of abstract duplicate which reflects questions with the same mathematical content; what this does not capture is that different users have different issues with that content. On the other hand, it is painful to see people struggling with inefficient methods, and people want to excite others with beautiful things they know. Coaching and mathematics both have their place, I think. – Mark Bennet May 20 at 19:30
• @MarkBennet Yes, I think I agree with you entirely. In my personal experience, the technical/clever insight comes later, but perhaps a better answer than any posted on that question would include both. Though, I admittedly did not include such things since the other answer already had it. I like that answer and upvoted it, but I do not think it answered the question as posed. Thanks for your input! – The Count May 20 at 20:12
• @MohammadZuhairKhan: Please do not use quote environments for emphasis. It's very confusing and also could easily break compatibility with design changes, accessibility, etc. – Wrzlprmft May 26 at 15:56
• Considering that it is clearly listed in the how-to-format column, I am unsure why it is not meant to be used @Wrzlprmft : i.stack.imgur.com/q5MZJ.png – Mohammad Zuhair Khan May 26 at 16:05
• @MohammadZuhairKhan: It is meant to be used, but for quotes, i.e., reproducing somebody else's text or encapsulating a statement that is referred to (e.g., like this) – in other words: something that you could put into quotation marks (but usually won't due to length). It is not meant to be used to highlight a part of the question as particularly important. This can be done using boldface or headers (depending on the situation). – Wrzlprmft May 26 at 16:17
2019-10-20 01:46:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001067519187927, "perplexity": 835.057046503246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700560.62/warc/CC-MAIN-20191020001515-20191020025015-00229.warc.gz"}
https://tiago-simoes.github.io/EvoPhylo/
A package to perform automated morphological character partitioning for phylogenetic analyses and analyze macroevolutionary parameter outputs from clock (time-calibrated) Bayesian inference analyses. EvoPhylo was initially released to pre- and post-process data for the software MrBayes, but since version 0.3 it also handles data pre- and post-processing for the software package BEAST2. The ideas and rationale behind the original functionality and objectives of the analyses available in this package were first presented by Simões & Pierce (2021). Its current functionality is described in detail by Simões, Greifer, Barido-Sottani & Pierce (2022).

## Installing package EvoPhylo

Install the latest release version (v. 0.3.2) directly from CRAN:

    install.packages("EvoPhylo")

or the development version from GitHub:

    # install.packages("devtools")
    devtools::install_github("tiago-simoes/EvoPhylo")

## Tutorials

See vignette("char-part"), vignette("data_treatment"), vignette("rates-selection_MrBayes"), vignette("rates-selection_BEAST2"), vignette("fbd-params"), and vignette("offset_handling") for step-by-step guides on using EvoPhylo to perform these analyses, also available on the EvoPhylo website.
2022-11-26 23:30:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.239299014210701, "perplexity": 10476.76133904035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00060.warc.gz"}
https://biowize.wordpress.com/category/science/
# My comments on “software solutions for big biology” paper

A paper by Philip Bourne (NIH Associate Director for Data Science) and colleagues came out a couple of weeks ago highlighting some of the issues with software in biology research. The focus of the paper is the scalability of “big data” type solutions, but most of the paper is relevant to all bioinformatics software. Speaking of big data, this Twitter post gave me a good chortle recently. But I digress… I really just wanted to highlight and comment on a couple of points from the paper.

• Biologists are not trained software developers: The paper makes the point that the majority of biologists have zero training in software engineering best practices, and as a result there is a pervasive culture of poorly-designed, short-lived, “disposable” research software out in the wild. I agree completely with their assessment (in fact I blew a gasket over this as a new Ph.D. student) and think all biologists could benefit from some minimal training in software engineering best practices. However, I think it’s important to emphasize that biologists do not need to become software engineers to write good reproducible research software. In fact, I speak from experience when I say you can spend a lot of time worrying about how software is engineered at the expense of actually using the software to do science. We need to make it clear that nobody expects biologists to become pro programmers, but that investing in some software training early on can yield huge dividends throughout a scientific career.

• Attribution for bioinformatics software is problematic: The paper emphasizes that papers in “high-impact” journals, even those with a strong bioinformatics component, rarely feature a bioinformatician as first or last author. I get the impression that things are improving ever-so-slowly, but some fairly recent comments from E. O. Wilson make it pretty clear that as a community we still have a long way to go (granted, Wilson was talking about stats/math, but his sentiment applies to informatics and software as well).

• Bioinformatics is a scientific discipline in its own right: and bioinformaticians need career development. ’Nuff said.

• Assessment of contribution: One of the final points they make in the paper is that with distributed version control tools and social coding platforms like GitHub, every (substantial) software version can be assigned a citable DOI, and relative author contributions can be assessed by looking at the software’s revision history. I am a huge proponent of version control, but this last point about looking at git logs for author contributions doesn’t strike me as very helpful. It may be better than the obligatory vague “Author Contributions” section of the 5+ papers I read this week (A.B., C.D., and E.F. designed the research. A.B. and G.H. performed the research. A.B., E.F., and G.H. wrote the paper.), but only marginally better. Number of revisions committed, number of lines added/removed, and most other metrics easily tracked on GitHub are pretty poor indicators of technical AND intellectual contribution to software development. I think we would be much better off enforcing clearer guidelines for explicitly stating the contributions made by each author.

Overall, I think it was a good piece, and I hope it represents a long-awaited change in the academic community with respect to bioinformatics software.

# Great discussion on research software: linkfest

I’ve been following a weeks (months?) long social media discussion on research software that has been very thought-provoking. The questions being discussed include the following.

• What should we expect/demand of software to be “published” (in the academic sense)?
• What should community standards of replicability/reproducibility be?
• Both quick n’ dirty prototypes and robust, well-tested platforms are beneficial to scientific research. How do we balance the need for both? What should our expectations be for research software that falls into various slots along that continuum?

I’ve been hoping to weigh in on the discussion with my own two cents, but I keep on finding more and more great reading on the topic, both from Twitter and from blogs. So rather than writing (and finishing formulating!) my opinions on the topic(s), I think I’ll punt and just share some of the highlights from my readings. Linkfest below.

# GitHub now renders IPython/Jupyter Notebooks!

I’ve written before about literate programming and how I think this could be a big game changer for transparency and reproducibility in science, especially when it comes to data analysis (vs. more traditional software engineering). Well, GitHub announced recently that IPython/Jupyter notebooks stored in GitHub repositories will be rendered, rather than presented as raw JSON text as before. This is a very nice development, making it even easier to share data analysis results with others!

# Simulating BS-seq experiments with Sherman

The bioinformatics arm of the Babraham Institute produces quite a few software packages, including some like FastQC that have achieved nearly ubiquitous adoption in the academic community. Other offerings from Babraham include Bismark for methylation profiling and Sherman for simulating NGS reads from bisulfite-treated DNA. In my computational genome science class, there was a student who wanted to measure the accuracy of methylation profiling workflows, and he identified Bismark and Sherman as his tools of choice. He would identify a sequence of interest, use Sherman to simulate reads from that sequence, run the methylation call procedure, and then assess the accuracy of the methylation calls.
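The final step of an assessment like the one just described reduces to comparing the methylation positions called by the workflow against the ground truth used to simulate the reads. A minimal sketch of that comparison (a hypothetical helper of my own, not the student's actual code):

```python
def methylation_call_accuracy(true_sites, called_sites):
    """Compare called methylation positions against the ground truth used
    to simulate the reads. Returns (precision, recall)."""
    true_sites, called_sites = set(true_sites), set(called_sites)
    tp = len(true_sites & called_sites)  # sites correctly called methylated
    precision = tp / len(called_sites) if called_sites else 0.0
    recall = tp / len(true_sites) if true_sites else 0.0
    return precision, recall

# Example: three simulated methylation sites, one missed and one spurious call.
p, r = methylation_call_accuracy([1, 5, 9], [1, 5, 7])
```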
There was a slight problem, though: Sherman is random in its conversion of Cs to Ts, and the number of conversions can only be controlled by things like error rate and conversion rate. By default, Sherman provides no way to “protect” a C from conversion the way a methyl group does on actual genomic DNA. So there was no way to assess the accuracy of the methylation profiling workflow, since we had no way of indicating/knowing which Cs should be called methylated! After a bit of chatting, however, we came up with a workaround. In our genomic DNA fasta file, any Cs we want to protect from conversion (i.e. methylate them in silico) we simply convert to X. Then we run Sherman, which will convert Cs to Ts at the specified conversion rate but will leave Xs alone. Then, after simulating the reads but before the methylation analysis procedure, we simply change the Xs back to Cs. This seemed to pass some sanity tests for us, and we contacted the author of Sherman, Felix Krueger (@FelixNmnKrueger), who confirmed that he saw no potential pitfalls with the approach.

I like this little hack, and assuming I ever use it myself in the future, I will probably create a script that does the C -> X conversion from a GFF3 file or a list of “methylation sites” in some other format (the conversion from X back to C is trivial).

# Reproducibility on a technical level vs a conceptual level

I am and have been a huge proponent of reproducibility in scientific computation for most of my training. I think that reproducibility is important both on a technical level and on a conceptual level. Reproducibility at a technical level—what many call replicability—is the ability to successfully generate a similar/identical output given the source code and data. Reproducibility at the conceptual level—what we often refer to as reproducibility in a more general sense—is being able to get a consistent result using different methods (e.g. re-implement the logic in your own software), different data (e.g.
do your own data collection on the subject of interest), or both. The latter—reproducibility at the conceptual level—is the ultimate test, and gives much stronger support to a theory/model/result than replicability.

In some ways, reproducing a computational result in biology is much easier than reproducing an experimental result. Computers are just machines* that execute sets of instructions (*gross oversimplification noted), and assuming I provide the same instructions (source code) and input (data) to another scientist, it should be trivial for them to exactly and precisely reproduce a result I previously computed. However, I speak of an ideal world. In the real world, there are myriad technical issues: different operating systems; different versions of programming languages and software libraries; different versions of scientific software packages; poor workflow documentation; limited computing literacy; and a variety of other issues that make it difficult to reproduce a computation. Getting the “right” result depends highly on your environment or setup, your experience, and in some cases on having some secret sauce (running commands/programs in the right order, using certain settings or parameter values, etc). In this way, computational biology is a lot more like experimental biology than we let on sometimes. It doesn’t have to be this way. But it is.

There is a great discussion going on in social media right now about what should be expected of academic scientists producing software in support of their research. I hope to contribute a dedicated blog post to this discussion soon, but it is still very nascent and I would like to let my thoughts marinate for a bit before I dive in. Several points are clear, however, that are directly relevant to the discussion of replicability and reproducibility.

• Replicability in computation supports reproducibility.
I’m not sure of anyone that disagrees with this. My impression is that most disagreements are focused on what can reasonably be expected of scientists given the current incentive structure.

• Being unable to replicate a computational study isn’t the end of the world for academic science: important models and theories shouldn’t rely on a single result anyway. But the lack of computational replicability does make the social enterprise of academic science, already an expensive and inefficient ordeal under constant critical scrutiny, even more expensive and inefficient. Facilitating replicability in computation would substantially lower the activation energy required to achieve the kind of real reproducibility that matters in the long run.

• There are many academics in the life sciences at all levels (undergrad to grad student to postdoc to PI) that are dealing with more data and computation right now than their training has ever prepared them to deal with.

• A little bit of training can go a long way toward facilitating more computational replicability in academic science. Check out what Software Carpentry is doing. Training like this may not benefit cases in which scientists deliberately obfuscate their methods to cover their tracks or to maintain competitive advantage, but it will definitely benefit cases in which a result is difficult to replicate simply because a well-intentioned scientist had little exposure to computing best practices. (Full disclosure: I am a certified instructor for Software Carpentry, although I receive no compensation for my affiliation or participation. I’m just a huge proponent of what they do!)

# Be back shortly!

I’m sure the entirety of the interwebz has been sitting on pins and needles wondering when oh when will the BioWize blog be updated! It’s been over a year since I’ve posted an update. This has been a crazy, hectic, exciting, frustrating, and fulfilling year.
I have jotted down ideas for a dozen blog posts, and even started drafting a couple, but with stagnant progress on a research project and pressure to make progress with my dissertation I just haven’t figured out how blogging should (if at all) fit into all this. But I know I need to write more often. I’ve found that writing skills are like a muscle: they need frequent conditioning and exercise or they will atrophy. Whether I’m writing about a technical skill that saves me lots of time in my research, or about something more conceptual, taking the time to write out my thoughts for a general scientific audience is a great exercise in clarifying and condensing my thinking. Communicating your thinking (and supporting evidence) to other scientists is the whole purpose of scientific publishing, after all. So I expect there to be a flurry of activity on the blog within a short time.

# Brainstorm: motivating student participation in my Computational Genome Science class

A few weeks ago I finished teaching a course on computational genome science. I was involved in designing the course back in 2011, and helped teach the initial offering, but this year I was the primary instructor. The class turned out well (in my opinion), and the student feedback was overwhelmingly positive. However, it didn’t go perfectly, and the last couple of weeks have given me the opportunity to reflect and brainstorm ideas for improving the next offering of the class.

This class is a very hands-on class—we cover the very basics of the theory, but spend most of our time running software tools and critically evaluating their results. Students submit assignments as entries in a class wiki, including notes on what they did, results they got, and interpretation/analysis. Using a wiki not only facilitates my monitoring of student participation, but (more importantly in the long run) it encourages students to develop documentation/note-taking skills that will be a huge benefit to them in the future.
At the beginning of most class periods, I set aside a few minutes for the students to work on their wiki entries. I used a Python script to randomly group the students into pairs, and instructed each pair to edit each other’s wiki entries—inserting notes or questions when something wasn’t clear, making minor stylistic improvements, etc. Then, after this short period, another Python script randomly selected one of the students to come up and share what their partner had done and the results they had gotten. The hope was that these activities would motivate the students to complete the activities on time, and that they would actually critically evaluate each other’s work (and hopefully even learn something new in the process!).

The biggest problem I had with this approach is that the course was offered as a block course—that is, 3 credits and a full semester’s worth of work packed into 1.5 credits and 8 weeks. We were going at such a pace that the students often hadn’t had time to complete their assignments by the time they were scheduled to be editing each other’s wiki entries. Thankfully, future offerings of the course will be 3-credit, semester-long ordeals, giving us more time to cover the same materials in more depth without being so rushed.

However, this was not the only problem. I got the impression that the students were not taking the opportunity to evaluate and present each other’s work seriously. Of course these students were busy, and as might be expected they needed proper motivation to engage in these types of activities. After the first few class periods, it seemed that the threat of looking unprepared in front of the other students was not sufficient motivation to write complete, well-documented wiki entries and/or to critically assess each other’s entries. Here are a few ideas I’ve come up with to improve this situation for the next offering of this class.
• For most of the (half-)semester I randomly chose a single student to come up and present their partner’s work. Near the end of the term, I decided to randomly select pairs instead. Each student would still present what their partner had done, but their partner would be there to clarify any misunderstandings and provide support. This seemed to work much better, and is going to be my approach from day one next time.

• I provided verbal feedback for beginning-of-class presentations, and occasional verbal feedback for wiki entries, but as the majority of their grades were derived from a term project I provided no formal assessment throughout the term. I still like the idea of postponing formal assessment until the end of the semester to provide students ample time to polish up their wiki entries, but students need more than just verbal feedback in the interim. Next time I think I’ll have students grade each other’s beginning-of-class presentations (my own personal evaluation will be factored in as well). As was the intent before, all students will be motivated to be prepared for class (since they don’t know beforehand who will present), and they will be motivated to take advantage of the first few minutes of class to review each other’s entries (giving more than just a superficial glance). Peer evaluations will be included in each student’s final grade, and students will also get credit for providing evaluations. Hopefully this will provide motivation for the students to engage in each aspect of the group experience.

• As much as I hate enterprise Learning Management Systems, I’ll probably end up having students post peer evaluations to the university’s LMS. I’ll make the evaluation an assignment, and only make it accessible in the LMS for a very short period of time during class.
Also, a keyword will be associated with each peer evaluation, so that students who are not present in class cannot get credit just by signing in at the appropriate time and entering arbitrary values (barring bold and coordinated dishonesty). If students are absent when they are selected to present, their peers will be instructed to give them a 0. • I understand that even with the best intentions, students cannot make it to every class period. However, I don’t want to be in a position to judge whether a certain absence was “excused” and make manual adjustments to participation- and peer-evaluation-based grades. Clearly, a parent’s funeral is a satisfactory excuse and a Justin Bieber concert the night before is not, but there is a lot of gray area in between. Rather than handling these case-by-case, I will allow students two absences without impacting their grade: 2 missed peer evaluations, 2 missed presentations, or 1 of each. They can use these absences however they please, but there will be no exceptions beyond that, so they will be encouraged to use them wisely. I’m really looking forward to teaching this class again, and I hope these ideas will make it an even better learning experience for everyone next time around!

# Making LaTeX easier with Authorea and writeLaTeX

When I’m curious about exploring a new technical skill (such as a new programming language, a software tool, a development framework, etc.), I typically try to integrate its use into my normal work schedule. I select several tasks that I have to do anyway, and force myself to use this new skill to complete them. It ends up taking more time than it would have if I had just stuck with skills I was already comfortable with, but in the end I’m usually better for it. Sometimes, I love my new-found skill so much that I begin using it every day in my research.
Often, however, it just becomes another addition to my technical “toolkit”, increasing my productivity and really enabling me to choose The Best Tool for the Job for my future work. This was my experience with $\LaTeX$. As an undergraduate, I had seen several colleagues use it and had fiddled with it a bit myself. It wasn’t until later though, as a first-year grad student, that I really buckled down and forced myself to learn it while writing a paper for a computational statistics class. Yielding control of text and image placement to the LaTeX typesetting system took some getting used to, but I soon came to appreciate the quality of documents I could produce with it. Focusing on the concerns of content and presentation separately, as I had previously learned to do in web development, was another big bonus I recognized early on. The fact that LaTeX source documents are plain text made it easy to maintain a revision history with tools like svn and git, which I had also come to appreciate early on in my graduate career. And, of course, there is absolutely no comparison between typesetting mathematical formulae in LaTeX and in Microsoft Word. See this thread for a great discussion on the benefits of LaTeX over Word. I strongly encourage all of my colleagues to consider using LaTeX for their next publication. That being said, I understand that there is a bit of a learning curve with LaTeX, and setup/installation isn’t trivial for a beginner (unless you’re running Linux). However, I’ve seen a couple of web applications recently that should make the jump from Word to LaTeX much easier. Authorea and writeLaTeX are both web-based systems for authoring documents using LaTeX markup. While editing, Authorea renders the markup in HTML and only shows plain text for the section you are currently editing (of course, the final document is downloaded in PDF format).
writeLaTeX uses a different approach: a window for editing the LaTeX markup, and another window for previewing the typeset PDF file. Both of these applications are very easy to use. Both enable you to edit collaboratively with colleagues. And both are free to use. If you’re still using Microsoft Word to write your research manuscripts, consider learning LaTeX and getting your feet wet with one of these new tools!

# Frustration with Word and EndNote on Mac

Recently, I’ve been using Microsoft Word and EndNote to write a significant paper for the first time in several years (my advisor and I used LaTeX + git for my first first-author paper). After using it on my MacBook for several weeks with no more than the usual amount of frustration one can expect from EndNote and Word, EndNote stopped working all of a sudden. Every time I tried to insert a reference, it would get frozen at the “Formatting Bibliography” step and hang indefinitely. Force-quitting and restarting the programs didn’t seem to help anything. After a bit of searching, I came across this thread which provides a simple solution. The culprit for the unstable behavior seems to be an OS X system process called appleeventsd, and force quitting the process with this name using the Activity Monitor restored normal Word/EndNote behavior. I have done this several times in the last couple of weeks and haven’t seen any adverse side effects, so I will continue to do so until something goes wrong, or some OS X 10.8 update provides better stability, or my collaborators magically decide that LaTeX + git + BitBucket is in fact a superior solution after all!

# A new home for BioWi[sz]e (and why you shouldn’t post code on your lab website)

A recent post by Stephen Turner about the woes of posting code on your lab website really resonated with me. As a scientist I have occasionally clicked on a link or copy-and-pasted a URL from a paper, only to find that the web address I’m looking for no longer exists.
Sure it’s frustrating in the short term, but in the long term it’s troubling to think that so much of the collective scientific output has such a short digital shelf life. This happened to me again just yesterday. I was looking over this paper on algorithms for composition-based segmentation of DNA sequences, and I was interested in running one of the algorithms. The code, implemented in Matlab (good grief), is available (you guessed it!) from their lab website: http://nsm.uh.edu/~dgraur/eran/simulation/main.htm. Following that link takes you to a page with a warning that the lab website has moved, and if you follow that link you end up on the splash page for some department or institute that has no idea how to handle the redirect request. This paper is from 2010, and yet I can’t access supplements from their lab website! I did a bit of Google searching and found the first author’s current website, which even included links to the software mentioned in the paper, but unfortunately the links to the code point to the (now defunct) server published in the paper. I finally found the code buried in a Google Code project, and now I’m sitting here wondering whether it was really worth all the hassle in the first place, and whether I even want to check if our institution has a site license for Matlab… Ok, `</rant>...` With regard to my own research, I’ve been using hosting services like SourceForge, GitHub, and BitBucket to host my source code for years. However, I’ve continued using our lab server to host this blog, along with all the supplementary graphics and data that go along with it. I guess I initially enjoyed the amount of control I had. But after reading Stephen’s post, realizing how big a problem this is in general, and of course thinking of all of the fricking SELinux crap I’ve had to put up with (our lab servers run Fedora and Red Hat), the idea of using a blog hosting service all of a sudden seemed much more reasonable.
So as of this post, the BioWi[sz]e blog is officially migrated to WordPress.com. Unfortunately, someone got the http://biowise.wordpress.com subdomain less than a year ago—they even spent the $25 to reserve a `.me` domain, and yet they’re doing nothing with it. Grrr… So anyway, the BioWise you know and love is now BioWize, for better and for worse. As for the supplementary graphics and data files, I have followed Stephen Turner’s example and posted everything on FigShare. While uploading data files and providing relevant metadata was very straightforward, there is a bit of a learning curve when it comes to organizing and grouping related files. And once data is publicly published on FigShare, deleting it is not an option, even if you’re just trying to clean things up and fix mistakes. So if I could have done one thing differently, I would have been more careful about how I uploaded and grouped the files. Otherwise, I have no complaints. I love the idea that the content of my blog will be accessible long after I’ve moved on from my current institution (without any additional work on my part), and that all of the supporting data sets are permanently accessible, each with its own DOI.
https://www.physicsforums.com/threads/online-calculator-that-can-form-an-equation-of-a-curve.761212/
# Online calculator that can form an equation of a curve

Ledsnyder

Does anyone know of an online calculator that can form an equation of a curve with complex values and lots of parameters? Y = C1/(1 + C2 e^(C3 x)), for example.

Last edited:
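Short of a dedicated online tool, a fit like this takes only a few lines with SciPy's `curve_fit`. A sketch, assuming the logistic-style model from the question; the sample data, true parameter values, and starting guesses below are made up for demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, c1, c2, c3):
    # Y = C1 / (1 + C2 * e^(C3 * x)), the curve from the question
    return c1 / (1.0 + c2 * np.exp(c3 * x))

# Synthetic, noiseless data generated from known parameters
xdata = np.linspace(0.0, 5.0, 50)
ydata = model(xdata, 2.0, 1.5, -1.2)

# p0 is an initial guess; nonlinear fits generally need a sensible one
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0, -1.0])
```

`popt` holds the fitted parameters and `pcov` their covariance matrix; with real (noisy) data you would also pass measurement uncertainties via the `sigma` argument.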
https://indico2.riken.jp/event/3082/contributions/17159/
# The 24th International Spin Symposium

18-22 October 2021
Matsue, Shimane Prefecture, Japan
Asia/Tokyo timezone

## Exclusive production of Quarkonia and Heavy Flavors to access gluon Generalized Parton Distributions at EIC

20 Oct 2021, 17:00 (20m)
Room 501 (Kunibiki Messe)
Parallel Session Presentation: Form factors and GPDs

Tyler Schroeder

### Description

Exclusive heavy meson production is a key tool for accessing the inner dynamics of the proton. These reactions involve the proton Generalized Parton Distributions (GPDs), which correlate the longitudinal momenta of the proton’s constituent partons with their transverse distribution. The hard exclusive production of quarkonia ($J/\psi$, $\Upsilon$, etc.) is particularly interesting, as it accesses the gluon GPDs at the lowest order. We used ROOT to create a new, flexible generator for the photoproduction, quasi-photoproduction, and electroproduction of vector mesons off a proton. The output phase space is weighted by the reaction cross-section, producing a realistic distribution of event counts as a function of the kinematics. We will discuss the relevance of measuring hard exclusive production of quarkonia, present our work on the event generator, and discuss our projections for the upcoming Electron-Ion Collider (EIC).

### Presentation Materials

There are no materials yet.
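The cross-section weighting of phase space described in the abstract can be illustrated with a toy example. This is plain Python, not the actual ROOT generator, and the exponential $t$-dependence and slope parameter below are placeholders, not the quarkonium photoproduction model:

```python
import math
import random

def toy_dsigma_dt(t, b=4.0):
    # Placeholder differential cross section with exponential |t|-dependence;
    # NOT the actual model used by the generator described in the abstract.
    return math.exp(-b * abs(t))

def generate_events(n, t_min=-2.0, t_max=0.0, seed=42):
    """Sample the Mandelstam t uniformly and attach a cross-section weight
    to each event, so weighted histograms follow the physical distribution."""
    rng = random.Random(seed)
    events = []
    for _ in range(n):
        t = rng.uniform(t_min, t_max)
        events.append({"t": t, "weight": toy_dsigma_dt(t)})
    return events

events = generate_events(10_000)
# The weighted mean of t is pulled toward t = 0 relative to the flat sample
mean_t = sum(e["t"] * e["weight"] for e in events) / sum(e["weight"] for e in events)
```

The general idea is the same as in the generator: populate phase space uniformly, then carry the cross section as a per-event weight rather than rejecting events outright.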
https://socratic.org/questions/int-x-a-x-2-3-dx
# $\int \left(\frac{x}{a}+x^2\right)^3 \, dx = \ ?$

May 11, 2018

$\frac{x^7}{7} + \frac{x^6}{2a} + \frac{3x^5}{5a^2} + \frac{x^4}{4a^3} + C$

#### Explanation:

This integral can be solved by expanding the integrand. Note that the following is true:

${\left(\frac{x}{a} + x^2\right)}^{3} = \left(\frac{x}{a} + x^2\right)\left(\frac{x}{a} + x^2\right)\left(\frac{x}{a} + x^2\right)$

$= \left(\frac{x^2}{a^2} + \frac{2x^3}{a} + x^4\right)\left(\frac{x}{a} + x^2\right)$

$= \frac{x^3}{a^3} + \frac{2x^4}{a^2} + \frac{x^5}{a} + \frac{x^4}{a^2} + \frac{2x^5}{a} + x^6$

$= x^6 + \frac{3x^5}{a} + \frac{3x^4}{a^2} + \frac{x^3}{a^3}$

Substituting this expansion, our integral is now:

$\int \left(x^6 + \frac{3}{a}x^5 + \frac{3}{a^2}x^4 + \frac{1}{a^3}x^3\right) \, dx$

$= \frac{x^7}{7} + \frac{x^6}{2a} + \frac{3x^5}{5a^2} + \frac{x^4}{4a^3} + C$

If it pleases you, you can find common denominators to make this a single term, but this form looks fine to me.
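A quick numerical sanity check of the result, in plain Python: differentiating the antiderivative with central differences should recover the integrand at any sample point (the particular values of $a$ and $x$ below are arbitrary):

```python
def integrand(x, a):
    # (x/a + x^2)^3
    return (x / a + x**2) ** 3

def antiderivative(x, a):
    # x^7/7 + x^6/(2a) + 3x^5/(5a^2) + x^4/(4a^3)
    return x**7 / 7 + x**6 / (2 * a) + 3 * x**5 / (5 * a**2) + x**4 / (4 * a**3)

a, h = 2.0, 1e-6
for x in (0.5, 1.0, 1.7):
    # central-difference derivative of the antiderivative
    deriv = (antiderivative(x + h, a) - antiderivative(x - h, a)) / (2 * h)
    assert abs(deriv - integrand(x, a)) < 1e-5
```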
https://archive.lib.msu.edu/crcmath/math/math/d/d382.htm
## Dot Product

The dot product can be defined by

$\mathbf{X} \cdot \mathbf{Y} \equiv |\mathbf{X}|\,|\mathbf{Y}| \cos\theta \qquad (1)$

where $\theta$ is the angle between the vectors. It follows immediately that $\mathbf{X} \cdot \mathbf{Y} = 0$ if $\mathbf{X}$ is Perpendicular to $\mathbf{Y}$. The dot product is also called the Inner Product and written $\langle \mathbf{X}, \mathbf{Y} \rangle$. By writing

$\mathbf{A} \cdot \mathbf{B} = AB \cos\theta_{AB} \qquad (2)$

$(\mathbf{A}+\mathbf{B}) \cdot \mathbf{C} = |\mathbf{A}+\mathbf{B}|\, C \cos\theta_{(A+B)C} \qquad (3)$

it follows that (1) yields

$(\mathbf{A}+\mathbf{B}) \cdot \mathbf{C} = \mathbf{A} \cdot \mathbf{C} + \mathbf{B} \cdot \mathbf{C} \qquad (4)$

So, in general,

$\mathbf{X} \cdot \mathbf{Y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n \qquad (5)$

The dot product is Commutative

$\mathbf{X} \cdot \mathbf{Y} = \mathbf{Y} \cdot \mathbf{X} \qquad (6)$

Associative

$(r\mathbf{X}) \cdot \mathbf{Y} = r\,(\mathbf{X} \cdot \mathbf{Y}) \qquad (7)$

and Distributive

$\mathbf{X} \cdot (\mathbf{Y} + \mathbf{Z}) = \mathbf{X} \cdot \mathbf{Y} + \mathbf{X} \cdot \mathbf{Z} \qquad (8)$

The Derivative of a dot product of Vectors is

$\frac{d}{dt}\left[\mathbf{r}_1 \cdot \mathbf{r}_2\right] = \mathbf{r}_1 \cdot \frac{d\mathbf{r}_2}{dt} + \frac{d\mathbf{r}_1}{dt} \cdot \mathbf{r}_2 \qquad (9)$

The dot product is invariant under rotations

$\mathbf{X}' \cdot \mathbf{Y}' = x_i' y_i' = a_{ij} x_j\, a_{ik} y_k = \delta_{jk}\, x_j y_k = x_j y_j = \mathbf{X} \cdot \mathbf{Y} \qquad (10)$

where Einstein Summation has been used. The dot product is also defined for Tensors $\mathbf{A}$ and $\mathbf{B}$ by

$\mathbf{A} \cdot \mathbf{B} \equiv A^\alpha B_\alpha \qquad (11)$

Arfken, G. "Scalar or Dot Product." §1.3 in Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp. 13-18, 1985.
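The component formula and the listed properties are easy to verify numerically. A small pure-Python sketch (the sample vectors and rotation angle are arbitrary):

```python
import math

def dot(x, y):
    # component form: x.y = x1*y1 + x2*y2 + ... + xn*yn
    return sum(xi * yi for xi, yi in zip(x, y))

def rotate2d(v, theta):
    # apply the 2-D rotation matrix to v
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

x, y, z = (1.0, 2.0), (3.0, -1.0), (0.5, 2.0)
theta = 0.7

# commutative and distributive
assert dot(x, y) == dot(y, x)
assert abs(dot(x, (y[0] + z[0], y[1] + z[1])) - (dot(x, y) + dot(x, z))) < 1e-12

# invariant under rotations: rotating both vectors leaves x.y unchanged
assert abs(dot(rotate2d(x, theta), rotate2d(y, theta)) - dot(x, y)) < 1e-12

# consistency with the angle definition: x.y = |x||y| cos(theta_xy)
norm = lambda v: math.sqrt(dot(v, v))
theta_xy = math.acos(dot(x, y) / (norm(x) * norm(y)))
assert abs(norm(x) * norm(y) * math.cos(theta_xy) - dot(x, y)) < 1e-12
```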
http://forestbpms.co/k7mw6f/combined-events-probability-ff73df
Using Algebra we can also "change the subject" of the formula, like this: "The probability of event B given event A equals Some of the worksheets for this concept are Statistics, Independent and dependent events, Probability and compound events examples, Probability of compound events, Joint conditional marginal probabilities, Sample space events probability, Probability practice, Probability 2 text. So, what is the probability you will be a Goalkeeper today? We love notation in mathematics! Combined Events teaching resources for KS3 / KS4. What percent of those who like Chocolate also like Strawberry? The probability of events A and B to occur equals the product of the probabilities of each event occurring. The intersection of two or more simple events creates a compound event that occurs only if all the simple events occurs.? Worksheets with answers . (and subtract from 1 for the "Yes" case), (This idea is shown in more detail at Shared Birthdays. Plan included along with Powerpoint and Worksheet. If we want to know the probability of having the sum of two dice be 6, we can work with the 36 underlying outcomes of the form . The probability mass reserved for unseen events is equal to T / (N + T) where T is the number of observed event types and N is the total number of observed events. You need to login to view this content. The probability of a combined event ‘A and B’ is given ... Read more. Let's build a tree diagram. Extension worksheet also provided - scaffolded questions to help students discover 'and&' rule for themselves. An outcome that never Free resources for teachers and students to hopefully make the teaching and learning of mathematics a wee bit easier and more fun. … The complement of an event $E$, denoted ${E}^{\prime }$, is the set of outcomes in the sample space that are not in $E$. The conditional probability formula is as follows: P(X,given Y) or P(X∣Y)P(X, given~Y) \text{ or } P(X | Y)P(X,given Y) or P(X∣Y). 
Sometimes, we are interested in finding the probability that an event will not happen. UNIT-II : distributions: Binomial and poison distributions & Normal distribution related properties. If it is thrown three times, find the probability of getting a) three heads b) 2 heads and a tail c) at least one head. Tag: Probability > Probability of combined events. July 1, 2020 Craig Barton Based on a Context. How to handle Dependent Events. DRAFT. Join Us ) , ) 1st January 2021 / by johan1. 7.3 Probability of a Combined Event 7.3b Finding the Probability of Combined Events (a) A or B (b) A and B 1. Joint probability is a measure of two events happening at the same time, and can only be applied to situations where more than one observation can occur at the same time. January 20, 2021 Craig Barton Probability, Statistics and Probability. So if you think about it, the probability is going to be the number of events that meet these conditions, over the total number events. The probability of event X and event Y happening is the same thing as the point where X and Y intersect. The chance is simply 1-in-2, or 50%, just like ANY toss of the coin. 0% average accuracy. Events, like sets, can be combined in various ways described as follows. Conditional Probability. Multiple linear regression (MLR) is a statistical technique that uses several explanatory variables to predict the outcome of a response variable. When listing possible outcomes, try to be as logical as possible. Imagine that you are rolling a six-sided die (D) and flipping a penny (P) at the same time. jonesk5 Reformed functional skills whole course! List the sets representing the following: i)E 1 or E 2 or E 3 You need to get a "feel" for them to be a smart and successful person. Probability 2: Probability of combined events . Independent Events. 0 likes. Life is full of random events! • The probability of an outcome is a number between 0 and 1 inclusive. Cans of beans. 
We haven't included Alex as Coach: An 0.4 chance of Alex as Coach, followed by an 0.3 chance gives 0.12. It reflects the notion that smallest probability, reserved for impossible events, is zero. A moving average is a technical analysis indicator that helps smooth out price action by filtering out the “noise” from random price fluctuations. The accident. Find PowerPoint Presentations and Slides using the power of XPowerPoint.com, find free presentations research about Probability Of Combined Events PPT Grades K-8 Worksheets. First, the probability that a random 10-digit telephone number belongs to Obama is 1/10 10. First we show the two possible coaches: Sam or Alex: The probability of getting Sam is 0.6, so the probability of Alex must be 0.4 (together the probability is 1). Show Video Lesson. High School Math / Homework Help. The correlation coefficient is a statistical measure that calculates the strength of the relationship between the relative movements of two variables. Probability and Statistics. (1/5 + 4/5 = 5/5 = 1). Combined events-Card (Probability) Ask Question Asked 5 years, 5 months ago. The combined 5-year BS/MS degree in Actuarial Science, only available to UCSB undergraduates in the Actuarial major. Probability applies to situations in which there is a well defined trial whose possible outcomes are found among those in a given basic set. Combined Events: Probability Worksheet. Joint Probability: A joint probability is a statistical measure where the likelihood of two events occurring together and at the same point in time are calculated. The Difference Between Joint Probability and Conditional Probability. FREE (1) Popular paid resources. Events can be "Independent", meaning each event is not affected by any other events. The Precalculus course, often taught in the 12th grade, covers Polynomials; Complex Numbers; Composite Functions; Trigonometric Functions; Vectors; Matrices; Series; Conic Sections; and Probability and Combinatorics. 
In probability, two events are independent if the incidence of one event does not affect the probability of the other event. The second axiom of probability is that the probability of the entire sample space is one. It states that the probability of two independent events occurring together can be calculated by multiplying the individual probabilities of each event occurring alone. So the next event depends on what happened in the previous event, and is called dependent. You need to get a "feel" for them to be a smart and successful person. Active 5 years, 5 months ago. An outcome that always happens has probability 1. the probability of event A times the probability of event B given event A". January 29, 2020 January 29, 2020 Craig Barton Probability, Statistics and Probability. Lesson on finding combined probabilities by listing all possible outcomes for 2 or more events. GCSE Maths Specification and Awarding Body Information Videos . if we got a red marble before, then the chance of a blue marble next is 2 in 4, if we got a blue marble before, then the chance of a blue marble next is 1 in 4. Symbolically we write P(S) = 1. Use of … And that is a popular trick in probability: It is often easier to work out the "No" case The easiest case to examine when calculating probability with dice is the odds that a side will come up when throwing a single die. So you have all the possible events over all the possible events when you add all of these things up. Combined events probability-HELP! The symbol “∩” in a joint probability is referred to as an intersection. Viewed 178 times 1 $\begingroup$ A man draws one card at random from a complete pack of 52 playing cards, replaces it and then draws another card at random from the pack. Share this entry. For example, from a deck of 52 cards, the joint probability of picking up a card that is both red and 6 is P(6 ∩ red) = 2/52 = 1/26, since a deck of cards has two red sixes—the six of hearts and the six of diamonds. 
Events can be "Independent", meaning each event is not affected by any other events. The remaining probability mass is discounted such that all probability estimates sum to one, yielding: But for the "Alex and Blake did not match" there is now a 2/5 chance of Chris matching (because Chris gets to match his number against both Alex and Blake). This is because we are removing marbles from the bag. P(B|A) is also called the "Conditional Probability" of B given A. There is a 2/5 chance of pulling out a Blue marble, and a 3/5 chance for Red: We can go one step further and see what happens when we pick a second marble: If a blue marble was selected first there is now a 1/4 chance of getting a blue marble and a 3/4 chance of getting a red marble. The coin and the dice. For instance, joint probability can be used to estimate the likelihood of a drop in the Dow Jones Industrial Average (DJIA) accompanied by a drop in Microsoft’s share price, or the chance that the value of oil rises at the same time the U.S. dollar weakens. What it did in the past will not affect the current toss. Answer: it is a 2/5 chance followed by a 1/4 chance: Did you see how we multiplied the chances? A pair of dice is rolled; the outcome is viewed in terms of the numbers of spots appearing on the top faces of the two dice. A compound probability combines at least two simple events, also known as a compound event. In other words, if events $A$ and $B$ are independent, then the chance of $A$ occurring does not affect the chance of $B$ occurring and vice versa. probability of combined events worksheet. Revision of Probability of Combined Event KSSM Form 4. the smallest total would be 4; since each spinner has been spun twice. The offers that appear in this table are from partnerships from which Investopedia receives compensation. You are off to soccer, and want to be the Goalkeeper, but that depends who is the Coach today: Sam is Coach more often ... 
about 6 out of every 10 games (a probability of 0.6), and Alex coaches the rest (a probability of 0.4). With Coach Sam the probability of being Goalkeeper is 0.5; with Coach Alex the probability of being Goalkeeper is 0.3. Note that on each branch of the tree "Yes" and "No" together make 1. Now let's take it up a notch: dependence, conditioning, and Bayes methods.

Conditional probability is the chance of an event or outcome given that some other event or outcome has already occurred. Two events are independent when conditioning changes nothing: the probability of an event B occurring given that an event A has already occurred is the same as the probability of B occurring on its own. Determining the independence of events is important because it tells us whether we may apply the rule of product, multiplying the individual probabilities, to calculate combined probabilities.

A tree diagram is a wonderful way to picture what is going on, so let's build one for our marbles example: a bag holds 2 blue and 3 red marbles, and we draw two marbles without replacement. The first draw changes what is left in the bag, so the second draw depends on the first; such events are called dependent. Multiplying along the "blue, blue" branch gives 2/5 × 1/4 = 1/10, so the chance of drawing 2 blue marbles is 1/10.

Tree diagrams also work with percentages. Suppose 70% of your friends like Chocolate, and 35% like Chocolate AND like Strawberry. Then the fraction of Chocolate-likers who also like Strawberry is 0.35/0.70 = 50%, a conditional probability.

For a bigger example, suppose each of 4 friends (Alex, Blake, Chris and Dusty) picks a number from 1 to 5, and we want the chance that at least two of them match. Blake compares his number to Alex's number: there is a 1/5 chance of a match ("Yes") and a 4/5 chance of no match ("No"). Chris then compares his number with the two already chosen, and so on. We can work out the combined chance of each path by multiplying the chances it took to get there. Following the "No, Yes" path there is a 4/5 chance of No, followed by a 2/5 chance of Yes: 4/5 × 2/5 = 8/25. Following the "No, No" path there is a 4/5 chance of No, followed by a 3/5 chance of No: 4/5 × 3/5 = 12/25. Also notice that when we add all chances together we still get 1 (a good check that we haven't made a mistake). Completing the tree for all 4 friends, the "Yes" chances together make 101/125. But here is something interesting: if we follow the all-"No" path we can skip all the other calculations and make our life easier, since 4/5 × 3/5 × 2/5 = 24/125 is the chance of no match at all, and 1 − 24/125 = 101/125. (And we didn't really need a tree diagram for that!)
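The tree-path arithmetic is easy to check by machine. Here is a minimal Python sketch (the helper name `path_probability` is just an illustrative choice, not from any library), using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

def path_probability(branches):
    """Multiply the chances along one path of a tree diagram."""
    p = Fraction(1)
    for chance in branches:
        p *= chance
    return p

# The all-"No" path for the four friends picking numbers 1 to 5:
# 4/5 (Blake misses Alex) * 3/5 (Chris misses both) * 2/5 (Dusty misses all three).
no_match = path_probability([Fraction(4, 5), Fraction(3, 5), Fraction(2, 5)])
print(no_match)  # 24/125

# Complement shortcut: P(at least one match) = 1 - P(no match at all).
at_least_one_match = 1 - no_match
print(at_least_one_match)  # 101/125
```

The same `path_probability` call reproduces the marbles answer: `path_probability([Fraction(2, 5), Fraction(1, 4)])` gives 1/10.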
Several useful facts follow from the axioms of probability. The probability of the entire sample space is 1, so the probabilities of all possible outcomes in a given experiment must add to 1. It follows that the probability of an event happening and the probability of the same event not happening (its complement) always sum to 1. For example, the chance that a fair coin shows heads is 1/2, or 50%, and the chance of drawing a red card from a deck of cards is 1/2 = 0.5.

Events can be "independent", meaning each event is not affected by any other events. Each toss of a coin is a perfect isolated thing: what the coin did in the past will not affect the current toss. Other events are "dependent": the next event depends on what happened in the previous event, as with drawing marbles from a bag without replacement. Combining two or more simple events creates a compound event. The union of several simple events is a compound event that occurs if one or more of the events occur; the intersection is a compound event that occurs only if all the simple events occur. For independent events, the joint probability that event X and event Y both happen (the point where X and Y intersect) is calculated by multiplying the individual probability values. For dependent events we multiply by a conditional probability instead, so in general the probability of the combined event "A and B" is P(A and B) = P(A) × P(B given A). For events counted from an equally likely sample space S, the probability of "A or B" is n(A∪B)/n(S).

Returning to the goalkeeper example: an 0.6 chance of Sam coaching multiplied by an 0.5 chance of being Goalkeeper with Sam gives 0.3, and an 0.4 chance of Alex multiplied by an 0.3 chance gives 0.12, so the two "Yes" branches of the tree together make 0.3 + 0.12 = 0.42, a 0.42 probability of being a Goalkeeper today.

Naive counting arguments of this kind have limits, of course: the chance that a random 10-digit telephone number belongs to Obama is 1/10^10 only if we ignore complications such as Obama owning multiple phones or failing to answer personally. These ideas (combined events, conditional probability and Bayes' theorem) lead on to the standard distributions (binomial, Poisson and normal) that a statistics course covers next.
https://rdrr.io/cran/emplik/man/el.cen.test.html
el.cen.test: Empirical likelihood ratio for mean with right censored data,... In emplik: Empirical Likelihood Ratio for Censored/Truncated Data

Description

This program computes the maximized (wrt p_i) empirical log likelihood function for right censored data with the MEAN constraint:

∑_i [ d_i p_i g(x_i) ] = ∫ g(t) dF(t) = μ

where p_i = ΔF(x_i) is a probability and d_i is the censoring indicator. The d for the largest observation is always taken to be 1. It then computes the -2 log empirical likelihood ratio, which should be approximately chi-square distributed if the constraint is true. Here F(t) is the (unknown) CDF; g(t) can be any given left continuous function in t. μ is a given constant. The data must contain some right censored observations. If there is no censoring, or the only censoring is the largest observation, the code will stop and we should use el.test( ), which is for uncensored data.

The log empirical likelihood being maximized is

∑_{d_i=1} log ΔF(x_i) + ∑_{d_i=0} log [ 1-F(x_i) ].

Usage

el.cen.test(x, d, fun=function(x){x}, mu, error=1e-8, maxit=15)

Arguments

x a vector containing the observed survival times.
d a vector containing the censoring indicators, 1-uncensor; 0-censor.
fun a left continuous (weight) function used to calculate the mean as in H_0. fun(t) must be able to take a vector input t. Default to the identity function f(t)=t.
mu a real number used in the constraint, sum to this value.
error an optional positive real number specifying the tolerance of iteration error in the QP. This is the bound of the L_1 norm of the difference of two successive weights.
maxit an optional integer, used to control the maximum number of iterations.

Details

When the given constant μ is too far away from the NPMLE, there will be no distribution satisfying the constraint. In this case the computation will stop; the -2 log empirical likelihood ratio should be infinite. The constant mu must be inside ( min f(x_i) , max f(x_i) ) for the computation to continue.
It is always true that the NPMLE values are feasible, so when the computation cannot continue, try moving mu closer to the NPMLE, or use a different fun.

This function depends on Wdataclean2(), WKM() and solve3.QP().

This function uses sequential Quadratic Programming to find the maximum. Unlike other functions in this package, it can be slow for larger sample sizes. It took about one minute for a sample of size 2000 with 20% censoring on a 1GHz, 256MB PC, and about 19 seconds on a 3GHz, 512MB PC.

Value

A list with the following components:

"-2LLR" the -2 log likelihood ratio.
xtimes the locations of the CDF jumps.
weights the jump sizes of the CDF at those locations.
Pval the P-value.
error the L_1 norm between the last two wts.
iteration the number of iterations carried out.

Author(s)

Mai Zhou, Kun Chen

References

Pan, X. and Zhou, M. (1999). Using 1-parameter sub-family of distributions in empirical likelihood ratio with censored data. J. Statist. Plann. Inference. 75, 379-392.

Chen, K. and Zhou, M. (2000). Computing censored empirical likelihood ratio using Quadratic Programming. Tech Report, Univ. of Kentucky, Dept of Statistics.

Zhou, M. and Chen, K. (2007). Computation of the empirical likelihood ratio from censored data. Journal of Statistical Computing and Simulation, 77, 1033-1042.

Examples

el.cen.test(rexp(100), c(rep(0,25),rep(1,75)), mu=1.5)
## second example with tied observations
x <- c(1, 1.5, 2, 3, 4, 5, 6, 5, 4, 1, 2, 4.5)
d <- c(1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1)
el.cen.test(x,d,mu=3.5)
## we should get "-2LLR" = 1.246634 etc.

emplik documentation built on May 29, 2017, 11:44 a.m.
https://www.physicsforums.com/threads/complex-numbers-question.385778/
# Complex numbers question ;)

1. Mar 11, 2010

### Maatttt0

1. The problem statement, all variables and given/known data

One root of the equation x^2 + ax + b = 0 is 4 + 5i. Write down the second root.

2. Relevant equations

N/a?

3. The attempt at a solution

My problem is it's a "write down" question which suggests no working required. This is probably so simple but I just don't know... I cannot do this even though I know the later parts of the question. Thank you in advance ;)

2. Mar 11, 2010

### Pengwuino

Write down the quadratic formula as you know it. The quadratic formula gives you two solutions using one formula. What changes about the formula that gives you 2 separate solutions?

3. Mar 11, 2010

### Maatttt0

The plus or minus, so would it be 4 - 5i?

4. Mar 11, 2010

### Pengwuino

Yup! The quadratic formula gives solutions as

$$\frac{-b}{2a} \pm \frac{\sqrt{b^2 - 4ac}}{2a}$$

However, when the discriminant is negative you can rewrite the quadratic formula by pulling a factor of -1 out of the term in the square root to get

$$\frac{-b}{2a} \pm i\,\frac{\sqrt{4ac - b^2}}{2a}$$

So you can identify the 4 and the 5 with the real and imaginary parts of the solution.

5. Mar 11, 2010

### Maatttt0

Hehe, thank you so much :) I know it was rather simple but my mind just wouldn't trigger. Ty again ;)

6. Mar 13, 2010

### jbunniii

By the way, that answer is correct only if $a$ and $b$ are both real. If the problem doesn't specify that they are, then the other root could be any complex number.
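The conjugate-root answer is easy to verify numerically. A small Python check (hypothetical, not part of the thread) using Vieta's formulas, which say that for x^2 + ax + b the roots sum to -a and multiply to b:

```python
# If 4 + 5i is a root of x^2 + a*x + b with REAL a and b,
# the second root is the complex conjugate 4 - 5i.
r1 = 4 + 5j
r2 = r1.conjugate()  # 4 - 5j

# Vieta's formulas: x^2 + a*x + b = (x - r1)(x - r2),
# so a = -(r1 + r2) and b = r1 * r2; both come out real.
a = -(r1 + r2)
b = r1 * r2
print(a.real, b.real)  # -8.0 41.0  (imaginary parts are zero)

# Substituting either root back into x^2 + a*x + b gives zero.
for x in (r1, r2):
    assert x**2 + a*x + b == 0
```

This also illustrates jbunniii's caveat: the conjugate pairing comes from a and b being real. Pick a non-real coefficient and the argument breaks down.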
https://www.phenomenalworld.org/analysis/nokia-risk/
January 4, 2023

Analysis

# The Nokia Risk

##### Small countries, big firms, and the end of the fifth Schumpeterian wave

In the early 2000s, Finland was the darling of industrial and employment policy analysts everywhere. This small country with a population of 5.5 million and a GDP roughly equal to the state of Oregon experienced what looked like a high tech-led productivity revolution. Real GDP per capita in local currency terms rose 55 percent from 1995 to 2007—nearly double the US increase and close to the pinnacle of the twenty-one richest OECD industrialized economies. Yet spectacular growth abruptly halted after 2008. GDP continued to rise with population growth, but from 2008 to 2019 real Finnish per capita income declined. The European Central Bank’s dilatory response to the eurozone crisis, and the austerity policies that followed, undoubtedly explain part of this abysmal performance. But an equally large part is due to Finland having many of its growth eggs in a single basket: Nokia. Nokia’s handsets and related telephony equipment accounted for 20 percent of Finnish exports at peak, driving Finland’s current account surplus to nearly 7 percent of GDP.1 When the Apple iPhone launched in 2007, Nokia’s handset market collapsed, exports fell by half, Finland’s current account swung into deficit, and a decade plus of economic stagnation began. Finland is not the only economy facing “Nokia risk.” A larger group of seven countries—all of them relatively small, rich, and with stable governments—are similarly exposed. In Denmark, Israel, South Korea, Sweden, Switzerland, and Taiwan a handful of firms account for a hugely disproportionate share of both profits and R&D spending. The firms which dominate these seven economies have all been extraordinarily successful in the knowledge economy of the past three decades: Samsung Electronics in Korea, Taiwan Semiconductor Manufacturing Co.
in Taiwan, Novo Nordisk (pharmaceuticals) in Denmark, and Roche and Novartis (pharmaceuticals) in Switzerland. For the decade of 2011–2020, these firms have had large shares of cumulative profits both domestically and abroad. It is largely thanks to these profits that these small countries have such a significant share of global profit (larger than their share of global GDP) and, in turn, a relatively high per capita income. Table 1 shows the ratio of the share of cumulative global profit and of R&D expenditures to country share of global GDP, with some comparisons to the larger rich OECD countries. The figures are based on 21,580 global ultimate owners (headquarters firms) with annual sales averaging at least $10 million between 2011 and 2020, and which have enough comprehensive financial data to make comparison possible. While this calculation excludes a big chunk of small and medium sized enterprises (or SMEs, firms with fewer than 250 employees), these 21,580 are the biggest among the 41 million in the Bureau van Dijk Orbis database. The handful of firms discussed here is thus both mathematically and macroeconomically significant, as the very long tail of SMEs typically depends on those larger firms for their revenue. For example, Danish pharmaceutical giant Novo Nordisk alone accounts for a quarter of cumulative profits for the 200 plus Danish firms in Table 1; its 0.2 percent share of cumulative profits for all 21,580 firms is almost 50 times the average for that group.

Table 1: Ratio of share of cumulative Profit and R&D spending to share of Global GDP, by country, 2011–2020

In the coming years, these firms and the national economies that house them will confront two big challenges. On the economic front, the fifth Schumpeterian growth wave, which was built on information and communication technologies (ICT) as well as traditional and first-generation bio-engineered pharmaceuticals, has exhausted much of its growth impulse.
A potential sixth wave built on artificial intelligence (AI), machine learning (ML), and second-generation biotech (CRISPR) as general-purpose technologies threatens these firms’ existing production systems and markets. At the same time, governments everywhere are ramping up antitrust and other attacks on visible monopolies, and edging towards more nationalistic industrial and trade policy. In light of the central role of these firms that account for disproportionate shares of their home economy exports, failure to navigate these challenges will result in a major economic shock for their small, rich countries.

### Schumpeterian waves and industrial transitions

Joseph Schumpeter’s analysis of dynamic change in capitalist economies—what he termed “creative destruction”—sheds light on the economic risks facing the core firms in these seven economies. Schumpeter argued that the central puzzle in economics was explaining the sources of dynamic growth. In an economy that actually embodied the starting assumptions of mainstream economics—small, competitive firms with no barriers to entry and no pricing power—profits would fall to the cost of capital plus some managerial wage for owner-operators. In essence, profits would only cover depreciation. Consequently, extensive and intensive growth would slow to the rate of population growth plus some gains from the normal productivity creep that incremental innovation produced for firms—in short, the kind of economic slump that prompted the secular stagnation debate of the 2010s. This circular economy, as Schumpeter called it, would never produce the periodic eruptions of rapid technological change and growth that he observed in the two centuries after the industrial revolution. As he put it, in these circumstances, you could add as many stagecoaches as you wanted to the economy but never get to a railroad.
Dynamic growth required radical innovation in five key, interconnected areas: new modes of transportation, new energy sources, new consumer goods, new general purpose production technologies, and, though he underemphasized this, new legal organization or governance structures for firms. Writing on the eve of the Second World War, Schumpeter identified four such revolutions that had so far taken place. The first initiated the industrial revolution, centering on canals, water mills, textiles, and other household nondurables, with small owner-operated factories collecting handicraft producers under one roof. The second, in the mid-1800s, focused on steam power, railroads, and iron goods, as well as larger but still owner-operated factories using custom made machinery. The third, towards the end of the 1800s, emerged from electricity, steel steamships, urban trams, bicycles and chemicals, and saw the rise of large corporations which began to separate ownership and management. The fourth wave centered on internal combustion engines for land and air transport, petroleum, mass consumption of consumer durables like vehicles, continuous flow-assembly line production, and vertically integrated and often multidivisional firms with full separation of ownership and management. This wave began in the United States, most famously with Ford’s Model T, and spread to Europe and Japan. The transition periods between each of these growth spurts all saw increased domestic or international conflict and decreased investment in ageing growth sectors. Thus the transition from wave three to wave four saw the beginning of industrial unions and coincided with rising interimperial conflict that eventually produced World War I. The fifth wave began in the US in the 1960s with the development of the semiconductor chip, and soon spread globally. 
It is based on connectivity via electronics (ICT), negative energy consumption via digitalization, pharmaceuticals and first-generation biotechnologies, software and semiconductors, global supply chains, and de jure vertically disintegrated firms with de facto control by lead firms. This era’s iconic product, the smartphone, embodies the whole range of electronics products developed from the 1940s onward in a compact and relatively cheap form. There are many reasons to think this fifth wave is nearing exhaustion. Smartphone sales levelled off in 2018 and then declined somewhat, signalling replacement sales rather than growth. Despite the annual iPhone launch hype, most of the improvements to smartphones in recent years are largely marginal. Roughly 80 percent of the world’s population has 4G access—if they can pay for it. The entire global electronics industry is linked to personal computers and smartphones, which account for roughly half of global chip sales. Similarly, new pharmaceutical discovery levelled off in the 2000s, with most new drugs being copies or modifications of older drugs.2 But sales volumes simply reflect more important trends. Several key technologies and industries have already emerged in the energy and transportation space as well as, albeit with less clarity, in production software and pharmaceuticals.3 Alternative and renewable energy sources are clearly replacing fossil fuels as the basis for electricity generation and equipment power, including the critical transportation sector. And AI and ML appear to be general purpose production technologies reshaping both biotechnology (along with the new CRISPR-Cas gene editing technology) and the organization of machinery production. Table 2 summarizes the economic and political risks this sixth wave poses for core firms in these seven economies. 
Table 2: Economic and political risks

### Calculating risk

Creating the huge reticulation networks that made new energy and transport sources useful required equally huge investments. One mile of railroad was pointless, 100 miles was revolutionary. Schumpeter, in his earlier works, argued that only entrepreneurs hyping potential monopoly profits could induce bankers to finance these huge investments. He called this the Mark I model, in which small start-up firms run by visionaries upend existing incumbents. Many of the current software giants fit this pattern, but it also characterized the earliest days of Ford. Later, Schumpeter noted that high-profit corporations could channel their monopoly profits to dedicated internal research labs and generate the same kind of revolutionary innovation—his Mark II innovation model. Here, think of AT&T’s Bell Labs, which invented the transistor. Software aside, we largely live in a world that combines both Mark I and Mark II innovation in a complex web that mostly favors larger firms. Typically, states promote radical innovation, often by funding basic research in university labs and their small-firm spin-offs—classic Mark I innovation. But larger firms then typically provide those small firms with more funding to develop a commercializable product that their own Mark II R&D teams will perfect. While these smaller firms often get the publication glory, the larger firms usually get the bulk of patents and thus profits. Apple’s SIRI assistant, initially developed by a US-wide research team organized through Stanford University’s Research Institute on behalf of the Defense Advanced Research Projects Agency (DARPA), was absorbed by Apple into its own software ecosystem. If profits and R&D were evenly distributed across firms in each of our seven economies, these new technologies might not pose so big a threat.
But under the present combination of Mark I and Mark II innovation, a handful of firms account for the bulk of profits and R&D. For example, insulin accounts for roughly 7 percent of cumulative Danish goods exports for the period 2002–2021, with most of this coming from the pharmaceutical giant Novo Nordisk. 80 percent of Novo Nordisk’s revenue comes from sales of insulin and diabetes-related products, but 80 percent of Denmark’s workforce is in the service sector, which apart from Maersk-Møller and Legoland largely does not generate exports. Thus, the fate of Novo Nordisk has the potential to impact one of the country’s only significant export sectors. Table 3 shows the degree to which this very narrow slice of firms dominates both profitability and R&D in our group of seven economies. The table thus captures the degree of vulnerability around Mark II innovation. Comprehensive data on Mark I innovation—which ranges from a handful of people tinkering in their parents’ garage to modestly sized start-ups with no revenue—is not available. As noted above, Schumpeter’s two pathways to radical innovation are more intermingled today than in the past, when vertically integrated firms were rather sealed off from both universities and small firms. These days, much Mark I innovation is often captured by the larger firms through acqui-hires, acquisition, or copycat innovation and litigation. Equally so, accurate, comparable data on the relative share of national profits by firm size is not available. While the$10 million annual operating revenue metric is biased towards fairly large firms, it does reach into the upper end of the long tail of SMEs that constitute the majority of firms in most countries. Moreover, SMEs typically have labor productivity levels one-third below larger firms. 
Table 3: Firm share of cumulative national R&D and profits for high R&D firms, 2004–2020

These large firms dominate exports, and thus generate the foreign exchange needed to buy the considerable volume of goods and services that the seven economies do not produce locally. Given their small size, these economies cannot attain a domestic division of labor that generates the bulk of their consumption—hence their need for export. In all but one of the countries, firms with more than 250 employees account for more than half of exports. Denmark is the exception; there the share is so close to half that it makes no difference (data for Taiwan are not available, but as noted above TSMC’s large share suggests the pattern of large firm dominance is not much different). Table 3 understates the importance of the larger firms, as these typically account for fewer than 1 percent of the total count of firms, while firms with fewer than 10 employees account for over 90 percent. As noted above, these few dominant firms largely explain why these societies capture a larger share of global profit than their share of global GDP (Table 1), which in turn explains why they are relatively rich societies. Success in generating high-value exports and their associated profits permits these societies to exchange a small volume of high-value exports for a much larger volume of lower-value imports. This disproportionate profit share reflects both pluck—local ability in the form of prior investments in R&D and human capital—as well as the broader dynamics of the modern economy. These firms rely on their investments in R&D to maintain their competitiveness. Their investments in turn explain the above average share of R&D in their economies as compared with the larger economies of the G7. Table 4 provides four relevant pieces of information. It contrasts R&D spending and the number of full-time researchers in the population relative to the G7 countries.
It also shows the high share of total R&D by business, as well as the disproportionate share of total R&D spending accounted for by both the largest and the fifth-largest firms, and thus their even greater share of business R&D (the picture is somewhat different in the case of Taiwan). Table 4 thus shows the degree to which future innovation—at least in terms of the Mark II innovation that turns Mark I innovation into a global product—rests on a very narrow foundation. We could construct an index of risk by multiplying the share of business R&D by the share of the top one or five firms, but this would give a false sense of precision. The central point here—looking at the last two columns—is one of striking concentration of research spending. The overlap of high profitability and profit share, export share, and R&D share is not accidental. It indicates past competitiveness and near-monopoly or dominant positions in world markets. Profits fund the R&D that enables dominance and thus continued above-average shares of global profits; those profits fund high levels of per capita income—among other things, all those researcher jobs. And they fund, in some cases, extensive welfare states or at least state education funding that generates the human capital those researchers possess, and which is the basis for past and potentially future dominance in high-tech sectors. Thus, for example, between 1991 and 2000, Swedish education spending increased by 2.1 percentage points of GDP to 7.4 percent, with half of that going to tertiary education.
The Nordics have significantly higher literacy scores in standardized international tests than the rest of the rich countries, though Sweden has recently slipped back.4

Table 4: R&D intensity of economy, various measures, percent, average as indicated, ranked by column 1

### The franchise economy

The shift from the fourth (mass production) to the fifth (ICT) Schumpeterian wave involved changes in corporate strategy and structure that had significant knock-on effects. Chief among them, it boosted income inequality and increased the degree to which firms’ profitability depended on the legal regime around intellectual property rights (IPRs: patents, copyrights, brands, and trademarks). In the Fordist era, corporate strategy aimed at monopoly or oligopoly profits through control over large masses of physical capital arranged into continuous-flow, assembly-line systems. Profitability rested on running those systems at something close to their full capacity. This pushed firms to vertically integrate and negotiate peace with their typically unionized labor forces, which in turn reduced income inequality and funded internal research labs. But as more and more firms adopted this vertically integrated, unionized structure, profits began to decline. Workers revolted against the monotony and pace of assembly-line production, and decolonization enabled raw-materials producers to push up prices, disrupting energy and metals markets. Put simply, once everyone adopted a Fordist product and production structure, the world ran out of cheap oil and docile semi-skilled assembly-line workers. Firms reacted to this militancy by changing their corporate structure. They shrank their labor forces and opted to subcontract or offshore their low-wage, low-skill workforce. Similarly, they expelled physical assets—machines—used for producing undifferentiated goods into spin-off firms, as when GM and Ford created Delphi (now Aptiv) and Visteon as parts producers.
At the same time, they began seeking more durable monopolies based on IPRs produced by a labor force high in human capital and supported by an army of subcontractors. This shift, which both coincided with and enabled the emergence of the fifth Schumpeterian wave, produced what I call a franchise structure. In the franchise economy, lead firms with lots of human capital, few actual employees, and substantial intellectual property portfolios outsource much of production to two other generic types of subordinated firms. Second-layer firms are typically more capital-intensive firms with some barrier to entry protecting their production processes. Third-layer firms are labor-intensive firms producing undifferentiated goods and services. The lead firm orchestrates almost everything in its value chain without bearing any of the risk of holding physical capital or dealing with masses of workers. In our seven economies, the lead firms are all among the country’s largest, albeit sometimes hybrids of the top two layers. Profitability for those firms derives from their control of intellectual property rights. Nokia, for example, lives on as a producer of network hardware and software based on earlier and ongoing R&D that generated more than 10,000 US patents from 2000 to 2019. Unlike the barrier to entry posed by capital-intensive production, patents are vulnerable to radical technological change and also to radical change in the legal regime surrounding them. While the shift to a franchise structure was good for firms with robust IPR portfolios and, by extension, for the high human-capital intensity of the workforce of our seven economies, it was less good for workers and firms producing undifferentiated goods and services. Downsizing meant shifting relatively well-paying jobs to low-wage countries, hollowing out the middle of the income distribution. Between 1998 and 2016, for example, Sweden’s industrial giants cut their workforce from 80,000 to 49,000 people.
While younger labor market entrants into more IPR-based firms partly compensated for this, the loss of so many mid-level jobs has had broad socioeconomic effects, arguably contributing to the rise of anti-immigrant parties in the country. Nativist policies, in turn, threaten the ability of firms to attract the global talent they need to sustain R&D.

### Coping with risk

Danish industrial policy with regard to pharmaceuticals illustrates the tensions around both Mark I and Mark II types of innovation strategy. Recent Danish governments have promoted Denmark as an “innovation country” and formed a “Disruption Council” intended to preserve the country’s economic position. Danish R&D is highly concentrated in the pharmaceutical and biomedical sector (Vestas’s wind turbine production and servicing provides a side bet on alternative energy, but Chinese competition squeezes profitability). A government-run public goods strategy supports the emergence of Mark I type firms in biotechnology, thus providing a “feedstock” for the Mark II oriented Novo Nordisk. This strategy continues a long tradition of “extension” type services designed to bring new technology and best practices from universities and high-performing firms to more traditional firms, including manufacturing SMEs. It is supplemented with venture capital from the Danish Growth Fund (Vækstfonden). The policy has helped keep Danish firms largely at the technology frontier, producing a narrower gap between leading and lagging firms than in the rest of the OECD. But the challenge in pharmaceuticals is anticipating the bio-genomics revolution rather than adapting to it. In cooperation with Novo Nordisk, the Danish government runs a cluster of data collection and dissemination organizations: the Danish National Genome Center (Nationalt Genom Center), the National Biobank (Danmarks Nationale Biobank, under the State Serum Institute), and the overarching Royal Bio and Genome Bank.
These centers leverage the comprehensive and diachronic data the state collects from all hospitals and practitioners on a large range of conditions and patient variables. For example, the Biobank currently has serum samples from nearly 1 million people and 25 million samples in toto. The Genome Center was able to establish a genomic sequence database for distinguishing Covid-19 variants within five days in March 2020. Quite apart from their value to Novo Nordisk (which accounts for 90 percent of Danish pharmaceutical employment and profits), the databanks also enable smaller biotech firms to access large volumes of data on relatively rarer (in terms of incidence) diseases and thus help them overcome one major research hurdle. Put simply, they generate and hold the data that machine-learning- and AI-driven R&D projects need to function. Novo Nordisk meanwhile acts as a central contractor with many smaller research-oriented firms—though these contracts generally choke off growth from Mark I innovation in favor of pre-emption by the lead firm. Overall, however, Denmark lacks indigenous AI and ML firms that might supply expertise to the entire sector, as compared with either Israel or Sweden. Those countries display different strategies with different weaknesses. Israel’s huge defense-related investment in software and sensor capacity created a vibrant Mark I-type tech sector. But Israelis doing that Mark I innovation typically sell their firms or technology to US Mark II type firms, as when Google bought the navigation firm Waze or when Intel acquired Mobileye, which develops autonomous driving technologies. The relative absence of big domestic tech firms explains Israel’s deviation from the broader pattern of profit share being above GDP share (Table 1). It also explains why so many Israelis—an estimated 100,000—simply migrate to Silicon Valley, even though Israel’s so-called Silicon Wadi usually houses more start-up firms per capita than any other country.
Both trends potentially inhibit a response to the AI and ML revolution in software. The Israeli state, meanwhile, has relaxed the stringent employment and production requirements it put in place to stimulate Silicon Wadi in the first place. By contrast, Sweden has an abundance of large, mature Mark II type firms. Yet these have gradually hollowed out actual production in favor of simply generating intellectual property. The extreme case here is Volvo. Its old automobile capacity was sold first to Ford and then to the Chinese holding company Zhejiang Geely.5 Geely relies on Swedish engineering talent to design the new Volvo electric vehicle line (including Polestar), but production has largely shifted outside of Sweden. Similarly, the Swedish pharmaceutical giants Pharmacia and Astra ended up controlled by Pfizer (USA) and AstraZeneca (UK), respectively. As noted above, the big Swedish manufacturing firms have steadily reduced employment. Like Israel, Sweden is increasingly a hunting ground for foreign multinational firms looking for talented individuals. This sustains high-wage employment—at least for some. But it also means the profits end up somewhere else, and large Mark II firms that might anchor a research network are harder to form.

### High tides

The sixth Schumpeterian wave, should it indeed appear, poses serious risks for the largest, export-focused firms of our seven economies. Presently, a narrow set of IPR-based firms in the Mark II model does the forward-looking investment in R&D that enables the transformation and scaling-up of Mark I innovation required to catch that wave. It also generates both the jobs and the revenues needed to sustain a politically acceptable level of imports, employment, and growth in general. The potential inability of the big, highly profitable firms that anchor local research ecosystems to transition from their current production model to the novel production models now emerging will have serious consequences.
This risk extends beyond the “innovator’s dilemma.” Domestically, the loss of core manufacturing jobs in the second layer of the franchise economy has provoked populist backlashes. In both Israel and Sweden, this has empowered parties hostile to state-led industrial policy favoring highly paid knowledge workers. Externally, growing geopolitical tension between the United States and China has prompted efforts to reshore or “near-shore” the ICT sector, particularly semiconductor production. All told, this probably tilts the global playing field towards firms from the larger and more geopolitically powerful countries. As noted above, those firms already fish in the human-capital pools of the seven economies of our study, potentially undermining the survival or emergence of the Mark II firms that anchor local R&D and production ecosystems. If the winner-take-all nature of the franchise economy continues, these older firms must run fast simply to stay in place. For the pawns of the global economy—smaller economies without national champions like Nokia or Samsung, and without oil-fund assets as in Norway—these challenges are even more pronounced. They enter this race with greater headwinds, weighed down by external debt, relatively untrained workers, and, in the worst cases, an over-reliance on unprocessed raw-materials exports.

1. All data on exports are from the UN International Trade Center. Data on R&D are from the European Commission database on the 2,500 largest R&D spenders in any given year from 2004 through 2019. This amounts to 5,303 firms across 56 countries and territorial units.
2. That said—as we will note below—a rising share of drug approvals concern biosimilars rather than traditional small-molecule pharmaceuticals.
3. The US state is already organizing policy around this. See remarks by National Security Advisor Jake Sullivan in September 2022.
4. Data supplied by John Stephens, University of North Carolina.
5.
Not to be confused with AB Volvo, now largely a heavy-truck manufacturer after spinning out Volvo Cars.
https://pos.sissa.it/340/004/
Volume 340 - The 39th International Conference on High Energy Physics (ICHEP2018) - Parallel: Beyond the Standard Model

Exotic signals of heavy scalar bosons through vectorlike quarks

J. Song,* K. Cheung, S.K. Kang, Y.W. Yoon (*corresponding author)

Published on: August 02, 2019

Abstract: In an extension of the SM with an additional singlet scalar field $S$ and vector-like quarks, we study the condition for the radiative enhancement of a heavy scalar boson decay into a massive gauge boson pair. Focusing on the loop effects, we assume that $S$ is linked to the standard model world only through loops of vector-like quarks. The radiative effects are the mixing with the Higgs boson and the loop-induced decays into $hh$, $WW$, $ZZ$, $gg$, and $\gamma\gamma$. The critical condition for the longitudinal polarization enhancement is a large mass difference among the vector-like quarks.

DOI: https://doi.org/10.22323/1.340.0004
https://socratic.org/questions/what-is-the-free-energy-for-the-dissolution-of-solid-sodium-chloride-in-water-at
# What is the free energy for the dissolution of solid sodium chloride in water at 25°C?

## A) What is the free energy for the dissolution of solid sodium chloride in water at 25°C? NaCl(s) <-> Na+(aq) + Cl-(aq) B) What is the solubility product constant for sodium chloride in water at 25°C?

Dec 16, 2016

A) Given the reaction:

$$\text{NaCl}(s) \ \stackrel{\text{H}_2\text{O}(l)}{\longrightarrow}\ \text{Na}^+(aq) + \text{Cl}^-(aq)$$

the $\Delta G_{\text{rxn}}^\circ$ is computed as usual, just like $\Delta H_{\text{rxn}}^\circ$:

$$\Delta G_{\text{rxn}}^\circ = \sum_{P} \nu_P\, \Delta G_{f,P}^\circ - \sum_{R} \nu_R\, \Delta G_{f,R}^\circ$$

$$= \left[\nu_{\text{Na}^+(aq)}\, \Delta G_{f,\,\text{Na}^+(aq)}^\circ + \nu_{\text{Cl}^-(aq)}\, \Delta G_{f,\,\text{Cl}^-(aq)}^\circ\right] - \left[\nu_{\text{NaCl}(s)}\, \Delta G_{f,\,\text{NaCl}(s)}^\circ\right]$$

$$= \left[(1)(-261.9\ \text{kJ/mol}) + (1)(-131.2\ \text{kJ/mol})\right] - \left[(1)(-384.0\ \text{kJ/mol})\right]$$

$$= -261.9\ \text{kJ/mol} - 131.2\ \text{kJ/mol} + 384.0\ \text{kJ/mol} = -9.1\ \text{kJ/mol}$$

A magnitude this small is expected when dissolving soluble solutes.

B) The larger the $K_{\text{sp}}$, the more the equilibrium is skewed towards the aqueous products, and thus the more soluble the compound is in water. Obviously, $\text{NaCl}$ is extremely soluble in water ($359\ \text{g/L}$), so $K_{\text{sp}}$ should be very large. Since we have $\Delta G_{\text{rxn}}^\circ$ for this process, we can use the fact that $\Delta G_{\text{rxn}} = 0$ at equilibrium, where $Q = K_{\text{sp}}$:

$$0 = \Delta G_{\text{rxn}}^\circ + RT \ln K_{\text{sp}}$$

$$\implies \Delta G_{\text{rxn}}^\circ = -RT \ln K_{\text{sp}}$$

$$\implies K_{\text{sp}} = e^{-\Delta G_{\text{rxn}}^\circ / RT}$$

$$= e^{-(-9.1\ \text{kJ/mol}) / \left[(0.008314472\ \text{kJ/mol}\cdot\text{K})(298.15\ \text{K})\right]} = 39.29$$

And indeed, $K_{\text{sp}}$ is fairly large.
For poorly soluble compounds, the $K_{\text{sp}}$ is often less than $10^{-5}$.
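As a quick numerical check of the two answers, the same arithmetic can be run in a few lines of Python (a sketch using the formation free energies quoted in the worked solution; not part of the original answer):

```python
import math

# Standard free energies of formation at 25 °C, in kJ/mol (values from the answer above)
dG_f = {"NaCl(s)": -384.0, "Na+(aq)": -261.9, "Cl-(aq)": -131.2}

# Products minus reactants, weighted by stoichiometric coefficients (all 1 here)
dG_rxn = (dG_f["Na+(aq)"] + dG_f["Cl-(aq)"]) - dG_f["NaCl(s)"]

R = 0.008314472   # gas constant in kJ/(mol*K)
T = 298.15        # 25 °C in kelvin
K_sp = math.exp(-dG_rxn / (R * T))   # K_sp = e^(-dG°/RT)

print(round(dG_rxn, 1), round(K_sp, 2))  # -9.1 39.29
```

Both the free energy and the equilibrium constant reproduce the hand calculation.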
https://www.groundai.com/project/convex-hulls-of-algebraic-sets/
Convex Hulls of Algebraic Sets

João Gouveia (Department of Mathematics, University of Washington, Box 354350, Seattle, WA 98195, USA, and CMUC, Department of Mathematics, University of Coimbra, 3001-454 Coimbra, Portugal), jgouveia@math.washington.edu
Rekha Thomas (Department of Mathematics, University of Washington, Box 354350, Seattle, WA 98195, USA), thomas@math.washington.edu

July 6, 2019

Abstract. This article describes a method to compute successive convex approximations of the convex hull of a set of points in $\mathbb{R}^n$ that are the solutions to a system of polynomial equations over the reals. The method relies on sums of squares of polynomials and the dual theory of moment matrices. The main feature of the technique is that all computations are done modulo the ideal generated by the polynomials defining the set to be convexified. This work was motivated by questions raised by Lovász concerning extensions of the theta body of a graph to arbitrary real algebraic varieties, and hence the relaxations described here are called theta bodies. The convexification process can be seen as an incarnation of Lasserre’s hierarchy of convex relaxations of a semialgebraic set in $\mathbb{R}^n$. When the defining ideal is real radical, the results become especially nice. We provide several examples of the method and discuss convergence issues. Finite convergence, especially after the first step of the method, can be described explicitly for finite point sets.

1 Introduction

An important concern in optimization is the complete or partial knowledge of the convex hull of the set of feasible solutions to an optimization problem. Computing convex hulls is in general a difficult task, and a classical example is the construction of the integer hull of a polyhedron, which drives many algorithms in integer programming. In this article we describe a method to convexify (at least approximately) an algebraic set using semidefinite programming.
By an algebraic set we mean a subset of $\mathbb{R}^n$ described by a finite list of polynomial equations of the form $f_1(x) = \cdots = f_m(x) = 0$, where each $f_i$ is an element of $\mathbb{R}[x] := \mathbb{R}[x_1, \ldots, x_n]$, the polynomial ring in $n$ variables over the reals. The input to our algorithm is the ideal generated by $f_1, \ldots, f_m$, denoted as $I = \langle f_1, \ldots, f_m \rangle$, which is the set $\{\sum_{i=1}^{m} h_i f_i : h_i \in \mathbb{R}[x]\}$. An ideal is a group under addition and is closed under multiplication by elements of $\mathbb{R}[x]$. Given an ideal $I$, its real variety $V_{\mathbb{R}}(I) := \{p \in \mathbb{R}^n : f(p) = 0 \ \forall\, f \in I\}$ is an example of an algebraic set. Given $I$, we describe a method to produce a nested sequence of convex relaxations of the closure of $\mathrm{conv}(V_{\mathbb{R}}(I))$, the convex hull of $V_{\mathbb{R}}(I)$, called the theta bodies of $I$. The $k$-th theta body $\mathrm{TH}_k(I)$ is obtained as the projection of a spectrahedron (the feasible region of a semidefinite program), and

$$\mathrm{TH}_1(I) \supseteq \mathrm{TH}_2(I) \supseteq \cdots \supseteq \mathrm{TH}_k(I) \supseteq \mathrm{TH}_{k+1}(I) \supseteq \cdots \supseteq \mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I))).$$

Of special interest to us are real radical ideals. We define the real radical of an ideal $I$, denoted as $\sqrt[\mathbb{R}]{I}$, to be the set of all polynomials $f$ such that $f^{2m} + \sum g_i^2 \in I$ for some $m \in \mathbb{N}$ and $g_i \in \mathbb{R}[x]$. We say that $I$ is a real radical ideal if $I = \sqrt[\mathbb{R}]{I}$. Given a set $S \subseteq \mathbb{R}^n$, the vanishing ideal of $S$, denoted as $\mathcal{I}(S)$, is the set of all polynomials in $\mathbb{R}[x]$ that vanish on $S$. The Real Nullstellensatz (see Theorem 2.2) says that for any ideal $I$, $\sqrt[\mathbb{R}]{I} = \mathcal{I}(V_{\mathbb{R}}(I))$.

The construction of theta bodies for arbitrary ideals was motivated by a problem posed by Lovász. In [ShannonCapacity], Lovász constructed the theta body of a graph, a convex relaxation of the stable set polytope of a graph, which was later shown to have a description in terms of semidefinite programming. An important result in this context is that the theta body of a graph coincides with the stable set polytope of the graph if and only if the graph is perfect. Lovász observed that the theta body of a graph could be described in terms of sums of squares of real polynomials modulo the ideal of polynomials that vanish on the incidence vectors of stable sets. This observation naturally suggests the definition of a theta body for any ideal in $\mathbb{R}[x]$.
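A minimal worked illustration of a sums-of-squares certificate modulo an ideal (our own, not taken from the paper): take $I = \langle x^2 - x \rangle \subset \mathbb{R}[x]$, so that $V_{\mathbb{R}}(I) = \{0, 1\}$ and $\mathrm{conv}(V_{\mathbb{R}}(I)) = [0, 1]$. Both facet inequalities of $[0,1]$ are congruent to squares of linear polynomials modulo $I$:

```latex
x - x^2 = -(x^2 - x) \in I
  \;\Longrightarrow\; x \equiv x^2 \pmod{I},
\qquad
(1-x) - (1-x)^2 = x - x^2 \in I
  \;\Longrightarrow\; 1 - x \equiv (1-x)^2 \pmod{I}.
```

Hence the linear polynomials $x$ and $1-x$ both have degree-one square certificates modulo $I$, so the first theta body already satisfies $\mathrm{TH}_1(I) \subseteq [0,1]$; since every theta body contains $\mathrm{conv}(V_{\mathbb{R}}(I))$, in fact $\mathrm{TH}_1(I) = [0,1]$ and the first relaxation is exact for this ideal.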
In fact, an easy extension of his observation leads to a hierarchy of theta bodies for all ideals as above. In [Lovasz, Problem 8.3], Lovász asked to characterize all ideals $I$ with the property that their first theta body coincides with $\mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I)))$, which was the starting point of our work. For defining ideals of finite point sets we answer this question in Section 4.

This article is organized as follows. In Section 2 we define theta bodies of an ideal in $\mathbb{R}[x]$ in terms of sums of squares polynomials. For a general ideal $I$, we get that $\mathrm{TH}_k(I)$ contains the closure of the projection of a spectrahedron which is described via combinatorial moment matrices from $\mathbb{R}[x]/I$. When the ideal is real radical, we show that $\mathrm{TH}_k(I)$ coincides with the closure of the projected spectrahedron, and when $I$ is the defining ideal of a set of points in $\mathbb{R}^n$, the closure is not needed. We establish a general relationship between the theta body sequence of an ideal $I$ and that of its real radical ideal $\sqrt[\mathbb{R}]{I}$. Section 3 gives two examples of the construction described in Section 2. As our first example, we look at the stable sets in a graph and describe the hierarchy of theta bodies that result. The first member of the hierarchy is Lovász's theta body of a graph. This hierarchy converges to the stable set polytope in finitely many steps, as is always the case when we start with a finite set of real points. The second example is a cardioid in the plane, in which case the algebraic set that is being convexified is infinite. In Section 4 we discuss convergence issues for the theta body sequence. When $V_{\mathbb{R}}(I)$ is compact, the theta body sequence is guaranteed to converge to the closure of $\mathrm{conv}(V_{\mathbb{R}}(I))$ asymptotically. We prove that when $V_{\mathbb{R}}(I)$ is finite, $\mathrm{TH}_k(I) = \mathrm{conv}(V_{\mathbb{R}}(I))$ for some finite $k$. In the case of finite convergence, it is useful to know the specific value of $k$ for which $\mathrm{TH}_k(I) = \mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I)))$. This notion is called exactness, and we characterize exactness for the first theta body when the set to be convexified is finite. There are examples in which the theta body sequence does not converge to $\mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I)))$.
While a full understanding of when convergence occurs is still elusive, we describe one obstruction to finite convergence in terms of certain types of singularities of $V_{\mathbb{R}}(I)$. The last section gives more examples of theta bodies and their exactness. In particular, we consider cuts in a graph and polytopes coming from the graph isomorphism question.

The core of this paper is based on results from [GPT] and [GLPT], which are presented here with a greater emphasis on geometry, avoiding some of the algebraic language in the original results. Theorems 2.5, 4.2 and their corollaries are new, while Theorem 4.4 is from [GouNet]. The application of theta bodies to polytopes that arise in the graph isomorphism question is taken from [DHMO].

Acknowledgments. Both authors were partially supported by the NSF Focused Research Group grant DMS-0757371. J. Gouveia was also partially supported by Fundação para a Ciência e Tecnologia and R.R. Thomas by the Robert R. and Elaine K. Phelps Endowed Professorship.

2 Theta bodies of polynomial ideals

To describe the convex hull of an algebraic set, we start with a simple observation about any convex hull. Given a set $S \subseteq \mathbb{R}^n$, $\mathrm{cl}(\mathrm{conv}(S))$, the closure of $\mathrm{conv}(S)$, is the intersection of all closed half-spaces containing $S$:

$$\mathrm{cl}(\mathrm{conv}(S)) = \{p \in \mathbb{R}^n : \ell(p) \geq 0 \ \text{for all } \ell \in \mathbb{R}[x]_1 \ \text{s.t.}\ \ell|_S \geq 0\}.$$

From a computational point of view, this observation is useless, as the right-hand side is hopelessly cumbersome. However, if $S$ is the zero set of an ideal $I$, we can define nice relaxations of the above intersection of infinitely many half-spaces using a classical strengthening of the nonnegativity condition $\ell|_S \geq 0$. We describe these relaxations in this section. In Section 2.1 we introduce our method for arbitrary ideals in $\mathbb{R}[x]$. In Section 2.2 we specialize to real radical ideals, which occur frequently in applications, and show that in this case much stronger results hold than for general ideals.
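For intuition, the half-space description of $\mathrm{cl}(\mathrm{conv}(S))$ can be checked directly on a small finite set. The following is a from-scratch sketch (the set $S$ and the candidate linear polynomials are our own, not the paper's):

```python
# cl(conv(S)) is the intersection of the half-spaces {l >= 0} over all linear
# polynomials l that are nonnegative on S.  For a finite S we can at least
# test the defining condition l|_S >= 0 for any candidate l.
S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # conv(S) is a triangle

def valid_on(l, pts):
    """Check l|_pts >= 0 for l = (c0, c1, c2), meaning l(p) = c0 + c1*p1 + c2*p2."""
    c0, c1, c2 = l
    return all(c0 + c1 * p1 + c2 * p2 >= 0 for (p1, p2) in pts)

print(valid_on((1.0, -1.0, -1.0), S))  # 1 - x1 - x2 >= 0 holds on S: True
print(valid_on((0.0, 1.0, 0.0), S))    # x1 >= 0 holds on S: True
print(valid_on((-0.5, 1.0, 0.0), S))   # x1 - 0.5 >= 0 cuts off (0,0): False
```

Here the first two candidates are among the facet-defining inequalities of the triangle, while the third is rejected because it is violated at a point of $S$.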
2.1 General ideals

Recall that given an ideal $I \subseteq \mathbb{R}[x]$, two polynomials $f$ and $g$ are defined to be congruent modulo $I$, written as $f \equiv g \bmod I$, if $f - g \in I$. The relation $\equiv$ is an equivalence relation on $\mathbb{R}[x]$, and the equivalence class of a polynomial $f$ is denoted as $f + I$. The set of all congruence classes of polynomials modulo $I$ is denoted as $\mathbb{R}[x]/I$, and this set is both a ring and an $\mathbb{R}$-vector space: given $f + I,\ g + I \in \mathbb{R}[x]/I$ and $\lambda \in \mathbb{R}$, $(f + I) + \lambda (g + I) = (f + \lambda g) + I$ and $(f + I)(g + I) = fg + I$. Note that if $f \equiv g \bmod I$, then $f(p) = g(p)$ for all $p \in V_{\mathbb{R}}(I)$. We will say that a polynomial $f$ is a sum of squares (sos) modulo $I$ if there exist polynomials $h_1, \ldots, h_t$ such that $f \equiv \sum_{j=1}^{t} h_j^2 \bmod I$. If $f$ is sos modulo $I$ then we immediately have that $f$ is nonnegative on $V_{\mathbb{R}}(I)$. In practice, it is important to control the degree of the $h_j$ in the sos representation of $f$, so we will say that $f$ is $k$-sos mod $I$ if $f \equiv \sum_{j=1}^{t} h_j^2 \bmod I$ with $h_j \in \mathbb{R}[x]_k$, where $\mathbb{R}[x]_k$ is the set of polynomials in $\mathbb{R}[x]$ of degree at most $k$. The set of polynomials that are $k$-sos mod $I$, considered as a subset of $\mathbb{R}[x]/I$, will be denoted as $\Sigma_k(I)$.

Definition 1 Let $I \subseteq \mathbb{R}[x]$ be a polynomial ideal. We define the $k$-th theta body of $I$ to be the set

$$\mathrm{TH}_k(I) := \{p \in \mathbb{R}^n : \ell(p) \geq 0 \ \text{for all } \ell \in \mathbb{R}[x]_1 \ \text{s.t.}\ \ell \ \text{is } k\text{-sos mod } I\}.$$

Since, if $\ell$ is $k$-sos mod $I$ then $\ell \geq 0$ on $V_{\mathbb{R}}(I)$, and $\mathrm{TH}_k(I)$ is closed, $\mathrm{TH}_k(I) \supseteq \mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I)))$. Also, $\mathrm{TH}_{k+1}(I) \subseteq \mathrm{TH}_k(I)$, since as $k$ increases we are potentially intersecting more half-spaces. Thus, the theta bodies of $I$ create a nested sequence of closed convex relaxations of $\mathrm{cl}(\mathrm{conv}(V_{\mathbb{R}}(I)))$.

We now present a related semidefinite programming relaxation of $\mathrm{conv}(V_{\mathbb{R}}(I))$ using the theory of moments. For $I$ an ideal, let $B = \{f_0 + I, f_1 + I, f_2 + I, \ldots\}$ be a basis for the $\mathbb{R}$-vector space $\mathbb{R}[x]/I$. We assume that the polynomials $f_i$ representing the elements of $B$ are minimal-degree representatives of their equivalence classes $f_i + I$. This makes the set $B_k := \{f_i + I \in B : \deg(f_i) \leq k\}$ well-defined. In this paper we will restrict ourselves to a special type of basis $B$.

Definition 2 Let $I \subseteq \mathbb{R}[x]$ be an ideal. A basis $B = \{f_i + I\}$ of $\mathbb{R}[x]/I$ is a $\theta$-basis if it satisfies the following conditions:

1. $B_1 = \{1 + I, x_1 + I, \ldots, x_n + I\}$;
2. $\deg(f_i) \leq \deg(f_j)$, for $i \leq j$;
3. $f_i$ is monomial for all $i$;
4. if $f_i + I, f_j + I \in B_k$ then $f_i f_j + I$ is in the real span of $B_{2k}$.

We will also always assume that $B$ is ordered and that $f_0 = 1$. Using Gröbner bases theory, one can see that if $V_{\mathbb{R}}(I)$ is not contained in any proper affine space, then $\mathbb{R}[x]/I$ always has a $\theta$-basis.
For instance, take the $f_i$ in $B$ to be the standard monomials of an initial ideal of $I$ with respect to some total-degree monomial ordering on $\mathbb{R}[x]$ (see for example [CLO]). The methods we describe below work with non-monomial bases of $\mathbb{R}[x]/I$ as explained in [GPT]. We restrict to a $\theta$-basis in this survey for ease of exposition and since the main applications we will discuss only need this type of basis.

Fix a $\theta$-basis $B$ of $\mathbb{R}[x]/I$ and define $[x]_{B_k}$ to be the column vector formed by all the elements of $B_k$ in order. Then $[x]_{B_k}[x]_{B_k}^T$ is a square matrix indexed by $B_k$ with $(i,j)$-entry equal to $f_i f_j + I$. By hypothesis, the entries of $[x]_{B_k}[x]_{B_k}^T$ lie in the $\mathbb{R}$-span of $B_{2k}$. Let $\{\lambda_{i,j}^{l}\}$ be the set of real numbers such that $f_i f_j + I = \sum_{f_l + I \in B_{2k}} \lambda_{i,j}^{l}\,(f_l + I)$. We now linearize $[x]_{B_k}[x]_{B_k}^T$ by replacing each element of $B_{2k}$ by a new variable.

Definition 3 Let $I$, $B$, and $\{\lambda_{i,j}^{l}\}$ be as above. Let $y$ be a real vector indexed by $B_{2k}$. The $k$-th combinatorial moment matrix $M_{B_k}(y)$ of $I$ is the real matrix indexed by $B_k$ whose $(i,j)$-entry is $\sum_{f_l + I \in B_{2k}} \lambda_{i,j}^{l}\, y_l$.

Example 1 Let $I \subset \mathbb{R}[x_1, x_2]$ be an ideal satisfying $x_1^2 \equiv 2x_2 - x_1$ and $x_1 x_2 \equiv 0$ modulo $I$. Then a $\theta$-basis for $\mathbb{R}[x]/I$ would begin $B = \{1 + I, x_1 + I, x_2 + I, x_2^2 + I, \ldots\}$. Let us construct the matrix $M_{B_1}(y)$. Consider the vector $[x]_{B_1} = (1, x_1, x_2)^T$; then

$$[x]_{B_1}[x]_{B_1}^T = \begin{pmatrix} 1 & x_1 & x_2 \\ x_1 & x_1^2 & x_1 x_2 \\ x_2 & x_1 x_2 & x_2^2 \end{pmatrix} \equiv \begin{pmatrix} 1 & x_1 & x_2 \\ x_1 & 2x_2 - x_1 & 0 \\ x_2 & 0 & x_2^2 \end{pmatrix} \bmod I.$$

We now linearize the resulting matrix using $y_i$, where $i$ indexes the $i$-th element of $B_2$, and get

$$M_{B_1}(y) = \begin{pmatrix} y_0 & y_1 & y_2 \\ y_1 & 2y_2 - y_1 & 0 \\ y_2 & 0 & y_3 \end{pmatrix}.$$

The matrix $M_{B_k}(y)$ will allow us to define a relaxation of $\mathrm{conv}(V_{\mathbb{R}}(I))$ that will essentially be the theta body $\mathrm{TH}_k(I)$.

Definition 4 Let $I \subseteq \mathbb{R}[x]$ be an ideal and $B$ a $\theta$-basis of $\mathbb{R}[x]/I$. Then

$$Q_{B_k}(I) := \pi_{\mathbb{R}^n}\!\left(\{y \in \mathbb{R}^{B_{2k}} : y_0 = 1,\ M_{B_k}(y) \succeq 0\}\right)$$

where $\pi_{\mathbb{R}^n}$ is the projection of $\mathbb{R}^{B_{2k}}$ to $\mathbb{R}^n$, its coordinates indexed by $x_1 + I, \ldots, x_n + I$.

The set $Q_{B_k}(I)$ is a relaxation of $\mathrm{conv}(V_{\mathbb{R}}(I))$. Pick $p \in V_{\mathbb{R}}(I)$ and define $y_p := (f_l(p))_{f_l + I \in B_{2k}}$. Then $(y_p)_0 = 1$, $M_{B_k}(y_p) = [x]_{B_k}(p)\,[x]_{B_k}(p)^T \succeq 0$, and $\pi_{\mathbb{R}^n}(y_p) = p$. We now show the connection between $Q_{B_k}(I)$ and $\mathrm{TH}_k(I)$.

Theorem 2.1 For any ideal $I \subseteq \mathbb{R}[x]$ and any $\theta$-basis $B$ of $\mathbb{R}[x]/I$, we get $\mathrm{cl}(Q_{B_k}(I)) \subseteq \mathrm{TH}_k(I)$.

Proof We start with a general observation concerning $k$-sos polynomials. Suppose $f \equiv \sum h_j^2 \bmod I$ where $h_j \in \mathbb{R}[x]_k$. Each $h_j$ can be identified with a real row vector $v_j$ such that $h_j \equiv v_j\,[x]_{B_k} \bmod I$, and so $f \equiv [x]_{B_k}^T \left(\sum v_j^T v_j\right) [x]_{B_k} \bmod I$. Denoting by $A$ the positive semidefinite matrix $\sum v_j^T v_j$ we get $f \equiv [x]_{B_k}^T A\,[x]_{B_k} \bmod I$. In general, $A$ is not unique. Let $w_f$ be the real row vector such that $f \equiv w_f\,[x]_{B_{2k}} \bmod I$. Then check that for any column vector $y$ indexed by $B_{2k}$, $w_f\, y = \langle A, M_{B_k}(y) \rangle$, where $\langle \cdot, \cdot \rangle$ stands for the usual entry-wise inner product of matrices.

Suppose $p \in Q_{B_k}(I)$, and $y \in \mathbb{R}^{B_{2k}}$ such that $y_0 = 1$, $M_{B_k}(y) \succeq 0$, and $\pi_{\mathbb{R}^n}(y) = p$.
Since $\mathrm{TH}_k(I)$ is closed, we just have to show that for any $\ell \in \mathbb{R}[\mathbf{x}]_1$ that is $k$-sos modulo $I$, $\ell(\mathbf{p}) \geq 0$. Since $\ell$ is linear, $y_0 = 1$ and $\pi_{\mathbb{R}^n}(\mathbf{y}) = \mathbf{p}$, we have $\ell(\mathbf{p}) = \sum_l \mu_l y_l = \langle A, M_{\mathcal{B}_k}(\mathbf{y}) \rangle \geq 0$, since $A \succeq 0$ and $M_{\mathcal{B}_k}(\mathbf{y}) \succeq 0$. In the next subsection we will see that when $I$ is a real radical ideal, $\overline{Q_{\mathcal{B}_k}(I)}$ coincides with $\mathrm{TH}_k(I)$.

The idea of computing approximations of the convex hull of a semialgebraic set in $\mathbb{R}^n$ via the theory of moments and the dual theory of sums of squares polynomials is due to Lasserre Lasserre1 (); Lasserre2 (); Lasserre3 () and Parrilo Parrilo:phd (); Parrilo:spr (). In his original set up, the moment relaxations obtained are described via moment matrices that rely explicitly on the polynomials defining the semialgebraic set. In Lasserre2 (), the focus is on semialgebraic subsets of $\{0,1\}^n$, where the equations $x_i^2 = x_i$ are used to simplify computations. This idea was generalized in Laurent () to arbitrary real algebraic varieties and studied in detail for zero-dimensional ideals. Laurent showed that the moment matrices needed in the approximations of $\mathrm{conv}(V_{\mathbb{R}}(I))$ could be computed modulo the ideal defining the variety. This greatly reduces the size of the matrices needed, and removes the dependence of the computation on the specific presentation of the ideal in terms of generators. The construction of the set $Q_{\mathcal{B}_k}(I)$ is taken from Laurent (). Since an algebraic set is also a semialgebraic set (defined by equations), we could apply Lasserre's method to $V_{\mathbb{R}}(I)$ to get a sequence of approximations of $\mathrm{conv}(V_{\mathbb{R}}(I))$. The results are essentially the same as theta bodies if the generators of $I$ are picked carefully. However, by restricting ourselves to real varieties, instead of allowing inequalities, and concentrating on the sum of squares description of theta bodies, as opposed to the moment matrix approach, we can prove some interesting theoretical results that are not covered by the general theory for Lasserre relaxations.
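The reduce-then-linearize computation of Example 1 can be reproduced with a computer algebra system. The sketch below uses sympy and assumes, as in Example 1, only the two congruences $x_1^2 \equiv 2x_2 - x_1$ and $x_1 x_2 \equiv 0$, taking the corresponding polynomials as divisors (they need not generate the whole ideal of the example):

```python
# Sketch (assumption: we divide only by the two relations visible in
# Example 1, x1^2 + x1 - 2*x2 and x1*x2; grlex division by them
# reproduces the reduced matrix shown above).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
rels = [x1**2 + x1 - 2*x2, x1*x2]

def reduce_mod(expr):
    """Remainder of expr on division by the relations (grlex order)."""
    _, r = sp.reduced(expr, rels, x1, x2, order='grlex')
    return sp.expand(r)

f1 = sp.Matrix([1, x1, x2])                  # the vector f_1
outer = f1 * f1.T                            # entries f_i * f_j
reduced_outer = outer.applyfunc(reduce_mod)  # rewritten in the basis
print(reduced_outer)  # x1^2 -> 2*x2 - x1, x1*x2 -> 0, x2^2 stays
```

Replacing $1, x_1, x_2, x_2^2$ by $y_0, y_1, y_2, y_3$ in the printed matrix gives exactly $M_{\mathcal{B}_1}(\mathbf{y})$.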
Many of the usual results for Lasserre relaxations rely on the existence of a non-empty interior for the semialgebraic set to be convexified, which is never the case for a real variety, or on compactness of the semialgebraic set, which we do not want to impose. Recall from the introduction that given an ideal $I \subseteq \mathbb{R}[\mathbf{x}]$ its real radical is the ideal
$$\sqrt[\mathbb{R}]{I} = \left\{f \in \mathbb{R}[\mathbf{x}] : f^{2m} + \sum g_i^2 \in I \text{ for some } m \in \mathbb{N}, \, g_i \in \mathbb{R}[\mathbf{x}]\right\}.$$
The importance of this ideal arises from the Real Nullstellensatz.

Theorem 2.2 (Bounded degree Real Nullstellensatz Lombardi ()) Let $I \subseteq \mathbb{R}[\mathbf{x}]$ be an ideal. Then there exists a function $\varphi : \mathbb{N} \to \mathbb{N}$ such that, for all polynomials $f$ of degree at most $d$ that vanish on $V_{\mathbb{R}}(I)$, $f^{2m} + \sum g_i^2 \in I$ for some polynomials $g_i$ such that $\deg(f^{2m})$ and $\deg(g_i^2)$ are all bounded above by $\varphi(d)$. In particular $\sqrt[\mathbb{R}]{I} = \mathbb{I}(V_{\mathbb{R}}(I))$.

When $I$ is a real radical ideal, the sums of squares approach and the moment approach for theta bodies of $I$ coincide, and we get a stronger version of Theorem 2.1.

Theorem 2.3 For any real radical ideal $I \subseteq \mathbb{R}[\mathbf{x}]$ and any $\theta$-basis $\mathcal{B}$ of $\mathbb{R}[\mathbf{x}]/I$, $\mathrm{TH}_k(I) = \overline{Q_{\mathcal{B}_k}(I)}$.

Proof By Theorem 2.1 we just have to show that $\mathrm{TH}_k(I) \subseteq \overline{Q_{\mathcal{B}_k}(I)}$. By (PowSchei, Prop 2.6), the set of elements of $\mathbb{R}[\mathbf{x}]/I$ that are $k$-sos modulo $I$ is closed when $I$ is real radical. Let $\ell \in \mathbb{R}[\mathbf{x}]_1$ be nonnegative on $Q_{\mathcal{B}_k}(I)$ and suppose $\ell$ is not $k$-sos modulo $I$. By the separation theorem, we can find $\mathbf{y} \in \mathbb{R}^{\mathcal{B}_{2k}}$ such that $\mathbf{y}(\ell) < 0$ but $\mathbf{y}(f) \geq 0$ for all $f$ that are $k$-sos modulo $I$, where we write $\mathbf{y}(f) := \sum_l \mu_l y_l$ for $f \equiv \sum_l \mu_l f_l \mod I$; equivalently, $\langle A, M_{\mathcal{B}_k}(\mathbf{y}) \rangle \geq 0$ for every sos representing matrix $A$. Since $A$ runs over all possible positive semidefinite matrices of size $|\mathcal{B}_k|$, and the cone of positive semidefinite matrices of a fixed size is self-dual, we have $M_{\mathcal{B}_k}(\mathbf{y}) \succeq 0$. Let $t$ be any real number and consider the $k$-sos polynomial $(\ell + t)^2$. Then $t^2 y_0 + 2t\,\mathbf{y}(\ell) + \mathbf{y}(\ell^2) \geq 0$. Since $\mathbf{y}(\ell) < 0$ and $t$ can be arbitrarily large, $y_0$ is forced to be positive. So we can scale $\mathbf{y}$ to have $y_0 = 1$, so that $\pi_{\mathbb{R}^n}(\mathbf{y}) \in Q_{\mathcal{B}_k}(I)$. By hypothesis we then have $\ell(\pi_{\mathbb{R}^n}(\mathbf{y})) \geq 0$, but by the linearity of $\ell$, $\ell(\pi_{\mathbb{R}^n}(\mathbf{y})) = \mathbf{y}(\ell) < 0$, which is a contradiction, so $\ell$ must be $k$-sos modulo $I$. This implies that any linear inequality valid for $Q_{\mathcal{B}_k}(I)$ is valid for $\mathrm{TH}_k(I)$, which proves $\mathrm{TH}_k(I) \subseteq \overline{Q_{\mathcal{B}_k}(I)}$.

We now have two ways of looking at the relaxations — one by a characterization of the linear inequalities that hold on them and the other by a characterization of the points in them. These two perspectives complement each other.
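The self-duality of the PSD cone used in this proof amounts to the fact that the entry-wise inner product of two positive semidefinite matrices is nonnegative. A quick numerical illustration (random matrices, purely for illustration):

```python
# Sketch: <A, M> = trace(A M) >= 0 whenever A and M are both PSD,
# since trace(G G^T H H^T) = ||G^T H||_F^2.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
H = rng.standard_normal((4, 4))
A = G @ G.T            # PSD by construction
M = H @ H.T            # PSD by construction

inner = np.trace(A @ M)
print(inner, np.linalg.norm(G.T @ H, 'fro')**2)  # equal, hence nonnegative
```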
The inequality version is useful to prove (or disprove) convergence of theta bodies to $\overline{\mathrm{conv}(V_{\mathbb{R}}(I))}$ while the description via semidefinite programming is essential for practical computations. All the applications we consider use real radical ideals, in which case the two descriptions of $\mathrm{TH}_k(I)$ coincide up to closure. In some cases, as we now show, the closure can be omitted.

Theorem 2.4 Let $I \subseteq \mathbb{R}[\mathbf{x}]$ be a real radical ideal and $k$ be a positive integer. If there exists some linear polynomial $\ell$ that is $k$-sos modulo $I$ with a representing matrix that is positive definite, then $Q_{\mathcal{B}_k}(I)$ is closed and equals $\mathrm{TH}_k(I)$.

Proof For this proof we will use a standard result from convex analysis: Let $V$ and $W$ be finite dimensional vector spaces, $K \subseteq V$ be a closed convex cone and $T : W \to V$ be a linear map such that the image of $T$ intersects $\mathrm{int}(K)$. Then $(T^{-1}(K))^* = T^*(K^*)$, where $T^*$ is the adjoint operator to $T$. In particular, $T^*(K^*)$ is closed in $W^*$, the dual vector space to $W$. The proof of this result follows, for example, from Corollary 3.3.13 in BorweinLewis () applied to the indicator function of $K$.

Throughout the proof we will identify the span of $\mathcal{B}_k$, for all $k$, with the space $\mathbb{R}^{\mathcal{B}_k}$ by simply considering the coordinates in the basis $\mathcal{B}_k$. Consider the inclusion map $T : \mathbb{R}^{\mathcal{B}_1} \to \mathbb{R}^{\mathcal{B}_{2k}}$, and let $\Sigma$ be the cone in $\mathbb{R}^{\mathcal{B}_{2k}}$ of classes of polynomials that can be written as a sum of squares of polynomials of degree at most $k$. The interior of this cone is precisely the set of sums of squares with a positive definite representing matrix $A$. Our hypothesis then simply states that the image of $T$ intersects $\mathrm{int}(\Sigma)$, which implies by the above result that $T^*(\Sigma^*)$ is closed. Note that $\Sigma^*$ is the set of elements $\mathbf{y} \in \mathbb{R}^{\mathcal{B}_{2k}}$ such that $\mathbf{y}(f)$ is nonnegative for all $f \in \Sigma$, and this is the same as demanding $\langle A, M_{\mathcal{B}_k}(\mathbf{y}) \rangle \geq 0$ for all positive semidefinite matrices $A$, which is equivalent to demanding that $M_{\mathcal{B}_k}(\mathbf{y}) \succeq 0$. So $T^*(\Sigma^*)$ is just the set $\{(y_0, y_1, \ldots, y_n) : M_{\mathcal{B}_k}(\mathbf{y}) \succeq 0\}$ and by intersecting it with the plane $y_0 = 1$ we get $Q_{\mathcal{B}_k}(I)$, which is therefore closed.

One very important case where the conditions of Theorem 2.4 hold is when $I$ is the vanishing ideal of a set of 0/1 points. This is precisely the case of most interest in combinatorial optimization.

Corollary 1 If $S \subseteq \{0,1\}^n$ and $I = \mathbb{I}(S)$, then $\mathrm{TH}_k(I) = Q_{\mathcal{B}_k}(I)$.

Proof It is enough to show that there is a linear polynomial $\ell$ such that $\ell \equiv \mathbf{f}_k^T A \mathbf{f}_k \mod I$ for a positive definite matrix $A$ and some $\theta$-basis $\mathcal{B}$ of $\mathbb{R}[\mathbf{x}]/I$.
Let $A$ be the matrix
$$A = \begin{pmatrix} l+1 & \mathbf{c}^T \\ \mathbf{c} & D \end{pmatrix},$$
where $l = |\mathcal{B}_k| - 1$, $\mathbf{c}$ is the vector with all entries equal to $-\frac{1}{2}$, and $D$ is the diagonal matrix with all diagonal entries equal to $1$. This matrix is positive definite since $D$ is positive definite and its Schur complement $l + 1 - \mathbf{c}^T D^{-1} \mathbf{c} = l + 1 - l/4$ is positive. Since $x_i^2 \equiv x_i \mod I$ for all $i$ and $\mathcal{B}$ is a monomial basis, for any $f_i$, $f_i^2 \equiv f_i \mod I$. Therefore, the constant (linear polynomial) $l + 1 \equiv \mathbf{f}_k^T A \mathbf{f}_k \mod I$.

The assumption that $I$ is real radical seems very strong. However, we now establish a relationship between the theta bodies of an ideal and those of its real radical, showing that $\sqrt[\mathbb{R}]{I}$ determines the asymptotic behavior of the hierarchy $\{\mathrm{TH}_k(I)\}$. We start by proving a small technical lemma.

Lemma 1 Given an ideal $I$ and a polynomial $f$ of degree $d$ such that $f^{2m} + \sum g_i^2 \in I$ for some $m \in \mathbb{N}$ and $g_i \in \mathbb{R}[\mathbf{x}]$, the polynomial $\sigma^2 + 4\sigma f + \varepsilon$ is $k$-sos modulo $I$ for every $\sigma, \varepsilon > 0$, where $k$ depends only on $m$, $d$ and the degrees of the $g_i$.

Proof First, note that for any $l \geq m$ and any $\xi > 0$ we have
$$f^l + \xi = \frac{1}{\xi}\left(\frac{f^l}{2} + \xi\right)^2 + \frac{1}{4\xi}\left(-f^{2m}\right) f^{2l-2m}$$
and, since $-f^{2m} \equiv \sum g_i^2 \mod I$, the last summand is congruent to the sum of squares $\frac{1}{4\xi}\sum \left(g_i f^{l-m}\right)^2$, so $f^l + \xi$ is sos modulo $I$. For $\sigma > 0$, define the polynomial $p$ to be the truncation of the Taylor series of $\sqrt{\sigma^2 + 4\sigma x}$ at degree $2m-1$, i.e.,
$$p(x) = \sum_{n=0}^{2m-1} \frac{(-1)^n (2n)!}{(n!)^2 (1-2n)}\, \sigma^{1-n} x^n.$$
When we square $p$ we get a polynomial whose terms of degree at most $2m-1$ are exactly $\sigma^2 + 4\sigma x$, and by checking the signs of each of the coefficients of $p$ we can see that the remaining terms of $p^2$ will be negative for odd powers and positive for even powers. Composing with $f$ we get
$$\left(p(f(x))\right)^2 = \sigma^2 + 4\sigma f(x) + \sum_{i=0}^{m-1} a_i f(x)^{2m+2i} - \sum_{i=0}^{m-2} b_i f(x)^{2m+2i+1}$$
where the $a_i$ and $b_i$ are positive numbers. In particular
$$\sigma^2 + 4\sigma f(x) = p(f(x))^2 + \sum_{i=0}^{m-1} a_i f(x)^{2i}\left(-f(x)^{2m}\right) + \sum_{i=0}^{m-2} b_i f(x)^{2m+2i+1}.$$
On the right hand side of this equality the only term that is not immediately a sum of squares modulo $I$ is the last one, but by the above remark, since $2m+2i+1 \geq m$, by adding an arbitrarily small positive number it becomes sos modulo $I$. By checking the degrees throughout the sum, one can see that for any $\varepsilon > 0$, $\sigma^2 + 4\sigma f + \varepsilon$ is $k$-sos modulo $I$ for some $k$ depending only on $m$, $d$ and the degrees of the $g_i$. Since $\sigma$ and $\varepsilon$ are arbitrary positive numbers we get the desired result.

Lemma 1, together with the Real Nullstellensatz, gives us an important relationship between the theta body hierarchy of an ideal and that of its real radical.

Theorem 2.5 Fix an ideal $I \subseteq \mathbb{R}[\mathbf{x}]$.
Then, there exists a function $\psi : \mathbb{N} \to \mathbb{N}$ such that $\mathrm{TH}_{\psi(k)}(I) \subseteq \mathrm{TH}_k\left(\sqrt[\mathbb{R}]{I}\right)$ for all $k$.

Proof Fix some $k$, and let $\ell$ be a linear polynomial that is $k$-sos modulo $\sqrt[\mathbb{R}]{I}$. This means that there exists some sum of squares $\sigma$ such that $\ell - \sigma \in \sqrt[\mathbb{R}]{I}$. Therefore, by the Real Nullstellensatz (Theorem 2.2), $(\ell - \sigma)^{2m} + \sum g_i^2 \in I$ for some $m$ and $g_i$ with all degrees bounded in terms of a constant that depends only on the ideal $I$ and on $k$. By Lemma 1 it follows that $\tau^2 + 4\tau(\ell - \sigma) + \varepsilon$ is $k'$-sos modulo $I$ for every $\tau, \varepsilon > 0$, with $k'$ depending only on those degree bounds. Since $\sigma$ is itself a sum of squares of degree at most $2k$, the polynomial $4\tau\ell + \tau^2 + \varepsilon = \left(\tau^2 + 4\tau(\ell-\sigma) + \varepsilon\right) + 4\tau\sigma$ is also $k'$-sos modulo $I$ (enlarging $k'$ if necessary), and dividing by $4\tau$ and choosing $\tau, \varepsilon$ small shows that $\ell + \delta$ is $k'$-sos modulo $I$ for every $\delta > 0$. Let $\psi(k) := k'$. Then we have that $\ell + \delta$ is $\psi(k)$-sos modulo $I$ for all $\delta > 0$. This means that for every $\delta > 0$, the inequality $\ell + \delta \geq 0$ is valid on $\mathrm{TH}_{\psi(k)}(I)$, since $\ell + \delta$ is linear and $\psi(k)$-sos modulo $I$. Therefore, $\ell \geq 0$ is also valid on $\mathrm{TH}_{\psi(k)}(I)$, and hence, $\mathrm{TH}_{\psi(k)}(I) \subseteq \mathrm{TH}_k\left(\sqrt[\mathbb{R}]{I}\right)$.

Note that the function $\psi$ whose existence we just proved can be notoriously bad in practice, as it can be much higher than necessary. The best theoretical bounds on the Real Nullstellensatz function $\varphi$ come from quantifier elimination and so increase very fast. However, if we are only interested in convergence of the theta body sequence, as is often the case, Theorem 2.5 tells us that we might as well assume that our ideals are real radical.

3 Computing theta bodies

In this section we illustrate the computation of theta bodies on two examples. In the first example, $V_{\mathbb{R}}(I)$ is finite and hence $\mathrm{conv}(V_{\mathbb{R}}(I))$ is a polytope, while in the second example $V_{\mathbb{R}}(I)$ is infinite. Convex approximations of polytopes via linear or semidefinite programming have received a great deal of attention in combinatorial optimization, where the typical problem is to maximize a linear function $\mathbf{c}^T\mathbf{x}$ as $\mathbf{x}$ varies over the characteristic vectors of some combinatorial objects. Since this discrete optimization problem is equivalent to the linear program in which one maximizes $\mathbf{c}^T\mathbf{x}$ over the convex hull of these characteristic vectors, and a complete description of this polytope is usually unavailable, one resorts to approximations of the polytope over which one can optimize in a reasonable way. A combinatorial optimization problem that has been tested heavily in this context is the maximum stable set problem in a graph, which we use as our first example. In ShannonCapacity (), Lovász constructed the theta body of a graph, which was the first example of a semidefinite programming relaxation of a combinatorial optimization problem.
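The truncation identity used in the proof of Lemma 1 can be verified symbolically. For $\sigma = 1$ and $m = 2$, the polynomial $p$ is the degree-3 truncation of the Taylor series of $\sqrt{1+4x}$, and squaring it must return $1 + 4x$ plus higher terms that are positive for even powers and negative for odd powers. A sympy check:

```python
# Sketch: verify that for sigma = 1, m = 2 the truncated series p of
# sqrt(1 + 4x) satisfies p^2 = 1 + 4x + a0*x^4 - b0*x^5 + a1*x^6
# with positive a_i, b_i, as in the proof of Lemma 1.
import sympy as sp

x = sp.symbols('x')
m = 2
p = sum((-1)**n * sp.factorial(2*n) / (sp.factorial(n)**2 * (1 - 2*n)) * x**n
        for n in range(2*m))          # sigma = 1, so sigma**(1-n) = 1
p2 = sp.expand(p**2)
print(sp.expand(p), '->', p2)
# p = 1 + 2x - 2x^2 + 4x^3 and p^2 = 1 + 4x + 20x^4 - 16x^5 + 16x^6
```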
The hierarchy of theta bodies for an arbitrary polynomial ideal is a natural generalization of Lovász's theta body of a graph, which explains the name. Our work on theta bodies was initiated by two problems that were posed by Lovász in (Lovasz, Problems 8.3 and 8.4) suggesting this generalization and its properties.

3.1 The maximum stable set problem

Let $G = ([n], E)$ be an undirected graph with vertex set $[n] = \{1, \ldots, n\}$ and edge set $E$. A stable set in $G$ is a set $U \subseteq [n]$ such that for all $i, j \in U$, $\{i,j\} \notin E$. The maximum stable set problem seeks the stable set of largest cardinality in $G$, the size of which is the stability number of $G$, denoted as $\alpha(G)$. The maximum stable set problem can be modeled as follows. For each stable set $U \subseteq [n]$, let $\chi^U \in \{0,1\}^n$ be its characteristic vector defined as $\chi^U_i = 1$ if $i \in U$ and $\chi^U_i = 0$ otherwise. Let $S_G$ be the set of characteristic vectors of all stable sets in $G$. Then $\mathrm{STAB}(G) := \mathrm{conv}(S_G)$ is called the stable set polytope of $G$ and the maximum stable set problem is, in theory, the linear program $\max\{\sum_i x_i : \mathbf{x} \in \mathrm{STAB}(G)\}$ with optimal value $\alpha(G)$. However, $\mathrm{STAB}(G)$ is not known apriori, and so one resorts to relaxations of it over which one can optimize $\sum_i x_i$. In order to compute theta bodies for this example, we first need to view $S_G$ as the real variety of an ideal. The natural ideal to take is $I_G := \mathbb{I}(S_G)$, the vanishing ideal of $S_G$. It can be checked that this real radical ideal is
$$I_G := \left\langle x_i^2 - x_i \; \forall\, i \in [n], \;\; x_i x_j \; \forall\, \{i,j\} \in E \right\rangle \subset \mathbb{R}[x_1, \ldots, x_n].$$
For $U \subseteq [n]$, let $x^U := \prod_{i \in U} x_i$. From the generators of $I_G$ it follows that if $f \in \mathbb{R}[\mathbf{x}]$, then $f \equiv g \mod I_G$ where $g$ is in the $\mathbb{R}$-span of the set of monomials $\{x^U : U \text{ stable in } G\}$. Check that
$$\mathcal{B} := \left\{x^U + I_G : U \text{ stable set in } G\right\}$$
is a $\theta$-basis of $\mathbb{R}[\mathbf{x}]/I_G$ containing $1 + I_G, x_1 + I_G, \ldots, x_n + I_G$. This implies that $\mathcal{B}_k = \{x^U + I_G : U \text{ stable}, \, |U| \leq k\}$, and for $x^U + I_G, x^{U'} + I_G \in \mathcal{B}$, their product is $x^{U \cup U'} + I_G$, which is $0 + I_G$ if $U \cup U'$ is not a stable set in $G$. This product formula allows us to compute $M_{\mathcal{B}_k}(\mathbf{y})$, where we index the element $x^U + I_G$ by the set $U$. Since $S_G \subseteq \{0,1\}^n$ and $I_G$ is real radical, by Corollary 1, we have that
$$\mathrm{TH}_k(I_G) = \left\{ \mathbf{y} \in \mathbb{R}^n : \begin{array}{l} \exists\, M \succeq 0, \; M \in \mathbb{R}^{|\mathcal{B}_k| \times |\mathcal{B}_k|} \text{ such that} \\ M_{\emptyset\emptyset} = 1, \\ M_{\emptyset\{i\}} = M_{\{i\}\emptyset} = M_{\{i\}\{i\}} = y_i, \\ M_{UU'} = 0 \text{ if } U \cup U' \text{ is not stable in } G, \\ M_{UU'} = M_{WW'} \text{ if } U \cup U' = W \cup W' \end{array} \right\}.$$
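For small graphs the $\theta$-basis $\{x^U + I_G : U \text{ stable}\}$ can simply be enumerated. A sketch for the 5-cycle (the pentagon of Example 2):

```python
# Sketch: enumerate all stable sets of the pentagon; their monomials x^U
# index the theta-basis of R[x]/I_G.
from itertools import combinations

n = 5
edges = {(1, 2), (2, 3), (3, 4), (4, 5), (1, 5)}

def is_stable(U):
    return all(tuple(sorted(p)) not in edges for p in combinations(U, 2))

stable_sets = [U for k in range(n + 1)
               for U in combinations(range(1, n + 1), k) if is_stable(U)]
print(len(stable_sets))   # 11: the empty set, 5 vertices, 5 non-edges
```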
In particular, indexing the one-element stable sets by the vertices of $G$,
$$\mathrm{TH}_1(I_G) = \left\{ \mathbf{y} \in \mathbb{R}^n : \begin{array}{l} \exists\, M \succeq 0, \; M \in \mathbb{R}^{(n+1)\times(n+1)} \text{ such that} \\ M_{00} = 1, \\ M_{0i} = M_{i0} = M_{ii} = y_i \; \forall\, i \in [n], \\ M_{ij} = 0 \; \forall\, \{i,j\} \in E \end{array} \right\}.$$

Example 2 Let $G$ be a pentagon, the 5-cycle with edges $\{1,2\}, \{2,3\}, \{3,4\}, \{4,5\}, \{1,5\}$. Then
$$I_G = \left\langle x_i^2 - x_i \; \forall\, i = 1, \ldots, 5, \;\; x_1x_2, \, x_2x_3, \, x_3x_4, \, x_4x_5, \, x_1x_5 \right\rangle$$
and
$$\mathcal{B} = \{1, x_1, x_2, x_3, x_4, x_5, x_1x_3, x_1x_4, x_2x_4, x_2x_5, x_3x_5\} + I_G.$$
Let $\mathbf{y} = (y_0, y_1, \ldots, y_{10})$ be a vector whose coordinates are indexed by the elements of $\mathcal{B}$ in the given order. Then
$$\mathrm{TH}_1(I_G) = \left\{ (y_1, \ldots, y_5) \in \mathbb{R}^5 : \exists\, y_6, \ldots, y_{10} \text{ s.t. } \begin{pmatrix} 1 & y_1 & y_2 & y_3 & y_4 & y_5 \\ y_1 & y_1 & 0 & y_6 & y_7 & 0 \\ y_2 & 0 & y_2 & 0 & y_8 & y_9 \\ y_3 & y_6 & 0 & y_3 & 0 & y_{10} \\ y_4 & y_7 & y_8 & 0 & y_4 & 0 \\ y_5 & 0 & y_9 & y_{10} & 0 & y_5 \end{pmatrix} \succeq 0 \right\}.$$
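Membership in $\mathrm{TH}_1(I_G)$ for the pentagon can be tested numerically. By symmetry, restrict to points $y_1 = \cdots = y_5 = t$ with a common value $s$ for the free entries $y_6, \ldots, y_{10}$; a bisection over $t$ with a grid search over $s$ (a numpy sketch, not an exact SDP solve) recovers the largest such $t$ as $1/\sqrt{5} \approx 0.447$, consistent with the classical value $\vartheta(C_5) = \sqrt{5}$ of the Lovász theta number of the pentagon:

```python
# Sketch: largest t with (t,...,t) in TH_1(I_G) for the pentagon, using
# the 6x6 moment matrix of Example 2 with y1..y5 = t and y6..y10 = s.
import numpy as np

NONEDGES = {(1, 3), (1, 4), (2, 4), (2, 5), (3, 5)}

def moment_matrix(t, s):
    M = np.zeros((6, 6))
    M[0, 0] = 1.0
    for i in range(1, 6):
        M[0, i] = M[i, 0] = M[i, i] = t
        for j in range(i + 1, 6):
            if (i, j) in NONEDGES:
                M[i, j] = M[j, i] = s     # edge entries stay 0
    return M

def feasible(t):
    # does some s make the matrix PSD?  (eigvalsh returns ascending order)
    return any(np.linalg.eigvalsh(moment_matrix(t, s))[0] >= -1e-9
               for s in np.linspace(0.0, 1.0, 1001))

lo, hi = 0.0, 1.0
for _ in range(30):                       # bisect the threshold t*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(lo)   # ~0.4472 = 1/sqrt(5)
```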
https://pos.sissa.it/282/637
Volume 282 - 38th International Conference on High Energy Physics (ICHEP2016) - Strong Interactions and Hadron Physics First observation of π−K+ and π+K− atoms, their lifetime measurement and πK scattering lengths evaluation. L. Afanasyev* on behalf of the DIRAC collaboration Full text: pdf Pre-published on: February 06, 2017 Published on: April 19, 2017 Abstract Low-energy QCD allows one to calculate the $\pi\pi$ and $\pi K$ scattering lengths with high precision. There are accurate relations between these scattering lengths and the $\pi^+\pi^-$, $\pi^-K^+$, $\pi^+K^-$ atom lifetimes. The experiment on the first observation of $\pi^-K^+$ and $\pi^+K^-$ atoms is described. The atoms were generated in Nickel and Platinum targets hit by the CERN PS proton beam with a momentum of 24 GeV/$c$. Moving in the target, part of the atoms break up, producing characteristic $\pi K$ pairs (atomic pairs) with small relative momentum $Q$ in their c.m.s. In the experiment, we detected $n_A=349\pm62$ (5.6 standard deviations) $\pi^-K^+$ and $\pi^+K^-$ atomic pairs. The main part of the $\pi K$ pairs are produced in a free state. The majority of such particles are generated directly or from short-lived sources such as $\rho$, $\omega$ and similar resonances. The electromagnetic interaction in the final state creates Coulomb pairs with a known sharp dependence on $Q$. This effect allows one to evaluate the number of these Coulomb pairs. There is a precise ratio ($\sim$1\%) between the number of $\pi^-K^+$ ($\pi^+K^-$) Coulomb pairs with small $Q$ and the number of produced $\pi^-K^+$ ($\pi^+K^-$) atoms. Using this ratio, we obtained the numbers of generated $\pi^-K^+$ and $\pi^+K^-$ atoms. The atom breakup probability in a target, $P_{\mathrm{br}}=n_A/N_A$, depends on the atom lifetime. Using such dependences for the Ni and Pt targets, known with a precision of about 1\%, the $\pi K$ atom lifetime was measured and from this value the $\pi K$ scattering lengths were evaluated.
The presented analysis shows that $\pi^-K^+$ and $\pi^+K^-$ atom production in p–nucleus interactions increases by factors of 16 and 38, respectively, if the proton momentum is increased from 24 GeV/$c$ up to 450 GeV/$c$. DOI: https://doi.org/10.22323/1.282.0637 Open Access
https://recalll.co/app/?q=Discrete%20Time%20Fourier%20Transforms%20%28DTFT%29
However, the math of the Fourier transform assumes that the signal being Fourier transformed is periodic over the time span in question. This mismatch between the Fourier assumption of periodicity and the real-world fact that audio signals are generally non-periodic leads to errors in the transform. These errors are called "spectral leakage", and generally manifest as a wrongful distribution of energy across the power spectrum of the signal. Notice the distribution of energy above the -60 dB line, and the three distinct peaks at roughly 440 Hz, 880 Hz, and 1320 Hz. This particular distribution of energy contains "spectral leakage" errors. To somewhat mitigate the "spectral leakage" errors, you can pre-multiply the signal by a window function designed specifically for that purpose, like for example the Hann window function. The plot below shows the Hann window function in the time domain. Notice how the tails of the function go smoothly to zero, while the center portion of the function tends smoothly towards the value 1. Now let's apply the Hann window to the guitar's audio data, and then FFT the resulting signal. The plot below shows a closeup of the power spectrum of the same signal (an acoustic guitar playing the A4 note), but this time the signal was pre-multiplied by the Hann window function prior to the FFT. Notice how the distribution of energy above the -60 dB line has changed significantly, and how the three distinct peaks have changed shape and height. This particular distribution of spectral energy contains fewer "spectral leakage" errors. The acoustic guitar's A4 note used for this analysis was sampled at 44.1 kHz with a high-quality microphone under studio conditions; it contains essentially zero background noise, no other instruments or voices, and no post-processing. Real audio signal data, Hann window function, plots, FFT, and spectral analysis were done here:

## Why do I need to apply a window function to samples when building a po...
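The effect described above is easy to reproduce. The sketch below (numpy; a synthetic off-bin sine stands in for the guitar recording, which is an assumption of this illustration) compares the leakage far from the spectral peak with and without a Hann window:

```python
# Sketch: spectral leakage with and without a Hann window, using a sine
# that falls between FFT bin centers (worst case for leakage).
import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 10.5 * n / N)     # 10.5 cycles per window

plain = np.abs(np.fft.rfft(x))
windowed = np.abs(np.fft.rfft(x * np.hanning(N)))

# Far from the peak near bin 10.5, the Hann-windowed spectrum carries
# far less leaked energy than the rectangular (no window) one.
print(plain[100], windowed[100])
```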
audio signal-processing fft spectrum window-functions

Ideally a Discrete Fourier Transform (DFT) is purely a rotation, in that it returns the same vector in a different coordinate system (i.e., it describes the same signal in terms of frequencies instead of in terms of sound volumes at sampling times). However, in the way the DFT is usually implemented, as a Fast Fourier Transform (FFT), the values are added together in various ways that require multiplying by 1/N to keep the scale unchanged. Often, these multiplications are omitted from the FFT to save computing time and because many applications are unconcerned with scale changes. The resulting FFT data still contains the desired data and relationships regardless of scale, so omitting the multiplications does not cause any problems. Additionally, the correcting multiplications can sometimes be combined with other operations in an application, so there is no point in performing them separately. (E.g., if an application performs an FFT, does some manipulations, and performs an inverse FFT, then the combined multiplications can be performed once during the process instead of once in the FFT and once in the inverse FFT.) I am not familiar with Matlab syntax, but, if Stuart's answer is correct that cX*cX' is computing the sum of the squares of the magnitudes of the values in the array, then I do not see the point of performing the FFT. You should be able to calculate the total energy in the same way directly from iData; the transform is just a coordinate transform that does not change energy, except for the scaling described above.

We do the FFT for other calculations that we need.
@user438431: If you want Power_X scaled correctly but do not need the FFT data scaled, then you can move the division by fftSize from the computation of cX (where it divides each element of the vector) to Power_X = (cX*cX')/(50*(fftSize*fftSize)) (where it is one multiplication [to square fftSize, since Power_X is the square of energy] and one division on a scalar value), saving fftSize-2 operations.

## complex conjugate transpose matlab to C - Stack Overflow

c matlab fft translate

What you have is a sample whose length in time is 256/44100 = 0.00580499 seconds. This means that your frequency resolution is 1 / 0.00580499 = 172 Hz. The 256 values you get out from Python correspond to the frequencies, basically, from 86 Hz to 255*172+86 Hz = 43946 Hz. The numbers you get out are complex numbers (hence the "j" at the end of every second number). You need to convert the complex numbers into amplitude by calculating sqrt(i² + j²), where i and j are the real and imaginary parts, respectively. If you want to have 32 bars, you should, as far as I understand, take the average of four successive amplitudes, getting 256 / 4 = 32 bars as you want.

Hi, sorry for the initial (wrong) answer... didn't get the math right. This should be correct now. Please note that, if c is a complex number, sqrt(c.real² + c.imag²) == abs(c)

## python - Analyze audio using Fast Fourier Transform - Stack Overflow

python audio signal-processing fft spectrum

If the arrays contain integers of limited size (i.e. in range -u to u) then you can solve this in O(n + u log u) time by using the fast Fourier transform to convolve the histograms of each collection together. For example, the set a=[-1,2,2,2,2,3] would be represented by a histogram with values:

    ha[-1] = 1
    ha[2] = 4
    ha[3] = 1

After convolving all the histograms together with the FFT, the resulting histogram will contain entries where the value for each bin tells you the number of ways of combining the numbers to get each possible total.
To find the answer to your question with a total of 0, all you need to do is read the value of the histogram for bin 0.

## algorithm - 5 numbers such that their sum equals 0 - Stack Overflow

algorithm search

You can use the fast Fourier transform for extremely large input (value of n) to find any bit pattern in O(n log n) time. Compute the cross-correlation of a bit mask with the input. The cross-correlation of a sequence x and a mask y of sizes n and n' respectively is defined by

    R(m) = sum_{k=0}^{n'-1} x_{k+m} y_k

Then your bit pattern occurs exactly at the shifts m where R(m) = Y, where Y is the sum of ones in your bit mask. So if you are trying to match for the bit pattern [0 0 1 0 1 0] in [1 1 0 0 1 0 1 0 0 0 1 0 1 0 1], then you must use the mask [-1 -1 1 -1 1 -1]. The -1's in the mask guarantee that those places must be 0. You can implement cross-correlation using the FFT in O(n log n) time.

Why would you use Fourier if this can be solved in O(n) time?

## c++ - Fastest way to scan for bit pattern in a stream of bits - Stack ...

c++ c algorithm assembly embedded

I think you need to use the Accelerate framework; inside there is the vDSP API, which can do the FFT (Fast Fourier Transform). It will convert the data from the time domain to the frequency domain. According to the bin size information, you can extract the magnitude/amplitude beyond a certain bin. For how the FFT works there, you can refer to this question - Understanding FFT in aurioTouch2

P.S. AurioTouch or AurioTouch 1 is not using the vDSP API. I remember before iOS 4 there was an FFT function that could do a similar thing but slower. So you can assume that vDSP is only available after iOS 4.0

## ios - iPhone app audio recording only in above certain frequency - Sta...

iphone ios audio voice frequency

If you did really want to use clustering, then dependent on your application you could generate a low dimensional feature vector for each time series.
For example, use the time series mean, standard deviation, dominant frequency from a Fourier transform, etc. This would be suitable for use with k-means, but whether it would give you useful results is dependent on your specific application and the content of your time series.

## matlab - How can I perform K-means clustering on time series data? - S...

matlab time-series cluster-analysis data-mining k-means

After your FFT and filter, you need to do an inverse FFT to get the data back to the time domain. Then you want to add that set of samples to your .WAV file. As far as producing the file itself goes, the format is widely documented (Googling for ".WAV format" should turn up more results than you have any use for), and pretty simple. It's basically a simple header (called a "chunk") that says it's a .WAV file (or actually a "RIFF" file). Then there's an "fmt " chunk that tells about the format of the samples (bits per sample, samples per second, number of channels, etc.). Then there's a "data" chunk that contains the samples themselves. Since it sounds like you're going to be doing this in real time, my advice would be to forget about doing your FFT, filter, and iFFT. An FIR filter will give essentially the same results, but generally a lot faster. The basic idea of the FIR filter is that instead of converting your data to the frequency domain, filtering it, then converting back to the time domain, you convert your filter coefficients to the time domain, and apply them (fairly) directly to your input data. This is where DSPs earn their keep: nearly all of them have multiply-accumulate instructions, which can implement most of an FIR filter in one instruction. Even without that, however, getting an FIR filter to run in real time on a modern processor doesn't take any real trick unless you're doing really fast sampling. In any case, it's a lot easier than getting an FFT/filter/iFFT to operate at the same speed.
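The equivalence this answer leans on (FFT, multiply, inverse FFT versus direct time-domain FIR filtering) can be checked in a few lines of numpy (toy data; real-time block processing such as overlap-save is not shown):

```python
# Sketch: frequency-domain filtering equals time-domain convolution
# once both signals are zero-padded to the full linear-convolution length.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)    # input block
h = rng.standard_normal(31)     # FIR filter coefficients

direct = np.convolve(x, h)      # time-domain FIR result

L = len(x) + len(h) - 1         # pad to avoid circular wrap-around
fft_based = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(h, L), L)

print(np.max(np.abs(direct - fft_based)))   # agreement to rounding error
```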
@Jerry Coffin - I have to disagree on the speed of FIR versus FFT/multiply/IFFT. For a 64 tap FIR filter, each output sample requires 64 multiply-accumulates. Via an FFT using an N=128 transform and overlap-save processing (en.wikipedia.org/wiki/Overlap-save_method), you do a transform, complex buffer multiply, and inverse transform = 2*128*log2(128) + 6*128 = 2560 operations, which would calculate 64 samples for an ops/sample count of 40, saving you 24 cycles. There's some handwaving on memory access etc. here, but as your filter gets longer, the FFT method shines.

There is some point at which an FFT will be better, that's true -- IME, that's pretty rare in practice though. In particular, an FIR is extremely cache friendly (linear read through the data and coefficients). By contrast, an FFT practically defines "cache hostile". A single cache miss is virtually guaranteed to be at least 50 cycles on a modern processor. On a modern processor, you can often treat CPU cycles as free; the limiting factor is memory bandwidth.

I put the example that is here: ccrma.stanford.edu/courses/422/projects/WaveFormat into a .txt and then changed the extension to .wav, and it didn't work. It wasn't supposed to work just like this?

Glancing at that, it shows the bytes in hex -- did you enter them as hexadecimal text? If so, it shouldn't work. Rather, those are supposed to be entered as the binary values of individual bytes. It also looks like the sample they show is incomplete -- it shows the headers and the first few samples, but its header says it'll have a lot more samples than show up there.

## c# - How to record to .wav from microphone after applying fast fourier...

c# filter wav record fft

That approach goes by the name short-time Fourier transform.
You get all the answers to your question on Wikipedia: https://en.wikipedia.org/wiki/Short-time_Fourier_transform It works great in practice, and you can even get better resolution out of it than you would expect from a rolling window by using the phase difference between the FFTs. Here is one article that does pitch shifting of audio signals; the way to get higher frequency resolution is well explained: http://www.dspdimension.com/admin/pitch-shifting-using-the-ft/

## signal processing - Is a "rolling" FFT possible and could it be of use...

signal-processing fft processing

The standard tool for transforming time-domain signals like audio samples into frequency-domain information is the Fourier transform. Grab the fast Fourier transform library of your choice and throw it at your data; you will get a decomposition of the signal into its constituent frequencies. You can then take that data and visualize it however you like. Spectrograms are particularly easy; you just need to plot the magnitude of each frequency component versus the frequency and time.

I've managed the FFT and received a double[] containing values from -1 to 1. Can you explain in more detail what "plot the magnitude of each frequency component versus the frequency and time" means and how you would code that part?

## c# - Visualization of streamed music from Spotify - Stack Overflow

c# visualization naudio libspotify

Yes, it's possible for a pure function to return the time, if it's given that time as a parameter. Different time argument, different time result. Then form other functions of time as well and combine them with a simple vocabulary of function(-of-time)-transforming (higher-order) functions. Since the approach is stateless, time here can be continuous (resolution-independent) rather than discrete, greatly boosting modularity. This intuition is the basis of Functional Reactive Programming (FRP).

## scala - How can a time function exist in functional programming? - Sta...
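A minimal version of the short-time Fourier transform described above is just a windowed FFT applied to overlapping frames (the frame and hop sizes below are arbitrary illustration choices):

```python
# Sketch: naive STFT -- slide a Hann-windowed frame along the signal
# and FFT each frame; rows are frames, columns are frequency bins.
import numpy as np

def stft(x, frame=256, hop=64):
    w = np.hanning(frame)
    starts = range(0, len(x) - frame + 1, hop)
    return np.array([np.fft.rfft(w * x[s:s + frame]) for s in starts])

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)       # one second of a 440 Hz tone
S = stft(x)
peak_bin = int(np.abs(S[10]).argmax())
print(peak_bin * fs / 256)            # within one bin (31.25 Hz) of 440
```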
Have you considered using the multiprocessing module to parallelize processing the files? Assuming that you're actually CPU-bound here (meaning it's the Fourier transform that's eating up most of the running time, not reading/writing the files), that should speed up execution time without actually needing to speed up the loop itself. For example, something like this (untested, but should give you the idea):

    def do_transformation(filename):
        # t and f are assumed to be loaded from the file here,
        # as in the original loop
        dt = t[1] - t[0]
        fou = absolute(fft.fft(f))
        frq = absolute(fft.fftfreq(len(t), dt))
        ymax = median(fou) * 30
        figure(figsize=(15, 7))
        plot(frq, fou, 'k')
        xlim(0, 400)
        ylim(0, ymax)
        iname = filename.replace('.dat', '.png')
        savefig(iname, dpi=80)
        close()

    pool = multiprocessing.Pool(multiprocessing.cpu_count())
    for filename in filelist:
        pool.apply_async(do_transformation, (filename,))
    pool.close()
    pool.join()

You may need to tweak what work actually gets done in the worker processes. Trying to parallelize the disk I/O portions may not help you much (or even hurt you), for example.

Hmmmm. Could you elaborate a little more? I'm in a bit of a time crunch with this program. It seems at the rate I'm going, it looks like another day or two before finishing, and I'm really shooting for 12-18 hours.

I just selected your comment as the answer, since it's effectively sped up the program almost 8x (the number of CPUs). But if you could, I have another question. There are a handful of files here that are quite substantial, taking quite a long time to process. Is there a way to assign multiple processors to the same task instead of applying them to separate files?

There's no simple tweak to say "throw more CPUs at this task". You'd need to refactor the code to break your worker method up into smaller pieces that multiple processes can work on at the same time, and then pull it back together once all the pieces are ready. For example, it looks like fou = absolute(... and frq = absolute(... could be calculated in parallel.
You have to be careful, though, because passing large amounts of data between processes can be slow. It's hard for me to say exactly what kind of changes you could make because I really don't understand the algorithms you're using.

## python - What is the fastest/most efficient way to loop through a larg... python matplotlib fft figure

What you want to do is certainly possible, and you are on the right track, but you seem to misunderstand a few points in the example. First, it is shown in the example that the technique is the equivalent of linear regression in the time domain, exploiting the FFT to perform in the frequency domain an operation with the same effect. Second, the trend that is removed is not linear; it is equal to a sum of sinusoids, which is why the FFT is used to identify particular frequency components in a relatively tidy way.

In your case it seems you are interested in the residuals. The initial approach is therefore to proceed as in the example, as follows:

1. Perform a rough "detrending" by removing the DC component (the mean of the time-domain data).
2. FFT and inspect the data; choose the frequency channels that contain most of the signal.

You can then use those channels to generate a trend in the time domain and subtract that from the original data to obtain the residuals. You need not proceed by using the IFFT, however. Instead you can explicitly sum over the cosine and sine components. You do this in a way similar to the last step of the example, which explains how to find the amplitudes via time-domain regression, but substituting the amplitudes obtained from the FFT.
The following code shows how you can do this:

```matlab
tim = (time - time0)/timestep;  % <-- acquisition times for your *new* data, normalized
NFpick = [2 7 13];              % <-- channels you picked to build the detrending baseline

% Compute the trend
mu = mean(ts);
tsdft = fft(ts - mu);
Nchannels = length(ts);         % <-- size of time domain data
Mpick = 2*length(NFpick);
X(:,1:2:Mpick) = cos(2*pi*(NFpick-1)'/Nchannels*tim)';
X(:,2:2:Mpick) = sin(-2*pi*(NFpick-1)'/Nchannels*tim)';

% Generate beta vector "bet" containing scaled amplitudes from the spectrum
bet = 2*tsdft(NFpick)/Nchannels;
bet = reshape([real(bet) imag(bet)].', numel(bet)*2, 1);
trend = X*bet + mu;
```

To remove the trend, just do

```matlab
detrended = dat - trend;
```

where `dat` is your new data acquired at times `tim`. Make sure you define the time origin consistently. In addition, this assumes the data is real (not complex), as in the example linked to. You'll have to examine the code to make it work for complex data.

## preprocessor - Fast fourier transform for deasonalizing data in MATLAB... matlab preprocessor filtering signal-processing fft

Apple provides aurioTouch sample code which displays the input audio in one of three forms: a regular time-domain waveform; a frequency-domain waveform (computed by performing a fast Fourier transform on the incoming signal); and a sonogram view (a view displaying the frequency content of a signal over time, with color signaling relative power, the y axis being frequency and the x axis time).

## iphone - How to get Beats per minutes of a song in objective-c - Stack... iphone objective-c

First goes the expected effect. You're talking about "removing wavelength dependence of phase". If you did exactly that - zeroed out the phase completely - you would actually get a slightly compressed peak. What you actually do is add a linear function to the phase. This does not compress anything; it is a well-known transformation that is equivalent to shifting the peaks in the time domain.
Just a textbook property of the Fourier transform. Then goes the unintended effect. You convert the spectrum obtained with fft using fftshift for better display. Thus, before using ifft to convert it back, you need to apply ifftshift first. Since you don't, the spectrum is effectively shifted in the frequency domain. This results in a linear function of time being added to your time-domain phase, so the difference between adjacent points, which used to be near zero, is now about pi.

Does this mean that fitting a best fit to the unwrapped phase and subtracting this from the actual phase is not a valid method to attempt to compress the pulses? I was under the impression that turning the wavelength dependence of the phase into a constant (or as close as possible) within the bounds of the pulses in the spectral range would reduce chirp and thus compress the peak?

In general, yes, it will. However, you're fitting a linear function, and a linear summand does not change the peak shape. If you need to compress the peak you need to use nonlinear functions for the approximation. Linear ones are of no use for this task.

What if the phase within the chirp is linear with wavelength? If the phase is linear w.r.t. wavelength, your peak is already in the most compressed state possible.

## transform - Complex FFT then Inverse FFT MATLAB - Stack Overflow matlab transform fft inverse

You may have too little data for FFT/DWT to make sense. DTW may be better, but I also don't think it makes sense for sales data - why would there be an x-week temporal offset from one location to another? It's not as if the data were captured at unknown starting weeks. FFT and DWT are good when your data will have interesting repetitive patterns, and you have A) a good temporal resolution (for audio data, e.g. 16000 Hz - I am talking about thousands of data points!) and B) you have no idea of what frequencies to expect. If you know e.g. you will have weekly patterns (e.g.
no sales on Sundays) then you should filter them out with other algorithms instead.

DTW (dynamic time-warping) is good when you don't know when the events start and how they align. Say you are capturing heart measurements. You cannot expect the hearts of two subjects to beat in synchronization. DTW will try to align this data, and may (or may not) succeed in matching e.g. an anomaly in the heart beats of two subjects. In theory...

Maybe all you need is to spend more time preprocessing your data, in particular on normalization, to be able to capture similarity.

Thanks for your answer! So what method do you suggest? The result I want to achieve is to cluster products with different dynamics of sales and present these different dynamics on plots.

I suggest doing a lot of preprocessing, then whatever algorithm you feel comfortable with and which yields reasonable results. But preprocessing is key. (And it depends on your data; we cannot help you preprocess.)

## r - Fast Fourier Transform and Clustering of Time Series - Stack Overf... r fft time-series cluster-analysis

Yes, the FFT is merely an efficient DFT algorithm. Understanding the FFT itself might take some time unless you've already studied complex numbers and the continuous Fourier transform; but it is basically a change of basis to one derived from periodic functions. (If you want to learn more about Fourier analysis, I recommend the book Fourier Analysis and Its Applications by Gerald B. Folland.)

## algorithm - How exactly do you compute the Fast Fourier Transform? - S... algorithm math fft

Assembly language is what higher-level programming languages like C are transformed into on their way to machine code. Processors can only run machine code -- a sequence of short, discrete instructions encoded in binary format. Every time any program runs, machine code is being executed by a processor. Assembly language is simply a human-readable form of machine code.
The job of transforming high-level code into machine code is performed by a compiler, and assembly is typically created along the way as an intermediate representation before being translated into machine code. In this light, assembly is written at least as often as popular high-level programming languages -- it's just written by another program.

Reasons you might write a program in assembly language:

• You don't trust a compiler to generate optimized or working machine code

## q - Assembly Language Usage - Stack Overflow assembly q

Following this question, I have a doubt: how do I know my sampling frequency and maximum frequency over a window of a specific time length containing the data points?

To elaborate my question - I have a set of accelerometer readings in the X, Y, Z axes obtained from an Android-based smartphone. The data (in X, Y, Z) was recorded at different time stamps and there is no uniform time period between recordings. My data set looks like timestamp, X, Y, Z. First, I did a filtering (using a low-pass filter) on this data and now want to perform an FFT on a time window of 1 min, or maybe a window containing 250 samples (I am not very sure about the window length). I am using the FastFourierTransformer class of Apache Commons in Java (https://commons.apache.org/proper/commons-math/javadocs/api-2.2/org/apache/commons/math/transform/FastFourierTransformer.html). I am getting the FFT magnitudes, but I was wondering how I find the corresponding frequency of each FFT magnitude. I know the corresponding frequency would be (n*Fs)/N, where n is the bin number or data point index, Fs is the sampling frequency, and N is the number of input data points over the window. Now my question is: how do I know Fs for a given set of data, say for example input data in an array [1 2 3 4 5 6 7] over a window of size Nt milliseconds, where Nt = (lastTimeStamp - firstTimeStamp)? A related question immediately follows: does doing so agree with the Nyquist sampling criterion?
The Nyquist criterion says Fs >= 2*maxF for the signal, and I don't know the maximum frequency over the time window Nt.

## java - Build sample data for apache commons Fast Fourier Transform alg... java signal-processing fft apache-commons

## Z-Transforms (ZT)

Analysis of discrete-time LTI systems can be done using z-transforms. The z-transform is a powerful mathematical tool to convert difference equations into algebraic equations. The bilateral (two-sided) z-transform of a discrete-time signal x(n) is given as $Z.T[x(n)] = X(Z) = \Sigma_{n = -\infty}^{\infty} x(n)z^{-n}$ The unilateral (one-sided) z-transform of a discrete-time signal x(n) is given as $Z.T[x(n)] = X(Z) = \Sigma_{n = 0}^{\infty} x(n)z^{-n}$ The z-transform may exist for some signals for which the Discrete Time Fourier Transform (DTFT) does not exist.
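Going back to the sampling-frequency question above (no answer is quoted in the excerpt): a minimal sketch of one common approach, assuming the timestamps are in milliseconds and roughly uniform. Estimate Fs from the total time span, then map each bin index n to its frequency n*Fs/N (which is exactly what `numpy.fft.fftfreq` computes). The sample values and spacing below are made up for illustration.

```python
import numpy as np

# Hypothetical accelerometer samples with timestamps in milliseconds.
timestamps_ms = np.arange(0, 700, 100)        # 7 samples, 100 ms apart
x = np.array([1, 2, 3, 4, 5, 6, 7], float)

# Fs = (number of intervals) / (time span in seconds)
span_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
fs = (len(x) - 1) / span_s                    # 10 Hz for this spacing

# Frequency of bin n is n*Fs/N; fftfreq returns that mapping directly.
freqs = np.fft.fftfreq(len(x), d=1 / fs)
mags = np.abs(np.fft.fft(x))
```

The Nyquist caveat from the question still applies: with Fs estimated this way, only content below Fs/2 (here 5 Hz) is represented without aliasing, and the FFT itself cannot tell you whether the original signal contained energy above that.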
https://physics.stackexchange.com/questions/76484/book-on-optics-in-curved-space-time/91555
# Book on optics in curved space-time

As evidenced by my earlier questions on vision and curved space, I am struggling a little bit with visual perception in curved space-time. I would like a book recommendation on optics and vision in curved space-time.

1. I find books on gravitational lensing to be inadequate, as gravitational lensing itself deals mostly with specific cases in astronomy. It does not have the breadth and scope I require.
2. I want the book to discuss in detail how space (of low curvature) could affect visual perception in everyday-life scenarios. Very high mathematical rigor is desired.
3. I would also really like the book to have pretty colors and pictures for aesthetic value.

If you know any book that comes close to satisfying my conditions, please do tell me. I will be very grateful if you can.

Before answering, please see our policy on resource recommendation questions. Please write substantial answers that detail the style, content, and prerequisites of the book, paper or other resource. Explain the nature of the resource so that readers can decide which one is best suited for them rather than relying on the opinions of others. Answers containing only a reference to a book or paper will be removed!

Just came across your question, have you found an answer? I don't know of any specific books, but since you mentioned a desire for "very high mathematical rigor," why not just impose some arbitrary metric yourself, from a curved manifold, and then solve for the geodesics? The results could be quite interesting depending on which metric is chosen. You will of course solve the equation for null geodesics: $$\frac{d^{\,2}q^j}{ds^2} + \Gamma^j_{k\,l}\,\frac{dq^k}{ds}\,\frac{dq^l}{ds} = 0$$ where the connection coefficients are calculated from the metric. Any number of generalizations or specializations could be imposed, e.g., Riemannian manifold, non-symmetric connection, etc.
Indeed, you could even cast Fermat's principle in this form. Note: I added this in the spirit of your post, which states: "... it is also OK to provide an explanation for any sub discipline ..."
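The "impose a metric and solve for the geodesics" suggestion can be prototyped numerically. A minimal sketch, using the unit 2-sphere metric ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2 (an illustrative choice, not taken from any book) whose nonzero connection coefficients are Gamma^theta_phiphi = -sin(theta)cos(theta) and Gamma^phi_thetaphi = cos(theta)/sin(theta), with a hand-rolled RK4 integrator:

```python
import math

def geodesic_rhs(state):
    # state = (theta, phi, dtheta, dphi) on the unit sphere.
    # Geodesic equation: q''^j + Gamma^j_kl q'^k q'^l = 0, so
    #   theta'' =  sin(theta)cos(theta) * phi'^2
    #   phi''   = -2 (cos(theta)/sin(theta)) * theta' * phi'
    th, ph, dth, dph = state
    ddth = math.sin(th) * math.cos(th) * dph * dph
    ddph = -2.0 * (math.cos(th) / math.sin(th)) * dth * dph
    return (dth, dph, ddth, ddph)

def rk4_step(state, h):
    def shift(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(shift(state, k1, h / 2))
    k3 = geodesic_rhs(shift(state, k2, h / 2))
    k4 = geodesic_rhs(shift(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start on the equator moving purely in phi: the geodesic is the equator
# itself (a great circle), so theta should stay at pi/2.
state = (math.pi / 2, 0.0, 0.0, 1.0)
for _ in range(1000):
    state = rk4_step(state, 0.01)
```

Swapping in a different metric only means replacing the Christoffel terms inside `geodesic_rhs`; the integrator is unchanged.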
https://socratic.org/questions/an-object-s-two-dimensional-velocity-is-given-by-v-t-sqrt-t-2-1-2t-t-2-5-what-is-1
# An object's two dimensional velocity is given by v(t) = ( sqrt(t^2-1)-2t , t^2-5). What is the object's rate and direction of acceleration at t=6 ?

Jun 30, 2016

$a(6) \approx 12.04$
$\theta \approx 94.67^o$

#### Explanation:

$\text{Horizontal component of acceleration:}$
$a_x(t) = \frac{d}{dt}\left(\sqrt{t^2-1} - 2t\right) = \frac{t}{\sqrt{t^2-1}} - 2$
$a_x(6) = \frac{6}{\sqrt{36-1}} - 2 = \frac{6 - 2\sqrt{35}}{\sqrt{35}} = \frac{-5.83}{5.92} \approx -0.98$

$\text{Vertical component of acceleration:}$
$a_y(t) = \frac{d}{dt}(t^2 - 5) = 2t$
$a_y(6) = 2 \cdot 6 = 12$

$a(6) = \sqrt{a_x(6)^2 + a_y(6)^2} = \sqrt{(-0.98)^2 + 12^2} = \sqrt{0.96 + 144} = \sqrt{144.96} \approx 12.04$

$\tan\theta = \frac{12}{-0.98} = -12.24$, and since $a_x < 0$ while $a_y > 0$, the acceleration points into the second quadrant:
$\theta = 180^o - 85.33^o = 94.67^o$
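The derivative values above can be double-checked numerically. A quick sketch using central finite differences (the step size is an arbitrary choice):

```python
import math

def vx(t):
    return math.sqrt(t * t - 1) - 2 * t   # horizontal velocity component

def vy(t):
    return t * t - 5                       # vertical velocity component

h = 1e-6
ax = (vx(6 + h) - vx(6 - h)) / (2 * h)     # central difference ~ a_x(6)
ay = (vy(6 + h) - vy(6 - h)) / (2 * h)     # central difference ~ a_y(6)
mag = math.hypot(ax, ay)                   # |a(6)|
angle = math.degrees(math.atan2(ay, ax))   # quadrant-aware direction
```

The magnitude works out to about 12.04, and with atan2 handling the quadrant (a_x < 0, a_y > 0) the direction comes out near 94.7 degrees.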
https://math.stackexchange.com/questions/linked/351491?sort=newest
166 views

$p^2+1=q^2+r^2$. Strange phenomenon of primes Problem: Find prime solutions to the equation $p^2+1=q^2+r^2$. I welcome you to post your own solutions as well. I have found a strange solution which I can't understand why it works (or what's the ...

188 views

Positive even numbered integer solutions of $y=n^2-m^2-x^2$ Prove that no integer $x$ exists where $y=n^2-m^2-x^2$ has solutions: For all even integer values of $y$ in the range $2\le y \le 2x+1$ where $x$ is odd. For all odd integer values of $y$ in the ...

85 views

Number of integer points on a rotational hyperboloid of two sheets. There are many integer points on the hyperboloid of two sheets $x^2+y^2-z^2=-1$: (0,0,1), (2,2,3), (4,8,9), ... Let us denote such a set as H. I will consider only the upper sheet $z>0$, but ...

141 views

$x^2+y^2=z(4z+1)$ solutions For a small project I am working on, I wish to find the solutions for $$x^2+y^2=z(4z+1)$$ in natural numbers $x,y,z$. I wish to automate finding solutions for $z$ up to a maximum value as efficiently as ...

1k views

Showing that $m^2-n^2+1$ is a square Prove that if $m,n$ are odd integers such that $m^2-n^2+1$ divides $n^2-1$, then $m^2-n^2+1$ is a square number. I know that a solution can be obtained from Vieta jumping, but it seems very different ...

6k views

Solution of Diophantine equation Find all integral solutions of $x^2+1= y^2+z^2$. Actually I have to find all integral solutions of $a(a+1)=b(b+1)+c(c+1)$. I reduced this to the above form, i.e., $(2a+1)^2+1= (2b+1)^2+(2c+1)^2$.

159 views

Solving $x^2 + y^2 = 1 + z^4$ with (x,y,z) = 1 and z < x < y I have a computer programming problem where I need to find n many sets of integers that meet the condition $x^2 + y^2 = 1 + z^4$ with (x,y,z) = 1 and z < x < y. I can do this relatively easily ...
The diophantine equation $z^2=a^2+bx^2+cy^2$ Is there a way to obtain (enumerate) the integer solutions $(x,y,z)$ of the following quadratic Diophantine equation $z^2=a^2+bx^2+cy^2$ where $a$ is an integer and $b, c$ are positive integers? I ...
https://paperswithcode.com/paper/gender-artifacts-in-visual-datasets
# Gender Artifacts in Visual Datasets Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have proposed methods for mitigating gender biases, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate what $\textit{gender artifacts}$ exist within large-scale visual datasets. We define a $\textit{gender artifact}$ as a visual cue that is correlated with gender, focusing specifically on those cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., pose and location of people). Given the prevalence of gender artifacts, we claim that attempts to remove gender artifacts from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered and hence develop methods which are robust to these distributional shifts across groups.
https://www.electricalexams.co/ssc-je-synchronous-motor-synchronous-generator-questions/
# ssc-je-synchronous-motor-synchronous-generator-questions Ques.1. The speed of a synchronous motor can be changed by 1. Changing the supply voltage 2. Changing the frequency 4. Changing the supply Terminals The speed of the synchronous motor is given as Ns = 120f/P Therefore by changing the number of poles and frequency, we can change the speed of the synchronous motor. Ques.2. In a synchronous motor running with the fixed excitation, when the load is increased two times, its torque angle becomes approximately 1. Half 2. Twice 3. Four Times 4. No Change A synchronous motor runs at an absolutely constant speed called synchronous speed, regardless of the load. Let us examine how the change in load affects its performance. The power angle or torque angle of a synchronous motor is given as $P \approx \dfrac{{VE}}{{{X_s}}}\sin \delta$ Where δ = Torque angle Xs = Synchronous Reactance A synchronous motor operates at the same average speed for all values of the load from no load to peak load. When the load on a synchronous motor is increased, the motor slows down just enough to allow the rotor to change its angular position in relation to the rotating flux of the stator and then goes back to synchronous speed. Similarly, when the load is removed, it accelerates just enough to cause the rotor to decrease its angle of lag in relation to the rotating flux, and then goes back to synchronous speed. When the peak load that the machine can handle is exceeded, the rotor pulls out of synchronism. Hence if the load increases two times then the torque will also increase the two times so that the synchronous motor does not pull out of synchronism. Ques.76. A three-phase synchronous motor will have 1. One slip-ring 2. Two slip-rings 3. No slip-rings 4. 
Three slip-rings

A three-phase synchronous motor basically consists of a stator core with a three-phase winding (similar to an induction motor), a revolving DC field with an auxiliary or amortisseur winding, slip rings, brushes and brush holders, and two end shields housing the bearings that support the rotor shaft.

An amortisseur winding consists of copper bars embedded in the cores of the poles. The copper bars of this special type of "squirrel-cage winding" are welded to end rings on each side of the rotor. The function of a slip ring is to transfer electrical signals between rotating and stationary components or systems.

Both the stator winding and the core of a synchronous motor are similar to those of the three-phase squirrel-cage induction motor and the wound-rotor induction motor. The rotor of the synchronous motor has salient field poles. The field coils are connected in series for alternate polarity. The number of rotor field poles must equal the number of stator field poles. The field circuit leads are brought out to two slip rings mounted on the rotor shaft for brush-type motors. Carbon brushes mounted in brush holders make contact with the two slip rings. The terminals of the field circuit are brought out from the brush holders to a second terminal box mounted on the frame of the motor. A squirrel-cage, or amortisseur, winding is provided for starting, because the synchronous motor is not self-starting without this feature.

Ques.77. The maximum speed variation in a synchronous motor is

1. Zero
2. 5%
3. 2%
4. 10%

A synchronous motor is a constant-speed motor; therefore, the variation of speed is zero.

Ques.78. In a synchronous motor, which loss varies with the load?

1. Windage loss
2. Bearing friction loss
3. Core loss
4.
Copper loss

In a rotating machine, whether AC or DC, the types of losses are almost the same.

Copper (I²R) losses: All windings have some resistance (though small), and hence there are copper losses associated with current flow in them. The copper loss can be subdivided into the stator copper loss, the rotor copper loss, and the brush-contact loss. The stator and rotor copper losses are proportional to the square of the current and are computed with the DC resistance of the windings at 75°C. The conduction of current between the brushes (made of carbon) and the commutator of a DC machine is via short arcs in the air gaps which are bound to exist in such a contact. As a consequence, the voltage drop at the brush contact remains practically constant with load; its value for the positive and negative brushes put together is of the order of 1 to 2 V. The brush-contact loss in a DC machine is therefore directly proportional to current. The contact losses between the brushes (made of copper-carbon) and the slip rings of a synchronous machine are negligible for all practical purposes. Copper losses are also present in the field windings of synchronous and DC machines and in the regulating rheostat. However, only losses in the field winding are charged against the machine, the others being charged against the system.

Stray-load losses: Apart from the variable losses mentioned above, there are some additional losses that vary with load but cannot be related to the current in a simple manner. These losses are known as "stray-load loss". The stray-load loss is difficult to calculate accurately and is therefore taken as 1% of the output for a DC machine and 0.5% of the output for both synchronous and induction machines.

Ques.79. In a synchronous motor, the damper winding is generally used to (SSC-2018 Set-1,2)

1. Provide starting torque
2. Prevent hunting and provide starting torque
3. Reduces the eddy current
4.
Reduces the noise level

### Use of Damper Winding to Provide the Starting Torque

To enable the synchronous machine to start independently as a motor, a damper winding is placed in pole-face slots. Bars of copper, aluminum, bronze, or similar alloys are inserted in slots made in the pole shoes, as shown in Fig. These bars are short-circuited by end-rings on each side of the poles, so the short-circuited bars form a squirrel-cage winding. On the application of a three-phase supply to the stator, a synchronous motor with a damper winding will start as a three-phase induction motor and rotate at a speed near synchronous speed. Now, with the application of DC excitation to the field windings, the rotor will be pulled into synchronous speed, since the rotor pole is at that point rotating at only slip speed with respect to the stator's rotating magnetic field.

### Use of Damper Winding to Prevent Hunting

During hunting, the rotor of the synchronous motor oscillates about its mean position; therefore, a relative motion exists between the damper winding and the rotating magnetic field. Due to this relative motion, an e.m.f. gets induced in the damper winding. According to Lenz's law, the direction of an induced e.m.f. is always such as to oppose the cause producing it. The cause here is hunting, so the induced e.m.f. opposes the hunting and tries to damp the oscillations as quickly as possible. Thus hunting is minimized by the damper winding.

Ques.96. An over-excited synchronous motor is used for

3. Power factor correction

An overexcited synchronous machine produces reactive power whether it is operating as a motor or as a generator. When synchronous motors are used as synchronous condensers, they are manufactured without a shaft extension, since they are operated with no mechanical load. The AC input power supplied to such a motor can only provide for its losses. These losses are very small and the power factor of the motor is almost zero.
Therefore, the armature current leads the terminal voltage by close to 90°, as shown in Figure a, and the power network perceives the motor as a capacitor bank. As can be seen in Figure b, when this motor is overexcited it behaves like a capacitor (i.e., a synchronous condenser), with Ea > Vφ, whereas when it is under-excited it behaves like an inductor (i.e., a synchronous reactor), with Ea < Vφ. Synchronous condensers are used to correct power factors at load points or to reduce line voltage drops and thereby improve the voltages at these points, as well as to control reactive power flow.

Generally, in large industrial plants the load power factor will be lagging. A specially designed synchronous motor running at zero load takes a leading current at an angle of approximately 90°; when it is connected in parallel with inductive loads, it improves the power factor. Large synchronous condensers are usually more economical than static capacitors.

Ques.97. When any one phase of a 3-phase synchronous motor is short-circuited, the motor

1. Will overheat in the spot
2. Will refuse to start
3. Will not come up to speed
4. Will fail to pull into step

Failure of a synchronous motor to start is often due to faulty connections in the auxiliary apparatus. This should be carefully inspected for open circuits or poor connections. An open circuit in one phase of the motor itself, or a short circuit, will prevent the motor from starting. Most synchronous motors are provided with an ammeter in each phase so that the last two causes can be determined from their indications: no current in one phase in the case of an open circuit, and excessive current in the case of a short circuit. Either condition will usually be accompanied by a decided buzzing noise, and a short-circuited coil will often be quickly burned out. The effect of a short circuit is sometimes caused by two grounds on the machine.

Difficulties in starting synchronous motors: A synchronous motor starts as an induction motor.
The starting torque, as in an induction motor, is proportional to the square of the applied voltage. For example, if the voltage is halved, the starting torque is quartered. When a synchronous motor will not start, the cause may be that the line voltage has been pulled below the value necessary for starting; in general, at least half voltage is required to start a synchronous motor. Difficulty in starting may also be caused by an open circuit in one of the lines to the motor. Assume the motor to be three-phase: if one of the lines is open, the motor becomes single-phase, and no single-phase synchronous motor, as such, is self-starting. The motor, therefore, will not start and will soon get hot. The same is true of a two-phase motor if one of the phases is open-circuited. Difficulty in starting may be due to a rather slight increase in static friction. It may be that the bearings are too tight, perhaps from cutting during the previous run. Excessive belt tension, if the synchronous motor is belted to its load, or any other cause which increases starting friction, will probably give trouble. Difficulty in starting may also be due to field excitation on the motor. Once the excitation exceeds one-quarter of its normal value, the starting torque is affected; with full field on, most synchronous motors will not start at all. The field should be short-circuited through a proper resistance during the starting period.

Ques.98. Which of the following can be measured by conducting an insulation resistance test on a synchronous motor?

1. Phase to phase winding resistance
2. Rotor winding to earthed shaft
3. Stator winding to an earthed shaft
4. All options are correct

Insulation Resistance Test: This test is conducted with voltages from 500 to 5000 V and provides information on the condition of the machine insulation. A clean, dry insulation system has very low leakage compared to a wet and contaminated insulation system.
This test does not check the high-voltage strength of the insulation system, but it does indicate whether the insulation system has high leakage resistance. It is commonly made before the high-voltage test to identify insulation contamination or faults, and it can be made on all or part of the machine circuit to ground, i.e.

• Field winding (rotor winding) test
• Overall stator armature winding test
• Overall system test for the motor or generator

The overall system test includes the generator neutral, transformer, all stator windings, isolated phase bus, and the low-side windings of the generator step-up transformer. This test is performed as a screening test after an abnormal occurrence on the machine. If the reading is satisfactory, no further tests are made. If the reading is questionable or low, the machine terminals are disconnected and further isolation is performed to locate the source of the trouble.

Ques.99. The speed of a synchronous motor can be changed by

1. Changing the supply voltage
2. Changing the frequency
4. Changing the supply terminals

The speed of the synchronous motor is given as

Ns = 120f/P

Therefore, by changing the number of poles or the supply frequency, we can change the speed of the synchronous motor.

Ques.100. The under-excited synchronous motor takes____ (SSC-2018 Set-2)

2. Lagging current
3. Both leading and lagging current
4. None of these

Unlike the induction machine, the synchronous machine can operate at lagging, leading and unity power factors. In an induction machine, a magnetizing current is required to establish flux in the air gap; this magnetizing current lags the voltage, so the induction machine always operates at a lagging power factor. In the synchronous machine, on the other hand, the total air-gap flux is produced by a dc source, so no lagging current needs to be drawn from the ac system to produce the air-gap flux.
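The Ns = 120f/P relation quoted in Ques.99 above can be checked with a short calculation; the frequency and pole values below are illustrative, not from the source.

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed Ns = 120 * f / P, in rpm."""
    if poles < 2 or poles % 2 != 0:
        raise ValueError("pole count must be an even number >= 2")
    return 120.0 * frequency_hz / poles

# A 4-pole machine on a 50 Hz supply runs at 1500 rpm.
# Doubling the pole count halves the speed; raising the
# frequency raises the speed in direct proportion.
print(synchronous_speed_rpm(50, 4))   # 1500.0
print(synchronous_speed_rpm(50, 8))   # 750.0
print(synchronous_speed_rpm(60, 4))   # 1800.0
```

This also shows why only options 2 (frequency) and the pole count affect the speed: the supply voltage does not appear in the relation at all.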
If the dc excitation is decreased, lagging reactive power is drawn from the ac source to aid magnetization, and the machine operates at a lagging power factor. If the dc excitation is increased, leading current is drawn from the ac source to oppose the magnetization, and the machine operates at a leading power factor. It can therefore be concluded that an over-excited motor (Eb > V) draws leading current (acts like a capacitive load), while an under-excited motor (Eb < V) draws lagging current (acts like an inductive load).

Ques.80. Power factor correction substations consist of

1. Rectifiers
2. Inverters
3. Synchronous condenser
4. Transformers

Power factor correction substation: It is used for power factor improvement, because the impedance of the line lowers the power factor of the system. These substations are located near the receiving end of the line. Hence, synchronous condensers or static capacitors are installed to improve the power factor at the receiving end.

### Synchronous condenser

A synchronous condenser is a synchronous motor running without mechanical load. A synchronous motor takes a leading current when over-excited and therefore behaves as a capacitor. When such a machine is connected in parallel with the supply, it takes a leading current which partly neutralizes the lagging reactive component of the load; thus the power factor is improved. Let V be the applied voltage and IL the load current lagging V by angle φL. This power factor, cos φL, is very low and lagging. The synchronous motor acting as a synchronous condenser is now connected across the same supply and draws a leading current Im. The total current drawn from the supply is now the phasor sum of IL and Im. This total current I lags V by a smaller angle φ, so the effective power factor is improved. This is how the synchronous motor, used as a synchronous condenser, improves the power factor of the combined load.

1. A synchronous condenser has an inherently sinusoidal waveform, so it does not introduce voltage harmonics.
2. It can supply as well as absorb kVAr.
3. The power factor can be varied smoothly.
5. The high inertia of the synchronous condenser reduces the effect of sudden changes in the system load and improves the stability of the system.
6. It reduces the switching surges due to sudden connection or disconnection of lines in the system.
7. The motor windings have high thermal stability to short-circuit currents.
8. By varying the field excitation, the magnitude of the current drawn by the motor can be changed by any amount. This helps in achieving stepless control of the power factor.
9. Faults can be removed easily.

Ques.95. Synchronizing power of a synchronous machine is

1. Directly proportional to the synchronous reactance
2. Equal to the synchronous reactance
3. Inversely proportional to the synchronous reactance
4. None of these

Nowadays, almost all alternators are connected in parallel with other alternators. Satisfactory parallel operation of alternators depends upon the synchronizing power: the higher the synchronizing power, the higher the capability of the system to remain in synchronism. The power that comes into play when the load angle changes is called the synchronizing power. It exists only during the transient state, i.e. whenever there is a sudden disturbance in the load (or in the steady-state operating conditions). Once the steady state is reached, the synchronizing power reduces to zero. The synchronizing power flows from or to the bus in order to bring the relative velocity between the interacting stator and rotor fields to zero; once this equality is reached, the synchronizing power vanishes. The synchronizing power coefficient, defined as the rate at which the synchronizing power varies with the load angle (δ), gives a measure of this effectiveness. It is also called the stiffness of coupling, rigidity factor or stability factor. It is denoted by Psyn.
The synchronizing power of the synchronous machine is given as

${P_{syn}} = \frac{{{E_b}V}}{{{X_s}}}\cos \delta$

Where
V = supply voltage
Eb = back e.m.f.
Xs = synchronous reactance

Hence, from the above expression, it is clear that the synchronizing power is inversely proportional to the synchronous reactance.

Ques.96. The normal starting method used to start a synchronous motor is

1. Star-delta starter
2. Damper winding
3. Resistance starter in the armature circuit
4. Damper winding in conjunction with the star-delta starter

To enable the synchronous machine to start independently as a motor, a damper winding is provided in slots on the rotor pole faces. Bars of copper, aluminium, bronze or similar alloys are inserted in slots made on the pole shoes, as shown in Fig. These bars are short-circuited by end-rings on each side of the poles, so that the short-circuited bars form a squirrel-cage winding. On the application of a three-phase supply to the stator, a synchronous motor with a damper winding starts as a three-phase induction motor and rotates at a speed near synchronous speed (sub-synchronous speed). When a dc supply is then given to the field winding, at a particular instant the motor is pulled into synchronism and starts rotating at synchronous speed, since the rotor poles are moving at only slip speed with respect to the stator's rotating magnetic field. To limit the starting current drawn by the motor, it may be necessary to apply a reduced voltage to high-capacity synchronous motors. The reduced voltage can be applied through an auto-transformer or through a star-delta starter. Hence, to start a synchronous machine as an induction motor and to limit the starting current drawn, a star-delta starter is used in conjunction with the damper winding. During the starting period, before the application of dc excitation, the field windings are kept closed through a resistor.
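Since the machine starts as an induction motor, its starting torque varies as the square of the applied voltage, so the effect of the star-delta starter described above can be sketched numerically; the voltage fractions below are illustrative.

```python
import math

def relative_starting_torque(voltage_fraction: float) -> float:
    """Starting torque relative to full-voltage torque: T is proportional to V^2."""
    return voltage_fraction ** 2

# A star connection applies V/sqrt(3) per phase instead of V,
# so the starting torque falls to one third of its delta value.
star_fraction = 1.0 / math.sqrt(3)
print(relative_starting_torque(star_fraction))  # ~0.333

# Half voltage gives one quarter of the torque, matching the
# "voltage halved, starting effort quartered" remark above.
print(relative_starting_torque(0.5))  # 0.25
```

This is why, as noted earlier, at least half voltage is generally required for a synchronous motor to develop enough torque to start.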
DC is supplied from an independent source or through the armature of the dc exciter, namely the dc generator carried on the shaft extension of the synchronous motor. If this is not done, the high voltage induced in the dc winding during the starting period will strain the insulation of the field winding. Since the motor is started as an induction motor, the starting torque developed is rather low, and large-capacity motors may therefore not be able to start on full load.

Ques.97. At what condition can a synchronous motor be used as a synchronous capacitor?

2. Under-excited
4. Over-excited

Synchronous condenser: A synchronous condenser is a synchronous motor running without mechanical load. A synchronous motor takes a leading current when over-excited and therefore behaves as a capacitor. When such a machine is connected in parallel with the supply, it takes a leading current which partly neutralizes the lagging reactive component of the load; thus the power factor is improved. Let V be the applied voltage and IL the load current lagging V by angle φL. This power factor, cos φL, is very low and lagging. The synchronous motor acting as a synchronous condenser is now connected across the same supply and draws a leading current Im. The total current drawn from the supply is now the phasor sum of IL and Im. This total current I lags V by a smaller angle φ, so the effective power factor is improved. This is how the synchronous motor, used as a synchronous condenser, improves the power factor of the combined load.

1. A synchronous condenser has an inherently sinusoidal waveform, so it does not introduce voltage harmonics.
2. It can supply as well as absorb kVAr.
3. The power factor can be varied smoothly.
5. The high inertia of the synchronous condenser reduces the effect of sudden changes in the system load and improves the stability of the system.
6. It reduces the switching surges due to sudden connection or disconnection of lines in the system.
7. The motor windings have high thermal stability to short-circuit currents.
8. By varying the field excitation, the magnitude of the current drawn by the motor can be changed by any amount. This helps in achieving stepless control of the power factor.
9. Faults can be removed easily.

Ques.98. Synchronous motors do not have the self-starting property because

1. Starting winding is not provided on the machines
2. The direction of rotation is not fixed
3. The direction of instantaneous torque reverses after every half cycle
4. Starters cannot be used on these machines

The synchronous motor works on the principle of magnetic locking. The operating principle can be explained with the help of a 2-pole synchronous machine in the following steps.

Step 1. When a three-phase supply is given to the stator winding, a rotating magnetic field is produced in the stator.

Step 2. Due to the rotating magnetic field, the stator poles Ns and Ss rotate at synchronous speed. At a particular instant the stator pole Ns coincides with the rotor pole Nr, and Ss coincides with Sr, i.e. like poles of the stator and rotor face each other. As we know, like poles experience a repulsive force, so the rotor poles experience a repulsive force Fr. Assume that the rotor tends to rotate in the anti-clockwise direction, as shown in Fig. (i).

Step 3. After half a cycle, the polarity of the stator poles is reversed, whereas the rotor poles cannot change their position due to inertia, as shown in Fig. (ii). Now unlike poles face each other, and the rotor experiences an attractive force Fa and tends to rotate in the clockwise direction.

In brief, with the rotation of the stator poles the rotor is urged clockwise and anti-clockwise in alternate half cycles. As a result, the average torque on the rotor is zero; hence the 3-phase synchronous motor is not a self-starting motor.

Step 4. Now suppose the rotor is rotated by some external means at a speed almost equal to synchronous speed.
At a certain instant, unlike poles of the stator and rotor will face each other; due to the strong force of attraction, magnetic locking is established, and the rotor and stator poles continue to occupy the same relative positions.

Step 5. Due to this, the rotor continuously experiences a unidirectional torque in the direction of the rotating magnetic field. Hence a 3-phase synchronous motor must run at synchronous speed.

Ques.99. The over-excited synchronous motor takes

2. Lagging current
3. Both leading and lagging current
4. None of the above

Unlike the induction machine, the synchronous machine can operate at lagging, leading and unity power factors. In the induction machine, a magnetizing current is required to establish the flux in the air gap; this magnetizing current lags the voltage, so the induction machine always operates at a lagging power factor. When the load on a synchronous motor is constant, the input power V·I·cosφ drawn from the bus-bar remains constant. As the bus-bar voltage V is constant, I·cosφ remains constant. Under this condition, the effect of a change of field excitation on the armature current I drawn by the motor is as follows: when the excitation is changed, the magnitude of the induced e.m.f. changes, while the torque angle α, i.e. the angle by which E lags the axis of V, remains constant as long as the load on the motor is constant. In the synchronous machine, on the other hand, the total air-gap flux is produced by the dc source, and no lagging current needs to be drawn from the ac system for the production of the air-gap flux. If the dc excitation is decreased, lagging reactive power is drawn from the ac source to aid magnetization, and the machine operates at a lagging power factor. If the dc excitation is increased, leading current is drawn from the ac source to compensate (oppose) the magnetization, and the machine operates at a leading power factor.
If a motor is operated at a leading power factor at no load, it is called a synchronous condenser, which can work as a variable inductor or capacitor.

Ques.100. The maximum power developed in a synchronous motor will depend on (SSC-2018 Set-3)

1. The rotor excitation and supply voltage
2. The rotor excitation, supply voltage and maximum value of coupling angle
3. The supply voltage only
4. The rotor excitation only

The power developed in the synchronous motor is given by the expression

${P_{\max }} = \dfrac{{{E_b}V}}{{{Z_s}}}\cos (\theta - \delta ) - \dfrac{{E_b^2}}{{{Z_s}}}\cos\theta$

Where
θ = internal angle of the synchronous impedance Zs
V = terminal voltage
Eb = back e.m.f., which in a synchronous motor depends only on the dc excitation, since the speed is constant

The power developed therefore depends on the excitation, the supply voltage and the coupling (load) angle. The maximum value of θ, and hence of δ, is 90°. An increase in the excitation results in an increase of Pmax; consequently, the load angle decreases for a given developed power. The overload capacity of the motor increases with an increase in excitation, and the machine becomes more stable. For all values of V and Eb this limiting value of δ is the same, and the maximum torque is proportional to the maximum power developed. If the resistance of the armature is negligible, then Zs ≅ Xs, θ = 90° and cosθ = 0, so the power developed is given by

${P_{\max }} = \dfrac{{{E_b}V}}{{{X_s}}}\sin \delta$

Hence the power developed is maximum when δ is 90°.

Ques.71. Name the equipment which runs an alternator

1. Prime Mover
2. Generator
3. Motor
4. Fan

All generators, large and small, ac and dc, require a source of mechanical power to turn their rotors. This source of mechanical energy is called a prime mover.
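With armature resistance neglected, the developed-power expression for Ques.100 above reduces to P = EbV·sinδ/Xs, which peaks at δ = 90°. A short numeric sketch, with assumed per-phase values (not from the source):

```python
import math

def developed_power(e_b: float, v: float, x_s: float, delta_deg: float) -> float:
    """P = Eb * V * sin(delta) / Xs, armature resistance neglected."""
    return e_b * v * math.sin(math.radians(delta_deg)) / x_s

# Assumed illustrative values: Eb = 230 V, V = 230 V, Xs = 5 ohm (per phase).
powers = {d: developed_power(230.0, 230.0, 5.0, d) for d in (0, 30, 60, 90, 120)}
for delta, p in powers.items():
    print(f"delta = {delta:3d} deg -> P = {p:8.1f} W")

# The developed power is greatest at delta = 90 deg,
# the steady-state stability limit quoted above.
assert max(powers, key=powers.get) == 90
```

Note how P scales linearly with both Eb and V, which is why raising the excitation raises the overload capacity and lets the machine carry a given load at a smaller load angle.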
When a prime mover drives the synchronous machine, it functions as an alternator, converting the mechanical energy of the prime mover into electrical energy. Prime movers for generators are divided into two classes: high-speed and low-speed. Steam and gas turbines are high-speed prime movers, while internal-combustion engines, water wheels, and electric motors are considered low-speed prime movers. The type of prime mover plays an important part in the design of alternators, since the speed at which the rotor is turned determines certain characteristics of alternator construction and operation.

Ques.95. A synchronous motor shaft is made of

1. Alnico
2. Chrome Steel
3. Mild Steel
4. Stainless Steel

A shaft is a rotating machine element, usually circular in cross-section, which is used to transmit power from a machine which produces power to a machine which absorbs power. Low-carbon steel, also called mild steel (MS), is an alloy of iron and carbon. The carbon content varies from 0.05 to 0.15 per cent for dead mild steel and 0.15 to 0.3 per cent for mild steel. The shaft is generally made of mild steel. When high strength is required, an alloy steel such as nickel, nickel-chromium or chromium-vanadium steel is used. Mechanical power is taken from, or given to, the machine through the shaft.

Ques.96. When the voltage applied to a synchronous motor is increased, which of the following will reduce?

1. Stator Flux
2. Pull-in Torque
3. Pull-out Torque
4. None of these

Let us consider the factors that depend on the supply voltage for good operation of the synchronous motor. The synchronous motor is a doubly excited, constant-speed motor. The supply voltage is applied so as to produce a rotating magnetic field at specified phase angles with respect to the phases in the stator of the motor, and this rotating field produces a flux of roughly constant magnitude. Let us assume that initially the synchronous motor was operated at a voltage less than the preset value.
As in the case of induction motors, a variation in the frequency of the source will result in a corresponding change in the flux in the air gap. Hence, in order to operate the motor with fairly constant flux in the air gap, it is necessary to vary the magnitude of the applied voltage in the same ratio as the supply frequency (i.e. V/f should be kept constant) and to keep the excitation current constant. If a synchronous motor is driven by an external power source, and the excitation, or voltage applied to the rotor, is adjusted to a certain value called 100 per cent excitation, no current will flow from or to the stator winding. In this case, the voltage generated in the stator windings by the rotor, or back e.m.f., exactly balances the applied voltage. However, if the excitation is reduced below the 100 per cent value, the difference between the back e.m.f. and the applied voltage produces a reactive component of current which lags the applied voltage; the machine then acts as an inductance. Similarly, if the excitation is increased above the 100 per cent value, the reactive component leads the applied voltage, and the machine acts as a capacitor. This feature of the synchronous motor permits use of the machine as a power-factor correction device; when so used, it is called a synchronous condenser.

Note: In most applications, the voltage applied to the synchronous machine cannot be varied, because in most cases the machine is directly connected to the grid and the terminal voltage is therefore fixed.

Ques.97. A synchronous motor has a better power factor than an induction motor. This is due to

1. Stator supply is not required to produce the magnetic field
2. Synchronous motor has no slip
3. Mechanical load on the rotor remains constant
4. Synchronous motor has a large air gap

A synchronous motor is a machine that converts electric power into mechanical power at a constant speed called synchronous speed.
It is a doubly excited machine because its rotor winding is excited by direct current and its stator winding is connected to an ac supply.

Stator: The stator is the stationary part of the machine. The three-phase armature winding is placed in the slots of the stator core and is wound for the same number of poles as the rotor. The stator is excited by a three-phase ac supply, as shown in the Figure.

Rotor: The rotor of the synchronous motor can be of the salient-pole or cylindrical (non-salient) pole type of construction. Practically, most synchronous motors use salient, i.e. projected, pole type construction, except for exceedingly high-speed machines. The field winding is placed on the rotor and is excited by a separate dc supply.

Although the synchronous motor starts as an induction motor, it does not operate as one. After the armature winding has been used to accelerate the rotor to about 95% of the speed of the rotating magnetic field, direct current is connected to the rotor and the electromagnets lock in step with the rotating field. Notice that the synchronous motor does not depend on induced voltage from the stator field to produce a magnetic field in the rotor: the magnetic field of the rotor is produced by external dc applied to the rotor. This is the reason that the synchronous motor has the ability to operate at the speed of the rotating magnetic field. As load is added to the motor, the magnetic field of the rotor remains locked with the rotating magnetic field, and the rotor continues to turn at the same speed. It should be noted that changes in the dc field excitation do not affect the motor speed; however, such changes do alter the power factor of a synchronous motor. If all of the resistance of the rheostat is inserted in the field circuit, the field current drops below its normal value, and a poor lagging power factor results.
If the dc field is weak, the three-phase ac circuit to the stator supplies a magnetizing current to strengthen the field. This magnetizing component lags the voltage by 90 electrical degrees and becomes a large part of the total current input, which gives rise to a low lagging power factor. If a weak dc field is strengthened, the three-phase ac circuit to the stator supplies less magnetizing current; because this current component becomes a smaller part of the total current input to the stator winding, the power factor increases. The field strength can be increased until the power factor is unity, or 100%. When the power factor reaches unity, the three-phase ac circuit supplies energy current only; the dc field circuit supplies all of the current required to magnetize the motor. The amount of dc field excitation required to obtain a unity power factor is called normal field excitation. The magnetic field of the rotor can be strengthened still more by increasing the dc field current above the normal excitation value; the power factor in this case decreases. The circuit feeding the stator winding then delivers a demagnetizing component of current, which opposes the rotor field and weakens it until it returns to the normal magnetic strength. Hence, a higher power factor means a lower m.m.f. requirement for energy transfer and therefore a lower magnetizing-current requirement. The synchronous machine has separate dc excitation, which removes its dependence on the mains supply for excitation, hence the better power factor; the induction motor has no such provision, hence its low power factor.

Ques.98. When the field circuit of an unloaded salient-pole synchronous motor gets suddenly open-circuited, then

1. The motor stops
2. It runs at a slower speed
3. It continues to run at the same speed
4. It runs at a very high speed

The synchronous motor starts as an induction motor, but it does not operate as one.
After the armature winding has been used to accelerate the rotor to about 95% of the speed of the rotating magnetic field, direct current is connected to the rotor and the electromagnets lock in step with the rotating field. Notice that the synchronous motor does not depend on induced voltage from the stator field to produce a magnetic field in the rotor: the magnetic field of the rotor is produced by external dc applied to the rotor. To shut down the motor, the field circuit is de-energized by opening the field discharge switch. The field discharge resistor is connected across the field circuit to reduce the voltage induced in the field as the field flux collapses. The energy stored in the magnetic field is dissipated in the resistor and a lower voltage is induced in the field circuit; with the rotor field gone, the magnetic locking with the rotating field is lost and the motor will stop.

Note: The synchronous motor either runs at synchronous speed or it does not run at all.

Ques.99. The armature current of a synchronous motor is minimum when operating at

1. Unity power factor
2. 0.707 power factor lagging

The power factor of a synchronous motor can be changed by changing the field current. When the field current is changed, the armature current of the synchronous motor also changes. Suppose that a synchronous motor is running at no load. If the field current is increased, the armature current Ia decreases until it reaches a minimum; at this point the synchronous motor is running at unity power factor. Before this point, the motor was running at a lagging power factor. If the field current is increased further, the armature current increases again and the motor starts operating at a leading power factor. It must be noted that, because the load is constant, the in-phase component of the armature current, Ia cosφ, always remains constant.
The V-curves of a synchronous motor show how the armature current varies with the field current when the motor input is kept constant; they are so called because of their shape. The minimum armature current corresponds to unity power factor.

Ques.100. The resultant armature voltage of a synchronous motor is equal to the_______. (SSC-2018 Set-4)

1. Vector sum of Eb and V
2. Vector difference of Eb and V
3. Arithmetic sum of Eb and V
4. Arithmetic difference between Eb and V

When a dc motor or an induction motor is loaded, the speed of the motor drops. This is because the load torque demand rises above the torque produced by the motor; the motor then draws more current to produce more torque to satisfy the load, but its speed reduces. In the case of the synchronous motor, the speed always remains constant and equal to the synchronous speed, irrespective of the load condition. It is interesting to study how the synchronous motor reacts to changes in load. In a dc motor, the armature develops an e.m.f. once motoring action starts, which opposes the supply voltage and is called the back e.m.f., Eb. The resultant voltage across the armature is (V − Eb), and it causes an armature current Ia = (V − Eb)/Ra to flow, where Ra is the armature circuit resistance. In the case of the synchronous motor also, once the rotor starts rotating at synchronous speed, the stationary stator (armature) conductors cut the flux produced by the rotor; the only difference is that the conductors are stationary and the flux is rotating. Due to this, an e.m.f. is induced in the stator which, according to Lenz's law, opposes the supply voltage. This induced e.m.f. is called the back e.m.f. of the synchronous motor and is denoted Ebph, i.e. back e.m.f. per phase. It is generated on the same principle as in an alternator and is hence alternating in nature. So the back e.m.f. of the synchronous motor depends on the excitation given to the field winding and not on the speed, as the speed is always constant.
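The V-curve behaviour described in Ques.99 above can be reproduced with a simple round-rotor phasor model in which the armature current is the vector difference of the voltages divided by jXs, at constant input power. All numbers below are assumed for illustration only.

```python
import cmath
import math

V, XS, P = 230.0, 5.0, 4000.0  # per-phase volts, ohms, watts (assumed values)

def armature_current(e_b: float) -> complex:
    """Armature current phasor Ia = (V - Eb*exp(-j*delta)) / (j*Xs) at fixed power P."""
    # Load angle delta from P = Eb * V * sin(delta) / Xs
    delta = math.asin(P * XS / (e_b * V))
    return (V - e_b * cmath.exp(-1j * delta)) / (1j * XS)

# Sweep the excitation (Eb) and watch |Ia| fall to a minimum
# near unity power factor, then rise again: the classic V-curve.
for e_b in (180.0, 210.0, 240.0, 270.0, 300.0):
    i_a = armature_current(e_b)
    pf = math.cos(cmath.phase(i_a))
    print(f"Eb = {e_b:5.1f} V  |Ia| = {abs(i_a):6.2f} A  pf = {pf:5.3f}")
```

The in-phase component of the current stays fixed at P/V throughout the sweep, exactly as the constant Ia cosφ remark in Ques.99 requires; only the reactive component changes sign as the machine passes from under- to over-excitation.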
The net voltage in the armature (stator) is the vector difference (not the arithmetic difference, as in dc motors) of Vph and Ebph. The armature current is obtained by dividing this vector difference of voltages by the armature impedance (not the resistance, as in dc machines).

Ques.95. In a synchronous motor the rotor copper losses are met by

1. Armature input
2. D.C source
3. Motor input
4. Supply lines

The synchronous motor consists of two parts:

Stator: The stator carries the armature winding. It consists of a three-phase star- or delta-connected winding excited by a three-phase ac supply.

Rotor: The rotor carries the field winding, which is excited by a separate dc supply through slip rings.

• The three-phase ac source feeds electrical power to the armature for the following components of power: (i) the net mechanical output from the shaft, (ii) copper losses in the armature winding, (iii) friction and armature core losses.
• The power received from the dc source is utilized only to meet the copper losses of the field winding.

Ques.96. The change of D.C. excitation of a synchronous motor changes

1. Motor Speed
2. Applied voltage of the motor
3. Power Factor
4. All options are correct

A synchronous motor is a machine which converts electric power into mechanical power at a constant speed called synchronous speed. It is a doubly excited machine because its rotor winding is excited by direct current and its stator winding is connected to an ac supply.

Stator: The stator is the stationary part of the machine. The three-phase armature winding is placed in the slots of the stator core and is wound for the same number of poles as the rotor. The stator is excited by a three-phase ac supply, as shown in the Figure.

Rotor: The rotor of the synchronous motor can be of the salient-pole or cylindrical (non-salient) pole type of construction. Practically, most synchronous motors use salient, i.e. projected, pole type construction, except for exceedingly high-speed machines.
The field winding is placed on the rotor and is excited by a separate dc supply. Although the synchronous motor starts as an induction motor, it does not operate as one. After the armature winding has been used to accelerate the rotor to about 95% of the speed of the rotating magnetic field, direct current is connected to the rotor and the electromagnets lock in step with the rotating field. Notice that the synchronous motor does not depend on induced voltage from the stator field to produce a magnetic field in the rotor: the magnetic field of the rotor is produced by external dc applied to the rotor. This is the reason that the synchronous motor has the ability to operate at the speed of the rotating magnetic field. As load is added to the motor, the magnetic field of the rotor remains locked with the rotating magnetic field, and the rotor continues to run at the same speed. It should be noted that changes in the dc field excitation do not affect the motor speed; however, such changes do alter the power factor of a synchronous motor. If all of the resistance of the rheostat is inserted in the field circuit, the field current drops below its normal value, and a poor lagging power factor results. If the dc field is weak, the three-phase ac circuit to the stator supplies a magnetizing current to strengthen the field, i.e. lagging reactive power is drawn from the ac source to aid magnetization. This magnetizing component lags the voltage by 90 electrical degrees and becomes a large part of the total current input, which gives rise to a low lagging power factor. If a weak dc field is strengthened, the three-phase ac circuit to the stator supplies less magnetizing current, and leading current is drawn from the ac source to compensate (oppose) the magnetization. Because this current component becomes a smaller part of the total current input to the stator winding, the power factor increases.
The field strength can be increased until the power factor is unity, or 100%. When the power factor reaches unity, the three-phase a.c circuit supplies energy current only; the d.c field circuit supplies all of the current required to magnetize the motor. The amount of d.c field excitation required to obtain a unity power factor is called normal field excitation. The magnetic field of the rotor can be strengthened still more by increasing the d.c field current above the normal excitation value. The power factor in this case decreases: the circuit feeding the stator winding delivers a demagnetizing component of current, which opposes the rotor field and weakens it until it returns to the normal magnetic strength.

Hence a higher PF means a lower MMF requirement for energy transfer, and hence a lower magnetizing-current requirement. The synchronous machine has separate DC excitation, which reduces the machine's excitation dependency on the main supply, hence a better PF, whereas the induction motor has no such provision, hence a low power factor.

Ques.97. The advantage of a stationary armature of a synchronous machine is

1. Reducing the number of slip rings on the rotor
2. The difficulty of providing high voltage insulation on the rotor
3. The armature is associated with large power as compared to the field circuits
4. All options are correct✓

Explanation:-

FIELD AND ARMATURE CONFIGURATIONS

There are two arrangements of fields and armatures:

1. Revolving armature and stationary field
2. Revolving field and stationary armature

### ADVANTAGES OF ROTATING FIELD IN AN ALTERNATOR

In large alternators, the rotating field arrangement is usually preferred due to the following advantages.

1. Ease of Construction: The armature winding of large alternators being complex, the connection and bracing of the armature windings can be easily made on a stationary stator.

2.
The number of Slip Rings: If the armature is made rotating, at least three slip rings are required for power transfer from the armature to the external circuit. Also, the heavy currents flowing through the brushes and slip rings cause problems and require more maintenance in large alternators. Insulating the slip rings from the rotating shaft is difficult with a rotating-armature system.

3. High voltage generation: Voltages can be generated as high as 11,000 and 13,800 V. These values can be reached because the stationary armature windings do not undergo vibration and centrifugal stresses.

4. High current rating: Alternators can have relatively high current ratings. Such ratings are possible because the output of the alternator is taken directly from the stator windings through heavy, well-insulated cables to the external circuit. Neither slip rings nor a commutator is used.

5. Better insulation of the armature: The insulation of the armature windings can easily be arranged on the stator core.

6. Reduced rotor weight and rotor inertia: Since the field system is placed on the rotor, the insulation requirement is less (for the low d.c voltage). Also, the rotational inertia is less, so the machine takes less time to reach full speed.

7. Improved ventilation arrangement: Cooling can be provided by enlarging the stator core with radial ducts. Water cooling is much easier if the armature is housed in the stator.

Hence in almost all alternators, the armature is housed in the stator while the d.c field system is placed on the rotor.

Ques.98. In which of the following motors do the stator and rotor fields rotate simultaneously?

1. Reluctance motor
2. Universal Motor
3. D.C Motor
4. Synchronous Motor✓

The synchronous motor is a truly constant-speed motor. This is the specialty of this motor, yet it has very limited applications. To develop a steady torque, its rotor must rotate at the synchronous speed, Ns. This is the major defect of synchronous motors.
Either it runs at synchronous speed, or it does not run at all. The stator field rotates at synchronous speed due to the three-phase currents supplied to its windings. In order to develop a continuous unidirectional torque, it is necessary that the stator and rotor poles do not move with respect to each other. This is possible only if the rotor also rotates at the synchronous speed; magnetic locking between the poles is necessary to achieve this.

The concept of Magnetic Locking

• A synchronous motor works on the principle of magnetic locking.
• When two strong unlike magnetic poles are brought together, there exists a tremendous force of attraction between the two poles. In such a condition, the two magnets are said to be magnetically locked.
• If one of the two magnets is now rotated, the other magnet also rotates in the same direction with the same speed, due to the strong force of attraction.
• This phenomenon is called magnetic locking. For the magnetic locking condition, there must be two unlike poles, and the magnetic axes of these two poles must be brought very near to each other.
• Consider a synchronous motor whose stator is wound for 2 poles.
• The stator winding is excited with a 3-phase A.C supply and the rotor winding with a D.C supply. Thus two magnetic fields are produced in the synchronous motor.
• When the 3-phase winding is supplied from the 3-phase A.C supply, a rotating magnetic field (flux) is produced.
• This magnetic field rotates in space at a speed called the synchronous speed.
• When the rotor speed is close to synchronous, the stator magnetic field pulls the rotor into synchronism, i.e. the minimum-reluctance position, and keeps it magnetically locked. The rotor then continues to rotate at a speed equal to the synchronous speed.
• The rotating magnetic field (flux) has a fixed relationship between the number of poles, the frequency of the a.c supply and the speed of rotation.
• The rotating magnetic field creates an effect similar to the physical rotation of magnets in space at synchronous speed.
• So, for the rotating magnetic field,

Ns = 120f/P

Where
f = supply frequency
P = Number of poles

• Suppose the stator poles are N1 and S1, rotating at the speed Ns in the clockwise direction.
• When the field winding on the rotor is excited by the D.C source, it produces two stationary poles, N2 and S2.
• To establish magnetic locking between the stator and rotor poles, the unlike poles N1 and S2, or N2 and S1, should be brought near to each other.
• As the stator poles are rotating, due to the magnetic locking the rotor poles rotate in the same direction as the rotating magnetic field of the stator, with the same speed Ns.
• Hence the synchronous motor rotates at only one speed, the synchronous speed.
• The synchronous speed depends on the frequency; therefore, for a constant supply frequency, the synchronous-motor speed is constant irrespective of load changes.

At zero speed, or at any other speed lower than the synchronous speed, the rotor poles rotate slower than the stator field. Therefore, in one cycle of rotation of the stator field, the N-pole of the rotor is for some time nearer to the N-pole of the stator and for some other time nearer to the S-pole of the stator. As a result, the torque developed is for some time clockwise and for some other time anticlockwise, so the average torque developed remains zero. Hence the synchronous motor runs either at synchronous speed or not at all.

Ques.99. In a 3-phase synchronous motor, if the direction of its field current is reversed

1. The winding of the motor will burn
2. The motor will stop
3. The motor will run in the reverse direction
4. The motor continues to run in the same direction✓

The synchronous motor is a doubly excited machine with a stator and a rotor.
The stator is usually excited by the three-phase supply, and the rotor is excited by a DC supply in order to create a magnetic field. When the three-phase supply is fed to the stator, three-phase currents flow in the stator winding, since it is a closed circuit. These phase currents are separated by 120 degrees from each other. As a result, three-phase fluxes are generated which are equal in magnitude but displaced by 120 degrees from each other. The rotor, however, is excited by a DC supply, which creates poles of opposite polarity due to the positive and negative terminals of the d.c supply. The rotor is brought up to speed by external means, such as a pony motor or induction-motor action, so that it becomes interlocked with the stator field and rotates at the same speed as the stator field, namely the synchronous speed.

Interchanging the polarity while the motor is running

It should be noted that it is very difficult to change the polarity of the rotor suddenly. However, if we do so, the rotor momentarily loses its magnetic interlocking with the stator. Due to its inertia, the rotor keeps rotating at some speed. The stator field is now faster than the rotor; as a result, the rotor field again interlocks with the stator field at some particular point of time during the run. Think of a magnet: it always seeks the opposite polarity, so it gets interlocked at a particular point. Hence the rotor keeps rotating in the same direction as the stator field; it neither changes its direction of rotation nor stops. All this happens within seconds and is practically impossible to observe, because the normal operating speed is around 1500 rpm. Hence the direction of rotation of the synchronous motor is determined by its starting direction, as initiated by induction-motor action.
Thus, to reverse the direction of a 3-phase synchronous motor, it is necessary to first stop the motor and then reverse the phase sequence of the 3-phase connections at the stator. REVERSING THE CURRENT TO THE FIELD WINDING WILL NOT AFFECT THE DIRECTION OF ROTATION.

Ques.100. The maximum power developed in a synchronous motor will depend on (SSC-2018 Set-5)

1. The rotor excitation and supply voltage
2. The rotor excitation, supply voltage and maximum value of coupling angle✓
3. The supply voltage only
4. The rotor excitation only

The maximum power developed in the synchronous motor is given by the expression

${P_{\max }} = \dfrac{{{E_b}V}}{{{Z_s}}}\cos (\theta - \delta ) - \dfrac{{E_b^2}}{{{Z_s}}}\cos \theta$

Where
θ = Internal angle of the synchronous impedance
δ = Coupling (load) angle
V = Terminal voltage
Eb = Back EMF; the back EMF Eb of a synchronous motor depends on the DC excitation only, because the speed is constant.

The power developed thus depends on the excitation, the voltage and the coupling angle. The maximum value of θ, and hence of δ, is 90°. An increase in the excitation results in an increase of Pmax; consequently, the load angle decreases for a given power developed. The overload capacity of the motor increases with an increase in excitation and the machine becomes more stable. For all values of V and Eb this limiting value of δ is the same, and the maximum torque is proportional to the maximum power developed.

If the resistance of the armature is negligible, then Zs ≅ Xs, θ = 90° and cos θ = 0. The power developed is then given by

${P_{\max }} = \dfrac{{{E_b}V}}{{{X_s}}}\sin \delta$

Hence the power developed will be maximum when δ is 90°.

Ques.95. The power factor of a synchronous motor, when the field is under-excited, is

2. Unity
3. Lagging✓
4. Zero

Unlike the induction machine, the synchronous machine can operate at lagging, leading and unity power factors. In the induction machine, a magnetizing current is required to establish flux in the air gap.
This magnetizing current lags the voltage and therefore the induction machine always operates at a lagging power factor. In the synchronous machine, on the other hand, the air-gap flux is produced by the d.c source, and no lagging current needs to be drawn from the a.c system to produce it. If the d.c excitation is decreased, lagging reactive power will be drawn from the a.c source to aid the magnetization, and the machine will operate at a lagging power factor. If the d.c excitation is increased, a leading current is drawn from the a.c source to compensate (oppose) the magnetization, and the machine will operate at a leading power factor. Thus it can be concluded that an over-excited motor (Eb > V) draws leading current (acts like a capacitive load), while an under-excited motor (Eb < V) draws lagging current (acts like an inductive load).

Ques.96. To limit the operating temperature of the synchronous motor, it should have proper

1. Current Rating✓
2. Voltage Rating
3. Power Factor
4. Speed

When a machine has to be designed and constructed, the choice of suitable materials and manufacturing technology becomes important. The following considerations impose limitations on the machine design:

Rated current: The rated current is the maximum permissible current that can be maintained permanently without causing overheating, damage, faults or accelerated ageing. In AC machines, the rated current implies the RMS value of the winding current. Electric currents in the windings of an electrical machine produce losses and develop heat, increasing the temperature of the conductors and insulation. Current in the windings creates Joule losses, which are proportional to the square of the current, and the temperature of the machine increases in proportion to the generated heat. If the current is increased beyond the rated current, the temperature of the synchronous motor will also increase, and this excessive temperature rise may cause insulation failure.
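The square-law dependence of winding (Joule) loss on current described above is easy to check numerically; the resistance and current figures below are assumptions for illustration only:

```python
R_PHASE = 0.04   # per-phase armature resistance in ohms (assumed)
I_RATED = 100.0  # rated RMS phase current in amperes (assumed)

def copper_loss_w(i_rms, r_phase=R_PHASE, phases=3):
    """Total stator Joule (copper) loss in watts: proportional to current squared."""
    return phases * i_rms ** 2 * r_phase

loss_rated = copper_loss_w(I_RATED)        # 3 * 100^2 * 0.04 = 1200 W
loss_120pc = copper_loss_w(1.2 * I_RATED)  # 20 % overcurrent
print(loss_rated, loss_120pc / loss_rated)
```

A 20 % overcurrent raises the copper loss, and hence the heat to be dissipated, by 44 %, which is why the current rating is the quantity that limits operating temperature.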
The life of the machine depends upon the life of the insulation. If the machine is continuously operated above the specified temperature limit, the life of the insulation, and hence the life of the machine, will be reduced. Providing proper ventilation and cooling helps keep the temperature within limits.

Saturation of the Magnetic Circuit: The saturation of the magnetic circuit disturbs the straight-line characteristic of the magnetization (B-H) curve, resulting in an increased excitation requirement and hence a higher cost for the field system.

Insulation: The insulating properties and the strength of the insulating materials are considered on account of breakdown due to excessive voltage gradients set up in the machine.

Mechanical Strength: The machine should have the ability to withstand centrifugal forces and other stresses.

Efficiency: The efficiency of the machine should be high for a low running cost. The specific magnetic and electric loadings should be low to achieve high efficiency. With low values of magnetic and electric loading, however, the size of the machine will be larger and hence the capital cost (initial investment) higher.

Ques.97. A synchronous machine with large air gap has

1. A higher value of stability limit
2. A higher synchronizing power
3. A small value of regulation
4. All options are correct✓

In a synchronous machine the magnetic flux is set up separately by the field winding. The e.m.f induced in the stator armature winding is not induced by mutual induction; it is a dynamically induced e.m.f due to the relative motion between the field and the conductors. The length of the air gap greatly influences the performance of a synchronous machine. A large air gap offers a large reluctance to the path of the flux produced by the armature MMF and thus reduces the effect of armature reaction. This results in a small value of synchronous reactance and a high value of SCR.
A high value of SCR (short-circuit ratio) means that the synchronous reactance has a low value. Synchronous machines with a low value of SCR have greater changes in voltage under fluctuations of load, i.e. the inherent voltage regulation of such a machine is poor. Thus a machine with a large air gap has a high synchronizing power, which makes the machine less sensitive to load variations.

Ques.98. Synchronous motor speed

1. Decreases as the load decreases
2. Increases as the load increases
3. Always remains constant✓
4. None of these

The principle of working of the synchronous motor is magnetic locking. The stator and rotor are separately excited. The rotor catches up with the speed of the revolving field of the stator, gets locked with it, and then rotates at that speed. In the case of a synchronous motor, the speed always remains constant and equal to the synchronous speed, irrespective of the load condition.

Ques.99. The magnitude of field flux in a 3-phase synchronous machine

1. Varies with speed
2. Remains constant at all loads✓
3. Varies with power factor

• In a synchronous motor, the counter e.m.f is proportional to the speed and the field flux.
• Since the speed is constant in a synchronous motor, the field flux is substantially constant within the normal limits of operation.
• If the field excitation is increased, thereby tending to increase the field flux (which varies only slightly), there must be an automatic change in the armature MMF in order to offset the effect of the increased field excitation. The armature current must therefore contain a leading component; hence a leading current in a synchronous motor exerts a demagnetizing effect.
• By the same reasoning, it follows that a weakening of the field excitation tends to draw a lagging current from the source of supply. For any set of operating conditions, there will be some value of field excitation which will cause the power factor to be unity, i.e. the current to be in phase with the terminal voltage.
• When this condition exists while the motor is carrying its rated load, the motor is said to have normal excitation.

Ques.100. In a synchronous motor, the magnitude of back e.m.f depends on (SSC-2018 Set-6)

1. The speed of the motor
2. DC excitation Only✓
4. Both the speed and rotor flux

In the case of the synchronous motor too, once the rotor starts rotating at synchronous speed, the stationary stator (armature) conductors cut the flux produced by the rotor; the only difference is that the conductors are stationary and the flux is rotating. Due to this, an e.m.f. is induced in the stator which, according to Lenz's law, opposes the supply voltage. This induced e.m.f. is called the back e.m.f. It is denoted Ebph, i.e. back e.m.f. per phase. The back e.m.f is alternating in nature and its magnitude can be calculated from the equation

Ebph = 4.44 f Φ Tph

Where
Φ = flux per pole
Tph = number of turns connected in series per phase
f = frequency

As the speed is always synchronous, the frequency is constant, and hence the magnitude of the back e.m.f. can be controlled only by changing the flux Φ produced by the rotor:

Ebph ∝ Φ

So the back e.m.f. of a synchronous motor depends on the excitation given to the field winding and not on the speed, as the speed is always constant.

Ques 11. The speed of the rotor magnetic flux in the rotor body is

1. Synchronous
2. Asynchronous
3. Zero
4. None of these

Explanation: An induction motor cannot run at synchronous speed, because if the rotor were to accelerate to the speed of the rotating magnetic field, there would be no cutting action of the squirrel-cage bars and, therefore, no current flow in the rotor. If there were no current flow in the rotor, there could be no rotor magnetic field and, therefore, no torque.

Ques 73. Which one of the following can be obtained by the equivalent circuit of an electrical machine? (SSC-2017)

1. Temperature rise in the cores
2. Complete performance characteristics of the machine
3. Type of protection used in the machine
4.
Design Parameters of the windings

Answer.2. Complete performance characteristics of the machine

Explanation: The equivalent circuit of an electric machine helps us determine the complete performance of the machine, e.g. efficiency, losses etc.

Ques.26. Which of the following motors is represented by the characteristics curve shown below? (SSC-2016)

1. D.C shunt Motor
2. D.C Compound Motor
3. D.C series Motor
4. Asynchronous Motor✓

The induction motor is also known as an asynchronous motor. Let us assume that the induction motor has been started without any load on it. The motor will come to its no-load speed, which may be at a slip as low as 0.1 percent. At full load, the motor runs at a speed N. When the mechanical load increases, the motor speed decreases until the motor torque again becomes equal to the load torque. As long as the two torques are in balance, the motor will run at a constant (but lower) speed. The motor may be loaded continuously until the pull-out (or breakdown) torque is developed, at which point the motor will stop if more load is placed on it.

Ques 6. A 10-pole, 25 Hz alternator is directly coupled to, and driven by, a 60 Hz synchronous motor. The number of poles of the synchronous motor is

1. 24 poles✓
2. 48 poles
3. 12 Poles
4. None of the above

Number of poles of the alternator, Pa = 10
f = 25 Hz (alternator)
f = 60 Hz (motor)
Number of poles of the motor, Pm = ?

Since the machines are directly coupled, the speed of the alternator = the speed of the motor:

(120 × 25)/10 = (120 × 60)/Pm
Pm = 24

Ques 24. A salient-pole synchronous motor is operating at one-fourth of full load. If the field current is suddenly switched off, it would

1. Run at super-synchronous Speed
2. Stop Running
3. Run at sub-synchronous Speed
4. Continue to run at synchronous speed✓

A salient-pole synchronous motor runs either at synchronous speed or not at all; that is, while running, it maintains a constant speed.
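The worked solution of Ques 6 above can be verified in a few lines, using the synchronous-speed relation Ns = 120f/P and the fact that directly coupled machines share one shaft speed:

```python
def sync_speed_rpm(f_hz, poles):
    """Synchronous speed in rpm: Ns = 120 f / P."""
    return 120 * f_hz / poles

# Alternator: 10 poles at 25 Hz fixes the common shaft speed.
n_shaft = sync_speed_rpm(25, 10)  # 300 rpm

# The directly coupled motor turns at the same shaft speed while fed at 60 Hz,
# so its pole count must satisfy 120 * 60 / P = n_shaft.
p_motor = 120 * 60 / n_shaft
print(n_shaft, p_motor)  # 300.0 24.0
```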
The speed is independent of the load.

Ques 46. An alternator is supplying a load of 300 kW at a power factor of 0.6 lagging. If the power factor is raised to unity, how many more kW can the alternator supply?

1. 300 kW
2. 100 kW
3. 150 kW
4. 200 kW✓

PF = active power/apparent power, so 0.6 = 300/apparent power, giving an apparent power of 500 kVA. At unity power factor, active power = apparent power, so the active power = 500 kW. Hence the alternator can supply an additional 200 kW at unity power factor.

Ques 60. The reactive power generated by a synchronous alternator can be controlled by

1. Changing the alternator speed
2. Changing the field Excitation✓
3. Changing the terminal Voltage
4. Changing the prime mover input

If the alternator is over-excited, it delivers reactive power with lagging current, while if it is under-excited, it absorbs reactive power with leading current.

Ques 81. Which of the following motors is not self-starting?

1. DC series motor
2. Slip ring Induction motor
3. Synchronous motor✓
4. Squirrel cage induction motor

• In a synchronous motor, the stator is supplied with a 3-phase supply and the rotor with a d.c supply.
• When the d.c supply is given to the rotor, alternate poles form on the rotor.
• Because of the three-phase supply, the rotating magnetic field generates a rotating torque at synchronous speed.
• Consider the positive half cycle (positive torque): when the N-pole of the stator and the S-pole of the rotor are in front of each other, they experience a force of attraction, and the tendency of the rotor is to rotate in the anticlockwise direction.
• Now consider the negative half cycle (negative torque): the pole of the stator changes to S. The S-pole of the stator and the S-pole of the rotor then experience a force of repulsion, and the tendency of the rotor is to rotate in the clockwise direction.
• The combined effect over the whole cycle is therefore zero torque. Thus, due to the cancellation of torque during the positive and negative half cycles, the synchronous motor cannot be self-starting.
• For the synchronous motor to be self-starting, damper windings or a mechanical input for starting are used.

Ques 98. The synchronous impedance method of finding the voltage regulation of an alternator is called the pessimistic method because (SSC-2015)

1. It is simplest to perform and compute
2. Armature reaction is wholly magnetizing
3. It gives regulation value lower than its actual found by direct loading
4. It gives the regulation value higher than its actual found by direct loading✓

The regulation calculated by the synchronous impedance method is higher than the actual value found by direct loading; hence this method is called the pessimistic method.

Ques 2. Hydrogen is used in large alternators mainly to

1. Reduce eddy current losses
2. Reduce distortion of waveform
3. Cool the machine✓
4. Strengthen the magnetic field

# Why is hydrogen used for alternator cooling?

Hydrogen is inexpensive and has low weight, low density, low viscosity and high thermal conductivity. The low weight, density and viscosity give a high flow rate, while the high thermal conductivity helps in better heat exchange; being inexpensive, it also keeps costs down (more power for less investment). In order to reduce the high temperature of the alternator, hydrogen gas is used as a coolant. The hydrogen is made to flow in a closed cyclic path around the rotor. Heat exchange takes place and the temperature of the hydrogen gas increases; for better cooling of the rotor in the next cycle it has to be cooled. The hydrogen is cooled by passing it through heat exchangers, generally water-cooled. After cooling, the hydrogen is passed through driers (mainly silica gel, which absorbs moisture) and then passed through the rotor again.

Ques 11. A synchronous motor working at leading power factor can be used as

1. Mechanical Synchronizer
2. Voltage Booster
4.
Noise Generator

• When a synchronous motor operates at a leading power factor, the rotor is over-excited in such a way that the back e.m.f (Eb, generated in the stator due to the d.c excitation of the rotor) is greater than the supply voltage (V).
• The resultant flux is then greater than that required for unity power factor, and this extra flux generates reactive power, so the motor generates additional reactive power while also drawing active power for its mechanical work.
• Therefore a synchronous motor working at a leading PF acts as a synchronous condenser or phase advancer.

Ques 30. For the V-curve of a synchronous motor, the graph is drawn between

1. Armature current and power factor
2. Field current and armature current✓
3. Terminal voltage and load factor
4. Power factor and field current

The curve obtained by plotting the armature current Ia against the field current If at no load is known as the V-curve. Since the shape of these curves is similar to the letter "V", they are called the V-curves of the synchronous motor.

Ques 40. Which of the following conditions is NOT mandatory for alternators working in parallel?

1. The alternators must have the same phase sequence
2. The terminal voltage of each machine must be the same
3. The machines must have equal kVA ratings✓
4. The alternators must operate at the same frequency

• Five conditions must be met before two alternators can run in parallel:
1. Equal line voltage
2. Same frequency
3. Same phase sequence
4. Same phase angle
5. Same waveform
• Equal kVA ratings are not required: we can, for example, use two alternators of 6 MVA and 4 MVA instead of a single 10 MVA alternator, because this is more economical than using a single alternator of the same rating.

Ques 44. The regulation of an alternator supplying a resistive or inductive load is

1. Infinity
2. Always Negative
3. Always Positive✓
4.
Zero

The voltage regulation of an alternator is defined as the change in its terminal voltage when full load is removed (keeping the field excitation and speed constant), expressed as a fraction of the rated terminal voltage:

% Regulation = (Eph − Vph)/Vph × 100

Where
Vph = rated terminal voltage (on load)
Eph = induced e.m.f at no load

An increase in the load current into a purely resistive load causes a decrease in the output voltage. For an inductive load, an increase in the load current causes a greater voltage drop than for the resistive load. Therefore, for inductive and resistive load conditions there is always a drop in the terminal voltage, and hence the regulation values are always positive. In the case of a leading (capacitive) load, the effect of the armature flux on the main field flux is magnetizing, i.e. the armature flux adds to the main field flux. Since it adds, the terminal voltage on load (Vph) will be more than the induced e.m.f at no load (Eph), and hence the regulation is negative.

Ques 64. The positive, negative and zero sequence impedances of a 3-phase synchronous generator are j0.5 pu, j0.3 pu and j0.2 pu respectively. A symmetrical fault occurs at the machine terminals. Find the fault current, given that the generator neutral is grounded through a reactance of j0.1 pu.

1. -j 3.33 pu
2. -j 1.67 pu
3. -j 2.0 pu✓
4. -j 2.5 pu

For a symmetrical (three-phase) fault the currents are balanced, so no current flows through the neutral grounding reactance, and only the positive-sequence impedance enters:

If = E/Z1

Where E = pre-fault voltage = 1 pu and Z1 = j0.5 pu

If = 1/(j0.5) = -j 2.0 pu

Ques 67. A synchronous motor can be used as a synchronous condenser when it is

1. Overexcited✓
2. Under excited

• When the synchronous motor operates at a leading power factor, the rotor is over-excited in such a way that the back e.m.f (Eb, generated in the stator due to the d.c excitation of the rotor) is greater than the supply voltage (V).
• In this case, the synchronous motor operates at a leading power factor.
At this time the resultant flux is greater than that required for unity power factor; this extra flux generates reactive power, so the motor generates reactive power while also using active power for its mechanical work.

• Therefore a synchronous motor working at a leading PF acts as a synchronous condenser.

Ques 68. Which of the following methods gives a higher than actual value of the regulation of an alternator?

1. ZPF method
2. MMF method
3. EMF method✓
4. ASA method

Compared with the other methods, the value of voltage regulation obtained by the synchronous impedance method (EMF method) is always higher than the actual value, and therefore this method is called the pessimistic method.

Ques 69. If the excitation of an alternator operating in parallel with other alternators is increased above the normal value of excitation, its

1. Power factor becomes more lagging✓
2. Power factor becomes more leading
3. Output current decreases
4. Output kW decreases

If the excitation of an alternator operating in parallel with other alternators is changed, the power factor changes.

• If the excitation of the alternator is decreased below normal excitation, the reactive power changes while the active power output (W or kW) of the alternator remains unchanged.
• The under-excited alternator delivers leading current to the infinite bus bar, because the leading current produces an aiding m.m.f to make up for the under-excitation.
• Similarly, an over-excited alternator operates at a lagging power factor and supplies lagging reactive power to the infinite bus bar.

Ques 70. In an alternator, the effect of armature reaction is minimum at the power factor of

1. 0.5 Lagging
2. 0.866 Lagging
https://www.physicsforums.com/threads/isospin-how-serious-must-i-take-it-superposition-of-proton-and-neutron.592246/
# Isospin: how serious must I take it? Superposition of proton and neutron? 1. Mar 31, 2012 ### nonequilibrium Hello, So I'm reading about isospin in Griffith's Introduction to Elementary Particles, but the concept seems rather fishy, and I'm not quite sure what to make out of it. For example, if p and n (proton and neutron) are seen as different states of the same system, then what does $\frac{1}{\sqrt{2}} \left( p + n \right)$ possibly mean? I suppose that expression makes sense if p and n really are different states of the same system, but not if they are kind of similar. Being the same or not is not really a continuous scale. So how serious should I take things like $\frac{1}{\sqrt{2}} \left( p + n \right)$? And what does it mean to you? 2. Apr 1, 2012 ### naima p and n differ because one of their quarks is up while it is down for the other. Have you the same question for the spin of an electron which can be up or down? 3. Apr 1, 2012 ### nonequilibrium So the "upness" and the "downness" of a quark are analogous to the "upness" and the "downess" of the spin of an electron? This was not communicated to me. Do you have a reference for that? 4. Apr 1, 2012 ### naima 5. Apr 1, 2012 Staff Emeritus Did Griffiths really write that? I'm surprised, because it's not in an eigenstate of T3, which means it doesn't commute with the Hamiltonian. 6. Apr 1, 2012 ### nonequilibrium @ naima: I have now, but the confusion remains, it has merely shifted to the quark level: if u and d quarks can (approximately) be seen as two states of one system, then an expression like $\frac{1}{\sqrt{2}}(u + d)$ should make sense. Does it? And if so, what does it mean? @ Vanadium: I'm not following what you're saying. Are you asking me whether Griffiths wrote down the expression} $\frac{1}{\sqrt{2}}(p+n)$? If so: no he did not. 
But he did say that p and n can approximately be seen as two states of one system, so that one can define p as isospin up, also written down as $\left( \begin{array}{c} 1 \\ 0 \end{array} \right)$; analogously for the neutron. But if so, an expression like $\frac{1}{\sqrt{2}}(p+n)$ should make sense. Same question as above: does it? And if so, what does it signify? 7. Apr 1, 2012 ### francesco85 Hello! My opinion is the following: I think that there are two kinds of considerations that one could make (I will define the proton and the neutron as the so-called mass eigenstates): 1- let us take QCD (without taking into account the electroweak symmetry): in the particular limit in which the "vector isospin" symmetry is exact, the proton and the neutron are indistinguishable (as well as every other combination): it is just a matter of definition which state you call the proton and which the neutron. 2- in the limit in which the "vector isospin" symmetry is broken (e.g. by different quark masses, electroweak corrections and possibly other effects), the proton and the neutron are physically different (for example they have different masses); in this case one can of course make all the linear combinations that can be made but, in my opinion, they do not correspond to physically observable states: they are not mass eigenstates. 
Then I think that the only way to give meaning to $\frac{1}{\sqrt{2}}(p+n)$ is in QCD with the exact "vector isospin" symmetry; if the world were described by such a theory, I think that the way to identify $p$, $n$ and every possible linear combination is just by experiments: one defines $p$ and $n$ experimentally by giving a prescription for preparing these states (and, in turn, definite results for the experiments, roughly speaking); then every linear combination can be seen through the result of the experiment (I actually don't know whether, given two experimental prescriptions for preparing two states, it is possible to give a prescription for preparing a linear combination..) A question to Vanadium50: I don't understand your comment: why is T3 important? T1 and T2 commute with the hamiltonian (I suppose you are talking about the QCD hamiltonian): the combination $\frac{1}{\sqrt{2}}(p+n)$ is an eigenstate of T1 (perhaps apart from a sign); why isn't it allowed, in your opinion? 8. Apr 1, 2012 Staff Emeritus You can't pick and choose which parts of an idea like isospin to accept. There are two elements: the total isospin of the system and the third component, T3. You can write down whatever combination you like, but only states of definite T3 (unlike p+n) commute with the Hamiltonian and thus are realized in nature. 9. Apr 1, 2012 ### francesco85 Why T3 and not T1? Suppose what you have said is true: then I build a theory in which I label my states not with eigenstates of T3 but as eigenstates of T1; of course the two ways of labelling the "base vectors" are possible; there is also a relationship between the two sets of states: in the case in which we label the states with some quantum numbers + T3 the "base vectors" are something like |qn,t3=±1/2>, where qn are some other quantum numbers (energy, momentum, etc.) 
necessary to identify the state, while in the case in which we label the states with some quantum numbers+T1 the "base vectors" are something like |qn,t1=±1/2>; moreover the relation between the two sets of states is something like |qn,t1=±1/2>=(|qn,t3=+1/2>±|qn,t3=-1/2>)(1/sqrt(2)); and both these sets are eigenstates of the hamiltonian. So, if I choose T1 in order to label the states, why does nature realize only the T3 eigenstates? In my opinion, what you have said is false, in pure QCD. ps (edit): states that commute with the hamiltonian? What does that mean? Last edited: Apr 1, 2012 10. Apr 1, 2012 ### naima I would answer this point. Isospin is conserved when you have only the strong interaction (you unplug the electromagnetic force). You can use it to compute cross sections. Look at (1) p + n -> d + pi0 and (2) p + p -> d + pi+ The system p + p is in a pure state with I = 1, while the system p + n is in a statistical superposition (with equal weight) of I = 1 and I = 0. So half of the mixture may interact to keep I equal to 1. The experiment gives a partial cross section of 3.15 mb for (2) and 1.5 mb for (1). You can see that the ratio is close to 2, as it would be if the symmetry were exact. 11. Apr 1, 2012 Staff Emeritus Because T3 commutes with Q, and Q commutes with H, so T3 commutes with H. And we don't live in a pure QCD universe. Why are you needlessly complicating this? 12. Apr 1, 2012 ### francesco85 You didn't answer my question: also T1 commutes with H (what is Q?), otherwise the hamiltonian would not be symmetric under the vector isospin symmetry. T3 or T1 are just ways to label the states; it has nothing to do with physics or with what nature should do. So you are saying that even if we don't live in a pure QCD universe, T3 is a symmetry of nature (and the masses of the quarks?) and moreover states are classified according to T3 (edit: not T1 or T2) and nature acts in such a way that we observe only such states? 
I recall my point of view from my first post in this thread: in the case in which we take into account the "real and complete" theory, the isospin symmetry is broken and so we cannot classify the states according to isospin. So the neutron and the proton are different particles. The fact that they have nearly equal masses, etc. is a signal that the isospin symmetry is almost exact. In this case we don't observe p+n and other combinations simply because they are not mass eigenstates of the theory. Nothing to do, in my opinion, with T3, T2 or T1. What I have remarked in the post you quoted is valid only in pure QCD, as I have written: it is only in that case that it's "meaningful" to speak of isospin, in my opinion. Best, Francesco Last edited: Apr 1, 2012 13. Apr 1, 2012 Staff Emeritus Q is charge. Your postings are adding a lot of confusion to the mix. The OP's initial problem has a well-defined answer: you need to consider total isospin and its third component together for the idea to be useful. Yes, you can also ask what isospin looks like in a universe without electromagnetism. But a) that's not the world we live in, and b) that's not what the OP asked. 14. Apr 1, 2012 ### nonequilibrium Hm, a lot of replies (for which my thanks), but let me for the moment focus on the one that caught my attention: So only eigenstates of T3 are realized in nature? Why is this (focussing on the word "eigenstates", not so much "T3")? I see two options: 1) By "realized in nature" you mean "being a result of a measurement", in which case I agree, but that doesn't mean the p+n state is not allowed as a state of the system, so my question remains unanswered; 2) The concept of isospin is a more formal notion than for example spin, and actually only eigenstates are defined, unlike for example the concept of spin, where "spin up + spin down" states make sense. I'm hoping for (2). 15. 
Apr 1, 2012 ### francesco85 First, please read what one asks; second, answer the questions posed without repeating ad libitum the same thing you have written in your first post; third, you should be more precise, in my opinion. Q is the charge of what symmetry? T3? The total isospin? T1? Something else? Then I recall some questions you didn't answer. QCD case (in the exact isospin limit) - You have made a very precise statement: some states are not realized in nature because they are not T3 eigenstates: why T3 and not T1? Doesn't T1 commute with the QCD hamiltonian, in the limit considered? Why is T3 important and not T1? How does the real world described by that theory know something about T3, T1 or group theory? (See my second post for the example) Real world (isospin broken) - Is it meaningful to classify the states according to isospin in a theory which is not invariant under isospin? I repeat that my answer to the question (be it right or wrong) is the following: the state p+n is not an eigenstate of the hamiltonian "of the real world" and so it cannot be prepared (for example through a scattering experiment, where only mass eigenstates can be prepared); is this right or wrong? Let's discuss; tell me your opinion: is there a right/wrong part in the sentence I have made? In such a case, which parts are right and which are wrong? (EDIT: in the case in which the original question is posed in the framework of pure QCD, I have given my interpretation in my first post in this thread; the same questions can arise: is it right/wrong? In this case, why?) This is why I have asked you why T3 is so important. This is why I have asked the other question. Best, Francesco Last edited: Apr 1, 2012 16. Apr 1, 2012 ### tom.stoer I think the resolution is not isospin but electric charge. Up and down (or the proton and the neutron) have different electric charges, and no superpositions of states with different charge are known. 17. 
Apr 1, 2012 ### nonequilibrium tom.stoer: I'm not sure if I get what you're aiming at. Are you using charge as an argument for why the p+n state is not possible? However, that's not really an argument, but more of a restating of the fact that we do not see a p+n state. But more likely I misinterpreted the aim of your post, so I'd appreciate any clarification. As to francesco, wouldn't your argument "prove" that there can only be energy eigenstates in nature? Yet this is of course not true. 18. Apr 1, 2012 ### tom.stoer Yes, that's my intention. No, one can go further; I think it has something to do with electric charge as a conserved quantity due to a continuous symmetry and the induced superselection sectors. But I am not sure about that. 19. Apr 1, 2012 Staff Emeritus T3 is electric charge. (Up to a constant) The reason this is useful is that everything one learned about spin angular momentum l in eigenstates of m can be applied to isospin, where you have isospin T in eigenstates of T3. Or charge. (If you're in an eigenstate of one, you're not in an eigenstate of another) As an example, this allows you to calculate the branching fraction rho -> pi0 + pi0. In isospin space, {T,T3} is {1,0} = {1,0} + {1,0}. For spin, the C-G coefficient for this is zero, and so it must be for isospin. So this decay is forbidden. Voila! 20. Apr 1, 2012 ### francesco85 Q T3 is not the electric charge up to a constant: $T_3|p>=\frac{1}{2}|p>$ $T_3|n>=-\frac{1}{2}|n>$ (they are eigenstates of T3 just because we "label" our states with T3). Moreover, what does the example you have given mean? That in processes mediated by QCD in the limit in which isospin is conserved, isospin is really conserved ? To mr. 
vodka: sorry for having been unclear, I will try to explain my thoughts better here: as far as I know, in scattering processes one usually takes mass eigenstates as incident and outgoing free particles; with mass eigenstates I mean eigenstates of the operator $P^2$ (à la Weinberg, as far as I have understood) (sorry for my confusing notation; I often use energy and mass eigenstates as synonymous, since usually one takes as "base states" the states parametrized by $P^{\mu}$); for example, if we do not take mass eigenstates as the final product, how can we define the phase space? As far as I know, I think it's not possible. Notice that I am talking about scattering processes (it's the only way in which I know particles can be produced). If you ask how the state p+n can be seen, or if it can be produced by other methods, or simply how it looks, actually I don't know how to answer this question, since this is not a mass eigenstate and I don't know how to produce this kind of particle (I'm sorry for that :). If, however, we restrict to scattering theory, in which at the end of a process we would like to see only definite particles (in the sense meant by Weinberg: "mass, energy, momentum, spin and spin along z" eigenstates, if I remember correctly), I am quite sure that we ought to see only mass eigenstates. What I feel quite safe in saying is the following: in the limit in which we consider only QCD with exact isospin, I think that labelling the states is just a matter of convention (at least in scattering processes): these are the same kind of thoughts I have exposed in the first post I sent in this thread. 
I basically agree with tom.stoer (maybe with a slightly different motivation, if I have understood what he has said), in the sense that electromagnetism can be one of the keys: even if I don't take into account the superselection rules (which I don't know and so don't want to talk about), what is sure is that the proton and the neutron have different charges and so they interact differently with the photon (so this lets us distinguish the neutron from the proton, a measurable thing); moreover, electromagnetism is (wholly or partly) responsible for the neutron and the proton having different masses, which makes them distinguishable. Of course, also in this case, I am just talking about the production of free particles in a scattering process. Maybe there is something deeper in seeing that these states are actually electric charge eigenstates, but actually I cannot see what; maybe this can be a point of contact with what tom said (still under the hypothesis that I have understood what he has said); I hope this discussion will clarify my point of view. Best, Francesco Last edited: Apr 1, 2012
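The two isospin applications mentioned in the thread — naima's cross-section ratio in post 10 and the forbidden $\rho \to \pi^0 \pi^0$ decay in post 19 — can both be checked with Clebsch–Gordan coefficients. A sketch assuming exact isospin, with the deuteron taken as a pure $I=0$ state and the pion as $I=1$:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

h = S(1) / 2   # isospin 1/2

# naima's ratio: d has I=0 and pi has I=1, so d+pi is pure I=1.
# |p n> overlaps |I=1, I3=0> with amplitude <1/2,1/2; 1/2,-1/2 | 1,0>:
a_pn = CG(h, h, h, -h, 1, 0).doit()     # sqrt(2)/2
# |p p> is already the pure state |I=1, I3=1>:
a_pp = CG(h, h, h, h, 1, 1).doit()      # 1

# Ratio of squared I=1 amplitudes, i.e. sigma(2)/sigma(1):
print(a_pp**2 / a_pn**2)                # 2

# Vanadium 50's example: rho -> pi0 pi0 requires <1,0; 1,0 | 1,0>,
# which vanishes, so the decay is forbidden:
print(CG(1, 0, 1, 0, 1, 0).doit())      # 0
```

The factor of 2 matches the measured 3.15 mb / 1.5 mb quoted in post 10.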
https://forum.azimuthproject.org/plugin/ViewComment/16753
I think it’s {70 °F}. That’s what I have my thermostat set to. I know that when my cat is in the room then, ceteris paribus, that’s the temperature. On the other hand, $f_\ast(\{\text{my liquid helium container has been breached}\}) = \{-452.2\ \text{°F}\}$. But I argue: $$f_\ast(\{\text{there’s a living cat in my room, my liquid helium container has been breached}\}) = \varnothing$$ This is because these are distinct states. My cat has a lot more sense than his owner and knows better than to stick around after he has knocked over the liquid helium. So there is no *one* temperature that maps to all of these states. I think other people will have different answers. Some people have their thermostats set to 72 °F, for instance, while others keep their homes and liquid helium under more pressure.
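One way to model the $f_\ast$ used here (my reading of it, and the state labels are made up): send a set of states to the temperatures compatible with *every* state in the set, so that mutually incompatible states map to the empty set:

```python
# Hypothetical model: each state has a set of compatible temperatures (deg F)
compatible = {
    "cat in room": {70},
    "helium breached": {-452.2},
}

def f_star(states):
    """Temperatures compatible with ALL the given states (set intersection)."""
    out = None
    for s in states:
        out = compatible[s] if out is None else out & compatible[s]
    return out if out is not None else set()

print(f_star({"cat in room"}))                      # {70}
print(f_star({"cat in room", "helium breached"}))   # set()
```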
https://www.lessonplanet.com/teachers/relative-aging-study-guide
# Relative Aging Study Guide In this relative aging learning exercise, students define relative time and absolute time, and state the laws and "rules" used in studying the ages of fossils and rocks.
https://en.wikipedia.org/wiki/M%C3%B6bius_energy
# Möbius energy In mathematics, the Möbius energy of a knot is a particular knot energy, i.e. a functional on the space of knots. It was discovered by Jun O'Hara, who demonstrated that the energy blows up as the knot's strands get close to one another. This is a useful property because it prevents self-intersection and ensures that the result under gradient descent is of the same knot type. Invariance of the Möbius energy under Möbius transformations was demonstrated by Freedman, He, and Wang (1994), who used it to show the existence of a ${\displaystyle C^{1,1}}$ energy minimizer in each isotopy class of a prime knot. They also showed that the minimum energy of any knot conformation is achieved by a round circle. Conjecturally, there is no energy minimizer for composite knots. Kusner and Sullivan have done computer experiments with a discretized version of the Möbius energy and concluded that there should be no energy minimizer for the knot sum of two trefoils (although this is not a proof). Recall that the Möbius transformations of the 3-sphere ${\displaystyle S^{3}={\mathbf {R}}^{3}\cup \infty }$ are the ten-dimensional group of angle-preserving diffeomorphisms generated by inversion in 2-spheres. For example, the inversion in the sphere ${\displaystyle \{{\mathbf {v}}\in {\mathbf {R}}^{3}\colon |{\mathbf {v}}-{\mathbf {a}}|=\rho \}}$ is defined by ${\displaystyle {\mathbf {x}}\to {\mathbf {a}}+{\rho ^{2} \over |{\mathbf {x}}-{\mathbf {a}}|^{2}}\cdot ({\mathbf {x}}-{\mathbf {a}}).}$ Consider a rectifiable simple curve ${\displaystyle \gamma (u)}$ in the Euclidean 3-space ${\displaystyle {\mathbf {R}}^{3}}$, where ${\displaystyle u}$ belongs to ${\displaystyle {\mathbf {R}}^{1}}$ or ${\displaystyle S^{1}}$. 
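The inversion formula above is easy to sanity-check numerically. A minimal sketch (center, radius, and test point chosen arbitrarily) confirming that inversion is an involution and fixes the sphere pointwise:

```python
import numpy as np

def invert(x, a, rho):
    """Inversion of x in the sphere {v : |v - a| = rho}."""
    d = x - a
    return a + (rho**2 / np.dot(d, d)) * d

a = np.array([1.0, -2.0, 0.5])   # sphere center (arbitrary)
rho = 1.5                        # sphere radius (arbitrary)
x = np.array([0.3, 0.7, -1.1])   # a test point

# Applying the inversion twice returns the original point:
print(np.allclose(invert(invert(x, a, rho), a, rho), x))   # True

# A point on the sphere itself is fixed:
y = a + rho * np.array([0.0, 1.0, 0.0])
print(np.allclose(invert(y, a, rho), y))                   # True
```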
Define its energy by ${\displaystyle E(\gamma )=\iint \left\{{\frac {1}{|\gamma (u)-\gamma (v)|^{2}}}-{\frac {1}{D(\gamma (u),\gamma (v))^{2}}}\right\}|{\dot {\gamma }}(u)||{\dot {\gamma }}(v)|\,du\,dv,}$ where ${\displaystyle D(\gamma (u),\gamma (v))}$ is the shortest arc distance between ${\displaystyle \gamma (u)}$ and ${\displaystyle \gamma (v)}$ on the curve. The second term of the integrand is called a regularization. It is easy to see that ${\displaystyle E(\gamma )}$ is independent of parametrization and is unchanged if ${\displaystyle \gamma }$ is changed by a similarity of ${\displaystyle {\mathbf {R}}^{3}}$. Moreover, the energy of any line is 0 and the energy of any circle is ${\displaystyle 4}$. In fact, let us use the arc-length parameterization. Denote by ${\displaystyle \ell }$ the length of the curve ${\displaystyle \gamma }$. Then ${\displaystyle E(\gamma )=\int _{-\ell /2}^{\ell /2}{}dx\int _{x-\ell /2}^{x+\ell /2}\left[{1 \over |\gamma (x)-\gamma (y)|^{2}}-{1 \over |x-y|^{2}}\right]dy.}$ Let ${\displaystyle \gamma _{0}(t)=(\cos t,\sin t,0)}$ denote a unit circle. We have ${\displaystyle |\gamma _{0}(x)-\gamma _{0}(y)|^{2}={\left(2\sin {\tfrac {1}{2}}(x-y)\right)^{2}}}$ and consequently, {\displaystyle {\begin{aligned}E(\gamma _{0})&=\int _{-\pi }^{\pi }{}dx\int _{x-\pi }^{x+\pi }\left[{1 \over \left(2\sin {\tfrac {1}{2}}(x-y)\right)^{2}}-{1 \over |x-y|^{2}}\right]dy\\&=4\pi \int _{0}^{\pi }\left[{1 \over \left(2\sin(y/2)\right)^{2}}-{1 \over |y|^{2}}\right]dy\\&=2\pi \int _{0}^{\pi /2}\left[{1 \over \sin ^{2}y}-{1 \over |y|^{2}}\right]dy\\&=2\pi \left[{1 \over u}-\cot u\right]_{u=0}^{\pi /2}=4\end{aligned}}} since ${\displaystyle {\frac {1}{u}}-\cot u={\frac {u}{3}}+\cdots }$. ## Knot invariant *(Figure: on the left, the unknot, and a knot equivalent to it. It can be more difficult to determine whether complex knots, such as the one on the right, are equivalent to the unknot.)* (copied content from Knot equivalence; see that page's history for attribution.) 
A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop (Adams 2004; Sossinsky 2002). Mathematically, we can say a knot ${\displaystyle K}$ is an injective and continuous function ${\displaystyle K\colon [0,1]\to \mathbb {R} ^{3}}$ with ${\displaystyle K(0)=K(1)}$. Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot. The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A mathematical definition is that two knots ${\displaystyle K_{1},K_{2}}$ are equivalent if there is an orientation-preserving homeomorphism ${\displaystyle h\colon \mathbb {R} ^{3}\to \mathbb {R} ^{3}}$ with ${\displaystyle h(K_{1})=K_{2}}$, and this is known as an ambient isotopy. The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s (Hass 1998). Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is (Hass 1998). The special case of recognizing the unknot, called the unknotting problem, is of particular interest (Hoste 2005). We shall picture a knot by a smooth curve rather than by a polygon. A knot will be represented by a planar diagram. The singularities of the planar diagram will be called crossing points, and the regions into which it subdivides the plane, regions of the diagram. At each crossing point, two of the four corners will be dotted to indicate which branch through the crossing point is to be thought of as passing under the other. 
We number any one region at random, but shall fix the numbers of all remaining regions such that whenever we cross the curve from right to left we must pass from the region numbered ${\displaystyle k}$ to the region numbered ${\displaystyle k+1}$. Clearly, at any crossing point ${\displaystyle c}$, there are two opposite corners of the same number ${\displaystyle k}$ and two opposite corners of the numbers ${\displaystyle k-1}$ and ${\displaystyle k+1}$, respectively. The number ${\displaystyle k}$ is referred to as the index of ${\displaystyle c}$. The crossing points are distinguished as two types, right handed and left handed, according to which branch through the point passes under or behind the other. At any crossing point of index ${\displaystyle k}$, the two dotted corners have numbers ${\displaystyle k}$ and ${\displaystyle k+1}$, respectively, and the two undotted ones numbers ${\displaystyle k-1}$ and ${\displaystyle k+1}$. The index of any corner of any region of index ${\displaystyle k}$ is one element of ${\displaystyle \{k\pm 1,k\}}$. We wish to distinguish one type of knot from another by knot invariants. There is one invariant which is quite simple. It is the Alexander polynomial ${\displaystyle \Delta _{K}(t)}$ with integer coefficients. The Alexander polynomial is symmetric with degree ${\displaystyle n}$: ${\displaystyle \Delta _{K}(t^{-1})t^{n-1}=\Delta _{K}(t)}$ for all knots ${\displaystyle K}$ of ${\displaystyle n>0}$ crossing points. For example, the invariant ${\displaystyle \Delta _{K}(t)}$ of an unknotted curve is 1, and that of a trefoil knot is ${\displaystyle t^{2}-t+1}$. Let ${\displaystyle \omega ({\boldsymbol {x}})={\frac {1}{4\pi }}\varepsilon _{ijk}{x^{i}dx^{j}\wedge dx^{k} \over |{\boldsymbol {x}}|^{3}}}$ denote the standard surface element of ${\displaystyle S^{2}}$. 
We have ${\displaystyle \mathrm {link} (\gamma _{1},\gamma _{2})=\int _{{\boldsymbol {x}}\in \gamma _{1},{\boldsymbol {y}}\in \gamma _{2}}\omega ({\boldsymbol {x}}-{\boldsymbol {y}})}$ ${\displaystyle \int _{S^{2}}\omega ({\boldsymbol {x}})={\frac {1}{4\pi }}\int _{S^{2}}\varepsilon _{ijk}x^{i}dx^{j}dx^{k}=1,\qquad \omega (\lambda {\boldsymbol {x}})=\omega ({\boldsymbol {x}}){\rm {sign}}\lambda ,\quad {\rm {for}}\quad \lambda \in \mathbb {R} ^{*}.}$ For the knot ${\displaystyle \gamma :[0,1]\rightarrow {\mathbb {R}}^{3}}$, ${\displaystyle \gamma (0)=\gamma (1)}$, the corresponding sum of integrals ${\displaystyle \int _{t_{1}<t_{2}}\cdots +\int _{t_{1}<t_{2}}\cdots }$ does not change if we change the knot ${\displaystyle \gamma }$ within its equivalence class. ## The representations of Temperley-Lieb-Jones algebras We shall be interested in the von Neumann algebras ${\displaystyle A_{m-1}}$ generated by 1 and projectors ${\displaystyle e_{1},e_{2},\cdots e_{m-1}}$ which obey: ${\displaystyle {\begin{array}{l}(i)\quad \quad \quad ~~e_{i}^{2}=e_{i}=e_{i}^{\ast }\\(ii)\quad \quad \quad ~e_{i}\,e_{i\pm 1}\,e_{i}=\tau e_{i}\\(iii)\quad \quad \quad e_{i}e_{j}=e_{j}e_{i}\quad \quad \quad \vert i-j\vert \geq 2.\end{array}}\qquad \qquad (2.1)}$ These algebras in addition admit a trace, denoted by "tr", which is defined over ${\displaystyle {\bigcup }_{m=1}^{\infty }~A_{m-1}}$ and determined by the normalization tr (1) = 1 and the following conditions: ${\displaystyle {\begin{array}{l}(iv)\quad \quad \quad {\rm {tr}}(xy)={\rm {tr}}(yx),\quad \quad x,y\in A_{m-1}\\(v)\quad \quad \quad {\rm {tr}}(xe_{m})=\tau {\rm {tr}}(x),\quad \quad x\in A_{m-1}\\(vi)\quad \quad \quad {\rm {tr}}(x^{\ast }x)>0,\quad \quad x\in A_{m-1}\setminus \{0\}.\end{array}}\qquad \qquad (2.1')}$ Jones has obtained a representation of this algebra and a corresponding representation of the braid group. 
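As a toy illustration (this hand-picked two-dimensional pair of projectors is not Jones's tower construction), relations (i) and (ii) can be verified symbolically for a fixed value of ${\displaystyle \tau }$:

```python
from sympy import Matrix, Rational, simplify, sqrt, zeros

tau = Rational(1, 3)               # any 0 < tau < 1 works for this sketch
a = sqrt(tau * (1 - tau))

# A toy 2-dimensional pair of projectors satisfying (i) and (ii):
e1 = Matrix([[1, 0], [0, 0]])
e2 = Matrix([[tau, a], [a, 1 - tau]])

# (i)  e_i^2 = e_i = e_i^* (both matrices are real symmetric idempotents):
print(simplify(e1 * e1 - e1) == zeros(2, 2))              # True
print(simplify(e2 * e2 - e2) == zeros(2, 2))              # True
# (ii) e_i e_{i+-1} e_i = tau e_i:
print(simplify(e1 * e2 * e1 - tau * e1) == zeros(2, 2))   # True
print(simplify(e2 * e1 * e2 - tau * e2) == zeros(2, 2))   # True
```

Relation (iii) is vacuous for only two generators; this sketch only shows that the relations are consistent, not anything about the trace or the full tower of algebras.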
The braid group Bn can be abstractly defined via the following presentation: ${\displaystyle B_{n}=\left\langle \sigma _{1},\ldots ,\sigma _{n-1}|\quad \sigma _{i}\sigma _{i+1}\sigma _{i}=\sigma _{i+1}\sigma _{i}\sigma _{i+1},\quad \sigma _{i}\sigma _{j}=\sigma _{j}\sigma _{i}\right\rangle ,}$ where in the first group of relations 1 ≤ i ≤ n−2 and in the second group of relations, |i − j| ≥ 2. This presentation leads to generalisations of braid groups called Artin groups. The cubic relations, known as the braid relations, play an important role in the theory of the Yang–Baxter equation. Möbius Invariance Property. Let ${\displaystyle \gamma }$ be a closed curve in ${\displaystyle {\mathbf {R}}^{3}}$ and ${\displaystyle T}$ a Möbius transformation of ${\displaystyle S^{3}={\mathbf {R}}^{3}\cup \infty }$. If ${\displaystyle T(\gamma )}$ is contained in ${\displaystyle {\mathbf {R}}^{3}}$ then ${\displaystyle E(T(\gamma ))=E(\gamma )}$. If ${\displaystyle T(\gamma )}$ passes through ${\displaystyle \infty }$ then ${\displaystyle E(T(\gamma ))=E(\gamma )-4}$. Theorem A. Among all rectifiable loops ${\displaystyle \gamma \colon \ S^{1}\to {\mathbf {R}}^{3}}$, round circles have the least energy ${\displaystyle E(\mathrm {round\ circle} )=4}$ and any ${\displaystyle \gamma }$ of least energy parameterizes a round circle. Proof of Theorem A. Let ${\displaystyle T}$ be a Möbius transformation sending a point of ${\displaystyle \gamma }$ to infinity. The energy ${\displaystyle E(T(\gamma ))\geq 0}$ with equality holding iff ${\displaystyle T(\gamma )}$ is a straight line. Applying the Möbius invariance property, we complete the proof. Proof of Möbius Invariance Property. It is sufficient to consider how ${\displaystyle I}$, an inversion in a sphere, transforms energy. Let ${\displaystyle u}$ be the arc length parameter of a rectifiable closed curve ${\displaystyle \gamma }$, ${\displaystyle u\in {\mathbf {R}}/\ell {\mathbf {Z}}}$. 
Let ${\displaystyle E_{\varepsilon }(\gamma )=\iint _{|u-v|\geq \varepsilon }\left({\frac {1}{|\gamma (u)-\gamma (v)|^{2}}}-{\frac {1}{D(\gamma (u),\gamma (v))^{2}}}\right)\,du\,dv\qquad \qquad \qquad \qquad (1)}$ and {\displaystyle {\begin{aligned}E_{\varepsilon }(I\circ \gamma )=&\iint _{|u-v|\geq \varepsilon }\left({\frac {1}{|I\circ \gamma (u)-I\circ \gamma (v)|^{2}}}-{\frac {1}{(D(I\circ \gamma (u),I\circ \gamma (v)))^{2}}}\right)\\&\qquad \times \|I'(\gamma (u))\|\cdot \|I'(\gamma (v))\|\,du\,dv.\end{aligned}}\qquad \qquad \qquad \qquad (2)} Clearly, ${\displaystyle E(\gamma )=\lim _{\varepsilon \to 0}E_{\varepsilon }(\gamma )}$ and ${\displaystyle E(I\circ \gamma )=\lim _{\varepsilon \to 0}E_{\varepsilon }(I\circ \gamma )}$. It is a short calculation (using the law of cosines) that the first terms transform correctly, i.e., ${\displaystyle {\frac {\|I'(\gamma (u))\|\cdot \|I'(\gamma (v))\|}{|I(\gamma (u))-I(\gamma (v))|^{2}}}={\frac {1}{|\gamma (u)-\gamma (v)|^{2}}}.}$ Since ${\displaystyle u}$ is arclength for ${\displaystyle \gamma }$, the regularization term of (1) contributes the elementary integral ${\displaystyle -\int _{u=0}^{\ell }\left[2\int _{v=\varepsilon }^{\ell /2}{\frac {1}{v^{2}}}\,dv\right]\,du=4-{\frac {2\ell }{\varepsilon }}.\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (3)}$ Let ${\displaystyle s}$ be an arclength parameter for ${\displaystyle I\circ \gamma }$. Then ${\displaystyle ds(u)/du=\|I'(\gamma (u))\|}$ where ${\displaystyle \|I'(\gamma (u))\|=f(u)}$ denotes the linear expansion factor of ${\displaystyle I'}$. Since ${\displaystyle \gamma (u)}$ is a Lipschitz function and ${\displaystyle I'}$ is smooth, ${\displaystyle f(u)}$ is Lipschitz, hence, it has weak derivative ${\displaystyle f'(u)\in L^{\infty }}$. 
{\displaystyle {\begin{aligned}{\rm {{regularization}(2)=}}&\int _{u\in {\mathbf {R}}/\ell {\mathbf {Z}}}\left[\int _{|v-u|\geq \varepsilon }{\frac {|(I\circ \gamma )'(v)|\,dv}{D(I\circ \gamma (u),I\circ \gamma (v))^{2}}}\right]|(I\circ \gamma )'(u)|\,du\\=&\int _{{\mathbf {R}}/\ell {\mathbf {Z}}}\left[{\frac {4}{L}}-{\frac {1}{\varepsilon _{+}}}-{\frac {1}{\varepsilon _{-}}}\right]\,ds,\end{aligned}}\qquad \qquad (4)} where ${\displaystyle L={\rm {{Length}(I(\gamma ))}}}$ and {\displaystyle {\begin{aligned}\varepsilon _{+}&=\varepsilon _{+}(u)=D((I\circ \gamma )(u),(I\circ \gamma )(u+\varepsilon ))=s(u+\varepsilon )-s(u)\\&=\int _{u}^{u+\varepsilon }f(w)\,dw=f(u)\varepsilon +\varepsilon ^{2}\int _{0}^{1}(1-t)f'(u+\varepsilon t)\,dt\end{aligned}}} and ${\displaystyle \varepsilon _{-}=\varepsilon _{-}(u)=D((I\circ \gamma )(u-\varepsilon ),(I\circ \gamma )(u))=f(u)\varepsilon -\varepsilon ^{2}\int _{0}^{1}(1-t)f'(u-\varepsilon t)\,dt.}$ Since ${\displaystyle |f'(w)|}$ is uniformly bounded, we have {\displaystyle {\begin{aligned}{\frac {1}{\varepsilon _{+}}}=&{\frac {1}{f(u)\varepsilon }}\left[{1+{\varepsilon \over f(u)}\int _{0}^{1}(1-t)f'(u+\varepsilon t)\,dt}\right]^{-1}\\=&{\frac {1}{f(u)\varepsilon }}\left[1-{\frac {\varepsilon }{f(u)}}\int _{0}^{1}(1-t)f'(u+\varepsilon t)\,dt+O(\varepsilon ^{2})\right]\\=&{\frac {1}{f(u)\varepsilon }}-{\frac {1}{f(u)^{2}}}\int _{0}^{1}(1-t)f'(u+\varepsilon t)\,dt+O(\varepsilon ).\end{aligned}}} Similarly, ${\displaystyle {\frac {1}{\varepsilon _{-}}}={\frac {1}{f(u)\varepsilon }}+{\frac {1}{f(u)^{2}}}\int _{0}^{1}(1-t)f'(u-\varepsilon t)\,dt+O(\varepsilon ).}$ Then by (4) {\displaystyle {\begin{aligned}{\rm {{regularization}\ (2)=}}&4-\int _{0}^{\ell }{\frac {2}{\varepsilon }}\,du+O(\varepsilon )\\&+\int _{u=0}^{\ell }\int _{t=0}^{1}{\frac {(1-t)}{f(u)}}[f'(u+\varepsilon t)-f'(u-\varepsilon t)]\,du\,dt\\=&4-{\frac {2\ell }{\varepsilon }}+O(\varepsilon ).\end{aligned}}\qquad \qquad \quad (5)} Comparing (3) and (5), we get 
${\displaystyle E_{\varepsilon }(\gamma )-E_{\varepsilon }(I\circ \gamma )=O(\varepsilon );}$ hence, ${\displaystyle E(\gamma )=E(I\circ \gamma )}$.

For the second assertion, let ${\displaystyle I}$ send a point of ${\displaystyle \gamma }$ to infinity. In this case ${\displaystyle L=\infty }$ and, thus, the constant term 4 in (5) disappears.

## Freedman–He–Wang conjecture

The Freedman–He–Wang conjecture (1994) stated that the Möbius energy of nontrivial links in ${\displaystyle {\mathbb {R}}^{3}}$ is minimized by the stereographic projection of the standard Hopf link. This was proved in 2012 by Ian Agol, Fernando C. Marques and André Neves, using Almgren–Pitts min-max theory.[1]

Let ${\displaystyle \gamma _{i}:S^{1}\rightarrow {\mathbb {R}}^{3}}$, ${\displaystyle i=1,2,}$ be a link of 2 components, i.e., a pair of rectifiable closed curves in Euclidean three-space with ${\displaystyle \gamma _{1}(S^{1})\cap \gamma _{2}(S^{1})=\emptyset }$. The Möbius cross energy of the link ${\displaystyle (\gamma _{1},\gamma _{2})}$ is defined to be

${\displaystyle E(\gamma _{1},\gamma _{2})=\int _{S^{1}\times S^{1}}{\frac {|{\dot {\gamma }}_{1}(s)||{\dot {\gamma }}_{2}(t)|}{|\gamma _{1}(s)-\gamma _{2}(t)|^{2}}}\,ds\,dt.}$

The linking number of ${\displaystyle (\gamma _{1},\gamma _{2})}$ is defined by

{\displaystyle {\begin{aligned}\mathrm {link} (\gamma _{1},\gamma _{2})&=\,{\frac {1}{4\pi }}\oint _{\gamma _{1}}\oint _{\gamma _{2}}{\frac {\mathbf {r} _{1}-\mathbf {r} _{2}}{|\mathbf {r} _{1}-\mathbf {r} _{2}|^{3}}}\cdot (d\mathbf {r} _{1}\times d\mathbf {r} _{2})\\&={\frac {1}{4\pi }}\int _{S^{1}\times S^{1}}{\frac {\mathrm {det} ({\dot {\gamma }}_{1}(s),{\dot {\gamma }}_{2}(t),\gamma _{1}(s)-\gamma _{2}(t))}{|\gamma _{1}(s)-\gamma _{2}(t)|^{3}}}\,ds\,dt.\end{aligned}}}

(Figure: examples of links with linking numbers −2, −1, 0, 1, 2, and 3.)

It is not difficult to check that
${\displaystyle E(\gamma _{1},\gamma _{2})\geq 4\pi |{\rm {link}}(\gamma _{1},\gamma _{2})|}$.

If two circles are very far from each other, the cross energy can be made arbitrarily small. If the linking number ${\displaystyle \mathrm {link} (\gamma _{1},\gamma _{2})}$ is non-zero, the link is called non-split, and for a non-split link ${\displaystyle E(\gamma _{1},\gamma _{2})\geq 4\pi }$. So we are interested in the minimal energy of non-split links. Note that the definition of the energy extends to any 2-component link in ${\displaystyle {\mathbb {R}}^{n}}$.

The Möbius energy has the remarkable property of being invariant under conformal transformations of ${\displaystyle {\mathbb {R}}^{3}}$. This property is explained as follows. Let ${\displaystyle F:{\mathbb {R}}^{3}\rightarrow {S^{3}}}$ denote a conformal map. Then ${\displaystyle E(\gamma _{1},\gamma _{2})=E(F\circ \gamma _{1},F\circ \gamma _{2}).}$ This is called the conformal invariance property of the Möbius cross energy.

Main Theorem. Let ${\displaystyle \gamma _{i}:S^{1}\rightarrow {\mathbb {R}}^{3}}$, ${\displaystyle i=1,2,}$ be a non-split link of 2 components. Then ${\displaystyle E(\gamma _{1},\gamma _{2})\geq 2\pi ^{2}}$. Moreover, if ${\displaystyle E(\gamma _{1},\gamma _{2})=2\pi ^{2}}$, then there exists a conformal map ${\displaystyle F:{\mathbb {R}}^{3}\rightarrow {S^{3}}}$ such that ${\displaystyle F\circ \gamma _{1}(t)=(\cos t,\sin t,0,0)}$ and ${\displaystyle F\circ \gamma _{2}(t)=(0,0,\cos t,\sin t)}$ (the standard Hopf link up to orientation and reparameterization).
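Both the inequality and the theorem are easy to probe numerically. The article has no code, so the following Python sketch is an addition: it evaluates the Gauss linking integral and the cross energy by trapezoid sums for one concrete Hopf-link configuration (the two circles are an arbitrary illustrative choice, not taken from the text), and records the exact value 2π² for the standard Hopf link in R⁴.

```python
import math

# Trapezoid sums on a periodic grid are spectrally accurate for smooth
# periodic integrands, so a modest N suffices.
N = 256
h = 2 * math.pi / N

# An illustrative Hopf link in R^3 (an assumption, not from the text):
# two unit circles in perpendicular planes, each passing through the
# other's center.  Each function returns (point, tangent).
def gamma1(s):
    return (math.cos(s), math.sin(s), 0.0), (-math.sin(s), math.cos(s), 0.0)

def gamma2(t):
    return (1.0 + math.cos(t), 0.0, math.sin(t)), (-math.sin(t), 0.0, math.cos(t))

link = 0.0    # Gauss linking integral
energy = 0.0  # Mobius cross energy (|gamma1'| = |gamma2'| = 1 here)
for i in range(N):
    p1, d1 = gamma1(i * h)
    for j in range(N):
        p2, d2 = gamma2(j * h)
        r = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
        rlen = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
        det = (d1[0] * (d2[1] * r[2] - d2[2] * r[1])
               - d1[1] * (d2[0] * r[2] - d2[2] * r[0])
               + d1[2] * (d2[0] * r[1] - d2[1] * r[0]))
        link += det / rlen ** 3 * h * h
        energy += 1.0 / rlen ** 2 * h * h
link /= 4 * math.pi

# Expect |link| close to 1 and energy >= 4*pi*|link|.
print(link, energy, 4 * math.pi)

# For the standard Hopf link in R^4, |gamma1(s) - gamma2(t)|^2 = 2 for all
# s, t, so the cross energy is (2*pi)^2 / 2 = 2*pi^2 exactly, the minimum
# in the Main Theorem.
E_hopf = (2 * math.pi) ** 2 / 2
print(E_hopf, 2 * math.pi ** 2)
```

For this configuration the computed linking number is ±1 and the energy exceeds 4π, consistent with the displayed inequality; the standard Hopf link attains 2π² because the chord length |γ₁(s) − γ₂(t)| is the constant √2.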
Given two non-intersecting differentiable curves ${\displaystyle \gamma _{1},\gamma _{2}\colon S^{1}\rightarrow \mathbb {R} ^{3}}$, define the Gauss map ${\displaystyle \Gamma }$ from the torus to the 2-sphere by

${\displaystyle \Gamma (s,t)={\frac {\gamma _{1}(s)-\gamma _{2}(t)}{|\gamma _{1}(s)-\gamma _{2}(t)|}}.}$

Analogously, the Gauss map of a link ${\displaystyle (\gamma _{1},\gamma _{2})}$ in ${\displaystyle {\mathbf {R}}^{4}}$, denoted by ${\displaystyle g=G(\gamma _{1},\gamma _{2})}$, is the Lipschitz map ${\displaystyle g:S^{1}\times S^{1}\to S^{3}}$ defined by the same formula,

${\displaystyle g(s,t)={\frac {\gamma _{1}(s)-\gamma _{2}(t)}{|\gamma _{1}(s)-\gamma _{2}(t)|}}.}$

We denote an open ball in ${\displaystyle {\mathbf {R}}^{4}}$, centered at ${\displaystyle {\mathbf {x}}}$ with radius ${\displaystyle r}$, by ${\displaystyle B_{r}^{4}({\mathbf {x}})}$. The boundary of this ball is denoted by ${\displaystyle S_{r}^{3}({\mathbf {x}})}$. An intrinsic open ball of ${\displaystyle S^{3}}$, centered at ${\displaystyle {\mathbf {p}}\in S^{3}}$ with radius ${\displaystyle r}$, is denoted by ${\displaystyle B_{r}({\mathbf {p}})}$.
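The pointwise bound on the Jacobian of the Gauss map established below, |Jac g|(s,t) ≤ |γ̇₁(s)||γ̇₂(t)|/|γ₁(s)−γ₂(t)|², can be spot-checked numerically. The article has no code; this Python sketch uses an arbitrary pair of linked circles in R³ (an illustrative assumption), where the same chord-direction formula and Jacobian algebra apply:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Illustrative disjoint curves in R^3 (an arbitrary choice, not from the
# text): two unit circles in perpendicular planes forming a Hopf link.
# Each function returns (point, tangent).
def gamma1(s):
    return (math.cos(s), math.sin(s), 0.0), (-math.sin(s), math.cos(s), 0.0)

def gamma2(t):
    return (1.0 + math.cos(t), 0.0, math.sin(t)), (-math.sin(t), 0.0, math.cos(t))

ok = True
N = 60
for i in range(N):
    for j in range(N):
        s, t = 2 * math.pi * i / N, 2 * math.pi * j / N
        p1, d1 = gamma1(s)
        p2, d2 = gamma2(t)
        r = tuple(x - y for x, y in zip(p1, p2))
        rlen = math.sqrt(dot(r, r))
        g = tuple(x / rlen for x in r)  # the Gauss map g(s, t)
        # Partial derivatives of g: project the curve tangents off g and
        # divide by the chord length.
        gs = tuple((d1[k] - dot(g, d1) * g[k]) / rlen for k in range(3))
        gt = tuple(-(d2[k] - dot(g, d2) * g[k]) / rlen for k in range(3))
        # |Jac g|^2 is the Gram determinant of (gs, gt).
        jac2 = dot(gs, gs) * dot(gt, gt) - dot(gs, gt) ** 2
        bound = math.sqrt(dot(d1, d1)) * math.sqrt(dot(d2, d2)) / rlen ** 2
        ok = ok and math.sqrt(max(jac2, 0.0)) <= bound + 1e-12

print(ok)
```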
We have

${\displaystyle {\frac {\partial g}{\partial s}}={{\dot {\gamma }}_{1}-\langle g,{\dot {\gamma }}_{1}\rangle g \over |\gamma _{1}-\gamma _{2}|}\quad {\mbox{and}}\quad {\frac {\partial g}{\partial t}}=-{{\dot {\gamma }}_{2}-\langle g,{\dot {\gamma }}_{2}\rangle g \over |\gamma _{1}-\gamma _{2}|}.}$

Thus,

{\displaystyle {\begin{aligned}\left|{\frac {\partial g}{\partial s}}\right|^{2}\left|{\frac {\partial g}{\partial t}}\right|^{2}-\left\langle {\frac {\partial g}{\partial s}},{\frac {\partial g}{\partial t}}\right\rangle ^{2}&\leq \left|{\frac {\partial g}{\partial s}}\right|^{2}\left|{\frac {\partial g}{\partial t}}\right|^{2}\\&={\frac {|{\dot {\gamma }}_{1}|^{2}-\langle g,{\dot {\gamma }}_{1}\rangle ^{2}}{|\gamma _{1}-\gamma _{2}|^{2}}}{\frac {|{\dot {\gamma }}_{2}|^{2}-\langle g,{\dot {\gamma }}_{2}\rangle ^{2}}{|\gamma _{1}-\gamma _{2}|^{2}}}\\&\leq {\frac {|{\dot {\gamma }}_{1}|^{2}|{\dot {\gamma }}_{2}|^{2}}{|\gamma _{1}-\gamma _{2}|^{4}}}.\end{aligned}}}

It follows that for almost every ${\displaystyle (s,t)\in S^{1}\times S^{1}}$,

${\displaystyle |{\rm {Jac\,}}g|(s,t)\leq {\frac {|{\dot {\gamma }}_{1}(s)||{\dot {\gamma }}_{2}(t)|}{|\gamma _{1}(s)-\gamma _{2}(t)|^{2}}}.}$

If equality holds at ${\displaystyle (s,t)}$, then

${\displaystyle \langle {\dot {\gamma }}_{1}(s),{\dot {\gamma }}_{2}(t)\rangle =\langle {\dot {\gamma }}_{1}(s),\gamma _{1}(s)-\gamma _{2}(t)\rangle =\langle {\dot {\gamma }}_{2}(t),\gamma _{1}(s)-\gamma _{2}(t)\rangle =0.}$

In particular, if ${\displaystyle C}$ denotes the current defined by the image of the torus under the Gauss map ${\displaystyle g}$, its mass ${\displaystyle {\mathbf {M}}(C)}$ satisfies

${\displaystyle {\mathbf {M}}(C)\leq \int _{S^{1}\times S^{1}}|{\rm {Jac\,}}g|\,ds\,dt\leq E(\gamma _{1},\gamma _{2}).}$

If the link ${\displaystyle (\gamma _{1},\gamma _{2})}$ is contained in an oriented affine hyperplane with unit normal vector ${\displaystyle {\mathbf {p}}\in S^{3}}$ compatible with the orientation, then

${\displaystyle C={\rm {link}}(\gamma _{1},\gamma _{2})\cdot \partial B_{\pi /2}(-{\mathbf {p}}).}$

## References

1. ^ Agol, Ian; Marques, Fernando C.; Neves, André (2012).
"Min-max theory and the energy of links". arXiv: [math.GT].
https://tex.stackexchange.com/questions/274873/remove-indentation-to-texttt
# Remove indentation to \texttt

I'm writing my thesis using the book document class. Each paragraph is indented automatically. This is fine with me for the whole thesis, except in small pieces of "code" that are enclosed in \texttt. I would like to eliminate the indentation in this case, but I can't. I tried \noindent, but a small space remains (shorter than the indentation, but still a space). Then I tried the following, but it has no effect: the indentation remains.

\texttt{
{\setlength{\parindent}{0cm}
text 1: Follow the white rabbit!\\
text 2: Follow the tiny white rabbit!\\
}
}

How can I fix this? Thanks!

• You wanna typeset code, do it properly (using package listings or minted). Or maybe you want to typeset an algorithm; several packages for algorithms are available as well. Oct 25, 2015 at 16:17
• Do you maybe want something like the verbatim environment? That would be better, although adding \noindent before \texttt would kill the indentation. – cfr Oct 25, 2015 at 16:17
• What you are doing right now is starting a new paragraph and then setting the parindent to zero, which is too late. Oct 25, 2015 at 16:17
• @Johannes_B I simply wish these two lines to be in the typewriter font of \texttt with no indentation. What is the easiest way to do it? I used listings in another part of the thesis to format code in a certain way, which in this case would not fit... Oct 25, 2015 at 16:22
• Do it just like @cfr said in the comment above :-) Oct 25, 2015 at 16:24

Setting \parindent inside \texttt is not something I'd do. Remember that an end-of-line is the same as a space:

\noindent\texttt{%
text 1: Follow the white rabbit!\\
text 2: Follow the tiny white rabbit!\\
}%
Some text follows.

But perhaps you want verbatim:

Some text before the typewriter type text
\begin{verbatim}
text 1: Follow the white rabbit!
text 2: Follow the tiny white rabbit!
\end{verbatim}
Some text follows.

The \setlength{\parindent}{0cm} should precede the \texttt{..}; otherwise it will not take effect.
Further, you can also use the verbatim environment (pointed out by egreg) or its \verb|..| command if it fulfills your needs.

\documentclass[12pt]{article}
\begin{document}

{\setlength{\parindent}{0cm}%
\ttfamily
text 1: Follow the white rabbit!\par
text 2: Follow the tiny white rabbit!\par
}

\bigskip

text 1: Follow the white rabbit! \\
text 2: Follow the tiny white rabbit!
\end{document}
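If such snippets recur throughout the thesis, a small dedicated environment saves repeating the settings each time. This is only a sketch along the same lines as the answers above (the environment name ttlines is made up, not from any package):

```latex
\documentclass{book}

% Hypothetical helper: each use starts a new, unindented paragraph set in
% typewriter type, without changing \parindent globally.
\newenvironment{ttlines}
  {\par\noindent\ttfamily}
  {\par}

\begin{document}

Some ordinary, indented paragraph text before the snippet.

\begin{ttlines}
text 1: Follow the white rabbit!\\
text 2: Follow the tiny white rabbit!
\end{ttlines}

Ordinary text resumes here.

\end{document}
```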
https://deploy-preview-471--electric-book.netlify.app/docs/setup/repeatable-items.html
# Repeatable items

## Usage

Sometimes across a book or a collection of books, you want to repeat snippets of content. For instance, you may have a standard introduction to all your books, or ‘About the author’ text that you want to add to the ends of certain chapters. Or you might want several books to draw questions or create quizzes from the same set of possible questions.

You can create repeatable items like this by saving them as standalone files in the _items folder. The contents of the _items folder should be structured in the same way as a book folder. That is, markdown files should be in _items/text, and markdown files for a French translation would be in _items/fr/text.

To include an item in your book, use the include item tag, and specify in the tag which file you want to include. For example:

{% include item file="about-charles-dickens.md" %}

In your output, the content in _items/text/about-charles-dickens.md will appear where you placed that tag. Note that the file extension is optional. So you can also use:

{% include item file="about-charles-dickens" %}

This is convenient, but it also means that you shouldn’t use the same file name for different files with different extensions. If you had both about-charles-dickens.md and about-charles-dickens.html in _items/text, your output would include the second one alphabetically: about-charles-dickens.md.

Note that include tags (or other Liquid tags) inside items may not regenerate when you’re running an incremental build with Jekyll, because of the sequence in which Jekyll processes Liquid tags and content.

## YAML frontmatter in items

An item must start with YAML frontmatter, just like your book’s text files, even if it’s blank. At a minimum, blank YAML frontmatter is two lines of three hyphens at the top of the document:

---
---

This tells Jekyll to process the file, which ensures that Jekyll knows about it when you include it somewhere. You can also add YAML frontmatter data to each item.
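For instance, a hypothetical _items/text/about-charles-dickens.md with one frontmatter field might look like this (the frontmatter field and the biography sentence are illustrative, not prescribed):

```markdown
---
title: "About Charles Dickens"
---

Charles Dickens (1812–1870) was an English novelist and social critic.
```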
This is important for multiple-choice questions, for example, which must include the correct answer options in their frontmatter:

---
correct: 1, 4
---

## Overriding items per book

Items in a book’s text folder will override the ones in the main _items folder, if they have the same file name. Let’s say you have five books in your project that use the ‘About Charles Dickens’ item, but in one of your books you want that item to say something different. Just copy the item to that book’s /text folder, and edit it there. When you use {% include item file="about-charles-dickens" %} in that book, you’ll get the version in the book’s /text folder, and not the one from _items.

## Translated items

When you need to create translations of items in the _items folder, save those in a subfolder of _items that has the same name as the book’s translation folder. For instance, if your book’s French translation is in book/fr, save your French items to _items/fr, in the relevant text or images subfolders. Then in the French book text, use the include item tag as usual:

{% include item file="about-charles-dickens.md" %}

This way, you can use exactly the same tags across translations, and your output will include the relevant item from the relevant translation folder automatically. No need to rename files.

## Images

Images stored in _items/images follow the same conventions as in books. That is, place master images in _items/images/_source and process them using the output script (or gulp --book _items). Translated images should go into language subfolders of _items, such as _items/fr/images/_source for French images.

### Notes

Unlike book directories, items may not inherit their parent language’s images. This ability is still in development. All images in the _items parent folder may need to be copied to (or translated in) each translation’s images folder in _items, such as _items/fr/images.

Using include figure with images in items is also a feature in development.
## Creating new item-based includes (advanced)

The YAML metadata in an item can be very useful, especially if you want to create your own include tags that use items. The mcq include is an example: it uses the item include to get the item as a programming object, and then outputs the parts of that object that it needs. Each of the item’s YAML-frontmatter fields is accessible in the item object.

To get an item as an object, rather than as generated output, add return="object" when you include item in your new include. For example:

{% include item file=include.file return="object" %}

For instance, let’s say you’re creating an include that outputs the title and reading grade of a poem stored in _items. You might create an include called poem-grade, which uses the item include to fetch an item object. This YAML in the item (the grade value here is just an example)

---
title: "The Tiger"
grades: 4
---

would be accessible in your poem-grade include as {{ item-file-object.title }} and {{ item-file-object.grades }}. Your poem-grade include would fetch the object with

{% include item file=include.file return="object" %}

and you would then call it from your book text as

{% include poem-grade file="the-tiger" %}

assuming that you’d saved the poem in _items/text as the-tiger.md.
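To make this concrete, the full body of a hypothetical poem-grade include file might be (the surrounding HTML and class name are illustrative assumptions, not part of the template):

```liquid
{% include item file=include.file return="object" %}
<p class="poem-grade">
  {{ item-file-object.title }} (reading grade {{ item-file-object.grades }})
</p>
```

Calling {% include poem-grade file="the-tiger" %} in a chapter would then output the poem’s title and grade from its frontmatter.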
http://openstudy.com/updates/55da83b0e4b02663346bf78b
## Need help with square root problem please :)

anonymous, one year ago

1. anonymous: (drawing of the problem: $7\sqrt{2}\left(5\sqrt{6}-4\sqrt{2}\right)$)

2. anonymous: sorry if you can't read this, but I multiplied 7 and 5 and got 35, multiplied 2 and 6 and got the square root of 12, and multiplied 7 and 4 and got −28. I tried multiplying 7 and the square root of 2, but that gives me 14, and every time I try to simplify, it doesn't make sense and doesn't give me the right answer.

3. anonymous: the answer is 70 square root of 3 minus 56, but how is it 56?

4. anonymous: $70\sqrt{3}-56$ okay, that's the answer

5. anonymous: Using the distributive property, $7\sqrt{2}\left( 5\sqrt{6}-4\sqrt{2} \right)$$=35\sqrt{12} - 28\sqrt{4}$ Does this help?

6. anonymous: but don't you multiply the 7 and the square root of 2?

7. anonymous: It helps, but I'm just wondering why I don't multiply it

8. anonymous: You're multiplying $7\sqrt{2}$ by $4\sqrt{2}$. You multiply the coefficients together and the radicals together to give $28\sqrt{4}$, which, of course, can be simplified.

9. anonymous: oh! okay, I understand!! thank you so much. :)

10. anonymous: In general, $a \sqrt{b} \times c \sqrt{d} = ac \sqrt{bd}$

11. anonymous: You're welcome
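The simplification discussed in the thread is easy to verify numerically; the thread itself has no code, so this is a quick Python check:

```python
import math

lhs = 7 * math.sqrt(2) * (5 * math.sqrt(6) - 4 * math.sqrt(2))
mid = 35 * math.sqrt(12) - 28 * math.sqrt(4)  # after distributing
rhs = 70 * math.sqrt(3) - 56                  # fully simplified

# sqrt(2)*sqrt(6) = sqrt(12) = 2*sqrt(3), and sqrt(2)*sqrt(2) = sqrt(4) = 2,
# so 35*sqrt(12) = 70*sqrt(3) and 28*sqrt(4) = 56.
print(lhs, mid, rhs)
```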