http://mathhelpforum.com/differential-geometry/144601-continuity-print.html
# Continuity • May 13th 2010, 03:07 PM janae77 Continuity Let f be defined and continuous on a closed set S in R. Let A = {x : x $\in$ S and f(x) = 0}. Prove that A is a closed subset of R. • May 13th 2010, 03:38 PM Plato Quote: Originally Posted by janae77 Let f be defined and continuous on a closed set S in R. Let A = {x : x $\in$ S and f(x) = 0}. Prove that A is a closed subset of R. Hint: If $f$ is continuous and $f(p)\not=0$, then there is an open interval $(s,t)$ such that $p\in (s,t)$ and $f$ is non-zero on $(s,t)$. Hence, does this show the complement is open?
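Plato's hint can be completed along these lines (a sketch, not part of the original thread):

```latex
\textbf{Sketch.} Let $p \in \mathbb{R} \setminus A$. If $p \notin S$, then since $S$ is
closed there is an open interval around $p$ disjoint from $S$, hence from $A$.
If $p \in S$ and $f(p) \neq 0$, continuity at $p$ with
$\varepsilon = \tfrac{1}{2}\lvert f(p)\rvert$ gives $\delta > 0$ such that
$f(x) \neq 0$ for all $x \in S \cap (p-\delta,\, p+\delta)$; this interval is
therefore disjoint from $A$. In both cases $p$ has a neighbourhood missing $A$,
so $\mathbb{R} \setminus A$ is open and $A$ is closed.
```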
2018-02-21 19:59:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218780994415283, "perplexity": 730.1085355867667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813712.73/warc/CC-MAIN-20180221182824-20180221202824-00290.warc.gz"}
http://tex.stackexchange.com/questions/13078/defining-optional-data-in-a-class
# Defining (optional) data in a class I'm trying to learn how to write LaTeX classes and am using my resume as a toy example. I am trying to separate style from content as much as possible, so I am trying to define data fields such as \name, \address, \university, similar to those that I have seen in \maketitle. Some of the fields of each of these are to be optional. I have a working example, but since this is my first attempt at writing a LaTeX class, I wanted to ask how I should be defining the class's data. My attempt so far is like so (optional second line for the address): \RequirePackage{xkeyval} % for keyval } } } } } \newcommand{\makeCV}{% \fi } This works okay for me; I can type \address { %second_line=my county, town=my city, postcode=my post code } \makeCV but the class seems a little verbose. In particular, having to define all those \newcommand{...}{} and all the various \newif 's seems a little verbose. My question is: how should I be doing this task properly? - In what sense are these meta data? –  Seamus Mar 9 '11 at 18:35 @Seamus That's what I thought data in a class was called... (have taken the meta out of the title) –  Tom Mar 9 '11 at 18:35 As it stands, I'm not sure it's clear what you're asking. "How should I do this properly?" is a little unclear. Could you try and sharpen up what you want from answers here? –  Seamus Mar 9 '11 at 19:01 @Seamus You are right, I want to know what is usually done when someone defines \name, \address, etc. in a package. I couldn't find a guide that told me, so I had a go, but I feel that I have almost certainly done it badly. –  Tom Mar 9 '11 at 19:11 You can cut out a lot of the verbosity of the coding part using the keycommand package. But I wouldn't worry too much about the verbosity of the coding, but rather about the author interface you are presenting to your potential users, which is verbose. From my experience, users prefer environments and simple commands.
I would reserve the key-val pairs mostly for switches, such as including a photo or not. Certainly the address lines do not belong in the key-val portion of the command. An interface as shown below,

\usepackage[foto=none]{sCV}
\begin{CV}
\end{CV}

would be easier to code and use. Using an environment would also make it easier to code something that is potentially going to span many pages. - "program to an interface, not an implementation." –  Matthew Leingang Mar 9 '11 at 21:01

I write this directly without trying it, so be careful. NPK stands for "new package"; better is to use, for example, letters of the name of your package. mcv stands for makeCV. With \define@boolkey you don't need \newif: \ifNPK@mcv@LineTwo is automatically created. \presetkeys is to give default values, and \setkeys[NPK]{mcv}{#1} is to apply the options inside your macro. Without a try, perhaps I made some typos :( Now I prefer to use pgfkeys; if you want, the same things are possible, but perhaps it's more verbose.

\define@boolkey [NPK] {mcv} {LineTwo}[true]{}
\define@cmdkey  [NPK] {mcv} {firstline}{}
\define@cmdkey  [NPK] {mcv} {secondline}{}
\define@cmdkey  [NPK] {mcv} {town}{}
\define@cmdkey  [NPK] {mcv} {postcode}{}
\presetkeys     [NPK] {mcv} {LineTwo = false,
                             firstline = {},
                             secondline = {},
                             town = {},       % Paris
                             postcode = {}}{} % 75005

\newcommand{\makeCV}[1][]{%
  \setkeys[NPK]{mcv}{#1}
  \cmdNPK@mcv@firstline\\%
  \ifNPK@mcv@LineTwo
    \cmdNPK@mcv@secondline\\%
  \fi
  \cmdNPK@mcv@town\\%
  \cmdNPK@mcv@postcode%
}

- I fixed a ' that should have been a ` Hope you don't mind. –  Seamus Mar 9 '11 at 19:05
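Given key definitions like those in the answer, the author-facing call would look something like this (a sketch; the address values are placeholders, not from the original thread):

```latex
\makeCV[LineTwo    = true,
        firstline  = {42 Example Street}, % hypothetical values
        secondline = {Flat 2},
        town       = {Paris},
        postcode   = {75005}]
```

Keys left out fall back to the defaults set by \presetkeys, which is what makes the fields optional.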
2015-05-27 05:57:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019410967826843, "perplexity": 1966.9893244144564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928907.65/warc/CC-MAIN-20150521113208-00046-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.tutorvista.com/content/math/probability-terms/
# Probability Terms Probability deals with uncertainty; in mathematics, the numerical study of uncertainty is called probability. In our daily life, while speaking, we use words like likely, possibly, probably. For example: • Probably, I may go to a movie. • He is likely to get the prize. • Possibly, Kam may leave today. What do the words likely, possibly, probably convey? They convey that the event under consideration may happen or may not happen. It is a case of uncertainty. Introduction to probability and probability terms: The dictionary meaning of the word probable is "likely but not certain". So we could describe probability as an index which numerically measures the degree of certainty or uncertainty in the occurrence of events. To learn the definition of probability, it is essential to know the terms involved in probability. Terms in Probability: Experiment: An activity which results in a well-defined outcome is called an experiment. Random experiment: An experiment in which all possible outcomes are known in advance, but the exact result of any trial cannot be surely predicted, is called a random experiment. Tossing a coin, throwing a die, and picking a ball from a bag of balls are examples of random experiments. Trial: Performing an experiment once is called a trial. Events: The possible outcomes of a trial are called events. Equally likely events: If the different outcomes of a trial have an equal chance of occurring, then the outcomes are said to be equally likely. For example: when we throw a die once, the chances of 1, 2, 3, 4, 5, 6 occurring are the same, so they are equally likely to appear. Sample space: The set of all possible outcomes of an experiment constitutes its sample space. Dependent events: The occurrence of one event does have an effect on the probability of the second event. Independent events: The occurrence of one event has no effect on the probability of the second event. Outcome: Each result of a trial is called an outcome.
Types of probability: There are two types of probability: theoretical probability and experimental (or empirical) probability. Theoretical probability: The mathematical chance of occurrence of an event, computed by counting outcomes: Probability = $\frac{Number\ of\ outcomes\ favourable\ to\ an\ event}{Total\ number\ of\ possible\ outcomes}$ Experimental probability: When the number of cases favourable to an event is found experimentally and the probability is then calculated, that is called the experimental probability of the event.
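The two notions can be contrasted in a short simulation (Python, illustrative only; the fair-die example and the fixed seed are choices made here, not from the original page):

```python
from fractions import Fraction
import random

def theoretical_probability(favourable, total):
    # Probability = favourable outcomes / total possible outcomes
    return Fraction(favourable, total)

def experimental_probability(trials, event, rng):
    # Repeat the trial (rolling a fair die) and count how often the event occurs
    hits = sum(1 for _ in range(trials) if rng.randint(1, 6) == event)
    return hits / trials

p_theory = theoretical_probability(1, 6)                      # chance of rolling a 3
p_experiment = experimental_probability(10000, 3, random.Random(0))
print(p_theory, p_experiment)  # the experimental value fluctuates around 1/6
```

As the number of trials grows, the experimental probability tends toward the theoretical one.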
2019-09-23 05:57:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.621959924697876, "perplexity": 783.3901201465807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576047.85/warc/CC-MAIN-20190923043830-20190923065830-00499.warc.gz"}
https://www.semanticscholar.org/paper/Tableaux-on-k%2B1-cores%2C-reduced-words-for-affine-and-Lapointe-Morse/5e1d97ff57bd0b6cc1b83a88da608357d251dcd3
Tableaux on k+1-cores, reduced words for affine permutations, and k-Schur expansions @article{Lapointe2005TableauxOK, title={Tableaux on k+1-cores, reduced words for affine permutations, and k-Schur expansions}, author={Luc Lapointe and Jennifer Morse}, journal={J. Comb. Theory, Ser. A}, year={2005}, volume={112}, pages={44-81} } • Published 19 February 2004 • Mathematics • J. Comb. Theory, Ser. A Combinatorics of (l,0)-JM partitions, l-cores, the ladder crystal and the finite Hecke algebra The following thesis contains results on the combinatorial representation theory of the finite Hecke algebra $H_n(q)$. In Chapter 2 simple combinatorial descriptions are given which determine when QUANTUM COHOMOLOGY AND THE k-SCHUR BASIS • Mathematics • 2007 We prove that structure constants related to Hecke algebras at roots of unity are special cases of k-Littlewood-Richardson coefficients associated to a product of k-Schur functions. As a consequence, Order Ideals in Weak Subposets of Young’s Lattice and Associated Unimodality Conjectures • Mathematics • 2004 Abstract: The k-Young lattice Yk is a weak subposet of the Young lattice containing partitions whose first part is bounded by an integer k > 0. The Yk poset was introduced in connection with Operators on k-tableaux and the k-Littlewood-Richardson rule for a special case This thesis proves a special case of the $k$-Littlewood--Richardson rule, which is analogous to the classical Littlewood--Richardson rule but is used in the case for $k$-Schur functions. The K-theory Schubert calculus of the affine Grassmannian • Mathematics Compositio Mathematica • 2010 Abstract: We construct the Schubert basis of the torus-equivariant K-homology of the affine Grassmannian of a simple algebraic group G, using the K-theoretic NilHecke ring of Kostant and Kumar.
A Note on Embedding Hypertrees Bohman, Frieze, and Mubayi's problem is solved, proving the tight result that $\chi > t$ is sufficient to embed any $r$-tree with t edges. Quantum cohomology of G/P and homology of affine Grassmannian • Mathematics • 2007 Let G be a simple and simply-connected complex algebraic group, P ⊂ G a parabolic subgroup. We prove an unpublished result of D. Peterson which states that the quantum cohomology QH*(G/P) of a flag Affine Insertion and Pieri Rules for the Affine Grassmannian • Mathematics • 2006 We study combinatorial aspects of the Schubert calculus of the affine Grassmannian Gr associated with SL(n,C). Our main results are: 1) Pieri rules for the Schubert bases of H^*(Gr) and H_*(Gr), College of Arts and Sciences Quantum Cohomology and the K-schur Basis • Mathematics • 2005 The following item is made available as a courtesy to scholars by the author(s) and Drexel University Library and may contain materials and content, including computer code and tags, artwork, text, References (showing 1-10 of 23 references) Ordering the Affine Symmetric Group We review several descriptions of the affine symmetric group. We make explicit the basis of its Bruhat order. Tableau atoms and a new Macdonald positivity conjecture Duke Math J • Engineering • 2000 A snap action fluid control valve, the operation of which is controlled by a relatively slow acting thermally responsive actuator member. The valve of this invention is particularly adapted for use Crystal base for the basic representation of $$U_q (\widehat{\mathfrak{s}\mathfrak{l}}(n))$$ • Mathematics • 1990 Abstract: We show the existence of the crystal base for the basic representation of $$U_q (\widehat{\mathfrak{s}\mathfrak{l}}(n))$$ by giving an explicit description in terms of Young diagrams.
Algebraic Combinatorics And Quantum Groups * Uno's Conjecture on Representation Types of Hecke Algebras (S Ariki) * Quiver Varieties, Affine Lie Algebras, Algebras of BPS States, and Semicanonical Basis (I Frenkel et al.) * Divided Differences Crystal base for the basic representation of • Mathematics • 1990 We show the existence of the crystal base for the basic representation of Uq(~^l(n)) by giving an explicit description in terms of Young diagrams. Young Tableaux: With Applications to Representation Theory and Geometry Part I. Calculus of Tableaux: 1. Bumping and sliding 2. Words: the plactic monoid 3. Increasing sequences: proofs of the claims 4. The Robinson-Schensted-Knuth Correspondence 5. The Upper Bounds in Affine Weyl Groups under the Weak Order It is determined that the question of which pairs of elements of W have upper bounds can be reduced to the analogous question within a particular finite subposet of an affine Weyl group W.
2022-09-27 14:45:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6162826418876648, "perplexity": 2302.0686311194013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00176.warc.gz"}
http://mathhelpforum.com/calculus/83762-exponential-fourier-series-expansion-print.html
# Exponential Fourier-series expansion • April 14th 2009, 04:33 PM tiki_master Exponential Fourier-series expansion I need help in determining the exponential Fourier-series expansion for the half-wave rectified signal x(t)=cos(t). I am trying to find Xn, and have determined for the case where n=0 that Xn=1/pi... but I'm having trouble finding the general case for Xn. Any help would be appreciated. • May 23rd 2009, 06:48 PM Media_Man Fourier Series $f(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty a_n\cos(nx)+\sum_{n=1}^\infty b_n\sin(nx)$ $a_0=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\,dx$ $a_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos(nx)\,dx$ $b_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin(nx)\,dx$ Look very, very carefully at the function you are trying to expand here. $x(t)=\cos(t)$, therefore $a_0=0$, $a_1=1$, $a_n=0$ for all $n>1$, and $b_n=0$ for all $n$.
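As an illustrative cross-check (Python, not part of the original thread), the coefficient integrals above can be approximated numerically; for f(x) = cos(x) they reproduce a_1 = 1 with every other coefficient 0:

```python
import math

def fourier_coeff(f, n, kind, steps=20000):
    # Midpoint-rule approximation of (1/pi) times the integral over [-pi, pi]
    # of f(x)*cos(nx) (kind="a") or f(x)*sin(nx) (kind="b")
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        basis = math.cos(n * x) if kind == "a" else math.sin(n * x)
        total += f(x) * basis
    return total * h / math.pi

a1 = fourier_coeff(math.cos, 1, "a")  # ≈ 1
a2 = fourier_coeff(math.cos, 2, "a")  # ≈ 0
b1 = fourier_coeff(math.cos, 1, "b")  # ≈ 0
```

The same routine applied to an actual half-wave rectified cosine (f(x) = max(cos x, 0)) would give the non-trivial coefficients the original question asks about.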
2016-08-30 19:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8168185353279114, "perplexity": 874.7091366192794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983001995.81/warc/CC-MAIN-20160823201001-00202-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.experts-exchange.com/questions/24547717/Change-Folder-Permissions-in-Bulk.html
Solved # Change Folder Permissions in Bulk Posted on 2009-07-06 537 Views Hello. Currently I am in a W2K3 environment. On one of our shared drives we have a list of jobs, each of which has a folder called COST. I need to reset permissions on that COST folder, but can't push it down via inheritance. Is there any way, by a GUI tool or script, that I can make those changes? An example of the structure is: \\root\JobFiles\JobNumber1\Cost Therefore I would like to script something that resets the permissions for these folders: \\root\JobFiles\JobNumber1\Cost \\root\JobFiles\JobNumber2\Cost \\root\JobFiles\JobNumber3\Cost and so forth. 0 Question by:AaronIT LVL 38 Expert Comment ID: 24788900 You could use the cacls.exe command line tool and then script the changes that you need to make. You can get the applicable switches for the command by running cacls /? at the command line, if you're not familiar with it. You'd still have to identify each folder individually in the script, but at least you could cut and paste the folder paths. There's also an enhanced tool for Win2K3 SP2, which I've never used: http://support.microsoft.com/kb/919240 0 LVL 2 Author Comment ID: 24788947 So keeping with my example above... what would my command be? cacls \\root\JobFiles\JobNumber1\Cost /T How do I set it to inherit? Can I also add a group instead of a user? 0 LVL 85 Accepted Solution oBdA earned 250 total points ID: 24789010 Well, this would be a lot easier to answer if you'd say which permissions those Cost folders should have... The example below would add *C*hange permissions to all Cost subfolders, leaving the current permissions intact. Simply enter the following command in a command line.
You can do that safely; it's in test mode and will only echo the cacls commands it would otherwise run:

for /d %a in ("\\root\JobFiles\*.*") do @ECHO cacls.exe "%a\Cost" /t /e /g YOURDOMAIN\SomeGroup:C

To run it for real, you'd need to leave out the @ECHO. 0 LVL 1 Assisted Solution vixtro earned 250 total points ID: 24839782 Agreeing with oBdA - it'd be a lot easier if you could say what you want the permissions on the folders to look like after the script has run. I use xcacls.vbs from VBScript on my fileserver - I use it to go through and change permissions of subfolders en masse. You can download XCACLS.VBS from here: You'll have to play around with it a little as I'm not entirely sure exactly what end result you're after. To see the switches for xcacls.vbs, run this from the command prompt: cscript \\path\to\xcacls.vbs /? NB: Copy and paste this code into a blank Notepad document, and save it with the extension ".vbs" for it to work.

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objShell = CreateObject("WScript.Shell")

rootPath = "\\root\jobfiles"
xcaclsPath = "path\to\xcacls.vbs"

Set rootFolder = objFSO.GetFolder(rootPath)

For Each subfolder In rootFolder.SubFolders
    fldrName = subfolder.Name
    ' The next two lines grant MODIFY access to the specified user
    modCmd = "cscript " & xcaclsPath & " " & rootPath & "\" & fldrName & "\Cost /E /G DOMAIN\username:M"
    objShell.Run modCmd, 1, True
    ' The next two lines turn the INHERIT flag for the folder on
    inCmd = "cscript " & xcaclsPath & " " & rootPath & "\" & fldrName & "\Cost /E /I ENABLE"
    objShell.Run inCmd, 1, True
Next

0
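The same dry-run idea can also be sketched in Python (illustrative only; build_cacls_commands is a hypothetical helper, and the share path mirrors the example in the question):

```python
def build_cacls_commands(job_names, group, root=r"\\root\JobFiles"):
    """Build one cacls command per job's Cost folder, granting Change (:C).

    Print the commands to review them first; run them only once they look right.
    """
    return [
        f'cacls.exe "{root}\\{name}\\Cost" /t /e /g {group}:C'
        for name in job_names
    ]

for cmd in build_cacls_commands(["JobNumber1", "JobNumber2"], "YOURDOMAIN\\SomeGroup"):
    print(cmd)  # dry run: nothing is executed
```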
2017-07-27 01:10:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3839748501777649, "perplexity": 7680.54388571445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426693.21/warc/CC-MAIN-20170727002123-20170727022123-00178.warc.gz"}
https://blender.stackexchange.com/questions/238155/alt-scroll-wheel-in-time-line-and-jump-back-to-start
# Alt Scroll Wheel in time line and jump back to start Is there a way to customize the Alt + Mouse Wheel which works in most editors so that when you reach the end frame you jump back to the start? Is it something I don't see in settings, or do I need a script for that, and if so, how can I do it by script? I'm not looking for other hotkeys to jump to start and end; I wanna use that specific hotkey and move in the timeline normally, but instead of passing by the end frame, jump back to the start (or, instead of being stuck at the start, get back to the end frame)... Like in a walk-cycle animation, this way you can easily see the transition between the end and starting over. • SHIFT-Left arrow takes you to the start and SHIFT-Right arrow to the end of the timeline. Sep 15 at 8:25 • @JohnEason yeah, but for that you need to take one of your hands off the mouse or keyboard. If there was a way to do it just like I said, that would be nice. Sep 15 at 9:29 • Can do most things in blender via a script. Please clarify what you do want, not what you don't. Are we talking about scrubbing the time line with the scroll wheel, and if it goes past the end points it reverts to the other? Sep 15 at 15:24 • @batFINGER the question is very self-explanatory. That link you sent helped a lot; I just need to look up more stuff to make the script for it. Sep 15 at 15:28 Edit the keymap. As commented by @John Eason these are already mapped to SHIFT and left arrow to go to start, and right to end. Can search for keymaps by keypress or name; if unsure, hover over the button that does it. Once found, change to suit. Here I've altered it to ALT + middle mouse click. Save User Preferences to make the change permanent. • I know I can change the hotkey for that, but what I'm saying is to change the behavior of a specific hotkey when something happens. It is a small change, but I guess that way it's easier for something like a walk cycle. So this won't be the answer. Sep 15 at 14:55 • Clarify any details by editing them into your question.
Do you want to match the frame range to the current object's action range rather than the scene's? There are answers relating to that question. Sep 15 at 14:58 • yeah, maybe that would work Sep 15 at 15:05 • blender.stackexchange.com/questions/27889/… Sep 15 at 15:13 ### Why Even Do This There are several options to monitor the transitions between frames, but in my experience the best way is to use the mouse scroll wheel, since you have more control over speed and it's easy to use. It's like classic animation, when animators flip paper back and forth between their fingers. But when you are making a cycle animation, which is used a lot in game animations, there is one thing that is annoying: you can't see the transition from the end frame to starting over as easily. It is a small change, but I think it's worth it. So here is a way to make this work: basically we disable the old behavior and make a new one that does the same thing, but checks for the end and start frame and then does the expected thing. ### Disable Default Hotkeys Edit -> Preferences -> Keymap 1. set the search type to Key-Binding 2. search for "wheel" 3. scroll a bit and find the Frames section 4. and disable both Frame Offsets ### How to Make The Script To make a function and assign a hotkey to it, we need an Operator, which has an execute function where our logic will be; when we assign a hotkey to the operator and the keys are pressed (the event happens), this function will be called. We want to make this an add-on, so we add some info about it in bl_info. ### First Step As you can see, the operator class has some basic properties, like an ID, a label, and more that we can fill in.

bl_info = {
    "name": "Better_Scroll_Time",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy
from bpy.props import *

class ScrollWheelTime(bpy.types.Operator):
    """Sets the time in range"""  # Use this as a tooltip for menu items and buttons.
    bl_idname = "object.scroll"  # Unique identifier for buttons and menu items to reference.
    bl_label = "Better Time Scroll Wheel"  # Display name in the interface.
    bl_options = {'REGISTER', 'UNDO'}  # Enable undo for the operator.

    def execute(self, context):  # execute() is called when running the operator.
        # logic
        return {'FINISHED'}  # Lets Blender know the operator finished successfully.

### Execute

    direction: IntProperty()  # outside of the execute function, as a class member

    def execute(self, context):
        # logic
        scn = context.scene
        current_frame = scn.frame_current
        current_frame += self.direction
        scn.frame_set(current_frame)
        if not scn.frame_start <= current_frame <= scn.frame_end:
            scn.frame_set(scn.frame_start if self.direction >= 1 else scn.frame_end)
        return {'FINISHED'}

• if you look at the Blender keymaps in Preferences, you can find operations with some properties that we can provide too, with bpy.props functions like IntProperty() • we need a reference to the scene we are in to access things like the start and end frame of the scene • we increase or decrease the current frame by the amount of direction, which will be set to +1 or -1 based on the shortcut • if the current frame is not between the start and end frame of the scene, we decide based on the direction where to put the frame cursor • we tell Blender this operation is finished

### Register

addon_keymaps = []  # outside of the function

def register():
    bpy.utils.register_class(ScrollWheelTime)
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon
    if kc:
        km = kc.keymaps.new(name="Window", space_type='EMPTY', region_type='WINDOW')
        km_up = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELUPMOUSE', value='PRESS', alt=True)
        km_down = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELDOWNMOUSE', value='PRESS', alt=True)
        km_up.properties.direction = -1    # setting the default property value for wheel up
        km_down.properties.direction = +1  # setting the default property value for wheel down
        addon_keymaps.extend([(km, km_up), (km, km_down)])  # saved so unregister can remove them

• here we register our operator class so we can use it as an add-on • then we make a keymap using wm.keyconfigs.addon.keymaps.new and set its parameters to name = "Window", space_type = 'EMPTY', region_type = 'WINDOW' so our hotkey works in all editor windows • then we assign our shortcuts to the operator class • we can set a default value for our custom properties that were defined as class members using bpy.props • we save these hotkeys so we can remove them in the unregister function to make things clean; in the unregister function we remove our hotkeys. You can see the completed script here (the code could use some clean-ups):

bl_info = {
    "name": "Better_Scroll_Time",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy
from bpy.props import *

class ScrollWheelTime(bpy.types.Operator):
    """Sets the time in range"""  # Use this as a tooltip for menu items and buttons.
    bl_idname = "object.scroll"  # Unique identifier for buttons and menu items to reference.
    bl_label = "Better Time Scroll Wheel"  # Display name in the interface.
    bl_options = {'REGISTER', 'UNDO'}  # Enable undo for the operator.

    direction: IntProperty()

    def execute(self, context):  # execute() is called when running the operator.
        scn = context.scene
        current_frame = scn.frame_current
        current_frame += self.direction
        scn.frame_set(current_frame)
        if not scn.frame_start <= current_frame <= scn.frame_end:
            scn.frame_set(scn.frame_start if self.direction >= 1 else scn.frame_end)
        return {'FINISHED'}  # Lets Blender know the operator finished successfully.
def register():
    bpy.utils.register_class(ScrollWheelTime)
    wm = bpy.context.window_manager
    kc = wm.keyconfigs.addon
    if kc:
        km = kc.keymaps.new(name="Window", space_type='EMPTY', region_type='WINDOW')
        km_up = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELUPMOUSE', value='PRESS', alt=True)
        km_down = km.keymap_items.new(ScrollWheelTime.bl_idname, type='WHEELDOWNMOUSE', value='PRESS', alt=True)
        km_up.properties.direction = -1    # setting the default property value for wheel up
        km_down.properties.direction = +1  # setting the default property value for wheel down
        addon_keymaps.extend([(km, km_up), (km, km_down)])
    print("Installed Better Scroll Time !")

def unregister():
    bpy.utils.unregister_class(ScrollWheelTime)
    # Remove the hotkeys
    for km, kmi in addon_keymaps:
        km.keymap_items.remove(kmi)
    addon_keymaps.clear()

• context is already passed to the operator methods, so you can replace bpy.context.* by context.* for all calls/properties, and when declaring a variable, use it; see: pasteall.org/oUiT/raw Also consider that this is no regular forum and link-only answers are discouraged; if the link goes down so does the answer, hence the downvote I guess. Sep 16 at 8:27
2021-12-02 07:01:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2360772341489792, "perplexity": 3468.472806140118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00226.warc.gz"}
http://mathematica.stackexchange.com/questions/8288/setting-a-variable-equal-to-the-output-of-findroot/8292
# setting a variable equal to the output of FindRoot

So I set a function f[x]

f[x_] := x*E^(-x) - 0.16064

Then I set a variable 'actualroot' to the function FindRoot, starting at 3

actualroot = FindRoot[ f[x], {x, 3} ]

and get the output {x -> 2.88976} Later I want to compare this output with a different estimate (-2.88673) of the root, and calculate error, so I have

Abs[ (actualroot - estimateroot)/actualroot ]

and I get this output:

Abs[ (-2.88673 + (x -> 2.88976))/(x -> 2.88976) ]

How do I get Mathematica to evaluate this expression? I also tried using N[] to give me a decimal evaluation but it didn't work. - You can use actualroot = FindRoot[f[x], {x, 3}][[1, 2]] –  b.gatessucks Jul 13 '12 at 20:27 The usual way to get the values of the results of FindRoot, Solve, etc., which are lists of Rule, is the following:

f[x_] := x E^(-x) - 0.16064
actualroot = x /. FindRoot[f[x], {x, 3}]
estimateroot = -2.88673;
Abs[(actualroot - estimateroot)/actualroot]

Output: 2.88976 1.99895 - Thanks, new to Mathematica, just diving in –  DWC Jul 13 '12 at 21:05 @DWC well, then it's probably good to know that /. is shorthand for ReplaceAll. Apart from its doc page reading this tutorial on transformation rules will prove fruitful. –  Sjoerd C. de Vries Jul 13 '12 at 21:51
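As a cross-check of the FindRoot value (a quick pure-Python bisection, not part of the thread): f(x) = x e^(-x) is decreasing for x > 1, so the root in [2, 4] is unique and bisection converges to it.

```python
import math

def f(x):
    return x * math.exp(-x) - 0.16064

# bisection on [2, 4], where f changes sign
lo, hi = 2.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(root)  # ≈ 2.8898, matching FindRoot
```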
2014-03-08 04:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3065386712551117, "perplexity": 4436.3842938887865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653077/warc/CC-MAIN-20140305060733-00037-ip-10-183-142-35.ec2.internal.warc.gz"}
https://elteoremadecuales.com/abels-theorem-2/?lang=pt
# Abel's theorem

This article is about Abel's theorem on power series. For Abel's theorem on algebraic curves, see Abel–Jacobi map. For Abel's theorem on the insolubility of the quintic equation, see Abel–Ruffini theorem. For Abel's theorem on linear differential equations, see Abel's identity. For Abel's theorem on irreducible polynomials, see Abel's irreducibility theorem. For Abel's formula for summation of a series, using an integral, see Abel's summation formula.

In mathematics, Abel's theorem for power series relates a limit of a power series to the sum of its coefficients. It is named after Norwegian mathematician Niels Henrik Abel.

Theorem

Let $G(x)=\sum_{k=0}^{\infty}a_{k}x^{k}$ be a power series with real coefficients $a_{k}$ with radius of convergence $1$. Suppose that the series $\sum_{k=0}^{\infty}a_{k}$ converges. Then $G(x)$ is continuous from the left at $x=1$, that is,

$\lim_{x\to 1^{-}}G(x)=\sum_{k=0}^{\infty}a_{k}.$

The same theorem holds for complex power series $G(z)=\sum_{k=0}^{\infty}a_{k}z^{k}$, provided that $z\to 1$ entirely within a single Stolz sector, that is, a region of the open unit disk where $|1-z|\leq M(1-|z|)$ for some fixed finite $M>1$.
Without this restriction, the limit may fail to exist: for example, the power series $\sum_{n>0}\frac{z^{3^{n}}-z^{2\cdot 3^{n}}}{n}$ converges to $0$ at $z=1$, but is unbounded near any point of the form $e^{\pi i/3^{n}}$, so the value at $z=1$ is not the limit as $z$ tends to $1$ in the whole open disk.

Note that $G(z)$ is continuous on the real closed interval $[0,t]$ for $t<1$, by virtue of the uniform convergence of the series on compact subsets of the disk of convergence. Abel's theorem allows us to say more, namely that $G(z)$ is continuous on $[0,1]$.

Remarks

As an immediate consequence of this theorem, if $z$ is any nonzero complex number for which the series $\sum_{k=0}^{\infty}a_{k}z^{k}$ converges, then it follows that $\lim_{t\to 1^{-}}G(tz)=\sum_{k=0}^{\infty}a_{k}z^{k}$, in which the limit is taken from below.
The theorem can also be generalized to account for sums which diverge to infinity.[citation needed] If $\sum_{k=0}^{\infty}a_{k}=\infty$ then $\lim_{z\to 1^{-}}G(z)\to\infty$.

However, if the series is only known to be divergent, but for reasons other than diverging to infinity, then the claim of the theorem may fail: take, for example, the power series for $\frac{1}{1+z}$. At $z=1$ the series is equal to $1-1+1-1+\cdots$, but $\frac{1}{1+1}=\frac{1}{2}$.

We also remark the theorem holds for radii of convergence other than $R=1$: let $G(x)=\sum_{k=0}^{\infty}a_{k}x^{k}$ be a power series with radius of convergence $R$, and suppose the series converges at $x=R$. Then $G(x)$ is continuous from the left at $x=R$, that is, $\lim_{x\to R^{-}}G(x)=G(R)$.

Applications

The utility of Abel's theorem is that it allows us to find the limit of a power series as its argument (that is, $z$) approaches $1$ from below, even in cases where the radius of convergence, $R$, of the power series is equal to $1$ and we cannot be sure whether the limit should be finite or not. See for example, the binomial series. Abel's theorem allows us to evaluate many series in closed form. For example, when $a_{k}=\frac{(-1)^{k}}{k+1}$, we obtain $G_{a}(z)=\frac{\ln(1+z)}{z}$ for $0<z<1$, by integrating the uniformly convergent geometric power series term by term on $[-z,0]$; thus the series $\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k+1}$ converges to $\ln(2)$ by Abel's theorem.
Similarly, $\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2k+1}$ converges to $\arctan(1)=\frac{\pi}{4}$.

$G_{a}(z)$ is called the generating function of the sequence $a$. Abel's theorem is frequently useful in dealing with generating functions of real-valued and non-negative sequences, such as probability-generating functions. In particular, it is useful in the theory of Galton–Watson processes.

Outline of proof

After subtracting a constant from $a_{0}$, we may assume that $\sum_{k=0}^{\infty}a_{k}=0$. Let $s_{n}=\sum_{k=0}^{n}a_{k}$. Then substituting $a_{k}=s_{k}-s_{k-1}$ and performing a simple manipulation of the series (summation by parts) results in

$G_{a}(z)=(1-z)\sum_{k=0}^{\infty}s_{k}z^{k}.$

Given $\varepsilon>0$, pick $m$ large enough so that $|s_{k}|<\varepsilon$ for all $k\geq m$. Since $(1-z)\sum_{k=0}^{\infty}z^{k}=1$ for $|z|<1$, the tail $(1-z)\sum_{k\geq m}s_{k}z^{k}$ is bounded by $\varepsilon$ (up to the fixed factor $M$ of the Stolz sector), while the finite sum $(1-z)\sum_{k<m}s_{k}z^{k}$ tends to $0$ as $z\to 1$; hence $G_{a}(z)\to 0$.
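A quick numerical illustration of the $\ln(2)$ example (a Python sketch of mine, not part of the article): evaluating the power series just inside the unit disk approaches the Abel sum.

```python
import math

def G(x, terms=20000):
    # partial sum of sum_{k>=0} (-1)^k x^k / (k+1), which equals ln(1+x)/x for |x| < 1
    return sum((-1) ** k * x ** k / (k + 1) for k in range(terms))

inside = G(0.999)       # well inside the radius of convergence
abel_sum = math.log(2)  # the value Abel's theorem predicts as x -> 1^-
print(inside, abel_sum)  # ≈ 0.6933 vs ≈ 0.6931
```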
2023-04-02 12:53:04
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928529679775238, "perplexity": 8178.9014807282165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00006.warc.gz"}
https://www.clutchprep.com/organic-chemistry/practice-problems/15672/write-a-structural-formula-for-each-of-the-following-compounds-a-6-isopropyl-2-3
Problem: Write a structural formula for each of the following compounds:

(a) 6-Isopropyl-2,3-dimethylnonane
2021-01-23 05:07:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958631694316864, "perplexity": 8777.451071631454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00622.warc.gz"}
http://mathcentral.uregina.ca/QQ/database/QQ.09.13/h/prince1.html
Math Central Quandaries & Queries Question from Prince, a student: What is the exponential form of 1/square root of 6v? Hi, There are two uses of exponents you need to use here. The first is fractional exponents. For example $x^{1/2} = \sqrt{x}$ and $x^{1/3} = \sqrt[3]{x}$ or in general if $p$ is a positive integer then $x^{1/p} = \sqrt[p]{x}.$ The other use of exponents is negative exponents, $x^{-y} = \frac{1}{x^y}.$ Can you complete your problem now? Penny Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
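Putting the two rules together (a worked completion of the hint, mine rather than part of Penny's reply):

```latex
\frac{1}{\sqrt{6v}} \;=\; \frac{1}{(6v)^{1/2}} \;=\; (6v)^{-1/2}
```

That is the exponential form: a fractional exponent for the square root, made negative by the reciprocal.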
2017-11-19 14:18:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4154507517814636, "perplexity": 568.2080352239942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805649.7/warc/CC-MAIN-20171119134146-20171119154146-00699.warc.gz"}
https://www.thejournal.club/c/paper/103883/
#### Computing Equilibria with Partial Commitment ##### Vincent Conitzer In security games, the solution concept commonly used is that of a Stackelberg equilibrium where the defender gets to commit to a mixed strategy. The motivation for this is that the attacker can repeatedly observe the defender's actions and learn her distribution over actions, before acting himself. If the actions were not observable, Nash (or perhaps correlated) equilibrium would arguably be a more natural solution concept. But what if some, but not all, aspects of the defender's actions are observable? In this paper, we introduce solution concepts corresponding to this case, both with and without correlation. We study their basic properties, whether these solutions can be efficiently computed, and the impact of additional observability on the utility obtained.
2021-12-02 10:30:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8268622159957886, "perplexity": 857.2927488424104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00361.warc.gz"}
https://socratic.org/questions/590be7d311ef6b2c97f9260a#419237
# When a bottle of wine is opened, its hydrogen ion concentration is $[H_3O^+] = 4.1 \times 10^{-4}\ mol\ L^{-1}$; what is its pH? How does the pH evolve after the wine is left to stand? May 7, 2017 $pH = -\log_{10}[H_3O^+]$ #### Explanation: And thus, when freshly opened, $pH = -\log_{10}(4.1 \times 10^{-4}) = 3.39$. (And in fact most wines have a $pH$ around this level.) See here for more detail on the definition of $pH$. And later, $[H_3O^+] = 0.0023\ mol\ L^{-1}$ and $pH = 2.64$. The wine must not have been very good, because most wine is consumed within 12 hours after opening. What likely occurred is that the ethyl alcohol air-oxidized up to acetic acid, which is a carboxylic acid, and thus likely to have a lower $pH$ in aqueous solution. (Note that this oxidation is why we seal wine bottles with an air-tight cork/cap). For the oxidation of ethyl alcohol to acetic acid we could write the equation: $\text{H}_3\text{CCH}_2\text{OH(aq)} + \text{O}_2\text{(g)} \rightarrow \text{H}_3\text{CCO}_2\text{H(aq)} + \text{H}_2\text{O(l)}$
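Both pH values are easy to verify (a quick Python check, not part of the answer):

```python
import math

def pH(h3o):
    # pH = -log10 of the hydronium ion concentration in mol/L
    return -math.log10(h3o)

print(round(pH(4.1e-4), 2))  # 3.39, the freshly opened wine
print(round(pH(0.0023), 2))  # 2.64, after standing
```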
2021-09-22 13:51:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8072762489318848, "perplexity": 3471.6090485168083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00367.warc.gz"}
https://www.genetics.org/content/178/4/2169?ijkey=0c63ab5c8052670408ef7a9cf93547d14ab17bd0&keytype2=tf_ipsecsha
The Effects of Recombination Rate on the Distribution and Abundance of Transposable Elements | Genetics
2021-06-25 01:30:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2540581226348877, "perplexity": 2724.989208879195}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00173.warc.gz"}
https://www.ytiancompbio.com/publications/dengue-review/
# Human T cell response to dengue virus infection ## Authors Yuan Tian, Alba Grifoni, Alessandro Sette, Daniela Weiskopf ## Journal Trends in Immunology 37 (8), 557-568 ## Abstract DENV is a major public health problem worldwide, thus underlining the overall significance of the proposed Program. The four dengue virus (DENV) serotypes (1-4) cause the most common mosquito-borne viral disease of humans, with 3 billion people at risk for infection and up to 100 million cases each year, most often affecting children. The protective role of T cells during viral infection is well established. Generally, CD8 T cells can control viral infection through several mechanisms, including direct cytotoxicity and production of pro-inflammatory cytokines such as IFN-γ and TNF-α. Similarly, CD4 T cells are thought to control viral infection through multiple mechanisms, including enhancement of B and CD8 T cell responses, production of inflammatory and anti-viral cytokines, cytotoxicity, and promotion of memory responses. To probe the phenotype of virus-specific T cells, epitopes derived from viral sequences need to be known. Here we discuss the identification of CD4 and CD8 T cell epitopes derived from DENV and how these epitopes have been used by researchers to interrogate the phenotype and function of DENV-specific T cell populations.
2023-03-24 00:52:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.24376802146434784, "perplexity": 9677.930693957733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00709.warc.gz"}
http://mathoverflow.net/questions/121915/linear-numeration-systems
# Linear numeration systems Let $F_{i}$ be the Fibonacci or a multinacci sequence. The number of representations of $N$ in the form $N=\sum_{i=0}^{k}s_{i}F_{i}$, $s_{i}\in\{0,1\}$, is known. My question is what is known about sequence-based numeration systems given by other linear recurrences. To make the question precise, I am interested in the recurrence $G_{i+4}=G_{i+3}+G_{i+2}+G_{i+1}-G_{i}$ with $G_{0}=1$, $G_{1}=2$, $G_{2}=4$, $G_{3}=8$. What is known about $\#_{G}N:=\#\{(s_{0},\dots,s_{k})\in\{0,1\}^{k+1} \mid N=\sum_{i=0}^{k}s_{i}G_{i}\}$? - ## 1 Answer Some results on the quantity in question can be found in J. M. Dumont, N. Sidorov and A. Thomas, Number of representations related to a linear recurrent basis, Acta Arithmetica 88 (1999), 371-394. We are mainly interested in the summatory function but there are also some upper bounds for the quantity itself. Our main assumption is that the corresponding root (of $x^4=x^3+x^2+x-1$ in your case) is a Perron number (in your example it's even Salem, so our results apply). - Many thanks for the reference. Best –  Jörg Neunhäuserer Feb 16 '13 at 15:39 No problem. Hope it'll help. –  Nikita Sidorov Feb 16 '13 at 22:41
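For small $N$, the quantity $\#_{G}N$ is easy to tabulate directly (a brute-force Python sketch of mine, not from the thread):

```python
def terms(n_max):
    # G_0..G_3 = 1, 2, 4, 8;  G_{i+4} = G_{i+3} + G_{i+2} + G_{i+1} - G_i
    g = [1, 2, 4, 8]
    while (nxt := g[-1] + g[-2] + g[-3] - g[-4]) <= n_max:
        g.append(nxt)
    return [t for t in g if t <= n_max]

def count_reps(n):
    # dynamic programming over the 0/1 digits s_i: each G_i used at most once
    ways = {0: 1}
    for t in terms(n):
        for s, c in list(ways.items()):  # snapshot, so t is not reused
            if s + t <= n:
                ways[s + t] = ways.get(s + t, 0) + c
    return ways.get(n, 0)

print([count_reps(n) for n in range(1, 16)])
# → [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2]
```

The sequence starts 1, 2, 4, 8, 13, 23, 40, 68, ..., and the first $N$ with two representations is 13 (as {13} or {1, 4, 8}).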
2014-11-27 23:28:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8126301765441895, "perplexity": 397.64678656531294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009292.37/warc/CC-MAIN-20141125155649-00038-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.mathway.com/glossary/definition/35/axis-of-symmetry
axis of symmetry A line that passes through a figure in such a way that the part of the figure on one side of the line is a mirror reflection of the part on the other side of the line.
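For example (my illustration, not part of the glossary entry): the parabola $y = ax^2 + bx + c$ has the vertical axis of symmetry $x = -b/(2a)$, and points equidistant from it on either side have equal $y$-values.

```python
def f(x, a=2.0, b=-8.0, c=1.0):
    # a sample parabola y = 2x^2 - 8x + 1
    return a * x * x + b * x + c

axis = -(-8.0) / (2 * 2.0)  # x = -b/(2a) = 2.0
for d in (0.5, 1.0, 3.0):
    assert f(axis - d) == f(axis + d)  # mirror-image values across the axis
print(axis)  # 2.0
```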
2018-02-22 09:12:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2769356667995453, "perplexity": 1253.2776964738455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814079.59/warc/CC-MAIN-20180222081525-20180222101525-00364.warc.gz"}
https://physics.stackexchange.com/tags/conductors/hot?filter=month
# Tag Info

5 They do indeed repel each other. But they are repelled from the point they are coming from even more strongly. Imagine having two charged metal balls where one has half the charge of the other. When you connect them with a wire, will charges flow? Yes. Sure, each individual electron feels a strong repulsion from both of the balls, since there already is an ...

4 Electrons do repel each other, but they also like to spread out. Quantum mechanics tells us that it costs a lot of energy to localize an electron in a small volume. These two tendencies compete. The quantum mechanical Hubbard model is based on these two effects. It has two parameters: on-site repulsion and transfer energy (transfer Hamiltonian matrix element)....

3 The drag is due to repulsion caused by eddy currents induced by the moving magnetic field in the aluminum metal. The repulsive force opposes the motion of the metal ball, in accordance with Lenz's law. The same thing will happen if you replace the aluminum with copper metal.

3 If I take a cross section close to the beginning of the conductor, charges which start moving at one end don't experience as many collisions when they get to that cross section close to the beginning as they will when they come to the other end of the conductor. It seems that resistance should increase from one end towards the other end of a conductor. You seem to ...

1 Inside the cavity we have placed a $+q$ charge. Due to the electric field of the $+q$ charge in the cavity (radiating outwards), the free electrons drift towards the inside surface of the cavity (opposite to the radially outward direction of the positive charge in the cavity). As a result, the inside surface of the cavity gets a negative charge and the outer surface of ...

1 Yes, there is a difference. If you made the wire as you mentioned, into a spiral, like this: then there is quite a big difference between this and a straight wire.
The difference between a straight wire and a coil or spiral wire is that the spiral wire resists changes in current flow. This is called an inductor or solenoid. It resists changes in the ...
2021-05-07 21:40:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6760057210922241, "perplexity": 302.0438957446246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00258.warc.gz"}
https://www.tutorke.com/lesson/342-the-acceleration-of-a-body-moving-along-a-straight-line-is-4-t-ms-2-and-its-velocity-is.aspx
# Differentiation and Its Applications Questions and Answers

The acceleration of a body moving along a straight line is (4 - t) m/s^2 and its velocity is v m/s after t seconds.

a) i) If the initial velocity of the body is 3 m/s, express the velocity v in terms of t.
ii) Find the velocity of the body after 2 seconds.

b) Calculate:
i) The time taken to attain maximum velocity.
ii) The distance covered by the body to attain the maximum velocity.
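Worked independently (my own solution, not the site's video answer): integrating a = 4 - t with v(0) = 3 gives v(t) = 3 + 4t - t^2/2, and integrating again with s(0) = 0 gives s(t) = 3t + 2t^2 - t^3/6. A quick Python check of each part:

```python
def v(t):
    # v(t) = 3 + 4t - t^2/2, from integrating a = 4 - t with v(0) = 3
    return 3 + 4 * t - t * t / 2

def s(t):
    # s(t) = 3t + 2t^2 - t^3/6, from integrating v with s(0) = 0
    return 3 * t + 2 * t * t - t ** 3 / 6

print(v(2))      # a(ii): velocity after 2 s -> 9.0 m/s
t_max = 4        # b(i): a = 4 - t = 0 at t = 4 s
print(v(t_max))  # maximum velocity -> 11.0 m/s
print(s(t_max))  # b(ii): distance -> 33.33... m (= 100/3)
```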
2023-01-31 05:59:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4702702462673187, "perplexity": 1969.1600026876624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00189.warc.gz"}
https://data.lesslikely.com/concurve/articles/examples.html
## Introduction

Here I show how to produce P-value, S-value, likelihood, and deviance functions with the concurve package using fake data and data from real studies. Simply put, these functions are rich sources of information for scientific inference, and the image below, taken from Xie & Singh, 2013 [1], displays why. For a more extensive discussion of these concepts, see the following references [1-13].

# Simple Models

To get started, we could generate some normal data and combine two vectors in a dataframe

library(concurve)
set.seed(1031)
GroupA <- rnorm(500)
GroupB <- rnorm(500)
RandomData <- data.frame(GroupA, GroupB)

and look at the differences between the two vectors. We'll plug these vectors and the dataframe they're in inside of the curve_mean() function. Here, the default method involves calculating CIs using the Wald method.

intervalsdf <- curve_mean(GroupA, GroupB,
  data = RandomData, method = "default"
)

Each of the functions within concurve will generally produce a list with three items, and the first will usually contain the function of interest.

tibble::tibble(intervalsdf[[1]])
#> # A tibble: 10,000 x 1
#>    intervalsdf[[1]]$lower.limit $upper.limit $intrvl.width $intrvl.level  $cdf $pvalue
#>                           <dbl>        <dbl>         <dbl>         <dbl> <dbl>   <dbl>
#>  1 -0.113 -0.113 0 0 0.5 1
#>  2 -0.113 -0.113 0.0000154 0.0001 0.500 1.00
#>  3 -0.113 -0.113 0.0000309 0.0002 0.500 1.00
#>  4 -0.113 -0.113 0.0000463 0.000300 0.500 1.00
#>  5 -0.113 -0.113 0.0000617 0.0004 0.500 1.00
#>  6 -0.113 -0.113 0.0000772 0.0005 0.500 1.00
#>  7 -0.113 -0.113 0.0000926 0.000600 0.500 0.999
#>  8 -0.113 -0.113 0.000108 0.0007 0.500 0.999
#>  9 -0.113 -0.112 0.000123 0.0008 0.500 0.999
#> 10 -0.113 -0.112 0.000139 0.0009 0.500 0.999
#> # … with 9,990 more rows, and 1 more variable: $svalue <dbl>

We can view the function using the ggcurve() function. The two basic arguments that must be provided are the data argument and the "type" argument. To plot a consonance function, we would write "c".
(function1 <- ggcurve(data = intervalsdf[[1]], type = "c", nullvalue = TRUE))

We can see that the consonance "curve" is every interval estimate plotted, and provides the P-values and CIs, along with the median unbiased estimate. It can be defined as such,

$C V_{n}(\theta)=1-2\left|H_{n}(\theta)-0.5\right|=2 \min \left\{H_{n}(\theta), 1-H_{n}(\theta)\right\}$

Its information counterpart, the surprisal function, can be constructed by taking the $-\log_{2}$ of the P-value [3,14,15]. To view the surprisal function, we simply change the type to "s".

(function1 <- ggcurve(data = intervalsdf[[1]], type = "s"))

We can also view the consonance distribution by changing the type to "cdf", which is a cumulative probability distribution. The point at which the curve reaches 50% is known as the "median unbiased estimate". It is the same estimate that is typically at the peak of the P-value curve from above.

(function1s <- ggcurve(data = intervalsdf[[2]], type = "cdf", nullvalue = TRUE))

We can also get relevant statistics that show the range of values by using the curve_table() function. There are several formats that can be exported such as .docx, .ppt, and TeX.

(x <- curve_table(data = intervalsdf[[1]], format = "image"))

| Lower Limit | Upper Limit | Interval Width | Interval Level (%) | CDF | P-value | S-value (bits) |
|---|---|---|---|---|---|---|
| -0.132 | -0.093 | 0.039 | 25.0 | 0.625 | 0.750 | 0.415 |
| -0.154 | -0.071 | 0.083 | 50.0 | 0.750 | 0.500 | 1.000 |
| -0.183 | -0.042 | 0.142 | 75.0 | 0.875 | 0.250 | 2.000 |
| -0.192 | -0.034 | 0.158 | 80.0 | 0.900 | 0.200 | 2.322 |
| -0.201 | -0.024 | 0.177 | 85.0 | 0.925 | 0.150 | 2.737 |
| -0.214 | -0.011 | 0.203 | 90.0 | 0.950 | 0.100 | 3.322 |
| -0.233 | 0.008 | 0.242 | 95.0 | 0.975 | 0.050 | 4.322 |
| -0.251 | 0.026 | 0.276 | 97.5 | 0.988 | 0.025 | 5.322 |
| -0.271 | 0.046 | 0.318 | 99.0 | 0.995 | 0.010 | 6.644 |

# Comparing Functions

If we wanted to compare two studies to see the amount of "consonance", we could use the curve_compare() function to get a numerical output.
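As an aside before comparing: the S-value column in curve_table's output is just the base-2 log transform of the P-value column. A quick check (Python, my sketch; concurve itself is R):

```python
import math

def s_value(p):
    # surprisal in bits: S = -log2(p)
    return -math.log2(p)

# reproduces the table's P-value -> S-value pairs
print(round(s_value(0.05), 3))  # 4.322, the 95% row
print(round(s_value(0.25), 3))  # 2.0, the 75% row
print(round(s_value(0.01), 3))  # 6.644, the 99% row
```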
First, we generate some more fake data GroupA2 <- rnorm(500) GroupB2 <- rnorm(500) RandomData2 <- data.frame(GroupA2, GroupB2) model <- lm(GroupA2 ~ GroupB2, data = RandomData2) randomframe <- curve_gen(model, "GroupB2") Once again, we’ll plot this data with ggcurve(). We can also indicate whether we want certain interval estimates to be plotted in the function with the “levels” argument. If we wanted to plot the 50%, 75%, and 95% intervals, we’d provide the argument this way: (function2 <- ggcurve(type = "c", randomframe[[1]], levels = c(0.50, 0.75, 0.95), nullvalue = TRUE)) Now that we have two datasets and two functions, we can compare them using the curve_compare() function. (curve_compare( data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "c", plot = TRUE, measure = "default", nullvalue = TRUE )) #> [1] "AUC = Area Under the Curve" #> [[1]] #> #> #> AUC 1 AUC 2 Shared AUC AUC Overlap (%) Overlap:Non-Overlap AUC Ratio #> ------ ------ ----------- ---------------- ------------------------------ #> 0.098 0.073 0.024 16.309 0.195 #> #> [[2]] This function will provide us with the area that is shared between the curve, along with a ratio of overlap to non-overlap. We can also do this for the surprisal function simply by changing type to “s”. (curve_compare( data1 = intervalsdf[[1]], data2 = randomframe[[1]], type = "s", plot = TRUE, measure = "default", nullvalue = FALSE )) #> [1] "AUC = Area Under the Curve" #> [[1]] #> #> #> AUC 1 AUC 2 Shared AUC AUC Overlap (%) Overlap:Non-Overlap AUC Ratio #> ------ ------ ----------- ---------------- ------------------------------ #> 3.947 1.531 1.531 38.801 0.634 #> #> [[2]] It’s clear that the outputs have changed and indicate far more overlap than before. # Survival Modeling Here, we’ll look at how to create consonance functions from the coefficients of predictors of interest in a Cox regression model. We’ll use the carData package for this. 
Fox & Weisberg, 2018 describe the dataset elegantly in their paper,

The Rossi data set in the carData package contains data from an experimental study of recidivism of 432 male prisoners, who were observed for a year after being released from prison (Rossi et al., 1980). The following variables are included in the data; the variable names are those used by Allison (1995), from whom this example and variable descriptions are adapted:

- week: week of first arrest after release, or censoring time.
- arrest: the event indicator, equal to 1 for those arrested during the period of the study and 0 for those who were not arrested.
- fin: a factor, with levels "yes" if the individual received financial aid after release from prison, and "no" if he did not; financial aid was a randomly assigned factor manipulated by the researchers.
- age: in years at the time of release.
- race: a factor with levels "black" and "other".
- wexp: a factor with levels "yes" if the individual had full-time work experience prior to incarceration and "no" if he did not.
- mar: a factor with levels "married" if the individual was married at the time of release and "not married" if he was not.
- paro: a factor coded "yes" if the individual was released on parole and "no" if he was not.
- prio: number of prior convictions.
- educ: education, a categorical variable coded numerically, with codes 2 (grade 6 or less), 3 (grades 6 through 9), 4 (grades 10 and 11), 5 (grade 12), or 6 (some post-secondary).
- emp1–emp52: factors coded "yes" if the individual was employed in the corresponding week of the study and "no" otherwise.
We read the data file into a data frame, and print the first few cases (omitting the variables emp1 – emp52, which are in columns 11–62 of the data frame): library(carData) Rossi[1:5, 1:10] #> week arrest fin age race wexp mar paro prio educ #> 1 20 1 no 27 black no not married yes 3 3 #> 2 17 1 no 18 black no not married yes 8 4 #> 3 25 1 no 19 other yes not married yes 13 3 #> 4 52 0 yes 23 black yes married yes 1 5 #> 5 52 0 no 19 other yes not married yes 3 3 Thus, for example, the first individual was arrested in week 20 of the study, while the fourth individual was never rearrested, and hence has a censoring time of 52. Following Allison, a Cox regression of time to rearrest on the time-constant covariates is specified as follows: library(survival) mod.allison <- coxph(Surv(week, arrest) ~ fin + age + race + wexp + mar + paro + prio, data = Rossi ) mod.allison #> Call: #> coxph(formula = Surv(week, arrest) ~ fin + age + race + wexp + #> mar + paro + prio, data = Rossi) #> #> coef exp(coef) se(coef) z p #> finyes -0.37942 0.68426 0.19138 -1.983 0.04742 #> age -0.05744 0.94418 0.02200 -2.611 0.00903 #> raceother -0.31390 0.73059 0.30799 -1.019 0.30812 #> wexpyes -0.14980 0.86088 0.21222 -0.706 0.48029 #> marnot married 0.43370 1.54296 0.38187 1.136 0.25606 #> paroyes -0.08487 0.91863 0.19576 -0.434 0.66461 #> prio 0.09150 1.09581 0.02865 3.194 0.00140 #> #> Likelihood ratio test=33.27 on 7 df, p=2.362e-05 #> n= 432, number of events= 114 Now that we have our Cox model object, we can use the curve_surv() function to create the function. If we wanted to create a function for the coefficient of prior convictions, then we’d do so like this: z <- curve_surv(mod.allison, "prio") Then we could plot our consonance curve and density and also produce a table of relevant statistics. Because we’re working with ratios, we’ll set the measure argument in ggcurve() to “ratio”. 
ggcurve(z[[1]], measure = "ratio", nullvalue = TRUE) ggcurve(z[[2]], type = "cd", measure = "ratio", nullvalue = TRUE) curve_table(z[[1]], format = "image") Lower Limit Upper Limit Interval Width Interval Level (%) CDF P-value S-value (bits) 1.086 1.106 0.020 25.0 0.625 0.750 0.415 1.075 1.117 0.042 50.0 0.750 0.500 1.000 1.060 1.133 0.072 75.0 0.875 0.250 2.000 1.056 1.137 0.080 80.0 0.900 0.200 2.322 1.052 1.142 0.090 85.0 0.925 0.150 2.737 1.045 1.149 0.103 90.0 0.950 0.100 3.322 1.036 1.159 0.123 95.0 0.975 0.050 4.322 1.028 1.168 0.141 97.5 0.988 0.025 5.322 1.018 1.180 0.162 99.0 0.995 0.010 6.644 We could also construct a function for another predictor such as age x <- curve_surv(mod.allison, "age") ggcurve(x[[1]], measure = "ratio") ggcurve(x[[2]], type = "cd", measure = "ratio") curve_table(x[[1]], format = "image") Lower Limit Upper Limit Interval Width Interval Level (%) CDF P-value S-value (bits) 0.938 0.951 0.013 25.0 0.625 0.750 0.415 0.930 0.958 0.028 50.0 0.750 0.500 1.000 0.921 0.968 0.048 75.0 0.875 0.250 2.000 0.918 0.971 0.053 80.0 0.900 0.200 2.322 0.915 0.975 0.060 85.0 0.925 0.150 2.737 0.911 0.979 0.068 90.0 0.950 0.100 3.322 0.904 0.986 0.081 95.0 0.975 0.050 4.322 0.899 0.992 0.093 97.5 0.988 0.025 5.322 0.892 0.999 0.107 99.0 0.995 0.010 6.644 That’s a very quick look at creating functions from Cox regression models. # Meta-Analysis Here, we’ll use an example dataset taken from the metafor website, which also comes preloaded with the metafor package. library(metafor) #> Loading 'metafor' package (version 2.1-0). For an overview #> and introduction to the package please type: help(metafor). dat.hine1989 #> study source n1i n2i ai ci #> 1 1 Chopra et al. 39 43 2 1 #> 2 2 Mogensen 44 44 4 4 #> 3 3 Pitt et al. 107 110 6 4 #> 4 4 Darby et al. 103 100 7 5 #> 5 5 Bennett et al. 110 106 7 3 #> 6 6 O'Brien et al. 
154 146 11 4 I will quote Wolfgang here, since he explains it best, "As described under help(dat.hine1989), variables n1i and n2i are the number of patients in the lidocaine and control group, respectively, and ai and ci are the corresponding number of deaths in the two groups. Since these are 2×2 table data, a variety of different outcome measures could be used for the meta-analysis, including the risk difference, the risk ratio (relative risk), and the odds ratio (see Table III). Normand (1999) uses risk differences for the meta-analysis, so we will proceed accordingly. We can calculate the risk differences and corresponding sampling variances with: dat <- escalc(measure = "RD", n1i = n1i, n2i = n2i, ai = ai, ci = ci, data = dat.hine1989) dat #> study source n1i n2i ai ci yi vi #> 1 1 Chopra et al. 39 43 2 1 0.0280 0.0018 #> 2 2 Mogensen 44 44 4 4 0.0000 0.0038 #> 3 3 Pitt et al. 107 110 6 4 0.0197 0.0008 #> 4 4 Darby et al. 103 100 7 5 0.0180 0.0011 #> 5 5 Bennett et al. 110 106 7 3 0.0353 0.0008 #> 6 6 O'Brien et al. 154 146 11 4 0.0440 0.0006 "Note that the yi values are the risk differences in terms of proportions. Since Normand (1999) provides the results in terms of percentages, we can make the results directly comparable by multiplying the risk differences by 100 (and the sampling variances by $$100^{2}$$): dat$yi <- dat$yi * 100 dat$vi <- dat$vi * 100^2 We can fit a fixed-effects model with the following fe <- rma(yi, vi, data = dat, method = "FE") Now that we have our metafor object, we can compute the consonance function using the curve_meta() function. fecurve <- curve_meta(fe) Now we can graph our function. 
ggcurve(fecurve[[1]], nullvalue = TRUE) We used a fixed-effects model here, but if we wanted to use a random-effects model, we could do so with the following, which will use a restricted maximum likelihood estimator for the random-effects model re <- rma(yi, vi, data = dat, method = "REML") And then we could use curve_meta() to get the relevant list recurve <- curve_meta(re) Now we can plot our object. ggcurve(recurve[[1]], nullvalue = TRUE) We could also compare our two models to see how much consonance/overlap there is curve_compare(fecurve[[1]], recurve[[1]], plot = TRUE) #> [1] "AUC = Area Under the Curve" #> [[1]] #> #> #> AUC 1 AUC 2 Shared AUC AUC Overlap (%) Overlap:Non-Overlap AUC Ratio #> ------ ------ ----------- ---------------- ------------------------------ #> 2.085 2.085 2.085 100 Inf #> #> [[2]] The results are practically the same and we cannot actually see any difference, and the AUC % overlap also indicates this. # Constructing Functions From Single Intervals We can also take a set of confidence limits and use them to construct a consonance, surprisal, likelihood or deviance function using the curve_rev() function. This method is computed from the approximate normal distribution. Here, we’ll use two epidemiological studies16,17 that studied the impact of SSRI exposure in pregnant mothers, and the rate of autism in children. Both of these studies suggested a null effect of SSRI exposure on autism rates in children. curve1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "c", measure = "ratio", steps = 10000) (ggcurve(data = curve1[[1]], type = "c", measure = "ratio", nullvalue = TRUE)) curve2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "c", measure = "ratio", steps = 10000) (ggcurve(data = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE)) The null value is shown via the red line and it’s clear that a large mass of the function is away from it. 
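As noted above, these reverse-engineered curves are computed from the approximate normal distribution. To make that concrete, here is a minimal sketch in plain Python (this is not the package's code; the helper name and the 95% default are assumptions for illustration) that recovers the two-sided P-value at any candidate ratio from a reported estimate and interval, using the J Clin Psychiatry numbers:

```python
import math
import statistics

def p_from_ci(point, ll, ul, theta, level=0.95):
    """Two-sided p-value at candidate ratio `theta`, reconstructed from a
    reported ratio estimate and its (level) interval via the normal
    approximation on the log scale."""
    z_crit = statistics.NormalDist().inv_cdf(1 - (1 - level) / 2)
    se = (math.log(ul) - math.log(ll)) / (2 * z_crit)  # implied SE of log(ratio)
    z = (math.log(theta) - math.log(point)) / se
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

# OR = 1.7, 95% interval (1.1, 2.6): p is 1 at the point estimate,
# roughly 0.05 at the reported limits, and small at the null (theta = 1)
```

Evaluating p_from_ci over a grid of theta values traces the same P-value curve that ggcurve() draws.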
We can also see this by plotting the likelihood functions via the curve_rev() function. lik1 <- curve_rev(point = 1.7, LL = 1.1, UL = 2.6, type = "l", measure = "ratio", steps = 10000) (ggcurve(data = lik1[[1]], type = "l1", measure = "ratio", nullvalue = TRUE)) lik2 <- curve_rev(point = 1.61, LL = 0.997, UL = 2.59, type = "l", measure = "ratio", steps = 10000) (ggcurve(data = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE)) We can also view the amount of agreement between the likelihood functions of these two studies. (plot_compare( data1 = lik1[[1]], data2 = lik2[[1]], type = "l1", measure = "ratio", nullvalue = TRUE, title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017. JAMA.", subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59", xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio") )) and the consonance functions (plot_compare( data1 = curve1[[1]], data2 = curve2[[1]], type = "c", measure = "ratio", nullvalue = TRUE, title = "Brown et al. 2017. J Clin Psychiatry. vs. \nBrown et al. 2017. JAMA.", subtitle = "J Clin Psychiatry: OR = 1.7, 1/6.83 LI: LL = 1.1, UL = 2.6 \nJAMA: HR = 1.61, 1/6.83 LI: LL = 0.997, UL = 2.59", xaxis = expression(Theta ~ "= Hazard Ratio / Odds Ratio") )) # The Bootstrap and Consonance Functions Some authors have shown that the bootstrap distribution is equal to the confidence distribution because it meets the definition of a consonance distribution.1,18,19 The bootstrap distribution and the asymptotic consonance distribution would be defined as: $H_{n}(\theta)=1-P\left(\hat{\theta}-\hat{\theta}^{*} \leq \hat{\theta}-\theta | \mathbf{x}\right)=P\left(\hat{\theta}^{*} \leq \theta | \mathbf{x}\right)$ Certain bootstrap methods such as the BCa method and t-bootstrap method also yield second order accuracy of consonance distributions. 
$H_{n}(\theta)=1-P\left(\frac{\hat{\theta}^{*}-\hat{\theta}}{\widehat{S E}^{*}\left(\hat{\theta}^{*}\right)} \leq \frac{\hat{\theta}-\theta}{\widehat{S E}(\hat{\theta})} | \mathbf{x}\right)$ Here, I demonstrate how to use these particular bootstrap methods to arrive at consonance curves and densities. We’ll use the Iris dataset and construct a function that’ll yield a parameter of interest. ## The Nonparametric Bootstrap iris <- datasets::iris foo <- function(data, indices) { dt <- data[indices, ] c( cor(dt[, 1], dt[, 2], method = "p") ) } We can now use the curve_boot() method to construct a function. The default method used for this function is the “Bca” method provided by the bcaboot package.19 I will suppress the output of the function because it is unnecessarily long. But we’ve placed all the estimates into a list object called y. The first item in the list will be the consonance distribution constructed by typical means, while the third item will be the bootstrap approximation to the consonance distribution. ggcurve(data = y[[1]], nullvalue = TRUE) ggcurve(data = y[[3]], nullvalue = TRUE) We can also print out a table for TeX documents (gg <- curve_table(data = y[[1]], format = "image")) Lower Limit Upper Limit Interval Width Interval Level (%) CDF P-value S-value (bits) -0.142 -0.093 0.048 25 0.625 0.75 0.415 -0.169 -0.067 0.102 50 0.750 0.50 1.000 -0.205 -0.031 0.174 75 0.875 0.25 2.000 -0.214 -0.021 0.194 80 0.900 0.20 2.322 -0.266 0.031 0.296 95 0.975 0.05 4.322 -0.312 0.077 0.389 99 0.995 0.01 6.644 More bootstrap replications will lead to a smoother function. But for now, we can compare these two functions to see how similar they are. 
plot_compare(y[[1]], y[[3]]) If we wanted to look at the bootstrap standard errors, we could do so by loading the fifth item in the list knitr::kable(y[[5]]) theta sdboot z0 a sdjack est -0.1175698 0.0755961 0.0576844 0.0304863 0.075694 jsd 0.0000000 0.0010234 0.0274023 0.0000000 0.000000 where in the top row, theta is the point estimate, and sdboot is the bootstrap estimate of the standard error, sdjack is the jacknife estimate of the standard error. z0 is the bias correction value and a is the acceleration constant. The values in the second row are essentially the internal standard errors of the estimates in the top row. The densities can also be calculated accurately using the t-bootstrap method. Here we use a different dataset to show this library(Lock5Data) dataz <- data(CommuteAtlanta) func <- function(data, index) { x <- as.numeric(unlist(data[1])) y <- as.numeric(unlist(data[2])) return(mean(x[index]) - mean(y[index])) } Our function is a simple mean difference. This time, we’ll set the method to “t” for the t-bootstrap method z <- curve_boot(data = CommuteAtlanta, func = func, method = "t", replicates = 2000, steps = 1000) #> Warning in norm.inter(t, alpha): extreme order statistics used as endpoints ggcurve(data = z[[1]], nullvalue = FALSE) ggcurve(data = z[[2]], type = "cd", nullvalue = FALSE) The consonance curve and density are nearly identical. With more bootstrap replications, they are very likely to converge. 
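The idea underlying both methods, that the sorted bootstrap replicates themselves trace out the consonance distribution H_n(theta), can be demonstrated numerically. A self-contained sketch in plain Python (simulated data rather than the commute-time example; the names are illustrative, and this is not the package's implementation):

```python
import bisect
import random
import statistics

random.seed(7)
data = [random.gauss(0.5, 1.0) for _ in range(200)]
theta_hat = statistics.mean(data)

# resample with replacement, re-estimate each time, and sort the replicates
boot = sorted(
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(2000)
)

def H(theta):
    """Bootstrap consonance distribution: P(theta_hat* <= theta | x)."""
    return bisect.bisect_right(boot, theta) / len(boot)

def cv(theta):
    """Consonance curve from H, as defined earlier: 1 - 2|H(theta) - 0.5|."""
    return 1 - 2 * abs(H(theta) - 0.5)

# H rises from 0 to 1, and cv peaks (near 1) around the point estimate
```

Reading off the theta values where H crosses alpha/2 and 1 - alpha/2 gives percentile bootstrap limits; the BCa and t corrections refine exactly this construction.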
(zz <- curve_table(data = z[[1]], format = "image")) Lower Limit Upper Limit Interval Width Interval Level (%) CDF P-value S-value (bits) -39.400 -39.075 0.325 25.0 0.625 0.750 0.415 -39.611 -38.876 0.735 50.0 0.750 0.500 1.000 -39.873 -38.608 1.265 75.0 0.875 0.250 2.000 -39.932 -38.530 1.402 80.0 0.900 0.200 2.322 -40.026 -38.456 1.570 85.0 0.925 0.150 2.737 -40.118 -38.354 1.763 90.0 0.950 0.100 3.322 -40.294 -38.174 2.120 95.0 0.975 0.050 4.322 -40.442 -38.026 2.416 97.5 0.988 0.025 5.322 -40.636 -37.806 2.830 99.0 0.995 0.010 6.644 ## The Parametric Bootstrap For the examples above, we mainly used nonparametric bootstrap methods. Here I show an example using the parametric Bca bootstrap and the results it yields. First, we’ll load our data again and set our function. data(diabetes, package = "bcaboot") X <- diabetes$x y <- scale(diabetes$y, center = TRUE, scale = FALSE) lm.model <- lm(y ~ X - 1) mu.hat <- lm.model$fitted.values sigma.hat <- stats::sd(lm.model$residuals) t0 <- summary(lm.model)$adj.r.squared y.star <- sapply(mu.hat, rnorm, n = 1000, sd = sigma.hat) tt <- apply(y.star, 1, function(y) summary(lm(y ~ X - 1))$adj.r.squared) b.star <- y.star %*% X Now, we’ll use the same function, but set the method to “bcapar” for the parametric method. df <- curve_boot(method = "bcapar", t0 = t0, tt = tt, bb = b.star) Now we can look at our outputs. ggcurve(df[[1]], nullvalue = FALSE) ggcurve(df[[3]], nullvalue = FALSE) We can compare the functions to see how well the bootstrap approximations match up plot_compare(df[[1]], df[[3]]) We can also look at the density function ggcurve(df[[5]], type = "cd", nullvalue = FALSE) That concludes our demonstration of the bootstrap method to approximate consonance functions. ## Using Profile Likelihoods For this last example, we’ll explore the curve_lik() function, which can help generate profile likelihood functions, and deviance statistics with the help of the ProfileLikelihood package. 
library(ProfileLikelihood)
#> Loading required package: MASS

We'll use a simple example taken directly from the ProfileLikelihood documentation, where we'll calculate the likelihoods from a glm model

data(dataglm)
xx <- profilelike.glm(y ~ x1 + x2,
  data = dataglm, profile.theta = "group",
  family = binomial(link = "logit"), length = 500, round = 2
)
#> Warning message: provide lo.theta and hi.theta

Then, we'll use curve_lik() on the object that the ProfileLikelihood package created.

lik <- curve_lik(xx, dataglm)

tibble::tibble(lik[[1]])
#> # A tibble: 500 x 1
#>    lik[[1]]$values $likelihood $loglikelihood  $support $deviancestat
#>              <dbl>       <dbl>          <dbl>     <dbl>         <dbl>
#>  1           -1.41    9.26e-21          -9.79 0.0000560          9.79
#>  2           -1.40    1.00e-20          -9.71 0.0000606          9.71
#>  3           -1.39    1.08e-20          -9.63 0.0000655          9.63
#>  4           -1.38    1.17e-20          -9.56 0.0000708          9.56
#>  5           -1.37    1.26e-20          -9.48 0.0000765          9.48
#>  6           -1.35    1.37e-20          -9.40 0.0000826          9.40
#>  7           -1.34    1.47e-20          -9.32 0.0000892          9.32
#>  8           -1.33    1.59e-20          -9.25 0.0000963          9.25
#>  9           -1.32    1.72e-20          -9.17 0.000104           9.17
#> 10           -1.31    1.85e-20          -9.10 0.000112           9.10
#> # … with 490 more rows

Next, we'll plot four functions: the relative likelihood, the log-likelihood, the likelihood, and the deviance function.

ggcurve(lik[[1]], type = "l1", nullvalue = TRUE)
ggcurve(lik[[1]], type = "l2")
ggcurve(lik[[1]], type = "l3")
ggcurve(lik[[1]], type = "d")

The obvious advantage of using reduced likelihoods is that they are free of nuisance parameters

$L_{t_{n}}(\theta)=f_{n}\left(F_{n}^{-1}\left(H_{p i v}(\theta)\right)\right)\left|\frac{\partial}{\partial t} \psi\left(t_{n}, \theta\right)\right|=h_{p i v}(\theta)\left|\frac{\partial}{\partial t} \psi(t, \theta)\right| /\left.\left|\frac{\partial}{\partial \theta} \psi(t, \theta)\right|\right|_{t=t_{n}}$

thus giving summaries of the data that can be incorporated into combined analyses.

# References

1. Xie M-g, Singh K. Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review.
International Statistical Review. 2013;81(1):3-39. doi:10.1111/insr.12000

2. Birnbaum A. A unified theory of estimation, I. The Annals of Mathematical Statistics. 1961;32(1):112-135. doi:10.1214/aoms/1177705145

3. Chow ZR, Greenland S. Semantic and Cognitive Tools to Aid Statistical Inference: Replace Confidence and Significance by Compatibility and Surprise. arXiv:190908579 [statME]. September 2019. http://arxiv.org/abs/1909.08579.

4. Fraser DAS. P-Values: The Insight to Modern Statistical Inference. Annual Review of Statistics and Its Application. 2017;4(1):1-14. doi:10.1146/annurev-statistics-060116-054139

5. Fraser DAS. The P-value function and statistical inference. The American Statistician. 2019;73(sup1):135-147. doi:10.1080/00031305.2018.1556735

6. Poole C. Beyond the confidence interval. American Journal of Public Health. 1987;77(2):195-199. doi:10.2105/AJPH.77.2.195

7. Poole C. Confidence intervals exclude nothing. American Journal of Public Health. 1987;77(4):492-493. doi:10.2105/ajph.77.4.492

8. Schweder T, Hjort NL. Confidence and Likelihood. Scand J Stat. 2002;29(2):309-332. doi:10.1111/1467-9469.00285

9. Schweder T, Hjort NL. Confidence, Likelihood, Probability: Statistical Inference with Confidence Distributions. Cambridge University Press; 2016.

10. Singh K, Xie M, Strawderman WE. Confidence distribution (CD) – distribution estimator of a parameter. August 2007. http://arxiv.org/abs/0708.0976.

11. Sullivan KM, Foster DA. Use of the confidence interval function. Epidemiology. 1990;1(1):39-42. doi:10.1097/00001648-199001000-00009

12. Whitehead J. The case for frequentism in clinical trials. Statistics in Medicine. 1993;12(15-16):1405-1413. doi:10.1002/sim.4780121506

13. Rothman KJ, Greenland S, Lash TL. Precision and statistics in epidemiologic studies. In: Rothman KJ, Greenland S, Lash TL, eds. Modern Epidemiology. 3rd ed. Lippincott Williams & Wilkins; 2008:148-167.

14. Greenland S.
Valid P-values behave exactly as they should: Some misleading criticisms of P-values and their resolution with S-values. The American Statistician. 2019;73(sup1):106-114. doi:10.1080/00031305.2018.1529625

15. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27(3):379-423. doi:10.1002/j.1538-7305.1948.tb01338.x

16. Brown HK, Ray JG, Wilton AS, Lunsky Y, Gomes T, Vigod SN. Association between serotonergic antidepressant use during pregnancy and autism spectrum disorder in children. JAMA. 2017;317(15):1544-1552. doi:10.1001/jama.2017.3415

17. Brown HK, Hussain-Shamsy N, Lunsky Y, Dennis C-LE, Vigod SN. The association between antenatal exposure to selective serotonin reuptake inhibitors and autism: A systematic review and meta-analysis. The Journal of Clinical Psychiatry. 2017;78(1):e48-e58. doi:10.4088/JCP.15r10194

18. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. CRC Press; 1994.

19. Efron B, Narasimhan B. The automatic construction of bootstrap confidence intervals. October 2018:17.
https://comm.support.ca.com/kb/launch-a-vse-server-as-a-windows-service-with-a-different-localproperties-using-vmoptions/kb000048429
# Launch a VSE Server as a Windows Service with a different local.properties (using .vmoptions)

Document ID: KB000048429

Question: How do you launch a VSE Server as a Windows Service with a different local.properties?

1. Create a new text file in LISA_HOME/bin called VirtualServiceEnvironmentService.vmoptions.
2. Edit the file, and add this line: -DLISA_LOCAL_PROPERTIES=C:\path\to\vse.local.properties (tweak the path accordingly)
3. Restart the VSE service. It should now be using the local.properties that you specified.

You can do the same thing with any LISA executable in the bin directory. Just make an exename.vmoptions file and put the JVM options you want in the file (one option per line).

The .vmoptions files are used to pass additional parameters to a Java process in order to modify the default settings used for the JVM. These files can be used to customize the memory allocation settings for each of the LISA processes used in the server. These files must be located in the same folder as the actual executable scripts and must have the same name, apart from the extension (.vmoptions). These files are located in the LISA_HOME\bin folder. The contents can look like:

-Xms256m
-Xmx1024m
-Xss512k

Okay, this works for Internet based licenses, but I have file based licenses. How can this work for that? We use the same vmoptions file. In the above example, it is used to point to a different license via a local.properties file. This works for an Internet based license, but not a file based license, because by DEFAULT the lisalic.xml file is in LISA_HOME. There is a property, lisa.license, that can be changed to allow for multiple file based licenses in LISA_HOME. The lisa.license property contains the fully qualified path to the license file and defaults to the lisalic.xml in the LISA_HOME folder. How can we use this to our advantage?
http://mscroggs.co.uk/puzzles/tags/geometry
mscroggs.co.uk

# Puzzles

## 23 December

Today's number is the area of the largest-area rectangle with perimeter 46 whose sides are all of integer length.

## 12 December

These three vertices form a right angled triangle.

There are 2600 different ways to pick three vertices of a regular 26-sided shape. Sometimes the three vertices you pick form a right angled triangle. Today's number is the number of different ways to pick three vertices of a regular 26-sided shape so that the three vertices make a right angled triangle.

## Equal lengths

The picture below shows two copies of the same rectangle with red and blue lines. The blue line visits the midpoint of the opposite side. The lengths shown in red and blue are equal. What is the ratio of the sides of the rectangle?

## Is it equilateral?

In the diagram below, $$ABDC$$ is a square. Angles $$ACE$$ and $$BDE$$ are both 75°. Is triangle $$ABE$$ equilateral? Why/why not?

## Bending a straw

Two points along a drinking straw are picked at random. The straw is then bent at these points. What is the probability that the two ends meet up to make a triangle?

## Placing plates

Two players take turns placing identical plates on a square table. The player who is first to be unable to place a plate loses. Which player wins?

## 20 December

Earlier this year, I wrote a blog post about different ways to prove Pythagoras' theorem. Today's puzzle uses Pythagoras' theorem.

Start with a line of length 2. Draw a line of length 17 perpendicular to it. Connect the ends to make a right-angled triangle. The length of the hypotenuse of this triangle will be a non-integer. Draw a line of length 17 perpendicular to the hypotenuse and make another right-angled triangle. Again the new hypotenuse will have a non-integer length. Repeat this until you get a hypotenuse of integer length. What is the length of this hypotenuse?
## 17 December

The number of degrees in one internal angle of a regular polygon with 360 sides.
http://mathhelpforum.com/advanced-statistics/155832-cumulative-distribution-function-using-gamma-distribution.html
## cumulative distribution function using a gamma distribution

Hi there,

I am trying to fit some data with a survival function, which is just 1 - cdf (cumulative distribution function). I was able to fit the data assuming a normally distributed random variable:

Due to the nature of the experiment I suspect that a gamma distributed random variable would give a better fit (at the end of the step), since the gamma distribution is "like a normal distribution with a bias on one side". However, I cannot get this to work. It always looks like a normally distributed random variable, i.e. just as in the figure above.

I use Python. This is how I define my function:

def scaled_sf_gamma(x, c, d, shape_param):
    return c*stats.gamma.sf(x, shape_param) + d

Parameters c and d scale the survival function. Then I define an additional shape parameter. I optimize the curve fit as:

p_opt_gamma = sp.optimize.curve_fit(scaled_sf_gamma, new_time, CH4_interpolated)[0]

As I said, the result looks like a normal distribution. I think I am missing to optimize another parameter, which "skews" the normal distribution. Here is some information about the gamma distribution and how it is used: scipy.stats.gamma - SciPy v0.9.dev6665 Reference Guide (DRAFT). I do not understand the "lower or upper tail probability" which is given as a non-optional argument here. Maybe there lies the key...

Thanks a lot in advance for any help.

Cheers
Frank
https://stats.stackexchange.com/questions/67245/confusion-matrix-of-random-forest-doesnot-match-predicted-probabilities-on-train
# Confusion matrix of random forest does not match predicted probabilities on train data

Based on an earlier question I balanced the classes such that the numbers in both classes are about similar. The random forest gives the next result:

    > print(rFresult)
    Call:
     randomForest(formula = finresfh ~ ., data = rFdatasubset, importance = TRUE)
                   Type of random forest: classification
                         Number of trees: 500
    No. of variables tried at each split: 14
            OOB estimate of error rate: 35.53%
    Confusion matrix:
         1    2 class.error
    1 1852  627   0.2529246
    2 1022 1140   0.4727105

Prediction on the train set shows perfect separation, in contrast to the confusion matrix:

    > tab <- table(probability=round(predict(rFresult, newdata=rFdatasubset, type="prob")[,2],1),
                   TRUE_status=rFdatasubset$finresfh)
    > tab
               TRUE_status
    probability    1    2
            0.1  978    0
            0.2 1447    0
            0.3   54    0
            0.7    0   65
            0.8    0 1551
            0.9    0  543
            1      0    3

The probability is estimated for the subjects to be in class 2. The "probability" table means the number of subjects with predicted probability level having a certain TRUE status. Can anyone explain why the estimated probabilities show a perfect separation but a totally different result in the confusion table?

## 1 Answer

You're trying to get predictions on your training dataset. This is misleading, as the component trees in the RF have been obtained by optimising the fit criterion on this data. You need to omit the newdata argument, which will get you the out-of-bag predictions instead.

    table(probability=round(predict(rFresult, type="prob")[,2], 1),
          TRUE_status=rFdatasubset$finresfh)

• great, this is the solution – Hans Aug 14 '13 at 6:26
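The resubstitution effect in the answer is easy to reproduce outside R. A toy Python sketch (not a random forest, but a deliberately memorizing 1-nearest-neighbour model on pure-noise data, invented for illustration) shows training-set predictions looking perfect while honest held-out predictions sit at chance:

```python
import random

def nn_predict(train, query):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

rng = random.Random(42)
# Pure-noise data: the feature carries no information about the label.
train = [(rng.random(), rng.choice((1, 2))) for _ in range(200)]
test  = [(rng.random(), rng.choice((1, 2))) for _ in range(200)]

resub_acc = sum(nn_predict(train, f) == y for f, y in train) / len(train)
hold_acc  = sum(nn_predict(train, f) == y for f, y in test) / len(test)
print(resub_acc)           # 1.0: every training point is its own nearest neighbour
print(round(hold_acc, 2))  # near 0.5, i.e. chance level
```

The OOB predictions from `randomForest` play the role of the held-out evaluation here: each tree only votes on the observations it did not see during fitting.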
2021-10-25 20:43:28
http://zenzike.com/posts/euler/2010-08-12-euler-7
# Euler #7 : Benchmarked Primes

by Nicolas Wu

Posted on 12 August 2010

This week’s Project Euler question is: Find the 10001st prime.

We already described a simple algorithm for finding primes in a previous post, so rather than repeat ourselves, in this article we’ll discuss benchmarking using Criterion to find the fastest prime number algorithm that doesn’t require too much magic. I’ll be taking implementations found on the Haskell wiki.

## Imported modules

First we’ll need to import the Criterion modules, which provide us with our benchmarking suite:

> import System.Environment (getArgs, withArgs)
> import Criterion (bgroup, bench, nf)
> import Progression.Main (defaultMain)
> import Data.List.Ordered (minus, union)

I’ll actually be using Progression in conjunction with Criterion, which just makes collecting the results of several benchmarks a little easier.

## Prime Algorithms

The prime number generator we used previously was Turner’s sieve, defined as follows:

> turner :: [Int]
> turner = sieve [2 .. ]
>   where
>     sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

The Haskell wiki documents a whole range of other algorithms that can be used to generate primes. Here are the definitions that I pulled from the wiki:

> postSieve :: [Int]
> postSieve = 2 : 3 : sieve (tail postSieve) [5,7..]
>   where
>     sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `rem` p /= 0]
>       where (h,~(_:t)) = span (< p*p) xs
>
> trialOdds :: [Int]
> trialOdds = 2 : 3 : filter isPrime [5,7..]
>   where
>     isPrime n = all (notDivs n)
>               $ takeWhile (\p -> p*p <= n) (tail trialOdds)
>     notDivs n p = n `mod` p /= 0
>
> nestedFilters :: [Int]
> nestedFilters = 2 : 3 : sieve [] (tail nestedFilters) 5
>   where
>     notDivsBy d n = n `mod` d /= 0
>     sieve ds (p:ps) x = foldr (filter . notDivsBy) [x,x+2..p*p-2] ds
>                         ++ sieve (p:ds) ps (p*p+2)
>
> spansPrimes :: [Int]
> spansPrimes = 2 : 3 : sieve 0 (tail spansPrimes) 5
>   where
>     sieve k (p:ps) x = [n | n <- [x,x+2..p*p-2], and [n `rem` p /= 0 | p <- fs]]
>                        ++ sieve (k+1) ps (p*p+2)
>       where fs = take k (tail spansPrimes)
>
> bird :: [Int]
> bird = 2 : primes'
>   where
>     primes' = [3] ++ [5,7..] `minus` foldr union' [] mults
>     mults = map (\p -> let q = p*p in (q, tail [q,q+2*p..])) $ primes'
>     union' (q,qs) xs = q : union qs xs
>
> wheel :: [Int]
> wheel = 2:3:primes'
>   where
>     1:p:candidates = [6*k+r | k <- [0..], r <- [1,5]]
>     primes' = p : filter isPrime candidates
>     isPrime n = all (not . divides n)
>               $ takeWhile (\p -> p*p <= n) primes'
>     divides n p = n `mod` p == 0

I won’t go into the details of explaining these different algorithms, since I want us to focus on how we might benchmark these implementations.

## Benchmarking

In order to compare these different algorithms, we construct a program that takes as its argument the name of the function that should be used to produce prime numbers. Once the user has provided this input, the benchmark is executed using Criterion to produce the first 101, 1001, and 10001 primes.

> main = do
>   args <- getArgs
>   let !primes = case head args of
>         "turner"        -> turner
>         "postSieve"     -> postSieve
>         "trialOdds"     -> trialOdds
>         "nestedFilters" -> nestedFilters
>         "spansPrimes"   -> spansPrimes
>         "bird"          -> bird
>         "wheel"         -> wheel
>         _               -> error "prime function unknown!"
>   withArgs (("-n" ++ (head args)) : tail args) $ do
>     defaultMain . bgroup "Primes" $
>       [ bench "101"   $ nf (\n -> primes !! n) 101
>       , bench "1001"  $ nf (\n -> primes !! n) 1001
>       , bench "10001" $ nf (\n -> primes !! n) 10001
>       ]

We then run this code with each prime function name as an argument individually, and the Progression library puts the results together.
Here’s a bar chart generated from the data:

These results have been normalised against the turner function, and show how long it took for the various algorithms to find the 10001st, 1001st and 100th primes.

Solving this week’s problem is a simple case of running any one of these algorithms on our magic number:

> euler7 = spansPrimes !! 10001

## Summary

Collecting benchmark information with Criterion and Progression is really quite simple! The best thing about Criterion is that the benchmarking is very robust: detailed statistics are returned regarding the benchmarking process, and whether the results are likely to be accurate. Progression makes the collation of several runs of benchmarks very simple, and means that different versions of a program can be compared with ease.
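For readers who just want to check the numeric answer without a Haskell toolchain, a quick trial-division sketch in Python (not from the original post) confirms the 10001st prime:

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by simple trial division."""
    primes = [2]
    candidate = 3
    while len(primes) < n:
        is_prime = True
        for p in primes:
            if p * p > candidate:
                break          # no divisor up to sqrt(candidate): it's prime
            if candidate % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(candidate)
        candidate += 2         # only odd candidates after 2
    return primes[n - 1]

print(nth_prime(10001))  # 104743
```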
2017-04-24 13:15:43
http://zbmath.org/?q=an:1212.35348
# zbMATH — the first resource for mathematics

On the 3D viscous primitive equations of the large-scale atmosphere. (English) Zbl 1212.35348

Summary: This paper is devoted to considering the three-dimensional viscous primitive equations of the large-scale atmosphere. First, we prove the global well-posedness for the primitive equations with weaker initial data than that in the paper by D. Huang and B. Guo [Sci. China, Ser. D 51, No. 3, 469–480 (2008)].
Second, we obtain the existence of smooth solutions to the equations. Moreover, we obtain the compact global attractor in $V$ for the dynamical system generated by the primitive equations of large-scale atmosphere, which improves the result of D. Huang and B. Guo (loc. cit.).

##### MSC:

35Q30 Stokes and Navier-Stokes equations
65M70 Spectral, collocation and related methods (IVP of PDE)
86A10 Meteorology and atmospheric physics
2014-04-20 18:48:28
https://www.shaalaa.com/question-bank-solutions/define-or-explain-following-concept-direct-demand-demand_78492
# Define Or Explain the Following Concept: Direct Demand - Economics

Concept: Demand

#### Question

Define or explain the following concept: Direct demand

#### Solution

Demand for the goods that are purchased for direct consumption and are not used as intermediate goods is referred to as direct demand. For instance, goods like clothes and food have a direct demand, as they are meant for final consumption. The demand for such goods does not depend on the demand for any other commodity.

#### APPEARS IN

Micheal Vaz Solution for Micheal Vaz Class 12 Economics (2019 to Current) Chapter 3: Demand Analysis Exercise | Q: 1.4 | Page no. 24
2020-03-28 18:27:37
https://byjus.com/rd-sharma-solutions/class-12-maths-chapter-19-indefinite-integrals-exercise-19-5/
# RD Sharma Solutions For Class 12 Maths Exercise 19.5 Chapter 19 Indefinite Integrals

This exercise deals with the evaluation of integrals of the form $\int (ax+b)\sqrt{cx+d}\, dx$ and $\int \frac{ax+b}{\sqrt{cx+d}}\, dx$. Experts at BYJU'S have formulated the RD Sharma Class 12 Solutions for Maths in the most lucid and easy manner. Solutions are developed using shortcut techniques to help students grasp the concepts faster and to make learning fun. The solutions to this exercise are available in PDF format, which can be downloaded easily from the links provided below. To clear their doubts, students can refer to RD Sharma Solutions for Class 12 Maths Chapter 19 Exercise 19.5.
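As a generic worked sketch of the substitution these exercises use (an illustration, not one of the textbook's numbered problems), put $u = cx+d$, so that $x = \frac{u-d}{c}$ and $dx = \frac{du}{c}$:

$$\int (ax+b)\sqrt{cx+d}\, dx = \frac{1}{c^{2}}\int \bigl(au + (bc-ad)\bigr)\sqrt{u}\, du = \frac{2a}{5c^{2}}\,u^{5/2} + \frac{2(bc-ad)}{3c^{2}}\,u^{3/2} + C,$$

with $u = cx+d$ substituted back at the end. The second form, $\int \frac{ax+b}{\sqrt{cx+d}}\, dx$, follows from the same substitution with $u^{-1/2}$ in place of $u^{1/2}$.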
2020-07-13 12:16:00
https://nhigham.com/2017/01/30/good-times-in-matlab/?shared=email&msg=fail
# Good Times in MATLAB: How to Typeset the Multiplication Symbol

The MATLAB output

    >> A = rand(2); whos
      Name      Size            Bytes  Class     Attributes
      A         2x2                32  double

will be familiar to seasoned users. Consider this, however, from MATLAB R2016b:

    >> s = string({'One','Two'})
    s =
      1×2 string array
        "One"    "Two"

At first sight, you might not spot anything unusual, other than the new string datatype. But there are two differences. First, MATLAB prints a header giving the type and size of the array. It does so for arrays of type other than double precision and char. Second, the times symbol is no longer an “x” but is now a multiplication symbol: “×”.

The new “times” certainly looks better. There are still remnants of “x”, for example in whos s for the example above, but I presume that all occurrences of “x” will be changed to the new symbol in the next release.

However, there is a catch: the “×” symbol is a Unicode character, so it will not print correctly when you include the output in LaTeX (at least with the version provided in TeX Live 2016). Moreover, it may not even save correctly if your editor is not set up for Unicode characters.

Here is how we dealt with the problem in the third edition (published in January 2017) of MATLAB Guide. We put the code

    \usepackage[utf8x]{inputenc}
    \DeclareUnicodeCharacter{0215}{\ensuremath{\times}}

in the preamble of the master TeX file, do.tex. We also told our editor, Emacs, to use a UTF-8 coding, by putting the following code at the end of each included .tex file (we have one file per chapter):

    %%% Local Variables:
    %%% coding: utf-8
    %%% mode: latex
    %%% TeX-master: "do"
    %%% End:

With this setup we can cut and paste output including “×” into our .tex files and it appears as expected in the LaTeX output.
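A quick side check (Python, not from the original post) confirms which character MATLAB is emitting: the new separator is U+00D7 MULTIPLICATION SIGN, i.e. decimal 215, which matches the \DeclareUnicodeCharacter{0215} declaration above:

```python
# The MATLAB size separator "×" is U+00D7 (MULTIPLICATION SIGN),
# decimal 215, and is not the ASCII letter "x" (U+0078).
times = "×"
print(hex(ord(times)))  # 0xd7
print(ord(times))       # 215
print(times == "x")     # False
```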
2021-09-22 01:34:59
https://cstheory.stackexchange.com/questions/8169/computational-complexity-of-random-sampling?noredirect=1
# Computational complexity of random sampling

I am using some randomized algorithms (particle filters) and I would like to know the computational complexity of obtaining one random sample of a continuous distribution (for instance from a multivariate Gaussian), in terms of elementary operations... or what computational complexities conventional algorithms have. Thank you

• It depends on your computational model. Sometimes people just assume you can generate a Gaussian as a unit operation. However, if all you can generate is, say, random bits, and you want an approximate Gaussian, the complexity depends on the approximation you want. – Dana Moshkovitz Sep 10 '11 at 11:26
• @DanaMoshkovitz: maybe this could be an answer ? – Suresh Venkat Sep 10 '11 at 20:28
• Ok, I posted it as an answer. – Dana Moshkovitz Sep 10 '11 at 20:31
• FYI In the case of a finite distribution (not what the OP asks!), $O(1)$ time is (in theory) possible. See cstheory.stackexchange.com/questions/37648/…. – Neal Young Aug 22 '18 at 12:18
• A more precise question would be: if you want to sample a distribution from random i.i.d. bits that is $\epsilon$-close to a Gaussian in total variational distance, what is the running-time dependence of the sampler on $\epsilon$. – Mahdi Cheraghchi Sep 11 '11 at 0:28
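To make the comment about generating an approximate Gaussian from random bits concrete, here is an illustrative Box-Muller sketch in Python (not from the thread). Each uniform input is assembled from a fixed number of random bits, which is exactly where the approximation error, and hence the bit complexity, enters:

```python
import math
import random

def gaussian_from_bits(rng, bits=53):
    """One approximate standard-normal sample via Box-Muller.

    Each uniform is built from `bits` random bits, so the sample is only
    an approximation to a true Gaussian; the approximation improves as
    `bits` grows, at a cost of more random bits per sample.
    """
    denom = float(1 << bits)
    u1 = (rng.getrandbits(bits) + 1) / (denom + 1)  # keep u1 > 0 for the log
    u2 = rng.getrandbits(bits) / denom
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

rng = random.Random(0)
samples = [gaussian_from_bits(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```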
2020-11-24 20:19:14
http://mathhelpforum.com/algebra/112410-recursion-patterns-arithmetic-geometric-equations-sigma-notations.html
Thread: recursion patterns, arithmetic/geometric equations, and sigma notations

1. recursion patterns, arithmetic/geometric equations, and sigma notations

Could a kind soul please be so kind to explain what exactly the variables in the equations of arithmetic/geometric sequences mean? I would understand it so much more if i knew what u (sub N) or a meant. I have no notes. Furthermore I do not quite grasp the concept of the sigma notation for my teacher is insane. Feel free to delve into that. Thank you

2. Originally Posted by sodumb:(
Could a kind soul please be so kind to explain what exactly the variables in the equations of arithmetic/geometric sequences mean? I would understand it so much more if i knew what u (sub N) or a meant. I have no notes. Furthermore I do not quite grasp the concept of the sigma notation for my teacher is insane. Feel free to delve into that. Thank you

U_n = nth term of a sequence
U_1 = a = first term of a sequence
n = number of terms in a sequence or a specific term

Arithmetic Sequences

d = common difference ($U_n - U_{n-1} = U_{n-1}-U_{n-2} = d$)

nth term: $U_n = a+(n-1)d$

Sum of n terms: $S_n=\frac{n}{2}(2a + (n-1)d)$

Geometric Sequence

r = common ratio ($\frac{U_n}{U_{n-1}} = \frac{U_{n-1}}{U_{n-2}} = r$)

nth term: $U_n = ar^{n-1}$

Sum of n terms ($r \neq 1$): $S_n = \frac{a(1-r^n)}{1-r} = \frac{a(r^n-1)}{r-1}$

Sum to infinity ($|r| < 1$): $S_{\infty} = \frac{a}{1-r}$
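A quick numerical check of the standard formulas (Python, for illustration); note that the sum formulas are exactly what sigma notation abbreviates, $S_n = \sum_{k=1}^{n} U_k$:

```python
def arith_nth(a, d, n):
    # U_n = a + (n-1)d
    return a + (n - 1) * d

def arith_sum(a, d, n):
    # S_n = n/2 * (2a + (n-1)d); the product is always even for integers
    return n * (2 * a + (n - 1) * d) // 2

def geom_nth(a, r, n):
    # U_n = a * r^(n-1)
    return a * r ** (n - 1)

def geom_sum(a, r, n):
    # S_n = a(r^n - 1)/(r - 1), valid for r != 1
    return a * (r ** n - 1) // (r - 1)

# Compare the closed forms against brute-force summation.
print(arith_nth(2, 3, 5), arith_sum(2, 3, 5))  # 14 40
print(geom_nth(2, 3, 4), geom_sum(2, 3, 4))    # 54 80
```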
2017-02-28 15:13:38
https://cyclostationary.blog/category/radio-frequency-scene-analysis/
## PSK/QAM Cochannel Data Set for Modulation Recognition Researchers [CSPB.ML.2023]

The next step in dataset complexity at the CSP Blog: cochannel signals.

I’ve developed another data set for use in assessing modulation-recognition algorithms (machine-learning-based or otherwise) that is more complex than the original sets I posted for the ML Challenge (CSPB.ML.2018 and CSPB.ML.2022). Half of the new dataset consists of one signal in noise and the other half consists of two signals in noise. In most cases the two signals overlap spectrally, which is a signal condition called cochannel interference. We’ll call it CSPB.ML.2023.

Continue reading “PSK/QAM Cochannel Data Set for Modulation Recognition Researchers [CSPB.ML.2023]”

## Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well

Neural networks with CSP-feature inputs DO generalize in the modulation-recognition problem setting.

In some recently published papers (My Papers [50,51]), my ODU colleagues and I showed that convolutional neural networks and capsule networks do not generalize well when their inputs are complex-valued data samples, commonly referred to as simply IQ samples, or as raw IQ samples by machine learners. (Unclear why the adjective ‘raw’ is often used as it adds nothing to the meaning. If I just say Hey, pass me those IQ samples, would ya?, do you think maybe he means the processed ones? How about raw-I-mean–seriously-man–I-did-not-touch-those-numbers-OK? IQ samples? All-natural vegan unprocessed no-GMO organic IQ samples? Uncooked IQ samples?) Moreover, the capsule networks typically outperform the convolutional networks. In a new paper (MILCOM 2022: My Papers [52]; arxiv.org version), my colleagues and I continue this line of research by including cyclic cumulants as the inputs to convolutional and capsule networks.
We find that capsule networks outperform convolutional networks and that convolutional networks trained on cyclic cumulants outperform convolutional networks trained on IQ samples. We also find that both convolutional and capsule networks trained on cyclic cumulants generalize perfectly well between datasets that have different (disjoint) probability density functions governing their carrier frequency offset parameters. That is, convolutional networks do better recognition with cyclic cumulants and generalize very well with cyclic cumulants. So why don’t neural networks ever ‘learn’ cyclic cumulants with IQ data at the input?

The majority of the software and analysis work is performed by the first author, John Snoap, with an assist on capsule networks by James Latshaw. I created the datasets we used (available here on the CSP Blog [see below]) and helped with the blind parameter estimation. Professor Popescu guided us all and contributed substantially to the writing.

Continue reading “Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well”

## What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.

During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.” Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes to a machine, no to nods to me. Let’s take a look.

Continue reading “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”

## One Last Time …

We take a quick look at a fourth DeepSig dataset called 2016.04C.multisnr.tar.bz2 in the context of the data-shift problem in machine learning.

> And if we get this right,
> We’re gonna teach ’em how to say
> Goodbye …
> You and I.
>
> Lin-Manuel Miranda, “One Last Time,” Hamilton

I didn’t expect to have to do this, but I am going to analyze yet another DeepSig dataset. One last time. This one is called 2016.04C.multisnr.tar.bz2, and is described thusly on the DeepSig website:

I’ve analyzed the 2018 dataset here, the RML2016.10b.tar.bz2 dataset here, and the RML2016.10a.tar.bz2 dataset here. Now I’ve come across a manuscript-in-review in which both the RML2016.10a and RML2016.04c data sets are used. The idea is that these two datasets represent two sufficiently distinct datasets so that they are good candidates for use in a data-shift study involving trained neural-network modulation-recognition systems. The data-shift problem is, as one researcher puts it:

> Data shift or data drift, concept shift, changing environments, data fractures are all similar terms that describe the same phenomenon: the different distribution of data between train and test sets
>
> Georgios Sarantitis

But … are they really all that different?

Continue reading “One Last Time …”

## J. Antoni’s Fast Spectral Correlation Estimator

The Fast Spectral Correlation estimator is a quick way to find small cycle frequencies. However, its restrictions render it inferior to estimators like the SSCA and FAM.

In this post we take a look at an alternative CSP estimator created by J. Antoni et al (The Literature [R152]). The paper describing the estimator can be found here, and you can get some corresponding MATLAB code, posted by the authors, here if you have a Mathworks account.

Continue reading “J. Antoni’s Fast Spectral Correlation Estimator”

## Cyclostationarity of DMR Signals

Let’s take a brief look at the cyclostationarity of a captured DMR signal.
It’s more complicated than one might think.

In this post I look at the cyclostationarity of a digital mobile radio (DMR) signal empirically. That is, I have a captured DMR signal from sigidwiki.com, and I apply blind CSP to it to determine its cycle frequencies and spectral correlation function. The signal is arranged in frames or slots, with gaps between successive slots, so there is the chance that we’ll see cyclostationarity due to the on-burst (or on-frame) signaling and cyclostationarity due to the framing itself.

Continue reading “Cyclostationarity of DMR Signals”

## Comments on “Deep Neural Network Feature Designs for RF Data-Driven Wireless Device Classification,” by B. Hamdaoui et al

Another post-publication review of a paper that is weak on the ‘RF’ in RF machine learning.

Let’s take a look at a recently published paper (The Literature [R148]) on machine-learning-based modulation-recognition to get a data point on how some electrical engineers–these are more on the side of computer science I believe–use mathematics when they turn to radio-frequency problems. You can guess it isn’t pretty, and that I’m not here to exalt their acumen.

Continue reading “Comments on “Deep Neural Network Feature Designs for RF Data-Driven Wireless Device Classification,” by B. Hamdaoui et al”

## More on DeepSig’s RML Data Sets

The second DeepSig data set I analyze: SNR problems and strange PSDs.

I presented an analysis of one of DeepSig’s earlier modulation-recognition data sets (RML2016.10a.tar.bz2) in the post on All BPSK Signals. There we saw several flaws in the data set as well as curiosities. Most notably, the signals in the data set labeled as analog amplitude-modulated single sideband (AM-SSB) were absent: these signals were only noise. DeepSig has several other data sets on offer at the time of this writing:

In this post, I’ll present a few thoughts and results for the “Larger Version” of RML2016.10a.tar.bz2, which is called RML2016.10b.tar.bz2.
This is a good post to offer because it is coherent with the first RML post, but also because more papers are being published that use the RML 10b data set, and of course more such papers are in review. Maybe the offered analysis here will help reviewers to better understand and critique the machine-learning papers. The latter do not ever contain any side analysis or validation of the RML data sets (let me know if you find one that does in the Comments below), so we can’t rely on the machine learners to assess their inputs.

(Update: I analyze a third DeepSig data set here. And a fourth and final one here.)

Continue reading “More on DeepSig’s RML Data Sets”

## All BPSK Signals

An analysis of DeepSig’s 2016.10A data set, used in many published machine-learning papers, and detailed comments on quite a few of those papers.

Update March 2021: See my analyses of three other DeepSig datasets here, here, and here.

Update June 2020: I’ll be adding new papers to this post as I find them. At the end of the original post there is a sequence of date-labeled updates that briefly describe the relevant aspects of the newly found papers. Some machine-learning modulation-recognition papers deserve their own post, so check back at the CSP Blog from time-to-time for “Comments On …” posts.

## A Gallery of Cyclic Correlations

There are some situations in which the spectral correlation function is not the preferred measure of (second-order) cyclostationarity. In these situations, the cyclic autocorrelation (non-conjugate and conjugate versions) may be much simpler to estimate and work with in terms of detector, classifier, and estimator structures. So in this post, I’m going to provide surface plots of the cyclic autocorrelation for each of the signals in the spectral correlation gallery post. The exceptions are those signals I called feature-rich in the spectral correlation gallery post, such as DSSS, LTE, and radar.
Recall that such signals possess a large number of cycle frequencies, and plotting their three-dimensional spectral correlation surface is not helpful as it is difficult to interpret with the human eye. So for the cycle-frequency patterns of feature-rich signals, we’ll rely on the stem-style (cyclic-domain profile) plots that I used in the spectral correlation gallery post.

## Data Set for the Machine-Learning Challenge [CSPB.ML.2018]

A PSK/QAM/SQPSK data set with randomized symbol rate, inband SNR, carrier-frequency offset, and pulse roll-off.

Update February 2023: I’ve posted a third challenge dataset here. It is CSPB.ML.2023 and features cochannel signals.

Update April 2022: I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution. If you refer to either of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish:

- Original ML Challenge Dataset: CSPB.ML.2018.
- Shifted ML Challenge Dataset: CSPB.ML.2022.

Update September 2020: I made a mistake when I created the signal-parameter “truth” files signal_record.txt and signal_record_first_20000.txt. Like the DeepSig RML data sets that I analyzed on the CSP Blog here and here, the SNR parameter in the truth files did not match the actual SNR of the signals in the data files. I’ve updated the truth files and the links below. You can still use the original files for all other signal parameters, but the SNR parameter was in error.

Update July 2020: I originally posted $20,000$ signals in the posted data set. I’ve now added another $92,000$ for a total of $112,000$ signals. The original signals are contained in Batches 1-5, the additional signals in Batches 6-28. I’ve placed these additional Batches at the end of the post to preserve the original post’s content.
Continue reading “Data Set for the Machine-Learning Challenge [CSPB.ML.2018]”

## A Challenge for the Machine Learners

The machine-learning modulation-recognition community consistently claims vastly superior performance to anything that has come before. Let’s test that.

Update February 2023: A third dataset has been posted here. This new dataset, CSPB.ML.2023, features cochannel signals.

Update April 2022: I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution. If you refer to any of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish:

- Original ML Challenge Dataset: CSPB.ML.2018.
- Shifted ML Challenge Dataset: CSPB.ML.2022.
- Cochannel ML Dataset: CSPB.ML.2023.

### Update February 2019

I’ve decided to post the data set I discuss here to the CSP Blog for all interested parties to use. See the new post on the Data Set. If you do use it, please let me and the CSP Blog readers know how you fared with your experiments in the Comments section of either post. Thanks!

## CSP Estimators: The FFT Accumulation Method

An alternative to the strip spectral correlation analyzer.

Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by using the paper by Roberts et al (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper.
The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations to their proper $\displaystyle (f, \alpha)$ values.

## ‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.

Reconsidering my first attempt at teaching a machine the Fourier transform with the help of a CSP Blog reader. Also, the Fourier transform is viewed by Machine Learners as an input data representation, and that representation matters.

I first considered whether a machine (neural network) could learn the (64-point, complex-valued) Fourier transform in this post. I used MATLAB’s Neural Network Toolbox and I failed to get good learning results because I did not properly set the machine’s hyperparameters. A kind reader named Vito Dantona provided a comment to that original post that contained good hyperparameter selections, and I’m going to report the new results here in this post. Since the Fourier transform is linear, the machine should be set up to do linear processing. It can’t just figure that out for itself. Once I used Vito’s suggested hyperparameters to force the machine to be linear, the results became much better.

## CSP Patent: Tunneling

Tunneling == Purposeful severe undersampling of wideband communication signals. If some of the cyclostationarity property remains, we can exploit it at a lower cost.

My colleague Dr. Apurva Mody (of BAE Systems, AiRANACULUS, IEEE 802.22, and the WhiteSpace Alliance) and I have received a patent on a CSP-related invention we call tunneling. The US Patent is 9,755,869 and you can read it here or download it here. We’ve got a journal paper in review and a 2013 MILCOM conference paper (My Papers [38]) that discuss and illustrate the involved ideas. I’m also working on a CSP Blog post on the topic.

Update December 28, 2017: Our Tunneling journal paper has been accepted for publication in the journal IEEE Transactions on Cognitive Communications and Networking.
You can download the pre-publication version here.

## CSP Estimators: Cyclic Temporal Moments and Cumulants

How do we efficiently estimate higher-order cyclic cumulants? The basic answer is first estimate cyclic moments, then combine using the moments-to-cumulants formula.

In this post we discuss ways of estimating $n$-th order cyclic temporal moment and cumulant functions. Recall that for $n=2$, cyclic moments and cyclic cumulants are usually identical. They differ when the signal contains one or more finite-strength additive sine-wave components. In the common case when such components are absent (as in our recurring numerical example involving rectangular-pulse BPSK), they are equal and they are also equal to the conventional cyclic autocorrelation function provided the delay vector is chosen appropriately. That is, the two-dimensional delay vector $\boldsymbol{\tau} = [\tau_1\ \ \tau_2]$ is set equal to $[\tau/2\ \ -\tau/2]$.

The more interesting case is when the order $n$ is greater than two. Most communication signal models possess odd-order moments and cumulants that are identically zero, so the first non-trivial order $n$ greater than two is four. Our estimation task is to estimate $n$-th order temporal moment and cumulant functions for $n \ge 4$ using a sampled-data record of length $T$.

## Automatic Spectral Segmentation

Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.

In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands.
Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP. It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.

The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.

## Cyclostationarity of Direct-Sequence Spread-Spectrum Signals

Spread-spectrum signals are used to enable shared-bandwidth communication systems (CDMA), precision position estimation (GPS), and secure wireless data transmission.

In this post we look at direct-sequence spread-spectrum (DSSS) signals, which can be usefully modeled as a kind of PSK signal. DSSS signals are used in a variety of real-world situations, including the familiar CDMA and WCDMA signals, covert signaling, and GPS. My colleague Antonio Napolitano has done some work on a large class of DSSS signals (The Literature [R11, R17, R95]), resulting in formulas for their spectral correlation functions, and I’ve made some remarks about their cyclostationary properties myself here and there (My Papers [16]).

A good thing, from the point of view of modulation recognition, about DSSS signals is that they are easily distinguished from other PSK and QAM signals by their spectral correlation functions.
Whereas most PSK/QAM signals have only a single non-conjugate cycle frequency, and no conjugate cycle frequencies, DSSS signals have many non-conjugate cycle frequencies and in some cases also have many conjugate cycle frequencies.
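As a concrete illustration of the cycle-frequency features discussed in these posts (this sketch is my own, not code from the CSP Blog), here is a minimal estimator of the non-conjugate cyclic autocorrelation for rectangular-pulse BPSK; the estimate is large only when the cycle frequency is a multiple of the bit rate:

```python
import numpy as np

# Estimate R_x^alpha(tau) = <x[n] x*[n-tau] exp(-j 2 pi alpha n)> for a
# rectangular-pulse baseband BPSK signal with T0 samples per bit.
rng = np.random.default_rng(2)
T0 = 10
bits = rng.choice([-1.0, 1.0], size=2000)
x = np.repeat(bits, T0)          # rectangular-pulse BPSK, 20000 samples
n = np.arange(len(x))

def cyclic_autocorr(x, alpha, tau):
    """Non-conjugate cyclic autocorrelation estimate at cycle frequency
    alpha (cycles/sample) and integer delay tau (samples)."""
    lag_prod = x * np.conj(np.roll(x, tau))
    return np.mean(lag_prod * np.exp(-2j * np.pi * alpha * n))

print(abs(cyclic_autocorr(x, 0.0, 0)))       # 1.0: ordinary signal power
print(abs(cyclic_autocorr(x, 1.0 / T0, 5)))  # ~0.32: alpha = bit rate
print(abs(cyclic_autocorr(x, 0.0437, 5)))    # ~0: not a cycle frequency
```

Note that at $\tau=0$ the rectangular-pulse BPSK envelope is constant, so the $\alpha = 1/T_0$ feature must be sought at a nonzero delay such as $\tau = T_0/2$.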
http://mathhelpforum.com/algebra/86926-help-sums-integers-please.html
1. ## Help with sums of integers please.

Question is:

i) Find the sum of the integers from 29 to 107 inclusive.

ii) Hence find the value of $\displaystyle\sum\limits_{i=29}^{107} (4 + 3i)$.

I can follow what to do until it gets to the part in brackets (4 + 3i), I am not sure why it is that value? It doesn't seem to fit in with the rest of the equation when I have calculated it.

2. $\displaystyle \sum\limits_{k = 1}^{107} {\left( {4 + 3k} \right)} = \sum\limits_{k = 1}^{107} {\left( 4 \right)} + 3\sum\limits_{k = 1}^{107} {\left( k \right)} = (107)(4) + 3\frac{{\left( {107} \right)\left( {108} \right)}}{2}$

Here is the general idea: $\displaystyle \sum\limits_{k = 29}^{107} {\left( k \right)} = \sum\limits_{k = 1}^{107} {\left( k \right)} - \sum\limits_{k = 1}^{28} {\left( k \right)}$
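The hint is easy to sanity-check numerically; here is a small Python check (not part of the original thread):

```python
# Brute-force check of both parts; there are 107 - 29 + 1 = 79 terms.
part_i = sum(range(29, 108))                     # integers 29..107 inclusive
part_ii = sum(4 + 3 * i for i in range(29, 108))

print(part_i)    # 5372
print(part_ii)   # 16432

# Same answers from the closed form n(n+1)/2, applied twice as in the hint.
s = lambda n: n * (n + 1) // 2
assert part_i == s(107) - s(28)
assert part_ii == 4 * 79 + 3 * part_i
```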
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=141&t=60148&p=228976
## G(not) and G

$\Delta G^{\circ} = -nFE_{cell}^{\circ}$

Johnathan Smith 1D
Posts: 108
Joined: Wed Sep 11, 2019 12:16 am

### G(not) and G

What is the difference between G(not) and G?

Tracy Tolentino_2E
Posts: 140
Joined: Sat Sep 07, 2019 12:17 am

### Re: G(not) and G

G(not) is the standard Gibbs free energy, so it's the energy in standard conditions (1 M, 1 atm, 25 degrees Celsius). G is the Gibbs free energy in other conditions.

ASetlur_1G
Posts: 101
Joined: Fri Aug 09, 2019 12:17 am

### Re: G(not) and G

Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?

KarineKim2L
Posts: 100
Joined: Fri Aug 30, 2019 12:16 am

### Re: G(not) and G

In addition to all of the above, the relationship between G not and G can be seen in the equation G not = G + RTlnQ.

Rafsan Rana 1A
Posts: 55
Joined: Sat Aug 24, 2019 12:16 am

### Re: G(not) and G

Isn't the equation G = Gnot + RTlnQ?

Jainam Shah 4I
Posts: 130
Joined: Fri Aug 30, 2019 12:16 am

### Re: G(not) and G

G(not) is at standard conditions whereas G itself doesn't have to be at standard conditions.

Sean Tran 2K
Posts: 65
Joined: Sat Aug 17, 2019 12:17 am

### Re: G(not) and G

G(not) represents standard Gibbs free energy.

Leyna Dang 2H
Posts: 104
Joined: Thu Jul 25, 2019 12:17 am

### Re: G(not) and G

G(not) is the standard Gibbs free energy, thus it is under standard conditions, unlike G.

Sanjana K - 2F
Posts: 102
Joined: Sat Sep 07, 2019 12:17 am
Been upvoted: 1 time

### Re: G(not) and G

Rafsan Rana 1A wrote: Isn't the equation G = Gnot + RTlnQ?

Yes, it should be delta G = delta G(naught) + RTlnQ.
Maya Beal Dis 1D
Posts: 100
Joined: Sat Aug 17, 2019 12:16 am
Been upvoted: 2 times

### Re: G(not) and G

In problem 5G.13 you calculate the delta G of the reaction at equilibrium and then use whether that value is positive or negative to see which way the reaction will proceed (towards reactants or products), but if the reaction is at equilibrium doesn't that mean the reaction is going both ways at the exact same rate and would therefore favor neither direction?

KaleenaJezycki_1I
Posts: 127
Joined: Sat Aug 17, 2019 12:18 am
Been upvoted: 2 times

### Re: G(not) and G

ASetlur_1G wrote: Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?

Yes, for the most part.

BCaballero_4F
Posts: 94
Joined: Wed Nov 14, 2018 12:22 am

### Re: G(not) and G

ASetlur_1G wrote: Is it correct to say that G(not) is used at equilibrium (K value) and G is used for other conditions (Q value)?

Yes, this is essentially correct to say.

205405339
Posts: 77
Joined: Thu Jul 11, 2019 12:16 am

### Re: G(not) and G

G(not) is under standard conditions whereas G is not, and G(not) will contribute to the value of G.

Nathan Rothschild_2D
Posts: 131
Joined: Fri Aug 02, 2019 12:15 am

### Re: G(not) and G

Naught always means 1 M of solution or 1 atm at 298 K (same as 25 Celsius).

Zoe Gleason 4F
Posts: 51
Joined: Tue Jul 23, 2019 12:15 am

### Re: G(not) and G

G naught will be under standard conditions, which are 1.0 M, 1 atm, and 25 C.
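As a numerical illustration of the relation the thread converges on, ΔG = ΔG° + RT ln Q (a sketch with made-up example numbers, not a textbook problem):

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # K, i.e. 25 degrees Celsius (the standard temperature above)

def delta_G(delta_G0, Q):
    """Gibbs free energy under nonstandard conditions: dG = dG0 + RT ln Q."""
    return delta_G0 + R * T * math.log(Q)

# At Q = 1 (standard conditions) the correction term vanishes, so dG = dG0.
print(delta_G(-5000.0, 1.0))     # -5000.0

# At equilibrium dG = 0, which forces Q = K = exp(-dG0 / RT).
K = math.exp(-(-5000.0) / (R * T))
print(abs(delta_G(-5000.0, K)))  # ~0
```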
https://math.stackexchange.com/questions/2681358/spectral-families-of-commuting-operators
# Spectral families of commuting operators

Consider two self-adjoint bounded operators $A$ and $B$ on a separable Hilbert space. According to the spectral theorem we can write $$A=\int_{-\infty}^{\infty} x\, d E^{A}_x, \quad B=\int_{-\infty}^{\infty} y\, d E^{B}_y$$ where $E^{A}_x$ and $E^{B}_y$ are the spectral families of projectors of $A$ and $B$ respectively. Is there a simple way to prove that if $[A,B]=AB-BA=0$, then $[E^{A}_x,E^{B}_y]=0$ for all $x,y$?

From $AB=BA$, you get $A^nB=BA^n$ for all $n$, and immediately $p(A)B=Bp(A)$ for any polynomial $p$. By Stone-Weierstrass, $f(A)B=Bf(A)$ for any $f\in C(\sigma(A))$. Now let $$\Sigma=\{\Delta:\ \Delta\ \text{ is Borel and } E^A(\Delta)B=BE^A(\Delta)\}.$$ From the fact that $E^A$ is a spectral measure, it is quickly deduced that $\Sigma$ is a $\sigma$-algebra. If $V\subset\sigma(A)$ is any open set, it may be written as a disjoint union of intervals, which allows us to see that there exists a sequence $\{f_n\}\subset C(\sigma(A))$ such that $f_n\nearrow 1_V$ pointwise. Then, for any $x\in H$,

\begin{align} \langle BE^A(V)x,x\rangle &=\langle E^A(V)x,B^*x\rangle =\int_{\sigma(A)}1_V\,d E^A_{x,B^*x}\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{x,B^*x} =\lim_n\langle f_n(A)x,B^*x\rangle\\ \ \\ &=\lim_n\langle Bf_n(A)x,x\rangle=\lim_n\langle f_n(A)Bx,x\rangle\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{Bx,x} =\int_{\sigma(A)}1_V\,d E^A_{Bx,x}\\ \ \\ &=\langle E^A(V)Bx,x\rangle. \end{align}

As $x$ was arbitrary, $E^A(V)B=BE^A(V)$. So $V\in\Sigma$, and thus $\Sigma$ contains all open subsets of $\sigma(A)$, and then the whole Borel $\sigma$-algebra of $\sigma(A)$. Thus $E^A(\Delta)B=BE^A(\Delta)$ for any Borel $\Delta\subset\sigma(A)$.
So far we haven't even used that $B$ is self-adjoint; but now we can use that fact to repeat the above argument for a fixed $\Delta_1\subset\sigma(A)$, to obtain $$E^A(\Delta_1)E^B(\Delta_2)=E^B(\Delta_2)E^A(\Delta_1)$$ for any pair of Borel sets $\Delta_1\subset\sigma(A)$, $\Delta_2\subset\sigma(B)$.

What you need is a way to construct $E_A$ and $E_B$ directly from $A,B$. This is accomplished through Stone's Formula: $$\frac{1}{2}\left(E(a,b)x+E[a,b]x\right) \\ = \lim_{\epsilon\downarrow 0}\frac{1}{2\pi i} \int_{a}^{b}(A-(r+i\epsilon)I)^{-1}x-(A-(r-i\epsilon)I)^{-1}x\, dr$$ This is a contour integral around $[a,b]$ with the vertical pieces missing. Using strong limits you can isolate $E(a,b)x$ and $E[a,b]x$ through limits in $a$, $b$. You can actually do this in a constructive way using the $\tan^{-1}$ function to explicitly integrate and take the limit in the strong topology.

If $AB=BA$, then $$(A-\lambda I)B = B(A-\lambda I) \\ (A-\lambda I)(B-\mu I)=(B-\mu I)(A-\lambda I) \\ (B-\mu I)(A-\lambda I)^{-1} = (A-\lambda I)^{-1}(B-\mu I) \\ (A-\lambda I)^{-1}(B-\mu I)^{-1}=(B-\mu I)^{-1}(A-\lambda I)^{-1}.$$ From this and Stone's formula, you have a constructive proof that the spectral measures of $A$ and $B$ commute. I think it helps to see how the spectral measure is constructively determined on intervals of $\mathbb{R}$.
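As a finite-dimensional sanity check of the statement (an illustration of my own, not part of either answer above): for commuting Hermitian matrices, the spectral projections built from their eigendecompositions commute.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two commuting self-adjoint matrices: shared eigenvectors, different eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0, 5.0]) @ Q.conj().T
B = Q @ np.diag([2.0, -1.0, 7.0, 0.0, 3.0]) @ Q.conj().T

def spectral_projection(M, t):
    """E^M((-inf, t]): orthogonal projector onto the span of eigenvectors
    of the Hermitian matrix M with eigenvalue <= t."""
    w, V = np.linalg.eigh(M)
    keep = V[:, w <= t]
    return keep @ keep.conj().T

EA = spectral_projection(A, 3.5)
EB = spectral_projection(B, 1.0)
print(np.linalg.norm(A @ B - B @ A))      # ~0: A and B commute
print(np.linalg.norm(EA @ EB - EB @ EA))  # ~0: so do their spectral projections
```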
http://math.boisestate.edu/m502/
# HW19

Exercises (not today)

Supplemental problems

1. $\star$ Show that $2$ is not definable in $(\QQ,+)$.
2. $\star\star$ Kunen, exercise II.15.5.
3. $\star\star\star$ Show that + is not definable in $(\NN,s)$, where $s(n)=n+1$ denotes the successor function.
4. $\star\star\star$ Is the ordering $<$ definable in $(\QQ,+,\times)$?

# HW18

Exercises (due Wednesday, April 23)

1. Show that the class of all finite graphs is not first-order axiomatizable (that is, there is no theory $\Sigma$ such that the models of $\Sigma$ are exactly the finite graphs).
2. Show that the class of all infinite graphs is not finitely axiomatizable (that is, there is no finite theory $\Sigma$ such that the models of $\Sigma$ are exactly the infinite graphs).

Supplemental problems

1. $\star$ Show that $\RR$ and $\RR\smallsetminus\{0\}$ are not isomorphic as linear orders.
2. $\star\star$ Show that the class of connected graphs is not first-order axiomatizable.
3. $\star\star$ Kunen, exercise II.3.8.
4. $\star\star$ Kunen, exercise II.13.11.
5. $\star\star\star$ Kunen, exercise II.13.12.

# HW17

Exercises (due Monday, April 21)

1. Show that the relation defined by $\sigma\sim\tau$ if and only if $\Sigma\vdash\sigma=\tau$ is an equivalence relation.
2. Suppose $\Sigma$ is a complete theory with a finite model. Show that $\Sigma$ does not have any infinite models.

Supplemental problems

1. $\star$ Suppose $\Sigma$ is a complete theory with an infinite model. Can $\Sigma$ have any finite models?
2. $\star\star$ Kunen, exercise II.12.23.
3. $\star\star$ Let TA (true arithmetic) be the theory of the structure $(\NN,+,\cdot,0,1)$. Show that TA has a model $N$ containing $\NN$ and containing elements “larger” than $\NN$.
4. $\star\star\star$ Show that every model of TA (from the previous problem) has a copy of $\NN$ as an initial segment, and that this copy has no supremum.

# HW16

Exercises (due Monday, April 14)

1. Suppose that $P$ is a unary predicate and $Q$ is a propositional variable.
Give a formal proof of the following: $(\forall x(P(x)\to Q))\to((\forall xP(x))\to Q)$.
2. Suppose that $R$ and $S$ are unary predicates. Use UG, EI and any other results you like to show there exists a formal proof of the following: $\forall x(R(x)\to S(x))\to(\exists x R(x)\to \exists x S(x))$.
3. Kunen, exercise II.11.16. Suppose $R$ is a binary predicate and use the soundness theorem to show that there does not exist a formal proof of $\forall y\exists x R(x,y)\to\exists x\forall y R(x,y)$.

Supplemental problems

1. $\star$ Show that logical axiom 3 is valid.
2. $\star$ Give an example of a structure $A$ and a formula $\phi(x)$ such that $A\models\exists x\phi(x)$ but there is no term $\tau$ such that $A\models\phi(\tau)$.
3. $\star\star$ Kunen, exercise II.10.6.
4. $\star\star$ Kunen, exercise II.11.15. Give a formal proof from ZF that $\exists y\forall x(x\notin y)$.
5. $\star\star\star$ Kunen, exercise II.11.11.

# HW15

Exercises (due Wednesday, April 9)

1. Show that the fourth structure on page 12 satisfies the formula $\forall x \exists y (yEx \wedge (\exists z) (z\neq x \wedge yEz))$ directly from the definition of $\models$.
2. Find a formula $\phi$ with one free variable $x$ such that $(\RR,+,\cdot)\models\phi[\sigma]$ iff $\sigma(x)=2$.

Supplemental problems

1. $\star$ Show that if $\Sigma$ has an infinite model then $\Sigma$ has an uncountable model.
2. $\star\star$ Kunen, exercise II.7.19.
3. $\star\star$ Complete Exercise 2 with the number $2$ replaced by an arbitrary rational number $a/b$.
4. $\star\star\star$ What are the limits of the previous problem?
5. $\star\star\star$ Kunen, exercise II.7.20.
6. $\star\star\star$ Kunen, exercise II.7.21.

# HW14

Exercises (due Monday, March 31)

1. Give a proof of Lemma II.5.4.
2. Let $L=\{E\}$ where $E$ is a binary relational symbol. Write a set of $L$-sentences $\Sigma$ such that the models of $\Sigma$ are precisely the equivalence relations with exactly $3$ equivalence classes.

Supplemental problems

1.
$\star$ Prove that the connectives $\vee$, $\wedge$, and $\leftrightarrow$ can all be defined using only $\neg$ and $\rightarrow$.
2. $\star\star$ Prove that $(\QQ,<)$ is isomorphic to $(\QQ\smallsetminus\{0\},<)$, but that $(\RR,<)$ is not isomorphic to $(\RR\smallsetminus\{0\},<)$.
3. $\star$ Find a formula $\phi$ (in the trivial language) such that every model of $\phi$ has size exactly $5$.
4. $\star\star$ Find a language $L$ and a set of $L$-sentences $\Sigma$ such that for all $n\in\NN$, there is a model of $\Sigma$ of size $n$ if and only if $n$ is even.
5. $\star\star\star$ Prove that any partial order $R$ on a finite set can be extended to a linear order $R’\supset R$ on that set.

# HW13

Exercises (due Wednesday, March 19)

1. Convert the expressions from Polish to standard logical notation.
   • $\forall a\forall b\rightarrow=n\times ab\vee =na=nb$
   • $\forall a\rightarrow\in aS\leq ab$
2. For each of the following informal mathematical statements, define a Polish lexicon that would allow you to express the statement, and then do so.
   • The polynomial $x^4+3x+5$ has a root.
   • The number $n$ is the sum of four squares.

Supplemental problems

1. $\star\star$ Kunen, exercise II.4.7
2. $\star\star\star$ Write a computer program that takes a Polish lexicon and string of symbols as input, and determines whether the given string is a well-formed expression.

# HW12

Exercises (not today)

Supplemental problems

1. $\star$ Kunen, exercise I.15.14.
2. $\star$ Kunen, exercise I.15.15.
3. $\star\star$ (via Andres) Let $S$ be the set of middle-third-intervals removed during the construction of the Cantor set. The elements of $S$ are strictly totally ordered from left to right. Show that $S$ with this ordering is isomorphic to $\QQ$ with its usual ordering.
4. $\star\star\star$ (via Andres) A train carries $\omega$ many passengers. It then passes $\omega_1$ many stations numbered $0,1,\ldots,\omega_1$. At each station one passenger gets off and then $\omega$ many passengers get on.
How many passengers remain when the train pulls into the last station?
5. $\star\star\star$ (via Andres) Show in ZF that if there is an injective function $\omega\to P(X)$ then there is an injective function $P(\omega)\to P(X)$.

# HW11

Exercises (not today)

Supplemental problems

1. $\star$ Define $+$, $\times$, and $<$ on the rational numbers constructed in class.
2. $\star\star$ Define $+$, $\times$, and $<$ on the real numbers constructed in class.
3. $\star$ If $G_n$ are dense open subsets of $\RR$ then $\bigcap G_n$ is nonempty. [Hint: look up the Baire category theorem.]
4. $\star\star$ If $G_n$ are dense open subsets of $\RR$ then $\bigcap G_n$ is uncountable and dense.
5. $\star\star$ Kunen, exercise I.15.10 (forget the last sentence).
6. $\star\star$ Kunen, exercise I.15.11 (forget the last sentence).
7. $\star\star$ Kunen, exercise I.15.12 (forget the last sentence).
8. $\star\star\star$ The last sentence of Kunen, exercises I.15.10–11.

# HW10

Exercises (due Monday, March 10)

1. Kunen, exercise I.14.9. If you know the rank of $x$, then what is the rank of $\bigcup x$? (As always, see the text for a more precise problem statement.)
2. For any $\alpha$ the Union axiom holds in $V_\alpha$.
3. If $\alpha$ is a limit then the Pairing axiom holds in $V_\alpha$.

Supplemental problems

1. $\star$ If $\alpha$ is a limit then the Power Set axiom holds in $V_\alpha$.
2. $\star$ If $\alpha$ is a limit and $x,y\in V_\alpha$ and $f$ is a function from $x$ to $y$ then $f\in V_\alpha$.
3. $\star\star$ If $\kappa$ is inaccessible then $|V_\kappa|=\kappa$.
4. $\star\star$ Kunen, exercise I.14.14
5. $\star\star$ Kunen, exercise I.14.17
6. $\star\star\star$ Kunen, exercise I.14.19
7. $\star\star\star$ The rest of Kunen, exercise I.14.21. If $\gamma>\omega$ is a limit then $V_\gamma$ is a model of ZC. On the other hand Replacement does not hold in $V_{\omega+\omega}$.
8. You may also attempt any of the other exercises in this section.
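For the programming problem in HW13's supplemental list (checking well-formedness of a Polish-notation string), one workable approach (my own sketch, not course-provided code) scans left to right with a count of still-needed arguments: a prefix string is well-formed exactly when the count first reaches zero at the final token.

```python
def is_well_formed(tokens, arity):
    """tokens: list of symbols; arity: dict mapping each lexicon symbol to its
    arity (0 for variables/constants).  Start needing one expression; each
    symbol of arity k consumes one slot and opens k new ones."""
    needed = 1
    for i, t in enumerate(tokens):
        if t not in arity:
            return False          # symbol not in the lexicon
        needed += arity[t] - 1
        if needed == 0:
            return i == len(tokens) - 1   # must end exactly here
    return False                  # ran out of tokens while still needing more

# Lexicon for the first HW13 expression, treating forall's bound variable
# as one of its two arguments (an illustrative convention, not the course's).
arity = {'forall': 2, '->': 2, 'v': 2, '=': 2, '*': 2, 'a': 0, 'b': 0, 'n': 0}
expr = ['forall', 'a', 'forall', 'b', '->', '=', 'n', '*', 'a', 'b',
        'v', '=', 'n', 'a', '=', 'n', 'b']
print(is_well_formed(expr, arity))            # True
print(is_well_formed(['->', '=', 'n'], arity))  # False: incomplete
```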
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-13-counting-and-probability-13-2-permutations-and-combinations-13-2-asses-your-understanding-page-855/23
## Precalculus (10th Edition)

The ordered arrangements:

$abc,abd,abe,acb,acd,ace,adb,adc,ade,aeb,aec,aed$
$bac,bad,bae,bca,bcd,bce,bda,bdc,bde,bea,bec,bed$
$cab,cad,cae,cba,cbd,cbe,cda,cdb,cde,cea,ceb,ced$
$dab,dac,dae,dba,dbc,dbe,dca,dcb,dce,dea,deb,dec$
$eab,eac,ead,eba,ebc,ebd,eca,ecb,ecd,eda,edb,edc$

$P(5,3)=60$

We know that $P(n,r)=n(n-1)(n-2)\cdots(n-r+1)$. Also $P(n,0)=1$ by convention. Hence, $P(5,3)=5\cdot4\cdot3=60$.
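The listing above can be cross-checked mechanically (a quick check of my own, not part of the textbook answer):

```python
from itertools import permutations

# All ordered 3-letter arrangements of a, b, c, d, e.
arrangements = [''.join(p) for p in permutations('abcde', 3)]
print(len(arrangements))  # 60, matching P(5,3) = 5*4*3
print(arrangements[:3])   # ['abc', 'abd', 'abe']
```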
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=5023890
### #ActualAshaman73

Posted 21 January 2013 - 08:09 AM

In general, ATI and NVIDIA will handle not clearly defined GLSL code differently. NVIDIA is known for more lax handling of GLSL syntax, whereas ATI often requires strict syntax. It is best to compile and run your GLSL code as often as possible on both platforms to detect errors early. Just guessing, but I believe that the pow implementation is more picky on NVIDIA (per the definition, the behaviour of pow(x,y) is undefined if x<0, or if x=0 and y=0). Therefore I would put the pow function inside the if-clause:

if (SpecularFactor > 0.0)
{
    SpecularFactor = pow(SpecularFactor, specularPower);
    SpecularColor = _light.colour.rgb * specularIntensity * SpecularFactor;
}
2013-12-11 09:21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275121569633484, "perplexity": 12656.048964260703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164033950/warc/CC-MAIN-20131204133353-00089-ip-10-33-133-15.ec2.internal.warc.gz"}
https://mindmatters.ai/2022/09/page/4/
Mind Matters Natural and Artificial Intelligence News and Analysis

# Monthly Archive September 2022

## Amazon’s Rings of Power: Some Warning Signs But Still Hope

The screenwriters had to create dialogue from Tolkien’s notes about the world in which Lord of the Rings is set

## Taiwan Has Bet Its Uncertain Future on Advanced Microchips

An increasingly belligerent China has long claimed to own Taiwan, which manufactures 90% of the world’s *advanced* microchips

Taiwan is the world’s largest manufacturer of microchips, and not just by a small margin. Taiwan manufactures 65% of the microchips used in everything from smartphones to missiles. This compares to the U.S. at 10% and China at 5%. South Korea and Japan produce the rest. More important, Taiwan manufactures 90% of the world’s advanced microchips. In other words, without Taiwan, the world’s supply of microchips would come to a standstill, something that has been keenly felt since 2021 when chip shortages affected the auto industry. So far, the world’s dependence on Taiwan’s chips has protected the self-governing island nation from a potential invasion or ruinous trade sanctions from China. Earlier, we looked at U.S. House Speaker Nancy Pelosi’s visit …

## Can Religion Without Belief “Make Perfect Sense”?

Philosopher Philip Goff, a prominent voice in panpsychism, also defends the idea of finding meaning in a religion we don’t really believe

Durham University philosopher Philip Goff, co-editor of Is Consciousness Everywhere? Essays on Panpsychism (November 1, 2022), has an interesting take on religion. While it’s common to assume that religious people are “believers,” he thinks that people can meaningfully be part of a religion without actually believing in it: But there is more to a religion than a cold set of doctrines. Religions involve spiritual practices, traditions that bind a community together across space and time, and rituals that mark the seasons and the big moments of life: birth, coming of age, marriage, death. This is not to deny that there are specific metaphysical views associated with each religion, nor that there is a place for assessing how plausible those views …

## The Vector Algebra Wars: A Word in Defense of Clifford Algebra

A well-recognized, deep problem with using complex numbers as vectors is that they only really work with two dimensions

Vector algebra is the manipulation of directional quantities. Vector algebra is extremely important in physics because so many of the quantities involved are directional. If two cars hit each other at an angle, the resulting direction of the cars is based not only on the speed they were traveling, but also on the specific angle they were moving at. Even if you’ve never formally taken a course in vector algebra, you probably have some experience with the easiest form of vector algebra — complex numbers (i.e., numbers that include the imaginary number i). In a complex number, you no longer have a number line, but, instead, you have a number plane. The image below shows the relationship between the real …

## Don’t Worship Math: Numbers Don’t Equal Insight

The unwarranted assumption that investing in stocks is like rolling dice has led to some erroneous conclusions and extraordinarily conservative advice

My mentor, James Tobin, considered studying mathematics or law as a Harvard undergraduate but later explained that I studied economics and made it my career for two reasons. The subject was and is intellectually fascinating and challenging, particularly to someone with taste and talent for theoretical reasoning and quantitative analysis. At the same time it offered the hope, as it still does, that improved understanding could better the lot of mankind. I was an undergraduate math major (at Harvey Mudd, not Harvard) and chose economics for much the same reasons. Mathematical theories and empirical data can be used to help us understand and improve the world. For example, during the Great Depression in the 1930s, governments everywhere had so …

## Analysis: Can “Communitarian Atheism” Really Work?

Ex-Muslim journalist Zeeshan Aleem, fearing that we are caught between theocracy and social breakdown, sees it as a possible answer

Zeeshan Aleem, an American journalist raised as a Muslim — but now an atheist — views his country as caught between “the twin crises of creeping theocracy and the death of conventional religion.” He seeks a new kind of atheism — communitarian atheism — as part of a solution: A rapidly increasing share of Americans are detaching from religious communities that provide purpose and forums for moral contemplation, and not necessarily finding anything in their stead. They’re dropping out of church and survey data suggests they’re disproportionately likely to be checked out from civic life. Their trajectory tracks with a broader decades-long trend of secular life defined by plunging social trust, faith in institutions, and participation in civil society. My …

## There Really Is a “Batman” and He Isn’t in the Comics

Daniel Kish lost both eyes to cancer as a baby. With nothing to lose, he discovered human echolocation

Perhaps one should not really say that Daniel Kish “discovered” human echolocation. Yet, having no other options as a blind infant cancer survivor, he discovered early on — and began to publicize — a sense that few sighted persons would even think of: He calls his method FlashSonar or SonarVision. He elaborated for the BBC: Do people need to be blind to do it? Not necessarily: In 2021, a small study led by researchers at Durham University showed that blind and sighted people alike could learn to effectively use flash sonar in just 10 weeks, amounting to something like 40 to 60 hours of total training. By the end of it, some of them were even better at specific tests …

## News From the Search for Extraterrestrial Life 3

The Webb gets a closer look at an exoplanet

Exoplanets are hard to spot but the James Webb Space Telescope got an image of one (HIP 65426b), reported September 1: The planet is more than 10,000 times fainter than its host star and about 100 times farther from it than Earth is from the Sun (~93 million miles), so it easily could be spotted when the telescope’s coronagraphs removed the starlight. The exoplanet is between six and 12 times the mass of Jupiter—a range that could be narrowed once the data in these images is analyzed. The planet is only 15 million to 20 million years old, making it very young compared to our 4.5-billion-year-old Earth. Isaac Schultz, “See Webb Telescope’s First Images of an Exoplanet” at Gizmodo (September …

## Madness: Why Sci-Fi Multiverse Stories Often Feel Boring

In a multiverse, every plot development, however implausible, is permitted because we know it won’t affect our return to the expected climax

Filmmakers communicate with audiences using common and accepted story devices (tropes) that viewers identify with — maybe the “average person takes the crown” or “love triangle.” Some tropes are overused or used in ways that undermine the story. In discussing what I think went wrong with Dr. Strange in the Multiverse of Madness (2022) and some similar films, I’ll use the word trope to refer to any story element that is used to push the plot. I find four tropes particularly annoying: the Multiverse, Time Travel, the Liar Revealed, and the MacGuffin Chase. Because I’ve just finished reviewing Multiverse of Madness, let’s start with the Multiverse trope. Before reviewing the Dr. Strange sequel, I’d written an essay, “Dr. Strange: Can …
2023-03-24 07:16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.259904682636261, "perplexity": 3511.3275208620826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00125.warc.gz"}
http://mathhelpforum.com/algebra/214361-basic-arithmetic-surds-print.html
# Basic Arithmetic and Surds

• March 6th 2013, 09:31 PM
alicesayde
Basic Arithmetic and Surds
Express (2root5 - 3root10)^2 in the form a+broot2 and hence find the values of a and b. If you can please help and explain the method and how to solve equations similar to this. :) Thanks
• March 6th 2013, 10:47 PM
veileen
Re: Basic Arithmetic and Surds
That is not an equation. Anyway: $(a-b)^2=a^2-2ab+b^2$, so:
$(2\sqrt 5-3\sqrt {10})^2=(2\sqrt 5)^2-2\cdot 2\sqrt 5 \cdot 3\sqrt {10} + (3\sqrt {10})^2=$
$=4\cdot 5-12\sqrt{5\cdot 10}+9\cdot 10=20-12\cdot 5 \sqrt 2 +90=110-60\sqrt 2$
a = 110, b = -60
• March 6th 2013, 10:49 PM
MINOANMAN
Re: Basic Arithmetic and Surds
Alice show us some of your work...apply the well known identity (a-b)^2 =a^2+b^2-2ab....where a and b could be anything...in your case 2root5 and 3root10......try it
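An expansion of this type is easy to sanity-check numerically; the sketch below squares the surd and recovers the coefficient a in the form a + b√2, given that the cross term -2·(2√5)(3√10) = -12√50 fixes b = -60:

```python
import math

value = (2 * math.sqrt(5) - 3 * math.sqrt(10)) ** 2
b = -60                          # coefficient of sqrt(2) from the cross term
a = value - b * math.sqrt(2)     # recover a from value = a + b*sqrt(2)
print(round(a), b)               # → 110 -60
```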
2015-02-01 09:58:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43890050053596497, "perplexity": 3395.9644460752434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120453043.42/warc/CC-MAIN-20150124172733-00204-ip-10-180-212-252.ec2.internal.warc.gz"}
http://mathhelpforum.com/trigonometry/39721-triangle-problem-sines.html
# Thread: Triangle Problem with sines

1. ## Triangle Problem with sines

A forest fire is spotted from two fire towers. The triangle determined by the two towers and the fire has angles of 28 degrees and 37 degrees at the tower vertices. If the towers are 3000 meters apart, which one is closer to the fire?

2. Originally Posted by victorfk06
A forest fire is spotted from two fire towers. The triangle determined by the two towers and the fire has angles of 28 degrees and 37 degrees at the tower vertices. If the towers are 3000 meters apart, which one is closer to the fire?
1. Draw a sketch.
2. The angle at the fire is 115°.
3. Use the Sine Rule, where x is the distance from the 28° tower to the fire (the side opposite the 37° vertex) and y is the distance from the 37° tower to the fire (the side opposite the 28° vertex): $\frac{x}{3000}=\frac{\sin(37^\circ)}{\sin(115^\circ)}$ $\frac{y}{3000}=\frac{\sin(28^\circ)}{\sin(115^\circ)}$
4. Without any calculations it is obvious that y < x because sin(28°) < sin(37°), so the tower at the 37° vertex is closer to the fire.
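The Sine Rule computation can also be carried out numerically; a small sketch (the variable names are mine):

```python
import math

towers_apart = 3000.0                      # metres between the towers
A, B = math.radians(28), math.radians(37)  # angles at the tower vertices
F = math.pi - A - B                        # angle at the fire: 115 degrees

# law of sines: each side over the sine of its opposite angle is constant
x = towers_apart * math.sin(B) / math.sin(F)  # 28-degree tower to fire
y = towers_apart * math.sin(A) / math.sin(F)  # 37-degree tower to fire
print(round(x), round(y))  # → 1992 1554, so the 37-degree tower is closer
```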
2017-10-19 05:25:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6964342594146729, "perplexity": 1319.3309720613343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00402.warc.gz"}
https://ai.stackexchange.com/questions/32145/how-do-the-bfs-and-dfs-search-algorithms-choose-between-nodes-with-the-same-pri/32154#32154
# How do the BFS and DFS search algorithms choose between nodes with the "same priority"? I am currently taking an Artificial Intelligence course and learning about DFS and BFS. If we take the following example: From my understanding, the BFS algorithm will explore the first level containing $$B$$ and $$C$$, then the second level containing $$D,E,F$$ and $$G$$, etc., till it reaches the last level. I am lost concerning which node between $$B$$ and $$C$$ (for example) will the BFS expand first? Originally, I thought it is different every time, and, by convention, we choose to illustrate that it's done from the left to the right (so exploring $$B$$ then $$C$$), but my professor said that our choice between $$B$$ and $$C$$ depends on each case and we choose the "shallowest node first". In made examples, there isn't a distance factor between $$A$$ and $$B$$, and $$A$$ and $$C$$, so how could one choose then? My question is the same concerning DFS where I was told to choose the "deepest node first". I am aware that there are pre-order versions and others, but the book "Artificial Intelligence - A Modern Approach, by Stuart Russel" didn't get into them. I tried checking the CLRE algorithms book for more help but the expansion is done based on the order in the adjacency list which didn't really help. BFS and DFS are usually applied to unweighted graphs (or, equivalently, to graphs where the edges have all the same weights). In this case, BFS is optimal, i.e., assuming a finite branching factor, it eventually finds the optimal solution, but it doesn't mean that it takes a short time, in fact, it might well be the opposite, depending on the search space. DFS would not be optimal because it may not terminate if there is an infinite path. In the case of unweighted graphs, BFS really proceeds level-by-level. In other words, it starts at the root (level $$l=0$$), then expands (i.e. 
adds to the FIFO queue) all children of the root (level $$l=1$$), then it expands all children of the children of the root (level $$l=2$$), and so on. So, for example, all children at level $$l=2$$ have a distance of $$2$$ to the root (if we assume that edges have a weight of $$1$$). The order in which you choose nodes at a certain level to add to your FIFO queue is, as far as I know, typically from left-to-right, but you could, in principle, choose them in different ways (e.g. from right-to-left). So, this is a convention. Of course, this choice may affect when (and if, in the case of DFS) you find the solution. Now, if we applied BFS to a weighted graph, then what happens here? It depends. If you ignore the weights, then it is the usual BFS. If you take into account the weights, it depends on how you take them into account. For example, if you choose to expand (i.e. add to the queue) nodes with the shortest path so far to the root, then this becomes a uniform-cost search (this is explained in the AIMA book, e.g. 3rd edition, section 3.4.2, p. 83). So, maybe your professor has in mind the uniform-cost search when he's telling you to choose the "shallowest node first", but you need to talk to him, tell him about uniform-cost search, and ask him if that's what he means. It also seems to me that your original idea of how DFS works is correct. It goes deep, then backtracks (without taking into account weights). If you take into account the weights, I don't know what DFS could turn into. Either one. The BFS algorithm and DFS algorithm do not specify. Typically, it's programmed as left-to-right, just because that's the way programmers think about trees. It doesn't have to be. Note that DFS isn't "deepest node first" either. Imagine that nodes H and I in your tree did not exist; D, J, K, E, B would be a perfectly valid DFS traversal of B, even though J and K are deeper than D. So would B, D, E, J, K even though E is the parent of J and K! 
DFS says that you look at a node's children before you look at other nodes on the same layer, but it doesn't say you have to look at the node's children before the node itself. In fact there are three well-known variants (pre-order, post-order and in-order) depending on whether you visit each node before, after or in the middle of its children. Now, if this is for an AI then you probably do care about the order. If this tree represents a game tree, then you probably want to estimate which node is likely to have the best outcome for the AI player, and check that one first. This can be called "best-first search". • Your first 3 paragraphs are correct, but I think it may be misleading to say that DFS is not "deepest node first". You go as deep as possible in the current path, then backtrack and do that again. So, in your example, D is really the deepest node in that path. This is the reason why it's called depth-first, but I understand your example. In your last paragraph, note that best-first search is a family of algorithms, of which A* is one instantiation. The difference between these best-first search algorithms and e.g. BFS is that you use an heuristic function to guide the search. – nbro Oct 22 '21 at 21:02
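The FIFO-queue and stack mechanics discussed in the answers can be sketched in a few lines. The tree dictionary below is illustrative (only the first two levels of the question's figure), and the sibling order depends purely on how children are pushed:

```python
from collections import deque

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}

def bfs(root, left_to_right=True):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        children = tree[node] if left_to_right else list(reversed(tree[node]))
        queue.extend(children)  # FIFO: a whole level is finished before the next
    return order

def dfs_preorder(root, left_to_right=True):
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        children = tree[node] if left_to_right else list(reversed(tree[node]))
        stack.extend(reversed(children))  # LIFO: children are explored next
    return order

print(bfs("A"))           # → ['A', 'B', 'C', 'D', 'E', 'F', 'G']
print(bfs("A", False))    # → ['A', 'C', 'B', 'G', 'F', 'E', 'D']
print(dfs_preorder("A"))  # → ['A', 'B', 'D', 'E', 'C', 'F', 'G']
```

Both traversals visit the same nodes; only the push order, and hence the tie-breaking between siblings, differs — which is exactly the convention the question asks about.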
2022-01-24 13:38:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7735172510147095, "perplexity": 463.160405860484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304570.90/warc/CC-MAIN-20220124124654-20220124154654-00124.warc.gz"}
http://grouper.ieee.org/groups/802/3/10G_study/email/msg05381.html
# [802.3ae] 10GBASE-X PCS; status register definition?

All,

I'm looking for clarification on how the PMA/PMD management register 1.1.2, "Receive Link Status", should behave when the PHY instance is a 10GBASE-X PCS/PMA. The specification describes it thus:

> When read as a one, bit 1.1.2 indicates that the PMA is locked to the received signal. When read as a zero, bit 1.1.2 indicates that the PMA is not locked to the received signal. This bit shall be implemented with latching low behavior as defined in the introductory text of 45.2.

which I guess is aimed at the optional sync_err signal on the XSBI for the clause 49 PCS and clause 51 PMA. Thing is, it's not explicitly mapped to any similar signal (or should I say primitive) on the 10GBASE-X PCS/PMA boundary, nor is it stated how it should relate to the state of PMA lock of each and any of the 4 PMA lanes.

Does the draft need to be refined at this point? Or am I just failing to spot the reference?

Cheers
Gareth

--
 / /\/\  Gareth Edwards              mailto:gareth.edwards@xxxxxxxxxx
 \ \  /  Design Engineer
 / /  \  System Logic & Networking   Phone: +44 131 666 2600 x234
 \_\/\/  Xilinx Scotland             Fax:   +44 131 666 0222
2019-05-27 02:05:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5424522161483765, "perplexity": 7888.465604891264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260358.69/warc/CC-MAIN-20190527005538-20190527031538-00161.warc.gz"}
http://digitalrevolutionary.blogspot.com/2010/03/tag-clouds.html
## Wednesday, March 10, 2010

### Tag clouds

Verrry cool tag clouds.

A tag cloud or word cloud (or weighted list in visual design) is a visual depiction of user-generated tags, or simply the word content of a site, typically used to describe the content of web sites. Tags are usually single words and are normally listed alphabetically, and the importance of a tag is shown with font size or color. Thus, finding a tag both by alphabet and by popularity is possible. The tags are usually hyperlinks that lead to a collection of items that are associated with a tag.

In principle, the font size of a tag in a tag cloud is determined by its incidence. For a word cloud of weblog categories, for example, the frequency of use corresponds to the number of weblog entries assigned to a category. For small frequencies it is sufficient to assign a font size directly, for any count from one up to a maximum font size. For larger values, a scaling should be applied. In a linear normalization, the weight t_i of a descriptor is mapped to a size scale of 1 through f_max, where t_min and t_max specify the range of available weights.

$s_i = \left \lceil \frac{f_{\mathrm{max}}\cdot(t_i - t_{\mathrm{min}})}{t_{\mathrm{max}}-t_{\mathrm{min}}} \right \rceil$ for t_i > t_min; else s_i = 1

• s_i: display font size
• f_max: maximum font size
• t_i: count of tag i
• t_min: minimum count
• t_max: maximum count
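The linear normalization maps directly to code; a minimal sketch (the function and tag names are mine):

```python
import math

def font_size(count, min_count, max_count, max_size):
    """s_i = ceil(f_max * (t_i - t_min) / (t_max - t_min)) for t_i > t_min, else 1."""
    if count <= min_count or max_count == min_count:
        return 1
    return math.ceil(max_size * (count - min_count) / (max_count - min_count))

counts = {"python": 40, "latex": 12, "bfs": 3, "surds": 3}
lo, hi = min(counts.values()), max(counts.values())
print({tag: font_size(c, lo, hi, 5) for tag, c in counts.items()})
# → {'python': 5, 'latex': 2, 'bfs': 1, 'surds': 1}
```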
2018-04-19 09:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3332592248916626, "perplexity": 2398.619752304558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936833.6/warc/CC-MAIN-20180419091546-20180419111546-00428.warc.gz"}
https://mathoverflow.net/questions/19651/steins-method-proof-of-the-berry-ess%C3%A9en-theorem
# Stein's method proof of the Berry-Esséen theorem

The relevant paper is "An estimate of the remainder in a combinatorial central limit theorem" by Erwin Bolthausen. I would like to understand the estimate on page three, right before the sentence "where we used independence of $S_{n-1}$ and $X_n$": \begin{align}E|f'(S_n) - f'(S_{n-1})| &\le E \bigg(\frac{|X_n|}{\sqrt{n}} \big(1 + 2|S_{n-1}| + \frac{1}{\lambda} \int_0^1 1_{[z,z+\lambda]} (S_{n-1} + t \frac{X_n}{ \sqrt{n}}) dt\big)\bigg) \\ &\le \frac{C}{\sqrt{n}} \big(1 + \delta(\gamma, n-1) / \lambda\big)\end{align} that is, where $\delta(\gamma, n-1)/\lambda$ shows up, which is the error term in the Berry-Esséen bound. Here $S_n = \sum_{i=1}^n X_i / \sqrt{n}$ and $X_1, \ldots, X_n$ are iid with $E X_i = 0$, $E X_i^2 = 1$, and $E|X_i|^3 = \gamma$. Furthermore, denote by $\mathcal{L}_n$ the set of all sequences of $n$ random variables satisfying the above assumptions; then $\delta(\lambda, \gamma, n) = \sup \{ |E(h_{z,\lambda} (S_n)) - \Phi(h_{z,\lambda})| : z \in \mathbb{R}, X_1, \ldots, X_n \in \mathcal{L}_n \}$, where $h_{z, \lambda}(x) = ((1 + (z-x)/\lambda) \wedge 1) \vee 0$. Here $\delta(\gamma, n)$ is shorthand for $\delta(0, \gamma, n)$, and $h_{z,0}$ is interpreted as $1_{(-\infty, z]}$. I am mainly interested in verifying the second inequality, so I don't need to reproduce the definition of $f$ here, but it is related to $h$. This paper is freely available online through Springer. Thanks in advance.

• Reviving this after 10 years! The final bound shown in the proof looks like: $\delta(\gamma, n) \leq c\frac{\gamma}{\sqrt{n}}+\frac{\delta(\gamma, n-1)}{2}$. The goal is to show that $\delta(\gamma, n) \leq C\frac{\gamma}{\sqrt{n}}$. Using induction and the fact that $\delta(\gamma, 1) \leq 1$, I can get this statement, but not for a universal constant $C$. Each time the induction is applied the constant increases by a multiplicative factor larger than 1.
Could anyone who has looked at this paper help me out with the induction part? Thank you! Jul 13 '21 at 20:25 If you take expectation first with respect to $S_{n-1}$, then by Fubini's theorem the last term gives $$E \left[\frac{|X_n|}{\sqrt{n}}\frac{1}{\lambda} \int_0^1 P\left(z-t\frac{X_n}{\sqrt{n}} \le S_{n-1} \le z-t\frac{X_n}{\sqrt{n}} + \lambda\right) dt\right].$$ Now if $Y$ is a standard Gaussian random variable and $a\in \mathbb{R}$, then $$P(a\le S_{n-1} \le a+\lambda) \le P(a\le Y \le a+\lambda) + 2\delta(\gamma,n-1) \le \frac{\lambda}{\sqrt{2\pi}} + 2\delta(\gamma,n-1),$$ so the expectation above is bounded by $\frac{1}{\sqrt{n}}\left(\frac{1}{\sqrt{2\pi}}+2\frac{\delta(\gamma,n-1)}{\lambda}\right)$. • Hi Mark, thanks a lot for visiting my day-old question. I am still a little concerned about bounding the last term on LHS using $\delta(\gamma, n-1)$. By definition $\delta$ is the levy distance between the softened distribution function of $S_n$ and that of the standard gaussian. But the last term on the LHS is only the self-difference (so to speak) of the distribution of $S_n$ at different points of the real line, so I don't see how it connects back to the gaussian. Mar 29 '10 at 15:55 • Whoops, that wasn't what I meant to write. I'll go back and edit when I have a chance. The point is that the probability inside the integral is within $\delta(\gamma,n-1)$ of the corresponding probability for a Gaussian; the integral of the Gaussian probability is bounded by $\lambda/\sqrt{2\pi}$ (because the Gaussian density is bounded by $(2\pi)^{-1/2}$) and so that just goes into the constant. Mar 29 '10 at 16:27 • I agree that the Gaussian probability is bounded by $\lambda / \sqrt{2 \pi}$ and indeed I thought I could bound that by a constant. But since we are dividing $\delta$ by $\lambda$, and as is revealed in the end by the author, $\lambda$ is of order $1/ \sqrt{n}$, wouldn't that make it insufficient to bound with $c/\sqrt{n}$? 
Mar 29 '10 at 16:44 • I think you may be overlooking the $1/\sqrt{n}$ in the prefactor. I've edited my answer so it should be clearer now (as in, actually correct). Mar 29 '10 at 17:59 • Yes you are indeed right! The $1/\lambda$ cancels with the $\lambda$ in the integral. Thanks so much. Mar 30 '10 at 0:35
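As a concrete illustration of the $C/\sqrt{n}$ rate that the induction is meant to deliver (this is not part of Bolthausen's argument), one can compute the Kolmogorov distance exactly for Rademacher variables, which satisfy the assumptions with $\gamma = 1$:

```python
import math
from statistics import NormalDist

def kolmogorov_distance(n):
    """Exact sup_z |P(S_n <= z) - Phi(z)| for S_n = (X_1 + ... + X_n)/sqrt(n),
    with X_i iid Rademacher (+1 or -1, each with probability 1/2)."""
    phi = NormalDist()
    sup, cdf = 0.0, 0.0
    for k in range(n + 1):              # k "+1"s give S_n = (2k - n)/sqrt(n)
        p = math.comb(n, k) / 2**n
        z = (2 * k - n) / math.sqrt(n)
        g = phi.cdf(z)
        # the sup is attained at an atom, approached from the left or the right
        sup = max(sup, abs(cdf - g), abs(cdf + p - g))
        cdf += p
    return sup

for n in (16, 64, 256):
    d = kolmogorov_distance(n)
    print(n, round(d, 4), round(d * math.sqrt(n), 3))
```

The third printed column stays bounded (it approaches $1/\sqrt{2\pi} \approx 0.399$), consistent with a bound of the form $\delta(\gamma, n) \le C\gamma/\sqrt{n}$.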
2022-01-24 11:48:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791930317878723, "perplexity": 180.23343555350866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00715.warc.gz"}
https://tsfa.co/questions/Calculus/506357
# Find dy/dx: e^y sin(x) = x + xy

Differentiate both sides of $e^y\sin(x) = x + xy$ with respect to $x$, remembering that $y$ is a function of $x$.

Differentiate the left side. By the product rule,
$$\frac{d}{dx}\left[e^y\sin(x)\right] = \sin(x)\,\frac{d}{dx}\left[e^y\right] + e^y\,\frac{d}{dx}\left[\sin(x)\right].$$
By the chain rule, $\frac{d}{dx}\left[e^y\right] = e^y\,\frac{dy}{dx}$, and $\frac{d}{dx}\left[\sin(x)\right] = \cos(x)$, so the left side becomes
$$e^y\sin(x)\,\frac{dy}{dx} + e^y\cos(x).$$

Differentiate the right side. By the sum rule, $\frac{d}{dx}\left[x\right] = 1$, and by the product rule, $\frac{d}{dx}\left[xy\right] = y + x\,\frac{dy}{dx}$, so the right side becomes
$$1 + y + x\,\frac{dy}{dx}.$$

Reform the equation by setting the left side equal to the right side:
$$e^y\sin(x)\,\frac{dy}{dx} + e^y\cos(x) = 1 + y + x\,\frac{dy}{dx}.$$

Solve for $\frac{dy}{dx}$. Collect the $\frac{dy}{dx}$ terms on one side and factor:
$$\frac{dy}{dx}\left(e^y\sin(x) - x\right) = 1 + y - e^y\cos(x).$$
Divide each term by the common factor:
$$\frac{dy}{dx} = \frac{1 + y - e^y\cos(x)}{e^y\sin(x) - x}.$$
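As a sanity check of the result (our addition, not part of the original solution), one can solve the constraint $e^y\sin(x) = x + xy$ for $y$ numerically near $x = 1$ and compare the formula for $dy/dx$ against a finite-difference slope:

```python
import math

def y_of_x(x):
    # solve e^y * sin(x) - x - x*y = 0 for y by bisection;
    # for x near 1 a root lies in [0, 1]
    f = lambda y: math.exp(y) * math.sin(x) - x - x * y
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def dydx(x, y):
    # the implicit-differentiation result derived above
    return (1.0 + y - math.exp(y) * math.cos(x)) / (math.exp(y) * math.sin(x) - x)

x0 = 1.0
y0 = y_of_x(x0)
h = 1e-6
numeric = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2.0 * h)
```

The analytic derivative and the central-difference estimate agree to several decimal places.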
https://learn.digilentinc.com/Documents/375
# Design Challenge ## Introduction For this deisgn challenge, we will be setting up our own digital thermometer and displaying the temperature on a LCD screen. For more information about the concepts behind an LCD screen or a refresher on thermistors, feel free to refer to the links on the right. ##### Before you begin, you should: • Know how to use a thermistor. • Be familiar with using an LCD screen. ##### After you're done, you should: • Be able to show off your thermometer in high resolution. ## Inventory: Qty Description Typical Image Schematic Symbol Breadboard Image 1 NTC Thermistor 1 10 kΩ Resistor 1 Basic I/O Shield ## Basic Theory As you know, we will be calculating the temperature from an analog reading from the thermistor. We will then display our calculated temperature on the LCD screen in a much fancier fashion than the seven-segment display. ## Step 1: Wiring ### Wiring of the 10kOhm thermistor: The chipKIT™ Basic I/O Shield™ has the same form factor as the chipKIT Uno32™, so it will fit directly on top of the Uno32. For our project, all of the pins that we used on the Uno32 will be in the same place as they are on the Basic I/O Shield. However, if you wanted to use any of the digital pins on the Basic I/O Shield, be aware that many of them are inherently tied to various components on the shield. Only the PWM pins, 3, 5, 6, and 9 will be able to connect to a breadboard without interferring with any of the components on the shield. Check out the link on the right for more information about the digital pins on the Basic I/O Shield. • Place the thermistor somewhere on the breadboard. Using a 10 kΩ resistor, connect one of the leads to the power bus strip. • Connect the power bus to the 3.3V supply on the chipKIT Uno32 board, which is labeled as “3V3”. • Attach the other lead of the thermistor to the negative bus on the breadboard. Connect this bus to either one of the ground pins on the chipKIT Uno32, which are both labeled as “GND”. 
• Now, run a wire from the leg of the thermistor that has the 10 kΩ resistor to pin A0 on the chipKIT Uno32.

## Determining the Temperature

Since this is a design challenge, you should be able to derive the equation that will be used in the code to determine the temperature from the equations for the output voltage of the thermistor, the equation for the analog-to-digital (ADC) value of the Uno32, and the B parameter equation for the temperature, where the B value is 4100 for our thermistor:

$\Large V_{0} = \frac{R}{R + 10k\Omega} \times V_{cc}$

$\Large ADC = \frac{V_{i} \times 1023}{V_{cc}}$

$\Large \frac{1}{T} = \frac{1}{T_{0}} + \frac{1}{B} \times \ln{\frac{R}{R_{0}}}$

For reference, here is the final equation that you should end up with:

$\Large T = \frac{T_{0} \times 4100}{4100 + T_{0} \times \ln{\frac{R}{R_{0}}}}$

Remember that the temperature is still in the Kelvin scale and that it will need to be corrected to the Fahrenheit or Celsius temperature scale, if you so desire.

## Step 2: Write some code

We should be able to write the code that we need in order to display each digit of the temperature on the LCD screen one at a time. What we will go over is how to create a new character that we want to show, like a degree symbol. We will create a message that says “I (heart) chipKIT”. What we need to do, in essence, is to create a so-called font library where we will define what pixels are to be displayed for each new character. Each character will consist of eight bytes representing the eight columns (with eight rows each) to define all 64 pixels available for the character space.
```cpp
// Including the needed library
#include <IOShieldOled.h>

// Some necessary definitions of how big our glyphs and font library are
#define OledChar 8                    // number of bytes in a glyph
#define OledUserFont (32 * OledChar)  // number of bytes in user font table

// The library in hexadecimal, where the first number after the "0x"
// defines the bottom four pixels and the second number defines the
// top four pixels of that particular column
uint8_t UserFont[OledUserFont] = {
    0x00, 0x0C, 0x1E, 0x3C, 0x78, 0x3C, 0x1E, 0x0C  // 0x00
};  // note that we only used 1 of these 32 byte sets

// global variable
char heart = 0x00;

void setup() {
    IOShieldOled.begin();
    IOShieldOled.defineUserChar(heart, &UserFont[heart * OledChar]);
}  // end of setup

void loop() {
    IOShieldOled.clearBuffer();

    IOShieldOled.setCursor(7, 1);
    IOShieldOled.putString("I");

    IOShieldOled.setCursor(7, 2);
    IOShieldOled.putChar(heart);

    IOShieldOled.setCursor(4, 3);
    IOShieldOled.putString("chipKIT");

    IOShieldOled.updateDisplay();
    delay(500);
}  // end of loop
```
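Stepping back to the “Determining the Temperature” section: the chain of equations can be inverted and spot-checked on a PC before it goes on the board. This is a sketch of ours in Python (not the challenge's Arduino code); note that $V_{cc}$ cancels when the divider equation is substituted into the ADC equation.

```python
import math

# Constants from the tutorial: B parameter, reference temperature
# (25 °C in kelvin), thermistor resistance at T0, and the fixed
# 10 kΩ divider resistor.
B = 4100.0
T0 = 298.15
R0 = 10_000.0
R_FIXED = 10_000.0

def resistance_from_adc(adc):
    # invert ADC = 1023 * R / (R + 10k); Vcc has cancelled out
    return R_FIXED * adc / (1023.0 - adc)

def temperature_k(adc):
    # B-parameter equation in the solved form: T = T0*B / (B + T0*ln(R/R0))
    r = resistance_from_adc(adc)
    return T0 * B / (B + T0 * math.log(r / R0))
```

When the divider is balanced (adc = 511.5, so R = R0), the formula returns T0 = 298.15 K, i.e. 25 °C; larger ADC readings mean larger resistance and, for an NTC thermistor, a lower temperature.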
http://multithreaded.stitchfix.com/blog/2017/04/13/making-the-tour-part-2/
# The Making of the Tour, Part 2: Simulations

###### April 13, 2017 - San Francisco, CA

In our first installment of this Making of the Tour series we gave a general overview of our development process and our scrollytelling code structure. Now we get to dig into some details. In this post, we'll talk about some simulation-powered animations, provide some cleaned-up code that you can use, and discuss these animations' genesis and utility for visualizing abstract systems and algorithms or for visualizing real historical data and projected futures.

## Simulations in JavaScript?¹

Each of the animations described in this post is powered by randomized simulations, coded in JavaScript and running behind the scenes in the browser. If you refresh the Tour or watch two screens at once you'll see different behavior. However, they are generally structured in a way that would also allow them to use historical data or simulation results that are streaming from a server, since in most cases that was their original design spec. We'll walk through each of the four independently below, but take this chance to first highlight their shared structure:

```
configure visualization
d3.timer or d3.interval loop {
    periodically read or simulate new data
    d3 enter-update-exit pattern to animate svg
}
```

## State Transitions in the Tour

code: block, github gist

Underlying this state transitions animation is a discrete-time simulation, where (a) entities are being added to the system with some probability (as a function of time) and (b) they are moving between states based on a transition probabilities matrix.

The svg update occurring at each timestep is twofold. Another stacked bar is added to the graph on the right according to how many entities are in each state at that time. Then there's the more fun part on the left, which, as it happens, is just a multi-foci force layout. At each simulation timestep we simply need to modify each circle's associated focus and let the force layout do the rest.
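The discrete-time core of this animation — arrivals plus a transition-probability matrix — is easy to sketch outside the browser. Here is a minimal Python analogue of ours (the Tour's actual code is JavaScript/D3; the matrix and arrival rate below are invented for illustration):

```python
import random

def step(states, P, arrival_prob, rng):
    # one timestep: move each entity according to transition matrix P
    # (rows sum to 1), then add a new entity in state 0 with some probability
    nxt_states = []
    for s in states:
        r, cum = rng.random(), 0.0
        for nxt, p in enumerate(P[s]):
            cum += p
            if r < cum:
                nxt_states.append(nxt)
                break
        else:
            nxt_states.append(len(P) - 1)
    if rng.random() < arrival_prob:
        nxt_states.append(0)
    return nxt_states

rng = random.Random(42)
P = [[0.8, 0.2, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]  # state 2 is absorbing
states = [0] * 10
counts = []  # per-timestep state counts: the "stacked bar" data
for _ in range(50):
    states = step(states, P, arrival_prob=0.5, rng=rng)
    counts.append([states.count(k) for k in range(len(P))])
```

Each row of `counts` is exactly what the stacked-bar side of the animation would draw at that timestep.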
We anticipate that state transition animations like this could have lots of potential data visualization applications: client states over time, international migration, etc.

## Latent Size Learning in the Tour

code with extras (control panel, uncertainty bands, histograms): block, github gist
bare-bones code: block, github gist

The underlying simulation here requires a bit of explanation. Each circle (e.g. each client or clothing style) is assumed to have some latent value along the horizontal axis: some true value for an attribute that we cannot observe directly but that we can try to estimate based on feedback. Note that we don't know the latent values for either the A elements or the B elements, and the feedback we get is from attempted pair-matches involving one of each type. Our simulated algorithm is fairly simple:

1. assign each entity a current estimated latent value, initialized at the center of the scale
2. select A-B pairs randomly, weighted by the distance between their current estimated latent values (shorter distances produce higher probabilities of selection)
3. if the feedback from the pair attempt says their relative latent values are different than what our estimates suggest, move both of the current estimated latent values in the direction of feedback (e.g. if A says B is too small, then move A to the right and B to the left), multiplied by a learning rate
4. repeat

The underlying simulation, then, runs this algorithm over a set of entities while also simulating the entities themselves: each has its own latent value, and the feedback it provides when paired with other entities is based on the actual differences between their latent values, with some noise added for good measure. The svg update is straightforward: at each timestep, pairs are shown by lines between the circles, and the circles are transitioned to their new location based on their current estimated latent value.
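The update rule in steps 1–4 translates into just a few lines. Here is a deliberately tiny, deterministic toy version of ours — a single A-B pair with noiseless feedback, not the Tour's actual JavaScript:

```python
def learn_pair(true_a, true_b, lr=0.1, steps=100):
    # steps 1-4 for one pair: start both estimates at the center (0), and on
    # each round move both estimates in the direction of the feedback, scaled
    # by a learning rate; feedback here is the noiseless true gap
    est_a, est_b = 0.0, 0.0
    for _ in range(steps):
        # how far the estimated gap is from what the feedback implies
        err = (true_a - true_b) - (est_a - est_b)
        est_a += lr * err / 2.0
        est_b -= lr * err / 2.0
    return est_a, est_b

est_a, est_b = learn_pair(2.0, -1.0)
```

Each round shrinks the gap error by a factor of (1 − lr), so the estimated gap converges to the true gap of 3; with symmetric updates the two estimates stay mirror images of each other, which is the relative-only identification the post alludes to.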
(You may notice that the code with extras version uses recursive d3.timeout calls instead of d3.interval. This is simply to allow the timestep to vary with the speed slider. In general, that code is somewhat longer than desired for an illustrative example since we opted to include the uncertainty bands, histograms and control panel. So we've also provided a simpler bare-bones version where it is easier to see the simulation / animation structure.)

We'll admit that this animation was never really intended for serious internal use, but only to help explain the ideas of latent variables and how one might imagine getting at them. As we note in the Tour text, our treatment of fit is more complex than we show here, but we do think that this 1D visualization effectively highlights some of the essential ideas of latent size. Presumably it may also apply to various other two-sided marketplace pairing analyses.

## Inventory Stock Management in the Tour

code: block, github gist

Mechanical engineers, eat your heart out. The underlying simulation here is that of a dynamic system model (essentially a set of difference equations with random disturbances) and a fairly crude control logic. The simulation and svg update are part of a d3.timer function, making it more like a continuous simulation than the previous two cases (which were both discrete simulations with relatively long timestep lengths). The svg update is dead simple: the widths of the lines and the heights of the rectangles are set to the current volumes and flow rates calculated by the simulation. We like this simulation / animation as an example of the Illustrator-D3 methods we talked about in the previous installment of this series: sketch something, make it into an svg and then attach (simulation) data to particular attributes of the sketched objects.
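The "difference equations with random disturbances plus crude control" pattern reduces, in its simplest form, to a single stock. A Python sketch of ours (not the post's JavaScript; the target, gain, and demand numbers are invented for illustration):

```python
import random

def simulate_stock(target=100.0, gain=0.3, base_outflow=5.0, steps=200, seed=7):
    # one stock, one controlled inflow, one noisy outflow:
    #   stock[t+1] = stock[t] + inflow - outflow
    # with a proportional controller nudging the inflow toward the gap
    # between the target and the current stock level
    rng = random.Random(seed)
    stock, history = 0.0, []
    for _ in range(steps):
        outflow = max(0.0, rng.gauss(base_outflow, 1.0))           # disturbance
        inflow = max(0.0, base_outflow + gain * (target - stock))  # crude control
        stock = max(0.0, stock + inflow - outflow)
        history.append(stock)
    return history

history = simulate_stock()
```

In the animation, `history` would drive the heights of the rectangles; here it simply shows the stock climbing to and then fluctuating around its target.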
## Inventory Cycle in the Tour

code with javascript simulation: block, github gist
code with external data: block, github gist

Although this animation looks nice as a randomized simulation, its utility is much greater for visualizing historical data. (The other three, by contrast, can say some interesting things about system dynamics, algorithms and control strategies without necessarily using real data.) Using real data about a process flow, you can watch various patterns emerge: bottlenecks, turbulence, eddies, etc. It is also meant for use with many more checkpoints than the two in our Tour and the three in this example. If returns aren't as important in your application as they are for Stitch Fix, you can consider using a line instead of a circle (though the circle does look nicer!).

In both versions of the code linked above, the "simulation" part of the code is keeping track of each unit's previous event, its next event and its temporal position between the two (either by simulating it in situ or by reading those values from elsewhere). The svg update is then mapping from that position-between-events to a point on the screen.

Comparing these two versions of the code (one with a javascript simulation, the other using an external data file) demonstrates how you can fairly easily move between the two use cases while keeping with the same general pattern. With just a bit of wrangling, the same sort of change could be applied to any of the animations above. And you could similarly apply this pattern, in either of these two modes, to plenty of other kinds of system simulation / animation that you might sketch onto a napkin.

## More in Store

In our next post, we'll talk about some of the finer points. Stay tuned!

¹ Wouldn't be the first time we've engaged in such shenanigans.
https://eprint.iacr.org/2015/1062
## Cryptology ePrint Archive: Report 2015/1062

Lower Bounds on Assumptions behind Indistinguishability Obfuscation

Mohammad Mahmoody; Ameer Mohammed; Soheil Nematihaji; Rafael Pass; abhi shelat

Abstract: Since the seminal work of Garg et al. (FOCS'13) in which they proposed the first candidate construction for indistinguishability obfuscation (iO for short), iO has become a central cryptographic primitive with numerous applications. The security of the proposed construction of Garg et al. and its variants is proved based on multi-linear maps (Garg et al. Eurocrypt'13) and their idealized model called the graded encoding model (Brakerski and Rothblum TCC'14 and Barak et al. Eurocrypt'14). Whether or not iO could be based on standard and well-studied hardness assumptions has remained an elusive open question. In this work we prove *lower bounds* on the assumptions that imply iO in a black-box way, based on computational assumptions. Note that any lower bound for iO needs to somehow rely on computational assumptions, because if P = NP then statistically secure iO does exist. Our results are twofold:

1. There is no fully black-box construction of iO from (exponentially secure) collision-resistant hash functions unless the polynomial hierarchy collapses. Our lower bound extends to (separate iO from) any primitive implied by a random oracle in a black-box way.

2. Let P be any primitive that exists relative to random trapdoor permutations, the generic group model for any finite abelian group, or the degree-$O(1)$ graded encoding model for any finite ring. We show that achieving a black-box construction of iO from P is *as hard as* basing public-key cryptography on one-way functions. In particular, for any such primitive P we present a constructive procedure that takes any black-box construction of iO from P and turns it into a construction of semantically secure public-key encryption from any one-way function.
Our separations hold even if the construction of iO from P is semi-black-box (Reingold, Trevisan, and Vadhan, TCC'04) and the security reduction could access the adversary in a non-black-box way.

Category / Keywords: Indistinguishability Obfuscation, Black-Box Separations, Lower Bounds.

Original Publication (in the same form): IACR-TCC-2016

Date: received 30 Oct 2015, last revised 31 Oct 2015

Contact author: mahmoody at gmail com

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2015/1062

[ Cryptology ePrint archive ]
http://www.chegg.com/homework-help/questions-and-answers/dogs-inbred-desirable-characteristics-blue-eye-color-unfortunate-product-inbreeding-emerge-q2627795
Dogs are inbred for such desirable characteristics as blue eye color; but an unfortunate by-product of such inbreeding can be the emergence of characteristics such as deafness. A 1992 study of Dalmatians (by Strain and others, as reported in The Dalmatians Dilemma) found the following:

(i) 31% of all Dalmatians have blue eyes.
(ii) 38% of all Dalmatians are deaf.
(iii) 42% of blue-eyed Dalmatians are deaf.

Based on the results of this study, is "having blue eyes" independent of "being deaf"?

(a) No, since .31 * .38 is not equal to .42.
(b) No, since .38 is not equal to .42.
(c) No, since .31 is not equal to .42.
(d) Yes, since .31 * .38 is not equal to .42.
(e) Yes, since .38 is not equal to .42.
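One way to organize the arithmetic (our own note, not part of the exercise): two events are independent exactly when the conditional probability equals the unconditional one, so the study's numbers invite a comparison of P(deaf) with P(deaf | blue eyes):

```python
p_blue = 0.31              # (i)   P(blue eyes)
p_deaf = 0.38              # (ii)  P(deaf)
p_deaf_given_blue = 0.42   # (iii) P(deaf | blue eyes)

# independence would require P(deaf | blue eyes) == P(deaf)
independent = (p_deaf_given_blue == p_deaf)

# joint probability implied by the study vs. what independence would predict
p_blue_and_deaf = p_blue * p_deaf_given_blue   # 0.31 * 0.42
p_if_independent = p_blue * p_deaf             # 0.31 * 0.38
```

Since the conditional and unconditional probabilities differ, the joint probability the study implies is larger than the product the independence assumption would give.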
https://socratic.org/questions/what-is-the-electric-current-produced-when-a-voltage-of-9-v-is-applied-to-a-circ-13
# What is the electric current produced when a voltage of 9 V is applied to a circuit with a resistance of 66 Ω?

Jan 23, 2016

Current $= 136.364 \text{ mA}$

#### Explanation:

$I = \frac{V}{R}$ where $I$ is the current, $V$ is the voltage, and $R$ is the resistance.

Think of it this way: if you increase the pressure (voltage), you will increase the amount of current; if you increase the resistance, you will decrease the amount of current.

Current is measured with a base unit of $A =$ ampere, which is defined as the current produced by $1 V$ through a circuit with $1 \Omega$ resistance.

For the given values:

$I = \frac{9 V}{66 \Omega} = \frac{3}{22} A = 0.136364 A$

For values in this range it is more common to specify the result in $m A$ (milliamperes), where $1000 m A = 1 A$.
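The same computation as a two-line function (a sketch of ours, not part of the original answer):

```python
def current_amps(voltage, resistance):
    # Ohm's law: I = V / R
    return voltage / resistance

i = current_amps(9.0, 66.0)   # amperes
i_ma = 1000.0 * i             # milliamperes
```

This reproduces the 3/22 A ≈ 136.364 mA worked out above.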
https://www.r-bloggers.com/2013/10/post-7-sampling-the-item-parameters-with-generic-functions/
[This article was first published on Markov Chain Monte Carlo in Item Response Models, and kindly contributed to R-bloggers].

In this post, we will build the samplers for the item parameters using the generic functions developed in Post 5 and Post 6. We will check that the samplers work by running them on the fake data from Post 2, visualizing the results, and checking that the true values are recovered.

# Implementing the item samplers

Recall that the refactored person ability sampler from Post 5 needs only a proposal function (prop.th.abl) and a complete conditional density (th.abl.cc):

```r
## Define the person ability sampler
sample.th.refactor
```

The generic normal proposal function from Post 6 makes implementing the proposal functions for all of the parameters trivial:

```r
prop.th.abl
```

The complete conditional densities require a bit more work. We discuss them below.

## Implementing the complete conditional densities

Recall from Post 1 that the relevant complete conditional densities are:

$$
\begin{aligned}
f(\theta_p|\text{rest}) & \propto \prod_{i=1}^I \pi_{pi}^{u_{pi}} (1 - \pi_{pi})^{1-u_{pi}} \, f_\text{Normal}(\theta_p| 0,\sigma_\theta^2) ~, \\
f(a_i|\text{rest}) & \propto \prod_{p=1}^P \pi_{pi}^{u_{pi}} (1 - \pi_{pi})^{1-u_{pi}} \, f_\text{Log-Normal}(a_i| \mu_a,\sigma_a^2) ~, \\
f(b_i|\text{rest}) & \propto \prod_{p=1}^P \pi_{pi}^{u_{pi}} (1 - \pi_{pi})^{1-u_{pi}} \, f_\text{Normal}(b_i| 0,\sigma_b^2) ~,
\end{aligned}
$$

where $\text{rest}$ stands in for the conditioning variables, $f_\star(\dagger|\dots)$ represents the density of the random variable named $\dagger$ (which has a $\star$ distribution), and $\pi_{pi}$ is defined by the 2PL logit:

$$
\ln{\frac{\pi_{pi}}{1-\pi_{pi}}} = a_i ( \theta_p - b_i) \quad.
$$

Each of these complete conditionals contains a likelihood term $\left( \pi_{pi}^{u_{pi}} (1 - \pi_{pi})^{1-u_{pi}} \right)$, which is multiplied either across the items in the case of the person ability parameter or across the persons for the item parameters. As discussed in Post 4 and Post 5, we implement this by calculating a matrix of the log-likelihood terms and then collapsing that matrix by column for the person ability parameters. For the item parameters, we do much the same, except by row. The full implementation is as follows.

### Person ability complete conditional density

From Post 5, with explanation of what the code does in Post 4:

```r
## Complete conditional for the person ability parameters
th.abl.cc
```

### Item discrimination complete conditional density

The item discrimination complete conditional is very similar. It sums over the persons instead of the items for the log-likelihood term. Since its prior is a log normal, it uses the dlnorm function for its prior. See Post 1 for details of why the log normal was chosen.
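In outline, the log-likelihood matrix and its collapse look like this — a pure-Python sketch of the pattern (not the post's actual R code; the persons-by-items orientation here is a convention):

```python
import math

def loglik_matrix(U, theta, a, b):
    # persons-by-items matrix of 2PL Bernoulli log-likelihood terms,
    # with logit(pi_pi) = a_i * (theta_p - b_i)
    L = []
    for p, th in enumerate(theta):
        row = []
        for i in range(len(a)):
            pi = 1.0 / (1.0 + math.exp(-a[i] * (th - b[i])))
            u = U[p][i]
            row.append(u * math.log(pi) + (1 - u) * math.log(1.0 - pi))
        L.append(row)
    return L

def log_cc_theta(U, theta, a, b, sig2_theta):
    # unnormalized log complete conditional for each theta_p: collapse the
    # matrix over items, then add the Normal(0, sig2_theta) log-prior term
    L = loglik_matrix(U, theta, a, b)
    return [sum(row) - th * th / (2.0 * sig2_theta)
            for row, th in zip(L, theta)]
```

The item-parameter conditionals collapse the same matrix in the other direction and swap in the appropriate log-prior (log-normal for discriminations, normal for difficulties).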
```r
## Complete conditional for the item discrimination parameters
a.disc.cc
```

### Item difficulty complete conditional density

The item difficulty parameter is the same as the item discrimination parameter, except that it has a normal, instead of log-normal, prior:

```r
## Complete conditional for the item difficulty parameters
b.diff.cc
```

## Implementing the samplers

Now that the proposal functions and complete conditional densities are implemented, we can use the generic Metropolis-Hastings sampler to define the complete conditional samplers:

```r
## Define the person ability sampler
sample.th
```

# Testing that the samplers work

To test that the item samplers work, we run a chain, visualize it, and check that the item and person parameters are recovered properly.

## Running the chain

Running the chain follows the same pattern as before, where the call to `source('.../setup/post-7.R')` loads the necessary functions to use the refactored samplers.

```r
## Load the necessary code from this and previous posts
source("http://mcmcinirt.stat.cmu.edu/setup/post-7.R")

## Set the seed to keep results reproducible
set.seed(314159)

## Run the sampler with overdispersed theta.abl,
## a.disc, and b.diff values. Keep sig2.theta
## at its true value.
run.C
```

Note that the acceptance rates reported by the sampler are between 30% and 55%. This is in the "good enough" range between 20% and 60%. I achieved this by changing the values for the proposal variances until they were just right. The tuning runs are not shown. Tuning will be covered in Post 9.

## Visualizing the chain

First we convert to a coda object:

```r
require(coda)
run.C.mcmc
```

and then look at one of each of the parameters:

```r
plot( run.C.mcmc[, get.2pl.params(1,1,1,1)],
      density=FALSE, smooth=TRUE )
```

From the non-parametric smoother, it looks like the chain is done burning in at around 200 iterations or so.
We can examine a few more trace plots (not shown) to verify this is the case:

```r
## 9 person ability parameters
plot( run.C.mcmc[, get.2pl.params(1:9,NULL,NULL,NULL)],
      density=FALSE, smooth=TRUE, ylim=c(-5,5) )

## 9 item discrimination parameters
plot( run.C.mcmc[, get.2pl.params(NULL,1:9,NULL,NULL)],
      density=FALSE, smooth=TRUE, ylim=c(-2,2) )

## 9 item difficulty parameters
plot( run.C.mcmc[, get.2pl.params(NULL,NULL,1:9,NULL)],
      density=FALSE, smooth=TRUE, ylim=c(-4,-1) )
```

## Recovering parameters

To estimate parameters we use only the converged part of the chain. In this example, we calculate EAP estimates using only iterations after 200:

```r
all.eap
```

We then visually compare the EAP estimates with the true parameter values:

```r
## Person Ability
check.sampler.graph( theta.abl,
    all.eap[ get.2pl.params(1:P.persons,NULL,NULL,NULL)],
    desc="Person Ability", ylab="EAP Estimates", col="blue" )

## Item discrimination
check.sampler.graph( a.disc,
    all.eap[ get.2pl.params(NULL,1:I.items,NULL,NULL)],
    desc="Item discrimination", ylab="EAP Estimates", col="blue" )

## Item difficulty
check.sampler.graph( b.diff,
    all.eap[ get.2pl.params(NULL,NULL,1:I.items,NULL)],
    desc="Item difficulty", ylab="EAP Estimates", col="blue" )
```

## Conclusion

By refactoring the person ability parameter code from Post 4 in Post 5 and Post 6, we were able to quickly put together a Metropolis-Hastings within Gibbs sampler. Additionally, the sampler should be free of bugs given the checks that we have implemented along the way. In the next post, we complete the MH within Gibbs sampler by implementing a Gibbs step for the variance of the person ability parameters.
http://travelideas.es/books/2010/03
## Magneto-Optics (Springer Series in Solid-State Sciences)

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 5.99 MB

He established, among many things, the connection between the speed of propagation of an electromagnetic wave and the speed of light, and established the theoretical understanding of light. If you are unnerved by these seemingly mutually exclusive behaviours of EM radiation - it's a wave with electric and magnetic components; no, it's a charge-less particle, the photon - you are in good company. With AC current, this magnetic field will fluctuate and cause a changing magnetic field in coupled circuits or wires.

## From Lodestone to Supermagnets

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 10.11 MB

Basic to magnetism are magnetic fields and their effects on matter, as, for instance, the deflection of moving charges and torques on other magnetic objects. Electricity and magnetism were long thought to be separate forces. And it can be re-adjusted in seconds at any time, for any new circumstance, including the changes taking place in the Earth's natural field. So what are the fundamental equations that describe how sources give rise to electromagnetic fields?

## Handbook on the Physics and Chemistry of Rare Earths, Volume

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 6.50 MB

The intricate DNA of chromosomes has also been shown to be affected by electromagnetic fields. The diagram shows a rectangular current-carrying coil mounted on a freely-pivoted horizontal shaft between the poles of a permanent magnet.
## Physics of Low-Dimensional Systems: Proceedings of Nobel

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 10.01 MB

If the primary has 500 turns, how many turns should the secondary have? What makes this field exciting is the advent of new pulsed energy sources, and the challenging fact that a motor of zero curvature is virtually free of all fundamental limitations on size, acceleration and velocity. Photons in quantum physics are thought of as packets or bits of information. When absorbed, they are capable of ejecting electrons entirely from atoms and thus ionizing them (i.e., causing them to have a net positive electric charge).

## Metamaterials: Critique and Alternatives

Format: Paperback | Language: | Format: PDF / Kindle / ePub | Size: 6.64 MB

Is it possible to design a procedure so that, using only two such experiments, we can always find $$\mathbf{E}$$ and $$\mathbf{B}$$? Equally, the associated electromagnetic field changes its orientation 50 times every second. But they are also generated in space, by unstable electron beams in the magnetosphere, as well as at the Sun and in the far-away universe, telling us about energetic particles in distant space, or else teasing us with unresolved mysteries.

## Electric-Field Control of Magnetization and Electronic

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 7.16 MB

A change in the direction of the current flow produces a change in the direction of the magnetic field. In 2015, the European Commission Scientific Committee on Emerging and Newly Identified Health Risks reviewed electromagnetic fields in general, as well as cell phones in particular. The method is highly effective (more than 90 per cent success) in adult patients when used in conjunction with good management techniques that are founded on biomechanical principles.
## Metallic Magnetism (Topics in Current Physics)

Format: Paperback | Language: English | Format: PDF / Kindle / ePub | Size: 9.30 MB

When the power is off it is not magnetized. Besides being influenced by a magnetic field, a wire that is carrying a current also produces a magnetic field (however, the wire does not experience any overall force due to its own self-generated field). We say materials like this are ferromagnetic, which really just means they're "magnetic like iron." While the invention has been described in its preferred versions and embodiments with some degree of particularity, it is understood that this description has been given only by way of example and that numerous changes in the details of construction, fabrication, and use, including the combination and arrangement of parts, may be made without departing from the spirit and scope of the invention.

## PROBLEMS IN UNDERGRADUATE PHYSICS: VOLUME II: ELECTRICITY

Format: Paperback | Language: | Format: PDF / Kindle / ePub | Size: 12.41 MB

This effect is known as electromagnetic induction. By rotating the coil relative to the magnet. The end of the tube is attached to the center of the paper cone of the speaker. Chilton, "MD Theory and History", Proc. of the Third Princeton-AIAA Symp. on Space Manufacturing, 1977, published by the AIAA. Plastic, cloth, and other materials are not magnetic. The normal level in the station is higher than that of an MRI. These forms of electromagnetic radiation make up the electromagnetic spectrum, much as the various colors of light make up the visible spectrum (the rainbow).

## A History of the English Railway: Its Social Relations and

Format: Paperback | Language: English | Format: PDF / Kindle / ePub | Size: 8.23 MB

Ok, so the physics for this type of hoverboard seems possible. Using the approximation $$\gamma=(1-v^2/c^2)^{-1/2}\approx 1+v^2/2c^2$$ for $$v\ll c$$, the total charge per unit length in frame 2 is

Let $$R$$ be the distance from the line charge to the lone charge.
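Aside: the low-velocity expansion of the Lorentz factor quoted just above, $$\gamma \approx 1 + v^2/2c^2$$, is easy to check numerically. A quick sketch (not part of the book excerpt):

```python
# Compare the exact Lorentz factor with its low-velocity (v << c) expansion.
def gamma_exact(beta):        # beta = v/c
    return (1.0 - beta**2) ** -0.5

def gamma_approx(beta):
    return 1.0 + beta**2 / 2.0

for beta in (0.01, 0.1, 0.3):
    print(f"v/c={beta}: exact={gamma_exact(beta):.6f}  approx={gamma_approx(beta):.6f}")
```

The agreement is excellent for small $$v/c$$ and degrades as $$v$$ grows, since the next term in the expansion, $$\tfrac{3}{8}(v/c)^4$$, is no longer negligible.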
This action required the expenditure of energy, and whenever an electric field is created, a certain amount of energy (in the form of repositioning of electric charges against the direction they "want" to go) is required.

## Transition Temperatures and Related Properties of

Format: Hardcover | Language: English | Format: PDF / Kindle / ePub | Size: 7.15 MB
http://math.stackexchange.com/questions/162319/confusion-regarding-lagrange-multipliers
# Confusion regarding Lagrange multipliers

I was studying Lagrange multipliers. However, I have some confusion. Let's say I have a function $f(x,y)$ to be minimized subject to a constraint $g(x,y) = 0$. If I minimize the function
$$L(x,y,\lambda) = f(x,y) + \lambda g(x,y) \>,$$
then how does it include the constraint $g(x,y) = 0$? The book says that minimizing $L$ with respect to $\lambda$ is equivalent to minimizing the function $f(x,y)$ under the constraint $g(x,y) = 0$. I need some clarification.

Further, it is said that
$$\nabla f + \lambda \nabla g = 0 \tag{1}$$
$$L(x,y,\lambda) = f(x,y) + \lambda g(x,y) \tag{2}$$
I didn't get this portion: how did equation (1) lead to equation (2)?

Also, I am a bit confused when it comes to inequality constraints like $g(x,y) \ge 0$. It is said that $f(x,y)$ will be maximal if its gradient is oriented away from the region $g(x,y) > 0$, and therefore
$$\nabla f(x,y) = -\lambda \nabla g(x,y).$$
I just didn't get this.

## migrated from stats.stackexchange.com Jun 24 '12 at 8:54

This question came from our site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.

Please note that the method of Lagrange multipliers simply gives a condition to find critical points of $f$ constrained to $g^{-1}(0)$. Free critical points of $L$ needn't be minima. – Siminore Jun 24 '12 at 10:53

Setting the partial derivative of $L$ with respect to $\lambda$ to 0 forces $g(x,y)=0$. Requiring the partials of $L$ with respect to $x$ and $y$ to be 0 will lead to a local extreme point subject to $g(x,y) = 0$. Because of the form of $L$, this could be a minimum.

I will show you an example where the formulation of the minimization problem is in a single variable: $f(x)+\lambda g(x)$. To find the $\lambda$'s, first solve for $x$ in closed form by setting the gradient w.r.t. $x$ to zero. You will have a closed form for $x$ containing the $\lambda$'s. Now consider this to be $x^{*}=c(\lambda)$.
Now substitute the closed form for $x^{*}$ into the constraint, $g(x^{*})=0$, and solve for $\lambda$; this gives you a $\lambda$ that enforces your constraint at the optimal value $x^*$. In a statistical modeling scenario, though, the $\lambda$'s are estimated by cross-validation if $f(\cdot)$ and $g(\cdot)$ are loss functions to be optimized over random variables. But I am not sure about the domain of your work.

This answer seems to make quite strong (but unstated) assumptions about $f$ and $g$ – cardinal Jun 24 '12 at 3:02

@cardinal, are you referring to the statement about cross-validation, the existence of closed forms, and the existence of a unique minimum? Also, to user31820: what is the range of the function $g(x,y)$? This was a start, and the answer can be modified based on further inputs and discussion. – user23600 Jun 24 '12 at 3:07
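To make the recipe in this answer concrete, here is a small hypothetical instance (my own example, not from the question): minimize $f(x)=x^2$ subject to $g(x)=x-1=0$. Stationarity of $L=f+\lambda g$ in $x$ gives the closed form $x^*(\lambda)=-\lambda/2$; substituting into $g(x^*)=0$ yields $\lambda=-2$ and hence $x^*=1$. A quick check in Python:

```python
# Minimize f(x) = x^2 subject to g(x) = x - 1 = 0 using L = f + lam*g.
# dL/dx = 2x + lam = 0    ->  x*(lam) = -lam/2    (closed form in lam)
# g(x*) = -lam/2 - 1 = 0  ->  lam = -2            (lam enforcing the constraint)
lam = -2.0
x_star = -lam / 2.0

def f(x): return x * x
def g(x): return x - 1.0

print(x_star, f(x_star), g(x_star))   # -> 1.0 1.0 0.0
```

Here the constraint residual is exactly zero at $x^*$, and $2x^* + \lambda = 0$ confirms stationarity of $L$ in $x$.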
https://gmatclub.com/forum/audrey-4-hours-to-complete-a-certain-job-ferris-can-do-the-159032.html
# Audrey takes 4 hours to complete a certain job. Ferris can do the same job in 3 hours

Difficulty: 65% (hard). Question Stats: 61% (02:14) correct, 39% (02:29) wrong, based on 336 sessions.

Intern (Joined: 28 Dec 2010), 02 Sep 2013, 11:17

Audrey takes 4 hours to complete a certain job. Ferris can do the same job in 3 hours. Audrey and Ferris decided to collaborate on the job, working at their respective rates. While Audrey worked continuously, Ferris took 3 breaks of equal length. If the two completed the job together in 2 hours, how many minutes long was each of Ferris' breaks?

A. 5
B. 10
C. 15
D. 20
E. 25

Math Expert (Joined: 02 Sep 2009), 02 Sep 2013, 13:29

gary391 wrote:
Audrey takes 4 hours to complete a certain job. Ferris can do the same job in 3 hours. Audrey and Ferris decided to collaborate on the job, working at their respective rates. While Audrey worked continuously, Ferris took 3 breaks of equal length. If the two completed the job together in 2 hours, how many minutes long was each of Ferris' breaks?
a) 5
b) 10
c) 15
d) 20
e) 25

In 2 hours Audrey completed 2/4 = 1/2 of the job, so the remaining half was completed by Ferris in 2 hours with 3 breaks of equal length. Now, to complete half of the job Ferris needs 1.5 hours, or 90 minutes. Ferris's schedule according to the stem was: work-break-work-break-work-break-work. So, the 3 breaks took 120 - 90 = 30 minutes, which means that each break was 30/3 = 10 minutes long.

Hope it's clear.

##### General Discussion

Manager (Joined: 30 May 2013), 02 Sep 2013, 19:18

Audrey and Ferris collective work rate: 1/4 + 1/3 = 7/12
Collective work time = 12/7 = 1.7 hrs
Job was actually done in = 2 hrs (includes breaks)
Breaks = actual time taken - collective work time = 2 - 1.7 = 0.3 hrs
So Ferris took 3 breaks = 0.3/3 = 0.1 hrs = 10 m
So answer is B) 10 mins

Director (Joined: 17 Dec 2012), 02 Sep 2013, 21:06

1.
In 2 hours Audrey would have completed 1/2 of the job, as he takes 4 hours to complete the job.
2. Ferris completed the remaining half in those 2 hours.
3. But Ferris would normally complete 1/2 of the job in 3/2 = 1.5 hours.
4. So Ferris's breaks totaled 30 min, and since he took 3 breaks of equal length, each break was 10 min long.

Director (Joined: 25 Apr 2012), 03 Sep 2013, 22:51

rrsnathan wrote:
Collective work rate: 1/4 + 1/3 = 7/12. Collective work time = 12/7 = 1.7 hrs. Breaks = 2 - 1.7 = 0.3 hrs, so 3 breaks = 0.3/3 = 0.1 hrs = 10 m. So answer is B) 10 mins.

The portion highlighted in red doesn't look to be correct. 1.7 hrs means 1 hr 42 minutes, and the total time taken is 120 minutes, so the break size would be 6 minutes, which is not even an answer choice. Likewise, 0.3 hr would be 18 minutes and 0.1 hr would be 6 minutes.

Hi Bunuel, why can't we get the answer by this method???

Director (Joined: 17 Dec 2012), 03 Sep 2013, 22:59

You would not get the answer by the above method because only Ferris took the breaks, so you need to get the break time only from the total time Ferris took. Above, you are instead calculating the time taken with both working and then trying to find the break time. Since only Ferris took breaks, you would not get the answer this way.

Manager (Joined: 25 Oct 2013), 17 Jan 2014, 08:38

Audrey works 4 hrs to get the job done, so in 2 hours Audrey completed 1/2 the work. To complete the remaining half, Ferris needs only 1.5 hours, that is, 90 mins. However, he took 3 breaks and finished the work in 2 hrs. Therefore Ferris took a break of 120 min - 90 min = 30 min, spread across 3 equal sessions.
So each break is 30/3 = 10 mins.

Manager (Joined: 20 Dec 2013), 28 Mar 2014, 02:38

Option B. Let the units of work be 12 units (LCM of 3 and 4). A's rate = 3u/hr, F's rate = 4u/hr. A works continuously for 2 hrs, so A will complete 6u of work. The remaining 6u will be completed by F in 2 hrs. Now F's output without breaks would be 8u in 2 hrs, but actually he did only 6u in 2 hrs. So the time he takes to complete 2u = the time he spent on breaks. 60 min = 4u, so 15 min = 1u, and 30 min = 2u. Divide by 3 and we get 10.

Manager (Joined: 20 Dec 2013), 28 Mar 2014, 03:27

Approach #2: A took 4 hrs to complete the work, so in 2 hrs he'll do half the work and the rest will be done by F. Now, F does the whole work in 3 hrs, so he'll do half the work in 1.5 hrs. But he worked for 2 hrs, so 0.5 hrs is his break time. Divide by 3 and we get 10.

SVP (Joined: 27 Dec 2012), 13 Jan 2015, 03:59

rrsnathan wrote:
a) 5
b) 10
c) 15
d) 20
e) 25

(Quoting the combined-rate method and the critique of it from the posts above.) Combined rate, if calculated, comes out to 7/12, so the time taken would be 12/7 hours. However, we are given that the two together took 2 hours, so the given information supersedes the calculated figure; hence this approach cannot be taken.

SVP (Joined: 27 Dec 2012), 13 Jan 2015, 20:49

             Rate     Time    Work
Audrey       1/4      4       1
Ferris       1/3      3       1
Combined              2
This is the given information, inclusive of 3 breaks.

Audrey worked nonstop for 2 hours, work done $$= \frac{1}{4} * 2 = \frac{1}{2}$$

Work left over for Ferris $$= 1 - \frac{1}{2} = \frac{1}{2}$$

Time required by Ferris $$= \frac{1}{2} * 3 = 1.5$$ hours

Break taken by Ferris = 2 - 1.5 = 0.5 hours = 30 minutes

Each break = 10 minutes

Intern (Joined: 07 Jun 2013), 16 Mar 2015, 15:33

Let Ferris take x hrs, including breaks, to complete the work alone. Given that Audrey alone takes 4 hrs and together they take 2 hrs to complete the work: 1/4 + 1/x = 1/2. Solving, x = 4 hrs. This implies that in 4 hours of total time, Ferris works for 3 hours and takes 1 hr of break. Therefore, in 2 hours of total time, he will take 30 minutes of break. There are 3 equal breaks, so each break will be 10 mins. B

Manager (Joined: 22 Aug 2014), 15 May 2015, 01:25

WHAT IS THE PROBLEM IN THE BELOW METHOD:
y = total time of 3 breaks
1/4 + 1/(3+y) = 1/2
y = 1 hour, and thus each break is 20 mins long
What am I doing wrong?

Current Student (Joined: 13 Nov 2014), 15 May 2015, 07:58

gary391 wrote:
Audrey takes 4 hours to complete a certain job. Ferris can do the same job in 3 hours.
Audrey and Ferris decided to collaborate on the job, working at their respective rates. While Audrey worked continuously, Ferris took 3 breaks of equal length. If the two completed the job together in 2 hours, how many minutes long was each of Ferris' breaks?

A. 5
B. 10
C. 15
D. 20
E. 25

I did this a completely different way that may not be optimal. A works at a rate of 1/4 per hour and F at a rate of 1/3 per hour. Convert those to 4/12 (for F) and 3/12 (for A). We know that A worked for the full 2 hours, so A did 6/12 of the job, which means that F completed the other 6/12. If F had taken no breaks, he would have completed 8/12, but we only need 6/12, so we have a difference of 2/12. So F needs to take breaks for a total of 1/4 of the total time. 1/4 of two hours is 30 mins, and he took 3 breaks (30/3 = 10). Each break = 10 mins. B

e-GMAT Representative (Joined: 04 Jan 2015), 19 May 2015, 00:22

Presenting the detailed solution.

Given

We are told the time taken by Audrey and Ferris to do a piece of work (let's say W): 4 hours and 3 hours respectively. We are also told that both of them working together take 2 hours to complete the job. However, during these 2 hours Ferris took 3 breaks of equal time intervals, whereas Audrey worked for the full 2 hours. We are asked to find the time taken by Ferris for each break.

Approach

We know that Work = Rate * Time. Since we are given the time taken by both Audrey & Ferris to do a particular work, we can find out their respective rates in terms of the work done, i.e. W. For the situation when both of them are working together, we know the following:

a. The amount of work to be done, i.e. W
b. The rate of work done by Audrey & Ferris in terms of W
c.
The time for which Audrey worked

We can use the above information and the Work = Rate * Time equation to find the time for which Ferris worked, which can then be used to calculate the time taken by Ferris for each break.

Working Out

Let the amount of work done be W.

Rate at which Audrey works: $$W = Ra * 4$$, i.e. $$Ra = \frac{W}{4}$$

Rate at which Ferris works: similarly, $$Rb = \frac{W}{3}$$

Both Audrey & Ferris working together: let the time for which Ferris worked be t.

$$W = Ra * 2 + Rb * t$$
$$W = \frac{W}{4} * 2 + \frac{W}{3} * t$$
$$t = 1.5$$ hours.

Since Ferris worked for 1.5 hours, he took a total break of (2 - 1.5) hours = 30 minutes. Since he took 3 equal breaks, each break length = $$\frac{30}{3} = 10$$ minutes.

Hope this helps.
Regards, Harsh

Veritas Prep GMAT Instructor (Joined: 16 Oct 2010), 19 May 2015, 22:58

Wofford09 wrote:
I did this a completely different way that may not be optimal... 1/4 of two hours is 30 mins, and he took 3 breaks (30/3 = 10). Each break = 10 mins. B

Perfect logic.
I would just explain this step: "So F needs to take breaks for a total of 1/4 of the total time." Since his rate of work is 4/12 and he does 6/12 of the work, the time taken = Work/Rate = (6/12)/(4/12) = 1.5 hrs. So he took breaks for half an hour, i.e. 30 mins.

VP (Joined: 07 Dec 2014), 08 Nov 2016, 15:20

If Ferris takes no breaks, then in 2 hours he and Audrey can complete 2(1/4 + 1/3), or 7/6 of the job. Thus, Ferris' total break time accounts for 1/6 of the job. If Ferris can do the entire job in 3 hours, then he can do 1/6 of the job in 1/2 hour. 1/2 hour = 30 minutes; 30/3 = 10 minutes per break. B

Director (Joined: 23 Jan 2013), 29 Nov 2016, 05:00

1/4 + 1/3 = 7/12 is the combined rate.
(7/12)*2 = 14/12 is the work done without breaks.
14/12 - 12/12 = 2/12 is the work not done due to the breaks.
(2/12) : (1/3) = 1/2 hour dedicated to 3 breaks, so one break is 10 min. B

Board of Directors (Joined: 11 Jun 2011), 29 Nov 2016, 09:39

gary391 wrote:
Audrey takes 4 hours to complete a certain job. Ferris can do the same job in 3 hours. Audrey and Ferris decided to collaborate on the job, working at their respective rates. While Audrey worked continuously, Ferris took 3 breaks of equal length.
If the two completed the job together in 2 hours, how many minutes long was each of Ferris' breaks?

A. 5
B. 10
C. 15
D. 20
E. 25

Let the total work be 12. Efficiency of Audrey = 3, efficiency of Ferris = 4. Let Ferris's total break be x hours. So,
3*2 + 4(2 - x) = 12
or, 6 + 8 - 4x = 12
or, 4x = 2
or, x = 1/2 hour = 30 minutes
Thus each of Ferris's breaks was 30/3 = 10 minutes. Answer will be (B) 10 minutes.

Target Test Prep Representative (Joined: 04 Mar 2011), 22 Feb 2018, 17:30

The rate of Audrey is 1/4 and the rate of Ferris is 1/3. If we let each break of Ferris equal x, his time worked is 2 - 3x and Audrey's time is 2. We can create the following equation and solve for x:

(1/4)(2) + (1/3)(2 - 3x) = 1

Multiplying by 12 we have:
6 + 4(2 - 3x) = 12
8 - 12x = 6
2 = 12x
1/6 = x

So each break was 1/6 x 60 = 10 minutes long.
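All of the algebraic approaches in this thread can be sanity-checked by brute force: try every whole-minute break length and see which one makes the two-hour collaboration finish exactly one job (a quick sketch; the 10-minute answer is the value being verified, not assumed):

```python
# Audrey: 1/4 job per hour, works the full 2 hours.
# Ferris: 1/3 job per hour, works 120 minutes minus 3 breaks of b minutes each.
def work_done(b):
    audrey = (1 / 4) * 2
    ferris = (1 / 3) * (120 - 3 * b) / 60   # Ferris's working time, in hours
    return audrey + ferris

solution = [b for b in range(0, 41) if abs(work_done(b) - 1) < 1e-9]
print(solution)   # -> [10]
```

Each extra break minute removes 3 minutes of Ferris's working time, so the completed fraction moves in steps of 1/60 of a job, and only b = 10 lands exactly on one full job.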
https://www.encyclopediaofmath.org/index.php/Monge_cone
Monge cone

directing cone

The envelope of the tangent planes to the integral surface at a point $(x_0,y_0,z_0)$ of a partial differential equation

$$F(x,y,z,p,q)=0,\tag{*}$$

where $p=\partial z/\partial x$, $q=\partial z/\partial y$. If $F$ is a non-linear function in $p$ and $q$, then the general case holds: the tangent planes form a one-parameter family of planes passing through a fixed point; their envelope is a cone. If $F$ is a linear function in $p$ and $q$, then a bundle of planes passing through a line is obtained, that is, the Monge cone degenerates to the so-called Monge axis. The directions of the generators of the Monge cone corresponding to some point $(x_0,y_0,z_0)$ are called characteristic directions. A line on the integral surface which is tangent at each point to a corresponding generator of the Monge cone is called a characteristic line, a characteristic, a focal curve, or a Monge curve.

Figure: m064630a

The geometric interpretation (see Fig.) of equation (*), as a field of directing cones, was given by G. Monge (1807).
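As a worked illustration (added here; not part of the original article), take the equation $p^2+q^2=1$. Eliminating the parameter from the family of tangent planes yields the Monge cone explicitly:

```latex
% Family of tangent planes at (x_0,y_0,z_0) for  p^2+q^2=1:
%   z - z_0 = p(x-x_0) + q(y-y_0),  with  p = \cos\alpha,  q = \sin\alpha.
\begin{aligned}
z-z_0 &= (x-x_0)\cos\alpha + (y-y_0)\sin\alpha,\\
0     &= -(x-x_0)\sin\alpha + (y-y_0)\cos\alpha
        \qquad\text{(derivative with respect to } \alpha\text{)}.
\end{aligned}
% Squaring and adding the two equations eliminates \alpha:
%   (z-z_0)^2 = (x-x_0)^2 + (y-y_0)^2,
% the Monge cone: a right circular cone of half-angle 45 degrees, whose
% generators give the characteristic directions at (x_0,y_0,z_0).
```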
http://mathhelpforum.com/math-challenge-problems/127156-nice-inequality.html
# Math Help - Nice Inequality !!

1. ## Nice Inequality !!

Hi! Let $a$, $b$ and $c$ be positive real numbers. Prove that:

$a^2+b^2+c^2 \geq a+b+c+2\left( \frac{a-1}{a+1} + \frac{b-1}{b+1} + \frac{c-1}{c+1} \right)$

Have fun!

2. Originally Posted by Perelman
Hi! Let $a$, $b$ and $c$ be positive real numbers. Prove that:
$a^2+b^2+c^2 \geq a+b+c+2\left( \frac{a-1}{a+1} + \frac{b-1}{b+1} + \frac{c-1}{c+1} \right)$
Have fun!

Spoiler: Take all the terms over to the left side: $a^2 - a -2\Bigl(\frac{a-1}{a+1}\Bigr) = \frac{(a-1)^2(a+2)}{a+1}$, which is obviously positive if $a$ is! – and similarly for the $b$ and $c$ terms.

3. Hi!! Perfect, Mr Opalg! So my solution:

$a^2+b^2+c^2 \geq a+b+c+2\left( \frac{a-1}{a+1} + \frac{b-1}{b+1} + \frac{c-1}{c+1} \right)$
$\Leftrightarrow$
$a^2+ b^2+ c^2 + 4\left(\frac{1}{a+1} + \frac{1}{b+1}+\frac{1}{c+1}\right) \geq a +b +c + 6$

Put $a+1=x$, $b+1=y$ and $c+1=z$:

$x^2+y^2+z^2+4\left(\frac{1}{x} + \frac{1}{y}+\frac{1}{z}\right) \geq 3(x+y+z)$

By Cauchy–Schwarz:

$\frac{1}{x} + \frac{1}{y}+\frac{1}{z} \geq \frac{9}{x+y+z}$

and we have:

$x^2+ y^2+ z^2 \geq \frac{(x+y+z)^2}{3}$

So it suffices to prove:

$\frac{(x+y+z)^2}{3} + \frac{36}{x+y+z} \geq 3(x+y+z)$
$\Leftrightarrow$
$(x+y+z)^3 -9(x+y+z)^2+108 \geq 0$
$\Leftrightarrow$
$(x+y+z-6)^2 (x+y+z+3) \geq 0,$

which holds since $x+y+z > 3 > -3$. QED.

4. Thread closed due to failure to comply with the rules in the sticky: http://www.mathhelpforum.com/math-he...-subforum.html
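Both arguments above can be sanity-checked numerically; a quick Python sketch (mine, not from the thread) verifies Opalg's per-variable identity and searches for a counterexample over positive reals:

```python
import random

def lhs_minus_rhs(a, b, c):
    left = a * a + b * b + c * c
    right = a + b + c + 2 * sum((t - 1) / (t + 1) for t in (a, b, c))
    return left - right

# Opalg's identity: a^2 - a - 2(a-1)/(a+1) = (a-1)^2 (a+2) / (a+1)
a = 3.7
assert abs((a * a - a - 2 * (a - 1) / (a + 1))
           - (a - 1) ** 2 * (a + 2) / (a + 1)) < 1e-12

# random search for a counterexample to the full inequality
random.seed(0)
worst = min(lhs_minus_rhs(random.uniform(1e-3, 10),
                          random.uniform(1e-3, 10),
                          random.uniform(1e-3, 10))
            for _ in range(100_000))
print(worst)   # nonnegative (up to rounding): no counterexample found
```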
http://www.cs.ubc.ca/~poole/aibook/2e/html/ArtInt2e.Ch5.S6.SS2.html
# 5.6.2 Proof Procedures for Negation as Failure ## Bottom-Up Procedure The bottom-up procedure for negation as failure is a modification of the bottom-up procedure for definite clauses. The difference is that it can add literals of the form $\mbox{\sim}p$ to the set $C$ of consequences that have been derived; $\mbox{\sim}p$ is added to $C$ when it can determine that $p$ must fail. Failure can be defined recursively: $p$ fails when every body of a clause with $p$ as the head fails. A body fails if one of the literals in the body fails. An atom $b_{i}$ in a body fails if $\mbox{\sim}b_{i}$ can be derived. A negation $\mbox{\sim}b_{i}$ in a body fails if $b_{i}$ can be derived. Figure 5.11 gives a bottom-up negation-as-failure interpreter for computing consequents of a ground KB. Note that this includes the case of a clause with an empty body (in which case $m=0$, and the atom at the head is added to $C$) and the case of an atom that does not appear in the head of any clause (in which case its negation is added to $C$). ###### Example 5.31. Consider the following clauses: $\displaystyle{p\leftarrow\mbox{}q\wedge\mbox{}\mbox{\sim}r.}$ $\displaystyle{p\leftarrow\mbox{}s.}$ $\displaystyle{q\leftarrow\mbox{}\mbox{\sim}s.}$ $\displaystyle{r\leftarrow\mbox{}\mbox{\sim}t.}$ $\displaystyle{t.}$ $\displaystyle{s\leftarrow\mbox{}w.}$ The following is a possible sequence of literals added to $C$: $\displaystyle{t}$ $\displaystyle{\mbox{\sim}r}$ $\displaystyle{\mbox{\sim}w}$ $\displaystyle{\mbox{\sim}s}$ $\displaystyle{q}$ $\displaystyle{p}$ where $t$ is derived trivially because it is given as an atomic clause; $\mbox{\sim}r$ is derived because $t\in C$; $\mbox{\sim}w$ is derived as there are no clauses for $w$, and so the “for every clause” condition of line 18 of Figure 5.11 trivially holds. Literal $\mbox{\sim}s$ is derived as $\mbox{\sim}w\in C$; and $q$ and $p$ are derived as the bodies are all proved. 
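Figure 5.11 itself is not reproduced above, but the fixpoint procedure it describes can be sketched in Python (the clause and literal encodings below are my own, applied to the knowledge base of Example 5.31):

```python
def bottom_up_naf(clauses, atoms):
    """Bottom-up negation-as-failure consequences of a ground KB.

    clauses maps each atom to a list of bodies; a body is a list of
    literals, each ('pos', a) or ('neg', a).  Atoms with no clause fail
    trivially.  Returns the set C of derived literals.
    """
    C = set()

    def proved(lit):               # the literal itself is already in C
        return lit in C

    def fails(lit):                # the complement of the literal is in C
        kind, a = lit
        return (('neg', a) if kind == 'pos' else ('pos', a)) in C

    changed = True
    while changed:
        changed = False
        for h in atoms:
            bodies = clauses.get(h, [])
            # derive h when some body is entirely proved
            if ('pos', h) not in C and any(all(proved(l) for l in b) for b in bodies):
                C.add(('pos', h)); changed = True
            # derive ~h when every body contains a failing literal
            if ('neg', h) not in C and all(any(fails(l) for l in b) for b in bodies):
                C.add(('neg', h)); changed = True
    return C

# The clauses of Example 5.31 (w occurs only in a body, so ~w is derived):
kb = {'p': [[('pos', 'q'), ('neg', 'r')], [('pos', 's')]],
      'q': [[('neg', 's')]],
      'r': [[('neg', 't')]],
      't': [[]],                   # atomic clause: empty body
      's': [[('pos', 'w')]]}
C = bottom_up_naf(kb, {'p', 'q', 'r', 's', 't', 'w'})
# C matches the derivation sequence in the text: t, ~r, ~w, ~s, q, p
```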
## Top-Down Negation-as-Failure Procedure The top-down procedure for the complete knowledge assumption proceeds by negation as failure. It is similar to the top-down definite-clause proof procedure of Figure 5.4. This is a non-deterministic procedure (see the box) that can be implemented by searching over choices that succeed. When a negated atom $\mbox{\sim}a$ is selected, a new proof for atom $a$ is started. If the proof for $a$ fails, $\mbox{\sim}a$ succeeds. If the proof for $a$ succeeds, the algorithm fails and must make other choices. The algorithm is shown in Figure 5.12. ###### Example 5.32. Consider the clauses from Example 5.31. Suppose the query is $\mbox{{ask}~{}}~{}p$. Initially, $G=\{p\}$. Using the first rule for $p$, $G$ becomes $\{q,\mbox{\sim}r\}$. Selecting $q$, and replacing it with the body of the third rule, $G$ becomes $\{\mbox{\sim}s,\mbox{\sim}r\}$. It then selects $\mbox{\sim}s$ and starts a proof for $s$. This proof for $s$ fails, and thus $G$ becomes $\{\mbox{\sim}r\}$. It then selects $\mbox{\sim}r$ and tries to prove $r$. In the proof for $r$, there is the subgoal $\mbox{\sim}t$, and so it tries to prove $t$. This proof for $t$ succeeds. Thus, the proof for $\mbox{\sim}t$ fails and, because there are no more rules for $r$, the proof for $r$ fails. Therefore, the proof for $\mbox{\sim}r$ succeeds. $G$ is empty and so it returns yes as the answer to the top-level query. Note that this implements finite failure, because it makes no conclusion if the proof procedure does not halt. For example, suppose there is just the rule $p\leftarrow\mbox{}p$. The algorithm does not halt for the query $\mbox{{ask}~{}}~{}p$. The completion, $p\iff p$, gives no information. Even though there may be a way to conclude that there will never be a proof for $p$, a sound proof procedure should not conclude $\mbox{\sim}p$, as it does not follow from the completion.
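Figure 5.12 is likewise not reproduced; the following is a minimal Python sketch of the top-down procedure (my own encoding: a literal is ('pos', a) or ('neg', a), and the nondeterministic clause choice is realized by backtracking):

```python
def prove(goals, clauses):
    """Top-down negation-as-failure proof of a list of goal literals.

    clauses maps each atom to a list of bodies (lists of literals).
    A negated goal ~a succeeds exactly when the sub-proof of a finitely
    fails; the procedure does not terminate on KBs such as p <- p.
    """
    if not goals:
        return True
    (kind, a), rest = goals[0], goals[1:]
    if kind == 'neg':
        # negation as failure: start a new proof for a; ~a succeeds iff it fails
        return (not prove([('pos', a)], clauses)) and prove(rest, clauses)
    # try each clause with head a, replacing the goal by the clause body
    return any(prove(body + rest, clauses) for body in clauses.get(a, []))

# The clauses of Example 5.31:
kb = {'p': [[('pos', 'q'), ('neg', 'r')], [('pos', 's')]],
      'q': [[('neg', 's')]],
      'r': [[('neg', 't')]],
      't': [[]],
      's': [[('pos', 'w')]]}
# ask p succeeds, mirroring the trace in Example 5.32
print(prove([('pos', 'p')], kb))   # True
```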
http://mathoverflow.net/feeds/question/80378
# Produce an irreducible polynomial that can't be proved irreducible by using Eisenstein - MathOverflow [closed]

Question (david, 2011-11-08):

Give an example of an irreducible polynomial whose irreducibility cannot be proved using the Eisenstein criterion, even allowing all linear changes of variable ($x-c=y$).

Answer by Gerry Myerson (2011-11-08):

$x^2+8$ is an example.

Answer by Maurizio Monge (2011-11-08):

It is possible to produce a polynomial that provably cannot be shown irreducible by considering the valuations of the roots, or even any polynomial function in the roots (which can be much more general than a linear substitution), for every possible valuation over the base field.

Let $L/K$ be an unramified extension of number fields (for instance the Hilbert class field of a $K$ with non-trivial class group), generated by $\alpha$ say. Then the minimal polynomial of $\alpha$ over $K$ will do.
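Gerry Myerson's example can be checked mechanically. The sketch below (mine, not from the thread) applies Eisenstein's test to $(x+c)^2+8$ over a range of shifts $c$ and small primes $p$ and finds no applicable pair — consistent with the fact that $p$ would have to divide $8$, which fails for every prime:

```python
def eisenstein_applies(coeffs, p):
    """coeffs = [a_0, ..., a_n].  Eisenstein at p requires:
    p does not divide a_n, p divides every a_i for i < n,
    and p^2 does not divide a_0."""
    *low, lead = coeffs
    return (lead % p != 0
            and all(a % p == 0 for a in low)
            and low[0] % (p * p) != 0)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
# (x + c)^2 + 8 has coefficients [c^2 + 8, 2c, 1]
hits = [(c, p)
        for c in range(-100, 101)
        for p in primes
        if eisenstein_applies([c * c + 8, 2 * c, 1], p)]
print(hits)   # [] : no shift makes Eisenstein applicable
```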
https://deepai.org/publication/estimates-of-the-reconstruction-error-in-partially-redressed-warped-frames-expansions
# Estimates of the Reconstruction Error in Partially Redressed Warped Frames Expansions

In recent work, redressed warped frames have been introduced for the analysis and synthesis of audio signals with non-uniform frequency and time resolutions. In these frames, the allocation of frequency bands or time intervals of the elements of the representation can be uniquely described by means of a warping map. Inverse warping applied after time-frequency sampling provides the key to reduce or eliminate dispersion of the warped frame elements in the conjugate variable, making it possible, e.g., to construct frequency warped frames with synchronous time alignment through frequency. The redressing procedure is however exact only when the analysis and synthesis windows have compact support in the domain where warping is applied. This implies that frequency warped frames cannot have compact support in the time domain. This property is undesirable when online computation is required. Approximations in which the time support is finite are however possible, which lead to small reconstruction errors. In this paper we study the approximation error for compactly supported frequency warped analysis-synthesis elements, providing a few examples and case studies.
## 1 Introduction

The availability of configurable time-frequency schemes is an asset for sound analysis and synthesis, where the elements of the representation can efficiently capture features of the signal. These include features of interpretation: e.g. glissandi and vibrati; human perception: e.g. non-uniform frequency sensitivity of the cochlea; music theory: e.g. scales (pentatonic, 12-tone, Indian scales with unequally spaced tones, etc.); physical effects: e.g. non-harmonic overtones in the low register of the piano or of percussion instruments, and so forth. When used in the context of Music Information Retrieval, the adaptation of the representation to music scales is bound to improve performance, e.g. in instrument, note, chord and overtone detection and recognition.
Traditionally, two extreme cases have been considered: Gabor expansions, featuring uniform time and frequency resolutions, and orthogonal wavelet expansions and frames, featuring octave band allocation and constant uncertainty of the representation elements. In previous work [1, 2, 3, 4, 5, 6, 7], generalised Gabor frames have been constructed which allow for non-uniform time-frequency schemes with perfect reconstruction. In [3] the allocation of generalised Gabor atoms is specified according to a frequency or time warping map. In [8] the STFT redressing method is introduced, which, with the use of additional warping in time-frequency, shows under which conditions one can have generalised Gabor frames. These conditions stem from the interaction of sampling in time-frequency and frequency or time warping operators, which allows one to incorporate the results in [3] in a more general context. It is shown that arbitrary allocation of the atoms is exactly possible in the so-called painless case, i.e. in the case of finite time support of the windows for arbitrary time interval allocation and of finite frequency support of the windows for arbitrary frequency band allocation. Non-uniform frequency analysis by means of warping was introduced in [7]. Non-uniform Gabor frames with constant-Q were previously introduced in [1], based on the theory developed in [2], where an ad hoc procedure was employed for their construction. In [3] we provided an alternative general method for their construction, using warping. In [8] the redressing method was introduced and applied to the general construction of non-stationary Gabor frames. In [9] the general theory was revisited with the real-time computational aspects in mind. There, the first approximate schemes were introduced without extensive testing.
Since online computation of the generalised Gabor analysis-synthesis is only possible with finite duration windows, the arbitrary frequency band allocation does not lead to an exact solution in applications that require real-time operation, while the arbitrary time interval allocation presents little or no problem. In [9] approximations leading to nearly exact representations were introduced. In this paper we expand on the results in [9] and provide a study of the approximation error on a wide class of signals when finite duration windows are required in arbitrary frequency band allocation. The paper is organised as follows. In Section 2 we review the concept of applying time and frequency warping to time-frequency representations, together with the redressing method, which involves a further warping operation in the time-frequency domain to reduce or eliminate dispersion. In Section 3 we introduce approximations suitable for the online computation of redressed frame expansions. In Section 4 the results of numerical experiments are shown, which provide estimates of the approximation error. In Section 5 we draw our conclusions.

## 2 Redressed Warped Gabor Frames

In this section we review the concepts leading to the definition of redressing in the context of frequency warped time-frequency representations. First we review the basic notions of STFT (Short-Time Fourier Transform) and Gabor frames. Then we move on to the definition of warped frames and then to the redressing procedure.

### 2.1 STFT and Gabor Frames

Gabor expansions can be considered as a form of sampling and exact reconstruction of the STFT. As is well-known, given a window $h$ and defining the time-shift operator $T_\tau$ and the modulation operator $M_\nu$, the STFT is obtained by applying the operator $S$ to the signal $s$:

$$[Ss](\tau,\nu)=\langle s,\,T_\tau M_\nu h\rangle=\int_{\mathbb{R}} s(t)\,\overline{h(t-\tau)}\,e^{-j2\pi\nu t}\,dt, \tag{1}$$

where the overbar denotes complex conjugation. A Gabor system $G(h,a,b)$ is generated by the kernel of $S$ by sampling the time-frequency plane $(\tau,\nu)$:

$$G(h,a,b)=\{T_{na}M_{qb}h : n,q\in\mathbb{Z}\} \tag{2}$$

where $a,b>0$ are sampling parameters.
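In a discrete, cyclic setting the sampled STFT values on the Gabor grid can be computed either as inner products against the shifted and modulated window or from a windowed DFT; the following minimal NumPy sketch (my own conventions for FFT sign and normalization, not from the paper) checks that the two agree:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 64                       # signal length (cyclic model of the real line)
a, M = 8, 8                  # time step and number of frequency channels
b = 1 / M                    # frequency step on the normalized axis
t = np.arange(L)
h = np.zeros(L)
h[:16] = np.hanning(16)      # real window h living on Z_L
s = rng.standard_normal(L)

n, q = 3, 2
# inner product <s, T_{na} M_{qb} h>; np.vdot conjugates its first argument
atom = np.roll(h, n * a) * np.exp(2j * np.pi * q * b * t)
c_inner = np.vdot(atom, s)

# same value read off bin q*L/M of the DFT of the windowed signal
c_fft = np.fft.fft(s * np.roll(h, n * a))[q * L // M]

print(np.isclose(c_inner, c_fft))   # True
```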
The scalar products of the signal with the members of the Gabor system

$$\langle s,\,T_{na}M_{qb}h\rangle,\qquad n,q\in\mathbb{Z}, \tag{3}$$

provide evaluations of the STFT (1) of a signal $s$ with window $h$ at the time-frequency grid of points $(na,qb)$, with $n,q\in\mathbb{Z}$. The question whether the signal can be reconstructed from these evaluations can be addressed by introducing the concept of frame. A sequence of functions $\{\psi_l\}_{l\in I}$ in the Hilbert space $H$ is called a frame if there exist both positive constant lower and upper bounds $A$ and $B$, respectively, such that

$$A\|s\|^2\leq\sum_{l\in I}|\langle s,\psi_l\rangle|^2\leq B\|s\|^2\qquad\forall s\in H, \tag{4}$$

where $\|s\|^2$ is the norm square or total energy of the signal. Frames generate signal expansions, i.e., the signal can be perfectly reconstructed from its projections over the frame. A Gabor system that is a frame is called a Gabor frame. In this case, the signal can be reconstructed from the corresponding samples of the STFT (3). While not unique, reconstruction can be achieved with the help of a dual frame, which in turn is a Gabor frame generated by a dual window $g$. Perfect reconstruction essentially depends on the choice of the window and the sampling grid. One can show that there exist no Gabor frames when $ab>1$. See [10] for more information about Gabor frames.

### 2.2 Warped STFT and Gabor Frames

The warped STFT can be obtained by warping the signal prior to applying the STFT operator. In this paper we focus on pure frequency warping. A frequency warping operator $W_{\tilde\theta}$ is completely characterised by a function composition operator $W_\theta$, such that $[W_\theta v](\nu)=v(\theta(\nu))$, in the frequency domain:

$$W_{\tilde\theta}=F^{-1}W_\theta F, \tag{5}$$

where $F$ is the Fourier transform operator. The function $\theta$ is the frequency warping map, which transforms the Fourier transform of a signal into the Fourier transform of another signal. We affix the $\tilde{\ }$ symbol over the map as a reminder that the map operates in the frequency domain.
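Returning briefly to (4): in finite dimensions the frame inequality and the dual-frame reconstruction can be verified directly. The NumPy sketch below (parameters and window are my own illustrative choices) builds an oversampled Gabor system, checks that its frame operator has a strictly positive lower bound, and reconstructs a signal through the canonical dual frame:

```python
import numpy as np

rng = np.random.default_rng(0)
L, a, M = 48, 4, 8                    # (L/a)*M = 96 atoms in C^48: overcomplete
t = np.arange(L)
h = np.exp(-0.5 * ((t - L // 2) / 4.0) ** 2)     # Gaussian window

atoms = np.array([np.roll(h, n * a - L // 2) * np.exp(2j * np.pi * q * t / M)
                  for n in range(L // a) for q in range(M)])

# frame operator S = sum_l psi_l psi_l^H and its spectrum
S = atoms.T @ atoms.conj()
ev = np.linalg.eigvalsh(S)            # ascending real eigenvalues
A_bd, B_bd = ev[0], ev[-1]            # frame bounds of (4)
print(A_bd > 0)                       # True: the system is a frame

# canonical dual frame gamma_l = S^{-1} psi_l and perfect reconstruction
duals = np.linalg.solve(S, atoms.T).T
s = rng.standard_normal(L) + 1j * rng.standard_normal(L)
coeffs = atoms.conj() @ s             # analysis: <s, psi_l>
s_rec = duals.T @ coeffs              # synthesis over the dual frame
print(np.max(np.abs(s_rec - s)) < 1e-6)   # True
```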
If the warping map is one-to-one and almost everywhere differentiable then a unitary form of the warping operator can be defined by the following frequency domain action

$$\hat s_{fw}(\nu)=[\widehat{U_{\tilde\theta}s}](\nu)=\sqrt{\left|\frac{d\theta}{d\nu}\right|}\,\hat s(\theta(\nu)), \tag{6}$$

where $\nu$ denotes frequency. We assume henceforth that all warping maps are almost everywhere increasing so that the magnitude sign can be dropped from the derivative under the square root. Given a frequency warping operator $U_{\tilde\theta}$, the warped STFT is defined through the operator $\tilde S=SU_{\tilde\theta}$ as follows

$$[\tilde Ss](\tau,\nu)=\langle U_{\tilde\theta}s,\,T_\tau M_\nu h\rangle=\langle s,\,U^\dagger_{\tilde\theta}T_\tau M_\nu h\rangle, \tag{7}$$

which is indeed a warped version of (1), where $U^\dagger_{\tilde\theta}$ is the adjoint of the warping operator. If the warping operator is unitary then we have $U^\dagger_{\tilde\theta}=U^{-1}_{\tilde\theta}$. In that case, warping the signal prior to STFT is perfectly equivalent to performing STFT analysis with inversely frequency warped windows. The warped STFT is unitarily equivalent to the STFT so that a number of properties concerning conditioning and reconstruction hold [11]. The Fourier transforms of the frequency warped STFT analysis elements are

$$\hat{\tilde h}_{\tau,\nu}(f)=[\widehat{W_{\tilde\theta^{-1}}h_{\tau,\nu}}](f)=\sqrt{\frac{d\theta^{-1}}{df}}\,\hat h(\theta^{-1}(f)-\nu)\,e^{-j2\pi\theta^{-1}(f)\tau}, \tag{8}$$

i.e., the warped STFT analysis elements are obtained from frequency warped modulated windows centred at the frequencies $f$ for which $\theta^{-1}(f)=\nu$. The windows are time-shifted with dispersive delay, where the group delay is $\tau\,\frac{d\theta^{-1}}{df}$. Frequency warping generally disrupts the time organisation of signals due to the fact that the time-shift operator does not commute with the frequency warping operator [9]. From (4) it is easy to see that any unitary operation, in particular unitary warping, on a frame results in a new frame with the same frame bounds $A$ and $B$ [11]. Since the atoms are not generated by shifting and modulating a single window function, the resulting frames are not necessarily of the Gabor type. However, warping prior to conventional Gabor analysis and unwarping after Gabor synthesis always leads to perfect reconstruction. Starting from a Gabor frame (analysis) $\varphi_{n,q}$ and dual frame (synthesis) $\gamma_{n,q}$:

$$\varphi_{n,q}=T_{na}M_{qb}h,\qquad\gamma_{n,q}=T_{na}M_{qb}g, \tag{9}$$

where $h$ and $g$ are dual windows, warped frames can be generated, following (8), by unwarping the analysis and synthesis frames. In the case of non-unitary warping, a frequency domain scaling operation is necessary in order to reconstruct the original signal. For the case of unitary warping we simply have:

$$\tilde\varphi_{n,q}=U^\dagger_{\tilde\theta}\varphi_{n,q}=U_{\tilde\theta^{-1}}T_{na}M_{qb}h,\qquad\tilde\gamma_{n,q}=U^\dagger_{\tilde\theta}\gamma_{n,q}=U_{\tilde\theta^{-1}}T_{na}M_{qb}g, \tag{10}$$

where $\tilde\varphi_{n,q}$ is the frequency warped analysis frame and $\tilde\gamma_{n,q}$ is the dual warped frame for the synthesis. With these definitions, one obtains the signal expansion
Starting from a Gabor frame (analysis) and dual frame (synthesis) : φn,q=TnaMqbhγn,q=TnaMqbg, (9) where and are dual windows, warped frames can be generated, following (8), by unwarping the analysis and synthesis frames. In the case of non-unitary warping, a frequency domain scaling operation is necessary in order to reconstruct the original signal. For the case of unitary warping we simply have: ~φn,q=U†~θφn,q=U~θ−1TnaMqbh~γn,q=U†~θγn,q=U~θ−1TnaMqbg, (10) where is the frequency warped analysis frame and is the dual warped frame for the synthesis. With these definitions, one obtains the signal expansion s=∑n,q∈Z⟨s,~φn,q⟩~γn,q. (11) Warped Gabor frames suffer from the same problem as the warped STFT. Indeed the Fourier transforms of the warped Gabor frame elements bear frequency dispersive delays so that dispersive time samples are produced by the direct application of the frequency warped frame analysis. ### 2.3 Redressing Methods As shown in [8, 9], the dispersive delays intrinsic to the warped STFT can be redressed, i.e. made into constant delays in each analysis band if frequency unwarping is performed in the transformed time domain, i.e. with respect to time shift. In other words, instead of (7) we consider the similarity transformation on the STFT operator, which is time-shift covariant. In fact, one has: [ˆW~θ−1SW~θs](f,ν)=¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯^h0,0(θ−1(f)−ν)^s(f), (12) which is in the form of a time-invariant filtering operation, corresponding to convolution in time domain. The filters are frequency warped versions of the modulated windows in the traditional STFT. The Fourier transform of the redressed analysis elements are (13) which shows that the dispersive delays in the analysis elements (8) are brought back to non-dispersive delays. In redressing warped Gabor frames one faces a further difficulty due to time-frequency sampling. 
In this case, inverse frequency warping can only be applied to sequences (with respect to the time shift index) and may not perfectly reverse the dispersive effect of the original map on delays. Unitary frequency warping in discrete time can be realised with the help of an orthonormal basis of $\ell^2(\mathbb{Z})$ constructed from an almost everywhere differentiable warping map $\vartheta$ that is one-to-one and onto $[-\frac12,\frac12)$, as follows:

$$\mu_m(n)=\int_{-\frac12}^{+\frac12}\sqrt{\frac{d\vartheta}{d\nu}}\,e^{j2\pi(n\vartheta(\nu)-m\nu)}\,d\nu, \tag{14}$$

where $m,n\in\mathbb{Z}$ (see [12, 13, 14, 15, 16]). The map can be extended over the entire real axis as congruent modulo 1 to a 1-periodic function. Given any sequence $x$ in $\ell^2(\mathbb{Z})$, the action of the discrete-time unitary warping operator $D_{\tilde\vartheta}$ is defined as follows:

$$\tilde x(m)=[D_{\tilde\vartheta}x](m)=\langle x,\mu_m\rangle_{\ell^2(\mathbb{Z})}. \tag{15}$$

In fact, the sequence $\tilde x$ in $\ell^2(\mathbb{Z})$ satisfies

$$\hat{\tilde x}(\nu)=\sqrt{\frac{d\vartheta}{d\nu}}\,\hat x(\vartheta(\nu)), \tag{16}$$

where the $\hat{\ }$ symbol, when applied to sequences, denotes the discrete-time Fourier transform. The sequences $\mu_m$ define the nucleus of the inverse unitary frequency warping operator $D^\dagger_{\tilde\vartheta}=D^{-1}_{\tilde\vartheta}$. In order to limit or eliminate time dispersion in the frequency warped Gabor expansion, the discrete-time frequency warping operator is applied to the time sequence of expansion coefficients over the warped Gabor frames. Since the operator is applied only on the time index, for generality, one can include dependency of the maps $\vartheta_q$ and of the sequences $\eta_{n,q}$ on the frequency index $q$. The process can be equivalently described by defining the redressed frequency warped Gabor analysis and synthesis frames as follows:

$$\tilde{\tilde\varphi}_{n,q}=D^{-1}_{\tilde\vartheta_q}\tilde\varphi_{\bullet,q}=\sum_m\eta_{n,q}(m)\,\tilde\varphi_{m,q},\qquad\tilde{\tilde\gamma}_{n,q}=D^{-1}_{\tilde\vartheta_q}\tilde\gamma_{\bullet,q}=\sum_m\eta_{n,q}(m)\,\tilde\gamma_{m,q}, \tag{17}$$

obtaining:

$$s=\sum_{n,q\in\mathbb{Z}}\langle s,\tilde{\tilde\varphi}_{n,q}\rangle\,\tilde{\tilde\gamma}_{n,q}. \tag{18}$$

One can show [9] that the Fourier transforms of the redressed frame are

$$\hat{\tilde{\tilde\varphi}}_{n,q}(f)=A(f)\,\hat h(\theta^{-1}(f)-qb)\,e^{-j2\pi n\vartheta_q(a\theta^{-1}(f))}, \tag{19}$$

where

$$A(f)=\sqrt{\frac{d\theta^{-1}}{df}}\,\sqrt{\frac{d\vartheta_q}{d\nu}}\,\Bigg|_{\nu=a\theta^{-1}(f)}. \tag{20}$$

Hence, dispersion is completely eliminated if

$$\vartheta_q(a\theta^{-1}(f))=d_qf \tag{21}$$

for any $f$, where the $d_q$ are positive constants controlling the time scale in each frequency band.
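The nucleus (14) can be evaluated numerically to check unitarity on a truncated index range. The sketch below (my own; the smooth map $\vartheta(\nu)=\nu+\alpha\sin(2\pi\nu)/(2\pi)$ is an illustrative choice satisfying the stated constraints, not one used in the paper) verifies that the sequences $\mu_m$ are approximately orthonormal:

```python
import numpy as np

alpha = 0.25                                       # map strength, |alpha| < 1
grid = np.linspace(-0.5, 0.5, 4097)[:-1]           # one period of nu
vartheta = grid + alpha * np.sin(2 * np.pi * grid) / (2 * np.pi)
dvartheta = 1 + alpha * np.cos(2 * np.pi * grid)   # strictly positive

N = 48                                             # truncate |n| <= N
n = np.arange(-N, N + 1)

def mu(m):
    """Numerical evaluation of the nucleus (14) for one value of m."""
    phase = np.outer(n, vartheta) - m * grid
    integrand = np.sqrt(dvartheta) * np.exp(2j * np.pi * phase)
    return integrand.mean(axis=1)                  # periodic rectangle rule

basis = np.array([mu(m) for m in range(-2, 3)])    # mu_m on the truncated range
gram = basis @ basis.conj().T                      # <mu_m, mu_m'> in l2
print(np.round(np.abs(gram), 6))                   # approximately the identity
```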
In this case, the Fourier transforms of the redressed frame elements become:

$$\hat{\tilde{\tilde\varphi}}_{n,q}(f)=\sqrt{\frac{d_q}{a}}\,\hat h(\theta^{-1}(f)-qb)\,e^{-j2\pi nd_qf}. \tag{22}$$

When all the $d_q$ are identical, all the time samples are aligned to a uniform time scale throughout frequencies. If the $d_q$ are distinct, time realignment when displaying the non-uniform spectrogram is a simple matter of a different time base or time scale for each frequency band. Unfortunately, due to the discrete nature of the redressing warping operation, each map $\vartheta_q$ is constrained to be congruent modulo 1 to a 1-periodic function, while the global warping map $\theta$ can be arbitrarily selected. Moreover, the functions $\vartheta_q$ must be one-to-one in each unit interval, therefore they can have at most an increment of 1 there. In Fig. 1 we illustrate the phase linearisation problem. There, the black curve is the amplitude scaled warping map and the grey curve represents the map $\vartheta_q$, which is congruent modulo 1 to a 1-periodic function. Both maps are plotted against the abscissa $\nu=a\theta^{-1}(f)$. By amplitude scaling the warping map one can allow the values of the map to lie in the range of the discrete-time warping map $\vartheta_q$. The amplitude scaling factors $d_q$ happen to be the new time sampling intervals of the redressed warped Gabor expansion. In the “painless” case, which was hand picked in [3], the window is chosen to have compact support in the frequency domain. Through equation (19) and condition (21), the redressing method shows that, given any continuous and almost anywhere differentiable and increasing warping map, only in the painless case can one exactly eliminate the dispersive delays with the help of (17). In fact, in this case linearisation of the phase is only required within a finite frequency range given by the frequency support of the frame elements [8, 9], which is compatible with the periodicity constraint of the redressing maps $\vartheta_q$. In the general case, a perfect time realignment of the components is not guaranteed.
Notice, however, that, by construction, the redressed warped Gabor systems are guaranteed to be frames for any choice of the maps satisfying the stated periodicity conditions, even when the phase is not completely linearised. Locally, within a certain band of the warped modulated windows, it is possible to linearise the phase of the complex exponentials in (19). In the sequel we will refer to this band as the essential bandwidth since, hopefully, the magnitude of the Fourier transform of the window is negligible outside it, at least as a design goal. Unfortunately, in general both the painless case and the partially redressed cases lead to infinite duration windows, which are undesirable when online computation is required. In what follows we propose some approximations and study the reconstruction error, also through numerical experiments.

## 3 Real-time computation of the Warped Gabor Expansion

For real-time computation one needs to make assumptions on the signal, as well as on the window functions, in order to keep the computational load as low as possible. By requiring the window $h$ to be real-valued and the warping map to be odd, we obtain $\tilde{\tilde\varphi}_{n,-q}=\overline{\tilde{\tilde\varphi}_{n,q}}$. If, additionally, the input signal is real-valued, then the coefficients fulfil $\langle s,\tilde{\tilde\varphi}_{n,-q}\rangle=\overline{\langle s,\tilde{\tilde\varphi}_{n,q}\rangle}$ and thus we only need to compute about half the coefficients and frame elements. By enforcing shift-invariance of the warped frame elements (i.e. $\tilde{\tilde\varphi}_{n,q}=T_{nd_q}\tilde{\tilde\varphi}_{0,q}$), which we must do since otherwise the pre-computation of the frame elements and hence the real-time computation of the expansion would be impossible, it is sufficient to only store $\tilde{\tilde\varphi}_{0,q}$ for non-negative values of $q$. It is advisable that the warping map is selected as a continuously differentiable function, since the error in the resynthesised signal in the frequency domain at the points of discontinuity of $\theta'$ is very high. A strictly positive derivative helps the atoms not get too stretched in the time domain, which is undesirable in real time applications.
To avoid aliasing we further require that the warping map attains the Nyquist frequency SR/2 exactly and is linear above some frequency, where SR denotes the sampling rate. This ensures that the high frequencies are smoothly mapped back to the negative frequencies and avoids extra aliasing introduced by the approximation. By requiring the Fourier transform of the window to have an analytic expression, we can avoid numerical errors in the computation of the warped windows. We further only worked with windows which are dual to themselves, i.e. $\tilde h = h$, which is actually not a necessary restriction.

### 3.1 Implementation Details

We implemented the approximate warped redressed Gabor analysis and synthesis as C externals interfaced to Pure-Data (32 bit and 64 bit). It runs both under Windows and Linux and can use multiple cores. The warped windows are computed by sampling equation (22) and applying an IDFT to that data. The inner products are computed directly with a loop; likewise, the synthesis sum is computed directly. It turned out that this is sufficiently fast for real-time computation and hence we made no use of fast convolution algorithms. Since the data rate of the analysis part is not uniform in time, PD’s signal connections are not suitable to transfer the coefficients. Therefore we use the signal connections to transfer pointers rather than signals.

### 3.2 Interface Details

For simplicity we require that the essential bandwidth $B$ is a multiple of the frequency shift parameter $b$, i.e. $B = Kb$. Then, in order to obtain a frame, the following conditions must be fulfilled, as can be seen easily in Figure 1 [9, eq. (34) to (36)]:

$$abK \le 1, \qquad d_q B_q \le 1 \qquad (23)$$

where $B_q$ is the essential bandwidth of the warped modulated window,

$$B_q = \theta\!\left(qb + \tfrac{B}{2}\right) - \theta\!\left(qb - \tfrac{B}{2}\right). \qquad (24)$$

In the case of an exponentially increasing warping map, it makes sense to set the frequency shift parameter in a way that adjacent notes fall away from the window's main lobe in the frequency domain (if there is such a main lobe, as in the case of the raised cosine window).
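Conditions (23) and (24) are easy to check numerically. The following sketch uses a hypothetical concave warping map and illustrative parameter values, not the patch defaults:

```python
import math

def theta(f):
    # hypothetical concave warping map (compresses high frequencies), theta(0) = 0
    return 200.0 * math.log1p(f / 200.0)

b, K = 64.0, 4              # frequency step and essential bandwidth B = K b
B = K * b
a = 1.0 / (b * K)           # time step chosen so that a b K <= 1 (first part of (23))
assert a * b * K <= 1.0

for q in range(K // 2, 40):
    # eq. (24): essential bandwidth of the warped modulated window
    Bq = theta(q * b + B / 2) - theta(q * b - B / 2)
    d_q = 1.0 / Bq          # largest admissible time step for band q
    assert d_q * Bq <= 1.0 + 1e-12   # second part of (23)
```

With a concave map, $B_q$ shrinks for high bands, so the admissible time steps $d_q$ grow with $q$; this is the source of the non-uniform data rate discussed below.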
This on the other hand determines the window length $T_h$, a minimal value for the overlap factor $R$, and the time shift parameter

$$a = \frac{T_h}{R} \qquad (25)$$

with $R$ constrained from below due to equation (23). Equations (23) are fulfilled by setting

$$b = \frac{1}{aRC_b}, \qquad d_q \simeq \frac{1}{B_q C_d} \qquad (26)$$

where $C_b$ and $C_d$ can be chosen inside the PD-patch and together control the oversampling, while $K$ controls the bandwidth in which the phase is linearised and also influences the oversampling. The $d_q$ are different for each band, which results in a non-uniform data rate for each band. We remark that the numbers $d_q$ have to be chosen compatibly with the sampling rate SR. The number of bands we need for a given SR is

$$q_{\sup} = \frac{\theta^{-1}(SR/2)}{b}. \qquad (27)$$

The assumptions on the warping map ensure that this is a natural number. Due to the warping, the supports of the windows are in general unbounded. Thus we compute the windows with a zero-padded array of length $T_c$ ($c$ for compute), which is defined as

$$T_c = \frac{T_h}{\theta'_{\inf}}\,C_{T_c} \qquad (28)$$

where $C_{T_c}$ can be chosen inside the PD-patch and $\theta'_{\inf}$ denotes the infimum of $\theta'$. For the same reason it is indispensable to cut the windows after warping. To define a sensible atom length after warping, we set the atoms to zero after the point from which they drop below a threshold given by a constant which can be chosen inside the PD-patch. Afterwards the windows are truncated accordingly and with respect to a parameter which defines the maximal desired window length. It is important that the windows are cut after aligning them to the time origin. The second approach suggested in [9] to obtain windows with finite length, namely computing only a linearised version of the warped windows, leads to very bad reconstruction.

### 3.3 Computational Costs

#### 3.3.1 Analysis

A rough estimation yields the following. Let $T_q$ denote the length of the $q$-th window $\tilde\varphi_{0,q}$; it will be clear from the context whether $T_q$ denotes seconds or samples. To compute the inner product of that window with the signal one needs $4T_q$ real multiplications and summations. This has to be done once every time step $d_q$.
Hence, per sample we have $4T_q/d_q$ floating point operations (flops) on average for band $q$. If the essential bandwidth is not too large, then $T_q$ is proportional to $T_0/\theta'(qb)$ and $1/d_q$ is proportional to $\theta'(qb)$, since linearising $\theta$ around $qb$ yields

$$B_q = \theta\!\left(qb+\tfrac{Kb}{2}\right)-\theta\!\left(qb-\tfrac{Kb}{2}\right) \simeq \theta'(qb)\,Kb \;\Rightarrow\; d_q = \frac{1}{B_q} \simeq \frac{1}{\theta'(qb)\,Kb}. \qquad (29)$$

Summing up over all windows we get the average number of operations per sample:

$$N_{avg} = \sum_q \frac{4T_q}{d_q} \sim \sum_q 4\,\frac{T_0}{\theta'(qb)}\,\theta'(qb)\,Kb \simeq 4K\,\theta^{-1}(SR/2)\,T_0 \qquad (30)$$

where $q_{\sup}$, defined above, denotes the number of bands and depends on $b$. That means the complexity of the analysis part is proportional to $K$, $T_0$ and $\theta^{-1}(SR/2)$. If there is a lower bound for the $d_q$'s, one can choose identical time steps for all bands. However, this increases the computational load tremendously, making real-time computation impossible.

The above is an estimation of the average computational cost. In the worst case all inner products of the windows with the signal have to be computed starting at one frame. The number of operations for that frame is

$$\sum_q 4T_q \simeq 4\sum_q \frac{T_0}{\theta'(qb)}. \qquad (31)$$

However, if one processes the audio stream block-wise, the worst case cannot arise for all samples in the block at once, because two worst case scenarios are separated by at least the smallest time step; the samples in between have lower computational cost for sure, and the samples directly following a worst case frame have zero computational cost. Therefore the average costs are a suitable measure for the analysis process.

#### 3.3.2 Synthesis Part

The complexity of the synthesis part is the same as that of the analysis part. Furthermore, in the synthesis part the worst case scenario can be avoided, because only the parts belonging to the next frame buffer have to be computed in real time; the rest can be computed later. Nevertheless, our given implementation is not optimised in this direction.

#### 3.3.3 Memory Costs

Our algorithm for precomputing the windows needs a number of memory cells proportional to the zero-padded window lengths. With slight modifications it only needs cells for the truncated windows, which is close to the smallest possible number of cells needed.
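The closed-form estimate in (30) can be sanity-checked against the band-by-band sum. The square-root warping map and all numbers below are illustrative assumptions, chosen only because the map has a simple inverse:

```python
import math

SR = 48000.0
def theta(f):       return math.sqrt(SR / 2 * f)      # hypothetical warping map
def theta_prime(f): return math.sqrt(SR / 2) / (2 * math.sqrt(f))
def theta_inv(y):   return y * y / (SR / 2)

assert abs(theta(theta_inv(12345.0)) - 12345.0) < 1e-6

b, K, T0 = 100.0, 4, 0.02            # frequency step, bandwidth factor, base window length
q_sup = int(theta_inv(SR / 2) / b)   # number of bands, eq. (27)

exact = 0.0
for q in range(1, q_sup + 1):
    Tq = T0 / theta_prime(q * b)               # warped window duration
    dq = 1.0 / (theta_prime(q * b) * K * b)    # per-band time step, eq. (29)
    exact += 4 * Tq / dq                       # ops contributed by band q

approx = 4 * K * theta_inv(SR / 2) * T0        # closed-form estimate, eq. (30)
assert abs(exact - approx) / approx < 0.05
```

The factors $\theta'(qb)$ cancel band by band, which is why the final estimate depends only on $K$, $T_0$ and $\theta^{-1}(SR/2)$.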
The analysis and synthesis algorithms both need at least a buffer of the size of the longest window plus the audio buffer, where audiobuffer denotes the buffer length in which the audio is processed. The frame elements for our tests below needed between 50 and 400 MB.

## 4 Computational Error

The measured error values are the averaged RMS amplitude in dB of the difference between the input signal and the analysed-synthesised output signal, relative to the RMS amplitude of the input (i.e. the negative signal-to-noise ratio). For comparison: 16 bit quantization (which is CD quality) has an error of about $-96$ dB, 8 bit quantization about $-48$ dB. The tests were conducted with the PD-patch available at tommsch.com/science.php in real time over a time of about 20 s, with double precision floating point numbers and standard audio sampling rate. We used the following stationary and non-stationary test signals:

• white: White noise
• sine X: A pure sine tone with X Hz
• const: A constant signal
• clicks: Clicks with a spacing of 1 s
• beet: Beethoven - Piano Sonata op. 31.2, length 25 s.
• speech: A man counting from one to twenty, length 20 s.
• fire: A firework, length 23 s.
• atom: A sample of sparse synthesised warped Gabor atoms which were also used for that specific test run.

Since our method would lead to perfect reconstruction if the windows were time-shift invariant, the behaviour of the algorithm for stationary signals over a long time is of greatest interest. The constant signal is of interest since it is the one with the lowest possible frequency, and our algorithm may bear problems with low frequencies due to the necessary cutting. The clicks represent the other extreme point of signals. The atoms are of interest since they show whether our algorithm has the ability to reproduce its own atoms with high quality. For the warping map, we used a function $\theta_e$ which is exponentially increasing between two frequencies $f_1$ and $f_2$, which are constant parameters that can be set inside the PD-patch. For all the tests we used the same values of $f_1$ and $f_2$.
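For illustration, a warping map with the required boundary behaviour (strictly increasing, smooth, attaining the Nyquist frequency exactly) can be written down directly. This is not the exponential construction used in the paper, just a hypothetical stand-in with the same structural properties:

```python
import math

NYQ = 24000.0    # Nyquist frequency (illustrative)
A = 3000.0       # warp strength; A * pi / NYQ < 1 keeps theta strictly increasing

def theta(f):
    # toy warping map: fixes 0 and the Nyquist frequency, smooth, increasing
    return f + A * math.sin(math.pi * f / NYQ) ** 2

def theta_prime(f):
    return 1.0 + A * math.pi / NYQ * math.sin(2 * math.pi * f / NYQ)

assert theta(0.0) == 0.0
assert abs(theta(NYQ) - NYQ) < 1e-9      # attains Nyquist exactly
assert all(theta_prime(k * NYQ / 100) > 0 for k in range(101))
vals = [theta(k * NYQ / 1000) for k in range(1001)]
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))
```

Any candidate map can be vetted this way (fixed endpoints, positive derivative, monotonicity) before being loaded into the patch.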
Below $f_1$ and above $f_2$ the map is linear. The function attains exactly the Nyquist frequency, i.e. $\theta_e(SR/2) = SR/2$. The frequencies $f_1$, $f_2$ are chosen such that the resulting map is continuously differentiable. See Figure 4 for a plot of $\theta_e$.

### 4.1 Raised Cosine Window

As proposed in [9] we first performed tests with a raised cosine window,

$$h(t) = \begin{cases} \sqrt{\frac{2b}{R}}\,\cos\frac{\pi t}{T} & -\frac{T}{2} \le t \le +\frac{T}{2} \\ 0 & \text{otherwise} \end{cases} \qquad (32)$$

with Fourier transform

$$\hat h(\nu) = T\sqrt{\frac{b}{2R}}\left(\operatorname{sinc}\!\left(\nu T - \tfrac12\right) + \operatorname{sinc}\!\left(\nu T + \tfrac12\right)\right) \qquad (33)$$

where $T$ is the total duration of the window and $R$ is an integer overlap factor. See [9, Section 4] for an ansatz on how to compute these parameters. This window has a very slow decay in the frequency domain after warping:

$$\hat{\tilde\varphi}_{0,q}(f) = \sqrt{d_q a}\,\hat h\big(\theta_e^{-1}(f) - qb\big) \simeq \frac{1}{\log^2 f}. \qquad (34)$$

This means that either the warped windows are not bounded anymore (i.e. their Fourier transforms are not in $L^1$) or that they are discontinuous, which is clearly visible in the warped windows; a plot of one window can be seen in Figure 5. Hence the computation of the warped windows with the IDFT bears numerical errors. Also the test results with this window were suboptimal, yielding errors that varied over tens of dB depending on the chosen parameters and input signal. Changing the essential-bandwidth parameter $K$ or the oversampling parameters had a significant effect on the error; in Figure 6 one can see the influence of $K$. This can be expected, since the essential bandwidth, in which the phase is linearised, depends on that parameter. Since oversampling is directly proportional to the computational load (as computed in (30)), for real-time applications there is a natural upper bound; moderate values provided good results, while too small and too big numbers for both gave worse results. On the contrary, changing the time shift parameter while preserving the oversampling factor and the parameter $K$ had no effect within a reasonable range. The error was nearly proportional to the window length after cutting in a certain range, see Figure 8. From this observation one can determine a suitable cutting parameter. Changing the zero-padding parameter clearly has an influence on the error only for low frequencies.
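Before warping, the raised cosine (32) is attractive because its square satisfies the classical overlap-add identity at 50% overlap ($\cos^2 + \sin^2 = 1$). A small unwarped sketch with unit amplitude and illustrative sizes:

```python
import math

S = 1000                          # samples per window (window length T)
h = [math.cos(math.pi * (t / S - 0.5)) for t in range(S)]   # raised cosine
hop = S // 2                      # 50% overlap, i.e. R = 2

# overlap-add of h^2 at hop T/2: cos^2 + sin^2 = 1 in the fully covered interior
cover = [0.0] * (2 * S)
for k in range(7):
    for t in range(S):
        i = k * hop + t - S
        if 0 <= i < len(cover):
            cover[i] += h[t] ** 2
assert all(abs(c - 1.0) < 1e-9 for c in cover)
```

It is this self-duality before warping that is lost once the slowly decaying warped spectrum (34) forces aggressive truncation.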
The parameter $T_c$ had no big influence as long as it was about twice the window length $T_h$, see Figure 7. The warping map has a very big influence on the error: at points where the map is not smooth, the error for these frequencies is orders of magnitude higher. This can be seen in Figures 9 and 10 for the 30 Hz sine signal; at this point, our warping map is only continuously differentiable. For a warping map which is merely continuous the error was again 10 dB higher. The table in Figure 9 shows selected numerical results with well chosen parameters for the raised cosine window. All values are in dB; values above and below certain thresholds are coloured. The number $N_{avg}$ denotes the average computational complexity (see above).

### 4.2 Gaussian Window

In order to obtain a window with proper decay in the frequency domain after warping, we used a Gaussian window. This window does not overlap-add to one, hence higher overlap is necessary to minimize the deviation from one. The warped windows were still fast decaying in the time domain, resulting in the possibility to cut them much shorter than the warped raised-cosine windows, which compensated the high computational load due to the high overlap. Our Gaussian window and its Fourier transform are

$$h(t) = C\sqrt{\frac{b}{2R}}\,e^{-\frac{1}{4}\frac{t^2}{T^2}}, \qquad \hat h(\nu) = CT\sqrt{\frac{b}{R}}\,e^{-\nu^2 T^2} \qquad (35)$$

where $T$ controls the duration of the window, the overlap factor $R$ is an integer, and $C$ is a constant factor used to approximate the overlap-add-to-one condition. This window has a fast enough decay to ensure that the warped windows in the frequency domain are still in $L^1$ and hence their inverse Fourier transforms (i.e. the warped windows in the time domain) are bounded and continuous. This window leads to significantly better results, down to the limit set by PD’s single precision numbers. The influence of the parameters on the error was the same as for the raised-cosine window. The tables in Figure 10 show selected numerical results.
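The deviation of the Gaussian overlap-add from a constant, and how quickly it vanishes with growing overlap, can be measured directly. The hop and width values below are illustrative, not the test-run settings:

```python
import math

def oa_deviation(T, hop, span=4000):
    """Relative deviation from its mean of the overlap-add of Gaussian windows
    exp(-t^2 / (4 T^2)) placed every `hop` samples (tails truncated at 12 T)."""
    cover = [0.0] * span
    reach = int(12 * T)
    for k in range(-reach // hop - 2, span // hop + reach // hop + 3):
        for t in range(-reach, reach + 1):
            i = k * hop + t
            if 0 <= i < span:
                cover[i] += math.exp(-t * t / (4.0 * T * T))
    mean = sum(cover) / len(cover)
    return max(abs(c - mean) for c in cover) / mean

loose = oa_deviation(T=50, hop=200)   # low overlap: clearly visible ripple
tight = oa_deviation(T=50, hop=50)    # high overlap: ripple is negligible
assert tight < loose
assert tight < 1e-6
```

The ripple decays roughly like $e^{-2\pi^2\sigma^2/\mathrm{hop}^2}$, so even moderate overlap already pushes the deviation far below audible error levels, which is consistent with the observation that very high overlap turned out to be unnecessary.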
### 4.3 Comparison of these two windows

For the raised cosine a high number of bands must be used to achieve a small error. In Figure 8 the tests marked with red dots were conducted using 92 bands, the tests with yellow dots with 229 bands. Since the windows have a bad decay in the frequency domain, a high essential bandwidth has to be used too, which entails big overlap in time and hence a very high average computational load, in our example many thousands of floating point operations per sample. If one uses parameters similar to the ones used for the Gaussian window in Figure 10, the error is in an unacceptable range. Gaussian windows on the other hand do not overlap-add to one and hence are not dual to themselves. Hence we at first used a very high overlap to minimize the deviation from one, which turned out to be unnecessary later. The fast decay in the time domain allows cutting the windows much shorter than the raised cosine windows. This decreases the computational load, in our example to only 7.3k flops per sample on average. In Figure 8 one can see that for the Gaussian window, with proper parameters, the error reaches the range of single precision accuracy.

## 5 Conclusions

We have introduced a novel, flexible and easy way to construct frames starting from a classical Gabor frame. Tests show that even in the painful case, where perfect time realignment of the components is not guaranteed and hence our method does not lead to perfect reconstruction, the error can be made as small as the accuracy of single precision floating point numbers for a wide range of signals. This does not prove that the error is small for all signals, but gives a good estimation of the error to be expected. The transform works in real time. We are going to implement maps that can be arbitrarily defined, e.g., by means of interpolation of a selected number of points. We will also implement Gabor multipliers [17] in the redressed warped Gabor expansion (partially done already with the winarym~ external).
Since we already implemented this method as a Pure Data external, it is ready to use for audio applications. The whole external is explained in detail, together with the source code, online at tommsch.com/science.php and in [18].

## 6 Acknowledgements

Many thanks for the great number of suggestions by the great number of anonymous reviewers on how to improve the paper.

## References

• [1] G. A. Velasco, N. Holighaus, M. Dörfler, and T. Grill, “Constructing an invertible constant-Q transform with non-stationary Gabor frames,” in Proceedings of the Digital Audio Effects Conference (DAFx-11), Paris, France, 2011, pp. 93–99.
• [2] P. Balazs, M. Dörfler, F. Jaillet, N. Holighaus, and G. A. Velasco, “Theory, implementation and applications of nonstationary Gabor Frames,” Journal of Computational and Applied Mathematics, vol. 236, no. 6, pp. 1481–1496, 2011.
• [3] G. Evangelista, M. Dörfler, and E. Matusiak, “Phase vocoders with arbitrary frequency band selection,” in Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark, 2012, pp. 442–449.
• [4] N. Holighaus and C. Wiesmeyr, “Construction of warped time-frequency representations on nonuniform frequency scales, Part I: Frames,” ArXiv e-prints, Sep. 2014.
• [5] T. Twaroch and F. Hlawatsch, “Modulation and warping operators in joint signal analysis,” in Time-Frequency and Time-Scale Analysis, 1998. Proceedings of the IEEE-SP International Symposium on, Oct. 1998, pp. 9–12.
• [6] R. G. Baraniuk and D. L. Jones, “Warped wavelet bases: unitary equivalence and signal processing,” in Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on, Apr. 1993, vol. 3, pp. 320–323 vol.3.
• [7] C. Braccini and A. Oppenheim, “Unequal bandwidth spectral analysis using digital frequency warping,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 22, no. 4, pp. 236–244, Aug. 1974.
• [8] G. Evangelista, “Warped Frames: dispersive vs.
non-dispersive sampling,” in Proceedings of the Sound and Music Computing Conference (SMC-SMAC-2013), Stockholm, Sweden, 2013, pp. 553–560. • [9] G. Evangelista, “Approximations for Online Computation of Redressed Frequency Warped Vocoders,” in Proceedings of the Digital Audio Effects Conference (DAFx-14), Erlangen, Germany, 2014, pp. 1–7. • [10] K. Gröchenig, Foundations of Time-Frequency Analysis, Applied and Numerical Harmonic Analysis. Birkhäuser Boston, 2001. • [11] R. G. Baraniuk and D. L. Jones, “Unitary equivalence : A new twist on signal processing,” IEEE Transactions on Signal Processing, vol. 43, no. 10, pp. 2269–2282, Oct. 1995. • [12] P. W. Broome, “Discrete orthonormal sequences,” Journal of the ACM, vol. 12, no. 2, pp. 151–168, Apr. 1965. • [13] L. Knockaert, “On Orthonormal Muntz-Laguerre Filters,” IEEE Transactions on Signal Processing, vol. 49, no. 4, pp. 790–793, Apr. 2001. • [14] G. Evangelista, “Dyadic Warped Wavelets,” Advances in Imaging and Electron Physics, vol. 117, pp. 73–171, Apr. 2001. • [15] G. Evangelista and S. Cavaliere, “Frequency Warped Filter Banks and Wavelet Transform: A Discrete-Time Approach Via Laguerre Expansions,” IEEE Transactions on Signal Processing, vol. 46, no. 10, pp. 2638–2650, Oct. 1998. • [16] G. Evangelista and S. Cavaliere, “Discrete Frequency Warped Wavelets: Theory and Applications,” IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 874–885, Apr. 1998, special issue on Theory and Applications of Filter Banks and Wavelets. • [17] Hans G. Feichtinger and Krzysztof Nowak, A First Survey of Gabor Multipliers, pp. 99–128, Birkhäuser Boston, Boston, MA, 2003. • [18] Thomas Mejstrik, “Real time computation of redressed frequency warped gabor expansion,” Master thesis, University of Music and Performing Arts Vienna, tommsch.com/science.php, 2015.
https://cs.stackexchange.com/tags/optimization/new
# Tag Info

Suppose in one step of BFGS we compute $p_k$ such that $B_k p_k = -\nabla f(x_k)$ and set $x_{k+1} = x_k + \alpha_k p_k$, where $\alpha_k$ is determined by the Wolfe conditions. You can check them here: https://en.wikipedia.org/wiki/Wolfe_conditions

The problem is NP-hard even when all sets have at most (or exactly) one element. This can be seen by a reduction from (the decision version of) vertex cover. Given a graph $H$, you can build the graph $G=(V,E)$ by starting with a graph containing a single vertex $s$ and doing the following for each edge $e=(u,v)$ of $H$: add three new vertices to $G$, namely $x_e, ...$

1 You didn't explain how your solution today works, so it's hard to give any concrete pointers. But more generally, in a Monte Carlo solution, you'll take random steps all the way to a terminal state (win/loss). As you explore more and more nodes, you can start to focus more on promising ones, by having them be more likely to be selected. In pandemic there are ...

1 I'm not sure this will answer your question "mathematically", but it will definitely give you some idea of why it is "advantageous". Usual RL requires keeping a state-value vector in memory. When the number of possible states is large, or even infinite, this is not practical to keep in memory. In addition, the usual RL requirements for ...

1 You could try expressing this as an integer linear program: let $v_{x,b}$ be 1 if item $x$ is placed in box $b$, and let $w_{x,x',b,b'}$ be 1 if $v_{x,b}=1$ and $v_{x',b'}=1$; then your objective function is a linear function of these zero-or-one variables. You can also enforce the relationship between the $v$'s and $w$'s, and ensure that the $v$'s ...
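The Wolfe conditions mentioned in the first snippet can be checked directly. A minimal 1-D sketch on a quadratic with plain backtracking follows; a real BFGS implementation would use a proper line search (e.g. SciPy's `optimize.line_search`), and backtracking alone does not guarantee the curvature condition in general:

```python
def wolfe_ok(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Weak Wolfe conditions for a step alpha along direction p (1-D case)."""
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * grad(x) * p
    curvature = grad(x + alpha * p) * p >= c2 * grad(x) * p
    return armijo and curvature

f = lambda z: z * z
grad = lambda z: 2 * z
x = 1.0
p = -grad(x)            # steepest-descent direction (B_k = I in the BFGS step)

alpha = 1.0             # naive backtracking until both conditions hold
while not wolfe_ok(f, grad, x, p, alpha):
    alpha *= 0.5
assert wolfe_ok(f, grad, x, p, alpha)
assert f(x + alpha * p) < f(x)
```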
https://www.studysmarter.us/textbooks/math/essential-calculus-early-transcendentals-2nd/applications-of-integration/q45e-to-show-the-volume-enclosed-by-the-barrel-v-frac13pi-hl/
Q45E, found in: Page 380

### Essential Calculus: Early Transcendentals

Book edition: 2nd
Author(s): James Stewart
Pages: 830
ISBN: 9781133112280

# To show the volume enclosed by the barrel $$V = \frac{1}{3}\pi h\left( {2{R^2} + {r^2} - \frac{2}{5}{d^2}} \right)$$.

The volume enclosed by the barrel $$V = \frac{1}{3}\pi h\left( {2{R^2} + {r^2} - \frac{2}{5}{d^2}} \right)$$ is proved.

## Given data

A barrel with height $$h$$ and maximum radius $$R$$. The barrel is constructed by rotating about the $$x$$-axis the parabola $$y = R - c{x^2}, - \frac{h}{2} \le x \le \frac{h}{2}$$, where $$c$$ is a positive constant. The value of $$d = \frac{{c{h^2}}}{4}$$.

## Concept used: Barrel

A barrel is a solid of revolution composed of parallel circular top and bottom with a common axis and a side formed by a smooth curve symmetrical about the midplane.

## Solve to find the volume

Sketch the barrel with height $$h$$ and radius $$R$$ as shown in Figure 1. Refer to Figure 1. The barrel is symmetric about the $$y$$-axis, so its volume is twice the volume of the part with $$x > 0$$. The barrel is constructed by rotation about the $$x$$-axis; hence its volume is a volume of rotation. The formula for the volume of rotation is

$$V = 2\int_a^b \pi {y^2}dx$$ …..(1)

Here $$y$$ is the equation of the parabola. Substitute $$\left( {R - c{x^2}} \right)$$ for $$y$$, 0 for $$a$$ and $$\frac{h}{2}$$ for $$b$$ in Equation (1).
\begin{aligned}{}V = 2\int_0^{\frac{h}{2}} \pi {\left( {R - c{x^2}} \right)^2}dx\\ = 2\pi \int_0^{\frac{h}{2}} {\left( {{R^2} + {c^2}{x^4} - 2Rc{x^2}} \right)} dx\\ = 2\pi \left( {{R^2}x + {c^2}\frac{{{x^5}}}{5} - 2Rc\frac{{{x^3}}}{3}} \right)_0^{\frac{h}{2}}\\ = 2\pi \left( {{R^2}\left( {\frac{h}{2}} \right) + {c^2}\left( {\frac{{{h^5}}}{{5 \times 32}}} \right) - 2Rc\left( {\frac{{{h^3}}}{{24}}} \right) - 0} \right)\end{aligned}

Which means, $$V = 2\pi \left( {\frac{1}{2}{R^2}h + \frac{1}{{160}}{c^2}{h^5} - \frac{1}{{12}}Rc{h^3}} \right)$$ ….(2)

Divide and multiply both sides of Equation (2) by 3.

\begin{aligned}{}V = \frac{2}{3}\pi h\left( {\frac{3}{2}{R^2} + \frac{3}{{160}}{c^2}{h^4} - \frac{3}{{12}}Rc{h^2}} \right)\\ = \frac{1}{3}\pi h\left( {3{R^2} + \frac{3}{{80}}{c^2}{h^4} - \frac{1}{2}Rc{h^2}} \right)\\ = \frac{1}{3}\pi h\left( {2{R^2} + \left( {{R^2} + \frac{3}{{80}}{c^2}{h^4} - \frac{1}{2}Rc{h^2}} \right)} \right)\\ = \frac{1}{3}\pi h\left( {2{R^2} + \left( {{R^2} + \frac{1}{{16}}{c^2}{h^4} - \frac{1}{{40}}{c^2}{h^4} - \frac{1}{2}Rc{h^2}} \right)} \right)\end{aligned}

Simplify further,

\begin{aligned}{}V = \frac{1}{3}\pi h\left( {2{R^2} + {{\left( {R - \frac{1}{4}c{h^2}} \right)}^2} - \frac{1}{{40}}{c^2}{h^4}} \right)\\ = \frac{1}{3}\pi h\left( {2{R^2} + {{\left( {R - \frac{1}{4}c{h^2}} \right)}^2} - \frac{2}{5}{{\left( {\frac{{c{h^2}}}{4}} \right)}^2}} \right)\end{aligned} …..(3)

Substitute $$d$$ for $$\frac{{c{h^2}}}{4}$$ in Equation (3).

$$V = \frac{1}{3}\pi h\left( {2{R^2} + {{(R - d)}^2} - \frac{2}{5}{{(d)}^2}} \right)$$ …..(4)

Substitute $$r$$ for $$R - d$$ in Equation (4).

$$V = \frac{1}{3}\pi h\left( {2{R^2} + {r^2} - \frac{2}{5}{{(d)}^2}} \right)$$

Therefore, the volume enclosed by the barrel $$V = \frac{1}{3}\pi h\left( {2{R^2} + {r^2} - \frac{2}{5}{d^2}} \right)$$ is proved.
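The closed form can be cross-checked by numerical integration of Equation (1); the parameter values below are arbitrary test values:

```python
import math

R, h, c = 3.0, 2.0, 0.5          # arbitrary test values
d = c * h * h / 4
r = R - d

# midpoint-rule evaluation of V = 2 * integral_0^{h/2} pi (R - c x^2)^2 dx
n = 20000
dx = (h / 2) / n
V_num = 2 * sum(math.pi * (R - c * ((i + 0.5) * dx) ** 2) ** 2
                for i in range(n)) * dx

V_formula = math.pi * h / 3 * (2 * R * R + r * r - 2.0 / 5 * d * d)
assert abs(V_num - V_formula) / V_formula < 1e-6
```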
http://weblib.cern.ch/collection/ATLAS%20Theses?ln=es
# ATLAS Theses 2019-11-11 02:08 Layer Intercalibration of the ATLAS Electromagnetic Calorimeter and CP-odd Higgs Boson Couplings Measurements in the Four-Lepton Decay Channel with Run 2 Data of the LHC / Laudrain, Antoine After the Higgs boson discovery at the LHC in 2012, interest turned to Higgs boson property measurements to refine the tests of the Standard Model and probe for new physics [...] CERN-THESIS-2019-205 - 318 p. Full text 2019-11-09 01:00 A search for wino pair production with $B-L$ $R$-Parity violating chargino decay to a trilepton resonance with the ATLAS experiment / Schaefer, Leigh Catherine This dissertation presents several searches for supersymmetric partners (superpartners) to Standard Model particles [...] CERN-THESIS-2019-204 - 252 p. Full text 2019-11-08 16:03 Search for boosted Higgs boson and other resonances decaying into $b$-quark pairs using the ATLAS detector and studies of CMOS pixel sensors for the HL-LHC / Di Bello, Francesco Armando The Large Hadron Collider (LHC) at CERN is the largest particle collider ever built and enables experimental study of the fundamental constituents of matter at the highest centre-of-mass energy ($\sqrt s$) ever achieved [...] CERN-THESIS-2019-201 - 213 p. Full text 2019-11-08 02:38 Sweet Little Nothings; or, Searching for a Pair of Stops, a Pair of Higgs Bosons, and a Pair of New Small Wheels for the Upgrade of the Forward Muon System of the ATLAS Detector at CERN / Antrim, Daniel Joseph This thesis reports on two searches for physics beyond the Standard Model (SM), performed using data collected from the $\sqrt{s}=13$\,TeV proton-proton ($pp$) collisions recorded by the ATLAS detector at CERN between the years 2015--2018, the period of LHC Run 2 [...] CERN-THESIS-2019-200 - 355 p. 
Full text 2019-11-06 18:41 Large-Scale Data Analysis for Higgs Boson Mass Reconstruction in ttH Production / Urban, Petr This thesis deals with the problem of reconstruction of the invariant mass of the Higgs boson using machine learning techniques – neural networks [...] CERN-THESIS-2019-195 - 79 p. Full text 2019-11-06 10:43 Jet calibration, cross-section measurements and New Physics searches with the ATLAS experiment within the Run 2 data / Hankache, Robert The Standard Model (SM) is the current theory used to describe the elementary particles and their fundamental interactions (except the gravity) [...] CERN-THESIS-2019-193 - Full text 2019-11-05 12:33 Novel searches for top squarks at the LHC / Merlassino, Claudia In this thesis, I present two searches for new physics performed analysing the data collected by the ATLAS detector, during the Run 2 of the LHC [...] CERN-THESIS-2019-191 - 152 p. Full text 2019-11-01 10:35 Search for an Invisibly Decaying Higgs Boson Produced via Vector Boson Fusion using the ATLAS Detector / Truong, Thi Ngoc Loan The thesis presents a search for the Higgs boson produced via the vector boson fusion process and decaying invisibly. [...] CERN-THESIS-2015-476 CERN-THESIS-2015-203. - 100 p. Full text 2019-10-30 16:59 Probing new physics with boosted $H \rightarrow b\bar{b}$ decays with the ATLAS detector at 13 TeV / Jacobs, Ruth Magdalena The discovery of the Higgs boson by the ATLAS and CMS collaborations at the Large Hadron Collider was a major success in particle physics [...] CERN-THESIS-2019-184 - 139 p. Full text 2019-10-29 15:46 The development of missing transverse momentum reconstruction with the ATLAS detector using the PUfit algorithm in pp collisions at 13 TeV / Li, Zhelun Many interesting physical processes produce non-interacting particles that could only be measured using the missing transverse momentum [...] CERN-THESIS-2019-180 - 101 p. Full text - Full text
https://gmatclub.com/forum/what-is-the-greatest-integer-k-such-that-5-k-is-a-factor-of-the-prod-287198.html
# What is the greatest integer, k, such that 5^k is a factor of the product

Math Expert
Joined: 02 Sep 2009
Posts: 58428

What is the greatest integer, k, such that 5^k is a factor of the product [#permalink] 24 Jan 2019, 03:29

Difficulty: 15% (low). Question Stats: 72% (00:51) correct, 28% (01:02) wrong, based on 48 sessions.

What is the greatest integer, k, such that 5^k is a factor of the product of the integers from 1 through 24, inclusive?

A 1
B 2
C 3
D 4
E 5

VP
Joined: 31 Oct 2013
Posts: 1465
Concentration: Accounting, Finance
GPA: 3.68
WE: Analyst (Accounting)

Re: What is the greatest integer, k, such that 5^k is a factor of the product [#permalink] 24 Jan 2019, 03:45

The product of the integers from 1 to 24 is 24!, so we need the greatest k such that 5^k is a factor of 24!. The multiples of 5 up to 24 are 5, 10, 15 and 20, and 25 > 24, so ⌊24/5⌋ = 4. Answer D.
GMAT Club Legend
Joined: 18 Aug 2017
Posts: 5031
Posted: 24 Jan 2019, 04:44

Bunuel wrote:
What is the greatest integer, k, such that 5^k is a factor of the product of the integers from 1 through 24, inclusive?
A 1  B 2  C 3  D 4  E 5

To determine the greatest k such that 5^k is a factor of 24!: ⌊24/5⌋ = 4. IMO D

e-GMAT Representative
Joined: 04 Jan 2015
Posts: 3078
Posted: 24 Jan 2019, 04:58

Solution

Given:
• $$5^k$$ is a factor of the product of the integers from 1 through 24, inclusive

To find:
• The greatest integer value of k

Approach and Working:
• k = the number of 5’s in 24!
• Implies, k = $$[\frac{24}{5}] + [\frac{24}{25}] = 4 + 0 = 4$$

Therefore, the maximum possible value of k = 4

Hence, the correct answer is Option D
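The counting rule used in the solutions above is Legendre's formula: the exponent of a prime p in n! is the sum of ⌊n/pⁱ⌋ over i ≥ 1. A quick Python check (an illustration added here, not part of the original thread):

```python
def prime_power_in_factorial(n, p):
    # Legendre's formula: exponent of prime p in n! is sum over i of floor(n / p^i)
    exponent, power = 0, p
    while power <= n:
        exponent += n // power
        power *= p
    return exponent

print(prime_power_in_factorial(24, 5))  # 4: floor(24/5) + floor(24/25) = 4 + 0
```

The loop stops as soon as pⁱ exceeds n, so only O(log n) terms are ever added.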
2019-10-22 13:48:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7954105734825134, "perplexity": 2907.175932578774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00457.warc.gz"}
https://forum.azimuthproject.org/discussion/991/mathics-org-and-sage
# Mathics.org and Sage

A mathematician friend referred me to a relatively new open-source project called Mathics. Since it uses Sage it seemed like it might be interesting for the group here. From the home page:

Mathics is a free, general-purpose online computer algebra system featuring Mathematica-compatible syntax and functions. It is backed by highly extensible Python code, relying on SymPy for most mathematical tasks and, optionally, Sage for more advanced stuff.

Comment: It's still in development but it looks nice. Thanks for the link Curtis.
2022-09-27 18:30:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31358760595321655, "perplexity": 2728.513321982766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00381.warc.gz"}
http://freerangestats.info/blog/2019/11/09/sampling-from-urns.html
# A small simple random sample will often be better than a huge not-so-random one

## At a glance:

A small random sample will give better results than a much larger non-random sample, under certain conditions; but more importantly, it is reliable and controls for risk.

09 Nov 2019

## An interesting big data thought experiment

The other day on Twitter I saw someone referencing a paper or a seminar or something that was reported to examine the following situation: if you have an urn with a million balls in it of two colours (say red and white) and you want to estimate the proportion of balls that are red, are you better off taking the top 800,000 balls - or stirring the urn and taking a sample of just 10,000? The answer given was the second of these options. The idea is to illustrate the limitations of “big data” methods, which can often be taken to mean samples that are very large but of uncertain quality with regard to representativeness and randomness. The nature of the Twitter user experience is such that I’ve since lost track of the original post and can’t find it again.

My first thought was “wow, what a great illustration!” My second was “actually, that sounds a bit extreme.” After all, worsening your estimate by 80:1 is a pretty severe design effect penalty to pay for non-random sampling. A bit of thinking shows that whether the small, random sample outperforms the bigger one is going to depend very much on how the urn was filled in the first place. Consider as an extreme example that the urn was filled by pouring in all the balls of one colour, then all of another. In this situation you will certainly be better off with the stirring method (in all that follows, I am going to assume that the stirring is highly effective, and that stirring and sampling equates to taking a simple random sample).
But at the other extreme, if the urn was filled in a completely random order, then either sampling method equates to simple random sampling, and the larger sample will greatly outperform the smaller. In fact, in that case the standard errors from the stirring method will be 8.9 times those from the large data method (square root of 80).

So the choice between the methods depends on how spatially correlated (in 3 dimensional space) the colour of the balls is within the urn. This is similar to how the need for special time series methods depends on whether there is autocorrelation over time; the need for spatial methods depends on whether there is spatial autocorrelation; and adjusting for survey design effects depends on intra-cluster correlation.

## Efficiently simulating filling an urn with balls

To explore just how correlated the balls need to be for the simple random sampling method to be preferred, I ran a simple simulation. In this simulation, the balls are added to the urn one at a time, with no natural stirring. There is an overall probability p of a ball being red (and this is the exact probability of the first ball that goes in being red), but for the second and subsequent balls there is a second probability to be aware of, q, which is the chance that this ball will be forced to be the same colour as the previous ball, rather than pulled from the hyper population with parameter p. Here’s a simple R function that will generate an urn of size n with those parameters:

Post continues after R code.

However, this R program is too slow for my purposes. I’m going to want to generate many thousands of these urns, with a million balls in each, so time really matters. It was worth re-writing this in C++ via the Rcpp R package.

Post continues after R code.

The C++ function was as easy to write as the R version (helped by the Rcpp function rbinom which makes C++ seem even more familiar to R users) and delivers roughly a 40- or 50-fold speed up.
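As a rough illustration of the filling process described above — this is a Python sketch added here, not the post's R or C++ code:

```python
import random

def fill_urn(n, p, q, seed=None):
    # Each ball after the first copies the previous ball's colour with
    # probability q; otherwise it is red (True) with probability p.
    rng = random.Random(seed)
    balls = [rng.random() < p]
    for _ in range(n - 1):
        if rng.random() < q:
            balls.append(balls[-1])       # forced to match the previous ball
        else:
            balls.append(rng.random() < p)
    return balls

urn = fill_urn(10_000, p=0.3, q=0.9, seed=42)
print(sum(urn) / len(urn))  # proportion of red balls; around 0.3 on average over urns
```

With q = 0 every ball is an independent draw, so the urn is already "stirred"; as q approaches 1 the urn fills with long single-colour runs.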
Here are the results of that benchmarking at the end of the last chunk of code:

    Unit: microseconds
     expr     min       lq      mean   median      uq      max neval
      C++   705.9   755.85   991.928   851.15   960.3   6108.1   100
        R 31868.7 36086.60 69251.103 41174.25 48143.8 427038.0   100

By the way, the as<double>() in my Rcpp code is needed because rbinom generates a vector (of size one in this case) and I want to treat it as a scalar. There may be a more elegant way of handling this, but this was good enough for my purposes.

## Comparing the two sampling methods

To help with comparing the two sampling methods, I write two more functions:

• compare_methods() to run the “big data” and random sampling methods on a single urn and compare their results to the true proportion in that particular urn (using the actual proportion in the urn, not the hypothetical hyper-parameter p)
• overall_results() to generate many urns with the same set of parameters

Post continues after R code.

Here’s what I get when I compare the two methods for a thousand runs with urns of 10,000 balls each, with p = 0.3 and various values of q:

… and here are the results for the original use case, a thousand runs (at each value of q) of an urn with one million balls:

So we can see in both cases we need a lot of serial correlation between balls (based on the order they go into the urn) for the method of random sampling 1% of the balls to out-perform the brute force selection of the top 80% of balls. Somewhere between a value for q of 0.99 and 0.999 is when the stirring method is clearly better. Remember, q is the probability that any ball going into the urn is the same as the previous ball, before the alternative colour selection being chosen with probability p (0.3 in our case). Here’s the code for the actual simulations.

Post continues after R code.

One final point - what if we look at the absolute value of the discrepancy between each method’s estimate of the proportion and its true value? We see basically the same picture.
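The comparison described in this section can be sketched in Python — an illustrative re-implementation with made-up run counts and seed, not the post's compare_methods()/overall_results() code:

```python
import random

def fill_urn(n, p, q, rng):
    # the filling process the post describes: each ball copies the
    # previous one with probability q, else is red with probability p
    balls = [rng.random() < p]
    for _ in range(n - 1):
        balls.append(balls[-1] if rng.random() < q else rng.random() < p)
    return balls

def compare_methods(urn, rng):
    # returns (error of the 'top 80%' estimate, error of a 1% random sample)
    n = len(urn)
    true_prop = sum(urn) / n
    big = urn[-(8 * n // 10):]           # "big data": top 80% of the urn
    srs = rng.sample(urn, n // 100)      # "stirring" treated as a simple random sample
    return sum(big) / len(big) - true_prop, sum(srs) / len(srs) - true_prop

rng = random.Random(123)
errs = [compare_methods(fill_urn(10_000, 0.3, 0.99, rng), rng) for _ in range(50)]
rmse_big = (sum(e[0] ** 2 for e in errs) / len(errs)) ** 0.5
rmse_srs = (sum(e[1] ** 2 for e in errs) / len(errs)) ** 0.5
print(rmse_big, rmse_srs)  # with q this high, the top-80% method's size advantage erodes
```

Sweeping q from 0 up towards 1 and plotting the two RMSEs reproduces the qualitative picture the post describes.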
## Reflections

If we thought the data generating process (ie how the urn was filled with balls) resembled my simulation, you would be better off choosing the large sample method (assuming equal cost) unless you had reason to believe the serial relationship factor q was very high. But this doesn’t invalidate the basic idea of the usefulness of random sampling. This is for several reasons:

• The costs are unlikely to be the same. Even with today’s computers, it is easier and cheaper to collect, process and analyse smaller than larger datasets.
• The “big data” method is risky, in the sense that it makes the analyst vulnerable to the true data generating process in a way that simple random sampling doesn’t. With random sampling we can calculate the properties of the statistic exactly and with confidence, so long as our stirring is good. We can’t say the same for the “top 80%” method.
• Related to the point above, the risk is particularly strong if the data generating process is quirkier than my simulation. For example, given my simulated data generating process, both sampling methods produce unbiased results. However, this isn’t always going to be the case. Consider if the urn had been filled on the basis of “put all the red balls in first; then fill up the rest of the urn with white balls”. In this case the “top 80%” method will be very badly biased to underestimate the proportion of red balls (in fact, with p = 0.3, the method will estimate it to be 0.125 - a ghastly result).

That final dot point might sound perverse, but actually it isn’t hard to imagine a real life situation in which data is generated in a way that is comparable to that method. For example, I might have one million observations of the level of sunlight, taken between 1am and 9am on a November morning in a temperate zone.
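The "all the red balls in first" failure mode in the last bullet is easy to verify directly (a quick Python check added here, not from the original post):

```python
# fill a million-ball urn: 30% red at the bottom, then white on top
urn = [True] * 300_000 + [False] * 700_000   # index 0 = bottom, end = top

top_80_percent = urn[-800_000:]              # the 800,000 balls nearest the top
estimate = sum(top_80_percent) / len(top_80_percent)
print(estimate)  # 0.125 — only 100,000 of the top 800,000 balls are red, versus a true 0.3
```

A stirred 1% sample, by contrast, would be unbiased here regardless of the filling order.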
So my overall conclusion - a small random sample will give better results than a much larger non-random sample, under certain conditions; but more importantly, it is reliable and controls for risk.
2020-04-08 20:27:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6470334529876709, "perplexity": 588.0916438450312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371824409.86/warc/CC-MAIN-20200408202012-20200408232512-00469.warc.gz"}
https://phys.libretexts.org/Bookshelves/Classical_Mechanics/Book%3A_Variational_Principles_in_Classical_Mechanics_(Cline)/13%3A_Hamilton%E2%80%99s_Principle_of_Least_Action/13.1%3A_Introduction_to_Hamilton%E2%80%99s_Principle_of_Least_Action
13.1: Introduction to Hamilton’s Principle of Least Action

In two papers published in 1834 and 1835, Hamilton announced a dynamical principle upon which it is possible to base all of mechanics, and indeed most of classical physics. Hamilton was seeking a theory of optics when he developed Hamilton’s Principle, plus the field of Hamiltonian mechanics, both of which play a pivotal role in classical mechanics.

Hamilton’s Principle is based on defining the action functional $$S$$ of the $$n$$ generalized coordinates $$q$$ and their corresponding velocities $$\dot{q}$$.

$S = \int _ { t _ { 1 } } ^ { t _ { 2 } } L ( \mathbf { q } , \dot { \mathbf { q } } , t ) d t \label{13.1}$

Action

The term action functional often is abbreviated to action. It is called Hamilton’s Principal Function in older texts.

The scalar quantity $$S$$ is a functional of the Lagrangian $$L( q , \dot { q } , t )$$. In principle, higher order time derivatives of the generalized coordinates could be included, but most systems in classical mechanics are described adequately by including only the generalized coordinates, plus their velocities. Note that the definition of the action functional does not limit the specific form of the Lagrangian. That is, it allows for more general Lagrangians than the standard Lagrangian

$L ( \mathbf { q } , \dot { \mathbf { q } } , t ) = T ( \dot { \mathbf { q } } , t ) - U ( \mathbf { q } , t )$

that was used throughout chapters 5 − 12. Hamilton stated that the actual trajectory of a mechanical system is given by requiring that the action functional is stationary. The action functional is stationary if the variational principle is written in terms of a virtual infinitesimal displacement $$\delta$$ to be

$\delta S = \delta \int _ { t _ { 1 } } ^ { t _ { 2 } } L ( \mathbf { q } , \dot { \mathbf { q } } , t ) d t = 0$

Typically this stationary point corresponds to a minimum of the action functional.
Applying variational calculus to the action functional leads to the Lagrange equations of motion for the system. That is, Hamilton’s Principle, applied to the Lagrangian function $$L ( \mathbf { q } , \mathbf { \dot { q } } , t )$$, generates the Lagrangian equations of motion.

$\frac { d } { d t } \frac { \partial L } { \partial \dot { q } _ { j } } - \frac { \partial L } { \partial q _ { j } } = 0$

These Lagrange equations agree with those derived using d’Alembert’s Principle, if the generalized force terms

$\sum _ { k = 1 } ^ { m } \lambda _ { k } \frac { \partial g _ { k } } { \partial q _ { j } } ( \mathbf { q } , t ) + Q^{EXC}_j$

are ignored.

Hamilton’s Principle can be considered to be the fundamental postulate of classical mechanics. It replaces Newton’s postulated three laws of motion. As illustrated in chapters 6 − 12, Lagrangian mechanics based on the standard Lagrangian $$L = T - U$$ provides a remarkably powerful and consistent approach to solving the equations of motion in classical mechanics. This chapter extends the discussion to non-standard Lagrangians. Chapter 5.12 developed a plausibility argument, based on Newton’s laws of motion, that led to the Lagrange equations of motion using the standard Lagrangian. d’Alembert’s Principle of virtual work was used in chapter 6 to provide a more fundamental derivation of Lagrange’s equations of motion which was based on the standard Lagrangian. An important feature is that Hamilton’s Principle extends Lagrangian mechanics to the use of non-standard Lagrangians.
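The stationary-action idea can be checked numerically — this is an illustrative sketch added here, not part of the text. For a unit-mass harmonic oscillator with the standard Lagrangian L = ẋ²/2 − x²/2, the true trajectory gives a smaller discretized action than a nearby path with the same endpoints:

```python
import math

def action(path, dt):
    # Discretized action for a unit-mass, unit-spring oscillator:
    # L = v^2/2 - x^2/2, summed over segments using midpoint positions.
    S = 0.0
    for a, b in zip(path, path[1:]):
        v = (b - a) / dt
        x_mid = 0.5 * (a + b)
        S += (0.5 * v * v - 0.5 * x_mid * x_mid) * dt
    return S

n = 2000
dt = 1.0 / n
ts = [i * dt for i in range(n + 1)]
true_path = [math.sin(t) for t in ts]                                      # solves x'' = -x
bumped = [x + 0.1 * math.sin(math.pi * t) for x, t in zip(true_path, ts)]  # same endpoints

S_true, S_bumped = action(true_path, dt), action(bumped, dt)
print(S_true < S_bumped)  # True: the physical path makes the action stationary (here, minimal)
```

The perturbation sin(πt) vanishes at both endpoints, so the comparison respects the fixed-endpoint condition of the variational principle.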
2019-10-18 14:22:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486699461936951, "perplexity": 289.9723613863752}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682998.59/warc/CC-MAIN-20191018131050-20191018154550-00465.warc.gz"}
https://www.acmicpc.net/problem/7668
Time limit: 1 second; Memory limit: 128 MB; Submissions: 2; Accepted: 2; Accepted users: 2; Acceptance ratio: 100.000%

Problem

On the planet Zoop, numbers are represented in base 62, using the digits 0, 1, . . . , 9, A, B, . . . , Z, a, b, . . . , z where

• A (base 62) = 10 (base 10)
• B (base 62) = 11 (base 10)
• . . .
• z (base 62) = 61 (base 10).

Given the digit representation of a number x in base 62, your goal is to determine if x is divisible by 61.

Input

The input test file will contain multiple cases. Each test case will be given by a single string containing only the digits ‘0’ through ‘9’, the uppercase letters ‘A’ through ‘Z’, and the lowercase letters ’a’ through ’z’. All strings will have a length of between 1 and 10000 characters, inclusive. The end-of-input is denoted by a single line containing the word “end”, which should not be processed.

Output

For each test case, print “yes” if the number is divisible by 61, and “no” otherwise.

Sample Input

1v3
2P6
IsThisDivisible
end

Sample Output

yes
no
no

Hint

In the first example, 1v3 = 1 × 62^2 + 57 × 62 + 3 = 7381, which is divisible by 61. In the second example, 2P6 = 2 × 62^2 + 25 × 62 + 6 = 9244, which is not divisible by 61.
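One standard way to solve this (a Python sketch, not an official solution): since 62 ≡ 1 (mod 61), a base-62 numeral is congruent to the sum of its digit values modulo 61, so no big-integer arithmetic is needed even for 10000-character inputs.

```python
def digit_value(c):
    # '0'-'9' -> 0..9, 'A'-'Z' -> 10..35, 'a'-'z' -> 36..61
    if c.isdigit():
        return ord(c) - ord('0')
    if c.isupper():
        return 10 + ord(c) - ord('A')
    return 36 + ord(c) - ord('a')

def divisible_by_61(s):
    # 62 ≡ 1 (mod 61), so a base-62 number is ≡ the sum of its digits (mod 61)
    return sum(digit_value(c) for c in s) % 61 == 0

for s in ["1v3", "2P6", "IsThisDivisible"]:
    print("yes" if divisible_by_61(s) else "no")  # yes, no, no
```

In the actual judge submission the strings would be read line by line from standard input until the sentinel "end".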
2016-10-23 08:05:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20098356902599335, "perplexity": 1047.0231560473228}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719192.24/warc/CC-MAIN-20161020183839-00464-ip-10-171-6-4.ec2.internal.warc.gz"}
https://en.formulasearchengine.com/wiki/Groundwater_flow_equation
# Groundwater flow equation

Used in hydrogeology, the groundwater flow equation is the mathematical relationship which is used to describe the flow of groundwater through an aquifer. The transient flow of groundwater is described by a form of the diffusion equation, similar to that used in heat transfer to describe the flow of heat in a solid (heat conduction). The steady-state flow of groundwater is described by a form of the Laplace equation, which is a form of potential flow and has analogs in numerous fields.

The groundwater flow equation is often derived for a small representative elemental volume (REV), where the properties of the medium are assumed to be effectively constant. A mass balance is done on the water flowing in and out of this small volume, the flux terms in the relationship being expressed in terms of head by using the constitutive equation called Darcy's law, which requires that the flow is slow.

## Mass balance

A mass balance must be performed, and used along with Darcy's law, to arrive at the transient groundwater flow equation. This balance is analogous to the energy balance used in heat transfer to arrive at the heat equation. It is simply a statement of accounting, that for a given control volume, aside from sources or sinks, mass cannot be created or destroyed. The conservation of mass states that for a given increment of time (Δt) the difference between the mass flowing in across the boundaries, the mass flowing out across the boundaries, and the sources within the volume, is the change in storage.

${\displaystyle {\frac {\Delta M_{stor}}{\Delta t}}={\frac {M_{in}}{\Delta t}}-{\frac {M_{out}}{\Delta t}}-{\frac {M_{gen}}{\Delta t}}}$

## Diffusion equation (transient flow)

Mass can be represented as density times volume, and under most conditions, water can be considered incompressible (density does not depend on pressure). The mass fluxes across the boundaries then become volume fluxes (as are found in Darcy's law).
Using Taylor series to represent the in and out flux terms across the boundaries of the control volume, and using the divergence theorem to turn the flux across the boundary into a flux over the entire volume, the final form of the groundwater flow equation (in differential form) is:

${\displaystyle S_{s}{\frac {\partial h}{\partial t}}=-\nabla \cdot q-G.}$

This is known in other fields as the diffusion equation or heat equation; it is a parabolic partial differential equation (PDE). This mathematical statement indicates that the change in hydraulic head with time (left hand side) equals the negative divergence of the flux (q) and the source terms (G). This equation has both head and flux as unknowns, but Darcy's law relates flux to hydraulic heads, so substituting it in for the flux (q) leads to

${\displaystyle S_{s}{\frac {\partial h}{\partial t}}=-\nabla \cdot (-K\nabla h)-G.}$

Now if hydraulic conductivity (K) is spatially uniform and isotropic (rather than a tensor), it can be taken out of the spatial derivative, simplifying it to the Laplacian; this makes the equation

${\displaystyle S_{s}{\frac {\partial h}{\partial t}}=K\nabla ^{2}h-G.}$

Dividing through by the specific storage (Ss) puts hydraulic diffusivity (α = K/Ss or equivalently, α = T/S) on the right hand side. The hydraulic diffusivity is proportional to the speed at which a finite pressure pulse will propagate through the system (large values of α lead to fast propagation of signals). The groundwater flow equation then becomes

${\displaystyle {\frac {\partial h}{\partial t}}=\alpha \nabla ^{2}h-G.}$

Where the sink/source term, G, now has the same units but is divided by the appropriate storage term (as defined by the hydraulic diffusivity substitution).

### Rectangular cartesian coordinates

(Figure: Three-dimensional finite difference grid used in MODFLOW)

Especially when using rectangular grid finite-difference models (e.g. MODFLOW, made by the USGS), we deal with Cartesian coordinates.
In these coordinates the general Laplacian operator becomes (for three-dimensional flow) specifically

${\displaystyle {\frac {\partial h}{\partial t}}=\alpha \left[{\frac {\partial ^{2}h}{\partial x^{2}}}+{\frac {\partial ^{2}h}{\partial y^{2}}}+{\frac {\partial ^{2}h}{\partial z^{2}}}\right]-G.}$

MODFLOW code discretizes and simulates an orthogonal 3-D form of the governing groundwater flow equation. However, it has an option to run in a "quasi-3D" mode if the user wishes to do so; in this case the model deals with the vertically averaged T and S, rather than K and Ss. In the quasi-3D mode, flow is calculated between 2D horizontal layers using the concept of leakage.

### Circular cylindrical coordinates

Another useful coordinate system is 3D cylindrical coordinates (typically where a pumping well is a line source located at the origin — parallel to the z axis — causing converging radial flow). Under these conditions the above equation becomes (r being radial distance and θ being angle),

${\displaystyle {\frac {\partial h}{\partial t}}=\alpha \left[{\frac {\partial ^{2}h}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial h}{\partial r}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}h}{\partial \theta ^{2}}}+{\frac {\partial ^{2}h}{\partial z^{2}}}\right]-G.}$

### Assumptions

This equation represents flow to a pumping well (a sink of strength G), located at the origin. Both this equation and the Cartesian version above are fundamental equations in groundwater flow, but to arrive at this point requires considerable simplification.
Some of the main assumptions which went into both these equations are:

• the aquifer material is incompressible (no change in matrix due to changes in pressure — aka subsidence),
• the water is of constant density (incompressible),
• any external loads on the aquifer (e.g., overburden, atmospheric pressure) are constant,
• for the 1D radial problem the pumping well is fully penetrating a non-leaky aquifer,
• the groundwater is flowing slowly (Reynolds number less than unity), and
• the hydraulic conductivity (K) is an isotropic scalar.

Despite these large assumptions, the groundwater flow equation does a good job of representing the distribution of heads in aquifers due to a transient distribution of sources and sinks. If the aquifer has recharging boundary conditions a steady-state may be reached (or it may be used as an approximation in many cases), and the diffusion equation (above) simplifies to the Laplace equation.

${\displaystyle 0=\alpha \nabla ^{2}h}$

This equation states that hydraulic head is a harmonic function, and has many analogs in other fields. The Laplace equation can be solved using analytical techniques, under assumptions similar to those stated above, with the additional requirement of a steady-state flow field. A common method for solution of this equation in civil engineering and soil mechanics is to use the graphical technique of drawing flownets, where contour lines of hydraulic head and the stream function make a curvilinear grid, allowing complex geometries to be solved approximately. Steady-state flow to a pumping well (which never truly occurs, but is sometimes a useful approximation) is commonly called the Thiem solution.

## Two-dimensional groundwater flow

The above groundwater flow equations are valid for three dimensional flow.
In unconfined aquifers, the solution to the 3D form of the equation is complicated by the presence of a free surface water table boundary condition: in addition to solving for the spatial distribution of heads, the location of this surface is also an unknown. This is a non-linear problem, even though the governing equation is linear.

An alternative formulation of the groundwater flow equation may be obtained by invoking the Dupuit–Forchheimer assumption, where it is assumed that heads do not vary in the vertical direction (i.e., ${\displaystyle \partial h/\partial z=0}$). A horizontal water balance is applied to a long vertical column with area ${\displaystyle \delta x\delta y}$ extending from the aquifer base to the unsaturated surface. This distance is referred to as the saturated thickness, b. In a confined aquifer, the saturated thickness is determined by the height of the aquifer, H, and the pressure head is non-zero everywhere. In an unconfined aquifer, the saturated thickness is defined as the vertical distance between the water table surface and the aquifer base. If ${\displaystyle \partial h/\partial z=0}$, and the aquifer base is at the zero datum, then the unconfined saturated thickness is equal to the head, i.e., b=h.

Assuming both the hydraulic conductivity and the horizontal components of flow are uniform along the entire saturated thickness of the aquifer (i.e., ${\displaystyle \partial q_{x}/\partial z=0}$ and ${\displaystyle \partial K/\partial z=0}$), we can express Darcy's law in terms of integrated discharges, Qx and Qy:

${\displaystyle Q_{x}=\int _{0}^{b}q_{x}dz=-Kb{\frac {\partial h}{\partial x}}}$

${\displaystyle Q_{y}=\int _{0}^{b}q_{y}dz=-Kb{\frac {\partial h}{\partial y}}}$

Inserting these into our mass balance expression, we obtain the general 2D governing equation for incompressible saturated groundwater flow:

${\displaystyle {\frac {\partial nb}{\partial t}}=\nabla \cdot (Kb\nabla h)+N.}$

Where n is the aquifer porosity.
The source term, N (length per time), represents the addition of water in the vertical direction (e.g., recharge). By incorporating the correct definitions for saturated thickness, specific storage, and specific yield, we can transform this into two unique governing equations for confined and unconfined conditions:

${\displaystyle S{\frac {\partial h}{\partial t}}=\nabla \cdot (KH\nabla h)+N.}$

(confined), where S=Ssb is the aquifer storativity, and

${\displaystyle S_{y}{\frac {\partial h}{\partial t}}=\nabla \cdot (Kh\nabla h)+N.}$

(unconfined), where Sy is the specific yield of the aquifer.

Note that the partial differential equation in the unconfined case is non-linear, whereas it is linear in the confined case. For unconfined steady-state flow, this non-linearity may be removed by expressing the PDE in terms of the head squared:

${\displaystyle \nabla \cdot (K\nabla h^{2})=-2N.}$

Or, for homogeneous aquifers,

${\displaystyle \nabla ^{2}h^{2}=-{\frac {2N}{K}}.}$

This formulation allows us to apply standard methods for solving linear PDEs in the case of unconfined flow. For heterogeneous aquifers with no recharge, potential flow methods may be applied for mixed confined/unconfined cases.
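The head-squared trick can be checked quickly in one dimension — a Python sketch with hypothetical parameter values (the specific K, N, L and boundary heads below are made up for illustration). With fixed heads at both ends, the steady Dupuit solution makes h² exactly quadratic in x, so the linearized equation d²(h²)/dx² = −2N/K holds identically:

```python
K, N, L = 5.0, 0.001, 100.0        # hypothetical conductivity, recharge, domain length
h0_sq, hL_sq = 10.0**2, 8.0**2     # fixed heads (squared) at x = 0 and x = L

def h_squared(x):
    # 1-D steady Dupuit solution of d^2(h^2)/dx^2 = -2N/K with fixed-head ends
    return h0_sq + (hL_sq - h0_sq) * x / L + (N / K) * x * (L - x)

# the second difference of h^2 recovers -2N/K (h^2 is quadratic in x,
# so the finite difference is exact up to floating-point rounding)
dx = 0.01
for x in (20.0, 50.0, 80.0):
    d2 = (h_squared(x - dx) - 2.0 * h_squared(x) + h_squared(x + dx)) / dx**2
    assert abs(d2 - (-2.0 * N / K)) < 1e-6
print("d2(h^2)/dx^2 = -2N/K verified; h(x) = sqrt(h_squared(x))")
```

Solving for h² and taking a square root at the end is exactly the "standard methods for linear PDEs" shortcut described above.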
2021-09-23 05:47:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 20, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7763596773147583, "perplexity": 725.0342498207906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.10/warc/CC-MAIN-20210923044248-20210923074248-00320.warc.gz"}
http://dontloo.github.io/blog/em/
# Expectation Maximization Sketch

### Intro

Say we have observed data $$X$$, a latent variable $$Z$$ and a parameter $$\theta$$, and we want to maximize the log-likelihood $$\log p(X|\theta)$$. Sometimes it’s not an easy task, probably because it doesn’t have a closed-form solution, the gradient is difficult to compute, or there are complicated constraints that $$\theta$$ must satisfy. If somehow the joint log-likelihood $$\log p(X, Z|\theta)$$ can be maximized more easily, we can turn to the Expectation Maximization algorithm for help. There are several ways to formulate the EM algorithm, as will be discussed in this blog.

### Joint Log-likelihood

The basic idea is just to optimize the joint log-likelihood $$\log p(X, Z|\theta)$$ instead of the data log-likelihood $$\log p(X|\theta)$$. But since the true values of latent variables $$Z$$ are unknown, we need to estimate a posterior distribution $$p(z|x, \theta)$$ for each data point $$x$$, then maximize the expected log-likelihood over the posterior

$\sum_x E_{p_{z|x}}[\log p(x,z|\theta)].$

The optimization follows two iterative steps. The E-step computes the expectation under the current parameter $$\theta’$$

$\sum_x \sum_{z} p(z|x, \theta’) \log p(x,z|\theta) = Q(\theta|\theta’).$

The M-step tries to find the new parameter $$\theta$$ that maximizes $$Q(\theta|\theta’)$$. It turns out that such a method is guaranteed to find a local maximum of the data log-likelihood $$\log p(X|\theta)$$, as will be shown in later sections.

### Evidence Lower Bound (ELBO)

One way to derive EM formally is via constructing the evidence lower bound of $$\log p(X|\theta)$$ using Jensen’s inequality

$\log p(X|\theta) = \sum_x \log p(x|\theta)$

$= \sum_x \log \sum_z p(x, z|\theta)$

$= \sum_x \log \sum_z q_{z|x}(z) \frac{p(x, z|\theta)}{q_{z|x}(z)}$

$\geq \sum_x \sum_z q_{z|x}(z) \log \frac{p(x, z|\theta)}{q_{z|x}(z)}$

where $$q_{z|x}(z)$$ is an arbitrary distribution over the latent variable associated with data point $$x$$.
At the E-step, we keep $$\theta$$ fixed and find the $$q$$ that makes the equality hold. Since $$q$$ has to satisfy the properties of being a probability distribution, the problem becomes, for each data point $$x$$, $\max_{q_{z|x}(z)} \sum_z q_{z|x}(z) \log \frac{p(x, z|\theta)}{q_{z|x}(z)}$ s.t. $q_{z|x}(z)\geq 0, \sum_z q_{z|x}(z) = 1.$ As we know from the previous section, the solution to this should be $$q_{z|x}(z) = p(z|x, \theta)$$. Specifically, if $$z$$ is a discrete variable, it can be solved using Lagrange multipliers, see this tutorial by Justin Domke (my teacher ;). At the M-step we maximize over $$\theta$$ while keeping $$q_{z|x}$$ fixed. $\sum_x \sum_z q_{z|x}(z) \log \frac{p(x, z|\theta)}{q_{z|x}(z)}$ $= \sum_x \sum_z q_{z|x}(z) \log p(x, z|\theta) - \sum_x \sum_z q_{z|x}(z) \log q_{z|x}(z)$ $= Q(\theta|\theta’) + \sum_x H(q_{z|x})$ The second term $$H(q_{z|x})$$ is independent of $$\theta$$ given that $$q_{z|x}$$ is fixed, so we only need to optimize $$Q(\theta|\theta’)$$, which is in line with the previous formulation. So the M-step maximizes the lower bound w.r.t $$\theta$$, and the E-step sets a new lower bound based on the current value of $$\theta$$.

### Latent Distribution

Let’s now see how to decompose the data likelihood into the lower bound plus a remainder without using Jensen’s inequality. For simplicity only the derivation for one data point $$x$$ is given here. $\log p(x|\theta) = \sum_z q_{z|x}(z) \log p(x|\theta)$ $= \sum_z q_{z|x}(z) \log \frac{p(x,z|\theta)}{p(z|x,\theta)}$ $= \sum_z q_{z|x}(z) \log \frac{p(x,z|\theta)q_{z|x}(z)}{p(z|x,\theta)q_{z|x}(z)}$ $= \sum_z q_{z|x}(z) \log \frac{p(x,z|\theta)}{q_{z|x}(z)} - \sum_z q_{z|x}(z) \log \frac{p(z|x,\theta)}{q_{z|x}(z)}$ $= F(q_{z|x}, \theta) + D_{KL}(q_{z|x} | p_{z|x})$ Here $$F(q_{z|x}, \theta)$$ is the evidence lower bound and the remaining term is the KL divergence between the latent distribution $$q_{z|x}(z)$$ and the posterior $$p(z|x,\theta)$$.
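The two steps can be made concrete with a minimal EM loop for a two-component 1-D Gaussian mixture (my sketch, not from the post): the E-step computes the posterior responsibilities $$q_{z|x}(z) = p(z|x, \theta)$$, and the M-step maximizes $$Q(\theta|\theta’)$$ in closed form for $$\theta = (\pi, \mu, \sigma^2)$$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observed data X, drawn from two Gaussians with means -2 and 3.
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# Initial parameters theta = (mixing weights, means, variances).
pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: q(z|x) = p(z|x, theta), the posterior responsibilities, shape (n, 2).
    lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    q = lik / lik.sum(axis=1, keepdims=True)

    # M-step: maximize Q(theta | theta') -- closed form for a Gaussian mixture.
    n_k = q.sum(axis=0)
    pi = n_k / len(x)
    mu = (q * x[:, None]).sum(axis=0) / n_k
    var = (q * (x[:, None] - mu) ** 2).sum(axis=0) / n_k

print(np.sort(mu))  # close to the true means (-2, 3)
```

After a few dozen iterations the estimated means land near the true ones; the component labels may come out in either order, hence the sort.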
We’ve formalized the lower bound as a function (functional) of two arguments; EM essentially does the optimization via coordinate ascent. In the E-step we optimize $$F(q_{z|x}, \theta)$$ w.r.t $$q_{z|x}$$ while holding $$\theta$$ fixed. Since $$\log p(x|\theta)$$ does not depend on $$q_{z|x}$$, the largest value of $$F(q_{z|x}, \theta)$$ occurs when $$D_{KL}(q_{z|x} | p_{z|x})=0$$, so we have again $$q_{z|x}(z) = p(z|x,\theta)$$. In the M-step $$F(q_{z|x}, \theta)$$ is maximized w.r.t $$\theta$$, which is the same as in the section above.

### KL Divergence

It turns out the lower bound $$F(q_{z|x}, \theta)$$ above can also be written in the form of a KL divergence. If we let $$q(x,z) = q(z|x)p(x)$$, where $$q(z|x) = q_{z|x}(z)$$ and $$p(x)=\frac{1}{|X|}\sum_{i}\delta_i(x)$$ is the empirical distribution that places all its mass on the observed data $$X$$, we have $\sum_x p(x)f(x) = \frac{1}{|X|}\sum_x f(x).$ Then the lower bound can be rewritten as $\sum_x F(q_{z|x}, \theta) = \sum_x \sum_z q_{z|x}(z) \log \frac{p(x,z|\theta)}{q_{z|x}(z)}$ $= |X| \sum_x \sum_z q(x,z) \log \frac{p(x,z|\theta)}{ q(z|x) }$ $= -|X|\, D_{KL}(q_{x,z} | p_{x,z}) - |X|\log|X|,$ where the last step uses $$q(z|x) = |X|\,q(x,z)$$ on the observed data and the last term is a constant. Therefore maximizing the lower bound is the same as minimizing $$D_{KL}(q_{x,z} | p_{x,z})$$: the E-step minimizes it w.r.t $$q_{x,z}$$ (holding $$p$$ fixed), and the M-step minimizes it w.r.t $$p_{x,z}$$, i.e. w.r.t $$\theta$$ (holding $$q$$ fixed). Since $$q(x,z)$$ follows the restriction that it must align with the data, and $$p(x,z|\theta)$$ must be a distribution under the specified model, they can be thought of as living on two manifolds in the space of all distributions, namely the data manifold and the model manifold.
Therefore EM can be viewed as minimizing the distance between the two manifolds, $$D_{KL}(q_{x,z} | p_{x,z})$$, via coordinate descent. For more about the geometric view of EM, please refer to this paper; also see this question on SE.

### Log-sum to Sum-log

Whichever of these views of EM we take, its advantage lies in Jensen’s inequality, which moves the logarithm inside the summation $\sum_x \log \sum_z q_{z|x}(z) \frac{p(x, z|\theta)}{q_{z|x}(z)} \geq \sum_x \sum_z q_{z|x}(z) \log \frac{p(x, z|\theta)}{q_{z|x}(z)}.$ If the joint distribution $$p(x, z|\theta)$$ belongs to the exponential family, this turns a log-sum-exp operation into a weighted summation of the exponents (often sufficient statistics), which can be easier to optimize.

### Alternatives for the E-step and M-step

Sometimes we aren’t able to reach the optimal solution of the E-step or the M-step, perhaps because of difficulties in calculation or optimization, a trade-off between simplicity and accuracy, or other restrictions on the distributions or parameters. In these cases, we can use alternative approaches for a suboptimal solution. For example, K-means is a special case of EM for GMMs, where the latent distribution is restricted to be a delta function (hard assignment). In LDA, a prior distribution is added to the parameter, which makes the parameter another latent variable, and the posterior of the latent variables becomes difficult to compute. So variational methods are used for approximation; specifically, the latent distribution $$q$$ is characterized by a variational model with parameter $$\psi$$. Then in the E-step we optimize $$q$$ w.r.t $$\psi$$ and in the M-step we optimize $$p$$ w.r.t $$\theta$$. For parameters that cannot be solved in closed form, gradient-based optimization is applied.
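The K-means special case mentioned above can be sketched as follows (my own illustration, not from the post): restricting $$q(z|x)$$ to a delta function turns the soft E-step into a hard nearest-center assignment.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated 1-D clusters around -2 and 3.
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(3, 0.5, 100)])
mu = np.array([-1.0, 1.0])  # initial cluster centers (the "parameters")

for _ in range(20):
    # Hard E-step: a delta latent distribution -- each point picks its nearest center.
    z = np.argmin((x[:, None] - mu) ** 2, axis=1)
    # M-step: re-estimate each center as the mean of its assigned points.
    mu = np.array([x[z == k].mean() for k in range(2)])

print(np.sort(mu))  # near the true cluster means (-2, 3)
```

Compared to the soft GMM E-step, each responsibility here is forced to be 0 or 1, which is exactly the delta restriction on $$q$$.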
https://thb.lt/blog/2013/fast-ftp-sync-for-jekyll.html
# Hashsync: Fast FTP synchronization for static website generators

15 December 2013

Hashsync syncs a directory over FTP using a hash database instead of timestamps to push only modified files. Most static CMS rebuild the complete site at each modification; this script provides a way to push only the files that have actually been modified. {: .lead}

## Installation

You need a Python interpreter (2.7 at minimum; 3.x required if you need unicode filenames) and OpenSSL (or any other hash command, see Usage below).

    curl https://gist.github.com/thblt/7975807/raw/53cb65f2fd72ae4719869423f2c493adfd2c43a4/hashsync.py > /usr/local/bin/hashsync

To run on Python 2, change the first line to #!/usr/bin/env python. Python 3.x is required if the files you want to synchronize have non-ASCII characters in their names.

## Usage

The basic usage is very straightforward:

    hashsync /home/public_www/my_site ftp://login:password@ftp.example.com/www/

Some optional arguments are available, the most important being -d (--delete), which deletes, on the remote location, files that don't exist locally. Invoke with -h or --help to view all available options.

## Technical notes

### Limitations

All these limitations may be removed in future versions depending on needs and contributions.

1. Bug: Hashsync doesn't create missing paths on the remote server; it assumes the whole folder structure is already present, and crashes if not.
2. Optimization: Calculating hashes may become extremely slow on large files. In a future version, Hashsync may allow declaring that some files never get modified, but can only be added or deleted, thus removing the need to compute hashes for them. This may apply to, e.g., large video or picture files, which are quite unlikely to be modified.
3. Optimization: Hashes are currently computed on a single thread.
4. Optimization: Cryptographically secure hashes are generally slow to calculate. The use of unsafe algorithms should be considered.
5. Protocols: Neither FTPS nor SFTP is currently supported.
6. Usage: Hashsync has no --exclude option (it syncs the whole directory, without any exceptions).
7. Compatibility: Hashsync depends on OpenSSL (or another command-line tool, see -H, --hash) being installed. This isn't a problem on most Unixes, but may not be so easy on Windows machines. A later version may fall back to Python hashlib if OpenSSL isn't available.
8. Compatibility: If invoked from multiple machines, Hashsync must be run with the exact same parameters to work properly. In the future, the configuration (hash algorithm, exclusions, and so on) may be stored on the remote server.
9. Hashsync is uni-directional (by design): it only syncs from local to remote.

### Cache file format

Super easy:

    {relative path to file 1}\t{hash of file 1}\n
    {relative path to file 2}\t{hash of file 2}\n
    …
    {relative path to file N}\t{hash of file N}\n

Example:

    .htaccess	21f731da56d2f3a91a857fa52cbe8b16ad30ec8f
    404.html	45ce8f0e7901fcb70209313e7a84bc31103073b6
    blog/2013/qmail-raspberry-pi.html	5a12796c26ed92f3043d3b63752e9efe4747ef73
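A minimal reader/writer for this cache format might look like the following (my sketch, not part of Hashsync itself; `file_hash` uses Python's hashlib rather than the OpenSSL command the script actually shells out to):

```python
import hashlib
from pathlib import Path

def load_cache(path):
    """Parse '{relative path}\\t{hash}' lines into a dict."""
    cache = {}
    for line in Path(path).read_text().splitlines():
        rel, _, digest = line.rpartition("\t")
        if rel:  # skip blank or malformed lines
            cache[rel] = digest
    return cache

def save_cache(path, cache):
    """Write the dict back out, one tab-separated entry per line."""
    lines = "".join(f"{rel}\t{digest}\n" for rel, digest in sorted(cache.items()))
    Path(path).write_text(lines)

def file_hash(path):
    """SHA-1 of a file's contents (hashlib stand-in for the OpenSSL call)."""
    return hashlib.sha1(Path(path).read_bytes()).hexdigest()
```

On each run, comparing `file_hash` against the cached digest decides whether a file gets pushed.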
https://ai.stackexchange.com/questions/7580/is-the-discount-not-needed-in-a-deterministic-environment-for-reinforcement-lear
# Is the discount not needed in a deterministic environment for Reinforcement Learning? I'm now reading a book titled "Deep Reinforcement Learning Hands-On", and the author said the following in the chapter about AlphaGo Zero: Self-play In AlphaGo Zero, the NN is used to approximate the prior probabilities of the actions and evaluate the position, which is very similar to the Actor-Critic (A2C) two-headed setup. On the input of the network, we pass the current game position (augmented with several previous positions) and return two values. The policy head returns the probability distribution over the actions and the value head estimates the game outcome as seen from the player's perspective. This value is undiscounted, as moves in Go are deterministic. Of course, if you have stochasticity in the game, like in backgammon, some discounting should be used. All the environments that I have seen so far are stochastic environments, and I understand the discount factor is needed in a stochastic environment. I also understand that the discount factor should be added in infinite environments (no end of episode) in order to avoid an infinite sum. But I have never heard (at least so far in my limited learning) that the discount factor is NOT needed in a deterministic environment. Is that correct? And if so, why is it NOT needed? The motivation for adding the discount factor $\gamma$ is generally, at least initially, based simply on "theoretical convenience". Ideally, we'd like to define the "objective" of an RL agent as maximizing the sum of all the rewards it gathers; its return, defined as: $$\sum_{t = 0}^{\infty} R_t,$$ where $R_t$ denotes the immediate reward at time $t$. As you also already noted in your question, this is inconvenient from a theoretical point of view, because we can have many different such sums that all end up being equal to $\infty$, and then the objective of "maximizing" that quantity becomes quite meaningless.
So, by far the most common solution is to introduce a discount factor $0 \leq \gamma < 1$, and formulate our objective as maximizing the discounted return: $$\sum_{t = 0}^{\infty} \gamma^t R_t.$$ Now we have an objective that will never be equal to $\infty$, so maximizing that objective always has a well-defined meaning. As far as I am aware, the motivation described above is the only motivation for a discount factor being strictly necessary / needed. This is not related to the problem being stochastic or deterministic. If we have a stochastic environment, which is guaranteed to have a finite duration of at most $T$, we can define our objective as maximizing the following quantity: $$\sum_{t = 0}^{T} R_t,$$ where $R_t$ is a random variable drawn from some distribution. Even in the case of stochastic environments, this is well-defined; we do not strictly need a discount factor. Above, I addressed the question of whether or not a discount factor is necessary. This does not tell the full story though. Even in cases where a discount factor is not strictly necessary, it still might be useful. Intuitively, discount factors $\gamma < 1$ tell us that rewards that are nearby in a temporal sense (reachable in a low number of time steps) are more important than rewards that are far away. In problems with a finite time horizon $T$, this is probably not true, but it can still be a useful heuristic / rule of thumb. Such a rule of thumb is particularly useful in stochastic environments, because stochasticity can introduce greater variance / uncertainty over long amounts of time than over short amounts of time. So, even if in an ideal world we'd prefer to maximize our expected sum of undiscounted rewards, it is often easier to learn how to effectively maximize a discounted sum; we'll learn behaviour that mitigates uncertainty caused by stochasticity because it prioritizes short-term rewards over long-term rewards.
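As a quick numeric illustration (mine, not from the answer): with $\gamma = 0.9$ and a constant reward of 1 per step, the discounted return stays near the geometric-series limit $1/(1-\gamma) = 10$ no matter how long the horizon, while the undiscounted sum grows without bound:

```python
# Discounted vs. undiscounted return for a constant reward stream R_t = 1.
gamma = 0.9
discounted = sum(gamma ** t * 1.0 for t in range(1000))
undiscounted = sum(1.0 for _ in range(1000))
print(discounted)    # close to 1 / (1 - gamma) = 10
print(undiscounted)  # 1000.0 -- keeps growing with the horizon
```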
This rule of thumb especially makes a lot of sense in stochastic environments, but I don't agree with the implication in that book that it would be restricted to stochastic environments. A discount factor $\gamma < 1$ has also often been found to be beneficial for learning performance in deterministic environments, even if afterwards we evaluate an algorithm's performance according to the undiscounted returns, likely because it leads to a "simpler" learning problem. In a deterministic environment there may not be any uncertainty / variance that grows over time due to the environment itself, but during a training process there is still uncertainty / variance in our agent's behaviour which grows over time. For example, it will often be selecting suboptimal actions for the sake of exploration. • Quite elucidating. So glad to see the math formatting getting immediate use. Possibly dumb question, but can I ask why t is superscripted with the gamma? – DukeZhou Aug 15 '18 at 20:23 • @DukeZhou It's $\gamma$ raised to the power $t$ (time). Suppose, for example, that $\gamma = 0.9$. Then our first reward ($R_0$) will be multiplied by $0.9^0 = 1$ (fully valued). The second reward ($R_1$) is multiplied by $0.9^1 = 0.9$ (only "90% important"). The third reward is multiplied by $0.9^2 = 0.81$ (only "81% important"), etc. Such a sum can be proven to never reach $\infty$ (assuming that none of the individual rewards $R_t$ are equal to $\infty$) – Dennis Soemers Aug 16 '18 at 8:11
https://www.beatthegmat.com/aringo-claim-their-client-at-hbs-with-580-hbs-didn-t-respond-t270055.html?sid=9e507ab312b829a20b84efdd89b224ec
## Aringo claim their client at HBS with 580.HBS didn't respond Junior | Next Rank: 30 Posts Posts: 23 Joined: 25 Aug 2013 Thanked: 1 times ### Aringo claim their client at HBS with 580.HBS didn't respond by blessu » Sat Sep 28, 2013 12:25 am While hovering Google I ran into the consulting company Aringo. They have a page about their client GMAT scores, and they claim there that one of their clients got into Harvard MBA with 580 GMAT. I contacted Aringo asked to get the client's name and email address to verify. The gave me the first name, they say it's a privacy problem to provide contact info. Mine is 590, and initially I wasn't even considering HBS. Once I saw this, I contacted HBS admission office and asked them if it's true about 580. They responded vaguely, that GMAT score is just one factor, and it wasn't clear from their answer whether they deny it or not... Anyone here get into HBS with GMAT under 600, or heard such cases? I have really strong leadership, should I even bother? Thank for any responses. blessu Last edited by blessu on Sun Oct 06, 2013 7:26 am, edited 1 time in total. Junior | Next Rank: 30 Posts Posts: 18 Joined: 14 Aug 2013 Thanked: 1 times by tinarey12 » Sat Sep 28, 2013 4:26 pm As has been said many times in these forums, GMAT is not the only factor for deciding whether an applicant is accepted but the averages speak for themselves. I don't doubt that Harvard has accepted people with a very low GMAT but I'm sure it is a rare occurrence. There are many factors that are taken in to consideration by the adcoms, some of them are obvious to us and some are not so obvious. I wouldn't assume that just because it's been done before you have any kind of a reasonable chance if you apply with a 590 unless you have other extra-ordinary qualifications.
These could be such ungovernable particulars as country of origin and related experience or other unusual associations that the adcom finds worthy. You never know. If you have a fascinating story, go ahead and apply. Failing that, make a concerted effort to substantially improve your GMAT score. Junior | Next Rank: 30 Posts Posts: 23 Joined: 25 Aug 2013 Thanked: 1 times by blessu » Sun Oct 06, 2013 7:28 am Thanks for the comment, and I'll probably try to improve my score first, as I don't believe in miracles... Newbie | Next Rank: 10 Posts Posts: 2 Joined: 05 Jun 2013 by ronduron » Sun Oct 20, 2013 1:28 pm You won't be able to get personal contact information either from acceptance consultants or from the schools themselves. It's classified information, as funny as that sounds. But there are stories in these forums going back a ways. Not that you have to go back... I bet you can find find something in this months' acceptance posts which are starting to come out now. Junior | Next Rank: 30 Posts Posts: 16 Joined: 23 Sep 2013 by josbilou » Sat Oct 26, 2013 8:23 am Although an acceptance with a GMAT below 600 or 620 is a rare occurrence at all the top schools nevertheless it does happen regularly. There will always be a small number of applicants, usually from outside the USA, who have extraordinary life stories that show intelligence and tenacity but who have been unable to succeed in the GMAT (I would think that the Verbal sections would be the biggest stumbling block for these people). It's a tricky point. The schools want a wide representation of students and are willing to take an occasional risk with the GMAT score to get a student who is superior in other ways. As for the applicant, he or she would have to assess himself very highly and also have some pretty terrific recommendations from highly placed persons. 
Junior | Next Rank: 30 Posts Posts: 12 Joined: 08 May 2013 by gorelik52 » Thu Oct 31, 2013 1:29 pm blessu wrote:While hovering Google I ran into the consulting company Aringo. They have a page about their client GMAT scores, and they claim there that one of their clients got into Harvard MBA with 580 GMAT. I contacted Aringo asked to get the client's name and email address to verify. The gave me the first name, they say it's a privacy problem to provide contact info. Mine is 590, and initially I wasn't even considering HBS. Once I saw this, I contacted HBS admission office and asked them if it's true about 580. They responded vaguely, that GMAT score is just one factor, and it wasn't clear from their answer whether they deny it or not... Anyone here get into HBS with GMAT under 600, or heard such cases? I have really strong leadership, should I even bother? Thank for any responses. blessu 590 is way low for HBS, almost off the screen. Maybe if you manage to make a huge jump to say 680 that might be seen to be beneficial. Don't rest on 590 if there's any chance you can improve a lot. There are schools, good ones, that you'd have a chance with, but not HBS. Newbie | Next Rank: 10 Posts Posts: 6 Joined: 01 Jun 2013 by arnasol78 » Tue Nov 05, 2013 1:36 pm Note that Aringo is a consulting company. They 'have ways' of helping people like yourself with a low GMAT to have a decent chance of getting accepted to a top school. It's what they do. Maybe you should think about going this route if you are at all able to, moneywise. Without that kind of help your chances are extremely small, to put it kindly. Junior | Next Rank: 30 Posts Posts: 18 Joined: 14 Aug 2013 Thanked: 1 times by tinarey12 » Mon Nov 11, 2013 5:35 am arnasol78 wrote:Note that Aringo is a consulting company. They 'have ways' of helping people like yourself with a low GMAT to have a decent chance of getting accepted to a top school. It's what they do. 
Maybe you should think about going this route if you are at all able to, moneywise. Without that kind of help your chances are extremely small, to put it kindly. I have to agree with this, if you are unable to vastly improve your GMAT. If you can afford it, a good consultant can help you in ways that will make a difference not just in your application to B School but further on down the road in your studies as well. They teach you a lot about how to determine what is being asked for, what is essential and what is not, and how to express yourself succinctly. I found it to be invaluable. Newbie | Next Rank: 10 Posts Posts: 6 Joined: 01 Jun 2013 by arnasol78 » Sat Nov 16, 2013 10:17 am I've heard this from many people. Basically if your stats are low (and you can't improve them) you need help with the rest. Newbie | Next Rank: 10 Posts Posts: 7 Joined: 20 Oct 2013 by janablum17 » Fri Nov 22, 2013 6:51 am blessu wrote:While hovering Google I ran into the consulting company Aringo. They have a page about their client GMAT scores, and they claim there that one of their clients got into Harvard MBA with 580 GMAT. I contacted Aringo asked to get the client's name and email address to verify. The gave me the first name, they say it's a privacy problem to provide contact info. Mine is 590, and initially I wasn't even considering HBS. Once I saw this, I contacted HBS admission office and asked them if it's true about 580. They responded vaguely, that GMAT score is just one factor, and it wasn't clear from their answer whether they deny it or not... Anyone here get into HBS with GMAT under 600, or heard such cases? I have really strong leadership, should I even bother? Thank for any responses. blessu I just saw this. Let me understand.... you called Harvard and asked them if they had ever admitted anyone with a 580 GMAT? What did you think they were going to say? How did you represent yourself? I'm astounded. 
I can see calling Aringo but they wouldn't be eager to give you any details - they're running a business not a self-help group. It's just obvious that whoever got in to HBS with a 590 GMAT (and it is on record), did so with a lot of help. Junior | Next Rank: 30 Posts Posts: 18 Joined: 14 Aug 2013 Thanked: 1 times by tinarey12 » Wed Nov 27, 2013 2:00 pm Not everyone can afford the services of a good consultant and it's a big decision. I would say that if your GMAT is below 600 and if English is not your mother-tongue or you are not good at essay writing, it should be a serious consideration if you hope to get accepted to one of the top 10 schools. Think about how much it costs just to apply to several schools and you might reconsider using a consultant. Newbie | Next Rank: 10 Posts Posts: 2 Joined: 05 Jun 2013 by ronduron » Tue Dec 03, 2013 10:59 am You don't need a consultant to improve your GMAT if that's the main thing holding you back. There are excellent workouts available online and lots of advice on the forums. But you have to be determined and dedicated - there are no shortcuts to a high GMAT. Newbie | Next Rank: 10 Posts Posts: 6 Joined: 07 Jun 2013 by LastChanceMBA » Tue Dec 03, 2013 11:29 am First, this person who got in with a 580 is obviously a very special case, and I'm sure he/she has a remarkable resume and personal story. So unless you accomplished something truly remarkable (Rhodes scholar, saved a village, escaped a war torn country, NFL player, founded a startup worth $10 million+, etc.) don't expect to get into HBS with a 590. Study hard, and try your best to raise it to at least 680+. Second, be very skeptical of Aringo or any other consulting firm when they brag about clients with low GMAT scores who got into top b-schools. Now I'm not accusing them of lying. However, unless you know the exact background and experience of the candidate in question, you're not getting any useful information that is relevant to YOU. After all let's be honest. 
A white/Asian guy in banking is not getting in with a 580 or some other subpar score. The ugly truth is that admission consultants actually can't help you that much with getting into top schools. Junior | Next Rank: 30 Posts Posts: 16 Joined: 23 Sep 2013 by josbilou » Wed Dec 04, 2013 4:45 am LastChanceMBA wrote:First, this person who got in with a 580 is obviously a very special case, and I'm sure he/she has a remarkable resume and personal story. So unless you accomplished something truly remarkable (Rhodes scholar, saved a village, escaped a war torn country, NFL player, founded a startup worth$10 million+, etc.) don't expect to get into HBS with a 590. Study hard, and try your best to raise it to at least 680+. Second, be very skeptical of Aringo or any other consulting firm when they brag about clients with low GMAT scores who got into top b-schools. Now I'm not accusing them of lying. However, unless you know the exact background and experience of the candidate in question, you're not getting any useful information that is relevant to YOU. After all let's be honest. A white/Asian guy in banking is not getting in with a 580 or some other subpar score. The ugly truth is that admission consultants actually can't help you that much with getting into top schools. When you hire a consultant you don't need to know anything about any other client of theirs. Everyone is a unique case. In my experience, though I didn't have such a low GMAT, only a good consultant can help you to make the most of the rest of your application. Money is the main drawback and it's a big one, but if you are applying to the top schools the consultants have the first-hand experience to make you sound worthwhile.They know what the adcoms are looking for. 
Newbie | Next Rank: 10 Posts Posts: 6 Joined: 07 Jun 2013 by LastChanceMBA » Wed Dec 04, 2013 5:36 pm josbilou wrote: LastChanceMBA wrote:First, this person who got in with a 580 is obviously a very special case, and I'm sure he/she has a remarkable resume and personal story. So unless you accomplished something truly remarkable (Rhodes scholar, saved a village, escaped a war torn country, NFL player, founded a startup worth \$10 million+, etc.) don't expect to get into HBS with a 590. Study hard, and try your best to raise it to at least 680+. Second, be very skeptical of Aringo or any other consulting firm when they brag about clients with low GMAT scores who got into top b-schools. Now I'm not accusing them of lying. However, unless you know the exact background and experience of the candidate in question, you're not getting any useful information that is relevant to YOU. After all let's be honest. A white/Asian guy in banking is not getting in with a 580 or some other subpar score. The ugly truth is that admission consultants actually can't help you that much with getting into top schools. When you hire a consultant you don't need to know anything about any other client of theirs. Everyone is a unique case. In my experience, though I didn't have such a low GMAT, only a good consultant can help you to make the most of the rest of your application. Money is the main drawback and it's a big one, but if you are applying to the top schools the consultants have the first-hand experience to make you sound worthwhile.They know what the adcoms are looking for. I respectfully disagree. It matters because you are competing against those with similar background, whether demographics or industry. The former NFL cornerback who is a first-year at HBS and scored 570 on the GMAT or ex-Navy Seals, do not tell me anything about my chances because they are in their own unique groups and competing against those who are similar. 
It is a well-known fact that the admissions bar is higher for white/Asian males in finance/consulting or Indian males in IT. It's not politically correct to say this, but it's true. Now back to Aringo and consultants in general. I had free consultations with many companies including Aringo. The Aringo lady brought up how a lot of their clients with sub-700 GMAT scores got into elite b-schools. I asked her, "Ok that's nice, but how many of them were Asian males in finance?" (my background). She hesitated for a few seconds and replied "Well, we are not allowed to divulge any information about our clients." I then made the points that I made in the preceding paragraph, and she was unable to provide a good response. I'm not saying that consultants can't help. I'm sure some of them are good. But by and large, I think they are vastly overrated and provide a service whose value is questionable and cannot be empirically verified. If someone gets into a program of their choice they may say that a consultant helped make it happen. But how do we know that? This is pure conjecture since we don't know what the outcome would have been if he had not used a consultant. He may be a superstar who would've gotten in anyways; we just don't know.
2021-04-11 21:23:41
https://www.physicsforums.com/threads/a-question-in-qft-book-of-peskin-schoeder.575460/
# A question in the QFT book of Peskin & Schroeder

1. Feb 8, 2012

### ndung200790

The book writes: "...consider the color invariant: $(t^{a})_{ij}(t^{a})_{kl}$ (18.38). The indices $i, k$ transform according to the $3$ representation of color; the indices $j, l$ transform according to $\bar{3}$. Thus, (18.38) must be a linear combination of the two possible ways to contract these indices, $A\delta_{il}\delta_{kj} + B\delta_{ij}\delta_{kl}$ (18.39). The constants $A$ and $B$ can be determined by contracting (18.38) and (18.39) with $\delta_{ij}$ and with $\delta_{jk}$....."

I do not understand why (18.38) must be a linear combination as in (18.39). Thank you very much for your kind help.

2. Feb 8, 2012

### ndung200790

Here $t^{a}$ is a generator of SU(3).

3. Feb 8, 2012

### Physics Monkey

To get started with an argument, what would happen if you acted on all $ijkl$ indices with an arbitrary matrix $U$ in the fundamental of SU(3)? In other words, what can you say about $(U t^a U^+)_{ij} (U t^a U^+)_{kl}$?

4. Feb 8, 2012

5. Feb 8, 2012

### ndung200790

The book writes: "...and adjusting $A$ and $B$ so that the contractions of (18.39) obey the identities: $\mathrm{tr}[t^{a}]\,(t^{a})_{kl} = 0$; $(t^{a}t^{a})_{il} = \frac{4}{3}\delta_{il}$ (18.40). This gives the identity: $(t^{a})_{ij}(t^{a})_{kl} = \frac{1}{2}\left(\delta_{il}\delta_{kj} - \frac{1}{3}\delta_{ij}\delta_{kl}\right)$ (18.41)"

6. Feb 9, 2012

### ndung200790

Now I think that (18.41) is correct because the conditions (18.40) are looser than the conditions that make the $t^{a}$ generators of the Lie algebra. Is that correct?

7. Feb 10, 2012

### ndung200790

If the $t^{a}$ satisfy (18.41) (in #5), are they still the generators of SU(3)?

8. Feb 11, 2012

### ndung200790

I have heard that this can be solved by 't Hooft's double-line formalism. What is that?
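As an addendum (not part of the original thread): identity (18.41) can be checked numerically in the standard Gell-Mann basis, with $t^a = \lambda^a/2$. A small sketch:

```python
import numpy as np

# The eight Gell-Mann matrices; t^a = lambda^a / 2 generate SU(3).
gm = np.zeros((8, 3, 3), dtype=complex)
gm[0, 0, 1] = gm[0, 1, 0] = 1
gm[1, 0, 1], gm[1, 1, 0] = -1j, 1j
gm[2, 0, 0], gm[2, 1, 1] = 1, -1
gm[3, 0, 2] = gm[3, 2, 0] = 1
gm[4, 0, 2], gm[4, 2, 0] = -1j, 1j
gm[5, 1, 2] = gm[5, 2, 1] = 1
gm[6, 1, 2], gm[6, 2, 1] = -1j, 1j
gm[7] = np.diag([1, 1, -2]) / np.sqrt(3)
t = gm / 2

d = np.eye(3)
# Left side of (18.41): sum over a of (t^a)_{ij} (t^a)_{kl}
lhs = np.einsum('aij,akl->ijkl', t, t)
# Right side: (1/2) (delta_{il} delta_{kj} - (1/3) delta_{ij} delta_{kl})
rhs = 0.5 * (np.einsum('il,kj->ijkl', d, d) - np.einsum('ij,kl->ijkl', d, d) / 3)
print(np.allclose(lhs, rhs))  # True
```

Contracting $j$ with $k$ in the verified identity reproduces the second condition of (18.40), $(t^a t^a)_{il} = \frac{4}{3}\delta_{il}$.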
https://cs.stackexchange.com/questions/99979/turing-machine-to-output-enumeration-of-a-language
# Turing machine to output enumeration of a language

I am trying to write a Turing machine enumerator that enumerates the language where $$w = 0^n1^n$$ and $$n ≥ 0$$. So for example it should output the following to the first tape: e,#,0,1,#,0,0,1,1,#,0,0,0,1,1,1 etc. Here e means empty, i.e., to leave that index blank. I can't figure out how to express this as a Turing machine with states and transitions.

• Have you searched "online Turing machine simulator"? Select one that suits you and try it. – John L. Nov 12 '18 at 19:09

• Initialize the tape with $$\#01$$ (or $$\epsilon\#01$$).
• At each step, write $$\#$$, then go back to the previous $$\#$$ and copy the string written there, say $$0^n1^n$$.
• Change the first $$1$$ to a $$0$$, add two $$1$$s at the end, and go to the next step.
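The copy-and-extend procedure above is easy to prototype before encoding it as states and transitions. A Python sketch of the tape contents the enumerator should produce (not a transition table):

```python
def enumerate_tape(steps):
    """Tape contents after the given number of copy-and-extend steps:
    a blank cell ('e', the empty string for n = 0), then #, 0^1 1^1, #, 0^2 1^2, # ..."""
    tape = ['e', '#']
    n = 1
    for _ in range(steps):
        tape.extend('0' * n + '1' * n)  # write the next word 0^n 1^n
        tape.append('#')                # separator, as in the procedure above
        n += 1
    return tape

print(','.join(enumerate_tape(3)))
# e,#,0,1,#,0,0,1,1,#,0,0,0,1,1,1,#
```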
https://brilliant.org/discussions/thread/case-study/
# Case Study

Hello everybody! As you know, if $$a\geq b$$, then it is not necessary that $$\phi(a)\geq\phi(b)$$, since $$\phi$$ is not an increasing function. But there are some cases in which the above inequality holds true.

Case 1 When either of $$a$$ and $$b$$ is prime — suppose $$a$$ is an arbitrary prime; then $$b=a+1$$ or $$b=a-1$$. The same follows if $$b$$ is prime.

Case 2 When $$a=b$$

Case 3 When $$a$$ and $$b$$ are both primes and $$a>b$$

2 years, 1 month ago

Consider $$f(n) = \phi(n)$$, where $$\phi(n)$$ is the Euler totient function. Then $$f(5186) = f(5186+1) = f(5186+2) = 2592$$, i.e. $$f(5186) = f(5187) = f(5188) = 2592$$. 5186 is the only number less than $$10^{10}$$ which satisfies $$f(x) = f(x+1) = f(x+2)$$.

- 2 years, 1 month ago
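The claim in the reply is easy to verify with a naive totient (a sketch; trial over all $$k \le n$$ is fine at this size):

```python
from math import gcd

def phi(n):
    """Euler totient by brute force: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print([phi(5186 + i) for i in range(3)])  # [2592, 2592, 2592]
```

Indeed $$5186 = 2 \times 2593$$, $$5187 = 3 \times 7 \times 13 \times 19$$ and $$5188 = 2^2 \times 1297$$ all give $$\phi = 2592$$.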
https://scicomp.stackexchange.com/questions/19215/how-to-solve-odes-with-constraints-using-bvp4c
# How to solve ODEs with constraints using BVP4C?

I am using BVP4C to solve a system of ODEs which is as follows.

\left\{ \begin{aligned} \frac{\partial f(x,y)}{\partial x} &- \frac{d}{ds}\big(\dot{x} f(x,y)\big) = \lambda \ddot{x}(s)\\ \frac{\partial f(x,y)}{\partial y} &- \frac{d}{ds}\big(\dot{y} f(x,y)\big) = \lambda \ddot{y}(s)\\ \end{aligned} \right.

There is a constraint of the form $$\dot{x}^2 + \dot{y}^2 = 1$$. The boundary conditions are $$x(0) = x_A, \, y(0) = y_A, \, x(l) = x_B, y(l) = y_B$$. What should I do to deal with the above constraint?

• Welcome to SciComp Exchange. A better description of your problem is needed to provide you suggestions that are related to what you want. What is your system of equations? Mar 23 '15 at 17:18
• @yagoo: This system of equations doesn't look like a system of ODEs; rather, it looks like a system of PDEs. I suppose $s$ is supposed to be the "time" variable? Are you discretizing the PDEs in terms of $x$ and $y$? Mar 24 '15 at 7:56
• @Geoff Oxberry: It looks like a system of PDEs; however, it is a system of ODEs. As you said, $s$ is supposed to be the "time" variable. Mar 24 '15 at 8:42
• So what is $f$? Is it a given function? Please describe all objects that you use. – cfh Mar 24 '15 at 9:14

Introduce a new unknown $\phi(s)$ such that $$(\dot x(s), \dot y(s)) = (\cos\phi(s), \sin\phi(s)),$$ then rewrite the problem as a system of coupled first-order ODEs. You have now gotten rid of the constraint and can use any standard ODE integrator.

• @yagoo, if this solves your problem, why don't you accept the answer? Mar 30 '15 at 18:24

You call this system an ODE (ordinary differential equation), but this sort of system is actually called a DAE (differential algebraic equation).

What should I do to deal with the above constraint?
The constraint $\dot{x}^2 + \dot{y}^2 = 1$ is a smaller problem than the fact that you need to determine the time evolution of $\lambda$, or more precisely analytically compute the constraint satisfied by $\lambda$. One way to do this is to first write the system in integral form:

\left\{ \begin{aligned} \frac{d}{ds}\big(\dot{x} (f(x,y)-\lambda)\big) &= \frac{\partial f(x,y)}{\partial x}\\ \frac{d}{ds}\big(\dot{y} (f(x,y)-\lambda)\big) &= \frac{\partial f(x,y)}{\partial y}\\ \end{aligned} \right.

and then introduce dummy variables for converting it into semi-explicit form:

\left\{ \begin{aligned} \frac{dx}{ds} &= \dot{x}\\ \frac{dy}{ds} &= \dot{y}\\ \frac{du}{ds} &= \frac{\partial f(x,y)}{\partial x}\\ \frac{dv}{ds} &= \frac{\partial f(x,y)}{\partial y}\\ \end{aligned} \right.

\left\{ \begin{aligned} u &= \dot{x} (f(x,y)-\lambda)\\ v &= \dot{y} (f(x,y)-\lambda)\\ 1 &= \dot{x}^2 + \dot{y}^2\\ \end{aligned} \right.

You are lucky that the algebraic equation system can be solved for $\dot{x}$, $\dot{y}$, and $\lambda$. In general, you must use Pantelides algorithm (for example) to generate more constraints (and dummy variables) until your original dynamic variables are uniquely determined by the constraints. You don't even need $\lambda$, so let's eliminate it:

\left\{ \begin{aligned} \dot{y}u &= \dot{x}v\\ 1 &= \dot{x}^2 + \dot{y}^2\\ \end{aligned} \right.

This algebraic equation system now allows you to compute $\dot{x}$ and $\dot{y}$ from $u$ and $v$. So BVP4C will only see $x$, $y$, $u$, and $v$, and you will solve for $\dot{x}$ and $\dot{y}$ yourself and use them where they are needed.

Edit: Warning, the solution above is wrong!
Writing the system in integral form is not as straightforward as suggested, because we actually have \left\{ \begin{aligned} \frac{d}{ds}\big(\dot{x} f(x,y)\big)-\lambda\frac{d}{ds}\dot{x} &= \frac{\partial f(x,y)}{\partial x}\\ \frac{d}{ds}\big(\dot{y} f(x,y)\big)-\lambda\frac{d}{ds}\dot{y} &= \frac{\partial f(x,y)}{\partial y}\\ \end{aligned} \right. We could introduce dummy variables $\ddot{x}$ and $\ddot{y}$ to get an integral form \left\{ \begin{aligned} \frac{d}{ds}\big(\dot{x} f(x,y)\big) &= \frac{\partial f(x,y)}{\partial x}+\lambda\ddot{x}\\ \frac{d}{ds}\big(\dot{y} f(x,y)\big) &= \frac{\partial f(x,y)}{\partial y}+\lambda\ddot{y}\\ \frac{d}{ds}\dot{x} &= \ddot{x}\\ \frac{d}{ds}\dot{y} &= \ddot{y}\\ \end{aligned} \right. And now we really need to apply Pantelides algorithm... • @yagoo I just noticed that my answer is wrong. The transformation to integral form omitted a term involving a derivative of $\lambda$. This probably means that the index of the system is bigger than one, which will require some mastery of the techniques to solve. Mar 30 '15 at 7:45
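Leaving aside the index problem flagged in the edit, the purely algebraic step the answer describes — recovering $(\dot{x}, \dot{y})$ from $u$ and $v$ under $\dot{y}u = \dot{x}v$ and $\dot{x}^2 + \dot{y}^2 = 1$ — can be sketched as follows. The sign convention $f(x,y) - \lambda > 0$ is an assumption, not something from the post:

```python
import math

def tangent_from_uv(u, v):
    """Solve  y'*u == x'*v  together with  x'**2 + y'**2 == 1  for (x', y').
    Assumes f(x, y) - lambda > 0, which fixes the overall sign."""
    r = math.hypot(u, v)
    if r == 0:
        raise ValueError("u = v = 0: the tangent direction is undetermined")
    return u / r, v / r

print(tangent_from_uv(3.0, 4.0))  # (0.6, 0.8)
```

Inside a BVP4C (or scipy `solve_bvp`) right-hand side, a helper like this would be called at each node to turn the state $(x, y, u, v)$ back into $(\dot{x}, \dot{y})$.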
https://www.trustudies.com/ncert-solutions/class-9/maths/lines-and-angles/
NCERT solution for class 9 maths lines and angles (Chapter 6)

Solution for Exercise 6.1

1. In figure, lines AB and CD intersect at O. If $$\angle{AOC}$$ + $$\angle{BOE}$$ = $$70^\circ$$ and $$\angle{BOD}$$ = $$40^\circ$$, find $$\angle{BOE}$$ and reflex $$\angle{COE}$$.

Here, $$\angle{AOC}$$ and $$\angle{BOD}$$ are vertically opposite angles. Therefore, $$\angle{AOC}$$ = $$\angle{BOD}$$ Hence, $$\angle{AOC}$$ = $$40^\circ$$, [Since, $$\angle{BOD}$$ = $$40^\circ$$] ....(i) It is given that, $$\angle{AOC}$$ + $$\angle{BOE}$$ = $$70^\circ$$ Hence, from Eq. (i), $$40^\circ$$ + $$\angle{BOE}$$ = $$70^\circ$$ i.e., $$\angle{BOE}$$ = $$70^\circ$$ - $$40^\circ$$ Therefore, $$\angle{BOE}$$ = $$30^\circ$$ Now, by Linear pair axiom, $$\angle{AOC}$$ + $$\angle{COE}$$ + $$\angle{BOE}$$ = $$180^\circ$$ By substituting the values, we get, $$40^\circ$$ + $$\angle{COE}$$ + $$30^\circ$$ = $$180^\circ$$ i.e., $$\angle{COE}$$ = $$180^\circ$$ - $$40^\circ$$ - $$30^\circ$$ Therefore, $$\angle{COE}$$ = $$110^\circ$$ Now, so as to find the reflex angle, $$\angle{COE}$$ + reflex $$\angle{COE}$$ = $$360^\circ$$ i.e., $$110^\circ$$ + reflex $$\angle{COE}$$ = $$360^\circ$$ i.e., reflex $$\angle{COE}$$ = $$360^\circ$$ - $$110^\circ$$ Therefore, reflex $$\angle{COE}$$ = $$250^\circ$$

2. In figure, lines XY and MN intersect at O. If $$\angle{POY}$$ = $$90^\circ$$ and a : b = 2 : 3, find c.

It is given that, $$\angle{POY}$$ = $$90^\circ$$ Now, by Linear pair axiom, $$\angle{POY}$$ + $$\angle{POX}$$ = $$180^\circ$$ i.e., $$\angle{POX}$$ = $$180^\circ$$ - $$\angle{POY}$$ Therefore, $$\angle{POX}$$ = $$90^\circ$$ So, we get, a + b = $$90^\circ$$ ....(i) It is also given that, a : b = 2 : 3 Let a = 2k and b = 3k, From eq.
(i), 2k + 3k = $$90^\circ$$ i.e., 5k = $$90^\circ$$ Therefore, k = $$18^\circ$$ So, a = 2 × $$18^\circ$$ = $$36^\circ$$ and b = 3 × $$18^\circ$$ = $$54^\circ$$ Again by Linear pair axiom, $$\angle{MOX}$$ + $$\angle{XON}$$ = $$180^\circ$$ i.e., b + c = $$180^\circ$$ i.e., c = $$180^\circ$$ - $$54^\circ$$ Therefore, c = $$126^\circ$$

3. In figure, if $$\angle{PQR}$$ = $$\angle{PRQ}$$, then prove that $$\angle{PQS}$$ = $$\angle{PRT}$$.

By Linear pair axiom, $$\angle{PQS}$$ + $$\angle{PQR}$$ = $$180^\circ$$ ...(i) Similarly, again by Linear pair axiom, $$\angle{PRQ}$$ + $$\angle{PRT}$$ = $$180^\circ$$ ...(ii) Thus by Eq. (i) and (ii), we get, $$\angle{PQS}$$ + $$\angle{PQR}$$ = $$\angle{PRQ}$$ + $$\angle{PRT}$$ As it is given, $$\angle{PQR}$$ = $$\angle{PRQ}$$ Therefore, $$\angle{PQS}$$ + $$\angle{PRQ}$$ = $$\angle{PRQ}$$ + $$\angle{PRT}$$ By cancelling equal terms, we get, $$\angle{PQS}$$ = $$\angle{PRT}$$ Hence, proved.

4. In figure, if x + y = w + z, then prove that AOB is a line.

Since x, y, w and z are angles at a single point, x + y + w + z = $$360^\circ$$ But it is given that, x + y = w + z Hence, x + y + x + y = $$360^\circ$$ i.e., 2(x + y) = $$360^\circ$$ Therefore, (x + y) = $$180^\circ$$ Hence, by converse of linear pair axiom, it is proved that AOB is a straight line.

5. In figure below, POQ is a line. Ray OR is perpendicular to line PQ. OS is another ray lying between rays OP and OR. Prove that $$\angle{ROS}$$ = 1/2($$\angle{QOS}$$ - $$\angle{POS}$$).
Since it is given that OR is perpendicular to PQ, we have $$\angle{POR}$$ = $$\angle{ROQ}$$ = $$90^\circ$$ Also, we can say that, $$\angle{POS}$$ + $$\angle{ROS}$$ = $$90^\circ$$ So, $$\angle{ROS}$$ = $$90^\circ$$ - $$\angle{POS}$$ ...(i) Now, by adding $$\angle{ROS}$$ on both sides, we get, 2$$\angle{ROS}$$ = $$90^\circ$$ - $$\angle{POS}$$ + $$\angle{ROS}$$ i.e., 2$$\angle{ROS}$$ = ($$90^\circ$$ + $$\angle{ROS}$$) - $$\angle{POS}$$ Since $$\angle{QOS}$$ = $$\angle{ROQ}$$ + $$\angle{ROS}$$ = $$90^\circ$$ + $$\angle{ROS}$$, we get 2$$\angle{ROS}$$ = $$\angle{QOS}$$ - $$\angle{POS}$$ Therefore, $$\angle{ROS}$$ = 1/2($$\angle{QOS}$$ - $$\angle{POS}$$). Hence, proved.

6. It is given that $$\angle{XYZ}$$ = $$64^\circ$$ and XY is produced to point P. Draw a figure from the given information. If ray YQ bisects $$\angle{ZYP}$$, find $$\angle{XYQ}$$ and reflex $$\angle{QYP}$$.

Given that, ray YQ bisects $$\angle{ZYP}$$ Thus, $$\angle{ZYQ}$$ = $$\angle{QYP}$$ = 1/2($$\angle{ZYP}$$) ...(i) By Linear pair axiom, $$\angle{XYZ}$$ + $$\angle{ZYQ}$$ + $$\angle{QYP}$$ = $$180^\circ$$ But, it is given that, $$\angle{XYZ}$$ = $$64^\circ$$ ...(ii) Therefore, from (i) and (ii), we get, $$64^\circ$$ + $$\angle{ZYQ}$$ + $$\angle{ZYQ}$$ = $$180^\circ$$ i.e., 2$$\angle{ZYQ}$$ = $$180^\circ$$ - $$64^\circ$$ i.e., 2$$\angle{ZYQ}$$ = $$116^\circ$$ Therefore, $$\angle{ZYQ}$$ = $$58^\circ$$ Now, since, $$\angle{XYQ}$$ = $$\angle{XYZ}$$ + $$\angle{ZYQ}$$ i.e., $$\angle{XYQ}$$ = $$64^\circ$$ + $$58^\circ$$ = $$122^\circ$$ Now, $$\angle{QYP}$$ + reflex $$\angle{QYP}$$ = $$360^\circ$$ i.e., $$58^\circ$$ + reflex $$\angle{QYP}$$ = $$360^\circ$$ Therefore, reflex $$\angle{QYP}$$ = $$302^\circ$$

Solution for Exercise 6.2

1. In figure, find the values of x and y and then show that AB || CD.

Since, x + $$50^\circ$$ = $$180^\circ$$ .....(Linear pair) i.e., x = $$130^\circ$$ Therefore, y = $$130^\circ$$ .....(vertically opposite pair) Here, these are corresponding angles for lines AB and CD. Hence, it is proved that AB || CD.

2. In figure, if AB || CD, CD || EF and y : z = 3 : 7, find x.
It is given that, y : z = 3 : 7 Let y = 3k and z = 7k. x = $$\angle{CHG}$$ (Corresponding angles) ....(i) z = $$\angle{CHG}$$ (Alternate angles) ....(ii) From (i) and (ii), we get, x = z ....(iii) Now, x + y = $$180^\circ$$ ....(Internal angles on the same side of the transversal) From eq. (iii), z + y = $$180^\circ$$ Therefore, 3k + 7k = $$180^\circ$$ i.e., 10k = $$180^\circ$$ i.e., k = $$18^\circ$$ Therefore, y = 3 × $$18^\circ$$ and z = 7 × $$18^\circ$$ Therefore, y = $$54^\circ$$ and z = $$126^\circ$$ Hence, x = $$126^\circ$$

3. In figure, if AB || CD, EF is perpendicular to CD and $$\angle{GED}$$ = $$126^\circ$$, find $$\angle{AGE}$$, $$\angle{GEF}$$ and $$\angle{FGE}$$.

Since, $$\angle{AGE}$$ = $$\angle{GED}$$ ....(Alternate interior angles) But it is given that, $$\angle{GED}$$ = $$126^\circ$$ Therefore, $$\angle{AGE}$$ = $$126^\circ$$ ....(i) Also, $$\angle{GEF}$$ + $$\angle{FED}$$ = $$126^\circ$$ Since, EF is perpendicular to CD, $$\angle{GEF}$$ + $$90^\circ$$ = $$126^\circ$$ Therefore, $$\angle{GEF}$$ = $$36^\circ$$ Also, by linear pair axiom, we get, $$\angle{AGE}$$ + $$\angle{FGE}$$ = $$180^\circ$$ i.e., $$126^\circ$$ + $$\angle{FGE}$$ = $$180^\circ$$ ....(from (i)) $$\angle{FGE}$$ = $$54^\circ$$

4. In figure, if PQ || ST, $$\angle{PQR}$$ = $$110^\circ$$ and $$\angle{RST}$$ = $$130^\circ$$, find $$\angle{QRS}$$.

Construction: Draw a line parallel to ST through R.
As it is given that PQ || ST, $$\angle{PQR}$$ = $$110^\circ$$ and $$\angle{RST}$$ = $$130^\circ$$, we can also say that AB || PQ || ST Since, $$\angle{PQR}$$ + $$\angle{QRA}$$ = $$180^\circ$$ ....(interior angles on the same side of transversal) so, $$110^\circ$$ + $$\angle{QRA}$$ = $$180^\circ$$ i.e., $$\angle{QRA}$$ = $$70^\circ$$ Since, $$\angle{ARS}$$ = $$130^\circ$$ ....(Alternate Interior angle) As, $$\angle{RST}$$ = $$130^\circ$$ Now, so as to find $$\angle{QRS}$$, we have, $$\angle{ARS}$$ = $$\angle{ARQ}$$ + $$\angle{QRS}$$ i.e., $$130^\circ$$ = $$70^\circ$$ + $$\angle{QRS}$$ Therefore, $$\angle{QRS}$$ = $$60^\circ$$

5. In figure, if AB || CD, $$\angle{APQ}$$ = $$50^\circ$$ and $$\angle{PRD}$$ = $$127^\circ$$, find x and y.

We have, AB || CD $$\angle{PQR}$$ = $$\angle{APQ}$$ ....(Alternate interior angles) i.e., x = $$50^\circ$$ ....(i) (given $$\angle{APQ}$$ = $$50^\circ$$) Now, as we know that, exterior angle is equal to sum of interior opposite angles of a triangle. Therefore, $$\angle{PQR}$$ + $$\angle{QPR}$$ = $$127^\circ$$ from (i), we get, $$50^\circ$$ + $$\angle{QPR}$$ = $$127^\circ$$ Therefore, y = $$77^\circ$$

6. In figure, PQ and RS are two mirrors placed parallel to each other. An incident ray AB strikes the mirror PQ at B, the reflected ray moves along the path BC and strikes the mirror RS at C and again reflects back along CD. Prove that AB || CD.

Draw perpendiculars BE and CF on PQ and RS, respectively.
Therefore, we can say that BE || CF As angle of incidence = angle of reflection, we get, $$\angle{a}$$ = $$\angle{b}$$ ....(i) Similarly, we get, $$\angle{x}$$ = $$\angle{y}$$ ....(ii) By Alternate angles theorem, $$\angle{b}$$ = $$\angle{y}$$ Now, doubling the angles, we get, 2$$\angle{b}$$ = 2$$\angle{y}$$ i.e., $$\angle{b}$$ + $$\angle{b}$$ = $$\angle{y}$$ + $$\angle{y}$$ from (i) and (ii), $$\angle{a}$$ + $$\angle{b}$$ = $$\angle{x}$$ + $$\angle{y}$$ Hence, $$\angle{ABC}$$ = $$\angle{DCB}$$ Thus by converse of alternate angles theorem, we get, AB || CD Hence, it is proved.

Solution for Exercise 6.3

1. In figure, sides QP and RQ of $$\triangle{PQR}$$ are produced to points S and T, respectively. If $$\angle{SPR}$$ = $$135^\circ$$ and $$\angle{PQT}$$ = $$110^\circ$$, find $$\angle{PRQ}$$.

By linear pair axiom, $$\angle{RPS}$$ + $$\angle{RPQ}$$ = $$180^\circ$$ i.e., $$135^\circ$$ + $$\angle{RPQ}$$ = $$180^\circ$$ i.e., $$\angle{RPQ}$$ = $$180^\circ$$ - $$135^\circ$$ Therefore, $$\angle{RPQ}$$ = $$45^\circ$$ We also know that sum of interior opposite angles is equal to the exterior angle. Thus, $$\angle{RPQ}$$ + $$\angle{PRQ}$$ = $$\angle{PQT}$$ i.e., $$45^\circ$$ + $$\angle{PRQ}$$ = $$110^\circ$$ i.e., $$\angle{PRQ}$$ = $$110^\circ$$ - $$45^\circ$$ Therefore, $$\angle{PRQ}$$ = $$65^\circ$$.

2. In figure, $$\angle{X}$$ = $$62^\circ$$, $$\angle{XYZ}$$ = $$54^\circ$$; if YO and ZO are the bisectors of $$\angle{XYZ}$$ and $$\angle{XZY}$$ respectively of $$\triangle{XYZ}$$, find $$\angle{OZY}$$ and $$\angle{YOZ}$$.
In $$\triangle{XYZ}$$, $$\angle{X}$$ + $$\angle{Y}$$ + $$\angle{Z}$$ = $$180^\circ$$ because, Sum of all angles of triangle is equal to 180 Therefore, $$62^\circ$$ + $$\angle{Y}$$ + $$\angle{Z}$$ = $$180^\circ$$ i.e., $$\angle{Y}$$ + $$\angle{Z}$$ = $$118^\circ$$ Now, so as to find bisected angles, multiply both sides by 1/2, We get, 1/2[$$\angle{Y}$$ + $$\angle{Z}$$] = 1/2 × $$118^\circ$$ = $$59^\circ$$ Thus we get, $$\angle{OYZ}$$ + $$\angle{OZY}$$ = $$59^\circ$$....(as YO and ZO are the bisectors of $$\angle{XYZ}$$ and $$\angle{XZY}$$) Also, it is given that, $$\angle{XYZ}$$ = $$54^\circ$$ and we have $$\angle{OYZ}$$ = 1/2 × $$\angle{XYZ}$$ $$\angle{OZY}$$ + 1/2 × $$54^\circ$$ = $$59^\circ$$ $$\angle{OZY}$$ = $$59^\circ$$ - $$27^\circ$$ = $$32^\circ$$ Also, in $$\triangle{YOZ}$$, $$\angle{OYZ}$$ + $$\angle{YOZ}$$ + $$\angle{OZY}$$ = $$180^\circ$$ because, Sum of all angles of triangle is equal to 180 Therefore, $$27^\circ$$ + $$\angle{YOZ}$$ + $$32^\circ$$ = $$180^\circ$$ i.e., $$\angle{YOZ}$$ + $$59^\circ$$ = $$180^\circ$$ i.e., $$\angle{YOZ}$$ = $$180^\circ$$ - $$59^\circ$$ Therefore, $$\angle{YOZ}$$ = $$121^\circ$$ 3. In figure, if AB || DE , $$\angle{BAC}$$ = $$35^\circ$$ and $$\angle{CDE}$$ = $$53^\circ$$ , find $$\angle{DCE}$$. We have, AB || DE, Therefore, by alternate angles theorem, we get, $$\angle{AED}$$ = $$\angle{BAE}$$ Also, given that, $$\angle{BAC}$$ = $$35^\circ$$ and $$\angle{BAC}$$ = $$\angle{BAE}$$ Therefore, $$\angle{AED}$$ = $$35^\circ$$ Now, in $$\triangle{DCE}$$, $$\angle{DCE}$$ + $$\angle{CED}$$ + $$\angle{CDE}$$ = $$180^\circ$$ because, Sum of all angles of triangle is equal to 180. Thus, by putting given values, we get, $$\angle{DCE}$$ + $$35^\circ$$ + $$53^\circ$$ = $$180^\circ$$ i.e., $$\angle{DCE}$$ = $$180^\circ$$ - $$88^\circ$$ Therefore, $$\angle{DCE}$$ = $$92^\circ$$ 4. 
In figure, if lines PQ and RS intersect at point T, such that $$\angle{PRT}$$ = $$40^\circ$$, $$\angle{RPT}$$ = $$95^\circ$$ and $$\angle{TSQ}$$ = $$75^\circ$$, find $$\angle{SQT}$$.

Since we know that the exterior angle is equal to the sum of interior opposite angles, we get, $$\angle{PTS}$$ = $$\angle{RPT}$$ + $$\angle{PRT}$$ i.e., $$\angle{PTS}$$ = $$95^\circ$$ + $$40^\circ$$ Therefore, $$\angle{PTS}$$ = $$135^\circ$$ .....(given $$\angle{PRT}$$ = $$40^\circ$$ and $$\angle{RPT}$$ = $$95^\circ$$) Similarly, $$\angle{TSQ}$$ + $$\angle{SQT}$$ = $$\angle{PTS}$$ i.e., $$75^\circ$$ + $$\angle{SQT}$$ = $$135^\circ$$ i.e., $$\angle{SQT}$$ = $$135^\circ$$ - $$75^\circ$$ Therefore, $$\angle{SQT}$$ = $$60^\circ$$

5. In figure, if PQ is perpendicular to PS, PQ || SR, $$\angle{SQR}$$ = $$28^\circ$$ and $$\angle{QRT}$$ = $$65^\circ$$, then find the values of x and y.

Here, as PQ || SR, by alternate angles axiom, we get, $$\angle{PQR}$$ = $$\angle{QRT}$$ It is given that, $$\angle{PQS}$$ = x, $$\angle{SQR}$$ = $$28^\circ$$ and $$\angle{QRT}$$ = $$65^\circ$$ i.e., $$\angle{PQS}$$ + $$\angle{SQR}$$ = $$\angle{QRT}$$ i.e., x + $$28^\circ$$ = $$65^\circ$$ Therefore, x = $$37^\circ$$ ...(i) Now, considering right angled triangle PQS, $$\angle{SPQ}$$ = $$90^\circ$$ $$\angle{SPQ}$$ + x + y = $$180^\circ$$ ....(Since sum of all angles of a triangle is equal to $$180^\circ$$) i.e., $$90^\circ$$ + $$37^\circ$$ + y = $$180^\circ$$ i.e., y = $$180^\circ$$ - $$127^\circ$$ Therefore, y = $$53^\circ$$

6. In figure, the side QR of $$\triangle{PQR}$$ is produced to a point S. If the bisectors of $$\angle{PQR}$$ and $$\angle{PRS}$$ meet at point T, then prove that $$\angle{QTR}$$ = (1/2) $$\angle{QPR}$$

In $$\triangle{PQR}$$, we have, $$\angle{QPR}$$ + $$\angle{PQR}$$ = $$\angle{PRS}$$ ...(i) As we know, sum of interior opposite angles is equal to exterior angle.
Similarly in $$\triangle{TQR}$$, we have, $$\angle{QTR}$$ + $$\angle{TQR}$$ = $$\angle{TRS}$$ ...(ii) As QT and RT are the bisectors of $$\angle{PQR}$$ and $$\angle{PRS}$$, respectively, we get, $$\angle{TRS}$$ = (1/2) $$\angle{PRS}$$ ...(iii) $$\angle{TQR}$$ = (1/2) $$\angle{PQR}$$ ...(iv) By multiplying both sides of eq. (i) by 1/2, we get, (1/2)$$\angle{QPR}$$ + (1/2)$$\angle{PQR}$$ = (1/2)$$\angle{PRS}$$ From eq. (iii) and (iv), (1/2)$$\angle{QPR}$$ + $$\angle{TQR}$$ = $$\angle{TRS}$$ ...(v) Thus, from eq. (ii) and (v), we get, $$\angle{QTR}$$ + $$\angle{TQR}$$ = (1/2)$$\angle{QPR}$$ + $$\angle{TQR}$$ Cancelling equal terms, we get, $$\angle{QTR}$$ = (1/2)$$\angle{QPR}$$ Hence, proved.
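As a quick cross-check (not part of the textbook solutions), the numerical answers of Exercise 6.1 Q1–Q2 and Exercise 6.3 Q2 can be recomputed in a few lines:

```python
# Exercise 6.1, Q1: angle AOC = angle BOD = 40 (vertically opposite),
# and angle AOC + angle BOE = 70.
aoc = 40
boe = 70 - aoc                  # 30
coe = 180 - aoc - boe           # linear pair along AB -> 110
reflex_coe = 360 - coe          # 250

# Exercise 6.1, Q2: a + b = 90 with a : b = 2 : 3; c is the linear pair of b.
k = 90 / (2 + 3)
a, b = 2 * k, 3 * k             # 36.0, 54.0
c = 180 - b                     # 126.0

# Exercise 6.3, Q2: angle X = 62, angle XYZ = 54 (as in the worked solution).
oyz = 54 / 2                    # YO bisects angle XYZ -> 27.0
ozy = (180 - 62) / 2 - oyz      # half of (Y + Z) minus angle OYZ -> 32.0
yoz = 180 - oyz - ozy           # angle sum in triangle YOZ -> 121.0

print(boe, coe, reflex_coe, a, b, c, ozy, yoz)
# 30 110 250 36.0 54.0 126.0 32.0 121.0
```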
https://proofwiki.org/wiki/Definition:Independent_Events/Definition_1
Definition:Independent Events/Definition 1 Definition Let $\mathcal E$ be an experiment with probability space $\left({\Omega, \Sigma, \Pr}\right)$. Let $A, B \in \Sigma$ be events of $\mathcal E$ such that $\Pr \left({A}\right) > 0$ and $\Pr \left({B}\right) > 0$. The events $A$ and $B$ are defined as independent (of each other) iff the occurrence of one of them does not affect the probability of the occurrence of the other one. Formally, $A$ is independent of $B$ iff: $\Pr \left({A \mid B}\right) = \Pr \left({A}\right)$ where $\Pr \left({A \mid B}\right)$ denotes the conditional probability of $A$ given $B$.
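As an illustration (not part of the ProofWiki page): for a fair six-sided die, take $A$ = "the roll is even" and $B$ = "the roll is at most $4$". Then $\Pr \left({A \mid B}\right) = \Pr \left({A}\right) = \dfrac 1 2$, so $A$ is independent of $B$ in the above sense. A quick check over the finite $\Omega$:

```python
from fractions import Fraction

omega = range(1, 7)                    # fair die: six equiprobable outcomes
A = {w for w in omega if w % 2 == 0}   # the roll is even
B = {w for w in omega if w <= 4}       # the roll is at most 4

def pr(event):
    return Fraction(len(event), 6)

# Conditional probability Pr(A | B) = Pr(A and B) / Pr(B)
pr_A_given_B = pr(A & B) / pr(B)
print(pr_A_given_B == pr(A))  # True
```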
https://harveynick.com/2018/05/09/fast-ai-via-ipad-with-paperspace-and-juno-app/
# Fast.ai via iPad with Paperspace and Juno App

Note: This is a repost from my other blog.

Having started Fast.ai's Practical Deep Learning for Coders course, the first thing I noticed is how much less structured it is than Andrew Ng's Coursera Deep Learning Specialization (non affiliate link). Fast.ai supplies you with the Jupyter notebooks needed for the assignments, but here a lot of the setup is down to you. At first I was a little frustrated by the extra work that Fast.ai was making me do. Then I came to the conclusion that it's actually a good thing. In the first instance, the less controlled environment is better preparation for actual problems. In the second, it means I can try doing the whole course via iPad.

I've already noted that Jupyter in the browser is a pretty miserable experience on iPad. Happily there's an excellent native Jupyter app called Juno, which solves that problem nicely. But a bit of extra work is needed to get it working well.

I decided to use Paperspace1 (Fast.ai's recommended option) as my GPU cloud for this course. There are instructions for setting up Paperspace for fast.ai here. Once you've done that, your workflow will look something like this:

1. Start your instance via the Paperspace console;
2. ssh in and start Jupyter;
3. Copy the URL with the magic token;
4. Paste it into your browser, replacing localhost with your instance's public IP;
5. Hack hack hack;
6. Shut down your instance via the Paperspace console.

Steps 3 and 4 don't work so well for Juno, and step 2 is also pretty superfluous. We can eliminate these by turning on password authentication and automatically starting Jupyter on boot.

Password authentication comes first, which will make connecting via Juno a lot easier. I'm assuming you've followed the setup I linked to above. Start your instance and log in via the terminal. Now run this on the command line:

cd fastai
jupyter notebook password

Then give it your chosen password. Next: run Jupyter on startup.
Type this on the command line: crontab -e Now add this to the bottom of the file which opens: @reboot cd /fastai; source /.bashrc; /anaconda3/envs/fastai/bin/jupyter notebook >>/cronrun.log 2>&1 Even though Jupyter will now start automatically, there are still reasons to log in. You’re going to need to download additional datasets, for one thing. ssh would be the usual means of doing so, but from the iPad mosh (short for “Mobile Shell”) is a more robust option. I’m using an app called Blink for that. Paperspace machines are not set up to allow the ports mosh uses by default. So you’ll need to open one, like so: sudo ufw allow 60001 After that mosh should work just fine. 1. That’s an affiliate link which will get you $10 of credit. If you prefer a non-affiliate link there’s one here. If you go that route and still want the$10 credit, you can use my code, which is: AAGWLUH.
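Incidentally, the localhost-for-IP substitution in step 4 of the workflow above is easy to script if you ever need it. A small sketch, with a made-up token and IP:

```shell
# Hypothetical values: use the URL Jupyter actually prints and your instance's public IP.
url='http://localhost:8888/?token=abc123'
public_ip='184.105.2.17'
echo "$url" | sed "s/localhost/$public_ip/"
# prints: http://184.105.2.17:8888/?token=abc123
```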
https://cringproject.wordpress.com/category/uncategorized/
A finite presentation chapter? March 15, 2011 There are a whole slew of results in commutative algebra and algebraic geometry that are essentially elaborations on a standard set of tricks for finitely presented objects. For instance, one has the following fact: if $\{ A_\alpha\}$ is an inductive system of rings, then any finitely presented module over the colimit descends to one of the $A_\alpha$. Moreover, the category of f.p. modules over the colimit is the “colimit category” of the categories of f.p. modules over the $A_\alpha$. Similarly, any f.p. algebra over the colimit descends to one of the $A_\alpha$. This, together with fpqc descent, is behind Grothendieck’s extremely awesome proof of Chevalley’s theorem that a quasi-finite morphism is quasi-affine; this trick, in EGA IV-3, is what lets him reduce to the case where the target scheme is the Spec of some local ring.  So I think it would be fun to have a whole bunch of these sorts of results. On the other hand, I’m not sure whether it would be pedantic to devote an entire chapter to them. There are probably more important things in commutative algebra proper, and the above results are really cleaner if we can use the language of schemes a bit (then we can talk about quasi-coherent sheaves on projective limits, and even derive ZMT!), though it is an open question exactly how much we should delve into algebraic geometry. Thoughts?
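For reference, the descent statements alluded to above can be written out explicitly; this is a standard formulation (the notation here is mine, for a filtered inductive system):

```latex
% Standard finite-presentation descent, written out (notation is mine).
% Let (A_\alpha) be a filtered inductive system of rings with colimit A.
\begin{enumerate}
  \item If $M$ is a finitely presented $A$-module, then there is an index
    $\alpha$ and a finitely presented $A_\alpha$-module $M_\alpha$ with
    $M \cong M_\alpha \otimes_{A_\alpha} A$.
  \item For finitely presented modules $M_\alpha, N_\alpha$ over $A_\alpha$,
    \[
      \varinjlim_{\beta \geq \alpha}
        \operatorname{Hom}_{A_\beta}\!\bigl(M_\alpha \otimes_{A_\alpha} A_\beta,\;
                                            N_\alpha \otimes_{A_\alpha} A_\beta\bigr)
      \;\xrightarrow{\ \sim\ }\;
      \operatorname{Hom}_A\!\bigl(M_\alpha \otimes_{A_\alpha} A,\;
                                  N_\alpha \otimes_{A_\alpha} A\bigr),
    \]
    which is the precise sense in which the f.p.\ modules over $A$ form the
    ``colimit category'' of the categories of f.p.\ modules over the $A_\alpha$.
\end{enumerate}
```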
http://math.stackexchange.com/questions/74184/showing-an-isomorphism-between-exterior-products-of-a-free-module
# Showing an isomorphism between exterior products of a free module The following is a question that comes from a counterexample to a problem about exterior algebras of modules, and the notes mention that somehow this all relates to the fact that symplectic geometry is done in even dimensions. Let $F$ be a free $R$-module of rank $n\geq 3$ where $R$ is a commutative ring with identity. Let $T(F) = \oplus_{k=0}^{\infty} T^{k}(F)$ where $T^k(F) = F \otimes F \otimes \ldots \otimes F$ is the tensor product of $k$ copies of $F$. Let $\wedge F$ denote the exterior algebra of the $R$-module $F$, that is, the quotient of the tensor algebra $T(F)$ by the ideal $A(F)$ generated by all $x \otimes x$ for $x \in F$. Suppose $1 \leq k \leq n-1$. How do we show $\wedge ^k F \cong \wedge^{n-k} F$? - The isomorphism is called the Hodge dual en.wikipedia.org/wiki/Hodge_dual but I only know it in dimension 2 (where it rotates by 90 degrees using the formula for perpendicular slope) and dimension 3 (where it is called the cross product more or less). – Jack Schmidt Oct 20 '11 at 2:21 Here are the steps to proving the (non-canonical) isomorphism you ask about: • Prove that $\wedge^k F$ is free of rank $\binom nk$. (This is just the same as in the context of vector spaces over a field; if you choose a free basis $x_i$, $i = 1,\ldots, n$, for $F$, then the $k$th exterior power is freely generated by the products $x_{i_1}\wedge x_{i_2} \wedge \cdots \wedge x_{i_k}$, for sequences $1 \leq i_1 < i_2 < \cdots < i_k \leq n$.) • Observe that $\binom nk=\binom{n}{n-k}$, and that free modules of the same rank are isomorphic. Here is a related, but more canonical, isomorphism: Wedge product induces a bilinear pairing $\wedge^k F \times \wedge^{n-k}F \to \wedge^n F$, which one checks to be non-degenerate. The target is a line (i.e.
one-dimensional), and so from this one deduces a canonical isomorphism $\wedge^{n-k}F \cong (\wedge^k F)^*\otimes \wedge^n F.$ (Here ${}^*$ denotes the dual of a free $R$-module.) -
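The rank count in the first step can be sanity-checked numerically; the symmetry $\binom nk = \binom{n}{n-k}$ is all the non-canonical isomorphism uses. A quick sketch (an illustration, not part of the proof):

```python
import math

# Ranks of the exterior powers of a free module of rank n = 5:
# wedge^k F is free of rank C(n, k), and C(n, k) == C(n, n - k).
n = 5
ranks = [math.comb(n, k) for k in range(n + 1)]
print(ranks)  # [1, 5, 10, 10, 5, 1]
assert all(math.comb(n, k) == math.comb(n, n - k) for k in range(n + 1))
```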
https://oxscience.com/angular-tangential-velocity/
# What is the difference between angular and tangential velocity?

The difference between angular velocity and tangential velocity is that "the angular displacement covered by a body in unit time is called angular velocity, while tangential velocity is the velocity which is tangent to the circular path." Now let's learn about these quantities in detail. Keep reading…

## Angular velocity

"The time rate of change of angular displacement is known as angular velocity." It is denoted by ω, and its formula is given by:

ω = Δθ/Δt

Units of angular velocity are:

• radian per second (SI unit)
• degree per second (B.E. system)
• revolution per second

Very often we are interested in knowing how fast or how slow a body is rotating. This is determined by its angular velocity, which is defined as the rate at which the angular displacement changes with time. If Δθ is the angular displacement during the time interval Δt, the average angular velocity ωav during this interval is given by:

ωav = Δθ/Δt

The instantaneous angular velocity ω is the limit of the ratio Δθ/Δt as Δt, following the instant t, approaches zero. Thus:

ω = lim(Δt → 0) Δθ/Δt

In this limit the angular displacement is infinitesimally small, and an infinitesimal angular displacement is a vector quantity. Angular velocity is therefore a vector; its direction is along the axis of rotation and is given by the right-hand rule.

See Also: Difference b/w Speed & Velocity

## Types of angular velocity

### Average angular velocity

"The total rate of change of angular displacement is called average angular velocity."

### Uniform angular velocity

"If the rate of change of angular displacement is constant, the angular velocity is called uniform."

### Non-uniform angular velocity

"If the rate of change of angular displacement is not constant, the angular velocity is called non-uniform."

### Instantaneous angular velocity

"Angular velocity at a particular instant of time is called instantaneous angular velocity."

## Tangential velocity

Tangential velocity is the velocity which is tangent to the circular path.
Its mathematical form is expressed as:

vt = rω

where r is the radius of the circular path and ω is the angular velocity.
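As a worked example of the relation between angular and tangential velocity, vt = rω (the numbers here are made up):

```python
import math

# A point 0.5 m from the axis of a wheel turning at 2 revolutions per second.
omega = 2 * 2 * math.pi   # angular velocity: 2 rev/s = 4*pi rad/s
r = 0.5                   # radius in metres
v_t = r * omega           # tangential speed, v_t = r * omega
print(round(v_t, 3))      # 6.283 m/s
```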
https://4gte.com/products/hp-agilent-346c-noise-source-10-mhz-to-26-5-ghz-nominal-enr-15-db-2/
# HP/Agilent 346C Noise Source, 10 MHz to 26.5 GHz, nominal ENR 15 dB w/ Option H01

Includes Option H01

The Agilent 346C noise source is the ideal companion to Agilent's noise figure solutions. Since it is broadband (10 MHz to 26.5 GHz), it eliminates the necessity for several sources at different frequency bands. The low SWR of the noise source reduces a major source of measurement uncertainty: reflections of test signals. Option K01 is a coaxial noise source and features coverage from 1 to 50 GHz with a 2.4 mm coaxial connector.

SKU: AGT346C-H01

## Description

• Low SWR for reducing noise figure (NF) measurement uncertainty
• Individually calibrated ENR values at specific frequencies to enhance accuracy
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=mzm&paperid=381&option_lang=eng
Mat. Zametki, 2002, Volume 71, Issue 5, Pages 732–741 (Mi mz381)

Turing Machines Connected to the Undecidability of the Halting Problem

L. M. Pavlotskaya

Abstract: The problem of finding a Turing machine with undecidable halting problem whose program contains the smallest number of instructions is well known. Obviously, such a machine must satisfy the following condition: by deleting even a single instruction from its program, we get a machine with decidable halting problem. In this paper, Turing machines with undecidable halting problem satisfying this condition are called connected. We obtain a number of general properties of such machines and deduce their simplest corollaries concerning the minimal machine with undecidable halting problem.

DOI: https://doi.org/10.4213/mzm381

English version: Mathematical Notes, 2002, 71:5, 667–675

UDC: 510.53+512.54.05

Citation: L. M. Pavlotskaya, “Turing Machines Connected to the Undecidability of the Halting Problem”, Mat. Zametki, 71:5 (2002), 732–741; Math. Notes, 71:5 (2002), 667–675

Citation in format AMSBIB \Bibitem{Pav02} \by L.~M.~Pavlotskaya \paper Turing Machines Connected to the Undecidability of the Halting Problem \jour Mat. Zametki \yr 2002 \vol 71 \issue 5 \pages 732--741 \mathnet{http://mi.mathnet.ru/mz381} \crossref{https://doi.org/10.4213/mzm381} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=1936197} \zmath{https://zbmath.org/?q=an:1029.03026} \transl \jour Math.
Notes \yr 2002 \vol 71 \issue 5 \pages 667--675 \crossref{https://doi.org/10.1023/A:1015840005656} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000176477200009} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-0141513942}
https://math.stackexchange.com/questions/1080172/integral-along-a-contour-is-0-how?noredirect=1
# Integral along a contour is $0$, how?

I recently had an extremely failed attempt at asking the same question, so I am posting the same question more or less to hope that someone can give me feedback. Consider the integral: $$\int_{0}^{\infty} \frac{\log^2(x)}{x^2 + 1} dx$$ Image taken and modified from: Complex Analysis Solution (Please Read for background information). $R$ is the big radius, $\delta$ is the small radius. Actually, let's consider $u$ the small radius. Let $\delta = u$. Ultimately the goal is to let $u \to 0$. We can parametrize, $$z = ue^{i\theta}$$ $$\int_{\delta} f(z)dz = (-)\cdot\int_{0}^{\pi} \frac{(i\theta + \log(u))^2\cdot (uie^{i\theta})}{(ue^{i\theta})^2 + 1} d\theta$$ $$\left | \int_{0}^{\pi} \frac{(i\theta + \log(u))^2\cdot (uie^{i\theta})}{(ue^{i\theta})^2 + 1} d\theta \right | \le \int_{0}^{\pi} \frac{|(i\theta + \log(u))|^2\cdot(u)}{|(ue^{i\theta})^2 + 1 |} d\theta$$ $$|(ue^{i\theta})^2 + 1 | < u^2 + 1$$ $$\frac{1}{u^2 + 1} < \frac{1}{|(ue^{i\theta})^2 + 1 |}$$ Since the maximum value of $\theta$ is $\theta = \pi$, $$|(i\theta + \log(u))| = \sqrt{\log^2(u) + \theta^2} \le \sqrt{\log^2(u) + \pi^2}$$ So: $$|(i\theta + \log(u))|^2 \le \log^2(u) + \pi^2$$ For values $u$ near $0$, $$(u)|(i\theta + \log(u))|^2 \le (\log^2(u) + \pi^2)u \le (\pi^2)u + 5\pi^2$$ Therefore, $$\frac{|\log(z)|}{|z^2 + 1|} \le \frac{(\pi^2)u + 5\pi^2}{u^2 + 1}$$ Then we take the limit as $u \to 0$, which makes the RHS of the inequality $0$; hence the LHS upper bound is $0$. So is the contour integral around the small semicircle $\delta$ equal to $0$? How do I do this? Thanks • @AaronMaroja, I wanted to explore other ways – Amad27 Dec 24 '14 at 22:14 • But log is discontinuous at $z=0$, the residue theorem won't apply – Amad27 Dec 24 '14 at 22:15 • Oh, that's right. – Aaron Maroja Dec 24 '14 at 22:16 • So I still do need help with this.
– Amad27 Dec 24 '14 at 22:16 • – Aaron Maroja Dec 24 '14 at 22:17 According to your picture you have chosen a $\ds{\ln\pars{z}}$-branch as follows: $$\ln\pars{z}=\ln\pars{\verts{z}} + \,{\rm Arg}\pars{z}\ic\,\quad z \not= 0\,,\quad -\,{\pi \over 2} < \,{\rm Arg}\pars{z} < {3\pi \over 2}$$ \begin{align}&\int_{C}{\ln^{2}\pars{z} \over z^{2} + 1}\,\dd x =2\pi\ic\,{\bracks{\ln\pars{1} + \pi\ic/2}^{2} \over 2\ic}=-\,{\pi^{3} \over 4} \end{align} Moreover, \begin{align} -\,{\pi^{3} \over 4}&=\lim_{\epsilon\ \to\ 0^{+}}\left[\int_{-\infty}^{-\epsilon}{\bracks{\ln\pars{-x} + \pi\ic}^{2} \over x^{2} + 1}\,\dd x\right. \\[5mm]&\left.\phantom{\lim_{\epsilon\ \to\ 0^{+}}\left[A\right.} +\ \underbrace{% \int_{\pi}^{0}{\bracks{\ln\pars{\epsilon} + \ic\theta}^{2}\over \epsilon^{2}\expo{2\ic\theta} + 1}\,\epsilon\expo{\ic\theta}\ic\,\dd\theta} _{\ds{\dsc{\to\ 0}\ \mbox{when}\ \dsc{\epsilon \to 0^{+}}. \mbox{See below.}}} +\int_{\epsilon}^{\infty}{\bracks{\ln\pars{x} + 0\ic}^{2} \over x^{2} + 1}\,\dd x\right] \\[1cm]&=\int_{0}^{\infty} {\ln^{2}\pars{x} + \bracks{\ln\pars{x} + \pi\ic}^{2} \over x^{2} + 1}\,\dd x \\[5mm]&=2\int_{0}^{\infty}{\ln^{2}\pars{x} \over x^{2} + 1}\,\dd x +2\pi\ic\ \overbrace{\int_{0}^{\infty}{\ln\pars{x} \over x^{2} + 1}\,\dd x} ^{\ds{=\ \dsc{0}}}\ -\ \pi^{2}\ \overbrace{\int_{0}^{\infty}{\dd x \over x^{2} + 1}} ^{\ds{=\ \dsc{\pi \over 2}}} \\[5mm]&=2\int_{0}^{\infty}{\ln^{2}\pars{x} \over x^{2} + 1}\,\dd x -{\pi^{3} \over 2}\quad\imp\quad \color{#66f}{\large\int_{0}^{\infty}{\ln^{2}\pars{x} \over x^{2} + 1}\,\dd x} =\half\pars{{\pi^{3} \over 2} - {\pi^{3} \over 4}} =\color{#66f}{\large{\pi^{3} \over 8}} \end{align} The integral in the small semicircle satisfies $\ds{\pars{~\mbox{with}\ 0 < \epsilon < 1~}}$: \begin{align} 0&<\verts{\int_{\pi}^{0}{\bracks{\ln\pars{\epsilon} + \ic\theta}^{2}\over \epsilon^{2}\expo{2\ic\theta} + 1}\,\epsilon\expo{\ic\theta}\ic\,\dd\theta}< \epsilon\int_{0}^{\pi}{\bracks{\ln\pars{\epsilon} + \ic\theta}^{2}\over 
\verts{\epsilon^{2}\expo{2\ic\theta} + 1}}\,\dd\theta \\[5mm]&<{\epsilon \over 1 - \epsilon^{2}} \bracks{\pi\ln^{2}\pars{\epsilon} + \pi^{2}\verts{\ln\pars{\epsilon}} + {\pi^{3} \over 3}}\quad\to\quad 0\quad\mbox{when}\quad\epsilon\to 0^{+}. \end{align} • @MathN00b I'll wait for a OP comment to delete it. Thanks. – Felix Marin Dec 24 '14 at 23:17 • @MathN00b I changed the answer. I didn't notice the $^{2}$ in the $\log$. Thanks. – Felix Marin Dec 25 '14 at 0:21 • Thanks for answering, it is very helpful, do you mind if I ask a question? How is: $$\epsilon\cdot\int_{0}^{\pi} \frac{(\ln(\epsilon) + i\theta)^2 d\theta}{|(\epsilon^2 \cdot e^{2i\theta} + 1)|} < \frac{\epsilon}{1-\epsilon^2}\cdot[\pi\cdot\ln^2(\epsilon) + \pi^2|\ln(\epsilon)|] + \frac{\pi^3}{3}]$$ Are you using the ML-Inequality? How did you derive the RHS? It is generally: INT < M*(l) $l$ is arc length of contour, and $M$ is the max: en.wikipedia.org/wiki/Estimation_lemma ? – Amad27 Dec 25 '14 at 8:55 • Wait: Also, how did I choose that branch. Thats weird. The angle I have is ranginging from $0 \le \theta \le \pi$ How is it from $-\pi/2 \le \theta \le 3\pi/2$???? Also: What is the form of the log in the beginning of your answer? – Amad27 Dec 25 '14 at 9:05 • If you please answer the above, I came up with a solution $$\left |\int \right| \le ML(\Gamma)$$ The M-L Inequality. Then I figured $M = \max|f(z)|$ $|\log^2(z)| = |\log^2(\epsilon) - \theta^2| \le \log^2(\epsilon)$ $$\frac{1}{|z^2 + 1|} \le \frac{1}{|\epsilon^2 - 1|}$$ $$\frac{\log^2(\epsilon)}{|\epsilon^2 - 1|} \le M$$ $$L = (1/2)(2\pi\epsilon) = \pi\epsilon$$ $$\left |\oint \right| \le \frac{\pi\epsilon \cdot \log^2(\epsilon)}{|\epsilon^2 - 1|}$$ As $\epsilon \to 0$ Limit $\to 0$ But is this correct? I dont know if thats the actual Max (M)?? Can it be $\le Max$?? 
– Amad27 Dec 25 '14 at 10:37

We want to show that $$\int_{0}^{\infty}\frac{\ln^2 x}{x^2+1}dx = \frac{\pi^3}{8} \tag{1}$$ To begin, let's take $f(z) = \frac{\ln^2 z}{z^2+1}$ with the branch $\Big(|z|> 0,\ -\frac{\pi}{2} < \arg z < \frac{3\pi}{2}\Big)$ of the multiple-valued function $\ln^2 z / (z^2+1)$. Since we are isolating $z = i$, we're going to take $\delta < 1 < R$. According to Cauchy's Residue Theorem, $$\int_{L_1} f(z)dz + \int_{C_R} f(z)dz + \int_{L_2} f(z)dz + \int_{C_\delta} f(z)dz= 2\pi i Res_{z=i}f(z)$$ That is, $$\int_{L_1} f(z)dz + \int_{L_2} f(z)dz = 2\pi i Res_{z=i}f(z) - \int_{C_R} f(z)dz - \int_{C_\delta} f(z)dz \tag{2}$$ Since $$f(z) = \frac{(\ln r + i\theta)^2}{r^2e^{2i\theta} + 1} \ \ \ \ \ \ (z=re^{i\theta})$$ the parametric representations $$z = r e^{i0} = r \ \ (\delta\leq r\leq R) \ \ \text{and}\ \ z = re^{i\pi} = -r \ \ (\delta\leq r \leq R)$$ for the legs $L_1$ and $-L_2$ can be used to write the LHS of equation $(2)$ as $$\int_{L_1} f(z)dz - \int_{-L_2} f(z)dz = \int_{\delta}^{R} \frac{\ln^2 r}{r^2 + 1}dr + \int_{\delta}^{R} \frac{(\ln r + i\pi)^2}{r^2 + 1}dr$$ Also, since $$Res_{z=i}f(z) = \frac{p(i)}{\phi'(i)}\ \ \text{where}\ \ p(z) = \ln^2 z \ \ \text{and}\ \ \phi(z) = z^2 + 1$$ then $$Res_{z=i}f(z) = \frac{\Big(\ln (1) + i\frac{\pi}{2}\Big)^2}{2i}$$ Thus equation $(2)$ becomes \begin{align}&2\int_{\delta}^{R} \frac{\ln^2 r}{r^2 + 1}dr + 2\pi i\int_{\delta}^{R} \frac{\ln r }{r^2 + 1}dr - \pi^2\int_{\delta}^{R} \frac{1}{r^2 + 1}dr\\ & = 2\pi i \frac{\Big(\ln (1) + i\frac{\pi}{2}\Big)^2}{2i} - \int_{C_R} f(z)dz - \int_{C_\delta} f(z)dz \\ &= -\frac{\pi^3}{4} - \int_{C_R} f(z)dz - \int_{C_\delta} f(z)dz \end{align} Evaluating the integrals in the limit:

1. $\lim_{\delta \to 0,\, R\to \infty}\int_{\delta}^{R} \frac{\ln r }{r^2 + 1}dr = 0$
2. $\lim_{\delta \to 0,\, R \to \infty}\int_{\delta}^{R} \frac{1}{r^2 + 1}dr = \frac{\pi}{2}$
3. $\lim_{R\to\infty}\int_{C_R} f(z)dz = 0$
4. $\lim_{\delta \to 0}\int_{C_\delta} f(z)dz = 0$

Showing $4$.
Take $z = \delta e^{i\theta}$. Notice that if $\delta < 1$ and $z = \delta e^{i\theta}$, \begin{align}|\log^2 z| &=|(\ln \delta + i\theta )^2| = |\ln^2\delta + 2i\theta\ln\delta - \theta^2|\\&\leq |\ln^2\delta| + 2|i\theta\ln\delta|+\theta^2 \leq \ln^2\delta -2\pi\ln \delta + \pi^2\end{align} and $$|z^2+1| \geq ||z^2| - 1| = 1 - \delta^2$$ then $$\Bigg|\int_{C_\delta} f(z)dz\Bigg| \leq \int_{C_\delta} |f(z)| |dz| \leq \frac{\ln^2\delta -2\pi\ln \delta + \pi^2}{1 - \delta^2} \pi\delta$$ The RHS of the inequality goes to $0$ as $\delta \to 0$. Therefore we get $$2\int_{0}^{\infty} \frac{\ln^2 r}{r^2 + 1}dr = \frac{\pi^3}{4} \Rightarrow \int_{0}^{\infty} \frac{\ln^2 r}{r^2 + 1}dr = \frac{\pi^3}{8}$$ • If you have any questions concerning residue and other parts, feel free to ask. – Aaron Maroja Dec 24 '14 at 22:45 • We can't, the denominator is 1, not indeterminate form. – Amad27 Dec 24 '14 at 22:45 • Well, the limit goes to zero, need to solve it here. – Aaron Maroja Dec 24 '14 at 22:48 • But just because the limit is 0 doesn't mean that the integral is 0 does it? – Amad27 Dec 24 '14 at 22:49 • You're estimating the integral value, and finding that as $\delta \to 0$ it "vanishes", then yes, it means that the integral is zero on the limit. – Aaron Maroja Dec 24 '14 at 22:52
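As a numerical cross-check of the result (not part of either proof): substituting $x \mapsto 1/x$ shows $\int_1^\infty = \int_0^1$, and expanding $1/(1+x^2)$ as a geometric series with $\int_0^1 x^{2n}\ln^2 x\,dx = 2/(2n+1)^3$ gives $\int_0^\infty \frac{\ln^2 x}{1+x^2}dx = 4\sum_{n\ge0}\frac{(-1)^n}{(2n+1)^3}$, which is easy to sum:

```python
import math

# Partial sum of 4 * sum_{n>=0} (-1)^n / (2n+1)^3, which should equal pi^3 / 8.
total = 4 * sum((-1) ** n / (2 * n + 1) ** 3 for n in range(200_000))
print(total, math.pi ** 3 / 8)  # both print ~ 3.8757846
```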
https://moodle.org/mod/forum/discuss.php?d=38935&parent=952636
## General plugins

### Slideshow module again

Re: Slideshow module again

Pushed changes to github with various fixes. I also sent a pull request to Paul Vaughan's repo yesterday, to try and keep the module from fragmenting too much.

> The position of the images is always left. The setting of the position doesn't work.

Do you mean the "Centred" setting? It works for me in both Chrome and Firefox.

> The thumbnails in the caption page will be displayed only if the file is really ending with the extension jpg.

I haven't been able to reproduce this; .png and .jpg files both show the thumbnail on the caption editing page for me.

> view.php, lines 51–58: add <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> so that the special characters (e.g. ä, ü) are displayed correctly if the background is white.

Incorporated this, and also fixed the markup. It was missing html and body opening tags.

> captions.php, line 38: add the full name of the course in the header.

Added it to media.php and the autopopup as well.

> I found another bug: the description can be found in the html code, but isn't visible on the website. It might be behind the image. If you delete the image from the website (not from the slideshow) then you can see the description.

I fixed the caption display both under the image and to the right of the image.

Might I suggest you clone one of the repositories on github? It's easier than using zip files to share changes.

Average of ratings: -

Re: Slideshow module again

> "The thumbnails in the caption page will be displayed only if the file is really ending with the extension jpg. I haven't been able to reproduce this, .png and .jpg files both show the thumbnail on the caption editing page for me."

OK. Probably it is caused by the uppercase.
To accept uppercase and lowercase in the file extension, add "i" in lib.php at line 345:

```
if ( preg_match("/\.jpe?g$/i",$file_record->filename) || preg_match("/\.gif$/i",$file_record->filename) || preg_match("/\.png$/i",$file_record->filename))
```

Regards, Michael

Average of ratings: -

Re: Slideshow module again

Fixed it and pushed to the repo. Thanks for your input.

Average of ratings: -

Re: Slideshow module again

> "The thumbnails in the caption page will be displayed only if the file is really ending with the extension jpg. I haven't been able to reproduce this, .png and .jpg files both show the thumbnail on the caption editing page for me."

Once again, the thumbnails in the caption page don't appear, because the extension ".jpg" is set generally. Also, the filename is always displayed with the extension ".jpg", regardless of what it actually is. This is the case on my local system as well as on the test server. So far, I haven't found the cause.

Average of ratings: -

Re: Slideshow module again

You might be running an outdated version; I fixed this a few revisions ago and it's currently working for me with png images (see attached image). Here is the latest version.

Average of ratings: -

Re: Slideshow module again

In another slideshow the displayed extension is always ".PNG". Some images have this extension, but not all; the images have different extensions. The position of the image (position 1, 2, 3, ...) showing the wrong extension differs; it is not always the same.

Average of ratings: -

Re: Slideshow module again

Do you mean that the images with a different extension aren't displaying? Were they displayed in previous versions (Paul Vaughan's or James Barrett's)? I think the module expects all images to be the same format; allowing multiple formats might be quite a big change. I'll have a look tomorrow.
Average of ratings: -

Re: Slideshow module again

Thumbnails/images with different extensions in the same slideshow are not displayed in the caption page. In the slideshow modules (for Moodle 1.x) from James Barrett, the thumbnails in the caption page were produced from the database information (slideshow_captions --> image) together with the hard-coded html extension ".jpg". This worked as long as only the jpg file extension was accepted. In the version for Moodle 2.x there weren't thumbnails on the caption page. If you can solve it, it would be a fine thing.

In my version I changed the database record in the table slideshow_comments, in the field image. If it is a new slideshow, the filename is written with its extension. If it is an old slideshow, the thumbnails in the caption page are also displayed. Whether the change is applied to the database depends on the server. Maybe it could also be a solution for you.

Average of ratings: -
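For anyone wanting to sanity-check the case-insensitive extension pattern outside Moodle, here is the same idea sketched in Python (the filenames are made up; the real fix lives in lib.php):

```python
import re

# Same idea as the lib.php fix: match .jpg/.jpeg/.gif/.png regardless of case.
IMAGE_RE = re.compile(r"\.(jpe?g|gif|png)$", re.IGNORECASE)

names = ["photo.JPG", "diagram.png", "scan.JPEG", "icon.Gif", "notes.txt"]
matches = [n for n in names if IMAGE_RE.search(n)]
print(matches)  # ['photo.JPG', 'diagram.png', 'scan.JPEG', 'icon.Gif']
```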
https://cosmocoffee.info/viewtopic.php?f=2&t=64
[astro-ph/0409513] HEALPix -- a Framework for High Resolution Discretization, and Fast Analysis of Data Distributed on the Sphere Authors: K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, M. Bartelman Abstract: HEALPix -- the Hierarchical Equal Area iso-Latitude Pixelization -- is a versatile data structure with an associated library of computational algorithms and visualization software that supports fast scientific applications executable directly on very large volumes of astronomical data and large area surveys in the form of discretized spherical maps. Originally developed to address the data processing and analysis needs of the present generation of cosmic microwave background (CMB) experiments (e.g. BOOMERanG, WMAP), HEALPix can be expanded to meet many of the profound challenges that will arise in confrontation with the observational output of future missions and experiments, including e.g. Planck, Herschel, SAFIR, and the Beyond Einstein CMB polarization probe. In this paper we consider the requirements and constraints to be met in order to implement a sufficient framework for the efficient discretization and fast analysis/synthesis of functions defined on the sphere, and summarise how they are satisfied by HEALPix. Antony Lewis Posts: 1618 Joined: September 23 2004 Affiliation: University of Sussex Contact: [astro-ph/0409513] HEALPix -- a Framework for High Resolution This paper reviews the Healpix pixelisation scheme. Obviously Healpix is an immensely useful package and a large volume of high quality software is already written. 1). There is one significant disadvantage of Healpix relative to igloo pixelization schemes; namely, that pixel numbers in Healpix are restricted to the form $12 \times n^2$ where n is a power of two, i.e. if you want to increase the resolution you have to do so by a factor of four.
This is quite a large factor both in memory and computer time (especially for inverses etc scaling as $n^6$). By contrast an igloo scheme can be devised in many ways to give a number of pixels close to any number you like. This may be quite relevant for Planck where n=1024 is not really enough, but n=2048 is getting on for overkill. 2). The equidistant cylindrical projection oversamples near the poles, and is perhaps justly criticised for this reason. However the transform time (for a given resolution at the equator) is virtually the same as with Healpix because the oversampling is in the $\phi$ direction which is a fast FFT that takes a small fraction of the computing time. An advantage of oversampling near the poles is that this is precisely where the Healpix pixelisation error is worst: i.e. the cylindrical projection is to some extent putting more pixels precisely where you need them to improve the Healpix transform but at no extra computational cost. Of course the downside is that the total number of pixels is much larger with a subsequent cost in terms of memory and further processing. David Larson Posts: 11 Joined: September 25 2004 Affiliation: Johns Hopkins University There is a little bit more flexibility in the number of pixels than may immediately be apparent. The number of main (base) tiles does not have to be 12. Examples are given in the paper where it can be 6, 8, 9, 12, 16, or 20. It looks like any product of two small numbers is available. Using a different number of base tiles would mean writing new code, though. Also, N_side doesn't strictly have to be a power of two. If you're willing to give up hierarchical ordering, it can be any (positive) integer. The ring ordering scheme should still allow you to store the pixels in a fits file. If you don't want to give up hierarchical ordering, N_side can be a power of any integer.
For example, instead of increasing by a factor of 4 with each change in resolution, you can increase by a factor of 3^2 = 9, or 5^2 = 25. It is also possible to increase by a different factor at each change in resolution. That would mean any product of integers would be a valid value of N_side. While this may be more complication than people want to deal with (or write the code for), it does give finer control over the number of pixels used in the sphere. Antony Lewis Posts: 1618 Joined: September 23 2004 Affiliation: University of Sussex Contact: Healpix I agree entirely with this: so in principle a wide variety of pixel densities could be obtained with this kind of scheme. However the Healpix software does not allow this kind of flexibility at the moment - the factor 4 scaling is hard coded throughout the code. Krzysztof M. Gorski Posts: 2 Joined: September 30 2004 Affiliation: JPL/Caltech David Larson already partially replied to point 1) raised by Antony Lewis. I can add, however, that there is one more degree of flexibility in HEALPix. The resolution parameter n_side can actually be changed nearly arbitrarily. For example, rather than going up a factor of 2 from 1024 to 2048, one can work with n_side=1234, or anything else, if that is desired. This would allow a great degree of resolution manipulation. Of course, this only applies to "theoretical" aspects of work. By this I mean that if one chooses their preferred n_side and proceeds to generate sky maps with synfast, and analyze them with anafast, or other utilities (except for those which REQUIRE hierarchical tree structure of the maps) there is no obstacle posed by HEALPix; the software can be used, or slightly modified, to deal with such applications. However, this degree of flexibility does not obtain when confronting a data set which has already been made. The arguments for stepping n_side by factor of 2 were explained in HEALPix documentation and papers.
The motivation was to support specifically the hierarchically structured data sets (as for dealing with multi-resolution CMB data sets). If anyone would like to convince the makers of the large angular resolution full sky maps based on experimental data that hierarchical tree structure thereof is unnecessary, HEALPix still can provide the data model and software ready to deal with that. On Antony's second point: Again, we attempted to discuss those issues before in what was written about HEALPix. It is indeed very well known that the geographic, or ECP, grids, or point-sets (from quadrature point of view) allow better error behaviour than both HEALPix and Igloo, at roughly the same CPU time consumption level, for the price of large oversampling near the poles. Nothing new here. But, "You can't both have your cake and eat it!" HEALPix was built to meet simultaneously the requirements of hierarchical structure (based on preferably low number of base resolution elements - with flexibility allowed as discussed in the recent paper), equal area pixels, and iso-latitude distribution of pixel centers. And - what you see is what you get ... If these requirements are not desirable, some relaxation within HEALPix data structure and software is possible. Regarding the statement "The equidistant cylindrical projection oversamples near the poles, and is perhaps justly criticised for this reason." I'd like to add that our own critiques were merely repetitions of those expressed on numerous occasions in literature by both the climate and geophysics researchers, who would often state the obvious, namely that the mathematical structure forces super-resolution near the poles, but the data acquisition usually occurs with fixed resolution instruments, as indeed the CMB experiments work. Again, this was a principal driver for equal area pixels of HEALPix.
Krzysztof M. Gorski Posts: 2 Joined: September 30 2004 Affiliation: JPL/Caltech On Antony's second msg: The hard coding of "4" is easily lifted, if so desired. Most important, THIS IS POSSIBLE with HEALPix. We will be happy to consider this an example of "User Feedback" and implement it in the next release of HEALPix. Some broader expression of desirability would be good though.
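The pixel-count arithmetic in the thread above can be sketched in a few lines (an illustrative snippet, not part of the HEALPix library; the function name `npix` and the sample resolutions are my own):

```python
# Sketch (not from the thread): pixel counts for the generalized scheme,
# N_pix = n_base * N_side^2, with 12 base tiles as the HEALPix default.
def npix(n_side, n_base=12):
    return n_base * n_side ** 2

# Stepping N_side by factors of 2 multiplies the pixel count by 4:
print(npix(1024))   # 12582912
print(npix(2048))   # 50331648

# An intermediate resolution, as Gorski suggests, is equally well defined:
print(npix(1234))   # 18273072, between the two standard resolutions

# Other base tilings mentioned in the paper:
for n_base in (6, 8, 9, 12, 16, 20):
    print(n_base, npix(256, n_base))
```

The factor-of-4 jump between standard resolutions is exactly the memory/CPU step Antony Lewis describes; intermediate `n_side` values avoid it at the cost of the hierarchical tree structure.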
2020-08-08 18:13:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6408488750457764, "perplexity": 1535.864740158204}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00069.warc.gz"}
https://dimag.ibs.re.kr/event/2022-09-06/
# Bjarne Schülke, A local version of Katona's intersection theorem ## September 6 Tuesday @ 4:30 PM - 5:30 PM KST Room B332, IBS (기초과학연구원) Katona's intersection theorem states that every intersecting family $\mathcal F\subseteq[n]^{(k)}$ satisfies $\vert\partial\mathcal F\vert\geq\vert\mathcal F\vert$, where $\partial\mathcal F=\{F\setminus x:x\in F\in\mathcal F\}$ is the shadow of $\mathcal F$. Frankl conjectured that for $n>2k$ and every intersecting family $\mathcal F\subseteq [n]^{(k)}$, there is some $i\in[n]$ such that $\vert \partial \mathcal F(i)\vert\geq \vert\mathcal F(i)\vert$, where $\mathcal F(i)=\{F\setminus i:i\in F\in\mathcal F\}$ is the link of $\mathcal F$ at $i$. Here, we prove this conjecture in a very strong form for $n> \binom{k+1}{2}$. In particular, our result implies that for any $j\in[k]$, there is a $j$-set $\{a_1,\dots,a_j\}\in[n]^{(j)}$ such that $\vert \partial \mathcal F(a_1,\dots,a_j)\vert\geq \vert\mathcal F(a_1,\dots,a_j)\vert.$ A similar statement is also obtained for cross-intersecting families. ## Details Date: September 6 Tuesday Time: 4:30 PM - 5:30 PM KST Event Category: Event Tags: Room B332
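The inequality $\vert\partial\mathcal F\vert\geq\vert\mathcal F\vert$ from the abstract can be checked exhaustively for a toy case (an illustration only, not from the talk; the variable names are my own):

```python
from itertools import combinations, product

# Exhaustive check of |∂F| >= |F| for every intersecting family
# F ⊆ [n]^(k) in the toy case n = 5, k = 2.
n, k = 5, 2
ksets = [frozenset(c) for c in combinations(range(1, n + 1), k)]

checked = 0
for bits in product((0, 1), repeat=len(ksets)):
    fam = [s for s, b in zip(ksets, bits) if b]
    if fam and all(a & b for a in fam for b in fam):   # pairwise intersecting
        shadow = {s - {x} for s in fam for x in s}      # ∂F = {F \ x}
        assert len(shadow) >= len(fam)                  # Katona's inequality
        checked += 1
print(checked, "intersecting families checked")
```

For $k=2$ every intersecting family is a star or a triangle, which is why the check passes instantly; the theorem's content is of course the general $n,k$ case.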
2022-12-08 03:06:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926094651222229, "perplexity": 726.5522038797162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00382.warc.gz"}
http://www.sceneadvisor.com/Texas/minimum-square-error.html
# minimum square error Absolute error in the sense of "non-squared L2 distance between points" does not work that way, but is ok with orthogonal re-parameterizations. John Wiley & Sons. The generalization of this idea to non-stationary cases gives rise to the Kalman filter. Noting that the n equations in the m variables in our data comprise an overdetermined system with one unknown and n equations, we may choose to estimate k using least squares. I see - FWIW I do think the post is slightly misleading, in that it becomes untrue if you use the transformation $Y_1 = X_1 + X_2$, $Y_2 = X_1 - X_2$. Lehmann, E. L.; Casella, G. (1998). "Chapter 4". p. 60. Alternative form: An alternative form of expression can be obtained by using the matrix identity $C_X A^T (A C_X A^T + C_Z)^{-1}$. As a consequence, to find the MMSE estimator, it is sufficient to find the linear MMSE estimator. A point I emphasize is minimizing square-error (while not obviously natural) gets expected values right. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. For a Gaussian distribution this is the best unbiased estimator (that is, it has the lowest MSE among all unbiased estimators), but not, say, for a uniform distribution. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an error $z_1$ with … For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when … This can happen when $y$ is a wide sense stationary process.
This approach was notably used by Tobias Mayer while studying the librations of the moon in 1750, and by Pierre-Simon Laplace in his work in explaining the differences in motion of Jupiter and Saturn. New York: Springer. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. Some feature selection techniques are developed based on the LASSO, including Bolasso which bootstraps samples,[12] and FeaLect which analyzes the regression coefficients corresponding to different values of $\alpha$. Levinson recursion is a fast method when $C_Y$ is also a Toeplitz matrix. Main article: Tikhonov regularization. In some contexts … Weighted least squares (see also: Weighted mean; Linear least squares (mathematics) §Weighted linear least squares): A special case of generalized least squares called weighted least squares occurs when all the … This means that the squared error is independent of re-parameterizations: for instance, if you define $\vec Y_1 = (X_1 + X_2, X_1 - X_2)$, then the minimum-squared-deviance estimators for $Y$ and … L. (1968). Neither part of it seems true to me (and the claims seem somewhat unrelated). In such a case, the MMSE estimator is given by the posterior mean of the parameter to be estimated.
Journal of the American Statistical Association. 103 (482): 681–686. Regression for fitting a "true relationship". For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. These methods bypass the need for covariance matrices. Since the matrix $C_Y$ is a symmetric positive definite matrix, $W$ can be solved twice as fast with the Cholesky decomposition, while for large … Regularized versions: In fact the Euclidean inner product is in some sense the "only possible" axis-independent inner product in a finite-dimensional vector space, which means that the squared error has uniquely nice geometric … However, a biased estimator may have lower MSE; see estimator bias. Examples: Example 1. We shall take a linear prediction problem as an example. Lasso method: An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that $\|\beta\|$, the L1-norm … The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. ISBN 3-540-25674-1. The autocorrelation matrix $C_Y$ is defined as the matrix with entries $E[z_j, z_k]$ … In NLLSQ non-convergence (failure of the algorithm to find a minimum) is a common phenomenon whereas the LLSQ is globally convex so non-convergence is not an issue.
Linear MMSE estimator for linear observation process: Let us further model the underlying process of observation as a linear process: $y = Ax + z$, where $A$ … MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. Then, the MSE is given by \begin{align} h(a)&=E[(X-a)^2]\\ &=EX^2-2aEX+a^2. \end{align} This is a quadratic function of $a$, and we can find the minimizing value of $a$ by differentiation: \begin{align} h'(a)=-2EX+2a. \end{align} email will only be used for the most wholesome purposes. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. $\min_{W,b} \mathrm{MSE} \qquad \text{s.t.} \qquad \hat{x}=Wy+b.$ One advantage of such a linear MMSE estimator is … Depending on context it will be clear if $1$ represents a scalar or a vector. Do you mean interpreting Tikhonov regularization as placing a Gaussian prior on the coefficients? Note also, \begin{align} \textrm{Cov}(X,Y)&=\textrm{Cov}(X,X+W)\\ &=\textrm{Cov}(X,X)+\textrm{Cov}(X,W)\\ &=\textrm{Var}(X)=1. \end{align} Therefore, \begin{align} \rho(X,Y)&=\frac{\textrm{Cov}(X,Y)}{\sigma_X \sigma_Y}\\ &=\frac{1}{1 \cdot \sqrt{2}}=\frac{1}{\sqrt{2}}. \end{align} The MMSE estimator of $X$ given $Y$ is \begin{align} \hat{X}_M&=E[X|Y]\\ &=\mu_X+ \rho \sigma_X \frac{Y-\mu_Y}{\sigma_Y}\\ &=\frac{Y}{2}. \end{align} so that $\frac{(n-1)S_{n-1}^{2}}{\sigma^{2}}\sim \chi_{n-1}^{2}$. Optimization by Vector Space Methods (1st ed.). Lehmann, E. L.; Casella, George (1998). ISBN 978-0-387-84858-7. ^ Bühlmann, Peter; van de Geer, Sara (2011). A data point may consist of more than one independent variable.
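The worked example above ($X, W$ standard normal and independent, $Y = X + W$, $\hat{X}_M = Y/2$) can be verified by simulation (a sketch; the sample size and seed are arbitrary choices of mine):

```python
import numpy as np

# Monte-Carlo check of the worked example: X ~ N(0,1), W ~ N(0,1)
# independent, Y = X + W, so E[X|Y] = Y/2 with minimum MSE 1/2.
rng = np.random.default_rng(0)
N = 500_000
X = rng.standard_normal(N)
Y = X + rng.standard_normal(N)

def mse(a):
    return np.mean((X - a * Y) ** 2)

# Theoretical minimum: Var(X) - Cov(X,Y)^2/Var(Y) = 1 - 1/2 = 0.5 at a = 1/2.
print(round(mse(0.5), 2))                          # ≈ 0.5
print(mse(0.5) < mse(0.4), mse(0.5) < mse(0.6))    # True True
```

Any other linear coefficient (0.4 or 0.6 here) gives a strictly larger empirical MSE, matching the $\hat{X}_M = Y/2$ derivation.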
2019-01-24 04:38:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526296854019165, "perplexity": 1034.6914089793715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584518983.95/warc/CC-MAIN-20190124035411-20190124061411-00140.warc.gz"}
https://physics.stackexchange.com/questions/645249/galaxies-negative-pressure-positive-feedback
# Galaxies: Negative pressure, positive feedback As part of research trying to develop a cosmological model that can be static (scale-factor), but in dynamic equilibrium - a way is being considered whereby dense collapsing regions of matter 'bounce' or form jets such as those in Active Galactic Nuclei. The background thinking and a few relevant links are at the bottom, but basically, the reasoning is this. The singularity in a black hole seems unphysical, a state of infinite density - and something should happen to prevent it from forming. An example of an infinite quantity hinting at different physics is the ultraviolet catastrophe. Black holes are said to form because the pressure resisting the collapse adds to the gravity - a kind of positive feedback between positive pressure and gravity. High gravity requires high pressure to resist the collapse, but the pressure adds to the gravity, attracting the matter more strongly and requiring higher pressure, etc... The density of active gravitational mass according to General Relativity is $$\rho +\frac{3p}{c^2}\tag 1$$ and this expression shows that positive pressure adds to the active gravitational mass. The question is: Can there be a situation whereby negative pressure causes a decrease in gravity, that allows matter to escape from a dense region, causing more negative pressure etc... this time positive feedback for negative pressure? Are there any models like that, presumably for galactic nuclei, that have already been developed? In diagram 1, a dense region of matter (blue circles) is resisting collapse due to pressure (green arrows). The pressure is positive at the outside circle but at the middle circle, since the pressure is acting both inwards and outwards, some inward pressure arrows have been put on. Diagram 2: The region of matter inside the inner circle of diagram 1 has now escaped, due to the high pressure and possibly some asymmetry or disturbance such as might occur if a star collided with a galactic nucleus.
The red arrows show the direction of the escaping matter. Now we are left with the green arrows on diagram 2, i.e. negative pressure and less outward positive pressure at that circle. According to formula 1) the active gravitational mass of matter within that circle is reduced, but there is still the high pressure that would speed up the matter, which would then be ejected also in the direction of the red arrows. So in this way could a negative pressure - positive feedback cycle occur in a galactic nucleus? When enough matter has been ejected, the pressure would become negligible and the situation would then revert to the usual situation, where positive pressure is resisting the gravitational collapse. Links to other work/questions I have done on this: 1. Does General Relativity allow a reduction in the strength of gravity? 2. Cosmology - an expansion of all length scales 3. John Hunter, A New Solution of the Friedman Equations, https://vixra.org/abs/2006.0209 • This doesn't answer the specific question you asked, but if the motive is finding a mechanism that prevents singularities, then why restrict the scope to classical physics? Classical general relativity is an approximation, and the singularities it predicts are expected to be mere artifacts of the approximation. Even in a semiclassical model, where gravity is still treated classically and everything else is quantum, quantum effects can already change the picture substantially (see arXiv:1912.06047). Jun 19, 2021 at 13:33 • Yes, there seems to be a problem with singularities - true, but also the motivation is to complete a cosmological model that is static (in scale-factor) but with a redshift. That's link 2) at the bottom of the question - it predicts a matter density between 0.25 and 0.33. Being apparently static, but with a redshift, it's also required that collapsing matter 'bounce' and this may include the Big Bang itself. Early attempts were made in link 1).
This question is to do with finding a way in which a static universe (with a redshift) can be in dynamic equilibrium. Jun 19, 2021 at 14:39 There seems to be confusion about pressure here. The diagrams portray pressure as a vector, but it is a component of a second-order tensor -- or in the case of isotropic fluids, simply a scalar, as you are denoting $$p$$. There is no distinction between "outward" and "inward" pressure; they are the same thing. To be clear: Vacuum doesn't suck (well, except perhaps a tiny amount due to cosmological dark energy). The intuition that it does is based on Earth conditions where the baseline is ambient atmospheric pressure. But the $$p$$ in the gravitational formulas is absolute pressure.
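The sign behaviour of expression (1) from the question, $\rho + 3p/c^2$, can be sketched numerically: positive pressure adds to the active gravitational mass, and the sign flips once $p < -\rho c^2/3$. (A sketch only; the density value is an arbitrary choice of mine, and nothing here models an actual galactic nucleus.)

```python
c = 299_792_458.0  # speed of light, m/s

def active_density(rho, p):
    """Active gravitational mass density of expression (1): rho + 3p/c^2."""
    return rho + 3.0 * p / c ** 2

rho = 1.0e-17  # kg/m^3 -- an arbitrary illustrative density
print(active_density(rho, 0.0) == rho)                # True: pressure-free dust
print(active_density(rho, 0.1 * rho * c ** 2) > rho)  # True: positive p adds gravity
print(active_density(rho, -rho * c ** 2) < 0.0)       # True: p < -rho c^2/3 flips the sign
```

The last line ($p = -\rho c^2$, i.e. $w = -1$) is the dark-energy-like regime in which the active gravitational mass density goes negative.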
2022-07-03 09:40:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7767903804779053, "perplexity": 441.32539328186743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215805.66/warc/CC-MAIN-20220703073750-20220703103750-00399.warc.gz"}
https://www.physicsforums.com/threads/are-unitary-transformations-always-linear.415271/
# Are Unitary Transformations Always Linear? 1. Jul 10, 2010 ### lunde Hello, I had a question regarding unitary transformations. The most common definition I see defines a unitary transformation as a transformation between Hilbert spaces that preserves inner products. I was wondering if all unitary transformations between Hilbert spaces (according to this definition) are necessarily bounded linear transformations. (i.e. $$U( \alpha x + \beta y ) = \alpha U x + \beta U y$$ and $$U \in \mathcal{L} (H_1 , H_2)$$.) I have been trying to prove this to myself for the last hour but can't seem to show this for some reason. 2. Jul 10, 2010 ### zpconn Try showing that the quantity $$|U(x + y) - U(x) - U(y)|^2$$ is zero by writing it as an inner product, expanding, and finally using the preservation of the inner product by U. 3. Jul 10, 2010 ### element4 I think it follows from the linearity of the inner product. One might try to calculate $$\langle U( \alpha x + \beta y ) - \alpha U x - \beta U y,\, U( \alpha x + \beta y ) - \alpha U x - \beta U y\rangle,$$ using the axioms of the inner product. If it gives zero, you are home. 4. Jul 10, 2010 ### lunde Thanks. That's a cool way to show this, and then since it's an isometry it's bounded, great. 5. Jul 10, 2010 ### Landau Preserving inner product is equivalent to being an isometry, and this implies boundedness and linearity. However, unitary transformations are also (by definition) required to be surjective, or at least have dense range. 6. Jul 10, 2010 ### lunde How can you show that all surjective isometries between Hilbert spaces are linear? 7. Jul 10, 2010 ### Landau Surjectivity is not needed for linearity. Every isometry between inner product spaces is linear, as follows from showing that the quantity which element4 wrote equals zero.
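zpconn's expansion can be illustrated numerically (a sketch of my own, not from the thread; the concrete map is multiplication by a unitary matrix, used only as a convenient inner-product-preserving map — the point is that the expansion uses nothing but the preserved inner products):

```python
import numpy as np

# f(v) = Qv with Q unitary preserves inner products; expand
# |f(x+y) - f(x) - f(y)|^2 into nine terms ±<f(u), f(v)> = ±<u, v>,
# whose sum is |(x+y) - x - y|^2 = 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                 # Q is unitary
f = lambda v: Q @ v
ip = lambda u, v: np.vdot(u, v)        # complex inner product (conjugates u)

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

direct = np.linalg.norm(f(x + y) - f(x) - f(y)) ** 2
expansion = sum(su * sv * ip(u, v)     # uses only the *original* inner products
                for su, u in ((1, x + y), (-1, x), (-1, y))
                for sv, v in ((1, x + y), (-1, x), (-1, y)))
print(abs(direct) < 1e-10, abs(expansion) < 1e-10)   # True True
```

Both quantities vanish (to rounding): the direct norm because $Q$ happens to be linear, the expansion because each $\langle f(u), f(v)\rangle$ equals $\langle u, v\rangle$ and those sum to $\|(x+y)-x-y\|^2 = 0$ — which is the content of the proof.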
2018-03-20 12:37:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255066514015198, "perplexity": 476.63706796929273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00283.warc.gz"}
https://www.proofwiki.org/wiki/Conjugacy_Class_of_Element_of_Center_is_Singleton
# Conjugacy Class of Element of Center is Singleton

## Theorem

Let $G$ be a group. Let $\map Z G$ denote the center of $G$. The elements of $\map Z G$ form singleton conjugacy classes, and the elements of $G \setminus \map Z G$ belong to multi-element conjugacy classes.

### Corollary

The number of single-element conjugacy classes of $G$ is the order of $\map Z G$ and divides $\order G$.

## Proof

Let $\conjclass a$ be the conjugacy class of $a$ in $G$. Then:

$\ds a \in \map Z G$
$\ds \leadstoandfrom \quad \forall x \in G: x a = a x$
$\ds \leadstoandfrom \quad \forall x \in G: x a x^{-1} = a$
$\ds \leadstoandfrom \quad \conjclass a = \set a$

$\blacksquare$
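The theorem can be illustrated in a small concrete group, e.g. $S_3$ (a sketch of my own, not part of the ProofWiki page; helper names are hypothetical):

```python
from itertools import permutations

# G = S_3 with permutations as tuples; (p∘q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
center = {g for g in G if all(compose(g, x) == compose(x, g) for x in G)}

def conj_class(a):
    return {compose(compose(x, a), inverse(x)) for x in G}   # {x a x^{-1} : x ∈ G}

# a ∈ Z(G)  ⇔  the conjugacy class of a is the singleton {a}:
for g in G:
    assert (len(conj_class(g)) == 1) == (g in center)
print(sorted(len(conj_class(g)) for g in G))   # [1, 2, 2, 3, 3, 3]
```

Here $\map Z {S_3}$ is trivial, so only the identity has a singleton class; the transpositions and 3-cycles land in multi-element classes, as the theorem requires.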
2022-07-06 12:13:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348623752593994, "perplexity": 110.29204115190925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104672585.89/warc/CC-MAIN-20220706121103-20220706151103-00135.warc.gz"}
https://www.cheenta.com/measure-of-angle-prmo-2018-problem-no-29/
What is the NO-SHORTCUT approach for learning great Mathematics? # How to Pursue Mathematics after High School? For Students who are passionate about Mathematics and want to pursue it for higher studies in India and abroad. Try this beautiful Trigonometry Problem based on Measure of Angle from PRMO-2018. ## Measure of Angle - PRMO 2018 - Problem 29 Let $D$ be an interior point of the side $B C$ of a triangle ABC. Let $l_{1}$ and $l_{2}$ be the incentres of triangles $A B D$ and $A C D$ respectively. Let $A l_{1}$ and $A l_{2}$ meet $B C$ in $E$ and $F$ respectively. If $\angle B l_{1} E=60^{\circ},$ what is the measure of $\angle C l_{2} F$ in degrees? • $25$ • $20$ • $35$ • $30$ • $45$ Trigonometry Triangle Angle ## Suggested Book | Source | Answer Pre College Mathematics #### Source of the problem PRMO-2018, Problem 29 #### Check the answer here, but try the problem first $30$ ## Try with Hints #### First Hint According to the question, first we draw the picture. We have to find out the value of $\angle C l_{2} F$. Now find $\angle AED$ and $\angle AFD$, which are the exterior angles of $\triangle B E l_1$ and $\triangle C l_2 F$, and recall that the angles of a triangle sum to $180^{\circ}$. Now can you finish the problem? #### Second Hint $\angle E A D+\angle F A D=\angle E A F=\frac{A}{2}$, $\angle A E D=60^{\circ}+\frac{B}{2}$, $\angle A F D=\theta+\frac{C}{2}$. Therefore, in $\triangle A E F$: $\frac{A}{2}+60^{\circ}+\frac{B}{2}+\theta+\frac{C}{2}=180^{\circ}$, so $90^{\circ}+60^{\circ}+\theta=180^{\circ}$ (as the sum of the angles of a triangle is $180^{\circ}$). Therefore $\theta=30^{\circ}$. ## What to do to shape your Career in Mathematics after 12th? From the video below, let's learn from Dr. Ashani Dasgupta (a Ph.D. in Mathematics from the University of Wisconsin-Milwaukee and Founder-Faculty of Cheenta) how you can shape your career in Mathematics and pursue it after 12th in India and Abroad.
https://mne.tools/stable/generated/mne.decoding.TimeDelayingRidge.html
# mne.decoding.TimeDelayingRidge

class mne.decoding.TimeDelayingRidge(tmin, tmax, sfreq, alpha=0.0, reg_type='ridge', fit_intercept=True, n_jobs=1, edge_correction=True)[source]

Ridge regression of data with time delays.

Parameters

- tmin: The starting lag, in seconds (or samples if sfreq == 1). Negative values correspond to times in the past.
- tmax: The ending lag, in seconds (or samples if sfreq == 1). Positive values correspond to times in the future. Must be >= tmin.
- sfreq (float): The sampling frequency used to convert times into samples.
- alpha (float): The ridge (or laplacian) regularization factor.
- reg_type: Can be "ridge" (default) or "laplacian". Can also be a 2-element list specifying how to regularize in time and across adjacent features.
- fit_intercept (bool): If True (default), the sample mean is removed before fitting.
- n_jobs: The number of jobs to use. Can be an int (default 1) or 'cuda'. New in version 0.18.
- edge_correction (bool): If True (default), correct the autocorrelation coefficients for non-zero delays for the fact that fewer samples are available. Disabling this speeds up performance at the cost of accuracy depending on the relationship between epoch length and model duration. Only used if estimator is float or None. New in version 0.18.

Notes

This class is meant to be used with mne.decoding.ReceptiveField by only implicitly doing the time delaying. For reasonable receptive field and input signal sizes, it should be more CPU and memory efficient by using frequency-domain methods (FFTs) to compute the auto- and cross-correlations.

Methods

- __hash__(self, /): Return hash(self).
- fit(self, X, y): Estimate the coefficients of the linear model.
- get_params(self[, deep]): Get parameters for this estimator.
- predict(self, X): Predict the output.
- set_params(self, **params): Set the parameters of this estimator.

fit(self, X, y)[source]

Estimate the coefficients of the linear model.
Parameters

- X (array, shape (n_samples[, n_epochs], n_features)): The training input samples to estimate the linear coefficients.
- y (array, shape (n_samples[, n_epochs], n_outputs)): The target values.

Returns

- self (instance of TimeDelayingRidge): Returns the modified instance.

get_params(self, deep=True)[source]

Get parameters for this estimator.

Parameters

- deep (bool, optional): If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

- params (mapping of str to any): Parameter names mapped to their values.

predict(self, X)[source]

Predict the output.

Parameters

- X (array, shape (n_samples[, n_epochs], n_features)): The data.

Returns

- X (ndarray): The predicted response.

set_params(self, **params)[source]

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns

- self
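To illustrate what a time-delayed ridge fit computes, here is a rough NumPy sketch, building the lagged design matrix explicitly and solving the ridge normal equations. This is not mne's implementation (which works in the frequency domain, as the Notes say); it is only meant to show the underlying model:

```python
import numpy as np

def lagged_design(x, lags):
    """Build a time-delayed design matrix: one block of columns per lag."""
    n = x.shape[0]
    cols = []
    for lag in lags:
        shifted = np.zeros_like(x)
        if lag >= 0:
            shifted[lag:] = x[:n - lag]      # past values (positive lag)
        else:
            shifted[:n + lag] = x[-lag:]     # future values (negative lag)
        cols.append(shifted)
    return np.hstack(cols)

def ridge_fit(X, y, alpha):
    """Solve the ridge normal equations (X'X + alpha*I) w = X'y."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

# Recover a known 3-tap kernel from noiseless data.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
lags = [0, 1, 2]                       # analogous to tmin=0, tmax=2, sfreq=1
w_true = np.array([1.0, -0.5, 0.25])
X = lagged_design(x, lags)
y = X @ w_true
w_hat = ridge_fit(X, y, alpha=1e-8)
```

With a tiny `alpha` and noiseless data, `w_hat` recovers the true kernel almost exactly; a larger `alpha` shrinks the coefficients toward zero.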
https://www.bookofproofs.org/branches/set-partition/
Definition: Set Partition

Let $X$ be a non-empty set. A partition of $X$ is a set $P$ of non-empty subsets of $X$ which are mutually disjoint and whose union is $X$.

created: 2019-09-08 10:17:36 | modified: 2019-09-08 10:22:50 | by: bookofproofs | references: [983]
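The definition can be read directly as an executable check. This is an illustrative sketch (not part of the source): the three conditions tested are exactly non-emptiness of the blocks, pairwise disjointness, and covering of $X$.

```python
from itertools import combinations

def is_partition(P, X):
    """True iff P is a set of non-empty, mutually disjoint subsets of X
    whose union is X."""
    blocks = [set(b) for b in P]
    if any(not b for b in blocks):                       # blocks non-empty
        return False
    if any(a & b for a, b in combinations(blocks, 2)):   # mutually disjoint
        return False
    union = set().union(*blocks) if blocks else set()
    return union == set(X)                               # blocks cover X
```

For example, `is_partition([{1, 2}, {3}], {1, 2, 3})` holds, while `[{1, 2}, {2, 3}]` fails disjointness.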
https://www.gamedev.net/forums/topic/184505-smallest-rect-to-contain-x-amount-of-rects/
# Smallest rect to contain X amount of rects!?

Okay, I didn't really know what to name this thread, and didn't really know where to put it. Well, I need to optimize the memory usage! What I need to do is figure out the smallest possible rect to store X amount of other rects. What I mean is:

Example: If I have these rects:

```
HHHHH   GG   BBB   DDD
HHHHH   GG   BBB   DDD
        GG   BBB
        GG
```

I could make them into one square that would look like:

```
HHHHHGGBBB
HHHHHGGBBB
DDD  GGBBB
DDD  GG
```

Or...

```
HHHHH
HHHHH
GGDDD
GGDDD
GGBBB
GGBBB
  BBB
```

The second one would in this case be optimal (I think!?)

Anyway, that's the idea of it. There is one (or maybe many alike) optimal way to do this, and I know a computer can do it. You just need an algorithm that checks every possibility to see which is smallest. Can anyone help me out!?

/MindWipe

##### Share on other sites

I recently wrote some code to do this, although it's kind of simplistic so it's not nearly as optimized as it could be...

http://www.blackpawn.com/texts/lightmaps/default.html

I haven't really had time to read it in depth but it looks like it's what you're looking for

Raj

##### Share on other sites

> Original post by RajanSky: I recently wrote some code to do this... http://www.blackpawn.com/texts/lightmaps/default.html ...

Thanks, but that didn't sort them in an optimal way. It was just an algorithm to add many rects to one rect. Anyone else got anything that might help?

/MindWipe

##### Share on other sites

What do you need it for? Why can't it be done with a linear array?
##### Share on other sites

> Original post by Anonymous Poster: What do you need it for? Why can't it be done with a linear array?

It's Java, J2ME, so it's very limited. I need the images compressed (it uses PNGs) and I need to use the inbuilt functions. Hence the problem :/

/MindWipe

##### Share on other sites

Here's how I'd approach it:

1) Start with (arbitrarily) the tallest and/or widest rectangle in the top left of your container rectangle... heck, just the first rectangle in your array/list would probably do just as well.

2) For each remaining rectangle, run through every combination of adding it immediately to a) the right and then b) the bottom of the previous rectangle.

3) Only switch to the bottom after you have processed all the remaining rectangles in the same way.

4) Upon placing the last rectangle in the container rectangle for each configuration, check if the configuration has produced fewer empty spaces than the minimal amount of empty spaces generated by any particular configuration so far.

5) Do this until you've either tried all configurations, or you find a configuration that yields no empty space.

Hope this helps!

-Glyph

Reading back through my answer, I realize that step 2 certainly won't produce optimal results... perhaps try playing around with adding it to a) the right, b) the bottom, or c) the top of the next empty column. I'm sure there are others I can't think of right now...

##### Share on other sites

I wonder if the optimal solution doesn't fall in the P != NP category, that is, nearly impossible to reach exactly. Reminds me of a scientific article in Scientific American. But not sure, I am not a real scientist.

##### Share on other sites

This is a classic knapsack problem, which I believe is NP-hard. A very common solution which yields good results is to start with the biggest object, and work your way smaller and smaller.

##### Share on other sites

Thank you all for the replies!
I think you can do it with the following algorithm: set the maximum height/width, then begin placing one rect going from top left to bottom right, so that it has been placed in every possible way. After you have placed that one, place another. You will also have to make a list of all possible orders. Then you do the same with this other one: try every possibility, and do the same thing with the next in line, etc., etc.

That WOULD solve it, and would be rather simple to write. But it could take a while to compute. Then again, I'm only talking 8-10 rects at once, so...

/MindWipe

##### Share on other sites

*shrug* I think the thing I wrote would work pretty fine in this case... it's not too fast but it gets the job done. Basically, as someone mentioned, you begin by sorting your objects by size, so you insert the largest one first, and so on... (Note that size isn't area, but rather the max of the rectangle's width and height.)

Another trick that works very well is to allow rotations. For example, if you attempt to add a rectangle and it fails, then rotate it 90 degrees and try again... then just set a flag that it has been rotated.

Then, the basic idea is that every time you want to add a rectangle you scan all pixels and say "can I insert it at this (x,y)?"... Now obviously, if you implement it the simple way and do a collision test with every rectangle, for every pixel, it's going to run like crap. But with some acceleration techniques it runs fine, assuming it's not used every frame or in any other speed-critical area, and the space taken up is minimized as far as I could tell. The solutions are probably no more than 5% away from optimal.

There is one other potential issue, depending on how you do things... This algorithm assumes that the rectangles are sorted from large to small. If your rectangle list is determined prior to running the algorithm, then you can sort it once and be done with it.
But if the interface to your code is that rectangles are added one at a time, and you don't know if they will come in any particular order, then you will have to recalculate the entire solution every time, so that you can add the new rectangle, sort the whole rect list, and then re-compute the solution. This can be expensive, though... So, what I did was simply allow new rects to be added without recomputing (so there's no guarantee it will be optimal)... but then, the next time a rectangle fails to be added, recompute the solution. Hopefully this will free up some space, so now the texture addition will work properly. If not, then expand the rectangle dimensions by some amount and recompute the solution for that area. In my case, since I am dealing with textures, I expand to the next power of two. So if I'm at 32x64 then I expand to 64x64.

Hope that wasn't too confusing. Here's a screenshot with an example... It's a tool for adding textures to a texture sheet, automatically keeping track of the UV coordinates. This was made without the rotation optimization; I decided to take it out because it still produces pretty good results, and I didn't want to waste lots of time on this.

Raj
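None of the posters gave code for the greedy "largest first" idea, so here is a rough sketch of one simple variant, shelf packing with tallest-first ordering, for a fixed bin width. This is my own illustration, not the exhaustive search MindWipe describes and not Raj's pixel-scanning packer; it is fast but not optimal:

```python
def shelf_pack(rects, bin_width):
    """Greedy shelf packing: sort tallest first, fill rows left to right.

    rects: list of (w, h) pairs. Returns (placements, used_height), where
    each placement is (x, y, w, h). Simple and fast, but not optimal.
    """
    x = y = shelf_h = 0
    placed = []
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        if x + w > bin_width:        # rect doesn't fit: start a new shelf
            y += shelf_h
            x, shelf_h = 0, 0
        placed.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)    # shelf is as tall as its tallest rect
    return placed, y + shelf_h

# The four rects from the original post: H(5x2), G(2x4), B(3x3), D(3x2).
placed, height = shelf_pack([(5, 2), (2, 4), (3, 3), (3, 2)], bin_width=5)
```

On the example from the first post this yields a 5x8 bin (40 cells), while the best hand-packing in the thread was 5x7 (35 cells): the heuristic is close but not optimal, which matches the "good results, not guaranteed optimal" theme of the replies.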
https://ask.sagemath.org/question/55724/trouble-creating-a-set-of-vectors/
# Trouble creating a set of vectors

I'm writing a script to compute the number of vectors with a given property. I would like it to work as the following does, adding a vector to the set at every cycle:

```python
T = []
for j in GF(8):
    c = (j)
    T.append(j)
    print(T)
```

That gives the following output:

```
[0]
[0, z3]
[0, z3, z3^2]
[0, z3, z3^2, z3 + 1]
[0, z3, z3^2, z3 + 1, z3^2 + z3]
[0, z3, z3^2, z3 + 1, z3^2 + z3, z3^2 + z3 + 1]
[0, z3, z3^2, z3 + 1, z3^2 + z3, z3^2 + z3 + 1, z3^2 + 1]
[0, z3, z3^2, z3 + 1, z3^2 + z3, z3^2 + z3 + 1, z3^2 + 1, 1]
```

(The only purpose of the print is to show how I want my set to be built.) However, the actual code does this:

```python
T = []
for j in GF(8):
    b = random_matrix(GF(8), 1, 3)
    b[0, 0] = j
    T.append(b)
    print(T)
```

```
[[ 0 z3^2 z3^2 z3 + 1]]
[[ z3 z3^2 z3^2 z3 + 1], [ z3 z3^2 z3^2 z3 + 1]]
[[ z3^2 z3^2 z3^2 z3 + 1], [ z3^2 z3^2 z3^2 z3 + 1], [ z3^2 z3^2 z3^2 z3 + 1]]
[[z3 + 1 z3^2 z3^2 z3 + 1], [z3 + 1 z3^2 z3^2 z3 + 1], [z3 + 1 z3^2 z3^2 z3 + 1], [z3 + 1 z3^2 z3^2 z3 + 1]]
[[z3^2 + z3 z3^2 z3^2 z3 + 1], [z3^2 + z3 z3^2 z3^2 z3 + 1], [z3^2 + z3 z3^2 z3^2 z3 + 1], [z3^2 + z3 z3^2 z3^2 z3 + 1], [z3^2 + z3 z3^2 z3^2 z3 + 1]]
[[z3^2 + z3 + 1 z3^2 z3^2 z3 + 1], [z3^2 + z3 + 1 z3^2 z3^2 z3 + 1], [z3^2 + z3 + 1 z3^2 z3^2 z3 + 1], [z3^2 + z3 + 1 z3^2 z3^2 z3 + 1], [z3^2 + z3 + 1 z3^2 z3^2 z3 + 1], [z3^2 + z3 + 1 z3^2 z3^2 z3 + 1]]
[[z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1], [z3^2 + 1 z3^2 z3^2 z3 + 1]]
[[ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1], [ 1 z3^2 z3^2 z3 + 1]]
```

So instead of adding the new vector, it turns the set into several copies of vectors all equal to the new one. What's the problem in my code? And how can I solve it?

> Hello, @Alain Ngalani! I don't seem to have this problem.
> Check this code. However, I am not 100% sure, because your code is wrongly indented. By the way, here are a few details you might be interested in. There is also a random_vector() function which seems to better fit your code. (2021-02-16 03:49:03 +0200)

> I should point out that your code is not recursive (in the sense of computer science), nor is it a method (in the sense of computer science again). Perhaps the question should be edited to reflect the situation more properly. Perhaps simply write "why my code to define a set of vectors doesn't work?" (2021-02-16 04:02:48 +0200)

> Indeed your code works; I made an error writing my wrong code. It should be this, which as you can see doesn't work well. (2021-02-16 12:21:26 +0200)

> I see... I have tried to modify your original question so that it reflects that fact. Unfortunately, Ask SageMath won't let me, for some unknown reason (I just get an "oops, something went wrong" message.) Would you be kind enough to make the change yourself? This question you pose is a very common issue, so somebody in the future will have the same problem, and can refer to your question and the answer I added. I hope this helps you clarify your doubts. Hopefully, it won't add new ones. (2021-02-16 19:43:12 +0200)

## Answer

Hello, @Alain Ngalani! Your problem comes from the fact that matrices (as defined by Sage) are mutable objects. The issue of mutable vs. immutable objects is an eternal struggle for us, Python/Sage programmers. Alas, it is a necessary one. Let me explain with the following "classic" example code:

```python
A = matrix(ZZ, [1, 2, 3])
B = A
B[0, 0] = 1000
```

Conventional wisdom suggests that after this code has been executed, A == [1, 2, 3] and B == [1000, 2, 3] hold True. That would be the case if matrices were immutable objects. However, that is not the case, and you will actually see that A == [1000, 2, 3] and B == [1000, 2, 3] are True, i.e., modifying the matrix B also modifies the matrix A.
(Warning: here comes a slightly technical explanation!) The reason for this is that the instruction `A = matrix(ZZ, [1, 2, 3])` first creates an object in memory (the matrix itself), and then assigns a reference to that object to the variable A. In naive terms, A is not the matrix, but a representation of its address in computer memory (a reference to the object). When you do `B = A`, you are copying what A is into B; that is, you are copying the address in memory, not the matrix (this is called aliasing). Now, both A and B reference the same object (basically, they are different names for the same matrix; they are aliases for the same matrix). Finally, when you execute `B[0, 0] = 1000`, you are telling Python/Sage "go to the matrix referenced by B and change its first entry to 1000." Since A references the same object, this process also "alters" A.

A problem like this is only possible with mutable objects. Had matrices been immutable, the instruction `B[0, 0] = 1000` would have forced Python/Sage to create a new object with the value [1000, 2, 3]. Indeed, because immutable objects are not allowed to be altered, they need to be created and re-created every time you make a change on them. (This is time- and resource-consuming, and that is why there are also mutable objects, which are less intensive on your computer.) As an example, since tuples are immutable, we have this code:

```python
A = (1, 2, 3)
B = A
B += (4,)
```

This creates a tuple in memory and makes A reference it. Now, `B = A` makes an aliasing, so that A and B are (for now) references to the same object. When you run `B += (4,)`, you are asking the element 4 to be added to B. Since B is immutable, that is not possible, so a new object is created with the additional element. So, after these instructions execute, you have that A == (1, 2, 3) and B == (1, 2, 3, 4) are True.
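The same aliasing behaviour can be reproduced with plain Python lists (mutable) versus an explicit copy, with no Sage needed; this small demonstration is mine, not part of the original answer:

```python
# Mutable object: assignment copies the reference, not the object.
A = [1, 2, 3]
B = A                 # B is an alias for the same list object as A
B[0] = 1000
changed_together = (A == [1000, 2, 3])   # True: "changing B" changed A too

# Taking an explicit copy breaks the aliasing.
C = [1, 2, 3]
D = C.copy()          # a new list object with the same values
D[0] = 1000
independent = (C == [1, 2, 3] and D == [1000, 2, 3])   # True: C untouched
```

Here `A is B` holds (one object, two names), while `C is not D` (two distinct objects), which is exactly the distinction the answer is making.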
Now, back to your code:

```python
T = []
b = random_matrix(GF(8), 1, 3)
for j in GF(8):
    b[0, 0] = j
    T.append(b)
show(T)
```

Notice that b is defined outside of the loop, so it is created only once. When you do `T.append(b)` inside the loop, you are not appending the value of b; you are literally appending b, which, as we have seen, is a reference to an object in memory. When this loop ends, your list T is filled with $8$ (the cardinality of GF(8)) references to only one object, namely the one you created outside the loop. If you use any of these references to alter the value of that object, then that change will be reflected in the other aliases of that object. That is why the instruction `b[0, 0] = j` alters all the elements in your list T: because they point to the same MUTABLE object (intuitively, they are the same object).

The solution is to put the line `b = random_matrix(GF(8), 1, 3)` inside your loop, so that every iteration creates a completely new object and assigns to b a reference to it, thus making sure that b references a different object every time. That way, altering one object in T won't change the other elements.

```python
T = []
for j in GF(8):
    b = random_matrix(GF(8), 1, 3)
    b[0, 0] = j
    T.append(b)
show(T)
```

Hope this helps! Sorry for the long answer!

> If keeping creation of b outside the loop was for a reason (i.e. if various modifications of b should differ in just one element), then the following code will do the job:
>
> ```python
> T = []
> b = random_matrix(GF(8), 1, 3)
> for j in GF(8):
>     c = copy(b)
>     c[0, 0] = j
>     T.append(c)
> show(T)
> ```
>
> (2021-02-16 20:02:15 +0200)

> This was indeed the problem, and this solution looks better than mine (creating a new random vector inside and then modifying every component of it to be equal to either the ones of b or another one). (2021-02-17 22:07:46 +0200)
https://zbmath.org/?q=an:1206.82082
Asymptotic behavior of edge-reinforced random walks. (English) Zbl 1206.82082

Summary: We study linearly edge-reinforced random walk on general multi-level ladders for large initial edge weights. For infinite ladders, we show that the process can be represented as a random walk in a random environment, given by random weights on the edges. The edge weights decay exponentially in space. The process converges to a stationary process. We provide asymptotic bounds for the range of the random walker up to a given time, showing that it localizes much more than an ordinary random walker. The random environment is described in terms of an infinite-volume Gibbs measure.

MSC:

- 82C41 Dynamics of random walks, random surfaces, lattice animals, etc. in time-dependent statistical mechanics
- 60K35 Interacting random processes; statistical mechanics type models; percolation theory
- 60K37 Processes in random environments

References:

[1] Coppersmith, D. and Diaconis, P. (1986). Random walk with reinforcement. Unpublished manuscript.
[2] Diaconis, P. (1988). Recent progress on de Finetti's notions of exchangeability. In Bayesian Statistics 3 (Valencia, 1987) 111–125. Oxford Univ. Press, New York. · Zbl 0707.60033
[3] Diaconis, P. and Freedman, D. (1980). de Finetti's theorem for Markov chains. Ann. Probab. 8 115–130. · Zbl 0426.60064
[4] Diaconis, P. and Rolles, S. W. W. (2006). Bayesian analysis for reversible Markov chains. Ann. Statist. 34 1270–1292. · Zbl 1118.62085
[5] Durrett, R. (2004). Probability: Theory and Examples, 3rd ed. Duxbury Press, Belmont, CA. · Zbl 1202.60001
[6] Keane, M. S. and Rolles, S. W. W. (2000). Edge-reinforced random walk on finite graphs. In Infinite Dimensional Stochastic Analysis (Amsterdam, 1999) 217–234. R. Neth. Acad. Arts Sci., Amsterdam. · Zbl 0986.05092
[7] Merkl, F. and Rolles, S. W. W. (2005). Edge-reinforced random walk on a ladder. Ann. Probab. 33 2051–2093. · Zbl 1102.82010
[8] Pemantle, R. (1988).
Phase transition in reinforced random walk and RWRE on trees. Ann. Probab. 16 1229–1241. · Zbl 0648.60077
[9] Rolles, S. W. W. (2003). How edge-reinforced random walk arises naturally. Probab. Theory Related Fields 126 243–260. · Zbl 1029.60089
[10] Rolles, S. W. W. (2006). On the recurrence of edge-reinforced random walk on $\mathbb{Z}\times G$. Probab. Theory Related Fields 135 216–264. · Zbl 1206.82045
[11] Thorisson, H. (2000). Coupling, Stationarity, and Regeneration. Springer, New York. · Zbl 0949.60007
https://casmusings.wordpress.com/2011/09/
# Monthly Archives: September 2011 ## The Demise of Graphing Calculators? Here’s an interesting question posed on a graphing calculator discussion group: At Hypothetical High School, all students have laptop computers 24/7, and fast and open Internet access both at school and at home. Students will already have free access to Geogebra, Excel, Microsoft Maths, Wolfram Alpha, and heavens knows what else.  If more Maths power is needed, it can be bought relatively cheaply…. For such a school, is there any justification in asking parents to pay an extra AUS \$190 for a graphics calculator? Just so I don’t confuse anyone, remember that I am a firm believer in the power of technology for enhancing the relevance and power of mathematics teaching.  I have seen students explore and discover amazing mathematics because they had the instant feedback abilities of mathematics software (handheld and computer-based) that answered their “what if” questions whenever they occurred whether or not an official teacher was present.  It levels the “playing field” for all students of mathematics and grants them access to understandable answers that sometimes result from intervening mathematics beyond the current reach of the explorers.  It is one more tool that can be used to lure the curious into the beautiful worlds of mathematics and patterns. But if all students and teachers have access to calculation/graphing/manipulation tools that are far faster than any handheld calculator, how can we possibly justify charging (or asking) families to pay even more?  The only argument posted in the discussion group in response concerns high stakes testing.  Real or not, that strikes me as an anemic response for many reasons. • We already ask families (or all tax-payers via school-funded testing) to shell out huge sums for annual testing. • Testing already occupies a disproportionately (and dis-appropriately?) large amount of the focus of many schools. 
• If math and science "explorers" already have laptops, won't requiring them to learn the specific workings of handhelds just take up more classroom time that should be spent on content?
• Are we really so wedded to testing that we are willing to spend extra time, money, and other resources to keep it in place?

As much as my students and I have grown from the presence of graphing calculators in their hands over my years as a math teacher, is it time to say goodbye to my old friend, the handheld calculator?

I first encountered this problem about a decade ago when I attended one of the first USACAS conferences sponsored by MEECAS in the Chicago area. I've gotten lots of mileage from it with my own students and in professional circles.

For a standard-form quadratic, $y=ax^2+bx+c$, you probably know how the values of a and c change the graph of the parabola, but what does b do? Be specific and prove your claim.

## Elementary Multiplication

One of my daughters is now in 2nd grade and I've always been interested in keeping her curiosity piqued, whether in math or any other discipline. I never want to push her to memorize anything or accelerate her learning beyond what she's ready to engage. But she has always enjoyed games and has been intensely interested in art. Following are some ideas I've been trying with my daughter during our recent conversations. Perhaps some of the parents out there can benefit from my ideas, or others can give me some additional leads on other good ideas.

I always play number games with my daughter. A few years ago I asked her how many apples (or dolls, or crackers, or whatever was in front of her at the time) she would have if she had 2 and I gave her 2 more. There were many variations on this theme. Eventually the numbers grew larger, and then I asked her how many I would need to give her if she had 2 apples now and would have 5 after my donation. It was my attempt at introducing subtraction without needing to name a new concept.
From my end, this has worked well.  My daughter likes playing with numbers and I keep pushing the window of what she can handle.  I make it clear that she can always ask for hints and that I’m never disappointed if she can’t handle a question I give her so long as she tries.  It’s a delicate balancing act, reading my daughter’s readiness and trying not to overburden her.  When I misjudge, her blank face tells me to go in another direction.
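(A spoiler sketch for the quadratic challenge posed earlier, my own working rather than part of the post, so stop here if you want to discover it yourself.) Completing the square gives

$$y = ax^2 + bx + c = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a},$$

so the vertex sits at $\left(-\frac{b}{2a},\; c - \frac{b^2}{4a}\right)$. Writing $x_0 = -\frac{b}{2a}$, the vertex height is $y_0 = c - a x_0^2$. So as $b$ varies (with $a$ and $c$ fixed), the vertex slides along the reflected parabola $y = -a x^2 + c$: changing $b$ translates the parabola along that curve rather than simply left-right or up-down.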