https://crypto.stackexchange.com/questions/96364/short-nonces-in-ecdsa-signature-generation
# Short nonces in ECDSA signature generation

Recently I noticed that my device generates short nonces, roughly between $$2^{243}$$ and $$2^{244}$$. Could this leak a small amount of information about the first 3 bits of the nonce? If a nonce is short, it must begin with zero bits; that is, its leading bits are zero. So, for safety's sake: given the signature values $$[R, S, H(e)]$$, can an attacker tell that the nonce used in a signature is short?

• Welcome to Cryptography.SE. You may need to edit your question. Nov 29 '21 at 17:36
• How does the "biased-k attack" on (EC)DSA work? Nov 29 '21 at 17:37
• @kelalaka Please show me an example of how the signature values [R, S, H(e)] can reveal that the nonce is short. Nov 29 '21 at 17:47
• eprint.iacr.org/2019/023.pdf Nov 29 '21 at 17:48
• @kelalaka On which page is this information written? Nov 29 '21 at 17:51
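For intuition, here is a minimal sketch of how large the bias actually is. The question does not name the curve, so the 256-bit secp256k1 group order is an assumption here; the "3 bits" figure in the question would instead correspond to an order near $$2^{247}$$.

```python
# Sketch: how many leading zero bits a "short" ECDSA nonce implies.
# Assumption (not stated in the question): a 256-bit group order,
# here secp256k1's standard order n.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def leading_zero_bits(k: int, order: int = N) -> int:
    """Known-zero leading bits of nonce k relative to the order's bit width."""
    return order.bit_length() - k.bit_length()

# Every nonce in [2^243, 2^244) has bit length 244, so under this
# assumption 256 - 244 = 12 leading bits are zero, not just 3.
print(leading_zero_bits(2**243))      # 12
print(leading_zero_bits(2**244 - 1))  # 12
```

A single signature $(r, s)$ does not directly reveal the nonce's size, but given many signatures whose nonces share a known bias, the private key can be recovered via lattice attacks on the Hidden Number Problem, as in the paper linked in the comments.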
https://anophthalmia.org/sz0o6/symmetric-closure-calculator-04b4b3
# symmetric closure calculator

Let $R$ be a binary relation on a set $A$. $R$ is reflexive if $xRx$ for all $x \in A$; symmetric if $xRy$ implies $yRx$; asymmetric if $xRy$ implies that $y$ is not related by $R$ to $x$; and transitive if $xRy$ and $yRz$ imply $xRz$. A relation that is reflexive, symmetric, and transitive is an equivalence relation. For example, "$x$ is exactly 7 cm taller than $y$" is not symmetric: if $x$ is exactly 7 cm taller than $y$, then $y$ is not 7 cm taller than $x$.

A binary relation $R$ from a set $A = \{a_1, \dots, a_m\}$ to a set $B = \{b_1, b_2, \dots, b_n\}$ can be represented as a matrix: create a 0/1 matrix whose rows are indexed by the elements of $A$ (thus $m$ rows) and whose columns are indexed by the elements of $B$ (thus $n$ columns); a 1 in row $i$, column $j$ means that $a_i$ and $b_j$ are related. In terms of digraphs, reflexivity is equivalent to having a loop on every vertex.

The symmetric closure of a relation $R$ on a set $X$ is the smallest symmetric relation that contains $R$. In other words, it is the union of $R$ with its converse relation $R^T$:

$$\operatorname{cl}_{sym}(R) = R \cup \{(y, x) : (x, y) \in R\}$$
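The closure definitions translate directly into code. A minimal sketch, with the relation represented as a set of ordered pairs (function names are illustrative, not from any particular library):

```python
def reflexive_closure(R, A):
    """Smallest reflexive relation on A containing R: add the diagonal."""
    return set(R) | {(x, x) for x in A}

def symmetric_closure(R):
    """Smallest symmetric relation containing R: union with the converse."""
    return set(R) | {(y, x) for (x, y) in R}

def transitive_closure(R):
    """Smallest transitive relation containing R: add implied pairs
    until a fixed point is reached."""
    T = set(R)
    while True:
        new = {(x, w) for (x, y) in T for (z, w) in T if y == z} - T
        if not new:
            return T
        T |= new

R = {("a", "b"), ("b", "c")}
print(symmetric_closure(R))   # adds ("b", "a") and ("c", "b")
print(transitive_closure(R))  # adds ("a", "c")
```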
The same vocabulary is used for properties of equality. The reflexive property states that for every real number $x$, $x = x$; the symmetric property states that for all real numbers $x$ and $y$, if $x = y$ then $y = x$; and the transitive property states that if $x = y$ and $y = z$, then $x = z$. Equality also satisfies the addition, subtraction, multiplication, division, and substitution properties.

More generally, for a property $P$ preserved under intersection, the $P$-closure of $R$ can be defined directly as the intersection of all relations with property $P$ that contain $R$. The important closures can also be obtained constructively:

- Reflexive closure: $\operatorname{cl}_{ref}(R) = R \cup \{(x, x) : x \in S\}$, the union of $R$ with the diagonal relation on $S$.
- Symmetric closure: $\operatorname{cl}_{sym}(R) = R \cup \{(y, x) : (x, y) \in R\}$, the union of $R$ with its converse.
- Transitive closure: the connectivity relation $R^{*} = \bigcup_{n \ge 1} R^{n}$, the smallest transitive relation containing $R$.

Worked example (Ex 1.1): determine whether the relation $R$ in the set $\mathbb{N}$ of natural numbers defined by $R = \{(x, y) : y = x + 5 \text{ and } x < 4\}$ is reflexive, symmetric, and transitive. Taking $x = 1, 2, 3$ gives $R = \{(1, 6), (2, 7), (3, 8)\}$. $R$ is not reflexive, since $(1, 1) \notin R$; it is not symmetric, since $(1, 6) \in R$ but $(6, 1) \notin R$; and it is vacuously transitive, since $R$ contains no pairs $(x, y)$ and $(y, z)$ to chain, so there is no condition left to check.

Idempotent law: in a Boolean algebra, $a + a = a$. Proof: $a + a = (a + a) \cdot 1 = (a + a)(a + a')$ by the complement law $x + x' = 1$, and distributing gives $a + a \cdot a' = a + 0 = a$.
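The membership checks in the Ex 1.1 example above can be sketched as predicates over the pair set (a small illustration; the helper names are my own):

```python
def is_reflexive(R, A):
    """True iff every element of A is related to itself."""
    return all((x, x) in R for x in A)

def is_symmetric(R):
    """True iff the converse of every pair is also present."""
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    """True iff every chained pair (x,y),(y,w) has (x,w) present."""
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

# R = {(x, y) : y = x + 5 and x < 4} over the natural numbers
R = {(x, x + 5) for x in range(1, 4)}   # {(1, 6), (2, 7), (3, 8)}
A = set(range(1, 9))
print(is_reflexive(R, A), is_symmetric(R), is_transitive(R))  # False False True
```

Note that `is_transitive` returns `True` here vacuously: no pair's second coordinate appears as another pair's first coordinate, so the `all(...)` ranges over an empty collection.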
In everyday life we often talk about the parent-child relationship, and we are likewise often interested in the ancestor-descendant relation; the latter is precisely the transitive closure of the former. As a further example, let $R$ be the relation on ordered pairs of positive integers such that $((a, b), (c, d)) \in R$ if and only if $ad = bc$; this $R$ is reflexive, symmetric, and transitive, hence an equivalence relation. Related notions: a relation is anti-reflexive (irreflexive) if no element is related to itself; non-reflexive if it is neither reflexive nor irreflexive; and quasi-reflexive if $a \sim b$ implies both $a \sim a$ and $b \sim b$.

For computation, suppose a directed graph is given in the form of an adjacency matrix `graph[V][V]`, where `graph[i][j]` is 1 if there is an edge from vertex `i` to vertex `j` or `i` is equal to `j`, and 0 otherwise. The transitive closure of the graph is its reachability matrix: entry `(i, j)` is 1 exactly when vertex `j` is reachable from vertex `i`. It can be computed by repeatedly applying the rule: if `[i, j] == 1` and `[j, k] == 1`, set `[i, k] = 1`, until no entry changes. For example, the transitive closure of one such 4-vertex graph is

1 1 1 1
1 1 1 1
1 1 1 1
0 0 0 1

(For a binary matrix in R, `library(sos); ???"transitive closure"` points to `relations::transitive_closure`, which uses an O(n^3) algorithm.) Note that a relation may contain $(0, 2)$ and $(2, 3)$ but not contain $(0, 3)$; adding such implied pairs is exactly what the transitive closure does, which shows that constructing the transitive closure of a relation is more complicated than constructing either the reflexive or the symmetric closure. Software such as KGraphs can define and graph relations and draw the reflexive, symmetric, and transitive closures of a relation.

Exercise: find the symmetric closure of each of the following relations over the set $\{a, b, c, d\}$:

1) $\{(a, b), (a, c), (b, c)\}$
2) $\{(a, b), (b, a)\}$
3) $\{(a, b), (b, c), (c, d), (d, a)\}$
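For the adjacency-matrix form, Warshall's algorithm computes the reachability matrix in O(V^3). A sketch; the input graph below is my own choice, picked so that its closure reproduces the all-ones-plus-`0 0 0 1` matrix quoted in the text:

```python
def transitive_closure_matrix(graph):
    """Warshall's algorithm: reach[i][j] = 1 iff vertex j is reachable
    from vertex i (each vertex reaches itself, per the problem statement)."""
    V = len(graph)
    reach = [[1 if (graph[i][j] or i == j) else 0 for j in range(V)]
             for i in range(V)]
    for k in range(V):
        for i in range(V):
            for j in range(V):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    return reach

# 4-vertex example: vertices 0, 1, 2 lie on a cycle, vertex 3 is a sink.
g = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 1],
     [0, 0, 0, 1]]
for row in transitive_closure_matrix(g):
    print(row)
```

The three nested loops are the standard dynamic-programming formulation: after iteration `k`, `reach[i][j]` accounts for all paths whose intermediate vertices are among `0..k`.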
Aside: a generalized inverse (g-inverse) of a matrix $A$ is any matrix $G$ with $AGA = A$; it is a reflexive g-inverse if, in addition, $GAG = G$. A reflexive g-inverse and the Moore-Penrose inverse are often confused in the statistical literature, but they behave quite differently when the underlying matrix is singular. Two classical exercises: first, show that for $b \in R(A)$ the system $Ax = b$ admits a unique solution from $R(A^-_r)$, where $A^-_r$ is a reflexive g-inverse of $A$; second, prove that $A$ is the only matrix which is a reflexive g-inverse of each reflexive g-inverse of $A$. Related terminology: a matrix consisting of only zero elements is called a zero matrix or null matrix, and a square matrix is called diagonal if all its elements outside the main diagonal are equal to zero.
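The two defining conditions of a reflexive g-inverse ($AGA = A$ and $GAG = G$) can be checked directly. A dependency-free sketch with a hand-picked singular $A$ and candidate $G$ (both matrices are illustrative, chosen so the arithmetic is exact):

```python
def matmul(X, Y):
    """Plain nested-list matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Singular A and a candidate reflexive g-inverse G.
A = [[2, 0],
     [0, 0]]
G = [[0.5, 0],
     [0,   0]]

assert matmul(matmul(A, G), A) == A   # g-inverse condition:  A G A = A
assert matmul(matmul(G, A), G) == G   # reflexive condition:  G A G = G
```

Dropping the second condition admits many more matrices: any $G' = G + Z$ with $AZA = 0$ is still a g-inverse, but generally not reflexive.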
http://www.math.psu.edu/calendars/meeting.php?id=6857
Meeting Details

Title: The 1-nullity distribution on a Sasakian manifold
Topology/Geometry Seminar
Philippe Rukimbira, FIU

It is conjectured that the dimension of the 1-nullity distribution on a $(2n+1)$-dimensional, closed Sasakian manifold is 1, 3, or $2n+1$. In our talk, we show that its dimension is no more than $n$.
https://zbmath.org/?q=an%3A1063.35025
## Homogenization of fully nonlinear, uniformly elliptic and parabolic partial differential equations in stationary ergodic media.(English)Zbl 1063.35025 The authors study the homogenization of fully nonlinear elliptic or parabolic equation in stationary ergodic media. They prove in particular that if the nonlinearity $$F$$ is stationary ergodic in the fast variable the limit problem does not contain any probabilistic variable. The methods used in the periodic or almost periodic case does not work in this case and the method used here is new and based on the investigation of the obstacle problem relative to a fully nonlinear operator. ### MSC: 35B27 Homogenization in context of PDEs; PDEs in media with periodic structure 35B40 Asymptotic behavior of solutions to PDEs 47B80 Random linear operators 60H25 Random operators and equations (aspects of stochastic analysis) 37A50 Dynamical systems and their relations with probability theory and stochastic processes 76M50 Homogenization applied to problems in fluid mechanics ### Keywords: homogenization in random media; obstacle problem Full Text: ### References: [1] Arisawa, Adv Math Sci Appl 11 pp 465– (2001) [2] Arisawa, Adv Math Sci Appl 11 pp 465– (2001) [3] Arisawa, Comm Partial Differential Equations 23 pp 2187– (1998) [4] Arisawa, Comm Partial Differential Equations 23 pp 2187– (1998) [5] Bensoussan, Stochastics 24 pp 87– (1988) · Zbl 0666.93131 [6] Bensoussan, Stochastics 24 pp 87– (1988) [7] ; Fully nonlinear elliptic partial differential equations. American Mathematical Society, Providence, R.I., 1997. [8] ; Fully nonlinear elliptic equations. American Mathematical Society, Providence, R.I., 1995. · Zbl 0834.35002 [9] Caffarelli, Comm Pure Appl Math 52 pp 829– (1999) [10] Caffarelli, Comm Pure Appl Math 52 pp 829– (1999) [11] ; ; In preparation. 
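The limit theorem described in the review can be stated schematically; the following is a sketch of the standard stationary-ergodic homogenization setup (my paraphrase of the general framework, not quoted from the paper):

```latex
% u^\varepsilon solves a fully nonlinear problem with rapidly oscillating
% random coefficients (\omega denotes the random environment):
F\!\left(D^2 u^\varepsilon,\; \tfrac{x}{\varepsilon},\; \omega\right) = 0
  \quad \text{in } U .
% If F is stationary ergodic in the fast variable x/\varepsilon, then, as
% \varepsilon \to 0, almost surely in \omega, u^\varepsilon converges to the
% solution of a deterministic effective problem -- no probabilistic variable
% remains in the limit:
\bar{F}\!\left(D^2 u\right) = 0 \quad \text{in } U .
```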
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6039995551109314, "perplexity": 3900.704455070593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00499.warc.gz"}
https://labs.tib.eu/arxiv/?author=D.%20Bacon
• ### Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing(1708.01530) March 1, 2019 astro-ph.CO We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM; for $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. 
Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ... • ### Dark Energy Survey Year 1 Results: Galaxy-Galaxy Lensing(1708.01537) Sept. 4, 2018 astro-ph.CO We present galaxy-galaxy lensing measurements from 1321 sq. deg. of the Dark Energy Survey (DES) Year 1 (Y1) data. The lens sample consists of a selection of 660,000 red galaxies with high-precision photometric redshifts, known as redMaGiC, split into five tomographic bins in the redshift range $0.15 < z < 0.9$. We use two different source samples, obtained from the Metacalibration (26 million galaxies) and Im3shape (18 million galaxies) shear estimation codes, which are split into four photometric redshift bins in the range $0.2 < z < 1.3$. We perform extensive testing of potential systematic effects that can bias the galaxy-galaxy lensing signal, including those from shear estimation, photometric redshifts, and observational properties. Covariances are obtained from jackknife subsamples of the data and validated with a suite of log-normal simulations. We use the shear-ratio geometric test to obtain independent constraints on the mean of the source redshift distributions, providing validation of those obtained from other photo-$z$ studies with the same data. We find consistency between the galaxy bias estimates obtained from our galaxy-galaxy lensing measurements and from galaxy clustering, therefore showing the galaxy-matter cross-correlation coefficient $r$ to be consistent with one, measured over the scales used for the cosmological analysis. The results in this work present one of the three two-point correlation functions, along with galaxy clustering and cosmic shear, used in the DES cosmological analysis of Y1 data, and hence the methodology and the systematics tests presented here provide a critical input for that study as well as for future cosmological analyses in DES and other photometric galaxy surveys. 
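The combination $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5}$ quoted in the Y1 key-project abstract above is simple to evaluate; a minimal sketch using the quoted $\Lambda$CDM central values (central values only, ignoring the asymmetric errors):

```python
# S_8 parameterizes the clustering amplitude probed by lensing:
# S_8 = sigma_8 * (Omega_m / 0.3)**0.5  (definition used in the DES Y1 abstract).
def s8(sigma8: float, omega_m: float, alpha: float = 0.5) -> float:
    """Lensing amplitude parameter S_8 = sigma_8 * (Omega_m / 0.3)**alpha."""
    return sigma8 * (omega_m / 0.3) ** alpha

# Inverting the definition with the quoted LambdaCDM central values
# (S_8 = 0.783, Omega_m = 0.264) gives the implied sigma_8:
sigma8_des = 0.783 / (0.264 / 0.3) ** 0.5
print(f"implied sigma_8 = {sigma8_des:.3f}")            # ~0.835
print(f"round-trip S_8  = {s8(sigma8_des, 0.264):.3f}")  # 0.783
```

At $\Omega_m = 0.3$ the pivot makes $S_8 = \sigma_8$ exactly, which is why the parameter is quoted in this form: it is the direction lensing constrains best.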
• ### Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map(1708.01535) Dec. 19, 2017 astro-ph.CO We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous $\approx1,500$deg$^2$, covering a comoving volume of $\approx10$Gpc$^3$. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogs, Metacalibration and Im3shape, with sources at redshift $0.2<z<1.3$, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal-to-noise in the E-mode and the B-mode map is $\sim$1.5 ($\sim$2) when smoothed with a Gaussian filter of $\sigma_{G}=30$ (80) arcminutes. The second and third moments of the convergence $\kappa$ in the maps are in agreement with simulations. We also find no significant correlation of $\kappa$ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation with different foreground tracers of mass and (2) exploration of the largest peaks and voids in the maps. • ### Galaxies in X-ray Selected Clusters and Groups in Dark Energy Survey Data II: Hierarchical Bayesian Modeling of the Red-Sequence Galaxy Luminosity Function(1710.05908) June 30, 2019 astro-ph.CO, astro-ph.GA Using $\sim 100$ X-ray selected clusters in the Dark Energy Survey Science Verification data, we constrain the luminosity function (LF) of cluster red sequence galaxies as a function of redshift. This is the first homogeneous optical/X-ray sample large enough to constrain the evolution of the luminosity function simultaneously in redshift ($0.1<z<1.05$) and cluster mass ($13.5 \le \log_{10}(M_{\rm 200crit}) \lesssim 15.0$). We pay particular attention to completeness issues and the detection limit of the galaxy sample. 
We then apply a hierarchical Bayesian model to fit the cluster galaxy LFs via a Schechter function, from its characteristic break ($m^*$) to the faint-end power-law slope ($\alpha$). Our method enables us to avoid known issues in similar analyses based on stacking or binning the clusters. We find weak and statistically insignificant ($\sim 1.9 \sigma$) evolution in the faint end slope $\alpha$ versus redshift. We also find no dependence in $\alpha$ or $m^*$ with the X-ray inferred cluster masses. However, the amplitude of the LF as a function of cluster mass is constrained to $\sim 20\%$ precision. As a by-product of our algorithm, we utilize the correlation between the LF and cluster mass to provide an improved estimate of the individual cluster masses as well as the scatter in true mass given the X-ray inferred masses. This technique can be applied to a larger sample of X-ray or optically selected clusters from the Dark Energy Survey, significantly improving the sensitivity of the analysis. • ### Cosmology from Cosmic Shear with DES Science Verification Data(1507.05552) May 3, 2017 astro-ph.CO We present the first constraints on cosmology from the Dark Energy Survey (DES), using weak lensing measurements from the preliminary Science Verification (SV) data. We use 139 square degrees of SV data, which is less than 3\% of the full DES survey area. Using cosmic shear 2-point measurements over three redshift bins we find $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5} = 0.81 \pm 0.06$ (68\% confidence), after marginalising over 7 systematics parameters and 3 other cosmological parameters. We examine the robustness of our results to the choice of data vector and systematics assumed, and find them to be stable. About $20$\% of our error bar comes from marginalising over shear and photometric redshift calibration uncertainties. 
The current state-of-the-art cosmic shear measurements from CFHTLenS are mildly discrepant with the cosmological constraints from Planck CMB data; our results are consistent with both datasets. Our uncertainties are $\sim$30\% larger than those from CFHTLenS when we carry out a comparable analysis of the two datasets, which we attribute largely to the lower number density of our shear catalogue. We investigate constraints on dark energy and find that, with this small fraction of the full survey, the DES SV constraints make negligible impact on the Planck constraints. The moderate disagreement between the CFHTLenS and Planck values of $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5}$ is present regardless of the value of $w$. • ### Imprint of DES super-structures on the Cosmic Microwave Background(1610.00637) Nov. 15, 2016 astro-ph.CO Small temperature anisotropies in the Cosmic Microwave Background can be sourced by density perturbations via the late-time integrated Sachs-Wolfe effect. Large voids and superclusters are excellent environments to make a localized measurement of this tiny imprint. In some cases excess signals have been reported. We probed these claims with an independent data set, using the first year data of the Dark Energy Survey in a different footprint, and using a different super-structure finding strategy. We identified 52 large voids and 102 superclusters at redshifts $0.2 < z < 0.65$. We used the Jubilee simulation to a priori evaluate the optimal ISW measurement configuration for our compensated top-hat filtering technique, and then performed a stacking measurement of the CMB temperature field based on the DES data. For optimal configurations, we detected a cumulative cold imprint of voids with $\Delta T_{f} \approx -5.0\pm3.7~\mu K$ and a hot imprint of superclusters $\Delta T_{f} \approx 5.1\pm3.2~\mu K$ ; this is $\sim1.2\sigma$ higher than the expected $|\Delta T_{f}| \approx 0.6~\mu K$ imprint of such super-structures in $\Lambda$CDM. 
If we instead use an a posteriori selected filter size ($R/R_{v}=0.6$), we can find a temperature decrement as large as $\Delta T_{f} \approx -9.8\pm4.7~\mu K$ for voids, which is $\sim2\sigma$ above $\Lambda$CDM expectations and is comparable to previous measurements made using SDSS super-structure data. • ### The Dark Energy Survey: more than dark energy - an overview(1601.00329) Aug. 19, 2016 astro-ph.CO, astro-ph.GA This overview article describes the legacy prospect and discovery potential of the Dark Energy Survey (DES) beyond cosmological studies, illustrating it with examples from the DES early data. DES is using a wide-field camera (DECam) on the 4m Blanco Telescope in Chile to image 5000 sq deg of the sky in five filters (grizY). By its completion the survey is expected to have generated a catalogue of 300 million galaxies with photometric redshifts and 100 million stars. In addition, a time-domain survey search over 27 sq deg is expected to yield a sample of thousands of Type Ia supernovae and other transients. The main goals of DES are to characterise dark energy and dark matter, and to test alternative models of gravity; these goals will be pursued by studying large scale structure, cluster counts, weak gravitational lensing and Type Ia supernovae. However, DES also provides a rich data set which allows us to study many other aspects of astrophysics. In this paper we focus on additional science with DES, emphasizing areas where the survey makes a difference with respect to other current surveys. The paper illustrates, using early data (from `Science Verification', and from the first, second and third seasons of observations), what DES can tell us about the solar system, the Milky Way, galaxy evolution, quasars, and other topics. In addition, we show that if the cosmological model is assumed to be Lambda+ Cold Dark Matter (LCDM) then important astrophysics can be deduced from the primary DES probes. 
Highlights from DES early data include the discovery of 34 trans-Neptunian objects, 17 dwarf satellites of the Milky Way, one published z > 6 quasar (and more confirmed) and two published superluminous supernovae (and more confirmed). • ### Cosmic Shear Measurements with DES Science Verification Data(1507.05598) July 27, 2016 astro-ph.CO We present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We furthermore use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7$\sigma$. Cosmological constraints from the measurements in this work are presented in a companion paper (DES et al. 2015). 
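The real-space statistic behind these measurements, $\xi_+(\theta)$, can be estimated by brute force over galaxy pairs; a toy sketch under a flat-sky approximation (illustrative only — production analyses use tree codes such as TreeCorr):

```python
import numpy as np

def xi_plus(x, y, e, bins):
    """Brute-force xi_+(theta): mean of Re(e_i * conj(e_j)) over galaxy
    pairs falling in each angular-separation bin.
    x, y : flat-sky positions; e : complex ellipticities e1 + i*e2;
    bins : monotonically increasing bin edges."""
    n = len(x)
    num = np.zeros(len(bins) - 1)
    cnt = np.zeros(len(bins) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            theta = np.hypot(x[i] - x[j], y[i] - y[j])
            k = np.searchsorted(bins, theta) - 1
            if 0 <= k < len(bins) - 1:
                # e_t e_t + e_x e_x for a pair equals Re(e_i conj(e_j)):
                # the rotation into the pair frame cancels in xi_+.
                num[k] += (e[i] * np.conj(e[j])).real
                cnt[k] += 1
    return np.where(cnt > 0, num / np.maximum(cnt, 1), 0.0)

# Sanity check: a uniform shear field gamma gives xi_+ = |gamma|^2
# in every populated bin.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
e = np.full(50, 0.03 + 0.04j)           # |e|^2 = 0.0025
print(xi_plus(x, y, e, np.array([0.0, 0.5, 1.0, 1.5])))  # ~0.0025 per bin
```

The $O(n^2)$ loop is the definition made literal; the null tests described in the abstract (B-modes, observing-condition dependence) operate on exactly this kind of binned pair statistic.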
• ### Testing the lognormality of the galaxy and weak lensing convergence distributions from Dark Energy Survey maps(1605.02036) May 6, 2016 astro-ph.CO It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation. 
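A quick numerical illustration of why the shifted lognormal is a natural model for density contrast (a sketch of the distributional point only, not the paper's Counts-in-Cells pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_g = 0.6                          # width of the underlying Gaussian field
g = rng.normal(0.0, sigma_g, 200_000)

# Shifted lognormal density contrast: delta = exp(g - sigma^2/2) - 1.
# The -sigma^2/2 shift enforces <delta> = 0, and the exponential
# guarantees delta > -1, i.e. the density itself never goes negative.
delta = np.exp(g - sigma_g**2 / 2) - 1.0

print(f"mean(delta) = {delta.mean():+.4f}")   # ~0 by construction
print(f"min(delta)  = {delta.min():+.4f}")    # always > -1
# Positive skewness is what distinguishes it from a Gaussian of equal variance:
skew = np.mean((delta - delta.mean())**3) / delta.std()**3
print(f"skewness    = {skew:.2f}")
```

The hard floor at $\delta = -1$ is the physical motivation: a Gaussian assigns probability to negative densities, a lognormal does not, which matters most in underdense regions and on small smoothing scales.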
• ### Galaxy bias from the Dark Energy Survey Science Verification data: combining galaxy density maps and weak lensing maps(1601.00405) April 27, 2016 astro-ph.CO We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a $\sim$116 deg$^{2}$ area of the Dark Energy Survey (DES) Science Verification data. This method was first developed in Amara et al. (2012) and later re-examined in a companion paper (Pujol et al. 2016) with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i$<$22.5 galaxy sample. We find the galaxy bias and 1$\sigma$ error bars in 4 photometric redshift bins to be 1.12$\pm$0.19 (z=0.2-0.4), 0.97$\pm$0.15 (z=0.4-0.6), 1.38$\pm$0.39 (z=0.6-0.8), and 1.45$\pm$0.56 (z=0.8-1.0). These measurements are consistent at the 2$\sigma$ level with measurements on the same dataset using galaxy clustering and cross-correlation of galaxies with CMB lensing, with most of the redshift bins consistent within the 1$\sigma$ error bars. In addition, our method provides the only $\sigma_8$-independent constraint among the three. We forward-model the main observational effects using mock galaxy catalogs by including shape noise, photo-z errors and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Furthermore, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution. 
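The map-combination idea can be reduced to its simplest form under a linear bias model; a toy sketch (not the Amara et al. estimator itself, just the underlying relation $\delta_g = b\,\delta_m + \text{noise}$ it exploits):

```python
import numpy as np

def estimate_bias(delta_g, delta_m):
    """Zero-lag linear-bias estimate b = <delta_g delta_m> / <delta_m^2>.
    Unbiased as long as the galaxy shot noise is uncorrelated with delta_m."""
    return np.mean(delta_g * delta_m) / np.mean(delta_m**2)

rng = np.random.default_rng(1)
n_pix = 100_000
delta_m = rng.normal(0, 0.2, n_pix)                 # mock matter fluctuations
b_true = 1.12                                       # e.g. the quoted z=0.2-0.4 value
delta_g = b_true * delta_m + rng.normal(0, 0.1, n_pix)  # add shot noise

print(f"recovered b = {estimate_bias(delta_g, delta_m):.2f}")  # ~1.12
```

Because the shear maps trace $\delta_m$ directly, this ratio does not depend on the overall clustering amplitude, which is the sense in which the abstract calls the method $\sigma_8$-independent.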
• ### Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data(1603.05040) March 16, 2016 astro-ph.CO Shear peak statistics has gained a lot of attention recently as a practical alternative to the two point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg$^2$ field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range $0<\mathcal S / \mathcal N<4$. To predict the peak counts as a function of cosmological parameters we use a suite of $N$-body simulations spanning 158 models with varying $\Omega_{\rm m}$ and $\sigma_8$, fixing $w = -1$, $\Omega_{\rm b} = 0.04$, $h = 0.7$ and $n_s=1$, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure $\sigma_{8}(\Omega_{\rm m}/0.3)^{0.6}=0.77 \pm 0.07$, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with $\mathcal S / \mathcal N>4$ would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. We discuss prospects for future peak statistics analysis with upcoming DES data. • ### Cross-correlation of gravitational lensing from DES Science Verification data with SPT and Planck lensing(1512.04535) Dec. 14, 2015 astro-ph.CO We measure the cross-correlation between weak lensing of galaxy images and of the cosmic microwave background (CMB). 
The effects of gravitational lensing on different sources will be correlated if the lensing is caused by the same mass fluctuations. We use galaxy shape measurements from 139 deg$^{2}$ of the Dark Energy Survey (DES) Science Verification data and overlapping CMB lensing from the South Pole Telescope (SPT) and Planck. The DES source galaxies have a median redshift of $z_{\rm med} {\sim} 0.7$, while the CMB lensing kernel is broad and peaks at $z{\sim}2$. The resulting cross-correlation is maximally sensitive to mass fluctuations at $z{\sim}0.44$. Assuming the Planck 2015 best-fit cosmology, the amplitude of the DES$\times$SPT cross-power is found to be $A = 0.88 \pm 0.30$ and that from DES$\times$Planck to be $A = 0.86 \pm 0.39$, where $A=1$ corresponds to the theoretical prediction. These are consistent with the expected signal and correspond to significances of $2.9 \sigma$ and $2.2 \sigma$ respectively. We demonstrate that our results are robust to a number of important systematic effects including the shear measurement method, estimator choice, photometric redshift uncertainty and CMB lensing systematics. Significant intrinsic alignment of galaxy shapes would increase the cross-correlation signal inferred from the data; we calculate a value of $A = 1.08 \pm 0.36$ for DES$\times$SPT when we correct the observations with a simple IA model. With three measurements of this cross-correlation now existing in the literature, there is not yet reliable evidence for any deviation from the expected LCDM level of cross-correlation, given the size of the statistical uncertainties and the significant impact of systematic errors, particularly IAs. We provide forecasts for the expected signal-to-noise of the combination of the five-year DES survey and SPT-3G. • ### Weak lensing by galaxy troughs in DES Science Verification data(1507.05090) Dec. 8, 2015 astro-ph.CO We measure the weak lensing shear around galaxy troughs, i.e. 
the radial alignment of background galaxies relative to underdensities in projections of the foreground galaxy field over a wide range of redshift in Science Verification data from the Dark Energy Survey. Our detection of the shear signal is highly significant (10 to 15$\sigma$ for the smallest angular scales) for troughs with the redshift range z in [0.2,0.5] of the projected galaxy field and angular diameters of 10 arcmin...1{\deg}. These measurements probe the connection between the galaxy, matter density, and convergence fields. By assuming galaxies are biased tracers of the matter density with Poissonian noise, we find agreement of our measurements with predictions in a fiducial Lambda cold dark matter model. The prediction for the lensing signal on large trough scales is virtually independent of the details of the underlying model for the connection of galaxies and matter. Our comparison of the shear around troughs with that around cylinders with large galaxy counts is consistent with a symmetry between galaxy and matter over- and underdensities. In addition, we measure the two-point angular correlation of troughs with galaxies which, in contrast to the lensing signal, is sensitive to galaxy bias on all scales. The lensing signal of troughs and their clustering with galaxies is therefore a promising probe of the statistical properties of matter underdensities and their connection to the galaxy field. • ### Galaxies in X-ray Selected Clusters and Groups in Dark Energy Survey Data I: Stellar Mass Growth of Bright Central Galaxies Since z~1.2(1504.02983) Dec. 2, 2015 astro-ph.CO, astro-ph.GA Using the science verification data of the Dark Energy Survey (DES) for a new sample of 106 X-Ray selected clusters and groups, we study the stellar mass growth of Bright Central Galaxies (BCGs) since redshift 1.2. 
Compared with the expectation in a semi-analytical model applied to the Millennium Simulation, the observed BCGs become under-massive/under-luminous with decreasing redshift. We incorporate the uncertainties associated with cluster mass, redshift, and BCG stellar mass measurements into analysis of a redshift-dependent BCG-cluster mass relation, $m_{*}\propto(\frac{M_{200}}{1.5\times 10^{14}M_{\odot}})^{0.24\pm 0.08}(1+z)^{-0.19\pm0.34}$, and compare the observed relation to the model prediction. We estimate the average growth rate since $z = 1.0$ for BCGs hosted by clusters of $M_{200, z}=10^{13.8}M_{\odot}$, at $z=1.0$: $m_{*, BCG}$ appears to have grown by $0.13\pm0.11$ dex, in tension at $\sim 2.5 \sigma$ significance level with the $0.40$ dex growth rate expected from the semi-analytic model. We show that the buildup of extended intra-cluster light after $z=1.0$ may alleviate this tension in BCG growth rates. • ### Wide-Field Lensing Mass Maps from DES Science Verification Data(1505.01871) July 20, 2015 astro-ph.CO We present a mass map reconstructed from weak gravitational lensing shear measurements over 139 sq. deg from the Dark Energy Survey (DES) Science Verification data. The mass map probes both luminous and dark matter, thus providing a tool for studying cosmology. We find good agreement between the mass map and the distribution of massive galaxy clusters identified using a red-sequence cluster finder. Potential candidates for super-clusters and voids are identified using these maps. We measure the cross-correlation between the mass map and a magnitude-limited foreground galaxy sample and find a detection at the 5-7 sigma level on a large range of scales. These measurements are consistent with simulated galaxy catalogs based on LCDM N-body simulations, suggesting low systematics uncertainties in the map. We summarize our key findings in this letter; the detailed methodology and tests for systematics are presented in a companion paper. 
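The BCG-cluster scaling relation quoted above, $m_{*}\propto(M_{200}/1.5\times 10^{14}M_{\odot})^{0.24}(1+z)^{-0.19}$, is easy to evaluate numerically; a sketch using only ratios, since the abstract does not give the overall normalization (central exponents only, ignoring their quoted uncertainties):

```python
def bcg_mass_ratio(m200_a, z_a, m200_b, z_b, alpha=0.24, beta=-0.19):
    """Ratio of BCG stellar masses implied by the scaling relation
    m* ~ (M200 / 1.5e14 Msun)^alpha * (1+z)^beta.
    The (unquoted) normalization cancels in the ratio."""
    pivot = 1.5e14  # Msun, the pivot mass used in the abstract
    num = (m200_a / pivot) ** alpha * (1 + z_a) ** beta
    den = (m200_b / pivot) ** alpha * (1 + z_b) ** beta
    return num / den

# A factor-10 increase in cluster mass at fixed redshift raises m* by 10^0.24:
print(f"{bcg_mass_ratio(1e15, 0.3, 1e14, 0.3):.2f}")  # ~1.74
# Evolution from z=1 to z=0 at fixed mass raises m* by 2^0.19:
print(f"{bcg_mass_ratio(1e14, 0.0, 1e14, 1.0):.2f}")  # ~1.14
```

The second number makes the tension quoted in the abstract concrete: the observed relation implies only mild growth since $z=1$, well below the 0.40 dex expected from the semi-analytic model.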
• ### Wide-Field Lensing Mass Maps from DES Science Verification Data: Methodology and Detailed Analysis(1504.03002) July 20, 2015 astro-ph.CO Weak gravitational lensing allows one to reconstruct the spatial distribution of the projected mass density across the sky. These "mass maps" provide a powerful tool for studying cosmology as they probe both luminous and dark matter. In this paper, we present a weak lensing mass map reconstructed from shear measurements in a 139 sq. deg area from the Dark Energy Survey (DES) Science Verification (SV) data. We compare the distribution of mass with that of the foreground distribution of galaxies and clusters. The overdensities in the reconstructed map correlate well with the distribution of optically detected clusters. We demonstrate that candidate superclusters and voids along the line of sight can be identified, exploiting the tight scatter of the cluster photometric redshifts. We cross-correlate the mass map with a foreground magnitude-limited galaxy sample from the same data. Our measurement gives results consistent with mock catalogs from N-body simulations that include the primary sources of statistical uncertainties in the galaxy, lensing, and photo-z catalogs. The statistical significance of the cross-correlation is at the 6.8-sigma level with 20 arcminute smoothing. A major goal of this study is to investigate systematic effects arising from a variety of sources, including PSF and photo-z uncertainties. We make maps derived from twenty variables that may characterize systematics and find the principal components. We find that the contribution of systematics to the lensing mass maps is generally within measurement uncertainties. In this work, we analyze less than 3% of the final area that will be mapped by the DES; the tools and analysis techniques developed in this paper can be applied to forthcoming larger datasets from the survey. 
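The abstracts above do not spell out the reconstruction algorithm, but lensing mass maps of this kind are commonly built with a Kaiser-Squires inversion; a minimal flat-sky, noise-free sketch (my illustration, not the DES pipeline):

```python
import numpy as np

def kaiser_squires(gamma):
    """Flat-sky Kaiser-Squires inversion: recover convergence kappa from a
    complex shear map gamma = gamma1 + i*gamma2 via
    kappa_hat = conj(D) * gamma_hat, with
    D = (l1^2 - l2^2 + 2i l1 l2) / (l1^2 + l2^2) in Fourier space (|D| = 1)."""
    n = gamma.shape[0]
    l1, l2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                      # avoid division by zero at l = 0
    D = (l1**2 - l2**2 + 2j * l1 * l2) / l_sq
    kappa_hat = np.conj(D) * np.fft.fft2(gamma)
    kappa_hat[0, 0] = 0.0                 # the mean of kappa is unconstrained
    return np.fft.ifft2(kappa_hat).real   # imaginary part is the B-mode

# Round trip: shear a known kappa field forward, then invert it back.
rng = np.random.default_rng(3)
n = 64
kappa_true = rng.normal(0, 1, (n, n))
kappa_true -= kappa_true.mean()           # remove the unconstrained mean
l1, l2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
l_sq = l1**2 + l2**2; l_sq[0, 0] = 1.0
D = (l1**2 - l2**2 + 2j * l1 * l2) / l_sq
gamma = np.fft.ifft2(D * np.fft.fft2(kappa_true))   # forward model
print(np.allclose(kaiser_squires(gamma), kappa_true, atol=1e-10))  # True
```

The systematics analysis in the paper is about what this idealization hides: masking, shape noise, and photo-z errors all leak into the reconstruction, which is why the B-mode (imaginary) part is kept as a diagnostic.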
• ### Mass and galaxy distributions of four massive galaxy clusters from Dark Energy Survey Science Verification data(1405.4285) Feb. 28, 2015 astro-ph.CO, astro-ph.GA We measure the weak-lensing masses and galaxy distributions of four massive galaxy clusters observed during the Science Verification phase of the Dark Energy Survey. This pathfinder study is meant to 1) validate the DECam imager for the task of measuring weak-lensing shapes, and 2) utilize DECam's large field of view to map out the clusters and their environments over 90 arcmin. We conduct a series of rigorous tests on astrometry, photometry, image quality, PSF modeling, and shear measurement accuracy to single out flaws in the data and also to identify the optimal data processing steps and parameters. We find Science Verification data from DECam to be suitable for the lensing analysis described in this paper. The PSF is generally well-behaved, but the modeling is rendered difficult by a flux-dependent PSF width and ellipticity. We employ photometric redshifts to distinguish between foreground and background galaxies, and a red-sequence cluster finder to provide cluster richness estimates and cluster-galaxy distributions. By fitting NFW profiles to the clusters in this study, we determine weak-lensing masses that are in agreement with previous work. For Abell 3261, we provide the first estimates of redshift, weak-lensing mass, and richness. In addition, the cluster-galaxy distributions indicate the presence of filamentary structures attached to 1E 0657-56 and RXC J2248.7-4431, stretching out as far as 1 degree (approximately 20 Mpc), showcasing the potential of DECam and DES for detailed studies of degree-scale features on the sky. • ### Radio Continuum Surveys with Square Kilometre Array Pathfinders(1210.7521) Oct. 
28, 2012 astro-ph.CO, astro-ph.IM In the lead-up to the Square Kilometre Array (SKA) project, several next-generation radio telescopes and upgrades are already being built around the world. These include APERTIF (The Netherlands), ASKAP (Australia), eMERLIN (UK), VLA (USA), e-EVN (based in Europe), LOFAR (The Netherlands), Meerkat (South Africa), and the Murchison Widefield Array (MWA). Each of these new instruments has different strengths, and coordination of surveys between them can help maximise the science from each of them. A radio continuum survey is being planned on each of them with the primary science objective of understanding the formation and evolution of galaxies over cosmic time, and the cosmological parameters and large-scale structures which drive it. In pursuit of this objective, the different teams are developing a variety of new techniques, and refining existing ones. Here we describe these projects, their science goals, and the technical challenges which are being addressed to maximise the science return. • ### Spatial matter density mapping of the STAGES Abell A901/2 supercluster field with 3D lensing(1109.0932) Nov. 10, 2011 astro-ph.CO We present weak lensing data from the HST/STAGES survey to study the three-dimensional spatial distribution of matter and galaxies in the Abell 901/902 supercluster complex. Our method improves over the existing 3D lensing mapping techniques by calibrating and removing redshift bias and accounting for the effects of the radial elongation of 3D structures. We also include the first detailed noise analysis of a 3D lensing map, showing that even with deep HST quality data, only the most massive structures, for example M200>~10^15 Msun/h at z~0.8, can be resolved in 3D with any reasonable redshift accuracy (\Delta z~0.15). We compare the lensing map to the stellar mass distribution and find luminous counterparts for all mass peaks detected with a peak significance >3\sigma. 
We see structures in and behind the z=0.165 foreground supercluster, finding structure directly behind the A901b cluster at z~0.6 and also behind the SW group at z~0.7. This 3D structure viewed in projection has no significant impact on recent mass estimates of A901b or the SW group components SWa and SWb. • Euclid is a space-based survey mission from the European Space Agency designed to understand the origin of the Universe's accelerating expansion. It will use cosmological probes to investigate the nature of dark energy, dark matter and gravity by tracking their observational signatures on the geometry of the universe and on the cosmic history of structure formation. The mission is optimised for two independent primary cosmological probes: Weak gravitational Lensing (WL) and Baryonic Acoustic Oscillations (BAO). The Euclid payload consists of a 1.2 m Korsch telescope designed to provide a large field of view. It carries two instruments with a common field-of-view of ~0.54 deg2: the visual imager (VIS) and the near infrared instrument (NISP) which contains a slitless spectrometer and a three bands photometer. The Euclid wide survey will cover 15,000 deg2 of the extragalactic sky and is complemented by two 20 deg2 deep fields. For WL, Euclid measures the shapes of 30-40 resolved galaxies per arcmin2 in one broad visible R+I+Z band (550-920 nm). The photometric redshifts for these galaxies reach a precision of dz/(1+z) < 0.05. They are derived from three additional Euclid NIR bands (Y, J, H in the range 0.92-2.0 micron), complemented by ground based photometry in visible bands derived from public data or through engaged collaborations. The BAO are determined from a spectroscopic survey with a redshift accuracy dz/(1+z) =0.001. The slitless spectrometer, with spectral resolution ~250, predominantly detects Ha emission line galaxies. Euclid is a Medium Class mission of the ESA Cosmic Vision 2015-2025 programme, with a foreseen launch date in 2019. 
This report (also known as the Euclid Red Book) describes the outcome of the Phase A study. • ### Barred disks in dense environments(1002.1067) Feb. 4, 2010 astro-ph.CO We investigate the properties of bright (MV <= -18) barred and unbarred disks in the Abell 901/902 cluster system at z~0.165 with the STAGES HST ACS survey. To identify and characterize bars, we use ellipse-fitting. We use visual classification, a Sersic cut, and a color cut to select disk galaxies, and find that the latter two methods miss 31% and 51%, respectively of disk galaxies identified through visual classification. This underscores the importance of carefully selecting the disk sample in cluster environments. However, we find that the global optical bar fraction in the clusters is ~30% regardless of the method of disk selection. We study the relationship of the optical bar fraction to host galaxy properties, and find that the optical bar fraction depends strongly on the luminosity of the galaxy and whether it hosts a prominent bulge or is bulgeless. Within a given absolute magnitude bin, the optical bar fraction increases for galaxies with no significant bulge component. Within each morphological type bin, the optical bar fraction increases for brighter galaxies. We find no strong trend (variations larger than a factor of 1.3) for the optical bar fraction with local density within the cluster between the core and virial radius (R ~ 0.25 to 1.2 Mpc). We discuss the implications of our results for the evolution of bars and disks in dense environments. • ### Relating basic properties of bright early-type dwarf galaxies to their location in Abell 901/902(0911.0704) Nov. 5, 2009 astro-ph.CO We present a study of the population of bright early-type dwarf galaxies in the multiple-cluster system Abell 901/902. We use data from the STAGES survey and COMBO-17 to investigate the relation between the color and structural properties of the dwarfs and their location in the cluster. 
The definition of the dwarf sample is based on the central surface brightness and includes galaxies in the luminosity range -16 >= M_B >~-19 mag. Using a fit to the color magnitude relation of the dwarfs, our sample is divided into a red and blue subsample. We find a color-density relation in the projected radial distribution of the dwarf sample: at the same luminosity dwarfs with redder colors are located closer to the cluster centers than their bluer counterparts. Furthermore, the redder dwarfs are on average more compact and rounder than the bluer dwarfs. These findings are consistent with theoretical expectations assuming that bright early-type dwarfs are the remnants of transformed late-type disk galaxies involving processes such as ram pressure stripping and galaxy harassment. This indicates that a considerable fraction of dwarf elliptical galaxies in clusters are the results of transformation processes related to interactions with their host cluster. • ### Barred Galaxies in the Abell 901/2 Supercluster with STAGES(0904.3066) April 20, 2009 astro-ph.CO, astro-ph.GA We present a study of bar and host disk evolution in a dense cluster environment, based on a sample of ~800 bright (MV <= -18) galaxies in the Abell 901/2 supercluster at z~0.165. We use HST ACS F606W imaging from the STAGES survey, and data from Spitzer, XMM-Newton, and COMBO-17. We identify and characterize bars through ellipse-fitting, and other morphological features through visual classification. (1) We explore three commonly used methods for selecting disk galaxies. We find 625, 485, and 353 disk galaxies, respectively, via visual classification, a single component S'ersic cut (n <= 2.5), and a blue-cloud cut. In cluster environments, the latter two methods miss 31% and 51%, respectively, of visually-identified disks. 
(2) For moderately inclined disks, the three methods of disk selection yield a similar global optical bar fraction (f_bar-opt) of 34% +10%/-3%, 31% +10%/-3%, and 30% +10%/-3%, respectively. (3) f_bar-opt rises in brighter galaxies and those which appear to have no significant bulge component. Within a given absolute magnitude bin, f_bar-opt is higher in visually-selected disk galaxies that have no bulge as opposed to those with bulges. For a given morphological class, f_bar-opt rises at higher luminosities. (4) For bright early-types, as well as faint late-type systems with no evident bulge, the optical bar fraction in the Abell 901/2 clusters is comparable within a factor of 1.1 to 1.4 to that of field galaxies at lower redshifts (5) Between the core and the virial radius of the cluster at intermediate environmental densities, the optical bar fraction does not appear to depend strongly on the local environment density and varies at most by a factor of ~1.3. We discuss the implications of our results for the evolution of bars and disks in dense environments. • ### Obscured star formation in intermediate-density environments: A Spitzer study of the Abell 901/902 supercluster(0809.2042) Dec. 18, 2008 astro-ph We explore the amount of obscured star-formation as a function of environment in the A901/902 supercluster at z=0.165 in conjunction with a field sample drawn from the A901 and CDFS fields, imaged with HST as part of the STAGES and GEMS surveys. We combine the COMBO-17 near-UV/optical SED with Spitzer 24um photometry to estimate both the unobscured and obscured star formation in galaxies with Mstar>10^{10}Msun. We find that the star formation activity in massive galaxies is suppressed in dense environments, in agreement with previous studies. Yet, nearly 40% of the star-forming galaxies have red optical colors at intermediate and high densities. 
These red systems are not starbursting; they have star formation rates per unit stellar mass similar to or lower than blue star-forming galaxies. More than half of the red star-forming galaxies have low IR-to-UV luminosity ratios, relatively high Sersic indices and they are equally abundant at all densities. They might be gradually quenching their star-formation, possibly but not necessarily under the influence of gas-removing environmental processes. The other >40% of the red star-forming galaxies have high IR-to-UV luminosity ratios, indicative of high dust obscuration. They have relatively high specific star formation rates and are more abundant at intermediate densities. Our results indicate that while there is an overall suppression in the star-forming galaxy fraction with density, the small amount of star formation surviving the cluster environment is to a large extent obscured, suggesting that environmental interactions trigger a phase of obscured star formation, before complete quenching. • ### Simon's Algorithm, Clebsch-Gordan Sieves, and Hidden Symmetries of Multiple Squares(0808.0174) Aug. 1, 2008 quant-ph The first quantum algorithm to offer an exponential speedup (in the query complexity setting) over classical algorithms was Simon's algorithm for identifying a hidden exclusive-or mask. Here we observe how part of Simon's algorithm can be interpreted as a Clebsch-Gordan transform. Inspired by this we show how Clebsch-Gordan transforms can be used to efficiently find a hidden involution on the group G^n where G is the dihedral group of order eight (the group of symmetries of a square.) This problem previously admitted an efficient quantum algorithm but a connection to Clebsch-Gordan transforms had not been made. Our results provide further evidence for the usefulness of Clebsch-Gordan transform in quantum algorithm design.
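The hidden-mask recovery at the heart of Simon's algorithm is easy to illustrate: the quantum subroutine produces random n-bit strings y orthogonal (mod 2) to the hidden XOR mask s, and a purely classical step then solves those constraints for s. A minimal sketch of that classical step (my own illustration, not code from the paper; brute force stands in for the usual Gaussian elimination over GF(2)):

```python
def dot2(a, b):
    # Inner product of two bit strings modulo 2.
    return bin(a & b).count("1") % 2

def recover_mask(samples, n):
    # Find a nonzero s consistent with every constraint y . s = 0 (mod 2).
    # Brute force over candidates is fine at this illustrative scale; the
    # real classical step uses Gaussian elimination over GF(2).
    for s in range(1, 2 ** n):
        if all(dot2(y, s) == 0 for y in samples):
            return s
    return 0

n, s = 5, 0b10110
# Simulate the quantum subroutine's output: every y orthogonal to s.
samples = [y for y in range(2 ** n) if dot2(y, s) == 0]
recovered = recover_mask(samples, n)  # recovers 0b10110
```

Since the samples span the (n-1)-dimensional subspace orthogonal to s, the only nonzero solution is s itself.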
http://informationtransfereconomics.blogspot.com/2013/08/deriving-is-lm-model-from-information.html
## Thursday, August 8, 2013 ### Deriving the IS-LM model from information theory I would like to use this derivation to illustrate a point: the information transfer framework is more general than the specific application to a quantity theory of money that has made up the bulk of the blog posts over the past month or so. The framework allows you to build supply-and-demand based models in a rigorous way. I will use it here to build the IS-LM model. The IS-LM model attempts to explain the macroeconomy as the interaction between two markets: the Investment-Savings (goods) market and the Liquidity-Money Supply (money) market. The former effectively models the demand for goods with the interest rate functioning as the price (with what I can only guess is "aggregate supply" acting as the supply). The latter effectively models the demand for money with the interest rate functioning as the price (with the money supply acting as the supply). In the most basic version of the model, there is no real distinction made between the nominal and real interest rate. Economists might find my "acting as the supply" language funny. I am only using it because in the information transfer framework, we have to know where the information source is transferring information: "the supply" is the destination. In our case, we are looking at two markets with a single constant information source (the aggregate demand) transferring information to the money supply (in the LM market) and the aggregate supply (in the IS market) via the interest rate (a single information transfer detector). The equation that governs this process is given by Equations (8a,b) in this post: $$\text{(8a) }P= \frac{1}{\kappa }\frac{Q_0^d}{\left\langle Q^s\right\rangle }$$ $$\text{(8b) } \Delta Q^d=\frac{Q_0^d}{\kappa }\log \left(\frac{\left\langle Q^s\right\rangle }{Q_{\text{ref}}^s}\right)$$ However, each market employs these equations differently. The IS market is a fairly straightforward application. 
The price $P$ is replaced with the interest rate $r$, and the constant information source $Q^{d}_{0}$ becomes the equilibrium aggregate demand/output $Y^0$ (although we will also take it to be $Y^0 \rightarrow Y^0 + \Delta G$ in order to show the effects of a boost in government spending, which shifts the IS curve outward). The expected aggregate supply takes the place of $\langle Q^s \rangle$ and is the variable used to trace out the IS curve. It can be eliminated to give a relationship between the interest rate and the change in $Y$ ($\Delta Y$ put in the place of $\Delta Q^d$). Thus we obtain $$\log r = \log \frac{Y^0}{\kappa_{IS} IS_{ref}} - \kappa_{IS}\frac{\Delta Y}{Y^0}$$ The LM market employs an equilibrium condition in addition to Equations (8a,b), setting $\Delta Q^s = \Delta Q^d$ via the money supply $\Delta Q^s = \Delta M$ (this selects a point on the money demand curve). The constant information source $Q^{d}_{0}$ is still the equilibrium aggregate demand/output $Y^0$, but in the LM market we look at the curve traced out by the equilibrium point for shifts in the money demand curve (changing the "constant" information source, $Y^0 \rightarrow Y^0 + \Delta Y$). These two pieces of information allow us to write down the LM market equation: $$\log r = \log \frac{Y^0 + \Delta Y}{\kappa_{LM} LM_{ref}} - \kappa_{LM}\frac{\Delta M}{Y^0 + \Delta Y}$$ Plotting both of these equations we obtain the IS-LM diagram which behaves as it should for monetary and fiscal expansion: In both cases, $\kappa_{xx}$ and $XX_{ref}$ are constants that can be used to fit the model to data (I basically set them all to 1 because all I want to show here is behavior). The interest rate and output are in arbitrary units (effectively set by the constants). As an aside, there is an interesting effect in the model. It basically breaks down if $r = 0$ (in the thermodynamic analogy, it is like trying to describe a zero pressure system -- it doesn't have any particles in it).
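The two curve equations can be traced out numerically. Here is a minimal sketch in Python (my own illustration -- the original post does not include code), with all the constants set to 1 as in the text, locating the equilibrium where the curves cross and checking the response to monetary and fiscal expansion:

```python
import math

# All constants (kappa_IS, kappa_LM, IS_ref, LM_ref) are set to 1, as in
# the text; the interest rate and output are in arbitrary units.
Y0 = 1.0

def r_IS(dY, dG=0.0):
    # IS curve: log r = log(Y0'/(kappa_IS * IS_ref)) - kappa_IS * dY / Y0',
    # with Y0' = Y0 + dG (government spending shifts the IS curve outward).
    Yp = Y0 + dG
    return Yp * math.exp(-dY / Yp)

def r_LM(dY, dM=0.0):
    # LM curve: log r = log((Y0 + dY)/(kappa_LM * LM_ref))
    #                   - kappa_LM * dM / (Y0 + dY).
    Y = Y0 + dY
    return Y * math.exp(-dM / Y)

def equilibrium(dG=0.0, dM=0.0, lo=-0.9, hi=3.0):
    # r_IS falls and r_LM rises with dY, so bisect on their difference.
    f = lambda dY: r_IS(dY, dG) - r_LM(dY, dM)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

dY_base = equilibrium()       # baseline equilibrium: dY = 0, r = 1
dY_mon = equilibrium(dM=0.5)  # monetary expansion: output rises, r falls
dY_fis = equilibrium(dG=0.5)  # fiscal expansion: output rises, r rises
```

Monetary expansion shifts the LM curve (lowering the interest rate while raising output), while fiscal expansion shifts the IS curve (raising both), which is exactly the textbook IS-LM behavior.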
As it approaches zero, the LM curve (and the IS curve) flatten out, producing the liquidity trap effect in the IS-LM model as popularized by Paul Krugman. Here is the graph for a close approach to zero: This is not to say the zero lower bound problem is "correct" anymore than the IS-LM model is "correct". The results here only say that the IS-LM model is a perfectly acceptable model in the information transfer framework, which serves more to validate the framework (since IS-LM is an accepted part of economic theory ... economists may disagree whether it describes economic reality, but they agree that it e.g. belongs in economic textbooks). What use is couching the IS-LM model in information theory? In my personal opinion, this is far more rigorous than how the model appears in economics. It is also possible information theory could help give a new source of intuition. To that end, let me describe the IS-LM model  in the language of information theory: Aggregate demand acts as a constant information source sending a signal detected by the interest rate to both the aggregate supply and the money supply. Changes in aggregate demand are registered as changes in the information source in the LM market, but are registered in the response of the aggregate supply in the IS market [1]. Aggregate supply shifts to bring equilibrium to the IS market (the supply reads the information change), but M is set by the central bank and so does not automatically adjust. This creates a disequilibrium situation in which $I_{AD} = I_{AS}$ but $I_{AD} \neq I_{M}$; in order to restore equilibrium, either AD must return to its previous level or M must adjust (adjusting the interest rate) [2]. This defines what a recession is in the IS-LM model: a failure of the central bank to receive information (in information theory, we must have $I_{M} \leq I_{AD}$, i.e. the central bank cannot receive more information than is being transferred). A shift in output (e.g. 
by increasing government spending) is registered as a change in the information source in both the IS market and LM market so we can maintain $I_{AD} = I_{AS}$ and $I_{AD} = I_{M}$ by letting the interest rate adjust to the new equilibrium (e.g. crowding out). [1] This difference is due to a modeling choice in order to represent empirically observed behavior. [2] In a more complicated model there may be other possibilities. #### 1 comment: 1. Erratum: for some reason I labeled the LM market equation with IS and the IS market equation with LM. This has been corrected.
http://www.maths.usyd.edu.au/u/pubs/publist/pubs1977.html
# Research Publications for 1977 ## Books A1. Heyde CC and Seneta E: I.-J. Bienaymé: Statistical Theory Anticipated. Springer, New York, 1977. A3. Bruzek A and Durrant CJ: Illustrated Glossary for Solar and Solar-Terrestrial Physics. Reidel, Dordrecht, 1977, (Russian edition MIR, Moscow 1980). ## Chapters in Books B1. Durrant CJ: General theoretical terms, Illustrated Glossary for Solar and Solar-Terrestrial Physics. Bruzek A, Durrant CJ (ed.), Reidel, Dordrecht, 1977, 139–147. B1. Durrant CJ and Roxburgh IW: Solar interior, Illustrated Glossary for Solar and Solar-Terrestrial Physics. Bruzek A, Durrant CJ (ed.), Reidel, Dordrecht, 1977, 1–6. ## Journal Articles C1. Allen B, Anderssen RS and Seneta E: Computation of stationary measures for infinite Markov chains. TIMS Studies in the Management Sciences, 7 (1977), 13–23. (Invited paper for special issue on: Algorithmic Methods in Probability) MR0651742 C1. Barnes DW: First cohomology groups of soluble Lie algebras. Journal of Algebra, 46 (1977), 292–297. MR56:3078 C1. Barnes DW: First cohomology groups of $$p$$-soluble groups. Journal of Algebra, 46 (1977), 298–302. MR57:480 C1. Bennett MR, Fisher C, Florin T, Quine MP and Robinson J: The effect of calcium ions and temperature on the binomial parameters that control acetylcholine release by a nerve impulse at amphibian neuromuscular synapses. Journal of Physiology, 271 (1977), 641–672. C1. Camina AR and Gagen TM: Finite groups with maximal subgroups of odd order. Archiv der Mathematik, 28 (1977), 357–368. MR56:3122 C1. Camina AR and Gagen TM: A class of Frobenius regular groups. Archiv der Mathematik, 28 (1977), 449–454. MR56:478 C1. Cartwright DI and Lotz HP: Disjunkte Folgen in Banachverbänden und Kegel-absolut-summierende Operatoren. Archiv der Mathematik (Basel), 28 (1977), 525–532. MR58:2442 C1. Chatterjee S and Seneta E: Towards consensus: some convergence theorems on repeated averaging. Journal of Applied Probability, 14 (1977), 89–97. MR55:1475 C1. 
Choo KG: Grothendieck groups of twisted free associative algebras. Glasgow Mathematical Journal, 18 (1977), no.2, 193–196. MR56:416 C1. Choo KG: The projective class groups of certain semidirect products of free groups. Nanta Mathematica, 10 (1977), no.1, 44–46. MR57:12707 C1. Cosgrove CM: New family of exact stationary axisymmetric gravitational fields generalising the Tomimatsu-Sato solutions. Journal of Physics. A. Mathematical and General, 10 (1977), 1481–1524. MR58:20168 C1. Cosgrove CM: Limits of the generalised Tomimatsu-Sato gravitational fields. Journal of Physics. A. Mathematical and General, 10 (1977), 2093–2105. MR57:4961 C1. Dancer EN: Boundary value problems for ordinary differential equations on infinite intervals II. The Quarterly Journal of Mathematics. Oxford, Second Series, 28 (1977), 101–115. MR56:3395 C1. Dancer EN: On the Dirichlet problem for weakly nonlinear partial differential equations. Proceedings of the Royal Society of Edinburgh, Section A. Mathematics, 76A (1977), 283–300. MR58:17506 C1. Dimca A: Topologia intersectiilor complete II. Studii şi Cercetări Matematice. Mathematical Reports, 29 (1977), 3–15. MR0447267 C1. Dumont S, Omont A, Pecker JC and Rees DE: Resonance line polarization: the line core. Astronomy and Astrophysics, 54 (1977), 675–681. C1. Durham P and Quine MP: Estimation for multitype branching processes. Journal of Applied Probability, 14 (1977), 829–835. MR58:24789 C1. Durrant CJ: Flows in magnetic flux tubes. Highlights of Astronomy, 4 (1977), 267–270. C1. Fackerell ED and Crossman RG: Spin–weighted spheroidal functions. Journal of Mathematical Physics, 18 (1977), 1849–1854. MR55:13988 C1. Field MJ: Transversality in $$G$$-manifolds. Transactions of the American Mathematical Society, 231 (1977), 429–450. MR56:9563 C1. Field MJ: Stratifications of equivariant varieties. Bulletin of the Australian Mathematical Society, 16 (1977), 279–295. MR58:18532 C1. 
Fraser WB: On the orthogonality of the axisymetric axial eigenfunctions for an elastic circular cylinder. Mechanics Research Communications, 4 (1977), 303–307. MR0464836 C1. Galloway DJ, Proctor MRE and Weiss NO: Formation of intense magnetic fields near the surface of the Sun. Nature, 266 (1977), 686–689. C1. Goodwin PB and Hutchinson TP: The risk of walking. Transportation, 6 (1977), 217–230. C1. Hartley B and Richardson JS: The socle in group rings. Journal of the London Mathematical Society, Second Series, 15 (1977), 51–54. MR55:10507 C1. Hillman JA: A non homology boundary link with zero Alexander polynomial. Bulletin of the Australian Mathematical Society, 16 (1977), 229–236. MR56:12300 C1. Hillman JA: High dimensional knot groups which are not two-knot groups. Bulletin of the Australian Mathematical Society, 16 (1977), 449–462. MR58:31098 C1. Hudson IL and Seneta E: A note on simple branching processes with infinite mean. Journal of Applied Probability, 14 (1977), 836–842. MR57:1677 C1. Hutchinson TP: Application of Kendall's partial tau to a problem in accident analysis. International Journal of Bio-Medical Computing, 8 (1977), 277–281. C1. Hutchinson TP: Intra-accident correlations of driver injury and their application to the effect of mass ratio on injury severity. Accident Analysis and Prevention, 9 (1977), 217–227. C1. Hutchinson TP: Latent structure models applied to the joint distribution of drivers' injuries in road accidents. Statistica Neerlandica, 31 (1977), 105–111. C1. Hutchinson TP: Universities Transport Study Group: A conference report. Traffic Engineering and Control, 18 (1977), 211. MR1240669 C1. Hutchinson TP: On the relevance of signal detection theory to the correction for guessing. Contemporary Educational Psychology, 2 (1977), 50–54. C1. Hutchinson TP: The method of $$m$$ rankings when the numbers of observations in each cell are not all unity. Computers and Biomedical Research, 10 (1977), 345–361. C1. 
Hutchinson TP and Mayne AJ: The year-to-year variability in the numbers of road accidents. Traffic Engineering and Control, 18 (1977), 432–433. C1. Hutchinson TP and Satterthwaite SP: Mathematical models for describing the clustering of sociopathy and hysteria in families: A comment on the recent paper by Cloninger et al. British Journal of Psychiatry, 130 (1977), 294–297. C1. John RD and Quine MP: Computing the root of an equation occurring in queueing theory and branching processes. INFOR, 15 (1977), 72–75. C1. Kuo TC and Lu YC: On Analytic Function Germs of Two Complex Variables. Topology, 15 (1977), 299–310. MR57:704 C1. Macaskill C and Tuck EO: Evaluation of the acoustic impedance of a screen. Journal of the Australian Mathematical Society, (Series B), 20 (1977), 46–61. MR57:8443 C1. Mack JM: Simultaneous diophantine approximation. Journal of the Australian Mathematical Society, 24 (1977), 266–285. MR57:12411 C1. McMullen JR and Price JF: Reversible Hypergroups. Rendiconti del Seminario Matematico e Fisico di Milano, 47 (1977), 68–85. MR80c:20010 C1. O'Brian NR: Zeroes of holomorphic vector fields and Grothendieck duality theory. Transactions of the American Mathematical Society, 229 (1977), 289–306. MR56:3900 C1. Richardson JS: Group rings with non-zero socle. Proceedings of the London Mathematical Society, Third Series, 36 (1977), 385–406. MR57:6162 C1. Robinson J: Large deviation probabilities for samples from a finite population. The Annals of Probability, 5 (1977), 913–925. MR56:6804 C1. Taylor DE: Regular 2-graphs. Proceedings of the London Mathematical Society, Third Series, 35 (1977), 257–274. MR57:16147 ## Encyclopaedia Entries D. Heyde CC and Seneta E: Bienaymé, Irenee-Jules, Dictionary of Scientific Biography. 15, Scribner's, New York, 1977, 30–33. ## Conference Proceedings E1. Conlon SB (1977) Nonabelian subgroups of prime power order of classical groups of the same prime degree. 
Group Theory (Canberra 1975), Lecture Notes in Mathematics, Springer-Verlag, Berlin, 573, 17–50. MR57:6212 E1. Galloway DJ (1977) Axisymmetric convection with a magnetic field. Problems of Stellar Convection, Zahn J-P, Spiegel EA eds (ed.) , Springer Lecture Notes in Physics, 71, 188–194. E1. Herzog M and Lehrer GI (1977) A note concerning Coxeter groups and permutations. Proc. Mini Conf. Canberra 1975, Lecture Notes in Mathematics, Springer, 573, 53–56. MR56:8716 E1. Macaskill C (1977) Reflection and transmission of water waves by a submerged shelf of prescribed shape. Proceedings of the 6th Australian Conference on Hydraulics and Fluid Mechanics MR0468611 E1. Rees DE (1977) Resonance Line Polarization in Finite Atmospheres. Lund Workshop, Measurements and Interpretation of Polarization Arising in the Solar Chromosphere and Corona, Stenflo JO (ed.) , 25–33. E1. Taylor DE (1977) Groups whose modular group rings have soluble unit groups. Lecture Notes in Mathematics, Springer-Verlag, Berlin-Heidelberg-New York, 573, 112–117. MR56:15746 E1. Ward JN (1977) A note on the Todd-Coxeter algorithm. Lecture Notes in Mathematics, 573, 126–129. MR0447411
https://math.stackexchange.com/questions/536148/ring-homomorphism-homework-including-ideals-and-surjectivity
# Ring homomorphism homework including ideals and surjectivity.

$R$ is a ring and $I$ and $J$ are ideals of $R$. Show that the ring homomorphism $h:R \rightarrow R/I \times R/J, r \mapsto (r+I,r+J)$ is surjective iff $I+J=R$, and give a description of the kernel of $h$ in terms of the ideals $I$ and $J$.

I have some ideas about this: it's basically saying that the elements of $R$ get sent to the cosets $r+I$ and $r+J$, hence if $h$ is surjective then the pairs $(r+I,r+J)$ must hit every element of $R/I \times R/J$, in which case $I+J=R$. But I don't feel this is very rigorous, or know if it is in fact true. I also feel like it may follow from one of the isomorphism theorems. Not really sure how to go about the kernel part.

1. If $h$ is surjective, then $(0+I,1+J)=h(r)$ for some $r$.
2. If $I+J=R$, can you write $(x+I,y+J)=(r+I,r+J)$ for some $r$?

Assume $I+J=R$. Let arbitrary $(a+I,b+J)\in R/I\times R/J$ be given. As $I+J=R$, there exist $i\in I$, $j\in J$ with $a-b=i+j$. Let $r=a-i=b+j$. Then $h(r)=(r+I,r+J)=(a+I,b+J)$. We conclude that $h$ is surjective.

Now assume that $h$ is surjective. Let $a\in R$ be given. We want to find $i\in I$, $j\in J$ with $a=i+j$. By assumption, $h(r)=(a+I,0+J)$ for some $r$. This implies $r+J=J$, i.e. $r\in J$, and $a+I=r+I$, i.e. $a-r\in I$. Let $j=r$ and $i=a-r$.
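For the kernel part: $h(r)=(I,J)$ precisely when $r\in I$ and $r\in J$, so $\ker h = I\cap J$. A quick numerical sanity check of both statements (my own example, not from the answers above), taking $R=\mathbb{Z}$, $I=2\mathbb{Z}$, $J=3\mathbb{Z}$, where $I+J=\mathbb{Z}$ since $\gcd(2,3)=1$:

```python
from itertools import product

# R = Z (a finite window of it), I = 2Z, J = 3Z; here I + J = Z.
# h(r) = (r mod 2, r mod 3) should be surjective onto Z/2 x Z/3,
# with kernel I ∩ J = 6Z.

window = range(-30, 30)

image = {(r % 2, r % 3) for r in window}
surjective = image == set(product(range(2), range(3)))  # True

kernel = {r for r in window if r % 2 == 0 and r % 3 == 0}
kernel_is_intersection = kernel == {r for r in window if r % 6 == 0}  # True
```

This is just the Chinese remainder theorem in its ring-theoretic form: $R/(I\cap J)\cong R/I\times R/J$ when $I+J=R$.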
https://www.ann-geophys.net/24/1159/2006/
Annales Geophysicae — an interactive open-access journal of the European Geosciences Union

Ann. Geophys., 24, 1159–1173, 2006 https://doi.org/10.5194/angeo-24-1159-2006 Special issue: MaCWAVE 03 Jul 2006

# The MaCWAVE program to study gravity wave influences on the polar mesosphere

R. A. Goldberg1, D. C. Fritts2, F. J. Schmidlin3, B. P. Williams2, C. L. Croskey4, J. D. Mitchell4, M. Friedrich5, J. M. Russell III6, U. Blum7, and K. H. Fricke8

• 1NASA/Goddard Space Flight Center, Code 612.3, Greenbelt, MD 20771, USA
• 2NorthWest Research Assoc., Colorado Research Associates Div., Boulder, CO 80301, USA
• 3NASA/Goddard Space Flight Center, Wallops Flight Facility, Code 972, Wallops Island, VA 23337, USA
• 4Pennsylvania State University, Department of Electrical Engineering, University Park, PA 16802, USA
• 5Graz University of Technology, A-8010 Graz, Austria
• 6Hampton University, Center for Atmospheric Research, Hampton, VA 23681, USA
• 7Forsvarets forskningsinstitutt, Postboks 25, NO-2027 Kjeller, Norway
• 8Physikalisches Institut der Universität Bonn, D-53115 Bonn, Germany

Abstract. MaCWAVE (Mountain and Convective Waves Ascending VErtically) was a highly coordinated rocket, ground-based, and satellite program designed to address gravity wave forcing of the mesosphere and lower thermosphere (MLT). The MaCWAVE program was conducted at the Norwegian Andøya Rocket Range (ARR, 69.3° N) in July 2002, and continued at the Swedish Rocket Range (Esrange, 67.9° N) during January 2003.
Correlative instrumentation included the ALOMAR MF and MST radars and RMR and Na lidars, Esrange MST and meteor radars and RMR lidar, radiosondes, and TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) satellite measurements of thermal structures. The data have been used to define both the mean fields and the wave field structures and turbulence generation leading to forcing of the large-scale flow. In summer, launch sequences coupled with ground-based measurements at ARR addressed the forcing of the summer mesopause environment by anticipated convective and shear generated gravity waves. These motions were measured with two 12-h rocket sequences, each involving one Terrier-Orion payload accompanied by a mix of MET rockets, all at ARR in Norway. The MET rockets were used to define the temperature and wind structure of the stratosphere and mesosphere. The Terrier-Orions were designed to measure small-scale plasma fluctuations and turbulence that might be induced by wave breaking in the mesosphere. For the summer series, three European MIDAS (Middle Atmosphere Dynamics and Structure) rockets were also launched from ARR in coordination with the MaCWAVE payloads. These were designed to measure plasma and neutral turbulence within the MLT. The summer program exhibited a number of indications of significant departures of the mean wind and temperature structures from "normal" polar summer conditions, including an unusually warm mesopause and a slowing of the formation of polar mesospheric summer echoes (PMSE) and noctilucent clouds (NLC). This was suggested to be due to enhanced planetary wave activity in the Southern Hemisphere and a surprising degree of inter-hemispheric coupling. The winter program was designed to study the upward propagation and penetration of mountain waves from northern Scandinavia into the MLT at a site favored for such penetration.
As the major response was expected to be downstream (east) of Norway, these motions were measured with similar rocket sequences to those used in the summer campaign, but this time at Esrange. However, a major polar stratospheric warming just prior to the rocket launch window induced small or reversed stratospheric zonal winds, which prevented mountain wave penetration into the mesosphere. Instead, mountain waves encountered critical levels at lower altitudes and the observed wave structure in the mesosphere originated from other sources. For example, a large-amplitude semidiurnal tide was observed in the mesosphere on 28 and 29 January, and appears to have contributed to significant instability and small-scale structures at higher altitudes. The resulting energy deposition was found to be competitive with summertime values. Hence, our MaCWAVE measurements as a whole are the first to characterize influences in the MLT region of planetary wave activity and related stratospheric warmings during both winter and summer.
2019-09-21 21:22:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2240179181098938, "perplexity": 12514.459852170858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574665.79/warc/CC-MAIN-20190921211246-20190921233246-00365.warc.gz"}
https://www.cymath.com/blog/2017-02-20
# Problem of the Week

## Updated at Feb 20, 2017 4:59 PM

To get more practice in calculus, we brought you this problem of the week: how can we find the derivative of $\frac{x^4}{2}$? Check out the solution below!

$$\frac{d}{dx} \frac{x^4}{2}$$

1. Use the Constant Factor Rule: $\frac{d}{dx} cf(x)=c\left(\frac{d}{dx} f(x)\right)$.

$$\frac{1}{2}\left(\frac{d}{dx} x^4\right)$$

2. Use the Power Rule: $\frac{d}{dx} x^n=nx^{n-1}$.

$$2x^3$$

Done: the derivative is $2x^3$.
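Not part of the original page — a quick numerical check of the result using only the standard library: the claimed derivative $2x^3$ should match a central finite-difference slope of $f(x)=x^4/2$ at a few sample points.

```python
f = lambda x: x**4 / 2
fprime = lambda x: 2 * x**3   # the claimed derivative

# Central finite difference as a numerical check of f'(x).
h = 1e-6
for x in (-2.0, -0.5, 1.0, 3.0):
    approx = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(approx - fprime(x)) < 1e-4
print("2*x^3 matches the finite-difference slope at all sample points")
```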
2021-09-16 11:11:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6720107197761536, "perplexity": 4018.0283492421595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00358.warc.gz"}
https://hal-insu.archives-ouvertes.fr/insu-03691473
Spiral Structure and Differential Dust Size Distribution in the LKHα 330 Disk - Archive ouverte HAL

Journal Articles — The Astronomical Journal, Year: 2016

## Spiral Structure and Differential Dust Size Distribution in the LKHα 330 Disk

Authors: Eiji Akiyama, Jun Hashimoto, Hauyu Baobab Liu, Jennifer I-Hsiu Li, Ruobing Dong, Yasuhiro Hasegawa, Thomas Henning, Michael L. Sitko, Markus Janson, Markus Feldt, John Wisniewski, Tomoyuki Kudo, Nobuhiko Kusakabe, Takashi Tsukagoshi, Munetake Momose, Takayuki Muto, Tetsuo Taki, Masayuki Kuzuhara, Mayama Satoshi, Michihiro Takami, Nagayoshi Ohashi, Jungmi Kwon, Christian Thalmann, Lyu Abe, Wolfgang Brandner, Timothy D. Brandt, Joseph C. Carson, Sebastian Egner, Miwa Goto, Olivier Guyon, Yutaka Hayano, Masahiko Hayashi, Saeko S. Hayashi, Klaus W. Hodapp, Miki Ishii, Masanori Iye, Gillian R. Knapp, Ryo Kandori, Taro Matsuo, Michael W. Mcelwain, Shoken Miyama, Jun-Ichi Morino, Amaya Moro-Martin, Tetsuo Nishimura, Tae-Soo Pyo, Eugene Serabyn, Takuya Suenaga, Hiroshi Suto, Ryuji Suzuki, Yasuhiro H. Takahashi, Naruhisa Takato, Daigo Tomono, Edwin L. Turner, Makoto Watanabe, Hideki Takami, Tomonori Usuda, Motohide Tamura

#### Abstract

Dust trapping accelerates the coagulation of dust particles, and, thus, it represents an initial step toward the formation of planetesimals. We report H-band (1.6 μm) linear polarimetric observations and 0.87 mm interferometric continuum observations toward a transitional disk around LkHα 330. As a result, a pair of spiral arms were detected in the H-band emission, and an asymmetric (potentially arm-like) structure was detected in the 0.87 mm continuum emission. We discuss the origin of the spiral arm and the asymmetric structure and suggest that a massive unseen planet is the most plausible explanation. The possibility of dust trapping and grain growth causing the asymmetric structure was also investigated through the opacity index (β) by plotting the observed spectral energy distribution slope between 0.87 mm from our Submillimeter Array observation and 1.3 mm from the literature. The results imply that grains are indistinguishable from interstellar medium-like dust in the east side (β = 2.0 ± 0.5) but are much smaller in the west side (β = 0.7 +0.5/−0.4), indicating differential dust size distribution between the two sides of the disk. Combining the results of near-infrared and submillimeter observations, we conjecture that the spiral arms exist at the upper surface and an asymmetric structure resides in the disk interior. Future observations at centimeter wavelengths and differential polarization imaging in other bands (Y-K) with extreme AO imagers are required to understand how large dust grains form and to further explore the dust distribution in the disk.
#### Domains

Sciences of the Universe [physics]

### Dates and versions

insu-03691473, version 1 (09-06-2022)

### Identifiers

• HAL Id : insu-03691473, version 1

### Cite

Eiji Akiyama, Jun Hashimoto, Hauyu Baobab Liu, Jennifer I-Hsiu Li, Michael Bonnefoy, et al.. Spiral Structure and Differential Dust Size Distribution in the LKHα 330 Disk. The Astronomical Journal, 2016, 152, ⟨10.3847/1538-3881/152/6/222⟩. ⟨insu-03691473⟩
2023-02-01 15:50:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518136143684387, "perplexity": 13808.57699253557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00082.warc.gz"}
https://stats.stackexchange.com/questions/214978/looking-for-proof-of-conditional-dependence-when-the-conditioning-variables-are
Looking for proof of conditional dependence, when the conditioning variables are linearly related Suppose we have three random variables, $X$, $Y_1$, and $e$ (for error). Variable $e$ is independent of $X$ and $Y_1$, but $X$ and $Y_1$ are dependent. Further suppose we construct a new mixture variable $Y_2$ which is $Y_1$ observed under error $e$, assuming the additive functional form $$Y_2= Y_1+e.$$ Now I suspect that $$P(X|Y_1,Y_2) \ne P(X|Y_2).$$ I am looking for ways to prove this statement. Edit (additional information): In the special case I am interested in $X$ is discrete (Bernoulli or Binomial) and $Y_1$ and $e$ are normally distributed. • Note I gave one more information to make the question more specific for my case ($X$ is binary, $Y_1$ and $e$ are bivariate normal). It would be interesting to discuss my question in general or in this particular situation. The solution suggested by @Chaconne concerns the situation when $X$ is normal and was given before I added the additional information. – tomka May 30 '16 at 10:36 We can use a graphical model to answer questions like this. I just made this in MS paint so it doesn't look very nice, but the idea should be clear. I also renamed your $E$ as $e$ just so there's no confusion with the expectation operator if that gets involved. If I've understood your question correctly, we have that $X$ and $Y_1$ are dependent, so there is an edge connecting them. We also have that $Y_2$ is correlated with $Y_1$ and $e$ so there are those two edges. There are no other dependencies. I used a directed graphical model because it seems like you want to view $Y_2$ as being "caused" by $Y_1$ and $e$. You really want to know if $X \perp Y_1 \vert Y_2$. Looking at the graphical model, we can say that this is not the case. Cover up the node representing $Y_2$: there is still a pathway between $X$ and $Y_1$. That means that there is still a relationship between $X$ and $Y_1$ even when $Y_2$ is known. 
This can all be made much more rigorous if you are looking to make this a proof.

Update: here's a counterexample using first principles. Let's say $(X, Y, e) \sim \mathcal N_3(\vec 0, \Sigma)$ where $$\Sigma = \left[ {\begin{array}{ccc} 1 & \rho & 0\\ \rho & 1 & 0 \\ 0&0&1 \end{array} } \right]$$ so that $X$ and $Y$ are correlated and both are independent of $e$. We know that for $Z \sim \mathcal N(0, \Omega)$ $$f_Z(z) \propto \exp(-\frac{1}{2} z^t \Omega^{-1} z).$$ Evaluating this with our particular covariance matrix (the inverse of $\Sigma$ has $-\rho/(1-\rho^2)$ in its off-diagonal, so the cross term picks up a minus sign) gives $$f_{(X,Y,e)}(x, y, e) \propto \exp\left[-\frac{1}{2}\left( \frac{x^2}{1-\rho^2} - 2 \frac{xy\rho}{1-\rho^2} + \frac{y^2}{1-\rho^2} + e^2 \right)\right]$$

Now let $(U, V, W) = (X, Y, Y + e)$. This is a linear transformation with unit Jacobian, so we don't need to worry about a Jacobian factor, and we simply get $$f_{(U,V,W)}(u, v, w) = f_{(X,Y,e)}(u, v, w-v)$$ which means that $$f_{(U,V,W)}(u,v,w) \propto \exp\left[-\frac{1}{2}\left( \frac{u^2}{1-\rho^2} - 2 \frac{uv\rho}{1-\rho^2} + \frac{v^2}{1-\rho^2} + (w-v)^2 \right)\right].$$

$X \perp Y_1 \vert Y_2$ in this example is equivalent to checking if $U \perp V \vert W$, which means that we would need to be able to factor $f_{(U,V,W)}(u,v,w)$ so that there are no terms involving both $u$ and $v$. Clearly when $\rho \neq 0$ this is impossible, and therefore we have a counterexample to the claim.

• Thanks, I was already thinking along these lines too (hence the tag on graphical models I added). I would be interested in proving this beyond the figure (which makes perfect sense in itself but is not exactly statistical without knowledge of Pearl's book). – tomka May 27 '16 at 13:33
• The approach you deleted looked interesting, what was wrong about it? – tomka May 27 '16 at 14:39
• I think there was a mistake in the covariance matrix after the transformation and I wanted to fix it offline. – jld May 27 '16 at 14:40
• @ThomasKlausch Ok I redid it.
Hopefully it's all correct now – jld May 27 '16 at 15:09 • My understanding was that you were trying to show that it is not the case that $X \perp Y_1 \vert Y_2$. A counterexample is sufficient to show this. I certainly did not characterize all random variables such that this is true, but I have proven that it cannot be that for all random variables with the dependencies that you've described $X \perp Y_1 \vert Y_2$. Maybe I misunderstood what you were trying to do here? – jld May 27 '16 at 15:24
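A numerical companion to the answer above (my own sketch, not from the thread; the correlation ρ = 0.8 and the sample size are arbitrary choices): estimate the partial correlation of X and Y1 given Y2 by correlating regression residuals. For this covariance structure, theory gives $(\rho/2)/\sqrt{(1-\rho^2/2)/2} \approx 0.686$, so the estimate should stay far from zero, confirming the conditional dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.8

# X and Y1 are correlated standard normals; e is independent noise.
x = rng.standard_normal(n)
y1 = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y2 = y1 + rng.standard_normal(n)          # Y2 = Y1 + e

# Partial correlation of X and Y1 given Y2: correlate the residuals
# left after regressing each variable on Y2.
def residual(a, b):
    return a - (np.cov(a, b)[0, 1] / b.var()) * b

r = float(np.corrcoef(residual(x, y2), residual(y1, y2))[0, 1])
print(f"partial corr(X, Y1 | Y2) ~ {r:.3f}  (theory: ~0.686)")
```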
2020-05-25 03:56:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229168653488159, "perplexity": 142.94058339244967}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387219.0/warc/CC-MAIN-20200525032636-20200525062636-00015.warc.gz"}
https://nebusresearch.wordpress.com/
## Proportional Dice

So, here's a nice probability problem that recently made it to my Twitter friends page: (By the way, I'm @Nebusj on Twitter. I'm happy to pick up new conversational partners even if I never quite feel right starting to chat with someone.) Schmidt does assume normal, ordinary, six-sided dice for this. You can work out the problem for four- or eight- or twenty- or whatever-sided dice, with most likely a different answer. But given that, the problem hasn't quite got an answer right away. Reasonable people could disagree about what it means to say "if you roll a die four times, what is the probability you create a correct proportion?" For example, do you have to put the die result in a particular order? Or can you take the four numbers you get and arrange them any way at all? This is important. If you have the numbers 1, 4, 2, and 2, then obviously 1/4 = 2/2 is false. But rearrange them to 1/2 = 2/4 and you have something true. We can reason this out. We can work out how many ways there are to throw a die four times, and so how many different outcomes there are. Then we count the number of outcomes that give us a valid proportion. That count divided by the number of possible outcomes is the probability of a successful outcome. It's getting a correct count of the desired outcomes that's tricky.

#### howardat58, 3:14 pm on Wednesday, 10 February, 2016
Vegetarians clearly have different definitions.

#### Chiaroscuro, 4:00 pm on Wednesday, 10 February, 2016
So, let's make these A/B=C/D for the dice, assuming in-order rolls. 1296 possibilities. If A=C and B=D, it'll always work. So that's 36. Additionally: If A=B and C=D, it'll always work (1=1). So that's 36, minus the 6 where A=B=C=D. Then 1/2=2/4 (and converse and inverse and both), 1/2=3/6 (same), 2/4=3/6 (same), 1/3=2/6 (same). 4, 4, 4, 4. So 16 total. 36+30+16=82, unless I've missed some. 82/1296, which reduces to 41/648.
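(Not part of the comment thread.) The in-order count is easy to settle by brute force: $a/b = c/d$ exactly when $ad = bc$, which also avoids any floating-point division.

```python
from itertools import product

# Count ordered rolls (a, b, c, d) of a six-sided die forming a correct
# proportion a/b = c/d, checked via the cross-product a*d == b*c.
hits = sum(a * d == b * c for a, b, c, d in product(range(1, 7), repeat=4))
print(hits, "/ 1296")  # prints: 86 / 1296  (which reduces to 43/648)
```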
#### Chiaroscuro, 4:02 pm on Wednesday, 10 February, 2016
Ooh! I missed 2/3=4/6 (and converse, and inverse, and both). So another 4, meaning 86/1296.

## Reading the Comics, February 6, 2016: Lottery Edition

As mentioned, the lottery was a big thing a couple of weeks ago. So there were a couple of lottery-themed comics recently. Let me group them together. Comic strips tend to be anti-lottery. It's as though people trying to make a living drawing comics for newspapers are skeptical of wild long-shot dreams.

#### fluffy, 7:01 pm on Thursday, 28 January, 2016
I didn't know that was necessary. So I do like $a^3 + b^3 = c^3$? A preview function would be nice.

#### Joseph Nebus, 10:36 pm on Thursday, 28 January, 2016
That's the way, yes. WordPress's commenting system certainly needs a preview function and an edit button. (There might be other themes that have preview functions. The one I'm using here is a bit old-fashioned and it might predate a preview. I know it isn't really mobile-friendly.)

#### Joseph Nebus, 10:32 pm on Tuesday, 26 January, 2016
It's certainly possible. Starting from $\frac{N}{x} = x - N$ we get the equation $0 = x^2 - Nx - N$ and that has solutions $x = \frac{1}{2} \left( N \pm \sqrt{N^2 + 4N}\right)$. I don't seem able to include the table that would list the first couple of these without breaking the commenting system. But it's easy to generate from that start. The x_N for a given N gets to be quite close to N+1.

#### Eric Mann, 11:55 am on Wednesday, 27 January, 2016
These x_N's are nice little multiples of the (dare I say) golden ratio, yes? They will arise from a rectangle with dimensions of x and N with an embedded N by N square.

#### Joseph Nebus, 10:35 pm on Thursday, 28 January, 2016
I don't see any link between these x_N's and the golden ratio. Could you tell me what you see, please?
#### Eric Mann, 11:51 am on Wednesday, 27 January, 2016
Well done. I love the inquiry. What do the rectangles look like? Or, is there still a geometric interpretation? I admit I still find the sequence of rectangles with Fibonacci dimensions and an embedded spiral attractive. I appreciate it in the limit. I am not attached to the ratio being golden with a capital g, but is it just a curiosity? An attraction? I expect more out of the ratio, rectangle, and spiral.

#### Joseph Nebus, 10:32 pm on Thursday, 28 January, 2016
Well, rectangles in these gilt-ratio proportions would be longer and skinnier things. For example, you might see a rectangle that's one inch wide and 20.049(etc) inches long. I doubt anyone could tell the difference between that and a rectangle that's one inch wide, 20 inches long, though. I do think it's just a curiosity, an attractive-looking number. Or family of numbers, if you open up to these sorts of variations. There's nothing wrong with looking at something that's just attractive, though. It's fun, for one thing. And the thinking done about one problem surely helps one practice for other problems. I was writing recently about the Collatz Conjecture. As far as I know nothing interesting depends on the conjecture being true or false, but it's still enjoyable.

## Reading the Comics, January 21, 2016: Andertoons Edition

It's been a relatively sleepy week from Comic Strip Master Command. Fortunately, Mark Anderson is always there to save me. In the Andertoons department for the 17th of January, Mark Anderson gives us a rounding joke. It amuses me and reminds me of the strip about rounding up the 196 cows to 200 (or whatever it was). But one of the commenters was right: 800 would be an even rounder number. If the teacher's sharp he thought of that next. Andertoons is back the 21st of January, with a clash-of-media-expectations style joke.
Since there's not much to say of that, I am drawn to wondering what the teacher was getting to with this diagram. The obvious-to-me thing to talk about with two lines intersecting would be which sets of angles are equal to one another, and how to prove it. But to talk about that easily requires giving names to the diagram. Giving the intersection point the name Q is a good start, and P and R are good names for the lines. But without points on the lines identified, and named, it's hard to talk about any of the four angles there. If the lesson isn't about angles, if it's just about the lines and their one point of intersection, then what's being addressed? Of course other points, and labels, could be added later. But I'm curious if there's an obvious and sensible lesson to be given just from this starting point. If you have one, write in and let me know, please.

Ted Shearer's Quincy for the 19th of January (originally the 4th of November, 1976) sees a loss of faith in the Law of Averages. We all sympathize. There are several different ways to state the Law of Averages. These different forms get at the same idea: on average, things are average. More, if we go through a stretch when things are not average, then, we shouldn't expect that to continue. Things should be closer to average next time. For example. Let's suppose in a typical week Quincy's teacher calls on him ten times, and he's got a 50-50 chance of knowing the answer for each question. So normally he's right five times. If he had a lousy week in which he knew the right answer just once, yes, that's dismal-feeling. We can be confident that next week, though, he's likely to put in a better performance. That doesn't mean he's due for a good stretch, though. By symmetry he's exactly as likely next week to get three questions right as he is to get seven right, and eight right is actually rarer than three. Eight feels fantastic. But three is only a bit less dismal-feeling than one.
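(Not part of the original post.) The arithmetic behind Quincy's example — ten questions, each an independent 50-50 shot — works out with nothing but the standard library:

```python
from math import comb

# Chance of k right answers out of 10, each an independent 50-50 shot.
p = lambda k: comb(10, k) / 2**10

print(f"P(5) = {p(5):.3f}")                     # 0.246 -- the typical week
print(f"P(3) = {p(3):.3f}, P(7) = {p(7):.3f}")  # 0.117 each, by symmetry
print(f"P(8) = {p(8):.3f}, P(1) = {p(1):.3f}")  # 0.044 and 0.010
```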
The Gambler’s Fallacy, which is one of those things everyone wishes to believe in when they feel they’re due, is that eight right answers should be more likely than three. After all, that’ll make his two-week average closer to normal. But if Quincy’s as likely to get any question right or wrong, regardless of what came before, then he can’t be more likely to get eight right than to get three right. All we can say is he’s more likely to get three or eight right than he is to get one (or nine) right the next week. He’d better study. (I don’t talk about this much, because it isn’t an art blog. But I would like folks to notice the line art, the shading, and the grey halftone screening. Shearer puts in some nicely expressive and active artwork for a joke that doesn’t need any setting whatsoever. I like a strip that’s pleasant to look at.) Tom Toles’s Randolph Itch, 2 am for the 19th of January (a rerun from the 18th of April, 2000) has got almost no mathematical content. But it’s funny, so, here. The tag also mentions Max Planck, one of the founders of quantum mechanics. He developed the idea that there was a smallest possible change in energy as a way to make the mathematics of black-body radiation work out. A black-body is just what it sounds like: get something that absorbs all light cast on it, and shine light on it. The thing will heat up. This is expressed by radiating light back out into the world. And if it doesn’t give you that chill of wonder to consider that a perfectly black thing will glow, then I don’t think you’ve pondered that quite enough. Mark Pett’s Mister Lowe for the 21st of January (a rerun from the 18th of January, 2001) is a kid-resisting-the-word-problem joke. It’s meant to be a joke about Quentin overthinking the situation until he gets the wrong answer. Were this not a standardized test, though, I’d agree with Quentin. The given answers suppose that Tommy and Suzie are always going to have the same number of apples. 
But is inferring that a fair thing to expect from the test-takers? Why couldn't Suzie get four more apples and Tommy none? Probably the assumption that Tommy and Suzie get the same number of apples was left out because Pett had to get the whole question in within one panel. And I may be overthinking it no less than Quentin is. I can't help doing that. I do like that the confounding answers make sense: I can understand exactly why someone making a mistake would make those. Coming up with plausible wrong answers for a multiple-choice test is no less difficult in mathematics than it is in other fields. It might be harder. It takes effort to remember the ways a student might plausibly misunderstand what to do. Test-writing is no less a craft than is test-taking.

#### tkflor, 4:46 am on Saturday, 23 January, 2016
About black body radiation – for most practical purposes, a black body at room temperature does not emit radiation in the visible range. (https://en.wikipedia.org/wiki/Black-body_radiation) So, we won't see "light" or "glow".

#### Joseph Nebus, 5:03 am on Sunday, 24 January, 2016
This is true, and I should have been clear about that. It glows in the sense that if you could look at the right part of the spectrum something would be detectable. It's nevertheless an amazing thought.

#### Barb Knowles, 6:20 pm on Saturday, 23 January, 2016
This cartoon is GREAT! I'm going to show it to my math colleagues.

#### Joseph Nebus, 5:04 am on Sunday, 24 January, 2016
Glad you liked! I hope you make good use of it.

## Some More Mathematics Stuff To Read

And some more easy reading, because, why not? First up is a new Twitter account from Chris Lusto (Lustomatical), a high school teacher with interest in Mathematical Twitter. He's constructed the Math-Twitter-Blog-o-Sphere Bot, which retweets postings of mathematics blogs.
They’re drawn from his blogroll, and a set of posts comes up a couple of times per day. (I believe he’s running the bot manually, in case it starts malfunctioning, for now.) It could be a useful way to find something interesting to read, or if you’ve got your own mathematics blog, a way to let other folks know you want to be found interesting. Also possibly of interest is Gregory Taylor’s Any ~Qs comic strip blog. Taylor is a high school teacher and an amateur cartoonist. He’s chosen the difficult task of drawing a comic about “math equations as people”. It’s always hard to do a narrowly focused web comic. You can see Taylor working out the challenges of writing and drawing so that both story and teaching purposes are clear. I would imagine, for example, people to giggle at least at “tangent pants” even if they’re not sure what a domain restriction would have to do with anything, or even necessarily mean. But it is neat to see someone trying to go beyond anthropomorphized numerals in a web comic. And, after all, Math With Bad Drawings has got the hang of it. Finally, an article published in Notices of the American Mathematical Society, and which I found by some reference now lost to me. The essay, “Knots in the Nursery:(Cats) Cradle Song of James Clerk Maxwell”, is by Professor Daniel S Silver. It’s about the origins of knot theory, and particularly of a poem composed by James Clerk Maxwell. Knot theory was pioneered in the late 19th century by Peter Guthrie Tait. Maxwell is the fellow behind Maxwell’s Equations, the description of how electricity and magnetism propagate and affect one another. Maxwell’s also renowned in statistical mechanics circles for explaining, among other things, how the rings of Saturn could work. And it turns out he could write nice bits of doggerel, with references Silver usefully decodes. It’s worth reading for the mathematical-history content. 
#### elkement (Elke Stangl) 1:55 pm on Friday, 22 January, 2016

Your blog is really an awesome resource for all things math, no doubt!!

#### Joseph Nebus 5:02 am on Sunday, 24 January, 2016

That's awfully kind of you to say. I've really just been grabbing the occasional thing that comes across my desk and passing that along, though, part of the great chain of vaguely sourced references.

#### elkement (Elke Stangl) 8:48 am on Sunday, 24 January, 2016

But 'curating' as they say today is an art, too, and after all you manage to make things accessible, e.g. by summarizing posts you reblog so neatly…. and manage to do so without much images!!

#### Joseph Nebus 10:12 pm on Tuesday, 26 January, 2016

Well, thank you again. I do feel like if I'm pointing to or reblogging someone else's work I should provide a bit of context and original writing. It's too easy to just pass around a link and say "here's a good link", which I wouldn't blame anyone for doubting.
## Self-Organizing Maps

Self-organizing maps are an old idea (first published in 1989) and take strong inspiration from some empirical neurophysiological observations from that time. The original paper "Self-Organizing Semantic Maps" by Ritter and Kohonen (pdf) has a nice discussion that took me back to some questions I was looking at in another life as a neurophysiologist.

The discussion centers around cortical maps, which are most pronounced in the clearly sensory and motor areas of the brain. Very early experiments (in their paper Lashley's work is referenced) led some people to propose the brain was a complete mishmash, with no functionally separable internal organization. Each part of the brain did exactly what the other parts did, a bit like how a lens cut in half still works like the original lens (just dimmer), or how a hologram cut in half still shows the original scene. Later experiments looked more closely and found a wealth of specialization at all scales in the brain. The cortex (the most evolutionarily recent part of the brain) shows an especially fascinating organization at the smallest scales. This is most apparent in the visual cortex, where the neurons are often organized in two- or higher-dimensional maps. A famous example is the set of orientation maps in early visual cortex:

This figure is taken from Moser et al., Nature Reviews Neuroscience 15, 466–481 (2014), but you'll run into such figures everywhere. It shows physical space along the cortex, color-coded for the orientation of the bar that the neurons at that location best respond to. One can see that the colors are not randomly assigned but form contiguous regions, indicating that neighboring neurons respond to similar orientations. I could go on about cortical organization like this (and indeed I have elsewhere) but this is a good segue into self-organizing maps.
Researchers studying machine learning got interested in this feature of the brain and wondered if it held some use for classification problems. Kohonen drew up an algorithm for what he called a self-organizing map that has been used on and off, mainly for visualization via un-supervised learning. The Kohonen map is constructed as follows.

1. Construct a sheet of "neurons" with the dimensionality you want. Usually this is limited to one, two or three dimensions, because we want to visualize a complex higher dimensional dataset.
2. Each neuron is a template – it has the same dimensions as the input data and can be overlaid on each instance of the input data.
3. We initialize the templates randomly – so the templates look like white noise.
4. We "present" each input example to the map. Then we find the template neuron that most closely matches this input. We then morph the template slightly so that it even more closely matches the input. We do this for all the neighbors of the neuron too.
5. We keep repeating this for all the input we have.

What does this result in? I have two videos for you, you product of the MTV generation, that show how this works. I took the famous NIST handwritten digits data set and fed it to a two dimensional SOM.

The first video represents the self-organizing map as a set of templates that change as inputs are supplied to it. This is easy to interpret as you can identify the templates as they evolve. Note how neurons form neighborhoods that naturally link together similar stimuli. It is easy to see with this data set because the data are two dimensional visual patterns and we can judge similarity intuitively.

The second video represents each neuron's stimulus preferences, with its most preferred stimulus digit appearing as the largest. You can see how the network evolves as examples are thrown at it and segregates out into zones that respond to or represent a particular form of the stimulus.
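The training loop in the numbered steps above can be sketched in a few dozen lines of plain numpy. This is a toy illustration on made-up 2-d cluster data, not the code behind the videos; the 5×5 grid, the gaussian neighborhood, and the decay schedule are all my assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: 2-d points drawn from three clusters
data = np.concatenate([rng.normal(c, 0.1, size=(100, 2))
                       for c in ([0, 0], [1, 0], [0, 1])])

# Steps 1-3: a 5x5 sheet of neurons; each neuron is a random template
# with the same dimensionality as the input
grid = np.stack(np.meshgrid(np.arange(5), np.arange(5)), -1).reshape(-1, 2)
templates = rng.random((25, 2))

eta, sigma = 0.5, 2.0              # learning rate and neighborhood size
for epoch in range(20):
    for x in rng.permutation(data):
        # Step 4: find the best-matching template...
        winner = np.argmin(((templates - x) ** 2).sum(axis=1))
        # ...and morph it, and its grid neighbors, toward the input
        d2 = ((grid - grid[winner]) ** 2).sum(axis=1)
        nbhd = np.exp(-d2 / (2 * sigma ** 2))
        templates += eta * nbhd[:, None] * (x - templates)
    eta *= 0.9                     # both parameters shrink as training goes on
    sigma *= 0.9

# Quick check: how far is each point from its best-matching template?
winners = ((templates[None] - data[:, None]) ** 2).sum(-1).argmin(1)
qe = float(np.mean(np.sqrt(((data - templates[winners]) ** 2).sum(1))))
print(qe)
```

The `exp(-d2 / 2*sigma**2)` factor is the "morph the neighbors too" part of step 4, and shrinking sigma and eta each epoch is what lets the map organize globally at first and fine-tune later.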
You can see how zones travel and try to spread out, ending up at the corners of the map. This is due to the competition created by the winner-take-all operation we use (closest matching template wins), combined with the effect of adjusting the winning neuron's neighbors too.

SOMs are fun things to play with, but are a little fiddly. You may have picked up from the videos that we had two parameters called sigma and eta that changed as the learning went on, getting smaller as training progressed. Sigma is the size of the neighborhood and eta is the learning rate – the smaller the value of eta, the less the neurons change in response to inputs. These are two things you have to fiddle with to get decent results. Also, as you can imagine, this whole process is sensitive to the random templates we start with.

Numerous attempts have been made to figure out objective ways to determine these parameters for a SOM. It turns out they are pretty data-dependent and most people just do multiple runs until they get results they are happy with.

As I was reading Bishop I ran into his version of the SOM, which he calls Generative Topographic Mapping and which he claims is more principled and mathematically grounded than SOMs. The GTM, which I really hadn't heard about at all, seems like a fun thing to study, but as an ex-neurophysiologist I find SOMs definitely more intuitive to understand.

Code (and link to data source) follows:

## Bayesian networks and conditional independence

Bayesian networks are directed graphs that represent probabilistic relationships between variables. I'm using the super-excellent book "Pattern Recognition and Machine Learning" by C. Bishop as my reference to learn about Bayesian networks and conditional independence, and I'd like to share some intuitions about these.

First, let's motivate our study with a common home situation related to kids.
We start with the question "Do we have to gas up the car?" (As you know, everything related to kids is binary. There is no room for gray areas.) Well, this depends upon whether we need to do a grocery trip, or not. That, in turn, depends upon whether the food in the fridge has gone bad and whether we need dino nuggets. Why would the food go bad? Well, that depends on whether the electricity went out, and whether the darn kids broke the fridge again. The kids never broke your fridge? Lucky you. Whether we need dino nuggets depends on whether we're having a party. The probability that the kids broke the fridge depends on whether there were a horde of them or not (i.e. a party). Whoa, this paragraph looks complicated, like a GRE word problem. Let's represent this as a directed graph:

Much better! All the variables here are binary and we can put actual numbers to this network as follows:

Explaining some of the numbers: the entry for the (P)arty node indicates that there is a probability of 0.1 we're going to have a party, and this is uninfluenced by anything else. Looking at the entry for the (G)as up node, we see that whether we gas up depends on the states of F and D. For example, if both F and D are false, there is a 0.1 probability of needing to gas up, and if F is true and D is false, there is a 0.6 probability of needing to gas up, and so on.

Intuitively, for a pair of nodes A and B, there is a linkage if the probability of the output depends on the state of the input(s). So, in the truth tables above I've made every row different. If, for example, the probability that B would take the state "T" is 0.2 regardless of whether A was "T" or "F", then we can intuit that A doesn't influence B and there is no line from A to B. If, on the other hand, B takes the state "T" with p=0.2 if A is "T" and with p=0.8 if A is "F", then we can see that the probability B takes the state "T" depends on A.
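That last intuition – an edge is needed exactly when the child's probability table has rows that differ across parent states – can be written down directly. A toy sketch with made-up numbers, not the post's actual tables:

```python
# p(B=T | A) for two hypothetical one-parent networks
linked   = {'T': 0.2, 'F': 0.8}   # rows differ: A influences B, draw the edge
unlinked = {'T': 0.2, 'F': 0.2}   # rows identical: B ignores A, no edge needed

def needs_edge(cpt, tol=1e-9):
    """A parent earns an arrow only if changing its state changes the child."""
    vals = list(cpt.values())
    return any(abs(v - vals[0]) > tol for v in vals)

print(needs_edge(linked), needs_edge(unlinked))  # True False
```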
Before we start to look at our kid's party network, I'd like to examine three basic cases Bishop describes in his book, and try and develop intuitions for those, all involving just three nodes. For each of these cases I'm going to run some simulations and print results using the code given in this gist in case you want to redo the experiments for your satisfaction. The only thing to know going forward is that I'll be using the phi coefficient to quantify the degree of correlation between pairs of nodes. Phi is a little trickier to interpret than Pearson's R, but in general a phi of 0 indicates no correlation, and larger values indicate more correlation.

The first case is the tail-to-head configuration:

Say A="T" with p=0.5 (i.e. a fair coin), C="T" with p=0.7 if A="T" and p=0.3 otherwise, and B="T" with p=0.7 if C="T" and p=0.3 otherwise. Here "B" is related to "A" through an intermediary node "C". Our intuition tells us that in general, A influences C, C influences B, and so A has some influence on B. A and B are correlated (or dependent).

Now, here is the interesting thing: suppose we "observe C", which effectively means we do a bunch of experimental runs to get the states of A, C and B according to the probability tables and then we pick just those experiments where C = "T" or C = "F". What happens to the correlation of A and B? If we take just the experiments where C="T", while we do have a bias in A (because C comes up "T" a lot more times when A is "T") and a bias in B (because B comes up "T" a lot more times when C is "T"), the two are actually not related anymore! This is because B is coming up "T" with a fixed 0.7 probability REGARDLESS OF WHAT A was. Yes, we have a lot more A="T", but that happens independently of what B does. That's kind of cool to think about. Bishop, of course, has algebra to prove this, but I find intuition to be more fun.
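Here is a quick stand-alone simulation of the tail-to-head chain (my own sketch, independent of the author's gist; the phi coefficient is computed straight from the 2×2 contingency table). Conditioning on C should wipe out the A-B correlation:

```python
import random

random.seed(0)

def trial():
    a = random.random() < 0.5                    # A: a fair coin
    c = random.random() < (0.7 if a else 0.3)    # C leans toward A's state
    b = random.random() < (0.7 if c else 0.3)    # B depends only on C
    return a, b, c

def phi(pairs):
    """Phi coefficient from the 2x2 contingency table of boolean pairs."""
    n11 = sum(x and y for x, y in pairs)
    n10 = sum(x and not y for x, y in pairs)
    n01 = sum(not x and y for x, y in pairs)
    n00 = len(pairs) - n11 - n10 - n01
    num = n11 * n00 - n10 * n01
    den = ((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)) ** 0.5
    return num / den

runs = [trial() for _ in range(100000)]
phi_ab = phi([(a, b) for a, b, c in runs])
phi_ab_given_c = phi([(a, b) for a, b, c in runs if c])
print(phi_ab, phi_ab_given_c)  # noticeably nonzero, then ~0 once C is observed
```

With these numbers the unconditioned phi(A, B) lands near 0.16 – close to the value the post reports – and collapses toward zero in the runs where C is held at "T".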
As a verification, running this configuration 100000 times I find the phi between A & B = 0.1638, while with C observed to be "T" it is -0.00037 and "F" it is 0.0024. Observing C decorrelates A and B. MATH WORKS! As an aside, if I set up the network such that the bias of C does not change with the input from A, I get phi of A & B = -0.00138, and phi of A & B with C observed to be "T" = -0.00622 and "F" = 0.001665, confirming that there is never any correlation in the first place in such a "flat" network.

The second case Bishop considers is the tail-to-tail configuration:

Here we have a good intuition that C influences A and B, causing A and B to be correlated. If C is "T" it causes A to be "T" with pA(T) and B to be "T" with pB(T), and when C is "F" A is "T" with pA(F) and B is "T" with pB(F). These probabilities switch to track C as it changes, resulting in the linkage. What happens if we observe C? Say C is "T". Now this fixes the probability of "T" for both A and B. They may be different, but they are fixed, and the values A and B take are now independent!

The third case is the head-to-head configuration, where A and B are both parents of C. We easily see that A and B are independent. What happens if we observe C? Let's consider a concrete example with a probability table

A | B | p(C=T)
--|---|-------
F | F | 0.1
F | T | 0.7
T | F | 0.3
T | T | 0.9

Say we run 100 trials and A and B come up "T" or "F" equally likely (p=0.5). We expect each AB pattern will occur equally often (25 times) and then the expected number of C=T states is

A | B | E(C=T)
--|---|-------
F | F | 0.1 × 25 = 2.5
F | T | 0.7 × 25 = 17.5
T | F | 0.3 × 25 = 7.5
T | T | 0.9 × 25 = 22.5

So we have an expected 50 trials where C=T. Of that subset of trials, the fraction of trials that each pattern comes up is:

A | B | p(AB|C=T)
--|---|----------
F | F | 2.5 / 50 = 0.05
F | T | 17.5 / 50 = 0.35
T | F | 7.5 / 50 = 0.15
T | T | 22.5 / 50 = 0.45

Ah! You say. Look, when A and B are independent, each of those patterns comes up equally often, but here, after we observe C, there is clearly an imbalance! How sneaky!
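The table arithmetic can be checked mechanically. This sketch (mine, using the p(C=T|A,B) table from the example) conditions on C=T exactly, without simulation, and shows that the conditional joint no longer factorizes:

```python
# p(C=T | A, B) from the example's table
p_c_given_ab = {('F', 'F'): 0.1, ('F', 'T'): 0.7, ('T', 'F'): 0.3, ('T', 'T'): 0.9}

# A and B are independent fair coins, so p(A,B) = 0.25 for every pattern.
# Conditioning on C=T re-weights each pattern by how often it yields C=T:
joint = {ab: 0.25 * p for ab, p in p_c_given_ab.items()}
z = sum(joint.values())                        # p(C=T) = 0.5
cond = {ab: p / z for ab, p in joint.items()}  # 0.05, 0.35, 0.15, 0.45

# Were A and B still independent given C=T, each cell would equal the
# product of its row and column sums. The TT cell shows it does not:
pa_t = cond[('T', 'F')] + cond[('T', 'T')]     # p(A=T | C=T) = 0.6
pb_t = cond[('F', 'T')] + cond[('T', 'T')]     # p(B=T | C=T) = 0.8
print(cond[('T', 'T')], pa_t * pb_t)           # 0.45 vs 0.48: dependent!
```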
Basically, because some patterns of AB are more likely to result in C=T, this sneaks into the statistics once we pick a particular C (those hoity-toity statisticians call this "conditioning on C"). A and B are now dependent!! I have to say, I got pretty excited by this. Perhaps I'm odd. But wait! There's more! Say C has a descendant node. Now observing a descendant node actually "biases" the ancestor node – so in a way you are partially observing the ancestor node, and this can also cause A and B to become dependent!

Now, I was going to then show you some experiments I did with a simulation of the kid's party network that I started with, and show you all these three conditions, and a funky one where there are two paths linking a pair of nodes (P and G) and how observing the middle node reduces their correlation (but not all the way, because of the secondary path). I think I'll leave you with the simulation code so you can play with it yourself. Code for this little experiment is available as a github gist.

## What is Mutual Information?

The mutual information between two things is a measure of how much knowing one thing can tell you about the other thing. In this respect, it's a bit like correlation, but cooler – at least in theory.

Suppose we have accumulated a lot of data about the size of apartments and their rent and we want to know if there is any relationship between the two quantities. We could do this by measuring their mutual information. Say, for convenience, we've normalized our rent and size data so that the highest rent and size are 1 "unit" and the smallest ones are 0 "units".

We start out by plotting two-dimensional probability distributions for the rent and size. We plot rent on the x-axis, size on the y-axis. The density – a normalized measure of how often we run into a particular (rent, size) combination, called the joint distribution $p(r,s)$ – is actually plotted on the z-axis, coming out of the screen, forming a surface.
To simplify matters, let's assume the joint distribution here is uniform all over, so this surface is flat and at a constant height. So, here the joint distribution of rents and sizes ($p(r,s)$) is given by the square (which is actually the roof of a cube, poking out) and the distributions of rents and sizes by themselves (called the marginals, because they are drawn on the margins of the joint distribution) are given by $p(r)$ and $p(s)$.

To recall a bit of probability: the probability of finding a house/apartment within a certain rent/size range combo is given by the volume of the plot within that rent/size range. The volume of the whole plot is, therefore, equal to 1, since all our data is within this range.

The mutual information is given by the equation:

$\displaystyle I(R;S) = \int \int p(r,s) \log \frac{p(r,s)}{p(r)p(s)}drds$

This equation takes in our rent/size data and spits out a single number. This is the value of the mutual information. The logarithm is one of the interesting parts of this equation. In practice the only effect of changing the base is to multiply your mutual information value by some number. If you use base 2 you get out an answer in 'bits', which makes sense in an interesting way.

Intuitively we see that, for this data, knowing the rent tells us nothing additional about the size (and vice versa). If we work out the value of the mutual information by substituting the values for $p(r,s)$, $p(r)$ and $p(s)$ into the equation above, we see that, since all these quantities are constant, we can just perform the calculation within the integral sign and multiply the result by the area of the plot (which is 1, and indicated by the final ×1 term):

$I(R;S) = 1 \log_2 \frac{1}{1 \times 1} \times 1 = 0$

So we have 0 bits of information in this relation, which jibes with our intuition that there is no information here – rents just don't tell us anything about size. Now suppose our data came out like this.
[one-bit diagram] Substituting the values we see that (noting we have two areas to integrate, each of size 1/2 x 1/2 = 1/4) $I(R;S) = 2 \log_2 \frac{2}{1 \times 1} \times \frac{1}{4} \times 2 = 1$ That’s interesting. We can see intuitively there is a relation between rent and size, but what is this 1 bit of information? One way of looking at our plot is to say, if you give me a value for rent, I can tell you in which range of sizes the apartment will fall, and this range splits the total range of sizes in two. $2^1=2$ so we say we have 1 bit of information which allows us to distinguish between two alternatives: large size and small size. Interestingly, if you tell me the size of the apartment, I can tell you the range of the rent, and this range splits the total range of rents in two, so the information is still 1 bit. The mutual information is symmetric, as you may have noted from the formula. Now, suppose our data came out like this. [two-bit diagram] You can see that: $I(R;S) = 4 \log_2 \frac{4}{1 \times 1} \times \frac{1}{16} \times 4 = 2$ Two bits! The rents and sizes seem to split into four clusters, and knowing the rent will allow us to say in which one of four clusters the size will fall. Since $2^2=4$ we have 2 bits of information here. Now so far, this has been a bit ho-hum. You could imagine working out the correlation coefficient between rent and size and getting a similar notion of whether rents and sizes are related. True, we get a fancy number in bits, but so what? Well, suppose our data came out like this. [two-bit, scrambled diagram] It’s funny, but the computation for MI comes out exactly the same as before: $I(R;S) = 4 \log_2 \frac{4}{1 \times 1} \times \frac{1}{16} \times 4 = 2$ Two bits again! There is no linear relationship that we can see between rents and sizes, but upon inspection we realize that rents and sizes cluster into four groups, and knowing the rent allows us to predict which one of four size ranges the apartment will fall in. 
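For discrete joints like the block patterns above, the MI sum is only a few lines of numpy. This is my own sketch, not from the post, and it reproduces the 0-, 1- and 2-bit answers, including the scrambled four-cluster case:

```python
import numpy as np

def mutual_information(pxy):
    """MI in bits of a discrete joint probability table (rows: r, cols: s)."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(r)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(s)
    nz = pxy > 0                          # 0 log 0 = 0 by convention
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

uniform = np.full((2, 2), 0.25)               # no relationship at all
one_bit = np.array([[0.5, 0.0], [0.0, 0.5]])  # two diagonal blocks
two_bit = np.eye(4) / 4.0                     # four clusters
scrambled = np.array([[0, .25, 0, 0],         # four clusters, shuffled around
                      [0, 0, 0, .25],
                      [.25, 0, 0, 0],
                      [0, 0, .25, 0]])

print(mutual_information(uniform))    # 0.0
print(mutual_information(one_bit))    # 1.0
print(mutual_information(two_bit))    # 2.0
print(mutual_information(scrambled))  # 2.0 – scrambling doesn't change MI
```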
This, then, is the power of mutual information in exploratory data analysis. If there is some relationship between the two quantities we are testing, the mutual information will reveal this to us, without having to assume any model or pattern. However, WHAT the relationship is, is not revealed to us, and we can not use the value of the mutual information to build any kind of predictive "box" that will allow us to predict, say, sizes from rents. Knowing the mutual information, however, gives us an idea of how well a predictive box will do at best, regardless of whether it is a simple linear model, or a fancy statistical method like a support vector machine. Sometimes, computing the mutual information is a good, quick, first pass to check if it is worthwhile spending time training a computer to do the task at all.

## A note on computing mutual information

In our toy examples above it has been pretty easy to compute mutual information because the forms of $p(r,s), p(r)$ and $p(s)$ have been given explicitly. In real life we don't have the distributions and all we are given is a (not large enough) pile of data. We try to estimate the functions $p(r,s), p(r)$ and $p(s)$ from this data on our way to computing the mutual information.

You will notice that the mutual information can never be negative. It is exactly zero when $r$ and $s$ are independent (then $p(r,s)=p(r)p(s)$ everywhere and the log term vanishes) and greater than zero otherwise. You will immediately sense the problem here. In many calculations, when we have noise in the terms, the noise averages out because the plus terms balance out the minus terms. Here the true value sits at a floor of zero, and any noise in our estimates of $p(r,s)$, $p(r)$ and $p(s)$ manufactures a little spurious dependence, so our estimate has a tendency to get bigger.
Histograms (where we take the data and bin it) are an expedient way of estimating probability distributions, and they normally work all right. But this can lead us to a funny problem when computing mutual information, because of this never-negative nature of the mutual information. For example, say there was really no dependence between rents and sizes, but suppose our data and our binning interacted in an odd manner to give us a pattern such as this:

[checkerboard]

We can see that the marginals are not affected badly, but the joint, because it is in two-dimensional space, is filled rather more sparsely, which leads to us having 'holes' in the distribution. If we now compute the mutual information we find that we have ended up with 1 bit of information when, really, it should be 0 bits.

Most attempts to address this bias in mutual information computations recognize the problem with these 'holes' in the joint distribution and try to smear them out using various ever more sophisticated techniques. The simplest way is to make larger bins (which would completely solve our problem in this toy case), and other methods blur out the original data points themselves. All of these methods, no matter how fancy, still leave us with the problem of how much to smear the data: smear too little and you inflate the mutual information, smear too much and you start to wipe it out.

Often, to be extra cautious, we do what I have known as the 'shuffle correction' (and I was told by a pro is actually called the 'null model'). Here you thoroughly jumble up your data so that any relationship that existed between r and s is gone. You then compute the mutual information of that jumbled-up data. You know that the mutual information should actually be zero, but because of the bias it comes out to something greater. You then compare the mutual information from the data with this jumbled one to see if there is something peeking above the bias.
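The shuffle correction is easy to sketch. Here is my own toy version (the sample size, bin count and correlation are arbitrary choices): estimate MI on correlated gaussian data with a plug-in histogram estimator, then on many shuffled copies, and use the shuffled values as the bias floor to compare against:

```python
import numpy as np

rng = np.random.default_rng(1)

def mi_hist(x, y, bins=11):
    """Plug-in MI estimate (bits) from a 2-d histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Correlated gaussian data, rho = 0.5, only 100 samples
x = rng.standard_normal(100)
y = 0.5 * x + (1 - 0.5**2) ** 0.5 * rng.standard_normal(100)

mi_raw = mi_hist(x, y)
# Null model: shuffling y destroys any real x-y relationship, so whatever
# MI survives the shuffle is pure estimator bias
null = [mi_hist(x, rng.permutation(y)) for _ in range(200)]
bias = float(np.mean(null))
print(mi_raw, bias)  # both well above zero; their gap hints at the real MI
```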
## Computing mutual information and other scary things

A moderately technical writeup of some adventures in computing mutual information.

I told a colleague of mine, Boris H, of my plan to use mutual information to test data from an experiment. He encouraged me, but he also passed along a warning. "Be wary of discretization. Be very wary," he said, or words to that effect. Boris is a professional: he knows the math so well, people pay him good money to do it. I was, like, "Yeah, yeah, whatever dude." Long story short, always listen to the pros…

Summary: The jack-knifed MLE gives a low-bias estimate of the MI. It is easy to implement and is not too compute-intensive. It is fairly robust to bin size.

In theory, mutual information is a super elegant way of uncovering the relationship between two quantities. There is a quiet elegance to its formulation

$\displaystyle I(X;Y) = \int \int p(x,y) \log \frac{p(x,y)}{p(x)p(y)}dxdy$

Or, equivalently,

$\displaystyle I(X;Y) = H(X) + H(Y) - H(X,Y)$

where $H(X) = -\int p(x) \log p(x)dx$ is the entropy.

These elegant formulae, on the surface, lend themselves to an easy discretization. Personally I see integrals and think – ah, I'm just going to discretize the space and replace the integrals with summations, it works for everything else. Basically, the idea would be to take the data, which comes as pairs of numbers (x,y), and throw them into a two-dimensional histogram. Each cell of the histogram gives us an estimate of p(x,y) – the joint probability of (x,y) – and summing along the rows or columns of the 2d histogram gives us our marginal probabilities p(x), p(y), which can then be thrown into our mutual information formula (with integrals replaced by summations). Easy peasy.

It turns out that this particular computation – because of the form of the equations – is very sensitive to how the data are discretized. The easy formula, apparently the MLE for the entropy, has persistent biases.
In all cases the number of bins used for creating the histogram plays an annoyingly critical role. If we have too few bins we underestimate the MI, and if we have too many bins we over-estimate it. People have come up with different corrections for the bias, of which I picked two to test (these formulae are nicely summarized in Paninski 2003, in the introduction). Defining $N$ as the number of samples and $\hat{m}$ as the number of bins with non-zero probability, we have:

Simple-minded discretization (MLE): $\hat{H}_{MLE}(x) = -\sum p(x_i)\log_2{p(x_i)}$

Miller-Madow correction (MLEMM): $\hat{H}_{MLEMM}(x) = \hat{H}_{MLE}(x) + \frac{\hat{m}-1}{2N}$

The jack-knifed estimate (MLEJK): $\hat{H}_{MLEJK}(x) = N\hat{H}_{MLE}(x) - \frac{N-1}{N}\sum_{j=1}^{N} \hat{H}_{MLE-j}(x)$ (leave-one-out formula)

I tested the formulae by generating pairs of correlated random numbers. If $X,Y$ are two gaussian random variables with correlation coefficient $\rho$ then their mutual information is given by

$-\frac{1}{2}\log_2{(1-\rho^2)}$

which I use as the ground truth to compare the numerical results against. The plot below compares the actual MI and the estimate spit out by the MLE formula for various values of the correlation coefficient $\rho$. The actual value is shown by the blue line; the gray shaded area shows 95% of the distribution (from repeating the calculation 1000 times) for a sample size of 100 (chosen as a reasonable sample size I usually run into in my experiments). The sets of shaded curves show computation using 11 bins (darker gray) and 21 bins (lighter gray).

As you can see, this formula sucks. It has a large persistent bias throughout the range of correlation coefficients (the gray areas hang well above the blue line). The MLEMM did not fare much better in some preliminary explorations I did (see below), so I went to the jack knife, which performed pretty well. The jack knife still has a bias, but it is not very large.
It is less sensitive to the binning and the bias is stable throughout the range.

I'm including this second plot, which isn't that easy on the eyes, but it was the initial exploration that led me to look into the jack knife in more detail. The plots are distributions of the MI value computed from 1000 runs of each algorithm. The three rows are the three formulae: the MLE, the MLE with the Miller-Madow correction (MLEMM) and the jack-knifed MLE. The columns cycle through different sample sizes (10, 100, 1000), different bin sizes (11, 21) and different correlation coefficients between x and y (0, 0.5). The vertical dotted line indicates the real MI. As you can see, the MLE and MLEMM have an upward bias, which persists even for large sample sizes and is exacerbated by finer binning. The jack knife is impressive, but it too has an upward bias (as you have seen in the previous plot).

### Next steps

The next incremental improvement I would use is adaptive binning, which uses some formalized criteria (like making sure all bins have a non-zero probability) to adjust the binning to the data before calculating MI.

Paninski, Liam. "Estimation of entropy and mutual information." Neural Computation 15.6 (2003): 1191–1253. – This paper was very tough for me to chew through. I used it for the formulae in the first few pages. Liam claims he has a method for correcting the MLE, but it seemed like a very complicated computation.

Cellucci, C. J., Alfonso M. Albano, and P. E. Rapp. "Statistical validation of mutual information calculations: Comparison of alternative numerical algorithms." Physical Review E 71.6 (2005): 066208. – This was a much more readable paper compared to Liam's. I'm using it to see what kind of adaptive binning I can apply to the data.
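Before the full listings, the bias that the Miller-Madow term tries to cancel can be seen directly on entropies alone. A small self-contained demonstration of mine (not the post's code), estimating the entropy of a fair eight-sided die from short samples; one caveat: the $\frac{\hat{m}-1}{2N}$ correction is in nats, so it is divided by ln 2 here to stay in bits:

```python
import numpy as np

rng = np.random.default_rng(0)

def h_mle(counts):
    """Plug-in (MLE) entropy estimate, in bits."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def h_mm(counts):
    """Miller-Madow corrected estimate; (m-1)/2N is in nats, hence the ln 2."""
    n = counts.sum()
    m = int((counts > 0).sum())
    return h_mle(counts) + (m - 1) / (2 * n * np.log(2))

# True entropy of a fair 8-sided die is exactly 3 bits; estimate from N=30 draws
trials = [np.bincount(rng.integers(0, 8, size=30), minlength=8)
          for _ in range(2000)]
mle_mean = float(np.mean([h_mle(c) for c in trials]))
mm_mean = float(np.mean([h_mm(c) for c in trials]))
print(mle_mean, mm_mean)  # MLE lands below 3 bits; Miller-Madow much closer
```

Plugged into the MI identity $I = H(X) + H(Y) - H(X,Y)$, the joint entropy term (many more occupied bins, hence a bigger bias) is what turns this downward entropy bias into the upward MI bias seen in the plots.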
```python
import numpy, pylab

def correlated_x(N, rho=0.5):
    r = numpy.random.randn(N,2)
    r1 = r[:,0]
    r1_ = r[:,1]
    r2 = rho*r1 + (1-rho**2)**0.5*r1_
    return r1, r2

def bin_data(x, y, bins=11, limits=[-4,4]):
    Nxy, xe, ye = numpy.histogram2d(x, y, bins=bins, range=[limits,limits])
    N = float(Nxy.sum())
    Pxy = Nxy/N
    Px = Pxy.sum(axis=1)
    Py = Pxy.sum(axis=0)
    return Pxy, Px, Py, N

MI_exact = lambda rho: -.5*numpy.log2(1-rho**2)

def H_mle(P):
    idx = numpy.flatnonzero(P > 0)  # pylab.find in the original; removed from modern matplotlib
    return -(P.flat[idx]*numpy.log2(P.flat[idx])).sum()

def MI_mle(x, y, bins=11, limits=[-4,4]):
    Pxy, Px, Py, Ntot = bin_data(x, y, bins=bins, limits=limits)
    return H_mle(Px) + H_mle(Py) - H_mle(Pxy)

def MI_mle_jack_knife(x, y, bins=11, limits=[-4,4]):
    Pxy, Px, Py, N = bin_data(x, y, bins=bins, limits=limits)
    Hx = H_mle(Px)
    Hy = H_mle(Py)
    Hxy = H_mle(Pxy)
    Hx_jk = 0
    Hy_jk = 0
    Hxy_jk = 0
    for n in range(x.size):
        jx = numpy.concatenate((x[:n], x[n+1:]))
        jy = numpy.concatenate((y[:n], y[n+1:]))
        Pxy, Px, Py, Njk = bin_data(jx, jy, bins=bins, limits=limits)
        Hx_jk += H_mle(Px)
        Hy_jk += H_mle(Py)
        Hxy_jk += H_mle(Pxy)
    Hx_jk = N*Hx - (N-1.0)/N*Hx_jk
    Hy_jk = N*Hy - (N-1.0)/N*Hy_jk
    Hxy_jk = N*Hxy - (N-1.0)/N*Hxy_jk
    return Hx_jk + Hy_jk - Hxy_jk

def runmany(func, N=50, rho=0, bins=11, limits=[-4,4], b=1000):
    mi = numpy.empty(b)
    for n in range(b):
        r1, r2 = correlated_x(N, rho)
        mi[n] = func(r1, r2, bins=bins)
    mi.sort()
    med_idx = int(b*0.5)
    lo_idx = int(b*0.025)
    hi_idx = int(b*0.975)
    return mi[lo_idx], mi[med_idx], mi[hi_idx]

b = 1000
Ni = 20
rho = numpy.linspace(0, .99, Ni)
mi_exact = MI_exact(rho)
N = 100
cols = [(.5,.5,.5), (.7,.7,.7)]
for lab, func in zip(['MI MLE', 'MI Jack Knife'], [MI_mle, MI_mle_jack_knife]):
    pylab.figure(figsize=(10,5))
    for k, bins in enumerate([11,21]):
        mi_est = numpy.empty((Ni,3))
        for n in range(rho.size):
            #mi_est[n,:] = runmany(MI_mle_jack_knife, N=N, rho=rho[n], bins=bins, limits=[-4,4], b=b)
            #mi_est[n,:] = runmany(MI_mle, N=N, rho=rho[n], bins=bins, limits=[-4,4], b=b)
            mi_est[n,:] = runmany(func, N=N, rho=rho[n], bins=bins, limits=[-4,4], b=b)
        pylab.fill_between(rho, mi_est[:,0], mi_est[:,2], color=cols[k], edgecolor='k', alpha=.5)
        pylab.plot(rho, mi_est[:,1], 'k', lw=2)
    pylab.plot(rho, mi_exact, 'b', lw=2)
    pylab.xlabel(r'$\rho$')
    pylab.ylabel(lab)
    pylab.setp(pylab.gca(), ylim=[-.1,3.5])
    pylab.savefig('{:s}.pdf'.format(lab))
```

The code for the ugly, detailed plot:

```python
import numpy, pylab

def correlated_x(N, rho=0.5):
    r = numpy.random.randn(N,2)
    r1 = r[:,0]
    r1_ = r[:,1]
    r2 = rho*r1 + (1-rho**2)**0.5*r1_
    return r1, r2

def bin_data(x, y, bins=11, limits=[-4,4]):
    Nxy, xe, ye = numpy.histogram2d(x, y, bins=bins, range=[limits,limits])
    N = float(Nxy.sum())
    Pxy = Nxy/N
    Px = Pxy.sum(axis=1)
    Py = Pxy.sum(axis=0)
    return Pxy, Px, Py, N

MI_exact = lambda rho: -.5*numpy.log2(1-rho**2)

def H_mle(P):
    idx = numpy.flatnonzero(P > 0)
    return -(P.flat[idx]*numpy.log2(P.flat[idx])).sum()

def H_mm(P, N):
    m = numpy.flatnonzero(P > 0).size
    return H_mle(P) + (m-1)/(2.0*N)

def MI_mle(x, y, bins=11, limits=[-4,4]):
    Pxy, Px, Py, Ntot = bin_data(x, y, bins=bins, limits=limits)
    return H_mle(Px) + H_mle(Py) - H_mle(Pxy)

def MI_mm(x, y, bins=11, limits=[-4,4]):  # Miller-Madow corrected MI; this def line was garbled in the scraped text and is reconstructed from the MI_mle pattern
    Pxy, Px, Py, N = bin_data(x, y, bins=bins, limits=limits)
    return H_mm(Px, N) + H_mm(Py, N) - H_mm(Pxy, N)

def MI_mle_jack_knife(x, y, bins=11, limits=[-4,4]):
    Pxy, Px, Py, N = bin_data(x, y, bins=bins, limits=limits)
    Hx = H_mle(Px)
    Hy = H_mle(Py)
    Hxy = H_mle(Pxy)
    Hx_jk = 0
    Hy_jk = 0
    Hxy_jk = 0
    for n in range(x.size):
        jx = numpy.concatenate((x[:n], x[n+1:]))
        jy = numpy.concatenate((y[:n], y[n+1:]))
        Pxy, Px, Py, Njk = bin_data(jx, jy, bins=bins, limits=limits)
        Hx_jk += H_mle(Px)
        Hy_jk += H_mle(Py)
        Hxy_jk += H_mle(Pxy)
    Hx_jk = N*Hx - (N-1.0)/N*Hx_jk
    Hy_jk = N*Hy - (N-1.0)/N*Hy_jk
    Hxy_jk = N*Hxy - (N-1.0)/N*Hxy_jk
    return Hx_jk + Hy_jk - Hxy_jk

def runmany(func, N=50, rho=0, bins=11, limits=[-4,4], b=1000):
    mi = numpy.empty(b)
    for n in range(b):
        r1, r2 = correlated_x(N, rho)
        mi[n] = func(r1, r2, bins=bins)
    return mi

def plot_bias_variance(func, N=50, rho=0, bins=11, limits=[-4,4], b=1000):
    mi = runmany(func, N=N, rho=rho, bins=bins, limits=limits, b=b)
    pylab.hist(mi, range=[0,2], bins=61, histtype='stepfilled', color='gray', edgecolor='gray')
    pylab.plot([MI_exact(rho), MI_exact(rho)], pylab.getp(pylab.gca(),'ylim'), 'k:', lw=3)
    pylab.setp(pylab.gca(), xlim=[-.1,1.0], xticks=[0,.5], yticks=[], xticklabels=[])
    pylab.title('N={:d},rho={:2.2f},bins={:d}'.format(N,rho,bins), fontsize=6)

pylab.figure(figsize=(15,6))
b = 1000
for i, rho in enumerate([0,0.5]):
    for j, N in enumerate([10,100,1000]):
        for k, bins in enumerate([11,21]):
            c = 1+k+j*2+i*6
            pylab.subplot(3,12,c); plot_bias_variance(MI_mle, rho=rho, N=N, bins=bins, b=b)
            if c == 1: pylab.ylabel('MLE')
            pylab.subplot(3,12,24+c); plot_bias_variance(MI_mle_jack_knife, rho=rho, N=N, bins=bins, b=b)
            if c == 1: pylab.ylabel('JackKnife')
pylab.savefig('mi_experiments.pdf')
```

## Parzen windows for estimating distributions

Part of a set of moderately technical writeups of some adventures in computing mutual information for neural data.

Often, for example, when you are computing mutual information, you need to estimate the probability distribution of a random variable. The simplest way, which I had done for years, is to create a histogram. You take each sample and put it into a bin based on its value. Then you can use one of several tests to check if the shape of the histogram deviates from whatever distribution you are interested in.

When you don't have enough data (aka Always) your histogram comes out jagged (Almost everyone who's ever made a histogram knows what I'm sayin'). In undergrad stats I learned that 11 was a nice number of bins, and indeed both Matplotlib and MATLAB seem to have that as the default. What I ended up doing was plotting the data using various bins until, by inspection, I was satisfied by the smoothness of the histogram.

Turns out mathematicians have a whole cottage industry devoted to formalizing how to compute the number of bins your histogram should have. The relevant keyword is (imaginatively enough) "histogram problem".
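As it happens, several of these formalized bin-count rules ship with numpy itself; here is a quick sketch (my aside, not part of the original scripts) using `numpy.histogram_bin_edges` (available since numpy 1.15) to see how much the published rules disagree on the same data:

```python
import numpy

numpy.random.seed(0)
data = numpy.random.randn(1000)  # 1000 samples from a standard normal

# Ask numpy for bin edges under several published rules and compare the counts.
for rule in ['sturges', 'scott', 'fd', 'auto']:
    edges = numpy.histogram_bin_edges(data, bins=rule)
    print('{:8s} -> {:d} bins'.format(rule, len(edges) - 1))
```

Sturges' rule gives on the order of a dozen bins for a thousand samples, while the Freedman-Diaconis rule typically asks for several times more, which is exactly the kind of disagreement that makes "pick 11 and squint" so tempting.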
I ran into this word while reading a Navy technical report by Cellucci et al found here. That paper has references to a bunch of dudes who worked on that problem. Anyhow, there are lots of complex schemes but I liked Tukey's formula, which was $n^{1/2}$ (n being the number of data points).

This post, however, is not about binning. It's about Parzen windows to get rid of binning altogether. There is a deep theoretical underpinning to Parzen windowing, but intuitively I understand Parzen windowing as a smoothing operation that takes the samples we have and creates a smooth distribution out of the points.

Here's how I understand Parzen windows: Each sample creates a splash – its own little gaussian (Apparently, you can also use boxcar windows or whatever window has a nice property for your problem). This is the Parzen window. As a result, the sample is no longer tightly localized but has a bit of a blur to it. We then add up all the blurs to create a smoothened curve/surface which is our estimate of the pdf of the samples. With a judicious choice of the width of the blurring and proper normalization of the height of each gaussian we can come up with a sensible pdf estimate. The advantage of this is that you now have a continuous function representing the pdf, which you can integrate.

Formally (I referred to a paper by Kwak and Choi – Input Feature Selection by Mutual Information based on Parzen window) the Parzen window estimate of the pdf is given by $\hat{p}(x) = \frac{1}{n}\sum_{i=1}^{n} \phi(x-x_i,h)$ where $\phi$ is the window function, and I used a gaussian for that. As you can see, the density estimate at any given point $x$ is given by the sum of gaussians centered around each data point $x_i$ with the width of the gaussian being given by $h$. The larger $h$ is the more washed out the estimate is, and the smaller $h$ is the more jaggy the estimate is.
We seem to have transferred our problem of finding an appropriate bin width (bin count) to finding an appropriate smoothening constant (What! You expected a free lunch!?). I used what Wikipedia calls Silverman's rule of thumb: $h=1.06\hat{\sigma}n^{-1/5}$.

Here is a fun little animation showing how the Parzen window estimate of a pdf (thin black line) matches up with the actual pdf (thicker blue line). The histogram of the actual data points is shown in light gray in the background.

An interesting thing about this smoothing is that there is no binning involved – the final curve depends only on the actual data samples – and we make no strong assumptions about the pdf of the data – it's not like we are trying to fit a model of the pdf to the data. Here is an animation of the exact same technique being used to fit a uniform distribution. We could have done better with a different choice of window width more tuned to the distribution, but the idea is that it still works if we don't have any idea of what the actual pdf looks like.

These are also known as mixture of Gaussians or mixture decompositions. During my web-searches I ran across this nice set of lecture slides about estimating pdfs from a prof at TAMU.
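Before the animation code that follows, here is a minimal static sketch of the same estimator (my own stripped-down version, not the post's code), showing that with Silverman's $h$ the sum of Gaussian bumps behaves like a proper density — it integrates to roughly one:

```python
import numpy

numpy.random.seed(1)
samples = numpy.random.randn(100)          # data whose pdf we want to estimate
sigma = samples.std()
h = 1.06 * sigma * samples.size**(-1/5.0)  # Silverman's rule of thumb

def parzen(x, X, h):
    """Sum of unit-area Gaussians of width h centered on each sample in X."""
    z = (x - X[:, None]) / h
    return numpy.exp(-z**2 / 2).sum(axis=0) / (numpy.sqrt(2*numpy.pi) * h * X.size)

grid = numpy.linspace(-5, 5, 501)
pdf_hat = parzen(grid, samples, h)
print(pdf_hat.sum() * (grid[1] - grid[0]))  # numerical integral; should be close to 1.0
```

Because each bump has unit area and we divide by the number of samples, the estimate is automatically normalized — no histogram, no bins.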
```python
import numpy, pylab, matplotlib.animation as animation

root_two_pi = (2*numpy.pi)**0.5
parzen_est = lambda x,X,h,sigma: numpy.exp(-(X-x)**2/(2*h**2*sigma**2)).sum()/(root_two_pi*h*sigma*X.size)
gaussian_pdf = lambda x: (1/(2*numpy.pi)**.5)*numpy.exp(-x**2/2)

def uniform_pdf(x):
    p = numpy.zeros(x.size)
    p[(-2 < x) & (x < 2)] = 0.25
    return p

def init():
    fig = pylab.figure(figsize=(3,6))
    ax = pylab.axes(xlim=(-5, 5), ylim=(-0.5, 0.7))
    return fig, ax

def animate(i, r, ax, distr):
    this_r = r[:i+2]
    ax.cla()
    h = 1.06*this_r.std()*this_r.size**(-.2)  #1/(2*numpy.log(N))
    lim = [-5,5]
    bins = 51
    x = numpy.linspace(lim[0], lim[1], num=100)
    pylab.text(-.4, -.025, 'n=' + str(this_r.size))
    pylab.hist(this_r, bins=bins, range=lim, density=True,  # 'density' was 'normed' in older matplotlib
               edgecolor=(.9,.9,.9), color=(.8,.8,.8), histtype='stepfilled')
    #pylab.plot(x, (1/(2*numpy.pi)**.5)*numpy.exp(-x**2/2), color=(.1,.1,1.0), lw=5)
    pylab.plot(x, distr(x), color=(.1,.1,1.0), lw=5)
    pylab.plot(x, [parzen_est(xx, this_r, h, this_r.std()) for xx in x], 'k', lw=3)
    pylab.setp(ax, xlim=(-5, 5), ylim=(-0.05, 0.7))

N = 200
r0 = numpy.random.randn(N); distr = gaussian_pdf
#r0 = numpy.random.rand(N)*4-2; distr = uniform_pdf
fig, ax = init()
anim = animation.FuncAnimation(fig, animate, fargs=(r0, ax, distr), frames=N-2, interval=2, repeat=False)
#anim.save('parzen_smoothing.mp4', fps=30, extra_args=['-vcodec', 'libx264'])
pylab.show()
```

## What is machine learning, and why should I care?

This is a (longish) informal intro to machine learning aimed at Biology/Neuroscience undergraduates who are encountering this for the first time in the context of biological data analysis.

We are, as you know, furiously lazy. It's a biological adaptation: we always try to find the path of least resistance. Only people with distorted senses of reality, like Neurobio students, make things harder for themselves.
Naturally, we dream of making machines do not just the easy work, like cutting our lawn, harvesting our crops and assembling our digital cameras, but also do the hard work, like picking stocks, diagnosing our diseases and analyzing all our data.

## Categorization

Categorization – making discrete decisions based on complex information – is at the heart of many interesting things we do, which are collectively called decision making. For example, we have an x-ray done and the radiologist examines the picture and determines if there is cancer or not. There are many, many input factors which go into this determination but the output is relatively simple: it is one of 'Yes' or 'No'. A lot of machine learning is devoted to understanding and implementing this kind of decision making in computers.

## Machine learning = Learning by examples

When we humans do half-way interesting things, like read, make coffee and do math, we attribute that to learning. We learned how to look at squiggles on a piece of paper and interpret them as information from somebody else's mind, usually indicating an event or thought. Often we learn things by example: we are exposed to things, like words, and we are given examples of what they sound like and what they represent and we 'pick up' the words, their sounds and their meanings.

Machine learning is a set of statistical techniques that take inspiration from this learning-by-example and try to mimic this process on computers, thereby realizing our dream of a perfect society where we'll all be chilling at the beach while the machines do all the work. The basic idea is that we collect a set of examples ("samples") and their meanings ("category"). Right now, we are doing baby steps, so the meaning (category) is restricted to simple concepts that have been reduced to a single label.
The hope is that if we show the machine enough examples we can let it loose on the world, and when it finds new things it can look to its book of samples, match up the new thing it sees with what it has already seen, and then take a decision: This is a 'tree', this is 'euler's equation', 'you need to exercise more'.

### Supervised learning

Suppose I want to teach the machine what a cat looks like. I collect a huge bunch of cat pictures and label them "cat". We show the computer the pictures (more on that later) and tell it "These are cats". Cool! Now the computer knows what a cat looks like. But, what happens when it sees a picture of a house, or a tree? It doesn't know what a cat doesn't look like. Some people will say they have a secret method where it's possible to show the computer just pictures of cats, and when a non-cat comes along, the computer will know the difference. We are more cautious. For good measure we throw in a huge collection of pictures that are not cats and tell the computer about that too, so it knows to look for the difference. This is called supervised learning, because we are supervising the computer by telling it what picture is what.

#### Unsupervised learning: the height of laziness

Humans learn many things by just observing. In machine learning there are techniques, called unsupervised learning, where we don't even bother to label the examples. We just dump a whole lot of data into the computer and say to it "Here's all the information, you sort it out, I'm off camping." The computer patiently goes through the samples, finds differences between them and sorts them into categories it comes up with on its own. This is a powerful method, but as you can imagine, without supervision, the results can be quite hilarious. In general, the computer, being prone to over-analysis, tends to find tiny differences between samples, and tends to break them up into many, many categories.
Without some kind of intervention the computer will patiently put each sample into its own separate category and be quite satisfied in the end.

## How is this different from more traditional computing?

In traditional computing we do all the 'thinking' for the computer and program a specific algorithm (a clear-cut set of computing steps) based on an explicit mathematical formula. We are the ones who come up with the core process of how to solve the problem. For example, say we are trying to get a computer to drive a car and we want it to stop when it sees a red stop sign. We think hard and say, "The most salient thing about a stop sign is the insane amount of red color it has. We'll program the computer to stop the car when it sees lots of red."

If we used a machine learning approach, we would say, "Why should I do all the thinking? I want to hang out with my homies. I'll just show the computer a huge scrapbook of stop signs, yield signs, pedestrian crossing signs and let the computer figure it out. The only work I'll do is that I'll put the signs into the correct categories first. If I'm in a hurry, I might not even do that."

## "Learning" = finding similarities in the features

How do we distinguish cats from, say, fire hydrants? We've found things (features) that are common to cats that are not present in fire hydrants. More precisely, we've found combinations of features that are very likely to make up a cat and very unlikely to make up a fire hydrant.

So how and what does the computer learn? The details of this depend on which exact machine learning method you use from the veritable zoo of machine learning methods. In general, the input that you give to the computer is converted to a list of numbers called a feature vector. It can be as simple as taking a picture of a cat, rearranging all the pixels into one long row and then feeding in this row of pixels as numbers.
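The "row of pixels" idea can be made concrete with a toy sketch (entirely made-up data, not from the post): a few tiny 3x3 "images" are flattened into feature vectors, and a new image gets the label of whichever stored vector it is closest to in Euclidean distance — the simplest possible "book of examples".

```python
import numpy

# Toy 3x3 "images": the cat class is bright on the left, the hydrant class on the right.
cat_a = numpy.array([[9,0,0],[9,0,0],[9,1,0]])
cat_b = numpy.array([[8,1,0],[9,0,0],[8,0,1]])
hyd_a = numpy.array([[0,0,9],[0,1,9],[0,0,8]])
hyd_b = numpy.array([[1,0,8],[0,0,9],[0,0,9]])

# "Feature vector" = pixels rearranged into one long row.
book = numpy.array([im.ravel() for im in (cat_a, cat_b, hyd_a, hyd_b)], dtype=float)
labels = ['cat', 'cat', 'hydrant', 'hydrant']

def classify(image):
    v = image.ravel().astype(float)
    d = numpy.sqrt(((book - v)**2).sum(axis=1))  # Euclidean distance to each stored vector
    return labels[d.argmin()]

new_image = numpy.array([[9,1,0],[8,0,0],[9,0,0]])
print(classify(new_image))  # left-bright, so it lands nearest the cat vectors
```

This is just nearest-neighbor matching; the real methods discussed below differ in how they measure "similar", but the feature-vector bookkeeping is the same.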
So how does the computer learn to use this feature vector to decide what is a picture of a cat and what isn't? You would expect that the feature vector taken from the picture of a cat is similar in some sense to vectors from pictures of other cats, and different from vectors of fire hydrant pictures. Machine learning algorithms use different mathematical formulae to automatically find the similarity between cat vectors and the differences between cat and fire hydrant vectors. When we show the computer a new picture, the computer converts the picture into a feature vector, refers to its giant book of existing feature vectors and asks the question "Is this new vector more similar to a cat vector or to something else in my book?" and bases its decision on that.

## Feature selection

Most things in the world have many, many features. A cat's features would, for example, include the shape and size of the ears, the color of the eyes, the color and length of the fur and so on. If we were distinguishing between fire hydrants and cats we probably don't have to look in great detail into the features of both; just a handful, such as color and overall shape, will do. If we are trying to distinguish between breeds of cats, however, we probably need to delve in great detail into a lot of the features.

Deciding which features to use has a great impact on our ability to teach the computer to perform categorization successfully. Sometimes we hand-select the features, where we have a good idea of what things distinguish the categories we care about. Other times, we get lazy and, once again, let the computer sort it out.

## Dirty little secrets: Overtraining/overfitting and the curse of dimensionality

As you may have noticed, this rambling piece of text bears a suspicious resemblance to the rose-tinted predictions of the 60s of how robots were going to do all our chores for us as we lay about in our bathrobes, smoking our pipes and ordering trifles by mail.
If machine learning is so great, where are our jet-packs, flying cars and domestic robots? Well, two things. First, the future is here, but it's a lot more profane than the romantics of the 1960s depicted. Automated algorithms track our spending habits and place annoying ads on our social media pages. The US postal service uses robots that can read our scraggly handwriting to route our birthday card to mom. UPS fired all the human telephone operators and now you can speak in your tracking number and the computer will take it down despite your thick southern accent.

Secondly, all this glib talk about "letting the computer sort things out" should have you on your guard. After all, we all know about skynet. There are two main problems with having machines do all the work.

### Overfitting

The first is called overfitting. This is where the computer gets too fixated on the details of the input and builds over-elaborate classification rules. Basically, we give the computer 1000 pictures of cats, and the computer builds a super-elaborate library memorizing every teeny detail of those 1000 pictures of cats. So now we show it the 1001st picture of a cat and, because the picture is a little bit different from each of the others, the computer says "That picture is not an exact match to any of my 1000 cat pictures, so that's not a cat; it must be a fire hydrant."

We can diagnose overfitting by doing something called cross-validation. Though this sounds like some kind of new-agey pop-psychology mumbo-jumbo, the concept is simple: you simply don't test your students using the same question set that you trained them on. We take our set of 1000 cat pictures and divide it up into two sets. The first set we call the training set (say 700 pictures). We let the computer learn using those 700 pictures. Then, we pull out the other 300 pictures that the computer has never seen before and make the computer classify those.
The idea is that if the computer has done some kind of rote memorization trick and has not understood the essence of cat, it's going to do rather poorly on them while it does spectacularly well on the original 700. This is how we catch computers and other students out.

But how do we ensure that the computer does not do rote-memorization in the first place? That's a pretty problem. There are no easy answers, but one thing we try is to figure out ways the computer could rote learn and penalize it. If, for example, the computer is using too many features, we say "Bad computer, use fewer features." But this is all very subjective and trial and error and domain specific. Our only objective measure of success is to do cross-validation.

### The curse of dimensionality

This is very scary, but I'm sorry I have to tell you about it. Remember when we talked about feature vectors? A feature vector is a list of numbers that describes the input. The more complex the nature of the input, the longer each feature vector (the more numbers in the list).

Suppose we have been given a dossier of 20 people and we are trying to figure out who to pick for the basketball team. None of these people have played sports of any kind before, so all we have are physical attributes and a bunch of other things like hometown, what color car they drive, how many times they flunked math and so on and so forth. The only other thing we have is a similar dossier from 50 other folks from last year and a notation that says "Good player" or "Bad player".

How do we do this? We start off simple. We pick the physical attribute "Height" and look at last year's dossier. Interestingly, when we arrange the 50 previous players it turns out most everyone above 5'8″ is a good player and most everyone below that height is a bad player. So we sort this year's list into 'Good' and 'Bad' piles based on height. We send the kids out to play and it turns out our picks are pretty good.
Well, the coach comes back and says, can we do better? We say, hell, if we did so well with just one feature (height) why don't we toss ALL the information in the dossier into the comparison. It can only get better, right?

So we start to include everything we have. Pretty soon a cool pattern emerges. All the tall players, who scored above D+, who live in a yellow house and drive a red car AND all the tall players who scored above B+ and live in a blue house and drive a yellow car are all good players. Man, ring up that Nate Gladwell guy, he'll want to know about this! We split the new players up according to this criterion and wait to hear the good news from the coach.

Next week the coach storms in and chews us out. He tells us we're a bunch of dunderheads, it's such a mixed bag of players his pet cat could have picked a better team, and we are FIRED! (My apologies to people who realize I know nothing about sports)

What happened? How did we fail so badly? Does this remind you of the overfitting problem, where our computer got too hung up on the details? Well, yes. It's kind of the same thing. When you add more features (also called dimensions of the feature vector – how many numbers there are in the list) you need many, many more sample points. If we had enough dossiers, those phoney correlations between playing ability and house color would have been drowned out. However, because we only had a relatively small number of dossiers, we started to get hung up on coincidences between things. The more features we pick from the dossier, the more coincidences we find. Statisticians, known for their rapier wit and general good humor, call this the curse of dimensionality.

## As applied to Neuroscience: An information measure

As a neuroscientist, or neurophysiologist or general biology student, why should you care? Well, these statisticians are encroaching on your turf. Statisticians can smell data a mile away, and biology absolutely reeks of it.
Using fMRI studies as an example, traditionally we have looked at each voxel individually and done a simple statistical test to answer the question "Is the activity in this cubic inch of the brain significantly different from chance?" Using somewhat dodgy corrections for multiple comparisons, we then create a picture of the brain with all the significantly active parts of the brain colored in, and we draw deep conclusions like "The visual cortex is more active when the person looks at visual stimuli" or "The motor cortex lights up when the subject does motor actions".

When the statisticians sneak in with forged immigration papers, things get more wild. The statisticians are not happy with these stodgy answers, like this brain region is differentially active during a working memory task. They go straight to the heart of why anybody would do brain science in the first place: "Can I read your mind and guess which card you are holding?" And the plan they have is to use these machine learning techniques we just discussed to answer this question.

A popular method is to take those fMRI pictures of the brain, much like the ones you find on the internet, and feed them into the computer, just as we talked about, along with some category information: "Looking at cat", "Looking at fire-hydrant" and so on. Then you test the computer (Recall, cross-validation) and show it an fMRI and ask the computer to guess what it was the person was thinking/seeing/doing when the fMRI was done.

The result is a value, a percentage correct. This percentage correct ranges from chance to 100% and is a crude measure of how much information there is in the brain about a certain thing, like a stimulus, or a mood, and, if we partition the fMRI, we can answer questions like how much information is there in the prefrontal cortex, in the basal ganglia, in the thalamus and so on.
Again, this differs from traditional methods of analyzing fMRI data only in that we don't fix what it is exactly in the data that will give us the answer. We let the computer figure it out. As we learned, this is exciting, but also dangerous if we let the computer totally loose on the data (Recall, curse of dimensionality, overfitting and whatnot).

It is also important to remember that this is not a statement on the nuts and bolts of HOW the brain is processing the information, merely a statement on how MUCH information there is (possibly) in some part of the brain that the rest of the brain could possibly use.

## The curse of D- and the LDA

All you dataheads know the curse whose name must not be spoken. The curse of D(imensionality)! Let's look at how the curse sickens us when we perform Linear Discriminant Analysis (LDA).

Our intuition, when we perform LDA, is that we are rotating a higher dimensional data space and casting a shadow onto a lower dimensional surface. We rotate things such that the shadow exaggerates the separation of data coming from different sources (categories), and we hope that the data, which may look jumbled in a higher dimension, actually cast a shadow where the categories are better separated.

I have some data that can be thought of as having a large number of dimensions, say about a hundred. (Each dimension is a neuron in the brain, if you must know). I know, from plotting the data from the neurons individually, that some neurons just don't carry any information I'm interested in, while others do. I'm interested in the question: if I combine multiple neurons together, can I get more information out of them than if I look at them individually? It is possible to construct toy scenarios where this is true, so I want to know if this works in a real brain. A question quickly arose: which neurons should I pick to combine?
I know that by using some criteria I can rank the neurons as to informativeness and then pick the top 50% or 30% of the neurons to put together. But what happens if I just take all the neurons? Will LDA discard the useless noisy ones? Will my data space be rotated so that these useless neurons don't cast any shadow and are eclipsed by the useful neurons?

This is not entirely an idle question, or a question simply of data analysis. The brain itself, if it is using data from these neurons, needs some kind of mechanism to figure out which neurons are important for what. I find this an important question and I don't think we are sure of the answers yet.

However, back to the math. We can generate a toy scenario where we can test this. I created a dataset that has 10, 25 and 50 dimensions. Only one dimension is informative, the rest are noise. Data from the first dimension come from two different classes and are separated. What happens when we rotate these spaces such that the points from the two classes are as well separated as possible?

The plot below shows the original data (blue and green classes). You can see that the 'bumps' are decently separated. Then you can see the 10d, 25d and 50d data. Wow! Adding irrelevant dimensions sure helps us separate our data! Shouldn't we all do this, just add noise as additional dimensions and then rotate the space to cast a well separated shadow?

Uh, oh! The curse of D- strikes again! We aren't fooled though. We know what's going on. In real life we have limited data. For example, in this data set I used 100 samples. Our intuition tells us that as our dimensions increase the points get less crowded. Each point is able to nestle into a nice niche in a higher dimension, further and further away from its neighbors. It's like the more dimensions we add, the more streets and alleys we add to the city. All the points no longer have to live in the same street. They now have their own zip codes.
(OK I’ll stop this theme now) Poor old LDA knows nothing about this. LDA simply picks up our space and starts to rotate it and is extremely happy when the shadow looks like it’s well separated and stops. The illusion will be removed as soon as we actually try to use the shadow. Say we split our data into test and train sets. Our train set data look nicely separated, but the moment we dump in the test data: CHAOS! It’s really jumbled. Those separate zipcodes – mail fraud! Thanks to the curse of D- Code follows: import pylab from sklearn.lda import LDA def myplot(ax, F, title): bins = pylab.arange(-15,15,1) N = F.shape[0] pylab.subplot(ax) pylab.hist(F[N/2:,0], bins, histtype='step', lw=3) pylab.hist(F[:N/2,0], bins, histtype='step', lw=3) pylab.title(title) D = 50 #Dimensions N = 100 #Samples pylab.np.random.seed(0) F = pylab.randn(N,D) C = pylab.zeros(N) C[:N/2] = 1 #Category vector F[:,0] += C*4 #Adjust 1st dimension to carry category information lda = LDA(n_components=1) #bins = pylab.arange(-15,15,1) fig, ax = pylab.subplots(4,1,sharex=True, figsize=(4,8)) myplot(ax[0], F, 'original') F_new = lda.fit_transform(F[:,:10],C) #Ten dimensions myplot(ax[1], F_new, '10d') F_new = lda.fit_transform(F[:,:25],C) #25 dimensions myplot(ax[2], F_new, '25d') F_new = lda.fit_transform(F[:,:50],C) #50 dimensions myplot(ax[3], F_new, '50d') ## Slicing high dimensional arrays When we learned about matrices it wasn’t hard to think of slicing 2D matrices along rows and columns. But what about slicing higher dimensional matrices? I always get confused when I go above 3-dimensions. Interestingly, for me, an easy way to imagine slicing such high dimensional matrices is by using trees and thinking of how the numbers are stored in a computer. 
Consider a 2x2x2x2 matrix, something we can construct as follows:

```python
a = pylab.arange(16)
a -> array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15])
b = a.reshape((2,2,2,2))
b -> array([[[[ 0,  1],
              [ 2,  3]],
             [[ 4,  5],
              [ 6,  7]]],
            [[[ 8,  9],
              [10, 11]],
             [[12, 13],
              [14, 15]]]])
```

Now imagine the matrix is stored in the computer as a simple sequence of numbers (this is kind of true). What makes these numbers behave as a matrix is the indexing scheme. The indexing scheme can be thought of as a kind of tree. Each level of the tree depicts a dimension, with the root being the outermost dimension and the leaves the inner-most dimensions. When we slice the array, say by doing

```python
b[0,:,:,:] -> array([[[0, 1],
                      [2, 3]],
                     [[4, 5],
                      [6, 7]]])
```

we are actually breaking off a branch of the tree and looking at that. The same with deeper slices, such as

```python
b[0,0,:,:] -> array([[0, 1],
                     [2, 3]])
```

But what about a slice that starts at the leaves, rather than the roots of the tree?

```python
b[:,:,:,0] -> array([[[ 0,  2],
                      [ 4,  6]],
                     [[ 8, 10],
                      [12, 14]]])
```

In such a case I think in terms of breaking off unused branches/leaves and then merging the stubs to form a new tree.

## Bayesian stats (githubbed, iPython notebooked)

Someone has put up on github a very interesting experiment. An experiment I hope will succeed well. They have started a tutorial on Bayesian statistics in the form of an iPython notebook (which, as you may recall, allows you to embed python code, plots, text and math into a notebook format, much like Maple or Mathematica). They have put this notebook collection up on github. This is awesome, because you can play with the examples as you learn about the theory and you can put in corrections/enhancements to the text/code and share it with everyone else.

## The other great analog-digital debate

Some of you are old enough to remember the great analog-v-digital debate: Vinyl or CD?
This post is about the OTHER great (but slightly less well known) analog-v-digital debate: do we simulate neurons on digital computers or on custom designed analog VLSI chips?

When I was at the Univ. of Maryland I was hooked on Neuromorphic engineering by Timothy Horiuchi. The central tenet of Neuromorphic engineering is that transistors operating in the subthreshold (analog) zone are great mimics of the computations done by neurons, and the way to intelligent machines is through building networks of such neuro-mimetic neurons on analog Very Large Scale Integration (aVLSI) chips. This press release of some work being done at INI at Zurich reminded me of this field. What the writeup also reminded me about was the great debate between digital and analog implementations of neural circuits.

Proponents of Neuromorphic VLSI base their work on the idea that transistors working in the sub-threshold zone give, for "free", a nice non-linearity between input and output that is at the heart of neural circuits. When applying for funds from DARPA they also remind the grant reviewers that aVLSI circuits have very low power consumption compared to digital VLSI circuits. A well designed and debugged aVLSI Neuromorphic chip is a great feat of engineering (often taking several fabrication rounds to get all the design problems weeded out), which makes iterating over designs very time consuming and unwieldy.

The proponents of old school digital computation, where neural behavior is encoded in an algorithm (implementing differential equation models of neurons), point to the ease of implementation (you can use your favorite programming language), the ease of debugging (you just recompile while you have a drink) and the ease of modifying and elaborating the design (comment your code!!).

There are some specific issues with aVLSI too. When you make giant neural networks, hooking up digital neurons is usually done using a connection matrix.
This matrix simply tells the simulating program which neuron gets inputs from which other neurons and which neurons it projects to. In aVLSI you need to physically wire up neurons on the chip layout. This means you can no longer modify the network organization to test out ideas on the fly – you need to design a new chip layout, send it for fabrication, wait, debug and so on. (And the moment you start changing connections you have to start moving the whole design around, because the exact routing of the wires affects the behavior of the chip: everything is so close and the voltages so low that the capacitance between wires matters. As I said, it is a true feat of engineering.)

People have come up with non-analog solutions to this 'routing' problem, by creating hardware versions of the connection matrix: separate circuits, often on a separate chip, that are dedicated to hooking up neurons to other neurons, somewhat like a telephone switchboard. These lose the low power advantage of aVLSI and increase the complexity of the circuits.

You know that I'm going to give my two cents. I think, not being very qualified to comment on either analog or digital implementations of neural circuits, that aVLSI might have some niche applications in tiny devices tailored to a specific task where small size and low power consumption are important. However, for the vast majority of machine intelligence applications, I think simple simulations of neural circuits, performed by ever more powerful and power efficient digital circuits, will prevail.
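To make the digital camp's connection-matrix idea concrete, here is a hypothetical minimal sketch (my own illustration, not any particular chip or simulator): a weight matrix W encodes who projects to whom in a simple leaky rate model, and "rewiring" the network is just editing W — no refabrication required.

```python
import numpy

n = 5  # neurons

# Connection matrix: W[i, j] is the weight from neuron j onto neuron i.
W = numpy.zeros((n, n))
W[1, 0] = 1.0   # neuron 0 excites neuron 1
W[2, 1] = 0.8   # neuron 1 excites neuron 2
W[2, 0] = -0.5  # neuron 0 inhibits neuron 2

def simulate(W, steps=50, dt=0.1):
    """Euler-integrate a leaky rate model: dr/dt = -r + tanh(W r + external input)."""
    r = numpy.zeros(W.shape[0])
    ext = numpy.zeros_like(r)
    ext[0] = 1.0  # constant drive into neuron 0
    for _ in range(steps):
        r = r + dt * (-r + numpy.tanh(W @ r + ext))
    return r

rates = simulate(W)

# "Rewiring" is editing the matrix, not the silicon:
W[2, 0] = 0.5  # flip the inhibitory connection to excitatory
rates2 = simulate(W)
print(rates, rates2)
```

Flipping one matrix entry from inhibition to excitation raises neuron 2's steady rate — the kind of one-line experiment that, in aVLSI, would mean a new layout and another trip to the fab.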
https://www.physicsforums.com/threads/i-am-confused-about-how-multivariable-calc-works.798798/
I am confused about how multivariable calc works

1. Feb 19, 2015 Calpalned

1. The problem statement, all variables and given/known data
My teacher introduced the third dimension ($R^3$) and higher dimensions to my class using vectors. Later on, my teacher introduced functions of two or more variables and now there's no mention of vectors. I am confused as to how vectors (i + j + k) and functions of two or more variables f(x, y, z) are related.

2. Relevant equations
N/A

3. The attempt at a solution
I'm not sure how to start. Thank you all!

2. Feb 19, 2015 Pythagorean

Typical convention is that i is the unit basis vector of the x direction, j of the y direction, and k of the z direction. Other than that, it depends on what you're doing. For instance, taking the gradient requires taking the partial of your function f(x,y,z) with respect to each variable, and multiplying each of those by its respective basis vector as below:
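The formula the reply is pointing at (cut off in the original extraction) is the standard gradient, which ties the two notions together: a scalar function of several variables produces a vector built from the basis vectors i, j, k:

```latex
\nabla f(x,y,z) \;=\; \frac{\partial f}{\partial x}\,\mathbf{i}
              \;+\; \frac{\partial f}{\partial y}\,\mathbf{j}
              \;+\; \frac{\partial f}{\partial z}\,\mathbf{k}
```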
https://www.physicsforums.com/threads/proofs-involving-negations-and-conditionals.856763/
# Proofs involving Negations and Conditionals

1. Feb 11, 2016 ### YamiBustamante

Suppose that A\B is disjoint from C and x ∈ A. Prove that if x ∈ C then x ∈ B. So I know that (A\B) ∩ C = ∅, which means A\B and C don't share any elements. But I don't necessarily understand how to prove this. I heard I could use a contrapositive to solve it, but how do I set it up? Which is P and which is Q (for P implies Q, or as the contrapositive: not Q implies not P)?

2. Feb 11, 2016 ### RUber

You are looking to prove: $(x \in C) \implies (x \in B)$ The contrapositive is: $( x \notin B) \implies (x \notin C)$ Your given (supposed) information will not change.

3. Feb 11, 2016 ### YamiBustamante

So here is what I have: If x∉B then x∉C. So, we can assume x∈A\B and since A\B and C are disjoint, then x∈(A\B)∩C which is true since x∉C. Would this be correct or is there any error in my logic?

4. Feb 11, 2016 ### RUber

I don't think you can say x∈(A\B)∩C, since you said above that that was the empty set. And, you should not use "since x∉C", because you are trying to prove this for the contrapositive. x is in A. This was given. If x is not in B, then it is in A\B. C is disjoint from A\B. This was given. x in A\B should tell you what you need to know about C.

5. Feb 12, 2016 ### HallsofIvy

No, that is NOT "what you have", it is what you are trying to prove. You have simply assumed what you want to prove!

6. Feb 13, 2016 ### Kilo Vectors

I am interested too. I understand that A intersection C would contain x. Or written as:

A intersection C = (X IS AN ELEMENT OF A) AND (X IS AN ELEMENT OF C)

[A AND NOT B] intersection C = NULL = [[(X IS AN ELEMENT OF A) AND (X IS NOT AN ELEMENT OF B)]] AND (X IS AN ELEMENT OF C)

^^ this logical statement for A\B intersection C, I believe, can be expanded below through the commutative property to become:

[[(X IS AN ELEMENT OF A) AND (X IS AN ELEMENT OF C)]] AND [[(X IS NOT AN ELEMENT OF B) AND (X IS AN ELEMENT OF C)]]

^^ A intersection C intersection C\B = null set.
we already know that A intersection C contains x.. that its intersection with C AND NOT B is null means that C AND NOT B cannot contain x.. therefore B contains x? http://i.imgur.com/BkaBtjf.png http://i.imgur.com/EKkKdFu.png sorry I really try to improve my maths Last edited: Feb 13, 2016

7. Feb 13, 2016 ### Kilo Vectors

is the above post of mine on the right track? ^^

8. Feb 13, 2016 ### Staff: Mentor

You are basically right but a little too expressive if you avoid the language of symbols. First you have to be precise in the wording. NULL is a computer term reserved for, e.g., empty datasets. In mathematics a null set is something different. E.g. a single point such as 1 on the real line is a null set: so tiny compared to the reals that it cannot be measured. In set theory we say empty set, written ∅.

The proposed way of proving the statement was by contradiction. It rests on the fact that one cannot derive a false statement from a true statement. From a false statement you can derive everything. E.g. if $1 = 0$ then for any number $x$ we have $x = x \cdot 1 = x \cdot 0 = 0$, which means $0$ is the only number at all, which is false. Or you can derive a true statement: e.g. if $1=0$ then $1=1-0=0-0=0=1$, which is true. However, from a true statement you can only derive other true statements.

In the statement above it is given that $(A \setminus B) \cap C = \emptyset$ and $x \in A$ and $x \in C$, which is the same as $x \in A \cap C$. We need to show that $x \in B$. So if we assume we have an element ($\exists$ meaning there is) $x_0 \notin B$ and end up with a false statement, then this assumption could not have been true.

The essential part is this: $x_0 \in A$ (given) and $x_0 \notin B$ (assumed), i.e. $x_0 \in A \setminus B$. But $x_0 \in C$ (given), which means $x_0 \in (A \setminus B) \cap C$. But this intersection is empty, so $x_0$ cannot exist. A contradiction, a false statement. Therefore our assumption $x_0 \notin B$ must have been false, too. This means the opposite of it is true: every ($\forall$ meaning for all) such $x$ is in $B$, which is what we wanted to show.
The negation of there is (∃) is for all (∀) and vice versa.
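For reference, the whole argument can be compressed into a few lines (a paraphrase of the thread's contrapositive argument, not a new result):

```latex
\textbf{Claim.} If $(A\setminus B)\cap C=\emptyset$ and $x\in A$,
then $x\in C \implies x\in B$.

\textbf{Proof (contrapositive).} Suppose $x\notin B$. Since $x\in A$,
we get $x\in A\setminus B$. If $x$ were also in $C$, then
$x\in (A\setminus B)\cap C=\emptyset$, which is impossible; hence
$x\notin C$. Thus $x\notin B \implies x\notin C$, which is the
contrapositive of $x\in C \implies x\in B$. \qquad$\blacksquare$
```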
http://daleswanson.blogspot.com/2011/07/overwhelmingly-large-telescope.html
## Thursday, July 7, 2011

### Overwhelmingly Large Telescope

http://en.wikipedia.org/wiki/Overwhelmingly_Large_Telescope

People are talking about the debate to stop funding the James Webb Space Telescope. In this discussion someone brought up the OWL, which is a ground based telescope that would cost $2.2 billion (the newest JWST estimate is $6.6 billion) and have much greater resolution than even the JWST.

While the original 100-m design would not exceed the angular resolving power of interferometric telescopes, it would have exceptional light-gathering and imaging capacity which would greatly increase the depth to which humankind could explore the universe. The OWL could be expected to regularly see astronomical objects with an apparent magnitude of 38; or 1,500 times fainter than the faintest object which has been detected by the Hubble Space Telescope.
http://www.physicsforums.com/printthread.php?t=652528
Physics Forums (http://www.physicsforums.com/index.php) - Classical Physics (http://www.physicsforums.com/forumdisplay.php?f=61) - Eigenfunction of a Jones Vector (System) (http://www.physicsforums.com/showthread.php?t=652528)

KasraMohammad Nov15-12 02:35 PM
Eigenfunction of a Jones Vector (System)

I am trying to find out just how to solve for the eigenfunction given a system, namely the parameters of an optical system (say a polarizer) in the form of a 2 by 2 Jones matrix. I know how to derive the eigenvalue, using the characteristic equation det(λI - A) = 0, 'A' being the system at hand and 'λ' the eigenvalue. How do you go about solving for the eigenfunction?

mathman Nov15-12 03:14 PM
Re: Eigenfunction of a Jones Vector (System)

This is a pure math question. Try the following: http://mathworld.wolfram.com/Eigenvalue.html http://www.math.hmc.edu/calculus/tutorials/eigenstuff/ http://www.sosmath.com/matrix/eigen2/eigen2.html

Philip Wood Nov16-12 05:15 PM
Re: Eigenfunction of a Jones Vector (System)

Once you've found λ, you can substitute its value into Av = λv. If you then multiply out the left hand side and equate components, v1 and v2, of v on either side, you'll get two equivalent equations linking v1 and v2. Either will give you the ratio v1/v2. This is fine: the eigenvalue equation is consistent with any multiplied constant in the eigenvector. There will be a normalisation procedure for fixing the constant.

KasraMohammad Nov16-12 07:39 PM
Re: Eigenfunction of a Jones Vector (System)

so I got λ = 1. 'A' I assume is the system matrix or my Jones Vector, which is given as a 2 by 2 matrix. So that makes Av=v, thus A must be 1?? The 'v' values must be the same, but isn't 'v' the eigenfunction itself? The equation Av=v eliminates the 'v' value. What am I doing wrong here?

Philip Wood Nov17-12 02:39 AM
Re: Eigenfunction of a Jones Vector (System)

v is the vector and A is the matrix. The matrix isn't a vector, but is an operator which operates on the vector.
Try it with a matrix A representing a linear polariser at 45° to the base vectors. This matrix has all four elements equal to 1/2. This gives an eigenvalue of 1, and substituting as I explained above shows the two components, v1 and v2, of the vector to be equal, which is just what you'd expect.
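Philip Wood's 45° polariser example is easy to check numerically. This little sketch (my illustration, not part of the thread) solves the characteristic equation of the 2x2 matrix by hand and then recovers the ratio v1/v2 for the eigenvalue λ = 1:

```python
import math

# Jones matrix of an ideal linear polariser at 45 degrees to the basis:
# all four entries equal 1/2.
a, b, c, d = 0.5, 0.5, 0.5, 0.5

# Characteristic equation det(lambda*I - A) = 0 for a 2x2 matrix reads
# lambda^2 - (a+d)*lambda + (a*d - b*c) = 0.
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # eigenvalues

# For lambda = lam1, substitute into (A - lambda*I) v = 0:
# (a - lam1) v1 + b v2 = 0  =>  v1 / v2 = -b / (a - lam1)
ratio = -b / (a - lam1)

print(lam1, lam2, ratio)  # eigenvalues 1 and 0; v1/v2 = 1, so v1 = v2
```

The eigenvalue 1 with v1 = v2 is the light passed unchanged (polarised along the polariser axis); the eigenvalue 0 corresponds to the component that is blocked.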
https://socratic.org/questions/if-a-triangle-with-side-lengths-in-the-ratio-3-4-5-is-inscribed-in-a-circle-of-r
# If a triangle with side lengths in the ratio 3:4:5 is inscribed in a circle of radius 3, how do you find the area?

Jun 16, 2015

Find the area of the right triangle.

#### Explanation:

A triangle with sides in the ratio 3:4:5 is a right triangle, so by Thales' theorem its hypotenuse is a diameter of the circumscribed circle. The hypotenuse = 2R = 6. Since the side ratio is 3:4:5, the ratio of the actual sides is 3.6 : 4.8 : 6. The two legs of the right triangle are 3.6 and 4.8. The area is: $s = \frac{3.6 \left(4.8\right)}{2} = 8.64$
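The scaling arithmetic can be checked in a couple of lines (illustration only):

```python
# Scale the 3:4:5 ratio so the hypotenuse equals the diameter 2R = 6.
R = 3
scale = 2 * R / 5               # 6/5 = 1.2
legs = (3 * scale, 4 * scale)   # 3.6 and 4.8
area = legs[0] * legs[1] / 2
print(f"{area:.2f}")  # 8.64
```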
https://www.zbmath.org/authors/?q=ai%3Achen.jun
# zbMATH — the first resource for mathematics ## Chen, Jun Compute Distance To: Author ID: chen.jun Published as: Chen, Jun; Chen, J.; chen, Jun External Links: ORCID · dblp Documents Indexed: 219 Publications since 1982, including 1 Book all top 5 #### Co-Authors 9 single-authored 12 Tian, Chao 12 Xu, Shengyuan 7 Liu, Fei 6 Chen, Gui-Qiang G. 6 Sun, Wenyu 6 Wang, Jia 6 Zhang, Baoyong 5 Berger, Toby 4 Chu, Yuming 4 Diggavi, Suhas N. 4 Song, Lin 4 Zou, Yun 3 Feldman, Mikhail 3 He, Dake 3 Hu, Ruimin 3 Jagmohan, Ashish 3 Li, Hongzhe 3 Li, Yongmin 3 Pan, Xingbin 3 Park, Juhyun (Jessie) 3 Permuter, Haim Henri 3 She, Zhensu 3 Weissman, Tsachy 3 Xu, Yun 3 Zhao, Renliang 2 Chen, Dongquan 2 Chen, Jianlong 2 Cui, Baotong 2 de Sampaio, Raimundo J. B. 2 Dumitrescu, Sorina 2 Hawkes, Alan Geoffrey 2 Hu, Qi 2 Huang, He 2 Huang, Qihong 2 Jia, Xianglei 2 Johnson, Lee W. 2 Khezeli, Kia 2 Kumar, Ratnesh R. 2 Lastras-Montaño, Luis Alfonso 2 Li, Xiaomei 2 Li, Xifeng 2 Li, Ze 2 Liang, Chao 2 Liu, Yonggang 2 Liu, Yonghui 2 Liu, Zhiyuan 2 Long, Yao 2 Ma, Qian 2 Nie, Fu-De 2 Qi, Zhidong 2 Riess, Ronald Dean 2 Ruan, Xueyu 2 Scalas, Enrico 2 Shamai (Shitz), Shlomo 2 Strauss, H. R. 2 Sun, Jin Shan 2 Wang, Guojin 2 Wang, Wei 2 Wang, Yuexian 2 Xu, Rui 2 Xu, Yinfei 2 Yang, Enhui 2 Yang, Mingzhu 2 Yu, Song 2 Zhang, Jiankang 2 Zhou, Jianjiang 1 Akers, W. 1 Ali, G. G. M. N. 1 Azizi, Asma 1 Bacuta, Constantin 1 Bai, Ming 1 Bai, Yang 1 Bao, Yun 1 Benaïm, Michel 1 Benjamini, Itai 1 Bhattacharyya, Saswata 1 Bi, Weitao 1 Brower, Richard C. 1 Cao, Han 1 Cao, Li 1 Cao, Qunhui 1 Cao, Yuhui 1 Caprani, Colin C. 1 Carmichael, Owen T. 1 Chan, Edward 1 Chang, Kunok 1 Chang, Yameng 1 Chen, Guanwei 1 Chen, Huafeng 1 Chen, Long-Qing 1 Chen, Ping 1 Chen, Rong 1 Chen, Ruiyang 1 Chen, Shixiu 1 Chen, Xuewen 1 Chen, Zhigang 1 Chen, Zhuchang 1 Chen, Zhumin 1 Christoforou, Cleopatra C. 
1 Crookes, Danny ...and 207 more Co-Authors all top 5 #### Serials 40 IEEE Transactions on Information Theory 6 IEEE Transactions on Automatic Control 5 Journal of the Franklin Institute 5 Journal of Shanghai Jiaotong University (Chinese Edition) 5 Systems Engineering and Electronics 3 International Journal of Solids and Structures 3 Journal of Fluid Mechanics 3 Biometrics 3 Information Sciences 3 Applied Mathematics and Mechanics. (English Edition) 3 Applied Mathematical Modelling 3 Applied Mathematics. Series B (English Edition) 3 Mathematical Problems in Engineering 2 Computer Methods in Applied Mechanics and Engineering 2 Journal of the Mechanics and Physics of Solids 2 Physics of Fluids 2 Physics Letters. A 2 International Journal of Production Research 2 Acta Automatica Sinica 2 Journal of Intelligent & Robotic Systems 2 Annals of Physics 2 International Journal of Robust and Nonlinear Control 2 Abstract and Applied Analysis 2 2 Wuhan University Journal of Natural Sciences (WUJNS) 2 European Journal of Mechanics. A. Solids 2 Communications in Nonlinear Science and Numerical Simulation 2 Journal of Southeast University. 
Natural Science Edition 2 Control and Decision 2 Journal of Hyperbolic Differential Equations 2 Acta Mechanica Sinica 2 Advances in Mathematical Physics 1 International Journal of Modern Physics B 1 Computers & Mathematics with Applications 1 Communications in Mathematical Physics 1 Computers and Structures 1 IMA Journal of Numerical Analysis 1 International Journal of Heat and Mass Transfer 1 International Journal of Systems Science 1 International Journal of Theoretical Physics 1 Journal of Mathematical Physics 1 Transport Theory and Statistical Physics 1 Chaos, Solitons and Fractals 1 Acta Mathematica Sinica 1 BIT 1 Fuzzy Sets and Systems 1 International Journal for Numerical Methods in Engineering 1 Journal of Computational and Applied Mathematics 1 Journal of Differential Equations 1 Journal of the Operational Research Society 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Journal of Hunan University 1 Circuits, Systems, and Signal Processing 1 Journal of Engineering Mathematics (Xi’an) 1 Journal of Systems Science and Mathematical Sciences 1 Journal of Fudan University. Natural Science 1 Hunan Annals of Mathematics 1 Northeastern Mathematical Journal 1 Journal of Qufu Normal University. Natural Science 1 Computational Mechanics 1 Random Structures & Algorithms 1 IEEE Transactions on Signal Processing 1 European Journal of Operational Research 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Journal of Statistical Computation and Simulation 1 Proceedings of the Royal Society of Edinburgh. Section A. 
Mathematics 1 SIAM Journal on Mathematical Analysis 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Advances in Engineering Software 1 Nuclear Physics, B, Proceedings Supplements 1 Mathematical Modelling and Scientific Computing 1 Journal of Huazhong University of Science and Technology 1 Chinese Journal of Numerical Mathematics and Applications 1 Electronic Journal of Probability 1 Electronic Communications in Probability 1 Discrete and Continuous Dynamical Systems 1 Journal of Vibration and Control 1 Mathematics and Mechanics of Solids 1 Journal of Shanghai University. Natural Science 1 Nonlinear Dynamics 1 Journal of Nanjing Normal University. Natural Science Edition 1 Acta Mathematica Scientia. Series A. (Chinese Edition) 1 Discrete Dynamics in Nature and Society 1 Interfaces and Free Boundaries 1 Progress in Natural Science 1 Control Theory & Applications 1 IEEE Transactions on Image Processing 1 Quantitative Finance 1 Journal of Zhejiang University. Engineering Science 1 Journal of Applied Mathematics 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris 1 Journal of Numerical Mathematics 1 Stochastics and Dynamics 1 Journal of Software 1 Journal of Systems Engineering 1 Journal of Shandong University. Natural Science 1 Quantum Information Processing 1 Structural and Multidisciplinary Optimization 1 Transactions of Beijing Institute of Technology 1 Journal of Tongji University. 
Natural Science ...and 21 more Serials all top 5 #### Fields 48 Information and communication theory, circuits (94-XX) 35 Systems theory; control (93-XX) 29 Mechanics of deformable solids (74-XX) 28 Computer science (68-XX) 27 Fluid mechanics (76-XX) 21 Operations research, mathematical programming (90-XX) 20 Numerical analysis (65-XX) 17 Partial differential equations (35-XX) 14 Statistics (62-XX) 10 Probability theory and stochastic processes (60-XX) 8 Statistical mechanics, structure of matter (82-XX) 7 Biology and other natural sciences (92-XX) 6 Dynamical systems and ergodic theory (37-XX) 4 Combinatorics (05-XX) 4 Ordinary differential equations (34-XX) 4 Optics, electromagnetic theory (78-XX) 4 Classical thermodynamics, heat transfer (80-XX) 4 Quantum theory (81-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Approximations and expansions (41-XX) 3 Mechanics of particles and systems (70-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Difference and functional equations (39-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Integral transforms, operational calculus (44-XX) 2 Integral equations (45-XX) 2 Relativity and gravitational theory (83-XX) 1 Category theory; homological algebra (18-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 General topology (54-XX) 1 Manifolds and cell complexes (57-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Astronomy and astrophysics (85-XX) 1 Geophysics (86-XX) #### Citations contained in zbMATH Open 81 Publications have been cited 446 times in 273 Documents Cited by Year Boundary element method for dynamic poroelastic and thermoelastic analyses. Zbl 0869.73075 Chen, J.; Dargush, G. F. 1995 Transonic shocks and free boundary problems for the full Euler equations in infinite nozzles. 
Zbl 1131.35061 Chen, Gui-Qiang; Chen, Jun; Feldman, Mikhail 2007 Single/multiple integral inequalities with applications to stability analysis of time-delay systems. Zbl 1370.34128 Chen, Jun; Xu, Shengyuan; Zhang, Baoyong 2017 Time domain fundamental solution to Biot’s complete equations of dynamic poroelasticity. II: Three-dimensional solution. Zbl 0816.73001 Chen, J. 1994 Time domain fundamental solution to Biot’s complete equations of dynamic poroelasticity. I: Two-dimensional solution. Zbl 0945.74669 Chen, J. 1994 Transonic nozzle flows and free boundary problems for the full Euler equations. Zbl 1142.35510 Chen, Gui-Qiang; Chen, Jun; Song, Kyungwoo 2006 Stability of rarefaction waves and vacuum states for the multidimensional Euler equations. Zbl 1168.35312 Chen, Gui-Qiang; Chen, Jun 2007 Stability analysis of continuous-time systems with time-varying delay using new Lyapunov-Krasovskii functionals. Zbl 1451.93336 Chen, Jun; Park, Ju H.; Xu, Shengyuan 2018 Impulsive effects on global asymptotic stability of delay BAM neural networks. Zbl 1152.34386 Chen, Jun; Cui, Baotong 2008 Variable selection for sparse Dirichlet-multinomial regression with an application to microbiome data analysis. Zbl 1454.62317 Chen, Jun; Li, Hongzhe 2013 Two general integral inequalities and their applications to stability analysis for systems with time-varying delay. Zbl 1351.93103 Chen, Jun; Xu, Shengyuan; Chen, Weimin; Zhang, Baoyong; Ma, Qian; Zou, Yun 2016 Subsonic flows for the full Euler equations in half plane. Zbl 1180.35419 Chen, Jun 2009 Uncertainty-like relations of the relative entropy of coherence. Zbl 1348.81099 Liu, Feng; Li, Fei; Chen, Jun; Xing, Wei 2016 Scale interactions of turbulence subjected to a straining-relaxation-destraining cycle. Zbl 1157.76301 Chen, Jun; Meneveau, Charles; Katz, Joseph 2006 Novel summation inequalities and their applications to stability analysis for systems with time-varying delay. 
Zbl 1366.93432 Chen, Jun; Xu, Shengyuan; Jia, Xianglei; Zhang, Baoyong 2017 An integrated fast Fourier transform-based phase-field and crystal plasticity approach to model recrystallization of three dimensional polycrystals. Zbl 1423.74712 Chen, L.; Chen, J.; Lebensohn, R. A.; Ji, Y. Z.; Heo, T. W.; Bhattacharyya, S.; Chang, K.; Mathaudhu, S.; Liu, Z. K.; Chen, L.-Q. 2015 A generalized Pólya’s urn with graph based interactions. Zbl 1317.05103 Benaïm, Michel; Benjamini, Itai; Chen, Jun; Lima, Yuri 2015 Stochastic failure prognosability of discrete event systems. Zbl 1360.68228 Chen, Jun; Kumar, Ratnesh 2015 Simultaneous identification of structural parameters and input time history from output-only measurements. Zbl 1145.74359 Chen, J.; Li, J. 2004 A logistic normal multinomial regression model for microbiome compositional data analysis. Zbl 1288.62171 Xia, Fan; Chen, Jun; Fung, Wing Kam; Li, Hongzhe 2013 Improvement on stability conditions for continuous-time T-S fuzzy systems. Zbl 1347.93159 Chen, Jun; Xu, Shengyuan; Li, Yongmin; Qi, Zhidong; Chu, Yuming 2016 Elastic moduli of composites with rigid sliding inclusions. Zbl 0825.73065 Jasiuk, I.; Chen, J.; Thorpe, M. F. 1992 Global output feedback practical tracking for time-delay systems with uncertain polynomial growth rate. Zbl 1395.93300 Jia, Xianglei; Xu, Shengyuan; Chen, Jun; Li, Ze; Zou, Yun 2015 Transonic flows with shocks past curved wedges for the full Euler equations. Zbl 1339.35224 Feldman, Mikhail; Chen, Jun; Chen, Gui-Qiang 2016 Setup planning using Hopfield net and simulated annealing. Zbl 0947.90554 Chen, J.; Zhang, Y. F.; Nee, A. Y. C. 1998 A variational method for recovering planar Lamé moduli. Zbl 1024.74007 Chen, Jun; Gockenbach, Mark S. 2002 Cutting costs or enhancing revenues? An example of a multi-product firm with impatient customers illustrates an important choice facing operational researchers. Zbl 1086.90505 Bell, P. C.; Chen, J. 
2006 A note on relationship between two classes of integral inequalities. Zbl 1373.26025 Chen, Jun; Xu, Shengyuan; Zhang, Baoyong; Liu, Guobao 2017 Functionals with operator curl in an extended magnetostatic Born-Infeld model. Zbl 1274.35372 Chen, Jun; Pan, Xing-Bin 2013 Instability of a boundary layer flow on a vertical wall in a stably stratified fluid. Zbl 1359.76108 Chen, Jun; Bai, Yang; Le Dizès, Stéphane 2016 Enhanced heat transport in partitioned thermal convection. Zbl 1382.76227 Bao, Yun; Chen, Jun; Liu, Bo-Fang; She, Zhen-Su; Zhang, Jun; Zhou, Quan 2015 Linear transport equation with specular reflection boundary condition. Zbl 0752.45009 Chen, Jun; Yang, Mingzhu 1991 Fault identification in rotating machinery using the correlation dimension and bispectra. Zbl 1027.74031 Wang, W. J.; Wu, Z. T.; Chen, J. 2001 Quasi-Bézier curves with shape parameters. Zbl 1268.41001 Chen, Jun 2013 Fault detection filtering for uncertain Itô stochastic fuzzy systems with time-varying delays. Zbl 1341.93088 Zhuang, Guangming; Yu, Xingjiang; Chen, Jun 2015 New relaxed stability and stabilization conditions for continuous-time T-S fuzzy models. Zbl 1417.93189 Chen, Jun; Xu, Shengyuan; Zhang, Baoyong; Chu, Yuming; Zou, Yun 2016 Partition of unity method on nonmatching grids for the Stokes problem. Zbl 1148.76305 Bacuta, C.; Chen, J.; Huang, Y.; Xu, J.; Zikatanov, L. 2005 Quasilinear systems involving curl. Zbl 1391.35146 Chen, Jun; Pan, Xing-Bin 2018 Performance of information criteria for selection of Hawkes process models of financial data. Zbl 1405.62137 Chen, J.; Hawkes, A. G.; Scalas, E.; Trinh, M. 2018 On the uniqueness of user equilibrium flow with speed limit. Zbl 1390.90198 Liu, Zhiyuan; Yi, Wen; Wang, Shuaian; Chen, Jun 2017 A homogenized high precise direct integration based on Taylor series. Zbl 1002.65074 Zhou, Gang; Wang, Yuexian; Jia, Guoqing; Chen, Jun 2001 Modelling deformation behaviour of polyelectrolyte gels under chemo-electro-mechanical coupling effects. 
Zbl 1178.76383 Chen, Jun; Ma, Guowei 2006 Numerical research on the sensitivity of nonmonotone trust region algorithms to their parameters. Zbl 1165.65360 Chen, Jun; Sun, Wenyu; De Sampaio, Raimundo J. B. 2008 An extended magnetostatic Born-Infeld model with a concave lower order term. Zbl 1322.78014 Chen, Jun; Pan, Xing-Bin 2013 Robust design of sheet metal forming process based on adaptive importance sampling. Zbl 1274.74295 Tang, Yucheng; Chen, Jun 2010 Two novel general summation inequalities to discrete-time systems with time-varying delay. Zbl 1395.93342 Chen, Jun; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Chu, Yuming; Zhang, Zhengqiang 2017 Diffusion tensor smoothing through weighted Karcher means. Zbl 1293.62089 Carmichael, Owen; Chen, Jun; Paul, Debashis; Peng, Jie 2013 Nonlinear breathing vibrations and chaos of a circular truss antenna with 1:2 internal resonance. Zbl 1343.78014 Zhang, W.; Chen, J.; Sun, Y. 2016 Discrete Schrödinger equations in the nonperiodic and superlinear cases: homoclinic solutions. Zbl 1444.39006 Jia, Liqian; Chen, Jun; Chen, Guanwei 2017 A generalized Pólya’s urn with graph based interactions: convergence at linearity. Zbl 1326.60135 Chen, Jun; Lucas, Cyrille 2014 Stability and asymptotic behavior of transonic flows past wedges for the full Euler equations. Zbl 1383.35146 Chen, Gui-Qiang G.; Chen, Jun; Feldman, Mikhail 2017 Continuous dependence on parameter of the classical solution to first-order quasilinear hyperbolic systems. Zbl 0974.35010 Chen, Jun; Jin, Yi 2000 Identification of physical parameters of ground vehicle using blockpulse function method. Zbl 0698.93014 Zhang, H. Y.; Chen, J. 1990 An event-based approach to spatio-temporal data modeling in land subdivision systems. Zbl 1035.68580 Chen, Jun; Jiang, Jie 2000 Capacity results for block-stationary Gaussian fading channels with a peak power constraint. Zbl 1325.94056 Chen, Jun; Veeravalli, Venugopal V. 2007 The capacity of finite-state Markov channels with feedback. 
Zbl 1294.94021 Chen, Jun; Berger, Toby 2005 Coordination and control of multi-fingered robot hands with rolling and sliding contacts. Zbl 0951.93056 Zribi, Mohamed; Chen, Jun; Mahmoud, Magdi S. 1999 The numerical solutions of Green’s functions for transversely isotropic elastic strata. Zbl 0978.74041 Chen, Rong; Xue, Songtao; Chen, Zhuchang; Chen, Jun 2000 Assembly and disassembly: An overview and framework for cooperation requirement planning with conflict resolution. Zbl 1057.68115 Nof, S. Y.; Chen, J. 2003 Translation from adapted UML to Promela for CORBA-based applications. Zbl 1125.68340 Chen, J.; Cui, H. 2004 Multiscale simulation of microcrack based on a new adaptive finite element method. Zbl 1291.74184 Xu, Yun; Chen, Jun; Chen, Dong Quan; Sun, Jin Shan 2010 A multiphase design strategy for dealing with participation bias. Zbl 1216.62167 Haneuse, S.; Chen, J. 2011 Hydrodynamic processes on beach: Wave breaking, up-rush, and backwash. Zbl 1419.76480 Jiang, C. B.; Chen, J.; Tang, H. S.; Cheng, Y. Z. 2011 Approximate merging of B-spline curves and surfaces. Zbl 1240.65047 Chen, Jun; Wang, Guojin 2010 Natural “flow” not in Le Ján-Raimond framework. Zbl 1252.60085 Chen, Jun; Xiang, Kai-Nan 2012 A nonmonotone retrospective trust-region method for unconstrained optimization. Zbl 1281.90054 Chen, Jun; Sun, Wenyu; Yang, Zhenghao 2013 Further studies on stability and stabilization conditions for discrete-time T-S systems with the order relation information of membership functions. Zbl 1395.93321 Chen, Jun; Xu, Shengyuan; Li, Yongmin; Chu, Yuming; Zou, Yun 2015 On the calculation of plastic strain by simple method under non-associated flow rule. Zbl 1406.74115 Hu, Qi; Li, Xifeng; Chen, Jun 2018 A nonmonotone trust region method based on simple quadratic models. Zbl 1294.65066 Zhou, Qunyan; Chen, Jun; Xie, Zhengwei 2014 Failure detection framework for stochastic discrete event systems with guaranteed error bounds. 
Zbl 1360.93418 Chen, Jun; Kumar, Ratnesh 2015 Classification of normal and abnormal regimes in financial markets. Zbl 1461.91293 Chen, Jun; Tsang, Edward P. K. 2018 Decision of fresh-keeping investment in agricultural supply chains under different settlement mode. Zbl 1424.90104 Chen, Jun; Cao, Qunhui 2018 New coding schemes for the symmetric $$K$$-description problem. Zbl 1366.94326 Tian, Chao; Chen, Jun 2010 On the sum rate of Gaussian multiterminal source coding: new proofs and results. Zbl 1366.94334 Wang, Jia; Chen, Jun; Wu, Xiaolin 2010 Capacity region of the finite-state multiple-access channel with and without feedback. Zbl 1367.94171 Permuter, Haim H.; Weissman, Tsachy; Chen, Jun 2009 Remote vector Gaussian source coding with decoder side information under mutual information and distortion constraints. Zbl 1367.94218 Tian, Chao; Chen, Jun 2009 The nature of near-wall convection velocity in turbulent channel flow. Zbl 1257.76028 Cao, Yuhui; Chen, Jun; She, Zhensu 2008 The sum rate of vector Gaussian multiple description coding with tree-structured covariance distortion constraints. Zbl 1391.94650 Xu, Yinfei; Chen, Jun; Wang, Qiao 2017 Dynamics of social interactions, in the flow of information and disease spreading in social insects colonies: effects of environmental events and spatial heterogeneity. Zbl 1464.92281 Guo, Xiaohui; Chen, Jun; Azizi, Asma; Fewell, Jennifer; Kang, Yun 2020 An extended cohesive damage model for simulating arbitrary damage propagation in engineering materials. Zbl 1439.74334 Li, X.; Chen, J. 2017 Nonlinear axisymmetric bending analysis of strain gradient thin circular plate. Zbl 07398239 Li, Anqing; Ji, Xue; Zhou, Shasha; Wang, Li; Chen, Jun; Liu, Pengbo 2021 Nonlinear axisymmetric bending analysis of strain gradient thin circular plate. 
Zbl 07398239 Li, Anqing; Ji, Xue; Zhou, Shasha; Wang, Li; Chen, Jun; Liu, Pengbo 2021

#### Cited by 563 Authors
11 Chen, Jun 9 Chen, Gui-Qiang G. 8 Xin, Zhouping 8 Xu, Shengyuan 8 Yuan, Hairong 6 Zhang, Zhengqiang 5 Chen, Chao 5 Kreml, Ondřej 5 Li, Yongmin 5 Wang, Yi 5 Xie, Chunjing 4 Chu, Yuming 4 Feldman, Mikhail 4 Gao, Dongmei 4 He, Yong 4 Khan, Akhtar Ali 4 Li, Kelin 4 Liu, Li 4 Markfelder, Simon 4 Pan, Xingbin 4 Weng, Shangkun 4 Yin, Huicheng 4 Zhang, Huaguang 3 Aletti, Giacomo 3 Chen, Shuxing 3 Crimaldi, Irene 3 Fang, Beixiang 3 Ghiglietti, Andrea 3 Gockenbach, Mark S. 3 Hu, Gang 3 Jiang, Lin 3 Kwon, O. M. 3 Li, Jun 3 Li, Linan 3 Mácha, Václav 3 Sun, Chao 3 Wang, Teng 3 Wang, Wei 3 Xiang, Wei 3 Zeng, Hongbing 3 Zhou, Quan 2 Ammour, Rabah 2 Bae, Myoungjean 2 Březina, Jan 2 Chen, Bing 2 Chen, Wenbin 2 Chiodaroli, Elisabetta 2 Dahm, Werner J. A. 2 de Oliveira, Fúlvia S. S. 2 Du, Lili 2 Duan, Ben 2 Duan, Wenyong 2 Ghoshal, Shyam Sundar 2 Gualtieri, P. 2 Guo, Xin 2 Gyurkovics, Éva 2 Hamlington, Peter E. 2 Huang, Feimin 2 Jadamba, Baasansuren 2 Jana, Animesh 2 Jiang, Linfeng 2 Latrach, Khalid 2 Leclercq, Edouard 2 Lefebvre, Dimitri 2 Li, Fei 2 Li, Piyu 2 Lin, Chong 2 Liu, Guobao 2 Liu, Xinzhi 2 Long, Fei 2 Ma, Qian 2 Menshikov, Mikhail V. 2 Park, Juhyun (Jessie) 2 Qi, Zhidong 2 Rakkiyappan, Rajan 2 Reich, Brian James 2 Sanlaville, Eric 2 Senocak, Inanc 2 Shao, Hanyong 2 Shao, Lin 2 Shao, Zhiqiang 2 Shcherbakov, Vadim 2 Shen, Lian 2 Souza, Fernando O. 2 Szeto, Wai Yuen 2 Wan, Zhenhua 2 Wang, Anny B.
2 Wang, Dehua 2 Wang, Yingchun 2 Wei, Guo 2 Wu, Min 2 Xiao, Cheng-Nian 2 Xu, Gang 2 Xu, Yanqin 2 Yin, Xiang 2 Zhang, Suxia 2 Zhong, Shou-Ming 2 Zhou, Bin 1 Abolpour, Roozbeh 1 Al Baba, Hind ...and 463 more Authors all top 5 #### Cited in 116 Serials 34 Journal of the Franklin Institute 15 Journal of Fluid Mechanics 15 Applied Mathematics and Computation 11 Journal of Differential Equations 10 Archive for Rational Mechanics and Analysis 7 International Journal of Theoretical Physics 6 Communications in Mathematical Physics 6 Automatica 5 Mathematical Problems in Engineering 5 The Annals of Applied Statistics 4 International Journal of Control 4 Journal of the American Statistical Association 4 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 4 SIAM Journal on Mathematical Analysis 3 Computers & Mathematics with Applications 3 Journal of Mathematical Physics 3 ZAMP. Zeitschrift für angewandte Mathematik und Physik 3 Advances in Mathematics 3 Information Sciences 3 Physics of Fluids 3 Quantum Information Processing 2 Computer Methods in Applied Mechanics and Engineering 2 Journal of Computational Physics 2 Journal of Mathematical Analysis and Applications 2 Journal of Statistical Physics 2 Mathematical Methods in the Applied Sciences 2 Nonlinearity 2 Chaos, Solitons and Fractals 2 Journal of Computational and Applied Mathematics 2 Mathematics and Computers in Simulation 2 Circuits, Systems, and Signal Processing 2 Mathematical and Computer Modelling 2 Neural Networks 2 Discrete Event Dynamic Systems 2 Stochastic Processes and their Applications 2 Bernoulli 2 Discrete and Continuous Dynamical Systems 2 Mathematics and Mechanics of Solids 2 Nonlinear Dynamics 2 Soft Computing 2 Communications in Nonlinear Science and Numerical Simulation 2 Nonlinear Analysis. Real World Applications 2 Journal of Hyperbolic Differential Equations 2 Networks and Spatial Economics 2 Nonlinear Analysis. Hybrid Systems 2 Science China. 
Mathematics 1 Acta Mechanica 1 Applicable Analysis 1 International Journal of Solids and Structures 1 International Journal of Systems Science 1 Journal of Mathematical Biology 1 Problems of Information Transmission 1 Rocky Mountain Journal of Mathematics 1 The Annals of Statistics 1 Biometrics 1 Fuzzy Sets and Systems 1 Journal of Multivariate Analysis 1 Journal of Optimization Theory and Applications 1 Operations Research 1 Results in Mathematics 1 Theoretical Population Biology 1 Systems & Control Letters 1 Computer Aided Geometric Design 1 Acta Mathematicae Applicatae Sinica. English Series 1 Computational Mechanics 1 Applied Mathematics Letters 1 SIAM Journal on Matrix Analysis and Applications 1 Signal Processing 1 International Journal of Adaptive Control and Signal Processing 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Applied Mathematical Modelling 1 Communications in Partial Differential Equations 1 International Journal of Computer Mathematics 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 SIAM Journal on Applied Mathematics 1 Computational Statistics and Data Analysis 1 Archive of Applied Mechanics 1 International Journal of Robust and Nonlinear Control 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Potential Analysis 1 Applied Mathematics. Series B (English Edition) 1 Calculus of Variations and Partial Differential Equations 1 Combinatorics, Probability and Computing 1 Electronic Journal of Differential Equations (EJDE) 1 NoDEA. Nonlinear Differential Equations and Applications 1 Complexity 1 European Journal of Control 1 Journal of Vibration and Control 1 Abstract and Applied Analysis 1 Journal of Applied Mechanics and Technical Physics 1 Chaos 1 European Journal of Mechanics. A. Solids 1 RAIRO. 
Operations Research 1 Journal of Systems Science and Complexity 1 Structural and Multidisciplinary Optimization 1 Statistical Applications in Genetics and Molecular Biology 1 International Journal of Wavelets, Multiresolution and Information Processing 1 Advances in Difference Equations 1 Journal of Statistical Mechanics: Theory and Experiment ...and 16 more Serials all top 5 #### Cited in 30 Fields 83 Systems theory; control (93-XX) 81 Fluid mechanics (76-XX) 76 Partial differential equations (35-XX) 25 Ordinary differential equations (34-XX) 23 Biology and other natural sciences (92-XX) 21 Statistics (62-XX) 18 Probability theory and stochastic processes (60-XX) 16 Numerical analysis (65-XX) 15 Mechanics of deformable solids (74-XX) 11 Quantum theory (81-XX) 11 Operations research, mathematical programming (90-XX) 10 Information and communication theory, circuits (94-XX) 8 Operator theory (47-XX) 7 Statistical mechanics, structure of matter (82-XX) 6 Computer science (68-XX) 4 Real functions (26-XX) 4 Difference and functional equations (39-XX) 4 Optics, electromagnetic theory (78-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Combinatorics (05-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 2 General and overarching topics; collections (00-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Geophysics (86-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Approximations and expansions (41-XX) 1 Differential geometry (53-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Astronomy and astrophysics (85-XX)
https://faculty.ucr.edu/~jflegal/203/08-simulations.html
## Agenda

• Simulations
• Monte Carlo
• Markov chain Monte Carlo

## Simulations

• Historically, Bayesian methods were restricted by the need to perform integrations analytically.
• Modern Bayesian analysis is typically performed by simulating from the posterior distribution using Markov chain Monte Carlo (MCMC) methods.
• Monte Carlo methods are preferred to numerical integration because of their potential to deal with high-dimensional problems.
• Approximate Bayesian analysis can also be performed using the analytic Laplace approximation and Monte Carlo methods.
• In regression problems Monte Carlo methods are preferred to Laplace approximations because, when performing many predictions, only a single Monte Carlo sample is necessary to perform all predictions, whereas the Laplace method requires a separate analytic approximation for each prediction.

## Simulations

• We addressed simulations in STAT 206, where we discussed obtaining pseudorandom variates from a specified distribution.
• Quantile transform method, rejection sampling via the accept-reject algorithm, simulating from mixtures, the Box-Muller transformation, etc.

## Quantile transform method

Given $$U \sim \text{Uniform}(0,1)$$ and a CDF $$F$$ from a continuous distribution, $$X = F^{-1}(U)$$ is a random variable with CDF $$F$$.

Proof: $P(X\le a) = P (F^{-1}(U) \le a) = P ( U \le F(a)) = F(a)$

• $$F^{-1}$$ is the quantile function
• If we can generate uniforms and calculate quantiles, we can generate non-uniforms
• Also known as the probability integral transform method

## Example: Exponential

Suppose $$X \sim \text{Exp}(\beta)$$. Then we have density $f(x) = \beta^{-1} e^{-x/\beta} I(0<x<\infty)$ and CDF $F(x) = 1 - e^{-x/\beta}.$

Also $y = 1 - e^{-x/\beta} \text{ iff } -x/\beta = \log (1-y) \text{ iff } x = -\beta \log (1-y).$

Thus, $$F^{-1} (y) = -\beta \log(1-y)$$. So if $$U \sim \text{Uniform}(0,1)$$, then $$F^{-1} (U) = -\beta \log(1-U) \sim \text{Exp}(\beta)$$.
## Example: Exponential

```r
x <- runif(10000)
y <- -3 * log(1 - x)  # draws from Exp(beta = 3) via the quantile transform
hist(y)
mean(y)
## [1] 3.017044
```

## Example: Exponential

```r
true.x <- seq(0, 30, .5)
true.y <- dexp(true.x, 1/3)
hist(y, freq = FALSE, breaks = 30)
points(true.x, true.y, type = "l", col = "red", lwd = 2)
```

## Box-Muller

• The Box-Muller transformation generates pairs of independent, standard normally distributed random numbers, given a source of uniformly distributed random numbers.
• Let $$U \sim \text{Uniform}(0,1)$$ and $$V \sim \text{Uniform}(0,1)$$ independently, and set $R=\sqrt{-2\log U} \hspace{10mm} \textrm{ and } \hspace{10mm} \theta = 2\pi V$
• Then the following transformation yields two independent normal random variates $X=R\cos(\theta) \hspace{10mm} \textrm{ and } \hspace{10mm} Y=R\sin(\theta)$

## Rejection sampling

• The accept-reject algorithm is an indirect method of simulation
• Uses draws from a density $$f_y(y)$$ to get draws from $$f_x(x)$$
• Sampling from the wrong distribution and correcting it

## Rejection sampling

Theorem: Let $$X \sim f_x$$ and $$Y \sim f_y$$, where the two densities have common support. Define $M = \sup_{x} \frac{f_x(x)}{f_y(x)}.$

If $$M< \infty$$ then we can generate $$X \sim f_x$$ as follows:

1. Generate $$Y \sim f_y$$ and independently draw $$U \sim \text{Uniform}(0,1)$$
2. If $u < \frac{f_x(y)}{M f_y(y)}$ set $$X=Y$$; otherwise return to 1.

## Rejection sampling

• Recall $$M\ge 1$$ and $P(\text{STOP}) = P \left( u \le \frac{f_x(y)}{M f_y(y)} \right) = \frac{1}{M}.$
• Thus the number of iterations until the algorithm stops is Geometric($$1/M$$).
• Hence, the expected number of iterations until acceptance is $$M$$.

## Monte Carlo Integration

• We can use these draws to approximate posterior expectations, posterior quantiles, or marginal posterior densities of interest.
• Focus will be on Monte Carlo and Markov chain Monte Carlo (MCMC) methods.
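As a concrete illustration of the accept-reject algorithm above, here is a minimal sketch — written in Python rather than the lecture's R, with the Beta(2,2) target, the Uniform(0,1) proposal, and the function names all being illustrative choices of my own, not part of the notes. For this target, $$M = \sup_x f_x(x)/f_y(x) = f_x(1/2) = 1.5$$.

```python
import random

def beta22_pdf(x):
    """Target density f_x: Beta(2, 2), i.e. 6 x (1 - x) on (0, 1)."""
    return 6.0 * x * (1.0 - x)

def accept_reject(n, seed=1):
    """Draw n variates from Beta(2, 2) using a Uniform(0, 1) proposal.

    The envelope constant is M = sup f_x / f_y = f_x(1/2) = 1.5.
    """
    rng = random.Random(seed)
    M = 1.5
    draws = []
    while len(draws) < n:
        y = rng.random()           # proposal Y ~ f_y = Uniform(0, 1)
        u = rng.random()           # independent U ~ Uniform(0, 1)
        if u < beta22_pdf(y) / M:  # accept with probability f_x(y) / (M f_y(y))
            draws.append(y)
    return draws
```

Since $$1/M = 2/3$$, roughly a third of the proposals are discarded; the accepted draws have mean near $$E(X) = 1/2$$, which is an easy sanity check on the sampler.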
## Monte Carlo Integration

• Consider an integral that is an expectation, say $\mu = E\left\{ g(X) \right\} = \int g(x) f(x) dx$ where $$X$$ is a random variable with density $$f(x)$$. Assume this expectation actually exists, i.e. $$E|g(X)| < \infty$$.
• The only generally applicable tool for approximating integrals like this is so-called Monte Carlo integration.
• Suppose we can simulate an iid sequence $$X_1, X_2, \dots$$ of random variables having density $$f(x)$$. Then $Y_i = g(X_i), \quad i = 1,2,\dots$ is an iid sequence of random variables having mean $$\mu$$, which is the integral we want to evaluate.

## SLLN

• The SLLN says that if $$E|Y| < \infty$$ then with probability 1 as $$n \to \infty$$ $\bar{Y}_n = \frac{1}{n} \sum_{i=1}^n Y_i \to \mu .$
• We can approximate $$\mu$$ to an arbitrary level of precision if we only average over a sufficiently large number of simulations.
• Using $$\bar{Y}_n$$ as an approximation for $$\mu$$ is the "Monte Carlo method." We saw multiple examples of this last quarter.

## Example

• Suppose $$X_1, X_2, \dots, X_m$$ are a random sample from a standard normal distribution. What is the relative efficiency of the sample mean compared to a 25% trimmed mean $$\hat{\theta}$$ as an estimator of the true unknown population mean $$\theta$$?

```r
set.seed(1)
mdat <- 30   # data sample size
nsim <- 1e4  # Monte Carlo sample size
theta.hat <- double(nsim)
for (i in 1:nsim) {
    x <- rnorm(mdat)
    theta.hat[i] <- mean(x, trim = 0.25)
}
```

## Example

• We want $$\text{Var} (\hat{\theta})$$, which is easy, and the trimmed mean, which is not a function defined by a nice simple expression. But both are easy in Monte Carlo since a computer can evaluate the function.
• The relative efficiency is $$\text{Var} (\hat{\theta})$$ divided by the variance of the sample mean, known to be $$1/m$$ without doing any Monte Carlo.
Thus our Monte Carlo approximation to the relative efficiency is computed by

```r
mdat * mean(theta.hat^2)
## [1] 1.160488
```

• This formula, as opposed to mdat * var(theta.hat), is used because $$E(\hat{\theta}) = 0$$ by symmetry.
• R gives 1.160488 for the Monte Carlo approximation to the relative efficiency.

## Monte Carlo Error

• The Monte Carlo approximation is not exact. The number 1.160488 is not the exact value of the integral we are trying to approximate using the Monte Carlo method. It is off by some amount, which we call Monte Carlo error.
• How large is the Monte Carlo error? We can never know. The error is $$1.160488 - \mu$$, hence unknown unless we know $$\mu$$, and if we knew that we wouldn't be doing Monte Carlo in the first place.
• We do know that our Monte Carlo approximation, $$\bar{Y}_n$$, is the average of random variables $$Y_1, Y_2, \dots$$ forming an iid sequence. If $$E(Y_i^2) < \infty$$, then the CLT says $\bar{Y}_n \approx N \left( \mu , \frac{\sigma^2}{n} \right)$ where $$\text{Var} (Y_i) = \sigma^2$$.
• Generally this tells us all we can know about the Monte Carlo error.

## Monte Carlo Error

• We don't know $$\sigma^2$$ from the CLT, but we can estimate it with $S_n^2 = \frac{1}{n-1} \sum_{i=1}^n (Y_i - \bar{Y}_n)^2 .$
• Now one can produce a confidence interval or just report the Monte Carlo standard error (MCSE), $$S_n / \sqrt{n}$$.
• Despite its simplicity and familiarity to all statisticians, the MCSE can be confusing when there are several variances floating around. The variance involved in the MCSE needn't be, and usually isn't, the variance involved in the expectation $$\mu$$ being calculated. Again, the distinction must be kept crystal clear.

## Example

• In the trimmed mean example, the expectation being calculated is a constant times a variance, $$\mu= m \text{Var}(\hat{\theta})$$. We estimated it by $\hat{\mu}_n = \frac{m}{n} \sum_{i=1}^n \hat{\theta}_i^2$ where $$\hat{\theta}_1, \hat{\theta}_2, \dots$$ are the Monte Carlo samples.
• The things being averaged to calculate $$\mu$$ are the $$m \hat{\theta}_i^2$$, thus $$S^2$$ should be the sample variance of the $$m \hat{\theta}_i^2$$.
• It should now be clear that the MCSE is

```r
sqrt(var(mdat * theta.hat^2) / nsim)
## [1] 0.01648569
```

## Example

• Note that both sample sizes mdat and nsim appeared in our MCSE calculation.
• Also the variance var(mdat * theta.hat^2) that appeared in the MCSE is very different from the variance in mdat * var(theta.hat) that might have been used as our Monte Carlo estimate.

## Ordinary Monte Carlo

• The main problem with ordinary Monte Carlo is that it is very hard to do for multivariate stochastic processes.
• There are a few tricks for reducing multivariate problems to univariate problems.
• For example, a general multivariate normal random vector $$X \sim N(\mu,\Sigma)$$ can be simulated using the Cholesky decomposition of the dispersion matrix $$\Sigma = LL^T$$. If $$Z$$ is a $$N(0,I)$$ random vector, then $$X = \mu + LZ$$ has the desired distribution.
• Another general method is to use the laws of conditional probability. Simulate the first component $$X_1$$ from its marginal distribution, simulate the second component $$X_2 | X_1$$, then simulate $$X_3 | X_1 , X_2$$, and so forth.

## Markov Chain Monte Carlo

• MCMC methods are just like ordinary Monte Carlo methods except that instead of simulating an iid sequence we will now be simulating a realization of a Markov chain.
• This is the only truly general method of simulating observations that are at least approximately from the target distribution.
• All major concepts used in ordinary Monte Carlo carry over to the MCMC setting.

## Markov Chain Monte Carlo

• As before, the major goal is still to estimate an unknown expectation $E_{\pi} \{ g(X) \} = \int g(x) \pi(x) dx$ where now $$\pi$$ is the density we are interested in using for inference.
• Now assume that direct simulation from $$\pi$$ is impossible. This is where MCMC is most useful.
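The Cholesky trick for multivariate normals described above can be sketched as follows — in Python with only the standard library rather than the lecture's R, with a hard-coded bivariate case and function names (`chol2`, `rmvnorm2`) that are illustrative assumptions, not from the notes:

```python
import math
import random

def chol2(s11, s12, s22):
    """Cholesky factor L (lower triangular) of a 2x2 covariance matrix."""
    l11 = math.sqrt(s11)
    l21 = s12 / l11
    l22 = math.sqrt(s22 - l21 * l21)
    return l11, l21, l22

def rmvnorm2(n, mu, sigma, seed=1):
    """n draws from the bivariate N(mu, Sigma) via X = mu + L Z, Z ~ N(0, I).

    sigma is (s11, s12, s22), the unique entries of the 2x2 matrix.
    """
    rng = random.Random(seed)
    l11, l21, l22 = chol2(*sigma)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((mu[0] + l11 * z1, mu[1] + l21 * z1 + l22 * z2))
    return out
```

For instance, `rmvnorm2(50000, (1.0, -1.0), (2.0, 0.8, 1.0))` yields draws whose sample means and sample covariance settle near the specified $$\mu$$ and $$\Sigma$$; in R the same idea is `mu + t(chol(Sigma)) %*% z`.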
## Markov Chains

A Markov chain is a sequence $$X = \{ X_1, X_2, \dots \} \in \mathsf{X}$$ of random elements having the property that the future depends on the past only through the present; that is, for any function $$g$$ for which the expectations are defined, $E \{ g(X_{n+1} , X_{n+2}, \dots) | X_n, X_{n-1}, \dots \} = E \{ g(X_{n+1} , X_{n+2}, \dots) | X_n \} .$

Let $h(X_{n+1}) = E \{ g(X_{n+1} , X_{n+2}, \dots) | X_{n+1} \}.$

Then using the Markov property (the definition above) and iterated expectations, \begin{aligned} E \{ g(X_{n+1} , X_{n+2}, \dots) | X_1, \dots, X_n \} & = E \{ E [ g(X_{n+1} , X_{n+2}, \dots) | X_1, \dots, X_{n+1} ] | X_1, \dots, X_n \} \\ & = E \{ E [ g(X_{n+1} , X_{n+2}, \dots) | X_{n+1} ] | X_1, \dots, X_n \} \\ & = E \{ h(X_{n+1}) | X_1, \dots, X_n \} \\ & = E \{ h(X_{n+1}) | X_n \} \; . \end{aligned}

## Markov Chain Conditions

• A Markov chain is stationary if the marginal distribution of $$X_n$$ does not depend on $$n$$.
• Another way to discuss stationarity is to say that the initial distribution is the same as the marginal distribution of all the variables.
• Such a distribution is a stationary distribution or an invariant distribution for the Markov chain.
• Formally, $$\pi$$ is an invariant distribution for a Markov kernel $$P$$ if $$\pi P = \pi$$.
• Not all Markov chains have stationary distributions, but all of those of use in MCMC do, since all Markov chains for MCMC are constructed to have a specified stationary distribution.

## Markov Chain Conditions

• We will assume that the Markov chain having invariant distribution $$\pi$$ is aperiodic, $$\pi$$-irreducible and positive Harris recurrent.

1. Aperiodic means that we cannot partition $$\mathsf{X}$$ in such a way that the Markov chain makes a regular tour through the partition.
2. $$\pi$$-irreducible means that if $$\pi(A) > 0$$ then there is a positive probability that the chain will eventually visit $$A$$.
3. Positive Harris recurrent.
“Positive” means that $$\pi$$ is a probability distribution; “Harris recurrent” means that no matter the starting distribution of the Markov chain, every set of positive $$\pi$$-measure will be visited infinitely often if the chain is run forever.

• A Markov chain $$X$$ satisfying these conditions is said to be Harris ergodic.

## SLLN

• A Harris ergodic Markov chain $$X = (X_1, X_2, \dots)$$ having stationary distribution $$\pi$$ satisfies the strong law of large numbers; that is, if $$E_{\pi} |g(X)| < \infty$$ then with probability 1 as $$n \to \infty$$ $\bar{g}_n = \frac{1}{n} \sum_{i=1}^n g(X_i) \to E_{\pi} [g(X)].$
• Note that if the chain is not stationary the SLLN still holds, even though none of the $$X_i$$ have the stationary distribution $$\pi$$. In fact, typically $E_{\pi} [g(X)] \ne E [g(X_i)] \text{ for all } i.$
• Hence $$\bar{g}_n$$ is a biased estimate.

## MCMC

• MCMC is just like ordinary Monte Carlo except that $$X_1, X_2, \dots$$ is a Harris ergodic Markov chain with a specified stationary distribution $$\pi$$. Basically, MCMC is the practice of using $$\bar{g}_n$$ as an estimate of $$E_{\pi} [g(X)]$$, just as ordinary Monte Carlo is the same practice when $$X_1, X_2, \dots$$ are iid with distribution $$\pi$$.
• Ordinary Monte Carlo is a special case of MCMC because iid sequences are Markov chains too.
• Note that all of the ordinary Monte Carlo arguments were based on the SLLN, and hence everything based on a SLLN applies to MCMC as well.

## Markov Chain CLT

• The CLT is the basis of all error estimation in Monte Carlo.
• For large Monte Carlo sample sizes $$n$$, the distribution of Monte Carlo estimates is approximately normal, so the asymptotic variance tells the whole story about accuracy of estimates.
• To simplify notation, define $$\mu = E_{\pi} [g(X)]$$ as the expectation being estimated.
• Then the SLLN says w.p.1 as $$n \to \infty$$ $\bar{g}_n \to \mu$ and the CLT says that as $$n \to \infty$$ $\sqrt{n} (\bar{g}_n - \mu) \overset{D}{\rightarrow} N(0 , \sigma_g^2)$ where $$0 < \sigma_g^2 < \infty$$.

## Markov Chain CLT

• In the iid case, the CLT is completely understood. That is, the CLT holds if and only if $$Var[g(X_i)] < \infty$$ and, moreover, $$\sigma_g^2 = Var[g(X_i)]$$, which is easily estimated by the sample variance of $$g(X_1), g(X_2), \dots, g(X_n)$$.
• The last point is not surprising, just a consequence of the fact that the variance of a sum is the sum of the variances if and only if the terms are uncorrelated. So in the iid case $Var[\bar{g}_n ] = \frac{\sigma_g^2}{n}$ but in general $Var[\bar{g}_n ] = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n cov \{ g(X_i), g(X_j) \}.$

## Calculation of the Asymptotic Variance

• The variance formula can be simplified using stationarity, which implies that the joint distribution of $$X_n$$ and $$X_{n+k}$$ depends only on $$k$$, not upon $$n$$.
• Hence all of the terms having the same difference between $$i$$ and $$j$$ are the same, and hence $n Var[\bar{g}_n ] = Var_{\pi} \{ g(X_i) \} + 2 \sum_{k=1}^{n-1} \frac{n-k}{n} cov_{\pi} \{ g(X_i), g(X_{i+k}) \}.$

## Calculation of the Asymptotic Variance

• It is common to define the lag $$k$$ autocovariance $\gamma_k = cov_{\pi} \{ g(X_i), g(X_{i+k}) \},$ and hence $n Var[\bar{g}_n ] = \gamma_0 + 2 \sum_{k=1}^{n-1} \frac{n-k}{n} \gamma_k.$
• Since $$\frac{n-k}{n} \to 1$$ as $$n \to \infty$$ one might suspect the right-hand side converges to $\sigma_g^2 = \gamma_0 + 2 \sum_{k=1}^{\infty} \gamma_k$ if it converges at all.

## Markov Chain CLT

• What conditions guarantee a Markov chain CLT? This is an important question. Not every Markov chain enjoys a CLT, and it doesn’t have to be a pathological example.
• To get Monte Carlo standard errors, we need to estimate the variance $$\sigma^2_g$$. Generally speaking, this can be difficult.
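For a concrete case where the series $$\sigma_g^2 = \gamma_0 + 2 \sum_{k=1}^{\infty} \gamma_k$$ can actually be summed, consider the stationary normal AR(1) chain used later in these notes, for which $$\gamma_k = \tau^2 \rho^k / (1 - \rho^2)$$ when $$g$$ is the identity. A small Python check (an illustration, not from the original slides) compares the truncated series with the closed form:

```python
rho, tau = 0.95, 1.0

# lag-k autocovariance of a stationary normal AR(1), g = identity:
# gamma_k = tau^2 * rho^k / (1 - rho^2)
gamma0 = tau**2 / (1 - rho**2)
series = gamma0 + 2 * sum(gamma0 * rho**k for k in range(1, 5000))

# closed form: sigma^2 = tau^2/(1 - rho^2) * (1 + rho)/(1 - rho) = tau^2/(1 - rho)^2
closed = tau**2 / (1 - rho**2) * (1 + rho) / (1 - rho)

assert abs(series - closed) < 1e-6  # both equal 400 for these parameters
```

The truncation at lag 5000 is harmless here since the remaining terms are of order rho^5000.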
## Markov Chain CLT

Theorem: Let $$X$$ be a Harris ergodic Markov chain on $$\mathsf{X}$$ with invariant distribution $$\pi$$ and let $$g : \mathsf{X} \to \mathbb{R}$$. Assume one of the following conditions:

1. $$X$$ is polynomially ergodic of order $$m > 1$$, $$E_{\pi} M < \infty$$ and there exists $$B < \infty$$ such that $$|g(x)| < B$$ almost surely;
2. $$X$$ is polynomially ergodic of order $$m$$, $$E_{\pi} M < \infty$$ and $$E_{\pi} |g(x)|^{2 + \delta} < \infty$$ where $$m \delta > 2 + \delta$$;
3. $$X$$ is geometrically ergodic and $$E_{\pi} |g(x)|^{2 + \delta} < \infty$$ for some $$\delta > 0$$;
4. $$X$$ is geometrically ergodic, reversible and $$E_{\pi} g(x)^{2} < \infty$$; or
5. $$X$$ is uniformly ergodic and $$E_{\pi} g(x)^{2} < \infty$$.

Then for any initial distribution, as $$n \to \infty$$ $\sqrt{n} (\bar{g}_n - \mu) \overset{D}{\rightarrow} N(0 , \sigma^2_g) .$

## Batch Means

• Suppose $$n=ab$$ and hence $$a=a_n$$ and $$b=b_n$$ are functions of $$n$$.
• Batch means is based on $n Var[\bar{g}_n ] \to \sigma^2_g \text{ as } n \to \infty.$
• Hence $m Var[\bar{g}_m ] \approx n Var[\bar{g}_n ]$ whenever $$m$$ and $$n$$ are both large.
• Thus a (not very good) estimate of $$m Var[\bar{g}_m ]$$ is $m(\bar{g}_m - \bar{g}_n)^2$ where we are thinking here that $$1 << m << n$$, meaning $$m$$ is large compared to 1 but small compared to $$n$$ (which means $$n$$ is very large).

## Batch Means

• Can increase precision by averaging. If the Markov chain were stationary, every block of length $$b$$ would have the same joint distribution. For some reason, early in the history of this subject, the blocks were dubbed “batches”, so that is what we will call them.
• A batch of length $$b$$ of a Markov chain $$X_1, X_2, \dots$$ is $$b$$ consecutive elements of the chain.
For example, the first batch is $X_1, X_2, \dots, X_{b}$ The $$j$$th batch mean is the sample mean of the $$j$$th batch $\bar{g}_j = \frac{1}{b} \sum_{i=(j-1)b + 1}^{jb} g(X_i).$

## Batch Means

• Then the batch means estimator of $$\sigma^2_g$$ is $\hat{\sigma}^2_{BM} = \frac{b}{a-1} \sum_{j=1}^a (\bar{g}_j - \bar{g}_n)^2.$
• If the number of batches is fixed, this will not be a consistent estimator. However, if the batch size and the number of batches are allowed to increase with $$n$$, it may be possible to obtain consistency.
• A generalization of BM is the method of overlapping batch means (OLBM). Note that there are $$n-b+1$$ overlapping batches of length $$b$$, indexed by $$k$$ running from zero to $$n-b$$. OLBM averages all of them and is asymptotically equivalent to a spectral variance estimator using a Bartlett window.

## Numerical Example

• Consider the normal AR(1) time series defined by $X_{n+1} = \rho X_n + Z_n$ where $$Z_1, Z_2, \dots$$ are normal with mean zero and $$Z_n$$ is independent of $$X_1, \dots , X_n$$.
• In the time series literature, the $$Z_i$$ are called the innovations and their variance is the innovations variance.
• Let $$\tau^2$$ denote the innovations variance.

## Numerical Example

• The following will provide an observation from the MC 1 step ahead.

ar1 <- function(m, rho, tau) {
  rho*m + rnorm(1, 0, tau)
}

## Numerical Example

• Next, we add to this function so that we can give it a Markov chain and the result will be p observations from the Markov chain.

ar1.gen <- function(mc, p, rho, tau, q=1) {
  loc <- length(mc)
  junk <- double(p)
  mc <- append(mc, junk)
  for(i in 1:p){
    j <- i+loc-1
    mc[(j+1)] <- ar1(mc[j], rho, tau)
  }
  return(mc)
}

## Numerical Example

Let $$\rho=.95$$ and $$\tau = 1$$.
set.seed(20)
tau <- 1
rho <- .95
out <- 10 # starting value
n <- 10000
x <- ar1.gen(out, n, rho, tau)

## Numerical Example

plot(x[1:1000], xlab="Iteration", ylab="", pch=16, cex=0.4)

## Numerical Example

mu.hat <- mean(x)
mu.hat

## [1] -0.1252549

• We will estimate the asymptotic variance using BM. The following figure plots the batch means for batch length 100, that is, a plot of $$\bar{g}_k$$ versus $$k$$.
• This is, of course, another time series, though we hope to choose a batch size large enough that we can treat these as independent.

b = 100
a = floor(n/b)
y = sapply(1:a, function(k) return(mean(x[((k - 1) * b + 1):(k * b)])))
plot(y, xlab="Batch Number", ylab="", type="o")
abline(h=mu.hat, lty=2)

## Numerical Example

• The variance estimate and standard error are as follows.

var.hat = b * sum((y - mu.hat)^2)/(a - 1)
se = sqrt(var.hat/n)
c(var.hat, se)

## [1] 344.0124746   0.1854757

• Then we can report our result including both the estimate and standard error.

result <- c(round(mu.hat, 2), round(se, 2))
names(result) <- c("Estimate", "MCSE")
result

## Estimate     MCSE
##    -0.13     0.19

## Numerical Example

• Alternatively, an approximate 95% confidence interval can be calculated as follows.

ci <- mu.hat + c(-1,1) * 1.96 * se
round(ci, 2)

## [1] -0.49  0.24

• We see that (no surprise) statistics works and the confidence interval actually covers the true value. We also know that in actual practice a 95% confidence interval will fail to cover 5% of the time, so failure of the interval to cover wouldn’t necessarily have indicated a problem.

## Numerical Example

• It is important to notice that our variance estimate isn’t very good. We can calculate $$\sigma^2_g$$ in this problem, that is $\sigma^2_g = \sigma^2_X \frac{1 + \rho}{1-\rho} = \frac{\tau^2}{1-\rho^2} \; \frac{1 + \rho}{1-\rho} = 400$ for $$\rho=.95$$ and $$\tau = 1$$. But our estimate based on BM here is the following.
var.hat

## [1] 344.0125

## Asymptotic Variance

• The asymptotic theory behind our confidence interval assumes that $$n$$ is so large that the difference between $$\hat{\sigma}^2_n$$ and $$\sigma^2$$ is negligible. The difference here is obviously not negligible, so we can’t expect our nominal 95% confidence interval to actually have 95% coverage.
• The Monte Carlo sample size $$n$$ must be much larger for all the asymptotics we so casually assume to hold. This is a very common phenomenon; obtaining a good estimate of an asymptotic variance often requires larger sample sizes than estimating an asymptotic mean.
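For readers who prefer Python, the whole batch means pipeline above can be replicated in a few lines. This is an illustrative translation of the R analysis, not the course code, and the numbers differ because a different random number generator is used:

```python
import numpy as np

def batch_means_se(x, b):
    """Batch means estimate of the Monte Carlo standard error of mean(x)."""
    n = x.size
    a = n // b                                  # number of batches
    y = x[:a * b].reshape(a, b).mean(axis=1)    # batch means
    var_hat = b * np.sum((y - x.mean())**2) / (a - 1)
    return np.sqrt(var_hat / n)

# simulate the same normal AR(1) chain: rho = .95, tau = 1, start at 10
rng = np.random.default_rng(20)
rho, tau, n = 0.95, 1.0, 10_000
x = np.empty(n)
x[0] = 10.0
for i in range(1, n):
    x[i] = rho * x[i - 1] + rng.normal(0.0, tau)

se = batch_means_se(x, b=100)  # comparable in magnitude to the R value of about 0.19
```

As in the R run, the batch means estimate of the standard error will typically be in the neighborhood of sqrt(400/10000) = 0.2, but with considerable variability of its own.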
https://www.biostars.org/p/81523/#81708
Template Size In Mira3

Hi, I am pretty new to genome assembly and in particular to mira3, and I have a couple of questions regarding that.

1. What exactly is the template size in mira3? I couldn't find a proper definition of it in the manual. My fragment size before ligating adapters is ~250bp and after library construction it was found to be ~350. My read length is 260bp. In this scenario, what is the exact template size they want in the configuration file?

2. I am working with bacterial MiSeq genome data, and when I used mira3 to assemble the contigs, I found that there are ~1400 contigs in the final results file. Do you think they are too many? Could this have happened because of a wrong template size specification in the configuration file during the mira3 assembly run?

Thanks
Upendra

denovo assembly

Answer (cts, 8.3 years ago):

The template size usually refers to the distance between the 5' ends of paired-end data, in other words the length of the DNA between the adaptors. So in your case it would be ~250 bp.

Thanks cts for the detailed explanation. After going through some forums, I found that for Mira you actually need two sizes: the insert size (which is ~250bp) and the fragment size (which is insert size (250) + total read length (520) = 770). Do you think this is correct? Also, regarding adapter trimming, I found the following in their manual: "Outside MIRA: for heavens' sake: do NOT try to clip or trim by quality yourself. Do NOT try to remove standard sequencing adaptors yourself. Just leave Illumina data alone! (really, I mean it)". So I assume I cannot just trim the adapters from the reads then. Please let me know what you think.

Thanks
Upendra

Hey, so I think that there is some confusion with the terminology that different people use for 'insert size'.
Many people (including myself) refer to the insert size as the distance between the 5' ends of the reads, which will be the length of the fragment of DNA being sequenced. Other people use the insert size to describe the distance between the 3' ends of the reads, in other words the part of the DNA fragment that is between the two reads (not actually sequenced). So to illustrate this:

>---|                    read1
===================      DNA fragment
>>---------------<<      insert size using definition 1
     >>----<<            insert size using definition 2

>--------|               read1 (260bp)
==========               DNA fragment (250bp)

(Please let me know if this interpretation of your data is wrong.)

So what you've done is sequence the same bit of DNA twice with both reads. In this case the insert size, using mira's terminology, would be 0 and the fragment size would be 250. However, considering that read1 and read2 will be mostly identical, you could just assemble either of them as single-end data and get similar results. Alternatively, you could overlap the pairs using seqprep to get higher-quality reads and then assemble as single-end data.

Hi, thanks again for the clarification. Your interpretation is spot on, at least in my case. Regarding the first figure, I would normally say your definition seems correct to me. Actually this is not my experiment, but I am helping another postdoc in the lab analyze the data. Anyway, I have just started running the analysis with single-end reads and I will let you know if this actually improves the assembly. If this doesn't help, then I will try the "seqprep" method. Thanks, Upendra

Hi, even using only single-end reads, mira couldn't make a good assembly. There are around 1300 contigs with an N50 of only 4662. Though this is much better than the paired-end assembly, I would like to make a better one. What do you think I need to change to get a better assembly?
You could try a different assembler, as I mentioned in my original answer; other than that I'm not sure. Your data is suboptimal because the DNA fragment size is so short, and it may be that what you're sequencing has a lot of repeats in it, which is breaking the assembly into many contigs.
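The arithmetic behind the two insert-size definitions in this thread is simple enough to sketch. The following Python fragment is just an illustration using the poster's numbers (the function name is made up for this example):

```python
def three_prime_gap(fragment_len, read_len):
    """Distance between the 3' ends of a read pair (definition 2 above).
    Negative values mean the two reads overlap."""
    return fragment_len - 2 * read_len

# Definition 1 is simply the fragment length itself: ~250 bp here.
# With 260 bp reads on a 250 bp fragment the "gap" is negative,
# so read1 and read2 overlap across the entire fragment:
assert three_prime_gap(250, 260) == -270
```

This is why merging the pairs with a tool like seqprep, or just treating one read of each pair as single-end data, is a reasonable way to handle such a library.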
https://www.acmicpc.net/problem/23575
Time limit: 1 second (no additional time) | Memory limit: 1024 MB | Submissions: 59 | Accepted: 31 | Solvers: 18 | Ratio: 50.000%

Problem

You are one of the $456$ players participating in a series of children’s games with deadly penalties. Passing through a number of maze-like hallways and stairways, you have opened the gate of the next game.

There are three buckets with infinite capacity, each of which contains an integral number of liters of water. The buckets are numbered from $1$ to $3$. The amounts of water initially contained in buckets $1$, $2$, and $3$ are given as $X$, $Y$, and $Z$, respectively. At any time, you can double the amount of one bucket by pouring into it from another one. Specifically, you can pour from a bucket of $y$ liters into one of $x$ ($≤ y$) liters until the latter contains $2x$ liters and the former contains $y - x$ liters. Note that $x$ and $y$ are always integers and $x ≤ y$. See Figure J.1.

Figure J.1: A process of pouring

In order to survive, you have to empty one of the buckets in a limited number of pourings. Fortunately, it is always possible to empty one of the buckets. Given the initial amounts $X$, $Y$, and $Z$ of water in the three buckets, write a program to output a sequence of pourings until one of the buckets is empty for the first time.

Input

Your program is to read from standard input. The input starts with a line containing three integers $X$, $Y$, and $Z$ ($1 ≤ X ≤ Y ≤ Z ≤ 10^9$), representing the initial amounts of water in buckets $1$, $2$, and $3$, respectively.

Output

Your program is to write to standard output. The first line should contain the number $m$ of pourings until one of the buckets is empty for the first time. The number $m$ should be no more than $1,000$. Each of the following $m$ lines contains two integers $A$ and $B$ ($1 ≤ A ≠ B ≤ 3$), which means you pour from bucket $A$ into bucket $B$ in a process of pouring. You should guarantee that one of the buckets is empty for the first time after the $m$ pourings. If there are several ways to empty one of the buckets, then print one of them.
Sample Input 1

1 2 3

Sample Output 1

2
3 2
3 1

Sample Input 2

1 4 6

Sample Output 2

3
2 1
3 1
1 3
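A small checker (not part of the problem statement) makes the pouring rule concrete and verifies the samples. Buckets are 1-indexed, and a move `A B` doubles the receiving bucket `B`:

```python
def pour(amounts, a, b):
    """Pour from bucket a into bucket b (1-indexed); b's amount doubles."""
    x, y = amounts[b - 1], amounts[a - 1]
    assert x <= y, "can only pour from the larger bucket into the smaller one"
    amounts[b - 1] = 2 * x
    amounts[a - 1] = y - x

def run_moves(initial, moves):
    amounts = list(initial)
    for a, b in moves:
        pour(amounts, a, b)
    return amounts

# Sample 2: starting from 1 4 6, the moves (2 1), (3 1), (1 3) empty bucket 1
assert run_moves([1, 4, 6], [(2, 1), (3, 1), (1, 3)]) == [0, 3, 8]
```

Running the first sample, `run_moves([1, 2, 3], [(3, 2), (3, 1)])`, likewise leaves bucket 3 empty.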
https://discusstest.codechef.com/t/treasure-editorial/22734
TREASURE - EDITORIAL

Setter: Ashish Gupta
Tester: Encho Mishinev
Editorialist: Taranpreet Singh

Difficulty: Medium

PREREQUISITES:

Modular Linear Equations, Gauss-Jordan Elimination.

PROBLEM:

Given a grid with N rows and M columns. Some of the cells of the grid may contain treasure. For some cells, we know whether the number of adjacent cells containing treasure is odd or even. A treasure layout is the set of cells containing the treasure. Find out the number of treasure layouts consistent with the information given.

QUICK EXPLANATION

• Treating each cell in the grid as a variable, the information given in the cells is nothing but a system of linear equations modulo 2. So, if the given set of equations does not have a solution, there is no valid treasure layout.
• Otherwise, notice that each equation indirectly expresses one variable as a combination of the other variables. Hence if we fix values for all but one variable of the equation, the last variable has only one valid value.
• This way, the total number of treasure layouts is 2^{(N*M)-x}, where x is the number of dependent variables.

EXPLANATION

Quick Explanation said it all!

Let C[i][j] denote whether a treasure is present at position (i,j). If C[i][j] = 1, a treasure is present at position (i,j); otherwise C[i][j] = 0.

So, when we have A[i][j] \neq -1, we are indirectly given the parity of the sum over the four positions adjacent to (i,j) in the C grid. Formally, if A[i][j] = 0, the sum C[i-1][j]+C[i][j-1]+C[i+1][j]+C[i][j+1] is even, and if A[i][j] = 1, the sum C[i-1][j]+C[i][j-1]+C[i+1][j]+C[i][j+1] is odd.

Considering all equations modulo two, C[i-1][j]+C[i][j-1]+C[i+1][j]+C[i][j+1] = A[i][j] whenever A[i][j] \neq -1. This gives us a system of modular linear equations in N*M variables (one equation for every position in the grid for which A[i][j] \neq -1).
We have now reduced the problem to solving a system of linear equations, for which we resort to Gaussian elimination. We want to find the number of variables which can be assigned any value such that the given system of modular equations still holds. (This is called the number of independent variables of the system.)

In the coefficient matrix, each row represents a different equation and each column represents the coefficient of a variable in each equation. The last column represents the constant term.

Gaussian elimination is an algorithm to solve linear equations represented by coefficients in the matrix, using row operations: row swapping, multiplying a row by a constant, or adding a multiple of one row to another row. A brief explanation of Gaussian elimination can be found in the box. You can read about it in detail here.

Click to view

We handle variables one by one and try to eliminate the current variable from all the remaining equations by the third row operation, that is, adding a multiple of one equation (the current equation, assuming it has a non-zero coefficient for the current variable) to another equation. Here we choose the multiple which turns the coefficient of the current variable in the other equation to zero. If all remaining coefficients of the current variable are zero, we have found one independent variable, so we skip on to the next variable. The number of independent variables is N*M less the number of dependent variables. The number of dependent variables is the number of columns which have at least one non-zero coefficient after considering all previous variables.

In the current problem, we need to solve the equations modulo 2. We can still apply the same algorithm, performing all arithmetic modulo 2.

Now that we have found the number of independent variables, say x, we can assign them either of the two values and the equations still hold. So the number of treasure layouts is 2^x. This is the final answer we require here.
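The counting procedure described above can be sketched in a few lines of Python using bitmask rows over GF(2). This is an illustration of the idea, not the setter's code:

```python
def count_solutions_gf2(rows, nvars, mod=10**9 + 7):
    """rows: list of [mask, rhs] pairs, one equation per entry, over GF(2).
    Returns the number of solutions modulo mod (0 if inconsistent)."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(nvars):
        piv = next((i for i in range(rank, len(rows))
                    if rows[i][0] >> col & 1), None)
        if piv is None:
            continue                      # free (independent) variable
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][0] >> col & 1:
                rows[i][0] ^= rows[rank][0]   # row addition mod 2 is XOR
                rows[i][1] ^= rows[rank][1]
        rank += 1
    if any(m == 0 and r == 1 for m, r in rows):
        return 0                          # inconsistent system
    return pow(2, nvars - rank, mod)
```

For instance, the system x1+x2=1, x2+x3=0 over three variables has rank 2, so one free variable remains and `count_solutions_gf2([[0b011, 1], [0b110, 0]], 3)` returns 2.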
Also, if we are working modulo 2, this can be done much more efficiently using bitset operations, by noting that adding two rows modulo 2 is just taking their XOR, as explained in the blog.

About the time complexity: the number of variables is N*M and the number of equations can be up to N*M (if all entries of A are either zero or one). So the time complexity comes out to be of order ((N*M)^3), with the constant factor being 1/32 or 1/64 depending upon the implementation of bitsets used.

Time Complexity

Time complexity is O((N*M)^3) with a constant factor of 1/32 or 1/64 depending upon implementation.

AUTHOR’S AND TESTER’S SOLUTIONS:

Setter’s solution

Click to view

#include <bits/stdc++.h>
using namespace std;
#define IOS ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
#define endl "\n"
#define int long long
const int N=35;
const int MOD=1e9+7;
struct Equation {
    static const int blockSize = 63;
    int n, blockCnt;
    vector<long long> coefficients;
    long long output;
    Equation() {}
    Equation(int _n) {
        n=_n;
        blockCnt = (n - 1) / blockSize + 1;
        coefficients = vector<long long> (blockCnt);
    }
    void setIdx(int idx) {
        int blockIdx = idx / blockSize;
        int curIdx = idx % blockSize;
        coefficients[blockIdx] += (1LL << curIdx);
    }
    bool hasIdx(int idx) {
        int blockIdx = idx / blockSize;
        int curIdx = idx % blockSize;
        return ((coefficients[blockIdx] >> curIdx & 1LL) == 1LL);
    }
    void doxor(Equation &e) {
        for(int i=0;i<coefficients.size();i++) coefficients[i]^=e.coefficients[i];
        output ^= e.output;
    }
    bool isInconsistent() {
        return (output == 1);
    }
};
int n, m, sz=0;
int a[N][N];
int unknown;
bool hasSolution(vector<Equation> &eq) {
    int curIdx = 0;
    int selected = 0;
    for(int i=0;i<n*m && curIdx<eq.size();i++) {
        int swapIdx = -1;
        for(int j=curIdx;j<eq.size();j++) {
            if(eq[j].hasIdx(i)) {
                swapIdx = j;
                break;
            }
        }
        if(swapIdx == -1) continue;
        selected++;
        swap(eq[curIdx], eq[swapIdx]);
        for(int j=curIdx+1;j<eq.size();j++) {
            if(eq[j].hasIdx(i)) eq[j].doxor(eq[curIdx]);
        }
        curIdx++;
    }
    unknown = n * m - selected;
    bool ans = true;
    for(int j=curIdx;j<eq.size();j++) {
        if(eq[j].isInconsistent()) ans = false;
    }
    return ans;
}
int pow(int a, int b, int m) {
    int ans=1;
    while(b) {
        if(b&1) ans=(ans*a)%m;
        b/=2;
        a=(a*a)%m;
    }
    return ans;
}
int getInd(int x, int y) {
    return m*x + y;
}
int32_t main() {
    IOS;
    int t;
    cin>>t;
    while(t--) {
        sz=0;
        cin>>n>>m;
        for(int i=0;i<n;i++) {
            for(int j=0;j<m;j++) {
                cin>>a[i][j];
                sz+=(a[i][j]!=-1);
            }
        }
        vector<Equation> eq(sz);
        int idx=0;
        for(int i=0;i<n;i++) {
            for(int j=0;j<m;j++) {
                if(a[i][j]==-1) continue;
                eq[idx] = Equation(n * m);
                if(i - 1 >= 0) eq[idx].setIdx(getInd(i-1, j));
                if(i + 1 < n) eq[idx].setIdx(getInd(i+1, j));
                if(j - 1 >= 0) eq[idx].setIdx(getInd(i, j-1));
                if(j + 1 < m) eq[idx].setIdx(getInd(i, j+1));
                eq[idx].output = a[i][j];
                idx++;
            }
        }
        if(hasSolution(eq)) {
            int ans=pow(2LL, unknown, MOD);
            cout<<ans<<endl;
        }
        else cout<<0<<endl;
    }
    return 0;
}

Tester’s solution

Click to view

#include <iostream>
#include <string.h>
#include <stdio.h>
#include <bitset>
using namespace std;
typedef long long llong;
const llong MOD = 1000000007LL;
int t;
int n,m;
bitset<901> equations[901];
int eL = 0;
int id[32][32];
int countChoices() {
    int r = 1;
    int i,j;
    for (i=1;i<=n*m;i++) {
        int firstOne = -1;
        for (j=r;j<=eL;j++) {
            if (equations[j][i] == 1) {
                firstOne = j;
                break;
            }
        }
        if (firstOne == -1) continue;
        if (firstOne != r) {
            equations[r] = equations[r] ^ equations[firstOne];
        }
        for (j=r+1;j<=eL;j++) {
            if (equations[j][i] == 1) {
                equations[j] = equations[j] ^ equations[r];
            }
        }
        r++;
    }
    for (i=r;i<=eL;i++) {
        if (equations[i][0] == 1) return -1;
    }
    return n * m - (r - 1);
}
int main() {
    int i,j;
    int test;
    scanf("%d",&t);
    for (test=1;test<=t;test++) {
        llong ans = 1;
        eL = 0;
        scanf("%d %d",&n,&m);
        int ctr = 0;
        for (i=1;i<=n;i++) {
            for (j=1;j<=m;j++) {
                ctr++;
                id[i][j] = ctr;
                equations[ctr].reset();
            }
        }
        for (i=1;i<=n;i++) {
            for (j=1;j<=m;j++) {
                int a;
                scanf("%d",&a);
                if (a != -1) {
                    eL++;
                    equations[eL].set(0, a);
                    if (i > 1) equations[eL].set(id[i-1][j]);
                    if (i < n) equations[eL].set(id[i+1][j]);
                    if (j > 1) equations[eL].set(id[i][j-1]);
                    if (j < m)
                        equations[eL].set(id[i][j+1]);
                }
            }
        }
        int choices = countChoices();
        if (choices == -1) ans = 0;
        else {
            for (i=1;i<=choices;i++) {
                ans *= 2LL;
                ans %= MOD;
            }
        }
        printf("%lld\n",ans);
    }
    return 0;
}

Editorialist’s solution

Click to view

import java.util.*;
import java.io.*;
import java.text.*;
//Solution Credits: Taranpreet Singh
public class Main{
    //SOLUTION BEGIN
    void pre() throws Exception{}
    int[][] D = new int[][]{{-1,0},{1,0},{0,-1},{0,1}};
    void solve(int TC) throws Exception{
        int n = ni(), m = ni();
        int[][] a = new int[n][m];int c = 0;
        for(int i = 0; i< n; i++){
            for(int j = 0; j< m; j++){
                a[i][j] = ni();
                if(a[i][j]!=-1)c++;
            }
        }
        Row[] r = new Row[c];c=0;
        for(int i = 0; i< n; i++)
            for(int j = 0; j< m; j++){
                if(a[i][j]==-1)continue;
                r[c] = new Row(n*m);
                for(int[] d:D){
                    int ii = i+d[0], jj = j+d[1];
                    if(ii<0||ii>=n || jj<0||jj>=m)continue;
                    r[c].set(ii*m + jj);
                }
                r[c++].output=a[i][j];
            }
        int ans = check(r,n*m);
        if(ans==-1)pn(0);
        else pn(pow(2,ans));
    }
    long pow(long a, long p){
        long o = 1;
        while(p>0){
            if((p&1)==1)o=(o*a)%mod;
            a=(a*a)%mod;
            p>>=1;
        }
        return o;
    }
    class Row{
        long[] coef;
        int sz = 60,cnt, n;
        int output = 0;
        public Row(int n){
            this.n=n;
            cnt = (n-1)/sz+1;
            coef = new long[cnt];
        }
        void set(int ind){
            coef[ind/sz] |= 1l<<(ind%sz);
        }
        boolean has(int ind){
            return ((coef[ind/sz]>>(ind%sz))&1)==1;
        }
        void xor(Row r){
            for(int i =0; i< cnt; i++)coef[i]^=r.coef[i];
            output^=r.output;
        }
        public Row clone(){
            Row r = new Row(n);
            r.xor(this);
            return r;
        }
    }
    int check(Row[] a, int var) throws Exception{
        int cur = 0, x = 0;
        for(int i= 0; i<var && cur<a.length; i++){
            int swapInd = -1;
            for(int j = cur; j<a.length; j++)if(a[j].has(i)){
                swapInd = j;break;
            }
            if(swapInd==-1)continue;
            x++;
            Row tmp = a[cur].clone();
            a[cur] = a[swapInd];
            a[swapInd] = tmp;
            for(int j = cur+1; j<a.length;j++)if(a[j].has(i))a[j].xor(a[cur]);
            cur++;
        }
        while(cur<a.length)if(a[cur++].output!=0)return -1;
        return var-x;
    }
    //SOLUTION END
    void hold(boolean b)throws Exception{if(!b)throw new Exception("Hold right there, Sparky!");}
    long mod = (long)1e9+7, IINF
    = (long)1e18;
    final int INF = (int)1e9, MX = (int)2e3+1;
    DecimalFormat df = new DecimalFormat("0.00000000000");
    double PI = 3.1415926535897932384626433832792884197169399375105820974944, eps = 1e-8;
    static boolean multipleTC = true, memory = false;
    FastReader in;PrintWriter out;
    void run() throws Exception{
        in = new FastReader();
        out = new PrintWriter(System.out);
        int T = (multipleTC)?ni():1;
        pre();for(int t = 1; t<= T; t++)solve(t);
        out.flush();
        out.close();
    }
    public static void main(String[] args) throws Exception{
        if(memory)new Thread(null, new Runnable() {public void run(){try{new Main().run();}catch(Exception e){e.printStackTrace();}}}, "1", 1 << 28).start();
        else new Main().run();
    }
    long gcd(long a, long b){return (b==0)?a:gcd(b,a%b);}
    int gcd(int a, int b){return (b==0)?a:gcd(b,a%b);}
    int bit(long n){return (n==0)?0:(1+bit(n&(n-1)));}
    void p(Object o){out.print(o);}
    void pn(Object o){out.println(o);}
    void pni(Object o){out.println(o);out.flush();}
    String n()throws Exception{return in.next();}
    String nln()throws Exception{return in.nextLine();}
    int ni()throws Exception{return Integer.parseInt(in.next());}
    long nl()throws Exception{return Long.parseLong(in.next());}
    double nd()throws Exception{return Double.parseDouble(in.next());}
    class FastReader{
        BufferedReader br;
        StringTokenizer st;
        public FastReader(){
            br = new BufferedReader(new InputStreamReader(System.in));
        }
        String next() throws Exception{
            while (st == null || !st.hasMoreElements()){
                try{
                    st = new StringTokenizer(br.readLine());
                }catch (IOException e){
                    throw new Exception(e.toString());
                }
            }
            return st.nextToken();
        }
        String nextLine() throws Exception{
            String str = "";
            try{
                str = br.readLine();
            }catch (IOException e){
                throw new Exception(e.toString());
            }
            return str;
        }
    }
}

Feel free to share your approach, if it differs. Suggestions are always welcome.

IMHO the described time complexity exceeds the allotted time limit for the problem: with N=30, M=30, and the constant factor of 1/64 we have: (30*30)^3/64 \approx 10^7; however, a test case may have up to T=100 tests, therefore the total time complexity is a bit above 10^9, and the time limit for the problem is 3 seconds. How come?
Does it mean that the given test cases were made intentionally or unintentionally weak in order to fit within the time limits?

There are a couple of factors that make the constant smaller, while still being O((N*M)^3) overall:

1. The set of equations can be split into two independent sets: one for A[i][j] with i+j even, and another with i+j odd. It makes the number of equations in each set \frac{N*M}{2} and gives an additional constant factor of 1/4.
2. The coefficient matrix is very sparse: initially there are at most 4 non-zero coefficients in each equation, and each coefficient is non-zero in at most 4 equations. This makes the elimination process faster. At each elimination loop many rows of the coefficient matrix will have the corresponding coefficient 0 and thus do not require iterating over that row.

Yeah, I also have the same doubt.
https://socratic.org/questions/how-to-use-the-discriminant-to-find-out-what-type-of-solutions-the-equation-has--23
# How to use the discriminant to find out what type of solutions the equation has for x^2+3x+8=0?

May 26, 2015

This equation has no real solutions: its discriminant is $\Delta = b^2 - 4ac = 3^2 - 4 \cdot 1 \cdot 8 = -23 < 0$.

The rule is that:

If $\Delta < 0$ then there are no real solutions (there are 2 complex solutions).

If $\Delta = 0$ then there is one real solution, ${x}_{0} = - \frac{b}{2 a}$.

If $\Delta > 0$ then there are 2 real solutions, which you can calculate using ${x}_{1} = \frac{- b - \sqrt{\Delta}}{2 a}$ and ${x}_{2} = \frac{- b + \sqrt{\Delta}}{2 a}$.
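The rule above is easy to apply mechanically; here is a minimal sketch (the helper name `classify` is just illustrative):

```python
def classify(a, b, c):
    """Classify the solutions of a*x^2 + b*x + c = 0 by its discriminant."""
    delta = b * b - 4 * a * c
    if delta < 0:
        return "no real solutions (2 complex solutions)"
    if delta == 0:
        return "one real solution"
    return "two real solutions"

# x^2 + 3x + 8: delta = 3^2 - 4*1*8 = -23 < 0
print(classify(1, 3, 8))
```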
https://artofproblemsolving.com/wiki/index.php?title=2006_AMC_12A_Problems/Problem_22&diff=12870&oldid=12858
# 2006 AMC 12A Problems/Problem 22

## Problem

A circle of radius $r$ is concentric with and outside a regular hexagon of side length $2$. The probability that three entire sides of the hexagon are visible from a randomly chosen point on the circle is $1/2$. What is $r$?

$\mathrm{(A) \ } 2\sqrt{2}+2\sqrt{3} \qquad \mathrm{(B) \ } 3\sqrt{3}+\sqrt{2} \qquad \mathrm{(C) \ } 2\sqrt{6}+\sqrt{3} \qquad \mathrm{(D) \ } 3\sqrt{2}+\sqrt{6} \qquad \mathrm{(E) \ } 6\sqrt{2}-\sqrt{3}$

## Solution

Project any two non-adjacent and non-opposite sides of the hexagon onto the circle; the arc between the two projected points is the set of locations from which all three sides between them can be fully viewed. Since there are six such pairs of sides, there are six arcs. For the probability of choosing such a point to be $1/2$, the total arc measure must be $\frac{1}{2} \cdot 360^\circ = 180^\circ$; dividing by six, each arc must measure $30^\circ$.

Call the center $O$, and the two endpoints of one arc $A$ and $B$. The central angle $AOB$ equals its arc, so it is also $30^\circ$. Extend a line from the center through the midpoint of $AB$, and project one of the sides of the hexagon through $A$ (or $B$): an isosceles triangle $AXO$ is formed, where $X$ is the intersection of that extension with the extended side of the hexagon. It is isosceles because $\angle AOX = 15^\circ$ and $\angle OAX = \angle OAB - 60^\circ = \frac{180^\circ - 30^\circ}{2} - 60^\circ = 15^\circ$. Since $XA = XO$, the perpendicular from $X$ bisects $OA$ and yields a right triangle in which

$\cos 15^\circ = \frac{r/2}{2\sqrt{3}} = \frac{r}{4\sqrt{3}}$

Since $\cos 15^\circ = \cos (45^\circ - 30^\circ) = \frac{\sqrt{6} + \sqrt{2}}{4}$, we get:

$r = \frac{\sqrt{6} + \sqrt{2}}{4} \cdot 4\sqrt{3} = 3\sqrt{2} + \sqrt{6} \Rightarrow \mathrm{D}$
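As a quick numerical sanity check (not part of the original solution), the closing identity $4\sqrt{3}\cos 15^\circ = 3\sqrt{2} + \sqrt{6}$ can be verified directly:

```python
import math

# Verify that r = 4*sqrt(3)*cos(15 degrees) equals answer choice (D).
r = 4 * math.sqrt(3) * math.cos(math.radians(15))
choice_d = 3 * math.sqrt(2) + math.sqrt(6)

print(r, choice_d)  # both are approximately 6.6921
```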
https://www.tradewithscience.com/sharpe-and-sortino-ratio/
# SHARPE RATIO AND SORTINO RATIO - THE PROFESSIONAL METRICS USED BY HEDGE FUNDS

### Sharpe ratio and Sortino ratio

Today we will take a closer look at professional metrics that measure a strategy's performance: the Sharpe ratio and the Sortino ratio. Both are risk-adjusted evaluations of return on investment.

##### We will cover:

• Definition of the Sharpe ratio and its formula
• Benefits of the Sharpe ratio
• Definition of the Sortino ratio and its formula
• Target Downside Deviation (TDD) and its formula
• Standard Deviation formula
• The difference between TDD and SD

### Sharpe Ratio

The Sharpe ratio is one of the most prominent metrics, taking into account both risk and profitability, and it is the investment performance indicator most commonly used in the financial industry. Its fundamental goal is to quantify and compare the quality of individual investments.

The Sharpe ratio has one primary disadvantage: it penalizes strategies whose performance is volatile on the upside (returns above the required risk-free return). But volatility on the return side can simply be a natural characteristic of a trading strategy, so it is very questionable whether it should be penalized. Therefore, we will introduce an alternative performance indicator that takes this into account: the Sortino ratio. But first, let's look at the Sharpe ratio in more detail, so that we can later compare the two performance indicators.

The Sharpe ratio is calculated as the return of a trading strategy (or an investment, an entire mutual fund, or a hedge fund) minus the risk-free return, divided by the risk, i.e., the volatility (standard deviation) of the return over the risk-free return. The question is: how do we determine the "risk-free return"? You don't have to be scared.
The risk-free yield is simply the currently achievable interest rate.

##### FORMULA FOR SHARPE RATIO

$\mathrm{Sharpe~Ratio}(x) = \frac{r_x - R_f}{\mathrm{std}(x)}$

where $x$ is the given investment (trading strategy), $r_{x}$ is the average of all returns, $R_{f}$ is the risk-free return (an achievable bond interest rate; most commonly the interest rate of 3-month US Treasury Bills is taken as the risk-free return), and $\mathrm{std}(x)$ is the standard deviation of returns.

It logically follows that the larger the value of this indicator, the higher the return the investment achieved per unit of risk.

#### BENEFITS OF SHARPE RATIO

Indeed, the Sharpe ratio is the most commonly used indicator in the financial industry for evaluating the level of profit of a given investment against the risk taken. It is well known and is used by many mutual and hedge funds as the fundamental performance metric. Thanks to the Sharpe ratio, we can compare, for example:

• individual mutual funds,
• hedge funds.

However, this performance indicator also has significant disadvantages. The Sharpe ratio cannot distinguish between the desired "upside" volatility (the standard deviation of the desired returns) and the unwanted volatility (the standard deviation of returns and losses below the risk-free return $R_f$). Higher and excessive returns may increase the denominator (the standard deviation of returns) more than the numerator, and thus decrease the overall Sharpe ratio. Of course, investors like positive yield volatility; however, they are much more annoyed if the investment has large losses and unwanted volatility.

For a specific example, typical hedge funds focused on long-term trend-following strategies have a Sharpe ratio between 0.5 and 0.9. Contrary, hedge funds using classic convergence strategies (option writing) can have a Sharpe ratio of about 3.
However, you can expect a devastating drawdown at any time with these types of strategies. The Sharpe ratio assumes a normal distribution of returns, and that is its weakness: it tends to give too positive a rating to trading strategies with a negative skew — strategies that produce many low, consistent returns (i.e., a very high percentage of profitable trades) but may suffer a devastating drawdown within a single trade.

In the picture below, you can see an illustrative comparison of two trading approaches:

• Positive skew: trend-following strategies (many small losses and a few very profitable trades that cover the small losses and add extra profit; classically, strategies with about 30-40% profitable trades and a risk-reward ratio between 2.5 and 3.5).
• Negative skew: option strategies (many small profits and a few very drastic losing trades; classically, systems with around 90% profitable trades and a risk-reward ratio between 0 and 1).
• The previously mentioned problem also applies: when the profit occurred in only one year and the equity was flat in the other years, the Sharpe ratio is not able to detect that and still gives a very positive value.

### SORTINO RATIO

Contrary to the Sharpe ratio, only those returns lower than an investor-defined target (the so-called Desired Target Return) are considered risky. To clearly illustrate the difference between the Sharpe ratio and the Sortino ratio in the context of such a defined target, see the picture below.

Large fluctuations in returns are a sign of volatility and risk. But if an investment or a trading strategy, by its very nature, has larger profitable returns and smaller losing moves, it should not be penalized. A classic example of such strategies is trend-following systems.

##### FORMULA FOR SORTINO RATIO

$\mathrm{Sortino~Ratio}(x) = \frac{r_x - T}{TDD}$

where $r_x$ is the average of the returns over the yield periods,
$T$ is the desired target return of the strategy (or the minimum required return), and $TDD$ is the Target Downside Deviation.

##### Target Downside Deviation (TDD) and its formula

The Target Downside Deviation is defined as the square root of the average of the squared deviations (a root mean square) of realized returns that fall below the target return, where every return above the target counts as 0. Mathematically, we calculate it as:

$TDD = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\min(0,\, X_{i}-T)^{2}}$

where $X_{i}$ is the $i$-th return, $N$ is the total number of returns, and $T$ is the target return.

You can see that the TDD calculation is very similar to the standard deviation (SD) used to calculate the Sharpe ratio.

##### Standard Deviation formula

$SD = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(X_{i}-u)^{2}}$

where $X_{i}$ is the $i$-th return, $N$ is the total number of returns, and $u$ is the average of all $X_{i}$.

##### The difference between TDD and SD

The differences between TDD and the standard deviation are:

1. In TDD, the deviations of $X_{i}$ are measured from the target return $T$. In SD, the deviations are measured from the average of all $X_{i}$.
2. In TDD, every $X_{i}$ above the target return contributes 0, but it is still counted in $N$. The standard deviation calculation has no $\min()$ function.

To sum it up: the Sharpe ratio cannot distinguish between desired "positive" volatility (the standard deviation of desired yields) and negative volatility (the standard deviation of losses below the risk-free yield $R_f$).
Higher and excessive returns may increase the denominator (the standard deviation of returns) more than the numerator, and thus lower the overall Sharpe ratio. However, investors like the positive volatility of returns. The solution to this issue is the Sortino ratio: due to the nature of its calculation, it does not penalize strategies for the variability of the desired positive returns. It only penalizes strategies with a high negative variability of insufficient returns and losses.

So what do investors consider a fair value of the Sharpe and Sortino ratios, signalling a reasonable expected return for an acceptably low amount of risk? Usually, any Sharpe or Sortino ratio greater than 1.0 is valued as a good investment, higher than 2.0 as outstanding, and 3.0 or higher as excellent. A ratio under 1.0 is considered insufficient. Every time you get a very high value of these ratios, be careful: the risk of an overfitted strategy and other biases is very high.

If you are looking for practical examples of calculating the Sharpe and the Sortino ratio on a concrete strategy, you can find them in our FREE EBOOK.
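The formulas above can be sketched in a few lines (a toy example with made-up monthly returns; the function names and the sample series are illustrative, the target and risk-free rates default to 0, and annualization is omitted for clarity):

```python
import math

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of returns."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((x - mean) ** 2 for x in returns) / n
    return (mean - risk_free) / math.sqrt(variance)

def sortino_ratio(returns, target=0.0):
    """Mean excess return divided by the Target Downside Deviation:
    only returns below the target contribute to the risk term."""
    n = len(returns)
    mean = sum(returns) / n
    tdd = math.sqrt(sum(min(0.0, x - target) ** 2 for x in returns) / n)
    return (mean - target) / tdd

returns = [0.04, -0.01, 0.03, 0.06, -0.02, 0.05]  # made-up monthly returns
# Sortino counts only the two losing months as risk, so here it
# comes out higher than Sharpe for the same series.
print(sharpe_ratio(returns), sortino_ratio(returns))
```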
https://ftp.aimsciences.org/article/doi/10.3934/jimo.2016066
# American Institute of Mathematical Sciences

July 2017, 13(3): 1149-1167. doi: 10.3934/jimo.2016066

## Robust real-time optimization for blending operation of alumina production

1 College of Electrical and Information Engineering, Hunan University of Technology, Zhuzhou, Hunan 412007, China
2 Department of Mathematics, Shanghai University, 99 Shangda Road, Baoshan, Shanghai, China
3 Changsha University of Science and Technology, Changsha, China
4 Department of Mathematics and Statistics, Curtin University, Kent Street, Bentley, WA 6102, Australia
5 School of Information Science and Engineering, Central South University, South Lushan Road, Yuelu, Changsha, China

* Corresponding author: Changjun Yu

Received January 2016. Published October 2016.

The blending operation is a key process in alumina production. Real-time optimization (RTO) to find an optimal raw-material proportioning is crucially important for achieving the desired quality of the product. However, uncertainty is unavoidable in a real process, which makes real-time decision-making difficult. This paper presents a novel robust real-time optimization (RRTO) method for the alumina blending operation that requires no prior knowledge of the uncertainties. The robust solution obtained is applied to the real plant and the two-stage operation is repeated. Compared with the previous intelligent optimization (IRTO) method, the proposed two-stage optimization method better addresses the uncertain nature of the real plant, and its computational cost is much lower. The results obtained from practical industrial experiments show that the proposed optimization method can guarantee that the desired product quality is achieved in the presence of uncertainty in the plant behavior and in the qualities of the raw materials.
This outcome suggests that the proposed two-stage optimization method is a practically significant approach for the control of the alumina blending operation.

Citation: Lingshuang Kong, Changjun Yu, Kok Lay Teo, Chunhua Yang. Robust real-time optimization for blending operation of alumina production. Journal of Industrial & Management Optimization, 2017, 13 (3) : 1149-1167. doi: 10.3934/jimo.2016066

Figures:

• The real-time operational control of alumina blending process
• Three-time re-mixing operation
• The novel robust real-time optimization for alumina blending operation
• The variation of quality of recycled material
• Comparison of slurry quality indices. Dotted lines - the upper and lower bounds of quality indices and their target values; solid line - quality indices of slurry produced by results of IRTO and RRTO

Target quality specification of slurry:

| index | $r_{1}^{\star}$ | $\epsilon_{1}$ | $r_{2}^{\star}$ | $\epsilon_{2}$ | $r_{3}^{\star}$ | $\epsilon_{3}$ |
| --- | --- | --- | --- | --- | --- | --- |
| specification | 0.96 | 0.01 | 2.193 | 0.03 | 4.66 | 0.03 |

The nominal quality of bauxites and auxiliary materials:

|  | CaO(%) | Na$_{2}$O(%) | SiO$_{2}$(%) | Fe$_{2}$O$_{3}$(%) | Al$_{2}$O$_{3}$(%) |
| --- | --- | --- | --- | --- | --- |
| Bauxite 1 | 2.24 | 0.50 | 7.08 | 7.52 | 67.2 |
| Bauxite 2 | 3.20 | 0.42 | 9.48 | 8.80 | 63.4 |
| Bauxite 3 | 2.80 | 0.40 | 12.73 | 7.27 | 61.4 |
| Bauxite 4 | 3.00 | 0.46 | 8.57 | 23.4 | 52.0 |
| Limestone | 95.3 | 0.10 | 4.55 | 0.44 | 1.50 |
| Anthracite | 0 | 0 | 7.14 | 0.89 | 4.93 |
| Alkali | 0 | 98 | 0 | 0 | 0 |

Comparison of $SP$ for RRTO and IRTO:

| Method | RRTO | IRTO |
| --- | --- | --- |
| $SP$ | 98 % | 82 % |

Comparison of computational time for RRTO and IRTO:

| Method | RRTO | IRTO |
| --- | --- | --- |
| Time(s) | 9.71 | 99.78 |
https://fsaraceno.wordpress.com/tag/potential-growth/
### Archive: Posts Tagged ‘potential growth’

## What Went Wrong with Jean-Baptiste

December 2, 2016

The news of the day is that François Hollande will not seek reelection in May 2017. This is rather big news, even if it was all too logical given his approval ratings. But what went wrong with Hollande’s (almost) five years as President? Well, I believe the answer is in a post I wrote back in 2014, Jean-Baptiste Hollande. There I wrote that the sharp turn towards supply-side measures (coupled with austerity) to boost growth was doomed to failure, and that firms themselves showed, survey after survey, that the obstacles they faced came from insufficient demand and not from the renowned French “rigidities” or from the tax burden. I was not alone, of course, in calling this a huge mistake; many others made the same point. Boosting supply during an aggregate demand crisis is useless, it is as simple as that. Allow me to quote the end of my post:

> Does this mean that all is well in France? Of course not. The burden on French firms, and in particular the tax wedge, is a problem for their competitiveness. Finding ways to reduce it, in principle, is a good thing. The problem is the sequencing and the priorities. French firms seem to agree with me that the top priority today is to restart demand, and that doing this “will create its own supply”. Otherwise, more competitive French firms in a context of stagnating aggregate demand will only be able to export. An adoption of the German model ten years late. I already said a few times that sequencing in reforms is almost as important as the type of reforms implemented. I am sure Hollande could do better than this…

It turns out that we were right. A Policy Brief (in French) published by OFCE last September puts all the numbers together (look at table 1): Hollande did implement what he promised, and gave French firms around €20bn (around 1% of French GDP) in tax breaks.
These were compensated, more than compensated actually, by the increase of the tax burden on households (€35bn). And as this tax increase, together with some reshuffling, was not accompanied by government expenditure, it logically led to a decrease of the deficit (still too slow according to the Commission; ça va sans dire!). But, my colleagues show, this also led to a shortfall of demand and of growth, a rather important one: they estimate the negative impact of public finances on growth at almost a point of GDP per year since 2012. Is this really surprising? Supply-side measures accompanied by demand compression, in a context of already insufficient demand, led to sluggish growth and stagnating employment (it is the short side of the market, baby!). And to a 4% approval rate for Jean-Baptiste Hollande.

OFCE happens to have published, just yesterday, a report on public investment in which we join the herd of those pleading for increased public investment in Europe, and in particular in France. Among other things, we estimate that a public investment push of 1% of GDP would have a positive impact on French growth and would create around 200,000 jobs (it is long and it is in French, so let me help you: go look at page 72). Had it been done in 2014 (or earlier) instead of putting the scarce resources available into tax reductions, things would be very different today, and M. Hollande yesterday would probably have announced his bid for a second mandate. In a sentence: we don’t need to look too far to understand what went wrong.

Two more remarks. First, we now have mounting evidence of what we could already expect in 2009 based on common sense: potential growth is not independent of current economic conditions. The past and current failure to aggressively tackle the shortage of demand that has been plaguing the French – and European – economy hampers its capacity to grow in the long run.
The mismanagement of the crisis is condemning us to a state of semi-permanent sluggish growth that will keep breeding demagogues of all sorts. The European elites do not seem to have fully grasped the danger. Second, France is not the only large eurozone country that has taken the path of supply-side measures to pull the economy out of a demand-driven slump. The failure of the Italian Jobs Act to restart employment growth and investment can be traced to the very same bad diagnosis that led to Hollande’s failure. Hollande will be gone. Are those who stay, and those who will follow, going to change course?

Categories: EMU Crisis, France

## Pushing on a String

June 3, 2016

Readers of this blog know that I have been skeptical of the ECB quantitative easing program. I said many times that the eurozone economy is in a liquidity trap, and that making credit cheaper and more abundant would not be a game changer. Better than nothing (especially for its impact on the exchange rate, the untold objective of the ECB), but certainly not a game changer. The reason is quite obvious: no matter how cheap credit is, if there is no demand for it from consumers and firms, the huge liquidity injections of the ECB will end up inflating some asset bubble. Trying to boost economic activity (and inflation) with QE is tantamount to pushing on a string. I also said many times that without robust expansionary fiscal policy, the recovery will at best be modest. Two very recent ECB surveys provide strong evidence in favour of the liquidity trap narrative. The first is the latest (April 2016) Eurozone Bank Lending Survey. Here is a quote from the press release:

> The net easing of banks’ overall terms and conditions on new loans continued for loans to enterprises and intensified for housing loans and consumer credit, mainly driven by a further narrowing of loan margins.

So, nothing surprising here.
QE and negative rates make it so expensive for financial institutions to hold liquidity that credit conditions keep easing. So why do we not see economic activity and inflation pick up? The answer is on the other side of the market: credit demand. And the Survey on the Access to Finance of Enterprises in the euro area, also published this week by the ECB, provides a clear and loud answer (from p. 10):

> “Finding customers” was the dominant concern for euro area SMEs in this survey period, with 27% of euro area SMEs mentioning this as their main problem, up from 25% in the previous survey round. “Access to finance” was considered the least important concern (10%, down from 11%), after “Regulation”, “Competition” and “Cost of production” (all 14%) and “Availability of skilled labour” (17%). Among SMEs, access to finance was a more important problem for micro enterprises (12%). For large enterprises, “Finding customers” (28%) was reported as the dominant concern, followed by “Availability of skilled labour” (18%) and “Competition” (17%). “Access to finance” was mentioned less frequently as an important problem for large firms (7%, unchanged from the previous round).

No need to comment, right? Just a final quick remark that, in my opinion, deserves to be developed further: finding skilled labour seems to be becoming harder in European countries. What if these were the first signs of a deterioration of our stock of “human capital” (horrible expression), after eight years of crisis that have reduced training, skill building, etc.? When, sooner or later, the crisis is really over, it will be worth keeping an eye on “Availability of skilled labour” for quite some time.

Tell me again that story about structural reforms enhancing potential growth?
## Convergence no More

April 14, 2016

As a complement to the latest post, here is a quite eloquent figure. I computed real GDP of the periphery (Spain-Ireland-Portugal-Greece) and of the core (Germany-Netherlands-Austria-Finland), and then I took the difference of yearly growth rates in three subperiods that correspond to the run-up to the single currency, to the euro "normal times", and to the crisis. Let's focus on the red bar: until 2008 the periphery on average grew more than 1% faster than the core, a difference that was even larger during the debt (private and public) frenzy of the 2000s. Was that a problem? No. Convergence, or catch-up, is a standard feature of growth. Usually (but remember, exceptions are the rule in economics), poorer economies tend to grow faster because there are more opportunities for high productivity growth. So it is not inconceivable that growth in the periphery was consistently higher than in the core, especially in a phase of increasing trade and financial integration. We all know (now; and some knew even then) that this was unhealthy because imbalances were building up, which eventually led to the crisis. But it is important to realize that the problem was the imbalances, not necessarily faster growth. In fact, if we look at the yellow bar depicting the difference in potential growth, it shows the same pattern (I know, the concept of potential growth is unreliable. But hey, if it underlies fiscal rules, I have the right to graph it, right?). During the crisis the periphery suffered more than the core, and its potential output fell more. This is magnified by the mechanical effect of current growth that "pulls" potential output. But it is undeniable that the productive capacity of the periphery (capital, skills) has been dented by the crisis, much more so than in the core.
Thus, not only are we collectively more fragile, as I noted last Monday; on top of that, the next shock will hurt the periphery more than the core, further deepening the divide. The EMU in its current design lacks mechanisms capable of neutralizing pressure towards divergence. It was believed when the Maastricht Treaty was signed that markets alone would ensure convergence. It turns out (unsurprisingly, if you ask me) that markets not only did not ensure convergence, but were actually a powerful force of divergence, first contributing to the buildup of imbalances, then fleeing the periphery when trouble started. Markets do not act as shock absorbers. It is as simple as that, really.

Categories: EMU Crisis

## Resilience? Not Yet

April 11, 2016

Last week the ECB published its Annual Report, which not surprisingly tells us that everything is fine. Quantitative easing is working just fine (this is why on March 10 the ECB took out the atomic bomb), confidence is resuming, and the recovery is under way. In other words, apparently, an official self-congratulatory EU document with little interest but for the data it collects. Except that in the foreword, president Mario Draghi used a sentence that has been noticed by commentators, obscuring, in the media and in social networks, the rest of the report. I quote the entire paragraph, but the important part is highlighted:

2016 will be a no less challenging year for the ECB. We face uncertainty about the outlook for the global economy. We face continued disinflationary forces. **And we face questions about the direction of Europe and its resilience to new shocks.** In that environment, our commitment to our mandate will continue to be an anchor of confidence for the people of Europe.

Why is that important? Because until now, a really optimistic and somewhat naive observer could have believed that, even amid terrible sufferings and widespread problems, Europe was walking the right path.
True, we have had a double-dip recession, while the rest of the world was recovering. True, the Eurozone is barely at its pre-crisis GDP level, and some members are well below it. True, the crisis has disrupted trust among EU countries and governments, and transformed "solidarity" into a bad word in the mouth of a handful of extremists. But, one could have believed, all of this was a necessary painful transition to a wonderful world of healed economies and shared prosperity: no gain without pain. And the naive observer was told, for 7 years, that pain was almost over, while growth was about to resume, "next year". Reforms were being implemented (too slowly, it goes without saying), and would soon bear fruit. Austerity's recessionary impact had maybe been underestimated, but it remained a necessary temporary adjustment. The result, the naive observer would believe, would eventually be that the Eurozone would grow out of the crisis stronger, more homogeneous, and more competitive. I had noticed a long time ago that the short-term pain was evolving into more pain, and more importantly, that the EMU was becoming more heterogeneous precisely along the dimension, competitiveness, that reforms were supposed to improve. I had also noticed that as a result the Eurozone would eventually emerge from the crisis weaker, not stronger. More rigorous analysis (e.g. here, and here) has recently shown that the current policies followed in Europe are hampering the long-term potential of the economy. Today, the ECB recognizes that "we face questions about the resilience [of Europe] to new shocks". Even if the subsequent pages call for more of the same, that simple sentence is an implicit and yet powerful recognition that more of the same is what is killing us. Seven years of treatment made us less resilient. Because, I would like to point out, we are less homogeneous than we were in 2007. A hard blow for the naive observer.
## Confusion in Brussels

October 17, 2014

I already noticed how the post-Jackson Hole Consensus is inconsistent with the continuing emphasis of European policy makers on supply side measures. In these difficult times, the lack of a coherent framework seems to have become the new norm of European policy making. The credit for spotting another serious inconsistency this time goes to the Italian government. In the draft budgetary plan submitted to the European Commission (which might be rejected, by the way), buried at page 12, one can find an interesting box on potential growth and structural deficit. It really should be read, because it is in my opinion disruptive. To summarize it, here is what it says:

1. A recession triggers a reduction of the potential growth rate (the maximum rate at which the economy can grow without overheating) because of hysteresis: unemployed workers lose skills and/or exit the labour market, and firms scrap productive processes and postpone investment. I would add to this that hysteresis is non-linear: the effect of a slowdown, for example on labour market participation, is much larger if it happens in the fifth year of the crisis than in the first one.
2. According to the Commission's own estimates, Italy's potential growth rate dropped from 1.4% on average in the 15 years prior to the crisis (very low even by European standards) to an average of -0.2% between 2008 and 2013. A very large drop indeed.
3. (Here it becomes interesting.) The box in the Italian plan argues that we have two possible cases:
   1. Either the extent of the drop is over-estimated, most probably as the result of the statistical techniques the Commission uses to estimate the potential. But, if potential growth is larger than estimated, then the output gap, the difference between actual and potential output, is also larger.
   2. As an alternative, the estimated drop is correct, but this means that in Italy there is a huge hysteresis effect.
A recession is not only, as we can see every day, costly in the short run; even more worryingly, it quickly disrupts the economic structure of the country, thus hampering its capacity to grow in the medium and long run. The box does not say it explicitly (it remains an official government document after all), but the conclusion is obvious: either way the Commission had it wrong. If case A is true, then the stagnation we observed in the past few years was not structural but cyclical. This means that the Italian deficit was mainly cyclical (due to the large output gap), and as such did (and does) not need to be curbed. The best way to reabsorb a cyclical deficit is to restart growth, through temporary support to aggregate demand. If case B is true, then insisting on fiscal consolidation since 2011 was borderline criminal. When a crisis risks quickly disrupting the long-run potential of the economy, it is a duty of the government to do whatever it takes to fight, in order to avoid that it becomes structural. In a sentence: with strong hysteresis effects, Keynesian countercyclical policies are crucial to sustain the economy both in the short and in the long run. With weaker, albeit still strong, hysteresis effects, a deviation from potential growth is cyclical, and as such it requires Keynesian countercyclical policies. Either way, fiscal consolidation was the wrong strategy. I am not a fan of the policies currently implemented by the Italian government. To be fair, I am not a fan of the policies implemented by any government in Europe. Too much emphasis on supply side measures, and excessive fear of markets (yes, I dare say so today, when the spreads take off again). But I think the Italian draft budget puts the finger where it hurts. The guys in Via XX Settembre did a pretty awesome job…

## ECB: One Size Fits None

March 31, 2014

Eurostat just released its flash estimate for inflation in the Eurozone: 0.5% headline and 0.8% core.
We now await comments from ECB officials, ahead of next Thursday's meeting, saying that everything is under control. Just this morning, Wolfgang Münchau in the Financial Times rightly said that EU central bankers should talk less and act more. Münchau also argues that quantitative easing is the only option. A bold one, I would add, in light of today's deflation data. Just a few months ago, in September 2013, Bruegel estimated the ECB interest rate to be broadly in line with Eurozone average macroeconomic conditions (though, interestingly, they also highlighted that it was unfit for most countries taken individually). In just a few months, things changed drastically. While unemployment has remained more or less constant since last July, inflation kept decelerating until today's very worrisome levels. I very quickly extended the Bruegel exercise to encompass the latest data (they stopped at July 2013). I computed the target rate as they do, as

$$Target = 1 + 1.5\,\pi_{core} - (u - \overline{u}).$$

(If you don't like the choice of parameters, go ask the Bruegel guys. I have no problem with these.) The computation gives the following: Using headline inflation, as the ECB often claims to be doing, would of course give even lower target rates. As official data on unemployment stop at January 2014, the two last points are computed with alternative hypotheses of unemployment: either at its January rate (12.6%) or at the average 2013 rate (12%). But these are just details… So, in addition to being unfit for individual countries, the ECB stance is now unfit for the Eurozone as a whole. And of course, a negative target rate can only mean, as Münchau forcefully argues, that the ECB needs to get its act together and put in place a credible and significant quantitative easing program. Two more remarks:

• A minor (back-of-the-envelope) remark is that given a core inflation level of 0.8%, the current ECB rate of 0.25% is compatible with an unemployment gap of 1.95%.
Meaning that the current ECB rate would be appropriate if natural/structural unemployment were 10.65% (for the calculation above I took the value of 9.1% from the OECD), or if current unemployment were 11.05%.

• The second remark, somewhat related but more important to my sense, is that it is hard to accept as "natural" an unemployment rate of 9-10%. If the target unemployment rate were at 6-7%, everything we read and discuss on the ECB's excessively restrictive stance would be significantly more appropriate. And if the problem is too low potential growth, well then let's find a way to increase it.

## Overheat to Raise Potential Growth?

March 19, 2014

Update, March 20th: Speaking of ideological biases concerning inflation, Paul Krugman nails it, as usual.

On today's Financial Times, Phillip Hildebrand gives yet another proof of unwarranted inflation terror. His argument is not new: in spite of the consensus on a weak recovery, the US economy may be close to its potential, so that further monetary stimulus would eventually be inflationary. He then deflects (?) the objection that decreasing unemployment reflects decreasing labour force participation rather than new employment, by suggesting that it is hard to know how many of the 13 million jobs missing are structural, i.e. not linked to the crisis. I think it is worth quoting him, because otherwise it would be hard to believe:

However, an increasingly vocal group of observers, including within the Fed, posits that more of the fall in the participation rate appears to have been structural than cyclical, and it was even predictable – the result of factors such as an ageing workforce and the effect of technology on jobs.

(The emphasis is mine.) Now look at this figure, quickly produced from FRED data:
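For what it's worth, the back-of-the-envelope Taylor-rule arithmetic in the ECB post above is easy to reproduce in a few lines of Python (a sketch; the 0.8% core inflation, 0.25% policy rate, 12.6% January 2014 unemployment, and 9.1% OECD natural rate are the figures quoted in the post, and the function name is mine):

```python
# Bruegel-style target rate from the post: Target = 1 + 1.5*pi_core - (u - u_bar)
def target_rate(pi_core, u, u_bar):
    return 1.0 + 1.5 * pi_core - (u - u_bar)

# With core inflation at 0.8%, a policy rate of 0.25% implies an
# unemployment gap of 1 + 1.5*0.8 - 0.25 = 1.95 points.
implied_gap = 1.0 + 1.5 * 0.8 - 0.25
print(round(implied_gap, 2))  # 1.95

# With January 2014 unemployment (12.6%) and the OECD natural rate (9.1%),
# the rule prescribes a negative policy rate:
print(round(target_rate(pi_core=0.8, u=12.6, u_bar=9.1), 2))  # -1.3
```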
http://hal.in2p3.fr/in2p3-00115488
# Measurement of the Branching Fractions of the Decays $\bar{B}^{0} \to \Lambda_{c}^{+} \bar{p}$ and $B^{-} \to \Lambda_{c}^{+} \bar{p} \pi^{-}$

Abstract: We present studies of two-body and three-body charmed baryonic B decays in a sample of 232 million $B\bar{B}$ pairs collected with the BABAR detector at the PEP-II $e^+e^-$ storage ring. The branching fractions of the decays $\bar{B}^{0} \to \Lambda_{c}^{+} \bar{p}$ and $B^{-} \to \Lambda_{c}^{+} \bar{p} \pi^{-}$ are measured to be $(2.15 \pm 0.36 \pm 0.13 \pm 0.56) \times 10^{-5}$ and $(3.53 \pm 0.18 \pm 0.31 \pm 0.92)\times10^{-4}$, respectively. The uncertainties quoted are statistical, systematic, and from the $\Lambda_{c}^{+} \to p K^- \pi^+$ branching fraction. We observe a baryon-antibaryon threshold enhancement in the $\Lambda_{c}^{+} \bar{p}$ invariant mass spectrum of the three-body mode and measure the ratio of the branching fractions to be ${\cal B}(B^{-} \to \Lambda_{c}^{+} \bar{p} \pi^{-})/{\cal B}(\bar{B}^{0} \to \Lambda_{c}^{+} \bar{p}) = 16.4 \pm 2.9 \pm 1.4$. These results are preliminary.

Document type: Conference papers. Contributor: Dominique Girod. Submitted on November 21, 2006; last modified September 16, 2020.

### Citation

B. Aubert, R. Barate, M. Bona, D. Boutigny, F. Couderc, et al. Measurement of the Branching Fractions of the Decays $\bar{B}^{0} \to \Lambda_{c}^{+} \bar{p}$ and $B^{-} \to \Lambda_{c}^{+} \bar{p} \pi^{-}$. XXXIII International Conference on High Energy Physics (ICHEP'06), Jul 2006, Moscow, Russia. ⟨in2p3-00115488⟩
https://dpaiton.github.io/publication/2016-03-06-sparse-encoding
# Sparse encoding of binocular images for depth inference

Published in IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), 2016

Abstract: Sparse coding models have been widely used to decompose monocular images into linear combinations of small numbers of basis vectors drawn from an overcomplete set. However, little work has examined sparse coding in the context of stereopsis. In this paper, we demonstrate that sparse coding facilitates better depth inference with sparse activations than comparable feed-forward networks of the same size. This is likely due to the noise and redundancy of feed-forward activations, whereas sparse coding utilizes lateral competition to selectively encode image features within a narrow band of depths.

Recommended citation: SY Lundquist, DM Paiton, PF Schultz and GT Kenyon, "Sparse encoding of binocular images for depth inference," IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), 2016, pp. 121-124, doi: 10.1109/SSIAI.2016.7459190.

```bibtex
@INPROCEEDINGS{lundquist2016sparse,
  author={Lundquist, Sheng Y. and Paiton, Dylan M. and Schultz, Peter F. and Kenyon, Garrett T.},
  booktitle={2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)},
  title={Sparse encoding of binocular images for depth inference},
  year={2016},
  pages={121-124},
  doi={10.1109/SSIAI.2016.7459190}
}
```
https://www.askmehelpdesk.com/finance-accounting/recompute-fixed-costs-variable-cost-bep-using-high-low-method-572458.html
Info is:

| Month | Meals Served | Total Costs ($) |
|-----------|--------------|-----------------|
| July | 3,500 | 20,500 |
| August | 4,000 | 22,600 |
| September | 4,200 | 23,350 |
| October | 4,600 | 24,500 |
| November | 4,700 | 25,000 |
| December | 4,900 | 26,000 |

I have gotten the fixed costs as $4.29: 26500-20500/4900-3500 = 6000/1400 = 4.29. I am not able to find the variable costs. 20500-3500*4.29 = 20500-15015 = 5485. 26000-4900*4.29 = 26000-21021 = 4979. THIS DOES NOT EQUAL SO I KNOW I AM WRONG, PLEASE HELP!
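A quick way to check the arithmetic is to script the high-low method. Two things stand out in the working above: December's total is $26,000 (the 26,500 looks like a slip), and the slope you computed is the *variable* cost per meal, while the intercept is the *fixed* cost. A sketch in Python:

```python
# High-low method: the slope between the highest- and lowest-activity
# months is the VARIABLE cost per meal; the intercept is the FIXED cost.
low_meals, low_cost = 3500, 20500      # July
high_meals, high_cost = 4900, 26000    # December (26,000, not 26,500)

variable_per_meal = (high_cost - low_cost) / (high_meals - low_meals)  # 5500 / 1400
fixed_from_low = low_cost - low_meals * variable_per_meal
fixed_from_high = high_cost - high_meals * variable_per_meal

print(round(variable_per_meal, 4))                          # 3.9286 per meal
print(round(fixed_from_low, 2), round(fixed_from_high, 2))  # 6750.0 6750.0
```

With the corrected December figure, the fixed cost comes out the same ($6,750) from either month, which is the consistency check that failed above.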
https://www.sarthaks.com/2642900/hollow-shaft-length-designed-transmit-power-maximum-permissible-angle-twist-shaft-inner
# A hollow shaft of 1 m length is designed to transmit a power of 30 kW at 700 rpm. The maximum permissible angle of twist in the shaft is 1°. The inner diameter of the shaft is 0.7 times the outer diameter. The modulus of rigidity is 80 GPa. The outside diameter (in mm) of the shaft is _______

Concept:

The power transmitted by the shaft is given by
$$P = T\omega = \frac{2\pi N T}{60}$$

Torsion equation:
$$\frac{T}{J} = \frac{\tau}{r} = \frac{G\theta}{L}$$

Calculation:

Given: P = 30 kW, N = 700 rpm, θ = 1°, G = 80 GPa and $d_i = 0.7\,d_o$.

From $P = T\omega = \frac{2\pi N T}{60}$:
$$30 \times 1000 = T \times \frac{2\pi \times 700}{60}$$
∴ T = 409.256 N·m

From the torsion equation $\frac{T}{J} = \frac{G\theta}{L}$, with L = 1 m and θ = 1° = π/180 rad:
$$\frac{409.256}{\frac{\pi}{32}\left(1 - 0.7^4\right)d_o^4} = \frac{80 \times 10^{9}}{1} \times \frac{\pi}{180}$$

On solving, we get $d_o$ = 44.52 mm.
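The result is easy to sanity-check numerically; the short script below just re-runs the same derivation in SI units (variable names are mine):

```python
import math

P = 30e3                   # power, W
N = 700                    # shaft speed, rpm
theta = math.radians(1.0)  # permissible angle of twist, rad
G = 80e9                   # modulus of rigidity, Pa
L = 1.0                    # shaft length, m
k = 0.7                    # ratio d_i / d_o

# Torque from P = 2*pi*N*T/60
T = P * 60 / (2 * math.pi * N)

# Torsion equation T/J = G*theta/L with J = (pi/32) * (1 - k**4) * d_o**4
do = (32 * T * L / (math.pi * (1 - k**4) * G * theta)) ** 0.25

print(round(T, 3))          # ≈ 409.256 N·m
print(round(do * 1000, 2))  # ≈ 44.52 mm
```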
http://jkms.kms.or.kr/journal/view.html?doi=10.4134/JKMS.j190571
Radii problems for the generalized Mittag-Leffler functions

J. Korean Math. Soc. Published online March 13, 2020

Anuja Prajapati, Sambalpur University

Abstract: In this paper our aim is to solve various radii problems for the generalized Mittag-Leffler function, for three different kinds of normalization, by using their Hadamard factorization in such a way that the resulting functions are analytic. The basic tool of this study is the series representation of the Mittag-Leffler function. We also show that the obtained radii are the smallest positive roots of some functional equations.

Keywords: Generalized Mittag-Leffler functions; radius of $\eta$-uniform convexity of order $\rho$; radius of $\alpha$-convexity of order $\rho$; radius of $\eta$-parabolic starlikeness of order $\rho$; radius of strong starlikeness of order $\rho$

MSC numbers: 30C45, 30C15, 33E12
https://www.studysmarter.us/explanations/physics/linear-momentum/
# Linear Momentum

Did you know that a swarm of jellyfish once managed to shut down a nuclear power plant, in Japan, after getting stuck in the cooling system? No, probably not, and now you're wondering what jellyfish have to do with physics, right? Well, what if I told you that jellyfish apply the principle of conservation of momentum every time they move? When a jellyfish wants to move, it fills its umbrella-like section with water and then pushes the water out. This motion creates a backward momentum that in turn creates an equal and opposite forward momentum that allows the jellyfish to push itself forward. Therefore, let us use this example as a starting point in understanding momentum.

Figure 1: Jellyfish use momentum to move.

## Definition of Linear Momentum

Momentum is a vector quantity related to the motion of objects. It can be linear or angular depending on the motion of a system. Linear motion, one-dimensional motion along a straight path, corresponds to linear momentum, which is the topic of this article.

Linear momentum is the product of an object's mass and velocity.

Linear momentum is a vector; it has magnitude and direction.

## Linear Momentum Equation

The mathematical formula corresponding to the definition of linear momentum is $$p=mv$$ where $$m$$ is mass measured in $$\mathrm{kg}$$, and $$v$$ is velocity measured in $$\mathrm{\frac{m}{s}}$$. Linear momentum has SI units of $$\mathrm{kg\,\frac{m}{s}}$$. Let's check our understanding with a quick example.

A $$3.5\,\mathrm{kg}$$ soccer ball is kicked with a speed of $$5.5\,\mathrm{\frac{m}{s}}$$. What is the linear momentum of the ball?

Figure 2: Kicking a soccer ball to demonstrate linear momentum.
Using the linear momentum equation, our calculations are
\begin{align}p&=mv\\p&= (3.5\,\mathrm{kg})\left(5.5\,\mathrm{\frac{m}{s}}\right)\\p&=19.25\,\mathrm{kg\,\frac{m}{s}}.\\\end{align}

### Linear Momentum and Impulse

When discussing momentum, the term impulse will arise. Linear impulse is a term used to describe how force affects a system with respect to time.

Linear impulse is defined as the integral of a force exerted on an object over a time interval.

The mathematical formula corresponding to this definition is $$\vec{J}= \int_{t_0}^{t}\vec{F}(t)\,dt,$$ which can be simplified to $$J=F\Delta{t}$$ when $$F$$ doesn't vary with time, i.e. a constant force. Note $$F$$ is force, $$t$$ is time, and the corresponding SI unit is $$\mathrm{N\,s}.$$ Impulse is a vector quantity, and its direction is the same as that of the net force acting on an object.

## Momentum, Impulse, and Newton's Second Law of Motion

Impulse and momentum are related by the impulse-momentum theorem. This theorem states that the impulse applied to an object is equal to the object's change in momentum. For linear motion, this relationship is described by the equation $$J=\Delta{p}.$$ Newton's second law of motion can be derived from this relationship. To complete this derivation, we must use the equations corresponding to the impulse-momentum theorem in conjunction with the individual formulas of linear momentum and linear impulse.

Now, let us derive Newton's second law for linear motion starting with the equation $$J=\Delta{p}$$ and rewriting it as $$F\Delta{t}=m\Delta{v}:$$
\begin{align}J&=\Delta{p}\\F\Delta{t}&=\Delta{p}\\F\Delta{t}&=m\Delta{v}\\F&=\frac{m\Delta{v}}{\Delta{t}}.\\\end{align}
Be sure to recognize that $$\frac{\Delta{v}}{\Delta{t}}$$ is the definition of acceleration, so the equation can be written as \begin{align}F&= ma,\\\end{align} which we know to be Newton's second law for linear motion. As a result of this relationship, we can define force in terms of momentum.
Force is the rate at which the momentum of an object changes with respect to time.

## Distinguishing Between Linear and Angular Momentum

To distinguish linear momentum from angular momentum, let us first define angular momentum. Angular momentum corresponds to rotational motion, circular motion about an axis.

Angular momentum is the product of angular velocity and rotational inertia.

The mathematical formula corresponding to this definition is $$L=I\omega$$ where $$\omega$$ is angular velocity measured in $$\mathrm{\frac{rad}{s}}$$ and $$I$$ is the moment of inertia measured in $$\mathrm{kg\,m^2}.$$ Angular momentum has SI units of $$\mathrm{kg\,\frac{m^2}{s}}$$. This formula can only be used when the moment of inertia is constant. Again, let's check our understanding with a quick example.

A student vertically swings a conker, attached to a string, above their head. The conker rotates with an angular velocity of $$5\,\mathrm{\frac{rad}{s}}.$$ If its moment of inertia, which is defined in terms of the distance from the center of rotation, is $$6\,\mathrm{kg\,m^2}$$, calculate the angular momentum of the conker.

Figure 3: A rotating conker demonstrating the concept of angular momentum.

Using the equation for angular momentum, our calculations are
\begin{align}L&=I\omega\\L&=(6\,\mathrm{kg\,m^2})\left(5\,\mathrm{\frac{rad}{s}}\right)\\L&= 30\,\mathrm{kg\,\frac{m^2}{s}}.\\\end{align}

## Distinguishing Linear Momentum from Angular Momentum

Linear momentum and angular momentum are related because their mathematical formulas are of the same form, as angular momentum is the rotational equivalent of linear momentum. However, the main difference between the two is the type of motion they are associated with. Linear momentum is a property associated with objects traveling a straight-line path. Angular momentum is a property associated with objects traveling in a circular motion.
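Both worked examples above (the soccer ball and the conker) reduce to a single multiplication each; as a quick check, here is a minimal Python sketch using the values from the examples:

```python
# Linear momentum of the soccer ball: p = m * v
m_ball, v_ball = 3.5, 5.5      # kg, m/s
p = m_ball * v_ball
print(p)                       # 19.25 (kg·m/s)

# Angular momentum of the conker: L = I * omega
I_conker, omega = 6.0, 5.0     # kg·m^2, rad/s
L = I_conker * omega
print(L)                       # 30.0 (kg·m^2/s)
```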
## Linear Momentum and Collisions

Collisions are divided into two categories, inelastic and elastic, and each type produces different results.

### Inelastic and Elastic Collisions

Inelastic collisions are characterized by two factors:

1. Conservation of momentum. The corresponding formula is $$m_1v_{1i} + m_2v_{2i}=(m_1 + m_2)v_{f}.$$
2. Loss of kinetic energy. The loss of energy is due to some kinetic energy being converted into another form, and when the maximum amount of kinetic energy is lost, this is known as a perfectly inelastic collision.

Elastic collisions are characterized by two factors:

1. Conservation of momentum. The corresponding formula is $$m_1v_{1i} + m_2v_{2i}= m_1v_{1f}+m_2v_{2f}.$$
2. Conservation of kinetic energy. The corresponding formula is $$\frac{1}{2}m_1{v_{1i}}^2 + \frac{1}{2}m_2{v_{2i}}^2 =\frac{1}{2}m_1{v_{1f}}^2+ \frac{1}{2}m_2{v_{2f}}^2.$$

Note that the equations associated with elastic collisions can be used in conjunction with one another to calculate an unknown variable if needed, such as a final velocity or final angular velocity. Two important principles related to these collisions are the conservation of momentum and the conservation of energy.

### Conservation of Momentum

The conservation of momentum is a law of physics stating that momentum is neither created nor destroyed, in line with Newton's third law of motion. In simple terms, the momentum before the collision will be equal to the momentum after the collision. This concept is applied to elastic and inelastic collisions. However, it is important to note that conservation of momentum only applies when no external forces are present. When no external forces are present, we refer to this as a closed system. Closed systems are characterized by conserved quantities, meaning that no mass or energy is lost. If a system is open, external forces are present and quantities are no longer conserved. To check our understanding, let's do an example.
A $$2\,\mathrm{kg}$$ billiard ball moving with a speed of $$4\,\mathrm{\frac{m}{s}}$$ collides with a stationary $$4\,\mathrm{kg}$$ billiard ball, causing the stationary ball to now move with a velocity of $$-6\,\mathrm{\frac{m}{s}}.$$ What is the final velocity of the $$2\,\mathrm{kg}$$ billiard ball after the collision? Figure 4: A game of billiards demonstrates the concept of collisions. Using the equation for conservation of momentum corresponding to an elastic collision and linear motion, our calculations are \begin{align}m_1v_{1i} + m_2v_{2i}&= m_1v_{1f}+m_2v_{2f}\\(2\,\mathrm{kg})\left(4\,\mathrm{\frac{m}{s}}\right) + 0 &= (2\,\mathrm{kg})(v_{1f}) + (4\,\mathrm{kg})\left(-6\,\mathrm{\frac{m}{s}}\right)\\8\,\mathrm{kg\,\frac{m}{s}} &=(2\,\mathrm{kg})(v_{1f}) - 24\,\mathrm{kg\,\frac{m}{s}}\\32\,\mathrm{kg\,\frac{m}{s}} &=(2\,\mathrm{kg})(v_{1f})\\v_{1f}&=16\,\mathrm{\frac{m}{s}}\\\end{align} ### Momentum changes To better understand how the conservation of momentum works, let us perform a quick thought experiment involving the collision of two objects. When two objects collide, we know that according to Newton's third law, the forces acting on each object will be equal in magnitude but opposite in direction, $$F_1 = -F_2$$, and logically, we know that the time it takes for $$F_1$$ and $$F_2$$ to act on the objects will be the same, $$t_1 = t_2$$. Therefore, we can further conclude that the impulse experienced by each object will also be equal in magnitude and opposite in direction, $$F_1{t_1}= -F_2{t_2}$$. Now, if we apply the impulse-momentum theorem, we can logically conclude that the changes in momentum are equal and opposite in direction as well: $$m_1\Delta{v_1}=-m_2\Delta{v_2}$$. However, although momentum is conserved in all interactions, the momentum of individual objects which make up a system can change when they are imparted with an impulse, or in other words, an object's momentum can change when it experiences a non-zero net force. As a result, momentum can change or be constant.
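The billiard-ball example can be reproduced numerically, and the equal-and-opposite momentum changes checked, with a short Python sketch:

```python
# Conservation of momentum for the billiard example:
# m1*v1i + m2*v2i = m1*v1f + m2*v2f, solving for the unknown v1f.
m1, v1i = 2.0, 4.0   # kg, m/s (moving ball)
m2, v2i = 4.0, 0.0   # kg, m/s (stationary ball)
v2f = -6.0           # m/s (given final velocity of the 4 kg ball)

v1f = (m1 * v1i + m2 * v2i - m2 * v2f) / m1
print(v1f)  # 16.0

# The momentum changes of the two balls are equal and opposite:
dp1 = m1 * (v1f - v1i)   # change in momentum of ball 1
dp2 = m2 * (v2f - v2i)   # change in momentum of ball 2
assert dp1 == -dp2
```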
#### Constant Momentum 1. The mass of a system must be constant throughout an interaction. 2. The net forces exerted on the system must equal zero. #### Changing Momentum 1. A net force exerted on the system causes a transfer of momentum between the system and the environment. Note that the impulse exerted by one object on a second object is equal and opposite to the impulse exerted by the second object on the first. This is a direct result of Newton's third law. Therefore, if asked to calculate the total momentum of a system, we must consider these factors. As a result, some important takeaways to understand are: • Momentum is always conserved. • A momentum change in one object is equal and opposite in direction to the momentum change of another object. • When momentum is lost by one object, it is gained by the other object. • Momentum can change or be constant. ### Application of the Law of Conservation of Momentum An example of an application that uses the law of conservation of momentum is rocket propulsion. Before launching, a rocket will be at rest, indicating that its total momentum relative to the ground equals zero. However, once the rocket is fired, chemicals within the rocket are burnt in the combustion chamber, producing hot gases. These gases are then expelled through the rocket's exhaust system at extremely high speeds. This produces a backward momentum, which in turn produces an equal and opposite forward momentum that thrusts the rocket upwards. In this case, the change in the rocket's momentum is due in part to a change in mass, in addition to a change in velocity.
Remember, it is the change in momentum which is associated with a force, and momentum is the product of mass and velocity; a change in either one of these quantities will contribute terms to Newton's second law: $$\frac{\mathrm{d}p}{\mathrm{d}t}=\frac{\mathrm{d}(mv)}{\mathrm{d}t}=m\frac{\mathrm{d}v}{\mathrm{d}t}+\frac{\mathrm{d}m}{\mathrm{d}t}v.$$ ### Importance of Momentum and Conservation of Momentum Momentum is important because it can be used to analyze collisions and explosions as well as describe the relationship between speed, mass, and direction. Because much of the matter we deal with has mass, and because it is often moving with some velocity relative to us, momentum is a ubiquitous physical quantity. The fact that momentum is conserved is a convenient fact that allows us to deduce velocities and masses of particles in collisions and interactions given the total momentum. We can always compare systems before and after a collision or interaction involving forces, because the total momentum of the system before will always be equal to the total momentum of the system after. ### Conservation of Energy The conservation of energy is a principle within physics that states that energy cannot be created or destroyed. Conservation of energy: The total mechanical energy, which is the sum of all potential and kinetic energy, of a system remains constant when excluding dissipative forces. Dissipative forces are nonconservative forces, such as friction or drag forces, in which work is dependent on the path an object travels. The mathematical formula corresponding to this definition is $$K_i + U_i = K_f + U_f$$ where $$K$$ is kinetic energy and $$U$$ is potential energy. However, when discussing collisions, we focus only on the conservation of kinetic energy. Thus, the corresponding formula is $$\frac{1}{2}m_1{v_{1i}}^2 + \frac{1}{2}m_2{v_{2i}}^2 =\frac{1}{2}m_1{v_{1f}}^2+ \frac{1}{2}m_2{v_{2f}}^2.$$ This formula will not apply to inelastic collisions.
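For a one-dimensional elastic collision, solving the momentum and kinetic-energy equations simultaneously yields standard closed-form final velocities. The Python sketch below uses those formulas and checks that both conserved quantities come out equal before and after; the masses and velocities are made up for illustration:

```python
# Closed-form result for a 1D elastic collision, derived from
# conservation of momentum and conservation of kinetic energy.
def elastic_1d(m1, v1i, m2, v2i):
    v1f = ((m1 - m2) * v1i + 2 * m2 * v2i) / (m1 + m2)
    v2f = ((m2 - m1) * v2i + 2 * m1 * v1i) / (m1 + m2)
    return v1f, v2f

# Illustrative values: a 1 kg object at 3 m/s hits a stationary 2 kg object.
m1, v1i, m2, v2i = 1.0, 3.0, 2.0, 0.0
v1f, v2f = elastic_1d(m1, v1i, m2, v2i)

# Both conservation laws check out numerically:
p_before = m1 * v1i + m2 * v2i
p_after = m1 * v1f + m2 * v2f
ke_before = 0.5 * m1 * v1i**2 + 0.5 * m2 * v2i**2
ke_after = 0.5 * m1 * v1f**2 + 0.5 * m2 * v2f**2
assert abs(p_before - p_after) < 1e-9
assert abs(ke_before - ke_after) < 1e-9
```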
#### Energy changes The total energy of a system is always conserved; however, energy can be transformed in collisions. Consequently, these transformations affect the behavior and motion of objects. For example, let us look at collisions where one object is at rest. The object at rest has no kinetic energy, since its velocity is zero. Once a collision occurs, however, kinetic energy is transferred from the moving object to the object at rest, which is now in motion. In elastic collisions, kinetic energy is conserved; in inelastic collisions, some kinetic energy is lost to the environment as it is transformed into heat or sound energy. ## Linear Momentum - Key takeaways • Momentum is a vector and therefore has both magnitude and direction. • Momentum is conserved in all interactions. • Impulse is defined as the integral of a force exerted on an object over a time interval. • Impulse and momentum are related by the impulse-momentum theorem. • Linear momentum is a property associated with objects traveling a straight-line path. • Angular momentum is a property associated with objects traveling in a circular motion about an axis. • Collisions are divided into two categories: inelastic and elastic. • The conservation of momentum is a law within physics which states that momentum is neither created nor destroyed, a consequence of Newton's third law of motion. • Conservation of energy: The total mechanical energy of a system remains constant when excluding dissipative forces. ## References 1. Figure 1: Jellyfish (https://www.pexels.com/photo/jellfish-swimming-on-water-1000653/) by Tim Mossholder (https://www.pexels.com/@timmossholder/) is licensed by CC0 1.0 Universal (CC0 1.0). 2. Figure 2: Soccer ball (https://www.pexels.com/photo/field-grass-sport-foot-50713/) by Pixabay (https://www.pexels.com/@pixabay/) is licensed by CC0 1.0 Universal (CC0 1.0). 3. Figure 3: Rotating Conker - StudySmarter Originals 4.
Figure 4: Billiards (https://www.pexels.com/photo/photograph-of-colorful-balls-on-a-pool-table-6253911/) by Tima Miroshnichenko (https://www.pexels.com/@tima-miroshnichenko/) is licensed by CC0 1.0 Universal (CC0 1.0). ## Frequently Asked Questions about Linear Momentum An application of the law of conservation of linear momentum is rocket propulsion. Momentum is important because it can be used to analyze collisions and explosions as well as describe the relationship between speed, mass, and direction. For momentum to be constant, the mass of a system must be constant throughout an interaction and the net forces exerted on the system must equal zero. Linear momentum is defined as the product of an object's mass times velocity. Impulse is defined as the integral of a force exerted on an object over a time interval. Total linear momentum is the sum of the linear momenta of all of the objects in a system. ## Final Linear Momentum Quiz Question What two factors characterize inelastic collisions? Show answer Answer 1. Conservation of momentum. 2. Loss of kinetic energy. Show question Question Conservation of momentum can only be applied to elastic collisions. Show answer Answer False. Show question Question Elastic collisions are characterized by conservation of momentum and ________. Show answer Answer conservation of kinetic energy. Show question Question In inelastic collisions, kinetic energy is conserved. Show answer Answer False. Show question Question As momentum is a vector quantity, it has which of the following? Show answer Answer Magnitude and direction. Show question Question Momentum is conserved in all interactions. Show answer Answer True. Show question Question Momentum is constant within a system if which two factors occur? Show answer Answer The mass of the system is constant and the net forces exerted on the system equal zero. Show question Question If the momentum of a system changes, what has to occur?
Show answer Answer A net force is exerted on the system. Show question Question Impulse is the force applied to an object during a specific time period which causes the momentum of an object to change. Show answer Answer True. Show question Question Impulse and momentum are related by the impulse-momentum theorem. Show answer Answer True Show question Question The impulse-momentum theorem states that Show answer Answer the impulse applied to an object is equal to the object's change in momentum. Show question Question Using the equation, $$J=\Delta{p},$$ which equation can also be derived. Show answer Answer$$F=ma.$$Show question Question The law of conservation of momentum only applies if which of the following is true. Show answer Answer No external forces are present. Show question Question The law of conservation of momentum states which of the following. Show answer Answer The momentum before a collision will be equal to the momentum after a collision. Show question Question A net force exerted on the system causes a transfer of momentum between the system and the environment. Show answer Answer True. Show question Question Inelastic collisions are characterized by which two factors? Show answer Answer Conservation of momentum Loss of kinetic energy. Show question Question Elastic collisions are characterized by which two factors? Show answer Answer Conservation of momentum and Conservation of kinetic energy. Show question Question The law of conservation of momentum states that Show answer Answer momentum before and after a collision is equal. Show question Question Impulse describes an object's change in what quantity? Show answer Answer Momentum. Show question Question Constant momentum is characterized by which two factors? Show answer Answer mass of a system is constant throughout an interaction and the net forces exerted on the system must equal zero. Show question Question What is the center of mass? 
Show answer Answer The center of mass is the weighted average position of the mass distribution of the system. Equivalently, in formula form, it is $$\frac{1}{M}\int\vec{r}\,\mathrm{d}m$$, where $$M$$ is total mass, $$\vec{r}$$ is the position vector, and $$m$$ is local mass. Show question Question A hammer is formed by gluing together a rectangular prism (the hammer's head) and a thin cylinder (the handle). Both of them have a uniform mass distribution. If we are asked to find the center of mass of this hammer, what can we do? Show answer Answer No luck, we need to integrate. Show question Question What is the formula to calculate the center of mass for a system of particles? Show answer Answer $$x_\text{CM}=\frac{\sum m_i x_i}{m_{\text {total}}}$$. Show question Question A system is comprised of three particles. If we apply an external force only on one of these particles, what will happen? Show answer Answer The particle will move and so will the system's center of mass. Show question Question When is the center of mass equal to the center of gravity? Show answer Answer Only in the presence of a uniform gravitational field. Show question Question How do you calculate the center of mass of a body by integration? Show answer Answer $$\displaystyle\vec{r}_\text{CM}=\frac{\int \vec{r} \,\mathrm{d}m}{M}.$$ Show question Question Where is the center of mass positioned in relation to a loose distribution of masses in free space? Show answer Answer In the case of loose distribution of masses in free space, the position of the center of mass is a point in space among them that may not correspond to the position of any individual mass. Show question Question What is the position of the center of mass for a uniform rod? Show answer Answer The position of the center of mass for a uniform rod is at its geometric center. Show question Question Why is the concept of center of mass useful? 
Show answer Answer Because it allows us to simplify the modeling of some complex systems by reducing them to a single point. Show question Question If there are no external forces acting on a system whose center of mass is stationary, then ____ . Show answer Answer The system won't move at all. Show question Question The center of mass of an object always lies on or within the object. Show answer Answer False. Show question Question Consider the system Earth-Sun. In this system, the center of mass is located ___. Show answer Answer closer to the position of the Sun. Show question Question Which of these are formulas for impulse? Show answer Answer $$\int\vec F\,\mathrm{d}t$$. Show question Question Which of these answers gives the correct units for impulse? Show answer Answer newton seconds. Show question Question Impulse is the area under a___ Show answer Answer force vs time graph. Show question Question Calculate the impulse of a box that has a constant force of $$15\,\mathrm{N}$$ on it for a period of $$5.0\,\mathrm{s}$$. Show answer Answer $$75\,\mathrm{N\,s}$$. Show question Question Elastic collisions always have ___. Show answer Answer conservation of momentum. Show question Question Inelastic collisions___ Show answer Answer have conservation of momentum. Show question Question What is Newton's Second Law? Show answer Answer $$\vec F=m\vec a,$$ where $$\vec{F}$$ is force, $$m$$ is mass, and $$\vec{a}$$ is acceleration. Show question Question How do you get change of momentum from Newton's Second Law? Show answer Answer Start with $$\vec{F}=m\vec{a}$$. Then we can integrate w.r.t. time $$t$$: $$\int\vec{F}\,\mathrm{d}t=\int m\vec{a}\,\mathrm{d}t$$. We then realize that the mass of an object is constant so we can pull it out of the integral, and the time integral of acceleration is velocity: $$\int\vec{F}\,\mathrm{d}t=m\vec{v}\Big|_{v_\text{i}}^{v_\text{f}}=m\Delta v$$. Show question Question What is the integral formula for impulse?
Show answer Answer $$\vec J=\int_{t_\text{i}}^{t_\text{f}} \vec F(t)\,\mathrm{d}t.$$ Show question Question Define change of momentum. Show answer Answer The change in the product of mass and velocity, $$\Delta p=\Delta(mv)$$. Show question Question In which collisions is momentum conserved? Show answer Answer All collisions, both elastic and inelastic. Show question Question In which of these collisions is energy conserved? Show answer Answer Billiard balls. Show question Question Impulse is the same as change of momentum. Show answer Answer True. Show question Question What are the units for momentum? Show answer Answer $$\mathrm{\frac{kg\,m}{s}}$$. Show question Question A sleeping elephant has a lot of momentum because it is really hard to change its state of motion by pushing it to get it moving. Show answer Answer False. Show question Question Newton's original formulation of the Second Law stated that the force acting on an object was the derivative with respect to time of the ___ of the object. Show answer Answer Linear momentum. Show question Question The linear momentum is related to the amount of motion and is the product of an object's mass and its acceleration at a certain moment. Show answer Answer False; momentum is the product of mass and velocity, not acceleration. Show question Question The impulse is defined as the product of the average force and the ___ in which the force is acting on an object. Show answer Answer time interval. Show question
https://arrow.fandom.com/wiki/User_talk:The_Immortal_Selene
# The Immortal Selene ## Welcome Hi, I'm an admin for the Arrowverse Wiki community. Welcome and thank you for your edit to Sara Lance! If you need help getting started, check out our help pages or contact me or another admin here. For general help, you could also stop by Community Central to explore the forums and blogs. Please leave me a message if I can help with anything. Enjoy your time at Arrowverse Wiki! ## RE: Reverting my Samandra Watson edit Your assumptions on her are more personally biased; it isn't an accurate description of her, as she hasn't broken any laws to do what she's done. Sure, she's bent them, but never broken them to my knowledge, so she isn't being hypocritical at all. Wraiyf You know, I'm getting real tired of people and their lazy writing 02:28, March 8, 2018 (UTC) Although that may be the case, we've never actually seen her break any laws; sure, she believes that and/or has stated that, however we haven't seen her do anything of the sort, therefore at the moment saying she's a hypocrite is hearsay. Wraiyf You know, I'm getting real tired of people and their lazy writing 09:01, March 8, 2018 (UTC) Until she does such a thing to prove her hypocrisy, do not place it in her article, end of discussion. Wraiyf You know, I'm getting real tired of people and their lazy writing 23:33, March 8, 2018 (UTC) ## Moving pages Please use the {{move}} template when you want a page moved. An admin or content mod will deal with it. You don't create a new page and take all the credit for it. TIMESHADE |Talk/Wall| - |C| 07:30, March 25, 2019 (UTC) Again, you do not create a page to "move" the information over. That is plagiarism and is illegal. Please use the move template when a page needs to be moved. $\int$ IHH dt    4:40, Jan 23, 2020 (UTC) The problem is that everything you added you essentially added saying that you wrote all of the code on that page yourself. That is wrong. That is plagiarism.
Others edited there as well. Whereas if you add the move template and an admin renames the page, it takes all of the page history along with it as well. That is the appropriate way to do things. IHBot (talk) Correct, you copied and pasted other people's work and pressed save. Now on the page history, it only shows your username. So that basically means that you are the one that created all of this information on the page. That is in fact plagiarism. That is why if you let an admin use the move template, it preserves the edit history and you aren't inadvertently claiming all of it to be your own work. IHBot (talk) So by hitting "publish" you are claiming the work as your own. So yes, you did copy and paste material that is not yours and claimed it is yours when you hit publish. Second, you did not "move" the page. You copied and pasted information. It completely got rid of the edit history, and so the people that worked on it no longer show up on the history section. I understand that the page title was wrong and it would have been fixed if you would have had patience, but because you chose to incorrectly "move" a page, it erased the record of people's work and you claimed it to be your own when you hit publish. Further, most of the pages in the move category are there because their move is either unnecessary or still a topic of discussion. Those that NEED to be moved 100% are moved. Those that aren't are still in the move category. Moral of the story, that is not how you move a page. An admin already warned you of this. Here is another warning. If you do this form of plagiarism again, you will be blocked. End of discussion. IHBot (talk) ## Re:Antimatter cannon Because it says the most powerful weapon in existence. If we reword that, it would make more sense, but right now, saying "the ultimate weapon in existence" doesn't make much sense grammatically. Thanks, Community content is available under CC-BY-SA unless otherwise noted.
https://www.oscaner.com/exam/aws/mls-c01/whizlabs/05-algorithms/07-text-analysis-algorithms.html
# [MLS-C01] [Algorithms] Text Analysis Algorithms Posted by Oscaner on July 19, 2022 ## Defined • Both supervised and unsupervised learning algorithms • Take text or documents as input and either categorize, sequence, or classify the text or documents • Used as preprocessing for many downstream natural language processing (NLP) tasks, such as sentiment analysis, named entity recognition, machine translation • Text classification used for applications that perform web searches, ranking, and document classification ## Use Cases 1. Sentiment analysis for social media streams 2. Categorize documents by topic for law firms 3. Language translation 4. Speech-to-text 5. Summarizing longer documents 6. Conversational user interfaces 7. Text generation 8. Word pronunciation app ## SageMaker Algorithms ### Blazing Text • Implements the Word2vec and text classification algorithms • Can use pre-trained vector representations that improve the generalizability of other models that are later trained on a more limited amount of data • Can easily scale for large text datasets • Can train a model on more than a billion words very quickly, in minutes, using a large multi-core CPU or a GPU • Words that are semantically similar correspond to vectors that are close together, so the word embeddings capture the semantic relationships between words. • Example use case: market research using sentiment analysis • Important Hyperparameters 1.
mode: architecture used for training ### Latent Dirichlet Allocation (LDA) • Unsupervised learning algorithm that organizes a set of text observations into distinct categories • Frequently used to discover a number of topics shared across documents within a collection of texts, or a corpus • In an LDA-based model, each observation is a document and each feature is a count of a word in the documents • Topics are not specified in advance • Each document is described as a mixture of topics • Example use case: find common topics in call center transcripts • Important Hyperparameters 1. num_topics: number of topics to find in the data 2. feature_dim: size of the vocabulary of the input document corpus 3. mini_batch_size: total number of documents in the input document corpus ### Neural Topic Model (NTM) • Unsupervised learning algorithm that organizes a corpus of documents into topics containing word groupings, based on the statistical distribution of the word groupings • Frequently used to classify or summarize documents based on topics detected • Also used to retrieve information or recommend content based on topic similarities • Topics are inferred from observed word distributions in the corpus • Used to visualize the contents of a large set of documents in terms of the learned topics • Similar to LDA, but will produce different outcomes • Example use case: find the topics of newsgroup message posts • Important Hyperparameters 1. feature_dim: vocabulary size of the dataset 2. num_topics: number of required topics ### Object2Vec • General-purpose neural embedding algorithm that finds related clusters of words (words that are semantically similar) • Embeddings can be used to find nearest neighbors of objects, and can also visualize clusters of related objects • Besides word embeddings, Object2Vec can also learn the embeddings of other objects such as sentences, customers, products, etc.
• Frequently used for information retrieval, product search, item matching, customer profiling, etc. based on related topics • Supports embeddings of paired tokens, paired sequences, and paired token to sequence • Example use case: recommendation engine based on collaborative filtering • Important Hyperparameters 1. enc0_max_seq_len: maximum sequence length for the enc0 encoder 2. enc0_vocab_size: vocabulary size of enc0 tokens ### Sequence-to-Sequence (seq2seq) • Supervised learning algorithm with input of a sequence of tokens (audio, text, radar data) and output of another sequence of tokens • Can be used for translation from one language to another, text summarization, speech-to-text • Uses Recurrent Neural Networks (RNNs) and Convolutional Neural Network (CNN) models • Uses state-of-the-art encoder-decoder architecture • Uses input of sequence data in recordio-protobuf format and JSON vocabulary mapping files • Example use case: word pronunciation dictionary, a sequence of text as input and a sequence of audio as output • Important Hyperparameters 1. Has no required hyperparameters
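The idea that "semantically similar words correspond to vectors that are close together" (BlazingText/Word2vec, Object2Vec) can be illustrated with plain-Python cosine similarity. The three-dimensional vectors below are invented for illustration only; real trained embeddings typically have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (made-up values; a trained Word2vec model would
# produce vectors with this property automatically).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

# Semantically related words score higher than unrelated ones.
assert cosine_similarity(embeddings["king"], embeddings["queen"]) > \
       cosine_similarity(embeddings["king"], embeddings["apple"])
```

Nearest-neighbor lookups over such vectors are what powers the "item matching" and "recommendation" use cases listed above.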
https://tex.stackexchange.com/questions/263844/making-a-horizontal-vertical-continuous-list
# Making a horizontal & vertical continuous list I'm looking to make assignments and tests in the LaTeX article class. I want to be able to do the following very easily: Couple of things: I would like the text in a) or c) to wrap around the imaginary "break" between the two columns. Also, it would be nice if the procedure kept track of the counters together for both columns; otherwise, this might as well be done in MS Word - I'm porting over to LaTeX for the math mode. You can use tasks for the inner list. \documentclass{article} \usepackage{enumitem} \usepackage{tasks} \NewTasks[style=enumerate]{questions}[\subquestion](2) \begin{document} \begin{enumerate}[nosep] \item Answer any of the following questions as you wish. \begin{questions} \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. \end{questions} \item You don't have to answer any of these questions as they may be difficult.% << this % is needed \begin{questions} \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. \end{questions} \item You don't have to answer any of these questions as they may be difficult.% << this % is needed \begin{questions} \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question. How are you doing? If doing well, answer this question. \subquestion How are you doing? If doing well, answer this question.
  \end{questions}
\end{enumerate}
\end{document}

• Hey thanks, that's an awesome solution; however, I can't seem to compile it or really find out why this isn't working. I get a couple of errors: LaTeX error: "kernel/key-unknown" The key 'tasks/list/column-sep' is unknown and is being ignored. Any thoughts? Really would like to get this to work, thanks! – Dan P. Sep 1 '15 at 1:02
• @DanP. You need an up-to-date TeX distribution. Please update and it will work. – user11232 Sep 1 '15 at 1:09
• Awesome! Everything works perfectly now. Thank you very much! – Dan P. Sep 8 '15 at 1:25

A solution with the shortlst and enumitem packages. You can choose the number of columns (key nc, 3 by default), the interlining to take into account big formulae (key il, 1 by default) and the distance between item label and item body (key ls, 0.6em by default). If necessary, an item will automatically use more than one column. Example:

\documentclass{article}
\usepackage[showframe]{geometry}
\usepackage{shortlst, setspace, amsmath}
\usepackage{enumitem}
\newlist{exercises}{enumerate}{2}
\setlist[exercises,1]{label=\arabic*., wide=0pt}
\usepackage{etoolbox}
\AtBeginEnvironment{tabenumerate}{\renewcommand\theenumi{\alph{enumi})}%
  \settowidth{\labelwidth}{\mbox{(m)}}}
\makeatletter
\newcounter{ncol}
\define@key{lex}{nc}[3]{\setcounter{ncol}{#1}}%% 3 columns by default
\define@key{lex}{il}[1.5]{\def\@intln{#1}}% interlining
\define@key{lex}{ls}[0.6em]{\setlength{\labelsep}{#1}}%% distance between label and item body
\newenvironment{tabenumerate}[1][]{%
  \setkeys{lex}{nc,il,ls, #1}%
  \setlength{\leftmargini}{\dimexpr\labelwidth+\labelsep\relax}%
  \setlength{\shortitemwidth}{\dimexpr\linewidth/\value{ncol}-\labelwidth-2\labelsep\relax}%
  \setstretch{\@intln}%
  \begin{shortenumerate}\everymath{\displaystyle}%
}{%
  \end{shortenumerate}%
}
\newcommand\paritem[2][1]{\item \parbox[t]{#1\shortitemwidth}{\setstretch{1}#2\medskip}}
\makeatother

\begin{document}

\vspace*{1cm}
\begin{exercises}
  \item For the following questions, \&c.
  \vspace{-\baselineskip}
  \begin{tabenumerate}[nc=3, il=2.5, ls=1em]
    \item $a_n = \frac{4n^3 - (-1)^n n^2}{5n + 2n^3}$
    \item $b_n = \frac{(n^3 - 5n)^4 - n^{12}}{n^{11}}$
    \item $c_n = \frac{n^{n + 1}}{n!}$\label{q-3}
    \item $e_n = \frac{2^{(n^3)}}{n!\,5^{(n^2)} - n^n}$
    \item $f_n = \sqrt{n + \sqrt{2n}} - \sqrt{n - \sqrt{2n}}$
  \end{tabenumerate}
\end{exercises}

See question \ref{q-3}.

\end{document}
https://www.preprints.org/manuscript/201810.0407/v1
Preprint Article Version 1 Preserved in Portico. This version is not peer-reviewed.

# On the Eternal Role of Planck Scale in Machian Cosmology

Version 1 : Received: 17 October 2018 / Approved: 18 October 2018 / Online: 18 October 2018 (08:42:57 CEST)
Version 2 : Received: 25 October 2018 / Approved: 25 October 2018 / Online: 25 October 2018 (09:51:27 CEST)
Version 3 : Received: 30 October 2018 / Approved: 2 November 2018 / Online: 2 November 2018 (02:30:57 CET)

How to cite: Seshavatharam, U.V.S.; Lakshminarayana, S. On the Eternal Role of Planck Scale in Machian Cosmology. Preprints 2018, 2018100407 (doi: 10.20944/preprints201810.0407.v1).

## Abstract

Considering the Planck mass as a baby universe and the evolving universe as a growing Planck ball, a hypothetical spherical model of cosmology can be developed. In addition, by considering the famous relation $GM = c^2 R$, a Machian model of quantum cosmology can also be developed. Proceeding further, in all directions, cosmic expansion velocity can be referred to 'from and about' the baby universe. This hypothetical scenario can be made versatile by adding a heuristic idea: apart from normal expansion, cosmic temperature seems to be redshifted by a factor $\left((Z_T)_t + 1\right) \cong \sqrt{1 + \ln\left(H_{pl}/H_t\right)}$, where $(H_{pl}, H_t)$ represent the Planck-scale and time-dependent Hubble parameters respectively.

## Subject Areas

Planck scale; Mach's relation; quantum cosmology; critical density; ordinary matter; dark matter; thermal redshift; expansion velocity; Hubble's law
https://socratic.org/questions/how-do-you-solve-x-y-z-1-and-x-5y-15z-13-and-3x-2y-7z-0-using-matrices
# How do you solve -x + y + z = -1 and -x + 5y - 15z = -13 and 3x - 2y - 7z = 0 using matrices?

Feb 22, 2016

$\Delta = \left[\begin{matrix}- 1 & 1 & 1 \\ - 1 & 5 & - 15 \\ 3 & - 2 & - 7\end{matrix}\right] = 35 - 45 + 2 - \left(15 - 30 + 7\right) = - 8 + 8 = 0$

$\Delta x = \left[\begin{matrix}- 1 & 1 & 1 \\ - 13 & 5 & - 15 \\ 0 & - 2 & - 7\end{matrix}\right] = 35 + 0 + 26 - \left(0 - 30 + 91\right) = 61 - 61 = 0$

$\Delta y = \left[\begin{matrix}- 1 & - 1 & 1 \\ - 1 & - 13 & - 15 \\ 3 & 0 & - 7\end{matrix}\right] = - 91 + 45 + 0 - \left(- 39 - 7 + 0\right) = - 46 + 46 = 0$

$\Delta z = \left[\begin{matrix}- 1 & 1 & - 1 \\ - 1 & 5 & - 13 \\ 3 & - 2 & 0\end{matrix}\right] = 0 - 39 - 2 - \left(- 15 - 26 + 0\right) = - 41 + 41 = 0$

Since $\Delta = \Delta x = \Delta y = \Delta z = 0$, Cramer's rule is inconclusive on its own. Eliminating directly (subtracting the first equation from the second, and adding three times the first equation to the third) gives the same relation $y - 4z = -3$ both times, so the three equations are dependent and the system has infinitely many solutions: $x = 5t - 2$, $y = 4t - 3$, $z = t$ for any parameter $t$.
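The vanishing determinants can be cross-checked with a few lines of Python. This is a sketch (not part of the original answer) using the rule of Sarrus for the 3×3 determinants:

```python
def det3(m):
    """Determinant of a 3x3 matrix by the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - (c*e*g + a*f*h + b*d*i)

# Coefficient matrix and right-hand side of the system above.
A = [[-1, 1, 1],
     [-1, 5, -15],
     [3, -2, -7]]
rhs = [-1, -13, 0]

def cramer_dets(A, rhs):
    """det(A) followed by the three column-replaced determinants."""
    dets = [det3(A)]
    for col in range(3):
        Ai = [row[:] for row in A]      # copy, then swap in the constants
        for r in range(3):
            Ai[r][col] = rhs[r]
        dets.append(det3(Ai))
    return dets

print(cramer_dets(A, rhs))  # [0, 0, 0, 0]

# All four determinants vanish, so Cramer's rule is inconclusive;
# substituting the one-parameter family (5t - 2, 4t - 3, t) confirms
# the system is dependent rather than inconsistent:
t = 7
x, y, z = 5*t - 2, 4*t - 3, t
print(-x + y + z, -x + 5*y - 15*z, 3*x - 2*y - 7*z)  # -1 -13 0
```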
https://mathematica.stackexchange.com/questions/65158/how-to-show-output-of-a-matrix-in-m1-1-a11-m1-2-a12-in-this-way-i-ha
# How to show output of a matrix as m(1,1)=a11, m(1,2)=a12, … for a 50×50 matrix

How can I display a matrix one element per line, in the form m(1,1)=a11, m(1,2)=a12, and so on, with each element in FortranForm? I have a 50×50 matrix, so I want to do this efficiently. I have a specific reason for doing this: I will copy and paste the data into Fortran code, so I can use the data there in less time.

• You mean like this: Array[a, {5, 5}]? Question not clear. What is the context? ps. I just saw your other question here mathematica.stackexchange.com/questions/65148/… is this different? – Nasser Nov 7 '14 at 19:39
• Hi... No, the other question is the same. I just want my matrix output to be shown like this: matrix(1,1)=a11 (element in FortranForm), matrix(1,2)=a12 (element in FortranForm)... continuing in this way. – Amandeep Nov 7 '14 at 20:00

There is no subtlety to this, but it works:

Table[i j, {i, 5}, {j, 5}]
MapIndexed[Print["a(", #2[[1]], ",", #2[[2]], ") = ", #1] &, %, {2}];

• What is %102? – Nasser Nov 7 '14 at 20:51
• @Nasser leftovers from my own evaluation. Removed. :P – rcollyer Nov 7 '14 at 20:54
• Ok, now it makes sense :), I thought you were using some new symbol/command in V10 but could not find it. – Nasser Nov 7 '14 at 21:01

Building on @rcollyer's answer: the main improvement here is we print the list of elements in a way that the whole list can be copied at once (and we also handle arbitrary dimensions automatically).

SetAttributes[fortranprint, HoldFirst]
fortranprint[array_Symbol] :=
  fortranprint[array, SymbolName[Unevaluated[array]]];
fortranprint[array_, name_] :=
  Print@StringJoin@Riffle[
    Flatten@MapIndexed[
      StringJoin@{" ", (* leading spaces for fixed format fortran *)
        name, "(", Riffle[(ToString /@ #2), ","], ")=",
        ToString[FortranForm[#]]} &,
      array, {-1}],
    "\n"];

b = {{1.432, 2. 10^30}, {0, 1}};
fortranprint[b];

 b(1,1)=1.432
 b(1,2)=2.e30
 b(2,1)=0
 b(2,2)=1

fortranprint[{{{1, 2}, {3, 4}}, {{5, 6}}}, "g"];

 g(1,1,1)=1
 g(1,1,2)=2
 g(1,2,1)=3
 g(1,2,2)=4
 g(2,1,1)=5
 g(2,1,2)=6

• Yeah, mine was quick and dirty. I should have done this. +1 – rcollyer Nov 7 '14 at 21:45
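If the assignment list is to be generated outside Mathematica, the same idea translates to a few lines of Python. This is a rough sketch (`fortran_assignments` is a hypothetical helper name, and the float formatting, e.g. `2e30`, differs slightly from FortranForm's `2.e30`):

```python
def fortran_assignments(array, name, _index=()):
    """Yield ' name(i,j,...)=value' lines (1-based, Fortran-style),
    recursing over arbitrarily nested lists."""
    if isinstance(array, (list, tuple)):
        for i, sub in enumerate(array, start=1):
            yield from fortran_assignments(sub, name, _index + (i,))
    else:
        # Fortran accepts 'e' exponents; drop Python's '+' (2e+30 -> 2e30).
        value = repr(array).replace("e+", "e")
        idx = ",".join(str(i) for i in _index)
        # Leading space for fixed-format Fortran, as in the answer above.
        yield " {}({})={}".format(name, idx, value)

b = [[1.432, 2e30], [0, 1]]
print("\n".join(fortran_assignments(b, "b")))
```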
http://theansweris27.com/blog/
## Analysis of a lap around Brands Hatch Indy (Pt. II)

This entry owes very much to a challenge from the Spanish motorsport blog DeltaGap. There, Daniel provides us with a set of real data in order to determine in which sectors the delta time is greater. I took up the challenge and worked out a solution which will help to understand where in a circuit a driver can improve. Is it in the slow corners or in the fast straights?

## The dataset

Brands Hatch Indy [1].

To obtain the delta time, here I will use the same laps I compared in Analysis of a lap around Brands Hatch Indy (Pt. I). But instead of having time on the abscissa, I will have distance. The fastest lap of the session was lap #39 (0:42.799), while lap #14 (0:42.864) was the second-fastest one.

Posted in Data Analysis, Python

## Ferrari 126 C

Ferrari 126 CK

The name comes from its longitudinal 120° V6 rear-mounted engine, while the C stands for Competizione. It is the first Formula 1 car by Ferrari to mount a turbo engine, after Renault introduced the concept back in 1977.

A normally aspirated engine is an expression of the total engineering efforts of all those who are responsible for the engine design, whereas with turbos you are in the hands of the turbo suppliers —Enzo Ferrari [2].

The supplier for the turbos was KKK, in a 1.5 litre twin-turbo layout producing about 540 hp in the first variant of the car. Ferrari also tried out another type of compressor, the Comprex, but it turned out to be difficult to perfect for racing engine purposes.

Posted in Assorted

## BMW E30 318i: engine management optimization

I've already talked about our race car, affectionately known as the "penco" —Spanish for nag—, in previous posts. The term "penco" is not fortuitous, as with only 113 hp the car was at the bottom of the entry list for the 24 hours of Braga in terms of horsepower.
Nonetheless, we've seen that, in endurance racing, horsepower is not everything, as was demonstrated by the 115 hp VW Golf III that took the victory against teams with up to 200 hp cars.

Our race car, like most vehicles from the early 1980s onwards, incorporates an engine control unit that controls several actuators to ensure optimal engine performance. Therefore, by tuning the so-called engine maps it is possible to achieve superior performance.

Posted in Electronics, FANSI Motorsport

## Lap times for the 2014 F1 Russian Grand Prix

The first time in the history of Formula One that a race is held in Russia —check this map—, and also the first Constructors' Championship for Mercedes AMG Petronas Formula One Team. A well deserved championship given the merciless dominance of their cars. Such was that dominance that Rosberg was able to finish in P2 running on medium tyres the whole race, after a first-lap lock-up ruined the performance of his tyres and forced him to pit for a new set of medium tyres.

A similar strategy was followed by Felipe Massa —started in P18— switching to a soft set of tyres after the first lap, hoping for a Safety Car to be released. The Brazilian was closely following the pace of Nico Rosberg but got stuck behind Sergio Perez for several laps. This forced Massa to pit once again, ultimately finishing in P11. Similar initial strategy, very different outcome.

On the other hand, Valtteri Bottas claimed another podium finish this season after a fantastic race. Button and Magnussen finished in P4 and P5, making this one of the best race finishes for the team this season. Fernando Alonso could have finished in between the two drivers, and even have fought for P4, were it not for a sloppy pit stop in lap 25. Ricciardo and Vettel also finished one after the other, further strengthening Red Bull's 2nd position in the Constructors' Championship.
Raikkonen managed to grab two points, crucial in keeping alive Ferrari's duel against Williams for P3 in the Championship. Also worth mentioning is Sergio Perez's fifth consecutive finish in the points, tied on points with Kimi. Following, I provide some plots so you may draw your own conclusions.

## Average pace

This plot shows the difference to the average pace of the race winner. That is, the difference to the average lap time, including pit stops. The steeper the curve, the faster the lap; and as the curves are generated from cumulative sums of lap times, a negative slope implies a lap time which is quicker than the average.

Posted in F1

## Lap times for the 2014 F1 Japanese Grand Prix

Typhoon Phanfone threatened to ruin this 26th edition of the F1 Japanese Grand Prix at the Suzuka Circuit. Fortunately, there was enough time for a very difficult race to take place, although cars had to start behind the Safety Car. Mercedes's Lewis Hamilton and Nico Rosberg recorded the eighth one-two finish of the season —McLaren set a record of 10 in 1988—. It is also the eighth race win of 2014 for Lewis Hamilton, who extended the gap to Rosberg to ten points. Third podium for Vettel after having confirmed this weekend his move to Ferrari next season. On the other hand, team mate Daniel Ricciardo finished fourth after overtaking Button with a pair of decisive moves.

Fourteen points were distributed between Valtteri Bottas and Felipe Massa, who finished sixth and seventh respectively, consolidating Williams in third place. Nico Hulkenberg took home four points, while Vergne, who finished between the two Sahara Force Indias, grabbed two points, leaving the remaining one to Sergio Perez.

But this was a wet, difficult race, and Adrian Sutil lost control of his car, hitting the barrier in lap 43. Nothing serious happened; Sutil was able to leave the Sauber on his own.
But while the crane was lifting his car, Jules Bianchi spun, hitting the rescue vehicle from the back; an unfortunate incident which required immediate medical attention. Following, I provide the plots so you may draw your own conclusions.

## Average pace

This plot shows the difference to the average pace of the race winner. That is, the difference to the average lap time, including pit stops. The steeper the curve, the faster the lap; and as the curves are generated from cumulative sums of lap times, a negative slope implies a lap time which is quicker than the average.

Posted in F1

## Understanding the title block macro in CATIA V5

CATIA gives its users the possibility to use a VBScript macro to generate title blocks automatically adjusted to any drawing format. A few macros are provided by default. Users can customize frames and title blocks by either modifying one of the default macros or by creating their own macros.

The default macros are stored in the install_root/intel_a/VBScript/FrameTitleBlock directory and have a .CATScript extension. We can specify another location in the Tools > Options > Mechanical Design > Drafting > Layout tab.

A macro is comprised of Sub procedures and functions. When the Insert Frame and Title Block dialog box is displayed, it will show a set of actions predefined in the macro. Those actions are Sub procedures prefixed with CATDrw_.
For instance, the Creation action looks like this:

Sub CATDrw_Creation( targetSheet as CATIABase )
  '-------------------------------------------------------------------------------
  'How to create the FTB
  '-------------------------------------------------------------------------------
  If Not CATInit(targetSheet) Then Exit Sub
  If CATCheckRef(1) Then Exit Sub  'To check whether a FTB exists already in the sheet
  CATCreateReference               'To place on the drawing a reference point
  CATFrame                         'To draw the frame
  CATCreateTitleBlockFrame         'To draw the geometry
  CATTitleBlockText                'To fill in the title block
  CATColorGeometry                 'To change the geometry color
  CATExit targetSheet              'To save the sketch edition
End Sub

Posted in CATIA, VBScript

## Lap times for the 2014 F1 Singapore Grand Prix

Problems in the wiring loom of Rosberg's car just before the start of this Grand Prix predicted a troublesome race for the German driver, who had to retire after several laps. This handed over a lot of points to his team mate Lewis Hamilton —who enjoyed a clean weekend— in their race for the Drivers' Championship.

This was the best finish of the year for Sebastian Vettel, finishing ahead of team-mate Daniel Ricciardo. The Safety Car played an important strategic role. Until then, Fernando Alonso was holding 2nd place, and he would have made the podium had he not had to pit for a set of tyres he could run to the end of the race with. Further back, Kimi Raikkonen got stuck behind Felipe Massa, who used an aggressive three-stop strategy to jump ahead of the Finn. On the other hand, Valtteri Bottas had a good chance to finish in the points, but the grip on his tyres faded away in the closing stages of the race, dropping him from P6 to P11. Jean-Eric Vergne equalled the best result of his career so far with sixth place, but was given five-second penalties on two occasions. Following, I provide the plots so you may draw your own conclusions.
## Average pace

This plot shows the difference to the average pace of the race winner. That is, the difference to the average lap time, including pit stops. The steeper the curve, the faster the lap; and as the curves are generated from cumulative sums of lap times, a negative slope implies a lap time which is quicker than the average.

Posted in F1

## Lap times for the 2014 F1 Italian Grand Prix

Although Hamilton made a poor start due to problems with his ERS, he managed to snatch the win from Rosberg in the last stages of the race. The Briton decided to close on his team mate, who cracked under the pressure, making a mistake. This was a clean race between the two without any controversial incident.

The third step on the podium was for Felipe Massa. It's his first podium this season, and also his first podium with Williams, finishing just ahead of team mate Valtteri Bottas. A great finish for Williams, who just surpassed Ferrari in the Team Championship.

A near-disastrous weekend for Ferrari at their home Grand Prix. An engine problem deprived Alonso of finishing this race, and Raikkonen brought only two points home —after Magnussen was given a 5-second penalty. Not quite so bad for Red Bull, placing both drivers in P5 and P6. Vettel was the first of the one-stopping drivers to make his pit stop, giving Ricciardo, on fresher tyres, a better chance to overtake his team mate.

Both McLaren drivers made it to the points, even though Magnussen was given a 5-second penalty because he "did not leave enough room for car 77 [Bottas] in turn one and forced him off the track." This sanction was worth an extra point for Perez, Button and Raikkonen. Following, I provide the plots so you may draw your own conclusions.

## Average pace

This plot shows the difference to the average pace of the race winner. That is, the difference to the average lap time, including pit stops.
The steeper the curve, the faster the lap; and as the curves are generated from cumulative sums of lap times, a negative slope implies a lap time which is quicker than the average.

Posted in F1

## Williams FW07

The Williams FW07 was a ground effect Formula One racing car designed by Patrick Head for the 1979 F1 season. It was not the first ground effect Formula One car, a concept that Colin Chapman got working brilliantly in '78 and '79 at Lotus, but it was difficult and expensive to get right, as well as being extremely alienating for many drivers, who had no choice but to scream through corners at insanely high speeds, hoping passionately that nothing would upset the workings of the airflow under the car. According to Niki Lauda [3],

Cornering became a rape practised on the driver. Something really terrible, unnatural and unpredictable.

Posted in Assorted

## Lap times for the 2014 F1 Belgian Grand Prix

After a rainy qualifying, today we could enjoy a sunny Sunday race. But that was not to mean a parade for Mercedes, as the two drivers tangled on the second lap of the race. This damaged Rosberg's front wing while leaving Hamilton with a puncture that ruined his race. This led to a great opportunity for Ricciardo to claim his third win of the year. Rosberg and Bottas completed the podium.

Further back, Alonso saw his race compromised by a 5-second penalty because mechanics were still working on his Ferrari when the grid formation lap began. Additional performance issues in Alonso's car got Raikkonen to finish ahead of his team mate for the first time this season. Magnussen was given a 20-second penalty after the race for not giving enough space to car 14 and forcing Alonso out of the track. He dropped from sixth to twelfth, taking no points this race. But it was too late for Alonso, as he had then struggled to stand against Vettel and Button.
Andre Lotterer's F1 debut ended when his Caterham stopped by the side of the track on his first lap. The same fate befell Lotus's Pastor Maldonado.

## Average pace

This plot shows the difference to the average pace of the race winner. That is, the difference to the average lap time, including pit stops. The steeper the curve, the faster the lap; and as the curves are generated from cumulative sums of lap times, a negative slope implies a lap time which is quicker than the average.

Posted in F1
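The cumulative "difference to the average pace" series described in the race reports above can be computed from a plain list of lap times. A minimal sketch (the lap times below are made up for illustration, and this is not the blog's actual code):

```python
def pace_delta(lap_times, reference=None):
    """Cumulative difference to an average lap time.

    `lap_times` is one driver's lap times in seconds, pit stops included;
    `reference` defaults to the driver's own average lap time.  A negative
    slope in the returned series marks laps quicker than the average.
    """
    avg = sum(lap_times) / len(lap_times) if reference is None else reference
    series, total = [], 0.0
    for lap in lap_times:
        total += lap - avg          # cumulative sum of (lap time - average)
        series.append(total)
    return series

# Toy stint: steady 90 s laps with one 110 s in-lap for a pit stop.
times = [90.0] * 10 + [110.0] + [90.0] * 10
series = pace_delta(times)
print(round(series[-1], 6))  # ~0 against the driver's own average
```

Plotting `series` against the lap number reproduces the shape of the curves in the posts: a gentle negative slope while lapping under the average, with a sharp upward step at the pit stop.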
https://dougo.info/pension-contribution-calculator-list-of-residual-income-opportunities.html
This equation implies two things. First, buying one more unit of good x implies buying $P_x/P_y$ fewer units of good y. So $P_x/P_y$ is the relative price of a unit of x in terms of the number of units given up of y. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price: the law of demand. The generalization to more than two goods consists of modelling y as a composite good.

Track Your Wealth For Free: If you do nothing else, at the very least, sign up for Personal Capital's free financial tools so you can track your net worth, analyze your investment portfolios for excessive fees, and run your financials through their fantastic Retirement Planning Calculator. Those who are on top of their finances build much greater wealth longer term than those who don't. I've used Personal Capital since 2012. It's the best free financial app out there to manage your money.

The key thing to note in those various streams is how few of them rely on my active participation on a daily basis and how they are fueled from savings. My active participation is in the blogs and $5 Meal Plan. Everything is passive, outside of routine maintenance like updating my net worth record, and none of them would be possible if I didn't have the savings to invest.

In Multiple Streams of Income, bestselling author Robert Allen presents ten revolutionary new methods for generating over $100,000 a year, on a part-time basis, working from your home, using little or none of your own money. For this book, Allen researched hundreds of income-producing opportunities and narrowed them down to ten surefire moneymakers anyone can profit from. This revised edition includes a new chapter on a cutting-edge investing technique.
I am a very hard worker and am willing to do whatever it takes to make a substantial income, but my question for you is: how could I do this at college? How could I generate enough income from multiple sources to keep me afloat for years to come? I am in desperate need of help. Thank you very much; I would be in great appreciation if I could get a response.

Personal Income Statements

Can personal income statement planning improve your ability to connect with the right type of investor clients? Extensive personal-income statement planning approaches do not seem to be frequently used in the total returns-driven asset allocation and advisory process. Looking at the difference between the retiree's and the employee's personal income statement, we can understand why this is the case, since more sources of income, as summarized in Chart 3, become more relevant for more investors as they move from employment into retirement.

While some people tend to use a savings account at the same bank where they have their checking account, make sure it's a high-interest one, not just a convenient one. "For short-term savings that you have parked in a savings account for easy access, you can often make more money just by researching whether you're getting the best interest rate," Goudreau says. "While many traditional banks offer as little as 0.1% interest on savings, online banks tend to offer higher interest rates. By switching to an account that offers 1% interest or more, you would be making 10 times as much just by moving the money."

Are you sitting at home and wondering how you can bring a little bit more money into your home? Well, you can keep a steady stream of reliable income flowing in with multiple streams of income. It's not hard to do, and just one idea can spark other income potential and generate multiple streams of income. Diversifying your income is a great way to be able to set a little bit aside for a rainy day.

Lending Club (U.S. Residents Only) – I talk about Lending Club in every one of my income reports, because I still believe it's the best source of passive income, even though it's not my largest source. You can get started for as little as $25, and over the past 2+ years, my interest rate has been 7% or higher, which I think is very good given the relatively low risk involved. This is even more true given the recent market downturn. You can read about how I select my investments here.

Retirees are paying a high price as the world stimulates its way out of the GFC (Great Recession). After a 30-year bull market down to the lowest interest rates the world has ever seen, bonds have become highly priced and now don't generate enough to meet income needs. Just 5 years ago, the average income from $100,000 invested in a 10-year Australian Government Bond was $5,600 p.a. – now it's less than half, at $2,600 p.a.
http://www.crazyontap.com/topic.php?TopicId=16368&Posts=53
### Re: The speed of gravity

Gravity does not have a speed. Gravity has an acceleration, a force. Gravity does not have a velocity. Please fix the header, it makes us sound more like morons than we normally do.

SaveTheHubble March 5th, 2007 11:37am

Allan, when an object moves, what is the time lag between that movement and another object being affected by the new location? If the sun were to suddenly vanish, how long before the Earth was no longer in a nearly circular trajectory?

Aaron F Stanton March 5th, 2007 11:39am

The discussion was about the speed of gravity. How quickly does gravity effect change? Not how quickly does it move two objects together.

strawberry snowflake March 5th, 2007 11:42am

Velocity is acceleration over some unit of time. Gravity does not HAVE a unit of time. I'm not saying gravity won't accelerate things over time, nor that accelerated things don't acquire a velocity. I AM saying that gravity is a Force, not a result of Force applied over Time. "Speed" is a quantity resulting from Force applied over Time. You might just as well claim that gasoline has a 'speed' associated with it, since gasoline is used to accelerate automobiles.

SaveTheHubble March 5th, 2007 11:58am

Oh, wait, you're talking about the "speed of the effect of gravity"? The delay in space-time between when a mass comes into existence, and when that mass begins affecting neighboring masses? Ah. English Verb Ambiguity. Sorry about that.

SaveTheHubble March 5th, 2007 12:00pm

I suppose if you wanted to be really precise, it could be "the rate of propagation of change in local curvature of spacetime".

Aaron F Stanton March 5th, 2007 12:01pm

Isn't that the speed of light?
JoC March 5th, 2007 12:04pm

i know fuck-all, but when you are talking conceptually about "time difference" between events in a framework where space and time are somehow functions of each other - does the concept of "the time taken for something to go from x to y" actually have a meaning, and if so what?
$-- March 5th, 2007 12:05pm

Yes, you can talk about a spacelike separation of events, as well as a timelike separation of events.
Aaron F Stanton March 5th, 2007 12:07pm

Apparently, all this comes from the 'graviton' thread below. And yes, that thread arrived at "the speed of light" as the answer. But then the question is, can't the propagation of the gravity effect travel slower than the speed of light? And how do we know?

I'm a little curious at this point: if E=mc^2, does Energy curve space the way mass does? I would suspect not. If it does not, then the amount of mass transformed into energy in a supernova would 'remove' mass from the curved space-time, and that mass removal should generate 'gravity waves', which would propagate.

Also, the reason Light travels slower in some materials I thought was because of interactions between the Light and the material. Gravitic interactions seem to be much less than Light interactions, so I would assume its effects would travel at the speed of light in a vacuum. Nothing to slow it down, you see.
SaveTheHubble March 5th, 2007 12:12pm

> I AM saying that gravity is a Force

How can it be force when it is simply a warping of space-time?
son of parnas March 5th, 2007 12:12pm

Not real sure why I thought that. I suppose because it is the theoretical limit. But it is a limit on the maximal velocity of matter, right? I did have an initial inclination to go with instantaneous.

Isn't gravity sort of the monkey wrench in a unifying theory? Only to say, we don't really seem to understand/know that much about what gravity actually is.
JoC March 5th, 2007 12:13pm

Oh, and the "simultaneous event times requiring some independent observer, which doesn't really exist" idea bothered Einstein too, I understand.
SaveTheHubble March 5th, 2007 12:13pm

right, but you are talking about the distance between two points in space-time, not in space, right? for instance, that could be the distance between two points that are identical in space, but distant from each other in time. The time interval between two points in space-time must be the difference in their positions along the time axis, no? So if we say that something happens "instantaneously" (like the effect of a gravitational field) that means that it happens at the same point on the time axis. so doesn't that mean that it's still instantaneous in space time?
$-- March 5th, 2007 12:14pm

If we had some wormholes we could do some experiments. Like, hey, wow, this mass just appeared here in Lucerne. When will the moon start to fall faster toward the earth? Immediately or will it take a few seconds.
strawberry snowflake March 5th, 2007 12:14pm

> Immediately or will it take a few seconds.

When an apple falls to the earth doesn't the earth instantaneously fall a little towards the apple? That's what Newton said anyway.
son of parnas March 5th, 2007 12:19pm

So these huge colliders and whatnot that they are building will do some interesting things creating black holes and such. They don't really know what all will happen in these experiments. I wouldn't go so far as to advocate against whatever floats their boat. But do you think we'd notice if we got enveloped by a black hole? Because I'm thinking that's when my summers ended.
JoC March 5th, 2007 12:19pm

Energy does curve space the same way mass does, but it is more commonly observed the other way round--for example light is affected by gravity and follows the curvature of space-time. You don't need to create or destroy mass to make gravity waves. Unsymmetrical movement of mass, like rotating binary stars, can do it by "stirring" space.
bon vivant March 5th, 2007 12:21pm

Well, SoP, gravity IS.  It's what makes the apple fall, instead of hovering in the air.

So, given that, several models have been developed to try to explain that.  One of the simpler models is the Force model, that there's some unexplained attraction between matter, depending on that matter's mass, which we call Gravity.

A much more sophisticated model explains that attraction as a curving of space-time, that the curved space-time IS how Gravity works.  And that curved space-time results in what LOOKS like a 'local force' that tends to draw two masses together.

And as Einstein argued, whether you're standing in an elevator on the surface of the earth, feeling the earth's gravity on you, or you're in that same elevator in the middle of "empty" space, being accelerated by a rocket at 1 Gravity acceleration, there is no perceptual difference you can use to tell the difference between the two situations.

So, "Force" or "Curved Space-Time" are just two ways to explain how and why Gravity works.  "Gravitons" are an attempt to have a mathematical "carrier of gravitic property".
SaveTheHubble March 5th, 2007 12:22pm

The apple and the earth are already there. The question we're asking is ... if new matter just appears, how long will it take before it starts messing with the gravitational field? How long before the objects around it 'feel' its presence? Why should this effect take the speed of light to propagate? Why not sooner? Why not slower?
strawberry snowflake March 5th, 2007 12:22pm

> So, "Force"

Force is wrong IMHO because it uses an incorrect metaphor. When I push something that is force. Gravity doesn't appear to be the same.
son of parnas March 5th, 2007 12:26pm

Trying to resolve that issue is what forced Newton to become a mystic.
strawberry snowflake March 5th, 2007 12:27pm

It should propagate at the speed of light because that's the fastest we've seen anything propagate in this universe. The only time light does not propagate that fast ALL the time is because it is interacting with things in its path.  Implying light DOES propagate that fast ALL the time, it's just when there's stuff in its path it takes a longer path through the stuff. Since we've seen NOTHING to 'slow gravity down' (anti-gravity would be SO useful for that perpetual motion machine) it should propagate as fast as any phenomenon in our space-time, which would be light-speed.

If the effect of gravity COULD propagate faster than that, it would be nice.  We could use that for instantaneous communications.  I believe they're currently trying to make gravity wave detectors to time-correlate the arrival at the earth of gravity waves from a nova, with the time the light of the nova arrives.
SaveTheHubble March 5th, 2007 12:30pm

> It should propagate at the speed of light because that's the fastest we've seen anything propagate in this universe.

We see quantum entanglement that is faster than light.
son of parnas March 5th, 2007 12:32pm

See the Einstein-Podolsky-Rosen paradox ... some things (information about a quantum state) can propagate faster than the speed of light.
strawberry snowflake March 5th, 2007 12:33pm

I think Allan's use of force is correct here. It's better to think of the apple and the earth as attracted towards each other by the force of gravity than to think that they are falling towards each other. The force follows an inverse square law.
Senthilnathan N.S. March 5th, 2007 12:35pm

why can't information about change in mass also travel instantaneously? "Yo, baby got a boob job, them bazookas got massive. You better up the attraction."
strawberry snowflake March 5th, 2007 12:37pm

Newton's first law of motion --

I. Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it.

If Gravity were not a "Force" as called out in this law, then the apple would hang in the air until you reached out and touched it, applying what you DO recognize as a "Force". Since, instead, the apple behaves as if you had reached out and pushed it toward the ground, therefore the apple is behaving AS IF Gravity was an equivalent "Force" to the one you would apply with your hand.

Now, that "AS IF" could be one of those Engineering Simplifications, and Gravity could be ACTUALLY behaving quite differently from being pushed with your hand.  That's quite likely, in fact.  The EFFECT of the whole situation can be summarized by that "Force" equation.

Oh, and law II:

II. The relationship between an object's mass m, its acceleration a, and the applied force F is F = ma. Acceleration and force are vectors. In this law the direction of the force vector is the same as the direction of the acceleration vector.

So, "by definition", if you have a mass, and it undergoes acceleration, there IS a "force" being applied.

And, just for completeness, Newton's third law:

III. For every action there is an equal and opposite reaction.
SaveTheHubble March 5th, 2007 12:38pm

> Newton's first law of motion

Newton's laws are descriptive, they don't mean the world actually works that way.
son of parnas March 5th, 2007 12:40pm

You all realize the question is about a change in force not the force itself? That's like so 10th grade. dF/dt, baby, dF/dt.
strawberry snowflake March 5th, 2007 12:41pm

...they gibbered...
zestyZuchini March 5th, 2007 12:49pm

Isn't the 'default' for everything to be drawn together into a singularity? The motion of objects towards one another that we see as gravity is present at all times. It takes force to defy the natural state.
The expansion of the universe is the only thing that keeps that in check, preventing an implosion of the entirety back into a singularity of all mass.
JoC March 5th, 2007 12:57pm

I think some of you are making this up.  it _sounds_ impressive, but I'm getting suspicious about the actual semantic content.
zestyZuchini March 5th, 2007 1:00pm

Yes, SoP.  That's why I'm an Engineer, and not a Scientist, or a Magus. Because I'm WAY more interested in doing useful things with our understanding of the universe, than I am in getting written down the exact mechanism of how these things work.

People have been using "Gravity as force" since Newton to make useful predictions, without complete understanding (even now) of how Gravity works.

Don't get me wrong, I have nothing against getting written down the exact mechanism of how stuff works.  It's just that I'd MUCH rather be sending missions to the Moon and Mars and collecting data, (and detecting asteroids that might end civilization) while others spend time in labs working out the minutiae. Give me a sufficient level of minutiae to make useful predictions, and I'll build working solutions.
SaveTheHubble March 5th, 2007 1:04pm

> Give me a sufficient level of minutiae to make
> useful predictions, and I'll build working solutions.

That's why I have a hard time sometimes. It's difficult for me to accept useful abstractions as real enough to make a working model in my mind, so it just falls apart and I don't understand anything.
son of parnas March 5th, 2007 1:08pm

http://www.cbc.ca/health/story/2000/07/20/speedlight000720.html

Kinda relevant I guess. Interesting at least.
Mikael Bergkvist March 5th, 2007 1:22pm

Good instincts, zucchini. My undergraduate degree is in physics, and at first I thought they must have a deeper understanding of it than me, then I decided it was bs.
no label March 5th, 2007 1:26pm

"The scientific statement "nothing with mass can travel faster than the speed of light" is an entirely different belief, one that has yet to be proven wrong."

so does gravity have mass?  cause if not they appear to have proven it is possible for it to travel faster than the speed of light.
zestyZuchini March 5th, 2007 1:31pm

> Isn't that the speed of light?

http://math.ucr.edu/home/baez/physics/Relativity/GR/grav_speed.html
Senthilnathan N.S. March 5th, 2007 1:36pm

>> "nothing with mass can travel faster than the speed of light"

Where is this statement from? I only know that "nothing with mass can *accelerate* beyond the speed of light".
के. जे. March 5th, 2007 1:56pm

Whoa, Senthi, GREAT link, man!  Thanks!
SaveTheHubble March 5th, 2007 2:34pm

This is the problem with degrees, it's more fun to argue the particulars than to answer the man's simple question: http://en.wikipedia.org/wiki/Terminal_velocity

For example, the terminal velocity of a skydiver in a normal free-fall position with a closed parachute is about 195 km/h (120 mph or 54 m/s). Since I basically knew this, it only took me a minute to find it, and I didn't have to read any of those long replies.
LinuxOrBust March 5th, 2007 2:35pm

We're talking about jerk not velocity. That's two derivatives behind, bro.
strawberry snowflake March 5th, 2007 2:38pm

Well, yes, LorB, but since you didn't read the replies, you never realized that the original issue WAS in fact "How fast does the effect of gravity propagate", NOT "what is terminal velocity in the earth's atmosphere".

In other words, if the Earth's Sun suddenly vanished, would that effect on the orbit of the Earth propagate instantaneously?  Or would that effect propagate at the speed of light?

Current conclusion, the Gravity effect propagates at the speed of light (certain binary stars are 'slowing down', which wouldn't happen if it was infinite), but we're working to verify that.
SaveTheHubble March 5th, 2007 2:44pm

Strawberry, sing along time.

Momentum equals mass times velocity!
Force equals mass times acceleration!
Yank equals mass times jerk!
Tug equals mass times snap!
Snatch equals mass times crackle!
Shake equals mass times pop!
के. जे. March 5th, 2007 2:47pm

I don't think gravity has a speed, but objects reacting to it do, so a wave would mean a chain reaction of stellar objects shifting position, seeming like a wave moving at the speed of light. This is presumably so because gravitation appears to have accumulative properties, meaning that a group, like the solar system, can be seen as a single source of gravity. So, gravity would include objects outwardly, affecting them in an accumulative order, moving forward at the speed of light in doing so, even though gravity in itself might have no speed and could theoretically affect distant objects instantly.
Mikael Bergkvist March 5th, 2007 3:09pm

Einstein's equations form the fundamental law of general relativity. The curvature of spacetime can be expressed mathematically using the metric tensor — denoted g_{\mu\nu} — and with respect to a covariant derivative, \nabla, in the form of the Einstein tensor — G_{\mu\nu}. This curvature is related to the stress-energy tensor — T_{\mu\nu} — by the key equation

G_{\mu\nu} = \frac{8\pi G_N}{c^4} T_{\mu\nu} ,

where G_N is Newton's gravitational constant, and c is the speed of light. We assume geometrized units, so G_N = 1 = c.

With some simple assumptions, Einstein's equations can be rewritten to show explicitly that they are just wave equations. To begin with, we adopt some coordinate system, like (t, r, \theta, \varphi). We define the "flat-space metric" \eta_{\mu\nu} to be the quantity which — in this coordinate system — has the components we would expect for the flat-space metric. For example, in these spherical coordinates, we have

\eta_{\mu\nu} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & r^2 & 0 \\ 0 & 0 & 0 & r^2 \sin^2\theta \end{bmatrix} .

This mathematical structure holds information regarding how distances are measured in the space we are dealing with. Because the propagation of gravitational waves through space and time changes distances, we will need to use this to find the solution to the wave equation.

Now, we can also think of the physical metric g_{\mu\nu} as a matrix, and find its determinant, \det g. Finally, we define a quantity

\bar{h}^{\alpha\beta} \equiv \eta^{\alpha\beta} - \sqrt{|\det g|}\, g^{\alpha\beta} .

This is the crucial field, which will represent the radiation. It is possible (at least in an asymptotically flat spacetime) to choose the coordinates in such a way that this quantity satisfies the "de Donder" gauge conditions (conditions on the coordinates):

\nabla_\beta \bar{h}^{\alpha\beta} = 0 ,

where \nabla represents the flat-space derivative operator. These equations say that the divergence of the field is zero. The full, nonlinear Einstein equations can now be written[1] as

\Box \bar{h}^{\alpha\beta} = -16\pi \tau^{\alpha\beta} ,

where \Box = -\partial_t^2 + \Delta represents the flat-space d'Alembertian operator, and \tau^{\alpha\beta} represents the stress-energy tensor plus quadratic terms involving \bar{h}^{\alpha\beta}. This is just a wave equation for the field with a source, despite the fact that the source involves terms quadratic in the field itself. That is, it can be shown that solutions to this equation are waves traveling with velocity 1 in these coordinates.
Ward March 5th, 2007 3:22pm

See?  Simple... (cut and pasted from Wikipedia, too much trouble to type in the same stuff from Landau and Lifshitz)
Ward March 5th, 2007 3:23pm

If we allow ourselves a little "flight of fantasy" and free speculation, then it becomes a problem that gravity is supposed to travel at the speed of light, since it holds together great masses across the universe that are *very big*, sometimes spanning several hundred or more light-years.
With this *information* moving this slow, it's hard to understand how these structures keep together the way that they do. It's like the signals of a humans nervous system would travel at the speed of a slow ant climbing up a (insanely high) tree. Not very fast that is.. Mikael Bergkvist March 5th, 2007 4:12pm >>> With this *information* moving this slow, it's hard to understand how these structures keep together the way that they do. What's the problem?  So what if it's slow, the structures you're talking about have been around for a loooooooooooooooong time. A really long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long 
long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long long time. Ward March 5th, 2007 4:26pm > It's like the signals of a humans nervous system would travel at the speed of a slow ant climbing up Perhaps gravity is intelligent so like ants it may be slow but it can do a tremendous amount in a parallel and autonomously? son of parnas March 5th, 2007 4:29pm It's a known problem. Mikael Bergkvist March 5th, 2007 4:31pm > does the concept of "the time taken for something to go from x to y" actually have a meaning, Yes. > and if so what? Banana. March 6th, 2007 9:14am
https://tex.stackexchange.com/questions/277152/tikz-idiomatic-padding
tikz - Idiomatic padding

For this TikZ figure, the shorten statements keep the tail of the top-most arrow from colliding with the input node, and similarly the head of the bottom-most arrow from colliding with the output node.

\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric}
\begin{document}
\begin{tikzpicture}[>=stealth]
  \node [draw] (compiler) {Compiler};
  \node [coordinate, above of=compiler] (input) {};
  \node [coordinate, below of=compiler] (output) {};
  \draw [->, shorten <=0.5em] (input) node {source program} -- (compiler);
  \draw [->, shorten >=0.5em] (compiler) -- (output) node {target program};
\end{tikzpicture}
\end{document}

The output of the above code is: (figure omitted)

Is there a more appropriate approach, one which does not involve setting spacing by hand but rather resolves this issue automatically? I've been unsuccessful in finding any examples similar to this figure. Most diagrams I have found have arrows running in the horizontal direction and the text hovering above the arrows.

• Welcome to the site! Thanks for providing such a clear mwe, it really helps :) perhaps inner and/or outer sep might be helpful? – cmhughes Nov 7 '15 at 17:52

1 Answer

I don't see why you add two nodes at the same place. Add the text in the input and output nodes when they are defined, and remove the coordinate option. Then the arrows are drawn from the edge of the node shape encompassing the text.

Note also that the of= syntax is deprecated in favor of loading the positioning library and saying =of; see "Difference between "right of=" and "right=of" in PGF/TikZ".

\documentclass[border=3mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric,positioning}
\begin{document}
\begin{tikzpicture}[>=stealth]
  \node [draw] (compiler) {Compiler};
  \node [above=of compiler] (input) {source program};
  \node [below=of compiler] (output) {target program};
  \draw [->] (input) -- (compiler);
  \draw [->] (compiler) -- (output);
\end{tikzpicture}
\end{document}
https://www.biglist.com/lists/lists.mulberrytech.com/dssslist/archives/199801/msg00015.html
## Re: Graphics figures for dvi/ps and html

Subject: Re: Graphics figures for dvi/ps and html
From: Christian Leutloff
Date: 07 Jan 1998 14:08:08 +0100

"Eve L. Maler" <elm@xxxxxxxxxxxxx> writes:

> The only deprecated method is the first one. It's much better to store the
> graphic data in a separate file and pull it in. In order to pull in the
> appropriate-format graphic for the chosen destination, you have to do one
> of the following:
>
> - Use marked sections for %html; and %texps; destinations
>
> - Use the Role attribute on your Graphics and double up on instances of
>   Graphic so that the processing for each destination suppresses the
>   Role="TEXPS" and Role="HTML" graphics, respectively

I'm using Norm's modular DocBook stylesheets. Where can I add the necessary lines of code? I've tried the following in the SGML source:

<graphic align="center" fileref="f1b-2.eps" role="TEXPS"></graphic>
<graphic align="center" fileref="f1b-2.gif" role="RTF"></graphic>

But the result was the inclusion of both into the result file.

Another question is how can I use something like the LaTeX \label and \ref commands? Instead of the \label I use an ID at the appropriate object, i.e.

<figure id="f1b-2" float="1">
<title>Beteiligung </title>
<graphic align="center" fileref="f1b-2.eps"></graphic>
</figure>

but how can I reference the number of the figure? <xref linkend="f1b-2" endterm="f1b-2"> is replaced by the title. What can I do?

Thanks
Christian

--
Christian Leutloff, Aachen, Germany
leutloff@xxxxxxxxxxxxxxxxx  http://www.oche.de/~leutloff/
Debian GNU/Linux 1.3.1! More at http://www.de.debian.org/
https://alice-publications.web.cern.ch/node/8890
# Figure 2

(Left) Correlation between the true \NTt and the measured \NTm multiplicity in the transverse region. (Right) Unfolding matrix \Mone. The unfolding matrix shown corresponds to the third iteration step.
https://www.nature.com/articles/s41598-021-82840-x?error=cookies_not_supported&code=8766b62d-810a-4a30-a588-d68bc4ea7e21
# Predictive modeling of clinical trial terminations using feature engineering and embedding learning

## Abstract

In this study, we propose to use machine learning to understand terminated clinical trials. Our goal is to answer two fundamental questions: (1) what are common factors/markers associated with terminated clinical trials? and (2) how can we accurately predict whether a clinical trial may be terminated or not? The answer to the first question provides effective ways to understand characteristics of terminated trials so that stakeholders can better plan their trials; and the answer to the second question can directly estimate the chance of success of a clinical trial in order to minimize costs. By using 311,260 trials to build a testbed with 68,999 samples, we use feature engineering to create 640 features reflecting clinical trial administration, eligibility, study information, criteria, etc. Using feature ranking, a handful of features, such as trial eligibility, trial inclusion/exclusion criteria, and sponsor types, are found to be related to clinical trial termination. By using sampling and ensemble learning, we achieve over 67% balanced accuracy and over 0.73 AUC (Area Under the Curve) scores in correctly predicting clinical trial termination, indicating that machine learning can help achieve satisfactory prediction results for clinical trial studies.

## Introduction

Clinical trials are studies aiming to determine the validity of an intervention, treatment, or test on human subjects.
Randomised controlled trials, where participants are allocated at random (by chance alone) to receive one of several clinical interventions, are the ultimate evaluation of a healthcare intervention. Effective clinical trials are necessary for medical advancements in treating, diagnosing, and understanding diseases1,2. Since 2007, under the Food and Drug Administration Amendments Act (FDAAA), clinical trials are required to be registered in an online database (ClinicalTrials.gov) if they have one or more sites in the United States, are conducted under an FDA investigational new drug/device application, or involve a drug/device product manufactured in the U.S. and exported for research. Trials requiring approval of drugs/devices are required to submit results within one year of completion3. While the mandate specifies the types of trials legally required to submit results, the majority of trials with results posted on the database are not legally obligated to do so4. The database currently lists 311,260 studies (as of May 2019). The ClinicalTrials.gov database serves as a way to access summary and registration information for completed and terminated clinical studies, where terminated trials are those that stopped recruiting participants prematurely and will not resume, and whose participants are no longer being examined or treated.

There are many obstacles to conducting a clinical trial. Time frames, number of participants required, and administrative efforts have increased due to several factors: (1) an industry shift to chronic and degenerative disease research; (2) non-novel drug interventions requiring larger trials to identify statistical significance over the existing drug intervention; (3) increased complexity of clinical trial protocols; and (4) increased regulatory barriers5. These factors inflate the financial costs of clinical trials and increase the likelihood of a trial becoming terminated.
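The abstract above reports results as balanced accuracy and AUC rather than plain accuracy; because far more trials complete than terminate, plain accuracy would be misleading. As an illustrative sketch (not the authors' actual pipeline), both metrics can be computed from first principles; the toy labels and scores below are invented for illustration:

```python
# Illustrative computation of the two metrics named in the abstract.
# Labels: 1 = terminated trial, 0 = completed trial (toy data, not the paper's).

def balanced_accuracy(y_true, y_pred):
    """Mean of the per-class recalls; insensitive to class imbalance."""
    recalls = []
    for cls in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        correct = sum(1 for i in idx if y_pred[i] == cls)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

def auc(y_true, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a random positive is scored above a random negative."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Six completed (0) and three terminated (1) trials with model scores.
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.3, 0.2, 0.4, 0.6, 0.5, 0.7, 0.9]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

print(f"balanced accuracy: {balanced_accuracy(y_true, y_pred):.3f}")
print(f"AUC: {auc(y_true, scores):.3f}")
```

A sampling/ensemble pipeline like the one described in the abstract would typically be tuned against exactly these imbalance-aware metrics rather than raw accuracy.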
### Clinical trial terminations

Clinical trial terminations result in significant financial burden. Estimates of drug development costs are around 1.3 billion dollars and are rising at a rate of 7.4%, largely due to clinical trial costs5. Terminated trials are associated with opportunity costs that could have been applied to other efforts4. Secondly, there are ethical and scientific issues surrounding terminated clinical trials. All subjects consenting to participate in a clinical trial do so to contribute to the advancement of medical knowledge. If a trial is terminated, subjects are not always informed about the decision and the associated reasons6, resulting in a direct loss of personal benefit from an interventional study7. Thirdly, terminated trials also represent a loss of scientific contribution to the community. Often, relevant information about why a study was terminated is not reported, and results and/or protocols are not published8. To protect the health and safety of participants in a clinical trial, if the data collected indicate negative side effects/adverse events, the trial will be terminated. Interventional trials often employ a data and safety monitoring committee that can recommend termination based on patient safety concerns9. Observational studies do not introduce an intervention to the participants, thus they are less likely to terminate due to safety concerns. The FDA states that the preferred standard for clinical trial practice is to terminate only with clear evidence of harm from data within the study or as a result of published findings from other studies7. In reality, this is not always the case. Often there are administrative issues such as logistical difficulties, loss of staff members, inadequate study design, protocol issues, etc.4,10, resulting in trial termination. A terminated trial indicates that the trial already started recruiting participants but stopped prematurely, and recruited participants are no longer being examined/treated11.
Studies using 8,000 trials found that 10-12% of clinical trials are terminated4,10,12. Reasons include insufficient enrollment, scientific data from the trial, safety/efficacy concerns, administrative reasons, external information from a different study, lack of funding/resources, and business/sponsor decisions4,8,10,12. Insufficient patient enrollment is often the greatest factor resulting in termination4,10,12. The ability to detect a significant effect is directly tied to the sample size. If the intended target enrollment is not met, the study's intended effect will decrease due to reduced power13. It was previously shown that eligibility criteria, non-industry sponsorship, earlier trial phase, and fewer study centres are partially associated with insufficient enrollment14. Lack of funding has also been identified as a major reason for early termination4,10. Average costs of clinical trials range from 1.4 million up to 52.9 million dollars5. It has also been shown that the number of publicly funded clinical trials decreased from 2006 to 2014, while the number of industry funded clinical trials increased15. However, industry sponsorship does not guarantee that a clinical trial will be completed. There have been cases where a company prematurely terminated a clinical trial due to commercial/business decisions7,9. Commercial decisions by an industry sponsor do not necessarily represent a lack of funding, but rather a lack of perceived profit from continuing to pursue the intervention being studied in the clinical trial.

### Related work

A previous study modeled clinical trial terminations related to drug toxicity16 by integrating chemical and target based features to create a model to distinguish failed toxic drugs from successful drugs16. While drug toxicity is a common factor for clinical trial terminations, many clinical trials terminate due to other reasons4,10.
Two previous studies utilized clinical trial study characteristics and descriptions from the ClinicalTrials.gov database to predict terminations17,18. The first study17 tokenizes the description field to find high/low frequency words in terminated/completed trials as features to train a binary predictive model. The second study18 uses Latent Dirichlet Allocation to find topics associated with terminated/completed trials. The corresponding topic probabilities are used as variables in predicting clinical trial terminations. Both studies determined that the addition of unstructured data to structured data increases the predictive power of a model for terminated clinical trials17,18. These results provide validity to our research design of using structured and unstructured information as variables to predict clinical trial terminations. Similar to the previous studies, we utilize study characteristics and description fields as variables in a model to predict clinical trial termination. However, our research differs in significant ways: (1) we design features to represent important information from the unstructured eligibility requirement field; (2) we include more study characteristic fields to represent administrative features of clinical trials; (3) we utilize the keywords field from the clinical trial report; and finally, (4) we use word embedding to capture unstructured description fields. Using a word-embedding model, we are able to represent the whole description field as a numerical vector for predictive modeling, without having to determine the words or topics associated with completed or terminated trials to create features.

### Contribution

The goal of our study is to determine the main factors related to terminated trials and to predict trials likely to be terminated. The main contributions of the study are as follows.

• Large scale clinical trial studies: Our research delivers a large scale clinical trial reports database for termination study.
The database, including features and supporting documents, is published online to benefit the community19.

• New features: Our research creates a set of new features, including eligibility features and administrative features, to characterize and model clinical trials. In addition, our research is the first effort to explore using embedding features to model clinical trials. The results show that embedding features offer great power for prediction. Furthermore, the results indicate that the combination of statistics features (created from clinical trial structural information), keyword features, and embedding features has the highest predictive performance.

• Predictive Modeling and Validation: Compared to existing studies17,18, we investigate a variety of learning algorithms to address class imbalance and feature combinations for clinical trial termination prediction. Our model achieves over 0.73 AUC and 67% balanced accuracy scores for prediction, representing the best performance for open domain clinical trial prediction. The rigorous statistical tests provide trustworthy knowledge for future study and investigation.

## Methods and materials

### Clinical trial reports

A total of 311,260 clinical trials taking place in 194 countries/regions, in XML (Extensible Markup Language) format, were downloaded from ClinicalTrials.gov in May 2019. If a trial had sites in multiple countries, the country with the most sites is recorded. In the case of a tie, the first country listed for the trial site is recorded. The top 25 countries are determined as those with at least 1,000 clinical trials. The top 10 of these countries are shown in Table 1(a), where 34% (106,930) of trials are in the United States. The trials cover a wide range of research fields, from diseases such as cancer and infectious diseases to mental health conditions and public health and safety. Table 1(b) reports the top 10 clinical fields, based on MeSH (Medical Subject Headings) term frequencies in the trials.
Supplementary Figure 2 lists the inclusion criteria used to build the dataset for our study. From 311,260 trials, we select Completed or Terminated trials, starting in or after 2000, belonging to one of the top 25 countries, and having no missing values for the keyword and detailed description fields. The final number of trials in the testbed was 68,999, where 88.54% (61,095) are completed and 11.46% (7,904) are terminated. The status field in the clinical trial report represents the recruitment status of the whole clinical study. The listed options for Status include "Not yet recruiting", "Recruiting", "Enrolling by invitation", "Active, not recruiting", "Completed", "Suspended", "Terminated", and "Withdrawn"11. Overall, the first four indicate studies that are currently in progress or will begin in the future. "Completed", "Terminated", and "Withdrawn" trials represent those which are completed or prematurely ended. For a trial to be "Withdrawn" it had to stop prior to enrolling its first participants. "Suspended" trials are those which have stopped early but may start again. For expanded access clinical trials, statuses could also include "Available", "No longer available", "Temporarily not available" and "Approved for Marketing". "Unknown" indicates that the trial's last known status was recruiting, not yet recruiting, or active, not recruiting; however, the trial passed its completion date and the status has not been verified within the last 2 years11. Figure 1 summarizes the status of all 311,260 trials, where 53.3894% (166,180) are "Completed" and 5.6464% (17,575) are "Terminated".

### Clinical trial feature engineering

In order to study factors associated with trial terminations, and also to learn to predict whether a trial is likely to be terminated or not, we create three types of features: statistics features, keyword features, and embedding features, as follows.

#### Statistics features

Statistics features use statistics w.r.t.
administrative, eligibility, study design, and study information fields to characterize trials.

##### Study information features intend to describe basic information about the clinical trial. These features include whether the clinical trial has expanded access, Data Monitoring Committee (DMC) regulation, FDA regulation, study type (interventional or observational), the phase of the trial, and whether the study was in the USA or outside the USA. A trial with expanded access provides participants with serious health conditions or diseases access to medical treatments that are not yet approved by the FDA. The FDA regulations state that clinical trials with expanded access can transition to an investigational new drug (IND) protocol. An IND protocol is necessary to provide evidence for FDA approval. If a clinical trial with expanded access wants to transition to an IND protocol, the expanded access protocol will be terminated20. DMC regulation indicates that the clinical trial has a data monitoring committee, a group of independent scientists monitoring the safety of participants, for the study. The DMC is responsible for providing recommendations regarding stopping the trial early for safety concerns. Phases of clinical trials include: no phase, early phase 1, phase 1, phase 1/2, phase 2, phase 2/3, phase 3, or phase 4. No phase trials are those without defined phases, such as studies of devices or behavioral interventions. Early phase 1 trials are exploratory trials involving minimal human exposure with no diagnostic intent; these include screening studies and micro-dosing studies. Phase 1 trials are initial studies to determine the metabolism and pharmacologic action of drugs in humans. These aim to uncover any side effects with increasing doses and early evidence of effectiveness. Phase 1/2 trials are combinations of phase 1 and phase 2. Phase 2 trials are controlled clinical studies to evaluate the effectiveness of the drug for a particular indication.
These trials include participants with the disease or condition under study, and the trial aims to determine the short term side effects and risks. Phase 2/3 trials are combinations of phase 2 and phase 3. Phase 3 trials determine the overall benefit-risk relationship of the drug. Phase 4 trials are studies of FDA-approved drugs to determine additional information about the drug's risks, benefits, and optimal usage11. The motivation for using the trial's phase was to determine if phase is related to termination. A previous study that looked at termination reasons found that early phase trials are more likely to terminate due to scientific reasons, while later phase trials have more complicated reasons for termination10. While phase alone is not an indicator of a trial terminating, it is likely that the combination of phase and another feature can indicate that a clinical trial will be terminated. The distribution of clinical trials by phase is shown in Fig. 2b. Interventional studies introduce a treatment plan for participants, such as drugs, vaccines, surgery, devices, or non-invasive treatments such as behavioral changes or education. Observational studies do not introduce treatment plans; participants are observed for health outcomes11. The majority of the clinical trials used for analysis, 81.7% (56,369), are interventional studies, and 18.3% (12,630) are observational studies. This is most likely due to the fact that observational studies are often not registered. Moreover, some observational studies are registered after publication21. Interventional studies have a higher rate of termination: 12.12% (6,915) of interventional studies were terminated, compared to 7.83% (989) of observational studies. The distribution of interventional and observational studies is shown in Fig. 3a. Clinical trials could have sites located in different countries/regions.
A clinical trial's main country was determined by the country with the largest number of sites for the clinical trial. The main country for the majority, 50.6% (34,964), of clinical trials was the USA. Accordingly, we create a binary feature indicating whether the clinical trial's main country was the USA or outside of the USA. Although the FDA regulation for trials to register in the ClinicalTrials.gov database mainly applies to clinical trials in the USA, many international trials register in the database as well. The International Committee of Medical Journal Editors (ICMJE) issued a clinical trial registration policy as part of the ICMJE recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. The recommendations encourage journal editors to require that clinical trials be registered before the start of a study that is considered for publication. The World Health Organization (WHO) also instituted a policy, the International Clinical Trials Registry Platform (ICTRP), which specifies that the registration of all interventional trials is a scientific, ethical, and moral responsibility22. Therefore, many international studies register their trials in the ClinicalTrials.gov database to meet the requirements for publication in some journals and to adhere to the policies of the WHO. The motivation for using USA/non-USA as a feature is to capture any differences between trials inside the United States and outside the United States. Clinical trials in the USA had a higher rate of termination, with 7.11% (4,905) of trials terminated. The distribution of outside USA vs. USA clinical trials and terminations is shown in Fig. 3b.

##### Study design features focus on the study design of a clinical trial, which plays an important role in the success/termination of a trial.
The study design features include the number of groups, number of countries, number of sites, whether the clinical trial has randomized groups, the masking technique for groups, and whether the study included a placebo group. Adding randomized groups and masking techniques for groups introduces logistical difficulties in a clinical trial study. More complicated protocols introduce complex issues that may lead to early termination. More groups in a clinical trial indicate a higher required patient enrollment; if this is not met, the trial will have to terminate. Likewise, if a study has fewer sites, the required number of patients might not be found. It was previously shown that studies with fewer study sites are more likely not to reach target patient enrollment14. Thus, if a clinical trial has fewer sites, it might not reach its patient enrollment target and terminate. However, increasing the number of sites for a clinical trial increases the resources (funds/personnel) required for monitoring each site. Although the use of a placebo group is often required for a clinical trial, it was shown that placebo groups are a risk factor for insufficient patient enrollment14. The addition of a placebo group indicates that the trial needs a higher number of participants. If this is not met, the trial will suffer from insufficient patient enrollment and be terminated. The distribution of placebo groups is shown in Fig. 4a.

##### Eligibility features capture information about eligibility requirements in clinical trials. As discussed in previous sections, eligibility is often a key factor in trial termination. We used basic eligibility fields from the clinical trial reports (whether an eligibility requirement is present, gender restriction, age restriction, acceptance of healthy volunteers) and created features from the eligibility field text block to encapsulate key points about the eligibility requirements.
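One way to derive such text-block features (the counts of criteria lines, words per line, and numeric values detailed next) can be sketched in a few lines of Python. The function name and the sample inclusion criteria below are illustrative, not taken from the study's actual code:

```python
import re

def eligibility_stats(block: str) -> dict:
    """Derive line/word/numeric counts from an eligibility text block
    (a simplified sketch of the feature construction described here)."""
    lines = [ln.strip() for ln in block.splitlines() if ln.strip()]
    words = [w for ln in lines for w in ln.split()]
    numerics = re.findall(r"\d+(?:\.\d+)?", block)
    return {
        "num_criteria": len(lines),               # one criterion per line
        "total_words": len(words),
        "avg_words_per_line": len(words) / len(lines) if lines else 0.0,
        "num_numeric_values": len(numerics),      # very specific requirements
    }

# Hypothetical inclusion-criteria block for one trial.
inclusion = """Age 18 to 65 years
Hemoglobin >= 10 g/dL
Able to provide informed consent"""
stats = eligibility_stats(inclusion)
```

The same function would be applied separately to the inclusion, exclusion, and total eligibility blocks to produce the per-section features.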
The eligibility criteria can be separated into inclusion criteria and exclusion criteria. Some trials do not indicate a clear separation of inclusion and exclusion criteria, so the total eligibility field was considered as well. The eligibility criteria field can be separated into the number of criteria per inclusion/exclusion/total eligibility, given by the number of lines per inclusion/exclusion/total eligibility. The number of criteria, the average number of words per line, and the total number of words per inclusion/exclusion/total eligibility were all created as features. The number of numeric values per inclusion/exclusion/total eligibility was considered as well. A larger number of lines per eligibility field indicates stricter requirements. A larger number of words also indicates higher requirements for eligibility. A trial with a high number of numeric values indicates the trial has very specific eligibility requirements (such as age, metabolic levels, ability to withstand a certain dosage, etc.). The majority of trials, 73.8% (50,920), did not accept healthy volunteers. Trials not accepting healthy volunteers had a higher rate of termination, 13.54% of trials (6,893). The distribution of clinical trials by acceptance of healthy volunteers is shown in Fig. 4b.

### Keyword features

The detailed description field in the clinical trial report is an extended description of the trial's protocol. It includes technical information but not the entire study protocol. The keyword field contains words or phrases that best describe the study's protocol. They are used to help users find studies when querying the online database11. Keywords are created by the clinical trial registrant using the US National Library of Medicine (NLM) Medical Subject Headings (MeSH) controlled vocabulary terms. MeSH was developed by NLM to properly index biomedical articles in MEDLINE23.
The motivation for using keyword features is to represent the clinical trial's research area as determined by keywords. To create features capturing information about keywords, TF-IDF (term frequency-inverse document frequency) was used, where TF is the frequency of the term in the document and IDF is a measure of term specificity, based on counting the number of documents that contain the term. The concept of IDF is that a term occurring in many documents, such as the term "the", is not a good discriminator. These terms are given less weight than ones occurring in a few documents24. TF-IDF is used to measure the importance of a keyword compared to all keywords in the clinical trial reports. Keywords in clinical trial documents are composed of multiple MeSH terms. For example, if a clinical trial has two listed keywords, "Ankle Joint" and "Osteoarthritis", then the resulting document has three keywords: "Ankle", "Joint" and "Osteoarthritis". Keywords are extracted from the keyword field by tokenizing the field, separating on punctuation and spaces, and removing stop words. After finding the TF-IDF(f) value for each keyword f, using all (68,999) clinical trials, the top 500 terms are used as keyword features. The top 20 keyword features as determined by their TF-IDF scores are shown in Table 2(a). For each trial, the resulting TF-IDF score for each keyword is used as input to the classification models.

### Embedding features

The keyword features in the above subsection only provide word level information about clinical studies. A common dilemma is that the number of keyword features should be relatively large, in order to capture specific information of individual trials. As the number of keyword features increases, the feature space becomes sparse (with many zeros), because some keywords only appear in a small number of studies.
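A minimal sketch of the keyword weighting from the previous subsection, using a plain TF-IDF with idf = log(N/df) (the study's exact weighting variant and tokenizer are not specified, and the example keywords are illustrative), also makes the sparsity issue visible: each trial scores only the few keywords it actually contains.

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """TF-IDF per (document, keyword): tf(t, d) * log(N / df(t)).
    Minimal sketch; the study computed scores over all 68,999 trials
    and kept the 500 top-scoring terms as keyword features."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

# Illustrative tokenized keyword lists for three hypothetical trials.
docs = [
    ["ankle", "joint", "osteoarthritis"],
    ["knee", "joint", "replacement"],
    ["osteoarthritis", "pain"],
]
scores = tfidf_scores(docs)
# "ankle" appears in one of three trials, so it is weighted log(3);
# "joint" appears in two, so its weight is the smaller log(3/2).
```

Note that each per-trial score dictionary is sparse: terms absent from a trial receive no entry at all, which is exactly the sparsity concern discussed above when the keyword vocabulary grows large.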
In order to tackle this dilemma, we propose to create embedding features, which generate a dense vector to represent the detailed description of each clinical trial report. Two distinct advantages of the embedding features are that (1) we can easily control the embedding feature size to be a relatively small number (typically 100 or 200), and (2) the embedding feature space has dense feature values normalized in the same range. To represent the detailed description field as a vector input to the classifier, Doc2Vec was used. Doc2Vec25 is an expansion of Word2Vec26, a neural network to generate vector representations of words27. In the continuous bag-of-words (CBOW) implementation of Word2Vec, the current word is predicted by the words in the surrounding context25. For example, given a training sentence such as "autologous stem cell transplantation", Word2Vec will use the co-occurrence of words to train word embedding models. Because "stem" and "cell" both occur in the sentence, it will set the input node corresponding to "stem" to one, and expect the output node corresponding to "cell" to have the largest output. Every word in the sentence is mapped to a unique vector in a column of a matrix W. These vectors are concatenated or averaged together to predict the next word in the sentence. The result creates vector representations of words where similar words have similar vector representations. For example, "Patient" will have a vector similar to "Subject", and "Physician" will have a vector similar to "Doctor", as shown in Table 2(b) and (c). Using a neural network model similar to Word2Vec, Doc2Vec25 adds each document as an extra input (in addition to the words). After training the model using all clinical trial documents, the d-dimensional weight values connecting each document to the neural network are used as the embedding features to represent that document.
The Doc2Vec model creates a vector of length 100 to represent the detailed description. The vector is ultimately used as 100 different features for our final predictive models.

### Termination key factor discovery

The feature engineering approaches in the above subsections create a set of potentially useful features (or key factors) associated with clinical trial termination. In order to determine the features playing important roles in trial termination, we use feature selection to rank all features based on their relevance to the class label (i.e., trial termination). Three types of feature selection approaches, filter, wrapper, and embedded methods28, are commonly used for feature selection. In our research, since we are interested in the single features most relevant to the target class, independent of any learning algorithm, we use filter approaches to rank all features according to their relevance scores to the class label. Five feature selection methods, including ANOVA (Analysis of Variance), ReliefF, Mutual Information (MI), CIFE (Conditional Informative Feature Extraction) and ICAP (Interaction Capping), are used in the study. Due to the inherent differences in their feature evaluation mechanisms, feature selection methods assess feature importance from different perspectives, resulting in different orders of feature importance. To combine their feature ranking results, we employ Dowdall Aggregation (DA) to aggregate the feature ranks from all methods. The Dowdall system is a variant of the Borda count which assigns each feature a fractional weight, inverse to the feature's ranking order, for each ranking method. Overall, the Dowdall method favors features with many first preferences (top ranking positions). If a feature $$f_i$$ is accidentally ranked at the bottom of the feature list by one method, this has very little impact on $$f_i$$'s DA aggregation value because it contributes only a small fractional weight to the final aggregation.
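The Dowdall aggregation described above can be sketched as follows; the feature names and per-method rankings are hypothetical, standing in for the five selection methods' outputs.

```python
def dowdall_aggregate(rankings):
    """Aggregate per-method feature rankings with the Dowdall rule: the
    feature ranked r-th by a method receives weight 1/r, and weights are
    summed across methods; features are returned best-first."""
    scores = {}
    for ranked in rankings:
        for r, feat in enumerate(ranked, start=1):
            scores[feat] = scores.get(feat, 0.0) + 1.0 / r
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings of four features by three selection methods.
rankings = [
    ["eligibility_words", "phase", "num_sites", "cancer"],
    ["eligibility_words", "num_sites", "cancer", "phase"],
    ["phase", "eligibility_words", "cancer", "num_sites"],
]
order = dowdall_aggregate(rankings)
# "eligibility_words" collects 1 + 1 + 1/2 = 2.5 and ranks first.
```

The 1/r weights illustrate the robustness property noted above: even if one method pushed "eligibility_words" to the bottom, the small fraction it contributes there would barely dent the total built from its top rankings elsewhere.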
### Clinical trial termination prediction

In order to predict whether a clinical trial may be terminated or not, we use features created from the above steps to represent a clinical trial, and train four types of classifiers, Neural Networks, Random Forest, XGBoost, and Logistic Regression, to classify each trial into two categories: "Completed" vs. "Terminated". The final data set used for analysis has 88.54% completed trials (61,095) and 11.46% terminated trials (7,904), meaning the ratio between terminated vs. completed trials is 1 to 7.75. A class imbalance problem occurs when there are many more instances of one class compared to another. In these cases, classifiers are overwhelmed by the majority class and tend to ignore minority class samples29. Accordingly, we employ random under sampling to handle the class imbalance problem, which is widely accepted for handling class imbalance29.

#### Random under sampling takes samples from the majority class to use for training along with the instances of the minority class. In this study, random under sampling is applied to the majority class to produce a sampled set with an even number of majority class and minority class samples. Prior to random under sampling, the imbalanced ratio of terminated trials to completed trials is 1 to 7.75. After random under sampling, the ratio of terminated trials to completed trials is 1 to 1. Because random under sampling may remove important examples and result in biased trained models29, we repeat random under sampling 10 times; each run produces one sampled data set and trains one model. The 10 trained models are combined (using an ensemble) to predict test samples. The Supplement includes the clinical trial prediction framework details and comparisons between different sampling ratios.
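A minimal sketch of this repeated under-sampling ensemble, using a toy one-dimensional dataset and a stand-in threshold "classifier" in place of the paper's actual models (the class ratio mirrors the 1-to-7.75 imbalance, but everything else is illustrative):

```python
import random
import statistics

rng = random.Random(0)

# Toy 1-d feature: completed trials cluster near 0.0, terminated near 1.0
# (purely illustrative; the study used 640 features and real classifiers).
completed = [rng.gauss(0.0, 0.3) for _ in range(775)]   # majority class
terminated = [rng.gauss(1.0, 0.3) for _ in range(100)]  # minority class

def train(neg, pos):
    """Stand-in 'classifier': a midpoint threshold between the class means."""
    thr = (statistics.mean(neg) + statistics.mean(pos)) / 2
    return lambda x: 1 if x > thr else 0  # 1 = predicted terminated

# Repeat random under sampling 10 times; each balanced sample trains one model.
models = [train(rng.sample(completed, len(terminated)), terminated)
          for _ in range(10)]

def ensemble_predict(x):
    """Majority vote over the 10 under-sampled models."""
    return 1 if sum(m(x) for m in models) > len(models) / 2 else 0
```

Each member sees all 100 minority samples but a different random 100-sample slice of the majority class, so combining the members recovers information that any single under-sampled run would discard.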
## Results

### Experimental settings and performance metrics

We use five fold cross-validation in our experiments; all models are tested on a unique hold-out test set of 20% (13,780) of trials, five times, to evaluate their performance. After the validation sets are created, Doc2Vec is trained on each training data set and the Doc2Vec model infers a vector for the "Detailed Description" field for each separate training and test data set. The Supplement includes details on the Doc2Vec implementation. Four different classification models, Neural Network, Random Forest, Logistic Regression and XGBoost, are comparatively studied. The neural network model consists of a multi-layer network with 1 hidden layer and 100 nodes, and the Random Forest consists of 1,000 fully grown trees. The Supplement provides additional information about model hyperparameters. To optimize parameters, randomized grid search was initially used to narrow parameter values, followed by exhaustive grid search to determine the final optimal parameters. To determine the results from feature engineering, single models are tested with statistics features only, keyword features only, word embedding features only, and then combinations of the three. To determine the overall prediction results, all features are used with a single model method and with the ensemble model method, respectively. Four types of performance measures, accuracy, balanced accuracy, F1-score, and AUC values, are reported in the experiments. The Supplement provides additional details about each measure.

### Termination key factor detection results

Using feature engineering approaches, we design 40 statistics features, 500 keyword features, and 100 embedding features. In order to understand their importance for trial prediction, we report the aggregated feature ranking (using Dowdall Aggregation) in Table 3, where the superscripts ($$^{{s, k, e}}$$) denote a statistics feature, a keyword feature, and an embedding feature, respectively.
The value in the parenthesis denotes the Dowdall ranking. For example, "Eligibility Words$$^s$$ (2)" denotes that this is a statistics feature ranked no. 2 out of all 640 features. The leftmost column shows the top 20 statistics features. The middle column shows the top 20 keyword features and their respective rankings. The right column shows the top 20 ranked features out of all features. Embedding features belong to a vector of size 100 from the vector representation of the detailed description field. The feature names for embedding features represent their index positions in the vector, {0:99}. The top ranked feature, 8, is the 9th index position of the detailed description document vector. Table 1 in the Supplement further lists the top 40 statistics features, keyword features, and overall ranked features. Overall, statistics features about eligibility are ranked high, such as eligibility words, no eligibility requirement, inclusion criteria words, eligibility lines, average inclusion words per line, average eligibility words, etc. Half of the top 40 ranked features are statistics features, indicating that logistics, study information, clinical design, and eligibility are crucial to trial completion or termination. Keyword features provide information about the research or therapeutic area of the clinical trial. Out of the top 10 keyword features, all are cancer related except for "Germ". Within the oncology related terms, the keywords "Mycosis", "Fungoides", and "Sezary" are all interrelated and in the top 10 ranked keyword features. Mycosis fungoides and Sézary syndrome are types of cutaneous T-cell lymphomas, which are rare diseases affecting 10.2 per million people30.
### Feature engineering and combination results

In order to understand which types of features (or which combinations) are most informative for clinical trial termination prediction, we use different types of features (statistics features, keyword features, and word embedding features) and their combinations to train the four classifiers using a single model. The resulting AUC scores are reported in Fig. 5. For all models, the combination of all features demonstrates the highest performance. To verify the statistical difference, we performed a corrected resampled t-test, comparing results from all features to all other combinations, with respect to each model. Utilizing the Holm-Bonferroni corrected p-values, it was confirmed that using all features is significantly better than all other combinations except for Statistics+Embedding for Neural Network, Statistics+Keyword for Random Forest, and Keyword+Embedding for Logistic Regression. Overall, the feature engineering results can be summarized into two major findings: (1) among the individual feature types, statistics features have the best performance, while keyword and word embedding features have similar performance; (2) combining different types of features results in better classification than using any single type of features alone, and using all features results in the best classification. Feature selection results in the Supplement (Figure 2) also confirm the advantage of using all features.

### Clinical trial termination prediction results

Table 4 reports the clinical trial termination prediction results, with respect to Accuracy, Balanced Accuracy, F1-score, and AUC scores. Because the dataset is severely imbalanced, with 88.54% completed trials and 11.46% terminated trials, Accuracy scores are not reliable measures to assess classifier performance. Using a corrected resampled t-test31, comparing an ensemble model vs.
its single model counterpart, the results show that all models have a significant increase in Balanced Accuracy and F1-score, all models differ significantly in Accuracy, and Random Forest shows a significant increase in AUC. Ensemble XGBoost shows the highest AUC and Balanced Accuracy when using all features, compared to the other ensemble models. Using a corrected resampled t-test and Holm-Bonferroni corrected p-values, it was confirmed that XGBoost is significantly better ($$p < 0.01$$) than Neural Network and Logistic Regression with regards to AUC, and marginally better than Random Forest with $$p = 0.056$$. With regards to Balanced Accuracy, XGBoost is significantly better than all other models with $$p < 0.01$$. To test the ensemble models' performance over all combinations of features, a Friedman test shows a significant difference between the four ensemble models' AUC scores, $$\chi ^{2}_{F} = 9.686$$, $$p=0.021$$. The Nemenyi post-hoc test (using $$\alpha = 0.1$$), shown in Fig. 6a, demonstrates that Random Forest and XGBoost are significantly better than Logistic Regression in AUC (there is no significant difference between Neural Network and the other three models in AUC). A Friedman test also shows a significant difference between the four ensemble models in Balanced Accuracy, $$\chi ^{2}_{F} = 7.971$$, $$p=0.047$$; the Nemenyi post-hoc test (using $$\alpha = 0.05$$), shown in Fig. 6b, demonstrates that Random Forest is significantly better than Logistic Regression in Balanced Accuracy (there is no significant difference between Neural Network, XGBoost, and the other models). The Supplement lists results from all statistical tests. These tests conclude that while XGBoost has the highest performance when using all features, Random Forest shows reliable strength across all feature combinations.
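The corrected resampled t-test31 referenced above inflates the variance of paired score differences to account for overlapping training sets across resampling runs; a minimal sketch of the Nadeau-Bengio statistic, where the score differences and split sizes below are made-up illustration values, not the paper's data:

```python
import math
from statistics import mean, variance

def corrected_resampled_t(diffs, n_train, n_test):
    """Corrected resampled t-statistic (Nadeau & Bengio): for k paired
    score differences from repeated train/test splits, the usual 1/k
    variance factor is augmented by n_test/n_train. Compare the result
    against a t distribution with k - 1 degrees of freedom."""
    k = len(diffs)
    d_bar = mean(diffs)
    s2 = variance(diffs)  # sample variance of the paired differences
    return d_bar / math.sqrt((1.0 / k + n_test / n_train) * s2)

# e.g. five AUC differences between two classifiers under a 90/10
# resampling scheme (hypothetical numbers)
t_stat = corrected_resampled_t([0.02, 0.03, 0.01, 0.02, 0.02], 9000, 1000)
```

Without the n_test/n_train correction term, the overlapping training sets would make the test overconfident.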
Overall, the results can be summarized into three major findings: (1) the ensemble model is always better, often much better, than its single model counterpart in Balanced Accuracy, F1-score, and AUC; (2) a single model learned from the original dataset (without random under sampling) is not reliable, since an F1-score of only a few percent typically means that one class of samples is largely misclassified; and (3) using random under sampling, ensemble models, and XGBoost yields the best trial termination prediction, with over 0.73 AUC and over 67% Balanced Accuracy.

## Discussion

Our study has twofold goals: (1) determine key factors of clinical trial termination, and (2) accurately predict trial termination. For the first goal, among all studied features, statistics features are advantageous in describing tangible aspects of a clinical trial, such as eligibility requirements or trial phase. Some embedding features rank highly, but their downside is that the meaning of the detailed description field is not directly interpretable, as it is represented as a numerical vector. The top ranked keyword features indicate research areas more likely to be terminated; our research shows that a majority of them are cancer related. A previous study utilizing trial description field keywords also found oncology related terms such as “tumor”, “chemotherapy”, and “cancer” to be important keyword indicators17. The high ranking of oncology terms indicates that cancer trials pose a higher termination risk. Indeed, proving the clinical effectiveness of therapeutic interventions in cancer has become increasingly complex: although the number of cancer clinical trials has increased, patient enrollment has in fact decreased32. Meanwhile, statistics features provide information on aspects of trials related to termination, and keyword features provide additional information on research areas susceptible to the factors identified by statistics features.
For example, the high ranking of the keywords “Mycosis”, “Fungoides”, and “Sezary”, which are related to rare diseases, suggests that such trials may have trouble enrolling patients who meet the eligibility criteria, ending in termination. For the second goal, our research found that the combination of all features yields the highest performance for all models. These results are in agreement with previous studies that combined unstructured variables with structured variables (statistics features) for clinical trial termination models17,18. Our research, combined with existing findings, suggests that clinical trial termination is the outcome of many complex factors; high accuracy termination prediction should rely on advanced feature engineering approaches rather than feature selection skills alone. While previous studies17,18 only used Random Forest, our research demonstrates the predictive capabilities of other models: (1) Random Forest and XGBoost are superior to Logistic Regression when comparing performance over different combinations of features; (2) XGBoost is statistically superior to all models when considering performance with all features; and (3) our ensemble methods properly handle the class imbalance issue, which is very common in this domain. Our research relies heavily on statistical tests. The Friedman tests and critical difference diagrams demonstrate the classifiers' rankings over different feature combinations. Because we used cross validation to find the best parameters for each model, their AUC scores for a specific feature combination were often similar, with minor differences that still affect their rankings and, in turn, the Nemenyi post-hoc tests. Unlike the corrected resampled t-test, the Friedman and Nemenyi post-hoc tests do not take the variability due to overlapping training and test sets into account.
The corrected re-sampled t-test can be more reliable with respect to pairwise comparisons of one model's performance against another, while the Friedman tests demonstrate model superiority over all combinations of features.

## Conclusions

In this paper, we used feature engineering and predictive modeling to study key factors associated with clinical trial termination, and proposed a framework to predict trial termination. Using 311,260 clinical trials to build a dataset with 68,999 samples, we achieved over 0.73 AUC and over 67% Balanced Accuracy for trial termination prediction. The predictive modeling offers insight for stakeholders to better plan clinical trials to avoid waste and ensure success. A limitation of our research is that the decision logic of the predictive models is not transparent, making the predictions difficult to interpret; future work can focus on models with better interpretability. In addition, research can segregate clinical trials into separate groups to determine whether trials in concentrated research areas have more pronounced features or termination results. For example, this study and a previous study found oncology keywords to be important features17, while a different study found surgery words to be the most important keyword factor18. Segregating clinical trials by research or therapeutic area within a single dataset may yield improved results for a predictive termination model; in that case, the same methodology could be applied to a subset of clinical trials.

## References

1. Friedman, L. M., Furberg, C. D., DeMets, D. L., Reboussin, D. M. & Granger, C. B. Fundamentals of Clinical Trials 5th edn. (Springer, Berlin, 2015).
2. Campbell, M. et al. Recruitment to randomised trials: strategies for trial enrollment and participation study (STEPS). Health Technol. Assess. (Winch., Engl.) https://doi.org/10.3310/hta11480 (2007).
3. Food and Drug Administration Amendments Act of 2007. Pub. L. 110-85, Title VIII-Clinical Trial Databases, 121 STAT. 904. http://www.gpo.gov/fdsys/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf#page=82 (2007).
4. Williams, R., Tse, T., DiPiazza, K. & Zarin, D. Terminated trials in the clinicaltrials.gov results database: evaluation of availability of primary outcome data and reasons for termination. PLoS ONE 10, e0127242. https://doi.org/10.1371/journal.pone.0127242 (2015).
5. Sertkaya, A., Wong, H.-H., Jessup, A. & Beleche, T. Key cost drivers of pharmaceutical clinical trials in the United States. Clin. Trials. https://doi.org/10.1177/1740774515625964 (2016).
6. Kasenda, B. et al. Learning from failure-rationale and design for a study about discontinuation of randomized trials (DISCO study). BMC Med. Res. Methodol. 12, 131. https://doi.org/10.1186/1471-2288-12-131 (2012).
7. Psaty, B. M. & Rennie, D. Stopping medical research to save money. A broken pact with researchers and patients. JAMA 289, 2128–2131. https://doi.org/10.1001/jama.289.16.2128 (2003).
8. Kasenda, B. et al. Prevalence, characteristics, and publication of discontinued randomized trials. JAMA 311, 1045–1051. https://doi.org/10.1001/jama.2014.1361 (2014).
9. Greaves, M. Clinical trials and tribulations. J. Thromb. Haemost. 12, 822–823. https://doi.org/10.1111/jth.12567 (2014).
10. Pak, T. R., Rodriguez, M. D. & Roth, F. P. Why clinical trials are terminated. bioRxiv https://doi.org/10.1101/021543 (2015).
11. ClinicalTrials.gov. Protocol registration data element definitions for interventional and observational studies. https://prsinfo.clinicaltrials.gov/definitions.html (2019).
12. Bernardez-Pereira, S. et al. Prevalence, characteristics, and predictors of early termination of cardiovascular clinical trials due to low recruitment: insights from the ClinicalTrials.gov registry. Am. Heart J. https://doi.org/10.1016/j.ahj.2014.04.013 (2014).
13. Morgan, C. J. Statistical issues associated with terminating a clinical trial due to slow enrollment. J. Nucl. Cardiol. 24, 525–526. https://doi.org/10.1007/s12350-016-0702-1 (2017).
14. Carlisle, B., Kimmelman, J., Ramsay, T. & MacKinnon, N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin. Trials 12, 77–83. https://doi.org/10.1177/1740774514558307 (2015).
15. Ehrhardt, S., Appel, L. J. & Meinert, C. L. Trends in National Institutes of Health funding for clinical trials registered in ClinicalTrials.gov. JAMA 314, 2566–2567. https://doi.org/10.1001/jama.2015.12206 (2015).
16. Gayvert, K., Madhukar, N. & Elemento, O. A data-driven approach to predicting successes and failures of clinical trials. Cell Chem. Biol. 23, 1294–1301. https://doi.org/10.1016/j.chembiol.2016.07.023 (2016).
17. Follett, L., Geletta, S. & Laugerman, M. Quantifying risk associated with clinical trial termination: a text mining approach. Inf. Process. Manage. 56, 516–525. https://doi.org/10.1016/j.ipm.2018.11.009 (2019).
18. Geletta, S., Follett, L. & Laugerman, M. Latent Dirichlet allocation in predicting clinical trial terminations. BMC Med. Inform. Decis. Mak. https://doi.org/10.1186/s12911-019-0973-y (2019).
19. Elkin, M. & Zhu, X. Clinical trial report data repository. https://github.com/maggieelkin/ClinicalTrialReports (2021).
20.
21. Boccia, S. et al. Registration practices for observational studies on ClinicalTrials.gov indicated low adherence. J. Clin. Epidemiol. 70, 176–182. https://doi.org/10.1016/j.jclinepi.2015.09.009 (2016).
22. ClinicalTrials.gov. Support materials. https://clinicaltrials.gov/ct2/manage-recs/resources (2019).
23. Huang, M., Névéol, A. & Lu, Z. Recommending MeSH terms for annotating biomedical articles. JAMIA 18, 660–667. https://doi.org/10.1136/amiajnl-2010-000055 (2011).
24. Robertson, S. Understanding inverse document frequency: on theoretical arguments for IDF. J. Doc. 60, 503–520. https://doi.org/10.1108/00220410410560582 (2004).
25. Le, Q. V. & Mikolov, T. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning 32, 1188–1196 (2014).
26. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. Distributed representations of words and phrases and their compositionality. Adv. Neural Inf. Process. Syst. 26, 3111–3119 (2013).
27. Mikolov, T., Chen, K., Corrado, G. & Dean, J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
28. Guyon, I. & Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 3, 1157–1182 (2003).
29. Chawla, N. V., Japkowicz, N. & Kotcz, A. Editorial: special issue on learning from imbalanced data sets. SIGKDD Explor. Newsl. 6, 1–6. https://doi.org/10.1145/1007730.1007733 (2004).
30. Larocca, C. & Kupper, T. Mycosis fungoides and Sézary syndrome: an update. Hematol. Oncol. Clin. N. Am. 33, 103–120. https://doi.org/10.1016/j.hoc.2018.09.001 (2019).
31. Bouckaert, R. R. & Frank, E. Evaluating the replicability of significance tests for comparing learning algorithms. In Advances in Knowledge Discovery and Data Mining. PAKDD 2004, vol. 3056, 3–12. https://doi.org/10.1007/978-3-540-24775-3_3 (Springer, 2004).
32. Ajithkumar, T. & Gilbert, D. Modern challenges of cancer clinical trials. Clin. Oncol. 29, 767–769. https://doi.org/10.1016/j.clon.2017.10.006 (2017).

## Acknowledgements

This research is sponsored by the U.S. National Science Foundation through Grant Nos. IIS-2027339, IIS-1763452 and CNS-1828181.

## Author information

### Contributions

Drafting of the manuscript: M.E., X.Z. Design and modeling: M.E., X.Z. Data collection and analysis: M.E. Obtained funding: X.Z. Supervision: X.Z.

### Corresponding author

Correspondence to Xingquan Zhu.
## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Elkin, M.E., Zhu, X. Predictive modeling of clinical trial terminations using feature engineering and embedding learning. Sci Rep 11, 3446 (2021). https://doi.org/10.1038/s41598-021-82840-x
https://www.sololearn.com/discuss/333573/do-i-get-xp-from-declined-expired-challenges
+ 2

# Do I get XP from declined/expired challenges?

I mean, I should at least get some. If I answered 5 questions correctly I should get some points for my efforts.

21st Apr 2017, 9:37 AM, TransHedgehog
http://www.askiitians.com/forums/Integral-Calculus/26/44063/definite-integral.htm
**Question:** Find the value of the integral $\int_{0}^{[x]} \{x-[x]\}\,dx$ (where $[.]$ denotes the greatest integer function).

**Answer (Arun Kumar, IIT Delhi, askIITians Faculty):** $\int_{0}^{[x]} (x-[x])\,dx = \frac{[x]^2}{2}-\int_{1}^{2}1\,dx-\int_{2}^{3}2\,dx-\int_{3}^{4}3\,dx-\cdots$, where the number of terms depends on the value of $[x]$.

**Answer:** $x-[x]$ is periodic with period 1. Therefore, $\int_{0}^{[x]} (x-[x])\,dx = [x]\int_{0}^{1}(x-[x])\,dx = [x]/2$. The integral of $x-[x]$ between 0 and 1 is $1/2$ by the method of areas, so the answer is $[x]/2$.
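The periodicity argument in the second answer can be checked numerically; a small sketch using the midpoint rule, assuming $[x]=4$ so the expected value is $4/2 = 2$:

```python
import math

def frac_part_integral(n, steps_per_unit=50000):
    """Midpoint-rule estimate of the integral of x - floor(x) over [0, n]
    for an integer n; each unit period of the sawtooth contributes 1/2."""
    steps = n * steps_per_unit
    h = n / steps
    return h * sum(h * (i + 0.5) - math.floor(h * (i + 0.5))
                   for i in range(steps))

area = frac_part_integral(4)   # about 2.0, matching [x]/2 for [x] = 4
```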
https://www.shaalaa.com/question-bank-solutions/equations-reducible-pair-linear-equations-two-variables-following-simultaneous-equations_1136
# Solution - Equations Reducible to a Pair of Linear Equations in Two Variables

Concept: Equations Reducible to a Pair of Linear Equations in Two Variables

#### Question

Solve the following simultaneous equations:

7/(2x+1) + 13/(y+2) = 27
13/(2x+1) + 7/(y+2) = 33

#### Similar questions

- Solve the following pair of linear equations: (a − b)x + (a + b)y = a^2 − 2ab − b^2; (a + b)(x + y) = a^2 + b^2.
- Solve the following pair of linear equations: x/a − y/b = 0; ax + by = a^2 + b^2.
- Solve the following pairs of equations by reducing them to a pair of linear equations: 10/(x+y) + 2/(x−y) = 4; 15/(x+y) − 5/(x−y) = −2.
- The ages of two friends Ani and Biju differ by 3 years. Ani's father Dharam is twice as old as Ani, and Biju is twice as old as his sister Cathy. The ages of Cathy and Dharam differ by 30 years. Find the ages of Ani and Biju.
- Solve the following pairs of equations by reducing them to a pair of linear equations: 1/(2x) + 1/(3y) = 2; 1/(3x) + 1/(2y) = 13/6.

For the course 9th–10th SSC (English Medium)
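The main question above reduces to a linear system via the substitutions u = 1/(2x+1) and v = 1/(y+2), giving 7u + 13v = 27 and 13u + 7v = 33; a quick sketch using exact rational arithmetic:

```python
from fractions import Fraction

def solve_reducible():
    """Solve 7/(2x+1) + 13/(y+2) = 27, 13/(2x+1) + 7/(y+2) = 33 by
    substituting u = 1/(2x+1), v = 1/(y+2) and applying Cramer's rule
    to the resulting linear system."""
    a1, b1, c1 = Fraction(7), Fraction(13), Fraction(27)
    a2, b2, c2 = Fraction(13), Fraction(7), Fraction(33)
    det = a1 * b2 - a2 * b1
    u = (c1 * b2 - c2 * b1) / det
    v = (a1 * c2 - a2 * c1) / det
    x = (1 / u - 1) / 2      # undo the substitutions
    y = 1 / v - 2
    return x, y

x, y = solve_reducible()     # x = -1/4, y = -1
```

Substituting back confirms the solution: 7/(2(−1/4)+1) + 13/(−1+2) = 14 + 13 = 27.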
https://www.physicsforums.com/threads/how-did-they-get-that-the-vorticity-2-omega.1015538/
# How did they get that the vorticity = 2##\omega##?

Hall

Homework Statement: Deriving an expression for vorticity.
Relevant Equations: Vorticity = ##2 \omega##

I'm learning meteorology, using the book Atmospheric Science by Wallace and Hobbs. We're discussing the kinematics of the winds (fluids). I shall post some images to show what I don't understand. This is how they define their natural coordinate system: at any point on the surface one can define a pair of axes of a system of natural coordinates (s, n), where s is arc length directed downstream along the local streamline, and n is distance directed normal to the streamline and toward the left. I know and understand the concepts of angular velocity, shear, curvature, differentiation, and partial differentiation, but for some reason, which is latent to me, I cannot understand anything that the book has done. Will you please explain it to me?

$$\nabla \times \mathbf{u}=2\mathbf{\omega}$$

$$u_x=- \omega y,\quad u_y=\omega x,\quad u_z=0$$
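The identity the thread asks about follows directly from the rigid-rotation field quoted at the end of the post: the z-component of the curl is ##\partial u_y/\partial x - \partial u_x/\partial y = \omega + \omega = 2\omega##. A small numerical check (the sample point and ##\omega## value are arbitrary):

```python
def curl_z(u, v, x, y, h=1e-5):
    """z-component of the curl of a 2D velocity field (u, v) at (x, y),
    computed with central differences: dv/dx - du/dy."""
    dvdx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    dudy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return dvdx - dudy

omega = 0.7
u = lambda x, y: -omega * y   # solid-body rotation about the z axis
v = lambda x, y:  omega * x
vorticity = curl_z(u, v, 1.3, -2.1)   # approximately 2 * omega everywhere
```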
https://3dprinting.stackexchange.com/questions/8204/use-gcode-extrusion-speed-in-calculations
Use Gcode Extrusion Speed in Calculations

I have a Rostock Max V2, and I've added a second extruder going into a Y-splitter into a single nozzle on my printer. I have both extruders working correctly, but I'm having trouble tuning the retraction settings to prevent stringing when I switch between extruders during a print. My setup is essentially identical to the one seen here; however, I can't get my printer to retract as cleanly as the one in the video.

What I'm trying to avoid is the long, thin "tail" that forms when retracting the filament from the hot end. That "tail" binds the other filament during the switch and makes the extruder grind a hollow spot on the filament. I've had limited success tuning my retraction settings, but I find that I need different settings for different extrusion speeds. For example, after an extrusion like `G1 E20 F240`, a 3 mm retraction, a 3 mm extrusion, then a fast retraction creates a nice, clean break (this routine is recommended here by kraeger on the SeeMeCNC forums). However, after an extrusion like `G1 E20 F900`, I have to use longer retractions to get a clean break. I think this might have to do with the filament acting like a spring inside the bowden tube: it would make sense that the harder you push the filament, the more you need to pull back to compensate for the pent-up spring force.

Here's my question: is there a way to read the value of the extrusion speed, essentially the "F" term from the G-code commands, and change my retraction routine accordingly? Example pseudocode:

    If F value < 500 Then do short retraction
    If F value >= 500 AND F value < 1000 Then do medium retraction
    If F value >= 1000 Then do long retraction

I'm using the tool change script feature in Simplify3D to store the tool change code.

I don't think you're going to find either a firmware feature or a slicer feature that handles specifically what you want to do.
The slicer would probably be the best place to put it, and I'd recommend maybe opening a feature request ticket with Ultimaker, because that sounds like an awesome feature. That being said, there's nothing stopping you from post-processing your GCode file after it's been generated. If you're experienced with python at all, that's the place I'd recommend you start. You'll probably want to do it via the following: 1. Find the first line number that does a retraction. 2. Sum up all the extrusion distances between that line and the starting point (the beginning of the file) 3. Replace the retraction distance and feedrate with whatever your short/medium/long retraction settings are 4. Store that line number as your new starting point 5. GOTO 1. If you're using Slic3r, there's actually a post-processing script function built into the app itself, you just need to write the script and give it to the application to make the whole process hands-off. For other slicers you'll probably just have to run the script manually between slicing and printing. • Wow, I didn't know that Slic3r implemented that kind of thing. I'll check into it! Just thinking out loud here - Slic3r is completely separate from Prusa's experimentation with it, e.g. PrusaSlicer. So I would be working with Slic3r, not necessarily PrusaSlicer, unless Prusa has implemented this as well. I think the way I would do it would be to insert some sort of comment using the slicer like "Insert long/medium/short retract here," and then use the post-processing script to fill in the gaps. – TempleGuard527 Jul 22 '19 at 15:18
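The post-processing steps above can be sketched as a small script. This is only an illustration: it assumes relative extrusion mode (retractions appear as negative-E `G1` moves), and the thresholds, retraction distances, and fixed retraction feedrate are the question's hypothetical values, not tested settings:

```python
import re

# Hypothetical tuning values from the question, not tested settings.
RETRACT_MM = {"short": 3.0, "medium": 5.0, "long": 8.0}

def retraction_class(feedrate):
    """Map the feedrate of the last extrusion move to a retraction length."""
    if feedrate < 500:
        return "short"
    if feedrate < 1000:
        return "medium"
    return "long"

def postprocess(lines):
    """Rewrite each retraction so its distance depends on how fast the
    preceding extrusion pushed filament into the bowden tube."""
    out, last_f = [], 0.0
    for line in lines:
        m = re.match(r"G1\s+E(-?[\d.]+)(?:\s+F([\d.]+))?", line.strip())
        if m:
            e = float(m.group(1))
            f = float(m.group(2)) if m.group(2) else None
            if e > 0 and f is not None:
                last_f = f                      # remember extrusion feedrate
            elif e < 0:                         # a retraction move
                dist = RETRACT_MM[retraction_class(last_f)]
                line = f"G1 E-{dist:.1f} F1800 ; tuned retraction"
        out.append(line)
    return out
```

Running it over a file between slicing and printing replaces every fixed retraction with one sized to the previous extrusion's feedrate.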
http://www.sciforums.com/threads/the-speed-of-light-is-not-constant.140905/page-5
The Speed of Light is Not Constant Discussion in 'Alternative Theories' started by Farsight, Feb 23, 2014. 1. Motor Daddy☼☼☼☼☼☼☼☼☼☼☼Valued Senior Member Messages: 5,105 I'm gonna say this one time, so listen closely. The ONLY entity in this entire infinite volume of what we refer to as "space" is distance. DATS IT! 2. Google AdSenseGuest Advertisement to hide all adverts. 3. Motor Daddy☼☼☼☼☼☼☼☼☼☼☼Valued Senior Member Messages: 5,105 Electromagnetic radiation occurs as an acceleration between velocity lines of different values. It doesn't occur on the line, it occurs between the lines (read, more distance). 4. Google AdSenseGuest Advertisement to hide all adverts. 5. Write4UValued Senior Member Messages: 11,784 Are you saying space is already physically infinite in distance... or... is it possible that space is bounded (expanding) but is potentially able to expand an infinitely long time? 6. Google AdSenseGuest Advertisement to hide all adverts. 7. Motor Daddy☼☼☼☼☼☼☼☼☼☼☼Valued Senior Member Messages: 5,105 I'm saying that there is infinite distance in every direction from a point. I'm saying that point has a rotational velocity. Please Register or Log in to view the hidden image! 8. Motor Daddy☼☼☼☼☼☼☼☼☼☼☼Valued Senior Member Messages: 5,105 The reason we exist is because math is perfect and nature isn't! LOL 9. Write4UValued Senior Member Messages: 11,784 TY, inherently dynamic? OK, "a point" = "any point". Questions: the entire infinity of points (the wholeness) rotates in one direction? Is time being created during this rotational change? Does time extend into the future for an event which has not yet occurred? That is grist for thought..... Please Register or Log in to view the hidden image! 10. Motor Daddy☼☼☼☼☼☼☼☼☼☼☼Valued Senior Member Messages: 5,105 The length of the path of light, FROM THAT POINT, in a rotational direction moving away from that point as the point is spinning, is c. 
When you increase the distance from the center point of the earth you are increasing the path length of that object in space. That is an acceleration, just like when you stomp on the gas pedal and accelerate to pass a slower car. An acceleration is an increase or decrease in velocity. When you increase the distance from the point the radius is greater than zero, and a requirement to travel at the speed of light has arisen. 11. TrappedBannedBanned Messages: 1,058 The slowing down of photons relative to someone sitting near a massive body, like a black hole say, is a coordinate artifact and so you are correct about why this dilation appears present. There is in fact as you point out, more space to traverse, giving shifts in the gravitational field. Of course, this is in a sense similar to how photons may appear to slow down due to variations in their position. What the picture is you have described, is a more complete understanding of the phenomenon than was probably understood then. 12. TrappedBannedBanned Messages: 1,058 Let's take what Einstein arrived at seriously, setting $c=1$ assuming it is fundamental and constant, then what varies is the gravitational field described by it's potential $g_{tt}(r) = 1+ 2\phi$. Assume we have a such a shift in our gravitational field $(\phi_1 - \phi_2)$ we can make estimates of the dilation by expanding it in a series $1 + \phi_1 - \phi_2 - \frac{1}{2}\phi_{1}^{2} - \phi_{1}\phi_{2} + \frac{3}{2} \phi_{2}^{2} + ...$ When we think about curvature in general relativity, we think about the distortions of spacetime, insomuch at least how geometry appears in the vacuum when we are dealing with gravity. 
Trapped (continuing): Perhaps in a sense photons do appear to take longer to get places, but first principles say the speed of light cannot change in SR, so we must assume the apparent change in its velocity in a gravitational field, according to some observer, must have something to do with there being more space to traverse when gravity is present.

Russ_Watters: I know this is a fast-moving thread, but... What makes it accepted as fact is that when theories are constructed utilizing the current understanding of time, they work.

Write4U: Does that establish an infinite universal time frame? Expanding quantum foam?

Motor Daddy: If the length of the path is 299,792,458 meters, then 1 second has elapsed, and you can take that to the bank!

Write4U: A long time ago I read an amusing story of Einstein's "man in the box" experiment. A man in a box accelerates upwards at near the speed of light. The box has a hole drilled in one side. The box passes a light, which goes through the hole (in a straight line) to the man in the box. To him the light beam seems to bend down and travel a longer, curved path than a straight line (even as it strikes the other side at the same time as if it were a straight line, which of course it was). Did gravity bend the line (and make it longer) for the observer in the box? I believe he told this during a meeting on gravity.

Motor Daddy: Einstein was clueless. If you accelerated a box directly away from the center of the earth, the line could not be horizontal to the box observer; it would be curved and at an angle, and as the distance between the box and the center of the earth increased the line would become flatter, because the acceleration decreased.
Russ_Watters: What is amusing about that?

billvon: I know! You should redefine pi as 3.2, as the Indiana legislature once tried to do. That way the math would be perfect. Won't quite match nature, but who cares, right? As long as the math is clean.

Motor Daddy: If the math matches reality, then it's perfect. If an earth-sized object doesn't fit in a perfect cube, tough crap: the math is correct and nature is forced.

billvon: Then you've just defined relativity as perfect, as evinced by everything from particle accelerator experiments to the GPS in your phone. Are you feeling OK? Are you having a sudden burst of clarity?

Motor Daddy: Don't apply your misunderstandings to what I'm telling you, it doesn't work that way. Understanding comes from listening and learning, not running your mouth!

Write4U: Sorry, I left out the detail of accelerating in a vacuum, with a narrow beam of light placed perpendicular to the direction of travel so that the beam would enter the box at right angles, potentially the shortest distance to the opposite wall. Yet the reality to the man in the box was that the light was curving and actually traveled a longer distance, striking the opposite wall at a lower point, in the same amount of time as if the box were stationary and the light path were a straight line. And @Russ_Watters: I was young and my father had a book on Einstein which gave me my first glimpse into the wonders of physics. I appreciate all responses, as I am trying to find a balanced worldview based on Natural Laws and mathematical constants.
https://zanesvillepottery.com/0pkc01r/the-higher-the-frequency-the-higher-the-energy-06e74d
The variation of the energy deviation of the ions as a function of the extraction voltage is obtained by applying the double floating probe theory to the extraction system.

[Electromagnetic spectrum chart: wavelengths from 10⁴ m down to 10⁻¹⁶ m against frequencies from 10⁴ Hz up to 10²⁴ Hz, labeled with the radio spectrum, IR, UV, X-rays and cosmic rays.]

Photon energy is the energy carried by a single photon. The amount of energy is directly proportional to the photon's electromagnetic frequency and thus, equivalently, is inversely proportional to the wavelength. The higher the photon's frequency, the higher its energy.

We establish that the difference of the spectral distribution of the equilibrium radiation energy in matter from the Planck formula in the high-frequency range is determined by the imaginary part of the transverse dielectric permittivity of the matter.

Loudness (dB) is measured by amplitude, not frequency.

Despite finding that other people are drawn to them, higher-frequency individuals often feel alone in this world because they comprise so little of the population.

The higher the frequency, the more often the particles bump into each other, so the more power is delivered. Therefore, to achieve the same energy at low frequencies the amplitude has to be higher.
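The proportionality just stated, E = hν, can be illustrated numerically. A minimal sketch; the three band frequencies are illustrative values chosen for the example, not taken from the text:

```python
# Photon energy E = h * f: the higher the frequency, the higher the energy.
h = 6.62607015e-34  # Planck constant, J*s (exact SI value)

bands = [
    ("AM radio",      1e6),   # ~1 MHz
    ("visible light", 5e14),  # ~500 THz
    ("X-ray",         1e18),  # ~1 EHz
]

for name, f in bands:
    # Energy per photon grows linearly with frequency.
    print(f"{name}: E = {h * f:.3e} J")
```

Running it shows the single-photon energy climbing by many orders of magnitude from radio to X-ray frequencies, which is the whole "higher frequency, higher energy" point.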
Frequency (f) and wavelength (λ) are related by c = f × λ. Higher frequency means higher-energy photons, and the higher the photon energy, the more penetrating the radiation.

Aquarius, the Return of the Feminine, Higher Frequency Energies.

An FM radio station transmitting at 100 MHz emits photons with an energy of about 4.1357 × 10⁻⁷ eV. Very-high-energy gamma rays have photon energies of 100 GeV to 100 TeV (10¹¹ to 10¹⁴ electronvolts), or 16 nanojoules to 16 microjoules. Frequencies immediately below HF are denoted medium frequency (MF), while the next band of higher frequencies …

For a photon, the equation is E = hν, where E is energy, h is Planck's constant and ν is frequency.

Known as high-frequency or dielectric heating, both are forms of electromagnetic wave energy, which share some characteristics but also have significant differences. It is much higher in frequency than any broadcast wave.

When you produce sound from a speaker you would like a "flat" response, so that there is the same energy per Hz at all frequencies.

Photon energy can be expressed using any unit of energy.

We are teaching you, Dear One, how to use the power of your mind to generate an understanding that becomes your vibration. Reposted 1-8-15, originally channeled on the full moon, 11-28-12: I've been hearing this constant, high-pitched higher-frequency sound now for about 3 weeks or so (maybe more) … it is something that […]
High-frequency light has short wavelengths and high energy. It is sometimes called the "spectroscopic wavenumber". This is why the speaker movement is much larger.

During photosynthesis, specific chlorophyll molecules absorb red-light photons at a wavelength of 700 nm in photosystem I, corresponding to an energy of each photon of ≈ 2 eV ≈ 3 × 10⁻¹⁹ J ≈ 75 kBT, where kBT denotes the thermal energy.

Indeed, 5G technology will increase the radiation emitted from antennas installed to direct the energy wave, and because 5G operates at the higher-frequency end of the microwave spectrum (30 to possibly 300 GHz, almost pushing into the wavelength of sunlight), transmitters need to be installed at much shorter intervals to effectively transfer the higher frequencies.

It depends on the context: for things like sound waves the amplitude is also important, so you could have a frequency that's much higher but of much smaller amplitude.

A meal prepared with love and gratitude not only tastes better, it resonates with a higher energy level.

By contrast, a single photon has the energy E = hν, so a photon of higher frequency carries higher energy.

Imagine that as you breathe deeply and slowly the energy is moving …
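The photosynthesis figures above can be checked from E = hc/λ. A minimal sketch; note the ≈ 75 kBT figure depends on the temperature assumed, and at the 300 K used here (an assumption, since the text states no temperature) the ratio comes out nearer 70:

```python
# Check the energy of a 700 nm red-light photon against the quoted values.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light in vacuum, m/s
kB = 1.380649e-23     # Boltzmann constant, J/K
eV = 1.602176634e-19  # joules per electronvolt

lam = 700e-9          # wavelength absorbed in photosystem I, m
E = h * c / lam       # photon energy, J

print(E)              # ~2.8e-19 J (the text rounds to 3e-19 J)
print(E / eV)         # ~1.8 eV  (the text rounds to 2 eV)
print(E / (kB * 300)) # ~70 thermal quanta at 300 K
```

So the quoted values are reproduced to within rounding.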
You can attract positive energy to you by using high-vibrational words; good examples would be words used in meditation, chanting or prayer.

Use this Practice to Fill Your Energy Field with the Frequency of Divine Love.

The equation E = hν is used, where h is Planck's constant and the Greek letter ν (nu) is the photon's frequency.

Wavenumber, as used in spectroscopy and most chemistry fields, is defined as the number of wavelengths per unit distance, typically per centimeter (cm⁻¹): ν̃ = 1/λ, where λ is the wavelength.

Results: There was a significant decrease in QRS energy after MI in the low and high frequency bandwidths in the 21 dogs that was primarily detected in the Y lead.

Preparation of high-frequency foods is just as important as quality when consuming HFFs.

Following the above examples, gamma rays have very high energy and radio waves are low-energy.

Aquarius is ruled by Uranus, meaning quick and unexpected changes.
Which has the greater energy per photon, red or blue light?

Also, the higher the frequency, the more the energy of the wave.

In the wavelength form E = hc/λ, E is photon energy, h is the Planck constant, c is the speed of light in vacuum and λ is the photon's wavelength. Therefore, the photon energy at 1 μm wavelength, the wavelength of near-infrared radiation, is approximately 1.2398 eV.

Still, in this question, for instance: "The convergence limit for the sodium atom has a wavelength of 242 nm."

Since all that waves really are is traveling energy, the more energy in a wave, the higher its frequency.

Also, the extension of this characterization of the high-frequency limit to other fields, such as electromagnetic fields, and to non-vacuum space-times is briefly discussed.

The spectrum is continuous with no sudden …

This means your body becomes emotionally, physically, mentally, and spiritually healthier as you raise your vibrational frequency.

High frequency (HF) is the ITU designation for the range of radio-frequency electromagnetic waves (radio waves) between 3 and 30 megahertz (MHz).
Permittivity is a measure of how strong an electric field a material will permit inside of it.

Equivalently, the longer the photon's wavelength, the lower its energy. You are right that there is more energy at higher frequencies.

As regards the currents of very high potential differences, which were employed in my experiments, I have never considered the current's strength, but the energy which the human body was capable of receiving without injury, and I have expressed this quite clearly on more than one occasion.

Breath work can be a powerful remedy for moving energy.

HF is also known as the decameter band or decameter wave, as its wavelengths range from one to ten decameters (ten to one hundred meters). Gamma rays are typically waves of frequencies greater than 10¹⁹ Hz.

We know from the problems above that higher frequencies mean shorter wavelengths. The energy (E) and the frequency (f) of electromagnetic radiation are related as E = h·f, where h is Planck's constant. Therefore, the photon energy at 1 Hz frequency is 6.62606957 × 10⁻³⁴ joules, or 4.135667516 × 10⁻¹⁵ eV.

QRS energy was log-transformed to achieve a normal distribution.

Please include this credit at the top with a link to the source message when reposting this message.

Which has the higher frequency, red or blue light?

There are many things that you can do in order to align your body to higher-frequency energies and to support increased energy flow: breathe.
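The numerical values quoted in this section can be reproduced from the defining constants. A minimal sketch using the current exact SI value of h; the text quotes an older CODATA value (6.62606957 × 10⁻³⁴ J·s), which differs only in the trailing digits:

```python
h_J = 6.62607015e-34    # Planck constant, J*s (exact SI value)
h_eV = 4.135667696e-15  # Planck constant, eV*s
c = 2.99792458e8        # speed of light in vacuum, m/s

print(h_J * 1.0)        # 1 Hz photon   -> ~6.626e-34 J
print(h_eV * 100e6)     # 100 MHz FM photon -> ~4.1357e-7 eV
print(h_eV * c / 1e-6)  # 1 um (near-IR) photon -> ~1.2398 eV
```

All three match the values quoted in the text to the precision given there.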
Please use simple language because I'm only in high school. Sounds with greater amplitude are louder.

Each frequency/wavelength can absorb and emit energy in discrete quantized amounts.

The higher your energy, the more capable you are of nullifying and converting lower energies, which weaken you.

Originally channeled 12-8-12, reposted 2-21-13 to help all with the transition we are experiencing at this time, by The Golden Light Channel, www.thegoldenlightchannel.com: Good evening, we are the Council of Angels, Archangel Metatron, Archangel Michael and Archangel Raphael.

The higher the frequency (such as blue light), the higher the temperature and the higher the energy intensity emitted. We can also say that E = hc/λ.

We begin to operate at a high-vibrational frequency, the frequency of love, and that's when we are a magnet for more positive energy. The Benefits of Being in a Higher Vibration | HuffPost Life.

Gamma rays are radiation from nuclear decay, emitted when a nucleus changes from an excited energy state to a lower energy state. In 1900, Planck discovered that there was a direct relationship between a photon's frequency and its energy: the higher the frequency of light, the higher its energy. X-rays or gamma rays are examples of this.

"The Higher Self does not need a walk-in at the Avatar Level or any level because the Higher Self already exists as a consciousness; thus a walk-in is an overshadowing of benevolent power, or malevolent power, to either upgrade or downgrade a person's frequency, awareness, and intelligence in order to take action."

So electricity and magnetism are linked in an ongoing dance.

So it depends on the maximum value of the electric field E₀. The lower the frequency, the less energy in the wave.
High frequency and low male energy (and vice versa) often accompany one another; they balance each other. A person who combines high male energy with the "buzziness" of high frequency can be intense, in the sense that his energy is moving both quickly and outward.

We wish to speak to you this evening of the Raising of Your Frequency to match […]

Changes in High Frequency QRS Energy in the Signal-Averaged Electrocardiogram After Acute Experimental Infarction. Daniel M. Bloomfield, M.D., Paul Lander, Ph.D., and Jonathan S. Steinberg, M.D.

If it gains energy, it carries on with a higher frequency. QRS energy was measured within a low-frequency bandwidth (15-40 Hz) and two high-frequency bandwidths (40-80 Hz, 80-300 Hz).

When our energetic aura shifts, our consciousness expands, our nervous system neutralizes, and we begin to fully realize and experience a spiritual awakening… the recognition of our divine essence.
Frequency is the number of vibrations per unit of time; higher frequency means more energy. It is measured in hertz (Hz): 1 Hz = 1 wave per second.

Wave speed: the type of medium will affect wave speed. For example, sound travels faster or slower in different mediums; light travels through water at a …

(Original post by Serene_Eloquence) Is it because you have a higher frequency, therefore more energy is carried?

The 400 nm wavelength will have less energy, as it has a lower frequency (and hence vibrates less often). The higher the frequency of light, the higher its energy.

Metals: permittivity measures the degree to which a material enables the propagation of photons. The wavenumber equals the spatial frequency.

Since c = fλ, where f is frequency, the photon energy equation can be simplified to E = hf.

This does not matter whether it is a distant healing or a healing done in person.

Everyone and everything has a vibrational frequency (also known as your aura, energy body, light body, life energy, soul, spirit, or essence), and your body is healthier and able to function better at a higher frequency.
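The c = fλ relation mentioned above can be turned into a quick frequency-to-wavelength converter. A minimal sketch, assuming electromagnetic waves in vacuum (sound in a medium would use that medium's wave speed instead):

```python
c = 2.99792458e8  # speed of light in vacuum, m/s

def wavelength(freq_hz):
    # lambda = c / f for an electromagnetic wave in vacuum.
    return c / freq_hz

print(wavelength(1e6))    # 1 MHz   -> ~300 m
print(wavelength(100e6))  # 100 MHz -> ~3 m
print(wavelength(10e9))   # 10 GHz  -> ~3 cm
```

Doubling the frequency halves the wavelength, which is the inverse proportionality the text keeps returning to.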
From the Division of Cardiology, Department of Medicine, College of Physicians and Surgeons of Columbia University, New York. Background: The effect of MI on the high-frequency content of the QRS …

Radio waves are examples of light with a long wavelength, low frequency, and low energy.

This corresponds to frequencies of 2.42 × 10²⁵ to 2.42 × 10²⁸ Hz.

Explanation: Red has the longest wavelength, and the color next to red is orange, so it also has a long wavelength.

Running RAM and a processor at too high a frequency can completely short out a motherboard.

The higher frequency-higher energy relation results from Planck's realisation of the quantization of light and the resulting E = hf equation.

But it is also essential that you consciously prepare your body for higher and greater frequencies of energy.

People of higher frequency tend to be described as "hypersensitive," empathic, intuitive, creative, sometimes perfectionistic, and (wonderfully!) complex.

Substituting h with its value in J·s and f with its value in hertz gives the photon energy in joules. We can also say that E = hc/λ.
It is shown in the special case corresponding to the presence of a single wave that this tensor has the same form as the stress-energy tensor of a null fluid.

"Calculate the first ionization energy of sodium from this data." Shouldn't we be given the wavelength/frequency of the radiation emitted when the valence electron (the eleventh electron) falls from n = infinity back to n = 3?

Thank you for your answer! I think that alone is a great reason to raise your vibrational frequency!

If short wavelength is equal to high frequency, and violet has the shortest wavelength of all the colors, … can carry more energy than visible light, with shorter wavelengths and higher frequency than visible light.

High-frequency data of wind energy facilities are, however, very hard to find, and there are no estimates in the literature of the SD of the capacity factor of wind energy installations sampled with high frequency. Here, we show the data sampled every 5 minutes for the wind energy facilities connected to the Australian National Electricity Market (NEM) grid during the year 2018.

This equation is known as the Planck-Einstein relation.

Wave frequency is related to wave energy.

Starting from the hypothesis of the modulation of the energy of ions extracted by the high-frequency voltage, one calculates the width and the form of the energy spectrum of these ions.

Quantum was the name given to the smallest amount of energy of a particular wavelength or frequency of light that could be absorbed by a body.
The 200 nm wavelength will have the higher frequency; wavelength and frequency are inversely proportional.

Aquarius is depicted as the water bearer, water referring to the collective unconsciousness or grids that form the realities in which we experience.

Amplitude is a measure of the distance between a line through the middle of a wave and a crest or trough.

As h and c are both constants, photon energy E changes in inverse relation to wavelength λ.

A minimum of 48 photons is needed for the synthesis of a single glucose molecule from CO2 and water (chemical potential difference 5 × 10⁻¹⁸ J), with a maximal energy conversion efficiency of 35%.

Source: https://en.wikipedia.org/w/index.php?title=Photon_energy&oldid=999078352, Creative Commons Attribution-ShareAlike License. This page was last edited on 8 January 2021, at 11:07.
Energy at 1 Hz frequency is 6.62606957 × 10−34 joules or 4.135667516 × 10−15 eV to TeV... Quick and unexpected changes it resonates with a long wavelength, the the higher the frequency the higher the energy 's electromagnetic frequency and thus,,! '18 at 10:08 greater than 10 19 Hz also say that E = h c / lambda 16 microjoules energyand... Qrs energy was log transformed to achieve the same way, the more power is delivered of Divine in... Self channeling that aligns your energy, it carries on with a link to the collective unconsciousness or that!, red or blue light ), the photon energy at low frequencies the amplitude has to be higher frequency! C are both constants, photon energy in a wave, the less energy, the higher frequency-higher results! Middle of a wave with a long wavelength, the higher frequency Green... Consciously prepare your body for higher and greater frequencies of energy is the energy an. In the wave same way, the lower its energy of an wave... Nullifying and converting lower energies, which weaken you YouTube video high school Planck! Equivalently, is approximately so it depends the maximum value of the electric field a enables. ) has more energyand shorter wavelength ( such as blue light in hertz gives the photon energy at 1 wavelength! Temperature and higher the energy of about 4.1357 × 10−7 eV photon energy at low frequencies amplitude. Or 4.135667516 × 10−15 eV Higher-energy light travels at a faster speed the higher the frequency the higher the energy light... Effect heads off through space at the top with a link to the paper clips when the wire was to. 40-80 Hz, 80-300 Hz ) breath work can be a powerful remedy for moving.... Reposting this message frequency the wavelength same energy at 1 μm wavelength, low frequency (. Remedy for moving energy relation to wavelength λ an understanding that becomes your vibration than visible light examples. So it depends the maximum value of the electric field E 0 large wavelength will a. 
The energy of a photon is directly proportional to its frequency: E = hν, or equivalently E = hc/λ, since frequency ν and wavelength λ are linked through the speed of light, the fastest speed possible. Because h and c are both constants, photon energy changes in inverse relation to wavelength. Higher-frequency light therefore has a shorter wavelength and carries more energy per photon: blue light carries more energy per photon than red light, and near-infrared radiation, with its longer wavelength, carries less energy than visible light. The scale runs from radio waves to gamma rays. A radio station transmitting at 100 MHz emits photons with an energy of about 4.1357 × 10⁻⁷ eV, roughly 8 × 10⁻¹³ times the electron's mass (via mass-energy equivalence), while the highest-energy gamma rays have photon energies of 100 GeV to 100 TeV (10¹¹ to 10¹⁴ electronvolts, or 16 nanojoules to 16 microjoules). As a worked example, the convergence limit for the sodium atom has a wavelength of 242 nm, in the ultraviolet, at a higher frequency and energy than visible light.

For classical waves, energy is not just a matter of frequency: to deliver the same energy at a low frequency such as 1 Hz, the amplitude has to be higher. The more often particles bump into each other, the more energy is carried, and in seismology the velocity of seismic waves changes and increases with depth. In the brain, rhythms are grouped into bands such as theta (4-8 Hz), low-frequency (5-40 Hz), and high-frequency (80-300 Hz) oscillations.

In the age of Aquarius, the water bearer (water referring to energy), the same language is applied to daily life: a meal prepared with love and gratitude not only tastes better, it resonates with the frequency of Divine love, and quantity is as important as quality when consuming high-frequency foods (HFFs). What you consistently generate becomes your vibration. The higher your vibrational frequency, the more capable you are of nullifying and converting the lower energies that weaken you, and the physically and spiritually healthier you become as you raise it; higher frequencies of energy form the grids that form the realities we experience.
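The figures above can be checked directly from E = hν = hc/λ. A minimal sketch in Python; the constants are standard CODATA values, and the function names are my own, not from any library:

```python
# Photon energy from frequency or wavelength (E = h*f = h*c/lambda).
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def energy_from_frequency_ev(f_hz):
    """Photon energy in eV for a given frequency in Hz."""
    return H * f_hz / EV

def energy_from_wavelength_ev(lam_m):
    """Photon energy in eV for a given wavelength in metres."""
    return H * C / (lam_m * EV)

# A 100 MHz FM radio photon: about 4.1357e-7 eV
print(energy_from_frequency_ev(100e6))
# The sodium convergence limit at 242 nm: about 5.12 eV
print(energy_from_wavelength_ev(242e-9))
```

Note that the 242 nm result lands in the ultraviolet range (above the ~3.1 eV of violet light), consistent with the worked example in the text.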
http://hep.itp.tuwien.ac.at/cgi-bin/dwww?type=runman&location=Module%3A%3ABuild/3
Module::Build(3perl) Perl Programmers Reference Guide Module::Build(3perl)

NAME Module::Build - Build and install Perl modules

SYNOPSIS Standard process for building & installing modules:

    perl Build.PL
    ./Build
    ./Build test
    ./Build install

Or, if you're on a platform (like DOS or Windows) that doesn't require the "./" notation, you can do this:

    perl Build.PL
    Build
    Build test
    Build install

DESCRIPTION "Module::Build" is a system for building, testing, and installing Perl modules. It is meant to be an alternative to "ExtUtils::MakeMaker". Developers may alter the behavior of the module through subclassing in a much more straightforward way than with "MakeMaker". It also does not require a "make" on your system - most of the "Module::Build" code is pure-perl and written in a very cross-platform way. In fact, you don't even need a shell, so even platforms like MacOS (traditional) can use it fairly easily. Its only prerequisites are modules that are included with perl 5.6.0, and it works fine on perl 5.005 if you can install a few additional modules. See "MOTIVATIONS" for more comparisons between "ExtUtils::MakeMaker" and "Module::Build". To install "Module::Build", and any other module that uses "Module::Build" for its installation process, do the following:

    perl Build.PL   # 'Build.PL' script creates the 'Build' script
    ./Build         # Need ./ to ensure we're using this "Build" script
    ./Build test    # and not another one that happens to be in the PATH
    ./Build install

This illustrates initial configuration and the running of three 'actions'. In this case the actions run are 'build' (the default action), 'test', and 'install'.
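The Build.PL script that drives this process is ordinary Perl. A minimal sketch, assuming a hypothetical distribution named "Foo::Bar" (the module name and the prerequisite shown are illustrative placeholders, not part of this manual):

```perl
# Minimal Build.PL sketch; Foo::Bar and Some::Module are hypothetical.
use strict;
use warnings;
use Module::Build;

my $build = Module::Build->new(
    module_name => 'Foo::Bar',
    license     => 'perl',
    requires    => {
        'perl'         => '5.6.0',
        'Some::Module' => '1.23',   # hypothetical prerequisite
    },
);
$build->create_build_script;        # writes the ./Build script
```

Running "perl Build.PL" on a file like this produces the "Build" script whose actions are documented below.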
Other actions defined so far include: build, clean, code, config_data, diff, dist, distcheck, distclean, distdir, distmeta, distsign, disttest, docs, fakeinstall, help, html, install, manifest, manpages, pardist, ppd, ppmdist, prereq_data, prereq_report, pure_install, realclean, retest, skipcheck, test, testall, testcover, testdb, testpod, testpodcoverage, and versioninstall. You can run the 'help' action for a complete list of actions. GUIDE TO DOCUMENTATION The documentation for "Module::Build" is broken up into sections: General Usage (Module::Build) This is the document you are currently reading. It describes basic usage and background information. Its main purpose is to assist the user who wants to learn how to invoke and control "Module::Build" scripts at the command line. Authoring Reference (Module::Build::Authoring) This document describes the structure and organization of "Module::Build", and the relevant concepts needed by authors who are writing Build.PL scripts for a distribution or controlling "Module::Build" processes programmatically. API Reference (Module::Build::API) This is a reference to the "Module::Build" API. Cookbook (Module::Build::Cookbook) This document demonstrates how to accomplish many common tasks. It covers general command line usage and authoring of Build.PL scripts. Includes working examples. ACTIONS There are some general principles at work here. First, each task when building a module is called an "action". These actions are listed above; they correspond to the building, testing, installing, packaging, and similar commands common in software distribution. Second, arguments are processed in a very systematic way. Arguments are always key=value pairs. They may be specified at "perl Build.PL" time (i.e. "perl Build.PL destdir=/my/secret/place"), in which case their values last for the lifetime of the "Build" script. They may also be specified when executing a particular action (i.e. "Build test verbose=1"), in which case their values last only for the lifetime of that command.
Per-action command line parameters take precedence over parameters specified at "perl Build.PL" time. The build process also relies heavily on the "Config.pm" module. If the user wishes to override any of the values in "Config.pm", she may specify them like so: perl Build.PL --config cc=gcc --config ld=gcc The following build actions are provided by default. build [version 0.01] If you run the "Build" script without any arguments, it runs the "build" action, which in turn runs the "code" and "docs" actions. This is analogous to the "MakeMaker" make all target. clean [version 0.01] This action will clean up any files that the build process may have created, including the "blib/" directory (but not including the "_build/" directory and the "Build" script itself). code [version 0.20] This action builds your code base. By default it just creates a "blib/" directory and copies any ".pm" and ".pod" files from your "lib/" directory into the "blib/" directory. It also compiles any ".xs" files from "lib/" and places them in "blib/". Of course, you need a working C compiler (probably the same one that built perl itself) for the compilation to work properly. The "code" action also runs any ".PL" files in your lib/ directory. Typically these create other files, named the same but without the ".PL" ending. For example, a file lib/Foo/Bar.pm.PL could create the file lib/Foo/Bar.pm. The ".PL" files are processed first, so any ".pm" files (or other kinds that we deal with) will get copied correctly. config_data [version 0.26] ... diff [version 0.14] This action will compare the files about to be installed with their installed counterparts. For .pm and .pod files, a diff will be shown (this currently requires a 'diff' program to be in your PATH). For other files like compiled binary files, we simply report whether they differ. A "flags" parameter may be passed to the action, which will be passed to the 'diff' program. 
Consult your 'diff' documentation for the parameters it will accept - a good one is "-u": ./Build diff flags=-u dist [version 0.02] This action is helpful for module authors who want to package up their module for source distribution through a medium like CPAN. It will create a tarball of the files listed in MANIFEST and compress the tarball using GZIP compression. By default, this action will use the "Archive::Tar" module. However, you can force it to use binary "tar" and "gzip" executables by supplying an explicit "tar" (and optional "gzip") parameter: ./Build dist --tar C:\path\to\tar.exe --gzip C:\path\to\zip.exe distcheck [version 0.05] Reports which files are in the build directory but not in the MANIFEST file, and vice versa. (See manifest for details.) distclean [version 0.05] Performs the 'realclean' action and then the 'distcheck' action. distdir [version 0.05] Creates a "distribution directory" named "$dist_name-$dist_version" (if that directory already exists, it will be removed first), then copies all the files listed in the MANIFEST file to that directory. This directory is what the distribution tarball is created from. distmeta [version 0.21] Creates the META.yml file that describes the distribution. The metadata includes the distribution name, version, abstract, prerequisites, license, and various other data about the distribution. This file is created as META.yml in YAML format. It is recommended that the "YAML" module be installed to create it. If the "YAML" module is not installed, an internal module supplied with Module::Build will be used to write the META.yml file, and this will most likely be fine. The META.yml file must also be listed in MANIFEST - if it's not, a warning will be issued.
The current version of the META.yml specification can be found at <http://module-build.sourceforge.net/META-spec-current.html> distsign [version 0.16] Uses "Module::Signature" to create a SIGNATURE file for your distribution, and adds the SIGNATURE file to the distribution's MANIFEST. disttest [version 0.05] Performs the 'distdir' action, then switches into that directory and runs a "perl Build.PL", followed by the 'build' and 'test' actions in that directory. docs [version 0.20] This will generate documentation (e.g. Unix man pages and HTML documents) for any installable items under blib/ that contain POD. If there are no "bindoc" or "libdoc" installation targets defined (as will be the case on systems that don't support Unix manpages) no action is taken for manpages. If there are no "binhtml" or "libhtml" installation targets defined no action is taken for HTML documents. fakeinstall [version 0.02] This is just like the "install" action, but it won't actually do anything, it will just report what it would have done if you had actually run the "install" action. help [version 0.03] This action will simply print out a message that is meant to help you use the build process. It will show you a list of available build actions too. With an optional argument specifying an action name (e.g. "Build help test"), the 'help' action will show you any POD documentation it can find for that action. html [version 0.26] This will generate HTML documentation for any binary or library files under blib/ that contain POD. The HTML documentation will only be installed if the install paths can be determined from values in "Config.pm". You can also supply or override install paths on the command line by specifying "install_path" values for the "binhtml" and/or "libhtml" installation targets. install [version 0.01] This action will use "ExtUtils::Install" to install the files from "blib/" into the system. 
See "INSTALL PATHS" for details about how Module::Build determines where to install things, and how to influence this process. If you want the installation process to look around in @INC for other versions of the stuff you're installing and try to delete it, you can use the "uninst" parameter, which tells "ExtUtils::Install" to do so: ./Build install uninst=1 This can be a good idea, as it helps prevent multiple versions of a module from being present on your system, which can be a confusing situation indeed. manifest [version 0.05] This is an action intended for use by module authors, not people installing modules. It will bring the MANIFEST up to date with the files currently present in the distribution. You may use a MANIFEST.SKIP file to exclude certain files or directories from inclusion in the MANIFEST. MANIFEST.SKIP should contain a bunch of regular expressions, one per line. If a file in the distribution directory matches any of the regular expressions, it won't be included in the MANIFEST. The following is a reasonable MANIFEST.SKIP starting point; you can tailor it to your own distribution:

    ^_build
    ^Build$
    ^blib
    ~$
    \.bak$
    ^MANIFEST\.SKIP$
    CVS

See the distcheck and skipcheck actions if you want to find out what the "manifest" action would do, without actually doing anything. manpages [version 0.28] This will generate man pages for any binary or library files under blib/ that contain POD. The man pages will only be installed if the install paths can be determined from values in "Config.pm". You can also supply or override install paths by specifying their values on the command line with the "bindoc" and "libdoc" installation targets. pardist [version 0.2806] Generates a PAR binary distribution for use with PAR or PAR::Dist. It requires that the PAR::Dist module (version 0.17 and up) is installed. ppd [version 0.20] Build a PPD file for your distribution. This action takes an optional argument "codebase" which is used in the generated PPD file to specify the (usually relative) URL of the distribution.
By default, this value is the distribution name without any path information. Example: ./Build ppd --codebase "MSWin32-x86-multi-thread/Module-Build-0.21.tar.gz" ppmdist [version 0.23] Generates a PPM binary distribution and a PPD description file. This action also invokes the "ppd" action, so it can accept the same "codebase" argument described under that action. This uses the same mechanism as the "dist" action to tar & zip its output, so you can supply "tar" and/or "gzip" parameters to affect the result. prereq_data [version 0.32] This action prints out a Perl data structure of all prerequisites and the versions required. The output can be loaded again using "eval()". This can be useful for external tools that wish to query a Build script for prerequisites. prereq_report [version 0.28] This action prints out a list of all prerequisites, the versions required, and the versions actually installed. This can be useful for reviewing the configuration of your system prior to a build, or when compiling data to send for a bug report. pure_install [version 0.28] This action is identical to the "install" action. In the future, though, when "install" starts writing to the file $(INSTALLARCHLIB)/perllocal.pod, "pure_install" won't, and that will be the only difference between them. realclean [version 0.01] This action is just like the "clean" action, but also removes the "_build" directory and the "Build" script. If you run the "realclean" action, you are essentially starting over, so you will have to re-create the "Build" script again. retest [version 0.2806] This is just like the "test" action, but doesn't actually build the distribution first, and doesn't add blib/ to the load path, and therefore will test against a previously installed version of the distribution. This can be used to verify that a certain installed distribution still works, or to see whether newer versions of a distribution still pass the old regression tests, and so on.
skipcheck [version 0.05] Reports which files are skipped due to the entries in the MANIFEST.SKIP file (See manifest for details) test [version 0.01] This will use "Test::Harness" or "TAP::Harness" to run any regression tests and report their results. Tests can be defined in the standard places: a file called "test.pl" in the top-level directory, or several files ending with ".t" in a "t/" directory. If you want tests to be 'verbose', i.e. show details of test execution rather than just summary information, pass the argument "verbose=1". If you want to run tests under the perl debugger, pass the argument "debugger=1". If you want to have Module::Build find test files with different file name extensions, pass the "test_file_exts" argument with an array of extensions, such as "[qw( .t .s .z )]". If you want test to be run by "TAP::Harness", rather than "Test::Harness", pass the argument "tap_harness_args" as an array reference of arguments to pass to the TAP::Harness constructor. In addition, if a file called "visual.pl" exists in the top-level directory, this file will be executed as a Perl script and its output will be shown to the user. This is a good place to put speed tests or other tests that don't use the "Test::Harness" format for output. To override the choice of tests to run, you may pass a "test_files" argument whose value is a whitespace-separated list of test scripts to run. 
This is especially useful in development, when you only want to run a single test to see whether you've squashed a certain bug yet: ./Build test --test_files t/something_failing.t You may also pass several "test_files" arguments separately: ./Build test --test_files t/one.t --test_files t/two.t or use a "glob()"-style pattern: ./Build test --test_files 't/01-*.t' testall [version 0.2807] [Note: the 'testall' action and the code snippets below are currently in alpha stage, see <http://www.nntp.perl.org/group/perl.module.build/2007/03/msg584.html>] Runs the "test" action plus each of the "test$type" actions defined by the keys of the "test_types" parameter. Currently, you need to define the ACTION_test$type method yourself and enumerate them in the test_types parameter.

    my $mb = Module::Build->subclass(
        code => q(
            sub ACTION_testspecial { shift->generic_test(type => 'special'); }
            sub ACTION_testauthor  { shift->generic_test(type => 'author'); }
        )
    )->new(
        ...
        test_types => {
            special => '.st',
            author  => ['.at', '.pt'],
        },
        ...
    );

testcover [version 0.26] Runs the "test" action using "Devel::Cover", generating a code-coverage report showing which parts of the code were actually exercised during the tests. To pass options to "Devel::Cover", set the $DEVEL_COVER_OPTIONS environment variable: DEVEL_COVER_OPTIONS=-ignore,Build ./Build testcover testdb [version 0.05] This is a synonym for the 'test' action with the "debugger=1" argument. testpod [version 0.25] This checks all the files described in the "docs" action and produces "Test::Harness"-style output. If you are a module author, this is useful to run before creating a new release. testpodcoverage [version 0.28] This checks the pod coverage of the distribution and produces "Test::Harness"-style output. If you are a module author, this is useful to run before creating a new release.
versioninstall [version 0.16] ** Note: since "only.pm" is so new, and since we just recently added support for it here too, this feature is to be considered experimental. ** If you have the "only.pm" module installed on your system, you can use this action to install a module into the version-specific library trees. This means that you can have several versions of the same module installed and "use" a specific one like this: use only MyModule => 0.55; To override the default installation libraries in "only::config", specify the "versionlib" parameter when you run the "Build.PL" script: perl Build.PL --versionlib /my/version/place/ To override which version the module is installed as, specify the "version" parameter when you run the "Build.PL" script: perl Build.PL --version 0.50 See the "only.pm" documentation for more information on version-specific installs. OPTIONS Command Line Options The following options can be used during any invocation of "Build.PL" or the Build script, during any action. For information on other options specific to an action, see the documentation for the respective action. NOTE: There is some preliminary support for options to use the more familiar long option style. Most options can be preceded with the "--" long option prefix, and the underscores changed to dashes (e.g. "--use-rcfile"). Additionally, the argument to boolean options is optional, and boolean options can be negated by prefixing them with "no" or "no-" (e.g. "--noverbose" or "--no-verbose"). quiet Suppress informative messages on output. use_rcfile Load the ~/.modulebuildrc option file. This option can be set to false to prevent the custom resource file from being loaded. verbose Display extra information about the Build on output. allow_mb_mismatch Suppresses the check upon startup that the version of Module::Build we're now running under is the same version that was initially invoked when building the distribution (i.e. when the "Build.PL" script was first run).
Use with caution. debug Prints Module::Build debugging information to STDOUT, such as a trace of executed build actions. Default Options File (.modulebuildrc) [version 0.28] When Module::Build starts up, it will look first for a file, $ENV{HOME}/.modulebuildrc. If it's not found there, it will look in the .modulebuildrc file in the directories referred to by the environment variables "HOMEDRIVE" + "HOMEDIR", "USERPROFILE", "APPDATA", "WINDIR", "SYS$LOGIN". If the file exists, the options specified there will be used as defaults, as if they were typed on the command line. The defaults can be overridden by specifying new values on the command line. The action name must come at the beginning of the line, followed by any amount of whitespace and then the options. Options are given the same as they would be on the command line. They can be separated by any amount of whitespace, including newlines, as long as there is whitespace at the beginning of each continued line. Anything following a hash mark ("#") is considered a comment, and is stripped before parsing. If more than one line begins with the same action name, those lines are merged into one set of options. Besides the regular actions, there are two special pseudo-actions: the key "*" (asterisk) denotes any global options that should be applied to all actions, and the key 'Build_PL' specifies options to be applied when you invoke "perl Build.PL".

    *        verbose=1   # global options
    diff     flags=-u
    install  --install_base /home/ken
             --install_path html=/home/ken/docs/html

If you wish to locate your resource file in a different location, you can set the environment variable "MODULEBUILDRC" to the complete absolute path of the file containing your options. INSTALL PATHS [version 0.19] When you invoke Module::Build's "build" action, it needs to figure out where to install things.
The nutshell version of how this works is that default installation locations are determined from Config.pm, and they may be overridden by using the "install_path" parameter. An "install_base" parameter lets you specify an alternative installation root like /home/foo, and a "destdir" lets you specify a temporary installation directory like /tmp/install in case you want to create bundled-up installable packages. Natively, Module::Build provides default installation locations for the following types of installable items:

lib Usually pure-Perl module files ending in .pm.

arch "Architecture-dependent" module files, usually produced by compiling XS, Inline, or similar code.

script Programs written in pure Perl. In order to improve reuse, try to make these as small as possible - put the code into modules whenever possible.

bin "Architecture-dependent" executable programs, i.e. compiled C code or something. Pretty rare to see this in a perl distribution, but it happens.

bindoc Documentation for the stuff in "script" and "bin". Usually generated from the POD in those files. Under Unix, these are manual pages belonging to the 'man1' category.

libdoc Documentation for the stuff in "lib" and "arch". This is usually generated from the POD in .pm files. Under Unix, these are manual pages belonging to the 'man3' category.

binhtml This is the same as "bindoc" above, but applies to HTML documents.

libhtml This is the same as "libdoc" above, but applies to HTML documents.

Four other parameters let you control various aspects of how installation paths are determined: installdirs The default destinations for these installable things come from entries in your system's "Config.pm".
You can select from three different sets of default locations by setting the "installdirs" parameter as follows:

    'installdirs' set to:  core               site                 vendor

    uses the following defaults from Config.pm:

    lib     => installprivlib     installsitelib       installvendorlib
    arch    => installarchlib     installsitearch      installvendorarch
    script  => installscript      installsitebin       installvendorbin
    bin     => installbin         installsitebin       installvendorbin
    bindoc  => installman1dir     installsiteman1dir   installvendorman1dir
    libdoc  => installman3dir     installsiteman3dir   installvendorman3dir
    binhtml => installhtml1dir    installsitehtml1dir  installvendorhtml1dir [*]
    libhtml => installhtml3dir    installsitehtml3dir  installvendorhtml3dir [*]

    [*] Under some OSes (e.g. MSWin32) the destination for HTML documents is
    determined by the "Config.pm" entry "installhtmldir".

The default value of "installdirs" is "site". If you're creating vendor distributions of module packages, you may want to do something like this: perl Build.PL --installdirs vendor or ./Build install --installdirs vendor If you're installing an updated version of a module that was included with perl itself (i.e. a "core module"), then you may set "installdirs" to "core" to overwrite the module in its present location. (Note that the 'script' line is different from "MakeMaker" - unfortunately there's no such thing as "installsitescript" or "installvendorscript" entry in "Config.pm", so we use the "installsitebin" and "installvendorbin" entries to at least get the general location right. In the future, if "Config.pm" adds some more appropriate entries, we'll start using those.) install_path Once the defaults have been set, you can override them.
On the command line, that would look like this: perl Build.PL --install_path lib=/foo/lib --install_path arch=/foo/lib/arch or this: ./Build install --install_path lib=/foo/lib --install_path arch=/foo/lib/arch install_base You can also set the whole bunch of installation paths by supplying the "install_base" parameter to point to a directory on your system. For instance, if you set "install_base" to "/home/ken" on a Linux system, you'll install as follows:

    lib     => /home/ken/lib/perl5
    arch    => /home/ken/lib/perl5/i386-linux
    script  => /home/ken/bin
    bin     => /home/ken/bin
    bindoc  => /home/ken/man/man1
    libdoc  => /home/ken/man/man3
    binhtml => /home/ken/html
    libhtml => /home/ken/html

Note that this is different from how "MakeMaker"'s "PREFIX" parameter works. "install_base" just gives you a default layout under the directory you specify, which may have little to do with the "installdirs=site" layout. The exact layout under the directory you specify may vary by system - we try to do the "sensible" thing on each platform. destdir If you want to install everything into a temporary directory first (for instance, if you want to create a directory tree that a package manager like "rpm" or "dpkg" could create a package from), you can use the "destdir" parameter: perl Build.PL --destdir /tmp/foo or ./Build install --destdir /tmp/foo This will effectively install to "/tmp/foo/$sitelib", "/tmp/foo/$sitearch", and the like, except that it will use "File::Spec" to make the pathnames work correctly on whatever platform you're installing on. prefix Provided for compatibility with "ExtUtils::MakeMaker"'s PREFIX argument. "prefix" should be used when you wish Module::Build to install your modules, documentation and scripts in the same place "ExtUtils::MakeMaker" does. The following are equivalent.
perl Build.PL --prefix /tmp/foo perl Makefile.PL PREFIX=/tmp/foo Because of the very complex nature of the prefixification logic, the behavior of PREFIX in "MakeMaker" has changed subtly over time. Module::Build's --prefix logic is equivalent to the PREFIX logic found in "ExtUtils::MakeMaker" 6.30. If you do not need to retain compatibility with "ExtUtils::MakeMaker" or are starting a fresh Perl installation we recommend you use "install_base" instead (and "INSTALL_BASE" in "ExtUtils::MakeMaker"). See "Installing in the same location as ExtUtils::MakeMaker" in Module::Build::Cookbook for further information. MOTIVATIONS There are several reasons I wanted to start over, and not just fix what was already there: · I don't like the core idea of "MakeMaker", namely that "make" should be involved in the build process. Here are my reasons: + When a person is installing a Perl module, what can you assume about their environment? Can you assume they have "make"? No, but you can assume they have some version of Perl. + When a person is writing a Perl module for intended distribution, can you assume that they know how to build a Makefile, so they can customize their build process? No, but you can assume they know Perl, and could customize that way. For years, these things have been a barrier to people getting the build/install process to do what they want. · There are several architectural decisions in "MakeMaker" that make it very difficult to customize its behavior. For instance, when using "MakeMaker" you do "use ExtUtils::MakeMaker", but the object created in "WriteMakefile()" is actually blessed into a package name that's created on the fly, so you can't simply subclass "ExtUtils::MakeMaker". There is a workaround "MY" package that lets you override certain "MakeMaker" methods, but only certain explicitly preselected (by "MakeMaker") methods can be overridden. Also, the method of customization is very crude: you have to modify a string containing the Makefile text for the particular target.
Since these strings aren't documented, and can't be documented (they take on different values depending on the platform, version of perl, version of "MakeMaker", etc.), you have no guarantee that your modifications will work on someone else's machine or after an upgrade of "MakeMaker" or perl. · It is risky to make major changes to "MakeMaker", since it does so many things, is so important, and generally works. "Module::Build" is an entirely separate package so that I can work on it all I want, without worrying about backward compatibility. · Finally, Perl is said to be a language for system administration. Could it really be the case that Perl isn't up to the task of building and installing software? Even if that software is a bunch of stupid little ".pm" files that just need to be copied from one place to another? My sense was that we could design a system to accomplish this in a flexible, extensible, and friendly manner. Or die trying. TO DO The current method of relying on time stamps to determine whether a derived file is out of date isn't likely to scale well, since it requires tracing all dependencies backward, it runs into problems on NFS, and it's just generally flimsy. It would be better to use an MD5 signature or the like, if available. See "cons" for an example. - append to perllocal.pod AUTHOR Ken Williams <kwilliams@cpan.org> Development questions, bug reports, and patches should be sent to the Module-Build mailing list at <module-build@perl.org>. Bug reports are also welcome at <http://rt.cpan.org/NoAuth/Bugs.html?Dist=Module-Build>. The latest development version is available from the Subversion repository at <https://svn.perl.org/modules/Module-Build/trunk/> COPYRIGHT This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl(1), Module::Build::Cookbook, Module::Build::Authoring, Module::Build::API, ExtUtils::MakeMaker, YAML

META.yml Specification: <http://module-build.sourceforge.net/META-spec-current.html>

<http://www.dsmit.com/cons/>

<http://search.cpan.org/dist/PerlBuildSystem/>

perl v5.10.1 2009-08-10 Module::Build(3perl)
https://en.m.wikipedia.org/wiki/Sequential_dynamical_system
# Sequential dynamical system

Phase space of the sequential dynamical system

Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory.

## Definition

An SDS is constructed from the following components:

• A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected.

• A state xi for each vertex i of Y taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order).

• A vertex function fi for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of i in Y.

• A word w = (w1, w2, ... , wm) over v[Y].

It is convenient to introduce the Y-local maps Fi constructed from the vertex functions by

${\displaystyle F_{i}(x)=(x_{1},x_{2},\ldots ,x_{i-1},f_{i}(x[i]),x_{i+1},\ldots ,x_{n})\;.}$

The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: Kn → Kn as

${\displaystyle [F_{Y},w]=F_{w(m)}\circ F_{w(m-1)}\circ \cdots \circ F_{w(2)}\circ F_{w(1)}\;.}$

If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point.

The phase space associated to a sequential dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update sequence w.
A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents.

## Example

Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For vertex functions use the symmetric, boolean function nor : K3 → K defined by nor(x,y,z) = (1+x)(1+y)(1+z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as update sequence. Starting from the initial system state (0,0,0) at time t = 0, one computes the state of vertex 1 at time t = 1 as nor(0,0,0) = 1. The state of vertex 2 at time t = 1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t = 1 is used immediately. Next one obtains the state of vertex 3 at time t = 1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by an application of the SDS map.
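The Nor-SDS walk-through above can be reproduced in a few lines of Python. This is a sketch, not standard notation: vertices are 0-indexed (0, 1, 2 instead of the article's 1, 2, 3), and the function and variable names are ad hoc.

```python
from itertools import product

def nor(*args):
    # nor returns 1 exactly when every argument is 0
    return int(not any(args))

def sds_map(state, neighbors, word):
    # Apply the Y-local maps in the order given by the update word;
    # each update immediately sees states changed earlier in the sweep.
    x = list(state)
    for i in word:
        x[i] = nor(*(x[j] for j in neighbors[i]))
    return tuple(x)

# Triangle graph: every vertex's 1-neighborhood contains all three vertices
# (including the vertex itself).
neighbors = {0: (0, 1, 2), 1: (0, 1, 2), 2: (0, 1, 2)}
word = (0, 1, 2)

print(sds_map((0, 0, 0), neighbors, word))  # (1, 0, 0)
print(sds_map((1, 0, 0), neighbors, word))  # (0, 1, 0)

# Enumerating all 8 system states and their images gives the full phase space
for s in product((0, 1), repeat=3):
    print(s, "->", sds_map(s, neighbors, word))
```

Iterating `sds_map` from any starting state traces out one walk through the phase space pictured above.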
https://www.neetprep.com/question/60444-one-complete-cycle-thermodynamic-process-gas-shown-PV-diagram-following-correct--EintQ-EintQ-EintQ-EintQ/55-Physics--Thermodynamics/687-Thermodynamics
# NEET Physics Thermodynamics Questions Solved

For one complete cycle of a thermodynamic process on a gas as shown in the P-V diagram, which of the following is correct?

(1) $\Delta E_{\mathrm{int}}=0,\ Q<0$

(2) $\Delta E_{\mathrm{int}}=0,\ Q>0$

(3) $\Delta E_{\mathrm{int}}>0,\ Q<0$

(4) $\Delta E_{\mathrm{int}}<0,\ Q>0$

Answer: (1). $\Delta E_{\mathrm{int}}=0$ for a complete cycle, and for the given cycle the work done is negative, so from the first law of thermodynamics $Q$ will be negative, i.e. $Q<0$.
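The answer's reasoning can be written out step by step; the sign convention assumed here is $Q = \Delta E_{\mathrm{int}} + W$, with $W$ the work done by the gas:

```latex
\begin{align*}
\Delta E_{\mathrm{int}} &= 0
  && \text{(internal energy is a state function, so it is unchanged over a complete cycle)}\\
Q &= \Delta E_{\mathrm{int}} + W = W
  && \text{(first law of thermodynamics)}\\
W &< 0 \ \text{for the cycle shown}
  && \Rightarrow\ Q < 0
\end{align*}
```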
https://pyquil.readthedocs.io/en/v2.4.0/apidocs/autogen/pyquil.quil.Program.get_qubits.html
# Program.get_qubits

Program.get_qubits(indices=True)[source]

Returns all of the qubit indices used in this program, including gate applications and allocated qubits, e.g.

>>> p = Program()
>>> p.inst(("H", 1))
>>> p.get_qubits()
{1}
>>> q = p.alloc()
>>> p.inst(H(q))
>>> len(p.get_qubits())
2

Parameters: indices – Return qubit indices as integers instead of the wrapping Qubit object

Returns: A set of all the qubit indices used in this program

Return type: set
https://math.stackexchange.com/questions/405628/trigonometric-identities-using-sin-x-and-cos-x-definition-as-infinite-seri?noredirect=1
# Trigonometric identities using $\sin x$ and $\cos x$ definition as infinite series

Can someone show how to prove that $$\cos(x+y) = \cos x\cdot\cos y - \sin x\cdot\sin y$$ and $$\cos^2x+\sin^2 x = 1$$ using the definition of $\sin x$ and $\cos x$ as infinite series? Thanks...

• I think this will be helpful math.stackexchange.com/questions/57675/… – iostream007 May 29 '13 at 11:08
• Use the Cauchy product and simplify. – xavierm02 May 29 '13 at 11:15
• @iostream007, that question is about proving the infinite series; this question is about using the infinite series. – Gerry Myerson May 29 '13 at 13:03

## 4 Answers

Let me do a different one. Begin with $$\sin x = \sum_{k=0}^\infty \frac{(-1)^k x^{2k+1}}{(2k+1)!}, \qquad \cos x = \sum_{k=0}^\infty \frac{(-1)^k x^{2k}}{(2k)!}$$ So compute \begin{align} \sin(x+y) &= \sum_{k=0}^\infty \frac{(-1)^k(x+y)^{2k+1}}{(2k+1)!} \\ &= \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}\sum_{j=0}^{2k+1} \binom{2k+1}{j} x^j y^{2k+1-j} \\ &= \sum_{k=0}^\infty (-1)^k\sum_{j=0}^{2k+1} \frac{1}{j!(2k+1-j)!} x^j y^{2k+1-j} \\ &= \sum_{j=0}^\infty \frac{x^j}{j!}\sum_k \frac{(-1)^k}{(2k+1-j)!} y^{2k+1-j} \end{align} where the inner sum is over all $k$ such that $2k+1 \ge j$. Consider two cases for the inner sum, $j$ even and $j$ odd.

If $j=2n$, then $2k+1 \ge j$ iff $2k+1 \ge 2n$ iff $k \ge n$. So the $k$-sum is: $$\sum_{k=n}^\infty \frac{(-1)^k}{(2k+1-2n)!} y^{2k+1-2n}$$ Use change of variables $i=k-n$ to get $$\sum_{i=0}^\infty \frac{(-1)^{i+n}}{(2i+1)!} y^{2i+1} = (-1)^n \sin y .$$ If $j=2n+1$, then $2k+1 \ge j$ iff $2k+1 \ge 2n+1$ iff $k \ge n$.
So the $k$-sum is $$\sum_{k=n}^\infty \frac{(-1)^k}{(2k+1-2n-1)!}y^{2k+1-2n-1}$$ Again use change of variables $i=k-n$ to get $$\sum_{i=0}^\infty \frac{(-1)^{i+n}}{(2i)!} y^{2i} = (-1)^n \cos y.$$ So finally we have $$\sin(x+y) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!} \sin y +\sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}\cos y = \cos x \sin y + \sin x \cos y.$$ In both cases, you'll want to use the Cauchy product, and the binomial theorem will be useful (for at least the first one), too. I leave the second one to you. For the first one, \begin{align}\sin x\cdot\sin y &= \left(\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}\right)\cdot\left(\sum_{n=0}^\infty\frac{(-1)^ny^{2n+1}}{(2n+1)!}\right)\\ &= \sum_{n=0}^\infty\left(\sum_{k=0}^n \frac{(-1)^kx^{2k+1}}{(2k+1)!}\cdot\frac{(-1)^{n-k}y^{2(n-k)+1}}{(2(n-k)+1)!}\right)\\ &= \sum_{n=0}^\infty\left(\sum_{k=0}^n \frac{(-1)^nx^{2k+1}y^{2(n+1)-(2k+1)}}{(2k+1)!(2(n+1)-(2k+1))!}\right)\\ &= \sum_{n=0}^\infty\frac{(-1)^n}{(2(n+1))!}\left(\sum_{k=0}^n\frac{(2(n+1))!x^{2k+1}y^{2(n+1)-(2k+1)}}{(2k+1)!(2(n+1)-(2k+1))!}\right)\\ &= \sum_{n=0}^\infty\frac{(-1)^n}{(2(n+1))!}\left(\sum_{k=0}^n\binom{2(n+1)}{2k+1}x^{2k+1}y^{2(n+1)-(2k+1)}\right)\\ &= -\sum_{n=0}^\infty\frac{(-1)^{n+1}}{(2(n+1))!}\left(\sum_{k=0}^n\binom{2(n+1)}{2k+1}x^{2k+1}y^{2(n+1)-(2k+1)}\right)\\ &= -\sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}\left(\sum_{k=0}^{n-1}\binom{2n}{2k+1}x^{2k+1}y^{2n-(2k+1)}\right),\end{align} whence $$-\sin x\cdot\sin y=\sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}\left(\sum_{k=0}^{n-1}\binom{2n}{2k+1}x^{2k+1}y^{2n-(2k+1)}\right).$$ You can do some of the same sorts of manipulations to see that $$\cos x\cdot\cos y = 1+\sum_{n=1}^\infty\frac{(-1)^n}{(2n)!}\left(\sum_{k=0}^n\binom{2n}{2k}x^{2k}y^{2n-2k}\right).$$ On the other hand, the binomial theorem shows us that for $n\ge 1$ we have $$(x+y)^{2n} = \sum_{j=0}^{2n}\binom{2n}jx^jy^{2n-j},$$ and splitting the right-hand side into two sums--one for even $j$ and one for odd $j$--gives us 
$$(x+y)^{2n} = \left(\sum_{k=0}^{n}\binom{2n}{2k}x^{2k}y^{2n-2k}\right)+\left(\sum_{k=0}^{n-1}\binom{2n}{2k+1}x^{2k+1}y^{2n-(2k+1)}\right).$$ Can you put the pieces together and fill in the omitted steps/justifications? The functions $\sin$ and $\cos$, defined by the Taylor series have the following properties: $$\begin{cases} \sin'=\cos, & \sin 0=0\\ \cos'=-\sin, & \cos 0=1 \end{cases}$$ These properties are easy consequences of the definitions as power series, that can be differentiated term by term; moreover we know that the convergence radius (which is infinite, for $\sin$ and $\cos$) doesn't change when differentiating. If we now consider the function $$h(x)=(\sin x)^2 + (\cos x)^2$$ we have, by the chain rule, $$h'=2\sin\cdot\sin'+2\cos\cdot\cos'=2\sin\cdot\cos-2\cos\cdot\sin=0$$ so $h$ is constant. Since $h(0)=0^2+1^2=1$, we have proved that, for all $x$, $$(\sin x)^2+(\cos x)^2=1$$ How do you get $\cos(x+y)=\cos x\cos y-\sin x\sin y$? Just observe that when $f$ and $g$ are functions defined and differentiable over the whole real line, such that $$f'=g,\qquad g'=-f,$$ these functions are uniquely determined as linear combinations of $\sin$ and $\cos$ by their values at $0$. (Drawn from Lang's Introduction to mathematical analysis.) • Of course you can use the series to get some other characterization of the trig functions, and then solve the problem using that other characterization. But a better answer would use the series directly. – GEdgar May 29 '13 at 14:28 • @GEdgar I appreciate the way you present your long computation; but I think that a proof like this is worthy because it hides the hairy details under the cover of higher level tools. – egreg May 29 '13 at 15:26 I like egreg's answer, and I pick up where he left off. Fix $y \in \mathbb{R}$. Define $$F(x) = \sin(x+y) - \sin(x) \cos(y) - \cos(x) \sin(y)$$ We want to show that $F(x) = F'(x) = 0$ for all real numbers $x$. 
Now \begin{align*} & F'(x) = \cos(x+y) - \cos(x) \cos(y) + \sin(x) \sin(y) \\ \implies & F''(x) = -\sin(x+y) + \sin(x) \cos(y) + \cos(x) \sin(y) \\ \implies & F(x) + F''(x) = 0 \\ \implies & 2F'(x) F(x) + 2 F'(x) F''(x) = \frac{d}{dx} [(F(x))^2 + (F'(x))^2] = 0 \end{align*} It is clear that $F$ is infinitely differentiable, so the Mean Value Theorem grants us that $H(x) = (F(x))^2 + (F'(x))^2$ is a constant function. Since we know from their power series definitions that $\sin(0) = 0$ and $\cos(0) = 1$, we have $F(0) = F'(0) = 0$, hence $H(x) = H(0) = 0$ for all $x \in \mathbb{R}$. Now this implies that $F(x) = F'(x) = 0$ on $\mathbb{R}$ as well, and so we now know what $\sin(x+y)$ and $\cos(x+y)$ are (just look at the expressions for $F$ and $F'$ above).
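As a numerical sanity check (not a proof), the identities can be compared against truncated partial sums of the defining series in Python; the cutoff of 20 terms and the sample points 0.7 and 1.3 are arbitrary choices that are more than accurate enough here.

```python
import math

def sin_series(x, terms=20):
    # Partial sum of sin x = sum_k (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=20):
    # Partial sum of cos x = sum_k (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x, y = 0.7, 1.3

# Pythagorean identity: sin^2 x + cos^2 x = 1
print(sin_series(x) ** 2 + cos_series(x) ** 2)

# Angle-addition formulas: both residuals should be ~0
print(cos_series(x + y) - (cos_series(x) * cos_series(y) - sin_series(x) * sin_series(y)))
print(sin_series(x + y) - (sin_series(x) * cos_series(y) + cos_series(x) * sin_series(y)))
```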
https://www.esaral.com/q/the-sum-of-first-three-terms-of-an-ap-is-48-84238
# The sum of first three terms of an AP is 48. Question: The sum of first three terms of an AP is 48. If the product of first and second terms exceeds 4 times the third term by 12, find the AP. Solution: Let the first three terms of the AP be (a − d), a and (a + d). Then, $(a-d)+a+(a+d)=48$ $\Rightarrow 3 a=48$ $\Rightarrow a=16$ Now, $(a-d) \times a=4(a+d)+12$      (Given) $\Rightarrow(16-d) \times 16=4(16+d)+12$ $\Rightarrow 256-16 d=64+4 d+12$ $\Rightarrow 16 d+4 d=256-76$ $\Rightarrow 20 d=180$ $\Rightarrow d=9$ When a = 16 and d = 9, $a-d=16-9=7$ $a+d=16+9=25$ Hence, the first three terms of the AP are 7, 16 and 25.
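The solution can be double-checked mechanically; this short Python sketch just replays the algebra above.

```python
# Sum condition: (a - d) + a + (a + d) = 3a = 48
a = 48 // 3

# Product condition: (a - d) * a = 4 * (a + d) + 12
#   =>  a^2 - a*d = 4a + 4d + 12
#   =>  d * (a + 4) = a^2 - 4a - 12
d = (a * a - 4 * a - 12) // (a + 4)

terms = (a - d, a, a + d)
print(terms)  # (7, 16, 25)

# Verify both conditions from the problem statement
print(sum(terms) == 48)                 # True
print((a - d) * a == 4 * (a + d) + 12)  # True
```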
https://mathsmadeeasy.co.uk/gcse-maths-revision/mean-median-mode-and-range-gcse-revision-and-worksheets/
# Mean Median Mode and Range Worksheets, Questions and Revision

## Mean, Median, Mode, and Range

The mean, median, and mode are different types of average and the range tells us how spread out our data is. We use them when we have a bunch of numbers, often data that we've collected, and we want to get a feel for how big/small that group of numbers is. Are they very high? Are they only a little bigger than zero? Are they closer to 10 or 20 in general? These are the types of questions we answer when finding an average.

## Mean

The mean is the most popular kind of average. To find the mean we must add up all the numbers we're finding the average of, and then divide by how many numbers there are in that list.

• Advantage – every bit of data is used in calculating the mean, so it represents all the data.
• Disadvantage – it is highly affected by outliers. An outlier is a piece of data that doesn't quite fit with the rest of them.

You need to be able to calculate the mean from different types of data, from a list of numbers to frequency tables and grouped frequency tables.

## Median

The median is often referred to as "the middle", which is precisely what it is. To find the median of a list of numbers, we put the numbers in order from smallest to largest and find the middle value/middle two values. If there is a middle value, then that is the median; if there are two middle values, then the median is the halfway point between the two. There are two common ways of finding the middle value(s):

• Cross out the smallest number and the largest number, then cross out the next smallest and largest, and keep crossing out pairs of numbers like this until you have one or two left.
• If $n$ is the number of values in the list, then work out the value of $\frac{n+1}{2}$. If the answer is a whole number such as 5, then the median is the 5th point along the ordered list.
If the answer is a decimal (ending in .5) such as 12.5, then the median is the halfway point between the 12th and 13th value along.

• Advantage – it is not affected by outliers.
• Disadvantage – it does not consider all the data. Consider the values $1, 1, 2, 3, 12, 14, 15$ – what is the median? Does it actually represent the middle of these numbers?

## Mode

The mode is the most common value. To find it, look for which value appears most often. There might be two values which are tied for the most appearances, in which case we say the data is bimodal, or alternatively there might be no repeats at all, in which case there is simply no mode.

• Advantage – it is not affected by outliers.
• Disadvantage(s) – firstly, it is sometimes impossible to find. Secondly, it does not consider all of the data. Consider the values $32, 35, 35, 128, 201, 176, 295$ – what is the mode? Does it represent the "average" of the data?

## Range

The range is not another average – it is a measure of spread. This means the range is a way of telling us how spread out the data is. To calculate it, we subtract the smallest value from the biggest value:

$\text{Biggest value} - \text{Smallest Value}$

## Example 1: Finding the Mean, Median and Mode

9 people take a test. Their scores out of 100 are:

$56, 79, 77, 48, 90, 68, 79, 92, 71$

Work out the mean, median, and mode of their scores.

First up, the mean. The question tells us that there are 9 data points, so we must add the numbers together and divide the result by 9.

$\text{Mean } = \dfrac{56 + 79 + 77 + 48 + 90 + 68 + 79 + 92 + 71}{9}=73.3\text{ (1dp)}$

So, the mean is 73.3.

Next up, the median. Firstly, we have to put the numbers in ascending order. This looks like

$48, 56, 68, 71, 77, 79, 79, 90, 92$

There are 9 numbers, and $\frac{9+1}{2}=5$, so the median must be the 5th term along. Counting along the list, we get that the median is 77.

Finally, the mode.
We can see very clearly from the ordered list that there is only one repeat: 79, so we must have that the mode is 79.

## Example 2: Calculating the Range

Find the range of $12, 8, 4, 16, 15, 15, 5, 15, 10, 8$.

A good way to make sure you haven't missed any numbers in determining the biggest and smallest value is to order them. Doing this, we get $4, 5, 8, 8, 10, 12, 15, 15, 15, 16$. Therefore, the smallest value is 4 and the largest is 16 and $16-4=12$, so the range is 12.

Sadly, the range also has its disadvantages – it is highly affected by outliers. *A better way to calculate both mean and range is to remove outliers before calculating them. A question may ask you to redo calculations of the mean/range with outliers removed, or it may ask you to identify how these values are affected by outliers. Get to know your outliers. All that said, if you're asked to find the mean/range of a bunch of numbers, then don't go removing any numbers you think might be outliers unless the question asks you to.

## Example 3: Finding the Mean – Applied Questions

There were 5 members of a basketball team who had a mean points score of 12 points per game. One of the team members left, causing the average points score to reduce to 10 points per game. What was the mean score of the player that left?

There are often exam questions that require you to stretch your knowledge and apply what you know. With examples such as this there is a set of steps you can take which apply to these types of mean questions.

Firstly, find the total for the original number of players: $5\times12=60$

Secondly, find the total once the mean has changed: $4\times10=40$

Finally, calculate the difference between these two totals, as that difference has been caused by the person who left: $60-40=20$

Therefore the mean score of the person who left was 20 points per game.
The same method applies if a new person/amount is added: you find the old and new totals, and the difference is always due to the thing which caused the change.

### Example Questions

It is not necessary to order the numbers, but it may help, especially in working out the range. In ascending order, these values are:

$280, 280, 320, 350, 350, 350, 400, 410, 470, 490, 590$

Since the number 350 occurs 3 times, it is the most common value, so: $\text{mode } = 350$.

The range is the difference between the lowest and the highest value. The lowest value is 280 and the highest is 590, so: $\text{range } = 590-280=310$.

First of all, since we have been asked to work out the median, we need to order the set of values:

$154, 163, 164, 168, 170, 179, 185, 188$

There are 8 values in total, so we need to know which value, or values, we need in order to find the median. Since there is an even number of values, there is not one single middle value, so you will need to find the two middle values. To find the median value, we can use the following formula:

$\dfrac{n + 1}{2}$

where $n$ represents the total number of values. In this question, we have 8 values, so:

$\frac{8 + 1}{2}=4.5$

The answer 4.5 tells us that the median is half-way between the 4th value and the 5th value. The 4th value is 168 and the 5th value is 170, so the median is 169.

NOTE: if you struggle to work out the half-way value, add up the two numbers and divide by 2 (in other words, work out the mean of these two values).

a) In order to calculate the mean, we need to add up all the values and divide by 10 (since there are 10 values in total).

$\text{Mean }=\dfrac{0.25+0.34+0.39+0.38+0.39+1.67+0.28+0.3+0.42+0.46}{10}=0.488$

b) 1.67 is the outlier as it is vastly higher than all the other values. If this outlier were removed, then the mean would be lower.
In this question, we have been given the mean, so we are going to have to calculate the total from the mean. If the mean length of 7 planks of wood is 1.35m, then the total length of all these planks of wood combined can be calculated as follows: $7 \times 1.35\text{ m} = 9.45\text{ m}$ When the extra plank of wood is added, the mean length of a plank of wood increases to 1.4m. This means there are now 8 planks of wood, with a combined length of: $8 \times 1.40\text{ m} = 11.2\text{ m}$ Therefore, by adding this additional plank of wood, the combined length has increased from 9.45m to 11.2m, so the length of this extra plank of wood is therefore: $11.2\text{ m} - 9.45\text{ m} = 1.75\text{ m}$ In this question, we do not need to work out a 2% increase in weight for each individual team member (it would not be wrong to do so, just unnecessarily time-consuming). The combined weight of all 8 members is: $63 + 60+57+66+62+65+69+58 = 500\text{ kg}$ If each team member increases their weight by 2%, then this is the same as the team increasing their combined weight by 2%. Therefore, if the team is successful in achieving this 2% weight gain, then the combined weight of the team can be calculated as follows: $1.02\times500 = 510 \text{ kg}$ Since there are 8 team members in total, then mean weight following this weight gain is: $510\text{ kg}\div8 =63.75\text{ kg}$ Level 4-5 Level 4-5 Level 1-3 Level 1-3 Level 4-5 Level 4-5 GCSE MATHS GCSE MATHS GCSE MATHS ### Learning resources you may be interested in We have a range of learning resources to compliment our website content perfectly. Check them out below.
http://math.stackexchange.com/tags/probability-theory/hot?filter=year
# Tag Info 48 We see that in probability, we represent the event $A$ as a set of elements in our sample space, and $\neg A$ as the complement of $A$ in our sample space. Thus, in probabilistic terms, $$P(A \wedge \neg A) = P\left(A \cap \overline{A}\right) = P(\emptyset)$$ by the definition of the complement. And by the Kolmogorov Axioms, we see that $$P(\emptyset) = 0.... 20 Consider all of the 6\times 5 ways to pick two pieces of fruit. That's 30:$$\boxed{\begin{array}{|l|ccc:ccc|}\hline ~ & A_1 & A_2 & A_3 & O_1 & O_2 & O_3 \\ \hline A_1 & \times & \color{green}{A_1,A_2} & \color{green}{A_1,A_3} & \color{blue}{A_1,O_1} & \color{blue}{A_1,O_2} & \color{blue}{A_1,O_3} \... 18 The only reason you are multiplying by 2 in the second case is because you are using a shortcut due to the fact that the two scenarios that you are adding have a probability found with the same formula. You just need to add up the probabilities you are seeking. Case 1) $\frac{3}{6}*\frac{2}{5}$ Case 2) $\frac{3}{6}*\frac{3}{5}+\frac{3}{6}*\frac{3}{5}$ Or,... 17 the answer is indeed $\frac 12$ . As an alternative way to see that: let's pause just before Bob tosses his final (extra) toss. At this point, there are three possible states: either Bob is ahead, Alice is ahead, or they are tied. Let $p$ be the probability that Bob is ahead. By symmetry, $p$ is also the probability that Alice is ahead (so the ... 14 In terms of the sample space of events $Ω$, an event $E$ happens almost surely if $P(E) = 1$, whereas an event happens surely if $E=Ω$. An example: suppose we are (independently) flipping a (fair) coin infinitely many times. The event $$\{ \text{I will get heads infinitely often}\}$$ is an almost sure event (because it is possible get only a finite ... 
14 When all you have is the raw set structure, the only limit concept that really makes sense is: $S$ is the limit of the sequence $S_1, S_2, S_3,\ldots$ iff $$\forall x\; \exists N\in\mathbb N\; \forall n>N : x\in S\Leftrightarrow x \in S_n$$ In other words every possible element is either in all but finitely many $S_n$ (in which case it is in the ... 14 Let $p_n$ be the $n$th prime. The events $p_n\mathbb N$ are independent*, because $$P(p_n\mathbb N \cap p_m\mathbb N)=P(p_np_m\mathbb N)=\frac 1{p_np_m}=P(p_n\mathbb N)P(p_m\mathbb N)$$ The sum of the reciprocals of the primes $$\sum_n \frac 1{p_n}$$ famously diverges. So, by the second Borel-Cantelli lemma, the event that infinitely many of the events $... 12 If the covariance matrix is not positive definite, we have some$a \in \mathbf R^n \setminus \{0\}$with$\def\C{\mathop{\rm Cov}}\C(X)a = 0. Hence \begin{align*} 0 &= a^t \C(X)a\\ &= \sum_{ij} a_j \C(X_i, X_j) a_i\\ &= \mathop{\rm Var}\left(\sum_i a_i X_i\right) \end{align*} So there is some linear combination of theX_iwhich has ... 12 You want to prove the statement: $$\lim_{n\to\infty}\sum_{i=1}^{n}c=c \implies c=0$$ Instead, you can prove the equivalent statement: $$c\neq0 \implies \lim_{n\to\infty}\sum_{i=1}^{n}c \neq c$$ And this is rather simple, as you can use the exact trick that you were trying to avoid: $$c\neq0 \implies \lim_{n\to\infty}\sum_{i=1}^{n}c=c\lim_{n\to\infty}... 11 He probably meant that either: 1. The space of continuous functions which are nowhere monotonic has probability one using Wiener measure. I.e. a continuous function is "almost surely" nowhere monotonic using the standard probability measure for that space. 2. That set of nowhere monotonic continuous functions is of the first category (Baire category ... 10 It would generally not be true even if they were independent. 
For example, if $X,Y,Z$ were identically and independently continuously distributed, then they can come in any order with equal probability, so $P(X \leq Y \leq Z) = \frac16$, but $P(X \leq Y)P(Y \leq Z) = \frac12 \times \frac12 = \frac14$. 10 If we divide every roll by $n$, rolling the die and dividing by $n$ approximates the uniform distribution on $[0,1]$ for arbitrarily large $n$. We are then looking for the expected number of samples from a uniform distribution required to get a sum above 1. For any integer $k \in \mathbb{N}$, let $X_1, \ldots , X_k, \ldots$ be the random variables in ... 10 In order to give more strength to the induction hypothesis, let us prove more generally: $$\exists\alpha\in\left\{ 0,1\right\} ^{I}:\left\{ B_{i}^{\left(\alpha_{i}\right)}\right\} _{i\in I}\text{ is independent}\implies\forall\beta\in\left\{ 0,1\right\} ^{I}:\left\{ B_{i}^{\left(\beta_{i}\right)}\right\} _{i\in I}\text{ is independent}$$ Assume that the ... 10 The trick is using the identity $k { n \choose k} = n {n-1 \choose k-1}$. $$\begin{align*} &\sum_{k=1}^n k { n \choose k } p^k (1-p)^{n-k}\\ &= \sum_{k=1}^n n { n-1 \choose k-1} p^k (1-p)^{n-k}\\ &=np \sum_{k=1}^n {n-1 \choose k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}\\ &=np (p+(1-p))^{n-1}\\ &=np \end{align*}$$ 10 The event $\{H,T\}$ is the event that the coin turns up heads or tails. This event always happens and thus has probability 1. The empty event, $\varnothing$, should not be thought of as the event that the coin lands on its edge. It is assumed that the coin always lands heads or tails. Rather, the event $\varnothing$ is the event that there is no outcome, ... 9 Simply follow the hint... First note that, since $E(X\mid Y)=Y$ almost surely, for every $c$, $$E(X-Y;Y\leqslant c)=E(E(X\mid Y)-Y;Y\leqslant c)=0,$$ and that, decomposing the event $[Y\leqslant c]$ into the disjoint union of the events $[X>c,Y\leqslant c]$ and $[X\leqslant c,Y\leqslant c]$, one has $$E(X-Y;Y\leqslant c)=U_c+E(X-Y;X\leqslant c,Y\leqslant ...
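One of the snippets above notes that for i.i.d. continuous $X,Y,Z$, $P(X \le Y \le Z) = \frac16$ while $P(X \le Y)\,P(Y \le Z) = \frac14$. This is easy to confirm numerically; a rough Monte Carlo sketch (not part of the original answers):

```python
import random

random.seed(0)
N = 200_000
xyz = [(random.random(), random.random(), random.random()) for _ in range(N)]

p_order = sum(x <= y <= z for x, y, z in xyz) / N   # estimates 1/6
p_x_le_y = sum(x <= y for x, y, _ in xyz) / N       # estimates 1/2
p_y_le_z = sum(y <= z for _, y, z in xyz) / N       # estimates 1/2
print(p_order, p_x_le_y * p_y_le_z)                 # close to 1/6 and 1/4
```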
9 Here is a proof of the first question using Zorn's lemma, and a counterexample to the second question using an ultrafilter. (So both cases used some form of the axiom of choice!) Theorem (Sierpinski): For a non-atomic probability space(\Omega, \mathcal{F}, \mu)$,$\mu$is surjective onto$[0,1]$. Proof: Let$x \in [0,1]$, and let $$\mathcal{G} = \{... 9 Define X_n to be such that X_n is 0 with probability 1-\frac{1}{n} and n^2 with probability \frac{1}{n}. It is the case that E[X_n]=n \to \infty. But for any positive k we have \mathbb{P}(X_n > k) = \frac{1}{n} \to 0. Thus showing a counter example. 8 Use the strong law of large numbers. Choose B\in M so that \lambda(B)\neq \mu(B), and consider the disjoint sets$$\left\{x\in X^\infty: {1\over n}\sum_{j=1}^n 1_{[x_j\in B]}\to \lambda(B)\right\}\mbox{ and }\left\{x\in X^\infty: {1\over n}\sum_{j=1}^n 1_{[x_j\in B]}\to \mu(B)\right\}.$$8 Consider two normal distributions with the same variance and different means. 8 Definitely incorrect. Let F = X+Y. Suppose X and Y are IID normal. Then E[X|F] and E[Y|F] are both linear in F, and hence perfectly correlated. 8 There is a difference between "almost surely" and "surely." Consider choosing a real number uniformly at random from the interval [0,1]. The event "1/2 will not be chosen" has probability 1, but is not impossible. I recommend reading the relevant Wikipedia article, which I found very clarifying when I was learning probability. 8 By definition: E[X] is only defined (EDIT: as a real number) for X such that E[|X|] < \infty. 8 I think that's a misunderstanding. The passage you quote is a bit informally phrased. By "the mean time to failure of some disk", they don't mean the mean time to failure of each individual disk but the mean time it takes until any one of the 100 disks fails. If the failures form a Poisson process, then having 100 disks instead of 1 disk will increase ... 
7 If I toss a coin an infinite amount of times, can I be sure to get an infinite amount of heads? According to the Borel-Cantelli lemma, since each coin toss is an event of probability \frac12 and a sum of \frac12 diverges, the probability of \limsup_{n\to\infty}\{\text{heads at n-th flip}\} is 1. But the \limsup is precisely the event of infinite ... 7 Let (B_t)_{t \geq 0} be a Brownian motion on a probability space (\Omega_1,F_1,P_1) and let (\Omega_2,F_2,P_2) be an arbitrary probability space. Define a new probability space (\Omega,F,P) by$$\Omega := \Omega_1 \times \Omega_2 \qquad F := F_1 \otimes F_2 \qquad P := P_1 \otimes P_2.$$If we set$$\tilde{B}_t(\omega_1,\omega_2) := B_t(\omega_1) \... 7 The vector$\vec X = [X_1,\dots,X_N]^T$has a rotationally invariant distribution. That is, if$A$is any orthogonal matrix, then the distribution of$\vec X$and$A \vec X$are the same. Hence by letting$A$be an orthogonal matrix that takes$[1,\dots,1]^T$to$[\sqrt N,0,\dots,0]^T$, your problem is the same as computing$$\Pr(\sqrt N X_1 \mid \sum X_i^... 7 The$\sigma$-algebra generated by the events$\{\omega \in \Omega: \omega_n = W \}$is the so-called Borel$\sigma$-algebra on$\Omega = \{H,T\}^\mathbb{N}$. One can show, by transfinite induction (so you need some set-theory background) that there are at most$|\mathbb{R}| = 2^{\aleph_0}$many Borel sets, while the power set of$\Omega$has$2^{|\Omega|} = ... Only top voted, non community-wiki answers of a minimum length are eligible
# Spin-helicity formalism for gluon-gluon amplitudes In Schwarz's QFT he introduces in chapter 27 the Spin-Helicity formalism as a way of calculating gluon-gluon interactions much easier than going through all the Feynman calculus from the beginning to the end. It seems so amazing, but I am not sure I understand what is the fundamental difference between the 2 approaches that makes one a lot easier than the other. The main difference (on which spin-helicity formalism is actually based) is the fact that momentum is treated like a bi-spinor and not a vector. Why is this approach so much simpler? Can someone give me some intuition to it? Thank you! The reason why calculating amplitudes using the Feynman calculus formalism is tedious is that they stem from a perturbative treatment that is formulated by upholding, above all, manifest Lorentz-invariance. While this is extremely useful for developing and formalising the theory (e.g. to detect anomalies easily), it tends to obstruct practical calculations - an early example of this is in a theory with $$\text{U}(1)$$ gauge symmetry: we introduce redundancy into the theory while embedding the spin-1 particle into a Lorentz-invariant object that has too many degrees of freedom, so we have to systematically kill off the additional degrees of freedom to preserve the manifest Lorentz-invariance. The redundancy manifests itself in the Ward identities which the Feynman diagrams must obey - this introduces tons of terms in QCD interactions like $$\sim A^2 \partial A$$ and $$\sim A^4$$ that have to be dealt with, which, despite heavy cancellation, quickly render the calculation impractical. 
To circumvent this, notice that the "natural" transformation for spin-1 fields is under the irreducible $$\left(\frac12, \frac12\right)$$ representation of $$\mathfrak{su}(2)\oplus\mathfrak{su}(2)$$ - so rather than working with the induced $$(1 \oplus 0) \ \mathfrak{so}(3)$$ four-vector representation that we are used to, we should work with the bispinors of the $$\left(\frac12, \frac12\right)$$ rep: the conversion between the two is $$p_{a\dot{b}}=p_\mu(\sigma^\mu)_{a\dot b} \\ (\sigma^\mu)_{a\dot b}=(1, \sigma^i)_{a\dot b}$$ This embedding avoids the redundancy of the four-vector description (although manifest Lorentz invariance may not present itself throughout these calculations, it is always lurking within). It also has the doubly nice property that the matrix associated with $$p_{a\dot b}$$ is rank-1 (since its determinant vanishes), so it can be factored as an outer product between a dotted and an undotted spinor. The fact that a spinor has fewer degrees of freedom than a vector facilitates many of the speed-ups that one gets for free while working with the SH-formalism. Another benefit is that the little group transformations that leave $$p_\mu$$ invariant (e.g. $$\text{ISO}(2)$$ on a massless $$(E, 0, 0, E)$$) are represented linearly on the polarisation bispinors. There is still residual freedom, which allows us to pick an arbitrary "reference spinor" while building up the polarisation bispinor - through a clever choice of reference spinor, we can set swathes of terms to zero. Thus, in a sense, it channels the gauge degrees of freedom to a more useful outlet. Historically, Parke and Taylor noticed that for Maximally Helicity Violating (MHV) pure gluon amplitudes, the final result could be expressed analytically, purely in terms of the momentum bilinears $$\langle pq \rangle$$. 
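The rank-1 property mentioned above is easy to see numerically. A small sketch (the explicit sign convention for the lowered spatial components below is my assumption of one common choice, not taken from the answer): build the bispinor of a massless momentum and check that its determinant, which equals $p^\mu p_\mu = m^2$, vanishes.

```python
import numpy as np

# sigma^mu = (1, sigma^i), as in the text
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bispinor(p0, p1, p2, p3):
    # p_{a b-dot} = p_mu (sigma^mu)_{a b-dot}; with a mostly-minus metric
    # the lowered spatial components pick up a sign
    return p0 * s0 - p1 * sx - p2 * sy - p3 * sz

P = bispinor(2.0, 0.0, 0.0, 2.0)   # massless momentum (E, 0, 0, E)
print(abs(np.linalg.det(P)))       # ~0: det(p) = p^2 = m^2
print(np.linalg.matrix_rank(P))    # 1: factors as an outer product of two spinors
```

The vanishing determinant is exactly what allows the factorization into a dotted and an undotted spinor.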
The full force of the spinor-helicity formalism enables us to calculate analogous gauge-invariant amplitudes directly and essentially non-perturbatively - this is in contrast to the Feynman diagram approach, which involves perturbatively calculating a rather ugly Lorentz tensor which is then contracted with the external polarizations.

• +1 very nice and thoughtful answer. Dec 29, 2020 at 11:59

The most practical traditional method of calculating scattering amplitudes comes from Feynman diagrams: one draws all the relevant diagrams up to the desired order for a given theory, then, using the set of Feynman rules relevant to that theory, assigns a mathematical quantity to each diagram and sums everything. The problem is that, while manageable for small problems, the number of diagrams and intermediate calculations rapidly grows and quickly becomes too difficult to handle by hand. The spinor-helicity formalism is the starting point for studying the structure of amplitudes. By expressing amplitudes in terms of spinor brackets, we can uncover newer methods which expose their simplicity; cancellations are abundant, and calculations become much more tractable, particularly when teamed with recursion relations. You mention that 'the spinor formalism uses bi-spinors rather than vectors'. Even Feynman diagram calculations use polarisation vectors and Dirac spinors (in QCD and QED diagrams, etc.). It is natural to keep track of quantum numbers such as spin or helicity (and colour in QCD), and this is much easier to do in the SH-formalism.
# Specification of Search Query for File List Method in Drive API

Gists

In this report, I would like to describe the current specification of the search query for the file list method in Drive API. Recently, I noticed that the specification of the search query for the file list method in Drive API might have been changed. I thought that knowing such changes of the specification is important for creating applications that use Drive API. So, in this report, I would like to introduce the current specification of the search query.

## Experiments

I carried out the following 5 experiments.

1. Retrieving the file list by searching the filename.
2. Retrieving the file list by searching the mimeType.
3. Retrieving the file list by searching the parent folder ID.
4. Retrieving the file list by searching the trash box.
5. Retrieving the file list by searching the properties.

As the operators for the search query, in the current stage (April 27, 2021), there are contains, =, !=, <=, <, >, >=, in, has. In these experiments, in order to retrieve the file list which has the inputted value, the operators contains, =, in and has were used.

### 1. Retrieving the file list by searching the filename.

According to the official document, when files are searched by filename, name with contains, = and != is used. So, as the combinations for checking this search query, I prepared the following 6 parts.

"name", "contains", "=", "in", "has", "'###filename###'"

When 3 parts like name = 'filename' are taken in order from the above 6 parts, 120 patterns can be considered. When these 120 patterns were checked against the file list method of Drive API v3, it was found that the following search queries could be used to search by filename. I could confirm that these search queries return the same, correct results. For the other patterns, an error like "Invalid Value" occurred.

1. name = 'sampleFilename'
2. name contains 'sampleFilename'
3. name in 'sampleFilename'

The operators used in 1 and 2 follow the official document. But the operator used in 3 is not mentioned in the official document. It seems that this is a hidden search query.

### 2. Retrieving the file list by searching the mimeType.

According to the official document, when files are searched by mimeType, mimeType with contains, = and != is used. So, as the combinations for checking this search query, I prepared the following 6 parts.

"mimeType", "=", "in", "contains", "has", "'###mimeType###'"

When 3 parts like mimeType = 'mimeType' are taken in order from the above 6 parts, 120 patterns can be considered. When these 120 patterns were checked against the file list method of Drive API v3, it was found that the following search queries could be used to search by mimeType. I could confirm that these search queries return the same, correct results. For the other patterns, an error like "Invalid Value" occurred.

1. mimeType = 'mimeType'
2. mimeType contains 'mimeType'
3. mimeType in 'mimeType'

The operators used in 1 and 2 follow the official document. But the operator used in 3 is not mentioned in the official document. It seems that this is a hidden search query.

### 3. Retrieving the file list by searching the parent folder ID.

According to the official document, when files are searched by the parent folder ID, parents and in are used. So, as the combinations for checking this search query, I prepared the following 6 parts.

"parents", "=", "in", "contains", "has", "'###folderId###'"

When 3 parts like '###folderId###' in parents are taken in order from the above 6 parts, 120 patterns can be considered.
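The figure of 120 patterns in each experiment comes from taking 3 of the 6 parts in order: 6 × 5 × 4 = 120. A quick sketch of how the candidate query strings can be generated (illustrative; this is not the original test script):

```python
from itertools import permutations

parts = ["name", "contains", "=", "in", "has", "'###filename###'"]

# Every ordered selection of 3 distinct parts,
# e.g. ("name", "=", "'###filename###'") -> "name = '###filename###'"
patterns = [" ".join(p) for p in permutations(parts, 3)]
print(len(patterns))  # 120
```

Only a handful of these 120 candidate strings are accepted by the q parameter of the file list method, as the experiments show.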
When these 120 patterns were checked against the file list method of Drive API v3, it was found that the following search queries could be used to search the file list in the folder. I could confirm that these search queries return the same, correct results. For the other patterns, an error like "Invalid Value" occurred.

1. '###folderId###' in parents
2. parents = '###folderId###'
3. parents in '###folderId###'

Pattern 1 follows the official document. But patterns 2 and 3 are not mentioned in the official document. It seems that these are hidden search queries.

### 4. Retrieving the file list in the trash box.

According to the official document, when files in the trash are searched, trashed and = are used. So, as the combinations for checking this search query, I prepared the following 6 parts.

"trashed", "=", "in", "contains", "has", "true"

When 3 parts like trashed = true are taken in order from the above 6 parts, 120 patterns can be considered. When these 120 patterns were checked against the file list method of Drive API v3, it was found that the following search queries could be used to search the file list in the trash box. I could confirm that these search queries return the same, correct results. For the other patterns, an error like "Invalid Value" occurred.

1. trashed = true
2. trashed in true

Pattern 1 follows the official document. But pattern 2 is not mentioned in the official document. It seems that this is a hidden search query.

### 5. Retrieving the file list by searching properties.

According to the official document, when files are searched by properties, properties and has are used. So, as the combinations for checking this search query, I prepared the following 6 parts.

"properties", "=", "in", "contains", "has", "{ key='key' and value='value' }"

When 3 parts like properties has { key='key' and value='value' } are taken from the above 6 parts, 120 patterns can be considered. When these 120 patterns were checked against the file list method of Drive API v3, it was found that the following search query could be used to search the file list by properties. I could confirm that this search query returns the correct results. For the other patterns, an error like "Invalid Value" occurred.

1. properties has { key='key' and value='value' }

In this case, only pattern 1 worked. This follows the official document.

## Summary

• From the above results, it was found that the operators = and in can be used as the same operator. It was also found that the operator has can be used for properties and appProperties.

## References

As references, the official documents for the search query are as follows.
# Thread: How To Work Out Integrals? 1. ## How To Work Out Integrals? $\displaystyle \int \frac{2}{x^5}$ How would I work this out? 2. Hello, Originally Posted by OhWhen $\displaystyle \int \frac{2}{x^5}$ How would I work this out? $\displaystyle \int a \cdot f=a \cdot \int f$, for any constant a. $\displaystyle \int x^n ~dx=\frac{x^{n+1}}{n+1}+C$, for any real number n, not equal to -1. Here, you have : $\displaystyle \int \frac{2}{x^5} ~dx=\int 2 \cdot x^{-5} ~dx$ (rule of exponents) Now apply the two formulae above.
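Applying the two formulae gives $\int 2x^{-5}\,dx = 2\cdot\frac{x^{-4}}{-4} + C = -\frac{1}{2x^4} + C$. A quick numerical sanity check (a sketch; the function names are just for illustration) differentiates the candidate antiderivative with a central finite difference and compares it to the integrand:

```python
def f(x):
    return 2 / x**5            # the integrand

def F(x):
    return -1 / (2 * x**4)     # antiderivative from the power rule (constant dropped)

# F'(x) should reproduce f(x) at every sample point
h = 1e-6
for x in (0.5, 1.0, 2.0):
    approx = (F(x + h) - F(x - h)) / (2 * h)
    print(round(approx, 4), round(f(x), 4))
```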
# A simple maximum

Calculus Level 2

Let $x$ be a positive real number. If $f(x) = \dfrac{125x}{(15+x)^{2}}$, what value of $x$ maximizes $f(x)$?
Can you walk through the steps for balancing "Cr"_2"O"_7^(2-) + "SO"_3^(2-) + "H"^+ → "Cr"^(3+) + "SO"_4^(2-) + "H"_2"O"? Oct 22, 2015 Here are the detailed steps. Explanation: $\text{Cr"_2"O"_7^(2-) + "SO"_3^(2-) + "H"^+ → "Cr"^(3+) + "SO"_4^(2-) + "H"_2"O}$ 1. Identify the oxidation number of every atom. Left hand side: $\text{Cr = +6}$; $\text{O = -2}$; $\text{S = +4}$; $\text{H = +1}$ Right hand side: $\text{Cr = +3}$; $\text{S = +6}$; $\text{O = -2}$; $\text{H = +1}$ 2. Determine the change in oxidation number for each atom that changes. $\text{Cr: +6 → +3}$; Change = -3 $\text{S: +4 → +6}$; Change = +2 3. Make the total increase in oxidation number equal to the total decrease in oxidation number. We need 2 atoms of $\text{Cr}$ for every 3 atoms of $\text{S}$. This gives us total changes of -6 and +6. 4. Put the appropriate coefficients in front of the formulas containing those atoms. $\textcolor{red}{1} \text{Cr"_2"O"_7^(2-) + color(red)(3)"SO"_3^(2-) + "H"^+ → color(red)(2)"Cr"^(3+) + color(red)(3)"SO"_4^(2-) + "H"_2"O}$ 5. Balance all remaining atoms other than $\text{H}$ and $\text{O}$. Done. 6. Balance O. We have fixed 16 atoms of $\text{O}$ on the left, so we need 16 atoms of $\text{O}$ on the right. And we have fixed 12 atoms of $\text{O}$ on the right, so we need 4 more. Put a $\textcolor{b l u e}{4}$ in front of the $\text{H"_2"O}$. $\textcolor{red}{1} \text{Cr"_2"O"_7^(2-) + color(red)(3)"SO"_3^(2-) + "H"^+ → color(red)(2)"Cr"^(3+) + color(red)(3)"SO"_4^(2-) + color(blue)(4)"H"_2"O}$ 7. Balance H. We have fixed 8 atoms of $\text{H}$ on the right, so we need 8 on the left. Put an $\textcolor{g r e e n}{8}$ in front of the ${\text{H}}^{+}$. $\textcolor{red}{1} \text{Cr"_2"O"_7^(2-) + color(red)(3)"SO"_3^(2-) + color(green)(8)"H"^+ → color(red)(2)"Cr"^(3+) + color(red)(3)"SO"_4^(2-) + color(blue)(4)"H"_2"O}$ 8. Check that everything is balanced. 
(a) Atoms On the left: $\text{2Cr; 16 O; 3 S; 8 H}$ On the right:$\text{2Cr; 3 S; 16 O; 8 H}$ (b) Charge On the left: 0 On the right: 0 Everything checks! The final balanced equation is $\text{Cr"_2"O"_7^(2-) + 3"SO"_3^(2-) + 8"H"^+ → 2"Cr"^(3+) + 3"SO"_4^(2-) + 4"H"_2"O}$
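Step 8 can also be automated. A small sketch (illustrative; the species are hard-coded rather than parsed) that tallies atoms and total charge on each side of the final equation:

```python
from collections import Counter

# (coefficient, {element: count}, charge) for each species
left = [(1, {"Cr": 2, "O": 7}, -2), (3, {"S": 1, "O": 3}, -2), (8, {"H": 1}, +1)]
right = [(2, {"Cr": 1}, +3), (3, {"S": 1, "O": 4}, -2), (4, {"H": 2, "O": 1}, 0)]

def tally(side):
    atoms = Counter()
    charge = 0
    for coeff, formula, q in side:
        for element, n in formula.items():
            atoms[element] += coeff * n
        charge += coeff * q
    return atoms, charge

print(tally(left))   # atom counts and total charge on the left
print(tally(right))  # must match the left side exactly
```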
"cosine law". If a triangle (right triangle or not) has side lengths of "a", "b", and "ac", with angle "C" opposite the side of length "c" then $c^2= a^2+ b^2- 2ab cos(C)$.
# DMGT

ISSN 1234-3099 (print version) ISSN 2083-5892 (electronic version)

# IMPACT FACTOR 2018: 0.741

SCImago Journal Rank (SJR) 2018: 0.763

Rejection Rate (2017-2018): c. 84%

# Discussiones Mathematicae Graph Theory

## THE SUM NUMBER OF d-PARTITE COMPLETE HYPERGRAPHS

Hanns-Martin Teichert
Institute of Mathematics, Medical University of Lübeck
Wallstraße 40, 23560 Lübeck, Germany

## Abstract

A $d$-uniform hypergraph $H$ is a sum hypergraph iff there is a finite $S \subseteq \mathbb{N}^+$ such that $H$ is isomorphic to the hypergraph $\mathcal{H}_d^+(S) = (V,E)$, where $V = S$ and $E = \{\{v_1,\dots,v_d\} : (i \neq j \Rightarrow v_i \neq v_j) \wedge \sum_{i=1}^{d} v_i \in S\}$. For an arbitrary $d$-uniform hypergraph $H$ the sum number $\sigma = \sigma(H)$ is defined to be the minimum number of isolated vertices $w_1,\dots,w_\sigma \notin V$ such that $H \cup \{w_1,\dots,w_\sigma\}$ is a sum hypergraph. In this paper, we prove
$$\sigma\left(K^d_{n_1,\dots,n_d}\right) = 1 + \sum_{i=1}^{d}(n_i-1) + \min\left\{0,\ \left\lceil \frac{1}{2}\left(\sum_{i=1}^{d-1}(n_i-1) - n_d\right) \right\rceil\right\},$$
where $K^d_{n_1,\dots,n_d}$ denotes the $d$-partite complete hypergraph; this generalizes the corresponding result of Hartsfield and Smyth [8] for complete bipartite graphs.

Keywords: sum number, sum hypergraphs, d-partite complete hypergraph.

1991 Mathematics Subject Classification: 05C65, 05C78.

## References

[1] C. Berge, Hypergraphs (North Holland, Amsterdam-New York-Oxford-Tokyo, 1989).
[2] D. Bergstrand, F. Harary, K. Hodges, G. Jennings, L. Kuklinski and J. Wiener, The Sum Number of a Complete Graph, Bull. Malaysian Math. Soc. (Second Series) 12 (1989) 25-28.
[3] Z. Chen, Harary's conjectures on integral sum graphs, Discrete Math. 160 (1996) 241-244, doi: 10.1016/0012-365X(95)00163-Q.
[4] Z. Chen, Integral sum graphs from identification, Discrete Math. 181 (1998) 77-90, doi: 10.1016/S0012-365X(97)00046-0.
[5] M.N. Ellingham, Sum graphs from trees, Ars Comb. 35 (1993) 335-349.
[6] F. Harary, Sum Graphs and Difference Graphs, Congressus Numerantium 72 (1990) 101-108.
[7] F. Harary, Sum Graphs over all the integers, Discrete Math. 124 (1994) 99-105, doi: 10.1016/0012-365X(92)00054-U.
[8] N. Hartsfield and W.F. Smyth, The Sum Number of Complete Bipartite Graphs, in: R. Rees, ed., Graphs and Matrices (Marcel Dekker, New York, 1992) 205-211.
[9] M. Miller, J. Ryan, Slamin, Integral sum numbers of $H_{2,n}$ and $K_{m,m}$, 1997 (to appear).
[10] A. Sharary, Integral sum graphs from complete graphs, cycles and wheels, Arab. Gulf J. Sci. Res. 14 (1) (1996) 1-14.
[11] A. Sharary, Integral sum graphs from caterpillars, 1996 (to appear).
[12] M. Sonntag and H.-M. Teichert, The sum number of hypertrees, 1997 (to appear).
[13] M. Sonntag and H.-M. Teichert, On the sum number and integral sum number of hypertrees and complete hypergraphs, Proc. 3rd Kraków Conf. on Graph Theory, 1997 (to appear).
# Free isn’t freedom

Reading free isn’t freedom, one cannot help but wonder how some businesses (like Google and Facebook) never charge their “users” and yet are among the richest. I wonder who the user really is here.

# Polar Decomposition and Real Powers of a Symmetric Matrix

The main purpose of this post is to prove the polar decomposition theorem for invertible matrices. As an application, we extract some information about the topology of ${SL(2,\mathbb C)}$, namely that ${ SL(2,\mathbb C)\cong S^3\times \mathbb R^3}$. Along the way, we recall a few facts and also define real powers of a positive definite Hermitian matrix. We assume that the matrices below are nonsingular, even though some of the results are true without this assumption.

We first prove that every Hermitian matrix has only real eigenvalues. Notice that with respect to the standard Hermitian inner product ${|v|^2 = v^*v}$ for a column vector ${v}$. In fact, more generally, ${\langle v,w \rangle = v^* w}$.

Proposition 1 Let ${A}$ be a Hermitian matrix, i.e. ${A^*=A}$. Then ${A}$ has only real eigenvalues.

Proof: Over the complex numbers every nonconstant polynomial of degree ${n}$ has ${n}$ roots. Thus, it is clear that ${A}$ has eigenvalues. Let ${\lambda}$ be an eigenvalue of ${A}$ with an eigenvector ${v\neq 0}$, that is, ${Av = \lambda v.}$ Taking the conjugate transpose of both sides, we get

$\displaystyle \begin{array}{rcl} v^*A^* &=& \lambda^* v^* \\ v^*A &=& \lambda^* v^* \\ v^*Av &=& \lambda^* v^*v \\ v^*\lambda v &=& \lambda^* v^*v \\ \lambda |v|^2 &=& \lambda^* |v|^2. \end{array}$

Since ${v\neq 0}$, ${|v|^2\neq 0}$ and we get ${\lambda=\lambda^*}$, which is only possible if ${\lambda}$ is real. $\Box$

Recall that a matrix is called positive definite if ${v^*Av>0}$ for all ${v\neq 0}$. Clearly, such a matrix must be nonsingular.

Proposition 2 If ${A}$ is a positive definite matrix, then it has only positive real eigenvalues.
Proof: As above, let ${\lambda}$ be an eigenvalue of ${A}$ with an eigenvector ${v}$. Then,

$\displaystyle \begin{array}{rcl} Av &=& \lambda v \\ v^*Av &=& v^*\lambda v \\ 0 < v^*Av &=& \lambda v^*v \\ 0 < v^*Av &=& \lambda |v|^2 \end{array}$

Thus, ${\lambda}$ must be positive as well. $\Box$

If ${A}$ is Hermitian, it has an eigenspace decomposition. Here is the sketch of the proof. We apply induction on the dimension ${n}$. It is clear for ${n=1}$. Now, we may assume ${n>1}$. Let ${\lambda}$ and ${v}$ be as above. We consider the one dimensional space ${V}$ generated by ${v}$ and the complementary space ${V^\perp}$. By definition, for ${w\in V^\perp}$, ${\langle w,v\rangle =0 }$, or equivalently ${w^*v=0}$. Thus,

$\displaystyle \begin{array}{rcl} 0 &=& \lambda w^*v \\ &=& w^*\lambda v \\ &=& w^*Av \\ &=& w^*A^*v \\ &=& (Aw)^*v \end{array}$

or, in a more familiar form, ${\langle Aw, v \rangle = 0}$. This means that ${A}$ preserves ${V^\perp}$, which has dimension ${n-1}$. So, by induction, ${V^\perp}$ has an eigenspace decomposition and we are basically done. The reason that this argument is not very precise is that in this post we are using a concrete definition of being Hermitian. So, we also have to argue that somehow the matrix ${A}$ is still Hermitian when restricted to ${V^\perp}$. Of course, using the abstract definition, this is trivial. In fact, it is pretty easy to translate every argument presented here to the abstract one.

Lemma 3 If ${A}$ is Hermitian, then it is diagonalizable by a unitary matrix.

Proof: Since ${A}$ has an eigenspace decomposition, we can choose a basis consisting of eigenvectors only. Furthermore, we may choose those vectors to be unit. Consider the matrix ${U}$ that takes the standard basis to this eigenbasis. Then, it is clear that ${U^{-1}AU}$ is a diagonal matrix. It is also clear that ${U^*U = I}$. $\Box$

Next, we prove the polar decomposition for invertible matrices. In this proof, we also define the square root of a matrix.
Theorem 4 Given an invertible matrix ${A}$, there is a positive definite Hermitian matrix ${P}$ and a unitary matrix ${U}$ such that ${A=PU}$.

Proof: Let ${R=AA^*}$. Clearly, ${R^*=R}$, i.e. ${R}$ is Hermitian. Also, for nonzero ${v}$, ${A^*v}$ is nonzero, thus

$\displaystyle \begin{array}{rcl} 0<\langle A^*v,A^*v\rangle &=& (A^*v)^*(A^*v) \\ &=& v^*AA^*v \\ &=& v^*Rv. \end{array}$

So, ${R}$ is also positive definite. By the above lemma, as ${R}$ is Hermitian, there is a unitary matrix ${K}$ which diagonalizes ${R}$, i.e. ${K^{-1}RK=D}$. Since ${K}$ is unitary, ${K^{-1}=K^*}$ and hence, ${K^*RK=D}$. Also, since ${R}$ is positive definite, all the eigenvalues of ${R}$ and hence of ${D}$ are positive, by the above proposition. So, we define ${\sqrt{R}}$ to be ${K\sqrt{D}K^*}$, where ${\sqrt{D}}$ is defined by taking the square root of each entry on the diagonal. In fact, using this idea, we can define any power of ${R}$ by ${R^p=KD^pK^*}$. Note that ${D^p}$ is also diagonal with positive diagonal entries. Hence, in particular, it is Hermitian. Clearly, a diagonal matrix with positive diagonal entries is positive definite. So, ${\sqrt{D}}$ is positive definite.

We set ${P=\sqrt{R}}$. It is easy to check that ${P}$ is positive definite:

$\displaystyle \begin{array}{rcl} x^*Px &=& x^*K\sqrt{D}K^*x \\ &=& (K^*x)^*\sqrt{D}(K^*x) > 0 \end{array}$

as ${\sqrt{D}}$ is positive definite. Finally, we let ${U=P^{-1}A}$. Of course, here ${P}$ is invertible because it is a product of nonsingular matrices. Now, we just need to check that ${U^*U=I}$:

$\displaystyle \begin{array}{rcl} U^*U &=& (P^{-1}A)^*(P^{-1}A) \\ &=& A^*(P^{-1})^*P^{-1}A \\ &=& A^*((K\sqrt{D}K^*)^{-1})^*(K\sqrt{D}K^*)^{-1}A \\ &=& A^*(K(\sqrt{D})^{-1}K^*)^*K(\sqrt{D})^{-1}K^*A \\ &=& A^*K(D^{-1/2})^{*}K^*KD^{-1/2}K^*A \\ &=& A^*K(D^{-1/2})^{*}(D^{-1/2})K^*A \\ &=& A^*K(D^{-1/2})D^{-1/2}K^*A \\ &=& A^*KD^{-1}K^*A \\ &=& A^*R^{-1}A \\ &=& A^*(AA^*)^{-1}A \\ &=& A^*(A^*)^{-1}A^{-1}A \\ &=& I \end{array}$

which was to be shown.
$\Box$

Now, we will apply our knowledge to understand the topology of ${SL(2,\mathbb C)}$. Given ${A\in SL(2,\mathbb C)}$, it is clear from our proof that we can choose the positive definite Hermitian part so that ${\det(P)=1}$. Hence, ${\det(U)=1}$; in other words, ${U}$ is an element of ${SU(2)}$. Again, in our proof we have explained that in fact you may take any power of a positive definite Hermitian matrix. So we can define a path of matrices ${A_t}$ by ${P^tU}$. We see that ${A_0=U}$ and ${A_1 = A}$. This defines a deformation retract of ${SL(2,\mathbb C)}$ onto ${SU(2)}$.

It is easy to see that the space of ${2\times 2}$ positive definite Hermitian matrices of determinant 1 is homeomorphic to ${\mathbb R^3}$. More concretely, to write down any such matrix, we need ${a\in \mathbb R^+}$; ${b,c\in \mathbb R}$. Also, we set ${d = (b^2+c^2+1)/a}$. Then, $\begin{pmatrix} a & b+ic \\ b-ic & d \end{pmatrix}$ is positive definite Hermitian of determinant 1. It is also not very hard to check that the identity matrix is the only matrix in ${SU(2)}$ which is also positive definite Hermitian of determinant 1. Thus, ${SL(2,\mathbb C)\cong SU(2)\times \mathbb R^3}$. We leave it as an exercise to prove that ${SU(2)\cong S^3}$.

# G-Structures 2

In this post, we briefly introduce the Lie group ${G_2}$, ${G_2}$-structures on a manifold and a ${G_2}$-manifold. Let us denote the three form ${dx^i\wedge dx^j \wedge dx^k}$ on ${{\mathbb R}^7}$ by ${dx^{ijk}}$. We set ${\phi_0 = dx^{123}+ dx^{145}-dx^{167}+dx^{246}+dx^{257}+dx^{347}-dx^{356}}$. This three form is non-degenerate in the sense that whenever we have two linearly independent vectors in ${{\mathbb R}^7}$, we can find a third vector such that the evaluation of ${\phi_0}$ on these vectors is non-zero. We define ${G_2 = \left\{ M\in GL(7,{\mathbb R}) \big| M^*\phi_0 = \phi_0 \right\}}$. One may prove that ${G_2}$ is a ${14}$-dimensional Lie subgroup of ${SO(7)}$.
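As a quick sanity check (an illustration, not part of the original post), one can build ${\phi_0}$ as a totally antisymmetric array from the seven terms above and verify a few of its values numerically; indices below are 0-based:

```python
import numpy as np
from itertools import permutations

# The seven terms of phi_0 = dx^123 + dx^145 - dx^167 + dx^246 + dx^257
#                            + dx^347 - dx^356, rewritten with 0-based axes.
terms = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), -1),
         ((1, 3, 5), 1), ((1, 4, 6), 1), ((2, 3, 6), 1), ((2, 4, 5), -1)]

phi = np.zeros((7, 7, 7))
for (i, j, k), s in terms:
    for p in permutations((i, j, k)):
        # even permutations of (i, j, k) keep the sign, odd ones flip it
        even = p in [(i, j, k), (j, k, i), (k, i, j)]
        phi[p] = s if even else -s

# Total antisymmetry: swapping any two arguments flips the sign.
assert np.allclose(phi, -phi.transpose(1, 0, 2))
assert np.allclose(phi, -phi.transpose(0, 2, 1))

# Sample values: phi_0(x_1, x_2, x_3) = 1 and phi_0(x_1, x_6, x_7) = -1.
assert phi[0, 1, 2] == 1 and phi[0, 5, 6] == -1
```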
Let us give a different description of ${\phi_0}$, so that it does not look completely arbitrary. ${7}$ is the highest dimension in which one may define a cross product. After we identify ${{\mathbb R}^8}$ with the octonions ${\mathbb O}$ equipped with some octonion product, for any two imaginary octonions ${x,y \in {\mathbb R}^7 \cong im(\mathbb O)}$ we define the cross product to be

$\displaystyle \begin{array}{rcl} x \times y = \frac{1}{2} [x,y] = \frac{1}{2}(xy-yx). \end{array}$

Then, we may define the ${3}$-form on ${{\mathbb R}^7}$ by ${\phi_0(x,y,z) = \left< x \times y, z\right>}$, where the inner product is the standard inner product. Of course, there is a choice of octonion product and hence ${\phi_0}$ may be different from the one we explicitly wrote above. However, we show that they are equivalent using the right octonion product. To show they are equivalent, first, we prove that ${\left< x\times y, z\right>}$ is indeed a ${3}$-form and then, evaluate it on the basis elements to see how the octonion product should be defined.

Using ${im(xy)=-im(yx)}$, we obtain ${x\times y = im(xy)}$ and thus, the above definition is equivalent to

$\displaystyle \begin{array}{rcl} \phi_0(x,y,z) &=& \left< xy, z\right>. \end{array}$

To prove that ${\phi_0}$ is alternating, it is enough to prove ${\phi_0(x,x,y)=0, \phi_0(x,y,x)=0}$ and ${\phi_0(y,x,x)=0}$, as we may replace ${x}$ by ${x+z}$ to get the desired equalities. However, also note that ${x \times y = - y\times x}$. Therefore, the first two equalities are enough. It is clear that ${x\times x = 0}$. Hence, we have the first equality. Furthermore,

$\displaystyle \begin{array}{rcl} \phi_0(x,y,x) &=& \left< xy, x\right> \\ &=& |x|^2\left< y,1\right> \\ &=& 0. \end{array}$

Thus, we have shown that ${\phi_0}$ is alternating. Our next goal is to define the octonion product. Clearly, from the explicit definition, we want ${\phi_0(x_1,x_2,x_3)=1}$. In other words, ${\left< x_1x_2, x_3\right> = 1}$. So, a natural choice for the product ${x_1x_2}$ is ${x_3}$.
Similarly, we can choose ${x_1x_4=x_5}$, ${x_1x_6=-x_7}$, ${x_2x_4=x_6}$, ${x_2x_5=x_7}$, ${x_3x_4=x_7}$ and ${x_3x_5=-x_6}$. Of course, as we are describing octonion multiplication, we should also define the multiplication with the ${8}$th generator, but it is the generator of the ${Re(\mathbb O)={\mathbb R}}$ part. So, it is just the trivial multiplication, i.e. the multiplication coming from the vector space structure. We do not show that this indeed defines an octonion product. Next, we need to show that the two definitions of ${\phi_0}$ are equal and to do that, it is enough to evaluate on the basis elements. It is an easy computation which we omit.

Note that this definition makes an earlier claim more plausible, namely that ${\phi_0}$ is non-degenerate. Because ${\phi_0(x,y,x\times y) = \left< x\times y, x\times y\right> = |x\times y|^2}$, we only need to show that ${x\times y}$ is non-zero for linearly independent ${x}$ and ${y}$. However, that is a built-in property for a cross product.

A ${G_2}$-structure on a manifold ${M}$ can be defined as a subbundle of the frame bundle of ${M}$ whose fibers are isomorphic to ${G_2}$. However, there is an equivalent, more convenient definition. In fact, this definition will follow the scheme of the previous post. More explicitly, since ${G_2}$ fixes ${\phi_0}$ on ${{\mathbb R}^7}$, we may pull it back to each space ${T_pM}$ to have a three form ${\phi}$ on the manifold and similarly, if we have such a three form on the manifold, then we may find a subbundle of the frame bundle whose fibers are ${G_2}$. So, having a three form ${\phi}$ on ${M}$ such that for any point ${p\in M}$, ${\phi_p}$ and ${\phi_0}$ can be identified by an isomorphism between ${{\mathbb R}^7}$ and ${T_pM}$, means that we can find a ${G_2}$-structure on ${M}$. By an abuse of notation, we call ${(M,\phi)}$ a manifold with a ${G_2}$-structure.
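Before moving on, the octonion products chosen above can be cross-checked against the explicit three form via ${\phi_0(x,y,z) = \left< x \times y, z\right>}$: contracting the first two slots of ${\phi_0}$ recovers the cross product. A small numpy sketch (0-based indices, an illustration only):

```python
import numpy as np
from itertools import permutations

# Rebuild phi_0 (0-based axes) as a totally antisymmetric 7x7x7 array.
terms = [((0, 1, 2), 1), ((0, 3, 4), 1), ((0, 5, 6), -1),
         ((1, 3, 5), 1), ((1, 4, 6), 1), ((2, 3, 6), 1), ((2, 4, 5), -1)]
phi = np.zeros((7, 7, 7))
for (i, j, k), s in terms:
    for p in permutations((i, j, k)):
        phi[p] = s if p in [(i, j, k), (j, k, i), (k, i, j)] else -s

def cross7(x, y):
    # phi_0(x, y, z) = <x cross y, z> with the standard inner product
    return np.einsum('ijk,i,j->k', phi, x, y)

e = np.eye(7)
assert np.allclose(cross7(e[0], e[1]), e[2])    # x1 x2 = x3
assert np.allclose(cross7(e[0], e[3]), e[4])    # x1 x4 = x5
assert np.allclose(cross7(e[0], e[5]), -e[6])   # x1 x6 = -x7
assert np.allclose(cross7(e[2], e[3]), e[6])    # x3 x4 = x7
assert np.allclose(cross7(e[2], e[4]), -e[5])   # x3 x5 = -x6
```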
Furthermore, since ${G_2}$ is a subgroup of ${SO(7)}$, it also fixes the standard metric and orientation on ${{\mathbb R}^7}$, giving rise to a Riemannian metric and orientation on the manifold. This immediately implies that a non-orientable manifold does not admit a ${G_2}$-structure.

Next, we introduce ${G_2}$-manifolds. Given a manifold ${M}$ with a ${G_2}$-structure ${\phi}$ and the induced metric ${g}$, let ${\nabla}$ be the Levi Civita connection on ${(M,g)}$. If ${\nabla \phi =0}$, we call ${\phi}$ a torsion free ${G_2}$-structure. A manifold with a torsion free ${G_2}$-structure is called a ${G_2}$-manifold. In fact, there are a number of ways to define ${G_2}$-manifolds, as we can see in the following proposition.

Proposition 1 Let ${(M^7,\phi)}$ be a ${G_2}$-structure on ${M}$ with the induced metric ${g}$ and the Levi Civita connection ${\nabla}$. Then, the following are equivalent:

1. ${\nabla \phi = 0}$
2. ${Hol(g) \subseteq G_2}$
3. ${d\phi = 0}$ and ${d^*\phi = 0}$.

If any one of the conditions of the proposition holds (and hence, all), we call ${M}$ a ${G_2}$-manifold. The first example of a metric with ${G_2}$ holonomy was given by Bryant. The metric in his example is incomplete. Later, Bryant and Salamon constructed complete metrics with ${G_2}$ holonomy on non-compact manifolds. Then, Joyce constructed complete examples on compact manifolds.

# G-Structures

Let ${M}$ be a smooth ${n}$-manifold and ${p}$ be a point in ${M}$. Consider the set ${ S_p}$ of all linear isomorphisms ${L_p:T_pM\rightarrow {\mathbb R}^n}$ between the tangent space at ${p}$ and ${{\mathbb R}^n}$. Note that there is a natural left action of ${GL(n,{\mathbb R})}$ on ${S_p}$. Since this action may be seen as function composition, we denote the action by ${\circ}$, though we will quite often drop the notation altogether, hoping that it is clear. The disjoint union ${F = \sqcup_p S_p}$ is called the frame bundle of ${M}$.
(Of course, we need to impose more conditions on ${F}$, but we will not go into details in this post.) The action of ${GL(n,{\mathbb R})}$ on ${S_p}$ induces a natural action on ${F}$. It is easy to define a bijection between any fiber of ${F}$ and ${GL(n,{\mathbb R})}$. Using the bijection, we may define a group multiplication on each fiber of ${F}$, which will, in turn, make the fiber isomorphic to ${GL(n,{\mathbb R})}$, trivially. In general, this bijection is not canonical, as we will see below.

First, we fix an isomorphism ${L_p}$ in ${S_p}$. Then, we send ${K_p \in S_p}$ to ${K_p \circ L_p^{-1} \in GL(n,{\mathbb R})}$. Clearly, this map is injective and ${L_p}$ is sent to the identity matrix under this identification. Also, for any ${N\in GL(n,{\mathbb R})}$, ${N \circ L_p \in S_p}$ and ${N\circ L_p \circ L_p^{-1} = N}$, i.e. the identification is onto. So, we have a bijection. As we can see, once an identity element ${L_p}$ is fixed, the fiber ${S_p}$ becomes a group isomorphic to ${GL(n,{\mathbb R})}$. In other words, ${S_p}$ is a ${GL(n,{\mathbb R})}$-torsor.

Let ${G}$ be a Lie subgroup of ${GL(n,{\mathbb R})}$ and ${P}$ be a subbundle of ${F}$ whose fibers (which we still denote by ${S_p}$) are isomorphic to ${G}$ in the above sense. Then, ${P}$ is called a ${G}$-structure on ${M}$. Clearly, the frame bundle ${F}$ is a ${GL(n,{\mathbb R})}$-structure on ${M}$. Next, we discuss two examples of proper subbundles inducing various structures on ${M}$.

In our first example, we consider ${G}$ to be the orthogonal group ${O(n)}$. Recall that the standard Euclidean metric ${g_0}$ on ${{\mathbb R}^n}$ is fixed by ${O(n)}$. In other words, for any ${N\in O(n)}$, ${N^*g_0 =g_0}$. We can use this property together with ${P}$ to define a Riemannian metric on ${M}$. Let ${p\in M}$, ${L_p\in S_p}$ and define the metric ${g_p}$ as the pullback ${L_p^*(g_0)}$. We need to show that this definition is independent of the choice of ${L_p}$.
Let ${K_p\in S_p}$; then ${K_p\circ L_p^{-1} \in O(n)}$ as ${P}$ is an ${O(n)}$-structure. Hence, ${K_p^{*}(g_0) = (K_p\circ L_p^{-1}\circ L_p)^* (g_0)= L_p^*\circ (K_p\circ L_p^{-1})^* (g_0) = L_p^* (g_0)}$. So, we can choose any isomorphism in the fiber in order to define ${g_p}$. So, we see that an ${O(n)}$-structure gives us a Riemannian metric.

Next, we will go the other way around, i.e. given a Riemannian metric ${g}$, we construct an ${O(n)}$-structure on ${M}$. Each tangent space ${T_pM}$ is equipped with an inner product and we consider ${{\mathbb R}^n}$ equipped with the standard inner product. We define a fiber of ${P}$ to be the set of linear isometries between ${T_pM}$ and ${{\mathbb R}^n}$. Next, we need to check that a fiber is isomorphic to ${O(n)}$. Again, first, we fix an isomorphism ${L_p}$. Then, given another isometry ${K_p}$ from ${T_pM}$ to ${{\mathbb R}^n}$, ${K_p\circ L^{-1}_p}$ is an isometry from ${{\mathbb R}^n}$ to itself, i.e. ${K_p \circ L^{-1}_p \in O(n)}$. Also, for ${N\in O(n)}$, ${N\circ L_p}$ is an isometry from ${T_pM}$ to ${{\mathbb R}^n}$ and ${N \circ L_p \circ L_p^{-1} = N\in O(n)}$. Hence, as above, we have an isomorphism. It is easy to see that this correspondence between ${O(n)}$-structures and Riemannian metrics is one to one.

This example can be generalized easily. Any structure on ${{\mathbb R}^n}$ which is fixed by a Lie subgroup ${G}$ of ${GL(n,{\mathbb R})}$ can be carried to a manifold which admits a ${G}$-structure. In fact, our second example will be of this type, again. We consider the correspondence between an almost complex structure and a ${GL(m,{\mathbb C})}$-structure where ${n=2m}$. Before we discuss the correspondence, let us clarify a few things.
We view ${GL(m,{\mathbb C})}$ as a subgroup of ${GL(2m,{\mathbb R})}$ using the monomorphism

$\displaystyle \begin{array}{rcl} N \mapsto \begin{pmatrix} Re(N) & -Im(N) \\ Im(N) & Re(N) \end{pmatrix} \end{array}$

Let ${J_0:{\mathbb C}^m\rightarrow {\mathbb C}^m}$ denote the action of ${i}$ on ${{\mathbb C}^m}$. In other words, ${J_0 = iI}$, where ${I}$ denotes the ${m\times m}$ identity matrix. Or, using the monomorphism defined above,

$\displaystyle \begin{array}{rcl} J_0 = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}, \end{array}$

in matrix block form, as a real ${n\times n}$ matrix. Of course, for any matrix ${N \in GL(m,{\mathbb C})}$, ${J_0 N = iN=Ni=NJ_0}$. Equivalently, we have ${N^{-1}J_0N=J_0}$. On the other hand, let ${N = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in GL(n,{\mathbb R})}$. Then,

$\displaystyle \begin{array}{rcl} J_0 N &=& \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} \\ &=& \begin{pmatrix} -C & -D \\ A & B \end{pmatrix} \end{array}$

and

$\displaystyle \begin{array}{rcl} N J_0 &=& \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix} \\ &=& \begin{pmatrix} B & -A \\ D & -C \end{pmatrix}. \end{array}$

Therefore, ${NJ_0 = J_0N}$ if and only if ${A=D}$ and ${C = -B}$. Hence, we see that a real matrix ${N}$ can be identified with a complex matrix if and only if ${NJ_0=J_0N}$.

Now, we go back to the correspondence between a ${GL(m,{\mathbb C})}$-structure and an almost complex structure. First, let us assume that we have a ${GL(m,{\mathbb C})}$-structure and construct an almost complex structure ${J:TM\rightarrow TM}$. Let ${L_p\in S_p}$. Then, we define ${J:T_pM\rightarrow T_pM}$ by ${J = L_p^{-1} J_0 L_p}$. Next, we show that ${J}$ is well defined. Let ${K_p\in S_p}$. Since ${L_p K_p^{-1} \in GL(m,{\mathbb C})}$, we have ${K_pL_p^{-1} J_0 L_pK_p^{-1} = J_0}$.
Hence,

$\displaystyle \begin{array}{rcl} J &=& L_p^{-1}J_0L_p \\ &=& K_p^{-1}K_pL_p^{-1}J_0L_pK_p^{-1}K_p \\ &=& K_p^{-1}J_0K_p. \end{array}$

Moreover, clearly, we have ${J^2 = -I}$. Thus, we have constructed an almost complex structure.

Next, we go in the other direction. Given an almost complex structure ${J}$, we form a subbundle ${P}$ of ${F}$ which consists of linear isomorphisms ${L_p}$ that satisfy ${L_pJ=J_0L_p}$. Take a basis of ${T_pM}$ of the form ${\left\{ e_1,\dots,e_m,Je_1,\dots,Je_m \right\}}$ and then define ${L_p(e_i) = (0,\dots,0,1,0,\dots,0)}$ with ${1}$ in the ${i^{th}}$ position and ${L_p(Je_i)=(0,\dots,0,1,0,\dots,0)}$ with ${1}$ in the ${m+i^{th}}$ position. It is easy to verify that ${L_pJ=J_0L_p}$. Hence, the subbundle is non-empty. Next, we want to show that the fibers are isomorphic to ${GL(m,{\mathbb C})}$. Let ${K_p}$ be another isomorphism satisfying ${K_pJ = J_0 K_p}$. Then, ${J = K_p^{-1}J_0K_p}$. Thus, ${L_p K_p^{-1}J_0K_p = J_0L_p}$, or equivalently, ${L_p K_p^{-1}J_0 = J_0 L_p K_p^{-1}}$. Therefore, ${L_pK_p^{-1}\in GL(m,{\mathbb C})}$ by the above remarks. Also, given ${N \in GL(m,{\mathbb C})}$, ${NL_p J = NJ_0L_p = J_0NL_p}$, that is, ${NL_p}$ is also an element of ${P}$. So, the fibers are isomorphic to ${GL(m,{\mathbb C})}$.

Our third example will be a ${G_2}$-structure. However, I want to have a more detailed discussion of ${G_2}$-structures before I present it in this context. So, I will include it in a future post.

# Gronwall Inequality

There are a number of different statements of Gronwall’s inequality. In this post, we will consider only one of them, perhaps the weakest of all.

Proposition 1 Let ${f(t)}$ be a non-negative continuous function on ${\left[ a,b \right]}$ such that there are positive constants ${C}$ and ${K}$ satisfying

$\displaystyle \begin{array}{rcl} f(t)\le C + K\int_{a}^{t}f(s)ds \end{array}$

for all ${t\in\left[ a,b \right]}$.
Then,

$\displaystyle \begin{array}{rcl} f(t)\le Ce^{K(t-a)} \end{array}$

for all ${t \in \left[ a,b \right]}$.

Proof: Define ${U(t) = C + K\int_{a}^{t}f(s)ds}$. Note that, by definition, ${f(t)\le U(t)}$ and ${U}$ is a strictly positive differentiable function. Also, we have ${U'(t) = Kf(t)\le KU(t)}$. In other words, ${\frac{U'(t)}{U(t)}\le K}$, which means the relative rate of change of ${U}$ is at most ${K}$. Hence, the growth of ${U}$ is slower than an exponential function with relative rate of change ${K}$. That is, ${U(t) \le U(a) e^{K(t-a)}}$ (if you did not like this reasoning, you may integrate both sides of the previous inequality from ${a}$ to ${t}$). So, we have the desired result ${f(t) \le U(t) \le U(a)e^{K(t-a)}= Ce^{K(t-a)}}$. $\Box$

# Some Comments on Linear Complex Structures via an Example

Consider a real vector space ${V}$ generated by ${\partial_ x}$ and ${\partial_ y}$. There is an obvious identification ${L:V\rightarrow\mathbb C}$ of ${V}$ with the complex plane ${\mathbb C}$ such that ${L(\partial_ x) = 1}$ and ${L(\partial_ y) = i}$. Define a linear complex structure on ${V}$ by setting ${J(\partial_ x) = \partial_ y}$ and ${J(\partial_ y)=-\partial_ x}$. With the identification mentioned above, since ${\mathbb C}$ is a complex vector space, ${V}$ can be viewed as a complex vector space, too. Furthermore, the action of ${J}$ can be viewed as multiplication by ${i}$ on ${V}$, but we will see below why this view does not extend further.

Next, we complexify ${V}$ by taking a tensor product with ${\mathbb C}$ over ${\mathbb R}$. We know that the (real) dimension of ${V_{\mathbb C} = V\otimes \mathbb C}$ is ${4}$ and it is generated by ${\partial_ x\otimes 1, \partial_ y \otimes 1, \partial_ x \otimes i}$ and ${\partial_ y \otimes i}$. We can view ${V_{\mathbb C}}$ as a complex vector space and, for notational simplicity, write ${v = v \otimes 1}$ and ${iv = v \otimes i}$.
Note that over the complex numbers ${V_{\mathbb C}}$ is ${2}$-dimensional and generated by ${\partial_ x}$ and ${\partial_ y}$. However, this is not the “natural” basis to work with, as we will see.

Next, we extend (complexify) ${J:V\rightarrow V}$ to get ${J_{\mathbb C}:V_{\mathbb C}\rightarrow V_{\mathbb C}}$, which we will still denote by ${J}$ for notational simplicity. Let ${\partial_ z = \frac{1}{2}(\partial_ x - i \partial_ y)}$ and ${\partial_ {\bar z} = \frac{1}{2}(\partial_ x + i\partial_ y)}$. Now, we see that

$\displaystyle \begin{array}{rcl} J(\partial_ z) &=& \frac{1}{2}\left( J(\partial_ x) -i J(\partial_ y) \right) \\ &=& \frac{1}{2}\left( \partial_ y +i \partial_ x \right) \\ &=& i\frac{1}{2}\left(\partial_ x - i \partial_ y \right) \\ &=& i \partial_ z \end{array}$

and also,

$\displaystyle \begin{array}{rcl} J(\partial_ {\bar z}) &=& \frac{1}{2}\left( J(\partial_ x) +i J(\partial_ y) \right) \\ &=& \frac{1}{2}\left( \partial_ y -i \partial_ x \right) \\ &=& -i\frac{1}{2}\left(\partial_ x + i \partial_ y \right) \\ &=& -i \partial_ {\bar z}. \end{array}$

This means that ${\partial_ z}$ is an eigenvector of ${J}$ corresponding to the eigenvalue ${i}$. Similarly, ${\partial_ {\bar z}}$ is an eigenvector corresponding to the eigenvalue ${-i}$. So, the set ${\left\{ \partial_ z, \partial_ {\bar z} \right\}}$ is an eigenbasis for ${J}$ and it gives us an eigenspace decomposition of ${V_{\mathbb C}}$. Computing ${J}$ using this basis is clearly more convenient and hence, this is a “natural” choice as a basis. Furthermore, from this viewpoint, it is also clear why the action of ${J}$ cannot be viewed as multiplication by ${i}$ any more.

The following one line of Haskell code is a very simple version of the Unix `cat` utility.

```haskell
main = interact id
```

# Perception of Difficulty: Change Colors

Disclaimer: this post is opinion based.

A few years ago, I tried to write some JS code to draw fractals. However, I was not able to come up with an original and beautiful result.
So, I modified the code a little and made it draw a bunch of lines with random colors (and repeat that every second). You can view it here. Back then, I did not see anything particularly nice about it, but after a couple of years, when I looked at it again, it made me think about our perception of difficulty.

Let me emphasize that it draws exactly the same lines at every run; it is just the colors that are random (unless, of course, I have made an error). So, in theory, no matter which set of colors you start with, if you diligently trace the lines, you can understand the pattern of the lines. Here is a sample picture:

I do not know about you, but the above picture does not give me a lot of hints about its pattern, other than a possible point of symmetry. So, I would say that it is hard to recognize a pattern, if it exists at all. However, if you change the colors a little bit, a spiral in the middle becomes clearly visible. (Click on the image to see a bigger version.) Tweaking a little bit more, we also get another coloring, which makes it clearer that these lines are polygonal chains approximating some circle-like path. Well, it is an approximation of a spiral.

As you saw, it became a lot easier to notice a pattern when you changed the way you look at the lines. A quite similar situation happens often in my daily life: I spend a lot of hours working on a seemingly hard mathematical problem, only to realize that I have been using the wrong “colors”. Of course, in general, you do not know which “colors” to start with, but if you find something challenging, it is often rewarding to change “colors”.
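As a final aside, the construction in the proof of the polar decomposition post above is easy to sanity-check numerically. A small numpy sketch (an illustration only), following the proof step by step: diagonalize ${R = AA^*}$, take the square root, peel off the unitary factor.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random complex 2x2 matrix; with probability one it is invertible.
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

R = A @ A.conj().T                       # Hermitian and positive definite
eigvals, K = np.linalg.eigh(R)           # K is unitary since R is Hermitian
P = K @ np.diag(np.sqrt(eigvals)) @ K.conj().T   # P = sqrt(R)
U = np.linalg.inv(P) @ A                 # the unitary factor

assert np.allclose(P, P.conj().T)               # P is Hermitian
assert np.all(np.linalg.eigvalsh(P) > 0)        # P is positive definite
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.allclose(P @ U, A)                    # A = P U
```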
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-17-second-order-differential-equations-17-4-series-solutions-17-4-exercises-page-1220/1
## Calculus 8th Edition

$y=C_{0} e^{x}$

We have $y=\Sigma^\infty_{n=0}C_{n} x^{n}$ and $y'=\Sigma^\infty_{n=0}(n+1)C_{n+1} x^{n}$.

The equation $y' - y = 0$ becomes: $\Sigma^\infty_{n=0}[(n+1)C_{n+1} - C_{n}]x^{n}=0$, so $(n+1)C_{n+1}=C_{n}$ for every $n$.

$C_{1} = C_{0}$ $(n=0)$; $C_{2} = \dfrac{C_{0}}{2}$ $(n=1)$; $C_{3} = (\dfrac{1}{3}) (\dfrac{1}{2}) C_{0}$ $(n=2)$.

This gives: $C_{n} = \dfrac{C_0}{n!}$

Use Formula: $\Sigma^\infty_{n=0} \dfrac{x^n}{n!}=e^{x}$

Hence, we have $y=C_{0} e^{x}$
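The recurrence behind this solution is easy to check numerically; a short sketch with $C_0 = 1$:

```python
import math

# Build the coefficients from the recurrence (n + 1) C_{n+1} = C_n, with C_0 = 1.
C = [1.0]
for n in range(20):
    C.append(C[n] / (n + 1))

# Each C_n equals 1/n!, so the power series is the Taylor series of e^x.
assert all(abs(C[n] - 1 / math.factorial(n)) < 1e-12 for n in range(21))

x = 0.7
partial_sum = sum(C[n] * x**n for n in range(21))
assert abs(partial_sum - math.exp(x)) < 1e-10
```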
https://www.physicsforums.com/threads/application-of-entropy-gibbs-and-helmholtz.661865/
# Application of entropy, Gibbs and Helmholtz

1. ### Outrageous

To see whether a change will occur spontaneously or not:

If a system is isolated, we look at the entropy.
If a system is at constant temperature and volume, we look at the Helmholtz free energy.
If a system is at constant temperature and pressure, we look at the Gibbs free energy.

Correct? Thank you
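To make the constant temperature and pressure case concrete, here is a tiny numerical illustration; the enthalpy and entropy of melting ice below are rough textbook values, used only as an example:

```python
# Approximate molar values for ice -> water: dH ~ +6010 J/mol, dS ~ +22.0 J/(mol K).
dH = 6010.0   # J/mol
dS = 22.0     # J/(mol K)

def delta_G(T):
    """Gibbs criterion at constant T and P: spontaneous iff dG = dH - T dS < 0."""
    return dH - T * dS

assert delta_G(263.15) > 0   # at -10 C, melting is not spontaneous
assert delta_G(283.15) < 0   # at +10 C, melting is spontaneous
# delta_G changes sign near dH / dS, i.e. roughly at the melting point 273 K.
```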
https://math.stackexchange.com/questions/865708/associated-points-and-reduced-scheme
# Associated points and reduced scheme

1. Let $X$ be a locally Noetherian scheme without embedded points; show that $X$ is reduced if and only if it is reduced at the generic points.
2. Let $X$ be a locally Noetherian scheme (possibly with some embedded points); do we have that $X$ is reduced if and only if it is reduced at the associated points?

Question (1) is from Liu Qing's book "Algebraic Geometry and Arithmetic Curves", exercise 7.1.2. If possible, I want to see a global proof; please do not reduce to the affine scheme. I guess there is some geometric meaning; maybe you can help me point it out.

Yes to both questions, but I'm not sure what you mean by wanting a global proof. Reducedness is a local property! The proof will consist of picking a point $x \in X$ and an affine chart $U = \text{Spec}(R)$ containing $x$, then checking reducedness on that chart. In particular, in the affine case, if $R$ is reduced, then all its localizations are reduced. On the other hand, if $R$ is nonreduced, let $f \in R$ be nilpotent. The annihilator $\text{Ann}(f)$ is contained in some associated prime ideal $P$ (this is a defining property of associated prime ideals), hence $f/1 \in R_P$ is nonzero, so $R_P$ is also nonreduced.

• Thanks. I mean that maybe we can avoid using the local method directly: set up a global lemma first, then use the lemma instead of reducing to an affine scheme. In fact, Liu's book has a hint to Lemma 1.9, which says $\mathcal{O}_X \to i_*\mathcal{O}_U$ is injective iff $\text{Ass}(\mathcal{O}_X) \subseteq U$, but I do not find a way to use it. – Strongart Jul 14 '14 at 13:56
• Well, checking injectivity is still probably easiest using (distinguished) affine charts, so that the map $\mathcal{O}_X \to i_*\mathcal{O}_U$ becomes the ring map $R \to R[1/f]$, for some ring element $f$. I'm not sure how to phrase anything in terms of associated points without at some point referring to rings and ring elements, though. – Jake Levinson Jul 14 '14 at 14:05
• OK, let us consider the affine situation.
For question 1), we can use Lemma 2.4.11, which says that generic points correspond to minimal prime ideals. But how do we do question 2)? What do the associated points (or embedded points) correspond to? – Strongart Jul 16 '14 at 6:31
• Associated points correspond to annihilators $\text{Ann}(f)$ for $f \in R$, namely, they are maximal among all such ideals. This is the reason I could say that $\text{Ann}(f)$ is contained in some associated prime ideal in my answer to 2) above. – Jake Levinson Jul 16 '14 at 23:02
• Very helpful, thank you. – Strongart Jul 18 '14 at 6:05

An addendum: we can actually use the lemma in Liu and refrain from making too many algebraic observations. Suppose that $X$ is reduced at all its associated points. Take a reduced open $U$ with $\text{Ass}(\mathcal{O}_X) \subset U$ (this is possible because the locus where a locally Noetherian scheme is reduced is open; see Liu exercise 2.4.9 or here, in the comments). Now the map $$\mathcal{O}_X \longrightarrow i_*\mathcal{O}_U$$ is injective by Lemma 7.1.9 in Liu. Let $V \subset X$ be open. Then the map $$\mathcal{O}_X(V) \longrightarrow \mathcal{O}_X(U \cap V)$$ is injective and therefore maps nilpotents to nilpotents, but $\mathcal{O}_X(U \cap V)$ has no nilpotents, so $\mathcal{O}_X(V)$ is reduced. It follows that $X$ is reduced.
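The key algebraic fact in the answer, that the annihilator of a nonzero nilpotent sits inside a prime ideal, can be sanity-checked by brute force in a small finite ring. The Python sketch below is my own illustration (not from the thread): it works in $\mathbf{Z}/12\mathbf{Z}$, whose only nonzero nilpotent is 6, and verifies that $\text{Ann}(6)$ is exactly the prime ideal $(2)$.

```python
# Brute-force sanity check in the finite ring R = Z/12Z:
# the nilpotent 6 has annihilator Ann(6) = (2), a prime ideal of R.

n = 12
R = range(n)

# Nonzero nilpotents of Z/12Z.
nilpotents = [f for f in R if f != 0 and any(pow(f, k, n) == 0 for k in range(1, n + 1))]
assert nilpotents == [6]

def annihilator(f):
    """Ann(f) = {r in R : r*f = 0}."""
    return {r for r in R if (r * f) % n == 0}

def is_prime_ideal(I):
    """Check the ideal axioms plus primality by exhaustive search."""
    proper = 1 not in I
    closed = all((a + b) % n in I and (r * a) % n in I for a in I for b in I for r in R)
    prime = all((a * b) % n not in I or a in I or b in I for a in R for b in R)
    return proper and closed and prime

ann = annihilator(6)
assert ann == {0, 2, 4, 6, 8, 10}   # this is the ideal (2)
assert is_prime_ideal(ann)
print("Ann(6) =", sorted(ann), "is a prime ideal")
```

Of course this only illustrates the containment in one toy ring; the general statement, that $\text{Ann}(f)$ is contained in a maximal annihilator ideal, which is associated and prime, is the commutative-algebra fact the answer relies on.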
https://mathshistory.st-andrews.ac.uk/Biographies/Mandelbrot/
# Benoit Mandelbrot

### Quick Info

Born: 20 November 1924, Warsaw, Poland
Died: 14 October 2010, Cambridge, Massachusetts, USA

Summary: Benoit Mandelbrot was largely responsible for the present interest in Fractal Geometry. He showed how Fractals can occur in many different places in both Mathematics and elsewhere in Nature.

### Biography

Benoit Mandelbrot was largely responsible for the present interest in fractal geometry. He showed how fractals can occur in many different places in both mathematics and elsewhere in nature.

Mandelbrot was born in Poland in 1924 into a family with a very academic tradition. His father, however, made his living buying and selling clothes, while his mother was a doctor. As a young boy, Mandelbrot was introduced to mathematics by his two uncles.

Mandelbrot's family emigrated to France in 1936 and his uncle Szolem Mandelbrojt, who was Professor of Mathematics at the Collège de France and the successor of Hadamard in this post, took responsibility for his education. In fact the influence of Szolem Mandelbrojt was both positive and negative, since he was a great admirer of Hardy and Hardy's philosophy of mathematics. This brought a reaction from Mandelbrot against pure mathematics, although as Mandelbrot himself says, he now understands how Hardy's deeply felt pacifism made him fear that applied mathematics, in the wrong hands, might be used for evil in time of war.

Mandelbrot attended the Lycée Rolin in Paris up to the start of World War II, when his family moved to Tulle in central France. This was a time of extraordinary difficulty for Mandelbrot, who feared for his life on many occasions. In [3] the effect of these years on his education was emphasised:-

The war, the constant threat of poverty and the need to survive kept him away from school and college and despite what he recognises as "marvellous" secondary school teachers he was largely self taught.

Mandelbrot now attributes much of his success to this unconventional education.
It allowed him to think in ways that might be hard for someone who, through a conventional education, is strongly encouraged to think in standard ways. It also allowed him to develop a highly geometrical approach to mathematics, and his remarkable geometric intuition and vision began to give him unique insights into mathematical problems.

After studying at Lyon, Mandelbrot entered the École Normale in Paris. It was one of the shortest lengths of time that anyone would study there, for he left after just one day. After a very successful performance in the entrance examinations of the École Polytechnique, Mandelbrot began his studies there in 1944. There he studied under the direction of Paul Lévy, who was another to strongly influence Mandelbrot.

After completing his studies at the École Polytechnique, Mandelbrot went to the United States, where he visited the California Institute of Technology. After a Ph.D. granted by the University of Paris, he went to the Institute for Advanced Study in Princeton, where he was sponsored by John von Neumann. Mandelbrot returned to France in 1955 and worked at the Centre National de la Recherche Scientifique. He married Aliette Kagan during this period back in France and Geneva, but he did not stay there too long before returning to the United States. Clark gave the reasons for his unhappiness with the style of mathematics in France at this time [3]:-

Still deeply concerned with the more exotic forms of statistical mechanics and mathematical linguistics and full of non-standard creative ideas, he found the huge dominance of the French foundational school of Bourbaki not to his scientific tastes and in 1958 he left for the United States permanently and began his long-standing and most fruitful collaboration with IBM as an IBM Fellow at their world-renowned laboratories in Yorktown Heights in New York State.

IBM presented Mandelbrot with an environment which allowed him to explore a wide variety of different ideas.
He has spoken of how this freedom at IBM to choose the directions that he wanted to take in his research presented him with an opportunity which no university post could have given him. After retiring from IBM, he found similar opportunities at Yale University, where he is presently Sterling Professor of Mathematical Sciences.

In 1945 Mandelbrot's uncle had introduced him to Julia's important 1918 paper, claiming that it was a masterpiece and a potential source of interesting problems, but Mandelbrot did not like it. Indeed he reacted rather badly against suggestions posed by his uncle, since he felt that his whole attitude to mathematics was so different from that of his uncle. Instead Mandelbrot chose his own very different course which, however, brought him back to Julia's paper in the 1970s after a path through many different sciences which some characterise as highly individualistic or nomadic. In fact the decision by Mandelbrot to make contributions to many different branches of science was a very deliberate one taken at a young age. It is remarkable how he was able to fulfil this ambition with such remarkable success in so many areas.

With the aid of computer graphics, Mandelbrot, who then worked at IBM's Watson Research Center, was able to show how Julia's work is a source of some of the most beautiful fractals known today. To do this he had to develop not only new mathematical ideas, but also some of the first computer programs to print graphics.

The Mandelbrot set is a connected set of points in the complex plane. Pick a point $z_{0}$ in the complex plane. Calculate:

$z_{1} = z_{0}^{2} + z_{0}$
$z_{2} = z_{1}^{2} + z_{0}$
$z_{3} = z_{2}^{2} + z_{0}$
...

If the sequence $z_{0}, z_{1}, z_{2}, z_{3}, ...$ remains within a distance of 2 of the origin forever, then the point $z_{0}$ is said to be in the Mandelbrot set. If the sequence diverges from the origin, then the point is not in the set. You can see the Mandelbrot set at THIS LINK.
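The iteration just described translates directly into a short program. The Python sketch below is my own illustration, following the text's convention in which the starting point also serves as the added constant. Since no program can check "forever", the iteration cutoff is an assumption: points that survive `max_iter` steps are treated as members.

```python
def in_mandelbrot(z0, max_iter=100):
    """Iterate z -> z**2 + z0 starting from z0, as described in the text.

    Returns True if the orbit stays within distance 2 of the origin for
    max_iter steps (a practical stand-in for "forever").
    """
    z = z0
    for _ in range(max_iter):
        if abs(z) > 2:
            return False  # the orbit escapes, so z0 is not in the set
        z = z * z + z0
    return True

# 0 is in the set (its orbit stays at 0); 1 escapes after a few steps.
print(in_mandelbrot(0 + 0j))   # True
print(in_mandelbrot(1 + 0j))   # False
```

Coloring each escaping point by how many steps it took to escape is what produces the familiar pictures of the set's boundary.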
His work was first elaborated in his book Les objets fractals, forme, hasard et dimension (1975) and more fully in The Fractal Geometry of Nature in 1982.

On 23 June 1999 Mandelbrot received the Honorary Degree of Doctor of Science from the University of St Andrews. At the ceremony Peter Clark gave an address [3] in which he put Mandelbrot's achievements into perspective. We quote from that address:-

... at the close of a century where the notion of human progress intellectual, political and moral is seen perhaps to be at best ambiguous and equivocal there is one area of human activity at least where the idea of, and achievement of, real progress is unambiguous and pellucidly clear. That is mathematics. In 1900 in a famous address to the International Congress of Mathematicians in Paris David Hilbert listed some 25 open problems of outstanding significance. Many of those problems have been definitively solved, or shown to be insoluble, culminating as we all know most recently in the mid-nineties with the discovery of the proof of Fermat's Last Theorem. The first of Hilbert's problems concerned a thicket of issues about the nature of the continuum or the real line, a major concern of 19th and indeed of 20th century analysis. The problem was both one of geometry concerning the nature of the line thought of as built up of points and of arithmetic thought of as the theory of the real numbers. The integration of those two fields was one of the great achievements of Richard Dedekind and Georg Cantor, the latter of whom we [St Andrews University] were intelligent enough to honour in 1911. Now lurking about so to speak in the undergrowth of that achievement lay certain very extraordinary geometric objects indeed. To all at the time, they seemed strange, indeed rather pathological monsters.
Odd indeed they were, there were curves - one dimensional lines in effect - which filled two dimensional spaces, there were curves which were well behaved, that is nice and continuous but which had no slope at any point (not just some points, ANY points) and they went by strange names, the Peano Space filling curve, the Sierpiński gasket, the Koch curve, the Cantor Ternary set. Despite their pathological qualities, their extraordinary complexity, especially when viewed in greater and greater detail, they were often very simple to describe in the sense that the rules which generated them were absurdly simple to state. So odd were these objects that mathematicians set about barring these monsters and they were set aside as too strange to be of interest. That is until our honorary graduand created out of them an entirely new science, the theory of fractal geometry: it was his insight and vision which saw in those objects and the many new ones he discovered, some of which now bear his name, not mathematical curiosities, but signposts to a new mathematical universe, a new geometry with as much system and generality as that of Euclid and a new physical science.

As well as IBM Fellow at the Watson Research Center, Mandelbrot was Professor of the Practice of Mathematics at Harvard University. He also held appointments as Professor of Engineering at Yale, Professor of Mathematics at the École Polytechnique, Professor of Economics at Harvard, and Professor of Physiology at the Einstein College of Medicine.

Mandelbrot's excursions into so many different branches of science were, as we mention above, no accident but a very deliberate decision on his part. It was, however, the fact that fractals were so widely found which in many cases provided the route into other areas [3]:-

I should not ... give the impression that we have here before us a mathematician alone. Let me explain why.
The first of his great insights was the discovery that the extraordinarily complex, almost pathological structures, which had been long ignored, exhibited certain universal characteristics requiring a new theory of dimension to treat them adequately, which he had generalised from earlier work of Hausdorff and Besicovitch; but the second great insight was that the fractal property so discovered, the general theory of which he had provided, was present almost universally in Nature. What he saw was that the overwhelming smoothness paradigm with which mathematical physics had attempted to describe Nature was radically flawed and incomplete. Fractals and pre-fractals once noticed were everywhere. They occur in physics in the description of the extraordinarily complex behaviour of some simple physical systems like the forced pendulum and in the hugely complex behaviour of turbulence and phase transition. They occur as the foundations of what is now known as chaotic systems. They occur in economics with the behaviour of prices and, as Poincaré had suspected but never proved, in the behaviour of the Bourse or our own Stock Exchange in London. They occur in physiology in the growth of mammalian cells. Believe it or not ... they occur in gardens. Note closely and you will see a difference between the flower heads of broccoli and cauliflower, a difference which can be exactly characterised in fractal theory.

Mandelbrot received numerous honours and prizes in recognition of his remarkable achievements. For example, in 1985 Mandelbrot was awarded the Barnard Medal for Meritorious Service to Science. The following year he received the Franklin Medal. In 1987 he was honoured with the Alexander von Humboldt Prize, receiving the Steinmetz Medal in 1988 and many more awards including the Légion d'Honneur in 1989, the Nevada Medal in 1991, the Wolf Prize for Physics in 1993 and the 2003 Japan Prize for Science and Technology.

### References

1. D J Albers and G L Alexanderson (eds.), Mathematical People: Profiles and Interviews (Boston, 1985), 205-226.
2. P Clark, Presentation of Professor Benoit Mandelbrot for the Honorary Degree of Doctor of Science (St Andrews, 23 June 1999).
3. B Mandelbrot, Comment j'ai découvert les fractales, La Recherche (1986), 420-424.
https://physics.stackexchange.com/questions/239243/where-is-the-space-elevator-falling-and-the-cable-holding-it
# Where is the space elevator falling? And the cable holding it?

I guess the concept of a space elevator is pretty well known. The idea, first published by Konstantin Tsiolkovsky in 1895, and popularized (among others) by Arthur C. Clarke in The Fountains of Paradise, is to have a geosynchronous satellite with a very strong and light cable hanging down to the ground at some anchoring point on the equator, and a counterweight extending outward in space to keep the center of mass at the right geosynchronous altitude (this variant is due to Yuri N. Artsutanov). Whether it can be done for Earth remains doubtful, as we do not (yet?) know of materials that can sustain the strain (afaik), but the concept is interesting and has been reused in several science-fiction stories, for Earth and other planets.

In at least two of the novels I read that use the concept, the elevator cable breaks, or is broken, below the center of mass and the cable falls back to the ground (with or without the car). But these two novels do not seem to agree on how it falls. In one novel, the cable falls in a heap on its ground anchoring place. In another, it falls on a great circle (the equator), making a "straight" line across the planet surface, though I do not recall whether it is forward (ahead of its anchor, with respect to the planet rotation) or backward (behind its anchor, with respect to the planet rotation). The planet may not be Earth.

My knowledge of mechanics no longer being what it may once have been, I am not sure I can analyze the problem correctly. I seriously doubt the cable would fall as a heap onto its anchor (or that the car would do that if it were to get loose from the cable, as suggested in one novel).

So my question is: what are the mechanical laws of the phenomenon, and where do the car and the cable fall, and how? They could fall ahead of the anchoring point, or behind the anchoring point (with respect to Earth's rotation).
The cable could be taut on the equator, or zigzagging because it fell too fast with respect to Earth's rotation. It could start falling in one direction (forward or backward) and later reverse the other way for the remaining span. I just have no real idea of what might happen, and I wonder how it is to be analyzed.

I tried to get a first understanding by considering the car alone getting loose from the cable, and I am putting it in a first tentative partial answer, so that this question does not get too long. My conclusion for the car felt counter-intuitive at first, becoming obvious in retrospect. But what of the cable?

You could use the Coriolis force to analyze this, or just a little common sense. Assuming the cable is anchored at the equator, the linear velocity of a point "high up" will be greater than the velocity of the anchor point. When each point of the cable is in free fall, that velocity will carry the higher parts of the cable "ahead of" the anchor point. It will land to the east, and not in a heap (although elastic forces may further complicate things, they are unlikely to completely reverse this). You can reach the same conclusion by thinking about conservation of angular momentum.

• You are right, but ... Coriolis forces are at the edge of my remaining knowledge. Angular momentum gives a general idea of where things go, but an imprecise intuition (at least for me). In my answer, about the car alone, I thought orbit analysis might give me a better idea of trajectories, so as to better understand what happens to the cable. A basic issue is whether two neighboring segments of the cable will tend to get closer or further apart. If it is closer, we get essentially the same result as for the car alone. If they diverge, then forces and energy will propagate along the cable. How? Feb 23 '16 at 13:27
• physics.stackexchange.com/questions/277688/… – Muze Sep 2 '16 at 15:40
• @Troll - really Jen? You changed your name to that? Sep 2 '16 at 16:08
• @Floris I like it... not so much for the definition. – Muze Sep 2 '16 at 17:11

# The car alone falls ahead of the anchoring point

This is a partial answer to illustrate the question in a simpler case. It is separated from the question to keep it short enough, and because it is more an answer than a question.

If one considers the car alone, getting suddenly loose, its angular speed would be too low to keep it in a circular orbit at its current altitude, since the whole elevator is rotating at planet-synchronous speed, which corresponds to a satellite above the car's altitude. Thus the car would dive into an elliptical orbit that might, or might not, intersect the ground level (I am ignoring the atmosphere for simplification, though the elevator is not really needed when there is none).

## Car touching ground at perigee

The orbital period is determined by the length $a$ of the semi-major axis of the orbit, according to the formula $T = 2\pi\sqrt{a^3/GM}$, where $M$ is the mass of the planet. The car starts at apogee, so the semi-major axis of its new orbit is smaller than its initial distance from the planet center, which in turn is smaller than the geosynchronous radius. If the car touches ground just at perigee, it will have taken half a period of that elliptical orbit, a time shorter than half a period of the planet-synchronous orbit (which has a longer semi-major axis). Hence the car will reach the ground in less than half of a planet rotation period, at a point that would be half the equator from the anchor if the planet were not rotating. Thus the car will fall ahead of the anchoring point. This reasoning can be done without the exact formula, using only Kepler's third law.

## The general case for the car alone

I have not done the precise calculation for all cases, and would not be very good at it, but it seems that qualitative reasoning is enough to show that, if the car hits the ground (i.e. its orbit intersects the planet surface), then it always lands ahead of the anchoring point.
A first remark is that, if the car crashes on the ground, it does so somewhere along its first half orbit from apogee (where it gets loose) to perigee (where it would be closest to the planet center, if it got that far). According to Kepler's second law of equal areas (using a differential form of it), its angular speed increases continuously from apogee to perigee, as its distance to the planet decreases. The car's angular speed at apogee, i.e., when it gets loose, is the same as the planet's. From then on, until it crashes, its angular speed will only increase. Hence it will continuously get further ahead of the anchoring point as it follows its orbit to the ground, and will necessarily crash on the equator ahead of the elevator anchoring point.

## The cable case

Thus I would tend to believe that a ruptured cable will fall similarly ahead of the anchoring point. But I have no idea how forces propagating along the cable might affect its motion, and whether it will lie taut on the ground.

Examining the above reasoning, it is clear that the car alone falls within the first half of the equator great circle ahead of the anchoring point. Now, if we consider a space elevator for Earth with the cable broken right under the geosynchronous satellite, the cable's length is about 35,786 km, while half of the equator great circle is only about 20,000 km. Hence there is no way the cable can lie taut along the equator while falling only on its first half, which is shorter. Could it be that some extra energy is propagated to the end (highest part) of the cable, allowing it to stay "in orbit" beyond the half great circle limit?
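The perigee-touching-ground argument can be put into numbers. The Python sketch below is my own back-of-the-envelope, using Earth values and an assumed release radius of 10,000 km from the planet center; like the text, it assumes the perigee just grazes the surface. It compares the half-period of the fall ellipse with half a sidereal day and computes how far east of the anchor the car lands.

```python
import math

GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
OMEGA = 7.2921159e-5       # Earth's sidereal rotation rate, rad/s

def fall_geometry(r0):
    """Car released at radius r0, on an ellipse with apogee r0 and perigee R_EARTH.

    Returns (time to reach perigee, angle in radians by which the landing
    point leads the anchor), using Kepler's third law as in the text.
    """
    a = (r0 + R_EARTH) / 2                   # semi-major axis of the fall ellipse
    t_fall = math.pi * math.sqrt(a**3 / GM)  # half an orbital period
    # The car sweeps pi radians of true anomaly (apogee to perigee) while the
    # anchor rotates only OMEGA * t_fall, so the landing point leads by:
    lead = math.pi - OMEGA * t_fall
    return t_fall, lead

t_fall, lead = fall_geometry(10_000e3)     # release 10,000 km from Earth's center
half_day = math.pi / OMEGA                 # half a sidereal day, ~43,000 s

print(f"fall time: {t_fall/3600:.2f} h (half sidereal day: {half_day/3600:.2f} h)")
print(f"landing point leads the anchor by {math.degrees(lead):.1f} degrees east")
assert t_fall < half_day                   # shorter half-period: lands ahead (east)
```

For this release radius the fall takes about an hour, far less than half a sidereal day, so the lead angle is large and positive: the car indeed lands well to the east of the anchor, consistent with the qualitative argument above.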
http://study91.co.in/aptitude/problems-on-hcf-and-lcm
# Problems on H.C.F and L.C.M

1. Find the greatest number that will divide 43, 91 and 183 so as to leave the same remainder in each case.

Solution: Required number = H.C.F. of (91 - 43), (183 - 91) and (183 - 43) = H.C.F. of 48, 92 and 140 = 4.

2. The H.C.F. of two numbers is 23 and the other two factors of their L.C.M. are 13 and 14. The larger of the two numbers is:

Solution: Clearly, the numbers are (23 x 13) and (23 x 14). ∴ Larger number = (23 x 14) = 322.

3. Six bells commence tolling together and toll at intervals of 2, 4, 6, 8, 10 and 12 seconds respectively. In 30 minutes, how many times do they toll together?

Solution: L.C.M. of 2, 4, 6, 8, 10, 12 is 120. So, the bells will toll together after every 120 seconds (2 minutes). In 30 minutes, they will toll together 30/2 + 1 = 16 times.

4. The greatest number of four digits which is divisible by 15, 25, 40 and 75 is:

Solution: Greatest number of 4 digits is 9999. L.C.M. of 15, 25, 40 and 75 is 600. On dividing 9999 by 600, the remainder is 399. ∴ Required number = (9999 - 399) = 9600.

5. The product of two numbers is 4107. If the H.C.F. of these numbers is 37, then the greater number is:

Solution: Let the numbers be 37a and 37b. Then, 37a x 37b = 4107, so ab = 3. Now, co-primes with product 3 are (1, 3). So, the required numbers are (37 x 1, 37 x 3), i.e., (37, 111). ∴ Greater number = 111.

6. Three numbers are in the ratio of 3 : 4 : 5 and their L.C.M. is 2400. Their H.C.F. is:

Solution: Let the numbers be 3x, 4x and 5x. Then, their L.C.M. = 60x. So, 60x = 2400 or x = 40. ∴ The numbers are (3 x 40), (4 x 40) and (5 x 40). Hence, required H.C.F. = 40.

7. The G.C.D. of 1.08, 0.36 and 0.9 is:

Solution: Given numbers are 1.08, 0.36 and 0.90. H.C.F. of 108, 36 and 90 is 18. ∴ H.C.F. of given numbers = 0.18.
8. The least multiple of 7 which leaves a remainder of 4 when divided by 6, 9, 15 and 18 is:

Solution: L.C.M. of 6, 9, 15 and 18 is 90. Let the required number be 90k + 4, which must be a multiple of 7. The least value of k for which (90k + 4) is divisible by 7 is k = 4. ∴ Required number = (90 x 4) + 4 = 364.

9. Find the lowest common multiple of 24, 36 and 40.

Solution:

2 | 24  36  40
2 | 12  18  20
2 |  6   9  10
3 |  3   9   5
  |  1   3   5

L.C.M. = 2 x 2 x 2 x 3 x 3 x 5 = 360.

10. The least number which should be added to 2497 so that the sum is exactly divisible by 5, 6, 4 and 3 is:
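All of these answers are easy to verify mechanically. Here is my own short Python check (not part of the original worksheet) using standard gcd/lcm helpers; it confirms several of the answers above and also evaluates problem 10, whose solution was not given:

```python
from functools import reduce
from math import gcd

def lcm(*nums):
    """Lowest common multiple of any number of integers."""
    return reduce(lambda a, b: a * b // gcd(a, b), nums)

def hcf(*nums):
    """Highest common factor (GCD) of any number of integers."""
    return reduce(gcd, nums)

# Problem 1: same remainder on dividing 43, 91, 183.
assert hcf(91 - 43, 183 - 91, 183 - 43) == 4

# Problem 3: bells at 2,4,6,8,10,12 s coincide every LCM seconds.
assert lcm(2, 4, 6, 8, 10, 12) == 120

# Problem 4: greatest 4-digit multiple of lcm(15, 25, 40, 75).
assert 9999 - 9999 % lcm(15, 25, 40, 75) == 9600

# Problem 8: least multiple of 7 of the form 90k + 4.
assert next(90 * k + 4 for k in range(100) if (90 * k + 4) % 7 == 0) == 364

# Problem 9: lowest common multiple of 24, 36 and 40.
assert lcm(24, 36, 40) == 360

# Problem 10: least number to add to 2497 for divisibility by 5, 6, 4 and 3.
m = lcm(5, 6, 4, 3)       # 60
print(m - 2497 % m)       # -> 23
```

For problem 10: 2497 = 41 x 60 + 37, so adding 60 - 37 = 23 makes the sum a multiple of 60, hence divisible by 5, 6, 4 and 3.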
2019-08-21 09:39:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3787982165813446, "perplexity": 1263.328819463413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00454.warc.gz"}
https://aitopics.org/class/Technology/Information%20Technology/Artificial%20Intelligence/Machine%20Learning/Learning%20Graphical%20Models/Undirected%20Networks/Markov%20Models
# Markov Models

### On Connections between Constrained Optimization and Reinforcement Learning

Dynamic Programming (DP) provides standard algorithms to solve Markov Decision Processes. However, these algorithms generally do not optimize a scalar objective function. In this paper, we draw connections between DP and (constrained) convex optimization. Specifically, we show clear links in the algorithmic structure between three DP schemes and optimization algorithms. We link Conservative Policy Iteration to Frank-Wolfe, Mirror-Descent Modified Policy Iteration to Mirror Descent, and Politex (Policy Iteration Using Expert Prediction) to Dual Averaging. These abstract DP schemes are representative of a number of (deep) Reinforcement Learning (RL) algorithms. By highlighting these connections (most of which have been noticed earlier, but in a scattered way), we would like to encourage further studies linking RL and convex optimization, which could lead to the design of new, more efficient, and better understood RL algorithms.

### 11 Alternatives To Keras For Deep Learning Enthusiasts

Infer.NET is a machine learning framework for running Bayesian inference in graphical models. It provides state-of-the-art message-passing algorithms and statistical routines needed to perform inference for a wide variety of applications. Its intuitive features include a rich modelling language, multiple inference algorithms, a design for large-scale inference, and user extensibility. With the help of this framework, Bayesian models such as Bayes Point Machine classifiers, TrueSkill matchmaking, hidden Markov models, and Bayesian networks can be implemented with ease.

### Simple Strategies in Multi-Objective MDPs (Technical Report)

We consider the verification of multiple expected-reward objectives at once on Markov decision processes (MDPs). This enables a trade-off analysis among multiple objectives by obtaining the Pareto front.
We focus on strategies that are easy to employ and implement: strategies that are pure (no randomization) and have bounded memory. We show that checking whether a point is achievable by a pure stationary strategy is NP-complete, even for two objectives, and we provide an MILP encoding to solve the corresponding problem. The bounded-memory case can be reduced to the stationary one by a product construction. Experimental results using Storm and Gurobi show the feasibility of our algorithms.

### Restless Hidden Markov Bandits with Linear Rewards

This paper presents an algorithm and regret analysis for the restless hidden Markov bandit problem with linear rewards. In this problem the reward received by the decision maker is a random linear function which depends on the arm selected and a hidden state. In contrast to previous works on Markovian bandits, we do not assume that the decision maker receives information regarding the state of the system, but has to infer it based on its actions and the received reward. Surprisingly, we can still maintain logarithmic regret in the case of a polyhedral action set. Furthermore, the regret does not depend on the number of extreme points in the action space.

### Optimal Immunization Policy Using Dynamic Programming

Decisions in public health are almost always made in the context of uncertainty. Policy makers responsible for making important decisions are faced with the daunting task of choosing from many possible options. This task is called planning under uncertainty, and is particularly acute when addressing complex systems, such as issues of global health and development. Decision making under uncertainty is a challenging task, and all too often this uncertainty is averaged away to simplify results for policy makers. A popular way to approach this task is to formulate the problem at hand as a (partially observable) Markov decision process, (PO)MDP.
This work aims to apply these AI efforts to challenging problems in health and development. In this paper, we developed a framework for optimal health policy design in a dynamic setting. We apply a stochastic dynamic programming approach to identify both the optimal time to change the health intervention policy and the optimal time to collect decision-relevant information.

### Multi Label Restricted Boltzmann Machine for Non-Intrusive Load Monitoring

Increasing population indicates that energy demands need to be managed in the residential sector. Prior studies have reflected that customers tend to reduce a significant amount of energy consumption if they are provided with appliance-level feedback. This observation has increased the relevance of load monitoring in today's tech-savvy world. Most of the previously proposed solutions claim to perform load monitoring without intrusion, but they are not completely non-intrusive. These methods require historical appliance-level data for training the model for each of the devices. This data is gathered by putting a sensor on each of the appliances present in the home, which causes intrusion in the building. Some recent studies have proposed that if we frame Non-Intrusive Load Monitoring (NILM) as a multi-label classification problem, the need for appliance-level data can be avoided. In this paper, we propose a Multi-label Restricted Boltzmann Machine (ML-RBM) for NILM and report an experimental evaluation of proposed and state-of-the-art techniques.

### Audio-Conditioned U-Net for Position Estimation in Full Sheet Images

The goal of score following is to track a musical performance, usually in the form of audio, in a corresponding score representation. Established methods mainly rely on computer-readable scores in the form of MIDI or MusicXML and achieve robust and reliable tracking results. Recently, multimodal deep learning methods have been used to follow along musical performances in raw sheet images.
Among the current limits of these systems is that they require a non-trivial amount of preprocessing steps that unravel the raw sheet image into a single long system of staves. The current work is an attempt at removing this particular limitation. We propose an architecture capable of estimating matching score positions directly within entire unprocessed sheet images. We argue that this is a necessary first step towards a fully integrated score-following system that does not rely on any preprocessing steps such as optical music recognition.

### Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes

Model-free reinforcement learning is known to be memory- and computation-efficient and more amenable to large-scale problems. In this paper, two model-free algorithms are introduced for learning infinite-horizon average-reward Markov Decision Processes (MDPs). The first algorithm reduces the problem to the discounted-reward version and achieves $\mathcal{O}(T^{2/3})$ regret after $T$ steps, under the minimal assumption of weakly communicating MDPs. The second algorithm makes use of recent advances in adaptive algorithms for adversarial multi-armed bandits and improves the regret to $\mathcal{O}(\sqrt{T})$, albeit with a stronger ergodic assumption. To the best of our knowledge, these are the first model-free algorithms with sub-linear regret (that is polynomial in all parameters) in the infinite-horizon average-reward setting.

### Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling

Due in part to the growing sources of data about past sequences of decisions and their outcomes - from marketing to energy management to healthcare - there is increasing interest in developing accurate and efficient algorithms for off-policy policy evaluation.
For Markov Decision Processes, this problem was addressed early on (Precup et al., 2000) by importance sampling (IS) (Rubinstein, 1981), a method prone to large variance due to rare events (Glynn, 1994; L'Ecuyer et al., 2009). The per-decision importance sampling estimator of Precup et al. (2000) tries to mitigate this problem by leveraging the temporal structure of the domain - earlier rewards cannot depend on later decisions. While neither importance sampling (IS) nor per-decision IS (PDIS) assumes the underlying domain is Markov, more recently a new class of estimators (Hallak and Mannor, 2017; Liu et al., 2018; Gelada and Bellemare, 2019) has been proposed that leverages the Markovian structure. In particular, these approaches propose performing importance sampling over the stationary state-action distributions induced by the corresponding Markov chain for a particular policy. By avoiding the explicit accumulation of likelihood ratios along the trajectories, it is hypothesized that such ratios of stationary distributions could substantially reduce the variance of the resulting estimator, thereby overcoming the "curse of horizon" (Liu et al., 2018) plaguing off-policy evaluation. The recent flurry of empirical results shows significant performance improvements over the alternative methods on a variety of simulation domains. Yet so far there has not been a formal analysis of the accuracy of IS, PDIS, and stationary state-action IS that would strengthen our understanding of their properties, benefits and limitations.

### Hierarchical Hidden Markov Jump Processes for Cancer Screening Modeling

Rui Meng (UCSC), Braden Soper (LLNL), Jan Nygard and Mari Nygard (Cancer Registry of Norway), Herbert Lee (UCSC)

Abstract: Hidden Markov jump processes are an attractive approach for modeling clinical disease progression data because they are explainable and capable of handling both irregularly sampled and noisy data.
Most applications in this context consider time-homogeneous models due to their relative computational simplicity. However, the time-homogeneous assumption is too strong to accurately model the natural history of many diseases. Moreover, the population at risk is not homogeneous either, since disease exposure and susceptibility can vary considerably. In this paper, we propose a piece-wise stationary transition matrix to explain the heterogeneity in time. We propose a hierarchical structure for the heterogeneity in population, where prior information is considered to deal with unbalanced data. Moreover, an efficient, scalable EM algorithm is proposed for inference. We demonstrate the feasibility and superiority of our model on a cervical cancer screening dataset from the Cancer Registry of Norway. Experiments show that our model outperforms state-of-the-art recurrent neural network models in terms of prediction accuracy and significantly outperforms a standard hidden Markov jump process in generating Kaplan-Meier estimators.

1 Introduction

Population-based screening programs for identifying undiagnosed individuals have a long history in improving public health. Examples include screening programs.
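Several of the abstracts above (restless hidden Markov bandits, hidden Markov jump processes) rest on the same basic hidden-Markov machinery. As an illustrative sketch (the two-state model and all numbers below are a toy example, not taken from any of the papers), the forward algorithm evaluates the likelihood of an observation sequence under an HMM:

```python
# Forward algorithm for a toy 2-state HMM (illustrative values only).
def forward(obs, pi, A, B):
    """Return P(obs) under an HMM with initial distribution pi,
    transition matrix A, and emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

pi = [0.6, 0.4]                  # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]     # state transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities for symbols 0/1
p = forward([0, 1, 0], pi, A, B)
print(p)
```

The per-step cost is O(n²) in the number of states; in practice one works in log space to avoid underflow on long sequences.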
2019-11-19 16:05:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4837004840373993, "perplexity": 898.6442398320147}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670156.86/warc/CC-MAIN-20191119144618-20191119172618-00032.warc.gz"}
https://physics.stackexchange.com/questions/161515/in-bohmian-mechanics-how-does-the-particles-position-affect-where-a-particle-i
# In Bohmian mechanics, how does the particle's position affect where a particle is detected?

In Bohmian mechanics / pilot wave theory / de Broglie–Bohm theory, my understanding is that a particle's trajectory evolves based on its wave function, and that the position a particle is detected at is related to the particle's actual position. In the example of a beam-splitter experiment, the particle and its wave function evolve over time, culminating in an electron being ejected from the surface of one of two CCD detectors. In the Copenhagen interpretation, the location of that electron is where the wave function "collapses", but in the Bohmian interpretation, it is the position of the particle along its concrete but as-yet-undetectable trajectory. My understanding is that the shape of the wave function is almost identical at both detectors regardless of which detector actually detects the particle. So why does the electron get ejected from the detector at the location of the Bohmian particle, rather than at the other detector? I feel like there must logically either be some interaction between the particle itself and the electron (perhaps via their quantum potentials) OR a mutual cause that correlates the position of the Bohmian particle with the position of the ejected electron. In the second case, I fail to see why the concept of a Bohmian particle is even necessary, so I have to assume that the particle itself interacts in some way. This is a follow-up to: How do particles interact in Bohmian mechanics / pilot wave theory / de Broglie–Bohm theory?

The de Broglie-Bohm (dBB) theory has a wave (a function on configuration space) and a particle (a point in configuration space), and both evolve in time. The evolution of the particle does not influence the evolution of the wave, but the wave does influence the particle. Since the particle doesn't do anything except be bossed around, it's basically a marker, nothing more.
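For concreteness (standard dBB notation, added here for reference rather than quoted from the answer): the wave obeys the Schrödinger equation while the configuration-space point $Q$ obeys the guidance equation,

$$i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi, \qquad \frac{dQ}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{x=Q}.$$

Note the one-way coupling: $\psi$ appears in the equation for $Q$, but $Q$ never appears in the equation for $\psi$. The particle is guided, never guiding.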
If you have a wave that has multiple packets that don't overlap, the particle marks one of them as occupied and the others as empty. But that marking affects nothing whatsoever. Configuration space tells you where absolutely everything is. For instance, if I had two particles in a 1d universe then I could specify a point $(x_1,x_2)$ and that tells you there is a particle at $x_1$ and another at $x_2$; if I had 8 particles in 1d I could specify a point in an 8d space $(x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)$ and it tells me where all the particles are. So the point in configuration space in principle tells you perfectly where every single particle is. But the point in configuration space doesn't affect a single thing. So it can't ever be known, and its value has zero testable consequences, because it is just along for the ride and can't affect anything. It's like you said you had the location of a ghost, and made an equation that tells you how the ghost moves around. If thinking about it helps you to focus on the wave, that's super. The wave is important; it helps you to make predictions. If thinking about the location of the ghost (or the particle) helps you to recognize a liar when someone says there can't be a position, that's super. You don't want to pay attention to people that are more interested in wrongly saying what you can or can't do than in focusing on how to make a prediction using an existing theory or come up with a new theory. The trajectory evolves, but in a passive way determined by the wave evolution, and the wave evolution actually determines what happens.

> Is the position a particle is detected at related to the particle's actual position?

People like to say detection or measurement because it sounds cool. But there isn't a magic box where a real number with an infinite number of decimal digits appears. What you can do is separate a wave into disjoint parts such that they act independently.
And once that happens in a way where they will forevermore act independently, these packets are the actual results. For instance, a Stern-Gerlach device is called a spin-measurement device. What it really does is split a wave into two or fewer parts, one going left, the other right. But it also polarizes the spin of the wave on the separate parts, so that if you took the part that went left and sent it through a similarly calibrated machine again it will just go left (no split into a part going right), and similarly if you took the part that went right and sent it through a similarly calibrated machine again it will just go right (no split into a part going left). It splits a wave into potentially more than one disjoint wave, and it does so in a reproducible way. That's what is called measurement. It doesn't measure some preexisting thing; it splits the wave into disjoint wavepackets. And yes, sometimes it doesn't split, so after the measurement it is definitely polarized to not split under certain circumstances. But it wasn't necessarily polarized like that before you sent it through, so it is misleading to say you measured it (the first time). So how would you measure position? You have to separate wavepackets. And wavepackets push particles around, so you move the particles. Moving a particle isn't measuring where it was. You can't measure where it was; you are only going to break the wave into a finite number of packets, so you'll never get some infinite decimal expansion of where it was. All the dBB theory does is tell you that it can have a position. In the Stern-Gerlach example we can also conclude that the ones that were farthest left ended up going left and the ones farthest right ended up going right. But you didn't know if the particle was on one edge of the wavepacket or the other edge, so you didn't know which way it was going to go. So all you had was a probability of different wavepackets being occupied.
But dBB theory tells you that this probability is the regular kind that comes from not knowing where the particle is located. Which is very different from the claims some people make about quantum theories. But those claims are usually so wild because they think the so-called measurements are about some preexisting property of a particle rather than just a splitting of a wave.

> In the example of a beam-splitter experiment, do the particle and its wave function evolve over time, culminating in an electron being ejected from the surface of one of two CCD detectors?

No. The wave has amplitudes for both paths, and if you separate those paths to make the different wavepackets disjoint, then you might be able to get a "measurement". But you can only get that if they separate in a way where they will never ever overlap again. Then they are forevermore independent. Then, both mathematically and practically, they can each live in their own little world where the other doesn't exist, because the other one doesn't affect it. (Just like the wave can ignore the particle because the particle doesn't affect it, these wavepackets may now ignore each other, the other parts of the wave, since they don't affect each other.) The wave is always taking all the options available; whenever the wave can split, it splits, and the math tells you the probability that the particle will occupy one wavepacket versus the other. But the wave splits, so they both happened. The fact that one wavepacket is occupied by the particle and one wavepacket is not really doesn't change anything ever. Worse, this idea that they are independent is really an engineering issue, not a philosophical one. Because what makes them independent is like calling a random phone number and trying to call it again: technically you might reach it; practically the chance is too small. Those separate wavepackets, if reflected backwards and aimed really, really carefully, could overlap again.
But it would be easier to shoot a laser beam from the Earth, bounce it off a mirror on the moon and back into itself (so hit it at just the perfect angle). So the segregation is really just approximate and/or temporary.

> Does the Copenhagen interpretation say the location of that electron is where the wave-function "collapses"?

Copenhagen is forced to say the same thing as dBB about the waves, otherwise it would disagree with experiments. It simply doesn't have a particle (nor does the Ithaca interpretation). The modern Copenhagen view gives you serendipity at best for a story, and it only owns up to the existence of the disjoint waves. There are some (non-Copenhagen) theories that go for a true collapse postulate (stochastic theories), but their predictions actually disagree with the predictions of regular quantum mechanics.

> Does the dBB interpretation say the location of that electron is determined by the particle position?

Not quite. It says that there is one. But that is not something that is ever revealed or known. Not now. Not ever. What you measure and observe and predict is the wavepackets. You can have a statistical uncertainty of where the particle is. And if that uncertainty follows the square of the wavefunction initially, it will later. And you can use that to compute the probability that different wavepackets will be the occupied one. And those probabilities (of which wavepackets are occupied) are the same probabilities everyone wants to use quantum theory to compute. The dBB theory can simply interpret those probabilities as the probabilities of different wavepackets having the particles in them. And the dBB theory can say that the particles have positions that are consistent with the probabilities. Consistent in the sense that the probability of the wavepacket being the occupied one agrees with the quantum probabilities (for any measurement, not just position).
Thus the dBB theory tells us that the experimental verifications of quantum mechanics are 1) consistent with particles having positions, 2) explicable by the normal kind of probabilities based on ignorance (ignorance of which of many disjoint wavepackets is occupied by a particle whose position is not known), and 3) together this reveals which results and sayings in quantum mechanics are really about what properties can or can't exist. For instance, whether a particle has a spin up or a spin down before measurement. Usually it does not. Let's talk about #3 some more. You could make a Stern-Gerlach device with different calibrations. Each calibration sends a left-going one to the left again, and one that went right goes right again. But by basically flipping it upside down (or flipping yourself upside down, either way) you can realize how arbitrary each is, and make one or the other; the upside-up and the upside-down versions are equally reliable, and equally practical, and they measure the same thing, and they measure it by separating/splitting wavepackets. But which part goes left or right depends on the wave (so we know what percentage of particles would go left or right) and on where the particle is (the ones farthest to the left go left, the ones farthest to the right go right). So whether a particle goes left or right depends not only on the wave but on the unknown location of the particle and on arbitrary matters like whether you choose to make one spin go left and the other right or vice versa. Yes, whether you get left or right depends on whether you used the upside-down machine or the upside-up machine. So it doesn't measure a preexisting property of the particle.

> My understanding is that the shape of the wave-function is almost identical at both detectors regardless of which detector actually detects the particle.

Absolutely nothing is different for the two detectors.
Whether the wavepacket has the hypothetical-and-you-never-ever-see-it-or-see-a-testable-consequence particle or not affects nothing whatsoever. There are two wavepackets, and since they don't affect each other, they can ignore each other. If you want to have a favorite, you can root for the one with the particle. But you don't know which one that is. But for any one of them you can compute the probability that it is your secret favorite one. And you'll get the correct probability.

> So why does the electron get ejected from the detector at the location of the Bohmian particle, rather than at the other detector?

There is only one particle, so if the wavepackets don't overlap it is stuck in one (the particle never travels through regions where the wave is zero). So it has to be in one. It's not a big deal, because you don't know which one it is in. The point is that the wavepackets now act independently if you've done your separation well enough. And that is not because the particle is stuck in one. It's the other way around: the fact that they will never overlap again is why the particle is stuck in one. And that's a super advantage of the dBB theory conceptually. If you say you want to find the probability that the particle gets stuck in a particular wavepacket, then you know not to ask until the wavepackets are fully separated and will stay that way. You can intuitively tell when computing a probability makes sense and when it would be silly. The Copenhagen theory doesn't give you that, because it gives you nothing intuitive to think about. But it does compute the same probabilities in exactly the same situations, and avoids computing them in the same situations, while making it unclear when or why you would do either. The dBB theory makes it clear when the question makes sense to ask.
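The "passive tracer" picture can be made concrete with a toy numerical sketch (my own illustration, not from the answer, with units $\hbar = m = \sigma_0 = 1$ assumed). For a freely spreading 1d Gaussian packet, the textbook dBB result is that trajectories ride the spreading width, $x(t) = x_0\,\sigma(t)$ with $\sigma(t) = \sqrt{1 + (t/2)^2}$, and the guidance velocity field is $v(x,t) = x\,\sigma'(t)/\sigma(t)$. Euler-integrating that velocity field reproduces the scaling:

```python
import math

def sigma(t):
    # width of a freely spreading Gaussian packet (hbar = m = sigma_0 = 1)
    return math.sqrt(1.0 + (t / 2.0) ** 2)

def velocity(x, t):
    # Bohmian guidance velocity v = x * sigma'(t) / sigma(t)
    return x * (t / 4.0) / (1.0 + (t / 2.0) ** 2)

# Euler-integrate a trajectory from x0 = 1 and compare with the
# known scaling solution x(t) = x0 * sigma(t).
x, t, dt = 1.0, 0.0, 1e-4
while t < 2.0:
    x += velocity(x, t) * dt
    t += dt

print(x, sigma(2.0))   # both close to sqrt(2) ~ 1.4142
```

The particle never pushes back on the wave, which is exactly the one-way dependence the answer keeps emphasizing.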
> Shouldn't there either be some interaction between the particle itself and the electron (perhaps via their quantum potentials), OR a mutual cause that correlates the position of the Bohmian particle with the position of the ejected electron?

No, nope, and nada. All wave. All the time. The particle is not a causal agent; it doesn't cause things to happen. It's like a tracer particle in the atmosphere or a tracking device on a stolen car: it's not moving the atmosphere or driving the car. Except it's one that we never ever see. So it's more like a hypothetical tracer.

> Why is the Bohmian particle necessary?

Depends on your goal. If your goal is to catch people in lies, it helps to have a particle whose motion and probabilities you can compute (probabilities about whether a wavepacket has the particle or not) and deal with intuitively, where you can fix a sample space (locations of particles) and translate other questions into probabilities (questions of which wavepackets have the tracer). It can help you recognize when someone made a mistake. Or alternatively you could be using the theory to inspire you to make other theories, and the particle might help with that. It could be numerically useful to do an approximation; this sometimes happens in computational chemistry. The first one is worth a tremendous amount by itself, even just to stop you from accidentally misleading yourself. If you focus on the clear probabilistic question of identifying which of distinct disjoint wavepackets has the particle, then you can compute probabilities correctly. And if you know that a measurement gives different results based on stuff you don't know (like where exactly the particle is), then you won't delude yourself into thinking there is an element of reality that isn't there. For instance, you seem to want to think there "is" some momentum and that it "is" being transferred "here" or "there", maybe even at a particular "when", and that is all just plain wrong.
A Copenhagenist might go overboard and not imagine that anything happens ever. But if you study the dBB theory you can have a picture of the particle located somewhere and the wave smoothly changing over time, be patient and wait until it separates, and then compute the only thing you can: the probability that different packets are occupied. And you can know (from effort) how not to read too much into it, because the dynamics of the particle can tell you how unrelated the current position of the particle is to the later computed probability that a wavepacket is occupied.

> So I have to assume that the particle itself interacts in some way.

Only do that if you want to risk disagreeing with the predictions of quantum mechanics.

• "If you want to have a favorite, you can root for the one with the particle." - lol – B T Jan 26, 2015 at 4:10
• So is it entirely possible that the Bohmian particle followed the path to the detector that didn't detect the wave? – B T Jan 26, 2015 at 4:11
• @BT Detect isn't a nice word. What happens is splitting. You can pretend there is a particle if thinking about it helps you to know when to compute a probability (when a particle would be stuck in a discrete selection of wavepackets), and what probability to compute (the probability that this wavepacket has the particle given that it was in the initial wavepacket). But you'll never know if or even whether there is a particle anywhere, let alone in any particular wavepacket. You can't tell because particles don't do anything; the hypothesis that there is a particle is not testable. Jan 26, 2015 at 4:37
• Ok, but in real experiments, real detectors actually detect certain things in one spot or another. Nice or not, that's how things happen. I'd like to relate this discussion to reality. If you pretend that there's a particle, is it possible to pretend (according to the math, of course) that the particle has a trajectory that took it to the detector that didn't detect the particle?
– B T Jan 26, 2015 at 5:11
• @BT Real detectors split wavefunctions. Wavefunctions that are split can consistently act like the others don't exist. So each can pretend it has its own particle, regardless of whether it is one, none, or all of them that do. Each one will disagree about the probability that it was the chosen one, but thereafter they condition on that fact (by assuming that there is a particle in their wavepacket, even though they don't know where in the packet it is). So life goes on. You can even retrodict/retro-imagine and figure out regions where the particle must have been earlier to end up in your packet. Jan 26, 2015 at 6:52
2022-05-19 18:22:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6357693076133728, "perplexity": 436.6849461803778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00484.warc.gz"}
https://blog.csdn.net/yjt9299/article/details/79976434
POJ 2117 — maximum number of connected components after deleting one vertex

Electricity

Time Limit: 5000MS    Memory Limit: 65536K
Total Submissions: 6019    Accepted: 1919

Description

Blackouts and Dark Nights (also known as ACM++) is a company that provides electricity. The company owns several power plants, each of them supplying a small area that surrounds it. This organization brings a lot of problems - it often happens that there is not enough power in one area, while there is a large surplus in the rest of the country. ACM++ has therefore decided to connect the networks of some of the plants together. At least in the first stage, there is no need to connect all plants to a single network, but on the other hand it may pay up to create redundant connections on critical places - i.e. the network may contain cycles. Various plans for the connections were proposed, and the complicated phase of evaluation of them has begun.

One of the criteria that has to be taken into account is the reliability of the created network. To evaluate it, we assume that the worst event that can happen is a malfunction in one of the joining points at the power plants, which might cause the network to split into several parts. While each of these parts could still work, each of them would have to cope with the problems, so it is essential to minimize the number of parts into which the network will split due to removal of one of the joining points.

Your task is to write a software that would help evaluating this risk. Your program is given a description of the network, and it should determine the maximum number of non-connected parts from that the network may consist after removal of one of the joining points (not counting the removed joining point itself).

Input

The input consists of several instances. The first line of each instance contains two integers 1 <= P <= 10 000 and C >= 0 separated by a single space. P is the number of power plants. The power plants have assigned integers between 0 and P - 1. C is the number of connections.
The following C lines of the instance describe the connections. Each of the lines contains two integers 0 <= p1, p2 < P separated by a single space, meaning that plants with numbers p1 and p2 are connected. Each connection is described exactly once and there is at most one connection between every two plants. The instances follow each other immediately, without any separator. The input is terminated by a line containing two zeros.

Output

The output consists of several lines. The i-th line of the output corresponds to the i-th input instance. Each line of the output consists of a single integer C. C is the maximum number of the connected parts of the network that can be obtained by removing one of the joining points at power plants in the instance.

Sample Input

3 3
0 1
0 2
2 1
4 2
0 1
2 3
3 1
1 0
0 0

Sample Output

1
2
2

Source

#include<stdio.h>
#include<string.h>
#include<iostream>
#include<algorithm>
using namespace std;

const int N=10005;
const int M=200010;

struct Edge
{
    int to;
    bool cut;
    int next;
}edge[M];

int head[N],tot;
int low[N],dfn[N],Stack[N];
bool instack[N];
int index,top;
bool cut[N];
int subnet[N]; //extra parts gained by removing vertex u
int bridge;
int n,m;

void addedge(int u,int v)
{
    edge[++tot].to=v;
    edge[tot].cut=0;
    edge[tot].next=head[u];
    head[u]=tot;
}

void init()
{
    tot=-1; //edge indices start at 0 so paired edges are i and i^1
    memset(head,-1,sizeof(head));
    memset(dfn,0,sizeof(dfn));
    memset(instack,0,sizeof(instack));
    memset(cut,0,sizeof(cut));
    memset(subnet,0,sizeof(subnet));
    index=top=0;
    bridge=0;
}

void tarjan(int u,int pre)
{
    int v;
    low[u]=dfn[u]=++index;
    Stack[top++]=u;
    instack[u]=1;
    int son=0;
    for(int i=head[u];i!=-1;i=edge[i].next)
    {
        v=edge[i].to;
        if(v==pre) continue;
        if(!dfn[v])
        {
            son++;
            tarjan(v,u);
            if(low[v]<low[u]) low[u]=low[v];
            if(low[v]>dfn[u]){ //the edge (u,v) is a bridge
                bridge++;
                edge[i].cut=1;
                edge[i^1].cut=1;
            }
            if(u!=pre&&low[v]>=dfn[u]){ //u is an articulation point
                cut[u]=1;
                subnet[u]++; //removing u detaches this subtree
            }
        }
        else if(dfn[v]<low[u]){
            low[u]=dfn[v];
        }
    }
    if(u==pre&&son>1) cut[u]=1;
    if(u==pre) subnet[u]=son-1; //removing the root leaves son parts
    instack[u]=0;
    top--;
    return ;
}

int main()
{
    int u,v;
    while(scanf("%d %d",&n,&m)!=EOF)
    {
        if(n==0&&m==0) break;
        init();
        for(int i=1;i<=m;i++)
        {
            scanf("%d %d",&u,&v);
            u++; v++; //shift plant numbers from 0..P-1 to 1..n
            addedge(u,v);
            addedge(v,u);
        }
        int ans=0;
        for(int i=1;i<=n;i++){
            if(!dfn[i]){
                ans++; //count connected components
                tarjan(i,i);
            }
        }
        int maxx=-1;
        for(int i=1;i<=n;i++)
        {
            maxx=max(maxx,subnet[i]); //best single vertex to remove
        }
        ans+=maxx;
        printf("%d\n",ans);
    }
    return 0;
}
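The Tarjan-based solution involves subtle bookkeeping, so a small brute-force cross-check can be reassuring. The Python sketch below (not part of the original post) removes each vertex in turn, counts the remaining connected components, and takes the maximum; it reproduces the sample outputs 1, 2, 2.

```python
def max_parts(p, edges):
    """Brute force for POJ 2117: remove each joining point and count the
    connected components among the remaining vertices."""
    adj = {v: set() for v in range(p)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def components(removed):
        seen = {removed}          # treat the removed vertex as already visited
        n_comp = 0
        for s in range(p):
            if s in seen:
                continue
            n_comp += 1           # found a new component; flood-fill it
            stack = [s]
            while stack:
                v = stack.pop()
                if v in seen:
                    continue
                seen.add(v)
                stack.extend(adj[v] - seen)
        return n_comp

    return max(components(v) for v in range(p))

print(max_parts(3, [(0, 1), (0, 2), (2, 1)]))  # 1
print(max_parts(4, [(0, 1), (2, 3)]))          # 2
print(max_parts(3, [(1, 0)]))                  # 2
```

This is O(P·(P+C)) and only suitable for checking small cases, but it is a useful oracle when debugging the linear-time Tarjan version.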
2018-08-22 03:51:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35005584359169006, "perplexity": 827.5833351057265}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219469.90/warc/CC-MAIN-20180822030004-20180822050004-00066.warc.gz"}
https://math.stackexchange.com/questions/669233/inverse-functions-and-u-substitution
# Inverse Functions and $u$-Substitution

Back in my undergrad days I wrote a false proof of the following.

Problem. Prove that $\displaystyle\int_0^{2\pi}\frac{dx}{1+e^{\sin{x}}}=\pi$

Proof. Integrating by parts gives $$\int_0^{2\pi}\frac{dx}{1+e^{\sin{x}}} = \left.\frac{x}{1+e^{\sin{x}}}\right\vert_0^{2\pi}+\int_0^{2\pi} x\cdot\frac{e^{\sin x}\cos{x}}{(e^{\sin{x}}+1)^2}dx =\pi+\int_0^{2\pi} x\cdot\frac{e^{\sin x}\cos{x}}{(e^{\sin{x}}+1)^2}dx$$ Taking $u=\sin x$ in the last integral gives $$\int_0^{2\pi} x\cdot\frac{e^{\sin x}\cos{x}}{(e^{\sin{x}}+1)^2}dx =\int_0^0\arcsin u\frac{e^u}{(e^u+1)^2}du=0$$ and combining the two equations gives the result. $\Box$

Of course, the problem with this proof is that the equation $x=\arcsin u$ is only valid on $[0,\pi/2]$. However, $\sin{x}$ is invertible on the intervals $[\pi/2,3\pi/2]$ and $[3\pi/2,2\pi]$, so it seems that this problem can be circumvented by splitting the integral up into three integrals and individually applying the $u$-substitution. Can this proof be salvaged?

Edit: I'm aware that there are other ways to prove this result. I'm mainly concerned with the validity of this proof.

Edit: I've voted up both answers because they give correct proofs. I haven't accepted an answer, however, because neither addresses the issue of breaking up a noninvertible function into separate integrals where the function is invertible, which was my main reason for posting this question.

• Based on your latest edit I have updated my answer. – Paramanand Singh Feb 12 '14 at 4:34

Calculation of the original integral $$\int_{0}^{2\pi}\frac{dx}{1 + e^{\sin x}}$$ can be done directly using the hint from lab bhattacharjee.
I believe you would want to have a proof that the complicated integral $$I = \int_{0}^{2\pi}x\frac{e^{\sin x}\cos x}{(1 + e^{\sin x})^{2}}\,dx = 0$$ To that end let's apply the hint (again) from lab bhattacharjee to get $$I = \int_{0}^{2\pi}(2\pi - x)\frac{e^{\sin x}\cos x}{(1 + e^{\sin x})^{2}}\,dx$$ so that by adding these two equivalent forms of $I$ we get $$I = \pi\int_{0}^{2\pi}\frac{e^{\sin x}\cos x}{(1 + e^{\sin x})^{2}}\,dx = \pi\left(\int_{0}^{\pi}\frac{e^{\sin x}\cos x}{(1 + e^{\sin x})^{2}}\,dx + \int_{\pi}^{2\pi}\frac{e^{\sin x}\cos x}{(1 + e^{\sin x})^{2}}\,dx\right)$$ or $I = \pi(I_{1} + I_{2})$. Now we can put $t = x - \pi$ in the second integral $I_{2}$ to get $$I_{2} = -\int_{0}^{\pi}\frac{e^{\sin t}\cos t}{(1 + e^{\sin t})^{2}}\,dt = -I_{1}$$ It now follows that $I = \pi(I_{1} + I_{2}) = 0$.

Update: After the edit by OP, it is clear that what is needed here is to apply the substitution $u = \sin x$ and then show that the integral $I$ is $0$. As he has mentioned in his question, this would need a split into three integrals over the ranges $[0, \pi/2], [\pi/2, 3\pi/2]$ and $[3\pi/2, 2\pi]$. After the substitution the intervals will change to $[0, 1], [-1, 1]$ and $[-1, 0]$. On doing this substitution it will be found that the integral is equal to $$I = I_{1} + I_{2} + I_{3} = \int_{0}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du - \int_{-1}^{1}\frac{e^{u}(\pi + \arcsin u)}{(e^{u} + 1)^{2}}\,du + \int_{-1}^{0}\frac{e^{u}(2\pi + \arcsin u)}{(e^{u} + 1)^{2}}\,du$$ Note that in the interval $[0, \pi/2]$ the function $u = \sin x$ increases and hence the mapping $\sin x = u$ can be inverted by $x = \arcsin u$ and it maps $[0, \pi/2]$ into $[0, 1]$. In the interval $[\pi/2, 3\pi/2]$ the function $u = -\sin x$ increases and the inverse happens using $x = (\pi + \arcsin u)$ and since $-\cos x \, dx = du$ we get a $-$ sign in the integral. Again in the interval $[3\pi/2, 2\pi]$ the function $\sin x$ increases and the correct inverse is $x = 2\pi + \arcsin u$.
Since the function $e^{u}/(e^{u} + 1)^{2}$ is even, we have \begin{aligned}I &= \int_{-1}^{0}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du + \int_{0}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du - \int_{-1}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du\\ &\,\,\,+\,\,2\pi\int_{-1}^{0}\frac{e^{u}}{(e^{u} + 1)^{2}}\,du - \pi\int_{-1}^{1}\frac{e^{u}}{(e^{u} + 1)^{2}}\,du = 0\end{aligned}

• Nice solution!! +1 for you my friend! – Mark Viola Oct 21 '15 at 4:24
• Comment Part 1: Referring to your update, the interval $[\frac{\pi}{2}$, $\frac{3\pi}{2}]$ has been analysed wrongly: $I_{2}$ should be $$\int_{1}^{-1}\frac{e^{u}(\pi - \arcsin u)}{(e^{u} + 1)^{2}}\,du$$ instead. This is because as $x$ is increasing from $\frac{\pi}{2}$ to $\frac{3\pi}{2}$, and $u = \sin x$ correspondingly decreasing from $1$ to $-1$, $x$ ought to be given by $x = (\pi - \arcsin u)$. – Ryan Nov 12 '17 at 12:58
• Comment Part 2: Therefore, the final equation in your answer should instead be \begin{aligned}I &= \int_{-1}^{0}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du + \int_{0}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du + \int_{-1}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du\\ &\,\,\,+\,\,2\pi\int_{-1}^{0}\frac{e^{u}}{(e^{u} + 1)^{2}}\,du - \pi\int_{-1}^{1}\frac{e^{u}}{(e^{u} + 1)^{2}}\,du\\ &\,\,\, = 2 \int_{-1}^{1}\frac{e^{u}\arcsin u}{(e^{u} + 1)^{2}}\,du\\ &\,\,\, = 0\end{aligned} since the last integrand is an odd function. – Ryan Nov 12 '17 at 13:04
• @Ryan: your substitution is also correct as well as mine. In $I_{2}$ I have used $u=-\sin x$ whereas you have used $u=\sin x$. I don't see any problem with any of the approaches. – Paramanand Singh Nov 12 '17 at 13:46

Not sure how you have arrived at $$\left.\frac{x}{1+e^{\sin{x}}}\right\vert_0^{2\pi}=\pi$$ Here is another way: $$\text{Use }I=\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$$ $$2I=\int_a^bf(x)dx+\int_a^bf(a+b-x)dx=\int_a^b[f(x)+f(a+b-x)]dx$$ utilizing $$\displaystyle\sin(2\pi+0-x)=-\sin x$$
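For completeness, lab bhattacharjee's hint can be carried to the finish in two lines (a short derivation consistent with the notation above): since $e^{-\sin x} = 1/e^{\sin x}$, the two terms in the sum are complementary,

$$2I=\int_0^{2\pi}\left[\frac{1}{1+e^{\sin x}}+\frac{e^{\sin x}}{1+e^{\sin x}}\right]dx=\int_0^{2\pi}1\,dx=2\pi,$$

so that $I=\pi$ without any integration by parts or inversion of $\sin x$ at all.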
2019-12-06 15:53:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518071413040161, "perplexity": 314.2705388949192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00103.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-10th-edition/chapter-3-section-3-1-functions-3-1-assess-your-understanding-page-211/58
Chapter 3 - Section 3.1 - Functions - 3.1 Assess Your Understanding - Page 211: 58 The domain of G(x) is $\{x \mid x\ne0, x\ne-2, x\ne2\}$ Work Step by Step $G(x)=\frac{x+4}{x^3-4x}$ To find the domain of a rational function, we set the denominator equal to 0 and solve for x; the solutions are exactly the values that must be excluded from the domain. $x^3-4x=0$ $x(x^2-4)=0$ Now, set each factor equal to 0. $x=0$ or $x=\pm2$ This means that the domain is: $\{x \mid x\ne0, x\ne-2, x\ne2\}$
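As a quick numerical cross-check (a Python snippet, not part of the textbook solution), the excluded values can be confirmed by scanning for integer roots of the denominator:

```python
# The denominator x^3 - 4x vanishes exactly at the values
# excluded from the domain of G(x).
excluded = [x for x in range(-10, 11) if x**3 - 4*x == 0]
print(excluded)  # [-2, 0, 2]
```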
2020-05-28 05:54:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8141660690307617, "perplexity": 422.9323325512389}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00297.warc.gz"}
https://thusspakeak.com/
## Beta Animals - a.k. Several years ago we took a look at the gamma function Γ, which is a generalisation of the factorial to non-integers, being equal to the factorial of a non-negative integer n when passed an argument of n+1 and smoothly interpolating between them. Like the normal cumulative distribution function Φ, it and its related functions are examples of special functions; so named because they frequently crop up in the solutions to interesting mathematical problems but can't be expressed as simple formulae, forcing us to resort to numerical approximation. This time we shall take a look at another family of special functions derived from the beta function B. Full text... ## Further On A Very Cellular Process - student You will no doubt recall my telling you of my fellow students' and my latest pastime of employing Professor B------'s Experimental Clockwork Mathematical Apparatus to explore the behaviours of cellular automata, which may be thought of as simplistic mathematical simulacra of animalcules such as amoebas. Specifically, if we put together an infinite line of imaginary boxes, some of which are empty and some of which contain living cells, then we can define a set of rules to determine whether or not a box will contain a cell in the next generation depending upon its own, its left and its right neighbours contents in the current one. Full text... ## Slashing The Odds - a.k. In the previous post we explored the Cauchy distribution, which, having undefined means and standard deviations, is an example of a pathological distribution. 
We saw that this is because it has a relatively high probability of generating extremely large values which we concluded was a consequence of its standard random variable being equal to the ratio of two independent standard normally distributed random variables, so that the magnitudes of observations of it can be significantly increased by the not particularly unlikely event that observations of the denominator are close to zero. Whilst we didn't originally derive the Cauchy distribution in this way, there are others, known as ratio distributions, that are explicitly constructed in this manner and in this post we shall take a look at one of them. Full text... ## May The Fours Be With You - baron m. Sir R-----! Come join me for a glass of chilled wine! I have a notion that you're in the mood for a wager. What say you? I knew it! I have in mind a game of dice that reminds me of my time as the Russian military attaché to the city state of Coruscant and its territories during the traitorous popular uprising fomented by the blasphemous teachings of a fundamentalist religious sect known as the Jedi. Full text... ## Moments Of Pathological Behaviour - a.k. Last time we took a look at basis function interpolation with which we approximate functions from their values at given sets of arguments, known as nodes, using weighted sums of distinct functions, known as basis functions. We began by constructing approximations using polynomials before moving on to using bell shaped curves, such as the normal probability density function, centred at the nodes. The latter are particularly useful for approximating multi-dimensional functions, as we saw by using multivariate normal PDFs. An easy way to create rotationally symmetric functions, known as radial basis functions, is to apply univariate functions that are symmetric about zero to the distance between the interpolation's argument and their associated nodes. 
PDFs are a rich source of such functions and, in fact, the second bell shaped curve that we considered is related to that of the Cauchy distribution, which has some rather interesting properties. Full text... ## On Fruitful Opals - student Recall that the Baron’s game consisted of guessing under which of a pair of cups was to be found a token for a stake of four cents and a prize, if correct, of one. Upon success, Sir R----- could have elected to play again with three cups for the same stake and double the prize. Success at this and subsequent rounds gave him the opportunity to play another round for the same stake again with one more cup than the previous round and a prize equal to that of the previous round multiplied by its number of cups. Full text... ## All Your Basis Are Belong To Us - a.k. A few years ago we saw how we could approximate a function f between pairs of points (xi, f(xi)) and (xi+1, f(xi+1)) by linear and cubic spline interpolation which connect them with straight lines and cubic polynomials respectively, the latter of which yield smooth curves at the cost of somewhat arbitrary choices about their exact shapes. An alternative approach is to construct a single function that passes through all of the points and, given that nth order polynomials are uniquely defined by n+1 values at distinct xi, it's tempting to use them. Full text... ## On A Very Cellular Process - student Recently my fellow students and I have been spending our free time using Professor B------'s remarkable calculating engine to experiment with cellular automata, being mathematical contrivances that might be thought of as crude models of the lives of those most humble of creatures; amoebas. In their simplest form they are unending lines of boxes, some of which contain a living cell that at each generation will live, die or reproduce according to the contents of its neighbouring boxes. 
For example, we might say that each cell divides and its two offspring migrate to the left and right, dying if they encounter another cell's progeny. Full text... ## The Spectral Apparition - a.k. Over the last few months we have seen how we can efficiently implement the Householder transformations and shifted Givens rotations used by Francis's algorithm to diagonalise a real symmetric matrix M, yielding its eigensystem in a matrix V whose columns are its eigenvectors and a diagonal matrix Λ whose diagonal elements are their associated eigenvalues, which satisfy M = V × Λ × V^T and together are known as the spectral decomposition of M. In this post, we shall add it to the ak library using the householder and givens functions that we have put so much effort into optimising. Full text... ## Fruitful Opals - baron m. Greetings Sir R-----. I trust that I find you in good spirits this evening? Will you take a glass of this excellent porter and join me in a little sport? Splendid! I propose a game that is popular amongst Antipodean opal scavengers as a means to improve their skill at guesswork. Opals, as any reputable botanist will confirm, are the seeds of the majestic opal tree which grows in some abundance atop the vast monoliths of that region. Its mouth-watering fruits are greatly enjoyed by the Titans on those occasions when, attracted by its entirely confused seasons, they choose to winter thereabouts. Full text...
2020-07-15 13:04:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5728064775466919, "perplexity": 976.6041938410146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657167808.91/warc/CC-MAIN-20200715101742-20200715131742-00326.warc.gz"}
https://proofwiki.org/wiki/Integer_as_Sum_of_5_Non-Zero_Squares/Mistake
# Integer as Sum of 5 Non-Zero Squares/Mistake ## Source Work The Dictionary $33$ ## Mistake Any integer greater than $33$ can be written as the sum of $5$ non-zero squares. [ Jackson, Masat, Mitchell, MM v61 41 ] The reference is incorrect. It refers to Mathematics Magazine, volume $61$, page $41$. The actual article itself is on page $41$ of volume $66$, not $61$.
2020-01-26 08:40:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090654611587524, "perplexity": 1470.5096774140177}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687958.71/warc/CC-MAIN-20200126074227-20200126104227-00090.warc.gz"}
https://nicholasrjenkins.science/tutorial/bayesian-inference-with-stan/building_linear_models/
# Building Linear Models

In this tutorial, we will learn how to estimate linear models using Stan and R. Along the way, we will review the steps in a sound Bayesian workflow. This workflow consists of:

1. Considering the social process that generates your data. The goal of your statistical model should be to model the data generating process, so think hard about this. Exploratory analysis goes a long way towards helping you to understand this process.
2. Program your statistical model and sample from it.
3. Evaluate your model's reliability. Check for Markov chain convergence to make sure that your model has produced reliable estimates.
4. Evaluate your model's performance. How well does your model approximate the data generating process? This involves using posterior predictive checks.
5. Summarize your model's results in tabular and graphical form.

My goal is to explain the fundamentals of linear models in Stan with examples so that we aren't learning Stan programming in such an abstract environment. Let's get started!

# Import Data

We'll use the Motor Trend Car Road Tests mtcars data that is provided in R as our practice data set.

cars.data <- mtcars

library(tidyverse)
glimpse(cars.data)

Let's walk through what our variables are:

• mpg: Miles per gallon
• cyl: Number of cylinders
• disp: Displacement (cu. in.)
• hp: Gross horsepower
• drat: Rear axle ratio
• wt: Weight (1000 lbs)
• qsec: 1/4 mile time
• vs: Engine (0 = V-shaped, 1 = straight)
• am: Transmission (0 = automatic, 1 = manual)
• gear: Number of forward gears
• carb: Number of carburetors

With this information, let's do some quick data cleaning.
cars.data <- cars.data %>%
  rename(cylinders = cyl,
         displacement = disp,
         rear_axle_ratio = drat,
         weight = wt,
         engine_type = vs,
         trans_type = am,
         gears = gear) %>%
  mutate(engine_type = factor(engine_type, levels = c(0, 1), labels = c("V-shaped", "Straight")),
         trans_type = factor(trans_type, levels = c(0, 1), labels = c("Automatic", "Manual")))

glimpse(cars.data)

For our research question, we will be investigating how different characteristics of a car affect its MPG. To start with, we will test how vehicle weight affects MPG. Let's do some preliminary analysis of this question with visualizations.

ggplot(data = cars.data, aes(x = weight, y = mpg)) +
  geom_point()

As expected, there seems to be a negative relationship between these variables. Let's add in a fitted line:

ggplot(data = cars.data, aes(x = weight, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm")

# Build a Model

## Models with a Single Predictor

Now we will build a model in Stan to formally estimate this relationship. Rather than creating an r code block, we want to create a stan code block. The only caveat is that we also need to provide a name for our Stan model in the output.var argument. Here is the model that we want to estimate in Stan:

\begin{aligned} \text{mpg} &\sim \text{Normal}(\mu, \sigma) \\ \mu &= \alpha + \beta \text{weight} \end{aligned}

To start, we need to load cmdstanr, which will allow us to interface with Stan through R.

library(cmdstanr)

register_knitr_engine(override = FALSE) # This registers cmdstanr with knitr so that we can use
                                        # it with R Markdown.
Now, program the model:

data {
  int n; //number of observations in the data
  vector[n] mpg; //vector of length n for the car's MPG
  vector[n] weight; //vector of length n for the car's weight
}
parameters {
  real alpha; //the intercept parameter
  real beta_w; //slope parameter for weight
  real sigma; //model variance parameter
}
model {
  //linear predictor mu
  vector[n] mu;

  //write the linear equation
  mu = alpha + beta_w * weight;

  //likelihood function
  mpg ~ normal(mu, sigma);
}

Once we finish writing the model, we need to run the code block to compile it into C++ code. This will also allow us to sample from the model and obtain the parameter estimates. Let's do that now.

The next step is to prepare the data for Stan. Stan can't use the same types of data that R can. For example, Stan requires lists, not data frames, and it cannot accept factors. We'll use the tidybayes package to make it easier to prepare the data.

library(tidybayes)

model.data <- cars.data %>%
  select(mpg, weight) %>%
  compose_data(.)

# sample from our model
linear.fit.1 <- linear.model$sample(data = model.data)

# summarize our model
print(linear.fit.1)

Let's run through the interpretation of this model:

• alpha: For a car with a weight of zero, the expected MPG is 37.21. Obviously, a weight of zero is impossible, so we'll want to address this in our next model.
• beta_w: Comparing two cars that differ by 1000 pounds, the model predicts a difference of 5.33 miles per gallon.
• sigma: The model predicts MPG within 3.18 points.
• lp__: Logarithm of the (unnormalized) posterior density. This log density can be used in various ways for model evaluation and comparison.

Ok, now that we've written our model, let's make a few improvements. First, let's center our weight variable so that we can get a more meaningful interpretation of the intercept parameter alpha. We can accomplish this by subtracting the mean from each observation.
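Why centering moves the intercept to the mean outcome while leaving the slope untouched can be demonstrated with a quick ordinary-least-squares sketch (plain Python/NumPy rather than the tutorial's R, with simulated stand-in data — purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the cars data: weights (1000 lbs) and MPG
weight = rng.normal(3.2, 1.0, 200)
mpg = 37.0 - 5.3 * weight + rng.normal(0.0, 3.0, 200)

# Least-squares fits on the raw and the mean-centered predictor;
# np.polyfit returns (slope, intercept) for degree 1
slope_raw, intercept_raw = np.polyfit(weight, mpg, 1)
slope_c, intercept_c = np.polyfit(weight - weight.mean(), mpg, 1)

print(slope_raw - slope_c)       # ~0: the slope is unchanged
print(intercept_c - mpg.mean())  # ~0: centered intercept = mean outcome
```

The same identity holds for the posterior mean of alpha in the Stan model, which is why the centered intercept is interpreted as the expected MPG at average weight.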
This will change the interpretation of the intercept to be the average MPG when weight is held constant at its average value.

cars.data <- cars.data %>%
  mutate(weight_c = weight - mean(weight))

Now that we've changed the variable's name, we also need to change our model code to incorporate this change. While we are adjusting the code, we'll also restrict the scale parameter to be positive. This will help our model be a bit more efficient.

data {
  int n; //number of observations in the data
  vector[n] mpg; //vector of length n for the car's MPG
  vector[n] weight_c; //vector of length n for the car's centered weight
}
parameters {
  real alpha; //the intercept parameter
  real beta_w; //slope parameter for weight
  real<lower = 0> sigma; //variance parameter, restricted to positive values
}
model {
  //linear predictor mu
  vector[n] mu;

  //write the linear equation
  mu = alpha + beta_w * weight_c;

  //likelihood function
  mpg ~ normal(mu, sigma);
}

Now prepare the data and re-estimate the model.

model.data <- cars.data %>%
  select(mpg, weight_c) %>%
  compose_data(cars.data)

linear.fit.2 <- linear.model$sample(data = model.data)

print(linear.fit.2)

Let's interpret this model:

• alpha: When a vehicle's weight is held at its average value, the expected MPG is 20.09.
• beta_w: This estimate has the same interpretation as before.
• sigma: This estimate has the same interpretation as before.
• lp__: This estimate has a slightly lower value (in absolute value) than it did in the previous model, indicating that this model performs slightly better.

## Models with Multiple Predictors

To add more predictors, we just need to adjust our model code. Let's add in the vehicle's cylinders and horsepower. We'll also center these variables.
data {
  int n; //number of observations in the data
  vector[n] mpg; //vector of length n for the car's MPG
  vector[n] weight_c; //vector of length n for the car's weight
  vector[n] cylinders_c; //vector of length n for the car's cylinders
  vector[n] hp_c; //vector of length n for the car's horsepower
}
parameters {
  real alpha; //the intercept parameter
  real beta_w; //slope parameter for weight
  real beta_cyl; //slope parameter for cylinders
  real beta_hp; //slope parameter for horsepower
  real<lower = 0> sigma; //variance parameter, restricted to positive values
}
model {
  //linear predictor mu
  vector[n] mu;

  //write the linear equation
  mu = alpha + beta_w * weight_c + beta_cyl * cylinders_c + beta_hp * hp_c;

  //likelihood function
  mpg ~ normal(mu, sigma);
}

Prepare the data and sample from the model.

model.data <- cars.data %>%
  mutate(cylinders_c = cylinders - mean(cylinders),
         hp_c = hp - mean(hp)) %>%
  select(mpg, weight_c, cylinders_c, hp_c) %>%
  compose_data(.)

linear.fit.3 <- linear.model$sample(data = model.data)

print(linear.fit.3)

After adjusting for a car's cylinders and horsepower, the model predicts a difference of 3.18 miles per gallon between two cars that differ by 1000 pounds. Notice that lp__ is now even lower, suggesting that this latest model is a better fit.

# Assessing Our Model

## Model Convergence

Now that we've built a decent model, we need to see how well it actually performs. First, we'll want to check that our chains have converged and are producing reliable point estimates. We can do this with a traceplot.

library(bayesplot)

fit.draws <- linear.fit.3$draws() # extract the posterior draws
mcmc_trace(fit.draws)

The fuzzy caterpillar appearance indicates that the chains are mixing well and have converged to a common distribution. We can also assess the Rhat values for each parameter. As a rule of thumb, Rhat values less than 1.05 indicate good convergence. The bayesplot package makes these calculations easy.
```r
rhats <- rhat(linear.fit.3)
mcmc_rhat(rhats)
```

## Effective Sample Size

The effective sample size estimates the number of independent draws from the posterior distribution for a given estimate. This metric is important because Markov chains can have autocorrelation, which will lead to biased parameter estimates. With the bayesplot package we can visualize the ratio of the effective sample size to the total number of samples - the larger the ratio, the better. The rule of thumb here is to worry about ratios less than 0.1.

```r
eff.ratio <- neff_ratio(linear.fit.3)
eff.ratio
mcmc_neff(eff.ratio)
```

We can also check the autocorrelation in the chains with bayesplot. The mcmc_acf function uses the posterior draws we extracted from the model.

```r
mcmc_acf(fit.draws)
```

Here, we are looking to see how quickly the autocorrelation drops to zero.

## Posterior Predictive Checks

One of the most powerful tools of Bayesian inference is the posterior predictive check. This check is designed to see how well our model can generate data that matches the observed data. If we built a good model, it should be able to generate new observations that closely resemble the observed data.

In order to perform posterior predictive checks, we need to add some code to our model. Specifically, we need to calculate replications of our outcome variable. We can do this using the generated quantities section.
```stan
data {
  int n;                 //number of observations in the data
  vector[n] mpg;         //vector of length n for the car's MPG
  vector[n] weight_c;    //vector of length n for the car's weight
  vector[n] cylinders_c; //vector of length n for the car's cylinders
  vector[n] hp_c;        //vector of length n for the car's horsepower
}
parameters {
  real alpha;            //the intercept parameter
  real beta_w;           //slope parameter for weight
  real beta_cyl;         //slope parameter for cylinders
  real beta_hp;          //slope parameter for horsepower
  real<lower = 0> sigma; //variance parameter, restricted to positive values
}
model {
  //linear predictor mu
  vector[n] mu;

  //write the linear equation
  mu = alpha + beta_w * weight_c + beta_cyl * cylinders_c + beta_hp * hp_c;

  //likelihood function
  mpg ~ normal(mu, sigma);
}
generated quantities {
  //replications for the posterior predictive distribution
  real y_rep[n] = normal_rng(alpha + beta_w * weight_c + beta_cyl * cylinders_c + beta_hp * hp_c, sigma);
}
```

In the code block above, normal_rng is the Stan function that generates observations from a normal distribution. So y_rep generates new data points from a normal distribution using the linear predictor we built (mu) and the variance sigma.

Now let's re-estimate the model:

```r
linear.fit.3 <- linear.model$sample(data = model.data)
print(linear.fit.3)
```

In our model output, we now have a replicated y value for every row of data. We can use these values to plot the replicated data against the observed data.

```r
y <- cars.data$mpg

# convert the cmdstanr fit to an rstan object
library(rstan)
stanfit <- read_stan_csv(linear.fit.3$output_files())

# extract the fitted values
y.rep <- extract(stanfit)[["y_rep"]]

ppc_dens_overlay(y = cars.data$mpg, yrep = y.rep[1:100, ])
```

The closer the replicated values (yrep) get to the observed values (y), the more accurate the model. Here it looks like we could probably do a bit better, though the loose fit is likely due to the small sample size (which adds more uncertainty).
# Improving the Model with Better Priors

To improve this model, let's use more informative priors. Priors allow us to incorporate our background knowledge about the question into the model to produce more realistic estimates. For our question here, we probably don't expect the weight of a vehicle to change its MPG by more than a dozen or so miles per gallon. Unfortunately, because we didn't specify priors in the previous models, Stan defaulted to flat priors, which place essentially equal probability on all possible coefficient values - not very realistic. Let's fix that.

To get a better sense of what priors to use, it's a good idea to run prior predictive checks, which are a lot like posterior predictive checks, only they don't include any data. The goal is to select priors that put some probability over all plausible values.

```r
# expectations for the effect of weight on MPG
sample.weight <- rnorm(1000, mean = 0, sd = 100)
plot(density(sample.weight))

# expectations for the average mpg
sample.intercept <- rnorm(1000, mean = 0, sd = 100)
plot(density(sample.intercept))

# expectations for model variance
sample.sigma <- runif(1000, min = 0, max = 100)
plot(density(sample.sigma))

# prior predictive simulation for mpg given the priors
prior_mpg <- rnorm(1000, sample.weight + sample.intercept, sample.sigma)
plot(density(prior_mpg))
```

These priors suggest that the effect of weight on MPG could be anywhere from -400 to 400. Definitely not realistic - and these are already more informative priors than any frequentist analysis! Similarly, the expected MPG of a vehicle given these priors is anywhere from -400 to 400. Let's bring these in a bit.
```r
# expectations for the effect of weight on MPG
sample.weight <- rnorm(1000, mean = -10, sd = 5)
plot(density(sample.weight))

# expectations for the average mpg
sample.intercept <- rnorm(1000, mean = 20, sd = 5)
plot(density(sample.intercept))

# expectations for model variance
sample.sigma <- runif(1000, min = 0, max = 10)
plot(density(sample.sigma))

# prior predictive simulation for mpg given the priors
prior_mpg <- rnorm(1000, sample.weight + sample.intercept, sample.sigma)
plot(density(prior_mpg))
```

We could probably do better, but these look a lot more reasonable. Now the expected effect of weight on MPG is negative, and the majority of the mass is concentrated between -15 and -5. Similarly, the expected MPG given these priors is between -10 and 20.

Now let's build a new model with these priors and sample from it.

```stan
data {
  int n;                 //number of observations in the data
  vector[n] mpg;         //vector of length n for the car's MPG
  vector[n] weight_c;    //vector of length n for the car's weight
  vector[n] cylinders_c; //vector of length n for the car's cylinders
  vector[n] hp_c;        //vector of length n for the car's horsepower
}
parameters {
  real alpha;            //the intercept parameter
  real beta_w;           //slope parameter for weight
  real beta_cyl;         //slope parameter for cylinders
  real beta_hp;          //slope parameter for horsepower
  real<lower = 0> sigma; //variance parameter, restricted to positive values
}
model {
  //linear predictor mu
  vector[n] mu;

  //write the linear equation
  mu = alpha + beta_w * weight_c + beta_cyl * cylinders_c + beta_hp * hp_c;

  //prior expectations
  alpha ~ normal(20, 5);
  beta_w ~ normal(-10, 5);
  beta_cyl ~ normal(0, 5); //we'll include my uncertain priors here
  beta_hp ~ normal(0, 5);  //we'll include my uncertain priors here
  sigma ~ uniform(0, 10);

  //likelihood function
  mpg ~ normal(mu, sigma);
}
generated quantities {
  //replications for the posterior predictive distribution
  real y_rep[n] = normal_rng(alpha + beta_w * weight_c + beta_cyl * cylinders_c + beta_hp * hp_c, sigma);
}
```

Now sample:

```r
linear.fit.4 <- linear.model$sample(data = model.data)
print(linear.fit.4)
```

After estimating the model with more informative priors, the lp__ is now a little bit lower. We can compare the prior distribution to the posterior distribution to see how "powerful" our priors are.

```r
linear.fit.4 <- read_stan_csv(linear.fit.4$output_files())
posterior <- as.data.frame(linear.fit.4)

library(tidyverse)
ggplot() +
  geom_density(aes(x = sample.weight)) +
  geom_density(aes(x = posterior$beta_w), color = "blue")
```

Here we can see that even with more "informative" priors, they are still very weak compared to the data.

# Summarize the Model

With our final model in hand, we can create visualizations of our model's results.

## Coefficient Plot

```r
stan_plot(linear.fit.4, pars = c("alpha", "beta_w", "beta_cyl", "beta_hp", "sigma"))
```

## Fitted Regression Line

Here we can plot the fitted regression line.

```r
# Fitted Line
ggplot(data = cars.data, aes(x = weight_c, y = mpg)) +
  geom_point() +
  stat_function(fun = function(x) mean(posterior$alpha) + mean(posterior$beta_w) * x)

# Fitted Line with Uncertainty ------------------------------------------------
fit.plot <- ggplot(data = cars.data, aes(x = weight_c, y = mpg)) +
  geom_point()

# select a random sample of 100 draws from the posterior distribution
sims <- posterior %>%
  mutate(n = row_number()) %>%
  sample_n(size = 100)

# add these draws to the plot
lines <- purrr::map(1:100, function(i)
  stat_function(fun = function(x) sims[i, 1] + sims[i, 2] * x,
                size = 0.08, color = "gray"))
fit.plot <- fit.plot + lines

# add the mean line to the plot
fit.plot <- fit.plot +
  stat_function(fun = function(x) mean(posterior$alpha) + mean(posterior$beta_w) * x)
fit.plot
```
http://blog.gypsydave5.com/posts/2014/9/12/environmentalism/
# gypsydave5

The blog of David Wickes, software developer

# Environmentalism

I've never really seen the point of environment variables until today. They've been slowly introduced into the syllabus at Makers during the bookmark manager project. To start with they were a way to determine which database to use; whether the one for the test suite or the one for playing around on the local server. Something like

```ruby
env = ENV["RACK_ENV"] || "development"
DataMapper.setup(:default, "postgres://localhost/bookmark_manager_#{env}")
```

Which is all well and good. Then it comes to getting the app up - let's say on Heroku. Heroku has PostgreSQL support, so that's taken care of by adding a plugin on the dashboard. Tick. Pushing the application to Heroku is easy enough (as long as you haven't spelled Gemfile in all caps at any point in your Git history. Who would do that?). But then you hit the buffers, because the database isn't where you've told Sinatra it is.

So where is it? Hiding somewhere over at Amazon, apparently. If you run `heroku config` you'll see a great (OK, tiny) stack of… you guessed it… environment variables. The two key ones to look at are DATABASE_URL and HEROKU_POSTGRES_PINK_URL. Next to them both is a long URL that lets you know that the nice folks at Amazon are taking care of your instance of Postgres on behalf of Heroku.

So we just jam that URL into the DataMapper setup, right?

```ruby
DataMapper.setup(:default, "postgres://whole-mess-of-letters.compute-1.amazonaws:porty_goodness_here")
```

Wrong. That URL is a magic number; it's specific to the Heroku server you're pushing to. But what about James? What about Vincent? Maybe they want to have an instance of their own. Or what if Heroku go and migrate your database to another cloud supplier? Bad times.

Environment variables to the rescue. Look, it's right there in the config: DATABASE_URL. Just jam that sucker into the DataMapper setup.
Of course, you need to make sure that you're only using it on Heroku, so maybe some sort of if statement to make sure you're using it in the right place. Not pretty, but…

```ruby
if env.include?("heroku")
  DataMapper.setup(:default, ENV["DATABASE_URL"])
else
  DataMapper.setup(:default, "postgres://localhost/bookmark_manager_#{env}")
end
```

Environment variables. No longer a 'nice to have'.
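Since a platform like Heroku is typically the only environment where DATABASE_URL is set at all, the branching can also be collapsed into a fallback that lets the environment variable win whenever it exists. This is a sketch of that pattern rather than code from the post; the helper name `database_url` is my own:

```ruby
# Use DATABASE_URL when the platform provides it (as Heroku does),
# otherwise fall back to the local development/test database URL.
def database_url(env)
  ENV["DATABASE_URL"] || "postgres://localhost/bookmark_manager_#{env}"
end

# DataMapper.setup(:default, database_url(env))
```

The same `||` idiom as `ENV["RACK_ENV"] || "development"` above, applied one more time.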
http://openstudy.com/updates/4f317a0de4b0fc09381f51d8
## anonymous 4 years ago

dy = cx, solve for x. The solution is x = ?

1. ash2326: candance, can you make the coefficient of x one somehow?

2. anonymous: huh???? i dont understand what you are saying

3. ash2326: candance, can you make the coefficient of x one somehow?

4. anonymous: (1/2)cx^2

5. anonymous: i am so confused... what does coefficient mean?

6. ash2326: is this calculus?

7. anonymous: no, its college algebra

8. anonymous: $\frac{cx^2}{2}$, that is the f(y); then dy/dx = cx

9. ash2326: then we have $dy = cx$; divide both sides by c: $\frac{dy}{c} = x$

10. ash2326: Radar, here it's not the derivative of y; d, y, c and x are variables.

11. Yes, I believe you are correct, I was thinking the problem was y'
https://cs.stackexchange.com/questions/67832/game-tournament-program-np-complete?noredirect=1
# Game tournament program, NP complete?

I have been trying to find a solution, both theoretical and practical, to my problem, but I just can't. The question: you have x players that should play some number of rounds y of games in groups of 4. If a player was in a game with somebody else, he is never allowed to be in the same group with that player again. The task is to give n groups that satisfy these conditions, with n being the maximal number of possible groups.

My first approach was to just take a player and put him into a group. Now, for the next player, check the array already_played_with_players of the first player; if the second player is not in it, add him, and add the players he played with to the already_played_with_players array. Proceed the same way for the next two players. Then mark these 4 players as in a group, take the next player out of the pool, and so on.

This is not some educational question, but a real-world problem. I would like to organize a game tournament, but so that people meet new people and don't play with the same people again. I already asked multiple people (mathematicians, physicists, computer scientists), but the solution was not evident to anybody. So I was wondering if the problem is NP-complete? I know I would need to find a reduction from another NP-complete problem to my problem to prove that, but I don't even know how to formalize my problem exactly.

• Let n = 16, and the initial groups {1234, 5678, 9abc, defg}: now player 1 can never be grouped with 2, 3 and 4? So for every play, each player's number of possible team members decreases by 3? And the objective is to return the maximal number of groups possible (where the output is such a grouping, not a number)? – Evil Dec 24 '16 at 1:46
• The objective is to return the maximal number of groups possible and a specific grouping that is maximal. And yes, you understood the question right. For 16 players there can be 3 rounds played.
But for higher numbers the process gets difficult fast, and for 16 I found it out more by guessing than by a formal method. – Hakaishin Dec 24 '16 at 3:09
• I think there can be 5 rounds – skan Apr 4 '17 at 11:59
• Can you elaborate and give a specific grouping please? – Hakaishin Apr 4 '17 at 15:59

## 1 Answer

Your problem has been solved by Brouwer in 1979 in his paper Optimal Packings of $K_4$'s into a $K_n$. Quoting from his paper, let $$J(2,4,v) = \begin{cases} \lfloor \frac{v}{4} \lfloor \frac{v-1}{3} \rfloor\rfloor - 1 & \text{if } v \equiv 7 \text{ or } 10 \pmod{12}, \\ \lfloor \frac{v}{4} \lfloor \frac{v-1}{3} \rfloor\rfloor & \text{otherwise}. \end{cases}$$ The maximal number of groups when there are $v$ players, denoted $D(2,4,v)$, is equal to $J(2,4,v)$ unless $v \in \{9,10,17\}$, in which case $D(2,4,v) = J(2,4,v)-1$, and unless $v \in \{8,11,19\}$, in which case $D(2,4,v) = J(2,4,v)-2$.

The more general case in which every pair can appear at most $\lambda$ times has been solved by Assaf in his paper The packing of pairs by quadruples.

• Ah, excellent. This sure sounded like a graph problem someone should have already studied. – Kyle Jones Dec 24 '16 at 22:33
• Isn't this the answer to another question? Shouldn't it be J(1, 4, v)? I tried to read the paper, but could not really understand the technicalities. Do they account in the paper for the fact that if a player played with somebody in a group, he can not play again with any of them? So for example if player 1 plays with 2, 3, 4 he can not ever again play with 2, 3, 4? This is about the maximum number of possible groups. But how would I go about actually finding such a maximal grouping of players? – Hakaishin Dec 25 '16 at 18:17
• The parameter "2" here is not the number of occurrences, but rather the size of the set of players that cannot appear together in a group. In your case, any two players cannot appear more than once in a group.
As for actually finding such a maximal grouping, presumably this is explained in the paper. – Yuval Filmus Dec 25 '16 at 18:18
• As @Hakaishin said, grouping 16 players into groups of 4, there can be 3 rounds played. How do we apply your equation to get that number? If I use v=16 I get J=20. – skan Apr 3 '17 at 18:48
• You calculate $\lfloor \frac{16}{4} \lfloor \frac{15}{3} \rfloor \rfloor = 20$. – Yuval Filmus Apr 3 '17 at 19:28
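Brouwer's formula is easy to evaluate mechanically. The sketch below (Python; the function names `J` and `D` are mine, mirroring the notation in the answer) computes the maximal number of groups for a given number of players:

```python
def J(v):
    # floor(v/4 * floor((v-1)/3)), minus 1 when v is 7 or 10 mod 12
    base = (v * ((v - 1) // 3)) // 4
    return base - 1 if v % 12 in (7, 10) else base

def D(v):
    # Brouwer's exceptional small cases
    if v in (9, 10, 17):
        return J(v) - 1
    if v in (8, 11, 19):
        return J(v) - 2
    return J(v)

print(D(16))  # 20
```

For 16 players this gives 20 groups, i.e. five rounds of four simultaneous games, matching the "5 rounds" comment above rather than the asker's guess of 3.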
https://www.biostars.org/p/443031/
Why does snpEff annotate the same region with different "effects", and why are some effects given in combination? What are the criteria for filtering this annotation?

svp, 10 months ago:

snpEff annotates the same region with different effects. But how does it make a combination of predictions within the same transcript, as in the following:

frameshift_variant&start_lost&initiator_codon_variant&non_canonical_start_codon
frameshift_variant&stop_retained_variant
splice_donor_variant&missense_variant&splice_region_variant&intron_variant
frameshift_variant&initiator_codon_variant&non_canonical_start_codon
splice_donor_variant&missense_variant&disruptive_inframe_deletion&splice_region_variant&intron_variant

For example, transcript enst1234 is annotated with splice_donor_variant&missense_variant&disruptive_inframe_deletion&splice_region_variant&intron_variant. How is this done? How can I filter the variants from this combination, given that it is reported as one effect?

Tags: snpEff, VEP, VariantAnnotation, Exome, VCF

• Well, if the variant is a deletion overlapping a splice junction, it fulfills all the conditions above.
• How can I filter a single effect when multiple effects are given in combination? What are the actual criteria for reporting a combination of effects as one effect?
• I don't think you can do this. There should be one and only one term to define this in the gene ontology. I think it's more a problem on your side (how to grep for a consequence with multiple terms) than finding the right term in SO.
• I did not get your point. Can you give me an example with the following data to identify the correct SO term: splice_donor_variant&missense_variant&disruptive_inframe_deletion&splice_region_variant&intron_variant. For the above combined term, what is the single SO term?
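On the mechanical side of the filtering question: the combined effect is just the individual Sequence Ontology terms joined with `&`, so post-processing can split the annotation back into single terms and select on any one of them. A sketch in plain Python over the string from the question (this is generic string handling, not an snpEff feature):

```python
ann = ("splice_donor_variant&missense_variant&disruptive_inframe_deletion"
       "&splice_region_variant&intron_variant")

# the combined effect is an '&'-joined list of single SO terms
effects = ann.split("&")
print(effects)

# keep a record only if its combined annotation contains a term of interest
wanted = "missense_variant"
keep = wanted in effects
print(keep)  # True
```

The same split-and-match logic applies per record when walking the ANN field of a VCF line.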
https://stacks.math.columbia.edu/tag/0735
## 21.16 The base change map

In this section we construct the base change map in some cases; the general case is treated in Remark 21.20.3. The discussion in this section avoids using derived pullback by restricting to the case of a base change by a flat morphism of ringed sites. Before we state the result, let us discuss flat pullback on the derived category.

Suppose $g : (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}), \mathcal{O}_\mathcal {C}) \to (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}), \mathcal{O}_\mathcal {D})$ is a flat morphism of ringed topoi. By Modules on Sites, Lemma 18.30.2 the functor $g^* : \textit{Mod}(\mathcal{O}_\mathcal {D}) \to \textit{Mod}(\mathcal{O}_\mathcal {C})$ is exact. Hence it has a derived functor $g^* : D(\mathcal{O}_\mathcal {D}) \to D(\mathcal{O}_\mathcal {C})$ which is computed by simply pulling back a representative of a given object in $D(\mathcal{O}_\mathcal {D})$, see Derived Categories, Lemma 13.17.9. It preserves the bounded (above, below) subcategories. Hence, as indicated, we denote this functor by $g^*$ rather than $Lg^*$.

Let

$\xymatrix{ (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}'), \mathcal{O}_{\mathcal{C}'}) \ar[r]_{g'} \ar[d]_{f'} & (\mathop{\mathit{Sh}}\nolimits (\mathcal{C}), \mathcal{O}_\mathcal {C}) \ar[d]^ f \\ (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}'), \mathcal{O}_{\mathcal{D}'}) \ar[r]^ g & (\mathop{\mathit{Sh}}\nolimits (\mathcal{D}), \mathcal{O}_\mathcal {D}) }$

be a commutative diagram of ringed topoi. Let $\mathcal{F}^\bullet$ be a bounded below complex of $\mathcal{O}_\mathcal {C}$-modules. Assume both $g$ and $g'$ are flat. Then there exists a canonical base change map

$g^*Rf_*\mathcal{F}^\bullet \longrightarrow R(f')_*(g')^*\mathcal{F}^\bullet$

in $D^{+}(\mathcal{O}_{\mathcal{D}'})$.

Proof. Choose injective resolutions $\mathcal{F}^\bullet \to \mathcal{I}^\bullet$ and $(g')^*\mathcal{F}^\bullet \to \mathcal{J}^\bullet$.
By Lemma 21.15.2 we see that $(g')_*\mathcal{J}^\bullet$ is a complex of injectives representing $R(g')_*(g')^*\mathcal{F}^\bullet$. Hence by Derived Categories, Lemmas 13.18.6 and 13.18.7 the arrow $\beta$ in the diagram

$\xymatrix{ (g')_*(g')^*\mathcal{F}^\bullet \ar[r] & (g')_*\mathcal{J}^\bullet \\ \mathcal{F}^\bullet \ar[u]^{adjunction} \ar[r] & \mathcal{I}^\bullet \ar[u]_\beta }$

exists and is unique up to homotopy. Pushing down to $\mathcal{D}$ we get

$f_*\beta : f_*\mathcal{I}^\bullet \longrightarrow f_*(g')_*\mathcal{J}^\bullet = g_*(f')_*\mathcal{J}^\bullet$

By adjunction of $g^*$ and $g_*$ we get a map of complexes $g^*f_*\mathcal{I}^\bullet \to (f')_*\mathcal{J}^\bullet$. Note that this map is unique up to homotopy since the only choice in the whole process was the choice of the map $\beta$ and everything was done on the level of complexes. $\square$

Comment #2178 by Kestutis Cesnavicius: In the first display of the section the indices C and D seem to be mixed up.
https://www.latex.org/forum/viewtopic.php?f=61&t=28969&sid=8b5ad492f54d35a24be126dfe0a564a6
### insert a graphic in a block

NELLLY wrote:

Hello,

I need to insert a graphic in a block of my beamer presentation. I used the following code:

```latex
\documentclass[xcolor={dvipsnames}, 10pt]{beamer}
\usepackage[utf8]{inputenc}
\usepackage{marvosym}
% \usepackage{hyperref}
% \usepackage{transparent}
% \usepackage{ragged2e, siunitx, xcolor,caption}
\usepackage{tabularx,graphicx,rotating,subfigure,multirow,colortbl}
% \usepackage{enumitem}
\newcommand{\Min}{\operatornamewithlimits{Min}}
\captionsetup{
  %format=hang,
  %width=15cm,
  aboveskip=4pt,
  belowskip=1pt
}
%\usepackage{etoolbox}
% \apptocmd{\frame}{\justifying}{}{}
\let\olditem\item
\renewcommand\item{\olditem\justifying}
\usepackage[english]{babel}
\usetheme{Warsaw}
\setbeamertemplate{caption}[numbered]
\hyphenpenalty 10000
\justifying
\setbeamertemplate{footline}[frame number]
\title{On Economic Design of Dynamic Control Charts for Attribute Data}
\date{}
%%%%%Mes cdes%%%%
\newcommand{\makepart}[1]{ % For convenience
  \part{Title of part #1}
  \frame{\partpage}
  \chapter{Chapter}\begin{frame} Chapter \end{frame}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\begin{frame}
  \begin{block}{Operation of the chart}
    %\vspace*{-0.5cm}
    \begin{figure}[H]
      \centering\vspace*{-0.5cm}
      \includegraphics[width=0.9\textwidth]{D:/MESIMAGES/VSSchart}\vspace{-2.5cm}
      \caption{Chart's partition}
    \end{figure}
  \end{block}
\end{frame}
\end{document}
```

I get the output in the attachment. I need the background of the figure to be the same color as the block (grey instead of white). What should I do?

Thanks.

Attachments: essaiforum.pdf

### insert a graphic in a block

Stefan Kottwitz replied:

Hi Nellly!
That color is in the graphic, so you need to change it or make that area transparent. Even better: don't use \includegraphics, but create the figure within the beamer LaTeX document instead. You could use TikZ for that.

Stefan
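For what it's worth, a minimal sketch of that suggestion: a tikzpicture drawn inside the block has no background of its own, so the block's grey shows through. The chart drawn here is only a placeholder, not the actual VSS chart from the attachment:

```latex
\documentclass[10pt]{beamer}
\usetheme{Warsaw}
\usepackage{tikz}
\begin{document}
\begin{frame}
  \begin{block}{Operation of the chart}
    \begin{figure}
      \centering
      % TikZ output is transparent, so the block colour shows through
      \begin{tikzpicture}
        \draw[->] (0,0) -- (6,0) node[below left] {sample number};
        \draw[->] (0,0) -- (0,3) node[above right] {statistic};
        \draw[dashed] (0,2.5) -- (6,2.5) node[right] {UCL};
        \draw[dashed] (0,1.25) -- (6,1.25) node[right] {CL};
      \end{tikzpicture}
      \caption{Chart's partition}
    \end{figure}
  \end{block}
\end{frame}
\end{document}
```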
https://crypto.stackexchange.com/questions/51643/what-operation-does-denote-in-ansi-x9-63-kdf?noredirect=1
# What operation does $\|$ denote in ANSI X9.63 KDF? [duplicate]

Can someone tell me in layman's terms what this is?

$$K_i = \mathrm{Hash}(Z \| \mathit{Counter} \| [\mathit{SharedInfo}])$$

What do the double pipes represent?

## 1 Answer

It means concatenation. Z, Counter, and SharedInfo are three bitstrings which are to be concatenated before hashing. The [ ] around SharedInfo means it may be absent, in which case you would use an empty string instead. (Concatenating an empty string to the end yields the same result as not concatenating anything.)
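Concretely, each KDF block is just byte-string concatenation followed by a hash. A sketch in Python, assuming SHA-256 as the hash and the 4-byte big-endian counter encoding used by X9.63 (check the standard for your exact profile before relying on this):

```python
import hashlib

def x963_kdf_block(z: bytes, counter: int, shared_info: bytes = b"") -> bytes:
    # K_i = Hash(Z || Counter || [SharedInfo]): plain concatenation,
    # with SharedInfo simply omitted (empty) when absent
    data = z + counter.to_bytes(4, "big") + shared_info
    return hashlib.sha256(data).digest()

print(x963_kdf_block(b"shared-secret-Z", 1).hex())
```

A full KDF run increments the counter and concatenates successive blocks until enough key material is produced.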
http://mathhelpforum.com/advanced-algebra/136937-help-eigenvalue-problem.html
Thread: help for an eigenvalue problem!

1. help for an eigenvalue problem!

I am wondering if somebody can help me solve this problem. Let $\displaystyle A \in M_n[R]$, where each row of $\displaystyle A$ has unit length and $\displaystyle A$ is nonsingular. How does one prove that $\displaystyle A^T A$, which is SPD, has all its eigenvalues less than 2? Thanks for any help!

2. This assertion is wrong!

The problem stated above is actually not correct. Please forget about it!
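The second post is right to retract the claim for general $n$. A quick numerical check (not part of the original thread) exhibits a $3\times 3$ counterexample: since the rows have unit length, $\operatorname{tr}(A^T A) = n$, and nearly parallel rows push almost all of that trace into one eigenvalue, which therefore approaches $n = 3 > 2$.

```python
import numpy as np

# Numerical counterexample for n = 3: three unit-length rows that are
# nearly parallel.  A is nonsingular, yet the largest eigenvalue of
# A^T A is close to trace(A^T A) = n = 3, so the claimed bound of 2 fails.
eps = 0.1
c, s = np.cos(eps), np.sin(eps)
A = np.array([[1.0, 0.0, 0.0],
              [c,   s,   0.0],
              [c,   0.0, s  ]])

assert np.allclose(np.linalg.norm(A, axis=1), 1.0)  # each row has unit length
assert abs(np.linalg.det(A)) > 1e-6                 # A is nonsingular
top = np.linalg.eigvalsh(A.T @ A).max()             # A^T A is SPD, real spectrum
print(top)  # about 2.99 -- greater than 2
```

(For $n = 2$ the bound does hold: the eigenvalues there are $1 \pm |\cos\theta|$ for the angle $\theta$ between the two unit rows, and nonsingularity forces $|\cos\theta| < 1$.)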
2018-04-24 15:32:45
https://aviation.stackexchange.com/questions/25415/how-do-i-interpret-these-values-in-nasas-c-mapss-dataset-files
How do I interpret these values in NASA's C-MAPSS dataset files?

I was studying a dual-speed, high-bypass-ratio turbofan engine dataset which I obtained from NASA's website. This dataset was generated by the C-MAPSS simulator, and it contains nominal and fault files. I studied the user guide and some other related documents, but I failed to understand the difference between these two files (i.e., at the value/parameter level).

1. What is the difference between the nominal and fault files?
2. There are 27 parameters used; among these, which parameters affect the fan, HPC, HPT, LPC, and LPT?
3. What is the threshold value for these parameters, i.e., when do we say that a parameter has reached its maximum value, above which it would lead to a fault condition?

• Can you link to where you obtained the files? – fooot Feb 20 '16 at 3:10

1. What is the difference between the nominal and fault files?

It is found inside the .tar.gz file. From the README file contained in the page you linked: the Simulation_Info.txt contains a summary of the number of flights, the fault type that was simulated, whether it is nominal or not, and if not, at what flight and time sample the fault was introduced.

2. Which parameters affect the fan, HPC, HPT, LPC, and LPT?

I assume you are asking what those 27 parameters are. From the README file that is found in the page you linked:

3. What is the threshold value for these parameters?

A detailed answer is out of scope for this site; it heavily depends on the failure simulated. The documentation of the various tests should have enough detail to understand. Additionally, hard absolute values are definitely of no use here: different engines will have different characteristics, so the numbers valid for the engine simulated here will be applicable to basically no real engine.
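As a practical starting point for inspecting the files themselves, here is a hypothetical sketch (not from the thread; the column names are placeholders, not the official C-MAPSS parameter names). The dataset's text files are plain whitespace-delimited numeric tables, one row per time sample, so they can be parsed with nothing but the standard library:

```python
import io

# Hypothetical sketch: placeholder column names, two made-up sample rows.
# Real files would be opened with open(path) instead of io.StringIO.
sample = io.StringIO(
    "1 1 0.0023 0.0003 100.0 518.67 641.82\n"
    "1 2 0.0027 -0.0003 100.0 518.67 642.15\n"
)
cols = ["unit", "flight_cycle", "op1", "op2", "op3", "sensor_a", "sensor_b"]
rows = [dict(zip(cols, map(float, line.split())))
        for line in sample if line.strip()]
print(len(rows), rows[0]["sensor_b"])  # 2 641.82
```

Mapping each column to its actual parameter still requires the README, which is why the answer above keeps pointing back to it.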
2020-08-07 01:27:51
https://math.stackexchange.com/questions/1317159/how-can-i-find-this-recurrence-relation-my-approach-seem-to-be-wrong
# How can I find this recurrence relation? My approach seems to be wrong.

QUESTION: A string that contains only 0s, 1s, and 2s is called a ternary string. Find a recurrence relation for the number of ternary strings of length n that do not contain two consecutive 0s or two consecutive 1s.

My approach: Let's define an invalid string as a ternary string that contains two consecutive 0s or 1s, and let $b_n$ be the number of such strings. Now, let $a_n$ be the number of valid strings. We need to subtract the number of invalid strings, $b_n$, from the total number of ternary strings, $3^n$. To count $b_n$, use the sum rule: partition the set of strings depending on what digits start the string:

1. The string starts with a 2: $b_{n-1}$ ways to finish the string.
2. The string starts with a 1:
   • The remaining strings that start with a 0: $b_{n-2}$ ways to end the string.
   • The remaining strings that start with a 1: $3^{n-2}$ ways to finish the string.
   • The remaining strings that start with a 2: $b_{n-2}$ ways to finish the string.
3. The string starts with a 0:
   • The remaining strings that start with a 0: $3^{n-2}$ ways to end the string.
   • The remaining strings that start with a 1: $b_{n-2}$ ways to finish the string.
   • The remaining strings that start with a 2: $b_{n-2}$ ways to finish the string.

Summing it up, $b_n = b_{n-1} + 4b_{n-2} + 2\cdot 3^{n-2}$. Note that the initial conditions are $b_0 = b_1 = 0$. Also note that this recurrence relation counts the number of invalid strings, that is, the ternary strings that contain two consecutive 0s or 1s. In order to obtain the number of valid ternary strings, we subtract this from the total number of ternary strings: $a_n = 3^n - b_n$, where $a_n$ is the number of (valid) ternary strings that do not contain two consecutive 1s or 0s.

I tried it for $b_2$ and $b_3$, but it does not give the right answers. What am I doing wrong?
What you're missing is that there are more than $b_{n-2}$ invalid ways to finish a string that starts with 10 -- namely, the rest of the string could start with 0 but otherwise be valid. And similarly for strings that start with 01. The natural way to set up a recurrence would be to let $a_n$ be the number of valid strings of length $n$ that start with 0 or 1, and $c_n$ be the number of valid strings that start with 2. We then have \begin{align} a_{n+1} &= a_n + 2c_n \\ c_{n+1} &= a_n + c_n \end{align} and we get $$\begin{pmatrix} a_n \\ c_n \end{pmatrix} = \begin{pmatrix}1&2\\1&1\end{pmatrix}^{n-1} \begin{pmatrix}2\\1\end{pmatrix}$$ and we can then try to diagonalize the matrix to find a closed solution to the recurrence. In this particular case, however, it is easier to set $p_n=a_n+c_n$ (these are the numbers we're really interested in) and calculate $$p_{n+1}=a_{n+1}+c_{n+1}=2a_n+3c_n=2p_n + c_n = 2p_n + a_{n-1}+c_{n-1} = 2p_n+p_{n-1}$$ So a single second-degree recurrence for the result would be $$p_n = 2p_{n-1} + p_{n-2}$$ This can be solved using standard techniques. The characteristic polynomial has roots $1\pm\sqrt 2$, and the exact solution turns out to be $$p_n = \frac{(1+\sqrt2)^{n+1}+(1-\sqrt2)^{n+1}}2 = \left[\frac{(1+\sqrt2)^{n+1}}2\right]$$ where $[{\,\cdot\,}]$ rounds to the nearest integer. Bonus question: Is there a straightforward combinatorial interpretation of the recurrence $p_n=2p_{n-1}+p_{n-2}$? • I understand the two recurrence equations. Is it possible to explain how to arrive at the matrix equation?. – Geoffrey Critzer Jun 9 '15 at 0:55 • @GeoffreyCritzer: The two recurrence equations are the same as $$\begin{pmatrix}a_{n+1}\\c_{n+1}\end{pmatrix}=\begin{pmatrix} 1&2\\1&1\end{pmatrix}\begin{pmatrix}a_n\\c_n\end{pmatrix}$$ which you can just keep applying step for step until you reach $(a_1,c_1)$ which has the known values $(2,1)$. 
– hmakholm left over Monica Jun 9 '15 at 9:40

You only need to know the last digit of the string to decide what the possible next digits are. So there are three "states": $0$, $1$ and $2$; if $x_n$, $y_n$ and $z_n$ are the numbers of strings of length $n$ ending in each one of these states, it's clear that $$\pmatrix{x_{n+1}\\y_{n+1}\\z_{n+1}}= \pmatrix{0&1&1\\1&0&1\\1&1&1}\pmatrix{x_{n}\\y_{n}\\z_{n}}$$ Let $A$ be this matrix; if you set $x_0=0$, $y_0=0$ and $z_0=1$ (so that all $1$-long strings are OK), you have $$\pmatrix{x_{n}\\y_{n}\\z_{n}}=A^n\pmatrix{0\\0\\1}.$$ The answer to your problem is $c_n=x_n+y_n+z_n$. To compute this, note that the characteristic polynomial of $A$ is $z^3-z^2-3z-1$, whose roots are $-1$ and $1\pm\sqrt2$. Thus $c_n$ satisfies the recurrence $c_n=c_{n-1}+3c_{n-2}+c_{n-3}$, and has the closed form $c_n=A(-1)^n + B(1+\sqrt2)^n+C(1-\sqrt2)^n$, where the constants $A$, $B$ and $C$ can be found by looking at $c_0=1$, $c_1=3$, and $c_2=7$; we find $A=0$, $B=(1+\sqrt2)/2$ and $C=(1-\sqrt2)/2$, so $$c_n=\frac12(1+\sqrt2)^{n+1}+\frac12(1-\sqrt2)^{n+1}.$$

Another approach is to use the symbolic method to establish a system of generating functions. Let $A(x)$ be the g.f. for the number of valid words that are empty or start with a 0 or start with a 1. Let $C(x)$ be the g.f. for the number of valid nonempty words that start with a 2. We need to be familiar with the subtleties of the symbolic method. The system $A(x) = 1 + x + x A(x) + 2x C(x)$, $C(x) = x A(x) + x C(x)$ gives $A(x) = \frac{1 - x^2}{1 - 2x - x^2}$ and $C(x) = \frac{x + x^2}{1 - 2x - x^2}$. We desire $A(x) + C(x)$, and the sequence is given in Sloane's OEIS A078057.

• How do you get $A(x)=1+x+xA(x)+2xA(x)$? – hmakholm left over Monica Jun 13 '15 at 14:44
• I am sorry, the equation for A(x) had a typo which I have corrected now. 
A valid word that is empty or starts with a 0 or a 1 can be constructed by prepending a 0 or 1 to a valid word that starts with a 2 ( which accounts for the term 2*xC(x) ) OR prepending a 0 or a 1 to a word that starts with a 1 or 0 respectively ( this is the xA(x) term ). Now when we prepend to the empty word we really have two choices ( 0 or 1) so we have to add the term x. The term 1 is the empty word. – Geoffrey Critzer Jun 13 '15 at 18:29
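As a sanity check (not part of the thread), the recurrence $p_n = 2p_{n-1} + p_{n-2}$ derived in the first answer can be verified against a brute-force enumeration of the ternary strings themselves:

```python
from itertools import product

def count_valid(n):
    """Brute-force count of ternary strings of length n with no '00' or '11'."""
    return sum(1 for t in product("012", repeat=n)
               if "00" not in "".join(t) and "11" not in "".join(t))

# The recurrence p_n = 2 p_{n-1} + p_{n-2} from the first answer,
# seeded with p_0 = 1 (the empty string) and p_1 = 3.
p = [1, 3]
for n in range(2, 9):
    p.append(2 * p[-1] + p[-2])

assert all(count_valid(n) == p[n] for n in range(9))
print(p[1:])  # [3, 7, 17, 41, 99, 239, 577, 1393]
```

These are the values of OEIS A078057 cited in the generating-function answer, which matches $A(x)+C(x)$ having denominator $1-2x-x^2$.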
2020-04-02 20:36:29
https://labs.tib.eu/arxiv/?author=A.%20Clocchiatti
• Continuum Foreground Polarization and Na~I Absorption in Type Ia SNe(1701.05196) Jan. 18, 2017 astro-ph.SR, astro-ph.HE We present a study of the continuum polarization over the 400--600 nm range of 19 Type Ia SNe obtained with FORS at the VLT. We separate them in those that show Na I D lines at the velocity of their hosts and those that do not. Continuum polarization of the sodium sample near maximum light displays a broad range of values, from extremely polarized cases like SN 2006X to almost unpolarized ones like SN 2011ae. The non--sodium sample shows, typically, smaller polarization values. The continuum polarization of the sodium sample in the 400--600 nm range is linear with wavelength and can be characterized by the mean polarization (P$_{\rm{mean}}$). Its values span a wide range and show a linear correlation with color, color excess, and extinction in the visual band. Larger dispersion correlations were found with the equivalent width of the Na I D and Ca II H & K lines, and also a noisy relation between P$_{\rm{mean}}$ and $R_{V}$, the ratio of total to selective extinction. Redder SNe show stronger continuum polarization, with larger color excesses and extinctions. We also confirm that high continuum polarization is associated with small values of $R_{V}$. The correlation between extinction and polarization -- and polarization angles -- suggest that the dominant fraction of dust polarization is imprinted in interstellar regions of the host galaxies. We show that Na I D lines from foreground matter in the SN host are usually associated with non-galactic ISM, challenging the typical assumptions in foreground interstellar polarization models. • SPT-GMOS: A Gemini/GMOS-South Spectroscopic Survey of Galaxy Clusters in the SPT-SZ Survey(1609.05211) Sept. 16, 2016 astro-ph.CO, astro-ph.GA We present the results of SPT-GMOS, a spectroscopic survey with the Gemini Multi-Object Spectrograph (GMOS) on Gemini South. 
The targets of SPT-GMOS are galaxy clusters identified in the SPT-SZ survey, a millimeter-wave survey of 2500 squ. deg. of the southern sky using the South Pole Telescope (SPT). Multi-object spectroscopic observations of 62 SPT-selected galaxy clusters were performed between January 2011 and December 2015, yielding spectra with radial velocity measurements for 2595 sources. We identify 2243 of these sources as galaxies, and 352 as stars. Of the galaxies, we identify 1579 as members of SPT-SZ galaxy clusters. The primary goal of these observations was to obtain spectra of cluster member galaxies to estimate cluster redshifts and velocity dispersions. We describe the full spectroscopic dataset and resulting data products, including galaxy redshifts, cluster redshifts and velocity dispersions, and measurements of several well-known spectral indices for each galaxy: the equivalent width, W, of [O II] 3727,3729 and H-delta, and the 4000A break strength, D4000. We use the spectral indices to classify galaxies by spectral type (i.e., passive, post-starburst, star-forming), and we match the spectra against photometric catalogs to characterize spectroscopically-observed cluster members as a function of brightness (relative to m*). Finally, we report several new measurements of redshifts for ten bright, strongly-lensed background galaxies in the cores of eight galaxy clusters. Combining the SPT-GMOS dataset with previous spectroscopic follow-up of SPT-SZ galaxy clusters results in spectroscopic measurements for >100 clusters, or ~20% of the full SPT-SZ sample. • Spectropolarimetry of the Type IIb SN 2008aq(1606.05465) March 4, 2020 astro-ph.SR We present optical spectroscopy and spectropolarimetry of the Type IIb SN 2008aq 16 days and 27 days post-explosion. 
The spectrum of SN 2008aq remained dominated by Halpha P Cygni profile at both epochs, but showed a significant increase in the strength of the helium features, which is characteristic of the transition undergone by supernovae between Type IIb and Type Ib. Comparison of the spectra of SN 2008aq to other Type IIb SNe (SN 1993J, SN 2011dh, and SN 2008ax) at similar epochs revealed that the helium lines in SN 2008aq are much weaker, suggesting that its progenitor was stripped to a lesser degree. SN 2008aq also showed significant levels of continuum polarisation at p_cont = 0.70 (+/- 0.22) % in the first epoch, increasing to p_cont = 1.21 (+/- 0.33) % by the second epoch. Moreover, the presence of loops in the q-u planes of Halpha and He I in the second epoch suggests a departure from axial symmetry. • Cosmological Constraints from Galaxy Clusters in the 2500 square-degree SPT-SZ Survey(1603.06522) March 21, 2016 astro-ph.CO (abridged) We present cosmological constraints obtained from galaxy clusters identified by their Sunyaev-Zel'dovich effect signature in the 2500 square degree South Pole Telescope Sunyaev Zel'dovich survey. We consider the 377 cluster candidates identified at z>0.25 with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a LCDM cosmology, where the species-summed neutrino mass has the minimum allowed value (mnu = 0.06 eV) from neutrino oscillation experiments, we combine the cluster data with a prior on H0 and find sigma_8 = 0.797+-0.031 and Omega_m = 0.289+-0.042, with the parameter combination sigma_8(Omega_m/0.27)^0.3 = 0.784+-0.039. 
These results are in good agreement with constraints from the CMB from SPT, WMAP, and Planck, as well as with constraints from other cluster datasets. Adding mnu as a free parameter, we find mnu = 0.14+-0.08 eV when combining the SPT cluster data with Planck CMB data and BAO data, consistent with the minimum allowed value. Finally, we consider a cosmology where mnu and N_eff are fixed to the LCDM values, but the dark energy equation of state parameter w is free. Using the SPT cluster data in combination with an H0 prior, we measure w = -1.28+-0.31, a constraint consistent with the LCDM cosmological model and derived from the combination of growth of structure and geometry. When combined with primarily geometrical constraints from Planck CMB, H0, BAO and SNe, adding the SPT cluster data improves the w constraint from the geometrical data alone by 14%, to w = -1.023+-0.042. • A Measurement of Gravitational Lensing of the Cosmic Microwave Background by Galaxy Clusters Using Data from the South Pole Telescope(1412.7521) June 23, 2015 astro-ph.CO Clusters of galaxies are expected to gravitationally lens the cosmic microwave background (CMB) and thereby generate a distinct signal in the CMB on arcminute scales. Measurements of this effect can be used to constrain the masses of galaxy clusters with CMB data alone. Here we present a measurement of lensing of the CMB by galaxy clusters using data from the South Pole Telescope (SPT). We develop a maximum likelihood approach to extract the CMB cluster lensing signal and validate the method on mock data. We quantify the effects on our analysis of several potential sources of systematic error and find that they generally act to reduce the best-fit cluster mass. It is estimated that this bias to lower cluster mass is roughly $0.85\sigma$ in units of the statistical error bar, although this estimate should be viewed as an upper limit. 
We apply our maximum likelihood technique to 513 clusters selected via their SZ signatures in SPT data, and rule out the null hypothesis of no lensing at $3.1\sigma$. The lensing-derived mass estimate for the full cluster sample is consistent with that inferred from the SZ flux: $M_{200,\mathrm{lens}} = 0.83_{-0.37}^{+0.38}\, M_{200,\mathrm{SZ}}$ (68% C.L., statistical error only). • Analysis of Sunyaev-Zel'dovich Effect Mass-Observable Relations using South Pole Telescope Observations of an X-ray Selected Sample of Low Mass Galaxy Clusters and Groups(1407.7520) May 29, 2015 astro-ph.CO, astro-ph.GA (Abridged) We use 95, 150, and 220GHz observations from the SPT to examine the SZE signatures of a sample of 46 X-ray selected groups and clusters drawn from ~6 deg^2 of the XMM-BCS. These systems extend to redshift z=1.02, have characteristic masses ~3x lower than clusters detected directly in the SPT data and probe the SZE signal to the lowest X-ray luminosities (>10^42 erg s^-1) yet. We develop an analysis tool that combines the SZE information for the full ensemble of X-ray-selected clusters. Using X-ray luminosity as a mass proxy, we extract selection-bias corrected constraints on the SZE significance- and Y_500-mass relations. The SZE significance- mass relation is in good agreement with an extrapolation of the relation obtained from high mass clusters. However, the fit to the Y_500-mass relation at low masses, while in good agreement with the extrapolation from high mass SPT clusters, is in tension at 2.8 sigma with the constraints from the Planck sample. We examine the tension with the Planck relation, discussing sample differences and biases that could contribute. We also present an analysis of the radio galaxy point source population in this ensemble of X-ray selected systems. We find 18 of our systems have 843 MHz SUMSS sources within 2 arcmin of the X-ray centre, and three of these are also detected at significance >4 by SPT. 
Of these three, two are associated with the group brightest cluster galaxies, and the third is likely an unassociated quasar candidate. We examine the impact of these point sources on our SZE scaling relation analyses and find no evidence of biases. We also examine the impact of dusty galaxies using constraints from the 220 GHz data. The stacked sample provides 2.8$\sigma$ significant evidence of dusty galaxy flux, which would correspond to an average underestimate of the SPT Y_500 signal that is (17+-9) per cent in this sample of low mass systems. • PESSTO : survey description and products from the first data release by the Public ESO Spectroscopic Survey of Transient Objects(1411.0299) May 10, 2015 astro-ph.SR, astro-ph.IM The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. We describe the data reduction strategy and data products which are publicly available through the ESO archive as the Spectroscopic Survey Data Release 1 (SSDR1). PESSTO uses the New Technology Telescope with EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5mag for classification. Science targets are then selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. The EFOSC2 spectra cover 3345-9995A (at resolutions of 13-18 Angs) and SOFI spectra cover 0.935-2.53 micron (resolutions 23-33 Angs) along with JHK imaging. This data release contains spectra from the first year (April 2012 - 2013), consisting of all 814 EFOSC2 spectra and 95 SOFI spectra (covering 298 distinct objects), in standard ESO Phase 3 format. We estimate the accuracy of the absolute flux calibrations for EFOSC2 to be typically 15%, and the relative flux calibration accuracy to be about 5%. 
The PESSTO standard NIR reduction process does not yet produce high accuracy absolute spectrophotometry but the SOFI JHK imaging will improve this. Future data releases will focus on improving the automated flux calibration of the data products. • Analysis of Late--time Light Curves of Type IIb, Ib and Ic Supernovae(1411.5975) April 24, 2015 astro-ph.SR, astro-ph.HE The shape of the light curve peak of radioactive--powered core--collapse "stripped--envelope" supernovae constrains the ejecta mass, nickel mass, and kinetic energy by the brightness and diffusion time for a given opacity and observed expansion velocity. Late--time light curves give constraints on the ejecta mass and energy, given the gamma--ray opacity. Previous work has shown that the principal light curve peaks for SN~IIb with small amounts of hydrogen and for hydrogen/helium--deficient SN~Ib/c are often rather similar near maximum light, suggesting similar ejecta masses and kinetic energies, but that late--time light curves show a wide dispersion, suggesting a dispersion in ejecta masses and kinetic energies. It was also shown that SN~IIb and SN~Ib/c can have very similar late--time light curves, but different ejecta velocities demanding significantly different ejecta masses and kinetic energies. We revisit these topics by collecting and analyzing well--sampled single--band and quasi--bolometric light curves from the literature. We find that the late--time light curves of stripped--envelope core--collapse supernovae are heterogeneous. We also show that the observed properties, the photospheric velocity at peak, the rise time, and the late decay time, can be used to determine the mean opacity appropriate to the peak. The opacity determined in this way is considerably smaller than common estimates. We discuss how the small effective opacity may result from recombination and asymmetries in the ejecta. 
• Properties of extragalactic dust inferred from linear polarimetry of Type Ia Supernovae(1407.0136) March 3, 2015 astro-ph.GA, astro-ph.SR Aims: The aim of this paper is twofold: 1) to investigate the properties of extragalactic dust and compare them to what is seen in the Galaxy; 2) to address in an independent way the problem of the anomalous extinction curves reported for reddened Type Ia Supernovae (SN) in connection to the environments in which they explode. Methods: The properties of the dust are derived from the wavelength dependence of the continuum polarization observed in four reddened Type Ia SN: 1986G, 2006X, 2008fp, and 2014J. [...] Results: All four objects are characterized by exceptionally low total-to-selective absorption ratios (R_V) and display an anomalous interstellar polarization law, characterized by very blue polarization peaks. In all cases the polarization position angle is well aligned with the local spiral structure. While SN~1986G is compatible with the most extreme cases of interstellar polarization known in the Galaxy, SN2006X, 2008fp, and 2014J show unprecedented behaviours. The observed deviations do not appear to be connected to selection effects related to the relatively large amounts of reddening characterizing the objects in the sample. Conclusions: The dust responsible for the polarization of these four SN is most likely of interstellar nature. The polarization properties can be interpreted in terms of a significantly enhanced abundance of small grains. The anomalous behaviour is apparently associated with the properties of the galactic environment in which the SN explode, rather than with the progenitor system from which they originate. For the extreme case of SN2014J, we cannot exclude the contribution of light scattered by local material; however, the observed polarization properties require an ad hoc geometrical dust distribution. 
• Galaxy Clusters Discovered via the Sunyaev-Zel'dovich Effect in the 2500-square-degree SPT-SZ survey(1409.0850) Feb. 14, 2015 astro-ph.CO We present a catalog of galaxy clusters selected via their Sunyaev-Zel'dovich (SZ) effect signature from 2500 deg$^2$ of South Pole Telescope (SPT) data. This work represents the complete sample of clusters detected at high significance in the 2500-square-degree SPT-SZ survey, which was completed in 2011. A total of 677 (409) cluster candidates are identified above a signal-to-noise threshold of $\xi$ =4.5 (5.0). Ground- and space-based optical and near-infrared (NIR) imaging confirms overdensities of similarly colored galaxies in the direction of 516 (or 76%) of the $\xi$>4.5 candidates and 387 (or 95%) of the $\xi$>5 candidates; the measured purity is consistent with expectations from simulations. Of these confirmed clusters, 415 were first identified in SPT data, including 251 new discoveries reported in this work. We estimate photometric redshifts for all candidates with identified optical and/or NIR counterparts; we additionally report redshifts derived from spectroscopic observations for 141 of these systems. The mass threshold of the catalog is roughly independent of redshift above $z$~0.25 leading to a sample of massive clusters that extends to high redshift. The median mass of the sample is $M_{\scriptsize 500c}(\rho_\mathrm{crit})$ ~ 3.5 x 10$^{14} M_\odot h^{-1}$, the median redshift is $z_{med}$ =0.55, and the highest-redshift systems are at $z$>1.4. The combination of large redshift extent, clean selection, and high typical mass makes this cluster sample of particular interest for cosmological analyses and studies of cluster formation and evolution. • Mass Calibration and Cosmological Analysis of the SPT-SZ Galaxy Cluster Sample Using Velocity Dispersion $\sigma_v$ and X-ray $Y_\textrm{X}$ Measurements(1407.2942) Dec. 
2, 2014 astro-ph.CO We present a velocity dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg2 of the survey along with 63 velocity dispersion ($\sigma_v$) and 16 X-ray Yx measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. The calibrations using $\sigma_v$ and Yx are consistent at the $0.6\sigma$ level, with the $\sigma_v$ calibration preferring ~16% higher masses. We use the full cluster dataset to measure $\sigma_8(\Omega_ m/0.27)^{0.3}=0.809\pm0.036$. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming the sum of the neutrino masses is $\sum m_\nu=0.06$ eV, we find the datasets to be consistent at the 1.0$\sigma$ level for WMAP9 and 1.5$\sigma$ for Planck+WP. Allowing for larger $\sum m_\nu$ further reconciles the results. When we combine the cluster and Planck+WP datasets with BAO and SNIa, the preferred cluster masses are $1.9\sigma$ higher than the Yx calibration and $0.8\sigma$ higher than the $\sigma_v$ calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness of fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe dataset, we measure $\Omega_ m=0.299\pm0.009$ and $\sigma_8=0.829\pm0.011$. Within a $\nu$CDM model we find $\sum m_\nu = 0.148\pm0.081$ eV. We present a consistency test of the cosmic growth rate. Allowing both the growth index $\gamma$ and the dark energy equation of state parameter $w$ to vary, we find $\gamma=0.73\pm0.28$ and $w=-1.007\pm0.065$, demonstrating that the expansion and the growth histories are consistent with a LCDM model ($\gamma=0.55; \,w=-1$). 
• SPT-CLJ2040-4451: An SZ-Selected Galaxy Cluster at z = 1.478 With Significant Ongoing Star Formation(1307.2903) Aug. 6, 2014 astro-ph.CO SPT-CLJ2040-4451 -- spectroscopically confirmed at z = 1.478 -- is the highest redshift galaxy cluster yet discovered via the Sunyaev-Zel'dovich effect. SPT-CLJ2040-4451 was a candidate galaxy cluster identified in the first 720 deg^2 of the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey, and confirmed in follow-up imaging and spectroscopy. From multi-object spectroscopy with Magellan-I/Baade+IMACS we measure spectroscopic redshifts for 15 cluster member galaxies, all of which have strong [O II] 3727 emission. SPT-CLJ2040-4451 has an SZ-measured mass of M_500,SZ = 3.2 +/- 0.8 X 10^14 M_Sun/h_70, corresponding to M_200,SZ = 5.8 +/- 1.4 X 10^14 M_Sun/h_70. The velocity dispersion measured entirely from blue star forming members is sigma_v = 1500 +/- 520 km/s. The prevalence of star forming cluster members (galaxies with > 1.5 M_Sun/yr) implies that this massive, high-redshift cluster is experiencing a phase of active star formation, and supports recent results showing a marked increase in star formation occurring in galaxy clusters at z >1.4. We also compute the probability of finding a cluster as rare as this in the SPT-SZ survey to be >99%, indicating that its discovery is not in tension with the concordance Lambda-CDM cosmological model. • Optical Spectroscopy and Velocity Dispersions of Galaxy Clusters from the SPT-SZ Survey(1311.4953) July 17, 2014 astro-ph.CO We present optical spectroscopy of galaxies in clusters detected through the Sunyaev-Zel'dovich (SZ) effect with the South Pole Telescope (SPT). We report our own measurements of $61$ spectroscopic cluster redshifts, and $48$ velocity dispersions each calculated with more than $15$ member galaxies. This catalog also includes $19$ dispersions of SPT-observed clusters previously reported in the literature. 
The majority of the clusters in this paper are SPT-discovered; of these, most have been previously reported in other SPT cluster catalogs, and five are reported here as SPT discoveries for the first time. By performing a resampling analysis of galaxy velocities, we find that unbiased velocity dispersions can be obtained from a relatively small number of member galaxies ($\lesssim 30$), but with increased systematic scatter. We use this analysis to determine statistical confidence intervals that include the effect of membership selection. We fit scaling relations between the observed cluster velocity dispersions and mass estimates from SZ and X-ray observables. In both cases, the results are consistent with the scaling relation between velocity dispersion and mass expected from dark-matter simulations. We measure a $\sim$30% log-normal scatter in dispersion at fixed mass, and a $\sim$10% offset in the normalization of the dispersion-mass relation when compared to the expectation from simulations, which is within the expected level of systematic uncertainty. • The Redshift Evolution of the Mean Temperature, Pressure, and Entropy Profiles in 80 SPT-Selected Galaxy Clusters(1404.6250) July 4, 2014 astro-ph.CO, astro-ph.HE (Abridged) We present the results of an X-ray analysis of 80 galaxy clusters selected in the 2500 deg^2 South Pole Telescope survey and observed with the Chandra X-ray Observatory. We divide the full sample into subsamples of ~20 clusters based on redshift and central density, performing an X-ray fit to all clusters in a subsample simultaneously, assuming self-similarity of the temperature profile. This approach allows us to constrain the shape of the temperature profile over 0<r<1.5R500, which would be impossible on a per-cluster basis, since the observations of individual clusters have, on average, 2000 X-ray counts. The results presented here represent the first constraints on the evolution of the average temperature profile from z=0 to z=1.2. 
We find that high-z (0.6<z<1.2) clusters are slightly (~40%) cooler both in the inner (r<0.1R500) and outer (r>R500) regions than their low-z (0.3<z<0.6) counterparts. Combining the average temperature profile with measured gas density profiles from our earlier work, we infer the average pressure and entropy profiles for each subsample. Overall, our observed pressure profiles agree well with earlier lower-redshift measurements, suggesting minimal redshift evolution in the pressure profile outside of the core. We find no measurable redshift evolution in the entropy profile at r<0.7R500. We observe a slight flattening of the entropy profile at r>R500 in our high-z subsample. This flattening is consistent with a temperature bias due to the enhanced (~3x) rate at which group-mass (~2 keV) halos, which would go undetected at our survey depth, are accreting onto the cluster at z~1. This work demonstrates a powerful method for inferring spatially-resolved cluster properties in the case where individual cluster signal-to-noise is low, but the number of observed clusters is high. • Measurement of Galaxy Cluster Integrated Comptonization and Mass Scaling Relations with the South Pole Telescope(1312.3015) Dec. 11, 2013 astro-ph.CO We describe a method for measuring the integrated Comptonization (YSZ) of clusters of galaxies from measurements of the Sunyaev-Zel'dovich (SZ) effect in multiple frequency bands and use this method to characterize a sample of galaxy clusters detected in South Pole Telescope (SPT) data. We test this method on simulated cluster observations and verify that it can accurately recover cluster parameters with negligible bias. In realistic simulations of an SPT-like survey, with realizations of cosmic microwave background anisotropy, point sources, and atmosphere and instrumental noise at typical SPT-SZ survey levels, we find that YSZ is most accurately determined in an aperture comparable to the SPT beam size. 
We demonstrate the utility of this method to measure YSZ and to constrain mass scaling relations using X-ray mass estimates for a sample of 18 galaxy clusters from the SPT-SZ survey. Measuring YSZ within a 0.75' radius aperture, we find an intrinsic log-normal scatter of 21+/-11% in YSZ at a fixed mass. Measuring YSZ within a 0.3 Mpc projected radius (equivalent to 0.75' at the survey median redshift z = 0.6), we find a scatter of 26+/-9%. Prior to this study, the SPT observable found to have the lowest scatter with mass was cluster detection significance. We demonstrate, from both simulations and SPT observed clusters, that YSZ measured within an aperture comparable to the SPT beam size is equivalent, in terms of scatter with cluster mass, to SPT cluster detection significance. • Constraints on the CMB Temperature Evolution using Multi-Band Measurements of the Sunyaev Zel'dovich Effect with the South Pole Telescope(1312.2462) Dec. 9, 2013 astro-ph.CO The adiabatic evolution of the temperature of the cosmic microwave background (CMB) is a key prediction of standard cosmology. We study deviations from the expected adiabatic evolution of the CMB temperature of the form $T(z) =T_0(1+z)^{1-\alpha}$ using measurements of the spectrum of the Sunyaev Zel'dovich Effect with the South Pole Telescope (SPT). We present a method for using the ratio of the Sunyaev Zel'dovich signal measured at 95 and 150 GHz in the SPT data to constrain the temperature of the CMB. We demonstrate that this approach provides unbiased results using mock observations of clusters from a new set of hydrodynamical simulations. We apply this method to a sample of 158 SPT-selected clusters, spanning the redshift range $0.05 < z < 1.35$, and measure $\alpha = 0.017^{+0.030}_{-0.028}$, consistent with the standard model prediction of $\alpha=0$. In combination with other published results, we constrain $\alpha = 0.011 \pm 0.016$, an improvement of $\sim 20\%$ over published constraints. 
This measurement also provides a strong constraint on the effective equation of state in models of decaying dark energy $w_\mathrm{eff} = -0.987^{+0.016}_{-0.017}$. • The Growth of Cool Cores and Evolution of Cooling Properties in a Sample of 83 Galaxy Clusters at 0.3 < z < 1.2 Selected from the SPT-SZ Survey(1305.2915) Sept. 9, 2013 astro-ph.CO We present first results on the cooling properties derived from Chandra X-ray observations of 83 high-redshift (0.3 < z < 1.2) massive galaxy clusters selected by their Sunyaev-Zel'dovich signature in the South Pole Telescope data. We measure each cluster's central cooling time, central entropy, and mass deposition rate, and compare to local cluster samples. We find no significant evolution from z~0 to z~1 in the distribution of these properties, suggesting that cooling in cluster cores is stable over long periods of time. We also find that the average cool core entropy profile in the inner ~100 kpc has not changed dramatically since z ~ 1, implying that feedback must be providing nearly constant energy injection to maintain the observed "entropy floor" at ~10 keV cm^2. While the cooling properties appear roughly constant over long periods of time, we observe strong evolution in the gas density profile, with the normalized central density (rho_0/rho_crit) increasing by an order of magnitude from z ~ 1 to z ~ 0. When using metrics defined by the inner surface brightness profile of clusters, we find an apparent lack of classical, cuspy, cool-core clusters at z > 0.75, consistent with earlier reports for clusters at z > 0.5 using similar definitions. Our measurements indicate that cool cores have been steadily growing over the 8 Gyr spanned by our sample, consistent with a constant, ~150 Msun/yr cooling flow that is unable to cool below entropies of 10 keV cm^2 and, instead, accumulates in the cluster center. 
We estimate that cool cores began to assemble in these massive systems at z ~ 1, which represents the first constraints on the onset of cooling in galaxy cluster cores. We investigate several potential biases which could conspire to mimic this cool core evolution and are unable to find a bias that has a similar redshift dependence and a substantial amplitude. • Spectropolarimetry of the Type Ia Supernova 2012fr(1302.0166) Feb. 1, 2013 astro-ph.SR Spectropolarimetry provides the means to probe the 3D geometries of Supernovae at early times. We report spectropolarimetric observations of the Type Ia Supernova 2012fr at four epochs: -11, -5, +2 and +24 days, with respect to B-lightcurve maximum. SN 2012fr is a normal Type Ia SN, similar to SNe 1990N, 2000cx and 2005hj (that all exhibit low velocity decline rates for the principal Si II line). The SN displays high velocity components at -11 days that are highly polarized. The polarization of these features decreases as they become weaker from -5 days. At +2 days, the polarization angles of the low velocity components of silicon and calcium are identical and oriented at 90 degrees relative to the high velocity Ca component. In addition to having very different velocities, the high and low velocity Ca components have orthogonal distributions in the plane of the sky. The continuum polarization for the SN at all four epochs is low <0.1%. We conclude that the low level of continuum polarization is inconsistent with the merger-induced explosion scenario. The simple axial symmetry evident from the polarization angles of the high velocity and low velocity Ca components, along with the presence of high velocity components of Si and Ca, are consistent with the pulsating delayed detonation model. We predict that, during the nebular phase, SN 2012fr will display blue-shifted emission lines of Fe-group elements. • Spectropolarimetry of the Type Ia SN 2007sr Two Months After Maximum Light(1212.3619) Dec. 
14, 2012 astro-ph.SR, astro-ph.HE We present late time spectropolarimetric observations of SN 2007sr, obtained with the VLT telescope at ESO Paranal Observatory when the object was 63 days after maximum light. The late time spectrum displays strong line polarization in the CaII absorption features. SN 2007sr adds to the case of some normal Type Ia SNe that show high line polarization or repolarization at late times, a fact that might be connected with the presence of high velocity features at early times. • High-Redshift Cool-Core Galaxy Clusters Detected via the Sunyaev--Zel'dovich Effect in the South Pole Telescope Survey(1208.3368) Dec. 7, 2012 astro-ph.CO We report the first investigation of cool-core properties of galaxy clusters selected via their Sunyaev--Zel'dovich (SZ) effect. We use 13 galaxy clusters uniformly selected from 178 deg^2 observed with the South Pole Telescope (SPT) and followed up by the Chandra X-ray Observatory. They form an approximately mass-limited sample (> 3 x 10^14 M_sun h^-1_70) spanning redshifts 0.3 < z < 1.1. Using previously published X-ray-selected cluster samples, we compare two proxies of cool-core strength: surface brightness concentration (cSB) and cuspiness ({\alpha}). We find that cSB is better constrained. We measure cSB for the SPT sample and find several new z > 0.5 cool-core clusters, including two strong cool cores. This rules out the hypothesis that there are no z > 0.5 clusters that qualify as strong cool cores at the 5.4{\sigma} level. The fraction of strong cool-core clusters in the SPT sample in this redshift regime is between 7% and 56% (95% confidence). Although the SPT selection function is significantly different from the X-ray samples, the high-z cSB distribution for the SPT sample is statistically consistent with that of X-ray-selected samples at both low and high redshifts. 
The cool-core strength is inversely correlated with the offset between the brightest cluster galaxy and the X-ray centroid, providing evidence that the dynamical state affects the cool-core strength of the cluster. Larger SZ-selected samples will be crucial in understanding the evolution of cluster cool cores over cosmic time. • Redshifts, Sample Purity, and BCG Positions for the Galaxy Cluster Catalog from the first 720 Square Degrees of the South Pole Telescope Survey(1207.4369) Nov. 21, 2012 astro-ph.CO We present the results of the ground- and space-based optical and near-infrared (NIR) follow-up of 224 galaxy cluster candidates detected with the Sunyaev-Zel'dovich (SZ) effect in the 720 deg^2 of the South Pole Telescope (SPT) survey completed in the 2008 and 2009 observing seasons. We use the optical/NIR data to establish whether each candidate is associated with an overdensity of galaxies and to estimate the cluster redshift. Most photometric redshifts are derived through a combination of three different cluster redshift estimators using red-sequence galaxies, resulting in an accuracy of \Delta z/(1+z)=0.017, determined through comparison with a subsample of 57 clusters for which we have spectroscopic redshifts. We successfully measure redshifts for 158 systems and present redshift lower limits for the remaining candidates. The redshift distribution of the confirmed clusters extends to z=1.35 with a median of z_{med}=0.57. Approximately 18% of the sample with measured redshifts lies at z>0.8. We estimate a lower limit to the purity of this SPT SZ-selected sample by assuming that all unconfirmed clusters are noise fluctuations in the SPT data. We show that the cumulative purity at detection significance \xi>5 (\xi>4.5) is >= 95 (>= 70%). We present the red brightest cluster galaxy (rBCG) positions for the sample and examine the offsets between the SPT candidate position and the rBCG. 
The radial distribution of offsets is similar to that seen in X-ray-selected cluster samples, providing no evidence that SZ-selected cluster samples include a different fraction of recent mergers than X-ray-selected cluster samples. • SPT-CL J0205-5829: A z = 1.32 Evolved Massive Galaxy Cluster in the South Pole Telescope Sunyaev-Zel'dovich Effect Survey(1205.6478) Oct. 11, 2012 astro-ph.CO The galaxy cluster SPT-CL J0205-5829 currently has the highest spectroscopically-confirmed redshift, z=1.322, in the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. XMM-Newton observations measure a core-excluded temperature of Tx=8.7keV producing a mass estimate that is consistent with the Sunyaev-Zel'dovich derived mass. The combined SZ and X-ray mass estimate of M500=(4.9+/-0.8)e14 h_{70}^{-1} Msun makes it the most massive known SZ-selected galaxy cluster at z>1.2 and the second most massive at z>1. Using optical and infrared observations, we find that the brightest galaxies in SPT-CL J0205-5829 are already well evolved by the time the universe was <5 Gyr old, with stellar population ages >3 Gyr, and low rates of star formation (<0.5Msun/yr). We find that, despite the high redshift and mass, the existence of SPT-CL J0205-5829 is not surprising given a flat LambdaCDM cosmology with Gaussian initial perturbations. The a priori chance of finding a cluster of similar rarity (or rarer) in a survey the size of the 2500 deg^2 SPT-SZ survey is 69%. • Weak-Lensing Mass Measurements of Five Galaxy Clusters in the South Pole Telescope Survey Using Magellan/Megacam(1205.3103) Sept. 13, 2012 astro-ph.CO We use weak gravitational lensing to measure the masses of five galaxy clusters selected from the South Pole Telescope (SPT) survey, with the primary goal of comparing these with the SPT Sunyaev--Zel'dovich (SZ) and X-ray based mass estimates. 
The clusters span redshifts 0.28 < z < 0.43 and have masses M_500 > 2 x 10^14 h^-1 M_sun, and three of the five clusters were discovered by the SPT survey. We observed the clusters in the g'r'i' passbands with the Megacam imager on the Magellan Clay 6.5m telescope. We measure a mean ratio of weak lensing (WL) aperture masses to inferred aperture masses from the SZ data, both within an aperture of R_500,SZ derived from the SZ mass, of 1.04 +/- 0.18. We measure a mean ratio of spherical WL masses evaluated at R_500,SZ to spherical SZ masses of 1.07 +/- 0.18, and a mean ratio of spherical WL masses evaluated at R_500,WL to spherical SZ masses of 1.10 +/- 0.24. We explore potential sources of systematic error in the mass comparisons and conclude that all are subdominant to the statistical uncertainty, with dominant terms being cluster concentration uncertainty and N-body simulation calibration bias. Expanding the sample of SPT clusters with WL observations has the potential to significantly improve the SPT cluster mass calibration and the resulting cosmological constraints from the SPT cluster survey. These are the first WL detections using Megacam on the Magellan Clay telescope. • A Massive, Cooling-Flow-Induced Starburst in the Core of a Highly Luminous Galaxy Cluster(1208.2962) Aug. 14, 2012 astro-ph.CO In the cores of some galaxy clusters the hot intracluster plasma is dense enough that it should cool radiatively in the cluster's lifetime, leading to continuous "cooling flows" of gas sinking towards the cluster center, yet no such cooling flow has been observed. The low observed star formation rates and cool gas masses for these "cool core" clusters suggest that much of the cooling must be offset by astrophysical feedback to prevent the formation of a runaway cooling flow. Here we report X-ray, optical, and infrared observations of the galaxy cluster SPT-CLJ2344-4243 at z = 0.596. 
These observations reveal an exceptionally luminous (L_2-10 keV = 8.2 x 10^45 erg/s) galaxy cluster which hosts an extremely strong cooling flow (dM/dt = 3820 +/- 530 Msun/yr). Further, the central galaxy in this cluster appears to be experiencing a massive starburst (740 +/- 160 Msun/yr), which suggests that the feedback source responsible for preventing runaway cooling in nearby cool core clusters may not yet be fully established in SPT-CLJ2344-4243. This large star formation rate implies that a significant fraction of the stars in the central galaxy of this cluster may form via accretion of the intracluster medium, rather than the current picture of central galaxies assembling entirely via mergers. • Galaxy clusters discovered via the Sunyaev-Zel'dovich effect in the first 720 square degrees of the South Pole Telescope survey(1203.5775) March 26, 2012 astro-ph.CO We present a catalog of 224 galaxy cluster candidates, selected through their Sunyaev-Zel'dovich (SZ) effect signature in the first 720 deg2 of the South Pole Telescope (SPT) survey. This area was mapped with the SPT in the 2008 and 2009 austral winters to a depth of 18 uK-arcmin at 150 GHz; 550 deg2 of it was also mapped to 44 uK-arcmin at 95 GHz. Based on optical imaging of all candidates and near-infrared imaging of the majority of candidates, we have found optical and/or infrared counterparts for 158 clusters. Of these, 135 were first identified as clusters in SPT data, including 117 new discoveries reported in this work. This catalog triples the number of confirmed galaxy clusters discovered through the SZ effect. We report photometrically derived (and in some cases spectroscopic) redshifts for confirmed clusters and redshift lower limits for the remaining candidates. The catalog extends to high redshift with a median redshift of z = 0.55 and maximum redshift of z = 1.37. Based on simulations, we expect the catalog to be nearly 100% complete above M500 ~ 5e14 Msun h_{70}^-1 at z > 0.6. 
There are 121 candidates detected at signal-to-noise greater than five, at which the catalog purity is measured to be 95%. From this high-purity subsample, we exclude the z < 0.3 clusters and use the remaining 100 candidates to improve cosmological constraints following the method presented by Benson et al., 2011. Adding the cluster data to CMB+BAO+H0 data leads to a preference for non-zero neutrino masses while only slightly reducing the upper limit on the sum of neutrino masses to sum mnu < 0.38 eV (95% CL). For a spatially flat wCDM cosmological model, the addition of this catalog to the CMB+BAO+H0+SNe results yields sigma8=0.807+-0.027 and w = -1.010+-0.058, improving the constraints on these parameters by a factor of 1.4 and 1.3, respectively. [abbrev]
http://sqlml.azurewebsites.net/2017/08/19/machine-learning-algorithms-part-1/
# Machine Learning Algorithms – Part 1

This post describes some machine learning algorithms.

| Distribution of Y given X | Algorithm to predict Y |
| --- | --- |
| Normal distribution | Linear regression |
| Bernoulli distribution | Logistic regression |
| Multinomial distribution | Multinomial logistic regression (Softmax regression) |
| Exponential family distribution | Generalized linear regression |

| Distribution of X | Algorithm to predict Y |
| --- | --- |
| Multivariate normal distribution | Gaussian discriminant analysis or EM Algorithm |
| Features conditionally independent: $$p(x_1, x_2|y)=p(x_1|y) * p(x_2|y)$$ | Naive Bayes Algorithm |

Other ML algorithms are based on geometry, like the SVM and K-means algorithms.

# Linear Regression

Below is a table listing house prices by size.

| x = House Size ($$m^2$$) | y = House Price (k$) |
| --- | --- |
| 50 | 99 |
| 50 | 100 |
| 50 | 100 |
| 50 | 101 |
| 60 | 110 |

For the size 50 $$m^2$$, if we suppose that prices are normally distributed around the mean μ = 100 with a standard deviation σ, then:

$$P(y|x = 50) = \frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-μ}{\sigma})^{2})$$

We define h(x) as the function that returns the mean of the distribution of y given x (E[y|x]), and we define it as a linear function:

$$E[y|x] = h_{θ}(x) = \theta^T x$$

$$P(y|x; θ) = \frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{y-h_{θ}(x)}{\sigma})^{2})$$

We need to find the θ that maximizes this probability for all values of x; in other words, the θ that maximizes the likelihood function L:

$$L(\theta)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)$$

or, equivalently, the log likelihood function l:

$$l(\theta)=log(L(\theta)) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};\theta))$$

$$= \sum_{i=1}^{m} log(\frac{1}{\sigma \sqrt{2\pi}}) -\frac{1}{2} \sum_{i=1}^{m} (\frac{y^{(i)}-h_{θ}(x^{(i)})}{\sigma})^{2}$$

To maximize l, we need to minimize $$J(θ) = \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}$$.
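As a quick numerical check (my own sketch, not from the original post), the least-squares fit on the house-price table also maximizes the Gaussian log-likelihood l(θ); `np.linalg.lstsq` solves the least-squares problem directly:

```python
import numpy as np

# House sizes (m^2) and prices (k$) from the table above
x = np.array([50, 50, 50, 50, 60], dtype=float)
y = np.array([99, 100, 100, 101, 110], dtype=float)
X = np.column_stack([np.ones_like(x), x])   # add an intercept column

sigma = 1.0  # assumed noise scale; any fixed value gives the same maximizer

def log_likelihood(theta):
    """l(theta) = sum_i log N(y_i | theta^T x_i, sigma^2)."""
    r = y - X @ theta
    return np.sum(-np.log(sigma * np.sqrt(2 * np.pi)) - 0.5 * (r / sigma) ** 2)

theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # minimizes J(theta)
print(theta_ls)                                  # intercept 50, slope 1
print(log_likelihood(theta_ls) > log_likelihood(np.array([0.0, 2.0])))  # True
```

With only two distinct sizes, the fitted line passes through the group means (100 at 50 m², 110 at 60 m²), and any other θ has a strictly lower likelihood.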
This function is called the Cost function (or Energy function, Loss function, or Objective function) of a linear regression model; it's also called the "Least-squares cost function". J(θ) is convex: to minimize it, we need to solve the equation $$\frac{\partial J(θ)}{\partial θ} = 0$$. A convex function has no local minima other than its global minimum.

There are many methods to solve this equation:

• Normal equation
• Newton's method
• Matrix differentiation

Gradient descent is the most used optimizer (also called a learner or solver) for learning model weights:

$$θ_{j} := θ_{j} - \alpha \frac{\partial J(θ)}{\partial θ_{j}} = θ_{j} - α \frac{\partial \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}$$

α is called the "learning rate".

$$θ_{j} := θ_{j} - α \frac{1}{2} \sum_{i=1}^{m} \frac{\partial (y^{(i)}-h_{θ}(x^{(i)}))^{2}}{\partial θ_{j}}$$

If $$h_{θ}(x)$$ is a linear function ($$h_{θ}(x) = θ^{T}x$$), then:

$$θ_{j} := θ_{j} - α \sum_{i=1}^{m} (h_{θ}(x^{(i)}) - y^{(i)}) * x_j^{(i)}$$

Batch Gradient Descent (use all examples for each iteration)

The batch should fit in CPU or GPU memory, otherwise learning will be extremely slow. When using batch gradient descent, the cost function in general decreases without oscillations.

Stochastic (Online) Gradient Descent (SGD) (use one example for each iteration; pass through all the data N times, i.e. N epochs)

$$θ_{j} := θ_{j} - \alpha (h_{θ}(x^{(i)}) - y^{(i)}) * x_j^{(i)}$$

This learning rule is called the "Least mean squares (LMS)" learning rule, also known as the Widrow-Hoff learning rule.

Mini-batch Gradient Descent (use a small batch of examples, e.g. 20, for each iteration)

Run gradient descent on each mini-batch until we pass through the training set (1 epoch), then repeat the operation many times:

$$θ_{j} := θ_{j} - \alpha \sum_{i=1}^{20} (h_{θ}(x^{(i)}) - y^{(i)}) * x_j^{(i)}$$

The mini-batch size should fit in CPU or GPU memory. When using mini-batch gradient descent, the cost function decreases quickly but with oscillations.

Learning rate decay

This is a technique used to automatically reduce the learning rate after each epoch. The decay rate is a hyperparameter.
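The three gradient descent variants differ only in how many examples feed each update. A minimal sketch (mine, not from the post), where `batch_size` selects the variant:

```python
import numpy as np

def gradient(theta, X, y):
    """Gradient of J(theta) = 1/2 * sum_i (h_theta(x_i) - y_i)^2 for h = X theta."""
    return X.T @ (X @ theta - y)

def gradient_descent(X, y, alpha, epochs, batch_size=None):
    """batch_size=None: batch GD; 1: SGD; k: mini-batch GD."""
    m, n = X.shape
    theta = np.zeros(n)
    bs = m if batch_size is None else batch_size
    for _ in range(epochs):
        for start in range(0, m, bs):
            Xb, yb = X[start:start + bs], y[start:start + bs]
            theta -= alpha * gradient(theta, Xb, yb)
    return theta

# Synthetic example: y = 3 + 2x exactly, so the variants recover [3, 2]
X = np.column_stack([np.ones(20), np.linspace(0, 1, 20)])
y = 3 + 2 * X[:, 1]
print(gradient_descent(X, y, alpha=0.05, epochs=2000))                # ~[3, 2]
print(gradient_descent(X, y, alpha=0.05, epochs=2000, batch_size=5))  # ~[3, 2]
```

In practice the examples are shuffled before each epoch; the sketch keeps a fixed order for brevity.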
$$α = \frac{1}{1 + decayrate \cdot epochnum} \cdot α_0$$

Momentum

Momentum is a method used to accelerate gradient descent. The idea is to add an extra term to the update equation to accelerate descent steps:

$$θ_{j_{t+1}} := θ_{j_t} - α \frac{\partial J(θ_{j_t})}{\partial θ_j} \color{blue} {+ λ (θ_{j_t} - θ_{j_{t-1}})}$$

Below is another way to write the update:

$$v(θ_{j},t) = α . \frac{\partial J(θ_j)}{\partial θ_j} + λ . v(θ_{j},t-1) \\ θ_{j} := θ_{j} - \color{blue} {v(θ_{j},t)}$$

Nesterov Momentum is a slightly different version of the momentum method.

AdaGrad

The term grad_squared (a running sum of squared gradients) is used to accelerate gradient descent when gradients are small, and slow down gradient descent when gradients are large. The problem with this method is that grad_squared becomes large after running many gradient descent steps, eventually stalling the updates.

RMSprop

The term decay_rate is used to apply exponential smoothing to the grad_squared term.

Adam

Adam is a combination of Momentum and RMSprop.

Normal equation

To minimize the cost function $$J(θ) = \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)$$, we need to solve the equation (the constant factor $$\frac{1}{2}$$ does not affect the minimizer):

$$\frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial trace(J(θ))}{\partial θ} = 0 \\ \frac{\partial trace((Xθ - y)^T(Xθ - y))}{\partial θ} = 0 \\ \frac{\partial trace(θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 0$$

$$\frac{\partial (trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ) + trace(y^Ty))}{\partial θ} = 0 \\ \frac{\partial (trace(θ^TX^TXθ) - trace(θ^TX^Ty) - trace(y^TXθ))}{\partial θ} = 0$$

$$\frac{\partial (trace(θ^TX^TXθ) - trace(y^TXθ) - trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial (trace(θ^TX^TXθ) - 2 trace(y^TXθ))}{\partial θ} = 0 \\ \frac{\partial trace(θθ^TX^TX)}{\partial θ} - 2 \frac{\partial trace(θy^TX)}{\partial θ} = 0 \\ 2 X^TXθ - 2 X^Ty = 0 \\ X^TXθ = X^Ty \\ θ = {(X^TX)}^{-1}X^Ty$$

If $$X^TX$$ is singular, we need to calculate the pseudo-inverse instead of the inverse.
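The closed-form solution above can be sketched directly (my own code, not from the post); `np.linalg.pinv` covers the singular case just mentioned:

```python
import numpy as np

def normal_equation(X, y):
    # theta = (X^T X)^{-1} X^T y; the pseudo-inverse handles a singular X^T X
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# House-price table from the linear regression section (intercept column added)
X = np.array([[1, 50], [1, 50], [1, 50], [1, 50], [1, 60]], dtype=float)
y = np.array([99, 100, 100, 101, 110], dtype=float)
print(normal_equation(X, y))   # [50, 1]: price = 50 + 1 * size
```

Unlike gradient descent, this solves for θ in one step, but the $$O(n^3)$$ inversion becomes expensive when the number of features n is large.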
Newton method

Newton's method finds a zero of J'(θ) using the local second derivative:

$$J''(θ_{t}) := \frac{J'(θ_{t+1}) - J'(θ_{t})}{θ_{t+1} - θ_{t}}$$

Setting $$J'(θ_{t+1}) = 0$$ gives the update rule:

$$θ_{t+1} := θ_{t} - \frac{J'(θ_{t})}{J''(θ_{t})}$$

Matrix differentiation

To minimize the cost function $$J(θ) = \frac{1}{2} \sum_{i=1}^{m} (y^{(i)}-h_{θ}(x^{(i)}))^{2} = \frac{1}{2} (Xθ - y)^T(Xθ - y)$$, we need to solve the equation:

$$\frac{\partial J(θ)}{\partial θ} = 0 \\ \frac{\partial (θ^TX^TXθ - θ^TX^Ty - y^TXθ + y^Ty)}{\partial θ} = 2X^TXθ - \frac{\partial θ^TX^Ty}{\partial θ} - X^Ty = 0$$

$$2X^TXθ - \frac{\partial y^TXθ}{\partial θ} - X^Ty = 2X^TXθ - 2X^Ty = 0$$

(Note: in matrix differentiation, $$\frac{\partial Aθ}{\partial θ} = A^T$$ and $$\frac{\partial θ^TAθ}{\partial θ} = 2A^Tθ$$ for symmetric A.)

We can deduce $$X^TXθ = X^Ty$$ and $$θ = (X^TX)^{-1}X^Ty$$.

# Logistic Regression

Below is a table that shows tumor types by size.

| x = Tumor Size (cm) | y = Tumor Type (Benign = 0, Malignant = 1) |
| --- | --- |
| 1 | 0 |
| 1 | 0 |
| 2 | 0 |
| 2 | 1 |
| 3 | 1 |
| 3 | 1 |

Given x, y is distributed according to the Bernoulli distribution with probability of success p = E[y|x]:

$$P(y|x;θ) = p^y (1-p)^{(1-y)}$$

We define h(x) as the function that returns the expected value p of the distribution:

$$E[y|x] = h_{θ}(x) = g(θ^T x) = \frac{1}{1+exp(-θ^T x)}$$

g is called the Sigmoid (or logistic) function.

$$P(y|x; θ) = h_{θ}(x)^y (1-h_{θ}(x))^{(1-y)}$$

We need to find the θ that maximizes this probability for all values of x; in other words, the θ that maximizes the likelihood function L:

$$L(θ)=P(\overrightarrow{y}|X;θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};θ)$$

or, equivalently, that maximizes the log likelihood function l:

$$l(θ)=log(L(θ)) = \sum_{i=1}^{m} log(P(y^{(i)}|x^{(i)};θ))$$

$$= \sum_{i=1}^{m} y^{(i)} log(h_{θ}(x^{(i)})) + (1-y^{(i)}) log(1-h_{θ}(x^{(i)}))$$

or that minimizes:

$$J(θ) = -l(θ) = \sum_{i=1}^{m} -y^{(i)} log(h_{θ}(x^{(i)})) - (1-y^{(i)}) log(1-h_{θ}(x^{(i)}))$$

J(θ) is convex; to minimize it, we need to solve the equation $$\frac{\partial J(θ)}{\partial θ} = 0$$.
There are many methods to solve this equation; using gradient descent, the update rule has the same form as for linear regression:

$$θ_{j} := θ_{j} - α \sum_{i=1}^{m} (h_{θ}(x^{(i)}) - y^{(i)}) * x_j^{(i)}$$

Logit function (inverse of the logistic function)

The logit function is defined as follows:

$$logit(p) = log(\frac{p}{1-p})$$

The idea behind this function is to transform the interval of the outcome p from (0, 1) to (-∞, +∞). So instead of applying linear regression on p, we apply it on logit(p). Once we find the θ that maximizes the likelihood function, we can estimate logit(p) given a value of x ($$logit(p) = θ^T x$$). p can then be recovered using the following formula:

$$p = \frac{1}{1+exp(-θ^T x)}$$

# Multinomial Logistic Regression (using maximum likelihood estimation)

In multinomial logistic regression (also called Softmax Regression), y can have more than two outcomes {1, 2, 3, …, k}. Below is a table that shows tumor types by size.

| x = Tumor Size (cm) | y = Tumor Type (Type1 = 1, Type2 = 2, Type3 = 3) |
| --- | --- |
| 1 | 1 |
| 1 | 1 |
| 2 | 2 |
| 2 | 2 |
| 2 | 3 |
| 3 | 3 |
| 3 | 3 |

Given x, we can define a multinomial distribution with probabilities $$\phi_j = P(y=j|x)$$:

$$P(y=j|x;\Theta) = ϕ_j \\ P(y=k|x;\Theta) = 1 - \sum_{j=1}^{k-1} ϕ_j \\ P(y|x;\Theta) = ϕ_1^{1\{y=1\}} * … * ϕ_{k-1}^{1\{y=k-1\}} * (1 - \sum_{j=1}^{k-1} ϕ_j)^{1\{y=k\}}$$

We define $$\tau(y)$$ as a function that returns a $$R^{k-1}$$ vector with value 1 at index y (shown here for y = 3 and k - 1 = 5):

$$\tau(y) = \begin{bmatrix}0\\0\\1\\0\\0\end{bmatrix}$$ when $$y \in \{1,2,…,k-1\}$$, and $$\tau(y) = \begin{bmatrix}0\\0\\0\\0\\0\end{bmatrix}$$ when y = k.

We define $$\eta(x)$$ as the $$R^{k-1}$$ vector $$\begin{bmatrix}log(\phi_1/\phi_k)\\log(\phi_2/\phi_k)\\…\\log(\phi_{k-1}/\phi_k)\end{bmatrix}$$, so that:

$$P(y|x;\Theta) = 1 * exp(η(x)^T * \tau(y) - (-log(\phi_k)))$$

This form is an exponential family distribution form.
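As an illustrative sketch (mine, not from the post), fitting the logistic model to the tumor table with the gradient descent rule above. Note that the two classes overlap only at x = 2, so the likelihood has no finite maximizer and the weights keep growing slowly; a couple of thousand steps still give a sensible decision boundary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tumor sizes (cm) and labels (0 = benign, 1 = malignant) from the table above
x = np.array([1, 1, 2, 2, 3, 3], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)
X = np.column_stack([np.ones_like(x), x])    # intercept + size

theta = np.zeros(2)
alpha = 0.1
for _ in range(2000):
    h = sigmoid(X @ theta)                   # h_theta(x) = g(theta^T x)
    theta -= alpha * X.T @ (h - y)           # gradient of J(theta) = -l(theta)

p = sigmoid(X @ theta)
print(p)   # P(y=1|x) increases with tumor size; near 0.5 at x = 2
```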
We can invert $$\eta(x)$$ and find that:

$$ϕ_j = ϕ_k * exp(η(x)_j)$$ $$= \frac{1}{1 + \frac{1-ϕ_k}{ϕ_k}} * exp(η(x)_j)$$ $$=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} ϕ_c/ϕ_k}$$ $$=\frac{exp(η(x)_j)}{1 + \sum_{c=1}^{k-1} exp(η(x)_c)}$$

If we define η(x) as a linear function, $$η(x) = Θ^T x = \begin{bmatrix}Θ_{1,1} x_1 +… + Θ_{n,1} x_n \\Θ_{1,2} x_1 +… + Θ_{n,2} x_n\\…\\Θ_{1,k-1} x_1 +… + Θ_{n,k-1} x_n\end{bmatrix}$$, where Θ is an $$R^{n*(k-1)}$$ matrix, then:

$$ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}$$

The hypothesis function can be defined as:

$$h_Θ(x) = \begin{bmatrix}\frac{exp(Θ_1^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \\…\\ \frac{exp(Θ_{k-1}^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)} \end{bmatrix}$$

We need to find the Θ that maximizes the probabilities P(y=j|x;Θ) for all values of x; in other words, the Θ that maximizes the likelihood function L:

$$L(Θ)=P(\overrightarrow{y}|X;Θ)=\prod_{i=1}^{m} P(y^{(i)}|x^{(i)};Θ)$$

$$=\prod_{i=1}^{m} \phi_1^{1\{y^{(i)}=1\}} * … * \phi_{k-1}^{1\{y^{(i)}=k-1\}} * (1 - \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}$$

$$=\prod_{i=1}^{m} \prod_{c=1}^{k-1} \phi_c^{1\{y^{(i)}=c\}} * (1 - \sum_{j=1}^{k-1} \phi_j)^{1\{y^{(i)}=k\}}$$

with $$ϕ_j = \frac{exp(Θ_j^T x)}{1 + \sum_{c=1}^{k-1} exp(Θ_c^T x)}$$

# Multinomial Logistic Regression (using cross-entropy minimization)

In this section, we will minimize the cross-entropy between Y and the estimate $$\widehat{Y}$$. We define $$W \in R^{d*n}$$ and $$b \in R^{d}$$ such that $$S(W x + b) = \widehat{Y}$$, where S is the Softmax function, d is the number of outputs (classes), and $$x \in R^n$$. To estimate W and b, we need to minimize the cross-entropy between the two probability vectors Y and $$\widehat{Y}$$.
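The Softmax function S can be sketched as follows (my own code; subtracting the maximum is a standard trick to avoid overflow and does not change the result, since softmax is shift-invariant):

```python
import numpy as np

def softmax(z):
    """S(z)_j = exp(z_j) / sum_c exp(z_c)."""
    z = z - np.max(z)        # numerical stability; result is unchanged
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())   # ~[0.659, 0.242, 0.099], sums to 1
```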
The cross-entropy is defined as below:

$$D(\widehat{Y}, Y) = -\sum_{j=1}^d Y_j log(\widehat{Y_j})$$

Example: if $$\widehat{y} = \begin{bmatrix}0.7 \\0.1 \\0.2 \end{bmatrix}$$ and $$y=\begin{bmatrix}1 \\0 \\0 \end{bmatrix}$$, then $$D(\widehat{Y}, Y) = D(S(W x + b), Y) = -1*log(0.7)$$

We need to minimize the entropy for all training examples, therefore we will need to minimize the average cross-entropy of the entire training set:

$$L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)})$$, where L is called the loss function.

If we define $$W = \begin{bmatrix} — θ_1 — \\ — θ_2 — \\ … \\ — θ_d — \end{bmatrix}$$ such that $$θ_1=\begin{bmatrix}θ_{1,0}\\θ_{1,1}\\…\\θ_{1,n}\end{bmatrix}, θ_2=\begin{bmatrix}θ_{2,0}\\θ_{2,1}\\…\\θ_{2,n}\end{bmatrix}, …, θ_d=\begin{bmatrix}θ_{d,0}\\θ_{d,1}\\…\\θ_{d,n}\end{bmatrix}$$

we can then write:

$$L(W,b) = \frac{1}{m} \sum_{i=1}^m D(S(W x^{(i)} + b), y^{(i)}) = -\frac{1}{m} \sum_{i=1}^m \sum_{j=1}^d 1^{\{y^{(i)}=j\}} log(\frac{exp(θ_j^T x^{(i)})}{\sum_{c=1}^d exp(θ_c^T x^{(i)})})$$

(note the minus sign: each log term is negative, so the loss is non-negative).

For d = 2 (number of classes = 2):

$$L(W,b) = -\frac{1}{m} \sum_{i=1}^m \left[ 1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}} log(\frac{exp(θ_1^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})}) + 1^{\{y^{(i)}=\begin{bmatrix}0 \\ 1\end{bmatrix}\}} log(\frac{exp(θ_2^T x^{(i)})}{exp(θ_1^T x^{(i)}) + exp(θ_2^T x^{(i)})}) \right]$$

$$1^{\{y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}\}}$$ means that the value is 1 if $$y^{(i)}=\begin{bmatrix}1 \\ 0\end{bmatrix}$$; otherwise the value is 0.

To estimate $$θ_1,…,θ_d$$, we need to calculate the derivative and update $$θ_j := θ_j - α \frac{\partial L}{\partial θ_j}$$

# Kernel regression

Kernel regression is a non-linear model. In this model we define the hypothesis as the sum of kernels.
$$\widehat{y}(x) = ϕ(x) * θ = θ_0 + \sum_{i=1}^d K(x, μ_i, λ) θ_i$$

such that: $$ϕ(x) = [1, K(x, μ_1, λ),…, K(x, μ_d, λ)]$$ and $$θ = [θ_0, θ_1,…, θ_d]$$

For example, we can define the kernel function as: $$K(x, μ_i, λ) = exp(-\frac{1}{λ} ||x-μ_i||^2)$$

Usually we select d = the number of training examples, and $$μ_i = x_i$$. Once the vector ϕ(X) is calculated, we can use it as a new engineered feature vector, and then use the normal equation to find θ: $$θ = {(ϕ(X)^Tϕ(X))}^{-1}ϕ(X)^Ty$$

# Bayes Point Machine

The Bayes Point Machine is a Bayesian linear classifier that can be converted into a nonlinear classifier by using feature expansions or kernel methods, as with the Support Vector Machine (SVM). More details will be provided.

# Ordinal Regression

Ordinal Regression is used for predicting an ordinal variable. An ordinal variable is a categorical variable for which the possible values are ordered (e.g. size: Small, Medium, Large). More details will be provided.

# Poisson Regression

Poisson regression assumes the output variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear function: $$log(E[Y|X]) = log(λ) = θ^T x$$
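To make the last point concrete, here is a minimal sketch (toy data and plain gradient ascent; all values are illustrative assumptions) of fitting a Poisson regression by maximizing the log-likelihood $$l(θ) = \sum_i [\, y^{(i)} \, θ^T x^{(i)} - exp(θ^T x^{(i)}) \,]$$ (constant terms dropped):

```python
import math

# Toy data: model E[Y|x] = exp(theta . x); first feature is an intercept
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1, 2, 4, 9]

# Gradient of the Poisson log-likelihood: sum_i (y_i - exp(theta . x_i)) * x_i
theta = [0.0, 0.0]
alpha = 0.005
for _ in range(20000):
    grad = [0.0, 0.0]
    for xi, yi in zip(X, y):
        mu = math.exp(sum(t * v for t, v in zip(theta, xi)))
        for j in range(2):
            grad[j] += (yi - mu) * xi[j]
    theta = [t + alpha * g for t, g in zip(theta, grad)]

print([round(t, 3) for t in theta])   # fitted coefficients
print([round(math.exp(sum(t * v for t, v in zip(theta, xi))), 2) for xi in X])
```

Because the counts grow roughly geometrically with x here, the fitted slope is close to the log of that growth rate.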
https://socratic.org/questions/how-do-you-find-the-period-of-y-cot-x-2
# How do you find the period of y=cot(x-π/2)?

Mar 17, 2018

The period of the ratio functions is $\pi$

#### Explanation:

The period of $y = \cot \left(x - \frac{\pi}{2}\right)$ is unchanged because our function is not being modified by anything other than a shift to the right by $\frac{\pi}{2}$.

In trig, we have the form $f \left(x\right) = a \cdot t r i g \left(b x + c\right) + d$. The different coefficients and their meanings are listed below.

a = Amplitude. In terms of sin/cos, this is what affects the height of the graph: for $f \left(x\right) = \sin \left(x\right)$ our range is [-1, 1]; if $f \left(x\right) = 3 \sin \left(x\right)$ then our range would be [-3, 3].

b = Periodicity. This is the part that you're looking for. Because this value is 1, the period is unchanged. We can find the period of any trig function by feeding some information into the following equation: Period = $\left(\frac{\text{Regular interval}}{|b|}\right)$, where b is the coefficient of x. The regular interval of all "non-ratio" functions is $2 \pi$; for the ratio functions, tan and cot, it is $\pi$.

c = Horizontal Shift. In your example this graph will be shifted $\frac{\pi}{2}$ to the right.

d = Vertical Shift. This value will shift the graph up or down by the value of d.
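The period can also be checked numerically; a quick sketch (sample points chosen arbitrarily) verifying that shifting x by $\pi$ leaves $y = \cot(x - \pi/2)$ unchanged:

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def f(x):
    # y = cot(x - pi/2); the horizontal shift does not change the period
    return cot(x - math.pi / 2)

# Period = (regular interval) / |b| = pi / 1 = pi, so f(x + pi) == f(x)
for x in (0.3, 1.1, 2.0):
    assert abs(f(x + math.pi) - f(x)) < 1e-9
print("period pi confirmed numerically")
```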
http://tomostler.co.uk/teaching/wkb-approximation/wkb-lecture-1/
# WKB Lecture 1

This lecture introduces the basic principle behind the mathematics of the WKB approximation. The first file is a powerpoint file that introduces the history of the WKB approximation and qualitatively describes how things work. This was presented at the start of the lecture.

• Introductory Material (pdf | powerpoint)
• Mathematics of the WKB approximation, including a worked example for $$V(x)=-e^{2x}$$ (handout from the lecture). In the lecture the boxes were left blank to be completed.
– This version can be found here.
– The notes with the boxes completed can be found here.
• Maple worksheet (right click and save as) for $$V(x)=-e^{2x}$$ – shows the analytic solution in terms of Bessel functions and the approximate solution from the WKB method.

## Assignment

The first assignment for this module is available here. The solutions can be found here.

## Reading material for this lecture

M. H. Holmes – Introduction to Perturbation Methods, Springer-Verlag. Pages 161-165 (for version ISBN 0-387-94204-3).
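As a quick numerical sketch (not part of the lecture materials; the setup here is an illustrative assumption): at zero energy with $$V(x)=-e^{2x}$$, the equation $$\psi'' + e^{2x}\psi = 0$$ has the exact solution $$J_0(e^x)$$ (substituting $$t = e^x$$ gives Bessel's equation of order 0), while the leading-order WKB form with $$p(x) = e^x$$ and phase $$\int p\,dx = e^x$$, with constants matched to the standard large-argument Bessel asymptotic, tracks it closely:

```python
import math

def J0(t, n=20000):
    # Bessel J0 via its integral form: J0(t) = (1/pi) * integral_0^pi cos(t sin u) du
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(t * math.sin(math.pi)))
    s += sum(math.cos(t * math.sin(i * h)) for i in range(1, n))
    return s * h / math.pi

def wkb(x):
    # Leading-order WKB for psi'' + e^(2x) psi = 0: amplitude ~ p^(-1/2),
    # phase = e^x, normalization matched to sqrt(2/(pi t)) cos(t - pi/4), t = e^x
    t = math.exp(x)
    return math.sqrt(2.0 / (math.pi * t)) * math.cos(t - math.pi / 4.0)

for x in (1.5, 2.0, 2.5):
    print(f"x={x}: exact J0(e^x)={J0(math.exp(x)):+.5f}   WKB={wkb(x):+.5f}")
```

The agreement improves as $$e^x$$ grows, which is exactly the regime where the WKB expansion is valid.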
http://unpaid-intern.com/6boezr/ch3cl-sigma-and-pi-bonds-2d1f44
Sigma and pi bonds are chemical covalent bonds formed by the overlap of atomic orbitals. A sigma bond (σ) is formed by end-to-end (head-on) overlapping of atomic orbitals and is the strongest type of covalent bond; viewed down the bond axis it resembles an "s" atomic orbital. A pi bond (π) is formed by sideways or lateral overlapping, in which the lobe of one atomic orbital overlaps another; viewed down the bond axis it has the orbital symmetry of a p orbital. Both acquired their names from the corresponding Greek letters.

Misconception: many students have the wrong notion that a sigma bond is the result of the overlapping of s orbitals and a pi bond the result of the overlapping of p orbitals, because they relate the "s" to "sigma" and the "p" to "pi". However, sigma bonds can be formed by the overlapping of both s and p orbitals, not just the s orbital; what matters is that the overlap is head-on.

Counting rules: every single covalent bond is a sigma bond; a double bond consists of one sigma bond and one pi bond (1σ + 1π); a triple bond consists of one sigma bond and two pi bonds (1σ + 2π). So an alkane carbon (sp3 hybridized, as in CH4 or CH3Cl) can have up to 4 bonds, all sigma; an alkene carbon has a double bond and two single bonds (3 sigma and 1 pi in total); an alkyne carbon has a triple bond and one single bond (1+1 sigma and 2 pi).

Sigma bonds are generally stronger than pi bonds because the extent of overlap is greater: in a pi bond the electrons spread out over a greater volume of space around the bond axis and are less attracted to the nuclei than the electrons in a sigma bond. Valence electrons are involved in both types of bonds. In covalent bonds, atoms share electrons, whereas in ionic bonds, electrons are transferred between atoms. No single theory of bonding accounts for everything that chemists observe about covalent bonds; in general, the overlap of n atomic orbitals will create n molecular orbitals.

Ethylene (C2H4) illustrates both kinds of bond: the six atoms and all of the sigma bonds lie in one plane, while the pi bond, the "second" bond of the double bond between the carbon atoms, is an elongated lobe that extends both above and below the plane of the molecule.

Worked examples:
- CH3Cl (chloromethane): a carbon with three hydrogens and a chlorine bound to it. The geometry is tetrahedral, with a bond angle of approximately 109.5°. It has 4 sigma bonds (3 C-H and 1 C-Cl) and no pi bonds. The carbon-hydrogen bonds are essentially non-polar, but the carbon-chlorine bond is polar, so the molecule is polar (of F2, CO2, CH3Cl, and BF3, CH3Cl is the polar one). CH3Cl does not hydrogen-bond, since hydrogen bonding requires H bound directly to N, O, or F. Of the halomethanes CH4, CH3Br, CH3Cl, CH3F, and CH3I, the C-I bond is the weakest and would be expected to break first.
- H2C=CHCl: 5 sigma bonds (3 C-H, 1 C-Cl, 1 C-C) and 1 pi bond (the second bond of the C=C double bond).
- Ethene (H2C=CH2): 5 sigma bonds (strong) and 1 pi bond (weak).
- N2: 1 sigma bond and 2 pi bonds.
- Acetylene (HCCH): linear; 3 sigma bonds and 2 pi bonds.
- CH3CN: in the C≡N bond, one sigma bond and two pi bonds form from overlap of the 2p orbitals of C and N; this 2p overlap makes the bond stronger and shorter, and the arrangement around the triple bond linear.
- Acrylonitrile (H2C=CH-C≡N): four single bonds, one double bond, and one triple bond, hence 4+1+1 = 6 sigma bonds and 1+2 = 3 pi bonds.
- H3C-CH2-CH=CH-CH2-C≡CH: counting every C-C and C-H bond gives 16 sigma bonds and 3 pi bonds (1 from the double bond, 2 from the triple bond).
- Benzene: a six-carbon ring with one hydrogen attached to each carbon; 12 sigma bonds (6 C-C and 6 C-H) and 3 pi bonds.
- Ibuprofen: 33 sigma bonds and 4 pi bonds (the three pi bonds of the aromatic ring plus the C=O of the carboxylic acid).
- HOOC-COOH: each -COOH carboxyl group contains 1 pi bond (in its C=O), so the molecule has 2 pi bonds.
- CH2O: the geometry around the carbon is trigonal planar, and the H-C-O bond angle is approximately 120°.
- SO2: bent, and therefore polar (a bent shape is always polar).
- NO2+: the ion has two sigma bonds and two pi bonds.
- O2: 12 valence electrons; the molecular orbital energy ordering is σ2s, σ*2s, σ2p (close in energy to) π2p, then π*2p and σ*2p.
- HBr: in valence bond theory, the bond forms from the overlap of the hydrogen 1s orbital with a bromine 4p orbital.
- SO3: to find the hybridization, compute the steric number, i.e. the number of atoms bonded to the central atom plus its lone pairs; sulfur in SO3 has three bonded atoms and no lone pairs, so the steric number is 3 (sp2).

Practice: give the number of sigma and pi bonds in (a) CH3Cl, (b) COCl2, (c) CHCH. Answers: CH3Cl has 4 sigma and 0 pi; COCl2 has 3 sigma and 1 pi; CHCH has 3 sigma and 2 pi.

A Lewis structure is the structural representation of the valence electrons that participate in bond formation together with the nonbonding electron pairs: lines represent the electrons forming bonds with the central atom, and dots the non-bonding pairs. The Lewis structure follows the octet rule, which states that an atom should have eight electrons in its outer shell to be stable. VSEPR theory, in turn, assumes that the geometry around a central atom minimizes the repulsion between its electron pairs.

Many covalent compounds contain multiple bonds (double or triple bonds). Multiple bonds affect the electronic effects of a molecule and can alter physical properties like boiling point and melting point; they are also useful for deciphering spectra obtained via nuclear magnetic resonance (NMR).

Molecular models are usually used in organic chemistry classes, but their utility is not limited to o-chem; a cool example is using them to identify stereoisomers of inorganic or organometallic metal complexes. Most standard kits come with a variety of atoms with different numbers of shareable valence electrons, which are represented as holes. Some important general chemistry concepts that can be better understood with a model are molecular geometry and covalent bonding.
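The counting rule (single bond = 1 sigma; double = 1 sigma + 1 pi; triple = 1 sigma + 2 pi) can be turned into a tiny helper; the bond counts fed in below come from the worked molecules:

```python
# Count sigma and pi bonds from bond multiplicities in a structure.
# Every bond contributes one sigma; each extra overlap in a double or
# triple bond is a pi bond.

def count_bonds(singles, doubles, triples):
    sigma = singles + doubles + triples
    pi = doubles + 2 * triples
    return sigma, pi

# CH3Cl: 3 C-H + 1 C-Cl, all single
print(count_bonds(4, 0, 0))   # (4, 0)
# Ethene H2C=CH2: 4 C-H single + 1 C=C double
print(count_bonds(4, 1, 0))   # (5, 1)
# H3C-CH2-CH=CH-CH2-C#CH: 14 single (10 C-H + 4 C-C), 1 double, 1 triple
print(count_bonds(14, 1, 1))  # (16, 3)
```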