url: stringlengths 14–1.76k
text: stringlengths 100–1.02M
metadata: stringlengths 1.06k–1.1k
https://brilliant.org/discussions/thread/prove-that-a-perfect-number-is-a-triangular-one/
# Prove that a Perfect Number is Triangular

Let $$N$$ be a perfect number (a positive integer that is equal to the sum of its proper positive divisors), such as $$6$$ or $$28$$. All known perfect numbers are the sum of a series of consecutive positive integers starting with $$1$$; that is, every known perfect number is a triangular number. For example: $$6=1+2+3$$ and $$28=1+2+3+4+5+6+7$$. Prove that any even perfect number is triangular.

Note: It is not known whether there are any odd perfect numbers.

Note by Alexander Israel Flores Gutiérrez, 1 year, 7 months ago

Sort by:

Any even perfect number can be expressed as $$2^{n-1}(2^n-1)$$ for some natural number $$n$$ (this is the Euclid–Euler theorem, with $$2^n-1$$ prime). Now it is easily seen that $$2^{n-1}(2^n-1)=\dfrac{2^n(2^n-1)}{2}$$, which fits the expression $$\dfrac{k(k+1)}{2}$$ for the $$k$$th triangular number when $$k=2^n-1$$. - 1 year, 7 months ago
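The closed form above is easy to sanity-check numerically; a minimal brute-force sketch (illustrative only, not an efficient divisor-sum algorithm):

```python
def is_perfect(n):
    """True if n equals the sum of its proper positive divisors."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def is_triangular(n):
    """True if n = k(k+1)/2 for some positive integer k."""
    k, total = 1, 1
    while total < n:
        k += 1
        total += k
    return total == n

# Every even perfect number in range is triangular, as the proof guarantees.
for n in range(2, 500):
    if is_perfect(n):
        assert is_triangular(n)
        print(n)  # 6, 28, 496 in turn
```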
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951868057250977, "perplexity": 2361.5527590787947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00162.warc.gz"}
http://www.statisticshowto.com/normal-distribution-probability/
# Normal Distribution Probability in Statistics

What is normal distribution probability?

## Normal Distribution Probability

A normal distribution is a distribution that is symmetric and shaped like a bell; a normal distribution probability curve is sometimes called a bell curve. Bell curves can represent many natural phenomena. For example, grades in a class are often bell-shaped: most students will get a C (the average), a smaller number of students will get a B or a D, and even fewer students will get an A or an F. IQ scores also fall naturally into this shape; the majority of people have IQ scores between 85 and 115. While the bell curve is useful, it would be tricky to figure out, say, how many people have IQ scores between 110 and 130, or what percentage of students got grades slightly below passing (67-69). The math would get a little tricky, but not if you superimpose your information onto a normal distribution curve. The area under the curve represents 100% probability. In statistics and probability, 100% is written as a decimal (100% = 1), so you will often see it stated that the total area under the curve is 1. The mean (the average) of a standard normal distribution is always zero, because the distribution is standardized; this makes it easy to look up a normal distribution probability (a z-score) in a z-table. Normal distribution probability (and the associated z-scores) lets you look up percentage probabilities for any set of data that is shaped like a bell, showing what percentage of scores fall within a given number of standard deviations of the mean.
For a standard normal distribution, standard deviations correspond directly to z-scores: a value one standard deviation above the mean has a z-score of 1, and a value two standard deviations above the mean has a z-score of 2. About 68% of scores (IQ scores, grades, heights, weights and a host of other phenomena) fall between one standard deviation below the mean and one above (z-scores of -1 and +1).

### How to Calculate Normal Distribution Probability

Normal distribution probability can be calculated with a little basic arithmetic and a z-table. You can find several examples of solving these types of problems here: Area under a normal distribution curve index.

Normal Distribution Probability in Statistics was last modified: September 26th, 2015
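A z-table is just a tabulation of the normal cumulative distribution function, which can also be computed directly from the error function. A minimal Python sketch (the mean of 100 and standard deviation of 15 used here for IQ scores are the conventional values):

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal distribution with mean mu and std dev sigma."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_between(lo, hi, mu=0.0, sigma=1.0):
    """Probability that a normally distributed value falls in [lo, hi]."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Fraction of people with IQ between 85 and 115 (one standard deviation):
print(round(prob_between(85, 115, mu=100, sigma=15), 4))  # 0.6827
```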
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9531753659248352, "perplexity": 595.0513094606692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00310-ip-10-71-132-137.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/110975/elementary-method-for-solving-equations-involving-multiple-absolute-values
# Elementary Method for Solving Equations Involving Multiple Absolute Values

Suppose one has an equation in one unknown that has three or more absolute value signs, such as $$|ax + b| + |cx + d| + |ex + f| = gx + h$$ Without invoking sophisticated techniques such as the CAD algorithm described in this question, is there an elementary approach, other than case-by-case analysis, that will yield a solution?
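For reference, the case-by-case analysis itself is mechanical: each $|a_i x + b_i|$ changes sign only at $x = -b_i/a_i$, so between consecutive breakpoints the equation is linear. A Python sketch of that approach using exact rational arithmetic (`solve_abs_equation` is a hypothetical helper for illustration, and it omits the degenerate case where an entire interval satisfies the equation):

```python
from fractions import Fraction

def solve_abs_equation(terms, g, h):
    """Solve sum(|a*x + b| for (a, b) in terms) == g*x + h over the rationals.
    Omits the degenerate case where a whole interval is a solution set."""
    pts = sorted({Fraction(-b, a) for a, b in terms if a != 0})
    # One sample point inside each interval between breakpoints fixes the signs.
    samples = [Fraction(0)] if not pts else (
        [pts[0] - 1] + [(p + q) / 2 for p, q in zip(pts, pts[1:])] + [pts[-1] + 1])
    sols = set()
    for s in samples:
        # On this interval the equation reduces to the linear form A*x + B = 0.
        A, B = Fraction(-g), Fraction(-h)
        for a, b in terms:
            sign = 1 if a * s + b >= 0 else -1
            A += sign * a
            B += sign * b
        if A != 0:
            x = -B / A
            # Verify: the candidate may fall outside the interval it came from.
            if sum(abs(a * x + b) for a, b in terms) == g * x + h:
                sols.add(x)
    # Breakpoints themselves may also be solutions.
    sols.update(p for p in pts
                if sum(abs(a * p + b) for a, b in terms) == g * p + h)
    return sorted(sols)
```

For example, `solve_abs_equation([(1, 0), (1, -1), (1, 1)], 1, 3)` solves $|x| + |x-1| + |x+1| = x + 3$ and returns $-\tfrac{1}{2}$ and $\tfrac{3}{2}$.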
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621058464050293, "perplexity": 185.46382903234786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447657.38/warc/CC-MAIN-20141017005727-00122-ip-10-16-133-185.ec2.internal.warc.gz"}
https://scicomp.stackexchange.com/questions/582/stopping-criteria-for-iterative-linear-solvers-applied-to-nearly-singular-system?noredirect=1
# Stopping criteria for iterative linear solvers applied to nearly singular systems

Consider $Ax=b$ with $A$ nearly singular, meaning there is an eigenvalue $\lambda_0$ of $A$ that is very small. The usual stopping criterion of an iterative method is based on the residual $r_n:=b-Ax_n$: the iteration stops when $\|r_n\|/\|r_0\|<\mathrm{tol}$, with $n$ the iteration number. But in the case we are considering, there could be a large error component $v$ lying in the eigenspace associated with the small eigenvalue $\lambda_0$, and it gives only a small residual since $Av=\lambda_0v$. Suppose the initial residual $r_0$ is large; then it can happen that we stop once $\|r_n\|/\|r_0\|<\mathrm{tol}$ while the error $x_n-x$ is still large. What is a better error indicator in this case? Is $\|x_{n}-x_{n-1}\|$ a good candidate? • You may want to think about your definition of "nearly singular". The matrix $I \cdot \epsilon$ (with $\epsilon\ll 1$ and $I$ the identity matrix) has a very small eigenvalue, but is as far from singular as any matrix could be. – David Ketcheson Jan 6 '12 at 11:05 • Also, $||r_n/r_0||$ seems like the wrong notation. $||r_n||/||r_0||$ is more typical, no? – Bill Barth Jan 6 '12 at 13:17 • Yes, you are right, Bill! I will correct this mistake. – Hui Zhang Jan 6 '12 at 13:57 • What about $\| b - Ax \| / \| b \|$? And what is your algorithm, exactly? – shuhalo Jan 6 '12 at 14:08 • Addendum: I think the following paper pretty much addresses the ill-conditioned systems you worry about, at least if you are using CG: Axelsson, Kaporin: Error norm estimation and stopping criteria in preconditioned conjugate gradient iterations. DOI: 10.1002/nla.244 – shuhalo Jan 6 '12 at 14:12 Please never use the difference between successive iterates to define a stopping criterion. This misdiagnoses stagnation as convergence.
Most nonsymmetric matrix iterations are not monotone, and even GMRES in exact arithmetic with no restarts may stagnate for an arbitrary number of iterations (up to the dimension of the matrix) before converging suddenly. See examples in Nachtigal, Reddy, and Trefethen (1993). ## A better way to define convergence We are usually interested in the accuracy of our solution more than the size of the residual. Specifically, we might like to guarantee that the difference between an approximate solution $x_n$ and the exact solution $x$ satisfies $$|x_n - x| < c$$ for some user-specified $c$. It turns out that we can achieve this by finding an $x_n$ such that $$|A x_n - b| < c\epsilon$$ where $\epsilon$ is the smallest singular value of $A$, due to \begin{align} |x_n - x| &= |A^{-1} A (x_n - x)| \\ & \le \frac 1 \epsilon |A x_n - A x| \\ & = \frac 1 \epsilon |A x_n - b| \\ & < \frac 1 \epsilon \cdot c \epsilon = c \end{align} where we have used that $1/\epsilon$ is the largest singular value of $A^{-1}$ (second line) and that $x$ exactly solves $A x = b$ (third line). ## Estimating the smallest singular value $\epsilon$ An accurate estimate of the smallest singular value is usually not directly available from the problem, but it can be estimated as a byproduct of a conjugate gradient or GMRES iteration. Note that although estimates of the largest eigenvalues and singular values are usually quite good after only a few iterations, an accurate estimate of the smallest eigen/singular value $\epsilon$ is usually only obtained once convergence is reached. Before convergence, the estimate will generally be significantly larger than the true value. This suggests that you must actually solve the equations before you can define the correct tolerance $c\epsilon$.
An automatic convergence tolerance that takes a user-provided accuracy $c$ for the solution and estimates the smallest singular value $\epsilon$ with the current state of the Krylov method might converge too early because the estimate of $\epsilon$ was much larger than the true value. ## Notes 1. The above discussion also works with $A$ replaced by the left-preconditioned operator $P^{-1}A$ and the preconditioned residual $P^{-1} (A x^n - b)$ or with the right-preconditioned operator $A P^{-1}$ and the error $P (x_n - x)$. If $P^{-1}$ is a good preconditioner, the preconditioned operator will be well-conditioned. For left-preconditioning, this means the preconditioned residual can be made small, but the true residual may not be. For right preconditioning, $|P(x_n - x)|$ is easily made small, but the true error $|x_n-x|$ may not be. This explains why left-preconditioning is better for making error small while right-preconditioning is better for making the residual small (and for debugging unstable preconditioners). 2. See this answer for more discussion of norms minimized by GMRES and CG. 3. The estimates of extremal singular values can be monitored using -ksp_monitor_singular_value with any PETSc program. See KSPComputeExtremeSingularValues() to compute singular values from code. 4. When using GMRES to estimate singular values, it is crucial that restarts not be used (e.g. -ksp_gmres_restart 1000 in PETSc). • ''also works with A replaced by a preconditioned operator'' - However, it then applies only to the preconditioned residual $P^{-1}r$ if $P^{-1}A$ is used, resp. to the preconditioned error $P^{-1}\delta x$ if $AP^{-1}$ is used. – Arnold Neumaier Jul 25 '12 at 13:08 • Good point, I edited my answer. Note that the right-preconditioned case gives you control of $P\delta x$, unwinding the preconditioner (applying $P^{-1}$) typically amplifies low-energy modes in the error. 
– Jed Brown Jul 25 '12 at 16:19 Another way of looking at this problem is to consider the tools from discrete inverse problems, that is, problems which involve solving $Ax=b$ or $\min ||Ax-b||_2$ where $A$ is very ill-conditioned (i.e. the ratio between the first and last singular value, $\sigma_1/\sigma_n$, is large). Here we have several methods for choosing the stopping criterion, and for an iterative method I would recommend the L-curve criterion, since it only involves quantities that are already available (DISCLAIMER: my advisor pioneered this method, so I am definitely biased towards it). I have used this with success in an iterative method. The idea is to monitor the residual norm $\rho_k=||Ax_k-b||_2$ and the solution norm $\eta_k=||x_k||_2$, where $x_k$ is the $k$th iterate. As you iterate, this begins to draw the shape of an L in a loglog(rho, eta) plot, and the point at the corner of that L is the optimal choice. This allows you to implement a criterion where you keep an eye on when you have passed the corner (i.e. looking at the gradient of $(\rho_k,\eta_k)$), and then choose the iterate that was located at the corner. The way I did it involved storing the last 20 iterates, and if the gradient $\left|\frac{\log(\eta_k)-\log(\eta_{k-1})}{\log(\rho_k)-\log(\rho_{k-1})}\right|$ was larger than some threshold for 20 successive iterations, I knew that I was on the vertical part of the curve and that I had passed the corner. I then took the first iterate in my array (i.e. the one from 20 iterations ago) as my solution. There are also more detailed methods for finding the corner, and these work better but require storing a significant number of iterates. Play around with it a bit. If you are in Matlab, you can use the toolbox Regularization Tools, which implements some of this (specifically, the "corner" function is applicable). Note that this approach is particularly suitable for large-scale problems, since the extra computing time involved is minuscule. • Thanks a lot!
So in the loglog(rho, eta) plot we begin at the right of the L-curve and end at the top of the L, is that right? I just do not know the principle behind this criterion. Can you explain why it always behaves like an L-curve and why we choose the corner? – Hui Zhang Jan 11 '12 at 19:15 • You're welcome :-D. For an iterative method, we always begin at the right and end at the top. It behaves as an L due to the noise in the problem - the vertical part happens at $||Ax-b||_2=||e||_2$, where $e$ is the noise vector in $b=b_{exact}+e$. For more analysis see Hansen, P. C., & O'Leary, D. P. (1993). The use of the L-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing, 14. Note that I just made a slight update to the post. – OscarB Jan 12 '12 at 12:13 • @HuiZhang: it isn't always an L. If the regularization is ambiguous it may be a double L, leading to two candidates for the solution, one with gross features better resolved, the other with certain details better resolved. (And of course, more complex shapes may appear.) – Arnold Neumaier Jul 25 '12 at 13:10 • Does the L-curve apply to ill-conditioned problems where there should be a unique solution? That is, I'm interested in problems $Ax = b$ where $b$ is known "exactly" and $A$ is nearly singular but still technically invertible. It would seem to me that if you use something like GMRES, the norm of your current guess of $x$ doesn't change too much over time, especially after the first however many iterations. It seems to me that the vertical part of the L-curve occurs because there is no unique/valid solution in an ill-posed problem; would this vertical feature be present in all ill-conditioned problems? – nukeguy Jun 6 '16 at 14:40 • At one point you will reach such a vertical line, typically because the numerical errors in your solution method result in $||Ax-b||$ not decreasing.
However, you are right that in such noise-free problems the curve does not always look like an L, meaning that you typically have a few corners to choose from and choosing one over the other can be hard. I believe that the paper I referenced in my comment above discusses noise-free scenarios briefly. – OscarB Jun 7 '16 at 7:19
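The corner-detection idea described in the answer above (watch the log-log curve, pick the iterate at the bend) can be sketched compactly. A minimal illustration that uses maximum discrete curvature instead of the 20-iterate gradient-threshold scheme; this is an assumption-laden toy, not the production `corner` routine from Regularization Tools:

```python
import numpy as np

def lcurve_corner(rhos, etas):
    """Index of the L-curve corner, given residual norms rhos = ||A x_k - b||
    and solution norms etas = ||x_k||, one entry per iterate. The corner is
    taken as the point of maximum discrete curvature in log-log space."""
    x, y = np.log(np.asarray(rhos, float)), np.log(np.asarray(etas, float))
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    return int(np.argmax(curvature))

# A synthetic L: the residual stalls while the solution norm starts to grow.
rhos = [10, 5, 2, 1, 0.99, 0.98]
etas = [1, 1.05, 1.1, 1.2, 5, 20]
print(lcurve_corner(rhos, etas))  # picks the bend near index 3
```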
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074225425720215, "perplexity": 442.61555327807656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358033.38/warc/CC-MAIN-20210226234926-20210227024926-00372.warc.gz"}
https://publons.com/publon/35129123/
##### Abstract In this paper, a scalable iterative projection-type algorithm for solving non-stationary systems of linear inequalities is considered. A non-stationary system is understood as a large-scale system of inequalities whose coefficients and constant terms can change during the calculation process. The proposed parallel algorithm uses the concept of a pseudo-projection, which generalizes the notion of an orthogonal projection. The parallel pseudo-projection algorithm is implemented using the parallel BSF-skeleton. An analytical estimate of the algorithm's scalability boundary is obtained on the basis of the BSF cost metric. Large-scale computational experiments were performed on a cluster computing system. The obtained results confirm the efficiency of the proposed approach. ##### Authors Sokolinsky, L. B.; Sokolinskaya, I. M.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957383930683136, "perplexity": 1329.312129671434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00537.warc.gz"}
http://www.eetimes.com/messages.asp?piddl_msgthreadid=36002&piddl_msgid=210397
re: Useful tools and tricks (10/17/2011 4:32:36 PM): Is it just me, or does the alt trick not work in Notepad? When I press the alt key, Notepad highlights the menu selections... am I missing something?

re: Useful tools and tricks (10/15/2011 3:01:18 AM): Most symbolic math programs can output results both as programming-language code (C, Fortran, etc.) and as TeX formulas. For instance, Maxima: http://maxima.sourceforge.net/download.html I generated a quick expression (a Padé approximation of a Taylor expansion of a symbolic derivative, whatever), right-clicked on it, and selected "Copy LaTeX" to receive: $\frac{10\,x}{3\,{x}^{2}+30}$

re: Useful tools and tricks (10/14/2011 11:13:20 PM): In the old DOS days it was well-nigh infallible. These days it depends on the font you're using - some seem to work and some don't. Again, in the old DOS days (don't I sound like an old fart??) you could use this trick to get the single- and double-line box characters, and use them to make great-looking menus.

re: Useful tools and tricks (10/14/2011 11:03:41 PM): My understanding is that this used to be presented as a separate utility – where???

re: Useful tools and tricks (10/14/2011 9:27:20 PM): That video is great! For math formulas, you might find this useful: Microsoft OneNote (bundled with Office) has a nice "ink to math" utility. You can just draw a formula freehand using your mouse, highlight it, and convert it to a formatted equation. You can then insert that into a Word document if you wish. There is also an "ink to text" utility for converting freehand text to print.

re: Useful tools and tricks (10/14/2011 9:21:57 PM): A lot of these tricks have been around since time began... the problem is that newbies don't know them...
...I think we would all be surprised to discover all of the little tricks and back doors that are available... if only one knows where to look...

re: Useful tools and tricks (10/14/2011 9:12:16 PM): The ALT+ trick has been available since the DOS days. I use it all the time to type the degree symbol (e.g. 25°C - ALT+0176), the Greek micro prefix (e.g. µHz - ALT+181), the plus-or-minus symbol (e.g. ±3dB - ALT+0177), and more. It works in almost all PC applications.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3092578649520874, "perplexity": 6199.845308943643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931013466.18/warc/CC-MAIN-20141125155653-00060-ip-10-235-23-156.ec2.internal.warc.gz"}
http://ccia2008.acia.cat/node/53955
Title: Similarity Measures over Refinement Graphs
Publication Type: Journal Article
Year of Publication: 2012
Authors: Ontañón S, Plaza E
Journal: Machine Learning
Volume: 87
Pagination: 57-92
Keywords: CBR, feature terms, Machine Learning, Similarity
Abstract: Similarity assessment plays a key role in lazy learning methods such as k-nearest neighbor or case-based reasoning. In this paper we show how refinement graphs, originally introduced for inductive learning, can be employed to assess and reason about similarity. We define and analyze two similarity measures, $S_\lambda$ and $S_\pi$, based on refinement graphs. The anti-unification-based similarity, $S_\lambda$, assesses similarity by finding the anti-unification of two instances, which is a description capturing all the information common to the two instances. The property-based similarity, $S_\pi$, is based on a process of disintegrating the instances into a set of properties, and then analyzing these property sets. Moreover, these similarity measures are applicable to any representation language for which a refinement graph satisfying the requirements we identify can be defined. Specifically, we present a refinement graph for feature terms, in which several languages of increasing expressiveness can be defined. The similarity measures are empirically evaluated on relational data sets belonging to languages of different expressiveness.
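The property-based idea (break each instance into a set of properties, then compare the sets) can be illustrated with a toy set-overlap measure. The Jaccard index below is a hedged stand-in for illustration only; it is not the actual $S_\pi$ measure defined in the paper:

```python
def property_similarity(props_a, props_b):
    """Toy property-based similarity: disintegrate instances into property
    sets, then score their overlap with the Jaccard index. Illustrative
    stand-in only; the paper's S_pi measure is defined differently."""
    a, b = set(props_a), set(props_b)
    if not a and not b:
        return 1.0  # two empty instances are trivially identical
    return len(a & b) / len(a | b)

# Two instances sharing one of three distinct properties:
print(property_similarity({"red", "round"}, {"red", "square"}))
```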
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8431344032287598, "perplexity": 1561.6797417955331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146647.82/warc/CC-MAIN-20200227033058-20200227063058-00492.warc.gz"}
http://mathematica.stackexchange.com/users/631/freddieknets
# freddieknets

reputation 311 · location Antwerp, Belgium · age 30 · member for 2 years, 8 months · seen Sep 23 at 21:19 · profile views 30

Doing a PhD in Theoretical Particle Physics (QCD)

12 Creating simple procedure for The Least-Square $m^\text{th}$ Degree Polynomials 7 Quickly differentiate and evaluate a function of several variables 2 Cannot evaluate differential in Mathematica 1 Testing for arbitrary algebraic expression in given variable 0 Defining Piecewise Functions in Modules

# 551 Reputation

+5 Dynamic Programming with delayed evaluation +5 Strange behaviour when line-wrapping text in a Pane +10 Quickly differentiate and evaluate a function of several variables +10 Testing for arbitrary algebraic expression in given variable

# 5 Questions

11 Dynamic Programming with delayed evaluation 10 Scoping in assigning a derivative 7 Strange behaviour when line-wrapping text in a Pane 6 Pattern definition for replacing plus and subtract 4 How to cancel floating point factors?

# 18 Tags

12 homework, 12 polynomials, 12 fitting, 7 numerics × 2, 7 calculus-and-analysis, 3 functions × 3, 1 expression-test, 1 variable, 0 scoping × 2, 0 function-construction × 2

# 18 Accounts

Mathematica 551 rep, Programming Puzzles & Code Golf 189 rep, Area 51 151 rep, Physics 131 rep, TeX - LaTeX 113 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5992581844329834, "perplexity": 4310.002312313098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400377225.6/warc/CC-MAIN-20141119123257-00230-ip-10-235-23-156.ec2.internal.warc.gz"}
https://brilliant.org/problems/frictionless-world/
# Frictionless world

If we assume that there is no friction or damping, and we push a body (giving it an initial velocity), what will happen?

Assumptions: neglect gravity and the attractive forces between masses, and assume there is no retarding force of any kind.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923982620239258, "perplexity": 1891.1936613638168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280850.30/warc/CC-MAIN-20170116095120-00144-ip-10-171-10-70.ec2.internal.warc.gz"}
https://repository.uantwerpen.be/link/irua/104421
Title: Microscopic theory of orientational disorder and lattice instability in solid $C_{70}$
Authors: Callebaut, A.K.; Michel, K.H.
Faculty/Department: Faculty of Sciences, Physics
Research group: Condensed Matter Theory; Statistical Physics
Publication type: article
Publication: Lancaster, Pa, 1995
Subject: Physics
Source (journal): Physical Review B: Condensed Matter and Materials Physics (Lancaster, Pa), 1998-2015
Volume/pages: 52 (1995), no. 21, p. 15279-15290
ISSN: 1098-0121, 1550-235X
ISI: A1995TK97900042
Carrier: E
Target language: English (eng)
Affiliation: University of Antwerp
Abstract: We have developed a microscopic theory which describes the orientational dynamics of C-70 molecules and its coupling to lattice displacements in the face-centered-cubic phase of C-70 fullerite. The single-molecule orientational density distribution in the disordered phase is calculated. The ferroelastic transition to the rhombohedral phase is investigated. The discontinuity of the orientational order parameter at the phase transition is calculated. It is found that the transition leads to a stretching of the primitive unit cell along a [111] cubic direction. A softening of the elastic constant $c_{44}$ at the transition is predicted.
Full text (open access): https://repository.uantwerpen.be/docman/irua/d9a13f/4294.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7039127349853516, "perplexity": 7243.187381531358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540975.18/warc/CC-MAIN-20161202170900-00220-ip-10-31-129-80.ec2.internal.warc.gz"}
https://cryptomena.net/melilotus-indica-zhqt/49fa87-conservation-of-energy-lab-physics
This chart displays how the measured and calculated velocities compare for the various masses on the frictionless cart. For more details, see our Air Track Reference Document. 5.B.3.1: The student is able to describe and make … This graph displays how the amount of compression compares to the force in newtons of the red spring. Mechanical energy consists of two types of energy: potential energy (energy that is stored) and kinetic energy (energy of motion). This section is appropriate for Physics First, as well as high school physics courses. In this lab, students use a SMART cart to perform an experiment that explores how a cart's kinetic energy, gravitational potential energy, and total mechanical energy change as it rolls up and down an inclined track under the force of gravity. For an isolated system, the total energy must be conserved. As you can see, the "purple" curve represents the pendulum bob's KE, which during each cycle begins with an initial value of zero, increases to a maximum value, and then returns to zero. Kinetic energy is the energy of motion. To do this, under the "Data" tab at the top of the LoggerPro window, click "User Parameters." On the row labeled "PhotogateDistance1," enter your value for $d$ (in meters, "m"). Then, click "OK." Use the slope of your $v$ vs. $t$ plot to find the acceleration of the system (and its uncertainty), and then once again use this value to calculate an estimate of the acceleration due to gravity $g$. With the data you collect from a single trial, make a plot of $\Delta PE$ vs. $\Delta KE$ and of $v$ vs. $t$ using the Plotting Tool provided. The other end of the string is attached to a cart on an air track. An air track is like a one-dimensional air-hockey table: it ejects air in order to minimize friction. This displays the string that will eventually hold differing masses that will compress the spring more as the mass increases.
AP PHYSICS 1 INVESTIGATIONS Conservation of Energy Connections to the AP Physics 1 Curriculum Framework Big Idea 5 Changes that occur as a result of interactions are constrained by conservation laws. It provides a good foundation for future understanding of the Work-Energy Theorem. This experiment explores properties of two types of mechanical energy, kinetic and potential energy. Otherwise, no time measurements can be made. The purpose of this lab is to experimentally verify the conservation of mechanical energy. Purpose: Demonstrate the law of conservation of energy. Find the slope of your $\Delta PE$ vs. $\Delta KE$ plot, and compare it to your theoretical expectations based on the conservation of mechanical energy for an isolated system. However, the net force on the system should equal the total mass of the system times the acceleration of the system, i.e., $F_{net} = \left(M+m\right)a$. The weight is pulled to one side and let go. First, you need to prepare your setup for data collection: To calculate the change in potential energy from your first data point to every other data point, use equation (2) above. Materials: - Loop-de-loop track - Metal ball - Camera (phone) - Ruler or measuring tape Explanation of lab: In this lab, a ball is sent through a loop-de-loop track. Make sure that the LED on the base of the glider is facing the receiver at the end of the track. You should also calculate the uncertainty in each quantity, noting that the uncertainty in the change in $PE$ or $KE$ for each data point requires adding the uncertainty of the initial and final energies in quadrature. Similarly, since the mass and the glider move together, the velocity values $v$ calculated in LoggerPro using the picket fence distance and the times recorded by the photogate will apply to both the glider and the falling mass. General Physics I Lab: Conservation of Energy 4 Pendulum 4.1 Description A mass of 100 g is hung from a 30 cm string and used as a pendulum.
I have done all the calculations to determine the gravitational potential energy at the start and end, and the kinetic energy in the middle. Of the data point values on the spreadsheet, disregard the first data point, and copy a wide selection of ~10 data points throughout the motion into your lab notebook. In today's lab, we will investigate conservation of energy using an inclined plane and calculate how much energy is released as heat through friction. Hence, combining these relations and solving for the acceleration of the system, we find that: $$a = \frac{mg}{M+m}$$ A battery-powered photogate is mounted on the glider. When you release the glider-mass system, the change in height $\Delta h$ of the falling mass can be measured, as well as the velocity $v$ of the glider-mass system. The animation below depicts this phenomenon (in the absence of air resistance). Then, divide each value by 10 to obtain $d$ and $\sigma_{d}$. (This distance is analogous to the distance of a tape and space on the ruler from the Acceleration experiment.) Any moving object has kinetic energy. If you do not get a linear graph, repeat the measurement. Another way of looking at conservation of energy is with the following energy diagram. If air resistance is neglected, then it would be expected that the total mechanical energy of the cart would be conserved. Conservation of Energy. I varied the mass of the cart for all six trials and recorded the corresponding velocities. To do this precisely, use a meter stick to measure the distance $10d$ for 10 picket and space pairs, and estimate your uncertainty $(\sigma_{10d})$ in this measurement. Conservation of Energy. It can only be transformed from one form to another. Therefore, the change in the potential energy $\Delta PE$ of the system, when the height $h$ of the falling mass $m$ changes by $\Delta h = h_{f} - h_{i}$, is given by: $$\Delta PE = PE_{f} - PE_{i} = mgh_{f} - mgh_{i} = mg\left(\Delta h\right) \tag{2}$$.
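Combining $F_{net} = mg$ with $F_{net} = (M+m)a$ gives $a = mg/(M+m)$, and inverting that relation recovers $g$ from a measured acceleration. A quick numerical check (the masses below are illustrative assumptions, not values from this lab):

```python
g_true = 9.81   # m/s^2, accepted value of the acceleration due to gravity
M = 0.300       # kg, glider mass (assumed)
m = 0.020       # kg, falling mass (assumed)

# acceleration of the glider-mass system: a = m*g / (M + m)
a = m * g_true / (M + m)

# estimate of g recovered from the (here noise-free) acceleration
g_est = a * (M + m) / m
```

With real data, $a$ comes from the slope of the $v$ vs. $t$ plot, so the estimate of $g$ inherits that slope's uncertainty.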
In this lab, students use a photogate and dynamics system to explore how a cart's kinetic energy, gravitational potential energy, and total mechanical energy change as it rolls down an inclined track. The apparatus is called an “air track” because an air “cushion” reduces the friction between the glider and the track surface so much that we neglect friction altogether. Physics 1050 Experiment 4 Conservation of Energy QUESTION 1: Draw and label the forces in a free-body diagram for the mass while it is in the middle of the track. However, when at the bottom of the hill, the coaster will contain only kinetic energy. For an isolated system, the total energy must be conserved. The gravitational potential energy is being transferred to kinetic energy since the object is not at rest and is moving down the ramp, as shown in the kinetic energy-time graph and potential energy-time graph. Once the “Waiting for data…” text appears, release the glider, and click the red “STOP” button just before the glider reaches the other end of the air track. Hypothesis: Energy of the system will be constant throughout. It may change in form or be transferred from one system to another, but the total remains the same. PHY 133 Lab 5 - Conservation of Energy. Determine the distance $d$ for one picket and space on the top of the air track. In this lab, we will have a mass attached to a string that hangs over a (massless, frictionless) pulley. Conservation of energy states that energy can change from one form to another, but it is always the same. We were very successful, yielding very small percent differences between the initial and final total mechanical energies. Thus: Regents Physics Lab Name: Date. (Eq. 6), where C is a constant. photo gate (mounted on top of the glider), interface box (photo gate $\rightarrow$ computer). Law of Conservation of Energy Examples: In Physics, most of the inventions rely on the fact that energy is conserved when it is transferred from one form to another.
And estimate their importance in your Laboratory. An air track with a glider and a photo gate timer are needed to perform the lab. Your lab instructor/TA has a list of the masses for all the gliders (posted to the door at the front of the lab room). In this lab, we worked to verify the principle of conservation of energy. The law of conservation of energy can be stated as follows: Total energy is constant in any process. For my lab, we rolled a tennis ball down a ramp, along a flat surface, and up another shorter ramp at a shallower angle. For an overview of Conservation of Energy, see Chapter 8 of either Katz or Giancoli. Energy is sometimes introduced as if it is a concept independent of Newton's laws (though related to them). Rotating the screw will tilt the track one way or the other, so adjust it until the glider remains nearly stationary on the air track. The texts Katz and Giancoli use E for Total Energy, U for Potential Energy and K for Kinetic Energy. If your value is not consistent with theory, what assumptions were made that might not hold true in the non-ideal conditions of this experiment? We set up the platform, a cart, and a photo gate. Enduring Understanding Learning Objectives 5.B The energy of a system is conserved. Each distance should be a multiple of your $d$ value; for example, if your first chosen point is the 2. Therefore, the change in the kinetic energy of the system between two points during its motion may be expressed as: $$\Delta KE = KE_{f} - KE_{i} = \frac{1}{2}\left(M+m\right){v_{f}}^{2} - \frac{1}{2}\left(M+m\right){v_{i}}^{2} = \frac{1}{2}\left(M+m\right)\left({v_{f}}^{2}-{v_{i}}^{2}\right) \tag{1}$$. Law of Conservation of Energy. Source: Essential College Physics. To do this, we will examine the conversion of gravitational potential energy into translational kinetic energy for an isolated system of an air-track glider and a falling mass.
Be sure to tighten the wing nut on the leveling screw when the track is level, to secure your adjustment. The position of the glider as a function of time can be accurately recorded by means of a photogate device. Tie the other end of the string to a 10g or 20g mass. In these labs, you will investigate more closely the behavior of a system’s internal energy. For more details, see the Photogate Reference Document, although hopefully you know how to do it by now. You can define this as zero for the first data point you record, and then use the distance traveled along the air track from that first point. A number of electrical and mechanical devices operate solely on the law of conservation of energy. Law of Conservation of Energy by. This is a lab activity involving transformations between the gravitational potential energy, elastic potential energy, and kinetic energy of a system. Ideally, the total. Since the energy remains constant throughout the whole run, gravity is a force which is conservative. In the second part of the lab, we were to find the velocity of the cart moving through a photo gate. To do this, double-click the Desktop icon labeled “Exp4_xv_t2.” A “Sensor Confirmation” window should appear, and click “Connect.” The LoggerPro window should appear with a spreadsheet on the left (having columns labeled “Time,” “Distance,” “Velocity”) and an empty velocity vs. time graph on the right. We will discuss a … The lab is divided into three separate but related parts. With a “good” set of data, you should have ~13 velocity-time pairs on the spreadsheet in the LoggerPro window, and a straight line velocity vs. time graph should appear. (If no energy enters or leaves a system, then the total energy in the system remains constant, although it may be converted from one form to another.)
For example, because $\Delta PE = PE_{f} - PE_{i}$, then using the addition/subtraction uncertainty rule gives: $\sigma_{\Delta PE} = \sqrt{\left(\sigma_{PE_{f}}\right)^{2} + \left(\sigma_{PE_{i}}\right)^{2}}$. In the first part of the lab we were to find the spring constant of our spring. the law of conservation of mechanical energy for this system. Physics Lab Steps For this physics lab… According to the law of conservation of energy: “Energy can neither be created nor destroyed.” Lab Report: Conservation of Energy: Spring Constant Objectives Materials Masking tape. What may have affected your results? In this lab, we were to confirm the Law of Conservation of Energy. Lab # – Energy Conservation Considering all of these terms together, the ideal case predicts that the Total Energy of the spring-mass system should be described as follows: $E_{total} = \frac{1}{2}mv^{2} + \frac{1}{2}ky^{2} + C$ (Eq. 6). Theory: The Law of Conservation of Energy states that energy remains the same in an isolated system and it cannot be created nor … The conservation principles are the most powerful concepts to have been developed in physics. conduction experiments and. In this experiment, the glider (of mass $M$) on the air track and the attached falling mass $m$ both gain kinetic energy due to an equal loss of potential energy experienced by the falling mass. Adjust the decimal placement number (“Places”) and the increment (“Increment”) if necessary. For each velocity value, you also need a corresponding change in height $\Delta{h}$. Since the mass and the glider move at the same pace, the distance the mass falls will equal the distance the glider moves along the air track. Hence, we consider the glider-mass system to be isolated from friction. Record all values in your notebook. Some error … 8.01 Physics I, Fall 2003, Prof. Stanley Kowalski. To calculate the change in kinetic energy from your first data point to every other data point, use equation (1) above.
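The quadrature rule quoted above for $\sigma_{\Delta PE}$ is a one-line computation; a sketch with assumed (not measured) uncertainty values:

```python
import math

def add_in_quadrature(*sigmas):
    """Combine independent uncertainties: square root of the sum of squares."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. sigma_PE_f = 0.003 J and sigma_PE_i = 0.004 J (assumed values)
sigma_delta_PE = add_in_quadrature(0.003, 0.004)  # -> 0.005 J
```

The same helper applies to $\sigma_{\Delta KE}$, since it uses the identical addition/subtraction rule.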
In this experiment we will examine the law of conservation of total mechanical energy by observing the transfer of gravitational potential energy to kinetic energy, using a glider on an air track that is pulled by a … A loss in one form of energy is accompanied by an equal increase in other forms of energy. In rubbing our hands we do mechanical work which produces heat; i.e., it is an example of the law of conservation of energy. Is your estimate for $g$ consistent with the accepted value? To do this, we will examine the conversion of gravitational potential energy into translational kinetic energy for an isolated system of an air-track glider and a … On the LoggerPro window, click the green “Collect” button to start a trial. Tie one end of the string to the end of the glider, and pass it over the pulley at the edge of the air track. In today's lab, the potential energy is gravitational potential energy given by PE = mgy. I'm in grade 11 physics and we were just told to create and carry out a conservation of energy lab and do a report. If you cannot find your glider number, you can also measure its mass using the digital scale in the lab room. The purpose of this lab was to use a spring launcher to show that total mechanical energy remains constant when acted upon by a conservative force. Thus, you can compute the sum of the potential and kinetic energies at many moments during the motion, and verify (or dismiss!) For example, a roller coaster contains mostly potential energy before proceeding down a hill. Which conservation laws apply to each type of collision. Level the air track by carefully adjusting the single leveling screw at one end of the track. If the value of a physical quantity is conserved, then the value of that quantity stays constant.
(Since both masses $M$ and $m$ are attached by a taut string, they should have the same acceleration, which we call the “acceleration of the system.”) Because the only force moving the system is the force of gravity acting on the falling mass, the net force should equal the weight of the falling mass, i.e., $F_{net} = mg$. Then we hung a string with a mass from a hook that compresses the spring attached to the cart. The kinetic energy of the glider-mass system, when moving at velocity $v$, is given by $KE = \frac{1}{2}Mv^{2} + \frac{1}{2}mv^{2} = \frac{1}{2}\left(M+m\right)v^{2}$. A light sensor at the end of the air track receives the LED signals, and the LoggerPro program in the computer measures and records the times when the light beam of the photogate is blocked or unblocked. In this lab, conservation of energy will be demonstrated. In this experiment, we will examine the law of conservation of total mechanical energy in a system by observing the conversion from gravitational potential energy to translational kinetic energy, using a glider on a frictionless air track that is pulled by a falling mass. As the cart rolls down the hill from its elevated position, its mechanical energy is transformed from potential energy to kinetic energy. Thus, the system's gravitational potential energy decreases as the mass falls to the floor. PHYS 1111L - Introductory Physics Laboratory I. using the law of conservation of mechanical energy. LAB 3 CONSERVATION OF ENERGY 1001 Lab 3 ‐ 1 This week we have enough of the basic concepts to begin a discussion of energy itself.
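Equations (1) and (2) combine into the prediction these labs test: for the isolated glider-mass system, $\Delta KE = -\Delta PE$. A short numerical sketch (the masses and drop height are illustrative assumptions):

```python
g = 9.81              # m/s^2
M, m = 0.300, 0.020   # kg, assumed glider and falling masses
delta_h = -0.40       # m, the falling mass drops 0.40 m (so delta_h < 0)

delta_PE = m * g * delta_h   # equation (2): change in potential energy
delta_KE = -delta_PE         # conservation of mechanical energy

# starting from rest, equation (1) then gives the final speed of the system
v_final = (2 * delta_KE / (M + m)) ** 0.5
```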
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7828006744384766, "perplexity": 406.113163168527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00150.warc.gz"}
https://www.cuemath.com/en-us/learn/mathematics/algebra-beauty-of-algebra/
Learn The Beauty of Algebra 1 Introduction 2 History of Algebra 3 Algebra Basics 4 How to Do Algebra 5 What Is Algebra Good For In Real Life? 6 The Application of Algebra 7 Conclusion 8 Summary September 23, 2020 Introduction "The pure mathematician, like the musician, is a free creator of his world of ordered beauty." -Bertrand Russell The beauty of algebra lies in its equations; each of the equations defines the relationship between X, Y, and Z. Relationships and connections are all around us. Algebra is not about memorizing formulas or manipulating complicated expressions or long lists of function properties. It's a way of organizing one's thinking: taking complex problems and breaking them into pieces, exploring possible solutions, and formulating solutions so they apply to other complex problems in the future. Algebra isn't facts and procedures; it's thinking and understanding, and those skills are valuable, carrying over to real life; plenty of evidence demonstrates this (for example, How People Learn, published by the National Academies of Science). Algebra reinforces logical thinking and is the underlying mechanism of most advanced algorithms in fields such as Machine Learning, Deep Learning, and Artificial Intelligence. Also, Linear Algebra helps in boosting many programming languages. Algebra acts as a substrate to describe many types of real-world examples, from gravity to population growth. Algebra can be used to form large matrices, defines how an entire category of matter, namely ideal gases, behaves, plays a significant role in statistics, and is impressive and beautiful in its simplicity. A science of solving equations It is a branch of mathematics where we use the alphabet as a substitute for numbers.
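This "science of solving equations" is mechanical enough to hand to a computer. A small sketch solving any equation of the form ax + b = c (the helper name is my own, for illustration):

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x (requires a != 0)."""
    return (c - b) / a

# "8x + 2 = 50" -> x = (50 - 2) / 8 = 6
x = solve_linear(8, 2, 50)
assert 8 * x + 2 == 50  # substitute back to check the solution
```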
In an algebraic equation, LHS should always be equal to RHS; it means that when a mathematical operation is done on one side of the equation, the same operation should be done on the other side of the equation, and the numbers act as constants. Algebra includes real numbers, complex numbers, matrices, vectors, and many other forms of mathematical representation. History of Algebra Algebra was first introduced to the world by the Greeks and the Babylonians over 18 centuries ago. In the third century, they devised a system that helped to solve problems with both linear and quadratic equations. In layman's terms, it helped them to simplify guesswork and to make predictions for everyday tasks. Guesswork has always existed intuitively, as people constantly used known variables to make predictions about an unknown variable which had a relationship with them. The difference with algebra was that it replaced numbers with symbols. Instead of having to wrap your head around large ambiguous numbers, simple symbols were used in their place. A Simple Example of Algebra's Value Instead of stating "I am finding a number which, when multiplied by 8 before having an addition of 2, would give me 50," you could now simply rephrase it as "8x + 2 = 50, where x is an unknown number." When working with larger sums, complex numbers, and convoluted equations, these symbols allow us to simplify the problems for our minds. Algebra Basics The basics of algebra include simple mathematical operations like addition, subtraction, multiplication, and division involving both constants as well as variables. In the given equation, the x and y represent unknown variables which have to be determined, whereas 3 and 2 are the numerical values. C is the constant term. Algebraic Expression Terms related to basic algebra are as follows: 1. Exponent 2. Expression 3. Polynomial (Monomial, binomial and trinomial) 4. Like terms and Unlike terms 5.
Constants An equation is a statement that consists of two same identities separated by the “=” sign. Whereas an expression is a group of different terms separated by ‘+’ or ‘-‘ sign. Like terms are those terms whose variables and their exponents are the same. Algebraic Operations The general arithmetic operations performed in algebra are: 1. Addition: ( x + y ) 2. Subtraction:  ( x  –  y ) 3. Multiplication: ( x × y  ) 4. Division: ( x/y or x ÷ y)  Where x and y are the variables. The order of these operations will follow the BODMAS rule, which means the terms inside the brackets are considered first. Then, roots and exponents are operated on second priority, followed by Solving all the division and multiplication operations and later addition and subtraction. Algebraic  Formulas The general formulas used in algebra to solve algebraic equations and find the values of unknown variables are given here: Algebraic  Rules The basic algebraic rules are as follows: • The Symmetry rule • The commutative rules • Two rules for equation How to Do Algebra Algebra can be a difficult subject to master. In addition to numbers, there are letters thrown into equations. These letters are called variables, and they represent unknown numbers. It may seem overwhelming at first, but by learning a few basic concepts and doing practice problems, you can be successful in algebra. Once you have learned the basics, you will begin to see how useful algebra is, and applies to situations in everyday life! A. Understanding Order of Operations 1. Memorize PEMDAS PEMDAS is an acronym to help you remember the order of operations in math. PEMDAS stands for Parentheses, Exponents, Multiplication and Division, Addition and Subtraction. Whenever you are solving a problem, start with the expressions in parentheses and work your way through the acronym, finishing with subtraction. 1. For parentheses, perform all of the operations inside the parentheses using this same order. 2. 
Multiplication and division are considered equal operations. You can solve them at the same time, so simply solve from left to right. 3. Addition and subtraction are also equal operations, so solve from left to right. 4. You can remember PEMDAS using the mnemonic, Please Excuse My Dear Aunt Sally. For example, in the above equations • The expression in parentheses was solved first • Next, exponents were solved • Next, multiply and divide left-to-right When starting to learn algebra, the material can get overwhelming very quickly. Don't be afraid to ask your teacher for help or seek out extra tutoring. Even asking a friend who may have a better understanding can be useful. Ask your parents about getting a tutor if you are really struggling. 3. Isolate the variable on one side of the equation When given an algebraic expression, you will notice that there are constants and variables. A constant is any number given, while a variable is a letter that represents an unknown number. To isolate the variable, add or subtract terms to get the variable on one side. If the variable has a coefficient, divide both sides by that coefficient to get the variable alone. Example: to solve 6y + 6 = 48, first subtract 6 from both sides, then divide by 6, giving y = 7. 4. Take the root of the number to cancel an exponent If you are solving for a variable that is squared, you will need to take the square root of it to solve the problem. Conversely, if the variable is a square root, then you will need to square it to solve the problem. Remember that whatever you do to one side of the equation, you must do to the other side. Example: 1. to solve √x = 9, you need to square both sides of the equation: x = 81. 2. to solve x² = 16, you need to take the square root of both sides of the equation: x = ±4. 5. Combine like terms Whenever you have terms that have the same variable, you can combine them to simplify the problem. This helps to keep equations manageable and easier to solve.
Remember, terms that have different exponents are not like terms: x is not the same as x². • The following are like terms: 4x, -3x, 0.45x, -132x • The following are not like terms: 5x, 8y², -13y, 9z, 12xy Example: 4x + 3y - 7x has two like terms, 4x and -7x. (-7x + 4x) + 3y = (-3x) + 3y 6. Practice with more complex problems. The art of mastering any concept is practice. Try solving problems with increasing difficulty to truly check your comprehension. Use problems from your textbook or seek out extra problems online. Example: solve q + 18 = 9q - 6. Make a habit of checking your answers when you have solved a problem. Once you have obtained the solution and discovered the value of the variable, check your work by inserting the number you have obtained into the original equation. If the expression is still true, then you have found the correct solution! Example: substitute 3 for q. Right! Since the equation is true, you know that your solution is correct. B. Solving Problems • Recognize that algebra is just like solving a puzzle. Like any puzzle, there are pieces. Learning how to recognize the numbers and symbols for the placeholders that they are makes the solution much easier to grasp. • Try to find the missing number in a problem where the final answer is given. For example: 1 + x = 9 • The missing number is 8, because 1 plus 8 equals 9. Pretty simple, right? This is basic algebra. • Perform operations on both sides of the equation. When solving an algebraic problem, you must remember that if you alter one side of the equation in any way, you must do the exact same thing to the other side of the equation. If you add, subtract, multiply, or divide, you must perform the same operation to the opposite side. Example: to solve x + 3 = 2x - 1, subtract x from both sides of the equation, then add 1 to both sides of the equation, giving x = 4. C. Multiplying with the FOIL Method 1. Define FOIL. The FOIL method stands for First, Outside, Inside, Last.
It is a method used to multiply two binomials together. A binomial is an algebraic expression with two terms, like 5x - 3. For example, to calculate (5x-3)(4x+1) we have to use the FOIL method. 2. Multiply the first terms of each binomial. The “F” in FOIL stands for “First.” The first terms are the terms on the left in each set of parentheses. Remember that when you multiply two of the same variables together, the result is the variable, squared. Example: 3. Multiply the two outside terms together. The “O” in FOIL stands for “Outside.” The outside term of the first binomial is on the left; the outside term of the second binomial is on the right. Example: 4. Find the product of the two inside terms. The “I” in FOIL stands for “Inside.” The inside term of the first binomial is on the right; the inside term of the second binomial is on the left. Example: 5. Multiply the last two terms together. The “L” in FOIL stands for “Last.” The last term of each binomial is on the right. Example: 6. Combine all terms and simplify. After putting the expression together, you can combine like terms to simplify the expression fully. Make sure you pay close attention to positive and negative signs when adding like terms. Example: solve (3x+5)(2x-4). This product is calculated term by term; after combining like terms, it simplifies to (3x+5)(2x-4) = 6x² - 12x + 10x - 20 = 6x² - 2x - 20. D. Working with Exponents 1. Simplify exponents of numbers. When a number has an exponent, that means you multiply that number by itself as many times as the exponent says. To simplify any number that has an exponent, simply multiply it the appropriate number of times. Example: • If there is a negative sign and no parentheses, the exponent is simplified first and then the negative sign gets applied: Example: -2² = -(2·2) = -4 • If there is a negative sign, but the number is in parentheses, the negative sign is part of the base. Example: (-2)² = (-2)·(-2) = 4 2. Combine like terms with the same exponents.
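The FOIL steps above translate directly into code if each binomial ax + b is stored as its coefficient pair (a, b); the helper below is my own sketch, not from the original article:

```python
def foil(bin1, bin2):
    """Multiply two binomials a*x + b, each given as a tuple (a, b).
    Returns the coefficients (c2, c1, c0) of c2*x^2 + c1*x + c0."""
    a1, b1 = bin1
    a2, b2 = bin2
    first = a1 * a2      # F: product of the first terms (x^2 coefficient)
    outside = a1 * b2    # O: outside terms (an x coefficient)
    inside = b1 * a2     # I: inside terms (the other x coefficient)
    last = b1 * b2       # L: product of the last terms (constant)
    return (first, outside + inside, last)

# (3x + 5)(2x - 4) = 6x^2 - 2x - 20
assert foil((3, 5), (2, -4)) == (6, -2, -20)
```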
It may be confusing at first to see a variable with an exponent. Just remember, terms with the same variable and the same exponent can be added or subtracted. If the letters are the same but the exponents are different, they cannot be combined.

Example: 3z² + 5z² = 8z², since both terms have the same variable and the same exponent. On the other hand, 5z + 5z² cannot be simplified, since one variable has an exponent and one does not.

3. Add the exponents together when multiplying variables. If two variables are being multiplied together and they both have exponents, you can add the exponents together to get the resulting exponent. This only applies to variables of the same letter.

Example: x² · x³ = x⁵

4. Subtract the exponents when dividing variables. If you want to divide two variables that have exponents, simply subtract the bottom exponent from the top exponent. This only applies to variables of the same letter.

Example: x⁵ ÷ x³ = x²

What Is Algebra Good For In Real Life?

Algebra has long been feared by students taking mathematics. Other than the perception of complexity, students also struggle because they view it as an abstract topic. Without obvious real-life applications, students struggle to make sense of the symbols and equations used. Yet algebra at its core was invented as a tool to help us solve problems that existed in the real world. As such, the value of algebra can only be understood by first looking at its origins.

The Application of Algebra

For most students, mention the application of algebra and we think of concepts such as:

• Simultaneous equations
• Differentiation & integration
• Probability
• Trigonometry
• Binomial theorem

All of these concepts use algebra to express their various relationships and formulas. This makes algebra an integral part of them. However, these concepts again remain largely abstract in students' eyes. To give context to the usefulness of algebra beyond school, here we have collected a list of examples where it is utilised.
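Before turning to the examples, the FOIL and exponent rules from the previous sections can be sanity-checked numerically with a short Python sketch (the helper name `foil` and the sample values of x are illustrative):

```python
# Numerically verify the FOIL expansions and exponent rules
# from the sections above at a few sample values of x.

def foil(a, b, c, d, x):
    """Multiply (a*x + b)(c*x + d) term by term: First, Outside, Inside, Last."""
    first = (a * x) * (c * x)
    outside = (a * x) * d
    inside = b * (c * x)
    last = b * d
    return first + outside + inside + last

for x in [-2.0, 0.5, 3.0]:
    # (5x - 3)(4x + 1) should equal 20x^2 - 7x - 3
    assert foil(5, -3, 4, 1, x) == 20 * x**2 - 7 * x - 3
    # (3x + 5)(2x - 4) should equal 6x^2 - 2x - 20
    assert foil(3, 5, 2, -4, x) == 6 * x**2 - 2 * x - 20
    # Exponent rules: x^2 * x^3 = x^5 and x^5 / x^3 = x^2
    assert x**2 * x**3 == x**5
    assert abs(x**5 / x**3 - x**2) < 1e-9

# Sign rules for exponents:
assert -2**2 == -4    # the exponent binds tighter than the minus sign
assert (-2)**2 == 4   # parentheses make -2 the base

print("all identities hold")
```

Checking an expansion at a few sample points like this is a quick way to catch a sign or coefficient slip after applying FOIL by hand.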
1. Algebra in Our Everyday Activities

Budgeting in a Supermarket

Algebra is intuitively used in daily budgeting. Take the example of a grocery trip to a supermarket. Suppose you had a budget of $25 and wanted to buy bread ($6/loaf), eggs ($10/pack) and milk ($2.5/bottle), under the following conditions:

• You need 1 loaf of bread
• You need 1 pack of eggs
• You want as many bottles of milk as possible

A simple equation can be used to determine how much milk to buy: 25 = 6 + 10 + 2.5x, where x is the number of milk bottles. While most people will not explicitly create this equation in their minds, they do intuitively express it in a different form when calculating: you would subtract the cost of bread ($6) and eggs ($10) from your budget ($25), then divide the remainder by the cost of a bottle of milk ($2.5) to find the number of milk bottles. Here x = (25 - 6 - 10) / 2.5 = 3.6, so you can afford 3 full bottles.

2. Algebra in Business

A. Bank Interest on Loans

When you secure a loan from a bank, you will most likely be required to pay back more than the principal borrowed. This is due to the interest that accrues over the duration of the loan. The simple interest amount can be calculated with the formula I = P × R × T, where:

I: Simple interest
P: Principal amount borrowed
R: Interest rate, which is set by the bank
T: Duration

Calculating and projecting the simple interest incurred on a loan will help you determine the duration within which it is optimal for you to repay the loan.

B. Revenue Forecasting

All businesses incur costs in the production of goods, which must be recovered in order for a profit to be made from their sale. Through careful planning and forecasting, a greater profit can be earned. The cost and corresponding profit per unit are not constant, due to the spreading of fixed costs among the units produced. As such, it is useful to project business revenue into the future based on existing data by using algebra.
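The budgeting and simple-interest calculations above can be sketched in a few lines of Python; the supermarket figures are the ones from the text, while the loan figures (principal, rate, term) are made up for illustration:

```python
# Supermarket budgeting: 25 = 6 + 10 + 2.5x, solved for x.
budget, bread, eggs, milk_price = 25.0, 6.0, 10.0, 2.5
x = (budget - bread - eggs) / milk_price
print(f"exact solution: x = {x}")              # 3.6
print(f"whole bottles you can buy: {int(x)}")  # 3

# Simple interest: I = P * R * T (loan figures are made up).
principal = 10_000.0   # P: amount borrowed
rate = 0.05            # R: 5% per year
years = 3              # T: duration in years
interest = principal * rate * years
print(f"simple interest over {years} years: ${interest:.2f}")  # $1500.00
```

Note that the exact answer 3.6 is rounded down, not to the nearest integer: buying 4 bottles would exceed the budget.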
Based on the daily revenue over a period of 1 month, the business owner derives the revenue formula:

R = 11x² - 500x + 30000

R: Revenue earned on day x
x: exact day of operation

We can forecast the revenue of the subsequent month by carrying out the following algebraic manipulation: since the coefficient of x² is positive, the parabola opens upward and revenue is lowest at the vertex, x = 500 / (2 × 11) ≈ 22.73. From the equation, on day x ≈ 22.73 the forecasted revenue will be at its lowest, R ≈ $24,318.18. Hence, the business owner will be able to control costs on that specific day by cutting down on manpower and operating expenses. This in turn helps to generate the highest profit possible and make sound decisions.

3. Algebra in Computer Programming

Our final example of algebra involves an indirect but complex application in the world of computer programming. For clarity, you do not need algebra to perform programming. However, an "algebraic" way of thinking will be of great assistance in coding. Why do we say that? Programming involves the usage of abstract rules (an equation) to automate the creation of the end result. Within the stated rule lie various variables, which can be expressed as symbols not unlike what we have covered thus far. Computer programming employs the same type of logic as found in algebra; however, since it is intended for machine automation, further rules are added on. These rules can be thought of as an expansion of the base formula. As such, students who are confident in algebra tend to pick up programming logic at a relatively quicker pace.

Einstein's Takeaway

Algebra is applied intuitively or intentionally to solve a host of different problems in the real world. Even where it is not directly applicable, having trained your mind to be comfortable with algebra, you will find success with many other similar concepts.

Conclusion

Remember that algebra is the beginning of abstract thinking in mathematics. Hence, it is imperative that we understand the model students have in their mind and then help shape that model.
Most important is to remove the fear of using letters in mathematical expressions. Make it fun by using puzzles, and then inculcate the understanding of function representation. Then, definitions around like terms, coefficients, and others will naturally follow.

Written by Asha M, Cuemath Teacher
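As a closing check on the business example, the minimum of the revenue formula R = 11x² - 500x + 30000 can be confirmed numerically in Python, comparing the algebraic vertex with a brute-force search over a grid of days:

```python
# Verify the minimum of R(x) = 11x^2 - 500x + 30000 both algebraically
# and by brute force over a fine grid of "days".

def revenue(x):
    return 11 * x**2 - 500 * x + 30000

# Algebraic minimum of a*x^2 + b*x + c is at x = -b / (2a) = 500 / 22.
x_min = 500 / (2 * 11)
print(round(x_min, 4))           # 22.7273
print(round(revenue(x_min), 2))  # 24318.18

# Brute-force check over days 0.00 .. 31.00 in steps of 0.01:
grid = [i / 100 for i in range(0, 3101)]
best = min(grid, key=revenue)
assert abs(best - x_min) < 0.01
```

Both approaches agree with the figures quoted in the text: the forecasted revenue bottoms out near day 22.73 at about $24,318.18.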
### Second Order Bias Corrected Efficient GMM Estimator

B F Chakalabbi, Sagar Matur and Sanmati Neregal
Department of Statistics, Karnatak University's Karnatak Arts College, Dharwad – 580001, India

Abstract: In this paper, the three conventional GMM estimators (first-difference, level and system GMM estimators, each with its respective efficient initial weight matrix) are considered to estimate the autoregressive panel data model. It is observed that the bias of the first-difference GMM estimator is the highest and the bias of the system GMM estimator is the lowest among these estimators, but as the variance ratio increases and as the autoregressive parameter approaches one, the bias of all the aforementioned estimators increases. Hence, to reduce such bias, a second order bias correction method is considered. Through Monte Carlo simulation it is observed that the considered second order bias correction method works well for the first-difference and system GMM estimators, especially when the variance ratio is greater than one.

Keywords: First-difference GMM estimator, Level GMM estimator, System GMM estimator, Second order bias.
# American Institute of Mathematical Sciences

September 2011, 10(5): 1315-1329. doi: 10.3934/cpaa.2011.10.1315

## $H^{1,p}$-eigenvalues and $L^\infty$-estimates in quasicylindrical domains

1 Dipartimento di Matematica e Informatica, Università di Salerno, P. Grahamstown, Fisciano, SA I-84084

Received March 2009  Revised November 2009  Published April 2011

In this paper we consider estimates of the Rayleigh quotient and in general of the $H^{1,p}$-eigenvalue in quasicylindrical domains. We then apply the results to obtain, by variational methods, existence and uniqueness of weak solutions of the Dirichlet problem for second-order elliptic equations in divergence form. Global boundedness estimates have also been established for such solutions.

Citation: Antonio Vitolo. $H^{1,p}$-eigenvalues and $L^\infty$-estimates in quasicylindrical domains. Communications on Pure & Applied Analysis, 2011, 10 (5) : 1315-1329. doi: 10.3934/cpaa.2011.10.1315
# Azimuthal Quantum Number

Quantum numbers are numbers allocated to all the electrons in an atom, and they describe certain characteristics of each electron. The state of an electron is defined completely in terms of four numbers: the principal quantum number, the azimuthal quantum number, the magnetic quantum number, and the spin quantum number. In this article, we will be discussing the azimuthal quantum number.

## What is Azimuthal Quantum Number?

Along with the principal quantum number (n), the magnetic quantum number (m) and the spin quantum number (s), the azimuthal quantum number is part of the set of quantum numbers which describe the unique quantum state of an electron. It can be defined as:

The quantum number associated with the orbital angular momentum of an atomic electron.

It is also termed the orbital angular momentum quantum number, orbital quantum number or second quantum number, and is symbolized as ℓ. This number describes the shape of the orbital and also determines the orbital angular momentum. For example, a p orbital is associated with an azimuthal quantum number equal to 1.

## History

Arnold Sommerfeld introduced the azimuthal quantum number in the context of the Bohr model of the atom. The Rutherford-Bohr model, or Bohr model, depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits around the nucleus – similar in structure to the solar system, but with attraction provided by electrostatic forces rather than gravity. The Bohr model grew out of spectroscopic analysis of the atom in combination with the Rutherford atomic model. The angular momentum was found to be zero at the lowest quantum level. Orbits with zero angular momentum were termed 'pendulum' orbits.

### Subsidiary Quantum Number

The azimuthal quantum number describes the shape of the orbital.
It is denoted by ℓ. The values of ℓ range from 0 to n − 1.

For an s-orbital, ℓ = 0
For a p-orbital, ℓ = 1
For a d-orbital, ℓ = 2
For an f-orbital, ℓ = 3

With the help of the value of the azimuthal quantum number, we can determine the total number of energy sub-levels in a given energy level.

### Angular Momentum Quantum Number

• Intrinsic (or spin) angular momentum quantum number, or simply spin quantum number
• Orbital angular momentum quantum number (the subject of this article)
• Magnetic quantum number, related to the orbital momentum quantum number
• Total angular momentum quantum number

## Frequently Asked Questions – FAQs

### What is the range of the azimuthal quantum number?

The azimuthal quantum number ranges from 0 to n − 1.

### What is the value of the spin quantum number?

The spin quantum number of an electron can take one of two possible values: +1/2 or −1/2.

### For a quantum number, if n = 4, which values can l not take?

According to the question, the principal quantum number n = 4. Therefore l, which is the angular momentum quantum number, can have the values 0, 1, 2 and 3, and nothing beyond 3.

### Name the quantum number that describes the size of the orbital.

The principal quantum number describes the size of the orbital. The principal quantum number cannot be zero, and the size of the orbit increases as the number increases.

### What are the 4 types of quantum numbers?

The 4 types of quantum numbers are the principal quantum number, the angular momentum (azimuthal) quantum number, the magnetic quantum number, and the spin quantum number.
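The correspondence between ℓ values and subshell letters, and the count of sub-levels per energy level, can be tabulated with a small Python sketch (the dictionary and function names are illustrative):

```python
# Map the azimuthal quantum number l to its subshell letter, and list
# the allowed l values (and hence sub-levels) for a given n.

SUBSHELL = {0: "s", 1: "p", 2: "d", 3: "f"}

def sublevels(n):
    """Allowed azimuthal quantum numbers for principal quantum number n: 0 .. n-1."""
    return list(range(n))

for n in [1, 2, 3, 4]:
    letters = [SUBSHELL[l] for l in sublevels(n)]
    print(f"n={n}: {len(letters)} sub-level(s): {', '.join(letters)}")

# n=4 gives the four sub-levels s, p, d, f (l = 0, 1, 2, 3)
```

Because ℓ runs from 0 to n − 1, a level with principal quantum number n always contains exactly n sub-levels.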
# Rotation around a fixed axis

Sphere rotating around one of its diameters

Rotation around a fixed axis or about a fixed axis of revolution or motion with respect to a fixed axis of rotation is a special case of rotational motion. The fixed axis hypothesis excludes the possibility of an axis changing its orientation, and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation along a number of stationary axes at the same time is impossible. If two rotations are forced at the same time, a new axis of rotation will appear.

This article assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis, than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes.

## Translation and rotation

An example of rotation. Each part of the worm drive—both the worm and the worm gear—is rotating on its own axis.

A rigid body is an object of finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates.
A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and circular motion.

Purely translational motion occurs when every particle of the body has the same instantaneous velocity as every other particle; then the path traced out by any particle is exactly parallel to the path traced out by every other particle in the body. Under translational motion, the change in the position of a rigid body is specified completely by three coordinates such as x, y, and z giving the displacement of any point, such as the center of mass, fixed to the rigid body.

Purely rotational motion occurs if every particle in the body moves in a circle about a single line. This line is called the axis of rotation. Then the radius vectors from the axis to all particles undergo the same angular displacement at the same time. The axis of rotation need not go through the body. In general, any rotation can be specified completely by the three angular displacements with respect to the rectangular-coordinate axes x, y, and z. Any change in the position of the rigid body is thus completely described by three translational and three rotational coordinates.

Any displacement of a rigid body may be arrived at by first subjecting the body to a displacement followed by a rotation, or conversely, to a rotation followed by a displacement. We already know that for any collection of particles—whether at rest with respect to one another, as in a rigid body, or in relative motion, like the exploding fragments of a shell—the acceleration of the center of mass is given by

${\displaystyle F_{\mathrm {net} }=Ma_{\mathrm {cm} }\;\!}$

where M is the total mass of the system and acm is the acceleration of the center of mass. There remains the matter of describing the rotation of the body about the center of mass and relating it to the external forces acting on the body.
The kinematics and dynamics of rotational motion around a single axis resemble the kinematics and dynamics of translational motion; rotational motion around a single axis even has a work-energy theorem analogous to that of particle dynamics.

## Kinematics

### Angular displacement

A particle moves in a circle of radius ${\displaystyle r}$ . Having moved an arc length ${\displaystyle s}$ , its angular position is ${\displaystyle \theta }$  relative to its original position, where ${\displaystyle \theta ={\frac {s}{r}}}$ .

In mathematics and physics it is usual to use the natural unit radians rather than degrees or revolutions. Units are converted as follows:

{\displaystyle {\begin{aligned}1{\text{ revolution }}&=360^{\circ }=2\pi {\text{ radians, and}}\\1{\text{ rad}}&={\frac {180^{\circ }}{\pi }}\approx 57.30^{\circ }.\end{aligned}}}

An angular displacement is a change in angular position:

${\displaystyle \Delta \theta =\theta _{2}-\theta _{1},\!}$

where ${\displaystyle \Delta \theta }$  is the angular displacement, ${\displaystyle \theta _{1}}$  is the initial angular position and ${\displaystyle \theta _{2}}$  is the final angular position.

### Angular speed and angular velocity

Change in angular displacement per unit time is called angular velocity, with direction along the axis of rotation. The symbol for angular velocity is ${\displaystyle \omega }$  and the units are typically rad s−1. Angular speed is the magnitude of angular velocity.

${\displaystyle {\overline {\omega }}={\frac {\Delta \theta }{\Delta t}}={\frac {\theta _{2}-\theta _{1}}{t_{2}-t_{1}}}.}$

The instantaneous angular velocity is given by

${\displaystyle \omega (t)={\frac {d\theta }{dt}}.}$

Using the formula for angular position and letting ${\displaystyle v={\frac {ds}{dt}}}$ , we have also

${\displaystyle \omega ={\frac {d\theta }{dt}}={\frac {v}{r}},}$

where ${\displaystyle v}$  is the translational speed of the particle.
Angular velocity and frequency are related by

$$\omega = 2\pi f.$$

### Angular acceleration

A changing angular velocity indicates the presence of an angular acceleration in the rigid body, typically measured in rad s$^{-2}$. The average angular acceleration $\overline{\alpha}$ over a time interval $\Delta t$ is given by

$$\overline{\alpha} = \frac{\Delta\omega}{\Delta t} = \frac{\omega_2 - \omega_1}{t_2 - t_1}.$$

The instantaneous angular acceleration $\alpha(t)$ is given by

$$\alpha(t) = \frac{d\omega}{dt} = \frac{d^2\theta}{dt^2}.$$

Thus, the angular acceleration is the rate of change of the angular velocity, just as acceleration is the rate of change of velocity.

The translational acceleration of a point on the rotating object is given by

$$a = r\alpha,$$

where $r$ is the radius, or distance from the axis of rotation. This is the tangential component of acceleration: it is tangential to the direction of motion of the point. If this component is zero, the motion is uniform circular motion, and the velocity changes in direction only.

The radial acceleration (perpendicular to the direction of motion) is given by

$$a_{\mathrm{R}} = \frac{v^2}{r} = \omega^2 r.$$

It is directed towards the center of the rotational motion, and is often called the centripetal acceleration.

The angular acceleration is caused by the torque, which can have a positive or negative value in accordance with the convention of positive and negative angular frequency. The ratio of torque to angular acceleration (how difficult it is to start, stop, or otherwise change rotation) is given by the moment of inertia: $\tau = I\alpha$.
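As a quick sanity check on the two acceleration components, here is a small illustrative sketch (the numbers are invented for the example):

```python
def tangential_acceleration(r, alpha):
    """a_t = r * alpha: the component tangent to the point's direction of motion."""
    return r * alpha

def centripetal_acceleration(r, omega):
    """a_R = omega**2 * r: the radial component, directed towards the axis."""
    return omega ** 2 * r

# A point 0.2 m from the axis, turning at 10 rad/s while speeding up at 5 rad/s^2:
print(tangential_acceleration(0.2, 5.0))    # 1.0 m/s^2
print(centripetal_acceleration(0.2, 10.0))  # 20.0 m/s^2
```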
### Equations of kinematics

When the angular acceleration is constant, the five quantities angular displacement $\theta$, initial angular velocity $\omega_i$, final angular velocity $\omega_f$, angular acceleration $\alpha$, and time $t$ can be related by four equations of kinematics:

$$\omega_f = \omega_i + \alpha t$$
$$\theta = \omega_i t + \tfrac{1}{2}\alpha t^2$$
$$\omega_f^2 = \omega_i^2 + 2\alpha\theta$$
$$\theta = \tfrac{1}{2}\left(\omega_f + \omega_i\right)t$$

## Dynamics

### Moment of inertia

The moment of inertia of an object, symbolized by $I$, is a measure of the object's resistance to changes to its rotation. The moment of inertia is measured in kilogram metre² (kg m²). It depends on the object's mass: increasing the mass of an object increases the moment of inertia. It also depends on the distribution of the mass: distributing the mass further from the center of rotation increases the moment of inertia by a greater degree. For a single particle of mass $m$ a distance $r$ from the axis of rotation, the moment of inertia is given by

$$I = mr^2.$$

### Torque

Torque $\boldsymbol{\tau}$ is the twisting effect of a force $\mathbf{F}$ applied to a rotating object which is at position $\mathbf{r}$ from its axis of rotation. Mathematically,

$$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},$$

where $\times$ denotes the cross product. A net torque acting upon an object will produce an angular acceleration of the object according to

$$\boldsymbol{\tau} = I\boldsymbol{\alpha},$$

just as $\mathbf{F} = m\mathbf{a}$ in linear dynamics.
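The moment of inertia and torque relations combine naturally with the constant-acceleration kinematic equations. The sketch below (an illustrative example of mine, with invented numbers) spins up a point mass from rest and cross-checks one kinematic equation against the others:

```python
def angular_acceleration(torque, inertia):
    """alpha = tau / I, the rotational analogue of a = F / m."""
    return torque / inertia

# A 2.0 kg point mass on a light rod, 0.5 m from the axis: I = m r^2.
I = 2.0 * 0.5 ** 2                        # 0.5 kg m^2
alpha = angular_acceleration(1.5, I)      # 3.0 rad/s^2 from a 1.5 N m net torque

# Spin-up from rest for t = 4 s using the constant-alpha kinematic equations:
t, omega_i = 4.0, 0.0
omega_f = omega_i + alpha * t             # first equation: 12.0 rad/s
theta = omega_i * t + 0.5 * alpha * t**2  # second equation: 24.0 rad

# The third equation must agree with the first two:
assert abs(omega_f ** 2 - (omega_i ** 2 + 2 * alpha * theta)) < 1e-9
print(omega_f, theta)
```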
The work done by a torque acting on an object equals the magnitude of the torque times the angle through which the torque is applied:

$$W = \tau\theta.$$

The power of a torque is equal to the work done by the torque per unit time, hence:

$$P = \tau\omega.$$

### Angular momentum

The angular momentum $\mathbf{L}$ is a measure of the difficulty of bringing a rotating object to rest. It is given by

$$\mathbf{L} = \sum \mathbf{r} \times \mathbf{p},$$

summed over all particles in the object. Angular momentum is the product of moment of inertia and angular velocity:

$$\mathbf{L} = I\boldsymbol{\omega},$$

just as $\mathbf{p} = m\mathbf{v}$ in linear dynamics.

The equivalent of linear momentum in rotational motion is angular momentum. The greater the angular momentum of a spinning object such as a top, the greater its tendency to continue to spin. The angular momentum of a rotating body is proportional to its mass and to how rapidly it is turning. In addition, the angular momentum depends on how the mass is distributed relative to the axis of rotation: the further away the mass is located from the axis of rotation, the greater the angular momentum. A flat disk such as a record turntable has less angular momentum than a hollow cylinder of the same mass and velocity of rotation.

Like linear momentum, angular momentum is a vector quantity, and its conservation implies that the direction of the spin axis tends to remain unchanged. For this reason, the spinning top remains upright whereas a stationary one falls over immediately.

The angular momentum equation can be used to relate the moment of the resultant force on a body about an axis (sometimes called torque) to the rate of rotation about that axis. Torque and angular momentum are related according to

$$\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt},$$

just as $\mathbf{F} = d\mathbf{p}/dt$ in linear dynamics.
In the absence of an external torque, the angular momentum of a body remains constant. The conservation of angular momentum is notably demonstrated in figure skating: when pulling the arms closer to the body during a spin, the moment of inertia is decreased, and so the angular velocity is increased.

### Kinetic energy

The kinetic energy $K_{\text{rot}}$ due to the rotation of the body is given by

$$K_{\text{rot}} = \tfrac{1}{2}I\omega^2,$$

just as $K_{\text{trans}} = \tfrac{1}{2}mv^2$ in linear dynamics. Kinetic energy is the energy of motion. The amount of translational kinetic energy depends on two variables: the mass of the object ($m$) and the speed of the object ($v$). Kinetic energy must always be either zero or a positive value: while velocity can have either a positive or negative value, velocity squared is always positive.[1]

## Vector expression

The above development is a special case of general rotational motion. In the general case, angular displacement, angular velocity, angular acceleration, and torque are considered to be vectors.

An angular displacement is considered to be a vector, pointing along the axis, of magnitude equal to that of $\Delta\theta$. A right-hand rule is used to find which way it points along the axis; if the fingers of the right hand are curled to point in the way that the object has rotated, then the thumb of the right hand points in the direction of the vector.

The angular velocity vector also points along the axis of rotation in the same way as the angular displacements it causes. If a disk spins counterclockwise as seen from above, its angular velocity vector points upwards. Similarly, the angular acceleration vector points along the axis of rotation in the same direction that the angular velocity would point if the angular acceleration were maintained for a long time. The torque vector points along the axis around which the torque tends to cause rotation.
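Returning to the figure-skating example above, the conservation of angular momentum can be put into numbers. This is an illustrative sketch with invented values for the two moments of inertia:

```python
# Moment of inertia with arms out and arms pulled in (invented values for illustration):
I_out, I_in = 4.0, 1.5           # kg m^2
omega_out = 3.0                  # rad/s

# No external torque, so L = I * omega is conserved:
omega_in = I_out * omega_out / I_in
print(omega_in)                  # 8.0 rad/s -- smaller I, faster spin

# Rotational kinetic energy is NOT conserved here; the skater's muscles do work:
K_out = 0.5 * I_out * omega_out ** 2   # 18.0 J
K_in = 0.5 * I_in * omega_in ** 2      # 48.0 J
```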
To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation; other forces and torques are compensated by the structure.

## Examples and applications

### Constant angular speed

The simplest case of rotation around a fixed axis is that of constant angular speed. Then the total torque is zero. For the example of the Earth rotating around its axis, there is very little friction. For a fan, the motor applies a torque to compensate for friction. Similar to the fan, equipment found in mass-production manufacturing effectively demonstrates rotation around a fixed axis. For example, a multi-spindle lathe is used to rotate the material on its axis to increase the production of cutting, deformation and turning.[2] The angle of rotation is a linear function of time, which modulo 360° is a periodic function. An example of this is the two-body problem with circular orbits.

### Centripetal force

Internal tensile stress provides the centripetal force that keeps a spinning object together. A rigid body model neglects the accompanying strain. If the body is not rigid this strain will cause it to change shape. This is expressed as the object changing shape due to the "centrifugal force".

Celestial bodies rotating about each other often have elliptic orbits. The special case of circular orbits is an example of a rotation around a fixed axis: this axis is the line through the center of mass perpendicular to the plane of motion. The centripetal force is provided by gravity; see also the two-body problem. This usually also applies for a spinning celestial body, so it need not be solid to keep together unless the angular speed is too high in relation to its density. (It will, however, tend to become oblate.)
For example, a spinning celestial body of water must take at least 3 hours and 18 minutes to rotate, regardless of size, or the water will separate[citation needed]. If the density of the fluid is higher, the time can be less. See orbital period.[3]
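The 3-hour-18-minute figure can be checked: equating the centripetal acceleration $\omega^2 R$ at the equator with the surface gravity $G\left(\tfrac{4}{3}\pi R^3 \rho\right)/R^2$ gives $\omega^2 = \tfrac{4}{3}\pi G\rho$, so the minimum period $T = 2\pi/\omega = \sqrt{3\pi/(G\rho)}$ depends only on density, not on size. A quick computation for water:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0       # density of water, kg/m^3

# Minimum rotation period of a self-gravitating fluid sphere; the radius cancels out.
T = math.sqrt(3 * math.pi / (G * rho))   # seconds
hours = int(T // 3600)
minutes = (T % 3600) / 60
print(hours, round(minutes))             # 3 18 -- i.e. 3 hours and 18 minutes
```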
https://www.physicsforums.com/threads/pressure-and-scuba.164894/
# Pressure and scuba

1. Apr 9, 2007

### Rasine

A scuba diver at a depth of 43.0 m below the surface of the sea off the shores of Panama City, where the temperature is 4.0°C, releases an air bubble with volume 17.0 cm³. The bubble rises to the surface where the temperature is 19.0°C. What is the volume of the bubble immediately before it breaks the surface? The specific gravity for seawater is 1.025.

So (p1·v1/T1) = (p2·v2/T2), which gives v2 = v1·(p1/p2)·(T2/T1), with p1 = 101 kPa + ρgh, where ρ is 1.025×10³. What is g here? And I am not sure what units this all should be in. This is what I did:

v2 = 0.17 m³ × (44176 kPa / 101 kPa) × (292.15 K / 277.15 K)

Should the 0.17 m³ be 17 cm³ or what? And what about the kPa, can it stay that way? I am confused.

2. Apr 9, 2007

### Dick

The units in this case are easy. In v2 = v1(p1/p2)(T2/T1), you can see that the units of pressure and temperature simply cancel and the units of v2 are the same as v1. So just put v1 = 17 cm³. BTW, 17 cm³ is NOT the same as 0.17 m³!!!!!!

3. Apr 9, 2007

### Rasine

I still get the wrong answer.

4. Apr 9, 2007

### Dick

And in ρgh, ρ is not a pressure, it's the density of the water. And g is the acceleration of gravity.

5. Apr 9, 2007

### Rasine

Someone please tell me what values I have wrong.

6. Apr 9, 2007

### Dick

44176 kPa is dead wrong. Try doing g·ρ·h again.

7. Apr 9, 2007

### Rasine

OK, so (1.025×10³)(9.8)(43) = 431935 ... is that right?

8. Apr 9, 2007

### Dick

Yes. But remember that it's in Pa, not kPa. Convert to kPa before taking the ratio.

9. Apr 9, 2007

### Rasine

OK, OK, let me try.

10. Apr 9, 2007

### Rasine

So now for the entire equation I am getting 7.66 cm³, but that is wrong.

11. Apr 9, 2007

### Dick

What is the 'entire equation'? The decrease in pressure and the increase in temperature should both cause it to expand.

12. Apr 9, 2007

### Rasine

v2 = v1(p1/p2)(T2/T1)

v2 = 17 cm³ × (431935 kPa / 101 kPa) × (292.15 K / 277.15 K)

and for this I am getting 7.66 cm³.

13. Apr 9, 2007

### Dick

Make that 431.935 kPa (I TOLD you). And how can v2 be less than 17 if both ratios are larger than 1!

14. Apr 9, 2007

### Rasine

OK, so now that I corrected 431935 to 431.935 I get 76.64, which is not right.

15. Apr 9, 2007

### Dick

OK, I'll bite. What is the right answer?

16. Apr 9, 2007

### Rasine

I don't know.

17. Apr 9, 2007

### Rasine

I am putting this into a hw program and it keeps saying that my answer is not right.

18. Apr 9, 2007

### Dick

Ah. You didn't add atmospheric pressure to p1 like you did last time: p1 = 101 kPa + ρgh.

19. Apr 9, 2007

### Rasine

I am putting the values that I get into a hw program online and it keeps saying that it is wrong... and it won't say what the right answer is.

20. Apr 9, 2007

### Rasine

OK, OK, let me try.
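Putting the thread's corrections together (density instead of pressure in post 4, Pa-to-kPa in post 13, atmospheric pressure in post 18), the computation can be sketched as follows; the input numbers are those quoted in the thread:

```python
rho = 1.025e3   # seawater density, kg/m^3 (specific gravity 1.025)
g = 9.8         # acceleration of gravity, m/s^2
h = 43.0        # depth, m

p2 = 101.0e3                 # pressure at the surface, Pa
p1 = p2 + rho * g * h        # pressure at depth; the atmosphere must be included

T1, T2 = 277.15, 292.15      # 4.0 C and 19.0 C, in kelvin
v1 = 17.0                    # cm^3; the pressure and temperature units cancel in the ratios

v2 = v1 * (p1 / p2) * (T2 / T1)
print(round(v2, 1))          # about 94.6 cm^3 -- larger than 17, as both ratios exceed 1
```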
http://math.stackexchange.com/questions/450142/sequence-problem-find-root
# Sequence problem, find root

The equation $x^3-5x+1=0$ has a root in $(0,1)$. Using a proper sequence for which $$|a_{n+1}-a_n|\le c\,|a_n-a_{n-1}|$$ with $0<c<1$, find the root with an approximation of $10^{-4}$.

- Source of the problem? Reason for interest in it? Indication of any effort of your own, beyond that required to copy-paste? – Gerry Myerson Jul 23 '13 at 9:20
- It's from a real analysis book of mine; I read it for fun :D – Plom Jul 23 '13 at 9:27
- And what techniques does the book give you for tackling this kind of problem? – Mark Bennet Jul 23 '13 at 9:27
- It doesn't really have any similar solved problems. In the respective chapter, it only gives the definition and proves how sequences like the above are Cauchy and converge. It also gives an example irrelevant to the problem, and that's all :( – Plom Jul 23 '13 at 9:30
- Notice that $f:x\mapsto x^3-5x+1$ is convex on $[0,1]$. You can construct a suitable sequence $(a_n)$ converging towards the root with the Newton-Raphson method. Some analysis should yield a constant $c$. – zuggg Jul 23 '13 at 9:34

You can attempt this: define a sequence $x_n$ by $$x_{n+1} = \frac 15\left(1 + x_n^3\right).$$ We know that if $x_1 \in (0, 1)$, then $x_n \in (0, 1)$ for all $n$. Next, we show that it satisfies the said condition: \begin{align*} x_{n+1} - x_n & = \frac 15(1 + x_n^3 - 5x_n)\\ & = \frac 15\left(1 + x_n^3 - (1 + x_{n-1}^3)\right)\\ & = \frac 15\left(x_n^3 - x_{n-1}^3\right) \\ & = \frac 15\left(x_n - x_{n-1}\right)\left(x_n^2 + x_nx_{n-1} + x_{n-1}^2\right) \\ \therefore \left|x_{n+1} - x_n\right| & = \frac 15\left|\left(x_n - x_{n-1}\right)\left(x_n^2 + x_nx_{n-1} + x_{n-1}^2\right)\right| \\ & \le \frac 15\left|x_n - x_{n-1}\right| \cdot 3 \\ & = \frac 35\left|x_n - x_{n-1}\right|, \end{align*} where we used $x_n^2 + x_nx_{n-1} + x_{n-1}^2 < 3$ since both terms lie in $(0,1)$. From this, we see that the sequence is Cauchy, hence convergent. Let $x$ be the limit of the sequence.
Take the limit $n \to \infty$ in the first equation (which defines $x_{n+1}$ from $x_n$) to get $$x = \frac 15(1 + x^3).$$ This means $x$ is a root of $f(x) = x^3 - 5x + 1$.

To compute $x$ numerically within a given error tolerance $\epsilon$, we need to find $x_N$ such that $|x_N - x| \le \epsilon$. However, we do not know $x$, so instead, we can use the following criterion to find $N$: for all $n > N$, $|x_n - x_N| < \epsilon$. This criterion is sufficient because if the inequality holds for all $n > N$, it will follow that $\lim_{n \to \infty} |x_n - x_N| = |x - x_N| \le \epsilon$. One convenient way to guarantee $|x_n - x_N| < \epsilon$ for all $n > N$ is via the triangle inequality, together with $\left|x_{k+1} - x_k\right| \le \left(\frac 35\right)^k \left|x_1 - x_0\right| \le \left(\frac 35\right)^k$ (all terms lie in $(0,1)$): $$|x_n - x_N| \le \sum_{k=N}^{n-1} \left|x_{k+1} - x_k\right| \le \sum_{k=N}^\infty \left|x_{k+1} - x_k\right| \le \frac{\left(\frac 35\right)^N}{1 - \frac 35} = \frac 52\left(\frac 35\right)^N.$$ So, if we can find $N$ such that $\frac 52\left(\frac 35\right)^N < \epsilon$, we will get $|x_N - x| \le \epsilon$.

- (+1) nice answer. – Mhenni Benghorbal Jul 23 '13 at 11:10
- @MhenniBenghorbal Thank you :) P.S. I just fixed some typos. – Tunococ Jul 23 '13 at 11:24

Wouldn't a bisection algorithm do something like this: Let $f(x)=x^3-5x+1$. Then we find the root by:

f(0) > 0, f(1) < 0
f(0.5) < 0 (hence, there is a root between 0 and 0.5)
f(0.25) < 0
f(0.125) > 0
f(0.1875), and so on...

The sequence would be $(0.5, 0.25, 0.125, 0.1875, \ldots)$ and $c = 1/2$.

Edit: and the algorithm works as $f$ is continuous :)

- f(x) = x^3 - 5x + 1 ??? – Plom Jul 23 '13 at 9:48
- oh yes, sorry let me edit that – chris Jul 23 '13 at 9:51
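The fixed-point iteration from the first answer is easy to run. A minimal sketch, with a stopping rule based on the geometric-series tail bound derived in that answer (scaled by $1-c$ so that the total remaining error stays below the tolerance):

```python
def fixed_point_root(x=0.5, tol=1e-4, c=3/5):
    """Iterate x_{n+1} = (1 + x_n**3) / 5, a contraction with constant c = 3/5 on (0, 1)."""
    while True:
        x_next = (1 + x ** 3) / 5
        # Tail bound: |x_N - limit| <= |x_{N+1} - x_N| / (1 - c), so stopping when the
        # step is below tol * (1 - c) keeps the total error below tol.
        if abs(x_next - x) < tol * (1 - c):
            return x_next
        x = x_next

root = fixed_point_root()
print(round(root, 4))        # about 0.2016; root**3 - 5*root + 1 is then ~0
```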
https://quantumgeometrydynamics.com/how-qgd-efficiently-does-not-solve-unsolved-problems-in-physics/
# How QGD Efficiently Does Not Solve Unsolved Problems in Physics

To be clear, quantum-geometry dynamics (QGD) does not solve the unsolved problems of current physics theories. Generations of the best minds in science have been working on the problems that arise from our best current theories, and it would be presumptuous to claim to have solved them. These problems are too big and complex for any one person to tackle.

Now, you may ask: what does QGD have to do with these problems? The response is: it efficiently does not solve them.

The idea behind QGD was simple: to develop a theory from a minimal axiom set necessary to describe dynamic systems. QGD was never meant to take on the problems facing dominant theoretical physics. As I explored the consequences of QGD's axiom set I found equations that describe, for example, gravity, the electromagnetic effects, the laws of motion, and the laws that govern optics, all as direct consequences of the chosen axiom set. Because it is based on a different axiom set, none of the theory-specific problems that arise in current theories come up in QGD.

QGD's derived equation for gravity predicts that gravity is not fundamental but the effect of two fundamental forces. It also predicts that beyond a threshold distance $d_\Lambda \approx 10\,\mathrm{Mpc}$, gravity becomes negative. Therefore, QGD's description of gravitationally interacting systems does not require dark energy (see Effects Attributed to Dark Matter and Dark Energy). QGD also reproduces the predictions of our best theories of gravity (see here) and its equations are found to describe a number of physical phenomena.

QGD proposes that there exists only one fundamental material particle, which we call the $\text{preon}^{(+)}$.
QGD predicts that all other particles and their antiparticles are made from $\text{preons}^{(+)}$ and that the difference between a particle and its antiparticle is due to their dynamic structural properties. All particles being made from the same matter, the problem of the matter/antimatter asymmetry does not arise.

QGD proposes that most $\text{preons}^{(+)}$ are free, though they interact too weakly to be detected individually; their mass over large regions of space interacts gravitationally with bounded $\text{preons}^{(+)}$ (visible matter) and produces the effects we attribute to dark matter. Also, what we call magnetic fields are predicted to be polarized $\text{preons}^{(+)}$. So dark matter, far from being an exotic form of matter, is really the most common and most commonly observed form.

According to QGD, time is nothing more than a purely relational concept which allows us to compare events to periodic systems (clocks). Time does not correspond to a physical aspect of reality, and the universe being strictly causal, the problem of the arrow of time does not arise in QGD.

$\text{Preons}^{(+)}$ are also the fundamental unit of mass: the mass of a particle, of a structure, or of a region of space is simply the number of $\text{preons}^{(+)}$ that it contains. All equations describing the evolution of a system need only use this definition of mass. Mass thus being an intrinsic property of matter, no other mechanism is necessary to generate it.

QGD has only two physical constants: the fundamental momentum of the $\text{preon}^{(+)}$ and the units of the two fundamental forces that it predicts exist. All other constants in nature can be derived from them.

The unsolved problems of physics are theory dependent; they are by-products of their theories, but none of these problems emerge from QGD's axioms.
QGD does not resolve the unsolved problems of our current physics theories because it doesn’t need to.
http://encyclopedia.kids.net.au/page/gr/Group_action
# Group action

In mathematics, groups are often used to describe symmetries of objects. This is formalized by the notion of a group action: every element of the group "acts" like a bijective map (or "symmetry") on some set. In this case, the group is also called a transformation group of the set.

### Definition

If G is a group and X is a set, then a (left) group action of G on X is a binary function G × X -> X (where the image of g in G and x in X is written as g.x) which satisfies the following two axioms:

1. g.(h.x) = (gh).x for all g, h in G and x in X.
2. e.x = x for every x in X; here e denotes the identity element of G.

From these two axioms, it follows that for every g in G, the function which maps x in X to g.x is a bijective map from X to X. Therefore, one may alternatively and equivalently define a group action of G on X as a group homomorphism G -> Sym(X), where Sym(X) denotes the group of all bijective maps from X to X.

If a group action G × X -> X is given, we also say that G acts on the set X, or that X is a G-set.

In complete analogy, one can define a right group action of G on X as a function X × G -> X by the two axioms (x.g).h = x.(gh) and x.e = x. In the sequel, we consider only left group actions.

### Examples

- Every group G acts on G in two natural ways: g.x = gx for all x in G, or g.x = gxg⁻¹ for all x in G.
- The symmetric group S_n and its subgroups act on the set {1, ..., n} by permuting its elements.
- The symmetry group of a polyhedron acts on the set of vertices of that polyhedron.
- The symmetry group of any geometrical object acts on the set of points of that object.
- The automorphism group of a vector space (or graph, or group, or ring...) acts on the vector space (or set of vertices of the graph, or group, or ring...).
- The Lie groups GL(n,R), SL(n,R) and O(n,R) act on R^n.
- The Galois group of a field extension E/F acts on the bigger field E.
So does every subgroup of the Galois group.

- The additive group of the real numbers (R, +) acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems): if t is in R and x is in the phase space, then x describes a state of the system, and t.x is defined to be the state of the system t seconds later if t is positive, or −t seconds ago if t is negative.

### Types of actions

The action of G on X is called

- transitive if for any two x, y in X there exists a g in G such that g.x = y;
- regular if for any two x, y in X there exists precisely one g in G such that g.x = y;
- faithful (or effective) if for any two different g, h in G there exists an x in X such that g.x ≠ h.x;
- free if for any two different g, h in G and all x in X we have g.x ≠ h.x.

Every free action on a non-empty set is faithful. A group G that acts faithfully on a set X is isomorphic to a permutation group on X. An action is regular if and only if it is transitive and free.

### Orbits and stabilizers

If we define N = {g in G : g.x = x for all x in X}, then N is a normal subgroup of G and the factor group G/N acts faithfully on X by setting (gN).x = g.x. The action of G on X is faithful if and only if N = {e}.

If Y is a subset of X, we write GY for the set {g.y : y in Y and g in G}. We call the subset Y invariant under G if GY = Y (which is equivalent to GY ⊆ Y). In that case, G also operates on Y. The subset Y is called fixed under G if g.y = y for all g in G and all y in Y. Every subset that's fixed under G is also invariant under G, but not vice versa.

Any action of G on X defines an equivalence relation on X: two elements x and y are called equivalent if there exists a g in G with g.x = y. The equivalence class of x under this equivalence relation is given by the set Gx = {g.x : g in G}, which is also called the orbit of x. The elements x and y are equivalent if and only if their orbits are the same: Gx = Gy.
Every orbit is an invariant subset of X on which G acts transitively. The action of G on X is transitive if and only if all elements are equivalent, meaning that there is only one orbit. The set of all orbits is written as X/G.

For every x in X, we define G_x = {g in G : g.x = x}. This is a subgroup of G, and it is called the stabilizer of x or the isotropy subgroup at x. The action of G on X is free if and only if all stabilizers consist only of the identity element.

There is a natural bijection between the set of all left cosets of the subgroup G_x and the orbit of x, given by hG_x ↦ h.x. Therefore, |Gx| = [G : G_x], and so

$\left|Gx\right|\cdot\left|G_x\right|=\left|G\right|.$

This result, known as the orbit-stabilizer theorem, is especially useful if G and X are finite, because then it can be employed for counting arguments. A related result is Burnside's lemma:

$r\left|G\right|=\sum_{g\in G}\left|X^g\right|,$

where r is the number of orbits and X^g is the set of points fixed by g. This result too is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element.

### Morphisms and isomorphisms between G-sets

If X and Y are two G-sets, we define a morphism from X to Y to be a function f : X -> Y such that f(g.x) = g.f(x) for all g in G and all x in X. If such a function f is bijective, then its inverse is also a morphism, and we call f an isomorphism; the two G-sets X and Y are then called isomorphic, and for all practical purposes they are indistinguishable.

Some example isomorphisms:

- Every regular G action is isomorphic to the action of G on G given by left multiplication.
- Every free G action is isomorphic to G×S, where S is some set and G acts by left multiplication on the first coordinate.
- Every transitive G action is isomorphic to left multiplication by G on the set of left cosets of some subgroup H of G.
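Burnside's lemma is easy to check on a small case. The sketch below (an illustrative example, not from the original article) counts 2-colorings of a necklace of n beads up to rotation, i.e. the orbits of the cyclic group C_n acting on colorings, both by the lemma and by enumerating the orbits directly:

```python
from itertools import product
from math import gcd

def necklace_orbits(n, colors):
    """Burnside count of orbits of the cyclic group C_n acting on colorings of n beads
    by rotation: the rotation by k beads fixes colors**gcd(n, k) colorings."""
    return sum(colors ** gcd(n, k) for k in range(n)) // n

def orbits_by_hand(n, colors):
    """Direct check: enumerate all colorings and group them into orbits under rotation."""
    seen, count = set(), 0
    for c in product(range(colors), repeat=n):
        if c not in seen:
            count += 1
            for k in range(n):               # the orbit of c is its n rotations
                seen.add(c[k:] + c[:k])
    return count

print(necklace_orbits(4, 2), orbits_by_hand(4, 2))   # 6 6
```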
With this notion of morphism, the collection of all G-sets forms a category; this category is a topos.

### Generalizations

One often considers continuous group actions: the group G is a topological group, X is a topological space, and the map G × X → X is continuous with respect to the product topology of G × X. The space X is also called a G-space in this case. This is indeed a generalization, since every group can be considered a topological group by using the discrete topology. All the concepts introduced above still work in this context; however, we define morphisms between G-spaces to be continuous maps compatible with the action of G. The above statements about isomorphisms for regular, free and transitive actions are no longer valid for continuous group actions.

One can also consider actions of monoids on sets, by using the same two axioms as above. This does not define bijective maps and equivalence relations however.

Instead of actions on sets, one can define actions of groups and monoids on objects of an arbitrary category: start with an object X of some category, and then define an action on X as a monoid homomorphism into the monoid of endomorphisms of X. If X has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion.

One can view a group G as a category with a single object in which every morphism is invertible. A group action is then nothing but a functor from G to the category of sets, and a group representation is a functor from G to the category of vector spaces. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category.

All Wikipedia text is available under the terms of the GNU Free Documentation License.
http://www.cims.nyu.edu/~jim/Teaching/MusicMath/multiplication.html
# Music and Math Project

### Multiplication Table

We can also multiply two numbers in the Land of Twelve. To do this, we first need to figure out the multiples of the small numbers; in other words, we need the multiplication table of the Land of Twelve. This is the result:

| $$\times$$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | X | Y | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **1** | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | X | Y | 10 |
| **2** | 2 | 4 | 6 | 8 | X | 10 | 12 | 14 | 16 | 18 | 1X | 20 |
| **3** | 3 | 6 | 9 | 10 | 13 | 16 | 19 | 20 | 23 | 26 | 29 | 30 |
| **4** | 4 | 8 | 10 | 14 | 18 | 20 | 24 | 28 | 30 | 34 | 38 | 40 |
| **5** | 5 | X | 13 | 18 | 21 | 26 | 2Y | 34 | 39 | 42 | 47 | 50 |
| **6** | 6 | 10 | 16 | 20 | 26 | 30 | 36 | 40 | 46 | 50 | 56 | 60 |
| **7** | 7 | 12 | 19 | 24 | 2Y | 36 | 41 | 48 | 53 | 5X | 65 | 70 |
| **8** | 8 | 14 | 20 | 28 | 34 | 40 | 48 | 54 | 60 | 68 | 74 | 80 |
| **9** | 9 | 16 | 23 | 30 | 39 | 46 | 53 | 60 | 69 | 76 | 83 | 90 |
| **X** | X | 18 | 26 | 34 | 42 | 50 | 5X | 68 | 76 | 84 | 92 | X0 |
| **Y** | Y | 1X | 29 | 38 | 47 | 56 | 65 | 74 | 83 | 92 | X1 | Y0 |
| **10** | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | X0 | Y0 | 100 |

### Long multiplication

Now that we know the multiplication table, we can calculate the product of two big numbers, for instance by using long multiplication:

$\begin{array}{cccc} 4 & 0 & 9 &\\ 2 & 5 & \mathrm{X} & \times \\ \hline & & ? \end{array}$

First we do the multiplication by $$\mathrm{X}$$.
After calculating the multiplication table, we know that $$\mathrm{X} \times 9 = 76$$. We write the last digit, the $$6$$, on the last line, and put the $$7$$ in the position for the multiples of twelve above the top row. We get

$\begin{array}{cccc} & 7 & \\ 4 & 0 & 9 &\\ 2 & 5 & \mathrm{X} & \times \\ \hline & & 6 \end{array}$

Then, we multiply $$0 \times \mathrm{X} = 0$$. We add the $$7$$ that we needed to remember, and get $$7$$ on the last row.

$\begin{array}{cccc} & & \\ 4 & 0 & 9 &\\ 2 & 5 & \mathrm{X} & \times \\ \hline & 7 & 6 \end{array}$

Next, $$\mathrm{X} \times 4 = 34$$, so

$\begin{array}{ccccc} & & & & \\ & 4 & 0 & 9 &\\ & 2 & 5 & \mathrm{X} & \times \\ \hline 3 & 4 & 7 & 6 \end{array}$

We now need to do the multiplications by $$5$$. Remember that the $$5$$ stands for $$5$$ multiples of twelve. So we first have to add a $$0$$ on the new line

$\begin{array}{ccccc} & & & & \\ & 4 & 0 & 9 &\\ & 2 & 5 & \mathrm{X} & \times \\ \hline 3 & 4 & 7 & 6\\ & & ? & 0 \end{array}$

Now, we will keep going in the same spirit

$\begin{array}{cccccc} & & & & & \\ & & 4 & 0 & 9 &\\ & & 2 & 5 & \mathrm{X} & \times \\ \hline & 3 & 4 & 7 & 6\\ 1 & 8 & 3 & 9 & 0 \end{array}$

$\begin{array}{cccccc} & & & & & \\ & & 4 & 0 & 9 &\\ & & 2 & 5 & \mathrm{X} & \times \\ \hline & 3 & 4 & 7 & 6\\ 1 & 8 & 3 & 9 & 0\\ 8 & 1 & 6 & 0 & 0 \end{array}$

Finally we add the last lines

$\begin{array}{cccccc} & & & & & \\ & & 4 & 0 & 9 &\\ & & 2 & 5 & \mathrm{X} & \times \\ \hline & 3 & 4 & 7 & 6\\ 1 & 8 & 3 & 9 & 0\\ 8 & 1 & 6 & 0 & 0 & + \\ \hline \mathrm{X} & 1 & 2 & 4 & 6 & \end{array}$

### Exercises

Do the following multiplications:

1. $$4\mathrm{X} \times 5$$
2. $$\mathrm{Y}9 \times 46$$
3. $$63 \times 77$$
4. $$51 \times 82$$
5. $$66 \times \mathrm{Y}3$$
6. $$86 \times 29$$
7. $$740 \times \mathrm{X}48$$
8. $$577 \times 346$$
9. $$462 \times \mathrm{YX}8$$
10. $$923 \times 203$$
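If you want to check your answers, duodecimal arithmetic can be delegated to a computer. The following sketch is our own (not part of the project page); it converts between ordinary integers and base-twelve strings, with X and Y standing for the digits ten and eleven as in the text:

```python
# Our own sketch (not part of the project page): base-twelve arithmetic
# with X and Y standing for the digits ten and eleven, as in the text.
DIGITS = "0123456789XY"

def from_base12(s):
    """Duodecimal string (using X, Y) -> ordinary integer."""
    return int(s.replace("X", "a").replace("Y", "b"), 12)

def to_base12(n):
    """Ordinary non-negative integer -> duodecimal string."""
    if n == 0:
        return "0"
    out = ""
    while n:
        n, d = divmod(n, 12)
        out = DIGITS[d] + out
    return out

# The worked example above: 409 x 25X = X1246 in the Land of Twelve.
assert to_base12(from_base12("409") * from_base12("25X")) == "X1246"
```

Python's built-in `int(s, 12)` already understands base twelve with digits `a` and `b`, so the conversion only has to rename X and Y.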
http://clay6.com/qa/2887/solve-using-matrices-2x-y-3z-5-3x-2y-z-7-4x-5y-5z-9
# Solve, using matrices: $2x-y+3z=5;\; 3x+2y-z=7;\; 4x+5y-5z=9$

Solve the system of equations using the matrix method.

answered Sep 12, 2016
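The page gives no worked solution. As a quick numerical check (ours, using NumPy; not part of the original page), note that the coefficient matrix of this system is singular, so the usual $x = A^{-1}b$ route does not apply; the system is consistent but has infinitely many solutions:

```python
# A quick numerical check (ours, not part of the original page): the
# coefficient matrix of this system is singular, so x = A^(-1) b does
# not apply; the system is consistent with infinitely many solutions.
import numpy as np

A = np.array([[2., -1.,  3.],
              [3.,  2., -1.],
              [4.,  5., -5.]])
b = np.array([5., 7., 9.])

assert abs(np.linalg.det(A)) < 1e-9   # det(A) = 0: no unique solution

# One-parameter family of solutions (found by hand from the first two
# equations): x = (17 - 5t)/7, y = (11t - 1)/7, z = t, for any t.
t = 3.0
sol = np.array([(17 - 5*t) / 7, (11*t - 1) / 7, t])
assert np.allclose(A @ sol, b)
```

The third equation is the combination $2\cdot(\text{eq. 2}) - (\text{eq. 1})$, which is why the determinant vanishes while the system remains consistent.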
http://www.newton.ac.uk/programmes/SIS/seminars/2007082109001.html
# SIS

## Seminar

### Lattice QCD thermodynamics at $\mu = 0$ and $\mu > 0$

Fodor, Z (Wuppertal)

Tuesday 21 August 2007, 09:00-10:00

Seminar Room 1, Newton Institute

#### Abstract

Recent results of lattice thermodynamics will be presented. The nature of the transition, the absolute scale of the transition, the static potential, the equation of state and the phase diagram will be discussed. The analyses used (a) physical quark masses and (b) controlled continuum extrapolations.
http://gradestack.com/Electromagnetic-Field/Electrostatics/Coulomb-s-Law/19368-3931-38425-study-wtw
# Coulomb’s Law

The quantitative expression for the effect of the charge and distance on electric force is given by an experimental law, known as Coulomb’s law.

**Coulomb’s Law.** This law states that the force between two point charges

1. acts along the line joining the two charges,
2. is directly proportional to the product of the two charges, and
3. is inversely proportional to the square of the distance between the charges.
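The three statements above can be put in a small computational sketch; this is our own illustration (the function name and SI-unit convention are our choices, not the page's):

```python
# Minimal sketch of the law stated above (our own illustration; the
# function name and SI-unit convention are our choices, not the page's).
K = 8.9875517923e9  # Coulomb constant k = 1/(4*pi*eps0), in N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the force between point charges q1, q2 (C) at distance r (m)."""
    return K * q1 * q2 / r**2

# Inverse-square law: doubling the distance quarters the force.
f1 = coulomb_force(1e-6, 1e-6, 0.1)
f2 = coulomb_force(1e-6, 1e-6, 0.2)
assert abs(f1 / f2 - 4.0) < 1e-9
```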
https://mathoverflow.net/questions/63849/sobolev-imbedding-on-riemannian-manifolds
# Sobolev imbedding on Riemannian manifolds

Let $(M, g)$ be a non-compact smooth Riemannian manifold of dimension $n \ge 2$, and $G$ a subgroup of the isometry group of $(M,g)$, say with $G$ contained in the component of the identity. Let $W^{1,2}_{G}(M)=\{f \in W^{1,2}(M) \mid f\circ \phi = f \quad \forall \phi \in G\}$. Is there any known result concerning the compactness of the Sobolev imbedding $W^{1,2}_{G}(M) \hookrightarrow L^p(M)$ for some subgroup $G$? - What do you mean by $\textrm{dim}\,G<\infty$? As I recall, the isometry group of a Riemannian manifold is finite dimensional. – Somnath Basu May 3 '11 at 21:09 Sorry, it's a mistake. What I meant is: $G$ contained in the component of the identity. – Mercy King May 3 '11 at 21:29 You should not take the same letter for the metric and for elements of the group. – Denis Serre May 4 '11 at 11:59 There is an old note by Naceur Achtaich (circa 1988) when $M\subset\mathbb R^3$ has a rotational symmetry about the $z$-axis. Very localised, but at least an example. – Denis Serre May 4 '11 at 12:02 Sorry for posting this as an Answer, but I can't comment yet. As far as I know (judging from Emmanuel Hebey's work on this subject), there are no generalised results on Sobolev embeddings on non-compact Riemannian manifolds unless they are complete. - Hi, you need an additional geometric condition for the general case, but considering the case of $\mathbb{R}^n$ with $G=SO(n)$, i.e. $H^1_ {radial}$, you have a compact injection. You will find all the details in chapter 9 of the excellent book of Hebey, Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequalities. - I'm quite new in this field, but in the new book by Hebey [Sobolev spaces on Riemannian manifolds] he only considers the case when $M$ is compact. Anyhow, I'll have a look at the reference you've just mentioned. – Mercy King May 4 '11 at 9:54 I gave you the precise reference, and it is also about the non-compact case! You have just to read... – Raphael May 4 '11 at 13:36
https://www.lmfdb.org/L/1/23%5E2/529.233/r0-0
## Results (1-50 of 242 matches)

Each entry below is the $L$-function of a Dirichlet character $\chi_{529}(k,\cdot)$ of modulus $N = 23^{2} = 529$. All 50 entries share $\alpha = A = 2.45$, degree $d = 1$, weight $w = 0$, $\mu = 0$, and analytic rank $r = 0$; the columns that vary are the label, $\operatorname{Arg}(\epsilon)$, and the first zero.

| Label | $\operatorname{Arg}(\epsilon)$ | First zero |
|---|---|---|
| 1-23e2-529.100-r0-0-0 | 0.405 | 0.638277 |
| 1-23e2-529.101-r0-0-0 | 0.301 | 1.43530 |
| 1-23e2-529.104-r0-0-0 | 0.244 | 1.15657 |
| 1-23e2-529.105-r0-0-0 | 0.335 | 1.62183 |
| 1-23e2-529.108-r0-0-0 | 0.315 | 2.20841 |
| 1-23e2-529.110-r0-0-0 | -0.301 | 0.761481 |
| 1-23e2-529.116-r0-0-0 | 0.497 | 1.54708 |
| 1-23e2-529.117-r0-0-0 | -0.244 | 0.818755 |
| 1-23e2-529.119-r0-0-0 | 0.336 | 1.75921 |
| 1-23e2-529.12-r0-0-0 | 0.230 | 1.52192 |
| 1-23e2-529.121-r0-0-0 | -0.234 | 0.944176 |
| 1-23e2-529.123-r0-0-0 | 0.140 | 0.668405 |
| 1-23e2-529.124-r0-0-0 | 0.0496 | 1.56042 |
| 1-23e2-529.127-r0-0-0 | 0.224 | 1.48109 |
| 1-23e2-529.128-r0-0-0 | 0.0639 | 0.739338 |
| 1-23e2-529.13-r0-0-0 | 0.425 | 2.34887 |
| 1-23e2-529.131-r0-0-0 | -0.335 | 0.0875690 |
| 1-23e2-529.133-r0-0-0 | 0.277 | 1.52361 |
| 1-23e2-529.139-r0-0-0 | -0.368 | 0.822844 |
| 1-23e2-529.140-r0-0-0 | 0.0379 | 1.36793 |
| 1-23e2-529.141-r0-0-0 | 0.152 | 0.501732 |
| 1-23e2-529.142-r0-0-0 | -0.352 | 0.400016 |
| 1-23e2-529.144-r0-0-0 | 0.313 | 1.68730 |
| 1-23e2-529.146-r0-0-0 | -0.413 | 0.627104 |
| 1-23e2-529.147-r0-0-0 | -0.139 | 0.631417 |
| 1-23e2-529.150-r0-0-0 | -0.230 | 0.282679 |
| 1-23e2-529.151-r0-0-0 | 0.00189 | 1.58138 |
| 1-23e2-529.154-r0-0-0 | -0.404 | 0.0638128 |
| 1-23e2-529.156-r0-0-0 | -0.300 | 0.0890535 |
| 1-23e2-529.16-r0-0-0 | 0.209 | 0.793459 |
| 1-23e2-529.162-r0-0-0 | -0.321 | 1.58603 |
| 1-23e2-529.163-r0-0-0 | 0.443 | 0.938466 |
| 1-23e2-529.164-r0-0-0 | -0.405 | 2.47083 |
| 1-23e2-529.165-r0-0-0 | -0.210 | 0.603754 |
| 1-23e2-529.167-r0-0-0 | -0.267 | 0.424404 |
| 1-23e2-529.169-r0-0-0 | 0.0486 | 1.04657 |
| 1-23e2-529.173-r0-0-0 | -0.143 | 0.883015 |
| 1-23e2-529.174-r0-0-0 | 0.406 | 0.248592 |
| 1-23e2-529.179-r0-0-0 | 0.00189 | 1.33965 |
| 1-23e2-529.18-r0-0-0 | 0.139 | 1.10201 |
| 1-23e2-529.185-r0-0-0 | 0.334 | 0.454320 |
| 1-23e2-529.186-r0-0-0 | 0.231 | 1.96699 |
| 1-23e2-529.187-r0-0-0 | -0.0634 | 1.77779 |
| 1-23e2-529.188-r0-0-0 | 0.234 | 1.79666 |
| 1-23e2-529.190-r0-0-0 | 0.352 | 1.78150 |
| 1-23e2-529.192-r0-0-0 | -0.117 | 0.668270 |
| 1-23e2-529.193-r0-0-0 | -0.133 | 0.757402 |
| 1-23e2-529.196-r0-0-0 | 0.00189 | 1.20955 |
| 1-23e2-529.197-r0-0-0 | -0.375 | 1.91998 |
| 1-23e2-529.2-r0-0-0 | 0.449 | 2.02441 |
https://tswanson.net/tag/surface-water-groundwater-interactions/
## Poster presentation: Evaluating heat tracing models in a pool-riffle-pool sequence, GSA Portland 2009

A pool-riffle-pool sequence in streambed morphology is thought to drive hyporheic downwelling near the head of the riffle and upwelling at the tail of the riffle and head of the lower pool. Heat tracing is a potentially useful method to characterize these hyporheic flow paths. A pool-riffle-pool sequence within Jaramillo Creek, Valles Caldera National Preserve, New Mexico was instrumented with a two-dimensional vertical array of thermistors during the summers of 2008 and 2009. Three one-dimensional analytical heat transport models (Hatch et al. 2006, Keery et al. 2007, and Schmidt et al. 2007) were used to individually interpret sections of the pool-riffle-pool sequence to quantify vertical fluid fluxes. The modeled fluxes were then compared to values obtained from vertical hydraulic gradient and hydraulic conductivity measurements. The fluxes estimated by the heat tracing methods exhibit a trend that partly follows the conceptual model of a pool-riffle-pool sequence. The directly calculated fluxes mostly agree with the heat-tracing-based estimates. The deviation in flux distribution from the conceptual "downwelling-upwelling" model is partly due to the dominantly losing conditions at the study site. Moreover, varying assumptions concerning boundary conditions and physical properties of the streambed that are intrinsic to the analytical models produce somewhat inconsistent results between methods. Careful selection of a model for heat tracing is vital to obtaining accurate fluid flux estimates.
https://en.wikiversity.org/wiki/Mathematical_Methods_in_Physics/Introduction_to_2nd_order_differential_equations
# Mathematical Methods in Physics/Introduction to 2nd order differential equations

Resource type: this resource contains a lecture or lecture notes.
Completion status: this resource is ~75% complete.

## Introduction

What are differential equations? Why are they so important in physics? The answer to these questions will become more apparent as the course goes on, but to provide motivation, for now we will say that a differential equation is an equation in which derivatives of a function appear (we will give a more formal definition in the following section), and from which we would like to determine that function. Finding a function such that the differential equation is satisfied is known as finding a solution to the differential equation.

Why should physical scientists study differential equations? The answer is clear to any student who has taken a moderately advanced physics course: the basic laws of nature can be expressed in the language of differential equations, both ordinary and partial. As canonical examples, we consider the equation of the harmonic oscillator (ordinary),

${\displaystyle {\frac {d^{2}x}{dt^{2}}}=-\omega ^{2}x}$

the wave equation (partial),

${\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}=c^{2}\left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)}$

the equation of an RLC circuit (ordinary),

${\displaystyle L{\frac {d^{2}I}{dt^{2}}}+R{\frac {dI}{dt}}+{\frac {I}{C}}={\frac {dV}{dt}}}$

and finally, Laguerre's equation (ordinary),

${\displaystyle x{\frac {d^{2}y}{dx^{2}}}+(1-x){\frac {dy}{dx}}+ny=0}$

an equation that shows up in quantum mechanics.
There are many alternative notations for the derivative; we may use primes (Lagrange's notation) (${\displaystyle y'}$, ${\displaystyle y''}$, etc.), numbers enclosed within parentheses (${\displaystyle y^{(1)}}$, ${\displaystyle y^{(5)}}$, etc.), Leibniz's notation (${\displaystyle {\frac {d^{3}y}{dx^{3}}}}$), or Newton's dot notation for derivatives with respect to time (${\displaystyle {\frac {dx}{dt}}={\dot {x}},{\frac {d^{2}x}{dt^{2}}}={\ddot {x}}}$). In what follows, we will try to use consistent notation, but the reader should be aware that notation is mostly a matter of preference and one notation is as good as any other.

## Basic definitions

A differential equation is an equation that relates a function with its derivatives. Given a function ${\displaystyle f}$, independent variable ${\displaystyle x}$ and dependent variable ${\displaystyle y}$, an (ordinary) differential equation's most general expression is

${\displaystyle f\left(x,y,y',y'',\ldots ,y^{(n)}\right)=0}$

A solution to this differential equation is a function ${\displaystyle y=g(x)}$ such that

${\displaystyle f\left(x,g(x),g'(x),g''(x),\ldots ,g^{(n)}(x)\right)=0}$

We say that a differential equation is of order ${\displaystyle n}$ if the highest derivative that appears in the differential equation is the ${\displaystyle n}$-th derivative. An autonomous differential equation is one where there is no explicit dependence on the independent variable ${\displaystyle x}$:

${\displaystyle f\left(y,y',y'',\ldots ,y^{(n)}\right)=0}$

A linear ordinary differential equation only involves the dependent variable and its derivatives in a linear fashion (each multiplied by a function of ${\displaystyle x}$, which may or may not be constant).
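Checking that a candidate function is a solution amounts to substituting it into the equation and verifying that the result vanishes identically. A small illustration (ours, assuming SymPy is available; not part of the notes):

```python
# Illustration (ours, assuming SymPy is available; not part of the notes):
# checking that y = sin(omega*t) solves the harmonic-oscillator equation
# y'' = -omega^2 * y from the introduction.
import sympy as sp

t, omega = sp.symbols("t omega")
y = sp.sin(omega * t)
residual = sp.diff(y, t, 2) + omega**2 * y  # should vanish identically

assert sp.simplify(residual) == 0
```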
For example,

${\displaystyle \cos(y')+\sin x+y=0}$

${\displaystyle e^{y}+y'^{2}+3x=0}$

are examples of nonlinear differential equations, whereas

${\displaystyle 3y''+2y'-y=\cos x}$

${\displaystyle y^{(4)}-5y''+y=x^{2}}$

${\displaystyle x^{2}y''-3xy'+y=\sin x}$

are linear differential equations. We say that a linear differential equation is homogeneous if any term(s) involving solely the independent variable ${\displaystyle x}$ vanish identically. Thus,

${\displaystyle 3y''+y'-5y=0}$

is homogeneous, whereas

${\displaystyle y'''-5y''+y'+2y=x^{2}+\cos x}$

is inhomogeneous (or nonhomogeneous), due to the ${\displaystyle x^{2}+\cos x}$ term that depends solely on ${\displaystyle x}$. It is customary, but by no means necessary, to move all the inhomogeneous terms to the right-hand side of the differential equation; this practice clearly distinguishes the inhomogeneous terms and makes some solution methods easier to implement.

## Linear ordinary differential equations

We focus now on linear ordinary differential equations, as these appear pervasively in the physical sciences, in particular those of second order. A linear ordinary differential equation is an equation of the form

${\displaystyle a_{n}(x)y^{(n)}+a_{n-1}(x)y^{(n-1)}+\ldots +a_{1}(x)y'+a_{0}(x)y=f(x)}$

As we have seen before, if ${\displaystyle f(x)\neq 0}$ the equation is nonhomogeneous (or inhomogeneous), and if all the coefficients ${\displaystyle a_{i}(x)}$ are constants rather than functions of ${\displaystyle x}$, we say that the equation has constant coefficients.

### Linear dependence of functions

#### Vectors

From linear algebra, we intuitively know what it means for two vectors to be linearly independent.
The vectors ${\displaystyle \mathbf {u} =-3\mathbf {i} +5\mathbf {j} }$ and ${\displaystyle \mathbf {v} =-6\mathbf {i} +10\mathbf {j} }$ are linearly dependent because ${\displaystyle \mathbf {u} }$ can be expressed as a linear combination of ${\displaystyle \mathbf {v} }$, or vice versa: ${\displaystyle \mathbf {v} =2\mathbf {u} }$ or, equivalently, ${\displaystyle \mathbf {u} ={\frac {1}{2}}\mathbf {v} }$.

More formally, given the set of vectors ${\displaystyle S=\{\mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{n}\}}$, we say that these vectors are linearly dependent if the equation

${\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k}=\mathbf {0} ,}$

has a nontrivial (nonzero) solution in the scalar coefficients ${\displaystyle a_{i}}$ (${\displaystyle a_{1}}$, ${\displaystyle a_{2}}$, etc.), that is to say, that at least one of the coefficients does not vanish, and where ${\displaystyle k\leq n}$. If, for example, ${\displaystyle a_{1}\neq 0}$, then

${\displaystyle \mathbf {v} _{1}=-{\frac {a_{2}}{a_{1}}}\mathbf {v} _{2}-\cdots -{\frac {a_{k}}{a_{1}}}\mathbf {v} _{k},}$

and we can see that ${\displaystyle \mathbf {v} _{1}}$ is a linear combination of the rest of the vectors. This means that the vectors of the set ${\displaystyle S=\{\mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{n}\}}$ are linearly independent if the equation

${\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{n}\mathbf {v} _{n}=\mathbf {0} ,}$

can only be satisfied if the scalar coefficients ${\displaystyle a_{i}}$ are all ${\displaystyle 0}$.

#### Functions

We can now extend our definition of linear independence to functions.
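Before doing so, the vector example above can be checked numerically; this is our own illustration (assuming NumPy is available), not part of the notes:

```python
# Our own numerical check (assuming NumPy, not part of the notes): the
# vectors u = -3i + 5j and v = -6i + 10j are linearly dependent, which
# shows up as a rank-deficient matrix of column vectors.
import numpy as np

u = np.array([-3.0, 5.0])
v = np.array([-6.0, 10.0])
M = np.column_stack([u, v])

assert np.linalg.matrix_rank(M) == 1  # rank 1 < 2 vectors: dependent
assert np.allclose(v, 2 * u)          # explicitly, v = 2u
```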
We say that the functions ${\displaystyle g_{1}(x)}$, ${\displaystyle g_{2}(x),\ldots ,g_{n}(x)}$ are linearly independent in an interval ${\displaystyle I}$ if the equation

${\displaystyle a_{1}g_{1}(x)+a_{2}g_{2}(x)+\cdots +a_{n}g_{n}(x)=0}$

can only be satisfied if all the coefficients ${\displaystyle a_{i}}$ are vanishing, for all ${\displaystyle x}$ in the interval ${\displaystyle I}$. If the equation can be satisfied without all the coefficients being ${\displaystyle 0}$, as before, we say that the functions are linearly dependent.

We now define the Wronskian of the ${\displaystyle n-1}$ times differentiable functions ${\displaystyle g_{1}(x)}$, ${\displaystyle g_{2}(x),\ldots ,g_{n}(x)}$:

${\displaystyle W(x)={\begin{vmatrix}g_{1}(x)&g_{2}(x)&\cdots &g_{n}(x)\\g'_{1}(x)&g'_{2}(x)&\cdots &g'_{n}(x)\\g''_{1}(x)&g''_{2}(x)&\cdots &g''_{n}(x)\\\vdots &\vdots &\ddots &\vdots \\g_{1}^{(n-1)}(x)&g_{2}^{(n-1)}(x)&\cdots &g_{n}^{(n-1)}(x)\\\end{vmatrix}}}$

This functional determinant is important for studying the linear independence of a given set of functions. We will make this more explicit in the next section.

### Theorems for linear differential equations

#### Principle of superposition

If ${\displaystyle y_{1}}$ and ${\displaystyle y_{2}}$ are two solutions of a linear homogeneous ordinary differential equation, then so is ${\displaystyle ay_{1}+by_{2}}$, where ${\displaystyle a}$ and ${\displaystyle b}$ are any two real numbers.

#### A theorem for complex solutions

If ${\displaystyle y(x)=u(x)+iv(x)}$ is a complex solution to a linear homogeneous differential equation with continuous coefficients, then ${\displaystyle u(x)}$ and ${\displaystyle v(x)}$ are also solutions to the differential equation.

#### Number of general solutions for a linear homogeneous differential equation

The maximum number of linearly independent solutions to a linear homogeneous differential equation is equal to its order.
#### Linear independence and the Wronskian

We now make use of the Wronskian determinant (defined earlier) to give a sufficient, but not necessary, condition for the linear independence of the $n-1$ times differentiable functions $g_1(x), g_2(x), \ldots, g_n(x)$: if the Wronskian of these functions does not vanish identically over an open interval $I$, then the functions are linearly independent on $I$. That is,

$$W \not\equiv 0 \;\Rightarrow\; \text{linearly independent functions}.$$

It is important to note that this is a sufficient but not a necessary condition: it is *not* true that if the Wronskian vanishes, then the functions are linearly dependent. For example, the functions $x$, $x^2$ and $x^3$ are linearly independent on any closed interval of the reals, as their Wronskian doesn't vanish identically (for all $x$) on any such interval. However, if we consider the functions $|x^3|$ and $x^3$ on the interval $I = (-1, 1)$, we can see that $W = 0$ for all $x$ in $I$, and yet these functions are not linearly dependent on the whole interval $I$.

If we solve a linear homogeneous differential equation for the $n$-th derivative, we have

$$y^{(n)} = -p_1(x)\, y^{(n-1)}(x) - \cdots - p_{n-1}(x)\, y'(x) - p_n(x)\, y(x).$$

The following equality (Abel's identity) then holds:

$$W(x) = W(x_0)\, e^{-\int_{x_0}^{x} p_1(t)\, dt},$$

where $x_0$ is any point belonging to a closed interval $[a, b]$ on which the coefficients of the differential equation are continuous.
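As a worked illustration (an example added here, not in the original text), take the equation $y'' + y = 0$, for which $p_1(x) = 0$, with solutions $y_1 = \cos x$ and $y_2 = \sin x$:

$$W(x) = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1.$$

The Wronskian never vanishes, so the criterion confirms that $\cos x$ and $\sin x$ are linearly independent. Moreover, since $p_1 = 0$, the identity for $W(x)$ above predicts $W(x) = W(x_0)\, e^{0} = W(x_0)$, a constant, which is exactly what we found.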
### Second-order ordinary linear differential equations

We now turn to arguably the most important topic of this part of the course. A second-order ordinary linear differential equation is an equation of the form

$$a_2(x)\, y''(x) + a_1(x)\, y'(x) + a_0(x)\, y(x) = f(x).$$

Why are these equations so important in the physical sciences? There are at least three reasons.

First, on many occasions Newton's second law, when applied to a specific system, yields such an equation. Canonical examples of this include the damped, driven oscillator,

$$\frac{d^2 x}{dt^2} + 2\zeta\omega_0 \frac{dx}{dt} + \omega_0^2 x = \frac{F(t)}{m},$$

and a particle under uniform gravitational acceleration,

$$\frac{d^2 x}{dt^2} = -g.$$

Secondly, when applying certain methods of solution to linear partial differential equations, we obtain these sorts of second-order linear ordinary differential equations as intermediate steps. An example is the aforementioned Laguerre equation. Another example is the Cauchy-Euler equation,

$$a_n x^n y^{(n)}(x) + a_{n-1} x^{n-1} y^{(n-1)}(x) + \cdots + a_0 y(x) = 0,$$

where all the $a_i$ terms are constants.

Lastly, the importance of linear equations lies in the fact that, most of the time, a nonlinear equation can be approximated by a linear one in the vicinity of a specific point (called the equilibrium point).
For example, the equation that governs the dynamics of a pendulum can be written as

$$\frac{d^2\theta}{dt^2} + \frac{g}{\ell}\sin\theta = 0.$$

If $\theta = 0$ is taken as the equilibrium point, we expand $\sin\theta$ using its Taylor series,

$$\sin\theta = \theta - \frac{\theta^3}{3!} + \cdots,$$

and if all terms except the first are considered negligible ($\theta \ll 1$), then the equation of the pendulum becomes

$$\frac{d^2\theta}{dt^2} + \frac{g}{\ell}\theta = 0,$$

which is linear. It should be noted that the solution obtained from this linear equation is therefore only valid under the hypothesis on which the linearization was based in the first place, namely $\theta \ll 1$.
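For completeness (a step added here, not in the original), the linearized equation is the simple-harmonic-oscillator equation and can be solved in closed form. With $\omega = \sqrt{g/\ell}$, initial angle $\theta_0$ and initial angular velocity $\dot\theta_0$,

$$\theta(t) = \theta_0 \cos(\omega t) + \frac{\dot\theta_0}{\omega} \sin(\omega t),$$

so small oscillations have period $T = 2\pi\sqrt{\ell/g}$, independent of the amplitude $\theta_0$ (a conclusion that itself holds only in the $\theta \ll 1$ regime).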
https://hpmuseum.org/forum/showthread.php?mode=threaded&tid=1217&pid=10571
[WP34s] Regularized incomplete Beta function 05-04-2014, 05:26 PM (This post was last modified: 05-04-2014 05:33 PM by Dieter.) Post: #21 Dieter Senior Member Posts: 2,397 Joined: Dec 2013 RE: [WP34s] Regularized incomplete Beta function (05-04-2014 06:29 AM)Paul Dale Wrote:  I can only agree with you here. I'm using a fairly generic Newton based solver for all the quantile functions except the normal QF. I know it can be very slow, that is the price for having a single general solver. Even a simple Newton solver should converge quadratically. I think the slow execution of the current 34s quantile implementations is due to an error in the initial guess for the solver. It should be within approx. 10% of the true quantile. So even a Newton solver should not require much more than five iterations, which means 6 - 10 s in user code. Quote:The 34S certainly doesn't have space for a C implementation of these. I originally implemented all the distribution functions in C but had to rewrite them as keystroke programs for space reasons. Adding additional custom solvers is likewise going to consume precious bytes. Thus, the 34S is unlikely to see any distribution speed ups. However, the 31S quite possibly will. Great. And many things are easier to do as there is no DP mode. ;-) Quote:I'd like to rewrite the distribution code there in native C -- I can't just drop in the original distribution code, we moved on algorithmically. So, yes I'd love to hear your thoughts on improving these functions. I recently tried some improvements, especially with the solver. The following ideas refer to the Student distribution, but they should be applicable for the Chi² and Fisher cases as well. The following thoughts assume the quantile is ≥ 0 and p is the upper tail probability (i.e. p ≤ 0.5). As usual, the rest is done with a simple transformation, e.g. QF(1–p) = –QF(p). So, what can be done? 1. Use a Halley solver. 
It converges very fast and it can be implemented easily since the second derivative is a function of the first (i.e. the pdf).

Student: f''(x) = -(n+1)x / (n+x^2) * pdf
Chi²: f''(x) = (n-x-2)/(2x) * pdf
Fisher: f''(x) = -(n(m(x-1)+2) + 2mx) / (2x(mx+n)) * pdf

where n (resp. m and n) are the degrees of freedom. So f''(x) simply is the pdf times a factor. This way a Halley iteration e.g. for the Student case can be written as follows:

r := (StudentUpperCDF(x) - p) / StudentPDF
d := r / ( r * (n+1)x / 2(n+x^2) - 1 )
x := x - d

Due to the roughly cubic convergence, the iteration usually may quit as soon as |d| < 1E–10*x for 30+ digit accuracy. In my 34s program I tried a conservative CNVG 00 (i.e. rel. error < 1E–14) which usually returns results that are as good as a user code program gets when running in DP mode (approx. 30-34 digits). The same idea can be used with the Chi² and Fisher quantiles.

2. I slightly modified the initial guess for the Chi² quantile and I tried a new approach in the Student case, based on a 1992 paper on bounds of various quantiles by Fujikoshi and Mukaihata. The idea is a simple transformation of the normal quantile z:

t = sqrt(n * (e^(a*z^2/n) - 1))

where a is close to 1 and a function of n (or simply 1, so that it can be omitted). I used a = 1 + 1/(e*n), which works very well with the normal estimate I used (simply the one in the normal quantile function). This works fine for the center but less so for the distribution tails. For all p < 12^(-n) I use a slight modification of the tail approximation suggested years ago:

u = 2 * p * n * sqrt(pi / (2 * n - 0.75))
t = sqrt(n) / u ^ (1 / n)

The 0.75 originally was a 1, but changing this value improves the results for low n. Although the Student estimate originally was intended for n ≥ 3, it also works well for n = 2 or even n = 1. Usually it converges within three iterations, here and there in maybe four. This means that the code for n=1 and n=2 (direct solutions) may be omitted.

For x close to 0 (i.e.
p near 0.5) the expression t_u(x) – p loses accuracy due to digit cancellation. So I used the same idea as in the Normal quantile routine and had this value calculated differently for small t, using the incomplete Beta function. Yes, that's why I found the bug discussed in this thread. ;-) Code: LBL 'TQF'  ' T Quantile Function ENTER      ' probability in X, degrees of freedom in register J +/- INC X MIN CF 00 x!=? L SF 00       ' set flag 00 if p > 0.5 CF 01       ' clear error flag STO 00      ' save p in R00 # 005 STO 01      ' do not more than 5 iterations # 012 RCL J +/- y^x RCL 00 x>? Y GTO 00 RCL J       ' estimate for small p STO+ X × # pi RCL L # 1/2 x² RCL+ L    ' = 0.75 - / sqrt × RCL J xrooty RCL J sqrt x<> Y / GTO 01 LBL 00     ' estimate for low and moderate t XEQ 'GNQ'  ' get guess for the normal quantile x² # eE RCL× J 1/x INC X × RCL/ J e^x-1 RCL× J sqrt LBL 01     ' iteration starts here FILL # 1/2 x>? Y GTO 02 DROP t_u(x) RCL- 00 GTO 03 LBL 02 DROP x² ENTER RCL+ J / # 1/2 RCL× J RCL L IBeta RCL× L +/- # 1/2 RCL- 00 +          ' cdf(t) - p = (0,5-p) - 1/2 IBeta(x=t^2/(n+t^2), a=1/2, b=n/2) LBL 03 RCL T t_p(x) / ENTER RCL× T RCL J INC X × RCL T x² RCL+ J STO+ X / DEC X / - CNVG? 00 SKIP 003 DSE 01 GTO 01 SF 01       ' Raise error flag if no convergence after 5 iterations FS?C 00 +/-         ' adjust sign FS?C 01 ERR 20      ' if error, display "no root found" and exit with last approximation END LBL 'GNQ'   ' input: p =< 0.5 # 232       ' output: Normal estimate > 0 SDR 003 x<>Y x>? Y GTO 00 FILL        'Normal estimate for p up to 0.232 LN STO+ X +/- ENTER DEC X # pi × STO+ X sqrt RCL× T LN STO+ X +/- sqrt x<>Y # 004 × 1/x + RTN LBL 00   ' Normal estimate for p close to the center +/- # 1/2 + # pi STO+ X sqrt × ENTER x³ # 006 / + RTN END For best accuracy this should run in DP mode. The program exits if the last two approximations agree in approx. 14 digits. At this point the result usually carries 30+ valid digits. 
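For readers without a 34s at hand, the Halley step quoted above can be sketched in ordinary Python. This is an illustration added here, not part of the original program; it restricts itself to n = 1 (the Cauchy case), where the upper-tail CDF has the closed form 1/2 − arctan(x)/π, so no incomplete-Beta code is needed:

```python
import math

def upper_cdf(x):
    # Student-t upper-tail probability P(T > x) for n = 1 (Cauchy)
    return 0.5 - math.atan(x) / math.pi

def pdf(x):
    # Student-t density for n = 1
    return 1.0 / (math.pi * (1.0 + x * x))

def tail_estimate(p, n=1):
    # initial guess: the tail approximation quoted above
    u = 2.0 * p * n * math.sqrt(math.pi / (2.0 * n - 0.75))
    return math.sqrt(n) / u ** (1.0 / n)

def t_quantile(p, n=1, max_iter=8):
    # Halley iteration, exactly as in the post:
    #   r = (upper_cdf(x) - p) / pdf(x)
    #   d = r / (r*(n+1)*x / (2*(n + x^2)) - 1)
    #   x = x - d
    x = tail_estimate(p, n)
    for _ in range(max_iter):
        r = (upper_cdf(x) - p) / pdf(x)
        d = r / (r * (n + 1) * x / (2.0 * (n + x * x)) - 1.0)
        x -= d
        if abs(d) < 1e-14 * abs(x):  # cf. the CNVG 00 stop criterion
            break
    return x

# n = 1, p = 0.05: the exact quantile is tan(0.45*pi) = 6.31375151467504...
```

With p = 0.05 this reproduces the 6.3137515147... value from the examples below to double precision in a handful of Halley steps. Note one simplification: the tail estimate is used unconditionally here, whereas the program above switches between the tail and center estimates at p = 12^(-n).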
Here are some examples: Code: 10 STO J    0,1 XEQ"TQF" => -1,372183641110335627219156967662554 in 3,2 s    exact result:   -1,37218364111033562721915696766255392  1E-20 XEQ"TQF" => -256,4346993185261855315362349874343 in 3,1 s    exact result:   -256.434699318526185531536234987434334  0,5 ENTER 1E-16 -        XEQ"TQF" => -2,569978034930492409497513483729480 E-16 in 1,9 s    exact result:   -2,56997803493049240949751348372947856 E-16  1 STO J  0,05  XEQ"TQF" => -6,313751514675043098979464244768186 in 3 s    exact result:   -6,3137515146750430989794642447681860594  1E-10 XEQ"TQF" => -3183098861,837906715272955512330630 in 1,9 s    exact result:   -3183098861.837906715272955512330627466 100 STO J  0,025 XEQ"TQF" => -1,983971518523552286595184867990389 in 5,1 s    exact result:   -1,983971518523552286595184867990339165 Please note that the results for n=1 are exact to 33 resp. 34 digits while the current implementation (that calculates the result directly) gets only 32 resp. 24 (!) digits right. Either the internal tangent function is not that accurate or the current implementation does not evaluate the quantile as 1/tan(1E–10*180°) which would yield a nearly perfect result. 
;-)

Dieter
https://www.tutorialspoint.com/maximum-value-of-arr-arr-plus-i-j-in-cplusplus
# Maximum value of |arr[i] – arr[j]| + |i – j| in C++

In this problem, we are given an array of n integers. Our task is to create a program that finds the maximum value of |arr[i] - arr[j]| + |i - j|.

Let's take an example to understand the problem.

Input − array = {4, 1, 2}

Output − 4

Explanation

|arr[0] - arr[1]| + |0-1| = |4-1| + |-1| = 3+1 = 4
|arr[0] - arr[2]| + |0-2| = |4-2| + |-2| = 2+2 = 4
|arr[1] - arr[2]| + |1-2| = |1-2| + |1-2| = 1+1 = 2

A simple approach to solve this problem is brute force: use two loops and take the maximum over all pairs, which costs O(n²). A more efficient approach uses the properties of the absolute value. Let's decode the equation: expanding |arr[i] - arr[j]| + |i - j| over all sign combinations gives four expressions,

arr[i] - arr[j] + i - j = (arr[i] + i) - (arr[j] + j)
arr[i] - arr[j] - i + j = (arr[i] - i) - (arr[j] - j)
-arr[i] + arr[j] + i - j = -{(arr[i] - i) - (arr[j] - j)}
-arr[i] + arr[j] - i + j = -{(arr[i] + i) - (arr[j] + j)}

The first and fourth expressions are negatives of each other, and so are the second and third. So the answer only depends on the quantities arr[i] + i and arr[i] - i: conceptually, array1 stores the values arr[i] + i and array2 stores the values arr[i] - i, and the answer is

max( (max(array1) - min(array1)), (max(array2) - min(array2)) )

which can be computed in a single O(n) pass.

## Example

Program to show the implementation of this efficient solution:

```cpp
#include <iostream>
#include <climits>
#include <algorithm>
using namespace std;

int maxDiff(int arr[], int n) {
   // track the extremes of arr[i] + i and arr[i] - i in one pass
   int max1 = INT_MIN, min1 = INT_MAX;
   int max2 = INT_MIN, min2 = INT_MAX;
   for (int i = 0; i < n; i++) {
      max1 = max(max1, arr[i] + i);
      min1 = min(min1, arr[i] + i);
      max2 = max(max2, arr[i] - i);
      min2 = min(min2, arr[i] - i);
   }
   return max(max1 - min1, max2 - min2);
}

int main() {
   int array[] = { 5, 7, 1, 2 };
   int n = sizeof(array) / sizeof(array[0]);
   cout << "The maximum value of |arr[i] - arr[j]| + |i-j| is " << maxDiff(array, n);
   return 0;
}
```

## Output

The maximum value of |arr[i] - arr[j]| + |i-j| is 7
https://asymptotics.wordpress.com/category/mathematics/
### Mathematics

Consider a discrete Markov source $\mathscr{X} = \{X_i\}_{i=1}^{\infty}$ on a finite alphabet. Let the initial distribution be $Q$ and the transition probability matrix for the $n$-th step be $P_n$. When can we say that $\mathscr{X}$ is stationary?

Clearly, the source has to be time invariant, and therefore we need $P_n = P$ for all $n$. For $\mathscr{X}$ to be stationary, we need

$f(X_1) = f(X_2) = \cdots = f(X_n) = \cdots$

where $f(\cdot)$ denotes the distribution. But $f(X_n) = Q P^{n-1}$, so $Q = QP$ guarantees that all the $X_n$'s have the same distribution. Now consider, say, $f(X_1, X_2, X_3)$ and $f(X_2, X_3, X_4)$:

$f(X_1, X_2, X_3) = f(X_1)\, f(X_2 \mid X_1)\, f(X_3 \mid X_2),$

$f(X_2, X_3, X_4) = f(X_2)\, f(X_3 \mid X_2)\, f(X_4 \mid X_3).$

Since the conditional factors are given by the same kernel $P$, it is clear that for the two joint distributions to be equal, $f(X_1) = f(X_2)$ is enough, and therefore $Q = QP$ is sufficient.

The great Prof. Varadhan made a visit to the IISc on February 13th. He gave a lecture at the IISc faculty hall on entropy and large deviations. The following example was interesting.

Consider a bug with limited energy trapped in the valley of a steep peak. It tries to scale the peak and reach the other side. Every time it fails, it falls to the bottom of the valley, changes its strategy, and starts all over again. After a very large number of attempts, it succeeds and reaches the peak. As it goes down the other side, an observer in the other valley sees the bug coming down. He becomes curious, goes to the top of the peak, and looks below to see how steep the bug's climb was. Now, having seen the bug, what can he conclude about the strategy adopted by the bug? Prof. Varadhan made the following comment: the observer can surely conclude that the bug would have adopted the most efficient strategy.
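The stationarity condition $Q = QP$ from the Markov example is easy to check numerically. A minimal sketch (the two-state chain below is an invented illustration, not from the original post):

```python
def row_times_matrix(q, P):
    # multiply a row vector q by a square matrix P
    return [sum(q[i] * P[i][j] for i in range(len(q))) for j in range(len(q))]

# a two-state chain with transition matrix P
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Q = QP has the solution Q = (5/6, 1/6) for this P
Q = [5.0 / 6.0, 1.0 / 6.0]

QP = row_times_matrix(Q, P)
# QP equals Q (up to rounding), so a chain started from Q is stationary
```

Starting the chain from any other initial distribution gives $f(X_1) \neq f(X_2)$, so the source is time invariant but not stationary.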
That has to be the case because, as the number of attempts becomes very large, the probability of success is dominated by that of the best strategy, and if a success occurs it must come from the best strategy.

Let us consider the following problem, which is a popular example in game theory. A mother wants to divide a cake between her two children. To make both of them happy she has to ensure that each one gets an equal share; that is, she has to cut the cake into two equal halves. But the problem is that the cake does not have a regular shape, so what is equal in her eyes may not be equal in the eyes of her children. The consequences of an unequal division are imaginable. So how can she make both her kids happy? The solution is to ask one of the kids to cut the cake into two and the other to choose the piece. It can be verified that this solves the problem. The interesting thing about the solution is that the mother was able to satisfy both kids even without knowing what would make them happy. Now, if the mother has N kids, what will she do? What will happen if some kids collude and try to get a bigger share for themselves?

While reading through the achievements of the Kerala school of mathematics, I came across the Madhava-Leibniz formula for computing the value of pi/4. Though the formula was discovered by Madhava, it is popularly known as the Leibniz formula. This is an instance of Stigler's Law, which is the tendency of NOT attributing a discovery to its original discoverer. Popular instances of the law include the United States of America, Halley's comet, Planck's constant and, to my surprise, the Gaussian distribution! I came to know that the Gaussian distribution was not first proposed by Gauss, a very late discovery for a communication engineer. The distribution was first proposed by de Moivre of the de Moivre-Laplace formula, to approximate binomial distributions for large n.
Now, by recursive application of Stigler's Law, can we say that de Moivre also was not the first to propose the normal distribution?
http://longertwits.vickythegme.com/kmart-certificate-ysnsul/63be42-how-to-simplify-radicals
Find a perfect square factor for 24. The radicand contains no fractions. For the purpose of the examples below, we are assuming that variables in radicals are non-negative, and denominators are nonzero. By quick inspection, the number 4 is a perfect square that can divide 60. Is the 5 included in the square root, or not? The index is as small as possible. Then, there are negative powers than can be transformed. Check it out. Simplifying radicals is the process of manipulating a radical expression into a simpler or alternate form. So … That is, the definition of the square root says that the square root will spit out only the positive root. Simplify each of the following. Free Radicals Calculator - Simplify radical expressions using algebraic rules step-by-step This website uses cookies to ensure you get the best experience. This is the case when we get $$\sqrt{(-3)^2} = 3$$, because $$|-3| = 3$$. This theorem allows us to use our method of simplifying radicals. As you can see, simplifying radicals that contain variables works exactly the same way as simplifying radicals that contain only numbers. Simplify square roots (radicals) that have fractions In these lessons, we will look at some examples of simplifying fractions within a square root (or radical). Indeed, we can give a counter example: $$\sqrt{(-3)^2} = \sqrt(9) = 3$$. Simplifying Radicals – Practice Problems Move your mouse over the "Answer" to reveal the answer or click on the "Complete Solution" link to reveal all of the steps required for simplifying radicals. Simplifying a Square Root by Factoring Understand factoring. Example 1. All right reserved. There are lots of things in math that aren't really necessary anymore. The radicand contains no factor (other than 1) which is the nth or greater power of an integer or polynomial. 
To simplify a term containing a square root, we "take out" anything that is a "perfect square"; that is, we factor inside the radical symbol and then we take out in front of that symbol anything that has two copies of the same factor. Sign up to follow my blog and then send me an email or leave a comment below and I’ll send you the notes or coloring activity for free! For example. These date back to the days (daze) before calculators. For instance, if we square 2, we get 4, and if we "take the square root of 4", we get 2; if we square 3, we get 9, and if we "take the square root of 9", we get 3. In other words, we can use the fact that radicals can be manipulated similarly to powers: There are various ways I can approach this simplification. Chemical Reactions Chemical Properties. There are four steps you should keep in mind when you try to evaluate radicals. It’s really fairly simple, though – all you need is a basic knowledge of multiplication and factoring. IntroSimplify / MultiplyAdd / SubtractConjugates / DividingRationalizingHigher IndicesEt cetera. We can deal with katex.render("\\sqrt{3\\,}", rad03C); in either of two ways: If we are doing a word problem and are trying to find, say, the rate of speed, then we would grab our calculators and find the decimal approximation of katex.render("\\sqrt{3\\,}", rad03D);: Then we'd round the above value to an appropriate number of decimal places and use a real-world unit or label, like "1.7 ft/sec". Simplify the following radicals. Reducing radicals, or imperfect square roots, can be an intimidating prospect. Oftentimes the argument of a radical is not a perfect square, but it may "contain" a square amongst its factors. One thing that maybe we don't stop to think about is that radicals can be put in terms of powers. First, we see that this is the square root of a fraction, so we can use Rule 3. Your email address will not be published. Then: katex.render("\\sqrt{144\\,} = \\mathbf{\\color{purple}{ 12 }}", typed01);12. 
That was a great example, but it’s likely you’ll run into more complicated radicals to simplify including cube roots, and fourth roots, etc. Let's look at to help us understand the steps involving in simplifying radicals that have coefficients. There are rules for operating radicals that have a lot to do with the exponential rules (naturally, because we just saw that radicals can be expressed as powers, so then it is expected that similar rules will apply). 1. Short answer: Yes. Finance. Find the number under the radical sign's prime factorization. Concretely, we can take the $$y^{-2}$$ in the denominator to the numerator as $$y^2$$. So 117 doesn't jump out at me as some type of a perfect square. For instance, relating cubing and cube-rooting, we have: The "3" in the radical above is called the "index" of the radical (the plural being "indices", pronounced "INN-duh-seez"); the "64" is "the argument of the radical", also called "the radicand". The square root of 9 is 3 and the square root of 16 is 4. Quotient Rule . A radical can be defined as a symbol that indicate the root of a number. In this tutorial we are going to learn how to simplify radicals. In simplifying a radical, try to find the largest square factor of the radicand. Another way to do the above simplification would be to remember our squares. Some techniques used are: find the square root of the numerator and denominator separately, reduce the fraction and change to improper fraction. Identities Proving Identities Trig Equations Trig Inequalities Evaluate Functions Simplify. No radicals appear in the denominator. I could continue factoring, but I know that 9 and 100 are squares, while 5 isn't, so I've gone as far as I need to. We will start with perhaps the simplest of all examples and then gradually move on to more complicated examples . 
In mathematical notation, the previous sentence means the following: The " katex.render("\\sqrt{\\color{white}{..}\\,}", rad17); " symbol used above is called the "radical"symbol. Special care must be taken when simplifying radicals containing variables. 1. + 1) type (r2 - 1) (r2 + 1). Simplifying radicals containing variables. First, we see that this is the square root of a fraction, so we can use Rule 3. Most likely you have, one way or the other worked with these rules, sometimes even not knowing you were using them. One would be by factoring and then taking two different square roots. Here’s how to simplify a radical in six easy steps. Another rule is that you can't leave a number under a square root if it has a factor that's a perfect square. Simplifying Radicals Activity. Radicals ( or roots ) are the opposite of exponents. Being familiar with the following list of perfect squares will help when simplifying radicals. Square root, cube root, forth root are all radicals. 0. If you notice a way to factor out a perfect square, it can save you time and effort. Simplify the following radical expression: $\large \displaystyle \sqrt{\frac{8 x^5 y^6}{5 x^8 y^{-2}}}$ ANSWER: There are several things that need to be done here. Let's see if we can simplify 5 times the square root of 117. This website uses cookies to improve your experience. How do we know? Simplifying simple radical expressions where a ≥ 0, b > 0 "The square root of a quotient is equal to the quotient of the square roots of the numerator and denominator." get rid of parentheses (). And take care to write neatly, because "katex.render("5\\,\\sqrt{3\\,}", rad017);" is not the same as "katex.render("\\sqrt[5]{3\\,}", rad018);". Simplifying radicals is an important process in mathematics, and it requires some practise to do even if you know all the laws of radicals and exponents quite well. Find the number under the radical sign's prime factorization. 
We use the fact that the product of two radicals is the same as the radical of the product, and vice versa:

$$\sqrt{x}\,\sqrt{y} = \sqrt{xy}$$

For example, since 72 factors as 2 × 36, and since 36 is a perfect square:

$$\sqrt{72} = \sqrt{36 \times 2} = \sqrt{36}\,\sqrt{2} = 6\sqrt{2}$$

Since there had been only one copy of the factor 2 in the factorization 2 × 6 × 6, the left-over 2 couldn't come out of the radical and had to be left behind (much like a fungus or a bad house guest). In case you're wondering, products of radicals are customarily written as shown above, using "multiplication by juxtaposition", meaning they're put right next to one another, which we're using to mean that they're multiplied against each other. Did you just start learning about radicals (square roots) but you're struggling with operations? This is exactly the kind of simplification you'll practice below. (Source: https://www.purplemath.com/modules/radicals.htm, © 2020 Purplemath.)
To simplify this sort of radical, we need to factor the argument (that is, factor whatever is inside the radical symbol) and "take out" one copy of anything that is a square. The goal of simplifying a square root is to leave the smallest possible radicand behind. Three rules do most of the work:

Rule 1:    $$\large \displaystyle \sqrt{x^2} = |x|$$

Rule 2:    $$\large\displaystyle \sqrt{xy} = \sqrt{x} \sqrt{y}$$

Rule 3:    $$\large\displaystyle \sqrt{\frac{x}{y}} = \frac{\sqrt x}{\sqrt y}$$

The same rules apply to higher-index radicals, with $n$-th powers in place of squares:

$$\sqrt[n]{x^n} = |x| \ (n \text{ even}), \quad \sqrt[n]{xy} = \sqrt[n]{x} \sqrt[n]{y}, \quad \sqrt[n]{\frac{x}{y}} = \frac{\sqrt[n]{x}}{\sqrt[n]{y}}$$

Returning to the fraction example above: since the radicand is a fraction, Rule 3 applies, and the first step is to take the $y^{-2}$ in the denominator to the numerator as $y^2$. In this tutorial, the primary focus is on simplifying radical expressions with an index of 2.
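The factor-out-the-largest-square method described above can be sketched in code. This is an illustrative helper, not part of the lesson (the name `simplify_sqrt` is ours): it returns the coefficient that comes out of the radical and the radicand that stays behind.

```python
from math import isqrt

def simplify_sqrt(n: int) -> tuple[int, int]:
    """Write sqrt(n) as k*sqrt(m), where k*k is the largest square dividing n."""
    for d in range(isqrt(n), 0, -1):   # try the largest candidate square first
        if n % (d * d) == 0:
            return d, n // (d * d)     # k = d comes out, m = n/(d*d) stays inside
    return 1, n                        # unreachable for n >= 1, kept for safety

print(simplify_sqrt(72))   # (6, 2):  sqrt(72) = 6*sqrt(2)
print(simplify_sqrt(117))  # (3, 13): sqrt(117) = 3*sqrt(13)
print(simplify_sqrt(50))   # (5, 2):  sqrt(50) = 5*sqrt(2)
```

Searching downward from `isqrt(n)` guarantees the first hit is the largest square factor, so no further simplification is possible afterwards.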
The various roots have their own notation: a square (second) root is written with a plain radical symbol $\sqrt{\phantom{x}}$; a cube (third) root is written $\sqrt[3]{\phantom{x}}$; a fourth root is written $\sqrt[4]{\phantom{x}}$; a fifth root is written $\sqrt[5]{\phantom{x}}$; and so on. We can take any counting number, square it, and end up with a nice neat number, which is why being familiar with the list of perfect squares (4, 9, 16, 25, 36, 49, ...) helps so much when simplifying radicals in reverse. The product (multiplication) formula for radicals with equal indices is

$$\sqrt[n]{x}\,\sqrt[n]{y} = \sqrt[n]{xy}.$$
Let's work through some examples.

Example 1: simplify $\sqrt{50}$. The largest square factor of 50 is 25, so $\sqrt{50} = \sqrt{25 \times 2} = 5\sqrt{2}$.

Example 2: simplify $\sqrt{1700}$. Since $1700 = 100 \times 17$ and 100 is a perfect square, $\sqrt{1700} = \sqrt{100 \times 17} = 10\sqrt{17}$. A useful shortcut: if the last two digits of a number end in 25, 50, or 75, you can always factor out 25.

Example 3: simplify $5\sqrt{117}$. The number 117 doesn't jump out as having an obvious square factor, so take its prime factorization and see if any prime shows up more than once: $117 = 3 \times 3 \times 13$. The pair of 3s comes out of the radical, so $5\sqrt{117} = 5 \times 3\sqrt{13} = 15\sqrt{13}$. All you have to do is simplify the radical like normal and, at the end, multiply the coefficient by any numbers that "got out" of the square root.

Example 4: simplify $\sqrt{24}$. Here $\sqrt{24} = \sqrt{4 \times 6} = 2\sqrt{6}$; the left-over 6 has no square factor, so it stays under the radical.

Example 5 (quotient property): simplify $\sqrt{5/16}$. Using Rule 3, $\sqrt{5/16} = \dfrac{\sqrt{5}}{\sqrt{16}} = \dfrac{\sqrt{5}}{4}$. In general, find the square roots of the numerator and denominator separately, reduce the fraction, and change to an improper fraction if needed.

To summarize, the steps for simplifying a radical are: (1) break the radicand down into prime factors such as 2, 3, 5 until only primes are left; (2) pair up repeated factors and bring one copy of each pair outside the radical; (3) multiply whatever comes out by any coefficient already in front; (4) leave the unpaired factors inside. Simplifying radicals that contain variables works exactly the same way, with one caution: since the square-root symbol always denotes the non-negative root, $\sqrt{x^2} = |x|$, not simply $x$. A radical is considered to be in simplest form when the radicand has no square number factor, no fraction appears under the radical, and no radical appears in the denominator of a fraction.
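The product and quotient rules used in the examples above can be checked numerically. A quick, purely illustrative sanity check:

```python
import math

x, y = 5.0, 16.0
# Rule 2: sqrt(x*y) = sqrt(x) * sqrt(y)
assert math.isclose(math.sqrt(x * y), math.sqrt(x) * math.sqrt(y))
# Rule 3: sqrt(x/y) = sqrt(x) / sqrt(y), so sqrt(5/16) = sqrt(5)/4
assert math.isclose(math.sqrt(x / y), math.sqrt(x) / math.sqrt(y))
assert math.isclose(math.sqrt(5 / 16), math.sqrt(5) / 4)
# Example 2: sqrt(1700) = 10*sqrt(17)
assert math.isclose(math.sqrt(1700), 10 * math.sqrt(17))
print("product and quotient rules check out numerically")
```

Floating-point checks like this do not prove the identities, but they are a fast way to catch a mis-simplified radical.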
# Decomposition of orthogonal matrix and synthesis of two-qubit and three-qubit orthogonal gates

## Abstract

The decomposition of matrices associated to two-qubit and three-qubit orthogonal gates is studied, and based on the decomposition the synthesis of these gates is investigated. The optimal synthesis of the general two-qubit orthogonal gate is obtained. For the two-qubit unimodular orthogonal gate, it requires at most 2 CNOT gates and 6 one-qubit gates. For the general three-qubit unimodular orthogonal gate, it can be synthesized by 16 CNOT gates and 36 one-qubit $R_y$ and $R_z$ gates in the worst case.

###### pacs: 03.67.Lx, 03.65.Fd

## I Introduction

In quantum computing, algorithms are commonly described by the quantum circuit model [1]. The building blocks of quantum circuits are quantum gates, i.e., unitary transformations acting on a set of qubits. In 1995, Barenco et al. showed that any multi-qubit quantum circuit can be decomposed into a sequence of one-qubit gates and CNOT gates [2]. The process of constructing quantum circuits from these elementary gates is called synthesis by some authors. The complexity of a quantum circuit can be measured in terms of the number of CNOT and one-qubit elementary gates required. Achieving gate arrays of lower complexity is crucial not only because it reduces resources, but also because it reduces errors. Decomposition of matrices plays a very important role in synthesizing and optimizing quantum gates. Based on the Cartan decomposition [3, 4], the synthesis, optimization, and "small circuit" structure of two-qubit gates are well understood [5-9]. Implementing a general two-qubit gate requires at most 3 CNOT gates and 15 elementary one-qubit gates from the family $\{R_y, R_z\}$ [7, 8]. Unfortunately, the aforementioned optimal synthesis of the most general two-qubit quantum gates has not yet led to similarly tight results for three-qubit gates.
Based on one of the Cartan decompositions for multi-qubit systems, the Khaneja-Glaser decomposition (KGD) [4], Vatan and Williams obtained the result that a general three-qubit quantum gate can be synthesized using at most 40 CNOT gates and 98 one-qubit $R_y$ and $R_z$ gates [10]. Using a modified KGD, this result was improved in [11]: it requires at most 26 CNOT gates and 73 one-qubit $R_y$ and $R_z$ gates. The best known result is now based on the quantum Shannon decomposition (QSD) [12] proposed by Shende, Bullock and Markov; it requires at most 20 CNOT gates. According to the result for the multi-qubit case, the best known theoretical lower bound on the CNOT-gate cost of a general three-qubit gate is 14 [8]. However, no circuit construction yielding this number of CNOT gates has been presented in the literature. The orthogonal gate is an important class of gate: the matrix corresponding to the gate is orthogonal. For example, classical reversible logic circuits have a long history [13] and are a necessary subclass whose realization is required for any quantum computer to be universal; their matrix elements are all real, so they are orthogonal. Utilizing the basic properties of the magic basis, Vatan and Williams investigated the synthesis of two-qubit orthogonal gates in 2004 [7]. Their result is that the synthesis of a unimodular orthogonal gate requires at most 2 CNOT gates and 12 elementary one-qubit gates. As for the non-unimodular orthogonal gate, that is, one whose matrix determinant is equal to minus one, it requires at most 3 CNOT gates and 12 elementary one-qubit gates [7]. The number of one-qubit gates required can still be reduced further. Moreover, no article has yet discussed the synthesis of general three-qubit orthogonal quantum gates. In this work, we investigate the synthesis of two-qubit and three-qubit orthogonal gates. For this purpose, we first study the Cartan decomposition of the matrices for these gates.
Based on the particular decompositions, the two kinds of synthesis are obtained. For the two-qubit unimodular orthogonal gate, it requires at most 2 CNOT gates and 6 one-qubit gates, beating an earlier bound of 2 CNOT gates and 12 one-qubit elementary gates. The numbers of one-qubit gates and CNOT gates required both reach the lower bound. For the three-qubit unimodular orthogonal gate, it can be synthesized by 16 CNOT gates and 36 one-qubit $R_y$ and $R_z$ gates in the worst case. This paper is organized as follows. The concept of Cartan decomposition and its application in quantum information science (QIS) are briefly introduced in Section II. Based on a kind of Cartan decomposition of the special orthogonal group $SO(4)$, we provide an optimal synthesis of the general two-qubit orthogonal gate in Section III. The decomposition of the group $SO(8)$ associated to the three-qubit unimodular orthogonal gate is investigated in Section IV. The synthesis of the general three-qubit unimodular orthogonal gate is studied in Section V; this is the first time the synthesis of this kind of gate has been discussed. A brief conclusion is made in Section VI.

## II Cartan Decomposition and Its Application in QIS

The Cartan decomposition of a Lie group [3] depends on the decomposition of its Lie algebra. A Cartan decomposition of a real semisimple Lie algebra $g$ is the decomposition

$$g = l \oplus p, \qquad (1)$$

where $p$ is the orthogonal complement of $l$ with respect to the Killing form, and $l$ and $p$ satisfy the commutation relations

$$[l, l] \subseteq l, \quad [l, p] \subseteq p, \quad [p, p] \subseteq l. \qquad (2)$$

Here $l$ is a Lie subalgebra of $g$. A maximal Abelian subalgebra contained in $p$ is called a Cartan subalgebra of the pair $(g, l)$, denoted $a$. Then, using the relation between a Lie group and its Lie algebra, every element $X$ of the Lie group can be written as

$$X = K_1 A K_2, \qquad (3)$$

where $K_1, K_2 \in e^{l}$ and $A \in e^{a}$. There are many kinds of Cartan decomposition for semisimple Lie groups. The main application in quantum information science so far is the decomposition of the group $SU(2^n)$ for multi-qubit systems, i.e., the Khaneja-Glaser decomposition (KGD) [4].
Moreover, there are some other decompositions, such as the Concurrence Canonical Decomposition (CCD) [14, 15], which is also a decomposition of the group $SU(2^n)$, and the Odd-Even Decomposition (OED) [16], which is a generalization of the CCD to more general multipartite quantum systems. Some kinds of Cartan decomposition for bipartite high-dimensional quantum systems were discussed in [17-19]. These Cartan decompositions have been applied to the synthesis and implementation of quantum logic gates [10, 11, 20, 21], the entanglement of multipartite quantum systems [14, 15], etc. But we need to find new suitable algebraic structures of Cartan decomposition to meet our purpose here.

## III Optimal Synthesis of General Two-Qubit Orthogonal Gates

We now consider the decomposition of the 4-dimensional special orthogonal group $SO(4)$ associated to the two-qubit unimodular orthogonal gate. Differently from [7], the Lie algebra is constructed as

$$so(4) := \mathrm{span}\{\, iI\otimes\sigma_y,\ i\sigma_y\otimes I,\ i\sigma_x\otimes\sigma_y,\ i\sigma_y\otimes\sigma_x,\ i\sigma_z\otimes\sigma_y,\ i\sigma_y\otimes\sigma_z \,\}, \qquad (4)$$

in which each basis vector involves a $\sigma_y$ matrix. A Cartan decomposition of the algebra $so(4)$ is

$$so(4) = l \oplus p, \qquad (5)$$

with

$$l := \mathrm{span}\{ iI\otimes\sigma_y,\ i\sigma_y\otimes I \}, \qquad (6)$$

$$p := \mathrm{span}\{ i\sigma_x\otimes\sigma_y,\ i\sigma_y\otimes\sigma_x,\ i\sigma_z\otimes\sigma_y,\ i\sigma_y\otimes\sigma_z \}, \qquad (7)$$

where $l$ is a Lie subalgebra and $p$ its orthogonal complement. The Cartan subalgebra is

$$a := \mathrm{span}\{ i\sigma_x\otimes\sigma_y,\ i\sigma_y\otimes\sigma_z \}. \qquad (8)$$

Utilizing the relation between the Lie group and its Lie algebra, the Cartan decomposition of the Lie group $SO(4)$ can be obtained. For every element $X \in SO(4)$, we have

$$X = K_1 A K_2, \qquad (9)$$

where $K_1, K_2 \in e^{l}$, and $A$ is a two-qubit operation of the form

$$A(a, b) = \exp(-i(a\,\sigma_x\otimes\sigma_y + b\,\sigma_y\otimes\sigma_z)), \qquad (10)$$

where $a, b \in \mathbb{R}$. The operation $A$ can be represented by a synthesis of elementary gates as

$$A = C_{12} \cdot R^{(1)}_y(b) \cdot R^{(2)}_y(a) \cdot C_{12}. \qquad (11)$$

Here and afterwards $C_{ij}$ denotes the CNOT gate acting on the $i$-th and $j$-th qubits (one the control, the other the target), and $R^{(i)}$ is an elementary one-qubit gate acting on the $i$-th qubit. Combining Eqs. (9), (10) and (11), we get the synthesis of the general two-qubit orthogonal gate as in Fig. 1; it requires at most 2 CNOT gates and 6 one-qubit gates.
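Eq. (11) can be checked numerically. The sketch below uses only plain Python. Two conventions are our assumptions, chosen so that the identity holds: $R_y(\theta) = \exp(-i\theta\sigma_y)$ (no half-angle), and the CNOT of Eq. (11) taken with control on qubit 2 and target on qubit 1 in the $q_1 \otimes q_2$ matrix ordering.

```python
import math

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def lin(c1, A, c2, B):  # c1*A + c2*B
    n = len(A)
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(n)] for i in range(n)]

def Ry(t):  # exp(-i t sigma_y) = cos(t) I - i sin(t) sigma_y
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

I4 = kron(I2, I2)
# CNOT with control on qubit 2, target on qubit 1 (assumed convention)
C = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

a, b = 0.7, -1.3
# Eq. (10): X(x)Y and Y(x)Z commute (both tensor factors anticommute),
# so the exponential splits into two simple factors.
lhs = matmul(lin(math.cos(a), I4, -1j * math.sin(a), kron(X, Y)),
             lin(math.cos(b), I4, -1j * math.sin(b), kron(Y, Z)))
# Eq. (11): A = C12 . Ry^(1)(b) . Ry^(2)(a) . C12
rhs = matmul(matmul(C, matmul(kron(Ry(b), I2), kron(I2, Ry(a)))), C)

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(4) for j in range(4))
assert err < 1e-9
```

The check works because conjugation by this CNOT maps $I\otimes\sigma_y \to \sigma_x\otimes\sigma_y$ and $\sigma_y\otimes I \to \sigma_y\otimes\sigma_z$, turning the two one-qubit rotations into exactly the two generators of Eq. (10).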
As for the non-unimodular orthogonal gate (whose determinant is equal to minus one), it requires at most 3 CNOT gates and 6 one-qubit gates. Two CNOT gates are optimal for the CNOT-gate cost of a two-qubit orthogonal gate, as has been proved in [9]. Since an $SO(4)$ matrix has 6 independent parameters, it needs at least 6 elementary one-qubit gates to load them. So the synthesis of the general two-qubit orthogonal gate given here is optimal both in CNOT gates and in elementary one-qubit gates.

## IV Decomposition of General Three-qubit Unimodular Orthogonal Gate

The matrices of any general three-qubit unimodular orthogonal gate are elements of the special orthogonal group $SO(8)$. We construct the Lie algebra first. Taking the AI type of Cartan decomposition [3] of the $u(4)$ and $u(2)$ Lie algebras,

$$u(4) = i\sigma^{(1)} \oplus iS^{(1)}, \qquad (12)$$

$$u(2) = i\sigma^{(2)} \oplus iS^{(2)}, \qquad (13)$$

with

$$i\sigma^{(1)} := \mathrm{span}\{ iI\otimes\sigma_x,\ iI\otimes\sigma_y,\ iI\otimes\sigma_z,\ i\sigma_x\otimes I,\ i\sigma_y\otimes I,\ i\sigma_z\otimes I \}, \qquad (14)$$

$$iS^{(1)} := \mathrm{span}\{ i\sigma_{x,y,z}\otimes\sigma_{x,y,z},\ iI \}, \qquad (15)$$

$$i\sigma^{(2)} := \mathrm{span}\{ i\sigma_y \}, \quad iS^{(2)} := \mathrm{span}\{ i\sigma_x,\ i\sigma_z,\ iI \}. \qquad (16)$$

A set of basis vectors for a Lie algebra $F$ is given by the 28 tensor products of the form

$$F := i\sigma^{(1)}\otimes S^{(2)} \quad \text{and} \quad iS^{(1)}\otimes\sigma^{(2)}. \qquad (17)$$

Using the transformation matrix of [22, 23],

$$M = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i & 0 & 0 \\ 0 & 0 & i & 1 \\ 0 & 0 & i & -1 \\ 1 & -i & 0 & 0 \end{pmatrix} \otimes I_2, \qquad (18)$$

the Lie algebra $so(8)$ can be obtained as $M^{\dagger} F M$. So the Lie algebra $F$ is isomorphic to $so(8)$, and the basis in Eq. (17) can be called the magic basis of the $so(8)$ algebra. Then we take the Cartan decomposition of the Lie algebra $F$ as in Eq. (1), with

$$l := \mathrm{span}\{ iI\otimes\sigma_{x,y,z}\otimes I,\ i\sigma_{x,y,z}\otimes I\otimes I,\ iI\otimes\sigma_{x,y,z}\otimes\sigma_z,\ i\sigma_{x,y,z}\otimes I\otimes\sigma_z \}, \qquad (19)$$

$$p := \mathrm{span}\{ iI\otimes\sigma_{x,y,z}\otimes\sigma_x,\ i\sigma_{x,y,z}\otimes I\otimes\sigma_x,\ iI\otimes I\otimes\sigma_y,\ i\sigma_{x,y,z}\otimes\sigma_{x,y,z}\otimes\sigma_y \}. \qquad (20)$$

The subalgebra $l$ is isomorphic to $su(2)\oplus su(2)\oplus su(2)\oplus su(2)$. The Cartan subalgebra of the pair $(F, l)$ can be chosen as

$$a := \mathrm{span}\{ i\sigma_x\otimes\sigma_x\otimes\sigma_y,\ i\sigma_y\otimes\sigma_y\otimes\sigma_y,\ i\sigma_z\otimes\sigma_z\otimes\sigma_y,\ iI\otimes I\otimes\sigma_y \}. \qquad (21)$$

Using the formula

$$[A\otimes B,\ C\otimes D] = \tfrac{1}{2}\big(\{A,C\}\otimes[B,D] + [A,C]\otimes\{B,D\}\big), \qquad (22)$$

it is easy to verify that the $l$ and $p$ in Eqs. (19, 20) satisfy the conditions of the Cartan decomposition in Eq. (2). The Lie subalgebra $l$ can be decomposed further,

$$l = l^{(1)} \oplus p^{(1)}, \qquad (23)$$

with

$$l^{(1)} := \mathrm{span}\{ iI\otimes\sigma_{x,y,z}\otimes I,\ i\sigma_{x,y,z}\otimes I\otimes I \}, \qquad (24)$$

$$p^{(1)} := \mathrm{span}\{ iI\otimes\sigma_{x,y,z}\otimes\sigma_z,\ i\sigma_{x,y,z}\otimes I\otimes\sigma_z \}. \qquad (25)$$

The subalgebra $l^{(1)}$ is isomorphic to $su(2)\oplus su(2)$.
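As a sanity check on the transformation matrix of Eq. (18), we can verify in plain Python that it is unitary. The explicit 4×4 block below is our reading of the extraction-damaged matrix, based on the standard magic basis; the full 8×8 $M$ is this block tensored with $I_2$.

```python
import math

s = 1 / math.sqrt(2)
# Assumed 4x4 block of M from Eq. (18)
M4 = [[s, s * 1j, 0, 0],
      [0, 0, s * 1j, s],
      [0, 0, s * 1j, -s],
      [s, -s * 1j, 0, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):  # conjugate transpose
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

P = matmul(M4, dagger(M4))  # should be the 4x4 identity if M4 is unitary
err = max(abs(P[i][j] - (1 if i == j else 0)) for i in range(4) for j in range(4))
assert err < 1e-12
```

Each row of `M4` is (up to phases) a Bell state, which is why conjugation by $M$ relates real orthogonal matrices to the magic-basis picture used in [22, 23].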
Its Cartan subalgebra can be chosen as

$$a^{(1)} := \mathrm{span}\{ iI\otimes\sigma_z\otimes\sigma_z,\ i\sigma_z\otimes I\otimes\sigma_z \}. \qquad (26)$$

From the correspondence between the Lie group and its Lie algebra, and the conjugation by $M$, we get the Cartan decomposition of the Lie group $SO(8)$: any element $X$ of the group can be decomposed as

$$X = M^{\dagger} K_1 A^{(1)}_1 K_2\, A\, K_3 A^{(1)}_2 K_4 M. \qquad (27)$$

Here $A$ and $A^{(1)}_{1,2}$ belong to the Abelian subgroups associated to the Cartan subalgebras $a$ and $a^{(1)}$, respectively.

## V Synthesis of General Three-qubit Unimodular Orthogonal Gate

Based on the discussion in Section IV, the decomposition of the general three-qubit orthogonal gate is shown in Fig. 2, where

$$A(a,b,c,d) = \exp\{-i(a\,\sigma_x\otimes\sigma_x\otimes\sigma_y + b\,\sigma_y\otimes\sigma_y\otimes\sigma_y + c\,\sigma_z\otimes\sigma_z\otimes\sigma_y + d\,I\otimes I\otimes\sigma_y)\}, \qquad (28)$$

$$A^{(1)}(\alpha,\beta) = \exp\{-i(\alpha\,I\otimes\sigma_z\otimes\sigma_z + \beta\,\sigma_z\otimes I\otimes\sigma_z)\}. \qquad (29)$$

The synthesis of the transformation matrix $M$ is given in [7] and shown in Fig. 3, that is,

$$M = C_{12} \cdot R^{(1)}_z(\tfrac{\pi}{4}) \cdot R^{(2)}_y(\tfrac{\pi}{4}) \cdot R^{(2)}_z(-\tfrac{\pi}{4}). \qquad (30)$$

The $A$ can be expressed as

$$A(a,b,c,d) = M \cdot \tilde{A}(a,b,c) \cdot M^{\dagger} \cdot R^{(3)}_y(d), \qquad (31)$$

where

$$\tilde{A}(a,b,c) = \exp\{-i(a\,I\otimes\sigma_z\otimes\sigma_y - b\,\sigma_z\otimes\sigma_z\otimes\sigma_y + c\,\sigma_z\otimes I\otimes\sigma_y)\} = \exp\left\{-i \begin{pmatrix} a-b+c & 0 & 0 & 0 \\ 0 & b-a+c & 0 & 0 \\ 0 & 0 & a+b-c & 0 \\ 0 & 0 & 0 & -a-b-c \end{pmatrix} \otimes \sigma_y \right\}. \qquad (32)$$

Since the Cartan subalgebra is commutative, we can break the synthesis of $\tilde{A}$ down into the following operations:

$$\tilde{A}_1(a) = \exp\{-ia\,I\otimes\sigma_z\otimes\sigma_y\}, \qquad (33)$$

$$\tilde{A}_2(-b) = \exp\{ib\,\sigma_z\otimes\sigma_z\otimes\sigma_y\}, \qquad (34)$$

$$\tilde{A}_3(c) = \exp\{-ic\,\sigma_z\otimes I\otimes\sigma_y\}. \qquad (35)$$

And we have

$$\tilde{A}_1(a) = C_{32} \cdot R^{(3)}_y(a) \cdot C_{32}, \qquad (36)$$

$$\tilde{A}_2(-b) = C_{31} \cdot C_{32} \cdot R^{(3)}_y(-b) \cdot C_{32} \cdot C_{31}, \qquad (37)$$

$$\tilde{A}_3(c) = C_{31} \cdot R^{(3)}_y(c) \cdot C_{31}. \qquad (38)$$

By putting Eqs. (36), (37) and (38) together, we get

$$\tilde{A}(a,b,c) = C_{32} \cdot R^{(3)}_y(a) \cdot C_{31} \cdot R^{(3)}_y(-b) \cdot C_{32} \cdot R^{(3)}_y(c) \cdot C_{31}. \qquad (39)$$

Here the identities $C_{31} C_{32} = C_{32} C_{31}$ and $C_{ij}^2 = I$ are used. Combining Eqs. (30), (31) and (39), we have

$$A(a,b,c,d) = C_{12} \cdot R^{(2)}_y(\tfrac{\pi}{4}) \cdot C_{32} \cdot R^{(3)}_y(a) \cdot C_{31} \cdot R^{(3)}_y(-b) \cdot C_{32} \cdot R^{(3)}_y(c) \cdot C_{31} \cdot R^{(2)}_y(-\tfrac{\pi}{4}) \cdot C_{12} \cdot R^{(3)}_y(d), \qquad (40)$$

and its circuit is shown in Fig. 4. Since $R_z$ gates commute with the control qubit of the CNOT gate, the $R_z$ gates in $M$ and $M^{\dagger}$ are canceled here. The synthesis of $A^{(1)}$ is shown in Fig. 5, that is,

$$A^{(1)}(\alpha,\beta) = C_{31} \cdot R^{(3)}_z(\beta) \cdot C_{31} \cdot C_{32} \cdot R^{(3)}_z(\alpha) \cdot C_{32}. \qquad (41)$$

Putting all these pieces together, we get that at most 16 CNOT gates and 36 one-qubit $R_y$ and $R_z$ gates are sufficient to synthesize the general three-qubit orthogonal gate.
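Eq. (39) can also be checked numerically in plain Python. The CNOT index conventions are again our assumption, chosen so that Eqs. (36)-(38) hold: $C_{31}$ with control on qubit 1 and target on qubit 3, $C_{32}$ with control on qubit 2 and target on qubit 3, qubit 1 the most significant bit, and $R_y(t) = \exp(-it\sigma_y)$.

```python
import math

def kron(A, B):
    m = len(B)
    n = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def matmul(*Ms):
    R = Ms[0]
    for B in Ms[1:]:
        n = len(R)
        R = [[sum(R[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return R

def lin(c1, A, c2, B):  # c1*A + c2*B
    n = len(A)
    return [[c1 * A[i][j] + c2 * B[i][j] for j in range(n)] for i in range(n)]

I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]
Y = [[0, -1j], [1j, 0]]
I8 = kron(kron(I2, I2), I2)

def Ry3(t):  # exp(-i t sigma_y) acting on qubit 3
    return kron(kron(I2, I2), [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]])

def cnot(control, target):  # 3 qubits, qubit 1 = most significant bit (assumed)
    M = [[0] * 8 for _ in range(8)]
    for state in range(8):
        bit = (state >> (3 - control)) & 1
        M[state ^ (bit << (3 - target))][state] = 1
    return M

C31, C32 = cnot(1, 3), cnot(2, 3)

a, b, c = 0.4, 1.1, -0.8
# LHS, Eq. (32): the three generators commute, so the exponential factors
lhs = I8
for t, G in [(a, kron(kron(I2, Z), Y)), (-b, kron(kron(Z, Z), Y)), (c, kron(kron(Z, I2), Y))]:
    lhs = matmul(lhs, lin(math.cos(t), I8, -1j * math.sin(t), G))
# RHS, Eq. (39): C32 . Ry3(a) . C31 . Ry3(-b) . C32 . Ry3(c) . C31
rhs = matmul(C32, Ry3(a), C31, Ry3(-b), C32, Ry3(c), C31)

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(8) for j in range(8))
assert err < 1e-9
```

The reduction from Eqs. (36)-(38) to Eq. (39) uses only that $C_{31}$ and $C_{32}$ commute (they share the same target qubit) and that every CNOT squares to the identity.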
For the same reason mentioned above, neighboring one-qubit gates acting on the same qubit have been combined into one. Here, each $M$ (and $M^{\dagger}$) requires 1 CNOT gate and 3 one-qubit gates, $A$ requires 6 CNOT gates and 6 one-qubit gates, each $A^{(1)}$ requires 4 CNOT gates and 2 one-qubit $R_z$ gates, and the 8 one-qubit gates contained in $K_1, \dots, K_4$ require 20 elementary $R_y$ and $R_z$ gates.

## VI Conclusions

Based on the decomposition of matrices, the synthesis of two-qubit and three-qubit orthogonal gates has been investigated. For the two-qubit orthogonal gate, we obtain an optimal result, which requires at most 2 CNOT gates and 6 one-qubit gates, beating an earlier bound of 2 CNOT gates and 12 one-qubit gates. For the three-qubit unimodular orthogonal gate, it requires 16 CNOT gates and 36 one-qubit gates from the family $\{R_y, R_z\}$ in the worst case. There are abundant algebraic structures for the matrix decomposition of the three-qubit orthogonal gate, so there are many ways to investigate the synthesis of the general three-qubit orthogonal gate. The result given here is the best one we have obtained, although we cannot yet affirm that it is optimal. The synthesis of general three-qubit gates has been studied in several works [10-12], and the orthogonal gate is an important class among them. So the work here is essentially on the "small circuit" issue of three-qubit gates, which is investigated for the first time in this paper. Unlike the two-qubit case, the problem of obtaining optimal quantum circuits for general three-qubit gates has not been well solved and is worth studying further.

## Acknowledgements

The work was supported by the Project of Natural Science Foundation of Jiangsu Education Bureau, China (Grant No. 09KJB140010).

### References

1. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
2. A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, Elementary gates for quantum computation, Phys. Rev. A 52, 3457-3467 (1995).
3. S. Helgason, Differential Geometry, Lie Groups and Symmetric Spaces (Academic Press, New York, 1978).
4. N. Khaneja and S. J. Glaser, Cartan decomposition of SU(2^n) and control of spin systems, Chem. Phys. 267, 11-23 (2001).
5. M. Y. Ye, Y. S. Zhang, and G. C. Guo, Quantum entanglement and quantum operation, Sci. Chin. G: Phys. Mech. Astron. 51, 14-21 (2008).
6. G. Vidal and C. M. Dawson, Universal quantum circuit for two-qubit transformations with three controlled-NOT gates, Phys. Rev. A 69, 010301 (2004).
7. F. Vatan and C. Williams, Optimal quantum circuits for general two-qubit gates, Phys. Rev. A 69, 032315 (2004).
8. V. V. Shende, I. L. Markov, and S. S. Bullock, Minimal universal two-qubit controlled-NOT-based circuits, Phys. Rev. A 69, 062321 (2004).
9. V. V. Shende, S. S. Bullock, and I. L. Markov, Recognizing small-circuit structure in two-qubit operators, Phys. Rev. A 70, 012310 (2004).
10. F. Vatan and C. P. Williams, Realization of a general three-qubit quantum gate, quant-ph/0401178 (2004).
11. H. R. Wei, Y. M. Di, and J. Zhang, Modified Khaneja-Glaser decomposition and realization of three-qubit quantum gate, Chin. Phys. Lett. 25, 3107-3110 (2008).
12. V. V. Shende, S. S. Bullock, and I. L. Markov, Synthesis of quantum logic circuits, IEEE Trans. Comput.-Aided Des. 25, 1000-1010 (2006).
13. E. Fredkin and T. Toffoli, Conservative logic, Int. J. Theor. Phys. 21, 219-253 (1982).
14. S. S. Bullock and G. K. Brennen, Canonical decompositions of n-qubit quantum computations and concurrence, J. Math. Phys. 45, 2447 (2004).
15. S. S. Bullock, G. K. Brennen, and D. P. O'Leary, Time reversal and n-qubit canonical decompositions, J. Math. Phys. 46, 062104 (2005).
16. D. D'Alessandro and F. Albertini, Quantum symmetries and Cartan decomposition in arbitrary dimensions, J. Phys. A: Math. Theor. 40, 2439-2454 (2007).
17. D. D'Alessandro and R. Romano, Decompositions of unitary evolutions and entanglement dynamics of bipartite quantum systems, J. Math. Phys. 47, 082109 (2006).
18. Y. M. Di, J. Zhang, and H. R. Wei, Cartan decomposition of a two-qutrit gate, Sci. Chin. G: Phys. Mech. Astron. 51, 1668-1676 (2008).
19. Y. M. Di, Y. Wang, and H. R. Wei, Dipole-quadrupole decomposition of two coupled spin 1 systems, J. Phys. A: Math. Theor. 43, 065303 (2010).
20. J. Zhang, Y. M. Di, and H. R. Wei, Realization of two-qutrit quantum gates with control pulses, Commun. Theor. Phys. 51, 653-658 (2009).
21. H. R. Wei, Y. M. Di, and Y. Wang, Synthesis of some three-qubit gates and their implementation in a three spins system coupled with Ising interaction, Sci. Chin. G: Phys. Mech. Astron. 53, 664-671 (2010).
22. C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Mixed-state entanglement and quantum error correction, Phys. Rev. A 54, 3824-3851 (1996).
23. S. Hill and W. K. Wootters, Entanglement of a pair of quantum bits, Phys. Rev. Lett. 78, 5022-5025 (1997).
https://www.ramapo.edu/web-resources/exams-spring/
### Final Exam Week is May 6-12 (Wednesday to Tuesday)

#### Common Finals: May 9 (Saturday), 8:00 – 11:20 am and 11:40 am – 3:00 pm

PLEASE NOTE: The exam times listed below are for four-credit lecture courses. One- and two-credit non-lab courses do not meet during Final Exam week. Hybrid/grad and lab courses will have exams during Final Exam week; the exam will take place during the time slot that most closely matches the actual meeting time (e.g. for a T 1:45 p.m. – 4:15 p.m. lab, use the T 1:45 p.m. – 5:15 p.m. class meeting, so the exam would be on Tuesday, May 12 at 3:20 p.m.). All faculty will be emailed, mid-semester, with exam date, time and location.

| If your class meets | Your exam is scheduled for |
| --- | --- |
| MWR, 8:30 a.m. – 9:40 a.m. | M, May 11, 8:00-11:20am |
| MR, 8 a.m. – 9:40 a.m. | M, May 11, 8:00-11:20am |
| MR, 8:30 a.m. – 9:40 a.m. | M, May 11, 8:00-11:20am |
| M, 8 a.m. – 11:30 a.m. | M, May 11, 8:00-11:20am |
| W, 8 a.m. – 11:30 a.m. | W, May 6, 8:00-11:20am |
| R, 8 a.m. – 11:30 a.m. | R, May 7, 8:00-11:20am |
| W, 9 a.m. – 12:30 p.m. | W, May 6, 8:00-11:20am |
| MWR, 9:55 a.m. – 11:05 a.m. | W, May 6, 8:00-11:20am |
| MR, 9:55 a.m. – 11:05 a.m. | W, May 6, 8:00-11:20am |
| M, 9:55 a.m. – 1:25 p.m. | M, May 11, 11:40-3:00pm |
| R, 9:55 a.m. – 1:25 p.m. | R, May 7, 11:40-3:00pm |
| MR, 11:20 a.m. – 1 p.m. | R, May 7, 11:40-3:00pm |
| R, 11:20 a.m. – 1 p.m. | M, May 11, 11:40-3:00pm |
| W, 1 p.m. – 4:30 p.m. | W, May 6, 11:40-3:00pm |
| MR, 2:15 p.m. – 3:55 p.m. | M, May 11, 3:20-6:40pm |
| M, 2:15 p.m. – 3:55 p.m. | M, May 11, 3:20-6:40pm |
| R, 2:15 p.m. – 3:55 p.m. | M, May 11, 3:20-6:40pm |
| M, 2:15 p.m. – 5:45 p.m. | M, May 11, 3:20-6:40pm |
| R, 2:15 p.m. – 5:45 p.m. | R, May 7, 3:20-6:40pm |
| MWR, 4:40 p.m. – 5:50 p.m. | W, May 6, 3:20-6:40pm |
| W, 4:30 p.m. – 7:50 p.m. | W, May 6, 7:00-10:20pm |
| MR, 4:10 p.m. – 5:50 p.m. | W, May 6, 3:20-6:40pm |
| M, 4:10 p.m. – 5:50 p.m. | W, May 6, 3:20-6:40pm |
| R, 4:10 p.m. – 5:50 p.m. | W, May 6, 3:20-6:40pm |
| M, 4:00 p.m. – 6:00 p.m. | M, May 11, 3:20-6:40pm |
| M, 4:30 p.m. – 7:50 p.m. | M, May 11, 3:20-6:40pm |
| R, 4:00 p.m. – 6:00 p.m. | R, May 7, 3:20-6:40pm |
| MR, 6:05 p.m. – 7:45 p.m. | M, May 11, 7:00pm-10:20pm |
| M, 6:30 p.m. – 8:30 p.m. | M, May 11, 7:00pm-10:20pm |
| MR, 8 p.m. – 9:40 p.m. | R, May 7, 7:00-10:20pm |
| M, 6:05 p.m. – 9:35 p.m. | M, May 11, 7:00pm-10:20pm |
| W, 6:05 p.m. – 9:35 p.m. | W, May 6, 7:00pm-10:20pm |
| R, 6:05 p.m. – 9:35 p.m. | R, May 7, 7:00pm-10:20pm |
| TF, 8 a.m. – 9:40 a.m. | T, May 12, 8:00-11:20am |
| T, 8 a.m. – 11:30 a.m. | T, May 12, 8:00-11:20am |
| F, 8 a.m. – 11:30 a.m. | T, May 12, 8:00-11:20am |
| TF, 9:55 a.m. – 11:35 a.m. | F, May 8, 8:00-11:20am |
| T, 9:55 a.m. – 11:35 a.m. | F, May 8, 8:00-11:20am |
| T, 9:55 a.m. – 1:25 p.m. | F, May 8, 8:00-11:20am |
| F, 9:55 a.m. – 1:25 p.m. | F, May 8, 8:00-11:20am |
| TF, 11:50 a.m. – 1:30 p.m. | T, May 12, 11:40-3:00pm |
| T, 11:50 a.m. – 1:30 p.m. | T, May 12, 11:40-3:00pm |
| TF, 1:45 p.m. – 3:25 p.m. | T, May 12, 3:20-6:40pm |
| T, 1:45 p.m. – 3:25 p.m. | T, May 12, 3:20-6:40pm |
| T, 1:45 p.m. – 5:15 p.m. | T, May 12, 3:20-6:40pm |
| F, 1:45 p.m. – 5:15 p.m. | F, May 8, 3:20-6:40pm |
| TF, 3:40 p.m. – 5:20 p.m. | F, May 8, 3:20-6:40pm |
| T, 3:40 p.m. – 5:20 p.m. | F, May 8, 3:20-6:40pm |
| F, 3:40 p.m. – 5:20 p.m. | F, May 8, 3:20-6:40pm |
| TF, 6:05 p.m. – 7:45 p.m. | T, May 12, 7:00pm-10:20pm |
| TF, 8 p.m. – 9:40 p.m. | F, May 8, 7:00pm-10:20pm |
| T, 8 p.m. – 9:40 p.m. | F, May 8, 7:00pm-10:20pm |
| T, 6:05 p.m. – 9:35 p.m. | T, May 12, 7:00pm-10:20pm |
| T, 6:05 p.m. – 7:45 p.m. | T, May 12, 7:00pm-10:20pm |
| Common Finals | S, May 9, 8:00-11:20am or 11:40am-3:00pm |

(M = Monday, T = Tuesday, W = Wednesday, R = Thursday, F = Friday, S = Saturday)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9230571389198303, "perplexity": 11494.951401734821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129257.81/warc/CC-MAIN-20200711224142-20200712014142-00072.warc.gz"}
https://brilliant.org/problems/summation-with-sine-cosine-sequences/
# Summation with sine & cosine sequences Geometry Level 3 Let $$a_n=\sin{\left(\dfrac{\pi}2-n\pi\right)}+\cos{n\pi}$$ and $$b_n=10\cos{\left(2n\pi-\dfrac{\pi}3\right)}$$. Find the value of $\large\displaystyle\sum_{n=1}^{\infty}\left(\dfrac{a_n}{b_n}\right)^n$
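A quick numerical check (my own sketch, not part of the original problem): $$a_n$$ simplifies to $$2(-1)^n$$ and $$b_n$$ to $$5$$, so each term is $$(-2/5)^n$$ and the series is geometric with ratio $$-2/5$$, summing to $$-2/7$$.

```python
import math

def a(n):
    return math.sin(math.pi/2 - n*math.pi) + math.cos(n*math.pi)

def b(n):
    return 10 * math.cos(2*n*math.pi - math.pi/3)

# a(n) = 2*(-1)**n and b(n) = 5, so (a/b)**n = (-2/5)**n; the geometric
# series with ratio -2/5 sums to (-2/5)/(1 + 2/5) = -2/7.
total = sum((a(n)/b(n))**n for n in range(1, 60))
print(total)  # ≈ -0.2857... = -2/7
```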
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9630907773971558, "perplexity": 2900.980662779459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948623785.97/warc/CC-MAIN-20171218200208-20171218222208-00245.warc.gz"}
https://www.gamedev.net/forums/topic/109852-cant-seem-to-find-the-problem-in-the-function-can-you/
#### Archived

This topic is now archived and is closed to further replies.

# Can't seem to find the problem in the function, can you?

This topic is 5594 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

Okay, so let me set this up for you. This function loops through a 13x13 array (m_gameGrid) and checks to make sure EVERY spot's m_iState is 0, except for the middle spot [6][6], which has to be set to 1. The problem is, I seem to win the game as long as any blocks that are 1 lie within either the 6th row OR the 6th column, AND the middle spot is set, which obviously isn't what I want. So let me show you this visually:

    000
    010   <--- True Winner
    000

    010
    111   <--- Shouldn't be a winner, but is anyways..
    010

    100
    010   <--- Isn't a winner.
    000

Here's the code:

    void CGameGrid::CheckWinner(void)
    {
        int iWinner = 0;

        if(m_gameGrid[6][6].m_iState == 0)
        {
            iWinner += 1;
        }

        // No point going through the loop if the middle isn't blue.
        if(iWinner == 0)
        {
            for(int iRow = 0; iRow < GRID_ROWS; iRow += 1)
            {
                for(int iColumn = 0; iColumn < GRID_COLUMNS; iColumn += 1)
                {
                    if(iRow != 6 && iColumn != 6)
                    {
                        if(m_gameGrid[iRow][iColumn].m_iState == 1)
                        {
                            iWinner += 1;
                        }
                    }
                }
            }
        }

        if(iWinner == 0)
            m_bSolved = true;
        else
            m_bSolved = false;

        return;
    }

##### Share on other sites

Well, make a check to see if the first and the third rows are equal to 000. If not, it's not a winner; else it is. I'm not completely sure I got your problem right.

##### Share on other sites

I don't think you did ^_^ ALL squares in the m_gameGrid have to be 0, or "off", and the middle has to be 1, or "on". And I did a 3x3 example to show you, but the grid can have odd shapes; sometimes it'll be like a capital T instead of a 3x3 grid. Things like that.

##### Share on other sites

Hm. From your nebulous description, I spotted three potential problem spots:

1) the need for an "else if" instead of an if after you check to see if the middle square is 1 (right below the "// No point going through the loop..." comment)

2) You're incrementing iWinner when any position is equal to 1 within that loop. That didn't seem to be your intention.

3) Finally, you're setting bSolved to true if iWinner is 0, which seems a bit hokey.

All in all, that code is rather ugly, and it doesn't seem to come close to doing what you described.

Peace,
ZE.

##### Share on other sites

Incrementing iWinner by 1 IS what I want to do. When the game finds a square with a state of 1, it increments iWinner by 1, so that it no longer equals 0 and thus doesn't declare the puzzle solved.

##### Share on other sites

As far as I can tell, you want to check every square except (6,6) in the loop, and if any of them are 1, the player hasn't won. If that's the case your if statement should read:

    if((iRow != 6) || (iColumn != 6))

which is equivalent to

    if(!(iRow == 6 && iColumn == 6))

##### Share on other sites

Ah. I see. That just wasn't intuitive to me, because the way booleans work, any non-zero value is true, so I would've thought that iWinner being greater than one would make it true. Anyway, that code works for me.

EDIT: After, that is, I make the change that Krunk suggested above (I figured it out before I read his post though).

Later,
ZE.

[edited by - zealouselixir on August 17, 2002 8:17:20 PM]
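The fix discussed in the thread (test "not the centre cell" as `!(row == 6 && col == 6)` rather than `row != 6 && col != 6`, which skips the whole centre row and column) can be sketched in Python; `is_solved` is a hypothetical stand-in for `CheckWinner`:

```python
def is_solved(grid, size=13, center=6):
    """Solved when every cell is 0 except the centre cell, which must be 1."""
    if grid[center][center] != 1:
        return False
    # De Morgan: "not the centre cell" is  not (r == center and c == center);
    # writing  r != center and c != center  wrongly exempts the whole
    # centre row and centre column from the check.
    return all(grid[r][c] == 0
               for r in range(size)
               for c in range(size)
               if not (r == center and c == center))

# 3x3 demo grids from the thread (centre index 1)
winner = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
plus   = [[0, 1, 0],
          [1, 1, 1],
          [0, 1, 0]]
print(is_solved(winner, size=3, center=1))  # True
print(is_solved(plus,   size=3, center=1))  # False
```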
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1668720841407776, "perplexity": 3370.0102309259587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512208.1/warc/CC-MAIN-20171211052406-20171211072406-00781.warc.gz"}
http://philsci-archive.pitt.edu/11120/
# Confirmation Measures and Sensitivity

Vassend, Olav B. (2014) Confirmation Measures and Sensitivity. In: UNSPECIFIED.

## Abstract

Stevens (1946) draws a useful distinction between ordinal scales, interval scales, and ratio scales. Most recent discussions of confirmation measures have proceeded on the ordinal level of analysis. In this paper, I give a more quantitative analysis. In particular, I show that the requirement that our desired confirmation measure be at least an *interval* measure naturally yields necessary conditions that jointly entail the log-likelihood measure. Thus I conclude that the log-likelihood measure is the only good candidate interval measure.

Item Type: Conference or Workshop Item (UNSPECIFIED)
Creators: Vassend, Olav B. (vassend@wisc.edu)
Keywords: Bayesian confirmation, confirmation measures, log-likelihood, Bayes, Bayesian, confirmation
Subjects: General Issues > Confirmation/Induction
Depositing User: Olav Vassend
Date Deposited: 05 Nov 2014 21:20
Item ID: 11120
Date: 3 November 2014
URI: http://philsci-archive.pitt.edu/id/eprint/11120
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913081288337708, "perplexity": 18781.402815294332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861752.19/warc/CC-MAIN-20180619021643-20180619041643-00098.warc.gz"}
http://mathhelpforum.com/calculus/179860-flux-vector-field-through-surface.html
# Math Help - Flux of vector field through a surface.

1. ## Flux of vector field through a surface.

I'm calculating the flux of v = (y, x) upwards across the upper half of the unit sphere (the part of the sphere with z >= 0). Either:

Find $N = \left( -\frac{\partial f}{\partial x}, -\frac{\partial f}{\partial y}, 1 \right)$, then integrate v dot N with respect to x and y with the following limits:

$x \in [-1, 1]$

$y \in [ -\sqrt{1 - x^2}, \sqrt{1 - x^2}]$

OR

Parameterize S using spherical polar coordinates $\theta , \phi$. Find N as the cross product of the partial derivatives with respect to theta and phi. Then integrate v dot N with respect to theta and phi with the following limits:

$\phi \in [0, 2\pi ]$

$\theta \in [0, \pi/2 ]$

I THINK this is correct, but I'm a little confused because in the first case I integrate with limits corresponding to a unit circle in R2, but in the second case it seems like my limits are drawing out the actual unit sphere?

Sorry for the lack of LaTeX; I tried but failed to make it work!! Any help really appreciated!

2. Originally Posted by Ant
Sorry for the lack of LaTeX; I tried but failed to make it work!!

Check out the general announcement near the top of any forum or search page. Use [tex] tags for now.

-Dan
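As a numeric sanity check of the first approach (my own sketch, assuming v is treated as (y, x, 0) and the surface as the graph z = sqrt(1 - x^2 - y^2) with N = (-z_x, -z_y, 1)), a midpoint Riemann sum over the unit disk gives a flux of essentially 0, as the symmetry of the integrand suggests:

```python
import math

# On z = sqrt(1 - x^2 - y^2):  z_x = -x/z, z_y = -y/z, so
# v . N = y*(x/z) + x*(y/z) + 0*1 = 2*x*y/z.
n = 200
h = 2.0 / n
flux = 0.0
for i in range(n):
    x = -1 + (i + 0.5) * h
    for j in range(n):
        y = -1 + (j + 0.5) * h
        r2 = x*x + y*y
        if r2 < 0.999:                       # stay strictly inside the disk
            flux += 2*x*y / math.sqrt(1 - r2) * h * h
print(flux)  # ≈ 0 (up to rounding): the integrand is odd in x, so it cancels
```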
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931213855743408, "perplexity": 734.8650290751737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00111-ip-10-33-131-23.ec2.internal.warc.gz"}
http://codereview.stackexchange.com/users/20316/allan
# Allan

- Website: privat.awn.dk
- Location: Denmark
- Age: 31
- Member for: 2 years, 2 months
- Seen: Dec 28 '14 at 19:48
- Profile views: 0

# 1 Question

- Implementing create and destroy functions to replace new and delete operators (score 4)

# 121 Reputation

This user has not answered any questions.

# 3 Tags

- memory-management
- c++11
- c++

# 8 Accounts

- Stack Overflow: 1,233 rep
- TeX - LaTeX: 150 rep
- Theoretical Computer Science: 136 rep
- Code Review: 121 rep
- Server Fault: 103 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6462181806564331, "perplexity": 13374.209874054399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936459513.8/warc/CC-MAIN-20150226074059-00269-ip-10-28-5-156.ec2.internal.warc.gz"}
https://byjus.com/ncert-solutions-class-12-chemistry/chapter-3-electrochemistry/
NCERT Solutions For Class 12 Chemistry Chapter 3 – Electrochemistry

NCERT solutions for class 12 chemistry chapter 3 – Electrochemistry are now available for free download. The solutions are given in PDF format, which makes them easy for students to access and refer to. Chapter 3 includes various questions on the topic of electrochemistry, and the solutions have been designed to help students understand and get used to all the concepts of the chapter. Students can practice questions from these materials, as the questions are based purely on the NCERT textbook prescribed for class 12 in CBSE schools. Prepared by subject experts, these solutions are useful for students preparing for the class 12 board exams as well as for JEE Advanced and medical entrance exams. The solutions consist of well-structured questions along with detailed explanations to help students learn and remember concepts easily. They can be viewed online on the website or downloaded as a PDF for later viewing without an internet connection.

Q 3.1: Arrange the metals Cu, Fe, Al, Zn and Mg in decreasing order of their reactivity, i.e. the order in which they displace each other from their salt solutions.

Ans: In decreasing order of reactivity, the given metals displace the others from their salt solutions in the order: Mg > Al > Zn > Fe > Cu
Q 3.2: Standard electrode potentials are given as: Mg2+/Mg = −2.37 V, Hg2+/Hg = 0.79 V, Cr3+/Cr = −0.74 V, Ag+/Ag = 0.80 V, K+/K = −2.93 V. Arrange the given metals in increasing order of reducing power.

Ans: Reducing power increases as the reduction potential decreases. In increasing order of standard electrode potential: K+/K < Mg2+/Mg < Cr3+/Cr < Hg2+/Hg < Ag+/Ag

Thus, in increasing order of reducing power, we can arrange the given metals as: Ag < Hg < Cr < Mg < K

Q 3.3: Represent the galvanic cell in which the following reaction takes place: Zn(s) + 2Ag+(aq) → Zn2+(aq) + 2Ag(s). Also find: (i) the negatively charged electrode, (ii) the current carriers in the cell, (iii) the individual reaction at each electrode.

Ans: The galvanic cell in which the given reaction takes place is depicted as: $$Zn_{ ( s ) } | Zn^{ 2+ }_{( aq )}||Ag^{ + }_{( aq )}|Ag_{( s )}$$

(i) The negatively charged electrode is the Zn electrode (anode).

(ii) The current carriers in the cell are ions. In the external circuit, current flows from silver to zinc.

(iii) Reaction at the anode: $$Zn_{ ( s ) }\rightarrow Zn^{ 2+ }_{( aq )} + 2 e^-$$

Reaction at the cathode: $$Ag^{+}_{ ( aq ) } + e^- \rightarrow Ag_{( s )}$$

Q 3.4: For the galvanic cells in which the following reactions take place, find the standard cell potentials: (i) 2Cr(s) + 3Cd2+(aq) → 2Cr3+(aq) + 3Cd (ii) Fe2+(aq) + Ag+(aq) → Fe3+(aq) + Ag(s). Also calculate the ∆rGθ and equilibrium constant of the reactions.
Ans: (i) $$E^{\Theta}_{Cr^{3+}/Cr}$$ = −0.74 V and $$E^{\Theta}_{Cd^{2+}/Cd}$$ = −0.40 V

The galvanic cell of the given reaction is depicted as: $$Cr_{ ( s ) }|Cr^{ 3+ }_{ ( aq ) }||Cd^{ 2+ }_{ aq }|Cd_{ ( s ) }$$

Now, the standard cell potential is $$E^{\Theta }_{cell} = E^{\Theta }_{R}-E^{\Theta }_{L}$$ = −0.40 − (−0.74) = +0.34 V

In the given equation, n = 6, F = 96487 C mol−1, $$E^{\Theta }_{cell}$$ = +0.34 V

Then, $$\Delta_rG^{\Theta} = -nFE^{\Theta}_{cell}$$ = −6 × 96487 C mol−1 × 0.34 V = −196833.48 CV mol−1 = −196833.48 J mol−1 = −196.83 kJ mol−1

Again, $$\Delta_rG^{\Theta} = - 2.303 \, R T \log K$$

$$\log {K} = \frac{-\Delta_rG^{\Theta}}{ 2.303 \, R T } = \frac{196.83\times10^{3}}{ 2.303 \times 8.314 \times 298 }$$ = 34.496

K = antilog (34.496) = 3.13 × 1034

(ii) $$E^{\Theta}_{Fe^{3+}/Fe^{2+}}$$ = 0.77 V and $$E^{\Theta}_{Ag^{+}/Ag}$$ = 0.80 V

The galvanic cell of the given reaction is depicted as: $$Fe^{ 2+ }_{( aq )}|Fe^{ 3+ }_{( aq )}|| Ag^+_{( aq )}|Ag_{( s )}$$

Now, the standard cell potential is $$E^{\Theta }_{cell} = E^{\Theta }_{R}-E^{\Theta }_{L}$$ = 0.80 − 0.77 = 0.03 V

Here, n = 1. Then, $$\Delta_r G^\Theta = -nFE^\Theta_{cell}$$ = −1 × 96487 C mol−1 × 0.03 V = −2894.61 J mol−1 = −2.89 kJ mol−1

Again, $$\Delta_r G^\Theta = -2.303 \, RT \log K$$

$$\log K = \frac{-\Delta_r G^\Theta}{ 2.303 \, RT } = \frac{2894.61 }{ 2.303 \times 8.314 \times 298 }$$ = 0.5073

K = antilog (0.5073) = 3.2 (approximately)

Q 3.5: Write the Nernst equation and emf of the following cells at 298 K: (i) Mg(s) | Mg2+(0.001M) || Cu2+(0.0001 M) | Cu(s) (ii) Fe(s) | Fe2+(0.001M) || H+(1M) | H2(g)(1 bar) | Pt(s) (iii) Sn(s) | Sn2+(0.050 M) || H+(0.020 M) | H2(g) (1 bar) | Pt(s) (iv) Pt(s) | Br2(l) | Br−(0.010 M) || H+(0.030 M) | H2(g) (1 bar) | Pt(s).
(i) For the given cell, the Nernst equation can be given as:

$$E_{cell} = E^0_{cell} - \frac{0.0591}{n}\log\frac{[Mg^{2+}]}{[Cu^{2+}]}$$

$$= 0.34 - (-2.36) - \frac{0.0591}{2} \log \frac{0.001}{0.0001}$$

$$= 2.7 -\frac{0.0591}{2}\log 10$$ = 2.7 − 0.02955 = 2.67 V (approximately)

(ii) For the given cell, the Nernst equation can be given as:

$$E_{cell} = E^0_{cell} - \frac{0.0591}{n}\log\frac{[Fe^{2+}]}{[H ^{+}]^2}$$

$$= 0 - ( - 0.44) - \frac{0.0591}{2}\log\frac{0.001}{(1)^{2}}$$

= 0.44 + 0.08865 = 0.52865 V = 0.53 V (approximately)

(iii) For the given cell, the Nernst equation can be given as:

$$E_{cell} = E^0_{cell} - \frac{0.0591}{n}\log\frac{[Sn^{2+}]}{[H ^{+}]^2}$$

$$= 0 - ( - 0.14) - \frac{0.0591}{2}\log\frac{0.050}{(0.020)^2}$$

= 0.14 − 0.0295 × log 125 = 0.14 − 0.062 = 0.078 V = 0.08 V (approximately)

(iv) For the given cell, the Nernst equation can be given as:

$$E_{cell} = E^0_{cell} - \frac{0.0591}{n}\log\frac{1}{[Br^{-}]^2[H ^{+}]^2}$$

$$= 0 - 1.09 - \frac{0.0591}{2}\log\frac{1}{(0.010)^2(0.030)^2}$$

= −1.09 − 0.02955 × $$\log\frac{1}{9\times 10^{-8}}$$

= −1.09 − 0.02955 × $$\log{ (1.11 \times 10^{7} )}$$

= −1.09 − 0.02955 × (7 + 0.0453)

= −1.09 − 0.208 = −1.298 V

Q 3.6: The following reaction takes place in the button cells widely used in watches and other devices: Zn(s) + Ag2O(s) + H2O(l) → Zn2+(aq) + 2Ag(s) + 2OH−(aq). For the given reaction, calculate $$\Delta_r G^\Theta$$ and $$E^\Theta$$.

Ans: For this cell, $$E^\Theta$$ = 1.104 V (from the standard electrode potentials), and n = 2. We know that $$\Delta_r G^\Theta = -nFE^\Theta$$ = −2 × 96487 × 1.104 = −213043.296 J = −213.04 kJ

Q 3.7: For the solution of an electrolyte, define conductivity and molar conductivity, and describe how they vary with concentration.

Ans: Conductivity of a solution is defined as the conductance of a solution of 1 cm length and 1 sq. cm area of cross-section. Specific conductance is the inverse of resistivity and is represented by the symbol κ.
If ρ is resistivity, then we can write: $$k = \frac{1}{\rho}$$

Equivalently, the conductivity of a solution at a given concentration is the conductance of a unit volume of solution kept between two platinum electrodes of unit area of cross-section, at a distance of unit length:

$$G = k \frac{a}{l} = k \times 1 = k$$     [Since a = 1, l = 1]

Conductivity decreases with a decrease in concentration, for both weak and strong electrolytes. This is because the number of ions per unit volume that carry the current in a solution decreases with a decrease in concentration.

Molar conductivity – Molar conductivity of a solution at a given concentration is the conductance of a volume V of solution containing 1 mole of the electrolyte, kept between two electrodes with area of cross-section A at a distance of unit length.

$$\Lambda_m = k \frac{A}{l}$$

Now, l = 1 and A = V (volume containing 1 mole of the electrolyte), so

$$\Lambda_m = k V$$

Molar conductivity increases with a decrease in concentration. This is because the total volume V of the solution containing one mole of the electrolyte increases on dilution.

The variation of $$\Lambda_m$$ with $$\sqrt{c}$$ for strong and weak electrolytes is shown in the following plot:

Q 3.8: The conductivity of a 0.20 M solution of KCl at 298 K is 0.0248 S cm−1. Find its molar conductivity.

Ans: Given, κ = 0.0248 S cm−1, c = 0.20 M

Molar conductivity, $$\Lambda_m = \frac{k \times 1000}{c}$$ $$= \frac{0.0248 \times 1000}{0.2}$$ = 124 S cm2 mol−1

Q 3.9: The resistance of a conductivity cell containing 0.001 M KCl solution at 298 K is 1500 Ω. If the conductivity of 0.001 M KCl solution at 298 K is 0.146 × 10−3 S cm−1, find the cell constant.
Ans: Given, conductivity, k = 0.146 × 10−3 S cm−1; resistance, R = 1500 Ω

Cell constant = k × R = 0.146 × 10−3 × 1500 = 0.219 cm−1

Q 3.10: The conductivity of NaCl at 298 K has been measured at different concentrations and the results are given below:

Concentration/M: 0.001, 0.010, 0.020, 0.050, 0.100
102 × κ/S m−1: 1.237, 11.85, 23.15, 55.53, 106.74

Calculate $$\Lambda_m$$ for all concentrations and draw a plot between $$\Lambda_m$$ and c1⁄2. Find the value of $$\Lambda^0_m$$.

Ans: Given, κ = 1.237 × 10−2 S m−1, c = 0.001 M. Then, κ = 1.237 × 10−4 S cm−1, c1⁄2 = 0.0316 M1/2

$$\Lambda_m =\frac{k}{c}$$ $$=\frac{1.237 \times 10^{ -4 } \; S\;cm^{-1} }{0.001 \; mol \; L^{ -1 }}\times \frac{1000\;cm^{3}}{L}$$ = 123.7 S cm2 mol−1

Given, κ = 11.85 × 10−2 S m−1, c = 0.010 M. Then, κ = 11.85 × 10−4 S cm−1, c1⁄2 = 0.1 M1/2

$$\Lambda_m =\frac{k}{c}$$ $$=\frac{11.85 \times 10^{ -4 } \; S\;cm^{-1} }{0.010 \; mol \; L^{ -1 }}\times \frac{1000\;cm^{3}}{L}$$ = 118.5 S cm2 mol−1

Given, κ = 23.15 × 10−2 S m−1, c = 0.020 M. Then, κ = 23.15 × 10−4 S cm−1, c1/2 = 0.1414 M1/2

$$\Lambda_m =\frac{k}{c}$$ $$=\frac{23.15 \times 10^{ -4 } \; S\;cm^{-1} }{0.020 \; mol \; L^{ -1 }}\times \frac{1000\;cm^{3}}{L}$$ = 115.8 S cm2 mol−1

Given, κ = 55.53 × 10−2 S m−1, c = 0.050 M. Then, κ = 55.53 × 10−4 S cm−1, c1/2 = 0.2236 M1/2

$$\Lambda_m =\frac{k}{c}$$ $$=\frac{55.53 \times 10^{ -4 } \; S\;cm^{-1} }{0.050 \; mol \; L^{ -1 }}\times \frac{1000\;cm^{3}}{L}$$ = 111.1 S cm2 mol−1

Given, κ = 106.74 × 10−2 S m−1, c = 0.100 M. Then, κ = 106.74 × 10−4 S cm−1, c1/2 = 0.3162 M1/2

$$\Lambda_m =\frac{k}{c}$$ $$=\frac{106.74 \times 10^{ -4 } \; S\;cm^{-1} }{0.100 \; mol\; \; L^{ -1 }}\times \frac{1000\;cm^{3}}{L}$$ = 106.74 S cm2 mol−1

Plotting these $$\Lambda_m$$ values against c1/2, the extrapolated line intercepts the $$\Lambda_m$$ axis at 124.0 S cm2 mol−1. Hence, $$\Lambda^0_m$$ = 124.0 S cm2 mol−1.

Q 3.11: The conductivity of a 0.00241 M solution of acetic acid is 7.896 × 10−5 S cm−1. Find its molar conductivity. Also, if the value of $$\Lambda^0_m$$ is given to be 390.5 S cm2 mol−1, calculate its dissociation constant.

Ans: Given, κ = 7.896 × 10−5 S cm−1, c = 0.00241 mol L−1

Then, molar conductivity, $$\Lambda_m = \frac{k}{c}$$ = $$\frac{7.896 \times 10^{-5} \; S \; cm^{-1}}{0.00241 \; mol \; L^{-1}}\times \frac{1000 \; cm^3}{L}$$ = 32.76 S cm2 mol−1

Again, $$\alpha =\frac{\Lambda_m }{\Lambda^0_m }$$ $$= \frac{32.76 \; S\; cm^2 \; mol^{-1} }{390.5 \; S\; cm^2 \; mol^{-1} }$$ = 0.084

Dissociation constant, $$K_a = \frac{c\alpha^2}{(1-\alpha)}$$ = $$\frac{ ( 0.00241 \; mol \; L^{-1} )( 0.084 )^2}{ ( 1 - 0.084 ) }$$ = 1.86 × 10−5 mol L−1

Q 3.12: How much charge is required for the following reductions of 1 mol of: (i) Al3+ to Al (ii) Cu2+ to Cu (iii) $$MnO^-_4$$ to Mn2+?

Ans: (i) $$Al^{3+} + 3e^- \rightarrow Al$$. Required charge = 3 F = 3 × 96487 C = 289461 C

(ii) $$Cu^{2+} + 2e^- \rightarrow Cu$$. Required charge = 2 F = 2 × 96487 C = 192974 C

(iii) $$MnO^-_4 \rightarrow Mn^{2+}$$, i.e. $$Mn^{7+} + 5e^-\rightarrow Mn^{2+}$$. Required charge = 5 F = 5 × 96487 C = 482435 C

Q 3.13: In terms of Faraday, how much electricity is required to produce: (i) 20.0 g of Ca from molten CaCl2 (ii) 40.0 g of Al from molten Al2O3?

Ans: (i) From the given data, $$Ca^{2+} + 2e^- \rightarrow Ca$$

Electricity required to produce 40 g of calcium = 2 F. Therefore, electricity required to produce 20 g of calcium = (2 × 20)/40 F = 1 F

(ii) From the given data, $$Al^{3+} + 3e^- \rightarrow Al$$

Electricity required to produce 27 g of Al = 3 F. Therefore, electricity required to produce 40 g of Al = (3 × 40)/27 F = 4.44 F

Q 3.14: Calculate the amount of electricity, in coulombs, required for the oxidation of 1 mol of: (i) H2O to O2 (ii) FeO to Fe2O3.
Ans: (i) From the given data, $$H_2O\rightarrow H_2 + \frac{1}{2}O_2$$

We can say that: $$O^{2-}\rightarrow \frac{1}{2}O_2 + 2e^-$$

Electricity required for the oxidation of 1 mol of H2O to O2 = 2 F = 2 × 96487 C = 192974 C

(ii) From the given data, $$Fe^{2+}\rightarrow Fe^{3+} + e^-$$

Electricity required for the oxidation of 1 mol of FeO to Fe2O3 = 1 F = 96487 C

Q 3.15: A current of 5 A is passed for 20 minutes between platinum electrodes to electrolyse a solution of Ni(NO3)2. Find the amount of Ni deposited at the cathode.

Ans: Given, current = 5 A; time = 20 × 60 = 1200 s

Charge = current × time = 5 × 1200 = 6000 C

According to the reaction, $$Ni^{2+} + 2e^-\rightarrow Ni_{ (s) }$$

Nickel deposited by 2 × 96487 C = 58.71 g. Therefore, nickel deposited by 6000 C = $$\frac{58.71 \times 6000}{2 \times 96487}g$$ = 1.825 g

Hence, 1.825 g of nickel will be deposited at the cathode.

Q 3.16: Three electrolytic cells A, B and C, containing solutions of ZnSO4, AgNO3 and CuSO4 respectively, are connected in series. A steady current of 1.5 amperes was passed through them until 1.45 g of silver was deposited at the cathode of cell B. How long did the current flow? What mass of zinc and of copper was deposited?

Ans: According to the reaction: $$Ag^+_{(aq)} +e^- \rightarrow Ag_{(s)}$$

i.e., 108 g of Ag is deposited by 96487 C.
Therefore, 1.45 g of Ag is deposited by $$\frac{96487\times 1.45}{108}C$$ = 1295.43 C

Given, current = 1.5 A. Time = 1295.43/1.5 s = 863.6 s = 864 s = 14.40 min

Again, $$Cu^{2+}_{(aq)} +2e^-\rightarrow Cu_{(s)}$$

i.e., 2 × 96487 C of charge deposits 63.5 g of Cu. Therefore, 1295.43 C of charge will deposit $$\frac{63.5 \times 1295.43}{2 \times 96487}$$ = 0.426 g of Cu

$$Zn^{2+}_{(aq)} +2e^-\rightarrow Zn_{(s)}$$

i.e., 2 × 96487 C of charge deposits 65.4 g of Zn. Therefore, 1295.43 C of charge will deposit $$\frac{65.4 \times 1295.43}{2 \times 96487}$$ = 0.439 g of Zn

Q 3.17: Using the standard electrode potentials given in Table 3.1, predict if the reaction between the following is feasible: (i) Fe3+(aq) and I−(aq) (ii) Ag+(aq) and Cu(s) (iii) Fe3+(aq) and Br−(aq) (iv) Ag(s) and Fe3+(aq) (v) Br2(aq) and Fe2+(aq).

Ans: A reaction is feasible when the E0 of the corresponding cell is positive.

(i) E0 is positive; hence the reaction is feasible.
(ii) E0 is positive; hence the reaction is feasible.
(iii) E0 is negative; hence the reaction is not feasible.
(iv) E0 is negative; hence the reaction is not feasible.
(v) E0 is positive; hence the reaction is feasible.

Q 3.18: Predict the products of electrolysis in each of the following: (i) An aqueous solution of AgNO3 with silver electrodes. (ii) An aqueous solution of AgNO3 with platinum electrodes. (iii) A dilute solution of H2SO4 with platinum electrodes. (iv) An aqueous solution of CuCl2 with platinum electrodes.

Ans: (i) At cathode: The following reduction reactions compete to take place at the cathode.

$$Ag^+_{(aq)}+e^- \rightarrow Ag_{(s)}$$ ; E0 = 0.80 V

$$H^+_{(aq)}+e^- \rightarrow \frac{1}{2}H_{2(g)}$$ ; E0 = 0.00 V

The reaction with the higher value of E0 takes place at the cathode. Therefore, deposition of silver will take place at the cathode.

At anode: The Ag anode is attacked by $$NO^-_3$$ ions. Therefore, the silver electrode at the anode dissolves in the solution to form Ag+.

(ii) At cathode: The following reduction reactions compete to take place at the cathode.
$$Ag^+_{(aq)}+e^- \rightarrow Ag_{(s)}$$ ; E0 = +0.80 V

$$H^+_{(aq)}+e^- \rightarrow \frac{1}{2}H_{2(g)}$$ ; E0 = 0.00 V

The reaction with the higher value of E0 takes place at the cathode. Therefore, deposition of silver will take place at the cathode.

At anode: Since Pt electrodes are inert, the anode is not attacked by $$NO^-_3$$ ions. Therefore, either OH− or $$NO^-_3$$ ions can be oxidized at the anode. But OH− ions, having the lower discharge potential, get preference and decompose to liberate O2.

$$OH^-\rightarrow OH + e^-$$

$$4OH\rightarrow 2H_2O + O_2$$

(iii) At the cathode, the following reduction reaction occurs to produce H2 gas:

$$H^+_{(aq)}+e^-\rightarrow \frac{1}{2}H_{2(g)}$$

At the anode, the following processes are possible:

$$2H_2O_{(l)}\rightarrow O_{2(g)} + 4H^+_{(aq)}+4e^-$$ ; E0 = +1.23 V             —–(i)

$$2SO^{2-}_{4(aq)}\rightarrow S_2O^{2-}_{8(aq)} + 2e^-$$ ; E0 = +1.96 V          —–(ii)

For dilute sulphuric acid, reaction (i) is preferred and O2 gas is produced. But for concentrated sulphuric acid, reaction (ii) occurs.

(iv) At cathode: The following reduction reactions compete to take place at the cathode.

$$Cu^{2+}_{(aq)}+2e^- \rightarrow Cu_{(s)}$$ ; E0 = +0.34 V

$$H^+_{(aq)}+e^- \rightarrow \frac{1}{2}H_{2(g)}$$ ; E0 = 0.00 V

The reaction with the higher value of E0 takes place at the cathode. Therefore, deposition of copper will take place at the cathode.

At anode: The following oxidation reactions are possible at the anode.

$$Cl^{-}_{(aq)} \rightarrow \frac{1}{2} Cl_{2(g)}+e^-$$ ; E0 = +1.36 V

$$2H_2O_{(l)} \rightarrow O_{2(g)} + 4H^+_{(aq)} +4e^-$$ ; E0 = +1.23 V

At the anode, the reaction with the lower value of E0 is preferred. But due to the overpotential of oxygen, Cl− gets oxidized at the anode to produce Cl2 gas.
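The deposition calculations above all reduce to Faraday's law, m = QM/(nF). A quick sketch in Python, using the constants from the worked solutions, reproduces the numbers:

```python
# Mass deposited in electrolysis via Faraday's law: m = Q * M / (n * F),
# where Q = I*t is the charge passed, n the electrons per ion, F the Faraday
# constant, and M the molar mass. Constants follow the worked answers above
# (F = 96487 C/mol; M = 58.71 for Ni, 108 for Ag, 63.5 for Cu, 65.4 for Zn).

F = 96487.0  # C per mole of electrons

def mass_deposited(charge_C, n_electrons, molar_mass):
    """Mass (g) of metal deposited by a given charge (C)."""
    return charge_C * molar_mass / (n_electrons * F)

# Q 3.15: Ni deposited by 5 A for 20 minutes
q_ni = 5.0 * 20 * 60                               # 6000 C
print(round(mass_deposited(q_ni, 2, 58.71), 3))    # → 1.825 (g)

# Q 3.16: charge needed to deposit 1.45 g Ag (n = 1), then Cu and Zn from it
q_ag = 1.45 * 1 * F / 108.0                        # ≈ 1295.43 C
print(round(q_ag / 1.5))                           # → 864 (s of 1.5 A current)
print(round(mass_deposited(q_ag, 2, 63.5), 3))     # → 0.426 (g of Cu)
print(round(mass_deposited(q_ag, 2, 65.4), 3))     # → 0.439 (g of Zn)
```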
https://mattrigby.blogs.bris.ac.uk/2011/05/18/four-new-hfcs/
# Four new HFCs

Hydrofluorocarbons (HFCs) are replacements for chlorofluorocarbons (CFCs), whose use is being phased out because they are primarily responsible for depleting the ozone layer. While HFCs don’t destroy ozone, they are often very powerful greenhouse gases, so it is important that we monitor their concentration and emissions. One difficulty in doing this is that there are many HFCs emitted into the atmosphere, and new ones are appearing all the time. To keep track of these gases, my colleagues in the AGAGE network have developed a system that can measure gas concentrations using mass spectrometry. The system is able to detect gases at very low concentrations by removing most of the nitrogen and oxygen from the measured samples, increasing the concentration of the pollutants they want to measure. This means that we can now detect ‘new’ gases very soon after they appear in the atmosphere. In this paper, Martin Vollmer from Empa in Switzerland describes the measurement of four HFCs that have appeared in the atmosphere over the last decade or so (HFC-227ea, HFC-236fa, HFC-245fa, HFC-365mfc). Using a combination of in situ measurements and new measurements of archived air samples, we can determine the entire air history of the four gases, from the year they first appeared in detectable amounts. Using these observations and a two-dimensional model of the atmosphere, we calculated the annual global emission rates. As is often the case, the emissions we found differed from inventory estimates by substantial amounts, highlighting the value of this sort of ‘top-down’ verification technique.
http://em.geosci.xyz/content/maxwell1_fundamentals/transient_planewaves_homogeneous/questions.html
# Questions

## Peak Distance (Diffusion Distance)

As a planewave propagates through the Earth, the location of its maximum amplitude (peak amplitude) propagates along with the signal. Begin by setting the time and conductivity to $$t$$ = 0.60 s and $$\sigma$$ = 10 S/m, respectively.

- Looking at the app, what is the peak distance?
- Now calculate the peak distance using this formula. How does your answer compare with the previous one?
- Reduce the time to 0.1 s. What happens to the peak distance? Is this behaviour supported by the formula?
- Reduce the conductivity to 1 S/m. What happens to the peak distance? Is this behaviour supported by the formula?

## Peak Time

The peak time is the time at which the maximum signal amplitude is observed at a particular location. Begin by setting the time and conductivity to $$t$$ = 0.01 s and $$\sigma$$ = 1 S/m, respectively.

- Gradually increase the time until the peak amplitude is at a depth of 400 m. Using $$z$$ = 400 m and $$\sigma$$ = 1 S/m, calculate the peak time with this formula. How do the results compare?
- Now increase the conductivity to $$\sigma$$ = 4 S/m. Adjust the time until the peak amplitude is at a depth of 400 m. At the same depth, then, is the peak time earlier or later in more conductive media?

## Peak Velocity

Begin by setting the time and conductivity to $$t$$ = 0.01 s and $$\sigma$$ = 1 S/m, respectively.

- Adjust the time and determine how long it took for the peak amplitude to reach a depth of 400 m. Now increase the conductivity to 10 S/m and determine how long it took for the peak amplitude to reach a depth of 400 m. Based on these two experiments, do planewaves propagate faster in more conductive or resistive media? Is your answer supported by the formula for peak velocity?
- Reset the time and conductivity to 0.01 s and 1 S/m, respectively. Determine the time it takes for the peak amplitude to reach 400 m. Now determine the additional time required for the peak amplitude to reach 800 m.
Based on this, does the peak velocity increase or decrease over time? Is your answer supported by the formula for peak velocity?
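The exercises above can be checked numerically. The expressions used here are the standard ones for a transient planewave diffusing in a homogeneous halfspace (peak distance $$z_{max} = \sqrt{2t/\mu_0\sigma}$$, peak time $$t_{max} = \mu_0\sigma z^2/2$$, peak velocity $$v_{max} = 1/\sqrt{2\mu_0\sigma t}$$); treat these exact forms as an assumption on my part, since the linked formula pages are not reproduced here. A short Python sketch:

```python
import math

MU0 = 4e-7 * math.pi  # magnetic permeability of free space (H/m)

def peak_distance(t, sigma):
    """Depth (m) of the peak amplitude at time t (s) in a halfspace of
    conductivity sigma (S/m): z_max = sqrt(2 t / (mu0 sigma))."""
    return math.sqrt(2.0 * t / (MU0 * sigma))

def peak_time(z, sigma):
    """Time (s) at which the peak amplitude passes depth z (m):
    t_max = mu0 sigma z^2 / 2 (the inverse of the peak-distance formula)."""
    return MU0 * sigma * z**2 / 2.0

def peak_velocity(t, sigma):
    """Speed (m/s) of the peak at time t: v_max = 1 / sqrt(2 mu0 sigma t)."""
    return 1.0 / math.sqrt(2.0 * MU0 * sigma * t)

# First exercise: t = 0.60 s, sigma = 10 S/m gives a peak at roughly 309 m.
print(round(peak_distance(0.60, 10.0)))
# Peak time at 400 m is later in the more conductive medium:
print(peak_time(400.0, 1.0), peak_time(400.0, 4.0))
# Peak velocity decreases with both time and conductivity:
print(peak_velocity(0.01, 1.0), peak_velocity(0.02, 1.0))
```

Note how the sqrt(t/sigma) scaling answers the qualitative questions: halving the time or doubling the conductivity both shrink the peak distance by the same factor.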
https://neos-guide.org/content/quadratic-programming-algorithms
Equality-constrained quadratic programs are QPs where only equality constraints are present. They arise both in applications (e.g., structural analysis) and as subproblems in active set methods for solving general QPs. Consider the equality-constrained quadratic program:

$\begin{array}{lll} EQP: & \min_x & \frac{1}{2} x^T Q x + c^T x \\ & \mbox{s.t.} & A x = b \end{array}$

The first-order necessary conditions for $x^*$ to be a solution of EQP state that there is a vector $\lambda^*$ such that the following system of equations, the KKT system, is satisfied:

$\begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x^*\\ \lambda^* \end{bmatrix} = \begin{bmatrix} -c \\ b \end{bmatrix}$

If we express $x^*$ as $x^* = x + p$, where $x$ is an estimate of the solution and $p$ is a step, we obtain an alternative form:

$\begin{bmatrix} Q & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} -p \\ \lambda^* \end{bmatrix} = \begin{bmatrix} c + Qx \\ Ax - b \end{bmatrix}$

The matrix on the left is called the KKT matrix. Let $Z$ denote the $n \times (n-m)$ matrix whose columns are a basis for the null space of $A$. When $A$ has full row rank and the reduced-Hessian matrix $Z^T Q Z$ is positive definite, there is a unique vector pair $(x^*, \lambda^*)$ that satisfies the KKT system.

There are several methods for solving the KKT system. Range-space methods can be used when $Q$ is positive definite and easy to invert (for example, diagonal or block-diagonal). Multiplying the first block equation of the alternative form above by $A Q^{-1}$ and subtracting the second equation, we obtain a linear system in the vector $\lambda^*$:

$(A Q^{-1} A^T)\lambda^* = A Q^{-1} c + b$

Then we recover $p$ by solving

$Qp = A^T \lambda^* - (c + Qx).$

Null-space methods require a null-space basis matrix $Z$. This matrix can be computed with orthogonal factorizations or, for sparse problems, by LU factorization of a submatrix of $A$.
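For a concrete sanity check, the KKT system can be assembled and solved directly for a small instance. The problem data below are invented for illustration (a production code would use a factorization suited to the KKT structure, not dense elimination):

```python
def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= f * a[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][k] * x[k] for k in range(r + 1, n))) / a[r][r]
    return x

# Toy EQP: minimize x1^2 + x2^2 - 2 x1 - 4 x2  subject to  x1 + x2 = 1,
# i.e. Q = 2I, c = (-2, -4), A = (1 1), b = 1. Assemble the KKT matrix
# [[Q, -A^T], [A, 0]] with right-hand side (-c, b), as in the text.
kkt = [[2.0, 0.0, -1.0],
       [0.0, 2.0, -1.0],
       [1.0, 1.0,  0.0]]
x1, x2, lam = solve(kkt, [2.0, 4.0, 1.0])
print(x1, x2, lam)   # → 0.0 1.0 -2.0
```

Eliminating x2 = 1 - x1 by hand reduces the objective to 2 x1^2 - 3, confirming the minimizer (0, 1).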
Given a feasible vector $x_0$, we can express any other feasible vector $x$ in the form $x = x_0 + Z w$ for some $w \in R^{n-m}$. Direct computation shows that the equality-constrained subproblem EQP is equivalent to the unconstrained subproblem

$\min_w \; \frac{1}{2} w^T (Z^T Q Z) w + (Q x_0 + c)^T Z w.$

If the reduced Hessian matrix $Z^T Q Z$ is positive definite, then the unique solution $w^*$ of this subproblem can be obtained by solving the linear system

$(Z^T Q Z) w = - Z^T (Q x_0 + c).$

The solution $x^*$ of the equality-constrained subproblem EQP is then recovered by using $x^* = x_0 + Z w^*$. Lagrange multipliers can be computed from $x^*$ by noting that the first-order condition for optimality in EQP is that there exists a multiplier vector $\lambda^*$ such that $Q x^* + c + A^T \lambda^* = 0$. If $A$ has full row rank, then $\lambda^* = - (A A^T)^{-1} A (Q x^* + c)$ is the unique set of multipliers. Most traditional codes use null-space methods.

Inequality-constrained quadratic programs are QPs that contain inequality constraints and possibly equality constraints. Active set methods can be applied to both convex and nonconvex problems. Gradient projection methods, which allow rapid changes in the active set, are most effective for QPs with only bound constraints. Interior point methods work well for large convex QPs.

### Active Set Methods

Active set methods start by finding a feasible point during an initial phase and then search for a solution along the edges and faces of the feasible set by solving a sequence of equality-constrained QPs. Active set methods differ from the simplex method for linear programming in that neither the iterates nor the solution need to be vertices of the feasible set. When the quadratic programming problem is nonconvex, these methods usually find a local minimizer. Finding a global minimizer is a more difficult task.
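The null-space recipe can also be traced through a toy EQP (minimize x1^2 + x2^2 - 2 x1 - 4 x2 subject to x1 + x2 = 1; data invented for illustration). With A = (1 1), the vector Z = (1, -1)^T spans null(A), and x0 = (1, 0) is feasible:

```python
# Null-space solve of a toy EQP: Q = 2I, c = (-2, -4), constraint x1 + x2 = 1.
# Here the reduced Hessian Z^T Q Z is a scalar, so the "linear system" is a
# single division. (This section's Lagrangian convention is Qx* + c + A^T lam = 0.)

Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, -4.0]
Z = [1.0, -1.0]       # basis for null(A), A = (1 1)
x0 = [1.0, 0.0]       # feasible starting point: A x0 = 1

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

g = [gi + ci for gi, ci in zip(matvec(Q, x0), c)]      # Q x0 + c
QZ = matvec(Q, Z)
ZtQZ = sum(Z[i] * QZ[i] for i in range(2))             # reduced Hessian (scalar)
w = -sum(Z[i] * g[i] for i in range(2)) / ZtQZ         # (Z^T Q Z) w = -Z^T (Q x0 + c)
x = [x0[i] + Z[i] * w for i in range(2)]
print(x)   # → [0.0, 1.0]
```

This reproduces the same minimizer a direct KKT solve would give, with only an (n-m)-dimensional system to solve.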
The active set $\mathcal{A}(x^*)$ at an optimal point $x^*$ is defined as the indices of the constraints at which equality holds:

$\mathcal{A}(x^*) = \{i \in \mathcal{E} \cup \mathcal{I}: a_i^T x^* = b_i\}.$

If $\mathcal{A}(x^*)$ were known, the solution could be found by solving an equality-constrained QP of the form:

$\begin{array}{ll} \min_x & q(x) = \frac{1}{2} x^T Q x + c^T x \\ \mbox{s.t.} & a_i^T x = b_i \quad \forall i \in \mathcal{A}(x^*) \end{array}$

Therefore, the main challenge in solving inequality-constrained QPs is determining this set. Given a feasible $x_k$, these methods find a direction $d_k$ by solving the subproblem

$\begin{array}{lll} \mbox{EQP}_k & \min & q(x_k + d) \\ & \mbox{s.t.} & a_i^T(x_k + d) = b_i\qquad i \in \mathcal{W}_k \end{array}$

where $q$ is the objective function $q(x) = \frac{1}{2} x^T Q x + c^T x$ and $\mathcal{W}_k$ is a working set of constraints. In all cases, $\mathcal{W}_k$ is a subset of $\mathcal{A}(x_k) = \{ i \in \mathcal{I} : a_i^T x_k = b_i \} \cup \mathcal{E}$, the set of constraints that are active at $x_k$. Typically, $\mathcal{W}_k$ either is equal to $\mathcal{A}(x_k)$ or else has one fewer index than $\mathcal{A}(x_k)$. The working set $\mathcal{W}_k$ is updated at each iteration with the aim of determining the set $\mathcal{A}^*$ of active constraints at a solution $x^*$. When $\mathcal{W}_k$ is equal to $\mathcal{A}^*$, a local minimizer of the original problem can be obtained as a solution of the equality-constrained subproblem $\mbox{EQP}_k$. The updating of $\mathcal{W}_k$ depends on the solution of the direction-finding subproblem. Subproblem $\mbox{EQP}_k$ has a solution if the reduced Hessian matrix $Z_k^T Q Z_k$ is positive definite. This is always the case if $Q$ is positive definite.
If subproblem $\mbox{EQP}_k$ has a solution $d_k$, we compute the largest step that does not violate any constraints,

$\mu_k = \min\left\{ \frac{b_i - a_i^T x_k}{a_i^T d_k}: \; a_i^T d_k > 0, i \not \in \mathcal{W}_k \right\},$

and we set $x_{k+1} = x_k + \alpha_k d_k$, where $\alpha_k = \min\{ 1 , \mu_k \}$. The step $\alpha_k = 1$ would take us to the minimizer of the objective function on the subspace defined by the current working set, but it may be necessary to truncate this step if a new constraint is encountered. The working set is updated by including in $\mathcal{W}_{k+1}$ all constraints active at $x_{k+1}$.

If the solution to subproblem $\mbox{EQP}_k$ is $d_k=0$, then $x_k$ is the minimizer of the objective function on the subspace defined by $\mathcal{W}_k$. First-order optimality conditions for subproblem $\mbox{EQP}_k$ imply that there are multipliers $\lambda_i^{(k)}$ such that $Q x_k + c + \sum_{i \in \mathcal{W}_k} \lambda_i^{(k)} a_i = 0$. If $\lambda_i^{(k)} \geq 0$ for all $i \in \mathcal{W}_k$, then $x_k$ is a local minimizer of problem QP. Otherwise, we obtain $\mathcal{W}_{k+1}$ by deleting one of the indices $i$ for which $\lambda_i^{(k)} < 0$. As in the case of linear programming, various pricing schemes for making this choice can be implemented.

If the reduced Hessian matrix $Z_k^T Q Z_k$ is indefinite, then subproblem $\mbox{EQP}_k$ is unbounded below. In this case we need to determine a direction $d_k$ such that $q(x_k + \alpha d_k)$ is unbounded below, using techniques based on factorizations of the reduced Hessian matrix. Given $d_k$, we compute $\mu_k$ as before, and define $x_{k+1} = x_k + \mu_k d_k$. The new working set $\mathcal{W}_{k+1}$ is obtained by adding to $\mathcal{W}_k$ all constraints active at $x_{k+1}$. A key to the efficient implementation of active set methods is the reuse of information from solving the equality-constrained subproblem at the next iteration.
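The ratio test that computes the step and the blocking constraint can be sketched as follows; constraints are taken in the form a_i^T x <= b_i, and the data are invented for illustration:

```python
# Ratio test from the active-set iteration: given a feasible x_k and a search
# direction d_k, find the largest step keeping every constraint a_i^T x <= b_i
# satisfied. Working-set constraints are skipped: the step stays on them by
# construction (a_i^T d = 0 there).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def ratio_test(x, d, A, b, working_set):
    """Return (alpha_k, blocking_index); blocking_index is None if alpha_k = 1."""
    alpha, blocking = 1.0, None
    for i, (a_i, b_i) in enumerate(zip(A, b)):
        ad = dot(a_i, d)
        if i in working_set or ad <= 0.0:
            continue                        # constraint cannot be violated along d
        step = (b_i - dot(a_i, x)) / ad     # step at which constraint i becomes active
        if step < alpha:
            alpha, blocking = step, i
    return alpha, blocking

# x_k = (0, 0), d_k = (1, 1); constraints x1 <= 2 and x1 + x2 <= 1
A = [[1.0, 0.0], [1.0, 1.0]]
b = [2.0, 1.0]
print(ratio_test([0.0, 0.0], [1.0, 1.0], A, b, set()))   # → (0.5, 1)
```

Here the second constraint blocks at alpha = 0.5, so its index would be added to the working set for the next iteration.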
The only difference between consecutive subproblems is that the working set grows or shrinks by a single component. Efficient codes perform updates of the matrix factorizations obtained at the previous iteration, rather than calculating them from scratch each time.

### Interior Point Methods

Path-following methods (also known as trajectory-following, barrier or interior-point methods) offer a good alternative to the earlier active-set methods. Although path-following methods may be applied to the general problem QP, it is easier to describe them for problems of the form

$\begin{array}{lll} QP2: & \min & \frac{1}{2} x^T Q x + c^T x\\ & \mbox{s.t.} & A x = b \\ & & x \geq 0 \end{array}$

for which first-order optimality conditions are that

$\begin{array}{lll} A x^* & = & b \\ Qx^* + c & = & A^T y^* + z^*\\ x_i^* z_i^* & = & 0 \; \forall i = \{1,\dots,n\} \end{array}$

for optimal primal variables $x^* \geq 0$, Lagrange multipliers $y^*$ and dual variables $z^* \geq 0$. In their simplest form, interior-point methods trace the central path that is defined as the solution $v(t) = (x(t),y(t),z(t))$ to the parametric nonlinear system

$A x(t) = b, \quad Qx(t) + c = A^T y(t) + z(t), \quad x_i(t) z_i(t) = t \;\; \mbox{for} \;\; i = \{1,\dots,n\}$

with $(x(t),z(t)) > 0$ as the scalar $t$ decreases to 0. Notice that all points on the central path are primal and dual feasible, and that complementary slackness is achieved in the limit as $t$ approaches 0. A disadvantage of this simple idea is that a point $v(t_0)$ must be available for some $t_0 > 0$, but such a point may be found as a first-order critical point of the logarithmic-barrier function

$\frac{1}{2} x^T Q x + c^T x - t_0 \sum_{i=1}^n \log x_i$

within the region $A x = b$; indeed, early path-following methods were based on a sequential minimization of the logarithmic barrier function.
To cope with this potential deficiency, infeasible interior point methods start from any $v^s = (x^{s},y^{s},z^{s})$ for which $(x^{s},z^{s}) > 0$ and follow instead the trajectory $v(t)$ that satisfies the homotopy

$A x(t) - b = \theta(t) [ A x^s - b ],$

$Q x(t) + c - A^T y(t) - z(t) = \theta(t) [ Qx^s + c - A^T y^s - z^s ],$

and $x_i(t) z_i(t) = \theta(t) x_i^s z_i^s$ for $i = \{1,\dots,n\}$ as $t$ decreases from 1 to 0. The scalar function $\theta(t)$ may be any increasing function for which $\theta(0) = 0$ and $\theta(1) = 1$. The simplest choice $\theta(t) = t$ is popular, but there are theoretical advantages in using $\theta(t) = t^2$, since then the unknown trajectory $v(t)$ is analytic for convex problems at $t = 0$. In practice, it is sometimes advantageous for numerical reasons to aim for a small value of the complementarity instead of zero, and in this case the complementary slackness part of the homotopy may be replaced by

$x_i(t) z_i(t) = \theta(t) x_i^s z_i^s + [1-\theta(t)] \sigma \;\; \forall i = \{1,\dots,n\}$

for some small centering parameter $\sigma > 0$. Notice that all of these homotopies define their trajectories $v(t)$ implicitly; all that is known is the starting point $v^s$. Many path-following methods replace the true but unknown $v(t)$ by a Taylor series approximation $v^{s}(t)$ evaluated about $v^s$ and trace this approximation instead. Clearly, as $v^{s}(t)$ is simply an approximation, it will most likely diverge from $v(t)$ as $t$ decreases from 1 towards 0. To cope with this, sophisticated safeguarding rules are used to decide how far $t$ may decrease while still giving an adequate approximation; if $t^l$ is this best value, $v^s$ is replaced by $v^{s}(t^l)$ and the process repeated. The resulting iteration defines a typical path-following method.
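A bare-bones version of such an iteration (first-order Newton steps on the perturbed optimality conditions of QP2, a fixed centering parameter, and a fraction-to-boundary rule) might look like the sketch below. The instance is invented for illustration, and real codes are far more careful:

```python
def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= f * a[col][k]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):
        out[r] = (a[r][n] - sum(a[r][k] * out[k] for k in range(r + 1, n))) / a[r][r]
    return out

# QP2 toy instance: minimize x1^2 + x2^2 - 4 x2  s.t.  x1 + x2 = 1, x >= 0.
# Conditions: A x = b,  Q x + c = A^T y + z,  x_i z_i -> 0 with (x, z) > 0.
Q = [[2.0, 0.0], [0.0, 2.0]]; c = [0.0, -4.0]; A = [1.0, 1.0]; b = 1.0

x, y, z = [0.5, 0.5], 0.0, [1.0, 1.0]            # strictly interior start
for _ in range(30):
    mu = 0.2 * (x[0]*z[0] + x[1]*z[1]) / 2       # complementarity target, sigma = 0.2
    # Newton system in the unknowns (dx1, dx2, dy, dz1, dz2)
    M = [[A[0],    A[1],    0.0,   0.0,  0.0],   # primal feasibility
         [Q[0][0], Q[0][1], -A[0], -1.0, 0.0],   # dual feasibility, row 1
         [Q[1][0], Q[1][1], -A[1], 0.0, -1.0],   # dual feasibility, row 2
         [z[0],    0.0,     0.0,   x[0], 0.0],   # linearized x1 z1 = mu
         [0.0,     z[1],    0.0,   0.0,  x[1]]]  # linearized x2 z2 = mu
    r = [b - (A[0]*x[0] + A[1]*x[1]),
         -(Q[0][0]*x[0] + Q[0][1]*x[1] + c[0] - A[0]*y - z[0]),
         -(Q[1][0]*x[0] + Q[1][1]*x[1] + c[1] - A[1]*y - z[1]),
         mu - x[0]*z[0],
         mu - x[1]*z[1]]
    dx1, dx2, dy, dz1, dz2 = solve(M, r)
    alpha = 1.0                                   # fraction-to-boundary step
    for v, dv in ((x[0], dx1), (x[1], dx2), (z[0], dz1), (z[1], dz2)):
        if dv < 0:
            alpha = min(alpha, -0.9995 * v / dv)
    x = [x[0] + alpha*dx1, x[1] + alpha*dx2]
    y += alpha*dy
    z = [z[0] + alpha*dz1, z[1] + alpha*dz2]

print([round(v, 4) for v in x])   # → [0.0, 1.0]
```

The iterates stay strictly positive while the complementarity products are driven toward zero, so the bound x1 >= 0 becomes active only in the limit.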
The centering parameter is sometimes computed after an initial predictor step (a first-order Taylor approximation with $\sigma = 0$) is used to compute an estimate of the solution. Once $\sigma > 0$ is known, the Taylor approximation to the revised homotopy gives the corrector step. The Taylor series coefficients are found by repeated differentiation of the homotopy equations with respect to $t$, and the $k$th-order coefficients $(x^{(k)}, y^{(k)}, z^{(k)})$ may be obtained by solving the linear system

$\begin{pmatrix} A & 0 & 0 \\ Q & - A^T & - I\\ Z^s & 0 & X^s \end{pmatrix} \begin{pmatrix} x^{(k)} \\ y^{(k)} \\ z^{(k)} \end{pmatrix} = \begin{pmatrix} r_p^{(k)} \\ r_d^{(k)} \\ r_c^{(k)} \end{pmatrix} \doteq r^{(k)},$

where $X^s$ and $Z^s$ are the diagonal matrices whose diagonal entries are $x^s$ and $z^s$ respectively, and the right-hand side $r^{(k)}$ depends on the values of previously-calculated lower-order coefficients. Since the coefficient matrix is the same for each order of coefficients, a single factorization enables us to find increasingly accurate Taylor approximations at a gradually increasing but reasonable cost. Block elimination of the system results in the smaller, symmetric system

$\begin{pmatrix} Q + (X^s)^{-1} Z^s & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} x^{(k)} \\ - y^{(k)} \end{pmatrix} = \begin{pmatrix} r_d^{(k)} + (X^s)^{-1}r_c^{(k)} \\ r_p^{(k)} \end{pmatrix},$

and this is usually exploited in practice; the variables $z^{(k)}$ may be recovered as $(X^s)^{-1}[r_c^{(k)} - Z^s x^{(k)}]$. Some algorithms seek to avoid possible numerical difficulties by regularizing these defining systems. For example, the coefficient matrix above is replaced by

$\begin{pmatrix} Q + (X^s)^{-1} Z^s + D_p & A^T \\ A & - D_d \end{pmatrix}$

where $D_p$ and $D_d$ are small, positive-definite diagonal perturbations. Other algorithms try to avoid these difficulties by pre-processing the data to remove singularities.
Both techniques appear to work well in practice. If the problem is convex, iterations of the form described can be shown to converge very fast, in a polynomially-bounded number of iterations. For non-convex problems, most methods prefer instead to approximately minimize the logarithmic barrier function for a decreasing sequence of values of $t_0$ using a globally-convergent (linesearch or trust-region) method of the kind typically used for linearly-constrained optimization; many of the details - such as the structure of the vital linear systems - are effectively the same as in the convex case.

## Linear Least-Squares Problems

Linear least-squares problems are special instances of convex quadratic programs that arise frequently in data-fitting applications. The linear least-squares problem is

$\begin{array}{lllll} LLS: & \min & \frac{1}{2} \| C x - d \|_2^2 & & \\ & \mbox{s.t.} & a_i^T x & = & b_i \; \forall i \in \mathcal{E}\\ & & a_i^T x & \geq & b_i \; \forall i \in \mathcal{I} \end{array}$

where $C \in R^{m\times n}$ and $d \in R^m$. LLS is a special case of problem QP, which can be seen by replacing $Q$ by $C^T C$ and $c$ by $-C^T d$ in problem QP. In general, it is preferable to solve a least squares problem with a code that takes advantage of the special structure of the least squares problem. LSSOL is a Fortran package for constrained linear least-squares problems and convex QPs. See Lawson and Hanson (1974) for more information.

## References

- Lawson, C. L. and Hanson, R. J. 1974. Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ.
http://moreisdifferent.com/2013/01/28/quantum-effects-in-water/
Note: although I personally found writing this post to be a useful exercise, it unfortunately turned out to be a rather long, rambling and at times rather technical brain dump. There are enough topics mentioned here for dozens of future posts (not to suggest that I could write on all of the topics mentioned off the top of my head – most would require research on my part.) Likely certain things will be fleshed out in future posts. I would appreciate any feedback if there are particular things my readers would like to see in future posts.

In my last post I made the rather vague statement that quantum effects account for “about 10%” of the properties of water. I was basing this off the well-established understanding that the hydrogen bond is roughly 10% quantum covalent and 90% electrostatic. Still, no doubt some of my readers may be confused about what I mean by a “quantum effect”. In this post I hope to clarify the term and some of the confusions surrounding the classical / quantum distinction. I will also give some precise examples of quantum effects in water.

First I’d like to get over a possible semantic / philosophical stumbling block, which is the fact that everything is quantum mechanical. This is true. But the world can also be described by classical theory to a large degree of accuracy. Thus the world contains both classical and quantum effects, the former being describable in terms of classical theory to the accuracy required, and the latter being describable only in terms of quantum theory to the accuracy required. Usually the classical regime is described as the regime with large masses and big energies – things like baseballs, pendulums and billiard balls — this much is obvious. What we are interested in here is knowing to what extent we can understand molecular dynamics, and hence the properties of materials, just by doing classical mechanics (+ some additional low-computational-cost tricks), which is a difficult question.
This is a very important question from a practical perspective, because quantum mechanical calculations are extremely expensive to perform [roughly speaking, the cost of representing a wavefunction grows exponentially with the number of particles]. Only two-particle systems can be solved exactly using quantum mechanics. In principle, larger ensembles can be treated with perturbation theory, using things which look like “Feynman diagrams” in the expansion of the partition function, but in practice this has only proved doable for simple systems like the Lennard-Jones gas — anything more complicated being intractable with current methods. Using variational techniques, ground state energies of large systems can be computed to high accuracy, but quantum dynamics in particular remains extremely expensive. Many levels of approximation are needed (Born–Oppenheimer approximation + DFT + pseudopotentials) to do a quantum dynamics simulation, and even then such simulations are limited to a few hundred atoms and run times of < 100 ps, even on cluster computers. [By comparison, the time it takes a protein to fold ranges from 100,000 ps for fast small ones to several seconds (1 s = 1,000,000,000,000 ps) for the larger ones.] Also, calculations of the compressibility and dielectric spectra require simulations of at least a few nanoseconds to hundreds of nanoseconds at low temperatures (1 ns = 1000 ps). Thus there is great interest in building molecular models (in particular, of water) which run using classical mechanics (+ additional tricks to capture some of the quantum effects). Besides these practical issues, the aforementioned problem is also interesting from a pure physics perspective (i.e., answering it gives us a deeper understanding of nature), as we will see, because we learn more about how classical mechanics ‘emerges’ from quantum mechanics.
In particular, water is known to be a highly quantum mechanical molecule, because it has two hydrogens which play a big role (their low mass => more quantum effects), whereas in large biomolecules people usually don’t care so much about the quantum mechanical nature of the hydrogens. By a “classical calculation” I mean that we are just integrating (solving) Newton’s second law with some pairwise potential. [Non-pairwise potentials, such as those which are functions of three bodies (the famous example being the Axilrod–Teller potential), are known to be important in water and can be included in classical simulations, but are a huge headache to code, according to the literature. I’ve never seen it done in any publications.] The pairwise potentials that are used are the 1/r Coulomb potential + the Lennard-Jones potential and possibly intramolecular potentials, which range considerably in sophistication. On top of this you can also add polarizability, which takes into account the rapid shifting of the electron clouds during each timestep of the simulation. (There are several ways of doing this. In one case you embed a polarizable dipole in each molecule and then find the equilibrium orientations of all the dipoles at each timestep, which requires solving a large matrix equation. It is not a pairwise interaction; rather, it is a cooperative interaction.) In the TTM3 model, charge clouds and dipole ‘clouds’ are used to simulate the delocalization of the charge. This takes into account a “quantum effect” via the classical trick of using a charge cloud to represent the electron wavefunction.

In summary, here is a breakdown of the ‘levels of simulation’, with numbers giving the relative computational cost:

1. Rigid (“ball and stick”) = 1
2. Flexible (“ball and spring”) = 5
3. Rigid + Polarizable = 6
4. Flexible + Polarizable = 30
5. Flexible + Nuclear quantum effects (NQE) = 175
6. Classical rigid nuclei + electronic quantum effects via density functional theory (DFT) = 1000
7. Flexible classical nuclei + DFT = 5000
8. DFT + flexible nuclei + NQE = 175000

[This is taken from (Vega, 2011)] What this shows us is that a simulation that takes an hour with a rigid model will take roughly 175,000 hours with a full quantum (also called “ab initio”) simulation. I consider numbers 1–4 to be ‘classical techniques’ whereas 5–8 are quantum techniques.

In classes on quantum mechanics, we are often told that quantum theory reduces to classical theory when “hbar goes to zero” (hbar being the constant defining the energy scale for quantum mechanics, or more precisely the angular momentum scale, since it has units of angular momentum). This is a rather sloppy statement, because in fact hbar is a constant, as every physicist knows. It is only when the energy scale that we are dealing with becomes much larger than hbar that – somehow – classical mechanics starts to work. To people working with the path integral formulation of quantum mechanics (which includes both particle theorists and also people doing path-integral molecular dynamics), the classical–quantum transition is seemingly easy to understand, because the path integral formulation transitions perfectly into the famous variational formulation of classical mechanics in the limit that E(x)/hbar ≫ 1, where E(x) can be either the kinetic or potential energy of the system. This way of thinking isn’t very useful for us – except insofar as path-integral molecular dynamics is a technique which helps take into account the nuclear quantum effects in molecular dynamics. As with everything in life and in physics, subtleties abound. Here I would like to describe several quantum effects which are present in water and ice.

## The increase in the dielectric constant upon freezing

Observe the following, from a paper by Abascal & Vega (J. Phys. Chem.
A, 2011) (pdf). The blue and pink lines show the prediction of two classical models (both of which are among the very best "ball and stick" models yet devised). As you can see, they are quite a bit different from the experimental result. The first difference is in the magnitude: in particular, TIP4P/2005 underestimates the dielectric constant of water. This can be corrected by artificially increasing the dipole moment (somebody actually did this; I can dig up the reference if anyone wants), but this no doubt compromises other aspects of the model, which is fit to reproduce many things, the dielectric constant being just one of them, and often of lower priority. The second difference is that the slope (the change in dielectric constant with temperature) is different – although this isn't exactly true in the liquid regime: if you look at more data on TIP4P/2005 you will see that it captures the slope in the liquid regime pretty well (indeed, better than other popular ball and stick models). (All of the simulation points have error bars, which unfortunately are not shown here.) The difference I want to focus on here is the fact that the dielectric constant of water increases when going from the liquid to the solid phase, whereas the dielectric constant of the classical models decreases. This is rather peculiar if you think about it. The dielectric constant measures how much polarization you get in a substance when you apply an external electric field. In water, which is dipolar, most of the polarization comes from reorientation of the intrinsic electric dipoles. In most dipolar substances, the dielectric constant decreases upon freezing – when the dipoles become 'frozen' in place, they are harder to reorient. But in water, it's the other way around. As was discovered in the 80s, the reason for this is that in ice there is a new contribution to the polarization, which is the tunneling of protons.
This tunneling manifests itself both in the increase in the dielectric constant and in a surprisingly high conductivity in ice. During the past few decades this tunneling has been a subject of intense research within the field of ice physics. It is a decidedly quantum effect, since nobody has figured out a way to incorporate it into classical simulations.

## Anomalous proton diffusion in water – the Grotthuss mechanism

Proton tunneling can also occur in water, although to a lesser extent than in ice, so it doesn't contribute much to the dielectric constant. (At least, this is what I've inferred from the literature. I should run the numbers on this using the experimental data on proton conductivity to see how small the contribution is.) However, it leads to an anomalously high diffusion constant for protons in water (which is a very important effect, since whenever you make an acid you end up with a ton of protons diffusing around). Amusingly, this "quantum effect" was theorized in 1806 by Theodor Grotthuss, although at that time he thought the water molecule was OH and not HOH. Here's a nice animation of the process from Wikipedia: There is no easy way to take something like this into account in a classical molecular dynamics simulation of acid and water.

## The effects of nuclear quantum effects

As I mentioned earlier, the light hydrogens are a source of many quantum effects in water. The de Broglie wavelength of a proton with the 'thermal energy' at room temperature (kT ~ 0.025 eV) is about 1.8 angstroms. The width of a hydrogen atom is about 1 angstrom, so this gives some indication that the delocalization of the proton is going to be important. One way to take this into account is to use Path Integral Molecular Dynamics (PIMD).
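That 1.8 angstrom figure is easy to check. Below is a minimal sketch (my own back-of-the-envelope calculation, not code from any of the papers mentioned) computing the de Broglie wavelength for a particle carrying thermal momentum p = sqrt(2·m·kT):

```python
import math

h  = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K
mp = 1.67262192e-27   # proton mass, kg

def thermal_wavelength(mass_kg, temperature_K):
    """de Broglie wavelength for a particle with thermal momentum p = sqrt(2*m*kT)."""
    p = math.sqrt(2.0 * mass_kg * kB * temperature_K)
    return h / p

lam = thermal_wavelength(mp, 300.0)  # room temperature
print(f"proton thermal de Broglie wavelength: {lam * 1e10:.2f} angstrom")  # ~1.8 angstrom
```

Swapping in the deuteron mass (roughly twice the proton's) shrinks the wavelength by about 1/sqrt(2), which is one quick way to see why heavy water is "less quantum" than light water.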
I'm not going to go into PIMD in any detail, but it's all based on a single mathematical formula called the Trotter expansion:

exp(A + B) = lim_{P -> infinity} [ exp(A/(2P)) exp(B/P) exp(A/(2P)) ]^P

where A and B are operators (not numbers) on the Hilbert space. It turns out that using this formula, one can show there is an isomorphism between the solution of the quantum mechanical path integral and the solution of a classical system of masses coupled by springs such that they form a circle. P corresponds to the number of masses and is also called the "Trotter number" or "number of beads". In a recent paper, Sriram Ganeshan, a recent graduate of our group, looks at the oxygen-oxygen radial distribution function (RDF) using PIMD. For those of you who don't know what a radial distribution function is, it shows the average density of atoms at a given distance away from a certain atom, relative to the average density of those atoms in the entire substance, which is taken as 1. In other words, if you were an oxygen atom, it shows at what distances you are most likely to see the other oxygen atoms around you. In a solid, the RDF has many peaks which are evenly spaced. In the liquid, the peaks decrease in amplitude. In other words, the solid has long-range ordering, whereas the liquid only has short-range ordering. Here you see the difference between a model (TIP4P/2005-F) with the PIMD nuclear quantum effects and without. The difference looks small, but in fact it's considered quite large by people who look at these things all the time. What you see is that the maxima are less high and the minima are less low when PIMD is used. This indicates that the quantum effects lead to less structure in the liquid. This makes perfect sense, since the protons are 'delocalized', or in other words 'smeared out', over space. [In his paper, Ganeshan also studies a method of incorporating these delocalization effects and zero-point effects using what is called 'colored noise'.]
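The symmetric Trotter splitting quoted above is easy to sanity-check numerically. The sketch below is my own toy example (not code from Ganeshan's paper): it uses hand-rolled 2x2 matrices and a Taylor-series matrix exponential, and shows the splitting error shrinking as the number of "beads" P grows:

```python
# Toy check of the symmetric Trotter splitting for small matrices; the
# matrices A and B are arbitrary non-commuting examples (an assumption,
# not taken from any of the papers discussed).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(X, s):
    return [[s * x for x in row] for row in X]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

IDENT = [[1.0, 0.0], [0.0, 1.0]]

def expm(X, terms=40):
    """exp(X) via its Taylor series; adequate for the small matrices here."""
    result, term = IDENT, IDENT
    for n in range(1, terms):
        term = scale(matmul(term, X), 1.0 / n)
        result = add(result, term)
    return result

def trotter(A, B, P):
    """[exp(A/2P) exp(B/P) exp(A/2P)]^P, the symmetric splitting."""
    step = matmul(matmul(expm(scale(A, 0.5 / P)), expm(scale(B, 1.0 / P))),
                  expm(scale(A, 0.5 / P)))
    out = IDENT
    for _ in range(P):
        out = matmul(out, step)
    return out

A = [[0.0, 1.0], [1.0, 0.0]]    # these two matrices do not commute
B = [[1.0, 0.0], [0.0, -1.0]]

exact = expm(add(A, B))
errs = []
for P in (1, 4, 16, 64):
    approx = trotter(A, B, P)
    errs.append(max(abs(exact[i][j] - approx[i][j])
                    for i in range(2) for j in range(2)))
    print(f"P = {P:3d}   max error = {errs[-1]:.2e}")
```

For the symmetric splitting the error should fall off roughly as 1/P², which is part of what makes modest bead numbers workable in PIMD.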
It would be unfair of me not to mention at this juncture the work of current group member Betül Pamuk on nuclear quantum effects in ice. In ice, the zero-point motions of the hydrogens lead to a larger than expected volume at low temperatures. Surprisingly, in heavy water (D2O) there is an even larger excess volume, which is really unexpected, because the heavier deuterium should be less quantum than hydrogen. This bizarre effect was first analyzed by our group and explained using some theory. It was experimentally confirmed by Stony Brook University Prof. Peter Stephens using x-ray diffraction, where they also explored what happens when you replace the oxygen-16 with oxygen-18. The resulting paper can be found on the arXiv.

## The absorption spectrum

This is something which I don't really understand very well right now, so I won't attempt to offer a real explanation. However, I would like to show the striking results of several simulations. Here is the infrared absorption spectrum reported by Paesani, Iuchi, and Voth in 2007 (annotated by myself). Note the big difference between the classical and quantum models. The classical model here is a polarizable model called TTM2.1-F, which features a very sophisticated treatment of the polarizability. The amplitude of the absorption is much lower in the classical model, whereas with the NQE the peaks are higher and slightly more 'smeared out'. The stretching modes of the quantum models (neither of which is 'fully quantum', taking both the quantum nuclei and the electrons fully into account) don't compare well with experiment, so obviously some quantum effects are not being taken into account.
https://physics.stackexchange.com/questions/397694/what-makes-a-theory-quantum/398679
# What makes a theory "Quantum"?

Say you cook up a model about a physical system. Such a model consists of, say, a system of differential equations. What criterion decides whether the model is classical or quantum-mechanical? None of the following criteria are valid:

• Partial differential equations: Both the Maxwell equations and the Schrödinger equation are PDEs, but the first model is clearly classical and the second one is not. Conversely, finite-dimensional quantum systems have ordinary differential equations as equations of motion, so the latter are not restricted to classical systems only.
• Complex numbers: You can use those to analyse electric circuits, so that's not enough. Conversely, you don't need complex numbers to formulate standard QM (cf. this PSE post).
• Operators and Hilbert spaces: You can formulate classical mechanics à la Koopman-von Neumann. In the same vein:
• Dirac-von Neumann axioms: These are too restrictive (e.g., they do not accommodate topological quantum field theories). Also, a certain model may be formulated in such a way that it's very hard to tell whether it satisfies these axioms or not. For example, the Schrödinger equation corresponds to a model that does not explicitly satisfy these axioms; only when formulated in abstract terms does this become obvious. It's not clear whether the same thing could be done with e.g. the Maxwell equations. In fact, one can formulate these equations as a Dirac-like equation $(\Gamma^\mu\partial_\mu+\Gamma^0)\Psi=0$ (see e.g. 1804.00556), which can be recast in abstract terms as $i\dot\Psi=H\Psi$ for a certain $H$.
• Probabilities: Classical statistical mechanics also deals with probabilistic concepts. Also, one could argue that standard QM is not inherently probabilistic, but that probabilities are an emergent property due to the measurement process and our choice of observable degrees of freedom.
• Planck's constant: It's just a matter of units.
You can eliminate this constant by means of the redefinition $t\to \hbar t$. One could even argue that this would be a natural definition from an experimental point of view, if we agree to measure frequencies instead of energies. Conversely, you may introduce this constant in classical mechanics by a similar change of variables (say, $F=\hbar\tilde F$ in the Newton equation). Needless to say, such a change of variables would be unnatural, but naturalness is not a well-defined criterion for classical vs. quantum. • Realism/determinism: This seems to depend on interpretations. But whether a theory is classical or quantum mechanical should not depend on how we interpret the theory; it should be intrinsic to the formalism. People are after a quantum theory of gravity. What prevents me from saying that General Relativity is already quantum mechanical? It seems intuitively obvious that it is a classical theory, but I'm not sure how to put that intuition into words. None of the criteria above is conclusive. • I've removed some comments which didn't seem to be intended to request clarifications or suggest improvements. – David Z Apr 4 '18 at 21:44 • Note that the appropriate answer to this question depends quite heavily on whether you mean "what distinguishes quantum theories from classical theories specifically", or "what distinguishes quantum theories from other theories in general" - for example, the class of what are often referred to as generalized probabilistic theories, which include classical, quantum and many other theories besides. In this latter class, classical theories are distinguished by many properties, and so the lack of any of these tells us we are dealing with a non-classical theory - but not necessarily a quantum one – Robin Saunders Apr 5 '18 at 0:19 • @RobinSaunders Hmm that's actually a very good point, I like the way you put it. If you ever have some free time, please consider making that comment into an answer. Cheers! 
– AccidentalFourierTransform Apr 8 '18 at 2:11 • I'm interested in why you say TQFT does not fit within the Dirac-von Neumann axioms. It's true that those axioms don't tell you much about the structure of the theory, but it's not really different for any QFT, for which there is a Hilbert space associated to any spatial manifold. I'd say those axioms are insufficiently strong, rather than being too restrictive. – Holographer Apr 8 '18 at 17:07 I think this is a subtle question and I think it depends somewhat on how you choose to represent quantum mechanics. To see one extreme of this, consider the viewpoint put forth by Kibble in [1]. For simplicity I will be thinking of finite-dimensional quantum systems here; there are some subtleties in infinite dimensions but as far as I know the basic picture still holds. In this, he shows that if we describe the theory in terms of physical states (rays in the Hilbert space), then the dynamics of Schrödinger evolution correspond exactly to Hamiltonian evolution via the symplectic form from the Kähler structure on the projective Hilbert space (which is to say, the evolution is that of a classical system). However there are two distinctions which make quantum mechanics different from classical mechanics: • The phase space must be a projective Hilbert space (as opposed to just a symplectic manifold), and the Hamiltonian is restricted to being a quadratic form in the homogeneous coordinates on projective space. In classical mechanics any (sufficiently smooth) function is admissible as a Hamiltonian. • Composite systems are described differently. In classical mechanics the phase space of a composite system is the Cartesian product of the phase spaces. In quantum mechanics, it is the Segre embedding (which descends from the tensor product of Hilbert spaces). 
This is parametrically different; if the phase spaces of the two subsystems have dimensions $2m$ and $2n$, then in classical mechanics the composite system has dimension $2m+2n$, whereas in quantum mechanics it has dimension $2(n+1)(m+1)-2$. The extra states are the entangled states. Virtually all the observable consequences of QM come from here, e.g. Bell inequalities. Of course, if we consider identical particles things get even a bit more complicated. If you ignore the second point, and focus only on a single quantum system, the surprising conclusion is that every quantum mechanical system is a special case of classical mechanics (with the proviso that again I haven't checked the details in infinite dimensions, but it is at least morally true). However, part of the structure of quantum mechanics is how it describes composite systems, so you can't just ignore this second point. A mathematician would say that this gives an injective functor from the category of quantum mechanical theories to the category of classical theories which is not compatible with the symmetric monoidal structures on the two. I want to point out that this is emphatically not how we typically think of the correspondence principle in quantum mechanics. That is, it is a mapping from a finite-dimensional quantum mechanical system to a finite-dimensional classical system (of the same dimension). Normally, if we think about e.g. a free particle in one dimension, the Hilbert space for that quantum system is infinite-dimensional, yet it corresponds to a 2-dimensional classical phase space. But the point is that, at least in this question, we can't restrict to the ordinary notion of correspondence, since we don't have a physical interpretation for the system of equations describing the theory. Additionally, despite the above example, whether a theory is classical or quantum has essentially nothing to do with where the states live.
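Those dimension counts are easy to tabulate. Here is a small sketch of my own (not from Kibble's paper) comparing the classical Cartesian-product count with the quantum Segre-embedding count for two identical subsystems:

```python
def classical_composite_dim(dim_a, dim_b):
    # Cartesian product of phase spaces: dimensions simply add.
    return dim_a + dim_b

def quantum_composite_dim(dim_a, dim_b):
    # A projective Hilbert space of real dimension 2n comes from a Hilbert
    # space of complex dimension n + 1; the composite lives in the
    # projectivized tensor product, of real dimension 2(m+1)(n+1) - 2.
    m, n = dim_a // 2, dim_b // 2
    return 2 * (m + 1) * (n + 1) - 2

for d in (2, 4, 8):  # d = 2 is the qubit case: the Bloch sphere is 2-dimensional
    c = classical_composite_dim(d, d)
    q = quantum_composite_dim(d, d)
    print(f"subsystem dim {d}: classical {c}, quantum {q}, surplus {q - c}")
```

Already for two qubits there is a surplus of 2 extra dimensions, and it grows rapidly; those surplus directions are exactly where the entangled states live.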
Indeed, if we just want to consider a free particle in one dimension again, we would typically describe its state as a self-adjoint, trace-class operator $\hat \rho$ with unit trace on the Hilbert space $L^2(\mathbb R)$. In contrast, in classical mechanics we would describe a state as a probability distribution $\rho$ on the phase space $\mathbb R^2$ (note that in the above example we had only pure classical states, i.e. only those described by a $\delta$ function on the phase space, whereas now we have mixed states). However, we could just as easily describe the quantum state by its Wigner function, in which case it lives in exactly the same affine space as the classical distribution. However, the Wigner function satisfies slightly different inequalities than the classical probability distribution; in particular, it can be slightly negative and cannot be too positive. The details of this were first worked out in [2]. In this case, it is the dynamics that give away the quantum nature. Specifically, to go from classical to quantum mechanics, we must replace the Poisson bracket by the Moyal bracket (which has $O(\hbar^2)$ corrections), indicating the failure of Liouville's theorem in the phase space formulation of quantum mechanics: (quasi)probability density is not conserved along trajectories of the system. All of this is to say that it seems difficult (and maybe impossible) to find a single distinguishing feature between classical and quantum mechanics without considering composite systems, so if that is what you want, I'm not sure I have an answer. If you do allow for composite systems, though, it is a pretty unambiguous distinction. Given this, it is perhaps not surprising that all the experimental tests we have which demonstrate that the world is quantum and not classical are based on entanglement. References: [1]: Kibble, T. W. B. "Geometrization of quantum mechanics". Comm. Math. Phys. 65 (1979), no. 2, 189--201. [2]: H.J.
Groenewold (1946), "On the Principles of elementary quantum mechanics", Physica 12, pp. 405-460. As far as I know, the commutation relations make a theory quantum. If all observables commute, the theory is classical. If some observables have non-zero commutators (no matter if they are proportional to $\hbar$ or not), the theory is quantum. Intuitively, what makes a theory quantum is the fact that observations affect the state of the system. In some sense, this is encoded in the commutation relations: the order of the measurements affects their outcome, the first measurement affects the result of the second one. • I think this answer is on the right track. In quantum mechanics, the transfer of information is intrinsically tied to the dynamics of the system, whereas in classical physics that is not the case. – DanielSank Apr 4 '18 at 20:16 • I would agree with this. It was my answer also but I came too late. So, in any situation, what exactly is quantum is best shown in experiments of the Stern-Gerlach type. If you measure in the x direction you get + and - or spin up or down, but if you measure in y, you get spins in that direction. If you measure first in x, then in y, you get as a result a y direction, but if you measure in x, then again in x, you get only x..... – Žarko Tomičić Apr 4 '18 at 20:17 • I would say on the contrary that observations affect the state of a classical system where everything is physical. – Bill Alsept Apr 4 '18 at 20:55 • In MWI, observations don't affect the state of the system in some mysterious way. Rather, you should consider the composite Hilbert space describing both the system and the measuring device (a large-dimensional Hilbert space). A measurement is a time-dependent interaction, and in the measurement limit you produce a fully entangled state between the two. If you compute the reduced density matrix for the system of interest, you get a diagonal matrix of the probabilities.
The point being that "observations affect the state of the system" is arguably really a statement about composite systems. – Logan M Apr 4 '18 at 21:07 • The commutator is just a way of talking about measuring one observable and then another vs. doing it in the other order. The way to say it is that a classical theory is one where conditional probabilities form a distribution. – Ryan Thorngren Apr 6 '18 at 1:41 Frame challenge: I think the question is based on a misleading premise. While there are a number of characteristics typical of quantum theories as opposed to classical theories - some you've already listed in the question, and others have been suggested in the existing answers - there's no particular reason to expect there to be a single unambiguous rule that categorizes any arbitrary theory as either quantum or classical. Nor is there any particular need for such a rule. You give the example of quantum gravity. However, the reason we want a quantum theory of gravity is not because it has the tag "quantum" attached to it, as if it were a handbag that would not be adequately fashionable without the correct label, but because we want it to be able to answer certain questions about reality which we already know General Relativity can't answer. In short, don't worry about whether the theory is "quantum" or not - worry about whether it answers the questions you want answered or not. Also relevant. Addendum: the same goes for the existing theories, of course. We don't like the Standard Model because it is quantum. We like it because it works. • @JerrySchirmer, that's not really what this question asks, though. – Harry Johnston Apr 5 '18 at 19:37 • It asks "what is it about a theory that makes it 'quantum'". And the answer would be "we apply quantization to some classical theory" – Jerry Schirmer Apr 5 '18 at 20:43 • @JerrySchirmer, that's one possible answer, certainly. 
But I think the OP is asking for criteria that are based directly on the mathematical characteristics of a particular model, rather than on how the model was developed. (And I think in practice that, if presented with a theory with characteristics similar to other quantum theories, most physicists would call it a quantum theory regardless of whether it was derived from a classical model or not.) – Harry Johnston Apr 5 '18 at 21:20 • ... incidentally, unless I've overlooked something, none of the existing answers mention quantization as a possible criterion, so you might want to post that as an answer @JerrySchirmer – Harry Johnston Apr 5 '18 at 21:23 • All that said, if I had to choose one feature that was the most important characteristic of quantum theories, I'd have to endorse Photon's answer. – Harry Johnston Apr 6 '18 at 23:06

## TL;DR: Correlations.

First things first: since the OP asks for a criterion to tell whether a model is quantum mechanical, the answer has to involve observables. After all, if you could rewrite your "quantum" model as a "classical" model, those labels would not be worth much. Furthermore, all quantum theories (that I know of) are probabilistic, so this answer focuses on probabilistic observables, i.e. correlation functions. The fundamental difference between a quantum theory and a classical theory is their correlation structure. That is, quantum theories can show correlations that classical theories cannot. The historically first and simplest example of this is Bell's inequality. By now there are many such inequalities for all kinds of observables, a frequently used one being the CHSH inequality. In general these inequalities set bounds on correlation functions that cannot be violated by a classical probability theory, where the latter notion can be made precise (see below). Quantum probability theories can violate some of these inequalities, which makes them intrinsically different.
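As an illustration of such a violation, here is a small sketch of my own (not from the answer) using the textbook singlet-state correlation E(a,b) = -cos(a - b) and the standard CHSH measurement angles, showing the quantum value exceeding the classical bound of 2:

```python
import math

def E(a, b):
    # Spin-correlation of a singlet state for measurement axes at angles a, b
    return -math.cos(a - b)

# Standard CHSH angle choices (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"CHSH value S = {S:.3f}")   # 2*sqrt(2) ≈ 2.828, above the classical bound of 2
```

Any local hidden-variable (classical) model obeys S ≤ 2, while quantum mechanics reaches 2√2 (the Tsirelson bound); this is exactly the kind of quantity the loophole-free Bell tests measure.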
Interestingly, there are also theories that have correlations even stronger than in quantum theory. These are known as Popescu-Rohrlich boxes, and they have been shown to maximally violate the so-called Tsirelson bound, another inequality which is, however, fulfilled by quantum theory. Making these statements precise (they all work on the level of probability distributions on a space of observables) is a whole field. Some references (I'll try to put some more tomorrow, too tired now):

1. One can try to uniquely single out quantum theory as a 'special' probability theory by starting from certain information-theoretic postulates: https://arxiv.org/abs/1203.4516
2. So-called 'loophole free' Bell tests have shown that we live in a world that violates classical probability theory (even though some people will argue against that): https://www.nature.com/articles/nature15759
3. A nice presentation about the ideas mentioned above by a guy who (unlike me) actually knows what he is talking about: http://www.math.umd.edu/~diom/RIT/QI-Spring10/ClassvsQuantInfo.pdf

A mathematical system, whether of algebraic or differential equations, has axioms and theorems and is self-contained and self-consistent. A physics theory is a subset of a mathematical system, defined by imposing extra axioms, called laws or postulates, which are necessary by construction in order to pick out from the overall mathematical set those solutions which fit data, i.e. measurements and observations. Classical theories are those that use classical laws, such as: Newton's laws for mechanics, the set of laws of electricity and magnetism unified in Maxwell's equations, the thermodynamic laws (and maybe more). Quantum theories are the ones obeying quantum mechanical laws, i.e. the postulates of quantum mechanics, no matter the mathematical formulation. In order to fit the data and observations, the quantum mechanical postulates were necessary, and this is what distinguishes classical from quantum, IMO.
Dirac-von Neumann axioms: These are too restrictive (e.g., they do not accommodate topological quantum field theories). This was the first time I met Topological Quantum Field Theories (TQFT). (Such introductions are one of the reasons I follow this site - to get whiffs of new-to-me physics.) The gauge is whether this set of theories fits data and predicts measurements. In axiomatic mathematical theories, theorems can be set up as axioms, and then the former axioms have to be proven as theorems, for a self-consistent theory. Usually the axioms are chosen as the simplest expressions from a set of consistent theorems. Since TQFTs fit data and are predictive of quantum states, it is necessary that from the axiomatic postulates for TQFT one should be able to derive the postulates of quantum mechanics (possibly by a very complicated mathematical route). The wikipedia article on TQFT seems to indicate this. This is necessary for a theory to be quantum, IMO. I.e. it is the postulates that connect measurements to the mathematical formulas, by construction. • +1 Thank you for the answer, but I'm not convinced. As I said in the OP, the postulates of QM are too restrictive. There are systems that we deem quantum-mechanical, yet they fail to satisfy these axioms. For example, topological quantum field theories (which have their own set of axioms). – AccidentalFourierTransform Apr 7 '18 at 19:47 • These topological theories, do they fit any data? If they fit the data, then this just means that some of the postulates (linked above) of quantum mechanics can be relaxed/ignored. Otherwise, as when theorems in axiomatic mathematics can be turned into axioms, they become theorems. Or are they just a science fiction game with mathematics – anna v Apr 8 '18 at 3:12 • Wow, that's a very condescending comment. Just because you don't find them useful does not make them "science fiction games". Wow, just wow. I really didn't expect that attitude from you...
– AccidentalFourierTransform Apr 8 '18 at 3:13 • Sorry, I have edited a bit. This then must mean that the usual postulates are turned into theorems. What I am trying to say is that it is data that is the decisive factor - fits and predictions. And that the mathematics must be consistent. – anna v Apr 8 '18 at 3:18 • +1 for a very good point: "Quantum theories are the ones obeying quantum mechanical laws, i.e. the postulates of quantum mechanics, no matter the mathematical formulation." – AlQuemist Apr 9 '18 at 8:24

I would say that something intrinsically quantum is the way in which probabilities and the function which obeys the partial differential equation are related. As you note, both interference and probabilities are present in classical theories. What's new are probability amplitudes, where interference leads to a suppression of probabilities which is not possible in classical theories. For the finite-dimensional case, there's also Lucien Hardy's proposal "Quantum Theory From Five Reasonable Axioms" (https://arxiv.org/abs/quant-ph/0101012). There, the distinguishing factor between quantum theory and classical probability theory is that "there exists a continuous reversible transformation on a system between any two pure states of that system." Another reference along similar lines is Chapter 9 of Scott Aaronson's book "Quantum Computing since Democritus". • Isn't interference of probabilities basically how we express wave-particle duality mathematically? – asmaier Apr 24 '18 at 13:06 • I am not sure what you are getting at. First, there is no interference of probabilities, only of probability amplitudes, and second, sure, the physical phenomenon of wave-particle duality is related to this mathematical mechanism. – Marc Apr 25 '18 at 14:15

### tl; dr

Erm... You do.

Say you cook up a model about a physical system ... Equations do not exist by themselves; they always have a surrounding.
The head is assumptions and the tail usually describes limitations of said mathematical model. So really, it is up to your interpretation of the question at hand OR the data available to you to consistently (deterministically?) decide whether a theory is "Quantum". Conversely, if you do not have a head and tail, you can make a lot of cases about what an equation is talking about but can't say anything concretely. All the answers here are inspiring, and frankly sexy, but take time to consider my rudimentary examples below. This way of thinking - "what characteristic of an equation predicts its applicability in <name of physics branch>" - is a misuse of mathematics. Maths is, perhaps, the ultimate, but we must remember that in physics we use it as a tool. My illustration below might seem childish, but please consider the following equations:

Equation 1: $$x^2 + x - 6 = 0$$

Equation 2: $$2x + 5y = 20$$

Just looking at these, a mathematician can happily say that

• Equation 1
  • has two solutions, +2 and -3, and
  • the curve is upward facing, with maxima at x = -0.5
• Equation 2
  • has a slope of -0.4
  • has intercepts 4 and 10
  • has infinite ordered pairs (x, y) satisfying the equation
  • describes a curve that does not pass through the origin

And we would all agree with the above points. But the wise physicist stays mum, because s/he knows that these equations aren't just the scribblings of some dyslexic Vulcan but are models of something - they represent something or some phenomena. So a physicist agrees with the mathematician but doesn't come to a conclusion. Let us look at the questions which lead us to these equations:

Question 1: The product of a quantity and one more than itself is 6; find the value of this quantity if a. the quantity is money lent b. the quantity is time

Question 2: Two times the number of my sons and five times the number of my daughters always equals two times the number of appendages a normal person has on his hands. How many sons and daughters do I have?
Now, I hope you have an aha! moment. The answer to Q1 b is just +2, because time cannot be negative (we've all solved such questions as kids), and the answer to Q2 can be quite surprising - 5 sons and 2 daughters - because physicists are good people and don't make fractional children or negative children. Did you see that -- one equation, two variables, and we still get a unique answer - constraints. So the mathematician (the equation) and the physicist (the big picture) are both correct where they stand. But the physicist wins, because

• we are at physics.stackexchange.com
• math in itself is very strong, pure, almost unpalatable; we need both the background information and the constraints to understand what this wonderful tool is trying to tell us through equations.

On a serious note, I'd like to point out that there's probably no (respectable) book on classical physics which teaches F = ma without first explicitly and clearly stating the following:

• Assumptions required, e.g. frictionless surfaces and perfectly rigid bodies
• Newton's Three Laws of Motion (word-by-word)
• That F = d(mv)/dt, which can be simplified to F = ma if mass is (almost) constant
• and most importantly, the fact that the objects we are dealing with are not of super-tiny scale, i.e. larger than 10^-9 m in diameter.

Authors don't do this for pedagogy - most 9th grade students wouldn't give a damn about rigidity - but because these statements are necessary for the equation/theory to work. Trying to predict if an equation describes a Quantum thingy is a discussion-based question at best, or meta-math. To the OP specifically: if you are an inventor, working on something like a GUT (why else would you have an equation whose origin you do not know) and you are curious if it applies equally well to big and small bodies - apply constraints. I do not have the mathematical foresight, but logically I can say that variations in constraints will define the way the system behaves for Quantum and Classical bodies.
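The constraint-filtering for Q2 can be made explicit with a brute-force search. This is my own sketch, not from the answer; the "at least one of each" condition is an extra assumption suggested by the wording "my sons and my daughters":

```python
# Enumerate whole-number solutions of 2*sons + 5*daughters = 20, then apply
# "physical" constraints. Requiring at least one son and one daughter is an
# assumption read into the problem's wording, made explicit here.

solutions = [(s, d)
             for s in range(0, 11)       # at most 10 sons (2*10 = 20)
             for d in range(0, 5)        # at most 4 daughters (5*4 = 20)
             if 2 * s + 5 * d == 20]
print("integer solutions:", solutions)

physical = [(s, d) for (s, d) in solutions if s >= 1 and d >= 1]
print("with at least one of each:", physical)   # the (5, 2) quoted in the text
```

Without the "one of each" assumption, the integer constraint alone still leaves three candidates, so it is really the stack of constraints - not any single one - that pins down the answer, which is exactly the point being made.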
In *Thinking, Fast and Slow* there's a chapter which illustrates that we have a tendency to support what is popular/fancy rather than what is correct/plausible. I think the question is primarily opinion based.

• Apropos of equation 1, a mathematician would perhaps say minimum rather than maxima (sic). – Deepak Apr 9 '18 at 15:50

Physical models are determined by their lattice of events. The set of physical events forms an algebraic lattice with two binary operators that serve as the OR and AND between events. We assume the lattice of events to be sigma-additive and orthomodular. We call this lattice the logic of the model. In this sense, events are the elements of the logic. System states are probability measures over this algebra. Physical quantities are mappings between statements on measurements of a quantity (think of Borel sets of the reals) and the logic.

The logic of a classical model is isomorphic to a set algebra, so it is distributive (a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and vice versa) and fully atomic. The logic of a quantum model is isomorphic to the lattice of the subspaces of a Hilbert space, and therefore it is not distributive, but it is still fully atomic.

The above alone is sufficient to explain many features associated with quantum models, including:

• real-valued physical quantities can be represented as self-adjoint operators
• commutation relations
• superposition of states
• the Schrödinger equation

• Can you please add some references? I think the answer could benefit from that. – Kiro Apr 7 '18 at 7:35

TLDR: Wave-Particle duality

I want to answer this question from a historical perspective. According to our current understanding, a quantum theory shows features of both classical mechanics and electrodynamics (e.g. light) at the same time. The first person to notice such a connection between mechanics and the theory of light was Hamilton. He developed Hamiltonian Optics, which described light as a particle (aka corpuscle).
Theorists soon recognized that Hamiltonian Optics cannot account for light phenomena like interference, diffraction, and polarisation. They realized that Hamiltonian Optics is only an approximation, which works well as long as the wavelength of light is much smaller than the measurement apparatus (e.g. for geometrical optics based on light rays and lenses). Nevertheless, the language of Hamiltonian Optics worked perfectly to describe classical mechanics, which is now commonly known as Hamiltonian mechanics.

Maxwell's field theory of electrodynamics was a more correct description of light, but then came Planck and Einstein. They showed that to describe black-body radiation and the photoelectric effect it was necessary to assume that light cannot be a field with infinite divisibility (i.e. continuity), as assumed in Maxwell's wave theory of light. Rather, light must consist of countable entities they called "quanta". But this theory was ad hoc and not consistent with special relativity. (Note: the consistent version is Quantum Electrodynamics.) Although immature, the Planck and Einstein explanation of these phenomena was the first quantum theory, because it showed (or better, assumed) wave-particle duality. (Note: quantisation doesn't mean going from a wave theory of light back to a corpuscle theory like Hamiltonian Optics. Rather, it combines features of waves and particles.)

The crazy genius of de Broglie and Schrödinger was needed to apply this reasoning in the opposite direction: to particles. They noticed that if Maxwell's wave theory of light must be extended to contain quanta/particles, classical theory (which consists only of particles) must be extended to produce the features of waves. They saw that classical theory could be an approximation like Hamiltonian Optics, valid only for short wavelengths.
Thus, Schrödinger developed wave mechanics not by postulating quanta, but by reversing the approximations necessary to go from Maxwell's theory of light to Hamiltonian Optics. In opposition to electrodynamics, classical mechanics needed to be "wavized" to become a complete theory showing wave-particle duality. (Note: here again, quantisation is not going from a particle theory to a complete wave theory of infinite divisibility; rather, it combines features of both worlds.)

So, a theory is "quantum" when it integrates/combines the features of both waves and particles. A classical theory is either only waves/fields or only particles.

Regarding the quantisation of General Relativity, it is instructive to compare this classical field theory with another classical field theory, namely fluid dynamics. What both theories have in common is their high non-linearity. Both can only be quantised if they are linearized first. If one linearizes fluid dynamics, one gets the equation for sound waves. If one linearizes the equations of GR, one gets the equations of gravitational waves. If one quantizes the equation of sound waves, one gets phonons. If one quantizes gravitational waves, one gets gravitons. Again, both gravitons and phonons show wave-particle duality, but in both cases we need to linearize our theory first to be able to quantize it. (Note: phonons only exist in solids. Gravitons might also only exist in "solid" space-time.)

I'm astonished that nobody appears to mention that a quantum theory describes quantities which have discrete values. All quantities which appear continuous on the macroscopic level can only take on discrete values in a quantum theory. The differences are "communicated" by "particles" (photons etc.). That's the heart of a quantum theory. Describing the states and interacting particles has not been achieved, or has only been tentatively achieved, for gravitation.
• -1 This answer is basically incorrect; in particular, "All quantities which appear continuous on the macroscopic level can only take on discrete values in a quantum theory". – AlQuemist Apr 9 '18 at 14:32
• @PeterA.Schneider No, that's a very simplistic view of classical mechanics (and physics in general): a single system always has an infinite number of different descriptions, some of which are typically more accurate than others. It's turtles all the way down: you can always add more levels of sophistication to a certain model. In this sense, speaking of a "coin" is not meaningful: you have to decide which degrees of freedom you want to study (only heads/tails? or also its final temperature? what about any possible deformation due to the impact?) (1/2) – AccidentalFourierTransform Apr 9 '18 at 16:52
• (2/2) At some point you truncate the problem, and pick a certain finite set of degrees of freedom. Once you do this, you should be able to decide whether the model is classical or quantum-mechanical independently of other "more sophisticated" models. The binary model is consistent in and of itself, independently of more accurate descriptions. It is a valid model, and complete as far as the degrees of freedom we chose to describe are concerned. Whether there is a Newtonian description that is more accurate is completely irrelevant. FWIW, I appreciate your answer anyway, and I upvoted it. – AccidentalFourierTransform Apr 9 '18 at 16:54
• @PeterA.Schneider Take a guitar string or some other resonating system: you get discrete results. – Arvo Apr 10 '18 at 12:03
• As with @Arvo I immediately glanced at classical standing waves. As with quantum systems, the discreteness comes from the application of boundary conditions. As with quantum systems, it is a steady-state effect, and you can observe results that don't meet the quantization condition in the immediate aftermath of disturbing the system. – dmckee --- ex-moderator kitten Apr 11 '18 at 19:01
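The standing-wave discreteness raised in the last two comments is easy to make concrete: fixing both ends of a string restricts the allowed wavelengths to 2L/n, so the frequency spectrum is discrete even in a purely classical system. A minimal sketch (the string length and wave speed below are made-up illustrative values):

```python
def standing_wave_frequencies(length_m, wave_speed, n_modes):
    # Fixed-end boundary conditions only admit wavelengths 2L/n,
    # so the spectrum is the discrete set f_n = n * v / (2 * L).
    return [n * wave_speed / (2 * length_m) for n in range(1, n_modes + 1)]

# Illustrative numbers: a 0.5 m string with wave speed 100 m/s
print(standing_wave_frequencies(0.5, 100.0, 4))  # [100.0, 200.0, 300.0, 400.0]
```

The allowed frequencies form a discrete ladder, yet nothing quantum-mechanical was assumed; only boundary conditions.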
http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Baccarat.html
# Baccarat

Baccarat (English pronunciation: /ˈbækərɑː/) is a casino card game. It is believed to have been introduced into France from Italy during the reign of Charles VIII of France (ruled 1483–1498), and it is similar to Faro and Basset. There are three accepted variants of the game: baccarat chemin de fer, baccarat banque (or à deux tableaux), and punto banco (or North American baccarat). Punto banco is strictly a game of chance, with no skill or strategy involved; each player's moves are forced by the cards the player is dealt. In baccarat chemin de fer and baccarat banque, by contrast, both players can make choices, which allows skill to play a part.

Baccarat is a simple game with three possible results: 'Player', 'Banker', and 'Tie'. The term 'Player' does not refer to the customer, and the term 'Banker' does not refer to the house. They are just options on which the customer can bet.

### Valuation of hands

In baccarat, cards 2–9 are worth face value, 10s and face cards (J, Q, K) are worth zero, and aces are worth 1 point. Players calculate their score by taking the sum of all cards modulo 10, meaning that after adding the value of the cards the tens digit is ignored. For example, a hand consisting of 2 and 3 is worth 5 $(2+3=5\equiv 5\pmod{10})$. A hand consisting of 6 and 7 is worth 3 $(6+7=13\equiv 3\pmod{10})$; the first digit is dropped because the total is higher than 9. A hand consisting of 4 and 6 is worth zero, or baccarat $(4+6=10\equiv 0\pmod{10})$. The name "Baccarat" is unusual in that the game is named after the worst hand, worth 0. The highest score that can be achieved is 9 (from a 4 and 5, a 10/J/Q/K and 9, or an A and 8, etc.).
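The valuation rule above translates directly into code; a minimal sketch (the helper names and string card labels are my own):

```python
def card_value(card):
    # 2-9 count at face value, 10s and face cards count as zero, aces as one
    if card in ('10', 'J', 'Q', 'K'):
        return 0
    if card == 'A':
        return 1
    return int(card)

def hand_value(hand):
    # Sum the card values and keep only the units digit (sum modulo 10)
    return sum(card_value(c) for c in hand) % 10

print(hand_value(['6', '7']))  # 3
print(hand_value(['4', '6']))  # 0, i.e. "baccarat", the worst hand
```

The modulo operation is exactly the "drop the tens digit" rule from the text, so no special case for totals above 9 is needed.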
http://math.stackexchange.com/questions/63324/an-inequality-about-binomial-distribution/63336
# An inequality about binomial distribution

The question is: Consider $n$ Bernoulli trials, where for $i = 1, 2,..., n$, the $i$th trial has probability $p_i$ of success, and let $X$ be the random variable denoting the total number of successes. Let $p \ge p_i$ for all $i = 1, 2, \ldots , n$. Prove that for $1 \le k \le n$, $$\Pr \{ X < k \} \ge \sum_{i=0}^{k-1}b(i; n, p)$$ I tried to use induction on $k$ but obviously it doesn't work.

Would it be too simple to argue that if every single trial in process $A$ has a lower success probability than in process $B$, then the probability of getting less than some fixed number of successes $k$ in $A$ is higher than in $B$? Or do you want a "mathematical" proof? – TMM Sep 10 '11 at 14:53

@George, it does work with p = 1 – ablmf Sep 10 '11 at 14:56

@Thijs's comment indicates the exact reason why the result holds. Relying on coupling, one can turn this into a full fledged proof--if necessary... :-) – Did Sep 10 '11 at 15:14

You're trying to show that for $0\leq k\leq n$, $\mathbb{P}[X\leq k]\geq \mathbb{P}[\text{Bin}(n,p)\leq k]$ (this is equivalent to the stated form, since $\{X < k\} = \{X \le k-1\}$ and $\sum_{i=0}^{k-1} b(i; n, p) = \mathbb{P}[\text{Bin}(n,p) \le k-1]$). We can prove this by coupling; the idea is to work on a probability space over which these variables are related in a useful way. Let $(U_i:1\leq i\leq n)$ be a sequence of IID uniform random variables on $[0,1]$ and write: $$X=\sum\limits_{i=1}^n \mathbf{1}_{U_i<p_i}\quad\text{and}\quad \text{Bin}(n,p) = \sum\limits_{i=1}^n \mathbf{1}_{U_i<p}$$ Now $X\leq \text{Bin}(n,p)$ with full probability. In particular, on this probability space, $\{\text{Bin}(n,p)\leq k\}\subset \{X\leq k\}$, so taking probabilities gives the result.
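The coupling construction can be simulated directly; a minimal sketch (the probabilities are illustrative) that checks the pathwise domination $X \le \text{Bin}(n,p)$ on every sample:

```python
import random

def count_dominated(ps, p, trials=5000, seed=1):
    # Build X and Bin(n, p) from the SAME uniforms U_i, as in the coupling:
    # X counts events {U_i < p_i}, the binomial variable counts {U_i < p}.
    rng = random.Random(seed)
    dominated = 0
    for _ in range(trials):
        us = [rng.random() for _ in ps]
        x = sum(u < pi for u, pi in zip(us, ps))
        b = sum(u < p for u in us)
        dominated += (x <= b)
    return dominated

# Since p_i <= p, the event {U_i < p_i} implies {U_i < p}, so X <= Bin always
print(count_dominated([0.1, 0.3, 0.5, 0.2], 0.5))
```

Because every indicator for $X$ is dominated by the corresponding indicator for $\text{Bin}(n,p)$, the count equals `trials` on every run, not merely with high probability.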
https://math.stackexchange.com/tags/mathematical-modeling
Questions tagged [mathematical-modeling]

A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modelling.

2,193 questions. A sample:

• "Why is life expectancy calculated as 1/μ in the SIR model?" In the SIR model, when the death rate is μ, life expectancy is calculated as 1/μ. Can anyone explain it intuitively?

• "Initial margin covariance matrix" [closed] I am trying to replicate the following example in order to calculate IM: https://www.clarusft.com/isda-simm-in-excel-equity-derivatives/?amp=1 I am stuck at the last step (impossible to find the ...

• "What is the physical and mathematical meaning of a nonsymmetric mass, damping and stiffness matrix of a linear fluid-solid interaction modeling problem?" Suppose the problem of the subsonic flow over a simply-supported rectangular plate as below. According to the assumptions of incompressible, inviscid and irrotational flow, and using perturbation ...

• "Need help in quantitatively modeling solubility of carbon dioxide in beer" I'm trying to model a 3D surface which reflects the solubility of $CO_2$ in beer. An empirically derived chart is available at this link. Solubility is empirically related to the pressure of $CO_2$ in ...

• "Find variants in list<list<int>>" Have a mathematic solution for this: I have some list<list> For example List{ List1 = {1,2,3} List2 = {4,5} List3 = {6} List(n) {k} } How can I find all ...

• "Discover where car is parked when car is using adjacent parking strategy to hide in N garages" [duplicate] The car can be parked in n garages lined up in a row. Each night the car is parked in some garage, and each day it is reparked to garages adjacent to the garage where the car was parked the previous ...
• "Reference request: converting discrete models to continuous models" I have been reading a bunch of papers which take discrete models or agent-based models and convert them to continuous PDE models, using Continuum Mechanics, or such methods. Can anyone recommend a ...

• "How to develop a continuous function for a known pattern?" [closed] I am given a range of x and y and would like to develop a continuous function f(x). Meaning ...

• "Positivity of solutions of an ODE system with non-negative initial condition" [closed] I have an ODE system that takes a mathematical model describing the dynamics between HCV and the immune system. My question is about the proof that the solution of the system is non-negative if the ...

• "Laguerre tessellations in the real world" Aside from microstructures, where are Laguerre tessellations used in mathematical modelling?

• "Geometric mean is to arithmetic mean as arithmetic mean is to what?" I am interested in a type of "mean" $r$ associated to a set $\{a_1,a_2,\dots,a_n\}$ where $$e^r=\frac{1}{n}\sum\limits_{i=1}^n e^{a_i}.$$ I will call this the "? mean" for now ...

• "Capacity planning and modelling" I have a business case in which I am going to model how many devices are required given the predicted workload in a series of monthly cohorts in the next ten years. The work could come from multiple ...

• "Picking numbers from a list with Gaussian distribution (programming implementation)" If we have values $x \in [0, 100]$, I would like to implement a method where the bigger the value, the less likely it is that it will be picked (something like a Gaussian curve, but with maximum at ...)

• "Finding revenue function given two different prices and quantities" Can someone help me solve this? Have a Math for Econ intro course exam coming up and this question showed up in one of my past papers; I am afraid I do not know exactly what to do.
Can anyone help me solve it? The following is the ...

• "Can you model complicated dynamics without using differential/difference equations?" Let's imagine there is a phenomenon I want to understand. I have a few multivariate time series about the phenomenon but not a lot. I don't know how the variables are related to each other but from ...

• "Rewriting a system of second-order differential equations as a system of first-order ones" Given a charged particle moving in an electromagnetic field. We have $N$ point charges placed in $\mathbb{R}^2$ at the coordinates $p_i$. We also have a free particle moving in $\mathbb{R}^2$...

• "Conditional modelling of a binary variable based on the values of two continuous variables" I want to model a binary variable $(b)$ from two continuous variables $(x_{in},\:x_{out})$. These variables satisfy $0\leq x_{in} \leq x_{max},\: 0\leq x_{out} \leq x_{max}$. I want the following three ...

• "Numerical solution of a projectile motion problem" Let's consider the situation of a rocket launched from the ground onto some impact function $f(x)$. Assume that the impact function is just the $x$ axis, hence the projectile hits the ground at the same ...

• "Is this SEIRD disease model a linear or non-linear least-squares fitting problem?" This is a system of ODEs I have set up to model a disease. I'm trying to fit the parameters for this model for a given data set using a least squares fitting algorithm. The parameters are the Greek ...
https://cazenaveargentina.com/docs/2a35d5-coherence-wiki
# coherence wiki

Kevin and Laurie are the only ones whose memory matches Emily's memory, since they're the only ones who have traveled with her the whole time. This ability to interfere and diffract is related to coherence (classical or quantum) of the waves produced at both slits. Low coherence can be caused by poor signal-to-noise ratio and/or inadequate frequency resolution. It is especially dealt with in text linguistics. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by a significant amount) is defined as the coherence time τc. Constructive or destructive interferences are limit cases, and two waves always interfere, even if the result of the addition is complicated or not remarkable.

As the marker, they leave a random object and photos of themselves in a box, and on the back they write down the numbers they get from rolling dice. The coherence area is now infinite while the coherence length is unchanged. The vector for partially polarized light lies within the sphere. Likewise, the autospectral density of groundwater well levels is shown in figure 4. They explain that they found two notes at the other house too. Coherence is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference.

If a system of units has both equations and base units, with only one base unit for each base quantity, then it is coherent if and only if every derived unit of the system is coherent. A new coherent unit cannot be defined merely by expressing it algebraically in terms of already defined units. In cases where the ideal linear system assumptions are insufficient, the Cauchy–Schwarz inequality guarantees a value of $0 \le \gamma_{xy}^{2}(f) \le 1$. If the electric field wanders by a smaller amount the light will be partially polarized, so that at some angle the polarizer will transmit more than half the intensity.
The profile will change randomly over the coherence time. The following example concerns definitions of quantities and units. We further assume that the ocean surface height controls the groundwater levels, so that we take the ocean surface height as the input variable and the groundwater well height as the output variable. In other words, it characterizes how well a wave can interfere with itself at a different time.

Coherence (physics): an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. Coherence (units of measurement): a derived unit that, for a given system of quantities and for a chosen set of base units, is a product of powers of base units with no other proportionality factor than one.

Figure 5: A plane wave with an infinite coherence length.

Although the pound-force is a coherent derived unit in this system according to the official definition, the system itself is not considered to be coherent, because of the presence of the proportionality constant in the force law. In quantum mechanics, all objects have wave-like properties (see de Broglie waves). A coherent system of units is a system of units, used to measure physical quantities, which are defined in such a way that the equations relating the numerical values expressed in the units of the system have exactly the same form, including numerical factors, as the corresponding equations directly relating the quantities. This was all in the pursuit of naturalistic performances.[8] Reviewer Matt Prigge praised the choice of casting and their actions: "Byrkit ... focuses not on brainiacs, as in Primer, but on smart but mostly under-informed NPR types, who know enough to slowly piece all this together but not enough that they don't usually descend into blabbering, shouting and drinking."
For instance, in Young's double-slit experiment electrons can be used in the place of light waves. But within this textual world the arguments also have to be connected logically, so that the reader/hearer can produce coherence. As an example, the SI unit for force is the newton, which is defined as kg⋅m⋅s⁻².

But they wouldn't know what the other actors had received, so it had a very natural, very spontaneous collision of motivations that ended up being what you see on film; obviously guided by a very strict outline that we have been working on for about a year that tracked all the clues and the puzzles and all the rehearsals and things like that.

The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions. If an ideal, noiseless linear system is being measured, the coherence equals one, $\gamma_{xy}^{2}(f)=1$; a value of $\gamma_{xy}^{2}(f)=0$ indicates no linear relationship between the signals at frequency $f$. In optics, temporal coherence is measured in an interferometer such as the Michelson interferometer or Mach–Zehnder interferometer. Ryan Lattanzio of Indiewire praised the film's originality: "Coherence is not just smart science fiction: it's a triumph of crafty independent filmmaking, made with few resources and big ambition."[23] For example, a stabilized and monomode helium–neon laser can easily produce light with coherence lengths of 300 m.[12] Not all lasers are monochromatic, however. When she asks other people, she finds that Lee and Beth are the only ones whose memory matches what's written on the notepad, because they're the only ones who haven't left the house since they wrote the numbers on the notepad.
Coherence is an American science-fiction film directed by James Ward Byrkit in his directorial debut.[1] The film had its world premiere on September 19, 2013, at the Austin Fantastic Fest, and starred Emily Baldoni as a woman who must deal with strange occurrences following the sighting of a comet.

In signal processing, the coherence is a statistic that can be used to examine the relation between two signals or data sets. Byrkit makes the most of the claustrophobic one-house setting, ratcheting up the dread and paranoia as his characters make a string of seemingly reasonable but ultimately wrongheaded decisions. Figure 3 shows the autospectral density of ocean water level over a long period of time. In a four-unit system (English engineering units), the pound and the pound-force are distinct base units, and the proportionality constant has the unit lbf⋅s²/(lb⋅ft).[11][12] Once a set of coherent units has been defined, other relationships in physics that use those units will automatically be true: Einstein's mass–energy equation, E = mc², does not require extraneous constants when expressed in coherent units.[9] Now choose k = 1; then the metre per second is a coherent derived unit, and the kilometre per hour is a noncoherent derived unit. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly. "We want the logic of our internal rules to be sound, and we wanted it to be something people could watch 12 times and still discover a new layer."[7] The polarization of a light beam is represented by a vector in the Poincaré sphere.
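The interference-visibility measure mentioned above reduces to a one-line formula, V = (I_max - I_min) / (I_max + I_min); a minimal sketch with two idealized equal-amplitude coherent beams (the sampled fringe pattern is illustrative):

```python
import math

def fringe_visibility(intensities):
    # V = (I_max - I_min) / (I_max + I_min): V = 1 for fully coherent beams,
    # V = 0 when no interference fringes are visible at all.
    i_max, i_min = max(intensities), min(intensities)
    return (i_max - i_min) / (i_max + i_min)

# Two equal-amplitude coherent beams: I(phi) = 2 * I0 * (1 + cos(phi))
phases = [k * 0.01 for k in range(629)]   # phi from 0 to about 2*pi
fringe = [2.0 * (1.0 + math.cos(phi)) for phi in phases]
print(round(fringe_visibility(fringe), 4))  # 1.0: full coherence
```

For partially coherent light, the fringe minimum never reaches zero and V drops below one, which is the sense in which visibility measures coherence.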
https://www.scholars.northwestern.edu/en/publications/assessing-nonstoichiometric-oxides-for-solar-thermochemical-fuel-
# Assessing nonstoichiometric oxides for solar thermochemical fuel production

Jiahui Lou, Zhenyu Tian, Xin Qian, Sossina M. Haile, Yong Hao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

## Abstract

The high temperatures at which two-step solar thermochemical fuel production proceeds (e.g. 1000 to 1500 °C) can render both surface and bulk kinetics of porous nonstoichiometric oxides infinitely fast relative to gas sweep rates. In such a case, the material operates under quasi-equilibrium conditions, and the macroscopically observed oxygen evolution and hydrogen production profiles are limited by the thermodynamic characteristics of the oxide. Recognition of this behavior enables the development of material-specific cycling strategies that maximize the process efficiency, taking into account factors such as the energy for sweep gas and solid-state heating and for mechanical pumping. Building on a previously established and experimentally validated model for predicting gas evolution profiles in the quasi-equilibrium regime [T. C. Davenport, M. Kemei, M. J. Ignatowich, and S. M. Haile, Int. J. Hydr. Energy 42, 16932-16945 (2017)], we develop here a computational approach for predicting cycles that maximize solar-to-fuel efficiency. The optimization is carried out using as inputs the experimentally measured enthalpy and entropy of reduction of known and fully characterized nonstoichiometric oxides. The optimized cycles are defined in terms of the temperature, the duration time, and the sweep gas flow rate of each half cycle. Significantly, despite a large energy penalty of heating and cooling the oxide, for most materials considered, the overall efficiency is highest when the temperature for the water-splitting half-cycle is relatively low. In such a case, the thermodynamic driving force for the hydrogen evolution reaction is large, hastening the pace of the reaction.
Achieving the predicted efficiencies, however, may require surface engineering to avoid limitations due to slow surface reaction kinetics at reaction temperatures below ~1000 °C. Most importantly, this approach serves as a framework for assessing the efficacy of candidate thermochemical materials on an optimized rather than ad hoc basis. That is, for each candidate, the maximum efficiency and optimal conditions, within some range of constraints, can be determined, rather than comparing materials at arbitrary cycling conditions which may inherently favor one material over another.

Original language: English (US)
Publication: 2020 Virtual AIChE Annual Meeting, American Institute of Chemical Engineers
ISBN: 9780816911141
Published: 2020

### Publication series

AIChE Annual Meeting, Conference Proceedings, 2020-November

### Conference

2020 AIChE Annual Meeting, Virtual, Online, Nov 16 2020 to Nov 20 2020

## ASJC Scopus subject areas

• Chemical Engineering (all)
• Chemistry (all)
http://www.transtutors.com/questions/tts-chemical-calculation-problem-117784.htm
Chemical calculation problem

Acetylene gas (C2H2) is produced by adding water to calcium carbide (CaC2):

CaC2 (s) + 2 H2O (l) -----> C2H2 (g) + Ca(OH)2 (aq)

How many grams of acetylene are produced by adding water to 5 g of CaC2?

Solution: The molar mass of CaC2 is 40 + 2 × 12 = 64 g/mol, so 5 g of CaC2 is 5/64 ≈ 0.078 mol. The balanced equation shows a 1:1 mole ratio between CaC2 and C2H2, so 0.078 mol of acetylene forms. With a molar mass of 26 g/mol for C2H2, that is 0.078 × 26 ≈ 2.0 g of acetylene.
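The mass-to-mass calculation for the calcium carbide question can be sketched in a few lines of Python. The rounded atomic weights (Ca = 40, C = 12, H = 1) are an assumption of this sketch, matching the level of precision of the worked answer rather than exact values:

```python
# Mass-to-mass stoichiometry for CaC2 + 2 H2O -> C2H2 + Ca(OH)2,
# using rounded integer atomic weights (Ca=40, C=12, H=1).

M_CAC2 = 40 + 2 * 12     # 64 g/mol
M_C2H2 = 2 * 12 + 2 * 1  # 26 g/mol

def acetylene_mass(grams_cac2: float) -> float:
    """Grams of C2H2 produced from a given mass of CaC2 (1:1 mole ratio)."""
    moles_cac2 = grams_cac2 / M_CAC2
    return moles_cac2 * M_C2H2  # 1 mol CaC2 -> 1 mol C2H2

print(round(acetylene_mass(5.0), 2))  # -> 2.03
```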
https://groupprops.subwiki.org/wiki/A5_is_the_simple_non-abelian_group_of_smallest_order
# A5 is the unique simple non-abelian group of smallest order

## Statement

The following are true:

- The alternating group of degree five, denoted $A_5$, is a simple non-Abelian group of order $60$.
- It is, up to isomorphism, the only simple non-Abelian group of order $60$.
- There is no simple non-Abelian group of smaller order.

## Proof that there is no simple non-Abelian group of smaller order

### Proof using Sylow's theorem

We eliminate all possible orders less than $60$ using the information from Sylow's theorems. First, some preliminary observations.

- If the order is a prime power, then fact (1) tells us that the group has a nontrivial center. Hence, it cannot be a simple non-Abelian group. This eliminates the orders $2,3,4,5,7,8,9,11,13,16,17,19,23,25,27,29,31,32,37,41,43,47,49,53,59$.
- If the order is of the form $p^k m$ where $p$ is a prime and $m < p$, then fact (3) tells us that the group is not simple, since it has a nontrivial normal Sylow subgroup. This eliminates the orders $6,10,14,15,18,20,21,22,26,28,33,34,35,38,39,42,44,46,50,51,52,54,55,57,58$.

In toto, we have eliminated: $2,3,4,5,6,7,8,9,10,11,13,14,15,16,17,18,19,20,21,22,23,25,26,27,28,29,31,32,33,34,35,37,38,39,41,42,43,44,46,47,49,50,51,52,53,54,55,57,58,59$.

- If the order is $12$ or $56$, then fact (4) tells us that the group has a nontrivial normal subgroup.

The total list of eliminated numbers is now: $2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,25,26,27,28,29,31,32,33,34,35,37,38,39,41,42,43,44,46,47,49,50,51,52,53,54,55,56,57,58,59$.

The list of numbers not eliminated is: $24,30,36,40,45,48$.

- Of these remaining numbers, the following can be eliminated because a direct application of the congruence and divisibility conditions (fact (2)) yields a Sylow-unique prime divisor:
  - $40 = 2^3 \cdot 5$: Here, $n_5 = 1$.
  - $45 = 3^2 \cdot 5$: Here again, $n_5 = 1$.

Thus, the list is shortened to $24,30,36,48$.
- We use fact (5.4) to eliminate the numbers $36$ and $48$, and a slight modification of it to eliminate $24$:
  - $36 = 2^2 \cdot 3^2$: Here, we have $n_3 = 1$ or $n_3 = 4$. Fact (5.4) shows that since $36$ does not divide $n_3!/2$ in either case, a group of order $36$ is not simple.
  - $48 = 2^4 \cdot 3$: Here, we have $n_2 = 1$ or $n_2 = 3$. Fact (5.4) shows that since $48$ does not divide $n_2!$ in either case, a group of order $48$ is not simple.
  - $24 = 2^3 \cdot 3$: Here, we have $n_2 = 1$ or $n_2 = 3$. Fact (5.4) shows that since $24$ does not divide $n_2!/2$ in either case, a group of order $24$ is not simple.

This leaves only one number: $30$.

- $30 = 2 \cdot 3 \cdot 5$: For this, we use fact (6).

### Proof using Burnside's theorem

Using Burnside's theorem, the order of a simple non-Abelian group must have at least three distinct prime factors. The only numbers less than $60$ satisfying this are $30$ and $42$. Both of them can be eliminated using the methods discussed above. Refer to fact (7).

## Proof that there is only one simple group of order sixty, isomorphic to the alternating group of degree five

Given: A simple group $G$ of order sixty.

To prove: $G$ is isomorphic to the alternating group of degree five.

Proof: The key idea is to prove that $G$ has a subgroup of index five. After that, we use the fact that $A_5$ is simple to complete the proof.

1. The number of $2$-Sylow subgroups is either $5$ or $15$: By fact (2) (the congruence and divisibility conditions on Sylow numbers), we have $n_2 = 1,3,5,15$. By fact (4), $n_2$ cannot be $1$ or $3$. Thus, $n_2 = 5$ or $n_2 = 15$.
2. If $n_2 = 5$, there is a subgroup of index five: The number of $2$-Sylow subgroups equals the index of the normalizer of any $2$-Sylow subgroup (fact (5.2)). Thus, there is a subgroup of index five.
3. If $n_2 = 15$, there is a subgroup of index five:
   1. We first consider the case that any two $2$-Sylow subgroups intersect trivially: In this case, there are $(4-1) \cdot 15 = 45$ non-identity elements in $2$-Sylow subgroups. This leaves $15$ other non-identity elements. We also know that $n_3 = 1, 4, 10$ by the congruence and divisibility conditions, and fact (4) again forces $n_3 = 10$. Thus, there are $(3-1) \cdot 10 = 20$ non-identity elements in $3$-Sylow subgroups. But $45 + 20 = 65 > 60$, a contradiction.
   2. Thus, there exist at least two $2$-Sylow subgroups that intersect nontrivially. Suppose $P$ and $Q$ are two $2$-Sylow subgroups whose intersection $R = P \cap Q$ is nontrivial. The $2$-Sylow subgroups are of order $4$, hence Abelian, so $P, Q$ are Abelian. Thus, $P \le C_G(R)$ and $Q \le C_G(R)$. This yields that $S = \langle P, Q \rangle \le C_G(R)$. If $S = \langle P, Q \rangle = G$, then $R \le Z(G)$, so the center is nontrivial. We thus get a proper nontrivial normal subgroup, a contradiction. Thus, $S = \langle P, Q \rangle$ is a proper subgroup of $G$. Lagrange's theorem forces $S$ to have index either three or five in $G$. $S$ cannot have index three, by fact (5.3). Thus, $S$ must have index five.
4. $G$ has a subgroup $S$ of index five: Note that steps (2) and (3) show that for both possible values of $n_2$, $G$ has a subgroup of index five.
5. $G$ is isomorphic to a subgroup $H$ of $A_5$: This follows from fact (5.1).
6. $G$ is isomorphic to $A_5$: By order considerations, the order of $H$ equals that of $A_5$, so $H = A_5$. Thus, $G$ is isomorphic to $A_5$.
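The purely numerical part of the elimination — which orders below $60$ are ruled out by a forced unique (hence normal) Sylow subgroup, covering the prime-power and $p^k m$ cases as well as $40$ and $45$ — can be checked mechanically. The Python sketch below is added here and is not part of the original wiki page; it leaves exactly the orders $12, 24, 30, 36, 48, 56$ that the page then eliminates by hand.

```python
# An order n is ruled out whenever some prime p | n forces n_p = 1: the only
# candidate dividing m = n / p^k that is congruent to 1 mod p. A unique Sylow
# p-subgroup is normal, so no simple non-abelian group of that order exists.

def sylow_candidates(n, p):
    """Possible values of n_p: divisors of the p'-part of n that are 1 mod p."""
    m = n
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

def forced_normal_sylow(n):
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    return any(sylow_candidates(n, p) == [1] for p in primes)

survivors = [n for n in range(2, 60) if not forced_normal_sylow(n)]
print(survivors)  # -> [12, 24, 30, 36, 48, 56]
```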
http://math.stackexchange.com/questions/261937/quesion-about-creation-of-subspace-with-some-properties
# Question about creation of a subspace with some properties

Let $V$ be a vector space of finite dimension and let $K, H$ be subspaces of $V$. Prove that there is a subspace $M$ of $V$ such that $M+K=M+H$ and $M\cap K=M\cap H=\{0\}$.

- It would be helpful if we knew what your attempt was. –  dineshdileep Dec 19 '12 at 8:25
- I think you need $\dim(K)=\dim(H)$ for this to be true. Indeed, if $M\cap H=\{0\}$, then $\dim(M+H)=\dim(M)+\dim(H)$, so that is a simple proof that if such an $M$ exists, then $\dim(K)=\dim(H)$. –  Mario Carneiro Dec 19 '12 at 8:32
- I agree with @MarioCarneiro. Also pay attention to the condition $\dim(K)=\dim(H)$; this was a question in the Iranian mathematical competition for university students in the solar (Hijri) year 1368, and you can find its answer in the book written by "Mr. Bamdad YaHaghi" and also in the Linear Algebra book written by "Mr. Nikookar". –  AmirHosein SadeghiManesh Dec 19 '12 at 12:34

Assume (as was shown to be necessary) $\dim(H) = \dim(K)$. Let $L = H \cap K$. We can write $H = L \oplus N$ and $K = L \oplus P$ for some subspaces $N$ and $P$, and $\dim N = \dim H - \dim L = \dim K - \dim L = \dim P$. So there is a linear map $T$ from $P$ onto $N$. Let $M = (I+T) P = \{p + Tp: p \in P\}$.

To show $M \cap H = \{0\}$: if $h \in M \cap H$, we can write $h = p + Tp$ for some $p \in P$, but also $h = u + n$ for some $u \in L$ and $n \in N$. Thus $n - Tp = p - u$. But $n - Tp \in N \subseteq H$ while $p - u \in P + L = K$, and $H \cap K = L$ but $N \cap L = \{0\}$. Thus $p - u = 0$. But $p = u \in P \cap L = \{0\}$, so $p = 0$ and $h = 0 + T 0 = 0$. The proof of $M \cap K = \{0\}$ is similar.

To show $M + K \subseteq M + H$: take any $y \in M + K = M + L + P$. Then $y = p + T p + r + q$ where $p \in P$, $r \in L$ and $q \in P$. Now write this as $y = (p + q) + T(p+q) - T q + r$. We have $p+q \in P$ so $(p+q) + T(p+q) \in M$, $-Tq \in N$ and so $-Tq + r \in N + L = H$, and thus $y \in M + H$.
But since $M \cap H = \{0\}$ and $M \cap K = \{0\}$, $\dim(M+H) = \dim M + \dim H = \dim M + \dim K = \dim(M+K)$, so $M + K = M+H$.

- +1 neat answer. –  B. S. Dec 19 '12 at 10:35

Assume $\dim(H)=\dim(K)$. Let $\{e_1,\dots,e_a\}$ be a basis for $H\cap K$, and let $\{e_1,\dots,e_a,h_1,\dots,h_b\}$ and $\{e_1,\dots,e_a,k_1,\dots,k_b\}$ be bases for $H$ and $K$, respectively (where $a+b=\dim(H)=\dim(K)$). Then $h_i\notin K$, because if it was, then $h_i\in H\cap K$ implies $h_i$ is a linear combination of the $e_i$, so $\{e_1,\dots,e_a,h_1,\dots,h_b\}$ is not a linearly independent set. Similarly, $k_i\notin H$. Thus, let $M=\operatorname{span}(\{h_1+k_1,\dots,h_b+k_b\})$. If $x\in M\cap H-\{0\}$, then $$x=A_1e_1+\dots+A_ae_a+B_1h_1+\dots+B_bh_b=C_1(h_1+k_1)+\dots+C_b(h_b+k_b)$$ $$A_1e_1+\dots+A_ae_a+(B_1-C_1)h_1+\dots+(B_b-C_b)h_b=C_1k_1+\dots+C_bk_b:=y$$ which expresses $y\in H\cap K=\{0\}$. Thus $C_i=0$, and so $x=0$, a contradiction. Thus $M\cap H=\{0\}$. Similarly, $M\cap K=\{0\}$. But if $x\in H+M$, then \begin{align} x&=A_1e_1+\dots+A_ae_a+B_1h_1+\dots+B_bh_b+C_1(h_1+k_1)+\dots+C_b(h_b+k_b) \\ &=A_1e_1+\dots+A_ae_a+(B_1+C_1)h_1+\dots+(B_b+C_b)h_b+C_1k_1+\dots+C_bk_b\in H+K, \end{align} so $H+M\subseteq H+K$. Conversely, if $x\in H+K$, then \begin{align} x&=A_1e_1+\dots+A_ae_a+B_1h_1+\dots+B_bh_b+C_1k_1+\dots+C_bk_b \\ &=A_1e_1+\dots+A_ae_a+(B_1-C_1)h_1+\dots+(B_b-C_b)h_b+C_1(h_1+k_1)+\dots+C_b(h_b+k_b) \end{align} so $H+M=H+K$. Similarly, $K+M=H+K$.

Note that I had to assume $\dim(H)=\dim(K)$ at the start. Conversely, if $M\cap H=\{0\}$ and $M+H=M+K$, then $\dim(M)+\dim(H)=\dim(M+H)=\dim(M+K)=\dim(M)+\dim(K)$, so $\dim(H)=\dim(K)$ is a necessary and sufficient condition for this construction to exist.

- Not true. Take $K = \{0\}$ and $H = V$. Also $V\neq\{0\}$. –  Mario Carneiro Dec 19 '12 at 8:27
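The construction above can be sanity-checked numerically via matrix ranks. The sketch below is added here; the concrete vectors in $\mathbb{R}^5$ (a basis vector $e_1$ spanning $L = H \cap K$, plus $h_1, h_2, k_1, k_2$) are chosen for this check and are not from the post:

```python
# Check the construction M = span{h1+k1, h2+k2} in a concrete case:
# V = R^5, H = span{e1, h1, h2}, K = span{e1, k1, k2}, with e1..k2 the
# standard basis, so L = H ∩ K = span{e1}.
import numpy as np

e1, h1, h2, k1, k2 = np.eye(5)
H = np.column_stack([e1, h1, h2])
K = np.column_stack([e1, k1, k2])
M = np.column_stack([h1 + k1, h2 + k2])

rank = np.linalg.matrix_rank

# Trivial intersections: dim(M+H) = dim M + dim H, and likewise for K.
assert rank(np.hstack([M, H])) == rank(M) + rank(H)  # M ∩ H = {0}
assert rank(np.hstack([M, K])) == rank(M) + rank(K)  # M ∩ K = {0}

# M + H = M + K: the two sums are equal iff neither stack gains rank
# when the other is appended.
assert rank(np.hstack([M, H])) == rank(np.hstack([M, K])) == rank(np.hstack([M, H, K]))
print("construction verified in R^5")
```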
http://math.stackexchange.com/users/11888/giorgio-mossa?tab=activity&sort=revisions
Giorgio Mossa

Reputation: 7,456

Revision activity:

- Aug 23: revised "Prime ideals in $R[x]$, $R$ a PID" (deleted wrong answer, made some comments to help the OP solve the wished problem)
- Jul 31: revised "Failure of group definition with weaker axioms" (added stuff)
- Jul 12: revised "When the unit of a universal property is an isomorphism" (made a correction)
- Jul 3: revised "A functor preserves a product of $A$ and $B$ iff $F(A \times B) \cong F(A) \times F(B)$?" (added 37 characters in body)
- Jul 3: revised "A functor preserves a product of $A$ and $B$ iff $F(A \times B) \cong F(A) \times F(B)$?" (added 270 characters in body)
- Jun 23: revised "Category of sets and multi-valued functions" (corrected an equation)
- Jun 23: revised "Proving consistency by constructing models? How and why?" (improved answer)
- Jun 22: revised "Does mathematics become circular at the bottom? What is at the bottom of mathematics?" (improved the answer)
- Jun 20: revised "Iterating until a diagram commutes" (made a correction in a formula)
- Jun 20: revised "How to introduce type theory to newcomer" (improved grammar)
- May 19: revised "Homology of wedge sum is the direct sum of homologies" (made minor corrections)
- May 19: revised "Homology of wedge sum is the direct sum of homologies" (rolled back to a previous revision)
- Apr 24: revised "How to introduce type theory to newcomer" (deleted 170 characters in body)
- Apr 24: revised "How to introduce type theory to newcomer" (made the question narrower and more precise)
- Mar 26: revised "Limit as universal arrow" (made a correction)
- Jan 19: revised "How to introduce type theory to newcomer" (added details in order to address more specific issues)
- Jan 19: revised "How to introduce type theory to newcomer" (deleted 238 characters in body)
- Jan 7: revised "How to introduce type theory to newcomer" (added specifications)
- Jan 7: revised "No group of order $400$ is simple - clarification" (fixed a little typo in the title)
- Jan 6: revised "How to introduce type theory to newcomer" (English corrections)
https://nrich.maths.org/327
### Code to Zero

Find all 3 digit numbers such that by adding the first digit, the square of the second and the cube of the third you get the original number, for example 1 + 3^2 + 5^3 = 135.

### Dodgy Proofs

What is wrong with these dodgy proofs?

# Exhaustion

##### Age 16 to 18, Challenge Level

Find the positive integer solutions of the equation

$\left(1 + \frac{1}{a}\right)\left(1 + \frac{1}{b}\right) \left(1+ \frac{1}{c}\right) = 2.$

This problem was taken from the Hungarian magazine KöMaL. There are many other challenging problems in English on the KöMaL website.
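Since each factor on the left is strictly greater than $1$ and at most $2$, the search space for the Exhaustion problem is finite, so the equation can be checked exhaustively. The brute-force sketch below is added here and is not part of the NRICH page; the upper bound of 30 is a convenient assumption (each factor $> 1$ already forces $a \le 3$, and the largest value any variable attains is well under 30). Exact rational arithmetic avoids floating-point comparisons. Spoiler warning: running it prints the solutions.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Enumerate positive integers a <= b <= c with (1+1/a)(1+1/b)(1+1/c) = 2.
solutions = [
    (a, b, c)
    for a, b, c in combinations_with_replacement(range(1, 31), 3)
    if (1 + Fraction(1, a)) * (1 + Fraction(1, b)) * (1 + Fraction(1, c)) == 2
]
print(solutions)
```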
https://dsp.stackexchange.com/questions/57968/output-of-an-lti-system-given-its-transfer-function-and-input
# Output of an LTI system given its transfer function and input

Given the transfer function

$$T(s) = \frac{100}{1 + \frac{s}{10^{6}}}$$

and the input

$$v_i(t) = 0.1 \sin(100t)$$

find the output, $$v_o(t)$$.

My approach was to use $$v_o(t) = \mathcal{L^{-1}}\left\{T(s)\ V_i(s)\right\}$$, where $$V_i(s) = \mathcal{L\left\{v_i(t)\right\}}$$. This gives

$$v_o(t) = \left(\frac{10^5}{10^8+1}\right) \mathrm{e}^{-10^6 \,t} - \left(\frac{10^5}{10^8+1}\right) \cos\left(100\,t\right) + \left(\frac{10^9}{10^8+1}\right) \sin\left(100\,t\right)$$

in MATLAB. However, my textbook does the following:

Which one is correct? The difference between the two functions is of the order $$10^{-3}$$, and the first function is not reducible to the second in MATLAB.

Edit: This is the whole question — Example 1.5 from Sedra & Smith's Microelectronic Circuits (7th ed.):

• Note that you are misusing the conventional notation. The $s$ notation usually represents the Laplace transform, whereas $j\omega$ represents the Fourier transform. Please review your question; this may help you understand. Also, more details like the full question may be useful. – havakok Apr 27 at 7:55
• Go to the book's index and search for "Laplace". Which Laplace transform do they use? I used that book over a decade ago, but don't recall such details. – Rodrigo de Azevedo Apr 27 at 21:08
• Yes, they don't talk about Laplace transforms, but I was trying to use what I had learnt in earlier courses like advanced engineering mathematics and circuit analysis. In fact, I just realized that phasors can be used here since the input is sinusoidal. – Leponzo Apr 27 at 22:08

You cannot solve this problem using the Laplace transform. The reason is that the Laplace transform of the input signal doesn't exist. You could use the Fourier transform, but in this case there's an even simpler way to determine the output signal.
You need to know one important property of linear time-invariant (LTI) systems: their response to a sinusoidal input is a sinusoidal signal with the same frequency, but with its amplitude and phase altered according to the system's frequency response evaluated at the input frequency. So for an input signal

$$x(t)=A\sin(\omega_0t+\phi)\tag{1}$$

the output is given by

$$y(t)=A\big|H(j\omega_0)\big|\sin\left(\omega_0t+\phi+\arg\big\{H(j\omega_0)\big\}\right)\tag{2}$$

where

$$H(j\omega)=\big|H(j\omega)\big|e^{j\arg\{H(j\omega)\}}\tag{3}$$

is the system's frequency response.

• Is the reason that the Laplace transform of the input does not exist that it is not $0$ for $t < 0$? – Leponzo Apr 27 at 11:51
• Also, is there a way to get this output using MATLAB? I tried fourier and ifourier but those don't seem to give this result. – Leponzo Apr 27 at 12:56
• @Leponzo: Yes, a sinusoid has a constant envelope for all values of $t$, so the Laplace integral doesn't converge. – Matt L. Apr 27 at 15:20
• @Leponzo: I don't know how to obtain the same result with Matlab, but I think such exercises are best solved by hand, in order to learn and understand the basic properties of LTI systems. – Matt L. Apr 27 at 15:21
• Without the adjective "bilateral", isn't your answer incorrect? – Rodrigo de Azevedo Apr 27 at 22:20
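Applying equations (2) and (3) to the transfer function in the question is straightforward numerically. The sketch below (added here, not from the original thread) evaluates $T(j\omega_0)$ at $\omega_0 = 100$:

```python
import cmath

# Sinusoidal steady-state response of T(s) = 100 / (1 + s/10^6) to
# v_i(t) = 0.1 sin(100 t): evaluate the frequency response at w0 = 100.
w0 = 100.0
H = 100 / (1 + 1j * w0 / 1e6)   # T(j*w0)
amp = 0.1 * abs(H)               # output amplitude
phase = cmath.phase(H)           # output phase shift in radians

print(f"v_o(t) ≈ {amp:.6f} sin(100 t + {phase:.2e})")
```

The amplitude comes out as essentially $10$ and the phase as about $-10^{-4}$ rad, consistent with the Laplace-based expression in the question: its sine coefficient $10^9/(10^8+1)$ is essentially $10$, its cosine coefficient $-10^5/(10^8+1) \approx -10^{-3}$ matches $10 \times \sin(-10^{-4})$, and the $e^{-10^6 t}$ transient decays essentially instantly.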
https://quant.stackexchange.com/questions/54997/risk-neutral-pricing-vs-real-world-pricing
# Risk-neutral pricing vs real-world pricing

Could you please explain why, to calculate the asset price under the risk-neutral probability, we have to take the expectation of the future cash flow, while in the real world we don't have to take that expectation? (i.e. we keep the same amount of future cash flow, which is then discounted at its required rate of return.) Thank you very much.
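The distinction being asked about can be illustrated with a one-period binomial sketch. All numbers below (stock prices, probabilities, rates) are invented for illustration and are not from the question:

```python
# One-period binomial model: stock S0 = 100 moves to 120 or 80,
# risk-free rate r = 0, real-world up-probability p = 0.6.
# Price a call option with strike 100 two ways.
S0, up, down, r, strike = 100.0, 120.0, 80.0, 0.0, 100.0
payoff_up, payoff_down = max(up - strike, 0), max(down - strike, 0)

# Risk-neutral route: q makes the discounted stock a martingale; the price
# is the discounted risk-neutral expectation of the payoff.
q = (S0 * (1 + r) - down) / (up - down)                    # q = 0.5
price = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)  # 10.0

# Real-world route: the real-world expected payoff (with p = 0.6) must be
# discounted at a larger, risk-adjusted required return mu to recover the
# same price.
p = 0.6
expected_payoff = p * payoff_up + (1 - p) * payoff_down    # 12.0
mu = expected_payoff / price - 1                           # implied 20% required return
print(price, expected_payoff, mu)
```

Both routes give the same price: the risk-neutral route takes the expectation under an adjusted probability and discounts at the risk-free rate, while the real-world route keeps the real-world expected cash flow but must discount it at a higher, risk-adjusted required rate of return.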
http://wiki.opengatecollaboration.org/index.php?title=Old_Users_Guide:Getting_started&oldid=1255
# Old Users Guide:Getting started

This paragraph is an overview of the main steps one must go through to perform a simulation using Gate. It is presented in the form of a simple example that the user is encouraged to try out, while reading this section. A more detailed description of the different steps is given in the following sections of this user's guide.

The use of Gate does not require any C++ programming, thanks to a dedicated scripting mechanism that extends the native command interpreter of Geant4. This interface allows the user to run Gate programs using command scripts only. The goal of this first section is to give a brief description of the user interface and to provide understanding of the basic principles of Gate by going through the different steps of a simulation.

## Simulation architecture for imaging applications

In each simulation, the user has to:

1. define the scanner geometry
2. define the phantom geometry
3. set up the physics processes
4. initialize the simulation: /gate/run/initialize

and then, after initialization:

1. set up the detector model
2. define the source(s)
3. specify the data output format
4. start the acquisition

Steps 1) to 4) concern the initialization of the simulation. Following the initialization, the geometry can no longer be changed.

## Simulation architecture for dosimetry and radiotherapy applications

In each simulation, the user has to:

1. define the beam geometry
2. define the phantom geometry
3. specify the output (actor concept for dose map etc...)
4. set up the physics processes
5. initialize the simulation: /gate/run/initialize

and then, after initialization:

1. define the source(s)
2. start the simulation with the following command lines:

    /gate/application/setTotalNumberOfPrimaries [particle_number]
    /gate/application/start

## The user interface: a macro language

Gate, just as GEANT4, is a program in which the user interface is based on scripts.
To perform actions, the user must either enter commands in interactive mode, or build up macro files containing an ordered collection of commands. Each command performs a particular function, and may require one or more parameters. The Gate commands are organized following a tree structure, with respect to the function they represent. For example, all geometry-control commands start with geometry, and they will all be found under the /geometry/ branch of the tree structure.

When Gate is run, the Idle> prompt appears. At this stage the command interpreter is active; i.e. all the Gate commands entered will be interpreted and processed on-line. All functions in Gate can be accessed using command lines. The geometry of the system, the description of the radioactive source(s), the physical interactions considered, etc., can be parameterized using command lines, which are translated to the Gate kernel by the command interpreter. In this way, the simulation is defined one step at a time, and the actual construction of the geometry and definition of the simulation can be seen on-line. If the effect is not as expected, the user can decide to re-adjust the desired parameter by re-entering the appropriate command on-line.

Although entering commands step by step can be useful when the user is experimenting with the software or when he/she is not sure how to construct the geometry, there remains a need for storing the set of commands that led to a successful simulation. Macros are ASCII files (with '.mac' extension) in which each line contains a command or a comment. Commands are GEANT4 or Gate scripted commands; comments start with the character '#'. Macros can be executed from within the command interpreter in Gate, or by passing it as a command-line parameter to Gate, or by calling it from another macro. A macro or set of macros must include all commands describing the different components of a simulation in the right order.
Usually these components are visualization, definitions of volumes (geometry), systems, digitizer, physics, initialization, source, output and start. These steps are described in the next sections. A single simulation may be split into several macros, for instance one for the geometry, one for the physics, etc. Usually, there is a master macro which calls the more specific macros. Splitting macros allows the user to re-use one or more of these macros in several other simulations, and/or to organize the set of all commands. Examples of complete macros can be found on the web site referenced above.

To execute a macro (mymacro.mac in this example) from the Linux prompt, just type:

    Gate mymacro.mac

To execute a macro from inside the Gate environment, type after the "Idle>" prompt:

    Idle> /control/execute mymacro.mac

And finally, to execute a macro from inside another macro, simply write in the master macro:

    /control/execute mymacro.mac

In the following sections, the main steps to perform a simulation for imaging applications using Gate are presented in detail.

## Step 1: Defining a scanner geometry

Fig 1.1: World volume.

The user needs to define the geometry of the simulation based on volumes. All volumes are linked together following a tree structure where each branch represents a volume. Each volume is characterized by shape, size, position, and material composition. The default material assigned to a new volume is Air. The list of available materials is defined in the GateMaterials.db file (see Users Guide:Materials). The location of the material database needs to be specified with the following command:

    /gate/geometry/setMaterialDatabase MyMaterialDatabase.db

The base of the tree is represented by the world volume (fig 1.1) which sets the experimental framework of the simulation.
All Gate commands related to the construction of the geometry are described in detail in Users Guide:Defining a geometry.

The world volume is a box centered at the origin. It can be of any size and has to be large enough to include the entire simulation geometry. The tracking of any particle stops when it escapes from the world volume. The example given here simulates a system that fits into a box of 40 x 40 x 40 cm³. Thus, the world volume may be defined as follows:

```
# W O R L D
/gate/world/geometry/setXLength 40. cm
/gate/world/geometry/setYLength 40. cm
/gate/world/geometry/setZLength 40. cm
```

The world contains one or more sub-volumes referred to as daughter volumes:

```
/gate/world/daughters/name vol_name
```

The name vol_name of the first daughter of the world has a specific meaning and name: it specifies the type of scanner to be simulated. Users Guide:Defining a system gives the specifics of each type of scanner, also called system. In the current example, the system is a CylindricalPET system. This system assumes that the scanner is based on a cylindrical configuration (fig 1.2) of blocks, each block containing a set of crystals.

Figure 1.2: Cylindrical scanner

```
# S Y S T E M
/gate/world/daughters/name cylindricalPET
/gate/world/daughters/insert cylinder
/gate/cylindricalPET/setMaterial Water
/gate/cylindricalPET/geometry/setRmax 100 mm
/gate/cylindricalPET/geometry/setRmin 86 mm
/gate/cylindricalPET/geometry/setHeight 18 mm
/gate/cylindricalPET/vis/forceWireframe
/vis/viewer/zoom 3
```

The first seven command lines describe the global geometry of the scanner. The shape of the scanner is a cylinder filled with water, with an external radius of 100 mm and an internal radius of 86 mm. The length of the cylinder is 18 mm. The forceWireframe command sets the visualization to wireframe mode. You may see the following message when creating the geometry:

```
G4PhysicalVolumeModel::Validate() called.
Volume of the same name and copy number ("world_phys", copy 0) still exists and is being used.
WARNING: This does not necessarily guarantee it's the same volume you originally specified in /vis/scene/add/.
```

This message is normal and you can safely ignore it.

At any time, the user can list all the possible commands. For example, the command line for listing the visualization commands is:

```
Idle> ls /gate/cylindricalPET/vis/
```

Let's assume that the scanner is made of 30 blocks (box1), each block containing 8 x 8 LSO crystals (box2). The following command lines describe this scanner (see Users Guide:Defining a geometry for a detailed explanation of these commands). First, the geometry of each block needs to be defined as a daughter of the system (here the cylindricalPET system).

Figure 1.3: first level of the scanner

```
# F I R S T   L E V E L   O F   T H E   S Y S T E M
/gate/cylindricalPET/daughters/name box1
/gate/cylindricalPET/daughters/insert box
/gate/box1/placement/setTranslation 91. 0 0 mm
/gate/box1/geometry/setXLength 10. mm
/gate/box1/geometry/setYLength 17.75 mm
/gate/box1/geometry/setZLength 17.75 mm
/gate/box1/setMaterial Water
/gate/box1/vis/setColor yellow
/gate/box1/vis/forceWireframe
```

Once the block is created (fig 1.3), the crystal can be defined as a daughter of the block (fig 1.4).

Figure 1.4: crystal, daughter of the block

```
# C R Y S T A L
/gate/box1/daughters/name box2
/gate/box1/daughters/insert box
/gate/box2/geometry/setXLength 10. mm
/gate/box2/geometry/setYLength 2. mm
/gate/box2/geometry/setZLength 2. mm
/gate/box2/setMaterial LSO
/gate/box2/vis/setColor red
/gate/box2/vis/forceWireframe
# Z O O M
/vis/viewer/zoom 4
/vis/viewer/panTo 60 -40 mm
```

The zoom command line in the script allows the user to zoom the geometry, and the panTo command translates the viewer window by 60 mm in the horizontal and -40 mm in the vertical direction (the default is the origin of the world, (0,0,0)).

To obtain the complete matrix of crystals, the volume box2 needs to be repeated in the Y and Z directions (fig 1.5).

Figure 1.5: matrix of crystals

```
# R E P E A T   C R Y S T A L
/gate/box2/repeaters/insert cubicArray
/gate/box2/cubicArray/setRepeatNumberX 1
/gate/box2/cubicArray/setRepeatNumberY 8
/gate/box2/cubicArray/setRepeatNumberZ 8
/gate/box2/cubicArray/setRepeatVector 0. 2.25 2.25 mm
```

To obtain the complete ring detector, the original block is repeated 30 times (fig 1.6).

The geometry of this simple PET scanner has now been specified. The next step is to connect this geometry to the system in order to store data from particle interactions (called hits) within the volumes which represent detectors (sensitive detectors). Gate only stores hits for those volumes attached to a sensitive detector; hits from interactions occurring in non-sensitive volumes are lost. A volume must belong to a system before it can be attached to a sensitive detector: hits occurring in a volume cannot be scored in an output file if this volume is not connected to a system, because such a volume cannot be attached to a sensitive detector. The concepts of system and sensitive detector are discussed in more detail in Users Guide:Defining a system and Users Guide:Attaching the sensitive detectors, respectively. The following commands are used to connect the volumes to the system.

Figure 1.6: complete ring of 30 block detectors

```
# R E P E A T   R S E C T O R
/gate/box1/repeaters/insert ring
/gate/box1/ring/setRepeatNumber 30
# Z O O M
/vis/viewer/zoom 0.25
/vis/viewer/panTo 0 0 mm
# A T T A C H   V O L U M E S   T O   A   S Y S T E M
/gate/systems/cylindricalPET/rsector/attach box1
/gate/systems/cylindricalPET/module/attach box2
```

The names rsector and module are dedicated names and correspond to the first and second levels of the CylindricalPET system (see Users Guide:Defining a system).
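The dimensions used above are mutually consistent, which can be verified with a few lines of Python (this check is purely illustrative and not part of Gate):

```python
# Sanity check of the scanner geometry defined above (illustrative only).
n_y, n_z = 8, 8            # crystals per block along Y and Z
crystal_yz = 2.0           # crystal size in Y and Z (mm)
pitch = 2.25               # cubicArray repeat step in Y and Z (mm)
block_yz = 17.75           # block size in Y and Z (mm)

# 8 crystals at a 2.25 mm pitch span 7 gaps plus one crystal width:
span = (n_y - 1) * pitch + crystal_yz
assert span == block_yz    # 7 * 2.25 + 2.0 = 17.75 mm, exactly the block size

# Total number of crystals in the full ring of 30 blocks:
n_blocks = 30
total_crystals = n_blocks * n_y * n_z
print(total_crystals)      # 1920
```

In other words, the crystal matrix exactly fills the 17.75 mm block, and the full ring contains 1920 sensitive crystals.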
In order to save the hits (see Users Guide:Digitizer and readout parameters) in the volumes corresponding to the crystals, the appropriate command in this example is:

```
# D E F I N E   A   S E N S I T I V E   D E T E C T O R
/gate/box2/attachCrystalSD
```

At this level of the macro file, the user can implement detector movement. One of the most distinctive features of Gate is the management of time-dependent phenomena, such as detector movements and source decay, leading to a coherent description of the acquisition process. For simplicity, the simulation described in this tutorial does not take into account the motion of the detector or the phantom. Users Guide:Defining a geometry describes the movement of volumes in detail.

## Second step: Defining a phantom geometry

The volume to be scanned is built according to the same principle used to build the scanner. The external envelope of the phantom is a daughter of the world. The following command lines describe a cylinder with a radius of 10 mm and a length of 30 mm. The cylinder is filled with water and will be displayed in grey. This object represents the attenuation medium of the phantom.

Figure 1.7: cylindrical phantom

```
# P H A N T O M
/gate/world/daughters/name my_phantom
/gate/world/daughters/insert cylinder
/gate/my_phantom/setMaterial Water
/gate/my_phantom/vis/setColor grey
/gate/my_phantom/geometry/setRmax 10. mm
/gate/my_phantom/geometry/setHeight 30. mm
```

To retrieve information about the Compton and Rayleigh interactions within the phantom, a sensitive detector (phantomSD) is associated with the volume using the following command line:

```
# P H A N T O M   D E F I N E D   A S   S E N S I T I V E
/gate/my_phantom/attachPhantomSD
```

Two types of information will now be recorded for each hit in the hit collection:

* The number of scattering interactions generated in all physical volumes attached to the phantomSD.
* The name of the physical volume attached to the phantomSD in which the last interaction occurred.
These concepts are further discussed in Users Guide:Attaching the sensitive detectors.

## Third step: Setting up the physics processes

Once the volumes and corresponding sensitive detectors are described, the interaction processes of interest in the simulation have to be specified. Gate uses the GEANT4 models for physical processes. The user has to choose among these processes for each particle, and can then customize the simulation by setting the production thresholds, the cuts, the electromagnetic options, etc. Some typical physics lists are available in the directory examples/PhysicsLists:

* egammaStandardPhys.mac (physics list for photons, e- and e+ with standard processes and the recommended Geant4 "option3")
* egammaLowEPhys.mac (physics list for photons, e- and e+ with low-energy processes)
* egammaStandardPhysWithSplitting.mac (alternative to egammaStandardPhys.mac with selective bremsstrahlung splitting)
* hadrontherapyStandardPhys.mac (physics list for hadron therapy with standard processes and the recommended Geant4 "option3")

The details of the interaction processes, cuts and options available in Gate are described in Users Guide:Setting up the physics.

## Fourth step: Initialization

When the three steps described above are completed, corresponding to the pre-initialization mode of GEANT4, the simulation should be initialized using:

```
# I N I T I A L I Z E
/gate/run/initialize
```

This initialization actually triggers the calculation of the cross-section tables. After this step, the physics list cannot be modified any more and new volumes cannot be inserted into the geometry.

## Fifth step: Setting up the digitizer

The basic output of Gate is a hit collection in which data such as the position, the time and the energy of each hit are stored. The history of a particle is thus registered through all the hits generated along its track. The goal of the digitizer is to build physical observables from the hits and to model readout schemes and trigger logics.
Several functions are grouped under the Gate digitizer object, which is composed of different modules that may be inserted into a linear signal-processing sequence. As an example, the following command line inserts an adder that sums the hits generated per elementary volume (a single crystal, defined as box2 in our example):

```
/gate/digitizer/Singles/insert adder
```

Another module can describe the readout scheme of the simulation. Except when one crystal is read out by one photo-detector, the readout segmentation can be different from the elementary geometrical structure of the detector. The readout geometry is an artificial geometry which is usually associated with a group of sensitive detectors. In this example, this group is box1.

```
/gate/digitizer/Singles/insert readout
```

In this example, the readout module sums the energy deposited in all crystals within the block and determines the position of the crystal with the highest energy deposit ("winner takes all"). The setDepth command specifies at which geometry level (called "depth") the readout function is performed. In the current example:

* base level (CylindricalPET) = depth 0
* first daughter (box1) of the system = depth 1
* next daughter (box2) of the system = depth 2
* and so on...

In order to take into account the energy resolution of the detector, and to collect singles within a pre-defined energy window only, other modules can be used:

```
# E N E R G Y   B L U R R I N G
/gate/digitizer/Singles/insert blurring
/gate/digitizer/Singles/blurring/setResolution 0.19
/gate/digitizer/Singles/blurring/setEnergyOfReference 511. keV
# E N E R G Y   W I N D O W
/gate/digitizer/Singles/insert thresholder
/gate/digitizer/Singles/thresholder/setThreshold 350. keV
/gate/digitizer/Singles/insert upholder
/gate/digitizer/Singles/upholder/setUphold 650. keV
```

Here, an energy resolution of 19% at 511 keV is considered. Furthermore, the energy window is set from 350 keV to 650 keV.
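To illustrate what such a blurring module does, here is a sketch of the standard Gaussian energy-blurring model in Python. It assumes the common 1/sqrt(E) scaling of the fractional resolution from its value at the reference energy; this is an assumption for illustration, not Gate's actual implementation (see the digitizer chapter for the exact model):

```python
import math
import random

def blur_energy(e_kev, resolution=0.19, e_ref_kev=511.0, rng=random.Random(0)):
    """Smear a deposited energy with a Gaussian energy response.

    Sketch only: the fractional FWHM resolution is assumed to scale as
    1/sqrt(E) from its value at the reference energy, so
    FWHM(E) = resolution * sqrt(e_ref * E)."""
    fwhm = resolution * math.sqrt(e_ref_kev * e_kev)        # FWHM in keV
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))   # FWHM -> sigma
    return rng.gauss(e_kev, sigma)

# At 511 keV the FWHM is 0.19 * 511 ≈ 97 keV, i.e. sigma ≈ 41 keV.
blurred = blur_energy(511.0)
# Acceptance by the thresholder/upholder pair defined above:
in_window = 350.0 <= blurred <= 650.0
```

A blurred 511 keV photopeak then falls outside the [350, 650] keV window only in the far Gaussian tails, which is what the thresholder and upholder exploit.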
For PET simulations, the coincidence sorter is also implemented at the digitizer level:

```
# C O I N C I D E N C E   S O R T E R
/gate/digitizer/Coincidences/setWindow 10. ns
```

Other digitizer modules are available in Gate and are described in Users Guide:Digitizer and readout parameters.

## Sixth step: Setting up the source

In Gate, a source is represented by a volume in which the particles (positron, gamma, ion, proton, ...) are emitted. The user can define the geometry of the source and its characteristics such as the direction of emission, the energy distribution, and the activity. The lifetime of unstable sources (radioactive ions) is usually obtained from the GEANT4 database, but it can also be set by the user.

A voxelized phantom or a patient dataset can also be used to define the source, in order to simulate realistic acquisitions. For a complete description of all functions to define the sources, see Users Guide:Voxelized Source and Phantom.

In the current example, the source is a 100 kBq (100000 becquerel, as set below) line source, defined as a cylinder with a radius of 0.5 mm and a length of 50 mm. The source generates pairs of 511 keV gamma particles emitted back-to-back (for a more realistic source model, the range of the positron and the non-collinearity of the two gammas can also be taken into account).

```
# S O U R C E
/gate/source/twogamma/setActivity 100000. becquerel
/gate/source/twogamma/setType backtoback
# POSITION
/gate/source/twogamma/gps/centre 0. 0. 0. cm
# PARTICLE
/gate/source/twogamma/gps/particle gamma
/gate/source/twogamma/gps/energytype Mono
/gate/source/twogamma/gps/monoenergy 0.511 MeV
# TYPE = Volume or Surface
/gate/source/twogamma/gps/type Volume
# SHAPE = Sphere or Cylinder
/gate/source/twogamma/gps/shape Cylinder
/gate/source/twogamma/gps/halfz 25 mm
# SET THE ANGULAR DISTRIBUTION OF EMISSION
/gate/source/twogamma/gps/angtype iso
# SET MIN AND MAX EMISSION ANGLES
/gate/source/twogamma/gps/mintheta 0. deg
/gate/source/twogamma/gps/maxtheta 180. deg
/gate/source/twogamma/gps/minphi 0. deg
/gate/source/twogamma/gps/maxphi 360. deg
/gate/source/list
```

## Seventh step: Defining data output

By default, the data output formats for all systems used by Gate are ASCII and ROOT, as described in the following command lines:

```
# ASCII OUTPUT FORMAT
/gate/output/ascii/enable
/gate/output/ascii/setFileName test
/gate/output/ascii/setOutFileHitsFlag 0
/gate/output/ascii/setOutFileSinglesFlag 1
/gate/output/ascii/setOutFileCoincidencesFlag 1
# ROOT OUTPUT FORMAT
/gate/output/root/enable
/gate/output/root/setFileName test
/gate/output/root/setRootSinglesFlag 1
/gate/output/root/setRootCoincidencesFlag 1
```

Given this script, several ASCII files (.dat extension) and a ROOT file (test.root) will be created. Users Guide:Data output explains how to read the resulting files.

For some scanner configurations, the events may be stored in a sinogram format or in List Mode Format (LMF). The sinogram output module stores the coincident events from a cylindrical scanner system in a set of 2D sinograms according to the parameters set by the user (number of radial bins and angular positions). One 2D sinogram is created for each pair of crystal rings. The sinograms are stored either in raw format or in ecat7 format. The List Mode Format is the format developed by the Crystal Clear Collaboration (LGPL licence). A library has been incorporated in Gate to read, write, and analyze the LMF format. A complete description of all available outputs is given in Users Guide:Data output.

## Eighth step: Starting an acquisition

In the next and final step the acquisition is defined. The beginning and the end of the acquisition are defined as in a real-life experiment. In addition, Gate needs a time-slice parameter which defines the time period during which the simulated system is assumed to be static. At the beginning of each time slice, the geometry is updated according to the requested movements.
During each time slice, the geometry is kept static and the simulation of particle transport and data acquisition proceeds. Each slice corresponds to a Geant4 run.

If the sources involved in the simulation are not radioactive, or if no activity is defined, the user can fix the total number of events. In this case, the number of particles is split among the slices in proportion to the duration of each slice:

```
/gate/application/setTotalNumberOfPrimaries [N]
```

The user can also fix the same number of events per slice. In this case, each event is weighted by the ratio between the time slice and the total simulation time:

```
/gate/application/setNumberOfPrimariesPerRun [N]
```

### Regular time slice approach

This is the standard Gate approach for imaging applications (PET, SPECT and CT). The user has to define the beginning and the end of the acquisition using the commands setTimeStart and setTimeStop. Each slice has the same duration, defined with setTimeSlice:

```
/gate/application/setTimeSlice 1. s
/gate/application/setTimeStart 0. s
/gate/application/setTimeStop 1. s
```

The choice of the generator seed is also extremely important. There are three ways to make that choice:

* The 'default' option. In this case the default CLHEP internal seed is taken. This seed is always the same.
* The 'auto' option. In this case, a new seed is automatically generated each time GATE is run. To randomly generate the seed, the time in milliseconds since January 1, 1970 and the process ID of the GATE instance (i.e. the system ID of the running GATE process) are used. So each time GATE is run, a new seed is used.
* The 'manual' option. In this case, the user can manually set the seed. The seed is an unsigned integer value, and it is recommended to take it in the interval [0, 900000000].
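The event-weighting rule of setNumberOfPrimariesPerRun described above can be made concrete with a small illustrative calculation (plain Python, not Gate code; the slice durations are hypothetical):

```python
# Illustrative event weights for setNumberOfPrimariesPerRun (not Gate code).
slices_s = [1.0, 2.0, 7.0]          # hypothetical slice durations in seconds
total_s = sum(slices_s)             # total simulation time: 10 s

# Each event generated during a slice carries the weight
# (slice duration) / (total simulation time):
weights = [dt / total_s for dt in slices_s]
print(weights)                      # [0.1, 0.2, 0.7]
```

The weights sum to one, so running the same number of primaries per slice still reproduces the correct relative contribution of each slice.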
The commands associated with the choice of the seed for the three options are the following:

```
/gate/random/setEngineSeed default
/gate/random/setEngineSeed auto
/gate/random/setEngineSeed 123456789
```

It is also possible to control the initialization of the engine directly, by selecting the file containing the seeds with the command:

```
/gate/random/resetEngineFrom fileName
```

The acquisition is started with:

```
# S T A R T   the   A C Q U I S I T I O N
/gate/application/startDAQ
```

The number of projections, or runs, of the simulation is thus defined by:

$N_{\mathrm{run}} = \dfrac{\mathrm{setTimeStop} - \mathrm{setTimeStart}}{\mathrm{setTimeSlice}}$

Figure 1.8: Simulation is started

In the current example there is no motion, the acquisition time equals 1 second, and the number of projections equals one. If you want to exit from the Gate program when the simulation time exceeds the time duration, the last line of your program has to be exit.

As a Monte Carlo tool, GATE needs a random generator. The CLHEP libraries provide various ones. Three different random engines are currently available in GATE: Ranlux64, JamesRandom and MersenneTwister. The default one is the Mersenne Twister, but this can be changed easily using:

```
/gate/random/setEngineName aName
```

(where aName can be Ranlux64, JamesRandom, or MersenneTwister)

NB: Several users have reported artifacts in PET data when using the Ranlux64 generator. These users have said that the artifacts are not present in data generated with the Mersenne Twister generator.

### Slices with variable time

In this approach, each slice has a specific duration, and the user has to define the duration of each slice. The first method is to use a file of time slices:

```
/gate/application/readTimeSlicesIn [File Name]
```

The second method is to add each slice with the command:

```
/gate/application/addSlice [value] [unit]
```

The user has to define the beginning of the acquisition using the command setTimeStart. The end of the acquisition is calculated by summing the time slices.
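The number-of-runs relation above is simple enough to check numerically. The helper below is hypothetical (it is not part of Gate), and the second set of values is an invented example of a multi-projection acquisition:

```python
def number_of_runs(time_start_s, time_stop_s, time_slice_s):
    """Number of Geant4 runs (projections) for a regular time-slice
    acquisition: (setTimeStop - setTimeStart) / setTimeSlice.
    Hypothetical helper mirroring the formula in the text."""
    n = (time_stop_s - time_start_s) / time_slice_s
    if abs(n - round(n)) > 1e-9:
        raise ValueError("acquisition window is not a whole number of slices")
    return int(round(n))

print(number_of_runs(0.0, 1.0, 1.0))   # 1  (this tutorial's static acquisition)
print(number_of_runs(0.0, 60.0, 5.0))  # 12 (hypothetical 12-projection acquisition)
```

With the tutorial values (start 0 s, stop 1 s, slice 1 s) there is a single run, consistent with "the number of projections equals one" above.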
The simulation is started with the commands:

```
/gate/application/start
```

or

```
/gate/application/startDAQ
```

## Verbosity

The level of verbosity of the random engine can be chosen. It controls the printing of the random engine status, which depends on the type of generator used. The command associated with the verbosity is:

```
/gate/random/verbose 1
```

Values from 0 to 2 are allowed; higher values are interpreted as 2. A value of 0 means no printing at all, a value of 1 results in one printing at the beginning of the acquisition, and a value of 2 results in one printing at the beginning of each run.

Figure 1.9: GATE simulation architecture
# Math Help - convergent sequences proof

1. ## convergent sequences proof

Prove the following by using this definition: A sequence (Sn) is said to converge to the real number S provided that for each e > 0 there exists a real number N such that for all n in the natural numbers, n > N implies that |Sn - S| < e. If (Sn) converges to S, then S is called the limit of the sequence (Sn), and we write lim n -> infinity Sn = S, lim Sn = S, or Sn -> S. If a sequence does not converge to a real number, it is said to diverge.

Prove this from the definition above:

a) lim (3n + 1)/(n + 2) = 3

b) lim (sin n)/n = 0

c) lim (n + 2)/(n^2 - 3) = 0

2. Originally Posted by luckyc1423

> a) lim (3n + 1)/(n + 2) = 3

For any e > 0 choose N = 5/e. Then if n > N we have n > 5/e, thus 5/n < e. But then
|(3n+1)/(n+2) - 3| = |-5/(n+2)| = 5/(n+2) <= 5/n < e.

> b) lim (sin n)/n = 0

For any e > 0 choose N = 1/e. Then if n > N we have 1/n < e. Thus
|sin(n)/n - 0| = |sin(n)/n| <= 1/n < e.

3. Originally Posted by luckyc1423

> c) lim (n + 2)/(n^2 - 3) = 0

For any e > 0 choose N = max{2, 4/e}. Then if n > N* we have 4/n < e. But then!
|(n+2)/(n^2-3)| = (n+2)/(n^2-3) <= 2n/(n^2-3) <= 2n/(0.5 n^2) = 4/n < e.
Q.E.D.

*) You also need to consider the case max{2, 4/e} = 2. But that case is easy.
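As an informal numerical sanity check of the epsilon-N choices above (not a substitute for the proofs), one can verify the bounds for parts (a) and (c) over a range of n:

```python
import math

# Part (a): with N = 5/e, every integer n > N satisfies |(3n+1)/(n+2) - 3| < e.
def check_a(e, n_max=100000):
    N = 5.0 / e
    start = int(math.floor(N)) + 1            # smallest integer n with n > N
    return all(abs((3*n + 1)/(n + 2) - 3) < e for n in range(start, n_max))

# Part (c): with N = max(2, 4/e), every integer n > N satisfies |(n+2)/(n^2-3)| < e.
def check_c(e, n_max=100000):
    N = max(2.0, 4.0 / e)
    start = int(math.floor(N)) + 1
    return all(abs((n + 2)/(n*n - 3)) < e for n in range(start, n_max))

print(check_a(0.01), check_c(0.01))   # True True
```

Of course this only tests finitely many n; the algebra in the proofs is what covers all n > N.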
# Where is Santarem

Location maps of cities in Portugal.

Last Updated: July 25, 2016
# Selected Papers of the 20th International Conference on Foundations of Software Science and Computation Structures (FoSSaCS 2017)

Editors: Javier Esparza, Andrzej Murawski

This special issue contains revised and extended versions of seven papers presented at the 20th International Conference on Foundations of Software Science and Computation Structures (FoSSaCS 2017), which was held in Uppsala, Sweden as part of ETAPS 2017 (April 22-29, 2017). They represent several (but by no means all) areas that are traditionally well represented at FoSSaCS: higher-order computation, proof theory, probabilistic and categorical semantics. The contributions were selected from among the 32 papers presented at the conference, on the strength of recommendations obtained during the reviewing process, which involved 101 submissions in total. The extended versions were subsequently refereed according to the usual LMCS standards. We thank the Program Committee of FoSSaCS 2017 and the additional reviewers for their expert advice, and wish to express our gratitude to LMCS for hosting the special issue.

### 1. A Light Modality for Recursion

We investigate a modality for controlling the behaviour of recursive functional programs on infinite structures which is completely silent in the syntax. The latter means that programs do not contain "marks" showing the application of the introduction and elimination rules for the modality. This shifts the burden of controlling recursion from the programmer to the compiler. To do this, we introduce a typed lambda calculus a la Curry with a silent modality and guarded recursive types. The typing discipline guarantees normalisation and can be transformed into an algorithm which infers the type of a program.

### 2. Algebra, coalgebra, and minimization in polynomial differential equations

We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's).
The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. ### 3. Almost Every Simply Typed Lambda-Term Has a Long Beta-Reduction Sequence It is well known that the length of a beta-reduction sequence of a simply typed lambda-term of order k can be huge; it is as large as k-fold exponential in the size of the lambda-term in the worst case. We consider the following relevant question about quantitative properties, instead of the worst case: how many simply typed lambda-terms have very long reduction sequences? We provide a partial answer to this question, by showing that asymptotically almost every simply typed lambda-term of order k has a reduction sequence as long as (k-1)-fold exponential in the term size, under the assumption that the arity of functions and the number of variables that may occur in every subterm are bounded above by a constant. To prove it, we have extended the infinite monkey theorem for strings to a parametrized one for regular tree languages, which may be of independent interest. 
The work has been motivated by quantitative analysis of the complexity of higher-order model checking.
# §35.9 Applications

In multivariate statistical analysis based on the multivariate normal distribution, the probability density functions of many random matrices are expressible in terms of generalized hypergeometric functions of matrix argument ${}_{p}F_{q}$, with $p\leq 2$ and $q\leq 1$. See James (1964), Muirhead (1982), Takemura (1984), Farrell (1985), and Chikuse (2003) for extensive treatments.

For other statistical applications of ${}_{p}F_{q}$ functions of matrix argument see Perlman and Olkin (1980), Groeneboom and Truax (2000), Bhaumik and Sarkar (2002), Richards (2004) (monotonicity of power functions of multivariate statistical test criteria), Bingham et al. (1992) (Procrustes analysis), and Phillips (1986) (exact distributions of statistical test criteria). These references all use results related to the integral formulas (35.4.7) and (35.5.8). For applications of the integral representation (35.5.3) see McFarland and Richards (2001, 2002) (statistical estimation of misclassification probabilities for discriminating between multivariate normal populations). The asymptotic approximations of §35.7(iv) are applied in numerous statistical contexts in Butler and Wood (2002).

In chemistry, Wei and Eichinger (1993) express the probability density functions of macromolecules in terms of generalized hypergeometric functions of matrix argument, and develop asymptotic approximations for these density functions. In the nascent area of applications of zonal polynomials to the limiting probability distributions of symmetric random matrices, one of the most comprehensive accounts is Rains (1998).
http://cerco.cs.unibo.it/changeset/3305/Papers/itp-2013
# Changeset 3305 for Papers/itp-2013

Timestamp: May 29, 2013, 6:25:11 PM (7 years ago)
Message: added some figures, and many notations
File: 1 edited

### Legend: Unmodified r3222

\usepackage{color} \usepackage{listings} \usepackage{bcprules} \usepackage{bcprules}%\bcprulessavespace \usepackage{verbatim} \usepackage{alltt} \usepackage{subcaption} \usepackage{listings} \usepackage{amssymb} % \usepackage{amsmath} \usepackage{multicol} \providecommand{\eqref}[1]{(\ref{#1})} % NB: might be worth removing this if changing class in favour of showspaces=false,showstringspaces=false, xleftmargin=1em} \usepackage{tikz} \usetikzlibrary{positioning,calc,patterns,chains,shapes.geometric,scopes} \makeatletter \pgfutil@ifundefined{pgf@arrow@code@implies}{% supply for lack of double arrow special arrow tip if it is not there% \pgfarrowsdeclare{implies}{implies}% {% \pgfarrowsleftextend{2.2pt}% \pgfarrowsrightextend{2.2pt}% }% {% \pgfsetdash{}{0pt} % do not dash% \pgfsetlinewidth{.33pt}% \pgfsetroundjoin   % fix join% \pgfsetroundcap    % fix cap% \pgfpathmoveto{\pgfpoint{-1.5pt}{2.5pt}}% \pgfpathcurveto{\pgfpoint{-.75pt}{1.5pt}}{\pgfpoint{0.5pt}{.5pt}}{\pgfpoint{2pt}{0pt}}% \pgfpathcurveto{\pgfpoint{0.5pt}{-.5pt}}{\pgfpoint{-.75pt}{-1.5pt}}{\pgfpoint{-1.5pt}{-2.5pt}}% \pgfusepathqstroke% }% }{}% \makeatother \tikzset{state/.style={inner sep = 0, outer sep = 2pt, draw, fill}, every node/.style={inner sep=2pt}, every on chain/.style = {inner sep = 0, outer sep = 2pt}, join all/.style = {every on chain/.append style={join}}, on/.style={on chain={#1}, state}, m/.style={execute at begin node=$, execute at end node=$}, node distance=3mm, is other/.style={circle, minimum size = 3pt, state}, other/.style={on, is other}, is jump/.style={diamond, minimum size = 6pt, state}, jump/.style={on, is jump}, is call/.style={regular polygon, regular polygon sides=3, minimum size=5pt, state}, call/.style={on=going below, is call, node distance=6mm, label=above left:$#1$}, is 
ret/.style={regular polygon, regular polygon sides=3, minimum size=5pt, shape border rotate=180, state}, ret/.style={on=going above, is ret, node distance=6mm}, chain/.style={start chain=#1 going left}, rev ar/.style={stealth-, thick}, ar/.style={-stealth, thick}, every join/.style={rev ar}, labelled/.style={fill=white, label=above:$#1$}, vcenter/.style={baseline={([yshift=-.5ex]current bounding box)}}, every picture/.style={thick}, double equal sign distance/.prefix style={double distance=1.5pt}, %% if already defined (newest version of pgf) it should be ignored%} implies/.style={double, -implies, thin, double equal sign distance, shorten <=5pt, shorten >=5pt}, new/.style={densely dashed}, rel/.style={font=\scriptsize, fill=white, inner sep=2pt}, diag/.style={row sep={11mm,between origins}, column sep={11mm,between origins}, every node/.style={draw, is other}}, small vgap/.style={row sep={7mm,between origins}}, } \def\L{\mathrel{\mathcal L}} \def\S{\mathrel{\mathcal S}} \def\R{\mathrel{\mathcal R}} \def\C{\mathrel{\mathcal C}} \newsavebox{\execbox} \savebox{\execbox}{\tikz[baseline=-.5ex]\draw [-stealth] (0,0) -- ++(1em, 0);} \newcommand{\exec}{\ensuremath{\mathrel{\usebox{\execbox}}}} \let\ar\rightsquigarrow \renewcommand{\verb}{\lstinline[mathescape]} \let\class\triangleright \let\andalso\quad \newcommand{\append}{\mathbin{@}} \begin{document} \section{Introduction} The \emph{labelling approach} has been introduced in~\cite{easylabelling} as a technique to \emph{lift} cost models for non-functional properties of programs In order to have a definition that works on multiple intermediate languages, we abstract the type of structure traces over an abstract data type of abstract statuses: \begin{alltt} record abstract_status := \{ S: Type[0]; as_execute: S $$\to$$ S $$\to$$ Prop;     as_classify: S $$\to$$ classification; as_costed: S $$\to$$ Prop;      as_label: $$\forall$$ s. as_costed S s $$\to$$ label; as_call_ident: $$\forall$$ s. 
as_classify S s = cl_call $$\to$$ label; as_after_return: ($$\Sigma$$s:as_status. as_classify s = Some ? cl_call) $$\to$$ as_status $$\to$$ Prop \} \end{alltt} The predicate $\texttt{as\_execute}~s_1~s_2$ holds if $s_1$ evolves into $s_2$ in one step;\\ $\texttt{as\_classify}~s~c$ holds if the next instruction to be executed in $s$ is classified according to $c \in \{\texttt{cl\_return,cl\_jump,cl\_call,cl\_other}\}$ (we omit tail-calls for simplicity); the predicate $\texttt{as\_costed}~s$ holds if the next instruction to be executed in $s$ is a cost emission statement (also classified as \texttt{cl\_other}); finally $(\texttt{as\_after\_return}~s_1~s_2)$ holds if the next instruction to be executed in $s_2$ follows the function call to be executed in (the witness of the $\Sigma$-type) $s_1$. The two functions \texttt{as\_label} and \texttt{as\_call\_ident} are used to extract the cost label/function call target from states whose next instruction is a cost emission/function call statement. abstract statuses, which we aptly call $\texttt{abstract\_status}$. The fields of this record are the following. \begin{itemize} \item \verb+S : Type[0]+, the type of states. \item \verb+as_execute : S $\to$ S $\to$ Prop+, a binary predicate stating an execution step. We write $s_1\exec s_2$ for $\verb+as_execute+~s_1~s_2$. \item \verb+as_classifier : S $\to$ classification+, a function tagging all states with a class in $\{\texttt{cl\_return,cl\_jump,cl\_call,cl\_other}\}$, depending on the instruction that is about to be executed (we omit tail-calls for simplicity). We will use $s \class c$ as a shorthand for both $\texttt{as\_classifier}~s=c$ (if $c$ is a classification) and $\texttt{as\_classifier}~s\in c$ (if $c$ is a set of classifications). \item \verb+as_label : S $\to$ option label+, telling whether the next instruction to be executed in $s$ is a cost emission statement, and, if so, returning the associated cost label. 
Our shorthand for this function will be $\ell$, and we will also abuse the notation by using $\ell~s$ as a predicate stating that $s$ is labelled. \item \verb+as_call_ident : ($\Sigma$s:S. s $\class$ cl_call) $\to$ label+, telling the identifier of the function which is being called in a \verb+cl_call+ state. We will use the shorthand $s\uparrow f$ for $\verb+as_call_ident+~s = f$. \item \verb+as_after_return : ($\Sigma$s:S. s $\class$ cl_call) $\to$ S $\to$ Prop+, which holds on the \verb+cl_call+ state $s_1$ and a state $s_2$ when the instruction to be executed in $s_2$ follows the function call to be executed in (the witness of the $\Sigma$-type) $s_1$. We will use the notation $s_1\ar s_2$ for this relation. \end{itemize} % \begin{alltt} % record abstract_status := \{ S: Type[0]; %  as_execute: S $$\to$$ S $$\to$$ Prop;   as_classifier: S $$\to$$ classification; %  as_label: S $$\to$$ option label;    as_called: ($$\Sigma$$s:S. c s = cl_call) $$\to$$ label; %  as_after_return: ($$\Sigma$$s:S. c s = cl_call) $$\to$$ S $$\to$$ Prop \} % \end{alltt} The inductive type for structured traces is actually made of three mutually inductive types with the following semantics: \begin{enumerate} \item $(\texttt{trace\_label\_return}~s_1~s_2)$ is a trace that begins in \item $(\texttt{trace\_label\_return}~s_1~s_2)$ (shorthand $\verb+TLR+~s_1~s_2$) is a trace that begins in the state $s_1$ (included) and ends just before the state $s_2$ (excluded) such that the instruction to be executed in $s_1$ is a label emission one or more basic blocks, all starting with a label emission (e.g. in case of loops). \item $(\texttt{trace\_any\_label}~b~s_1~s_2)$ is a trace that begins in \item $(\texttt{trace\_any\_label}~b~s_1~s_2)$ (shorthand $\verb+TAL+~b~s_1~s_2$) is a trace that begins in the state $s_1$ (included) and ends just before the state $s_2$ (excluded) such that the instruction to be executed in $s_2$/in the state before any label emission statement. 
It captures the notion of a suffix of a basic block. \item $(\texttt{trace\_label\_label}~b~s_1~s_2)$ is the special case of $(\texttt{trace\_any\_label}~b~s_1~s_2)$ such that the instruction to be \item $(\texttt{trace\_label\_label}~b~s_1~s_2)$ (shorthand $\verb+TLL+~b~s_1~s_2$) is the special case of $\verb+TAL+~b~s_1~s_2$ such that the instruction to be executed in $s_1$ is a label emission statement. It captures the notion of a basic block. \end{enumerate} \infrule[\texttt{tlr\_base}] {\texttt{trace\_label\_label}~true~s_1~s_2} {\texttt{trace\_label\_return}~s_1~s_2} \infrule[\texttt{tlr\_step}] {\texttt{trace\_label\_label}~false~s_1~s_2 \andalso \texttt{trace\_label\_return}~s_2~s_3 \begin{multicols}{3} \infrule[\verb+tlr_base+] {\texttt{TLL}~true~s_1~s_2} {\texttt{TLR}~s_1~s_2} \infrule[\verb+tlr_step+] {\texttt{TLL}~false~s_1~s_2 \andalso \texttt{TLR}~s_2~s_3 } {\texttt{trace\_label\_return}~s_1~s_3} \infrule[\texttt{tll\_base}] {\texttt{trace\_any\_label}~b~s_1~s_2 \andalso \texttt{as\_costed}~s_1 {\texttt{TLR}~s_1~s_3} \infrule[\verb+tll_base+] {\texttt{TAL}~b~s_1~s_2 \andalso \ell~s_1 } {\texttt{trace\_label\_label}~b~s_1~s_2} \infrule[\texttt{tal\_base\_not\_return}] {\texttt{as\_execute}~s_1~s_2 \andalso \texttt{as\_classify}~s_1 \in \{\texttt{cl\_jump,cl\_other}\} \andalso \texttt{as\_costed}~s_2 {\texttt{TLL}~b~s_1~s_2} \end{multicols} \infrule[\verb+tal_base_not_return+] {s_1\exec s_2 \andalso s_1\class\{\verb+cl_jump+, \verb+cl_other+\}\andalso \ell~s_2 } {\texttt{trace\_any\_label}~false~s_1~s_2} \infrule[\texttt{tal\_base\_return}] {\texttt{as\_execute}~s_1~s_2 \andalso \texttt{as\_classify}~s_1 = \texttt{cl\_return} \\ {\texttt{TAL}~false~s_1~s_2} \infrule[\verb+tal_base_return+] {s_1\exec s_2 \andalso s_1 \class \texttt{cl\_return} } {\texttt{trace\_any\_label}~true~s_1~s_2} \infrule[\texttt{tal\_base\_call}] {\texttt{as\_execute}~s_1~s_2 \andalso \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\ \texttt{as\_after\_return}~s_1~s_3 \andalso 
\texttt{trace\_label\_return}~s_2~s_3 \andalso \texttt{as\_costed}~s_3 {\texttt{TAL}~true~s_1~s_2} \infrule[\verb+tal_base_call+] {s_1\exec s_2 \andalso s_1 \class \texttt{cl\_call} \andalso s_1\ar s_3 \andalso \texttt{TLR}~s_2~s_3 \andalso \ell~s_3 } {\texttt{trace\_any\_label}~false~s_1~s_3} \infrule[\texttt{tal\_step\_call}] {\texttt{as\_execute}~s_1~s_2 \andalso \texttt{as\_classify}~s_1 = \texttt{cl\_call} \\ \texttt{as\_after\_return}~s_1~s_3 \andalso \texttt{trace\_label\_return}~s_2~s_3 \\ \lnot \texttt{as\_costed}~s_3 \andalso \texttt{trace\_any\_label}~b~s_3~s_4 {\texttt{TAL}~false~s_1~s_3} \infrule[\verb+tal_step_call+] {s_1\exec s_2 \andalso s_1 \class \texttt{cl\_call} \andalso s_1\ar s_3 \andalso \texttt{TLR}~s_2~s_3 \andalso \texttt{TAL}~b~s_3~s_4 } {\texttt{trace\_any\_label}~b~s_1~s_4} \infrule[\texttt{tal\_step\_default}] {\texttt{as\_execute}~s_1~s_2 \andalso \lnot \texttt{as\_costed}~s_2 \\ \texttt{trace\_any\_label}~b~s_2~s_3 \texttt{as\_classify}~s_1 = \texttt{cl\_other} {\texttt{TAL}~b~s_1~s_4} \infrule[\verb+tal_step_default+] {s_1\exec s_2 \andalso \lnot \ell~s_2 \andalso \texttt{TAL}~b~s_2~s_3\andalso s_1 \class \texttt{cl\_other} } {\texttt{trace\_any\_label}~b~s_1~s_3} {\texttt{TAL}~b~s_1~s_3} \begin{comment} \begin{verbatim} \end{enumerate} The three mutual structural recursive functions \texttt{flatten\_trace\_label\_return, flatten\_trace\_label\_label} and \texttt{flatten\_trace\_any\_label} allow to extract from a structured trace the list of states whose next instruction is a cost emission statement. We only show here the type of one of them: \begin{alltt} flatten_trace_label_return: $$\forall$$S: abstract_status. $$\forall$$$$s_1,s_2$$. 
trace_label_return $$s_1$$ $$s_2$$ $$\to$$ list (as_cost_label S) \end{alltt} \paragraph{Cost prediction on structured traces} There are three functions, defined by mutual structural recursion, one for each of \verb+TLR+, \verb+TLL+ and \verb+TAL+, for which we use the same notation $|\,.\,|$: the \emph{flattening} of the traces. These functions make it possible to extract from a structured trace the list of emitted cost labels. %  We only show here the type of one % of them: % \begin{alltt} % flatten_trace_label_return: %  $$\forall$$S: abstract_status. $$\forall$$$$s_1,s_2$$. %   trace_label_return $$s_1$$ $$s_2$$ $$\to$$ list (as_cost_label S) % \end{alltt} \paragraph{Cost prediction on structured traces.} The first main theorem of CerCo about traces Simplifying a bit, it states that \label{th1} \begin{array}{l}\forall s_1,s_2. \forall \tau: \texttt{trace\_label\_return}~s_1~s_2.\\~~ \texttt{clock}~s_2 = \texttt{clock}~s_1 + \Sigma_{s \in (\texttt{flatten\_trace\_label\_return}~\tau)}\;k(\mathcal{L}(s)) \begin{array}{l}\forall s_1,s_2. \forall \tau: \texttt{TLR}~s_1~s_2.~ \texttt{clock}~s_2 = \texttt{clock}~s_1 + \Sigma_{\alpha \in |\tau|}\;k(\alpha) \end{array} where $\mathcal{L}$ maps a labelled state to its emitted label, and the cost model $k$ is statically computed from the object code by associating to each label \texttt{L} the sum of the cost of the instructions in the basic block that starts at \texttt{L} and ends before the next labelled where the cost model $k$ is statically computed from the object code by associating to each label $\alpha$ the sum of the cost of the instructions in the basic block that starts at $\alpha$ and ends before the next labelled instruction. The theorem is proved by structural induction over the structured trace, and is based on the invariant that the structured trace starts with $s_1$ and eventually also contains $s_2$. 
When $s_1$ is not a function call, the result holds trivially because of the $(\texttt{as\_execute}~s_1~s_2)$ condition obtained by inversion on of the $s_1\exec s_2$ condition obtained by inversion on the trace. The only non-trivial case is the one of function calls: the cost model computation function this state. \paragraph{Structured traces similarity and cost prediction invariance} \paragraph{Structured traces similarity and cost prediction invariance.} A compiler pass maps source to object code and initial states to initial interested only in those compiler passes that map a trace $\tau_1$ to a trace $\tau_2$ such that \texttt{flatten\_trace\_label\_return}~\tau_1 = \texttt{flatten\_trace\_label\_return}~\tau_2\label{condition1}\label{th2} The reason is that the combination of~\ref{th1} with~\ref{th2} yields the |\tau_1| = |\tau_2|.\label{th2} The reason is that the combination of~\eqref{th1} with~\eqref{th2} yields the corollary \begin{array}{l}\label{th3} \forall s_1,s_2. \forall \tau: \texttt{trace\_label\_return}~s_1~s_2.\\~~~~~\; clock~s_2 - clock~s_1\\~ = \Sigma_{s \in (\texttt{flatten\_trace\_label\_return}~\tau_1)}\;k(\mathcal{L}(s))\\~ = \Sigma_{s \in (\texttt{flatten\_trace\_label\_return}~\tau_2)}\;k(\mathcal{L}(s)) \end{array} \label{th3} \forall s_1,s_2. \forall \tau: \texttt{TLR}~s_1~s_2.~ \texttt{clock}~s_2 - \texttt{clock}~s_1 = \Sigma_{\alpha \in |\tau_1|}\;k(\alpha) = \Sigma_{\alpha \in |\tau_2|}\;k(\alpha). This corollary states that the actual execution time of the program can be computed equally well on the source or target language. Thus it becomes possible to transfer the cost model from the target to the source code and reason on the source code only. We are therefore interested in conditions stronger than~\ref{condition1}. We are therefore interested in conditions stronger than~\eqref{th2}. Therefore we introduce here a similarity relation between traces with the same structure. 
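Simplified to the extreme, the shape of the cost-prediction statement — the clock difference across a trace equals the sum of the static per-label costs over its flattening — can be replayed on a toy, flat model of execution traces. Everything below (the state representation, unit instruction costs, and the function names) is illustrative only and is not part of the CerCo formalisation:

```python
# Toy model: a trace is a list of instructions; an instruction either
# emits a cost label or is a plain operation.  The static model k maps
# each label to the total cost of the basic block it opens.
trace = [("emit", "L1"), ("op", None), ("op", None),
         ("emit", "L2"), ("op", None)]
COST = 1  # pretend every instruction costs one clock tick

def static_cost_model(trace):
    """k(label) = cost of the basic block starting at that label."""
    k, current, acc = {}, None, 0
    for kind, lab in trace:
        if kind == "emit":
            if current is not None:
                k[current] = acc
            current, acc = lab, COST
        else:
            acc += COST
    if current is not None:
        k[current] = acc
    return k

def run(trace):
    """Execute the trace: return the final clock and the flattening
    (the list of emitted labels)."""
    clock, labels = 0, []
    for kind, lab in trace:
        clock += COST
        if kind == "emit":
            labels.append(lab)
    return clock, labels

k = static_cost_model(trace)
clock, labels = run(trace)
# Toy analogue of the theorem: clock = sum of k over the emitted labels.
```

Here `k = {"L1": 3, "L2": 2}` and the clock after five unit-cost instructions is 5, which is exactly `k["L1"] + k["L2"]`.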
Theorem~\texttt{tlr\_rel\_to\_traces\_same\_flatten} in the Matita formalisation shows that~\ref{th2} holds for every pair in the Matita formalisation shows that~\eqref{th2} holds for every pair $(\tau_1,\tau_2)$ of similar traces. Intuitively, two traces are similar when one can be obtained from the other by erasing or inserting silent steps, i.e. states that are not \texttt{as\_costed} and that are classified as \texttt{other}. not \texttt{as\_costed} and that are classified as \texttt{cl\_other}. Silent steps do not alter the structure of the traces. In particular, the definition into inference rules for the sake of readability. We also omit from trace constructors all arguments, but those that are traces or that are used in the premises of the rules. are used in the premises of the rules. By abuse of notation we denote all three relations by infixing $\approx$ \begin{multicols}{2} \infrule {\texttt{tll\_rel}~tll_1~tll_2 {tll_1\approx tll_2 } {\texttt{tlr\_rel}~(\texttt{tlr\_base}~tll_1)~(\texttt{tlr\_base}~tll_2)} {\texttt{tlr\_base}~tll_1 \approx \texttt{tlr\_base}~tll_2} \infrule {\texttt{tll\_rel}~tll_1~tll_2 \andalso \texttt{tlr\_rel}~tlr_1~tlr_2 {tll_1 \approx tll_2 \andalso tlr_1 \approx tlr_2 } {\texttt{tlr\_rel}~(\texttt{tlr\_step}~tll_1~tlr_1)~(\texttt{tlr\_step}~tll_2~tlr_2)} {\texttt{tlr\_step}~tll_1~tlr_1 \approx \texttt{tlr\_step}~tll_2~tlr_2} \end{multicols} \infrule {\texttt{as\_label}~H_1 = \texttt{as\_label}~H_2 \andalso \texttt{tal\_rel}~tal_1~tal_2 {\ell~s_1 = \ell~s_2 \andalso tal_1\approx tal_2 } {\texttt{tll\_rel}~(\texttt{tll\_base}~tal_1~H_1)~(\texttt{tll\_base}~tal_2~H_2)} {\texttt{tll\_base}~s_1~tal_1 \approx \texttt{tll\_base}~s_2~tal_2} \infrule {} {\texttt{tal\_rel}~\texttt{tal\_base\_not\_return}~(taa @ \texttt{tal\_base\_not\_return})} {\texttt{tal\_base\_not\_return}\approx taa \append \texttt{tal\_base\_not\_return}} \infrule {} {\texttt{tal\_rel}~\texttt{tal\_base\_return}~(taa @ \texttt{tal\_base\_return})} 
{\texttt{tal\_base\_return}\approx taa \append \texttt{tal\_base\_return}} \infrule {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 {tlr_1\approx tlr_2 \andalso s_1 \uparrow f \andalso s_2\uparrow f } {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~H_1~tlr_1)~(taa @ \texttt{tal\_base\_call}~H_2~tlr_2)} {\texttt{tal\_base\_call}~s_1~tlr_1\approx taa \append \texttt{tal\_base\_call}~s_2~tlr_2} \infrule {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso {tlr_1\approx tlr_2 \andalso s_1 \uparrow f \andalso s_2\uparrow f \andalso \texttt{tal\_collapsable}~tal_2 } {\texttt{tal\_rel}~(\texttt{tal\_base\_call}~tlr_1)~(taa @ \texttt{tal\_step\_call}~tlr_2~tal_2)} {\texttt{tal\_base\_call}~s_1~tlr_1 \approx taa \append \texttt{tal\_step\_call}~s_2~tlr_2~tal_2} \infrule {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 \andalso {tlr_1\approx tlr_2 \andalso s_1 \uparrow f \andalso s_2\uparrow f \andalso \texttt{tal\_collapsable}~tal_1 } {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~tlr_1~tal_1)~(taa @ \texttt{tal\_base\_call}~tlr_2)} {\texttt{tal\_step\_call}~s_1~tlr_1~tal_1 \approx taa \append \texttt{tal\_base\_call}~s_2~tlr_2} \infrule {\texttt{tlr\_rel}~tlr_1~tlr_2 \andalso \texttt{tal\_rel}~tal_1~tal_2 \andalso \texttt{as\_call\_ident}~H_1 = \texttt{as\_call\_ident}~H_2 {tlr_1 \approx tlr_2 \andalso s_1 \uparrow f \andalso s_2\uparrow f \andalso tal_1 \approx tal_2 } {\texttt{tal\_rel}~(\texttt{tal\_step\_call}~tlr_1~tal_1)~(taa @ \texttt{tal\_step\_call}~tlr_2~tal_2)} {\texttt{tal\_step\_call}~s_1~tlr_1~tal_1 \approx taa \append \texttt{tal\_step\_call}~s_2~tlr_2~tal_2} \infrule {\texttt{tal\_rel}~tal_1~tal_2 {tal_1\approx tal_2 } {\texttt{tal\_rel}~(\texttt{tal\_step\_default}~tal_1)~tal_2} {\texttt{tal\_step\_default}~tal_1 \approx tal_2} \begin{comment} \begin{verbatim} In the preceding rules, a $taa$ is an 
inhabitant of the $\texttt{trace\_any\_any}~s_1~s_2$ inductive data type whose definition $\texttt{trace\_any\_any}~s_1~s_2$ (shorthand $\texttt{TAA}~s_1~s_2$), an inductive data type whose definition is not in the paper for lack of space. It is the type of valid prefixes (even empty ones) of \texttt{trace\_any\_label}s that do not contain prefixes (even empty ones) of \texttt{TAL}'s that do not contain any function call. Therefore it is possible to concatenate (using ``$@$'') a \texttt{trace\_any\_any} to the left of a \texttt{trace\_any\_label}. A \texttt{trace\_any\_any} captures is possible to concatenate (using ``$\append$'') a \texttt{TAA} to the left of a \texttt{TAL}. A \texttt{TAA} captures a sequence of silent moves. The \texttt{tal\_collapsable} unary predicate over \texttt{trace\_any\_label}s The \texttt{tal\_collapsable} unary predicate over \texttt{TAL}'s holds when the argument does not contain any function call and it ends with a label (not a return). The intuition is that after a function call we compiler pass. 
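The role of silent moves in the similarity relation can be replayed on a toy model: flattening discards silent (unlabelled) states, so inserting or erasing them leaves $|\tau|$ — and hence the predicted cost — unchanged. The representation below is illustrative only, not the Matita definition:

```python
# Toy model of trace similarity: a trace is a list of states, each
# either carrying a cost label or silent (None).  Flattening keeps
# only the emitted labels.
def flatten(states):
    return [lab for lab in states if lab is not None]

def similar(t1, t2):
    """The coarse consequence of the ~ relation used in the text:
    similar traces have equal flattenings."""
    return flatten(t1) == flatten(t2)

tau1 = ["L1", None, "L2", "L3"]
tau2 = ["L1", "L2", None, None, "L3"]   # same labels, different silent steps
```

Note that equal flattenings are only a consequence of similarity: the actual relation also constrains the call/return structure, which this flat model cannot express.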
\paragraph{Relation sets} \paragraph{Relation sets.} We now introduce the four relations $\mathcal{S,C,L,R}$ between abstract \end{alltt} \paragraph{1-to-0-or-many forward simulation conditions} \begin{figure} \centering \begin{tabular}{@{}c@{}c@{}} \begin{subfigure}{.475\linewidth} \centering \begin{tikzpicture}[every join/.style={ar}, join all, thick, every label/.style=overlay, node distance=10mm] \matrix [diag] (m) {% \node (s1) [is jump] {}; & \node [fill=white] (t1) {};\\ \node (s2) {}; & \node (t2) {}; \\ }; \node [above=0 of t1, overlay] {$\alpha$}; {[-stealth] \draw (s1) -- (t1); \draw [new] (s2) -- node [above] {$*$} (t2); } \draw (s1) to node [rel] {$\S$} (s2); \draw [new] (t1) to node [rel] {$\S,\L$} (t2); \end{tikzpicture} \caption{The \texttt{cl\_jump} case.} \label{subfig:cl_jump} \end{subfigure} & \begin{subfigure}{.475\linewidth} \centering \begin{tikzpicture}[every join/.style={ar}, join all, thick, every label/.style=overlay, node distance=10mm] \matrix [diag] (m) {% \node (s1) {}; & \node (t1) {};\\ \node (s2) {}; & \node (t2) {}; \\ }; {[-stealth] \draw (s1) -- (t1); \draw [new] (s2) -- node [above] {$*$} (t2); } \draw (s1) to node [rel] {$\S$} (s2); \draw [new] (t1) to node [rel] {$\S,\L$} (t2); \end{tikzpicture} \caption{The \texttt{cl\_other} case.} \label{subfig:cl_other} \end{subfigure} \\ \begin{subfigure}{.475\linewidth} \centering \begin{tikzpicture}[every join/.style={ar}, join all, thick, every label/.style=overlay, node distance=10mm] \matrix [diag, small vgap] (m) {% \node (t1) {}; \\ \node (s1) [is call] {}; \\ & \node (l) {}; & \node (t2) {};\\ \node (s2) {}; & \node (c) [is call] {};\\ }; {[-stealth] \draw (s1) -- node [left] {$f$} (t1); \draw [new] (s2) -- node [above] {$*$} (c); \draw [new] (c) -- node [right] {$f$} (l); \draw [new] (l) -- node [above] {$*$} (t2); } \draw (s1) to node [rel] {$\S$} (s2); \draw [new] (t1) to [bend left] node [rel] {$\S$} (t2); \draw [new] (t1) to [bend left] node [rel] {$\L$} (l); \draw [new] (t1) 
to node [rel] {$\C$} (c); \end{tikzpicture} \caption{The \texttt{cl\_call} case.} \label{subfig:cl_call} \end{subfigure} & \begin{subfigure}{.475\linewidth} \centering \begin{tikzpicture}[every join/.style={ar}, join all, thick, every label/.style=overlay, node distance=10mm] \matrix [diag, small vgap] (m) {% \node (s1) [is ret] {}; \\ \node (t1) {}; \\ \node (s2) {}; & \node (c) [is ret] {};\\ & \node (r) {}; & \node (t2) {}; \\ }; {[-stealth] \draw (s1) -- (t1); \draw [new] (s2) -- node [above] {$*$} (c); \draw [new] (c) -- (r); \draw [new] (r) -- node [above] {$*$} (t2); } \draw (s1) to [bend right=45] node [rel] {$\S$} (s2); \draw [new, overlay] (t1) to [bend left=90, looseness=1] node [rel] {$\S,\L$} (t2); \draw [new, overlay] (t1) to [bend left=90, looseness=1.2] node [rel] {$\R$} (r); \end{tikzpicture} \caption{The \texttt{cl\_return} case.} \label{subfig:cl_return} \end{subfigure} \end{tabular} \caption{The hypotheses for the preservation of structured traces, simplified. Dashed lines and arrows indicate how the diagrams must be closed when solid relations are present.} \label{fig:forwardsim} \end{figure} \paragraph{1-to-0-or-many forward simulation conditions.} \begin{condition}[Cases \texttt{cl\_other} and \texttt{cl\_jump}] For all $s_1,s_1',s_2$ such that $s_1 \mathcal{S} s_1'$, and $\texttt{as\_execute}~s_1~s_1'$, and $\texttt{as\_classify}~s_1 = \texttt{cl\_other}$ or $\texttt{as\_classify}~s_1 = \texttt{cl\_jump}$ and $\texttt{as\_costed}~s_1'$, there exists an $s_2'$ and a $\texttt{trace\_any\_any\_free}~s_2~s_2'$ called $taaf$ such that $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and either For all $s_1,s_1',s_2$ such that $s_1 \S s_1'$, and $s_1\exec s_1'$, and either $s_1 \class \texttt{cl\_other}$ or both $s_1\class\texttt{cl\_jump}$ and $\ell~s_1'$, there exists an $s_2'$ and a $\texttt{trace\_any\_any\_free}~s_2~s_2'$ called $taaf$ such that $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and either $taaf$ is non empty, or one among $s_1$ and $s_1'$ 
is \texttt{as\_costed}. \end{condition} In the above condition, a $\texttt{trace\_any\_any\_free}~s_1~s_2$ is an In the above condition, a $\texttt{trace\_any\_any\_free}~s_1~s_2$ (which from now on will be shorthanded as \verb+TAAF+) is an inductive type of structured traces that do not contain function calls or cost emission statements. Differently from a \texttt{trace\_any\_any}, the cost emission statements. Differently from a \verb+TAA+, the instruction to be executed in the lookahead state $s_2$ may be a cost emission statement. preserves the relation between the data and if the two final statuses are labelled in the same way. Moreover, we must take special care of the empty case to avoid collapsing two consecutive states that emit the same label to just one state, missing one of the two emissions. to avoid collapsing two consecutive states that emit a label, missing one of the two emissions. \begin{condition}[Case \texttt{cl\_call}] For all $s_1,s_1',s_2$ s.t. $s_1 \mathcal{S} s_1'$ and $\texttt{as\_execute}~s_1~s_1'$ and $\texttt{as\_classify}~s_1 = \texttt{cl\_call}$, there exists $s_2', s_b, s_a$, a $\texttt{trace\_any\_any}~s_2~s_b$, and a $\texttt{trace\_any\_any\_free}~s_a~s_2'$ such that: $s_a$ is classified as a \texttt{cl\_call}, the \texttt{as\_identifiers} of the two call states are the same, $s_1 \mathcal{C} s_b$, $\texttt{as\_execute}~s_b~s_a$ holds, $s_1' \mathcal{L} s_b$ and $s_1' \mathcal{S} s_2'$. For all $s_1,s_1',s_2$ s.t. $s_1 \S s_1'$ and $s_1\exec s_1'$ and $s_1 \class \texttt{cl\_call}$, there exists $s_a, s_b, s_2'$, a $\verb+TAA+~s_2~s_a$, and a $\verb+TAAF+~s_b~s_2'$ such that: $s_a\class\texttt{cl\_call}$, the \texttt{as\_call\_ident}'s of the two call states are the same, $s_1 \mathcal{C} s_a$, $s_a\exec s_b$, $s_1' \L s_b$ and $s_1' \S s_2'$. \end{condition} \begin{condition}[Case \texttt{cl\_return}] For all $s_1,s_1',s_2$ s.t. 
$s_1 \mathcal{S} s_1'$, $\texttt{as\_execute}~s_1~s_1'$ and $\texttt{as\_classify}~s_1 = \texttt{cl\_return}$, there exists $s_2', s_b, s_a$, a $\texttt{trace\_any\_any}~s_2~s_b$, a $\texttt{trace\_any\_any\_free}~s_a~s_2'$ called $taaf$ such that: For all $s_1,s_1',s_2$ s.t. $s_1 \S s_1'$, $s_1\exec s_1'$ and $s_1 \class \texttt{cl\_return}$, there exists $s_a, s_b, s_2'$, a $\verb+TAA+~s_2~s_a$, a $\verb+TAAF+~s_b~s_2'$ called $taaf$ such that: $s_a$ is classified as a \texttt{cl\_return}, $s_1 \mathcal{C} s_b$, the predicate $\texttt{as\_execute}~s_b~s_a$ holds, $s_1' \mathcal{R} s_a$ and $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and either $taaf$ is non empty, or $s_a$ is not \texttt{as\_costed}. $s_a\exec s_b$, $s_1' \R s_b$ and $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and either $taaf$ is non empty, or $\lnot \ell~s_a$. \end{condition} \begin{theorem}[\texttt{status\_simulation\_produce\_tlr}] For every $s_1,s_1',s_{2_b},s_2$ s.t. there is a $\texttt{trace\_label\_return}~s_1~s_1'$ called $tlr_1$ and a $\texttt{trace\_any\_any}~s_{2_b}~s_2$ and $s_1 \mathcal{L} s_{2_b}$ and $s_1 \mathcal{S} s_2$, there exists $s_{2_m},s_2'$ s.t. there is a $\texttt{trace\_label\_return}~s_{2_b}~s_{2_m}$ called $tlr_2$ and there is a $\texttt{trace\_any\_any\_free}~s_{2_m}~s_2'$ called $taaf$ s.t. if $taaf$ is non empty then $\lnot (\texttt{as\_costed}~s_{2_m})$, and $\texttt{tlr\_rel}~tlr_1~tlr_2$ and $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and $s_1' \mathcal{R} s_{2_m}$. there is a $\texttt{TLR}~s_1~s_1'$ called $tlr_1$ and a $\verb+TAA+~s_{2_b}~s_2$ and $s_1 \L s_{2_b}$ and $s_1 \S s_2$, there exists $s_{2_m},s_2'$ s.t. there is a $\texttt{TLR}~s_{2_b}~s_{2_m}$ called $tlr_2$ and there is a $\verb+TAAF+~s_{2_m}~s_2'$ called $taaf$ s.t. if $taaf$ is non empty then $\lnot (\ell~s_{2_m})$, and $tlr_1\approx tlr_2$ and $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and $s_1' \R s_{2_m}$. 
\end{theorem} The theorem states that a \texttt{trace\_label\_return} in the source code together with a precomputed preamble of silent states (the \texttt{trace\_any\_any}) in the target code induces a similar \texttt{trace\_label\_return} trace in the target code which can be (the \verb+TAA+) in the target code induces a similar \texttt{trace\_label\_return} in the target code which can be followed by a sequence of silent states. Note that the statement does not require the produced \texttt{trace\_label\_return} trace to start with the require the produced \texttt{trace\_label\_return} to start with the precomputed preamble, even if this is likely to be the case in concrete implementations. The preamble in input is necessary for compositionality, e.g. \begin{theorem}[\texttt{status\_simulation\_produce\_tll}] For every $s_1,s_1',s_{2_b},s_2$ s.t. there is a $\texttt{trace\_label\_label}~b~s_1~s_1'$ called $tll_1$ and a $\texttt{trace\_any\_any}~s_{2_b}~s_2$ and $s_1 \mathcal{L} s_{2_b}$ and $s_1 \mathcal{S} s_2$, there exists $s_{2_m},s_2'$ s.t. there is a $\texttt{TLL}~b~s_1~s_1'$ called $tll_1$ and a $\verb+TAA+~s_{2_b}~s_2$ and $s_1 \L s_{2_b}$ and $s_1 \S s_2$, there exists $s_{2_m},s_2'$ s.t. \begin{itemize} \item if $b$ (the trace ends with a return) then there exists $s_{2_m},s_2'$ and a trace $\texttt{trace\_label\_label}~b~s_{2_b}~s_{2_m}$ called $tll_2$ and a $\texttt{trace\_any\_any\_free}~s_{2_m}~s_2'$ called $taa_2$ s.t. $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and $s_1' \mathcal{R} s_{2_m}$ and $\texttt{tll\_rel}~tll_1~tll_2$ and if $taa_2$ is non empty then $\lnot (\texttt{as\_costed}~s_{2_m})$ and a trace $\texttt{TLL}~b~s_{2_b}~s_{2_m}$ called $tll_2$ and a $\texttt{TAAF}~s_{2_m}~s_2'$ called $taa_2$ s.t. 
$s_1' \mathrel{{\S} \cap {\L}} s_2'$ and $s_1' \R s_{2_m}$ and $tll_1\approx tll_2$ and if $taa_2$ is non empty then $\lnot \ell~s_{2_m}$; \item else there exists $s_2'$ and a $\texttt{trace\_label\_label}~b~s_{2_b}~s_2'$ called $tll_2$ such that $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and $\texttt{tll\_rel}~tll_1~tll_2$. $\texttt{TLL}~b~s_{2_b}~s_2'$ called $tll_2$ such that $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and $tll_1\approx tll_2$. \end{itemize} \end{theorem} \begin{theorem}[\texttt{status\_simulation\_produce\_tal}] For every $s_1,s_1',s_2$ s.t. there is a $\texttt{trace\_any\_label}~b~s_1~s_1'$ called $tal_1$ and there is a $\texttt{TAL}~b~s_1~s_1'$ called $tal_1$ and $s_1 \mathcal{S} s_2$ \begin{itemize} \item if $b$ (the trace ends with a return) then there exists $s_{2_m},s_2'$ and a trace $\texttt{trace\_any\_label}~b~s_2~s_{2_m}$ called $tal_2$ and a $\texttt{trace\_any\_any\_free}~s_{2_m}~s_2'$ called $taa_2$ s.t. $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and $s_1' \mathcal{R} s_{2_m}$ and $\texttt{tal\_rel}~tal_1~tal_2$ and if $taa_2$ is non empty then $\lnot (\texttt{as\_costed}~s_{2_m})$ and a trace $\texttt{TAL}~b~s_2~s_{2_m}$ called $tal_2$ and a $\texttt{TAAF}~s_{2_m}~s_2'$ called $taa_2$ s.t. $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and $s_1' \R s_{2_m}$ and $tal_1 \approx tal_2$ and if $taa_2$ is non empty then $\lnot \ell~s_{2_m}$; \item else there exists $s_2'$ and a $\texttt{trace\_any\_label}~b~s_2~s_2'$ called $tal_2$ such that either $s_1' (\mathcal{S} \cap \mathcal{L}) s_2'$ and $\texttt{tal\_rel}~tal_1~tal_2$ or $s_1' (\mathcal{S} \cap \mathcal{L}) s_2$ and $\texttt{tal\_collapsable}~tal_1$ and $\lnot (\texttt{as\_costed}~s_1)$ $\texttt{TAL}~b~s_2~s_2'$ called $tal_2$ such that either $s_1' \mathrel{{\S} \cap {\L}} s_2'$ and $tal_1\approx tal_2$ or $s_1' \mathrel{{\S} \cap {\L}} s_2$ and $\texttt{tal\_collapsable}~tal_1$ and $\lnot \ell~s_1$. \end{itemize} \end{theorem} the CerCo compiler exploiting the main theorem of this paper. 
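On finite toy transition systems, the 1-to-0-or-many shape of the simulation conditions can be checked mechanically: for every source step out of a pair of related states, search the target for a (possibly empty) sequence of steps that re-establishes the relation. The sketch below ignores classifications, labels and the $\C,\L,\R$ refinements, so it is in no way CerCo's formal statement — only an illustration of the quantifier structure:

```python
# Toy finite transition systems as successor maps, plus a candidate
# relation S between source and target states.
src_step = {"a": ["b"], "b": []}
tgt_step = {"x": ["y"], "y": ["z"], "z": []}
S = {("a", "x"), ("b", "z")}

def reachable(tgt, s, bound=5):
    """Target states reachable from s in at most `bound` steps.
    The start state itself is included, giving the '0-or-many' shape."""
    seen, frontier = {s}, [s]
    for _ in range(bound):
        frontier = [t for u in frontier for t in tgt[u] if t not in seen]
        seen.update(frontier)
    return seen

def forward_sim(src, tgt, S):
    """Whenever s1 S s2 and s1 -> s1', some s2 ->* s2' must satisfy s1' S s2'."""
    for (s1, s2) in S:
        for s1p in src[s1]:
            if not any((s1p, s2p) in S for s2p in reachable(tgt, s2)):
                return False
    return True
```

Here `forward_sim` succeeds for `S` because the source step `a -> b` is matched by the two target steps `x -> y -> z`, while dropping the pair `("b", "z")` from the relation makes it fail.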
\paragraph{Related works.} CerCo is the first project that explicitly tries to induce a precise cost model on the source code in order to establish non-functional
http://johnstantongeddes.org/ecological%20genetics/2013/04/08/bioinformatics-monday.html
08 April 2013 Bradyrhizobium Next post Previous post # Bioinformatics Monday ## Bradyrhizobium Ran seqtk for quality filtering over the weekend. While the program appeared to run fine, the output files were always 0 lines. Checked that grooming worked correctly: In Galaxy, the files go from the original Illumina 1.3 format @HWI-ST261:8:1:1222:2141#CGATGT/2 ATGCCGTGCTCGCGAACGAGCCGACTGGCGCGATTCAGCCACGTGCCAGTCAGGACACGCGTATGTGGACGGTCTGCGCAGAGCGTGTGTGTACGCGTAT + f_ffUdcdBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB @HWI-ST261:8:1:1418:2082#CGATGT/2 ATCGCCGCATGCGAGGCGGAGAGGCTGGATGTGCAGGGGCAGACGGCGGCGGGTATACAGTTGCGCAACGAGCTCGTCTCGAGTGTCGCACGCGACACGA + BBBBBBBBBBBBBBBBBBBBBBBBBBBBB and after grooming to fastq-sanger @HWI-ST261:8:1:1222:2141#CGATGT/2 ATGCCGTGCTCGCGAACGAGCCGACTGGCGCGATTCAGCCACGTGCCAGTCAGGACACGCGTATGTGGACGGTCTGCGCAGAGCGTGTGTGTACGCGTAT + G@GG6EDE############################################################################################ @HWI-ST261:8:1:1418:2082#CGATGT/2 ATCGCCGCATGCGAGGCGGAGAGGCTGGATGTGCAGGGGCAGACGGCGGCGGGTATACAGTTGCGCAACGAGCTCGTCTCGAGTGTCGCACGCGACACGA + ########################## and confirmed that the BioPython write function did the same. Spot checked that this is the correct fastq conversion. Illumina 1.3 encoding: 38=f, 31=_, 2=B, Sanger/Illumina 1.8: 38=G, 31=@, 3=# NOTE - while checking this, found that I used the files in the directory: /home/tiffinp/stanton1/bmgc_incoming/110527_SN261_0347_B81JM9ABXX_L8/fastq_flt However, there were 3 directories provided by BMGC - fastq - fastq_flt - fastq_flt_syn and they differ in file size. Based on file size, it appears that the files imported into Galaxy are the ‘fastq_flt’ files, so I should probably restart my workflow from these when all is running. However, the ‘fastq_flt_syn’ files are only marginally smaller and look the same in spot-checking, so probably another filter was applied so I will use these files. Used seqtk to trim files. 
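As a side note, the grooming conversion checked above is just an ASCII-offset shift: Illumina 1.3+ encodes a Phred score Q as chr(Q + 64), Sanger/Illumina 1.8+ as chr(Q + 33). A minimal sketch of the conversion (the function name is mine, not part of Galaxy or BioPython):

```python
def illumina13_to_sanger(qual: str) -> str:
    """Re-encode a Phred quality string from Illumina 1.3+ (offset 64)
    to Sanger/Illumina 1.8+ (offset 33)."""
    return "".join(chr(ord(c) - 64 + 33) for c in qual)

# Spot checks matching the records above: Q=38 is 'f' -> 'G',
# Q=31 is '_' -> '@', and Q=2 is 'B' -> '#' (chr(2 + 33) == '#').
assert illumina13_to_sanger("f") == "G"
assert illumina13_to_sanger("_") == "@"
assert illumina13_to_sanger("B") == "#"
```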
NOTE that no reads are removed by seqtk. I found the answer to this buried in a biostars post:

> I have just added quality based trimming in seqtk. It uses the Phred algorithm to trim both ends. If after trimming, the read length is below 30 (by default), it tries to find a 30bp window with the highest sum of quality.

So, a second step should be removing any reads with length < 30 bp. Was also able to do this using seqtk:

    seqtk seq -L 31 in.fastq > out.fastq

Worked perfectly. As I had previously found from Galaxy, about 10 million more R2 reads are removed as they had high quality for only ~60 bp.

Sample | Direction | Num reads | Reads after trimming
------ | --------- | ---------- | --------------------
IC1 | R1 | 55,277,504 | 54,330,756
IC1 | R2 | 55,277,504 | 41,378,320
ENC4 | R1 | 53,690,116 | 52,845,976
ENC4 | R2 | 53,690,116 | 41,618,276
EWC3 | R1 | 50,995,944 | 50,161,016
EWC3 | R2 | 50,995,944 | 39,365,136

NOTE that this results in unpaired reads. A script that removes singleton reads is available as part of velvet: https://github.com/dzerbino/velvet/blob/master/contrib/select_paired/select_paired.pl

## Lab Notebook

Checked off two items on website TODO list!

• set background image
• how to properly display tags

For tags, my question on stackoverflow was answered! Useful reference for jekyll: https://github.com/mojombo/jekyll/wiki/Liquid-Extensions

For background image, was almost there before…just had to change css to

    body { background-image: url('/assets/img/natural_paper/natural_paper.png'); background-repeat: repeat; }

This work is licensed under a Creative Commons Attribution 4.0 International License.
https://www.romange.com/2019/07/25/benchmarking-gaia-mr-on-google-cloud/
# Benchmarking GAIA MR on Google cloud

I’ve recently had a chance to benchmark GAIA in Google cloud. The goal was to test how quickly I can process compressed text data (i.e. read and uncompress on the fly) when running on a single VM and reading directly from cloud storage. The results were quite surprising. In order to focus on I/O read only and eliminate potential bottlenecks related to local disks, I’ve written a simple benchmark program mr_read_test using GAIA-MR.

    class NoOpMapper {
     public:
      void Do(string val, mr3::DoContext<string>* context) {}  //! Does not output any data.
    };

    ...

    std::vector<string> inputs;
    for (int i = 1; i < argc; ++i) {
      inputs.push_back(argv[i]);
    }
    StringTable out_table = ss.Map<NoOpMapper>("map");  //! Map with noop mapper.
    out_table.Write("outp1", pb::WireFormat::TXT)
        .WithModNSharding(10, [](const string& s) { return 0; })
        .AndCompress(pb::Output::ZSTD, 1);
    pipeline->Run(runner);

## Setup

I’ve used a 32-core machine on Google Cloud. The documentation says that Google does not cap ingress traffic, but we can roughly expect around 10Gbit/s, or 1.25GB/s. I did not find any references to Google storage bandwidth caps. I was curious to know whether I could reach the 1.25GB/s cap by reading compressed data and uncompressing it on the fly. Storing compressed data in the cloud is a good CPU vs I/O tradeoff because usually, in the cloud, we are bottlenecked on I/O bandwidth.

I’ve prepared a 2TB dataset comprising 260 thousand zstd-compressed files of different sizes that should put enough load on the framework. I’ve used zstd compression because it’s the best open-source compressor that exists these days. If you have not used it before - do try it. It’s especially efficient during the decompression phase, reaching very high speeds. By the way, GAIA-MR supports both gzip and zstd formats out of the box.

By default, the framework creates 2 I/O read fibers per CPU core.
For a 32-core instance, this means the framework creates 64 socket connections to the Google Storage API gateway (in general, you can control this setting with the --map_io_read_factor flag). I’ve used a slightly modified Ubuntu 18.04 image provided by Google Cloud.

## Benchmark

To run mr_read_test I used the following command:

    /usr/bin/time -v mr_read_test --map_io_read_factor=2 gs://mytestbucket/mydataset/**

I’ve used the double-star suffix to instruct the framework to treat the path as a glob and expand it recursively.

## Results

The time command exited with the following statistics:

    Command being timed: "mr_read_test --map_io_read_factor=2 gs://mytestbucket/mydataset/**"
    User time (seconds): 38352.26
    System time (seconds): 1625.60
    Percent of CPU this job got: 2137%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 31:10.32
    Maximum resident set size (kbytes): 231720
    Major (requiring I/O) page faults: 129
    Minor (reclaiming a frame) page faults: 104884
    Voluntary context switches: 19396238
    Involuntary context switches: 13937163
    File system inputs: 76840
    File system outputs: 102280
    Page size (bytes): 4096
    Exit status: 0

This htop snapshot shows that we managed to fully utilize all 32 cores. Moreover, the all-green bars show that the CPUs spend most of their time in user land.

What can we say about our goal of reading compressed data quickly? First of all, simply dividing 2TB by the total 31:10 minutes gives us 1.07GB/s of compressed-data reading. That’s not bad, I guess, since it also includes the bootstrapping time in which the framework expands the input path into 260K file objects. But if we look at the network usage, we can see that we reached 1.76GB/s at peak. That’s above the expected 1.25GB/s.

## Summary

I’ve shown that GAIA-MR can efficiently read datasets on the order of a few terabytes of compressed data on a single node. Just by using 64 parallel connections to the Google Storage gateway, we reached a 1.76GB/s peak speed and were bottlenecked on CPU.
Google Cloud’s network and GCS provided me with bandwidth I would not expect to reach with disk-based systems. I think that GAIA-MR in a cloud environment can provide very good value for money when batch-processing datasets of a few terabytes. Please try it and tell me what you think!
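For the record, here is a quick sanity check of the figures quoted above, using decimal units (1 GB = 1e9 bytes), which is how cloud bandwidth is usually quoted:

```python
dataset_bytes = 2e12         # the 2TB compressed dataset
wall_seconds = 31 * 60 + 10  # elapsed wall-clock time, 31:10

# Average read rate over the whole run, including bootstrapping.
avg_gbps = dataset_bytes / wall_seconds / 1e9
print(f"average read rate: {avg_gbps:.2f} GB/s")  # -> 1.07 GB/s

# The assumed ingress cap: 10 Gbit/s is 1.25 GB/s.
cap_gbps = 10e9 / 8 / 1e9
print(f"assumed ingress cap: {cap_gbps:.2f} GB/s")  # -> 1.25 GB/s
```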
http://www.codecogs.com/library/computing/stl/algorithms/binarysearch/equal_range.php
# equal_range

Returns the range of elements equal to a given value

## Definition

The equal_range() algorithm is defined in the standard header <algorithm> and in the nonstandard backward-compatibility header <algo.h>.

## Interface

```#include <algorithm>
template < class ForwardIterator, class Type >
pair<ForwardIterator, ForwardIterator> equal_range( ForwardIterator first, ForwardIterator last, const Type& val );
template < class ForwardIterator, class Type, class Predicate >
pair<ForwardIterator, ForwardIterator> equal_range( ForwardIterator first, ForwardIterator last, const Type& val, Predicate comp );```

Parameters:

Parameter | Description
--------- | -----------
first | A forward iterator addressing the position of the first element in the range to be searched
last | A forward iterator addressing the position one past the final element in the range to be searched
val | The value being searched for: the elements of the subrange delimited by the returned pair are those equivalent to `val`
comp | User-defined predicate function object that is `true` when the left-hand argument is less than the right-hand argument. The user-defined predicate function should return `false` when its arguments are equivalent

## Description

The equal_range() function finds the lower and upper bounds of the positions where a value equal to `val` could be inserted without disordering the elements between `first` and `last`; equivalently, it delimits the subrange of elements equivalent to `val`. The first version uses `operator<` for comparison, and the second uses the function object `comp`.

## Return Value

Returns a pair of forward iterators that specify a subrange, contained within the range searched, in which all of the elements are equivalent to `val` in the sense defined by the binary predicate used (either `comp` or the default, less-than).

## Complexity

The complexity is logarithmic in the distance between `first` and `last`.
### References Example: ##### Example - equal_range algorithm Problem This program illustrates the use of the STL equal_range() algorithm (default version) to find the lower bound and upper bound locations of a given target value in a vector of integers sorted in ascending order. Workings ```#include <iostream> #include <vector> #include <algorithm> using namespace std; int main() { int a[] = {2, 3, 5, 6, 7, 7, 7, 8, 9, 10}; vector<int> v(a, a+10); cout <<"\nHere are the contents of v:\n"; for (vector<int>::size_type i=0; i<v.size(); i++) cout <<v.at(i)<<" "; pair<vector<int>::iterator, vector<int>::iterator> bounds; bounds = equal_range(v.begin(), v.end(), 3); if (bounds.first != v.end()) cout <<"\nLower bound of 3 in v = "<<*bounds.first; if (bounds.first != v.end()) cout <<"\nUpper bound of 3 in v = "<<*bounds.second; bounds = equal_range(v.begin(), v.end(), 4); if (bounds.first != v.end()) cout <<"\nLower bound of 4 in v = "<<*bounds.first; if (bounds.first != v.end()) cout <<"\nUpper bound of 4 in v = "<<*bounds.second; bounds = equal_range(v.begin(), v.end(), 5); if (bounds.first != v.end()) cout <<"\nLower bound of 5 in v = "<<*bounds.first; if (bounds.first != v.end()) cout <<"\nUpper bound of 5 in v = "<<*bounds.second; bounds = equal_range(v.begin(), v.end(), 7); if (bounds.first != v.end()) cout <<"\nLower bound of 7 in v = "<<*bounds.first; cout <<"\nThis is the first of the three 7's, since the value " "before this 7 is "<<*(bounds.first-1)<<"."; if (bounds.first != v.end()) cout <<"\nUpper bound of 7 in v = "<<*bounds.second; bounds = equal_range(v.begin(), v.end(), 0); if (bounds.first != v.end()) cout <<"\nLower bound of 0 in v = "<<*bounds.first; if (bounds.first != v.end()) cout <<"\nUpper bound of 0 in v = "<<*bounds.second; bounds = equal_range(v.begin(), v.end(), 15); if (bounds.first != v.end()) cout <<"\nLower bound of 15 in v = "<<*bounds.first; if (bounds.first != v.end()) cout <<"\nUpper bound of 15 in v = "<<*bounds.second; cout <<"\nNote 
that both the lower and upper bound locations " "\nof 15 are the end (one-past-the-last) vector position."; return 0; }``` Solution Output: Here are the contents of v: 2 3 5 6 7 7 7 8 9 10 Lower bound of 3 in v = 3 Upper bound of 3 in v = 5 Lower bound of 4 in v = 5 Upper bound of 4 in v = 5 Lower bound of 5 in v = 5 Upper bound of 5 in v = 6 Lower bound of 7 in v = 7 This is the first of the three 7's, since the value before this 7 is 6. Upper bound of 7 in v = 8 Lower bound of 0 in v = 2 Upper bound of 0 in v = 2 Note that both the lower and upper bound locations of 15 are the end (one-past-the-last) vector position. References
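As an aside for readers coming from other languages, Python's standard `bisect` module gives the same two bounds on the data from the example above (this analogy is mine, not part of the reference page):

```python
from bisect import bisect_left, bisect_right

v = [2, 3, 5, 6, 7, 7, 7, 8, 9, 10]  # same sorted data as the C++ example

def equal_range(seq, val):
    """Return (lo, hi) such that seq[lo:hi] holds the elements equal to val."""
    return bisect_left(seq, val), bisect_right(seq, val)

assert equal_range(v, 7) == (4, 7)     # the three 7's occupy indices 4..6
assert equal_range(v, 4) == (2, 2)     # empty range: 4 is absent
assert equal_range(v, 15) == (10, 10)  # both bounds are one-past-the-end
```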
http://tex.stackexchange.com/users/3094/paulo-cereda?tab=activity
Paulo Cereda Reputation 25,196 100/100 score Apr22 comment abntex2 package chapter creation not working @egreg: I browsed their GitHub repository and found the issue has already been reported: github.com/abntex/abntex2/issues/140 Apr22 comment abntex2 package chapter creation not working Witchcraft, witchcraft, witchcraft! I wub you. ♥ `:)` Apr22 comment abntex2 package chapter creation not working Hi Adriano, welcome to TeX.sx! `:)` Apr22 revised abntex2 package chapter creation not working Fixed code. :) Apr22 comment Breaking inline math within longtable or tabular What a cool solution! Thanks for that, Steven! `:)` Apr15 awarded Famous Question Apr8 comment Is \renewcommand{\fi} possible? You know egreg is being serious when there's a text in all caps, bold and italics. `:)` Apr5 awarded Good Question Apr3 awarded Nice Answer Apr1 comment How to draw many crossroads? `+1` ♫ how many crossroads must a man walk down… ♫ `:)` Mar28 comment Underline a page head @topskip: I see what you did there. `:)` I think we could agree that DPC = Donald P. Cnuth. `:)` Mar26 comment Unused bibliography entries - how to check which entries were not used? @TorbjørnT.: Thanks, I added an addendum in the end of my answer. `:)` Mar26 revised Unused bibliography entries - how to check which entries were not used? added 276 characters in body Mar12 comment What is Cervantex? Writing TeX code and tilting at windmills! Yay! `:)` Mar2 comment What options to use with pdflatex and when @petobens: We have a Gitter chatroom, if you need a compiled version, just ping me, I'll be glad to provide you all the necessary files to run and test the new `4.0` version. `:)` Mar2 comment What options to use with pdflatex and when @petobens: I'm trying very hard, but it's quite complicated at the moment (lots of other stuff to do, including my PhD). I'll try to reorganize my schedule and see if I can work on the manual as fast as I can. 
I probably cannot make into the TL2015 official release, but I believe `4.0` will at least hit the update schedule this year. I'll work on it. Mar1 comment What options to use with pdflatex and when @petobens and @StrongBad: github.com/cereda/arara/wiki/New-feature-highlights-in-4.0 Maybe this simple text could be useful for some hints on what `4.0` is expected to have. Since this version does a huge qualitative jump from `3.0`, it needs a complete user manual. That's what's giving me migraines. `:)` Mar1 comment What options to use with pdflatex and when I think I am finally able to provide a real answer to both questions (the linked one and this one), but sadly I need to finish my user manual first. I believe it's quite unfair to write about an unreleased version, although it is just a matter of time for it to hit CTAN. Better safe than sorry, I think. The `4.0` version hopefully will provide a better compilation workflow with the inclusion of conditionals; besides, rules can now incorporate more complex tasks, so even if the default ones cannot satisfy one's workflow, I'm sure we will be able to accomplish such task by writing our own rule. Feb25 comment Is it possible to use Lua to obtain the current working directory? @Fran: I will take a look later on. `:)` I think we haven't got any problems so far because Unix paths go with `/` instead of \; I suppose `\unexpanded` is needed mostly because of a potential special character in the path. Feb24 comment Is it possible to use Lua to obtain the current working directory? @Fran: I wrote something along those lines. `:)`
https://math.stackexchange.com/questions/2591400/book-on-finite-group-theory-containing-a-sufficient-number-of-examples/2591910
# Book on finite group theory, containing a sufficient number of examples I read M.Isaacs book on finite group theory now and I find it quite interesting and well written. But also I feel that there are not enough examples (for me) in this book. Maybe there is another book wich can be used to complement Isaacs book which contain enough examples? Or maybe there are resources, where one can find interesting and demonstrative examples concerning finite groups? • See if this helps you: math.stackexchange.com/questions/25506/… – user371838 Jan 4 '18 at 10:05 • Last year Serre just wrote a new book on finite groups, he's one of the best writer in mathematics. The two last chapters are about finite subgroup of $\rm{GL}_n$ and group of small order. – Nicolas Hemelsoet Jan 4 '18 at 10:31 • Thank you very much, guys! – Mikhail Goltvanitsa Jan 4 '18 at 10:43 • I agree with @NicolasHemelsoet that it's always worth checking out Serre. The specific book is Serre - Finite groups: An introduction, also sold through the AMS. There is an MAA review, which suggests that it may not be at the level for someone looking for an introduction to group theory. (I cannot tell whether that describes the poster.) – LSpice Jan 4 '18 at 17:14 Why not try one of the following: John S. Rose, A Course on Group Theory Derek J.S. Robinson, A Course in The Theory of Groups Dummit and Foote, Abstract Algebra. Each of these books has a lot of good examples and exercises! • Second the vote for Dummitt & Foote (+1). It covers a lot more than finite groups, but is really an example-based text that is also very rigorous. Highly recommend. – rogerl Jan 4 '18 at 15:00 • Thank you, Nicky. I knew about the DF book, but somehow underestimated it. But in DF book there are no such deep topics as in Isaacs book (for example Chermak-Delgado measure) and so less or more non-trivial examples one forced to look elsewhere. 
– Mikhail Goltvanitsa Jan 4 '18 at 16:07 • It's too small to try suggesting the edit, but, as @rogerl politely points out, you probably want to change the spelling of 'Foote' (from 'Foot'). – LSpice Jan 4 '18 at 17:24 • @Lspice, Certainly a "misstep" :-) Thanks for pointing out. – Nicky Hekster Jan 4 '18 at 17:53 • @NickyHekster, well, I guess I put Foote in your mouth. ;-) – LSpice Jan 4 '18 at 17:56 Groups: A Path to Geometry by R.P. Burn is an introduction to group theory that consists entirely of examples, problems and solutions. Schaum's Outline of Group Theory by B. Baumslag contains lots of examples and problems with solutions. Adventures in Group Theory: Rubik's Cube, Merlin's Machine, and Other Mathematical Toys by David Joyner is built around a series of concrete examples and applications of groups. A very literal answer: Michael Weinstein has a book called Examples of groups. • Thank you, I never hear about this book. – Mikhail Goltvanitsa Jan 4 '18 at 20:16 I prefer, the following books for group theory in order. 1. Abstract Algebra by "Dummit & Foote", Wiley publication. 2. A course in Abstract Algebra By, "khanna and Bhambri", vikas publication. 3. Contemporary Abstract Algebra By "Gallian". If you want, classic text with lots of examples prefer, (1) and (3) and if you want lots of solved examples prefer (2). In (2) there are lots of solved examples, covering all topics, groups, rings, fields, In fact on linear transformations too. • Thank you, Akash! Despite the third book seems to be elementary, there is a quite useful section suggested readings! – Mikhail Goltvanitsa Jan 5 '18 at 7:51
https://globalncgseminar.org/talks/tba/
Elmar Schrohe (Leibniz Universität Hannover) January 12, 2021 15:00 (CET) Organized by Europe GNCG Seminar. ## Index Theory for Fourier Integral Operators and the Connes-Moscovici Local Index Formulae The index theory for operator algebras generated by pseudodifferential operators and Fourier integral operators, more specifically Lie groups of quantized canonical transformations, has attracted a lot of attention over the past years. It can be seen as a universal receptacle for a wide range of index problems such as the classical Atiyah-Singer index theorem, the Atiyah-Weinstein problem, or the Bär-Strohmaier index theory for Dirac operators on Lorentzian spacetimes. It also includes work by Connes-Moscovici, Gorokhovsky-de Kleijn-Nest, or Perrot. In my talk, I will focus on the particularly transparent situation, where the pseudodifferential operators are Shubin type operators on Euclidean space. We first study the case where the Fourier integral operators are given by metaplectic operators, then we add a Heisenberg type group of translations, so that we obtain the quantizations of isometric affine canonical transformations. We find a cohomological index formula in the first case. In the second, our algebra encompasses noncommutative tori and toric orbifolds. We introduce a spectral triple (A, H, D) with simple dimension spectrum. Here H = L2(Rn, Λ(Rn)) and D is the Euler operator, a first-order differential operator of index 1. We obtain explicit algebraic expressions for the Connes-Moscovici cyclic cocycle and local index formulae for noncommutative tori and toric orbifolds. Joint work with Anton Savin (RUDN, Moscow).
http://mathhelpforum.com/geometry/112841-gmat-geometry-etc-print.html
# GMAT Geometry...etc. • Nov 6th 2009, 03:07 PM rick81 GMAT Geometry...etc. I have been studying for the GMAT and GRE. I have taken a few GMAT CAT practice tests from the GMATprep software I downloaded from the GMAT website. I have managed to test to the level where the CAT software is giving me what it considers difficult questions. I have been stumped on a few and even after going back to review them I either don't see how to arrive at the correct answer or I haven't figured out the fastest way to solve the question. On average I should on spend 2 minutes per question. I will post a few of the problems below. The answer with the blue square is the correct answer. The filled in answer is what I choose. Thanks in advance for those that can help me out. http://i84.photobucket.com/albums/k1...6at35308PM.png • Nov 6th 2009, 03:27 PM Plato But the correct answer to that question is: $s=\sqrt{3}~\&~t=1$ • Nov 6th 2009, 03:39 PM PatrickFoster I agree with Plato, the answer should be $s = \sqrt{3}$. Perhaps there is an error in the software's answers. Incidentally, and unless I'm missing something, this problem shouldn't take more than 5 seconds to solve. Patrick • Nov 6th 2009, 03:44 PM rick81 not according to the software I also answered s=square root of 3 but according to the software the answer is s=1. I think we both assumed the larger triangle was being bisected evenly. I worked the problem using pythagoreon's thrm and the sides of the right triangle in the negative quadrant to find the length of the radius =2. Then the larger equilateral triangle has sides 2,2, 2 times square root 2 applying the knowledge that the ratio of a right equilateral triangle is 1:1:square root 2. Then S = (2 times square root 2) - (square root 3). Thats where I am stuck. How do I get from subtracting these two square roots to the answer 1? Is there a simpler approach. 
• Nov 6th 2009, 03:58 PM bigwave this is a 2-unit circle The graph shown is somewhat deceiving; it is a circle of radius 2 (not a unit circle, which is what is usually shown). P is at $\frac{5\pi}{6}$ (it appears to be more like $\frac{3\pi}{4}$), therefore on a unit circle P would be $(-\frac{\sqrt{3}}{2},\frac{1}{2})$ and Q, at $-\frac{\pi}{2}$ from P, would be $(\frac{1}{2},\frac{\sqrt{3}}{2})$ So on the circle of radius 2, P is $(-\sqrt{3},\ 1)$ and Q is $(1, \sqrt{3})$, therefore s = 1 • Nov 6th 2009, 09:51 PM rick81 thanks Makes sense. thanks • Nov 7th 2009, 06:38 AM aidan $\angle POQ$ is 90 degrees $\angle (-\sqrt{3},1),(0,0),(s,t)=$ 90 degrees Similar Triangles: $\triangle (0,0),(-\sqrt{3},1),(-\sqrt{3},0)$ is equal to $\triangle (s,t),(0,0),(s,0)$ Thus the y-coord of P is equal to the x-coord of Q. • Nov 7th 2009, 05:16 PM bjhopper GMAT problem Posted by Rick.81 The diagram you showed is depicted wrongly. The two points should be at different elevations. If that's the way it was depicted, I think it unfair. Putting in the perpendicular mark between the radii is a clue which leads to a correct solution, and it is all in knowing the properties of 30-60-90 triangles. bjhopper • Nov 7th 2009, 05:24 PM Debsta Never assume such diagrams are drawn to scale. They are done that way to throw you off.
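A quick numeric check of the reasoning in the thread (P at angle 5π/6 on a circle of radius 2, Q a quarter turn clockwise from P):

```python
import math

r = 2.0  # the circle has radius 2, not 1
P = (r * math.cos(5 * math.pi / 6), r * math.sin(5 * math.pi / 6))
# Q is P rotated by -pi/2, i.e. it sits at angle 5*pi/6 - pi/2 = pi/3.
Q = (r * math.cos(math.pi / 3), r * math.sin(math.pi / 3))

assert math.isclose(P[0], -math.sqrt(3)) and math.isclose(P[1], 1.0)
assert math.isclose(Q[0], 1.0) and math.isclose(Q[1], math.sqrt(3))
# So (s, t) = (1, sqrt(3)): the y-coordinate of P equals the x-coordinate of Q.
```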
http://www.researchgate.net/publication/8241660_Pressure_induced_phase_transitions_in_hydroquinone
Article Pressure induced phase transitions in hydroquinone. • A. K. Arora Synchrotron Radiation Section, Bhabha Atomic Research Center, Mumbai-400085, India. The Journal of Chemical Physics (Impact Factor: 3.12). 11/2004; 121(15):7320-5. DOI: 10.1063/1.1792553 Source: PubMed ABSTRACT High pressure behavior of alpha-hydroquinone (1,4-dihydroxybenzene) has been studied using Raman spectroscopy up to pressures of 19 GPa. Evolution of Raman spectra suggests two transitions around 3.3 and 12.0 GPa. The first transition appears to be associated with the lowering of crystal symmetry. Above 12.0 GPa, Raman bands in the internal modes region exhibit continuous broadening suggesting that the system is progressively evolving into a disordered state. This disorder is understood as arising due to distortion of the hydrogen-bonded cage across the second transition around 12 GPa. • Article: High pressure investigation of α-form and CH4-loaded β-form of hydroquinone compounds ABSTRACT: The high pressure compression behaviors of two hydroquinone compounds have been investigated using a combination of in situ synchrotron x-ray powder diffraction and Raman spectroscopy up to ca. 7 GPa. The structural integrity of the α-form hydroquinone clathrate is maintained throughout the pressure range, whereas the CH4-loaded β-form hydroquinone clathrate decomposes and transforms to a new high pressure phase near 5 GPa. The bulk modulus (K) and its pressure derivative (K′) of the α-form and the CH4-loaded β-form hydroquinones are measured to be 8.2(3) GPa and 8.4(4), and 10(1) GPa and 9(2), respectively, representing one of the most compressible classes of crystalline solids reported in the literature.
The corresponding axial compression behaviors, however, show greater contrast between the two hydroquinone compounds; the elastic anisotropy of the α-form is only marginal, being K(a):K(c) = 1.08:1, whereas that of the CH4-loaded β-form is rather drastic, being K(a):K(c) = 11.8:1. This is attributed to the different dimensionality of the hydrogen bonding networks between the two structures and might in turn explain the observed structural instability of the β-form, compared to the α-form. The Journal of Chemical Physics 03/2009; 130(12):124511. · 3.12 Impact Factor • Article: Evidence for a high-density amorphous form in indomethacin from Raman scattering investigations ABSTRACT: Pressure-induced transformation of γ-IMC [1-(p-chlorobenzoyl)-5-methoxy-2-methylindole-3-acetic acid] is analyzed from Raman scattering investigations in the low-frequency range of 10–250 cm−1 and the high frequency region between 1550 and 1750 cm−1, where C=O stretching vibrations are usually observed. At room temperature, by pressurization from atmospheric pressure up to 4 GPa, γ-IMC undergoes a collapse transformation into a high-pressure crystalline form, induced by large rearrangement in the hydrogen-bonded network associated with molecular conformational changes. The Raman spectrum of the high-pressure crystal is similar to that of the α form, which is denser than the γ form and metastable with respect to γ-IMC at atmospheric pressure. Upon further compression a solid-state amorphization is observed via the breakdown of hydrogen bonds. The Raman line shape of the high-pressure amorphous form is different from that of the vitreous state (or thermal glass obtained by quenching the liquid), suggesting the existence of a high-density amorphous state. By release of pressure, this high-density amorphous state transforms into the thermal glass.
This transformation can be interpreted as a transformation from a high-density amorphous to a low-density amorphous state, which could be associated with a polyamorphic transformation. Physical review. B, Condensed matter 01/2008; 77(9). · 3.77 Impact Factor
https://www.math24.net/fluid-pressure/
Fluid Pressure • Pressure is defined as the force per unit area: $P = \frac{F}{A}.$ If an object is immersed in a liquid at a depth $$h$$, the fluid pressure is given by the constant depth formula $P = \rho gh,$ where $$\rho$$ is the fluid density and $$g$$ is the acceleration due to gravity. Fluid pressure is a scalar quantity. It has no direction, so a fluid exerts pressure equally in all directions. This statement is known as Pascal’s law, discovered by the French scientist Blaise Pascal ($$1623–1662$$). Consider the case when a vertical plate bounded by the lines ${x = a,\;\;}\kern0pt{x = b,\;\;}\kern0pt{y = f\left( x \right),\;\;}\kern0pt{y = g\left( x \right)}$ is immersed in a liquid. Since the different points of the lamina are at different depths, the total hydrostatic force $$F$$ acting on the lamina is determined through integration: $F = \rho g\int\limits_a^b {\left[ {f\left( x \right) - g\left( x \right)} \right]xdx} .$ This formula is often referred to as the variable depth formula for fluid force. • Solved Problems Click a problem to see the solution. Example 1 A cylindrical tank with height of $$3\,\text{m}$$ and base radius $$1\,\text{m}$$ is filled with gasoline. Calculate the hydrostatic force exerted on the wall of the tank if the density of gasoline is $$800\,\large{\frac{{\text{kg}}}{{{\text{m}^3}}}}\normalsize.$$ Example 2 A rectangular swimming pool is $$H$$ meters deep, $$a$$ meters wide, and $$b$$ meters long. Calculate 1. The fluid force $$F_{ab}$$ acting on the bottom of the pool; 2. The fluid force $$F_{aH}$$ acting on each $$\left({a \times H}\right)\text{m}$$ side; 3. The fluid force $$F_{bH}$$ acting on each $$\left({b \times H}\right)\text{m}$$ side. Example 3 A triangular plate with base $$a$$ and height $$H$$ is submerged vertically in water so that its base lies at the surface of the water. Find the hydrostatic force acting on each side of the plate.
Example 4 A cube with side $$a$$ is submerged in water so that its top face is parallel to the water surface and $$H$$ meters below it. Find the total hydrostatic force acting on the cube. Example 5 A rectangular plate with sides $$a$$ and $$b$$ $$\left({a \gt b}\right)$$ is submerged in water at an angle $$\alpha$$ to the water surface. The longer side is parallel to the surface and lies at a depth $$H$$. Find the force acting on each side of the plate. Example 6 A dam has the shape of an isosceles trapezoid with upper base $$a = 6.4\,\text{m},$$ lower base $$b = 4.2\,\text{m},$$ and height $$H = 3\,\text{m}.$$ Find the force on the dam due to hydrostatic pressure. Example 7 A right circular cone with base radius $$R$$ and altitude $$H$$ is submerged, vertex downwards, in water so that its base is on the surface of the water. Find the force due to hydrostatic pressure acting on the lateral cone surface. Example 8 A plate in the shape of a parallelogram with sides $$a, b$$ and angle $$\alpha$$ is submerged vertically in water, so that the side $$b$$ is at the water surface. Calculate the hydrostatic force acting on each side of the plate. Example 9 A disk of radius $$R$$ is half submerged vertically in liquid of density $$\rho.$$ Find the hydrostatic force acting on one side of the disk. Example 10 A plate in the shape of a parabolic segment is submerged vertically in water as shown in Figure $$12.$$ The base of the segment is $$2a,$$ the height is $$H.$$ Find the force due to hydrostatic pressure acting on each side of the plate. Example 1. A cylindrical tank with height of $$3\,\text{m}$$ and base radius $$1\,\text{m}$$ is filled with gasoline. Calculate the hydrostatic force exerted on the wall of the tank if the density of gasoline is $$800\,\large{\frac{{\text{kg}}}{{{\text{m}^3}}}}\normalsize.$$ Solution. We choose the $$x-$$axis directed vertically downward with origin at the top base of the tank.
Consider a thin layer at a depth of $$x.$$ If its thickness is $$dx,$$ the lateral surface area of the layer is given by $dA = 2\pi Rdx.$ The fluid pressure at the depth $$x$$ is $$P = \rho gx,$$ so the force exerted by the fluid on the lateral surface is $dF = PdA = 2\pi \rho gRxdx.$ To find the total hydrostatic force $$F,$$ we integrate from $$x = 0$$ to $$x = H:$$ $\require{cancel}{F = \int\limits_0^H {dF} }={ 2\pi \rho gR\int\limits_0^H {xdx} }={ \left. {\frac{{\cancel{2}\pi \rho gR{x^2}}}{\cancel{2}}} \right|_0^H }={ \left. {\pi \rho gR{x^2}} \right|_0^H }={ \pi \rho gR{H^2}.}$ Substituting the given values into the formula, we have ${F = \pi \times 800 \times 9.8 \times 1 \times {3^2} }\approx{ 221671\,\text{N} }\approx{ 222\,\text{kN}.}$ Example 2. A rectangular swimming pool is $$H$$ meters deep, $$a$$ meters wide, and $$b$$ meters long. Calculate 1. The fluid force $$F_{ab}$$ acting on the bottom of the pool; 2. The fluid force $$F_{aH}$$ acting on each $$\left({a \times H}\right)\text{m}$$ side; 3. The fluid force $$F_{bH}$$ acting on each $$\left({b \times H}\right)\text{m}$$ side; Solution. 1. The pressure at the bottom of the swimming pool is $$P = \rho gH,$$ so the hydrostatic force acting on the bottom is given by ${F_{ab}} = PA = \rho gHA = \rho gabH.$ 2. To determine the force on the $$\left({a \times H}\right)\text{m}$$ side of the pool, we take a thin strip of thickness $$dx$$ at a depth $$x.$$ The area of the strip is $$dA = adx.$$ Since the water pressure at depth $$x$$ is $$P = \rho gx,$$ the force acting on the elementary strip is $dF = PdA = \rho gaxdx.$ The total force on the $$\left({a \times H}\right)\text{m}$$ side is obtained by integration: ${F_{aH} = \int\limits_0^H {dF} }={ \rho ga\int\limits_0^H {xdx} }={ \left. {\frac{{\rho ga{x^2}}}{2}} \right|_0^H }={ \frac{{\rho ga{H^2}}}{2}.}$ 3.
Similarly we can find the force acting on the $$\left({b \times H}\right)\text{m}$$ side of the pool: ${F_{bH}} = \frac{{\rho gb{H^2}}}{2}.$ Example 3. A triangular plate with base $$a$$ and height $$H$$ is submerged vertically in water so that its base lies at the surface of the water. Find the hydrostatic force acting on each side of the plate. Solution. From similar triangles we have ${\frac{W}{a} = \frac{{H - x}}{H},}\;\; \Rightarrow {W = a - \frac{a}{H}x.}$ The area of the elementary horizontal strip at depth $$x$$ is ${dA = Wdx }={ \left( {a - \frac{a}{H}x} \right)dx.}$ The water pressure at depth $$x$$ is $$P = \rho gx,$$ so the force acting on the strip is written as ${dF = PdA }={ \rho gx\left( {a - \frac{a}{H}x} \right)dx }={ \rho gax\left( {1 - \frac{x}{H}} \right)dx.}$ The total force is determined through integration: ${F = \int\limits_0^H {dF} }={ \rho ga\int\limits_0^H {x\left( {1 - \frac{x}{H}} \right)dx} }={ \rho ga\int\limits_0^H {\left( {x - \frac{{{x^2}}}{H}} \right)dx} }={ \rho ga\left. {\left[ {\frac{{{x^2}}}{2} - \frac{{{x^3}}}{{3H}}} \right]} \right|_0^H }={ \rho ga\left( {\frac{{{H^2}}}{2} - \frac{{{H^3}}}{{3H}}} \right) }={ \frac{{\rho ga{H^2}}}{6}.}$ Example 4. A cube with side $$a$$ is submerged in water so that its top face is parallel to the water surface and $$H$$ meters below it. Find the total hydrostatic force acting on the cube. Solution.
Using the constant depth formula, it is easy to find the force acting on the top face: ${F_{top}} = {P_{top}}A = \rho g{a^2}H.$ Similarly, the force on the bottom face is written as ${{F_{bottom}} = {P_{bottom}}A }={ \rho g{a^2}\left( {H + a} \right) }={ \rho g{a^2}H + \rho g{a^3}.}$ To determine the side force, we consider a thin horizontal strip of thickness $$dx$$ at depth $$x.$$ Its area is $$dA = adx.$$ The water pressure at this depth is $$P = \rho gx,$$ so the hydrostatic force $$dF$$ acting on the strip is given by the expression $dF = PdA = \rho gaxdx.$ Then the force acting on the entire face of the cube is obtained by integration: ${{F_{side}} = \int\limits_H^{H + a} {dF} }={ \rho ga\int\limits_H^{H + a} {xdx} }={ \left. {\frac{{\rho ga{x^2}}}{2}} \right|_H^{H + a} }={ \frac{{\rho ga}}{2}\left[ {{{\left( {H + a} \right)}^2} - {H^2}} \right] }={ \frac{{\rho ga}}{2}\left( {\cancel{{H^2}} + 2aH + {a^2} -\cancel{{H^2}}} \right) }={ \rho g{a^2}H + \frac{{\rho g{a^3}}}{2}.}$ The total hydrostatic force acting on the cube is given by ${F = {F_{top}} + {F_{bottom}} + 4{F_{side}} }={ \rho g{a^2}H + \rho g{a^2}H }+{ \rho g{a^3} }+{ 4\left( {\rho g{a^2}H + \frac{{\rho g{a^3}}}{2}} \right) }={ 6\rho g{a^2}H + 3\rho g{a^3} }={ 3\rho g{a^2}\left( {2H + a} \right).}$ Example 5. A rectangular plate with sides $$a$$ and $$b$$ $$\left({a \gt b}\right)$$ is submerged in water at an angle $$\alpha$$ to the water surface. The longer side is parallel to the surface and lies at a depth $$H$$. Find the force acting on each side of the plate. Solution. By Pascal’s law, the fluid pressure at a depth $$x$$ is $$P = \rho gx$$ in any direction.
So if we take a small strip on the plate at depth $$x$$ corresponding to the increment $$dx,$$ the force acting on the strip is given by ${dF = PdA }={ \rho gx \times \frac{{adx}}{{\sin \alpha }} }={ \frac{{\rho gaxdx}}{{\sin \alpha }}.}$ The total hydrostatic force is obtained by integration: ${F = \int\limits_H^{H + b\sin \alpha } {dF} }={ \frac{{\rho ga}}{{\sin \alpha }}\int\limits_H^{H + b\sin \alpha } {xdx} }={ \frac{{\rho ga}}{{\sin \alpha }}\left. {\frac{{{x^2}}}{2}} \right|_H^{H + b\sin \alpha } }={ \frac{{\rho ga}}{{2\sin \alpha }}\left[ {{{\left( {H + b\sin \alpha } \right)}^2} - {H^2}} \right] }={ \frac{{\rho ga}}{{2\sin \alpha }}\left( {2bH\sin \alpha + {b^2}{{\sin }^2}\alpha } \right) }={ \rho gab\left( {H + \frac{b}{2}\sin \alpha } \right).}$ Example 6. A dam has the shape of an isosceles trapezoid with upper base $$a = 6.4\,\text{m},$$ lower base $$b = 4.2\,\text{m},$$ and height $$H = 3\,\text{m}.$$ Find the force on the dam due to hydrostatic pressure. Solution. If we choose the vertical $$x-$$axis directed downward, the fluid pressure at a depth $$x$$ is written as $P = \rho gx.$ A thin horizontal strip of width $$dx$$ at depth $$x$$ can be approximated by a rectangle with the area equal to $dA = Wdx,$ where the width $$W$$ of the trapezoid at depth $$x$$ is determined from similar triangles and is given by $W = a - \left( {a - b} \right)\frac{x}{H}.$ Hence, the hydrostatic force acting on the strip is expressed by the formula ${dF = PdA }={ \rho gx\left[ {a - \left( {a - b} \right)\frac{x}{H}} \right]dx.}$ The total force exerted on the dam due to hydrostatic pressure is given by ${F = \int\limits_0^H {dF} }={ \rho g\int\limits_0^H {x\left[ {a - \left( {a - b} \right)\frac{x}{H}} \right]dx} }={ \rho g\int\limits_0^H {\left( {ax - \frac{{a - b}}{H}{x^2}} \right)dx} }={ \rho g\left. {\left[ {\frac{{a{x^2}}}{2} - \frac{{\left( {a - b} \right){x^3}}}{{3H}}} \right]} \right|_0^H }={ \rho g\left[ {\frac{{a{H^2}}}{2} - \frac{{\left( {a - b} \right){H^2}}}{3}} \right] }={ \rho g{H^2}\left( {\frac{a}{6} + \frac{b}{3}} \right).}$ Now we can easily calculate the value of the force: ${F = 1000 \times 9.8 \times {3^2} \times \left( {\frac{{6.4}}{6} + \frac{{4.2}}{3}} \right) }={ 217560\,\text{N} }\approx{ 218\,\text{kN}.}$ Example 7. A right circular cone with base radius $$R$$ and altitude $$H$$ is submerged, vertex downwards, in water so that its base is on the surface of the water. Find the force due to hydrostatic pressure acting on the lateral cone surface. Solution. We have the following proportion from similar triangles: ${\frac{W}{{H - x}} = \frac{R}{H},}\;\; \Rightarrow {W = \frac{{R\left( {H - x} \right)}}{H} = R\left( {1 - \frac{x}{H}} \right).}$ The surface area of the elementary cone strip at the point $$x$$ is given by ${dA = 2\pi Wdx }={ 2\pi R\left( {1 - \frac{x}{H}} \right)dx.}$ The pressure in any direction at depth $$x$$ is $$P = \rho gx,$$ so the force on the strip is equal to ${dF = PdA }={ 2\pi \rho gRx\left( {1 - \frac{x}{H}} \right)dx.}$ The total force is obtained by integrating from $$x = 0$$ to $$x = H:$$ ${F = \int\limits_0^H {dF} }={ 2\pi \rho gR\int\limits_0^H {x\left( {1 - \frac{x}{H}} \right)dx} }={ 2\pi \rho gR\int\limits_0^H {\left( {x - \frac{{{x^2}}}{H}} \right)dx} }={ 2\pi \rho gR\left. {\left( {\frac{{{x^2}}}{2} - \frac{{{x^3}}}{{3H}}} \right)} \right|_0^H }={ 2\pi \rho gR\left( {\frac{{{H^2}}}{2} - \frac{{{H^3}}}{{3H}}} \right) }={ \frac{{\pi \rho gR{H^2}}}{3}.}$ Example 8. A plate in the shape of a parallelogram with sides $$a, b$$ and angle $$\alpha$$ is submerged vertically in water, so that the side $$b$$ is at the water surface. Calculate the hydrostatic force acting on each side of the plate. Solution.
The vertices of the parallelogram $$ABCD$$ are ${A\left( {0,0} \right),\;\;}\kern0pt{B\left( {0,b} \right),\;\;}\kern0pt{C\left( {a\sin \alpha ,b + a\cos \alpha } \right),\;\;}\kern0pt{D\left( {a\sin \alpha ,a\cos \alpha } \right).}$ Determine the equation of the side $$AD.$$ Using the two-point form of straight line equation, we have: ${\frac{{x - {x_A}}}{{{x_D} - {x_A}}} = \frac{{y - {y_A}}}{{{y_D} - {y_A}}},}\;\; \Rightarrow {\frac{{x - 0}}{{a\sin \alpha - 0}} = \frac{{y - 0}}{{a\cos \alpha - 0}},}\;\; \Rightarrow {\frac{x}{{a\sin \alpha }} = \frac{y}{{a\cos \alpha }},}\;\; \Rightarrow {{y_1} = x\cot \alpha .}$ The other side $$BC$$ is shifted $$b$$ units upwards along the $$y-$$axis, so its equation is given by ${y_2} = b + x\cot \alpha .$ Now we apply the variable depth formula: $F = \rho g\int\limits_a^b {\left[ {f\left( x \right) - g\left( x \right)} \right]xdx} .$ This gives the total force acting on the plate: ${F = \rho g\int\limits_0^{a\sin \alpha } {\left( {{y_2} - {y_1}} \right)xdx} }={ \rho g\int\limits_0^{a\sin \alpha } {\left( {b + \cancel{x\cot \alpha} - \cancel{x\cot \alpha} } \right)xdx} }={ \rho gb\int\limits_0^{a\sin \alpha } {xdx} }={ \left. {\frac{{\rho gb{x^2}}}{2}} \right|_0^{a\sin \alpha } }={ \frac{{\rho gb{a^2}{{\sin }^2}\alpha }}{2}.}$ Example 9. A disk of radius $$R$$ is half submerged vertically in liquid of density $$\rho.$$ Find the hydrostatic force acting on one side of the disk. Solution.
Consider a thin horizontal strip of thickness $$dx$$ at a depth $$x.$$ The width of the strip is $W = AB = 2\sqrt {{R^2} - {x^2}},$ so its area is ${dA = Wdx }={ 2\sqrt {{R^2} - {x^2}} dx.}$ The force on the strip is approximately ${dF = PdA }={ \rho gxdA }={ 2\rho gx\sqrt {{R^2} - {x^2}} dx.}$ The total hydrostatic force is given by the integral ${F = \int\limits_0^R {dF} }={ 2\rho g\int\limits_0^R {x\sqrt {{R^2} - {x^2}} dx} .}$ We evaluate this integral using the change of variable: ${I = \int {x\sqrt {{R^2} - {x^2}} dx} }={ \left[ {\begin{array}{*{20}{l}} {z = {R^2} - {x^2}}\\ {dz = - 2xdx} \end{array}} \right] }={ \int {\sqrt z \left( { - \frac{{dz}}{2}} \right)} }={ - \frac{1}{2}\int {\sqrt z dz} }={ - \frac{{{z^{\frac{3}{2}}}}}{3} }={ - \frac{{\sqrt {{z^3}} }}{3} }={ - \frac{{\sqrt {{{\left( {{R^2} - {x^2}} \right)}^3}} }}{3}.}$ Hence, the force $$F$$ is given by ${F = - \frac{{2\rho g}}{3}\left. {\sqrt {{{\left( {{R^2} - {x^2}} \right)}^3}} } \right|_0^R }={ - \frac{{2\rho g}}{3}\left( {0 - {R^3}} \right) }={ \frac{{2\rho g{R^3}}}{3}.}$ Example 10. A plate in the shape of a parabolic segment is submerged vertically in water as shown in Figure $$12.$$ The base of the segment is $$2a,$$ the height is $$H.$$ Find the force due to hydrostatic pressure acting on each side of the plate. Solution.
First we determine the equation of the parabola given its base $$2a$$ and height $$H.$$ The initial equation is $$x = H - k{y^2}.$$ Since $$y = a$$ at the point $$x = 0,$$ the coefficient $$k$$ is equal to ${0 = H - k{a^2},}\;\; \Rightarrow {k = \frac{H}{{{a^2}}}.}$ This yields: ${x = H - \frac{H}{{{a^2}}}{y^2} }={ H\left( {1 - \frac{{{y^2}}}{{{a^2}}}} \right).}$ By solving this equation for $$y,$$ we get ${\frac{x}{H} = 1 - \frac{{{y^2}}}{{{a^2}}},}\;\; \Rightarrow {{a^2} - {y^2} = {a^2}\frac{x}{H},}\;\; \Rightarrow {{y^2} = {a^2}\left( {1 - \frac{x}{H}} \right).}$ So, the parabola segment is bounded by the curves ${y = g\left( x \right) = - a\sqrt {1 - \frac{x}{H}} ,\;\;}\kern0pt{y = f\left( x \right) = a\sqrt {1 - \frac{x}{H}} .}$ To calculate the hydrostatic force, we apply the variable depth formula: $F = \rho g\int\limits_a^b {\left[ {f\left( x \right) - g\left( x \right)} \right]xdx} .$ In our case, $F = 2\rho ga\int\limits_0^H {\sqrt {1 - \frac{x}{H}} xdx} .$ Make the substitution ${1 - \frac{x}{H} = {z^2},}\;\; \Rightarrow {x = H(1 - {z^2}),\;\;}\kern0pt{dx = - 2Hzdz.}$ When $$x = 0,$$ $$z = 1,$$ and when $$x = H,$$ $$z = 0.$$ Hence ${F = - 4\rho ga{H^2}\int\limits_1^0 {z\left( {1 - {z^2}} \right)zdz} }={ 4\rho ga{H^2}\int\limits_0^1 {\left( {{z^2} - {z^4}} \right)dz} }={ 4\rho ga{H^2}\left. {\left( {\frac{{{z^3}}}{3} - \frac{{{z^5}}}{5}} \right)} \right|_0^1 }={ 4\rho ga{H^2}\left( {\frac{1}{3} - \frac{1}{5}} \right) }={ \frac{{8\rho ga{H^2}}}{{15}}.}$
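The closed-form answers in these examples are easy to sanity-check with a short numerical integration. The sketch below (an illustration added here, assuming SI units with g = 9.8 m/s²) approximates F = ρg∫W(x)·x dx with a midpoint rule and reproduces Example 1 and Example 6; the dam check uses a = 6.4 m and b = 4.2 m, the values substituted in the worked computation.

```python
import math

def fluid_force(depth_top, depth_bottom, width, rho=1000.0, g=9.8, n=10000):
    """Midpoint-rule approximation of F = rho * g * integral of width(x) * x dx."""
    h = (depth_bottom - depth_top) / n
    total = 0.0
    for i in range(n):
        x = depth_top + (i + 0.5) * h
        total += width(x) * x * h
    return rho * g * total

# Example 1: cylindrical tank wall, R = 1 m, H = 3 m, gasoline rho = 800 kg/m^3.
# The "width" at every depth is the circumference 2*pi*R.
F1 = fluid_force(0.0, 3.0, lambda x: 2 * math.pi * 1.0, rho=800.0)
print(round(F1))  # 221671, about 222 kN, matching pi*rho*g*R*H^2

# Example 6: trapezoidal dam; width tapers linearly from a at the surface to b at depth H.
a, b, H = 6.4, 4.2, 3.0
F6 = fluid_force(0.0, H, lambda x: a - (a - b) * x / H)
print(round(F6))  # 217560, about 218 kN, matching rho*g*H^2*(a/6 + b/3)
```

The same helper verifies any of the vertical-plate examples: only the width function `width(x)` changes from problem to problem.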
https://www.aimsciences.org/article/doi/10.3934/dcds.2011.29.323
# American Institute of Mathematical Sciences January 2011, 29(1): 323-326. doi: 10.3934/dcds.2011.29.323 ## An approximation theorem for maps between tiling spaces 1 Department of Mathematics, Texas Lutheran University, Seguin, TX 78155, United States 2 Department of Mathematics, The University of Texas at Austin, Austin, TX 78712 Received August 2009 Revised May 2010 Published September 2010 We show that every continuous map from one translationally finite tiling space to another can be approximated by a local map. If two local maps are homotopic, then the homotopy can be chosen so that every interpolating map is also local. Citation: Betseygail Rand, Lorenzo Sadun. An approximation theorem for maps between tiling spaces. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 323-326. doi: 10.3934/dcds.2011.29.323
https://labs.tib.eu/arxiv/?author=V.I.%20Tretyak
• ### A multi-isotope $0\nu2\beta$ bolometric experiment (1712.08534)

April 26, 2018 hep-ex, physics.ins-det

There are valuable arguments for performing neutrinoless double beta ($0\nu2\beta$) decay experiments with several nuclei: the uncertainty of nuclear-matrix-element calculations; the possibility to test these calculations by using the ratio of the measured lifetimes; the unpredictability of possible breakthroughs in the detection technique; the difficulty of foreseeing background in $0\nu2\beta$ decay searches; the limited amount of isotopically enriched materials. We therefore propose approaches to estimate the Majorana neutrino mass by combining experimental data collected with different $0\nu2\beta$ decay candidates. In particular, we apply our methods to a next-generation experiment based on scintillating and Cherenkov-radiation bolometers. Current results indicate that this technology can effectively study up to four different isotopes simultaneously ($^{82}$Se, $^{100}$Mo, $^{116}$Cd and $^{130}$Te), embedded in detectors which share the same concepts and environment. We show that the combined information on the Majorana neutrino mass extracted from a multi-candidate bolometric experiment is competitive with that achievable with a single isotope, once the cryogenic experimental volume is fixed. The remarkable conceptual and technical advantages of a multi-isotope investigation are discussed. This approach can be naturally applied to the proposed CUPID project, a follow-up of the CUORE experiment that is currently taking data in the Gran Sasso underground laboratory.

• ### Radioactive contamination of scintillators (1804.00653)

March 31, 2018 physics.ins-det

Low counting experiments (searches for double $\beta$ decay and dark matter particles, measurements of neutrino fluxes from different sources, searches for hypothetical nuclear and subnuclear processes, low background $\alpha$, $\beta$, $\gamma$ spectrometry) require an extremely low detector background.
Scintillators are widely used in searches for rare events, both as conventional scintillation detectors and as cryogenic scintillating bolometers. The radioactive contamination of a scintillation material plays a key role in reaching a low background level. The origin and nature of the radioactive contamination of scintillators, experimental methods, and results are reviewed. A programme to develop radiopure crystal scintillators for low counting experiments is discussed briefly.

• ### Development of $^{100}$Mo-containing scintillating bolometers for a high-sensitivity neutrinoless double-beta decay search (1704.01758)

Oct. 4, 2017 nucl-ex, physics.ins-det

This paper reports on the development of a technology involving $^{100}$Mo-enriched scintillating bolometers, compatible with the goals of CUPID, a proposed next-generation bolometric experiment to search for neutrinoless double-beta decay. Large-mass ($\sim$1 kg), high-optical-quality, radiopure $^{100}$Mo-containing zinc and lithium molybdate crystals have been produced and used to develop high-performance single detector modules based on 0.2--0.4 kg scintillating bolometers. In particular, the energy resolution of the lithium molybdate detectors near the $Q$-value of the double-beta transition of $^{100}$Mo (3034 keV) is 4--6 keV FWHM. The rejection of the dominant $\alpha$-induced background above 2.6 MeV is better than 8$\sigma$. Less than 10 $\mu$Bq/kg activity of $^{232}$Th ($^{228}$Th) and $^{226}$Ra in the crystals is ensured by boule recrystallization. The potential of $^{100}$Mo-enriched scintillating bolometers to perform high-sensitivity double-beta decay searches has been demonstrated with only a 10 kg$\times$d exposure: the two-neutrino double-beta decay half-life of $^{100}$Mo has been measured with the highest accuracy to date as $T_{1/2}$ = [6.90 $\pm$ 0.15(stat.) $\pm$ 0.37(syst.)] $\times$ 10$^{18}$ yr.
Both crystallization and detector technologies favor lithium molybdate, which has been selected for the ongoing construction of the CUPID-0/Mo demonstrator, containing several kg of $^{100}$Mo.

• ### Calorimeter development for the SuperNEMO double beta decay experiment (1707.06823)

July 21, 2017 physics.ins-det

SuperNEMO is a double-$\beta$ decay experiment which will employ the successful tracker-calorimeter technique used in the recently completed NEMO-3 experiment. SuperNEMO will implement 100 kg of double-$\beta$ decay isotope, reaching a sensitivity to the neutrinoless double-$\beta$ decay ($0\nu\beta\beta$) half-life of the order of $10^{26}$ yr, corresponding to a Majorana neutrino mass of 50-100 meV. One of the main goals and challenges of the SuperNEMO detector development programme has been to reach a calorimeter energy resolution, $\Delta E/E$, around 3%/$\sqrt{E}$ (MeV) $\sigma$, or 7%/$\sqrt{E}$ (MeV) FWHM (full width at half maximum), using a calorimeter composed of large-volume plastic scintillator blocks coupled to photomultiplier tubes. We describe the R&D programme and the final design of the SuperNEMO calorimeter that has met this challenging goal.

• ### The BiPo-3 detector for the measurement of ultra low natural radioactivities of thin materials (1702.07176)

June 7, 2017 physics.ins-det

The BiPo-3 detector, running in the Canfranc Underground Laboratory (Laboratorio Subterráneo de Canfranc, LSC, Spain) since 2013, is a low-radioactivity detector dedicated to measuring ultra-low natural radionuclide contaminations of $^{208}$Tl ($^{232}$Th chain) and $^{214}$Bi ($^{238}$U chain) in thin materials. The total sensitive surface area of the detector is 3.6 m$^2$. The detector has been developed to measure the radiopurity of the selenium double $\beta$-decay source foils of the SuperNEMO experiment.
In this paper the design and performance of the detector and results of the background measurements in $^{208}$Tl and $^{214}$Bi are presented, and validation of the BiPo-3 measurement with a calibrated aluminium foil is discussed. Results of the $^{208}$Tl and $^{214}$Bi activity measurements of the first enriched $^{82}$Se foils of the double $\beta$-decay SuperNEMO experiment are reported. The sensitivity of the BiPo-3 detector for the measurement of the SuperNEMO $^{82}$Se foils is $\mathcal{A}$($^{208}$Tl) $<2$ $\mu$Bq/kg (90% C.L.) and $\mathcal{A}$($^{214}$Bi) $<140$ $\mu$Bq/kg (90% C.L.) after 6 months of measurement.

• ### New limit for the half-life of double beta decay of $^{94}$Zr to the first excited state of $^{94}$Mo (1608.02401)

March 31, 2017 nucl-ex

Neutrinoless double beta decay is a phenomenon of fundamental interest in particle physics. The decay rates of double beta decay transitions to excited states can provide input for nuclear transition matrix element calculations for the relevant two-neutrino double beta decay process. They can also be useful as supplementary information for the calculation of the nuclear transition matrix element for the neutrinoless double beta decay process. In the present work, the double beta decay of $^{94}$Zr to the $2^{+}_{1}$ excited state of $^{94}$Mo at 871.1 keV is studied using a low background $\sim$230 cm$^3$ HPGe detector. No evidence of this decay was found with a 232 g$\cdot$yr exposure of natural zirconium. The lower half-life limit obtained for the double beta decay of $^{94}$Zr to the $2^{+}_{1}$ excited state of $^{94}$Mo is $T_{1/2}(0\nu + 2\nu) > 3.4 \times 10^{19}$ yr at 90% C.L., an improvement by a factor of $\sim$4 over the existing experimental limit at 90% C.L. The sensitivity is estimated to be $T_{1/2}(0\nu + 2\nu) > 2.0\times10^{19}$ yr at 90% C.L. using the Feldman-Cousins method.
• ### Measurement of the $2\nu\beta\beta$ Decay Half-Life and Search for the $0\nu\beta\beta$ Decay of $^{116}$Cd with the NEMO-3 Detector (1610.03226)

Dec. 23, 2016 hep-ex, physics.ins-det

The NEMO-3 experiment measured the half-life of the $2\nu\beta\beta$ decay and searched for the $0\nu\beta\beta$ decay of $^{116}$Cd. Using $410$ g of $^{116}$Cd installed in the detector with an exposure of $5.26$ y, ($4968\pm74$) events corresponding to the $2\nu\beta\beta$ decay of $^{116}$Cd to the ground state of $^{116}$Sn have been observed, with a signal-to-background ratio of about $12$. The half-life of the $2\nu\beta\beta$ decay has been measured to be $T_{1/2}^{2\nu}=[2.74\pm0.04\mbox{(stat.)}\pm0.18\mbox{(syst.)}]\times10^{19}$ y. No events have been observed above the expected background while searching for $0\nu\beta\beta$ decay. The corresponding limit on the half-life is determined to be $T_{1/2}^{0\nu} \ge 1.0 \times 10^{23}$ y at the $90\%$ C.L., which corresponds to an upper limit on the effective Majorana neutrino mass of $\langle m_{\nu} \rangle \le 1.4-2.5$ eV depending on the nuclear matrix elements considered. Limits on other mechanisms generating $0\nu\beta\beta$ decay, such as the exchange of R-parity violating supersymmetric particles, right-handed currents and majoron emission, are also obtained.

• ### Improvement of radiopurity level of enriched $^{116}$CdWO$_4$ and ZnWO$_4$ crystal scintillators by recrystallization (1607.04117)

July 14, 2016 nucl-ex, physics.ins-det

As low as possible radioactive contamination of a detector plays a crucial role in improving the sensitivity of a double beta decay experiment. The radioactive contamination of a sample of $^{116}$CdWO$_4$ crystal scintillator by thorium was reduced by a factor $\approx 10$, down to the level of 0.01 mBq/kg ($^{228}$Th), by exploiting a recrystallization procedure. The total alpha activity of uranium and thorium daughters was reduced by a factor $\approx 3$, down to 1.6 mBq/kg.
No change in the specific activity (the total $\alpha$ activity and $^{228}$Th) was observed in a sample of ZnWO$_4$ crystal produced by recrystallization after removing an $\approx 0.4$ mm surface layer of the crystal.

• ### Simulations of background sources in AMoRE-I experiment (1601.01249)

July 7, 2016 hep-ex, physics.ins-det

The first phase of the Advanced Mo-based Rare Process Experiment (AMoRE-I), an experimental search for neutrinoless double beta decay (0$\nu\beta\beta$) of $^{100}$Mo in calcium molybdate (CMO) crystals using cryogenic techniques, is in preparation at the YangYang underground laboratory (Y2L) in South Korea. A GEANT4-based Monte Carlo simulation was performed for background estimation in the first-phase AMoRE-I detector and shield configuration. Background sources such as $^{238}$U, $^{232}$Th, $^{40}$K, $^{235}$U, and $^{210}$Pb were simulated inside the crystals, the surrounding materials, and the outer shielding walls of the Y2L cavity. The estimated background rate in the region of interest was found to be $< 1.5 \times 10^{-3}$ counts/keV/kg/yr (ckky). The effects of random coincidences between background and two-neutrino double beta decay of $^{100}$Mo were estimated as a potential background source, and their estimated rate was $< 2.3 \times 10^{-4}$ ckky.

• ### First test of an enriched $^{116}$CdWO$_4$ scintillating bolometer for neutrinoless double-beta-decay searches (1606.07806)

June 24, 2016 nucl-ex, physics.ins-det

For the first time, a cadmium tungstate crystal scintillator enriched in $^{116}$Cd has been successfully tested as a scintillating bolometer. The measurement was performed above ground at a temperature of 18 mK. The crystal mass was 34.5 g and the enrichment level $\sim$82%. Despite a substantial pile-up effect due to above-ground operation, the detector demonstrated a high energy resolution (2-7 keV FWHM in the 0.2-2.6 MeV $\gamma$ energy range), a powerful particle identification capability and a high level of internal radiopurity.
These results prove that cadmium tungstate is an extremely promising detector material for a next-generation neutrinoless double-beta decay bolometric experiment, like that proposed in the CUPID project (CUORE Upgrade with Particle IDentification).

• ### Measurement of the double-beta decay half-life and search for the neutrinoless double-beta decay of $^{48}{\rm Ca}$ with the NEMO-3 detector (1604.01710)

June 16, 2016 hep-ex, nucl-ex, physics.ins-det

The NEMO-3 experiment at the Modane Underground Laboratory has investigated the double-$\beta$ decay of $^{48}{\rm Ca}$. Using $5.25$ yr of data recorded with a $6.99\,{\rm g}$ sample of $^{48}{\rm Ca}$, approximately $150$ double-$\beta$ decay candidate events have been selected with a signal-to-background ratio greater than $3$. The half-life for the two-neutrino double-$\beta$ decay of $^{48}{\rm Ca}$ has been measured to be $T^{2\nu}_{1/2}\,=\,[6.4\, ^{+0.7}_{-0.6}{\rm (stat.)} \, ^{+1.2}_{-0.9}{\rm (syst.)}] \times 10^{19}\,{\rm yr}$. A search for neutrinoless double-$\beta$ decay of $^{48}{\rm Ca}$ yields a null result, and the corresponding lower limit on the half-life is found to be $T^{0\nu}_{1/2} > 2.0 \times 10^{22}\,{\rm yr}$ at $90\%$ confidence level, translating into an upper limit on the effective Majorana neutrino mass of $\langle m_{\beta\beta} \rangle < 6.0 - 26$ ${\rm eV}$, with the range reflecting different nuclear matrix element calculations. Limits are also set on models involving Majoron emission and right-handed currents.

• ### Search for double beta decay of $^{116}$Cd with enriched $^{116}$CdWO$_4$ crystal scintillators (Aurora experiment) (1601.05578)

Jan. 21, 2016 nucl-ex, physics.ins-det

The Aurora experiment to investigate the double beta decay of $^{116}$Cd with the help of 1.162 kg of cadmium tungstate crystal scintillators enriched in $^{116}$Cd to 82% is in progress at the Gran Sasso Underground Laboratory.
The half-life of $^{116}$Cd relative to the two-neutrino double beta decay is measured with the highest accuracy to date, $T_{1/2}=(2.62\pm0.14)\times10^{19}$ yr. The sensitivity of the experiment to the neutrinoless double beta decay of $^{116}$Cd to the ground state of $^{116}$Sn is estimated as $T_{1/2} \geq 1.9\times10^{23}$ yr at 90% C.L., which corresponds to the effective Majorana neutrino mass limit $\langle m_{\nu}\rangle \leq (1.2-1.8)$ eV. New limits are obtained for the double beta decay of $^{116}$Cd to the excited levels of $^{116}$Sn, and for the neutrinoless double beta decay with emission of majorons.

• ### New limits on double beta processes in 106-Cd (1601.05698)

Jan. 21, 2016 hep-ex, nucl-ex

A radiopure cadmium tungstate crystal scintillator, enriched in $^{106}$Cd to 66% and with a mass of 216 g ($^{106}$CdWO$_4$), was used in coincidence with four ultra-low background HPGe detectors contained in a single cryostat to search for double beta decay processes in $^{106}$Cd. New improved half-life limits on the double beta processes in $^{106}$Cd have been set at the level of $10^{20}$-$10^{21}$ yr after 13085 h of data taking deep underground (3600 m w.e.) at the Gran Sasso National Laboratories of INFN (Italy). In particular, the limit on the two-neutrino electron capture with positron emission, $T_{1/2} > 1.1\times10^{21}$ yr, has reached the region of theoretical predictions. The resonant neutrinoless double electron captures to the 2718, 2741 and 2748 keV excited states of $^{106}$Pd are restricted at the level of $T_{1/2} > 8.5\times10^{20} - 1.4\times10^{21}$ yr.

• ### Proceedings of the third French-Ukrainian workshop on the instrumentation developments for HEP (1512.07393)

Dec. 23, 2015 hep-ex, nucl-ex, physics.ins-det

The reports collected in these proceedings were presented at the third French-Ukrainian workshop on instrumentation developments for high-energy physics, held at LAL, Orsay on October 15-16. The workshop was conducted in the scope of the IDEATE International Associated Laboratory (LIA).
Joint developments between French and Ukrainian laboratories and universities, as well as new proposals, have been discussed. The main topics of the papers presented in the proceedings are developments for accelerator and beam monitoring, detector developments, joint developments for large-scale high-energy and astroparticle physics projects, and medical applications.

• ### Technical Design Report for the AMoRE $0\nu\beta\beta$ Decay Search Experiment (1512.05957)

Dec. 18, 2015 hep-ex, physics.ins-det

The AMoRE (Advanced Mo-based Rare process Experiment) project is a series of experiments that use advanced cryogenic techniques to search for the neutrinoless double-beta decay of $^{100}$Mo. The work is being carried out by an international collaboration of researchers from eight countries. These searches involve high-precision measurements of radiation-induced temperature changes and scintillation light produced in ultra-pure $^{100}$Mo-enriched and $^{48}$Ca-depleted calcium molybdate ($\mathrm{^{48depl}Ca^{100}MoO_4}$) crystals that are located in a deep underground laboratory in Korea. The $^{100}$Mo nuclide was chosen for this $0\nu\beta\beta$ decay search because of its high $Q$-value and favorable nuclear matrix element. Tests have demonstrated that CaMoO$_4$ crystals produce the brightest scintillation light among all of the molybdate crystals, both at room and at cryogenic temperatures. $\mathrm{^{48depl}Ca^{100}MoO_4}$ crystals are being operated at milli-Kelvin temperatures and read out via specially developed metallic-magnetic-calorimeter (MMC) temperature sensors that have excellent energy resolution and relatively fast response times. The excellent energy resolution provides good discrimination of signal from backgrounds, and the fast response time is important for minimizing the irreducible background caused by random coincidences of two-neutrino double-beta decay events of $^{100}$Mo nuclei.
Comparisons of the scintillation-light and phonon yields and pulse-shape discrimination of the phonon signals will be used to provide redundant rejection of alpha-ray-induced backgrounds. An effective Majorana neutrino mass sensitivity that reaches the expected range of the inverted neutrino mass hierarchy, i.e., 20-50 meV, could be achieved with a 200 kg array of $\mathrm{^{48depl}Ca^{100}MoO_4}$ crystals operating for three years.

• ### Result of the search for neutrinoless double-$\beta$ decay in $^{100}$Mo with the NEMO-3 experiment (1506.05825)

Oct. 22, 2015 hep-ex, nucl-ex, physics.ins-det

The NEMO-3 detector, which had been operating in the Modane Underground Laboratory from 2003 to 2010, was designed to search for neutrinoless double-$\beta$ ($0\nu\beta\beta$) decay. We report final results of a search for $0\nu\beta\beta$ decays with $6.914$ kg of $^{100}$Mo using the entire NEMO-3 data set with a detector live time of $4.96$ yr, which corresponds to an exposure of 34.3 kg$\cdot$yr. We perform a detailed study of the expected background in the $0\nu\beta\beta$ signal region and find no evidence of $0\nu\beta\beta$ decays in the data. The level of observed background in the $0\nu\beta\beta$ signal region $[2.8-3.2]$ MeV is $0.44 \pm 0.13$ counts/yr/kg, and no events are observed in the interval $[3.2-10]$ MeV. We therefore derive a lower limit on the half-life of $0\nu\beta\beta$ decays in $^{100}$Mo of $T_{1/2}(0\nu\beta\beta)> 1.1 \times 10^{24}$ yr at the $90\%$ Confidence Level, under the hypothesis of light Majorana neutrino exchange. Depending on the model used for calculating nuclear matrix elements, the limit for the effective Majorana neutrino mass lies in the range $\langle m_{\nu} \rangle < 0.33$--$0.62$ eV. We also report constraints on other lepton-number violating mechanisms for $0\nu\beta\beta$ decays.
• ### Scintillating bolometers based on ZnMoO$_4$ and Zn$^{100}$MoO$_4$ crystals to search for 0$\nu$2$\beta$ decay of $^{100}$Mo (LUMINEU project): first tests at the Modane Underground Laboratory (1502.01161)

Feb. 4, 2015 nucl-ex, physics.ins-det

The technology of scintillating bolometers based on zinc molybdate (ZnMoO$_4$) crystals is under development within the LUMINEU project to search for 0$\nu$2$\beta$ decay of $^{100}$Mo, with the goal of setting the basis for large-scale experiments capable of exploring the inverted-hierarchy region of the neutrino mass pattern. Advanced ZnMoO$_4$ crystal scintillators with a mass of $\sim$0.3 kg were developed, and a Zn$^{100}$MoO$_4$ crystal from enriched $^{100}$Mo was produced for the first time by using the low-thermal-gradient Czochralski technique. One ZnMoO$_4$ scintillator and two samples (59 g and 63 g) cut from the enriched boule were tested aboveground at milli-Kelvin temperatures as scintillating bolometers, showing a high detection performance. The first results of the low background measurements with three ZnMoO$_4$ and two enriched detectors installed in the EDELWEISS set-up at the Modane Underground Laboratory (France) are presented.

• ### Aboveground test of an advanced Li$_2$MoO$_4$ scintillating bolometer to search for neutrinoless double beta decay of $^{100}$Mo (1410.6933)

Dec. 17, 2014 nucl-ex, physics.ins-det

Large lithium molybdate (Li$_2$MoO$_4$) crystal boules were produced by using the low-thermal-gradient Czochralski growth technique from deeply purified molybdenum. A small sample from one of the boules was preliminarily characterized in terms of X-ray-induced and thermally excited luminescence. A large cylindrical crystalline element (with a size of $\oslash 40\times40$ mm) was used to fabricate a scintillating bolometer, which was operated aboveground at $\sim 15$ mK by using a pulse-tube cryostat housing a high-power dilution refrigerator.
The excellent detector performance in terms of energy resolution and $\alpha$ background suppression, along with preliminary positive indications on the radiopurity of this material, shows the potential of Li$_2$MoO$_4$ scintillating bolometers for low-counting experiments searching for neutrinoless double beta decay of $^{100}$Mo.

• ### Enriched Zn$^{100}$MoO$_4$ scintillating bolometers to search for $0 \nu 2\beta$ decay of $^{100}$Mo with the LUMINEU experiment (1405.6937)

July 5, 2014 hep-ex, physics.ins-det

The LUMINEU project aims at performing a demonstrator underground experiment searching for the neutrinoless double beta decay of the isotope $^{100}$Mo embedded in zinc molybdate (ZnMoO$_4$) scintillating bolometers. In this context, a zinc molybdate crystal boule enriched in $^{100}$Mo to 99.5% with a mass of 171 g was grown for the first time by the low-thermal-gradient Czochralski technique. The production cycle provided a high yield (the crystal boule mass was 84% of the initial charge) and an acceptable level -- around 4% -- of irrecoverable losses of the costly enriched material. Two crystals of 59 g and 63 g, obtained from the enriched boule, were tested aboveground at milli-Kelvin temperatures as scintillating bolometers. They showed a high detection performance, equivalent to that of previously developed natural ZnMoO$_4$ detectors. These results pave the way to future sensitive searches based on the LUMINEU technology, capable of approaching and exploring the inverted-hierarchy region of the neutrino mass pattern.

• ### Rejection of randomly coinciding events in ZnMoO$_4$ scintillating bolometers (1404.1231)

April 4, 2014 nucl-ex, physics.ins-det

Random coincidences of events (particularly from two-neutrino double beta decay) could be one of the main sources of background in the search for neutrinoless double beta decay with cryogenic bolometers, due to their poor time resolution.
Pulse-shape discrimination using front-edge analysis, mean-time and $\chi^2$ methods was applied to discriminate randomly coinciding events in ZnMoO$_4$ cryogenic scintillating bolometers. These events can be effectively rejected at the level of 99% by the analysis of the heat signals with a rise-time of about 14 ms and a signal-to-noise ratio of 900, and at the level of 92% by the analysis of the light signals with a rise-time of about 3 ms and a signal-to-noise ratio of 30, under the requirement to detect 95% of single events. These rejection efficiencies are compatible with extremely low background levels in the region of interest of neutrinoless double beta decay of $^{100}$Mo for enriched ZnMoO$_4$ detectors, of the order of $10^{-4}$ counts/(y keV kg). The pulse-shape parameters have been chosen on the basis of the performance of a real massive ZnMoO$_4$ scintillating bolometer. The importance of the signal-to-noise ratio, correct determination of the signal start, and the choice of an appropriate sampling frequency are discussed.

• ### First results of the experiment to search for double beta decay of 106Cd with 106CdWO4 crystal scintillator in coincidence with four crystals HPGe detector (1312.5773)

Dec. 19, 2013 nucl-ex

An experiment to search for double beta processes in $^{106}$Cd by using a cadmium tungstate crystal scintillator enriched in $^{106}$Cd ($^{106}$CdWO$_4$) in coincidence with the four-crystal HPGe detector GeMulti is in progress at the STELLA facility of the Gran Sasso underground laboratory of INFN (Italy). The $^{106}$CdWO$_4$ scintillator is viewed by a low-background photomultiplier tube through a lead tungstate crystal light-guide produced from deeply purified archaeological lead, to suppress gamma quanta from the photomultiplier tube. Here we report the first results of the experiment after 3233 hours of data taking. A few new improved limits on double beta processes in $^{106}$Cd are obtained, in particular $T_{1/2}(2\nu{\rm EC}\beta^+) > 8.4\times10^{20}$ yr at 90% C.L.
• ### Semi-empirical calculation of quenching factors for scintillators: new results (1312.5779)

Dec. 19, 2013 nucl-ex, astro-ph.IM

New results of calculations of quenching factors for ions in scintillators in the semi-empirical approach described in [V.I. Tretyak, Astropart. Phys. 33 (2010) 40] are presented. In particular, they give additional arguments in favour of the hypothesis that quenching factors for different particles can be described with the same Birks factor $kB$, if all the data were collected in the same conditions and processed in the same way.

• ### Radioactive contamination of BaF2 crystal scintillator (1312.4735)

Dec. 17, 2013 nucl-ex, physics.ins-det

Barium fluoride (BaF$_2$) crystal scintillators are promising detectors to search for double beta decay processes in $^{130}$Ba ($Q_{2{\beta}}$ = 2619(3) keV) and $^{132}$Ba ($Q_{2{\beta}}$ = 844(1) keV). The $^{130}$Ba isotope is of particular interest because of the indications of 2${\beta}$ decay found in two geochemical experiments. The radioactive contamination of a BaF$_2$ scintillation crystal with a mass of 1.714 kg was measured over 113.4 hours in a low-background DAMA/R&D set-up deep underground (3600 m w.e.) at the Gran Sasso National Laboratories of INFN (LNGS, Italy). The half-life of $^{212}$Po (present in the crystal scintillator due to contamination by radium) was estimated as $T_{1/2}$ = 298.8 $\pm$ 0.8(stat.) $\pm$ 1.4(syst.) ns by analysis of the event pulse profiles.

• ### Purification of molybdenum oxide, growth and characterization of medium size zinc molybdate crystals for the LUMINEU program (1312.3515)

Dec. 12, 2013 nucl-ex, physics.ins-det

The LUMINEU program aims at performing a pilot experiment on neutrinoless double beta decay of $^{100}$Mo using radiopure ZnMoO$_4$ crystals operated as scintillating bolometers.
The growth of high-quality radiopure crystals is a complex task, since there are no commercially available molybdenum compounds with the required levels of purity and radioactive contamination. This paper discusses approaches to purify molybdenum and synthesize the compound for high-quality radiopure ZnMoO$_4$ crystal growth. A combination of double sublimation (with addition of zinc molybdate) with subsequent recrystallization in aqueous solutions (using zinc molybdate as a collector) was used. Zinc molybdate crystals of up to 1.5 kg were grown by the low-thermal-gradient Czochralski technique, and their optical, luminescent, diamagnetic, thermal and bolometric properties were tested.
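A pattern running through these abstracts is the conversion of a half-life limit into an effective Majorana neutrino mass limit. Under the standard light-Majorana-exchange rate formula, $1/T_{1/2}^{0\nu} = G^{0\nu}\,|M^{0\nu}|^2\,(\langle m_{\beta\beta}\rangle/m_e)^2$, the mass limit scales as $1/\sqrt{T_{1/2}}$ for a fixed nuclide and matrix element. The sketch below illustrates only that scaling; the function name is our own, and the $^{116}$Cd numbers are taken from the NEMO-3 and Aurora abstracts above purely for illustration (the papers' actual limits use full matrix-element calculations, not this shortcut).

```python
import math

# For a fixed nuclide and nuclear matrix element,
# 1/T_half ∝ <m_bb>^2, so <m_bb> scales as 1/sqrt(T_half).
def scaled_mass_limit(m_ref_ev, t_ref_yr, t_new_yr):
    """Rescale a published mass limit m_ref (obtained from half-life
    limit t_ref) to a different half-life limit t_new."""
    return m_ref_ev * math.sqrt(t_ref_yr / t_new_yr)

# Illustration with the 116Cd limits quoted above:
# NEMO-3 reported T >= 1.0e23 yr -> <m> <= 1.4-2.5 eV (NME-dependent);
# Aurora reported T >= 1.9e23 yr.  Scaling NEMO-3's lower edge:
m = scaled_mass_limit(1.4, 1.0e23, 1.9e23)
print(round(m, 2))  # 1.02 -- in the same ballpark as Aurora's 1.2 eV
```

The residual difference against Aurora's quoted 1.2 eV reflects the different sets of nuclear matrix elements used in the two analyses, which is exactly the uncertainty the multi-isotope abstract at the top of this list argues should be attacked by combining candidates.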
http://parabix.costar.sfu.ca/browser/docs/Working/re/analysis.tex?rev=3867
# source:docs/Working/re/analysis.tex@3867

Last change on this file since 3867 was 3867, checked in by cameron, 5 years ago: Max Unicode (hence UTF-32) value is 0x10FFFF. File size: 8.8 KB

\section{Running-time Comparison with DFA and NFA Implementations}\label{sec:analysis}

Our experimental results indicate that regular expression matching using bitstreams can outperform current implementations of NFA- and DFA-based matching.
It is worth exploring why this is so, and under what conditions one might expect bitstreams to perform better than NFA- or DFA-based matchers, and vice versa.

The bitstream method starts with a preprocessing step: the compilation of the regular expression using the Parabix toolchain.
Compilation is an offline process whose time is not counted in our performance measures, as Parabix is an experimental research compiler that is not optimized.
This leads to a bias in our results, since our timings for nrgrep and grep include the time taken for preprocessing.
We have attempted to minimize the bias by performing our tests with large inputs, so that the text-scanning costs dominate the preprocessing costs.
We furthermore believe that, if a special-purpose optimized compiler for regular expressions were built, its inclusion in bitstreams grep would not substantially increase the running time, particularly for large input texts--the compilation involved is straightforward.

For simplicity, we will first assume that the input regular expressions are restricted to having Kleene closures only of single characters or alternations of single characters.
This is a broad class of regular expressions, covering the majority of common uses of grep.

Let $\Sigma$ be our input alphabet and $\sigma = | \Sigma |$.
As we are comparing techniques in practice, we assume that $\Sigma$ is a standard input alphabet, such as ASCII ($\sigma = 128$), UTF-8 ($\sigma = 256$), UTF-16 ($\sigma = 65536$), or UTF-32 ($\sigma = 1114112$).
This assumption allows us to equate the number of bits in the encoding of a character (a parameter for the bitstream method) with $\log \sigma$.

The bitstream method compiles a regular expression of size $m$ into bitstream code that is $O(m \log \sigma)$ statements long (with one operation per statement; it is essentially three-address code).
This is translated to machine code and placed inside a loop\footnote{Technically, it is inside two loops: an inner one that executes once per $w$ characters in a large buffer, and an outer one that successively fetches buffers until the input is exhausted.} that executes once per $w$ characters, where $w$ is the width of the processor's word.
Also inside this loop is the transposition step that converts character-encoded files into their bitstream representation; this transposition takes $O(\log w)$ work per loop iteration.

In total, this is $O(m \log \sigma + \log w)$ work per iteration.
In current practice, we have $\log w$ around 8 (for 256-bit architectures) and $\log \sigma$ at least 7.
Thus, $m \log \sigma$ will dominate $\log w$ with current and foreseeable technology--we do not expect to see $\log w$ skyrocket.
So we can absorb the $\log w$ term and state the work as $O(m \log \sigma)$ per iteration.
We multiply this by $O(\frac{n}{w})$ iterations to give $O(\frac{n m \log \sigma}{w})$ work.

We further note that all of the work in the loop is done by superscalar instructions, with the exception of the additions, which require carry propagation.
There will be at most $C$ of these additions in the loop, where $C$ is the number of concatenation and Kleene star operations in the regular expression; $C < m$.
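The carry-propagating additions counted in $C$ can be illustrated in miniature. The sketch below uses Python integers as stand-ins for $w$-bit registers (bit $i$ marks text position $i$) and the addition-based MatchStar operation from the Parabix line of work to match the pattern `a*b`; it is an illustration of the technique under those assumptions, not the generated code measured in our experiments.

```python
# Bitstream regex sketch: Python ints model arbitrarily wide registers;
# bit i of a stream corresponds to position i of the text.

def char_class(text, chars):
    """Bitstream with a 1 at every position whose character is in chars."""
    return sum(1 << i for i, c in enumerate(text) if c in chars)

def match_star(markers, cc):
    """Advance markers through zero or more characters of class cc.
    The addition is the carry-propagating operation counted in C."""
    return (((markers & cc) + cc) ^ cc) | markers

def advance(markers, cc):
    """Advance markers through exactly one character of class cc."""
    return (markers & cc) << 1

# Match the pattern a*b anchored at position 0 of "aab":
text = "aab"
m0 = 1                                       # marker at position 0
m1 = match_star(m0, char_class(text, "a"))   # consume the run of a's
m2 = advance(m1, char_class(text, "b"))      # then require one b
print(bool(m2))  # True: a match ends after position 2
```

The single addition inside `match_star` sweeps a marker across an entire run of class characters in one step, which is why star and concatenation, rather than input length, determine the number of carry-propagating operations per iteration.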
Almost all intermediate bitstreams in the loop body can be kept in registers, requiring no storage in memory. Good register allocation--and limited live ranges for bitstream variables--keeps register spillage to a minimum. For those bitstreams that do require storage in memory, long buffers are allocated, allowing the successive iterations of the loop to access successive memory locations. That is, for the few streams requiring it, memory is accessed in a sequential fashion. As this is the best case for hardware prefetching, we expect few cache misses with the bitstream method.

Compare this with NFA methods. In the base NFA method, a state set of approximately $m$ states is kept as a bit set in $\frac{m}{w}$ machine words (or $\frac{m}{8}$ bytes). For each character $c$ of the input, a precomputed transition table, indexed by $c$ and the current state set, is accessed. Since there are $2^{\Theta(m)}$ state sets, the transition table will have $\sigma 2^{\Theta(m)}$ entries.
%%, where $\sigma$ is the size of the input alphabet.
Each entry is a new state set, which requires $\frac{m}{8}$ bytes. Thus, the transition table is of size $\sigma m 2^{\Theta(m)}$, which is quite large: it can become expensive to precompute, and it consumes a lot of memory. For even fairly small $m$ a table of this size will probably not fit in cache memory. Thus, we would expect many cache misses with this base method.

To improve the table size, several authors have separated the transition table into several tables, each indexed by a subset of the bits in the bit set representing the current state. Suppose one uses $k$ bits of the state set to index each table. Ignoring ceilings, this requires $\frac{m}{k}$ tables, each with $\sigma 2^k$ entries of $\frac{m}{8}$ bytes apiece.
Each table therefore takes up $m 2^{k-3} \sigma$ bytes, and so the collection of them takes up $\frac{m^2 2^{k-3}\sigma}{k}$ bytes. At each character, the NFA algorithm does one lookup in each table, combining the results with $\frac{m}{k}-1$ boolean OR operations.

The original NFA method of Thompson uses $k=1$, which gives $m$ tables of $\frac{m \sigma}{4}$ bytes each, along with $m$ lookups and $m-1$ boolean OR operations to combine the lookups, per character.

Navarro and Raffinot use $k = \frac{m}{2}$, giving $2$ tables of $2^{\frac{m}{2}-3} m \sigma$ bytes each, two lookups per character, and 1 boolean OR operation per character to combine the lookups.

In Table \ref{tab:ascii}, we summarize the theoretical analysis of these NFA methods, listing the number of table lookups per input character and the size of the tables for various values of $m$, the number of states. We assume the ASCII character set ($\sigma = 128$); any of the UTF character sets would yield larger tables.

\begin{table}
\small{
\begin{tabular}{rrrrrrr}
%%   & & Thompson & \  & NavRaff & NFA \\
$k$   & & $1$ & $4$ & $8$ & $\frac{m}{2}$ & $m$\\
%%\hline
lookups & & $m$ & $\frac{m}{4}$  & $\frac{m}{8}$  & 2 & 1\\
\hline
& $m$ &  &  &  & \\
\multirow{4}{1.35cm}{memory (KiB)}
&  5 &  0.8 &  1.6 &  12.5 &   1.3 & 2.5\\
& 10 &  3.1 &  6.2 &  50.0 &  10.0 & 160.0\\
& 15 &  7.0 & 14.1 & 112.5 & 120.0 & 7680.0\\
& 20 & 12.5 & 25.0 & 200.0 & 640.0 & 327680.0\\
& 25 & 19.5 & 39.1 & 312.5 & 6400.0 & 13107200.0\\
\end{tabular}
}
\caption{Lookups per character and memory consumed by tables in NFA methods (in kibibytes)}
\label{tab:ascii}
\end{table}

Of particular importance to the speed of NFA methods is whether the table lookups result in cache hits or not.
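The memory column of Table \ref{tab:ascii} can be reproduced from the formula in the text. The following is an editor's sketch (not part of the paper): the fixed-$k$ columns use $\frac{m}{k}$ tables of $\sigma 2^k \cdot \frac{m}{8}$ bytes with ceilings ignored, and the Navarro-Raffinot column appears to use exactly two tables with $k=\lceil m/2 \rceil$.

```python
def split_table_bytes(m, k, sigma=128):
    """Memory for the split-transition-table NFA scheme: m/k tables,
    each with sigma * 2**k entries of m/8 bytes apiece (ceilings ignored)."""
    return (m / k) * sigma * (2 ** k) * (m / 8)

# Reproduce some entries of Table tab:ascii (sigma = 128, values in KiB):
assert round(split_table_bytes(5, 1) / 1024, 1) == 0.8     # Thompson, m = 5
assert round(split_table_bytes(20, 1) / 1024, 1) == 12.5   # Thompson, m = 20
assert round(split_table_bytes(15, 8) / 1024, 1) == 112.5  # k = 8, m = 15
# Navarro-Raffinot column: exactly 2 tables with k = ceil(m/2), here m = 10:
assert round(2 * 128 * 2 ** 5 * (10 / 8) / 1024, 1) == 10.0
```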
If the tables are small enough, then they will fit into cache and lookups will all be cache hits, taking minimal time. In this case, the time per input character will be a small constant times the number of lookups.

If the tables are not small enough to fit into cache, some proportion of the lookups will generate cache misses. This will stall the processor, and these stalls will come to dominate the computation time. In this case, the time per input character will be some large constant (a cache miss can take about two orders of magnitude longer than a cache hit) times the number of lookups.

Using 256 KiB as an estimate of the size of a current standard data cache, we can consider those entries of Table \ref{tab:ascii} above 256 to be relatively slow. We can summarize these theoretical predictions by saying that the NFA methods with small $k$ scale well with an increase in NFA states, but with large $k$ the method is limited to a small number of states.

We can now directly (but roughly) compare the NFA methods with bitstream methods. Consider small-$k$ (say, $k \leq 4$) NFA methods. For the reasonable range of $m$, the tables fit into cache. The running time is predicted to be a small constant times the $\frac{m}{k} \geq \frac{m}{4}$ lookups. The small constant, which we will underapproximate with 4 cycles, is for the table addressing computation, combining the lookups with boolean OR, and final state detection and handling. Thus, the running time per input character may be lower-bounded by $4 \cdot \frac{m}{4}$, or simply $m$, cycles.

Our method, on the other hand, takes time $O(\frac{m \log \sigma}{w})$ per input character, where the constant inside the big-Oh is approximately 2. Furthermore, we expect no cache misses due to the regular stride of our memory accesses. For ASCII, this time becomes at most $2 \frac{7m}{w} = \frac{14m}{w}$ cycles per character.
This is faster than the small-$k$ NFA methods by a factor of $w/14$. For processors with a 128-bit word, this is about one order of magnitude.
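The concluding comparison can be checked numerically. This is an editor's sketch using the constants assumed in the text (4 cycles per NFA lookup, a factor of 2 for the bitstream method, $\log\sigma = 7$ for ASCII):

```python
def nfa_cycles_per_char(m, k=4):
    # Small-k NFA lower bound from the text: ~4 cycles per lookup, m/k lookups.
    return 4.0 * m / k

def bitstream_cycles_per_char(m, w, log_sigma=7):
    # Bitstream estimate: ~2 * m * log(sigma) / w; ASCII gives log(sigma) = 7.
    return 2.0 * m * log_sigma / w

# The predicted speedup is w/14; for a 128-bit word that is roughly 9x.
m, w = 20, 128
speedup = nfa_cycles_per_char(m) / bitstream_cycles_per_char(m, w)
assert abs(speedup - w / 14) < 1e-9
```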
https://www.physicsforums.com/threads/deutschs-algorithm-vs-classical-algorithm.545855/
# Deutsch's algorithm vs classical algorithm

1. Oct 31, 2011

### maxverywell

How does Deutsch's algorithm outperform a classical algorithm? In both algorithms we need two particles (two bits and two qubits). In the quantum case the two qubits are processed by the FCNOT gate simultaneously, but it's equivalent to two classical "black boxes". So if we take two classical boxes, the two bits are processed simultaneously too, and the two algorithms are equivalent in power.

2. Oct 31, 2011

### Joseph14

Why are you assuming that putting a qubit in superposition into one black box is equivalent to two classical black boxes?
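A minimal state-vector simulation (an editor's sketch, not part of the thread) makes the distinction concrete: Deutsch's algorithm decides whether $f$ is constant or balanced with a single oracle query, whereas any classical strategy must evaluate the black box twice.

```python
import math

def deutsch(f):
    """One-query simulation of Deutsch's algorithm for f: {0,1} -> {0,1}.

    The oracle acts as U_f|x,y> = |x, y XOR f(x)>.  Returns 0 if f is
    constant and 1 if f is balanced.
    """
    h = 1.0 / math.sqrt(2.0)
    # Amplitudes over the basis |x y>, index = 2*x + y; start in |0 1>.
    state = [0.0, 1.0, 0.0, 0.0]

    def hadamard(s, qubit):
        out = [0.0] * 4
        for i, amp in enumerate(s):
            x, y = i >> 1, i & 1
            bit = x if qubit == 0 else y
            for b in (0, 1):
                j = ((b << 1) | y) if qubit == 0 else ((x << 1) | b)
                sign = -1.0 if (bit == 1 and b == 1) else 1.0
                out[j] += sign * h * amp
        return out

    state = hadamard(hadamard(state, 0), 1)                  # H on both qubits
    state = [state[((i >> 1) << 1) | ((i & 1) ^ f(i >> 1))]  # oracle U_f
             for i in range(4)]                              # (an involution)
    state = hadamard(state, 0)                               # H on top qubit
    prob_one = state[2] ** 2 + state[3] ** 2                 # measure top qubit
    return int(prob_one > 0.5)

assert deutsch(lambda x: 0) == 0      # constant
assert deutsch(lambda x: x) == 1      # balanced
```

The single query only helps because the input qubit is in superposition when it enters the oracle, which is exactly what the reply above is pointing at.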
https://www.doubtnut.com/question-answer/find-the-indiated-terms-in-each-of-the-following-ap-i-1-7-13-19-301-a10-a20-ii-a-22-d-3-an-a30-644363733
Find the indicated terms in each of the following A.P.
(i) 1, 7, 13, 19, …, 301; a_(10), a_(20)
(ii) a = 22, d = -3; a_(n), a_(30)

Updated On: 17-04-2022

(i) Here, a = 1, d = 6
therefore a_(10) = a + 9d = 1 + 54 = 55
and a_(20) = a + 19d = 1 + 114 = 115
(ii) Here, a = 22, d = -3
therefore a_(n) = a + (n-1)d = 22 + (n-1)(-3) = 22 - 3n + 3 = 25 - 3n
and a_(30) = 25 - 3 xx 30 = 25 - 90 = -65
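The working above only uses the general-term formula a_(n) = a + (n-1)d; a quick editor's sketch checks all three values:

```python
def nth_term(a, d, n):
    # General term of an A.P.: a_n = a + (n - 1) * d
    return a + (n - 1) * d

assert nth_term(1, 6, 10) == 55     # a_10 of 1, 7, 13, 19, ...
assert nth_term(1, 6, 20) == 115    # a_20
assert nth_term(22, -3, 30) == -65  # a_30 for a = 22, d = -3
```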
https://socratic.org/questions/55a8ef71581e2a1af53b54e2
# Question #b54e2

Jul 17, 2015

$K_c = \frac{[HI]^2}{[H_2][I_2]}$

Where [ ] represents concentrations at equilibrium.

#### Explanation:

In general, for a reaction

$aA + bB \rightleftharpoons cC + dD$

the expression for $K_c$ is given by:

$K_c = \frac{[C]^c [D]^d}{[A]^a [B]^b}$

Where [ ] represents the concentration of the species at equilibrium.

If we apply the general rule to this specific reaction:

$H_{2(g)} + I_{2(g)} \rightleftharpoons 2HI_{(g)}$

we get:

$K_c = \frac{[HI]^2}{[H_2][I_2]}$

Note that whenever you quote a value for $K_c$ you must also state the temperature as well.
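To make the general rule concrete, here is an editor's sketch (the equilibrium concentrations are made up, not from the answer) that evaluates $K_c$ as the product of product concentrations over reactant concentrations, each raised to its coefficient:

```python
def equilibrium_constant(products, reactants):
    """K_c = prod of [product]**coeff divided by prod of [reactant]**coeff.
    Each argument is a list of (equilibrium concentration, coefficient) pairs."""
    kc = 1.0
    for conc, coeff in products:
        kc *= conc ** coeff
    for conc, coeff in reactants:
        kc /= conc ** coeff
    return kc

# H2 + I2 <=> 2 HI with hypothetical equilibrium concentrations (mol/L):
kc = equilibrium_constant(products=[(0.4, 2)], reactants=[(0.1, 1), (0.1, 1)])
assert abs(kc - 16.0) < 1e-9   # 0.4**2 / (0.1 * 0.1)
```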
https://physics.stackexchange.com/questions/437719/pion-decay-constant-how-to-know-which-convention-to-follow
# Pion decay constant: How to know which convention to follow?

As summarized by Wikipedia, different sources use different choices for the (pion) decay constant. This means that the numerical value can vary between $$\sqrt 2\ f_\pi \quad\leftrightarrow\quad f_\pi \quad\leftrightarrow\quad \frac{1}{\sqrt 2}\ f_\pi$$

I suppose this is connected to the normalization of the state $$|\pi(p)\rangle$$ in the defining equation $$\tag{1} \langle 0|\,j_{A}^{\,\mu\,a}(x)|\pi^b(p)\rangle = \text{i} p^\mu f_\pi \text{e}^{-\text{i}p\cdot x}\delta^{ab},$$ where we normalize the pion state $$\tag{2} \langle \pi(p)|\pi(q)\rangle = \mathcal N \delta^{(3)}(\mathbf{p}-\mathbf{q})$$ usually with $$\mathcal N = (2\pi)^3\, 2\, p^0 = (2\pi)^3\, 2\, \sqrt{\mathbf p^2+m^2}.$$

(How) is the normalization of the pion state $$|\pi\rangle$$ connected to the numeric value of the decay constant?

Nowadays, one always uses (2) to normalize a pion component. And one always uses $$\tag{1} \langle 0|\,j_{A}^{\,\mu\,a}(x)|\pi^b(p)\rangle = \text{i} p^\mu f_\pi \text{e}^{-\text{i}p\cdot x}\delta^{ab},$$ to define $f_\pi \sim 93$ MeV. (cf M Schwarz, Peskin & Schroeder, etc..., including Donoghue, Golowich & Holstein, which should be your vademecum in any and all such questions.) The mnemonic of this, the mainstream POV, is $$j_A^\mu \sim f_\pi\partial^\mu \pi+...$$

However, experimentally (PDG, Li & Cheng, ...), one looks at, e.g., charged pion decay, where $$\pi^-=(\pi^1-i\pi^2)/\sqrt{2}$$, and $$j_A^{\mu ~\bar ud}= j_A^{\mu ~1}+i j_A^{\mu ~2}$$; to the effect that $$\langle 0|\,j_{A}^{\,\mu\,\bar u d}(x)|\pi^-\rangle = \text{i} p^\mu \sqrt{2}f_\pi \text{e}^{-\text{i}p\cdot x},$$ which is the 130 MeV expression; and you are warned they absorb the square root of 2 in the definition!

At any moment, make sure you appreciate what is actually being written in the analog of (1), and how the axial hadronic currents are normalized. The normalization of a pion component, however, is normally fixed.

• Edit note.
Indeed, the peculiar normalization of Weinberg vI (10.2.15) metastasizes to Weinberg vII (19.4.24) & (19.4.26), and, if you wished to compare to (1) here, as he notes in his footnote, $F_\pi = 2f_\pi = 186$ MeV in our above discussion. A mere change of language, but perfectly consistent. Indeed, if you were (quixotically) inclined to do phenomenology with Weinberg (famous but not celebrated) normalizations, you'd have to build up a conversion table for all of such expressions. I have to reassure you, however, that the mainstream has shifted quite a bit since that book, and is defined by the first group sampled.

• I see. How does the value of $184$ MeV in Weinberg's book fit into this explanation? – Stephan Oct 31 '18 at 0:38
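As an editor's aside (not part of the thread), the bookkeeping between the conventions discussed above is a single multiplicative factor relative to the ~93 MeV $f_\pi$ of eq. (1): $\sqrt{2}f_\pi$ for the charged-current definition and $F_\pi = 2f_\pi$ for Weinberg's.

```python
import math

# Conventions as multiples of the ~93 MeV f_pi of eq. (1).
FACTOR = {"fpi": 1.0, "sqrt2": math.sqrt(2.0), "weinberg": 2.0}

def convert(value, src, dst):
    """Re-express a decay-constant value quoted in convention `src`
    in convention `dst`."""
    return value / FACTOR[src] * FACTOR[dst]

assert round(convert(93.0, "fpi", "weinberg")) == 186       # Weinberg's F_pi
assert 130 <= round(convert(93.0, "fpi", "sqrt2")) <= 132   # ~ sqrt(2) * 93
```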
https://rpg.stackexchange.com/questions/14311/why-do-characters-with-a-high-prime-requisite-gain-bonus-xp/14314
# Why do characters with a high prime requisite gain bonus XP?

In earlier versions of Dungeons & Dragons, a character with a high ability score in their class' prime requisite receives a bonus to all experience he earns. For example, the 1983 Frank Mentzer edition included this chart:

    Prime Requisite Score    Adjustment to Experience
    3-5                      -20%
    6-8                      -10%
    13-15                    +5%
    16-18                    +10%

What is the reason for this rule? Did the game's designers ever explain it, either officially or anecdotally?

• Interestingly, prime requisite bonuses date all the way back to the original version of the game...the OD&D brown (and white) box rules. Page 11 of Volume One: Men & Magic lists "Bonuses and Penalties to Advancement due to Abilities". The adjustments appear to be very similar to those in the later 1983 Mentzer basic edition (the OD&D rules were first published in 1974). – Badmike May 13 '12 at 21:35
• If you want to know "what were they thinking" you'd need to ask Robert Kuntz as he seems to be the only one of the TSR originals still alive. Arneson, Kaye, and Gygax have all passed away. Tim Kask, Jim Ward, and Frank Mentzer may also have an insight due to how long they knew Gary G. – KorvinStarmast Sep 6 '15 at 15:44

When building characters, with "in order" statistic generation, or "assign existing roll" statistic generation, the XP bonus acts as an incentive to produce two outcomes. For "in order" character generation, it strongly suggests to players that they make their character "fit" their statistics, in particular by aligning their character selection with their highest statistic. No INT18 fighters please, we're Gygaxian. Correspondingly, with player-assigned statistics (and points buy, obviously), it encourages characters whose statistics match their function in at least one statistic, their primary one. No STR18 WIS9 clerics please, we're Gygaxian.
Given that primary statistics also limit spells and levels, there is a strong rules-based encouragement for character statistics to match assumed "rule ideal" types of characters.

Secondly, with statistic-degrading poisons, traps, effects and magic, the loss of XP when your primary statistic is degraded below 9 is an encouragement for Fighters to regain their Strength, Clerics to regain their Wisdom, etc. Again, this encourages actual characters to match a rule ideal "type" of character at the level of statistics.

• Actually, the Stat minimum for spells was not in the Mentzer editions. So, having a spellcaster with a low Prime Requisite was no hindrance to their career apart from XP earned. – YogoZuno May 14 '12 at 8:14

This is just an opinion, but seeing the bonuses in original D&D (OD&D) since the beginning, they seem to be a way to subtly push the player towards a specific character class. Remember, in the early days, many systems (including OD&D) required you to roll three dice, in order, for your stats. So many players who wanted to work a fighter might end up with a 9 strength and a 17 wisdom, and they would just play the fighter despite having bonuses should they have played the cleric. It appears the prime requisite bonuses might be there to reward a player for "taking a chance" on a character class he might not normally choose. The 5-10 percent added to experience (in contrast to the 10-20 percent LOST if the prime requisite was 8 or under) would be a substantial bonus in early editions of the game where XP were sometimes scarce and character longevity was not guaranteed.

EDIT: BTW, per a question by Yogo, here are the original Prime Requisite bonuses from the OD&D rules. It is interesting to note the changes from this to later editions:

In OD&D there are only three classes: Fighting Men, Magic Users, Clerics. All abilities are rolled by 3d6, in order. STR is prime requisite for Fighting-Men. INT is prime requisite for Magic Users.
WIS is the prime requisite for Clerics.

Bonuses and Penalties for Advancement Due to Abilities (Low score is under 8; Average is 9-12; High is over 13):

    Prime Requisite 15 or more: Add 10% to earned Experience
    Prime Requisite 13 or 14:   Add 5% to earned Experience
    Prime Requisite 7 or 8:     Minus 10% from earned Experience
    Prime Requisite 6 or less:  Minus 20% from earned Experience

There are ability bonuses given for high CON, DEX or CHR, but they are unrelated to Experience point awards. Interestingly, besides the Experience point bonuses there is little advantage to having a high STR, INT or WIS except for intangibles: a high STR is said to help in opening traps; a high INT will add languages and affect the referee's decisions about certain actions the player might make; a high WIS acts in the same way as a high INT (might affect decisions made at the referee's discretion).

• 'despite having bonuses should they have played the cleric' - Sorry, but apart from bonus XP, the only benefit of a high Wis was a bonus on Saves, and that applied to all classes, not just Clerics. – YogoZuno May 14 '12 at 8:07
• See comment below...you really have to go back and find out why they are given in the Original (OD&D) rules before you extrapolate reasons for later editions, because in many cases those later editions are merely aping stuff that came before. – Badmike May 14 '12 at 16:39
• @Badmike Answers aren't necessarily displayed in a predictable order, so 'below' isn't necessarily a useful descriptor. Could you perhaps mention the name of the poster? – GMJoe May 16 '12 at 4:36

I'm only guessing here, but for the majority of classes under those systems, you were required to have a high value in your prime requisite, but there was virtually no system bonus for those high values. Sure, Str, Dex and Con gave all characters in-game bonuses, but high Int gave you nothing but languages. So, to me, the extra XP was a way of giving those high stats an in-game bonus.
• When you go back to Original D&D, ANY high ability in your prime requisite gave you an XP bump, while a low ability gave you a minus. This was a -20% to a +10% swing for some character that, instead of playing a fighter with an 8 STR, went with playing a cleric with a 15 WIS, or vice versa. Remember, most bonuses as we see them in the rules come from LATER editions, which piled up the benefits from high prime requisites...but their origins are from the first white box rules. So really you have to go back to the beginning and ask yourself why these bonuses were given in the OD&D rules. – Badmike May 14 '12 at 16:38
• I can't go back further than what I have - earliest rulebook I have is the first Mentzer edition. Thanks for clarifying. So, were there any other stat bonuses? To hit and damage for Str? – YogoZuno May 14 '12 at 23:12
• Yogo let me see if I have time to put up the "original" prime requisite bonus chart....it is very interesting in what was kept for later editions and what was left off. – Badmike May 15 '12 at 17:54
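As an editor's illustration (not part of the thread), the two adjustment charts quoted above can be encoded as a small lookup. Scores 9-12, listed in neither chart, are assumed here to carry no adjustment:

```python
def xp_adjustment(score, edition="mentzer"):
    """Percent XP adjustment for a prime requisite score.

    'mentzer' follows the 1983 chart in the question; 'odd' follows the
    1974 OD&D chart quoted in the answer (where +10% starts at 15, not 16).
    """
    bands = {
        "mentzer": [(5, -20), (8, -10), (12, 0), (15, 5), (18, 10)],
        "odd":     [(6, -20), (8, -10), (12, 0), (14, 5), (18, 10)],
    }[edition]
    for upper, adj in bands:
        if score <= upper:
            return adj
    return bands[-1][1]   # scores above 18 keep the top bonus

assert xp_adjustment(16) == 10          # Mentzer: +10% only at 16-18
assert xp_adjustment(15) == 5
assert xp_adjustment(15, "odd") == 10   # OD&D: +10% already at 15
assert xp_adjustment(10) == 0           # assumed no adjustment for 9-12
```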
http://math.stackexchange.com/questions/58548/why-are-vector-spaces-not-isomorphic-to-their-duals
# Why are vector spaces not isomorphic to their duals?

Assuming the axiom of choice, set $\mathbb F$ to be some field (we can assume it has characteristic $0$). I was told, by more than one person, that if $\kappa$ is an infinite cardinal then the vector space $V=\mathbb F^{(\kappa)}$ (that is, an infinite-dimensional space with basis of cardinality $\kappa$) is not isomorphic (as a vector space) to the algebraic dual, $V^*$.

I have asked several professors in my department, and this seems to be completely folklore. I was directed to some book, but could not find it in there either.

The Wikipedia entry tells me that this is indeed not a cardinality issue; for example $\mathbb R^{<\omega}$ (that is, all the eventually zero sequences of real numbers) has the same cardinality as its dual $\mathbb R^\omega$, but they are not isomorphic. Of course being of the same cardinality is necessary but far from sufficient for two vector spaces to be isomorphic.

What I am asking, really, is whether or not it is possible, when given a basis and an embedding of a basis of $V$ into $V^*$, to say "This guy is not in the span of the embedding"?

Edit: I read the answers in the link given by Qiaochu. They did not satisfy me too much. My main problem is this: suppose $\kappa$ is our basis; then $V$ consists of $\{f\colon\kappa\to\mathbb F\Big| |f^{-1}[\mathbb F\setminus\{0\}]|<\infty\}$ (that is, finite support), while $V^*=\{f\colon\kappa\to\mathbb F\}$ (that is, all the functions). In particular, the basis for $V$ is given by $f_\alpha(x) = \delta_{\alpha x}$ (i.e. $1$ on $\alpha$, and $0$ elsewhere), while $V^*$ needs a much larger basis. Why can't there be other linear functionals on $V$?

Edit II: After the discussions in the comments and the answers, I have a better understanding of my question to begin with.
I have no qualms that under the axiom of choice, given an infinite set $\kappa$, there are a lot more functions from $\kappa$ into $\mathbb F$ than functions with finite support from $\kappa$ into $\mathbb F$. It is also clear to me that the basis of the vector space is actually the set of $\delta$ functions, whereas the basis for the dual is a subset of characteristic functions. My problem is, if so, why is the dual space composed from all functions from $A$ into $F$? (And if possible, not to just show by cardinality games that the basis is much larger but actually show the algorithm for the diagonalization.)

- How about this one, courtesy of Bill Dubuque and sci.math? – Arturo Magidin Aug 19 '11 at 21:13
- @Arturo: While no disrespect for Bill, his style is a little too concise for my taste (perhaps because this is unfamiliar territory for me I need somewhat more elaborate explanations). Also working without choice for the past few months really screwed up my cardinal arithmetic skills :) – Asaf Karagila Aug 19 '11 at 21:38
- @Asaf: I may be beating a dead horse, but a quick reply to your comment: "Why is the dual space composed of all functions from the set $A$ (presumably a basis for $V$) to $F$?" This is just an instance of the principle that to specify a linear mapping $f$ is equivalent to specifying its values on elements of a basis. Furthermore, we are free to do this any which way we want. This is what being a free module means! Also, the elements of $V$ are finite linear combinations of basis elements, so we have no problems extending $f$ linearly. – Jyrki Lahtonen Aug 21 '11 at 6:36
- @Asaf: I still don't understand your edit. Is your question answered by Jyrki's comment? If not, which part is still unclear? – Qiaochu Yuan Aug 21 '11 at 12:43

This is just Bill Dubuque's proof from sci.math mentioned in the comments, expanded. Edit. I'm also reorganizing this so that it flows a bit better.
Let $F$ be a field, and let $V$ be the vector space of dimension $\kappa$. Then $V$ is naturally isomorphic to $\mathop{\bigoplus}\limits_{i\in\kappa}F$, the set of all functions $f\colon \kappa\to F$ of finite support. Let $\epsilon_i$ be the element of $V$ that sends $i$ to $1$ and all $j\neq i$ to $0$ (that is, you can think of it as the $\kappa$-tuple with coefficients in $F$ that has $1$ in the $i$th coordinate, and $0$s elsewhere).

Lemma 1. If $\dim(V)=\kappa$, and either $\kappa$ or $|F|$ is infinite, then $|V|=\kappa|F|=\max\{\kappa,|F|\}$.

Proof. If $\kappa$ is finite, then $V=F^{\kappa}$, so $|V|=|F|^{\kappa}=|F|=|F|\kappa$, as $|F|$ is infinite here and the equality holds. Assume then that $\kappa$ is infinite. Each element of $V$ can be represented uniquely as a linear combination of the $\epsilon_i$. There are $\kappa$ distinct finite subsets of $\kappa$; and for a subset with $n$ elements, we have $|F|^n$ distinct vectors in $V$. If $\kappa\leq |F|$, then in particular $F$ is infinite, so $|F|^n=|F|$. Hence you have $|F|$ distinct vectors for each of the $\kappa$ distinct subsets (even throwing away the zero vector), so there is a total of $\kappa|F|$ vectors in $V$. If $|F|\lt\kappa$, then $|F|^n\lt\kappa$ since $\kappa$ is infinite; so there are at most $\kappa$ vectors for each subset, so there are at most $\kappa^2 = \kappa$ vectors in $V$. Since the basis has $\kappa$ elements, $\kappa\leq|V|\leq\kappa$, so $|V|=\kappa=\max\{\kappa,|F|\}$. QED

Now let $V^*$ be the dual of $V$. Since $V^* = \mathcal{L}(V,F)$ (where $\mathcal{L}(V,W)$ is the vector space of all $F$-linear maps from $V$ to $W$), and $V=\mathop{\oplus}\limits_{i\in\kappa}F$, then again from abstract nonsense we know that $$V^*\cong \prod_{i\in\kappa}\mathcal{L}(F,F) \cong \prod_{i\in\kappa}F.$$ Therefore, $|V^*| = |F|^{\kappa}$.

Added. Why is it that if $A$ is the basis of a vector space $V$, then $V^*$ is equivalent to the set of all functions from $A$ to the ground field?
A functional $f\colon V\to F$ is completely determined by its value on a basis (just like any other linear transformation); thus, if two functionals agree on $A$, then they agree everywhere. Hence, there is a natural injection, via restriction, from the set of all linear transformations $V\to F$ (denoted $\mathcal{L}(V,F)$) to the set of all functions $A\to F$, $F^A\cong \prod\limits_{a\in A}F$.

Moreover, given any function $g\colon A\to F$, we can extend $g$ linearly to all of $V$: given $\mathbf{x}\in V$, there exists a unique finite subset $\mathbf{a}_1,\ldots,\mathbf{a}_n$ (pairwise distinct) of $A$ and unique scalars $\alpha_1,\ldots,\alpha_n$, none equal to zero, such that $\mathbf{x}=\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n$ (that's from the definition of basis as a spanning set that is linearly independent; spanning ensures the existence of at least one such expression, linear independence guarantees that there is at most one such expression); we define $g(\mathbf{x})$ to be $$g(\mathbf{x})=\alpha_1g(\mathbf{a}_1)+\cdots+\alpha_ng(\mathbf{a}_n).$$ (The image of $\mathbf{0}$ is the empty sum, hence equal to $0$.)

Now, let us show that this is linear. First, note that if $\mathbf{x}=\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}$ is any expression of $\mathbf{x}$ as a linear combination of pairwise distinct elements of the basis $A$, then it must be the case that this expression is equal to the one we already had, plus some terms with coefficient equal to $0$. This follows from the linear independence of $A$: take $$\mathbf{0}=\mathbf{x}-\mathbf{x} = (\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n) - (\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}).$$ After any cancellation that can be done, you are left with a linear combination of elements in the linearly independent set $A$ equal to $\mathbf{0}$, so all coefficients must be equal to $0$.
That means that we can likewise define $g$ as follows: given any expression of $\mathbf{x}$ as a linear combination of elements of $A$, $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$, with $\mathbf{a}_i\in A$ not necessarily distinct, and scalars $\gamma_i$ possibly equal to $0$, we define $$g(\mathbf{x}) = \gamma_1g(\mathbf{a}_1)+\cdots+\gamma_mg(\mathbf{a}_m).$$ This is well-defined by the linear independence of $A$. And now it is very easy to see that $g$ is linear on $V$: if $\mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m$ and $\mathbf{y}=\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n$ are expressions for $\mathbf{x}$ and $\mathbf{y}$ as linear combinations of elements of $A$, then \begin{align*} g(\mathbf{x}+\lambda\mathbf{y}) &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+\lambda(\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n)\Bigr)\\ &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+ \lambda\delta_{1}\mathbf{a'}_1+\cdots+\lambda\delta_n\mathbf{a'}_n\Bigr)\\ &= \gamma_1g(\mathbf{a}_1) + \cdots + \gamma_mg(\mathbf{a}_m) + \lambda\delta_1g(\mathbf{a'}_1) + \cdots + \lambda\delta_ng(\mathbf{a'}_n)\\ &= g(\mathbf{x})+\lambda g(\mathbf{y}). \end{align*} Thus, the map $\mathcal{L}(V,F)\to F^A$ is in fact onto, giving a bijection.

This is the "linear-algebra" proof. The "abstract nonsense" proof relies on the fact that if $A$ is a basis for $V$, then $V$ is isomorphic to $\mathop{\bigoplus}\limits_{a\in A}F$, a direct sum of $|A|$ copies of $F$, and on the following universal property of the direct sum:

Definition. Let $\mathcal{C}$ be a category, and let $\{X_i\}_{i\in I}$ be a family of objects in $\mathcal{C}$.
A coproduct of the $X_i$ is an object $C$ of $\mathcal{C}$ together with a family of morphisms $\iota_j\colon X_j\to C$ such that for every object $X$ and every family of morphisms $g_j\colon X_j\to X$, there exists a unique morphism $\mathbf{f}\colon C\to X$ such that for all $j$, $g_j = \mathbf{f}\circ \iota_j$.

That is, a family of maps from each element of the family is equivalent to a single map from the coproduct (just like a family of maps into the members of a family is equivalent to a single map into the product of the family). In particular, we get:

Theorem. Let $\mathcal{C}$ be a category in which the collections of morphisms are sets; let $\{X_i\}_{i\in I}$ be a family of objects of $\mathcal{C}$, and let $(C,\{\iota_j\}_{j\in I})$ be their coproduct. Then for every object $X$ of $\mathcal{C}$ there is a natural bijection $$\mathrm{Hom}_{\mathcal{C}}(C,X) \longleftrightarrow \prod_{j\in I}\mathrm{Hom}_{\mathcal{C}}(X_j,X).$$

The left hand side is the collection of morphisms from the coproduct to $X$; the right hand side is the collection of all families of morphisms from each element of $\{X_i\}_{i\in I}$ into $X$. In the vector space case, the fact that a linear transformation is completely determined by its values on a basis is what establishes that a vector space $V$ with basis $A$ is the coproduct of $|A|$ copies of the one-dimensional vector space $F$. So we have that $$\mathcal{L}(V,W) \leftrightarrow \mathcal{L}\left(\mathop{\bigoplus}\limits_{a\in A}F,W\right) \leftrightarrow \prod_{a\in A}\mathcal{L}(F,W).$$ But a linear transformation from $F$ to $W$ is equivalent to a map from the basis $\{1\}$ of $F$ into $W$, so $\mathcal{L}(F,W) \cong W$. Thus, we get that if $V$ has a basis of cardinality $\kappa$ (finite or infinite), we have: $$\mathcal{L}(V,F) \leftrightarrow \mathcal{L}\left(\mathop{\bigoplus}_{i\in\kappa}F,F\right) \leftrightarrow \prod_{i\in\kappa}\mathcal{L}(F,F) \leftrightarrow \prod_{i\in\kappa}F = F^{\kappa}.$$

Lemma 2.
If $\kappa$ is infinite, then $\dim(V^*)\geq |F|$.

Proof. If $F$ is finite, then the inequality is immediate. Assume then that $F$ is infinite. For each $c\in F$, $c\neq 0$, define $\mathbf{f}_c\colon V\to F$ by $\mathbf{f}_c(\epsilon_n) = c^n$ if $n\in\omega$, and $\mathbf{f}_c(\epsilon_i)=0$ if $i\geq\omega$. These are linearly independent: suppose that $c_1,\ldots,c_m$ are pairwise distinct nonzero elements of $F$, and that $\alpha_1\mathbf{f}_{c_1} + \cdots + \alpha_m\mathbf{f}_{c_m} = \mathbf{0}$. Then for each $i\in\omega$ we have $$\alpha_1 c_1^i + \cdots + \alpha_m c_m^i = 0.$$ Viewing the first $m$ of these equations as linear equations in the $\alpha_j$, the corresponding coefficient matrix is the Vandermonde matrix, $$\left(\begin{array}{cccc} 1 & 1 & \cdots & 1\\ c_1 & c_2 & \cdots & c_m\\ c_1^2 & c_2^2 & \cdots & c_m^2\\ \vdots & \vdots & \ddots & \vdots\\ c_1^{m-1} & c_2^{m-1} & \cdots & c_m^{m-1} \end{array}\right),$$ whose determinant is $\prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)\neq 0$. Thus, the system has a unique solution, to wit $\alpha_1=\cdots=\alpha_m = 0$. Thus, the $|F|$ linear functionals $\mathbf{f}_c$ are linearly independent, so $\dim(V^*)\geq |F|$. QED

To recapitulate: Let $V$ be a vector space of dimension $\kappa$ over $F$, with $\kappa$ infinite, and let $V^*$ be the dual of $V$. Then $V\cong\mathop{\bigoplus}\limits_{i\in\kappa}F$ and $V^*\cong\prod\limits_{i\in\kappa}F$. Let $\lambda$ be the dimension of $V^*$. Then by Lemma 1 we have $|V^*| = \lambda|F|$. By Lemma 2, $\lambda=\dim(V^*)\geq |F|$, so $|V^*| = \lambda$. On the other hand, since $V^*\cong\prod\limits_{i\in\kappa}F$, we have $|V^*|=|F|^{\kappa}$. Therefore, $\lambda= |F|^{\kappa}\geq 2^{\kappa} \gt \kappa$. Thus, $\dim(V^*)\gt\dim(V)$, so $V$ is not isomorphic to $V^*$.

Added${}^{\mathbf{2}}$. Some results on vector spaces and bases.
Let $V$ be a vector space, and let $A$ be a maximal linearly independent set (that is, $A$ is linearly independent, and if $B$ is any subset of $V$ that properly contains $A$, then $B$ is linearly dependent). In order to guarantee that there is a maximal linearly independent set in any vector space, one needs to invoke the Axiom of Choice in some manner, since the existence of such a set is, as we will see below, equivalent to the existence of a basis; however, here we are assuming that we already have such a set given. I believe that the Axiom of Choice is not involved in any of what follows.

Proposition. $\mathrm{span}(A) = V$.

Proof. Since $A\subseteq V$, then $\mathrm{span}(A)\subseteq V$. Let $v\in V$. If $v\in A$, then $v\in\mathrm{span}(A)$. If $v\notin A$, then $B=A\cup\{v\}$ is linearly dependent by maximality. Therefore, there exist a finite subset $a_1,\ldots,a_m$ of $B$ and scalars $\alpha_1,\ldots,\alpha_m$, not all zero, such that $\alpha_1a_1+\cdots+\alpha_ma_m=\mathbf{0}$. Since $A$ is linearly independent, at least one of the $a_i$ must be equal to $v$; say $a_1=v$. Moreover, $v$ must occur with a nonzero coefficient, again by the linear independence of $A$. So $\alpha_1\neq 0$, and we can then write $$v = a_1 = \frac{1}{\alpha_1}(-\alpha_2a_2 -\cdots - \alpha_ma_m)\in\mathrm{span}(A).$$ This proves that $V\subseteq \mathrm{span}(A)$. $\Box$

Proposition. Let $V$ be a vector space, and let $X$ be a linearly independent subset of $V$. If $v\in\mathrm{span}(X)$, then any two expressions of $v$ as linear combinations of elements of $X$ differ only in having extra summands of the form $0x$ with $x\in X$.

Proof. Let $v = a_1x_1+\cdots+a_nx_n = b_1y_1+\cdots+b_my_m$ be two expressions of $v$ as linear combinations of elements of $X$. We may assume without loss of generality that $n\leq m$.
Reordering the $x_i$ and the $y_j$ if necessary, we may assume that $x_1=y_1$, $x_2=y_2,\ldots,x_{k}=y_k$ for some $k$, $0\leq k\leq n$, and that $x_1,\ldots,x_k,x_{k+1},\ldots,x_n,y_{k+1},\ldots,y_m$ are pairwise distinct. Then \begin{align*} \mathbf{0} &= v-v\\ &=(a_1x_1+\cdots+a_nx_n)-(b_1y_1+\cdots+b_my_m)\\ &= (a_1-b_1)x_1 + \cdots + (a_k-b_k)x_k + a_{k+1}x_{k+1}+\cdots + a_nx_n - b_{k+1}y_{k+1}-\cdots - b_my_m. \end{align*} As this is a linear combination of pairwise distinct elements of $X$ equal to $\mathbf{0}$, it follows from the linear independence of $X$ that $a_{k+1}=\cdots=a_n=0$, $b_{k+1}=\cdots=b_m=0$, and $a_1=b_1$, $a_2=b_2,\ldots,a_k=b_k$. That is, the two expressions of $v$ as linear combinations of elements of $X$ differ only in that there are extra summands of the form $0x$ with $x\in X$ in them. QED

Corollary. Let $V$ be a vector space, and let $A$ be a maximal linearly independent subset of $V$. If $W$ is a vector space, and $f\colon A\to W$ is any function, then there exists a unique linear transformation $T\colon V\to W$ such that $T(a)=f(a)$ for each $a\in A$.

Proof. Existence. Given $v\in V$, we have $v\in\mathrm{span}(A)$. Therefore, we can express $v$ as a linear combination of elements of $A$, $v = \alpha_1a_1+\cdots+\alpha_na_n$. Define $$T(v) = \alpha_1f(a_1)+\cdots+\alpha_nf(a_n).$$ Note that $T$ is well-defined: if $v = \beta_1b_1+\cdots+\beta_mb_m$ is any other expression of $v$ as a linear combination of elements of $A$, then by the proposition above the two expressions differ only in summands of the form $0x$; but these summands do not affect the value of $T$. Note also that $T$ is linear, arguing as above. Finally, since $a\in A$ can be expressed as $a=1a$, we have $T(a) = 1f(a) = f(a)$, so the restriction of $T$ to $A$ is equal to $f$.

Uniqueness. If $U$ is any linear transformation $V\to W$ such that $U(a)=f(a)$ for all $a\in A$, then for every $v\in V$, write $v=\alpha_1a_1+\cdots+\alpha_na_n$ with $a_i\in A$. Then
\begin{align*} U(v) &= U(\alpha_1a_1+\cdots + \alpha_na_n)\\ &= \alpha_1U(a_1) + \cdots + \alpha_n U(a_n)\\ &= \alpha_1f(a_1)+\cdots + \alpha_n f(a_n)\\ &= \alpha_1T(a_1) + \cdots + \alpha_n T(a_n)\\ &= T(\alpha_1a_1+\cdots+\alpha_na_n)\\ &= T(v).\end{align*} Thus, $U=T$. QED - I think I get it now. Thanks :-) –  Asaf Karagila Aug 23 '11 at 6:53 I really wonder why this nice answer has been "damaged" by standard facts about linear algebra which can be found in any introduction to linear algebra ... –  Martin Brandenburg Jul 10 at 9:35 The "this guy" you're looking for is just the function that takes each of your basis vectors and sends them to 1. Note that this is not in the span of the set of functions that each take a single basis vector to 1, and all others to 0, because the span is defined to be the set of finite linear combinations of basis vectors. And a finite linear combination of things that have finite-dimensional support will still have finite-dimensional support, and thus can't send infinitely many independent vectors all to 1. You may want to say, "But look! If I add up these infinitely many functions, I clearly get a function that sends all my basis vectors to 1!" But this is actually a very tricky process. What you need is a notion of convergence if you want to add infinitely many things, which isn't always obvious how to define. In the end, it boils down to a cardinality issue - not of the vector spaces themselves, but of the dimensions. In the example you give, $\mathbb{R}^{<\omega}$ has countably infinite dimension, but the dimension of its dual is uncountable. (Added, in response to comment below): Think of all the possible ways you can have a function which is 1 on some set of your basis vectors and 0 on the rest. The only ways you can do these and stay in the span of your basis vectors is if you take the value 1 on only finitely many of those vectors. 
Since your starting space was infinite-dimensional, there's an uncountable number of such functions, and so uncountably many of them lie outside the span of your basis. You can only ever incorporate finitely many of them by "adding" them in one at a time (or even countably many at a time), so you'll never establish the vector isomorphism you're looking for. - Oh, I will not want to say that I am adding infinitely many functions. However given one more guy which you defined by a canonical embedding is no good... assuming the axiom of choice $\kappa+1=\kappa$ for every infinite cardinal. This means I can also define a linear embedding from $V$ which catches this new guy. Then who's going to be the vector not in my basis? –  Asaf Karagila Aug 19 '11 at 21:48 I don't understand why "this guy" can not be in the image of any embedding from $V$ to $V^\ast$. Let's say $B$ is a basis for $V$ and $f:V\to\mathbb{F}$ is "this guy" (i.e. the linear map which sends all the elements of $B$ to $1$). Can't we embed $V$ into $V^\ast$ by choosing a linearly independent subset of $V^\ast$, say $B'$, which includes $f$, map one of elements of $B$ to $f$ and all the other elements injectively into $B'\setminus\{f\}$? Certainly $f$ will be in the image of this embedding. –  LostInMath Aug 19 '11 at 23:12 @MartianInvader: Just to nitpick, countable sequences on $\mathbb Q$ which have finitely many nonzero coordinates is a countable set. Either way you have reduced it to a cardinality game which I am not comfortable with here. See the second edit of my question. Thanks a lot! –  Asaf Karagila Aug 20 '11 at 6:26 Only attempting to address that one point Asaf raised in comments/edits. I refer to the CW answer by Arturo & Bill for the cardinality argument and an actual answer to the original question. Assume that $A=\{e_i\mid i \in I\}$ is a basis for $V$. Let $f:A\rightarrow F$ be any function. This function can be extended linearly to an element of $V^*$ as follows. 
An arbitrary element $x\in V$ can be written as a finite linear combination of the basis elements in a unique way $$x=\sum_{j=1}^n c_j e_{i_j},$$ where $e_{i_1},e_{i_2},\ldots,e_{i_n}$ are the basis vectors needed to write $x$. This finite subset of $A$ (as well as the natural number $n$) obviously depends on $x$. Anyway, we can define $$f(x)=\sum_{j=1}^n c_j f(e_{i_j}).$$ As the sum is finite, we do only vector space operations on the r.h.s. (no convergence questions or some such). As the presentation of $x$ as a linear combination of elements of $A$ is unique (up to addition of terms with coefficients equal to zero), $f(x)$ is well defined. It is straightforward to check that the mapping $f$ defined in this way is linear, i.e. an element of $V^*$.

What may have been confusing is that we do not require $f$ to have finite support for the above 'linear extension' to work as described. The upshot is that we only need to use a finite number of vectors from the basis to write a given vector $x$. IOW the finiteness of the sum in the definition of the linear extension of $f$ comes from $A$ being a basis - not from the support of $f$ (that does not need to be finite).

We can similarly extend a function with singleton support. If $\chi_i:A\rightarrow F$ is the function defined by $e_i\mapsto 1, e_j\mapsto 0$, for all $j\in I, j\neq i$, let's call its linear extension to an element of $V^*$ also $\chi_i$. What's the span of the mappings $\chi_i$? Only those linear functionals $f$ with $|A\setminus\ker f|<\infty$ can be written as linear combinations of $\chi_i, i\in I$. Therefore the span of the linear mappings $\chi_i,i\in I$ is not all of $V^*$ unless $\dim V$ is finite. - Many thanks, Jyrki. This seems to address the part which I know. My question is actually why the elements of $V^*$ contain nothing except the functions from $A$ into $F$.
I am given to understand that this is a standard "universal property" proof, however as I am uncomfortable with those I am still trying to figure this out in my head. –  Asaf Karagila Aug 21 '11 at 20:05 @Asaf: To see that the map from $\mathbb F^A$ to $V^*$ is surjective, note that each element $T$ of $V^*$ determines an element $T\vert_A$ of $\mathbb F^A$. By linearity, $T$ must be the function constructed from $T\vert_A$ as in Jyrki's answer. –  Jonas Meyer Aug 21 '11 at 20:16 @Asaf: If your question is on why we go from $\mathcal{L}(\oplus A,B)$ to $\prod\mathcal{L}(A,B)$, I can certainly replace my invocation of abstract nonsense with an actual proof. –  Arturo Magidin Aug 21 '11 at 20:22 @Asaf: As far as I know, it's not, in the sense that we begin with the assumption that we have a vector space with a basis; so it would come in if you simply assume "vector space". (Or if your definition of "infinite dimensional vector space" is "vector space that does not have a finite basis"). –  Arturo Magidin Aug 21 '11 at 20:51 @Asaf: once you declare that a basis exists, there is nothing suspicious going on. Really. Given a basis $A$ of a vector space $V$ over a field $k$ and a function $f : A \to k$, the function $f$ extends uniquely by linearity to any finite linear combination of elements of $A$, which by hypothesis extends $f$ to all of $V$. What part of that is suspicious? (If the extension to some $v \in V$ does not exist, $A$ is not maximal. If it is not unique, $A$ is not linearly independent.) –  Qiaochu Yuan Aug 21 '11 at 23:38
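To make two of the computations discussed in this thread concrete, here is a small self-contained sketch over $F=\mathbb{Q}$ (my own illustration, with hypothetical names; it is not from any of the answers above). Part one implements the "extend by linearity" construction, with vectors of $V=\bigoplus_{i\in I}F$ stored as finite-support dictionaries; note that the functional being extended may be nonzero on every basis vector. Part two checks the Vandermonde determinant fact used in Lemma 2 on a few distinct integers.

```python
from fractions import Fraction
from itertools import permutations

def extend(f, x):
    """Linear extension of f (defined on basis indices) to the vector x,
    where x is a dict {basis index: coefficient} with finite support."""
    return sum((coeff * f(i) for i, coeff in x.items()), Fraction(0))

# f sends every basis vector e_i to 1: infinite support on the basis, yet
# the extension is well defined because each vector is a finite sum.
f = lambda i: Fraction(1)
x = {0: Fraction(3), 5: Fraction(-2), 41: Fraction(7, 2)}  # 3e_0 - 2e_5 + (7/2)e_41
print(extend(f, x))                                        # 3 - 2 + 7/2 = 9/2

def det(M):
    """Exact determinant via the Leibniz permutation formula."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

cs = [2, 3, 5, 7]                                 # pairwise distinct and nonzero
vand = [[c ** i for c in cs] for i in range(len(cs))]   # row i is (c_1^i, ..., c_m^i)
target = 1
for j in range(len(cs)):
    for i in range(j):
        target *= cs[j] - cs[i]                   # prod of (c_j - c_i) for i < j
print(det(vand), target)                          # equal and nonzero, as the proof needs
```

Since the $c$'s are distinct, the product of differences is nonzero, which is exactly what forces $\alpha_1=\cdots=\alpha_m=0$ in the proof above.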
https://www.albert.io/ie/sat-math-1-and-2-subject-test/probability-of-selected-number
Free Version · Moderate

# Probability of Selected Number

If $a$ is a number chosen from the set $\{7, 9, 11, 13\}$ and $b$ is chosen from the set $\{2, 4, 8, 12, 16\}$, then what is the probability that $a+b=21$?

A $0.5$

B $0.3$

C $0.25$

D $0.2$

E $0.1$
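Since the two choices are independent and every pair is equally likely, the probability can be checked by enumerating all $4\times 5=20$ pairs (this worked check is an addition, not part of the original question page):

```python
from itertools import product

A = [7, 9, 11, 13]
B = [2, 4, 8, 12, 16]

# Count the pairs whose sum is 21: only (9, 12) and (13, 8) qualify.
hits = sum(1 for a, b in product(A, B) if a + b == 21)
p = hits / (len(A) * len(B))
print(hits, p)   # 2 of 20 pairs, so p = 0.1
```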
https://workforce.libretexts.org/Bookshelves/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/12%3A_Physics_of_Conductors_and_Insulators/12.05%3A_Specific_Resistance
# 12.5: Specific Resistance

## Designing Wire Resistance

Conductor ampacity rating is a crude assessment of resistance based on the potential for current to create a fire hazard. However, we may come across situations where the voltage drop created by wire resistance in a circuit poses concerns other than fire avoidance. For instance, we may be designing a circuit where voltage across a component is critical, and must not fall below a certain limit. If this is the case, the voltage drops resulting from wire resistance may cause an engineering problem while being well within safe (fire) limits of ampacity:

## The Resistance Formula

If the load in the above circuit will not tolerate less than 220 volts, given a source voltage of 230 volts, then we’d better be sure that the wiring doesn’t drop more than 10 volts along the way.
Counting both the supply and return conductors of this circuit, this leaves a maximum tolerable drop of 5 volts along the length of each wire. Using Ohm’s Law (R=E/I), we can determine the maximum allowable resistance for each piece of wire:

We know that the wire length is 2300 feet for each piece of wire, but how do we determine the amount of resistance for a specific size and length of wire? To do that, we need another formula:

This formula relates the resistance of a conductor with its specific resistance (the Greek letter “rho” (ρ), which looks similar to a lower-case letter “p”), its length (“l”), and its cross-sectional area (“A”). Notice that with the length variable on the top of the fraction, the resistance value increases as the length increases (analogy: it is more difficult to force liquid through a long pipe than a short one), and decreases as cross-sectional area increases (analogy: liquid flows easier through a fat pipe than through a skinny one). Specific resistance is a constant for the type of conductor material being calculated. The specific resistances of several conductive materials can be found in the following table. We find copper near the bottom of the table, second only to silver in having low specific resistance (good conductivity):

Notice that the figures for specific resistance in the above table are given in the very strange unit of “ohms-cmil/ft” (Ω-cmil/ft). This unit indicates what units we are expected to use in the resistance formula (R=ρl/A). In this case, these figures for specific resistance are intended to be used when length is measured in feet and cross-sectional area is measured in circular mils. The metric unit for specific resistance is the ohm-meter (Ω-m), or ohm-centimeter (Ω-cm), with 1.66243 × 10⁻⁹ Ω-meters per Ω-cmil/ft (1.66243 × 10⁻⁷ Ω-cm per Ω-cmil/ft). In the Ω-cm column of the table, the figures are actually scaled as µΩ-cm due to their very small magnitudes.
For example, iron is listed as 9.61 µΩ-cm, which could be represented as 9.61 × 10⁻⁶ Ω-cm. When using the unit of Ω-meter for specific resistance in the R=ρl/A formula, the length needs to be in meters and the area in square meters. When using the unit of Ω-centimeter (Ω-cm) in the same formula, the length needs to be in centimeters and the area in square centimeters. All these units for specific resistance are valid for any material (Ω-cmil/ft, Ω-m, or Ω-cm). One might prefer to use Ω-cmil/ft, however, when dealing with round wire where the cross-sectional area is already known in circular mils. Conversely, when dealing with odd-shaped busbar or custom busbar cut out of metal stock, where only the linear dimensions of length, width, and height are known, the specific resistance units of Ω-meter or Ω-cm may be more appropriate.

## Solving

Going back to our example circuit, we were looking for wire that had 0.2 Ω or less of resistance over a length of 2300 feet. Assuming that we’re going to use copper wire (the most common type of electrical wire manufactured), we can set up our formula as such:

Algebraically solving for A, we get a value of 116,035 circular mils. Referencing our solid wire size table, we find that “double-ought” (2/0) wire with 133,100 cmils is adequate, whereas the next lower size, “single-ought” (1/0), at 105,500 cmils is too small. Bear in mind that our circuit current is a modest 25 amps. According to our ampacity table for copper wire in free air, 14 gauge wire would have sufficed (as far as not starting a fire is concerned). However, from the standpoint of voltage drop, 14 gauge wire would have been very unacceptable. Just for fun, let’s see what 14 gauge wire would have done to our power circuit’s performance. Looking at our wire size table, we find that 14 gauge wire has a cross-sectional area of 4,107 circular mils.
If we’re still using copper as a wire material (a good choice, unless we’re really rich and can afford 4600 feet of 14 gauge silver wire!), then our specific resistance will still be 10.09 Ω-cmil/ft:

Remember that this is 5.651 Ω per 2300 feet of 14-gauge copper wire, and that we have two runs of 2300 feet in the entire circuit, so each wire piece in the circuit has 5.651 Ω of resistance:

Our total circuit wire resistance is 2 times 5.651, or 11.301 Ω. Unfortunately, this is far too much resistance to allow 25 amps of current with a source voltage of 230 volts. Even if our load resistance was 0 Ω, our wiring resistance of 11.301 Ω would restrict the circuit current to a mere 20.352 amps! As you can see, a “small” amount of wire resistance can make a big difference in circuit performance, especially in power circuits where the currents are much higher than typically encountered in electronic circuits.

Let’s do an example resistance problem for a piece of custom-cut busbar. Suppose we have a piece of solid aluminum bar, 4 centimeters wide by 3 centimeters tall by 125 centimeters long, and we wish to figure the end-to-end resistance along the long dimension (125 cm). First, we would need to determine the cross-sectional area of the bar:

We also need to know the specific resistance of aluminum, in the unit proper for this application (Ω-cm). From our table of specific resistances, we see that this is 2.65 × 10⁻⁶ Ω-cm. Setting up our R=ρl/A formula, we have:

As you can see, the sheer thickness of a busbar makes for very low resistances compared to that of standard wire sizes, even when using a material with a greater specific resistance. The procedure for determining busbar resistance is not fundamentally different than for determining round wire resistance. We just need to make sure that cross-sectional area is calculated properly and that all the units correspond to each other as they should.
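The worked figures above can be reproduced in a few lines; the sketch below simply evaluates R = ρl/A with the values quoted in the text (copper at 10.09 Ω-cmil/ft, 14 gauge wire at 4,107 circular mils, aluminum at 2.65 × 10⁻⁶ Ω-cm):

```python
RHO_CU = 10.09          # specific resistance of copper, ohm-cmil/ft (value from the text)
LENGTH_FT = 2300        # one wire run, in feet

# Required cross-section for at most 0.2 ohms per run (from R = E/I = 5 V / 25 A):
area_needed = RHO_CU * LENGTH_FT / 0.2
print(round(area_needed))            # 116035 cmil: 2/0 wire (133,100 cmil) is adequate

# What 14 gauge wire (4,107 cmil) would do instead:
r_one_run = RHO_CU * LENGTH_FT / 4107
r_total = 2 * r_one_run              # two 2300-foot runs in the circuit
i_max = 230 / r_total                # current even with a 0-ohm load
print(round(r_one_run, 3), round(r_total, 3), round(i_max, 3))

# Busbar: aluminum, 4 cm x 3 cm cross-section, 125 cm long, rho = 2.65e-6 ohm-cm
r_busbar = 2.65e-6 * 125 / (4 * 3)
print(r_busbar)                      # on the order of 2.8e-5 ohms: very low indeed
```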
## Review

• Conductor resistance increases with increased length and decreases with increased cross-sectional area, all other factors being equal.

• Specific resistance (“ρ”) is a property of any conductive material, a figure used to determine the end-to-end resistance of a conductor given length and area in this formula: R = ρl/A

• Specific resistances for materials are given in units of Ω-cmil/ft or Ω-meters (metric). The conversion factor between these two units is 1.66243 × 10⁻⁹ Ω-meters per Ω-cmil/ft, or 1.66243 × 10⁻⁷ Ω-cm per Ω-cmil/ft.

• If wiring voltage drop in a circuit is critical, exact resistance calculations for the wires must be made before wire size is chosen.

This page titled 12.5: Specific Resistance is shared under a GNU Free Documentation License 1.3 license and was authored, remixed, and/or curated by Tony R. Kuphaldt (All About Circuits) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
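As a side note, the Ω-cmil/ft to Ω-meter conversion factor cited in this section can be derived from first principles: one circular mil is the area of a circle one mil (0.001 inch) in diameter, and one foot is 0.3048 meters. A quick check (my addition, not part of the original page):

```python
import math

MIL_M = 2.54e-5               # 1 mil = 0.001 inch, in meters
FOOT_M = 0.3048               # meters per foot

cmil_m2 = math.pi / 4 * MIL_M ** 2    # area of a 1-mil-diameter circle, in square meters
factor = cmil_m2 / FOOT_M             # converts ohm-cmil/ft to ohm-meters
print(factor)                         # ~1.66243e-9, matching the figure in the text
```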
https://zbmath.org/?q=an%3A1105.60065
## On exit times of a multivariate random walk and its embedding in a quasi Poisson process. (English) Zbl 1105.60065

This paper investigates a class of renewal processes marked by a multivariate sequence of independent random variables. In other words, the entire process forms a multivariate marked random walk. The single-variate random walk has been a prominent topic in probability for decades. Various problems are related to a random walk's behavior about critical levels, such as first passage times, when the random walk crosses constant lines, or lines with positive slopes. Let $\mathcal F=\{\tau_0, \tau_1,\ldots\}$ be a delayed renewal process marked by a multivariate random walk $(\mathcal G, \mathcal F)$. The authors derive the joint distribution of the first passage time $\tau_{\rho}$, the pre-exit time $\tau_{\rho-1}$, and the respective values of the multivariate random walk $(\mathcal G, \mathcal F)$ at $\tau_{\rho}$ and $\tau_{\rho-1}$ in closed form. Section 4 deals with a multivariate random walk embedded in a quasi-Poisson process $\Pi$. The formalism of Section 4 is further enhanced in Section 5, where the information on $\Pi$ is interpolated in vicinities of the first passage and pre-exit times. These results generalize and refine earlier results [see R. Agarwal, J. H. Dshalalow and D. O'Regan, J. Math. Anal. Appl. 293, No. 1, 1–13 (2004; Zbl 1052.60036), ibid. 293, No. 1, 14–27 (2004; Zbl 1047.60005) and J. H. Dshalalow, J. Appl. Probab. 38, No. 3, 707–721 (2001; Zbl 0996.60097)].

### MSC:

60K05 Renewal theory

60K10 Applications of renewal theory (reliability, demand theory, etc.)
60K25 Queueing theory (aspects of probability theory)

60G25 Prediction theory (aspects of stochastic processes)

60G55 Point processes (e.g., Poisson, Cox, Hawkes processes)

### Citations:

Zbl 1052.60036; Zbl 1047.60005; Zbl 0996.60097

Full Text:

### References:

[1] DOI: 10.1016/j.jmaa.2003.12.040 · Zbl 1052.60036

[2] DOI: 10.1016/j.jmaa.2003.12.030 · Zbl 1047.60005

[3] Agarwal R.F., PanAmer. Math. Journ. 15 pp 35– (2005)

[4] DOI: 10.1214/aop/1176989690 · Zbl 0759.60088

[5] Bening V.E., Generalized Poisson Models and their Applications in Insurance and Finance (2002) · Zbl 1041.60004

[6] DOI: 10.2307/1426397 · Zbl 0322.60068

[7] DOI: 10.1155/S1048953397000415 · Zbl 0896.60056

[8] DOI: 10.1239/jap/996986664 · Zbl 0983.60505

[9] DOI: 10.1081/SAP-120028023 · Zbl 1037.91081

[10] Dshalalow J.H., Nonlinear Analysis, Proceedings of 4th Congress of Nonlinear Analysts (2005) · Zbl 1068.90028

[11] Garrido L., Editor, Proceedings of the Sitges Conference on Statistical Mechanics (1987)

[12] DOI: 10.1214/aop/1176989808 · Zbl 0756.60060

[13] DOI: 10.1214/aop/1022855412 · Zbl 0934.58023

[14] Hida T., Proceedings of the Iias Workshop (1995) · Zbl 0877.60003

[15] DOI: 10.1155/S1048953303000091 · Zbl 1036.60076

[16] DOI: 10.1214/aoap/1060202835 · Zbl 1039.60044

[17] Mamontov Y., High-Dimensional Nonlinear Diffusion Stochastic Processes (2001) · Zbl 0983.60073

[18] DOI: 10.1002/jae.3950070405

[19] Muzy J., . pp 537– (2000)

[20] DOI: 10.1017/CBO9780511606014

[21] DOI: 10.1002/9780470317044

[22] Takács L., Studies in Probability and Ergodic Theory 2 pp 45– (1978)

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://workforce.libretexts.org/Bookshelves/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/08%3A_DC_Metering_Circuits/8.08%3A_Multimeters
# 8.8: Multimeters

Seeing as how a common meter movement can be made to function as a voltmeter, ammeter, or ohmmeter simply by connecting it to different external resistor networks, it should make sense that a multi-purpose meter (“multimeter”) could be designed in one unit with the appropriate switch(es) and resistors. For general purpose electronics work, the multimeter reigns supreme as the instrument of choice. No other device is able to do so much with so little an investment in parts and elegant simplicity of operation. As with most things in the world of electronics, the advent of solid-state components like transistors has revolutionized the way things are done, and multimeter design is no exception to this rule. However, in keeping with this chapter’s emphasis on analog (“old-fashioned”) meter technology, I’ll show you a few pre-transistor meters.
The unit shown above is typical of a handheld analog multimeter, with ranges for voltage, current, and resistance measurement. Note the many scales on the face of the meter movement for the different ranges and functions selectable by the rotary switch. The wires for connecting this instrument to a circuit (the “test leads”) are plugged into the two copper jacks (socket holes) at the bottom-center of the meter face marked “- TEST +”, black and red.

This multimeter (Barnett brand) takes a slightly different design approach than the previous unit. Note how the rotary selector switch has fewer positions than the previous meter, but also how there are many more jacks into which the test leads may be plugged. Each one of those jacks is labeled with a number indicating the respective full-scale range of the meter.

Lastly, here is a picture of a digital multimeter. Note that the familiar meter movement has been replaced by a blank, gray-colored display screen. When powered, numerical digits appear in that screen area, depicting the amount of voltage, current, or resistance being measured. This particular brand and model of digital meter has a rotary selector switch and four jacks into which test leads can be plugged. Two leads—one red and one black—are shown plugged into the meter.

A close examination of this meter will reveal one “common” jack for the black test lead and three others for the red test lead. The jack into which the red lead is shown inserted is labeled for voltage and resistance measurement, while the other two jacks are labeled for current (A, mA, and µA) measurement. This is a wise design feature of the multimeter, requiring the user to move a test lead plug from one jack to another in order to switch from the voltage measurement to the current measurement function.
It would be hazardous to have the meter set in current measurement mode while connected across a significant source of voltage, because of the low input resistance; making it necessary to move a test lead plug rather than just flip the selector switch helps ensure that the meter doesn’t get set to measure current unintentionally. Note that the selector switch still has different positions for voltage and current measurement, so in order to switch between these two modes of measurement the user must both move the red test lead to a different jack and move the selector switch to a different position.

Also note that neither the selector switch nor the jacks are labeled with measurement ranges. In other words, there are no “100 volt” or “10 volt” or “1 volt” ranges (or any equivalent range steps) on this meter. Rather, this meter is “autoranging,” meaning that it automatically picks the appropriate range for the quantity being measured. Autoranging is a feature found only on digital meters, though not on all of them.

No two models of multimeters are designed to operate exactly the same, even if they’re manufactured by the same company. In order to fully understand the operation of any multimeter, the owner’s manual must be consulted.

Here is a schematic for a simple analog volt/ammeter:

In the switch’s three lower (most counter-clockwise) positions, the meter movement is connected to the Common and V jacks through one of three different series range resistors (Rmultiplier1 through Rmultiplier3), and so acts as a voltmeter. In the fourth position, the meter movement is connected in parallel with the shunt resistor, and so acts as an ammeter for any current entering the common jack and exiting the A jack. In the last (furthest clockwise) position, the meter movement is disconnected from either red jack, but short-circuited through the switch.
This short-circuiting creates a dampening effect on the needle, guarding against mechanical shock damage when the meter is handled and moved.

If an ohmmeter function is desired in this multimeter design, it may be substituted for one of the three voltage ranges as such:

With all three fundamental functions available, this multimeter may also be known as a volt-ohm-milliammeter.

Obtaining a reading from an analog multimeter when there is a multitude of ranges and only one meter movement may seem daunting to the new technician. On an analog multimeter, the meter movement is marked with several scales, each one useful for at least one range setting. Here is a close-up photograph of the scale from the Barnett multimeter shown earlier in this section:

Note that there are three types of scales on this meter face: a green scale for resistance at the top, a set of black scales for DC voltage and current in the middle, and a set of blue scales for AC voltage and current at the bottom. Both the DC and AC scales have three sub-scales, one ranging 0 to 2.5, one ranging 0 to 5, and one ranging 0 to 10. The meter operator must choose whichever scale best matches the range switch and plug settings in order to properly interpret the meter’s indication.

This particular multimeter has several basic voltage measurement ranges: 2.5 volts, 10 volts, 50 volts, 250 volts, 500 volts, and 1000 volts. With the use of the voltage range extender unit at the top of the multimeter, voltages up to 5000 volts can be measured. Suppose the meter operator chose to switch the meter into the “volt” function and plug the red test lead into the 10 volt jack. To interpret the needle’s position, he or she would have to read the scale ending with the number “10”. If they moved the red test plug into the 250 volt jack, however, they would read the meter indication on the scale ending with “2.5”, multiplying the direct indication by a factor of 100 in order to find what the measured voltage was.
If current is measured with this meter, another jack is chosen for the red plug to be inserted into and the range is selected via a rotary switch. This close-up photograph shows the switch set to the 2.5 mA position:

Note how all current ranges are power-of-ten multiples of the three scale ranges shown on the meter face: 2.5, 5, and 10. In some range settings, such as the 2.5 mA for example, the meter indication may be read directly on the 0 to 2.5 scale. For other range settings (250 µA, 50 mA, 100 mA, and 500 mA), the meter indication must be read off the appropriate scale and then multiplied by either 10 or 100 to obtain the real figure. The highest current range available on this meter is obtained with the rotary switch in the 2.5/10 amp position. The distinction between 2.5 amps and 10 amps is made by the red test plug position: a special “10 amp” jack next to the regular current-measuring jack provides an alternative plug setting to select the higher range.

Resistance in ohms, of course, is read by a nonlinear scale at the top of the meter face. It is “backward,” just like all battery-operated analog ohmmeters, with zero at the right-hand side of the face and infinity at the left-hand side. There is only one jack provided on this particular multimeter for “ohms,” so different resistance-measuring ranges must be selected by the rotary switch. Notice on the switch how five different “multiplier” settings are provided for measuring resistance: Rx1, Rx10, Rx100, Rx1000, and Rx10000. Just as you might suspect, the meter indication is given by multiplying whatever needle position is shown on the meter face by the power-of-ten multiplying factor set by the rotary switch.

This page titled 8.8: Multimeters is shared under a GNU Free Documentation License 1.3 license and was authored, remixed, and/or curated by Tony R.
Kuphaldt (All About Circuits) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
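As a closing aside (an addition of mine, not part of Kuphaldt's text): the arithmetic behind a volt/ammeter design like the schematic above, and the scale-times-multiplier reading rule, can be sketched in a few lines. The movement parameters below are hypothetical example values, not those of the meters pictured.

```python
# Hypothetical meter movement: 1 mA full-scale deflection, 500 ohm coil.
I_FS = 1e-3
R_COIL = 500.0

def multiplier_resistor(v_range):
    """Series resistance so that v_range volts deflects the movement fully."""
    return v_range / I_FS - R_COIL

def shunt_resistor(i_range):
    """Parallel resistance so that i_range amps deflects the movement fully.

    The shunt carries the excess current (i_range - I_FS) at the voltage
    dropped across the movement coil (I_FS * R_COIL).
    """
    return (I_FS * R_COIL) / (i_range - I_FS)

def reading(scale_indication, factor):
    """Analog scale interpretation: needle position times the range factor."""
    return scale_indication * factor
```

For instance, a 10 volt range would call for a 9.5 kΩ multiplier with this hypothetical movement, and a needle at 4.5 on the ohms scale with the switch at Rx100 reads 450 Ω.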
https://cris.vtt.fi/en/publications/correction-to-development-of-a-simplified-gas-ultracleaning-proce
# Correction to: Development of a simplified gas ultracleaning process: experiments in biomass residue‑based fixed‑bed gasification syngas (Biomass Conversion and Biorefinery, (2021), 10.1007/s13399-021-01680-x)

Research output: Contribution to journal › Article › Scientific › peer-review

## Abstract

In this article, Tables 3–6 appeared in the wrong order: the addition of the appendix's Table 6 contents to Table 3 caused all subsequent table contents to be bumped into the wrong slots. There is also an issue with cell positioning in two tables, which had not been corrected; the numbers represent two cells and thus need to be centered between the rows. In addition, Table 5 (which should really be Table 4) is missing headings that were mistakenly removed. The original article has been corrected.

- Original language: English
- Journal: Biomass Conversion and Biorefinery
- DOI: https://doi.org/10.1007/s13399-021-01862-7
- Published: 17 Aug 2021
- Type: A1 Journal article-refereed

The article being corrected: Frilund, C., Kurkela, E. & Hiltunen, I., Development of a simplified gas ultracleaning process: experiments in biomass residue-based fixed-bed gasification syngas, 10 Jul 2021 (E-pub ahead of print), 12 p. Open Access.
https://www.gamedev.net/forums/topic/172982-pocket-pc--getasynckeystate-cnet/
# Pocket PC GetAsyncKeyState C#.Net

Hi.. I've been learning C#.NET for use on a Pocket PC 2002 (uses the .NET Compact Framework). On This Site he talks about using GetAsyncKeyState for detecting arrow keys. It's declared as:

```csharp
[DllImport("coredll.dll")]
private static extern int GetAsyncKeyState(int vkey);
```

But I'm not able to get it to detect if any of the hardware arrow keys are down. I believe that Left is 37, Up is 38, Right is 39, and Down is 40, because that's what his enum had.

```csharp
// in a loop
if (GetAsyncKeyState(38) != 0)
{
    // This is not reached when I press up... why!?
}
```

This is what I'm apparently supposed to do, but it doesn't work... argh. Can anyone help with this? Thanks, Lord Hen

---

Could it be that I'm using the VS.NET 2003 Pocket PC Emulator? I won't be getting my Pocket PC for a few days (shipping... argh), so I can't exactly test it on a real device. If anyone would try this out on both the emulator and the actual device, it would be greatly appreciated =)

Pseudocode of what I'm doing:

```csharp
// loop
if (GetAsyncKeyState(left) == Pressed)
    // notify of keypress: msgbox, label, etc.
Application.DoEvents();
// end loop
```

Thanks, Lord Hen

Edit: OK, I got my Pocket PC and GetAsyncKeyState DOESN'T work on the emulator, but it works like a charm on the actual device. I'm sorry, I should have waited until I actually got it.

[edited by - Lord Hen on August 6, 2003 12:20:15 AM]
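For what it's worth, on desktop Win32 the documented way to interpret GetAsyncKeyState's return value is to test the most-significant bit (0x8000), which is set while the key is held down; comparing the whole value against zero, as in the post, can also pick up the low "pressed since last call" bit. Windows CE behavior may differ, but the bit layout is the same idea. A quick sketch of the masking (in Python purely to illustrate the bit test; the key codes are the standard VK_* values the poster mentions):

```python
# Standard virtual-key codes for the arrow keys (37..40, as in the post).
VK_LEFT, VK_UP, VK_RIGHT, VK_DOWN = 37, 38, 39, 40

def is_key_down(state):
    """Interpret a GetAsyncKeyState return value.

    Bit 15 of the 16-bit result means "key is down right now". Mask to
    16 bits first, in case P/Invoke hands back a sign-extended SHORT.
    """
    return bool((state & 0xFFFF) & 0x8000)
```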
http://brandewinder.com/2016/08/06/gradient-boosting-part-1/
# Exploring Gradient Boosting

I have recently seen the term “gradient boosting” pop up quite a bit, and, as I had no idea what this was about, I got curious. Wikipedia describes Gradient Boosting as

> a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.

The page contains both an outline of the algorithm, and some references, so I figured, what better way to understand it than trying a simple implementation. In this post, I’ll start with a hugely simplified version, and will build up progressively over a couple of posts.

I like to work on actual data to understand what is going on; for this series, I will (for no particular reason) use the Wine Quality dataset from the UCI Machine Learning repository. (References: P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.) In this example,

> Two datasets are included, related to red and white vinho verde wine samples, from the north of Portugal. The goal is to model wine quality based on physicochemical tests.

In plain English, we have a bunch of wines, and each of them has chemical measurements on 11 characteristics, and a rating. What we want is to use that data to estimate a model that, based on these measurements, will predict whether or not we should stay away from a particular bottle.

## Exploring the dataset

Note: the full script is available as a Gist here. We will use FSharp.Data and XPlot to respectively extract and visualize the dataset, from a raw F# script.
Data extraction is straightforward, using the CSV Type Provider:

```fsharp
type Wine = CsvProvider<"data/winequality-red.csv",";",InferRows=1500>

let reds = Wine.Load("data/winequality-red.csv")
```

Let’s start by creating a couple of type aliases, to clarify the intent of the code:

```fsharp
type Observation = Wine.Row
type Feature = Observation -> float

let ``Alcohol Level`` : Feature =
    fun obs -> obs.Alcohol |> float

let ``Volatile Acidity`` : Feature =
    fun obs -> obs.``Volatile acidity`` |> float
```

Here we define each row from the dataset as an Observation, and a Feature as a function that, given an Observation, will return to us a float, a numeric value that describes one aspect of an Observation. We then create 2 features, picking completely arbitrarily 2 measurements, `Alcohol Level` and `Volatile Acidity`.

Let’s take a look at whether there is a visible relationship between Alcohol Level and Quality, creating a scatterplot with XPlot:

```fsharp
reds.Rows
|> Seq.map (fun obs -> ``Alcohol Level`` obs, obs.Quality)
|> Chart.Scatter
|> Chart.WithOptions options
|> Chart.WithTitle "Alcohol Level vs. Quality"
|> Chart.WithXTitle "Alcohol Level"
|> Chart.WithYTitle "Quality"
|> Chart.Show
```

The relationship isn’t clear cut, but higher alcohol levels seem to go together with higher quality. Similarly, we plot Volatile Acidity against Quality:

Again, no blatant relationship, but as acidity goes up, quality seems to generally decrease. In other words, people seem to enjoy more booze and sweetness - this is not unreasonable. What we want next is to use that information, and create a model that uses, say, Alcohol Level to predict Quality, that is, a Regression model. Given how dispersed the data is on our charts, we should not hope for perfect predictions here. On the other hand, there is a bit of a trend visible, so using that information, we can hope for predictions that are better than random guesses.
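As an aside, the Observation/Feature idea maps directly onto plain functions in any language; here is a rough Python analogue (my own sketch, using a dict per row instead of the type provider, with made-up values):

```python
# One wine sample as a plain dict (values invented for illustration).
row = {"alcohol": 9.4, "volatile acidity": 0.70, "quality": 5}

# A "feature" is just a function: observation -> float.
alcohol_level = lambda obs: float(obs["alcohol"])
volatile_acidity = lambda obs: float(obs["volatile acidity"])
```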
## Stumps

One of the interesting ideas behind ensemble models (which boosting is an example of) is to try and combine many mediocre prediction models (“weak learners”) into a good one. Here we will start with the weakest model I can think of, namely stumps. A stump is simply a function that predicts one value if the input is below a given threshold, and another one if the input is above the threshold. As an example, we could create a stump that predicts a certain quality if the Alcohol Level is below, say, 11.0, and another value otherwise.

What predictions should we make? A reasonable solution would be to

- for wines with alcohol below 11.0, predict the average quality observed across wines under 11.0 alcohol,
- for wines with alcohol above 11.0, predict the average quality observed across wines over 11.0 alcohol.

This is, obviously, a very crude prediction model, but let’s roll with it for now, and implement that approach:

```fsharp
type Example = Observation * float
type Predictor = Observation -> float

let learnStump (sample:Example seq) (feature:Feature) threshold =
    let under =
        sample
        |> Seq.filter (fun (obs,lbl) -> feature obs <= threshold)
        |> Seq.averageBy (fun (obs,lbl) -> lbl)
    let over =
        sample
        |> Seq.filter (fun (obs,lbl) -> feature obs > threshold)
        |> Seq.averageBy (fun (obs,lbl) -> lbl)
    fun obs ->
        if (feature obs <= threshold)
        then under
        else over
```

We define another couple of types for convenience: an Example is an Observation, together with a float value, the value we are trying to predict, and a Predictor is a function that, given an Observation, will return a prediction (a float). The learnStump function takes a sample (a collection of Example to learn from), a Feature, and a threshold, computes the average value on both sides of the threshold, and returns a Predictor, a function that, given an observation, will return one of the 2 possible predictions, depending on whether the value for the Feature is under or over the threshold.
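If F# isn't your thing, learnStump translates almost word for word to Python (my own translation, with my own names):

```python
def learn_stump(sample, feature, threshold):
    """sample: list of (observation, label); returns observation -> prediction."""
    under = [lbl for obs, lbl in sample if feature(obs) <= threshold]
    over = [lbl for obs, lbl in sample if feature(obs) > threshold]
    lo, hi = sum(under) / len(under), sum(over) / len(over)
    return lambda obs: lo if feature(obs) <= threshold else hi
```

On a toy sample where the labels jump at 2.5, the stump recovers the average on each side of the threshold.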
Let’s try this out on our data, picking an arbitrary value of 11.0 as a threshold:

```fsharp
let redSample =
    reds.Rows
    |> Seq.map (fun row -> row, row.Quality |> float)

let testStump = learnStump redSample ``Alcohol Level`` 11.0
```

Let’s now visualize our model, by plotting alcohol level against predicted quality:

```fsharp
let predicted =
    redSample
    |> Seq.map (fun (obs,value) -> (``Alcohol Level`` obs, obs |> testStump))

predicted
|> Seq.sortBy fst
|> Chart.Line
|> Chart.WithTitle "Alcohol Level vs. Quality"
|> Chart.WithXTitle "Alcohol Level"
|> Chart.WithYTitle "Quality"
|> Chart.Show
```

For alcohol levels under 11.0, the model predicts a quality of 5.443, for levels above 11.0, a quality of 6.119.

## Picking a good stump

Now we know how to create a stump, based on a sample, a feature, and a threshold. Progress! We have a new problem on our hands, though. For a specific feature, we have many, many possible stumps. How can we select one? We need a way to compare two stumps (or any Predictor, really). Again we will go for simple: a perfect model would predict the correct response for every single Example we know of. Conversely, a bad model would produce far-off predictions. We will measure the quality by summing together all the prediction errors, squared:

```fsharp
let sumOfSquares (sample:Example seq) predictor =
    sample
    |> Seq.sumBy (fun (obs,lbl) ->
        pown (lbl - predictor obs) 2)
```

This is not the only approach possible, but this is reasonable. A perfect model would give us 0.0, because every single prediction would equal the value we are trying to predict, and prediction errors in either direction (over or under) will create a positive penalty, because of the square. The lower the sumOfSquares, the closer the predictions are overall to the target. As a benchmark, our testStump has the following “cost”:

```fsharp
sumOfSquares redSample testStump
```

```
val it : float = 868.8435509
```

868.84 is now the number to beat. Another question solved, another one to answer: which thresholds should we try?
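Before answering that, note the error metric also translates directly to Python (my own rough equivalent; examples are (observation, label) pairs):

```python
def sum_of_squares(sample, predictor):
    """Sum of squared prediction errors over a sample of (obs, label) pairs."""
    return sum((lbl - predictor(obs)) ** 2 for obs, lbl in sample)
```

A perfect predictor scores 0.0, and any miss, in either direction, adds a positive penalty, as described above.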
Rather than trying out every single possible value, which could end up being quite painful, we will go again for simple. We will take all the alcohol level values, and divide them between n evenly spaced intervals, like this:

```fsharp
let evenSplits (sample:Example seq) (feature:Feature) (n:int) =
    let values = sample |> Seq.map (fst >> feature)
    let min = values |> Seq.min
    let max = values |> Seq.max
    let width = (max-min) / (float (n + 1))
    [ min + width .. width .. max - width ]
```

If we apply this to the alcohol levels, we get the following:

```fsharp
let alcoholSplits = evenSplits redSample ``Alcohol Level`` 10
```

```
val alcoholSplits : float list =
  [8.990909091; 9.581818182; 10.17272727; 10.76363636; 11.35454545;
   11.94545455; 12.53636364; 13.12727273; 13.71818182]
```

Selecting the best stump at that point is easy: take the splits, for each of them, learn a stump, compute the sumOfSquares, and pick the stump with the lowest value:

```fsharp
let bestStump =
    alcoholSplits
    |> List.map (learnStump redSample ``Alcohol Level``)
    |> List.minBy (sumOfSquares redSample)
```

How good is it? Let’s check:

```fsharp
sumOfSquares redSample bestStump
```

```
val it : float = 864.4309287
```

This is an improvement over our randomly picked threshold, albeit a small one. For alcohol levels under 10.76, our model predicts 5.392, otherwise 6.091.

### Combining Stumps

Now we have a slightly less mediocre predictor - what next? The only thing we considered so far was the overall average error across the sample. Perhaps looking in more detail at the prediction errors could prove useful. Let’s dig into the residuals, that is, the difference between our predictions and the correct value:

```fsharp
redSample
|> Seq.map (fun (obs,lbl) -> ``Alcohol Level`` obs, lbl - (obs |> bestStump))
|> Chart.Scatter
|> Chart.WithOptions options
|> Chart.WithTitle "Residuals vs. Quality"
|> Chart.WithXTitle "Residuals"
|> Chart.WithYTitle "Quality"
|> Chart.Show
```

Overall, the errors are distributed somewhat evenly around 0.0; however, there is a bit of a visible pattern, marked in red on the chart. We seem to over-shoot in the region immediately on the left of the threshold, and under-shoot on the right. How about trying to fit a stump on the residuals, to capture effects our initial crude stump didn’t pick up?

```fsharp
let residualsSample =
    redSample
    |> Seq.map (fun (obs,lbl) -> obs, lbl - (obs |> bestStump))

let residualsStump =
    alcoholSplits
    |> List.map (learnStump residualsSample ``Alcohol Level``)
    |> List.minBy (sumOfSquares redSample)
```

We can now combine our 2 stumps into one model, and evaluate it:

```fsharp
let combined = fun obs -> bestStump obs + residualsStump obs

sumOfSquares redSample combined
```

```
val combined : obs:Observation -> Label
val it : float = 850.3408387
```

The aggregate error went down from 864.43 to 850.34. We combined together 2 mediocre models, and got a clear improvement out of it. Let’s plot out what our combined model does:

```fsharp
redSample
|> Seq.map (fun (obs,value) -> (``Alcohol Level`` obs, obs |> combined))
|> Seq.sortBy fst
|> Chart.Line
|> Chart.WithTitle "Alcohol Level vs. Quality"
|> Chart.WithXTitle "Alcohol Level"
|> Chart.WithYTitle "Quality"
|> Chart.Show
```

Plotting the residuals now produces the following chart:

The overall error is better, but there are still potential patterns to exploit. What we could do at that point is repeat the procedure, and fit another stump on the new residuals to decrease the error further.
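Putting those pieces together, the repeat-on-residuals idea can be sketched end-to-end in Python (a self-contained toy version of the approach; all names are mine, and the data is synthetic rather than the wine sample):

```python
def learn_stump(sample, feature, threshold):
    under = [l for o, l in sample if feature(o) <= threshold]
    over = [l for o, l in sample if feature(o) > threshold]
    lo = sum(under) / len(under) if under else 0.0
    hi = sum(over) / len(over) if over else 0.0
    return lambda o: lo if feature(o) <= threshold else hi

def sum_of_squares(sample, predictor):
    return sum((l - predictor(o)) ** 2 for o, l in sample)

def even_splits(sample, feature, n):
    vals = [feature(o) for o, _ in sample]
    lo, hi = min(vals), max(vals)
    w = (hi - lo) / (n + 1)
    return [lo + w * i for i in range(1, n + 1)]

def learn(sample, feature, depth, n_splits=10):
    """Start from the sample average, then repeatedly stack the stump that
    best fits the current residuals onto the predictor."""
    splits = even_splits(sample, feature, n_splits)
    avg = sum(l for _, l in sample) / len(sample)
    predictor = lambda o: avg
    for _ in range(depth):
        residuals = [(o, l - predictor(o)) for o, l in sample]
        stump = min((learn_stump(residuals, feature, t) for t in splits),
                    key=lambda s: sum_of_squares(residuals, s))
        # bind current predictor and stump explicitly to avoid late binding
        predictor = (lambda p, s: lambda o: p(o) + s(o))(predictor, stump)
    return predictor

# Synthetic check: a noiseless quadratic; error should shrink as depth grows.
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(0, 21)]
identity = lambda o: o
errors = [sum_of_squares(data, learn(data, identity, d)) for d in (1, 5, 10)]
```

Each added stump can only lower (never raise) the squared error on the training sample, since predicting the residuals' side-averages always beats predicting zero.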
## Iteratively adding stumps

Rather than manually create another stump, we can generalize the idea along these lines:

- Given a Predictor,
- Compute the residuals, the error between the Predictor forecast and the correct value,
- Find the Stump that matches most closely the residuals,
- Create a new Predictor, by combining the current one with the new stump,
- Repeat

In other words, at each step, we look at the errors from our current model, fit a new model to the residuals to reduce our error, combine them together, and repeat the procedure until we decide it’s enough. Let’s implement this, using recursion, and stopping after a given number of adjustments:

```fsharp
let learn (sample:Example seq) (feature:Feature) (depth:int) =

    let splits = evenSplits sample feature 10

    let rec next iterationsLeft predictor =
        // we have reached depth 0: we are done
        if iterationsLeft = 0
        then predictor
        else
            // compute new residuals
            let newSample =
                sample
                |> Seq.map (fun (obs,y) -> obs, y - predictor obs)
            // learn possible stumps against residuals,
            // and pick the one with smallest error
            let newStump =
                splits
                |> Seq.map (learnStump newSample feature)
                |> Seq.minBy (sumOfSquares newSample)
            // create new predictor
            let newPredictor = fun obs -> predictor obs + newStump obs
            // ... and keep going
            next (iterationsLeft - 1) newPredictor

    // initialize with a predictor that
    // predicts the average sample value
    let baseValue = sample |> Seq.map snd |> Seq.average
    let basePredictor = fun (obs:Observation) -> baseValue

    next depth basePredictor
```

We start with a Predictor that simply returns the average quality across the sample, and iteratively follow the approach previously outlined, until we reach our pre-defined number of iterations. How well does it work? Let’s try it out:

```fsharp
let model = learn redSample ``Alcohol Level`` 10
sumOfSquares redSample model
```

```
val it : float = 811.4601191
```

Another clear improvement, from 850.34 to 811.46. What does our model look like now?
By combining simple stumps, our model is starting to look like a staircase, progressively approximating a curve. Let’s take a quick look at how our aggregate error evolves, as depth increases:

```fsharp
[ 1 .. 15 ]
|> Seq.map (fun depth -> depth, learn redSample ``Alcohol Level`` depth)
|> Seq.map (fun (depth,model) -> depth, sumOfSquares redSample model)
|> Chart.Column
|> Chart.Show
```

At each step, adding a stump decreases the overall error, with improvements slowing down progressively as we go deeper.

## Conclusion

We used an extremely primitive base model (stumps) to create predictions. Each stump is a simple gate, predicting one value if the input is above a given threshold, and another otherwise. Yet, by combining these crude stumps, we managed to put in place an algorithm that becomes progressively better and better, generating a curve that matches the desired output more closely after each iteration.

Can we do better than that? Yes we can! Currently, our learn function is relying on a single feature at a time; we are using only Alcohol Level, ignoring all the potential information present in Volatile Acidity, or the other 9 measurements we have available. Instead of learning on one feature only, we could already pick the best stump across multiple features. Furthermore, there is nothing in the core learn algorithm that constrains us to use a stump. Instead of restricting ourselves to a stump, we could also use more complex models to match the residuals.

In our next installments, we will look into learning trees instead of stumps, which will allow us to create Predictors using more than a single Feature at a time. In the process, we will also revisit the question of how to combine models as we iterate. Our current approach is to simply stack our predictors together: `fun obs -> predictor obs + newStump obs`. However, this might not be the best combination available - we will look into that.

Code as a Gist
http://mathhelpforum.com/advanced-applied-math/24078-elastic-collision-two-dimensions.html
# Math Help - elastic collision in two dimensions

1. ## elastic collision in two dimensions

On a frictionless surface, a 0.35 kg puck moves horizontally to the right (at an angle of 0°) with a speed of 2.3 m/s. It collides with a 0.23 kg puck that is stationary. After the collision, the puck that was initially moving has a speed of 2.0 m/s and is moving at an angle of −32°. What is the velocity of the other puck after the collision?

2. Originally Posted by Linnus:
On a frictionless surface, a 0.35 kg puck moves horizontally to the right (at an angle of 0°) with a speed of 2.3 m/s. It collides with a 0.23 kg puck that is stationary. After the collision, the puck that was initially moving has a speed of 2.0 m/s and is moving at an angle of −32°. What is the velocity of the other puck after the collision?

Nothing in the problem states that the collision is elastic, so I will not assume that it is; no energy conservation. (Why did you write that it is elastic in the title?)

During the collision there are no net external forces on the two pucks, so we know that the momentum of the two pucks is conserved. Since this is a two-dimensional problem, we can also note that the momentum of the system is conserved in both the x and y directions:

$m_1v_{10x} + m_2v_{20x} = m_1v_{1x} + m_2v_{2x}$

and

$m_1v_{10y} + m_2v_{20y} = m_1v_{1y} + m_2v_{2y}$

So let's define some positive directions. (You could also define an origin, a good practice, but it will not be needed here.) I'm going to let there be a +x axis in the direction the 0.35 kg puck is moving in before the collision. (To the right, in other words.) I am going to define a +y direction "straight up", that is to say straight up when you are sketching this problem.
So calling puck 1 the 0.35 kg puck, and puck 2 the 0.23 kg puck, we know that

$v_{10x} = 2.3~m/s$

$v_{10y} = v_{20x} = v_{20y} = 0~m/s$

So

$m_1v_{10x} = m_1v_{1x} + m_2v_{2x}$

and

$0 = m_1v_{1y} + m_2v_{2y}$

We also know

$v_{1x} = 2.0 \cos(32^\circ)$

$v_{1y} = -2.0 \sin(32^\circ)$

So

$m_1v_{10x} = 2m_1 \cos(32^\circ) + m_2v_{2x}$

and

$0 = -2m_1 \sin(32^\circ) + m_2v_{2y}$

The first equation says:

$v_{2x} = \frac{m_1v_{10x} - 2m_1 \cos(32^\circ)}{m_2} = 0.918984~m/s$

and the second says:

$v_{2y} = \frac{2m_1 \sin(32^\circ)}{m_2} = 1.6128~m/s$

We want the velocity of the second puck. The magnitude of the velocity will be:

$v_2 = \sqrt{v_{2x}^2 + v_{2y}^2} = 1.85625~m/s$

The velocity vector is in the first quadrant, so the angle the velocity makes with the +x axis is

$\theta = \tan^{-1} \left ( \frac{v_{2y}}{v_{2x}} \right ) = 60.3252^\circ$

Now a little unfinished business. I'll leave it to you to calculate whether or not this collision was elastic. (Big hint: It wasn't.)

-Dan

3. oh, that was the title that the problem was under ^^;;
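Dan's arithmetic above is easy to double-check numerically. Here is a small Python sketch (a cross-check, not part of the original thread) that reproduces the numbers from the momentum equations and also compares kinetic energy before and after, which settles the "was it elastic?" hint:

```python
import math

# Known quantities from the problem statement.
m1, m2 = 0.35, 0.23           # kg
v10x = 2.3                    # m/s, puck 1 before the collision (puck 2 at rest)
theta1 = math.radians(32.0)   # puck 1 deflection angle after the collision
v1x = 2.0 * math.cos(theta1)
v1y = -2.0 * math.sin(theta1)

# Momentum conservation in x and y, solved for puck 2's velocity.
v2x = (m1 * v10x - m1 * v1x) / m2
v2y = -(m1 * v1y) / m2

v2 = math.hypot(v2x, v2y)                    # magnitude
angle = math.degrees(math.atan2(v2y, v2x))   # direction from +x axis

# Kinetic energy before vs after: with the numbers as given, KE actually
# comes out *larger* after the collision, so the data are certainly not
# consistent with an elastic collision (the problem's numbers are idealized).
ke_before = 0.5 * m1 * v10x ** 2
ke_after = 0.5 * m1 * 2.0 ** 2 + 0.5 * m2 * v2 ** 2
```

Running this reproduces $v_2 \approx 1.856~m/s$ at about $60.3^\circ$, matching Dan's result.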
http://math.andrej.com/category/constructive-mathematics/
# Mathematics and Computation

## A blog about mathematics for computers

# Posts in the category Constructive mathematics

### On complete ordered fields

Theorem: All complete ordered fields are isomorphic.

The standard proof posted by Joel has two parts:

1. A complete ordered field is archimedean.
2. Using the fact that the rationals are dense in an archimedean field, we construct an isomorphism between any two complete ordered fields.

The second step is constructive, but the first one is proved using excluded middle, as follows. Suppose $F$ is a complete ordered field. If $b \in F$ is an upper bound for the natural numbers, construed as a subset of $F$, then so is $b - 1$ (if $n \leq b$ for every $n \in \mathbb{N}$, then also $n + 1 \leq b$, hence $n \leq b - 1$), but then no element of $F$ can be the least upper bound of $\mathbb{N}$. By excluded middle, above every $x \in F$ there is $n \in \mathbb{N}$.

So I asked myself and the constructive news mailing list what the constructive status of the theorem is. But something was amiss, as Fred Richman immediately asked me to provide an example of a complete ordered field. Why would he do that? Don't we have the MacNeille reals? After agreeing on definitions, Toby Bartels gave the answer, which I am taking the liberty to adapt a bit and present here. I am probably just reinventing the wheel, so if someone knows an original reference, please provide it in the comments.

The theorem holds constructively, but for a bizarre reason: if there exists a complete ordered field, then the law of excluded middle holds, and the standard proof is valid!
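The classical first step can be written out in symbols. As a sketch (standard material, just restating the argument above):

```latex
% F a complete ordered field, \mathbb{N} \subseteq F its natural numbers.
% Claim (classical): F is archimedean.
%
% If b is an upper bound of \mathbb{N}, then so is b - 1:
%   (\forall n.\ n \leq b)
%     \implies (\forall n.\ n + 1 \leq b)
%     \implies (\forall n.\ n \leq b - 1).
% Hence the set of upper bounds of \mathbb{N} has no least element,
% so by completeness \mathbb{N} is not bounded above.
% By excluded middle we may then conclude:
\forall x \in F.\ \exists n \in \mathbb{N}.\ x < n
```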
http://www.physicsforums.com/showthread.php?p=3610228
# Absorption of light - a dilemma

by sorax123
Tags: absorption, electromagnetic wave, light, optics, theories of light

P: 31

I am fascinated by light and how its phenomena are possible; however, one particular area where there is slight doubt in my mind is absorption, after reading, both online and in books, two conflicting theories on this.

The first idea is that a photon of light which has the correct amount of energy needed to excite a molecule into a further energy level interacts with such a molecule and causes the molecule to become excited. After this point, if no further energy is given to the particle, it will lose energy and will no longer be excited, resulting in the electron dropping to a lower energy orbit and a photon of the same energy being re-emitted. Now this is all fine, but the flaw lies in the final part: if a photon of the same energy is re-emitted, then surely there is no net energy gain, and therefore no absorption present?

The second theory relies solely on wave theory and says that a particle has a particular vibrational frequency at which it exists, and if a light wave happens to be this wavelength or a discrete multiple of it (i.e. 2x it or 3x or 4x etc.), then it will be absorbed and a resonance effect will take place, resulting in more vibration in the particle and therefore thermal energy, explaining why a black object gets hot in the sun. But this does not seem right to me, as it bases its argument on Newtonian mechanical waves rather than electromagnetic fundamentals; surely the physical vibrational frequency cannot intertwine with E.M. fields? And if this theory were true, then it must say that as a particle has more thermal energy (a higher vibrational frequency) it changes colour?
Also, while I write this, I thought I'd pose the question: how is it possible for a substance to absorb both red and blue light, while not the colours in between, as blue light and red light are not linked by a discrete coefficient as suggested in theory 2? Just looking for some clarifications here, as I don't want to research further into these subjects without full understanding of this seemingly elusively understood and debated principle. Thanks in advance.

P: 4

Quote by sorax123:
surely the physical vibrational frequency cannot intertwine with E.M. fields? And if this theory were true, then it must say that as a particle has more thermal energy (a higher vibrational frequency) it changes colour?

i think they can... otherwise microwave ovens wouldn't work. and i don't know about changing colour... but they sure light up in high temperatures... i have no idea about the answers to your questions, but wanted to leave some thoughts here... until someone comes with an answer.

P: 31

Thanks for your reply. I'm pretty sure that microwaves heat food because water is a polar molecule and the oscillating magnetic and electric fields of a microwave cause the polar molecule to rotate and "bump" into other molecules, passing on heat energy. This only occurs for frequencies of around 2.4 GHz for water, as this is the frequency at which the electromagnetic field takes the right amount of time to change from positive to negative and therefore rotate the molecule. This means the water molecule can achieve the fastest possible rate of rotation. Perhaps visible light behaves similarly, but then things would get exceptionally hot, so I'm not sure. But for microwaves heating food, it's not the vibrations interacting; it's the idea of polar molecules, where one side is negative and another is positive, causing repulsion, attraction and rotation. Cheers.
D PF Patron P: 10,406

## Absorption of light - a dilemma

Quote by sorax123:
Now this is all fine, but the flaw lies in the final part: if a photon of the same energy is re-emitted, then surely there is no net energy gain, and therefore no absorption present?

Why would there be no absorption? The atom or molecule can stay in an excited state for an extended amount of time. While it is excited it has the energy gained from the photon. Upon emission of the photon it loses the energy.

Mentor P: 27,565

Quote by sorax123:
I am fascinated by light and how its phenomena are possible; however, one particular area where there is slight doubt in my mind is absorption, after reading, both online and in books, two conflicting theories on this. The first idea is that a photon of light which has the correct amount of energy needed to excite a molecule into a further energy level interacts with such a molecule and causes the molecule to become excited. After this point, if no further energy is given to the particle, it will lose energy and will no longer be excited, resulting in the electron dropping to a lower energy orbit and a photon of the same energy being re-emitted. Now this is all fine, but the flaw lies in the final part: if a photon of the same energy is re-emitted, then surely there is no net energy gain, and therefore no absorption present? The second theory relies solely on wave theory and says that a particle has a particular vibrational frequency at which it exists, and if a light wave happens to be this wavelength or a discrete multiple of it (i.e. 2x it or 3x or 4x etc.), then it will be absorbed and a resonance effect will take place, resulting in more vibration in the particle and therefore thermal energy, explaining why a black object gets hot in the sun.
But this does not seem right to me, as it bases its argument on Newtonian mechanical waves rather than electromagnetic fundamentals; surely the physical vibrational frequency cannot intertwine with E.M. fields? And if this theory were true, then it must say that as a particle has more thermal energy (a higher vibrational frequency) it changes colour? Also, while I write this, I thought I'd pose the question: how is it possible for a substance to absorb both red and blue light, while not the colours in between, as blue light and red light are not linked by a discrete coefficient as suggested in theory 2? Just looking for some clarifications here, as I don't want to research further into these subjects without full understanding of this seemingly elusively understood and debated principle. Thanks in advance.

You might want to start by reading the FAQ subforum in the General Physics forum, especially the entry on photon transport in solids.

Zz.
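One way to see why "red and blue but nothing in between" is unproblematic in the photon picture: absorption lines correspond to discrete energy-level gaps, $E = hc/\lambda$, and two gaps in the same substance need not be related by any integer ratio. A small illustrative Python sketch (the wavelengths are typical values chosen for illustration, not from the thread):

```python
# Photon energy E = h*c / wavelength, expressed in electron-volts.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron-volts."""
    return h * c / (wavelength_nm * 1e-9) / eV

# A substance with energy-level gaps near 1.9 eV and 2.76 eV would absorb
# red and blue light but nothing in between; the two gaps are independent.
red = photon_energy_eV(650.0)   # roughly 1.91 eV
blue = photon_energy_eV(450.0)  # roughly 2.76 eV
```

The two energies are not integer multiples of each other, which is exactly why the "discrete multiple" requirement of theory 2 does not match observed absorption spectra.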
https://8ch.net/pol/res/11932331.html
/pol/ - Politically Incorrect
Politics, news, happenings, current events

YouTube embed. Click thumbnail to play.

c0c509  No.11932331
There is very little on the subject I can put as OP (will do below), as I don't want to manipulate the flow of the information. Bigger (((false flags))) will happen if we're not informed.

c0c509  No.11932332
♥ esoteric trips ♥

c0c509  No.11932337
I like my dubs as I like my immigrants

be8a5b  No.11932344
File: 5cc83f291b0b2e0⋯.gif (821.68 KB, 852x568, 3:2, 911 pentagon1.gif)
File: 64879485463de48⋯.webm (2.45 MB, 550x310, 55:31, 911 thermite.webm)
File: cbda580dc5ae5d0⋯.png (839.23 KB, 934x680, 467:340, 911 pentagon.png)

32ca03  No.11932345
This is the only video you need (won't let me embed: "this file already exists")

09da29  No.11932348
It was done by Jews and Masons in finance and the MIC.

bf5a76  No.11932387
It was done to hide crimes committed by the Clinton Admin. It was done as part of a deal with Saudi Arabia. Clinton set up the deal before he left office.
File: 223916092f33f83⋯.jpg (149.95 KB, 768x509, 768:509, Pentagon Debris 1.jpg)
File: 21353e8298b3853⋯.jpg (160.99 KB, 768x600, 32:25, Pentagon Debris 2.jpg)
File: 057080ca0b09c1a⋯.jpg (87.75 KB, 768x509, 768:509, Pentagon Debris 3.jpg)
File: 84aea2829412824⋯.jpg (80.31 KB, 768x508, 192:127, Pentagon Exterior 1.jpg)
File: 5f07131c7ffc9b2⋯.jpg (104.01 KB, 768x614, 384:307, Pentagon Exterior 3.jpg)
Rare Pentagons

b87ce4  No.11932400
YouTube embed. Click thumbnail to play.
Israel did 9/11
In this interview Chris Bollyn names the Israeli intelligence operatives who carried out the attack. He also names some American Jews that were part of the bloody crime against America. Lots of curious facts are revealed.

c8f5a1  No.11932456
Three words. Five Dancing Israelis

b8db79  No.11932460
sliding with 9/11 shit now, dan?

000000  No.11932475
"There are some people who realize that the physical evidence indicates that the official story is wrong, but don't understand what purpose or interest the government may have had in carrying out the attacks and thus have a psychological relation to the entire event as remaining quite mysterious even though the government's claims are patently absurd. I'll resolve that for you.

On 9/11/91 Bush Sr. spoke before Congress calling for a new world order. Alright, so the president is announcing a major initiative to the world. What could he be up to? As the USSR was collapsing, there were major operations underway to seize control of their industry. We need to lay down a little historical context for those unaware before proceeding.

Throughout the 1980s the CIA was heavily involved in cocaine and arms trafficking, money laundering, etc. The most famous name here might be Oliver North, but Bush Sr. is neck deep. Mena, Arkansas is a major hub for this operation under the jurisdiction of Bill Clinton. Long trail of deaths surrounding North, Clinton, Bush, Mena, etc. You can educate yourself on those details. This scandal goes mainstream around 1986.
In 1989 you get the first major form of collateralized debt obligations in the form of Brady Bonds, invented by Bush Sr.'s treasury secretary. Minimally informed people are aware of the centrality of CDOs to the 2008 financial panic and the endemic fraud in securities trades of this type. One of the Bush/North associates is Neil Livingstone, who acts as a go-between to Semion Mogilevich. Mogilevich is one of the biggest mafia leaders in the USSR at the time. Mogilevich has had money laundering through the Bank of New York exposed to the tune of $10 billion. He is a major arms dealer at the time, and also heavily connected to al-Qaeda. Part of a bargain Livingstone tried to broker with DoJ involved Mogilevich handing over a bunch of his al-Qaeda connections.

Around 9/11/91 a bunch of fraudulent Brady Bonds are issued through the Bank of New York, Mogilevich's personal money laundering machine. This manifests ten years later. If you go back and look at the settlement imbalances at banks after 9/11, even the banks operating out of WTC complexes don't have any real settlement issues. There is one major exception however: Mogilevich's money laundering hub, the Bank of New York, is reporting book imbalances in excess of $100 billion per day following the attacks. The rules governing security clearance were lifted immediately after 9/11 - allegedly due to widespread problems - but really just to allow BoNY to clear its balances without a record. It's worth noting that BoNY did not sustain structural damage on 9/11 - not in the WTC.

So these 100s of billions of fraudulent securities that were not clearing in the days after 9/11 - where did they come from? What were they used for? This is how the west launched their invasion of Russia following the collapse. You'll find exposes about crates of freshly printed US bills being shipped to Russia, like The Money Plane in NY Magazine, used to buy influence; the other side of this is the securities fraud used to buy assets. It's estimated that something like 40-50% of Russia had been bought up through the mafia by late 92 or 1993.

In 2000/2001 Putin comes onto the scene. He starts nationalizing Russian assets that were seized by the US via the above mentioned securities fraud / money laundering and putting pressure on the mob. Next thing you know 9/11 happens. There are a lot of bones to pick with the official story, but rather than taking up those issues I'd like to highlight the importance of some officially acknowledged but underreported facts.

On 9/9/01 Ahmad Massoud is assassinated by a fake TV crew that disguised a bomb as a TV camera. Two days later the secret service denies access to a couple of guys claiming to have an interview lined up with Bush in Florida on the morning of 9/11. This is our first direct threat against Bush of the day and indication of some larger plot than hijacked planes. Upon learning of the attacks, Bush insists on returning directly to Washington. In flight, a threat is received in the form of a call from an unknown source saying "Angel is next," angel being code for the president that only insiders would have. ("Can you confirm the substance of that threat that was telephoned in… that Air Force One is next and using code words?" Fleischer: "Yes, I can. That's correct." (September 13)) Bush is at this point aware that there is some sort of coup effort going on; for example, all the reporters onboard AF1 are required to turn their cellphones off because they are worried about the attacking faction tracking cell signals - a capability we can all agree is well beyond that of al-Qaeda.

000000  No.11932477
So Bush is under threat from people with high level insider knowledge. The press secretary acknowledged all this on national TV the day after ("Angel is next" being called in). Bush diverts to Barksdale, which is basically the #2 nuclear command site. After a couple hours there he proceeds to Offutt, which is the #1 nuclear command site.

You should also be aware of a variety of drills running on the day of 9/11, Vigilant Guardian. This is a full scale mock up of nuclear war; the whole infrastructure is activated for first strike (incidentally, part of the Vigilant Guardian drill in 2001 included a hijacking of planes as the instigator of the conflict). So what is Bush doing going to Barksdale and Offutt? Clearly trying to bring the nuclear forces to heel in light of learning of high level insider power plays.

This might be starting to sound a bit over the top - high level insiders seizing control of nuclear infrastructure and threatening the president with it. But only a few years later we have a similar incident in 2007 as 6 nuclear weapons are seized, generally regarded as intended for use starting the war in either Iran or Georgia. Later, in 2013, we again have nukes going off base unauthorized. Hours after it was reported in the media, Sen. Graham is on TV warning of a nuke hit on South Carolina to be blamed on Syrian rebels; two top nuclear commanders get dismissed in the following weeks. So high level insider fighting over the nuclear arsenal is pretty standard stuff, well known to the public.

With the question of a struggle over the nuclear arsenal now being common sense rather than shocking, we consider Bush caving to the terrorism line and starting the whole war on terror. You have Putin immediately backing off the seizure of assets in Russia. It won't be until the last couple of years that Putin resumes his assertion of authority over Russia; the US responds in kind with attacks on Syria and Ukraine, but Russia has since quietly updated its missile program and is prepared for nuclear war this time around; you now see a defiant Putin in the face of the 9/11 coup faction. Alongside this remarkable shift in geopolitics, there is an emerging anti-dollar block with the BRIC countries establishing an infrastructure bank last year.

Just weeks ago Glazyev announced this; he is widely regarded as being the mouth of Putin; he organized the recent gas deal with China, for example. So you see 9/11 was a pivotal event used to extend the US dollar empire under threat of nuclear war for another 10-15 years, in the face of an assertive Putin back in 2001 and growing domestic problems for the US Government. (You may recall the 90s were full of anti-government militancy, concern over globalization, NAFTA, extraordinary distrust; in general what you would expect of citizens in an empire with no apparent external threat… the cold war had ended.)

So now that the broad outline of purpose and motivation for 9/11 is clear, it's easier to come to terms with what your eyes tell you looking at Building 7 imploding into its footprint at free fall, for example. You don't even necessarily have to view it as an evil thing; the US people are quite severely fucked without something being done to backstop the US dollar."

08611d  No.11932654
File: 410b768fdf75b0a⋯.jpg (137.14 KB, 1280x720, 16:9, lotgh.jpg)
>>11932400
Good stuff, I just wish someone other than me watched it. Do you think we can trick these israel-firster posters by posting a video named "Trump is awesome and schools a liberal" but in reality it's that?

3980bd  No.11932719
File: 4dbc7761f373417⋯.webm (855.82 KB, 1280x720, 16:9, 911 building 7.webm)
File: 248d332c218e706⋯.jpg (106.66 KB, 640x640, 1:1, the 911 dancing israelis.jpg)
File: 72b9ec426d65e49⋯.webm (4.8 MB, 1280x720, 16:9, anime 9 11.webm)
File: 855660c7e93ef46⋯.jpg (13 KB, 388x714, 194:357, 911 israel.jpg)
File: b19a9a7a66e7a21⋯.webm (707.31 KB, 640x360, 16:9, 911.webm)
911 is one of the most fun conspiracies because it's so fucking obvious but you still can't dare to talk about it. 911 is (still) the holocaust of the 21st century.

1cbdd4  No.11932728
YouTube embed. Click thumbnail to play.
Rarely see anyone show this. Highly methodical, dry and very compelling.
71a277  No.11932729
The Jews did 911. It's not even a question. No I'm not messing with formatting. Watch it or don't.

f3cca5  No.11932904
File: fdfd5a38636ad3e⋯.gif (624.23 KB, 1760x4464, 110:279, redpill2.gif)

f8df51  No.11932907
>>11932728
My latino cousin (big extended family) showed me that video a couple years ago.

a165be  No.11933134
File: 6db25670cfa1322⋯.jpg (44.23 KB, 500x340, 25:17, Maj Dan Pantaleo and anot….jpg)
File: dd842f9bef70fb3⋯.jpg (9.88 KB, 328x154, 164:77, images.jpg)
>>11932331
According to the official story, 30,000 liters of aircraft fuel melted steel beams at WTC 1 and 2 each. But at the Pentagon we know from the flight recorder recovered there that the plane that crashed into the Pentagon also had 30,000 liters of fuel. So why couldn't the fuel even singe an open book on a stool or melt a plastic monitor at ground zero?

185fb4  No.11933230
File: f89e033e49fa6dc⋯.png (82.98 KB, 370x657, 370:657, ClipboardImage.png)
File: a3f29962cda70c8⋯.png (123.64 KB, 370x657, 370:657, ClipboardImage.png)
>>11933134
this is one of the fake narratives forced by (((them))), that's not the hole. loose change faggots started this bullshit, you're staring at a rabbit hole of how they fuck with the narrative to make us look crazy. Another example of this is the dancing israeli moving van pics that get passed around, which are also mostly fake. I don't know why they faked them; I guess so if someone showed them to someone it would be easy to debunk, and it's hard to listen to someone who's easily led.

Ryan Dawson is good on this topic with some excellent vids and he constantly names the jews: https://www.youtube.com/watch?v=gl7OkEBXu-E

(((They))) even try to disinfo him through the talpiot faggots; not sure yet if they're controlled op, useful idiots like qtards being led away by kikes, or something else. talpiot is real, it exists, but the people pushing it do a lot of speculation and jumping to conclusions, like they're from the alex jones' school of journalism

185fb4  No.11933232
>>11933230
the pic on the left, the first one, is the real pic, the second is fake.

185fb4  No.11933247
YouTube embed. Click thumbnail to play.
>>11933230
here's the loose change debunk video

a165be  No.11933274
https://www.lewrockwell.com/

a165be  No.11933275
>>11933230
You're a babbling idiot with nothing to say. I never watched loose change mainly because it's Jason Bermas the kikenvermin shill.

185fb4  No.11933339
>>11933275
>You're a babbling idiot with nothing to say.
ok kike, not an argument, keep promoting that bullshit. They started it, other useful retards like yourself repeat it.

b618c0  No.11933360
>Oy vey, maybe if we post enough shit we can make them forget we're all a bunch of pederasts that will be swinging by their necks tomorrow

a165be  No.11936292
File: bb2fd3b84ad245f⋯.jpg (61.45 KB, 526x480, 263:240, Pentagon exposed interior.jpg)
>>11933339
The photographic evidence is clear; at Pentagon ground zero there is no evidence of any fuel fire, despite the fact that the flight recorder found there says that there was 30,000 liters on board. I didn't need anything from a movie I never watched to understand that. But just keep muddying the waters jewboi. By all means don't trust your lying eyes. Jet fuel that melted steel beams at WTC 1 and 2 couldn't even melt a plastic monitor on a desk at Pentagon ground zero.

a165be  No.11936336
>>11933360
Oi vey, the American public won't do anything to us, in fact they'll love us with religious fervor as the apple of God's eye, jes like Pastor McWhinney sez, as long as we've got meat scrap franks sizzlin' on the grill and a big black Dodge Ram pickup truck out on the driveway and we don't give a shit if the jews fuck our own kids up the ass. It's all good, buddy.

07fda4  No.11936446
>>11932456
underrated post. this is unbelievable.
I would say BIG IF TRUE but we all know it's not even a dot compared to their propaganda machinery

b10b1f  No.11936499
YouTube embed. Click thumbnail to play.

3a73c3  No.11936758
YouTube embed. Click thumbnail to play.

3a73c3  No.11936772
YouTube embed. Click thumbnail to play.
>>11932331
One commonly asked question is: "If it is impossible for a standard 767 to perform at the speeds reported, why would the alleged terrorist or elements within our govt fly the aircraft so fast?"

3a73c3  No.11936778
File: 5ffe2f2986629fa⋯.jpeg (97.36 KB, 925x634, 925:634, 767 plane vmo vd 911 modi….jpeg)
>>11936772
Here you see the planes that hit the towers were flying way over VMO and would not have flown straight or even realistically held together.

3a73c3  No.11936848
File: 31b947ca9df7256⋯.png (467.35 KB, 1407x781, 1407:781, 911 aircraft over vmo vd s….png)

3a73c3  No.11936874
File: cf47bd3f7601d05⋯.png (785.93 KB, 2497x768, 2497:768, 911 aircraft 767 flew over….png)
Here's a good combo pic.

f0078c  No.11936901
File: 11ab5ba21e4415c⋯.png (100.38 KB, 452x338, 226:169, oyjew.png)
>>11936499
That ((Comedy Central)) up to their old tricks again

b02d3b  No.11936939
>>11932460
Just shut the fuck up, kid. It's the only thing you post. You clearly don't care about the topic.

82a988  No.11936974
File: c6ecf2aa05c4196⋯.webm (7.92 MB, 640x360, 16:9, c6ecf2aa05c41969e972272e7….webm)
Condensated redpill

92d3a1  No.11937060
>>11936974
who's the black guy in the vid they circle? 3:20

07fda4  No.11937092
>>11936874
did a nigger make those? or a jew? not even a fuckin scale what is this shit

07fda4  No.11937098
>>11936974
condensated or condensed

a4045c  No.11937144
File: 41f2cd950aba197⋯.jpg (106.46 KB, 377x412, 377:412, 20170807_141033.jpg)

798c99  No.11937182
File: 8af392e715c892f⋯.jpg (52.02 KB, 700x573, 700:573, redd.jpg)
>>11932728
my eyes are open. they fucking engineered the destruction of the twin towers. it was those military explosive ordinance people!!

a8e0bb  No.11937210
>>11937182
The material was military grade and very controlled.

798c99  No.11937232
>>11937210
all the jews that worked there, though… some say it was the arab oil peoples' way of getting rid of some evil americans

32826a  No.11937287
>>11933230
If only Dawson wasn't so autistic, and didn't have such an unwarranted sense of self-importance.

a5aca7  No.11937302
File: 7ce9e433c31ae11⋯.jpg (591.42 KB, 2560x2930, 256:293, 7ce9e433c31ae11e04e7f3a9df….jpg)
>>11936901
>Saudis and kikes aren't the same thing

3a73c3  No.11937345
>>11936778
>>11937092
They're from the "767 type certificate data sheet A1NM" as the pic shows, you fucking shill.

3a73c3  No.11937402
File: 0b3f0cdc8c41886⋯.jpg (111.32 KB, 998x636, 499:318, Boeing_767_VG_Diagram.jpg)
>>11937345

3a73c3  No.11937405
>>11937402

b87ce4  No.11937469
YouTube embed. Click thumbnail to play.
Christopher Bollyn: The Man Who Solved 9/11

at 35:00 the interviewer asks Bollyn to go through the evidence of Israeli involvement

39:00 - two senior Mossad agents - old men who in the 1960s captured Adolf Eichmann, and who smuggled uranium for the original Israeli nuclear weapons program - were the two men who got a contract in 1987 for a lease for the World Trade Center in NYC… but that contract was torn up when some American noticed their history.

Michael Chertoff: After the 1993 car bomb at the WTC, Chertoff became a key prosecution figure who targeted (the purported suspect) 'the blind Sheik'. Chertoff was the chief investigator. On 9/11 Chertoff was Assistant US Attorney General; he had final say in all decisions re 9/11 attack investigations. Chertoff's dad was one of Israel's top Mossad agents, a rabbi. After the atrocity of 9/11, Michael Chertoff was acting as an American law enforcement figure, but he got rid of evidence, confiscated video tapes, released Israelis from US jails and sent them home, etc. Later Chertoff's cousin bought Popular Mechanics, and then the magazine tried to whitewash and endorse the absurd 911 Commission Report.

43:00 - the junkyards where evidence from WTC's Ground Zero was destroyed were under the control of Israelis, dual American/Israeli nationals.

44:30 - Bollyn names PTech as the Quincy, Massachusetts 'venture company' that had their software running on all the relevant US government computers on that fateful day. Bollyn then names an Israeli 'security company' owned by a company in Holland that is in turn a Mossad security company. That company was in charge of airport security and passenger screening operations at the NY and Boston airports that the ill-fated planes took off from on 911. At the World Trade Center a different security company was in operation: Kroll associates, owned by Jules Kroll and Morris Greenberg; these are both high level Zionists.

At 46:00 ish, Bollyn points out that Phillip Zelikow's PhD paper was about the Creation of the Public Myth, and that's exactly what he did with the 9/11 Commission Report (Zelikow was the Executive Director of the 9/11 Commission.)

It's an 80 minute video. There's lots more curious information.

ca1603  No.11937699
>>11937469
how to write red on this fucking board s $$s$$ s $$\color{red}{s}$$ $$\left[\frac{fuck}{you}\right]$$\mathfrak{s} s s s

ca1603  No.11937709

b02d3b  No.11937715
>>11937699
Lurk two years before posting. This isn't a joke.

000000  No.11937790
https://www.salon.com/2002/05/07/students/

This is the best article if you want to start with redpilling someone from scratch. It doesn't go into "conspiracy" territory, so you avoid that immediate shutdown of thinking. You can even act like you don't know what to make of it either; you've just been doing research. Because there is no way to reconcile the Israeli "art student" mystery and not put 2 and 2 together. Anyone have a high res NYT photograph of the guys setting up their "art installation"?
The one with the guy in the ski mask and all the cardboard boxes. 000000  No.11937807 >>11937790 >https://web.archive.org/web/20111224000504/https://www.salon.com/2002/05/07/students/ abb9ee  No.11938073 File: 1c48d36bf1e93f9⋯.jpg (33.28 KB, 255x255, 1:1, jojo reference.jpg) >>11937060 Depends who you ask on /pol/… Some (((anons))) will say, "a fucking nigger that deserves to die on the DOTR, 14/88, stop looking into 9/11 and instead focus SOLELY on pedo-celebs which only will get a slap on the wrist." While some anons will say, "Barry Jennings, who was with another man, and were almost blown up by an explosion on their way down BEFORE either of the towers collapsed. They were helped by the firemen before they left (first tower collapse). This happened a second time with the second tower. The third time firemen came, they told them NOT to look down at their feet, he did so BUT felt he was stepping on people's corpses. Some time before he was supposed to testify, he died of natural and organic causes." 8b7833  No.11938734 >Facebook Loses \$119 Billion – or 19% – in One Day :^) c4cfb6  No.11938764 >>11938734 Big if true 3338e5  No.11938844 >>11937302 They may be semites anon, but we can't take obfuscation for fact, the jew must be called out. Here I'll show you an exampleas: jews did 9-11 kikes did 911 32826a  No.11938861 File: 2c19b5dd5466446⋯.jpg (1.38 MB, 1754x1240, 877:620, 911.jpg) File: c53b7346b518ee2⋯.jpg (768.59 KB, 1162x850, 581:425, Jonathan Kay gloats about ….jpg) File: e3934607fe766f7⋯.png (1.84 MB, 1034x2515, 1034:2515, 9.10.01 - Mossad has the t….png) File: c20cea08451393e⋯.jpg (51.04 KB, 547x589, 547:589, 'Dancing Israeli' married ….jpg) File: b72a961acc74d28⋯.webm (7.59 MB, 640x360, 16:9, Israel Did 911 - All the ….webm) 5a1e13  No.11939174 YouTube embed. Click thumbnail to play. >>11938073 Barry Jennings interview 16ee52  No.11939304 >>11936974 One of the best. 16ee52  No.11939322 >>11939174 :56 Started praying to Allah. 
0dc581  No.11939901 File: e5849e066e43452⋯.jpg (779.58 KB, 1750x969, 1750:969, e5849e066e434520b98835bf54….jpg) File: fcea5a064de9b29⋯.jpg (55.34 KB, 720x405, 16:9, M1wf3z8yo1_1280.jpg) File: bd9a066d760186f⋯.jpg (304.39 KB, 965x1082, 965:1082, 27f78567375b01ddb66662434b….jpg) >>11938861 I feel that a more comprehensive look into Zionist foreign policy is in order. I'm just letting my mind wander a bit and it occurred to me that the kikes probably engineered the Iran-Iraq War in the 80s. Gaddafi at the Arab Summit in 2008 rebuked the chamber for standing silent in the wake of Hiding Hussein's capture and death, warning that they would be next. I'm curious to know just how much Gaddafi knew then. Also, is Saudi Arabia at war with Yemen for geopolitical reasons or because they really are bought and sold by Isnotreal? Have the kikes struck a temporary accord with a KSA that has aligned geopolitical goals, or are they doing it fully on behalf of them? 5a1e13  No.11941842 >>11939322 (checked) >:56 Started praying to Allah. I noticed that as well but it is pretty much irrelevant compared to the rest of what he is saying about bldg 7. http://www1.ae911truth.org/en/faqs/633-barry-jennings-revisited >On 9/11, Jennings was the Deputy Director of the Emergency Services Department for the New York City Housing Authority. >He and Michael Hess, the New York City Corporation Counsel, were rescued from WTC Building 7 before it collapsed at 5:20 p.m. >On several occasions, Jennings stated that an explosion trapped them in WTC Building 7 and that he continued to hear explosions throughout the building until they were saved. >As reported in October 2008, Jennings died on August 19, 2008. >Thus, the question emerges: Has the potential legal power and value of Jennings’ testimony been lost forever? 074109  No.11941854 >>11937232 >all the jews that worked there, though… 3000 in the area (according to Israeli press) but only 3 died. Almost like they were warned or something….. 
Still I suppose it beats 3000 being in the area and 6million dying. Arab oil? You mean Saudi Arabia aka Southern Israel? 4f8c86  No.11941971 >>11932331 https://mega.nz/#!YtAHVKgb!Dsvi882KgpWGDtMtrrX0nGVb0dzdTRr-VDYZg9R6qC8 54dfe0  No.11942457 >>11941854 Actually the Harzet (don't know how to spell it) newspaper reported it the advance warning. However, the Jerusalem Post later did with the 4,000 figure, which makes me wonder if that number was used to discredit 911 Truth. Also, there is a newspaper in the US I think, that interviewed a jewish CEO who says he was among those warned. The email said, "don't come to the ETC on the 23rd of Elul (Sept 11). b02d3b  No.11942470 >>11942457 >CEO You mean Senator Al Franken. 3c64b0  No.11942481 YouTube embed. Click thumbnail to play. >>11932331 I've been telling everyone this channel is the best, every video. They look at the thumbnail and think it's the same old shit. I'll lead you right to some of the goods. Start at 9:30 and trust me, all the videos deliver. some right to the uniq Start at 9:30 8b7435  No.11942583 (((Larry Silverstein))) 8b7435  No.11942584 >>11942583 (((The jew call))) 3ba3f1  No.11942597 File: cf37e36e42adc37⋯.png (296.94 KB, 640x265, 128:53, ground zero 2001 monolith-….png) >>11932387 >>11932348 >It was done by Jews and Masons in finance and the MIC. >It was done to hide crimes committed by the Clinton Admin. It was done as part of a deal with Saudi Arabia. Clinton set the up deal before he left offic It was way more esoteric than that, Anons. I'm thoroughly convinced that those towers were built to fall. 
4e8eb9  No.11942712 File: daf3a21883f016b⋯.jpg (206.22 KB, 437x722, 23:38, 7ef9d676b5f610a459bb07d038….jpg) File: aa76768e1dfa242⋯.jpg (88.47 KB, 599x742, 599:742, aa76768e1dfa242dd54696a115….jpg) 5a1e13  No.11943323 File: de82c9aba6ea8fb⋯.png (333.65 KB, 854x1256, 427:628, Screen Shot 2018-07-31 at ….png) File: 2ccfdcf26668812⋯.jpg (49.5 KB, 600x408, 25:17, 71d65271c25c5660ca439fc180….jpg) File: 29917173e889c31⋯.jpg (2.21 MB, 2208x1693, 2208:1693, wtc12.jpg) >>11941854 >3000 in the area (according to Israeli press) but only 3 died. Almost like they were warned or something >>11942457 >Actually the Harzet (don't know how to spell it) newspaper reported it the advance warning. To both anons odigo instant messaging sent warnings to all/most/many jews in the towers that morning. Odigo was similar to ICQ or AOL IM at the time but I had never heard of it at the time and have never used it since. Odigo was/is an israeli based company and it appeared that all the messages generated were from the company server and mass messaged. Another little tidbit anyone can look see and find. http://www.bollyn.com/the-odigo-warnings-the-4000-israelis-saved-on-9-11/ >>11942597 >It was way more esoteric than that, Anons. I'm thoroughly convinced that those towers were built to fall. I am not, second picture tells the story 3ba3f1  No.11943359 >>11943323 >I am not, second picture tells the story Elaborate. What am I looking at here? e4c861  No.11943371 hey dudes i remember seeing this happen live on cnn (im in the balkans) and everytime i see a pic of the twin towers i feel like im seeing the correct reality, it gives me a weird feeling of nostalgia as if i was supposed to see them IRL. i admit as a kid i loved to read about skyscrapers and see pictures of them, but everytime i see pictures of the towers especially on the day of the event it feels like the day was perfect and every photo feels kind of alive. Sorry had to get this off my chest. 
c1afc2  No.11943482 >>11932331 The redpill is very simple: the government story is impossible, stupid, and inconsistent with observed evidence. Buildings don't collapse at free-fall speeds without a controlled demolition. JET FUEL CAN'T MELT STEEL BEAMS. 5a1e13  No.11943559 >>11943359 >Elaborate. What am I looking at here? World trade center construction photos before the skin is put on. Count over 4 columns from the left and look at the man standing there, if he is 3' wide how wide are the box columns he is standing next to? <obviously not built to fail had the jets caused them to fail they would have fallen like a tree being cut down, sideways stemming from the point of impact with a huge ass chunk remaining intact as it fell away and the bottom would have been left standing just like a tree trunk 666163  No.11943724 YouTube embed. Click thumbnail to play. The (((High-Fivers))) aka dancing Israelis. Video is annoying as fuck, but it has decent anti-(((debunk))) material. Nothing new really, just a bit in depth. 666163  No.11943763 YouTube embed. Click thumbnail to play. >>11942470 >>11942457 the senator (((al franken))) thing is possibly disinfo, at least it seems that way >vid related, just to know what we're talking about the current trend is that he was a comedian and was making fun of conservatives and muslims… a faggot who loves Israel more than the country he's "serving" either way 93e02e  No.11943776 YouTube embed. Click thumbnail to play. 9/11 TRUTH Donald Trump's Good Friend Larry Silverstein is a LIAR 2c4cff  No.11943787 >>11943763 The next line is something like "Here's what actually happened" or something like that. It's obviously a joke. Interestingly enough, the same person responsible for some of the most easily refuted claims in Loose Change was circulating that quote a year or two ago. 3ba3f1  No.11943829 File: 762f78c8a7e3984⋯.jpg (13.7 KB, 236x261, 236:261, e1a7a3b6149ce4acccf1ca7402….jpg) >>11943559 Ah. 
Let me clarify; I don't think the towers were built with poor construction, I think they were built to be purposefully brought down as a ritual sacrifice. 5a1e13  No.11943910 File: 9fa26100fbf793c⋯.jpg (181.14 KB, 1000x889, 1000:889, gallery3-4.jpg) >>11943829 > I think they were built to be purposefully brought down as a ritual sacrifice. No Average common architects, engineers and such designed the towers. None of them included the "ritual sacrifice flaw" as a design intent. Whoever blew the buildings up threw the veritable kitchen sink at them otherwise they would not have come down. Search the rubble pictures not a office chair, no desks, no filing cabinets nothing even resembling what would be numbering in the 100's on each floor of a 100 story tower. It is kind of like this picture of an airplane crash site where are the seats the wings the fuselage the rudder the landing gear the jet turbines passenger suitcases? 8d6136  No.11944247 File: 91a2f6627f03354⋯.jpg (80.7 KB, 800x603, 800:603, y.jpg) >>11936974 Was there ever anything resembling an explanation for where the cash went? 7e97b8  No.11944282 >>11943323 Is there any source for the Odigo thing other than Bollyn? I read his article and he only refers to "published sources." I would think that there would be a log somewhere. 5a1e13  No.11944342 >>11944282 >Is there any source for the Odigo thing other than Bollyn? Did you look at the google page screencap I posted? You can find tons of both sides pretty much, scopes saying it is all false of course and many others allege it is fact. If I remember correctly there were 0 israelis killed but NYC jews did not fair as well so then it was yah but a lot of jews were killed so this is not true. I have not looked into this in a long time anon I figured out all I could then got burned out and quit. 
0f373d  No.11944489 >>11943910 The chemical evidence, air spectrographs done by the usgs immediately after and ongoing for several months, prove that a fission weapon was used. The tritium levels alone are irrefutable, nevermind the many other byproduct isotopes they found. The fires were regenerating ffs, weeks later, after crazy rainstorms and however many tons of sand, water, and who knows what were dumped on it. Read "dust" by jeff praeger and look at the research of dr.cahill, USGS. I bet it's scrubbed at this point. 5a1e13  No.11944594 >>11944489 >The fires were regenerating ffs, weeks later, after crazy rainstorms and however many tons of sand, water, and who knows what were dumped on it. I distinctly remember the news reporting they could not put out the underground fires after a month and literally pouring over a million gallons of water on the pile. I still think and every thread that comes up I state the same thing, it was not just one method to blow the towers up. They used everything possible to ensure such a giant shit show for TV that no one would forget. There is evidence for thermite/thermate with the cutting of I beams. There is evidence for conventional explosives with windows far below the collapsing floors being blown out ahead of the of the failure zone. Also I know I will bring a ton of shit down on me for saying this I cant outright dismiss Judy Woods as well. Some of the giant beams found are bent into insane shapes that are basically impossible to accomplish with just force alone IE no fracture cracks along the flange and bent into a horseshoe. So nukes as well, sure why not. They totally scrubbed all the physical evidence under immense security to ensure that figuring out what happened at this point will be close to impossible. PS I am getting fucking tired of having to fill out captcha 3 times for each post. 17ee25  No.11944651 >>11944342 >>11944282 From what I gathered, the ORIGINAL claim was 2 people were notified. 
Then, when this gathered attention, the (((Jerusalem post))) said it was 4,000 people. The latter seems to be disinfo based on the lack of info, but the first 2 jews warned (warning was in Hebrew) seems to be legit based on the kikes kvetching and trying to ridicule it. 17ee25  No.11944662 ((( >>11944489 ))) >muh mininuke Are you that ugly kikess spreading that misinfo? What a (((coincidence))) that the professor that studied the WTC dust and discovered nanothermite was fired, but (((you))) managed to keep you position despite that and other scrap (hologram planes, (((attacking))) nanothermite findings, space laser causing collapse). 17ee25  No.11944664 >>11944594 The fires lasted until Christmas, more than 3 months after the (((attack))). 17ee25  No.11944665 >>11944247 >2.3T Just for that year 17ee25  No.11944667 They tried the same in Mx 17c11e  No.11944671 File: c77af5217ac51d2⋯.jpg (63.64 KB, 600x600, 1:1, amerimuttposter.jpg) >>11932907 >my latino cousin This is a board for white nationalists. Not amerimutts. 45846d  No.11945552 File: e1328ebee985e95⋯.jpg (3.31 MB, 5464x3924, 1366:981, wtc-74 1 and 2 core visibl….jpg) File: 733373bacc86244⋯.jpg (3.2 MB, 5464x3924, 1366:981, wtc-73.jpg) File: 6dc7b33e6577f75⋯.jpg (3.16 MB, 5464x3924, 1366:981, wtc-72.jpg) Rare high-res spire pics, you're welcome faggots. I have them in original quality (17mb+ each) if this is critical I can upload them. Have compressed slightly to 90/100 jpg quality. But the easiest way to debunk is using conservation of energy principle applied to WTC7 using NISTs own data. WTC7, a 47 story building owned by (((Larry Silverstein))) and occupied by various letter agencies and (((ENRON))) investigative records, fell at NIST admitted free-fall. You can measure this yourself off the videos available. Free-fall is a term for acceleration at gravitational constant, normally an average figure of 9.8ms-2. 
NIST admitted at least 2.2 second free fall occurred, other data sources confirm most of the collapse sequence was free-fall http://www.youtube.com/watch?v=Ml_n5gJgQ_U, a much larger figure than 2.2 seconds. However to keep this official and simple, we'll use NISTs own 2.2 seconds. We have 9.8ms-2*2.2seconds which equals 23.73m of fall. In a bottom up collapse this translates to over 5 floors with 4 meters per floor height given. So over 5 floors, hundreds of steel columns, hundreds of desks, chairs, computers, cabinets, files, hundreds of tonnes of rebar concrete floor and other construction material provided no resistance to the falling mass and were crushed in less time than it takes to read this italicised text. A free fall collapse violates the law of conservation of energy, which states that energy cannot be created nor destroyed, merely changed from one form to another. To collapse at freefall you are using all the gravitational potential energy (potential energy from being raised above the earth in this reference) to accelerate downwards, leaving no energy to bottom-up crush and pulverise 47 floors of rebar concrete and hundreds of interlinked steel columns supporting the structure. To debunk this you need to do at least one of three things: A) Show where the energy came from to simultaneously destroy each floor one after another, during the collapse sequence of the entire structure - it could not come from the falling mass without violating the law of conservation of energy. B) Prove that the resistance of the structure is zero (again, where did the energy come from to cause dust clouds and other ejecta if not from falling mass, which would have slowed to less than free fall?) C) debunk the law of conservation of energy >In short; >2.2seconds free-fall = 23.73m of floor crushed, it provided zero resistance. >To do so requires a staggering amount of external energy input in order to not violate the laws of physics (particularly laws of thermodynamics). 
>What or where this came from, I do not know. Perhaps Dr Judy Wood was right in asserting scalar electromagnetic weaponry was used to dissociate chemical bonds, because I do not know of evidence of thermite use in WTC7 rubble, only that it was proven to be present for WTC1 and WTC2.This also explains the burnt out cars, fire trucks and weird holes in shit many blocks away. And to this date, no one has debunked the above, because to do so would debunk the laws of physics. Basically, NIST lied and falsified their data, even their simulations had to be 'sped up' as they didn't match the video evidence. AE911 truth is also a top scientific resource on this matter, I have personally seen Richard Gage deliver his talk and was an OG truther in the early to mid-00s before it became hijacked by OWS soros faggotry. ba0623  No.11945634 Jet fuel can't melt passports. a9137b  No.11946299 File: 4dcfd1a9bbeb677⋯.jpg (935.44 KB, 2100x1621, 2100:1621, roger-the-riveter.jpg) >>11944671 D&C shill 8bdcf1  No.11947717 >>11946299 Nothing to worry about if you're not a mutt. Otherwise, fuck back off to 4cuck 529253  No.11947788 >>11944671 fuck off, some of us have families all over the world. You can't control everyone you're related to, also, threads like >>11893036 make me think you're a D&C shill because (((9/11))) keeps happening in other countries as well. 529253  No.11947807 YouTube embed. Click thumbnail to play. >>11945552 You can actually see the walls of the towers being blown outwards >vid related, David Chandler's channel 6183a5  No.11948177 >>11943910 >None of them included the "ritual sacrifice flaw" as a design intent. >tubal cain my fellow cocksuckers 3482e1  No.11949130 File: 263f706110e6c34⋯.png (414.67 KB, 745x453, 745:453, Capture.PNG) >>11936974 What is being pictured here? 
c73b7b  No.11949367 File: a3890550d0aa2d8⋯.jpg (45.62 KB, 603x371, 603:371, just look at the time.jpg) File: fd690a3ea11ac8f⋯.jpg (143.65 KB, 750x1000, 3:4, Rockefeller-Newsweek-1967.jpg) File: d2bbf5a52a1a55d⋯.jpg (35.04 KB, 481x230, 481:230, david_rockefeller_i_stand_….jpg) File: bbb0dedfa3222d4⋯.jpg (43.63 KB, 634x450, 317:225, 3E75E5FF00000578-4333408-i….jpg) File: b66fe165db9dea5⋯.jpg (343.84 KB, 1800x1142, 900:571, IX XI.jpg) They knew c73b7b  No.11949394 File: 1d986e3df17dac4⋯.gif (3.76 MB, 379x360, 379:360, 20180705_224411.gif) File: 24d4bc4da1a8241⋯.gif (3.4 MB, 379x360, 379:360, 20180705_224954.gif) 3338e5  No.11949407 >>11949130 I believe that was the FBI picking up debris all over the pentagon surroundings. 2266c5  No.11949797 File: 215f4b7854354ad⋯.mp4 (8.42 MB, 320x240, 4:3, unconditional support of i….mp4) When I was a child I used to live near one of the few aeronautical colleges in the U.S. I remember seeing dark skinned men coming and going from there regularly which was strange to me because this was an all white neighborhood. It was only later that I learned they were Arabic men. This would have been back in the '70's. The point being to say that Arabs are all illitarate cavemen who couldn't have possibly pulled off such an attack is nonsense. They were planning this, one of many potential attacks I'm sure, for decades. It was simply payback for 50 years of The United States unconditional support for the little criminal state of Israel. 2266c5  No.11949817 >>11949797 >It was simply payback for 50 years of The United States unconditional support for the little criminal state of Israel. This is what they really don't want you to focus on. Even the "jews did 9/11" meme is just a clever too fantastic to believe misdirection to keep you from realizing that the jews and their american shabbos goy really are responsible for 9/11. 
2266c5  No.11949833 And unless you have the I.Q of a fucking nigger a one hour documentary on PBS complete with commentary from the engineers who designed the buildings on how and why they collapsed should suffice. why building #7 collapsed is shady but that's another story 0dc581  No.11952142 File: 98bae9656f09be7⋯.gif (680.27 KB, 500x268, 125:67, p5xxprlixY1qifyvs_540.gif) >>11936974 >a hijackers passport was found blocks from the WTC crash center >if you can believe that File: 6fea8a0b918d106⋯.jpg (107.19 KB, 640x640, 1:1, 6fea8a0b918d1064804fc9b158….jpg) File: daf3a21883f016b⋯.jpg (206.22 KB, 437x722, 23:38, 7ef9d676b5f610a459bb07d038….jpg) File: 478ecb3c4712b5c⋯.jpg (517.94 KB, 866x651, 866:651, 478ecb3c4712b5c88cd8d01aa5….jpg) File: 83aaf3b006819c8⋯.jpg (8.89 KB, 343x147, 7:3, 1415206807282.jpg) b87ce4  No.11953235 YouTube embed. Click thumbnail to play. The sarcastically titled "9/11: A Conspiracy Theory" by James Corbett (the linked VIDEO) is a quick and sometimes hilarious review of all of the serious questions that the government never answered about the WTC attack in 2001. Ryan Dawson's excellent video "War by Deception" is chock full of evidence of American Jews and Israel being behind that 9/11 attack. Here it is on youtube -→ https://youtu.be/vl2biXvyGCs That video is a full two hours long, but you can't not realize that the Zionist state committed that murderous attack on America. So - 1st video 5 minutes. - - 2nd video 2 hours. Finally, Chris Bollyn (bollyn.com) has done the best job of investigative journalism. He names the "American" Jews and Israelis who did it in his books, but his interview with Sean Stone on Buzzsaw will give you more than enough information to know for certain that this was a treacherous kosher atrocity against America. https://youtu.be/00wJ6dhaKzI The Zionist crime against the USA that began on September 11th, 2001 continues even today - - in the form of ongoing US expenditures of soldiers and money in the Middle East -. 
Ryan Dawson's "War by Deception" is killer good… 0dc581  No.11953353 • https://www.ae911truth.org/evidence/technical-articles/articles-in-the-journal-of-9-11-studies • http://911speakout.org • http://www.bollyn.com Here's some written resources for curious anons. Bollyn has a Christian twist (Synagogue of Satan and all that) that makes his writings palatable to evangelicucks. Be wary of CIAnigger misinfo—9/11, like JFK, are the most damning conspiracy theories (in the literal, not pejorative, sense) for the US government, esp. the neocon-bureaucrat-Wall St financier-military-industrial complex, which is why they invest so much in well poisoning. 080790  No.11953415 File: b8d67d03083e68a⋯.jpg (94.66 KB, 1008x627, 336:209, Jesse Ventura.jpg) cf1987  No.11953670 YouTube embed. Click thumbnail to play. >>11953235 I still think the best documentary to show somebody (aside from something short like the Corbett video) is Missing Links. Dawson has good information, but he's a terrible filmmaker, JAM Jr. on the other hand… 8f2ff1  No.11953715 I remember finding a video of a woman making the connection to Rothschilds and numerology, it was really good shit, I can't find it now. Anyone know what I'm talking about. cf1987  No.11953723 YouTube embed. Click thumbnail to play. >>11953715 Was it Ring of Power? c39ca1  No.11953765 >>11943910 >not a office chair, no desks, no filing cabinets nothing even resembling what would be numbering in the 100's on each floor of a 100 story tower. Something survived though didn’t it. One of the hijackers passports was found intact atop a flaming pile of rubble. :^) 8f2ff1  No.11953817 >>11953723 No it was a 20 minute video I think maybe shorter of some lady sitting in her kitchen drinking coffee talking about 9/11 very coherently and piecing the puzzles together. Shit I have to find it. a586c4  No.11954141 YouTube embed. Click thumbnail to play. 
>>11947807 Holy shit a586c4  No.11954220 >>11943910 this, watch this video btw >>11947807 >>11953235 >Corbett Report he's the the only NON-(((fake news))) I can think of right now 5a1e13  No.11954894 File: e5185b2fd6b620c⋯.jpg (1.02 MB, 1945x3026, 1945:3026, Original ''FBI Ten Most Wa….jpg) Not a peep about sept 11 970ecc  No.11954897 5a1e13  No.11954930 File: fe27e907704f721⋯.png (25.23 KB, 400x95, 80:19, Screen Shot 2018-08-03 at ….png) >>11954897 >revised on October 2001 5f97d2  No.11954945 >>11954894 >Height 6'4" to 6'6" God damn, nigger was tall. >>11954945 Just because the kike media portrays all Iranians, Saudis, and the "terrorist race" as "short and dark skinned", it doesn't make it so. ebd3fb  No.11956835 5a1e13  No.11957715 File: ba8d3d806ba2772⋯.jpg (81.51 KB, 750x560, 75:56, 5230798c69bedd671ea48e7d-7….jpg) >Chuck Baldwin — via Russia Insider Aug 1, 2018. >What if everything we’ve been told about 9/11 is a lie? What if it wasn’t 19 Muslim terrorist hijackers that flew those planes into the Twin Towers and Pentagon? What if the Muslims had nothing whatsoever to do with the attacks on 9/11? What if everything we’ve been told about the reasons we invaded two sovereign nations (Afghanistan and Iraq) is a lie? >What if the 17-year-old, never-ending “War on Terror” in the Middle East is a lie? What if our young soldiers, sailors, airmen and Marines who have given their lives in America’s “War on Terror” died for a lie? What if G.W. Bush, Barack Obama and Donald Trump have been nothing but controlled toadies for an international global conspiracy that hatched the attacks of 9/11 as nothing more than a means to institute a perpetual “War on Terror” for purposes that have nothing to do with America’s national security? Would the American people want to know? Would the truth even matter to them? 
Continues here 670d98  No.11959213 File: dfc3ef256a8f893⋯.png (930.82 KB, 1789x598, 1789:598, 6A7D7C60-7E3A-4609-8065-88….png) File: e365dc4c6aadd7f⋯.png (340.58 KB, 580x451, 580:451, 5EF92CAB-A965-4E95-B007-F1….png) File: 660516aaf340a9c⋯.jpeg (37.55 KB, 480x364, 120:91, 365224A1-EED0-4DFE-9B3B-9….jpeg) File: 81dbddbcaea5998⋯.jpeg (146.4 KB, 859x680, 859:680, 863C980E-C5DB-4161-B366-8….jpeg) File: 3286593abcb229c⋯.jpeg (109.41 KB, 650x509, 650:509, 753865EC-A336-4160-9E91-A….jpeg) Dumping 670d98  No.11959217 File: 40547f4829283d2⋯.jpeg (242.23 KB, 1276x914, 638:457, CB66F91D-A63B-4401-A6FF-9….jpeg) File: 6c85bf1da966b38⋯.jpeg (146.56 KB, 1392x616, 174:77, 59CA9212-52BA-43AF-A033-D….jpeg) File: 5b49dd4a6082eaa⋯.jpeg (156.95 KB, 829x960, 829:960, 7C005E48-9F46-4C5E-BB1B-8….jpeg) File: 9d773bfb97b3907⋯.png (402.76 KB, 987x852, 329:284, B3AA5533-E519-4D66-8583-F0….png) File: 219c5624b5c2f8d⋯.jpeg (160.93 KB, 540x720, 3:4, 1A9615B5-4770-44C3-AEB8-8….jpeg) Polite sage 670d98  No.11959224 File: 689c18c00743197⋯.png (371.78 KB, 691x842, 691:842, C4B98260-33A0-4017-A42B-FD….png) File: cb848c2998d4605⋯.jpeg (71.29 KB, 450x262, 225:131, 93A1A52E-2D15-4DF9-8379-8….jpeg) File: dd9e571961eab51⋯.jpeg (390.95 KB, 1500x983, 1500:983, CA7D8806-4B05-40CA-ACE8-5….jpeg) File: 558a5df4512d71e⋯.jpeg (135.14 KB, 1000x750, 4:3, 666E8921-E9E4-40D1-972A-F….jpeg) File: 4f5f9ef02e88fb7⋯.jpeg (63.92 KB, 547x589, 547:589, CD7B7363-EF72-46CC-8BAF-7….jpeg) 670d98  No.11959228 File: 17224aeebed21d4⋯.jpeg (301.67 KB, 1920x739, 1920:739, 2E7AED16-DA3D-4371-8394-E….jpeg) File: 94ab8d9795e6e0d⋯.png (216.1 KB, 850x326, 425:163, 51EAC07F-DCA1-4724-A1E9-A8….png) File: 0e1c139237d759e⋯.jpeg (110.21 KB, 640x384, 5:3, E608E07F-D5AF-4D75-AAE9-9….jpeg) File: 3b665805445bc31⋯.jpeg (126.77 KB, 640x960, 2:3, 315AA1D3-9D71-40C7-88AE-5….jpeg) File: c82a82368449a8f⋯.png (112.92 KB, 629x288, 629:288, 1D214922-A20D-4F59-8F7D-F2….png) 25c7c9  No.11959558 File: 961f01551995502⋯.jpg (112.61 KB, 797x1000, 797:1000, 
6119523472_a1c2b70d9c_o.jpg) Something my architecture courses have taught me is the work of renowned architect Minoru Yamasaki. For a bunch of bookworms and interested patriots i always found weird you guys never address this fella, his work is pretty interesting: Very early in his career he designed the Pruitt–Igoe urban housing complex, and he made them "flexible" as in easy re-structure of the urban environment by means of re-arranging space via core buildings, which had to be taken down first. For his bad luck St. Louis shoved the place full of unemployed afro-americans and it was a mess. The interesting part comes when the gov realized this place was, indeed, a mess and decided to demolish some parts of it to renew it, around early 1968. They consulted Minoru and worked extensively with him on how to do it. By the end they decided to blow the place up entirely and that was it, circa spring 1972. As a note our boy here designed the World Trade Center 1 and 2 around 1962, after a couple of delays and groundbreaking, along with some tweaks, the construction of the towers themselves started around mid-1968. Both received their formal opening in early 1972. And after all this hard work, old Minoru finally broke the oinker to make his sweet house in Bloomfield, Michigan, circa late 1972. And for some reason he got in charge of designing and building a synagogue near his house too! Coincidentally he also made the airport terminal in Boston where planes got off in 9/11 to strike the towers 1c3020  No.11959596 >>11957715 hot stuff!! 
b1efd7  No.11959625 >>11959558 >synagogue near his house What a (((cohen)))cidence b87ce4  No.11960522 File: a63e2c838c6937b⋯.jpg (32.21 KB, 638x345, 638:345, 911 FUEL BURNT TWICE.jpg) > the source of what follows is the journalist Chris Bollyn (bollyn.com) but it's easier to read his reports at a mirror site because Bollyn's website is almost always under DDOS & other cyber attacks >here's a mirror for the text that follows - http://www.matrixfiles.com/CB/chapter-1-9-11-through-the-eyes-of-an-american-skeptichtml.html 5 Dancing Israelis "The story of the five men celebrating the destruction of the Twin Towers was dropped from the national news when it became known that they were not Arabs or Muslims from the Middle East, but Jews from Israel. Explosives in their Vehicle "The noteworthy fact that these men, who clearly had prior knowledge of the attacks, were in fact Israelis, and that they had been arrested at gunpoint with box cutter knives, multiple passports, and thousands of dollars in cash in a van that tested positive for explosives was only reported by Paolo Lima in a local New Jersey newspaper, the Bergen Record, the following day. "This important and timely information, however, was completely ignored by the New York Times and the other national mass media outlets based across the river in New York City. "I discussed the pertinent details with Lima by phone, and this important and suppressed story became the subject of my first article about 9-11. I realized then, during the first week after 9-11, that the mainstream news media was ignoring and covering up important information and evidence about the terrorist attacks. " ———————– pic related It's amazing to me that we are expected to believe that the jet fuel exploded in a burning fireball upon impact, and then somehow, by some uncanny process, burned again - enough to melt the core steel columns. . . 
smh 0dc581  No.11961593 File: 7a16ec28faa046a⋯.png (37.77 KB, 656x384, 41:24, baste.png) https://vimeo.com/ondemand/kushner This is one of Dawsons other films, with prices in pic included c73b7b  No.11961991 File: 9a1023a3cb7a7d6⋯.jpg (27.76 KB, 672x504, 4:3, 1531596591559.jpg) File: 64755784280c1e0⋯.jpg (48.28 KB, 621x475, 621:475, 1533157225360.jpg) 4fdbe0  No.11963492 >>11961593 4fdbe0  No.11963494 >>11960522 Great post b87ce4  No.11963526 File: f52679a3aadffce⋯.jpg (43.35 KB, 447x451, 447:451, 911 Thermite WABC_still Bo….jpg) >>11963494 Thanks, anon - Chris Bollyn lays it out clear and on the basis of facts. >>11961991 There is an interesting discussion of Professor Steven E. Jones' findings about the thermite at this link http://www.matrixfiles.com/CB/chapter-1-9-11-through-the-eyes-of-an-american-skeptichtml.html pic related - that's one of the main spots the thermite evidence is tied to This video lays bare the Israeli and American Jew crime of 9/11 In it the investigative reporter names Zionist individuals, identifies the companies, and reveals some curious behind the scenes actions that set up the destruction of the Twin Towers. The url is a mirror of a crucial youtube video – it works fine. Worth a look; You will never doubt the reality of what happened again. This video nails the kikes for their crime against America. https://videos.utahgunexchange.com/watch/genius-christopher-bollyn-explains-why-and-how-israel-did-9-11_RfUOSpp1isgtfZh.html 16ee52  No.11966111 >>11944247 >Was there ever anything resembling an explanation for where the cash went? Anyone know? 16ee52  No.11966138 >>11932331 Imagine being in the towers that day and dying that way. wtf. 104639  No.11966194 >>11944665 Most likely this went to sub subterranean shit. 
Like a few unregisterred D.U.M.B's b87ce4  No.11966257 File: 4d493cf05fdf47f⋯.jpg (535.35 KB, 1280x720, 16:9, Donald Rumsfeld tax myster….jpg) >>11966111 An "explanation" might be the wrong term, but there is a video on youtube somewhere in which Donald Rumsfeld is asked if anybody ever determined what happened to that money. He is standing in a hallway near an elevator, I seem to recall, and answering questions briefly. He says something along the lines of; "Oh, they found it… it wasn't really missing after all." And that's that. No details or particulars whatsoever. Sort of a classic "Nothing to see here. Just move along" kind of answer. 6d8189  No.11966263 File: 1974aa6bece5927⋯.jpg (113.2 KB, 1018x564, 509:282, 550a814a8c7a20c609e1d338f0….jpg) How many of these so called "truthers" "deny" the holocaust? How many of them would want to kill you if you were to dare suggest that it didn't happen? bdbbd9  No.11966285 >>11936292 So your theory is that CRT monitors are impervious to tomahawk missiles? 394685  No.11967251 File: 4465e6cce1d5f6f⋯.mp4 (4.91 MB, 360x360, 1:1, Comfy_Happening.mp4) >>11966263 the 911 truthers who are redpilled on (((who))) did it would probably be open to the idea. 5a1e13  No.11967274 >>11966263 >How many of these so called "truthers" "deny" the holocaust? It was because of studying 911 for so long that I stumbled on to the holohoax scam as well. Once you figure out sand niggers could not have pulled it off as described it was a short hop to who actually did and then all the other shit as well. 104639  No.11967283 >>11941842 oh shit, I just fucking realize he was talking about WTC7. Thats yuge! 394685  No.11967290 File: 2514cfdc61b1547⋯.webm (265.25 KB, 640x328, 80:41, strikeforce.webm) Look at what we did to the Middle East based on a lie. >Now imagine what we'd do to Israel based on the truth. 104639  No.11967292 YouTube embed. Click thumbnail to play. 
>>11967274 >Once you figure out sand niggers could not have pulled it off as described it was a short hop to who actually did and then all the other shit as well. They knew early on, the investigation was ended when Thomas Wales was murdered in his home 104639  No.11967303 >>11967292 also, just look at this fucking Journalism and compare it to modern bullshit "reality TV" they try and pass off as journalism nowadays. fucking pathetic 394685  No.11967311 File: d6cf0f23baaffaf⋯.gif (1.45 MB, 193x135, 193:135, 1531885026678.gif) >>11967283 >>11941842 >>11939174 >He was talking about building 7 Christ. 387e81  No.11967313 >>11932331 https://youtu.be/igX7Z8VstN4 104639  No.11967316 YouTube embed. Click thumbnail to play. >>11967311 theres more 104639  No.11967324 >>11967316 fuckin chills 3:40 in 394685  No.11967359 File: 857c97c9b85c0a5⋯.png (338.85 KB, 800x600, 4:3, 857c97c9b85c0a52c3f2853318….png) File: 8dae377e3bc1c43⋯.webm (1.44 MB, 450x472, 225:236, waning hope.webm) File: a8f875268b05bfd⋯.jpg (478.64 KB, 1046x1363, 1046:1363, ap03062605355.jpg) 7d680b  No.11967361 >>11966138 That was the purpose of this thread: never forget 911 or the Liberty 104639  No.11967374 File: 365f99972b6124b⋯.jpg (188.34 KB, 881x622, 881:622, Israeli Art Students.jpg) File: 046e03e1d2137c2⋯.jpg (84.19 KB, 300x229, 300:229, Israeli Art 2.jpg) 7f181e  No.11967421 File: b92d1fad7a7daa1⋯.png (612.9 KB, 648x359, 648:359, WTCmemorialJune2012.png) I don't know if this was posted before >"National September 11 Memorial & Museum" The National September 11 Memorial & Museum (also known as the 9/11 Memorial & Museum) is a memorial and museum in New York City commemorating the September 11, 2001 attacks. 
A memorial was planned in the immediate aftermath of the attacks and destruction of the World Trade Center for the victims and those involved in rescue and recovery operations.[5] The winner of the World Trade Center Site Memorial Competition was (((Israeli))) architect (((Michael Arad))) of Handel Architects, a New York- and San Francisco-based firm. Arad worked with landscape-architecture firm Peter Walker and Partners on the design, creating a forest of swamp white oak trees with two square reflecting pools in the center marking where the Twin Towers stood. Black cubes, israeli. Never bothered digging into this before but the pure coincidences start popping up real fast once you do. 394685  No.11967513 File: c7fbcb44624ed51⋯.gif (377.33 KB, 300x169, 300:169, 1532908025583.gif) >>11967421 not only that, but black cubes within black cubes 16ee52  No.11967731 >>11966257 Interesting. Why even bring it up the day before the attacks if they were just going to hush it up? b87ce4  No.11969063 File: f9e0c854c99277f⋯.jpg (90.78 KB, 1115x577, 1115:577, 911 Odigo Haaretz cro.jpg) >>11967731 "they" a pronoun I am one of those Goys who is of the opinion that neither Rumsfeld nor Geo.W. Bush were insiders who knew about the upcoming attack. After all, neither of them was born into the Chosen Tribe, and Bush is too damn dumb to be trusted with information… (frankly, if I were throwing a Surpise Birthday Party for a mutual friend, I would think twice about telling George Bush ahead of time.) and 'Rummy' Rumsfeld - the White House useful idiot's friend, is just too goofy. So think math sets and subsets. The "they" who knew Mossad and some Sayanim were soon to mass murder Americans by burning them alive or crushing them for Zionist Israel's benefit is a different set than the set of useful Goy idiots in government office. ————————– & unrelated pic - Odigo How about that Israeli company warning Israelis in NYC not to go to work in the Twin Towers that morning? 
Maybe just a Cohencidence… 5a1e13  No.11969410 >>11969063 >I am one of those Goys who is of the opinion that neither Rumsfeld nor Geo.W. Bush were insiders who knew about the upcoming attack. Seconded Cheney knew of the fuckery though Rice was the sacrificial fall nigger in case it some how it went tits up and Sec of Transportation Panetta soon to be cia head totally spilled his spaghetti with the Cheney in the WH bunker the plane is 20 miles out do the orders still stand testimony before the 911 committe. e98f78  No.11970019 best redpill? 2 airplanes + 3 towers = potato. 1a9168  No.11971544 >>11967316 Another smoking gun, no wonder he (((died))) 1a9168  No.11971555 >>11967421 According to Bollyn, Israelis like to have complete control over every aspect of their operations b87ce4  No.11974894 File: ec53132191f1ada⋯.jpg (38.41 KB, 640x480, 4:3, Bolton tells Obama.jpg) File: 3ac075f686ba0a0⋯.jpg (28.9 KB, 639x364, 639:364, Bolton 003.jpg) File: 8dc45a56ea310de⋯.jpg (30.5 KB, 609x359, 609:359, Bolton 006.jpg) >>11969410 Thanks for jogging my memory about that. Cheney gave that cryptic kind of answer and more or less outed himself as part of the operation. However, although you make a (damn good) valid point about Cheney, it doesn't mean that the Idiot-in-Chief was in the know. Howver, the 'Americans' in the murderous anti-American plot were mostly jews BUT NOT ALL jews. For example, there was the goy politician who's wife was reported to have cell-phoned him from Flight 93 (impossible technically in 2001) - And "Israel 1st John Bolton" has roots as a Shabbas Goy that run back through 9/11 all the way to the PNAC "Project for a New American Century." 
2470d4  No.11974903 File: 2c8eb6648489457⋯.png (343.29 KB, 522x959, 522:959, John Bolton kike.png) >>11974894 >roots as a Shabbas Goy da7654  No.11977297 >>11974894 >>11974903 >(((they))) kill his wife >goes full zog Either a cuck or his wife was a thot 2d9695  No.11977768 >>11944662 Wtf are you talking about you fucking idiot. What a retarded comment. 2d9695  No.11977775 Will no one read dust by jeff praeger and research Dr Thomas Cahill and the air sample spectrometry that was done? I would like to hear someone actually address the evidence he gives. 2d9695  No.11977799 >>11944671 Imho there is strong evidence for the use of exotic nuclear weapons. Dr Thomas Cahill from the usgs was near gz and took air samples for analysis. His results are not explainable by any known/conventional demolitions/explosives. Read dust by jeff praeger and tell me what you think. 6a342c  No.11977846 >>11937469 Here is more on PTech: d69b14  No.11978079 >>11944489 >>11944342 >>11942712 Hi FSB, you're dumb as fuck. First, every country has a power elite that skims money off the top of their respective country's treasury. This has happened since Louis XIV and before. The Pentagon "accounting error" of a couple trillion dollars was a skim off the top of the Treasury. 9/11 diverted attention from it, and this shit was done by insiders, not just kikes. Putin skims off the top of the Russian treasury, but Russians are too weak to do anything to Putin, so they don't need a diversion. Second, why wouldn't you do 9/11 and blame it on sandniggers? The fall of the USSR left a power vacuum in the Middle East, and every FSB faggot shilling here forgot that the countries the US fucked up were involved with the AQ Khan network, which Saddam fucking was part of. Also ''Saddam was fucking up some of the best oilfields in the world but reinjecting extracted oil back into the fields, because the sanctions fucked up his oil technology. 
Iraq is the keystone state in the Middle East, so why not attack it before Russia gets its strategic power back? Look at how the US can't fuck with Syria now, because the Russians helped Assad. The US was smart to go into Iraq early and take it over. Fuck morals, this is geostrategic, nigger. Iraq was a giant pussy wanting to get fucked, and the US got it quickly and easily without a major army war. The US couldn't have conquered Iraq if the USSR was still around. George Bush I knew this shit, that's why he left Saddam in power, FSB nigger. Finally, the BRIC countries, plus Libya, Iraq and Venezuela, countries started to pull the whole anti-petrodollar movement. The US had intelligence on that shit early and took swift action to neutralize it. Any removal of the petrodollar was a direct attack on US geostrategic and economic stability. So naturally, the US neutralized any threat to its strategic interest, and Iraq was the easiest country to neutralize. 9/11 made sense to use as plausible deniability for the acquisition of Iraq, you FSB niggers. And exactly what happened to all the countries that fucked with the US on the petrodollar, like BRIC and Libya, Iraq and Venezuela? 1) Brazil - President Lula is in jail 2) Venezuela - Chavez was cancered, country is in shambles 3) Iran - Sanctioned and neutralized 4) Russia - Cold War 2.0 5) Libya - Qaddafi was killed 6) China - Trade war, Japs overtook then economically from a stock market standpoint Blame the kikes, but CIA knew about 9/11 because it made strategic sense for the US to attack Iraq and forward-position the U.S. vis-à-vis the terrorism narrative against its new strategic enemies. We run a containment strategy against Russia and China, fuck the both of you. 
4fb6f8  No.11978224 >>11933230 the van was completely white with only the text logo ba9630  No.11978282 File: 9ccaafecea48bb1⋯.png (1.38 MB, 1920x1080, 16:9, Screenshot (8).png) Evidence that ties Cheney & MOSSAD to 9/11 3bf651  No.11978485 File: 86934956a6d4e3d⋯.jpg (57.39 KB, 600x443, 600:443, wtcsecuritymarvin.jpg) >>11969410 Thoughts on pic related? Also what was Stratasec's role as opposed to Kroll Inc's? 3d1ec7  No.11978506 >give us all your 9/11 goy because we are worried we missed something that can fuck us. umm it was the jews it has always been the jews almost literally everything,9/11 100% jew with some lapdog cousin muds now fuck off 3d1ec7  No.11978515 or jew sliding with 911 because they know they got away with it and in that case same fuck off b87ce4  No.11980640 File: 3d54720384a832f⋯.jpg (36.83 KB, 480x360, 4:3, 911 Bldg 7 sign.jpg) File: 0d643ca9a523f28⋯.png (250.33 KB, 650x276, 325:138, 911 bldg 7 smoking gun.png) File: 5a8b3b9d1fb304e⋯.jpg (56.34 KB, 400x306, 200:153, 911 bldg 7 BBC 2.jpg) the destruction of WTC building 7 is mysterious in several ways 1) no plane struck Bldg 7. yet the building mysteriously collapsed Silverstein's tall tale 2) Lucky Larry Silverstein during a TV interview "explained" that Bldg 7 was the subject of a conversation between himself and the NYC fire chief during late afternoon of Sept. 11, 2001.. Silverstein claimed that they decided to "pull it" (demolish it) because there had already been much loss of life. Fire Chief says No However, the NYC fire chief said he never had any such conversation with Silverstein. Silverstein's story MUST BE a Lie Silverstein is clearly lying anyway, because profession demolition engineers estimate a minimum of four days time would be needed to drill holes and set and wire charges - it takes much preparatory work to bring down a building of that size straight down, in its own foundational footprint. 
There is no way the fire chief could have accomplished what Lucky Larry Silverstein alone claims the chief accomplished. BBC Crystal Ball Sees the Future 3) a BBC TV reporter while on air that fateful day announced that building 7 had collapsed. However, building 7 was still standing and was visible behind her at the time she told the TV audience the building had fallen. Only 20 minutes later did the building fall. Mum's the Word 4) There was no mention of WTC building 7 - a 45 story tower - in the "9/11 Commisssion Report". The Buddhists have a maxim; "Sometimes silence is the greatest revelation." So WTF really happened to the mysterious WTC building 7? In the interview linked above on this thread, Chris Bollyn points out that the controlled demolition explosions that brought down Towers 1 & 2 required careful timing, a set sequence of ballistic events, on a precise moment by moment schedule… Electronic control circuits with somebody running the program were likely Mossad Operations Center probably was in Bldg 7 Bollyn thinks the Israeli Mossad-niks who were flipping the switches were in Bldg 7 and watching the goings-on from that vantage point. In such case Building 7 was demolished as the last event because evidence of the Mosssad operations center had to be destroyed'''. f6d061  No.11980891 File: abe78b3f24a7bb9⋯.png (97.66 KB, 412x253, 412:253, Ali Mohamed US Passport.png) >>11932331 Ali Mohamed: The triple agent who helped kill Sadat and Kahane, create Al Qaeda, join the US Special Forces, and milk the FBI and CIA for his own benefit. He has never been sentenced despite a confession nearly 20 years ago. He was also the author of the AQ handbook. According to Sibel Edmonds Ali Mohamed has been on assignment with NATO’s Gladio B operation outside of prison ever since. 
https://en.wikipedia.org/wiki/Ali_Mohamed http://archive.li/hgbAh http://archive.li/vsHN https://www.mediapart.fr/en/journal/international/220817/dark-world-islamic-state-groups-secret-services?page_article=3 http://archive.li/pSLiw https://apjjf.org/2013/11/29/Peter-Dale-Scott/3971/article.html http://archive.li/lpobo https://apjjf.org/-Peter-Dale-Scott/3971/article.pdf http://peterlance.com/wordpress/?p=7984 http://archive.li/6cFSY https://www.newsbud.com/2017/09/06/newsbud-exclusive-agents-of-terror-on-government-payroll-part-ii-ali-mohamed-2/ http://archive.fo/vaUTg https://vimeo.com/223012226 https://hooktube.com/watch?v=faj6v4A6A4o Abdullah Azzam https://en.wikipedia.org/wiki/Abdullah_Yusuf_Azzam http://archive.li/KSMEq Anwar al-Awlaki https://www.newsbud.com/2017/08/26/agents-of-terror-on-government-payroll-part-i-anwar-al-awlaki-2/ http://archive.fo/zt2JF MAK: The predecessor to AQ. https://en.wikipedia.org/wiki/Maktab_al-Khidamat http://archive.li/B1cU3 http://www.historycommons.org/entity.jsp?entity=mohammed_loay_bayazid_1 http://archive.fo/tUosb https://en.wikipedia.org/wiki/Mohammed_Loay_Bayazid http://archive.fo/c3y0m f6d061  No.11980901 File: 40a9c0d6ab2473c⋯.jpg (111.16 KB, 399x402, 133:134, Gina Haspel.jpg) CIA front company MEGA Oil brought thousands of AQ into Azerbaijan. This led to a coup installing a pro-US government. Current CIA Director Gina Haspel was Chief of Station at Baku during this time. 
https://www.panorama.am/en/news/2015/05/30/azerbaijan-terrorism/41534 http://archive.fo/GtRwH https://www.panorama.am/en/news/2015/04/27/putin/64525 http://archive.fo/DVdCn https://ceasefiremagazine.co.uk/whistleblower-al-qaeda-chief-u-s-asset/ http://archive.fo/o1vA4 https://mediamax.am/en/news/foreignpolicy/28397/ http://archive.fo/iJuQ3 http://archive.fo/qaBJx http://archive.fo/AMrVa https://www.globalresearch.ca/al-qaeda-u-s-oil-companies-and-central-asia/762 http://archive.fo/1ZnJL http://www.historycommons.org/context.jsp?item=a0798mabrukarrest http://archive.fo/LZcyO https://news.antiwar.com/2017/08/27/us-allies-used-diplomatic-flights-to-send-weapons-to-terrorists/ http://archive.fo/hzqp7 Intelligence Support Activity (JSOC’s CIA) worked with AQ and Hezbollah in Bosnia according to Dutch intelligence. Also AQ was in Kosovo. http://emperors-clothes.com/analysis/deja.htm#dutch http://archive.fo/DMhjB http://emperors-clothes.com/analysis/used.htm http://archive.fo/VWJnn https://www.scribd.com/document/36116309/Intelligence-and-the-War-in-Bosnia-1992-1995 http://emperors-clothes.com/bosnia/izet.htm http://archive.fo/ZEo44 http://emperors-clothes.com/news/binl.htm http://archive.fo/j7aVd http://archive.fo/JtYDo http://iacenter.org/bosnia/ciarole.htm http://archive.fo/rZz1U https://apjjf.org/2011/9/31/Peter-Dale-Scott/3578/article.html http://archive.fo/iuZOB Unholy Terror by John R. Schindler f6d061  No.11980905 File: df2529c48f0c53f⋯.jpg (51.37 KB, 624x624, 1:1, Putin Come On.jpg) Chechen rebels were backed by the CIA, neocons, and AQ. FISA warrants against 9/11 hijacker to-be Moussaoui were rejected by FBI agent Mike Maltbie because Chechens weren’t a recognized foreign power, but rebels. In 2001 0 FISA warrants were rejected by FISC. 
https://carnegieendowment.org/1999/12/10/u.s.-role-in-chechnya-pub-182 https://web.archive.org/web/20170802030432/https://carnegieendowment.org/1999/12/10/u.s.-role-in-chechnya-pub-182 https://www.theguardian.com/world/2004/sep/08/usa.russia http://archive.fo/nbTeU https://en.wikipedia.org/wiki/American_Committee_for_Peace_in_Chechnya http://archive.fo/K3RKT https://www.sourcewatch.org/index.php/American_Committee_for_Peace_in_Chechnya http://archive.fo/KspCb https://rightweb.irc-online.org/profile/american_committee_for_peace_in_chechnya/ http://archive.fo/SuYDn http://archive.fo/wo4EZ https://www.dailykos.com/stories/2013/4/30/1205920/-American-Committee-for-Peace-in-Chechnya http://archive.fo/kkloE http://www.historycommons.org/entity.jsp?entity=american_committee_for_peace_in_chechnya_(acpc) http://archive.fo/TlCUf http://archive.fo/mQSh1 https://www.theamericanconservative.com/2013/04/24/chechens-and-american-hawks-an-interesting-alliance/ http://archive.fo/6iLUy http://original.antiwar.com/justin/2013/04/23/the-russians-warned-us-why-didnt-we-listen/ http://archive.fo/MCGu0 https://www.rt.com/op-ed/chechen-terror-media-draitser-153/ http://archive.fo/L9H9V http://archive.fo/F9zvS https://consortiumnews.com/2013/04/19/chechen-terrorists-and-the-neocons/ http://archive.fo/rZUp6 https://www.newsweek.com/terrible-missed-chance-67401 http://archive.fo/AtlU5 http://washingtonsblog.com/2013/04/u-s-support-chechen-terrorists-fighting-russia-just-like-we-supported-al-qaeda-to-fight-russia.html http://archive.fo/mD8IF http://www.boilingfrogspost.com/2013/04/19/usa-the-creator-sustainer-of-chechen-terrorism/#comment-9932 http://archive.fo/Nt0HV http://www.boilingfrogspost.com/2011/11/22/bfp-exclusive-us-nato-chechen-militia-joint-operations-base/ https://archive.fo/lzlxm https://www.bbc.com/news/world-europe-32487081 http://archive.fo/RLJMK http://news.bbc.co.uk/2/hi/europe/503804.stm http://archive.fo/JF7BD 
https://www.nytimes.com/2001/12/09/world/war-on-terror-casts-chechen-conflict-in-a-new-light.html http://archive.fo/tjybI http://www.us-uk-interventions.org/Chechnya.html http://archive.fo/KJ4i9 https://orientalreview.org/2010/11/19/chechen-uprising-was-provoked-by-cia/ http://archive.fo/uuGZ2 https://www.reuters.com/article/us-russia-chechnya-cia/russias-chechen-chief-blames-cia-for-violence-idUSTRE58N5S120090924 http://archive.fo/FXX7F https://www.lewrockwell.com/lrc-blog/chechnya-the-cia-and-terrorism/ http://archive.fo/TwzgG https://www.veteranstoday.com/2013/04/29/cia-financing-of-chechen-and-other-caucasus-regional-terrorists/ http://archive.fo/PM75T http://www.historycommons.org/timeline.jsp?complete_911_timeline_al_qaeda_by_region=complete_911_timeline_islamist_militancy_in_chechnya&timeline=complete_911_timeline http://archive.fo/fHp6v https://en.wikipedia.org/wiki/Ibn_al-Khattab http://archive.fo/OErUb http://archive.fo/wpdww http://theriseofrussia.blogspot.com/2010/11/it-is-now-known-that-twenty-year-old.html http://archive.fo/0QCgM f6d061  No.11980910 File: 8ce6f7fd30f78f0⋯.jpg (176.02 KB, 1507x515, 1507:515, Bojinka Plot and 911.jpg) 9/11 Bill Moyers reports Bojinka plot etc. FBI agent Kenneth Williams wrote a memo in July of 2001 warning about AQ doing flight training in the US. The CIA stonewalled his investigation. http://articles.chicagotribune.com/2002-09-25/news/0209250284_1_phoenix-memo-fbi-agent-kenneth-williams-phoenix-fbi http://archive.fo/4eXHR http://archive.fo/UEn61 http://www.cnn.com/2002/US/05/21/phoenix.memo/index.html http://archive.fo/Xdieg https://www.dcourier.com/news/2016/nov/18/911s-phoenix-memo-fbi-agent-author-prescott-warnin/ https://web.archive.org/web/20161119125443/https://www.dcourier.com/news/2016/nov/18/911s-phoenix-memo-fbi-agent-author-prescott-warnin/ https://en.wikipedia.org/wiki/Phoenix_Memo http://archive.fo/Kw6Pd Egyptian Intelligence warned us that AQ was planning an attack in May of 2001. 
And again in August of 2001 that it was in the “operational phase”. http://archive.fo/Y1Egx The Taliban wanted to hand Bin Laden to the US in 1998, but we sabotaged the negotiations. That would have destroyed our pretext for entering Afghanistan which was already planned. The Taliban warned the US that AQ was planning an attack in July of 2001. It was a pipeline war just like Kosovo. https://www.foreignpolicyjournal.com/2010/09/20/newly-disclosed-documents-shed-more-light-on-early-taliban-offers-pakistan-role/ https://archive.fo/hU31Y f6d061  No.11980913 File: 2d77183732f9faf⋯.jpg (84.01 KB, 960x731, 960:731, Afghan Pipeline.jpg) Afghan Pipeline War https://en.wikipedia.org/wiki/Turkmenistan%E2%80%93Afghanistan%E2%80%93Pakistan%E2%80%93India_Pipeline http://archive.fo/a77f9 https://www.foreignpolicyjournal.com/2010/09/20/newly-disclosed-documents-shed-more-light-on-early-taliban-offers-pakistan-role/ https://archive.fo/hU31Y https://www.counterpunch.org/2001/10/23/afghanistan-war-and-oil/ https://web.archive.org/web/20150502162202/https://www.counterpunch.org/2001/10/23/afghanistan-war-and-oil/ https://rense.com/general89/afgh.htm http://archive.fo/6nFAt http://archive.fo/N2YfY. 
http://revcom.us/a/v21/1030-039/1035/caspian.htm http://archive.fo/ph3AA http://archive.fo/fSk1f https://www.thehindu.com/2001/10/13/stories/05132524.htm http://archive.fo/Flp9S http://news.bbc.co.uk/2/hi/south_asia/1626889.stm http://archive.fo/8GqgV http://www.historycommons.org/timeline.jsp?timeline=afghanwar_tmln&afghanwar_tmln_us_invasion__occupation=afghanwar_tmln_oil_pipelines_and_interests http://archive.fo/bvGeE https://www.outlookindia.com/website/story/pipeline-politics-oil-gas-and-the-us-interest-in-afghanistan/213804 http://archive.fo/Q5ESL http://www.laweekly.com/news/the-oil-war-2134105 http://archive.fo/zuTuB http://www.peterdalescott.net/q7.html http://archive.fo/k168w http://www.worldpress.org/specials/pp/pipeline_timeline.htm http://archive.fo/cVQm http://www.globalissues.org/article/276/oil-politics-in-central-asia http://archive.fo/v2pU4 https://www.theglobeandmail.com/globe-investor/investment-ideas/caspian-sea-oil-conspiracy-may-not-be-a-fairy-tale/article764337/ http://archive.fo/qdEIT http://archive.fo/5AwdQ https://en.wikipedia.org/wiki/Unocal_Corporation http://archive.fo/YKuMC f6d061  No.11980919 File: 70d1d31d178aa9a⋯.jpg (34.72 KB, 500x201, 500:201, Nina Brink 911 Short Selle….jpg) File: 91a546d873240b1⋯.jpg (42.88 KB, 449x471, 449:471, 911 Insider Trading.jpg) 9/11 short selling. 
https://www.foreignpolicyjournal.com/2011/03/02/black-911-a-walk-on-the-dark-side-2/ http://archive.fo/kOgUr http://archive.fo/ZSAYl http://archive.fo/cdz3l http://archive.fo/OZ70D https://www.nationalreview.com/2004/07/was-there-another-911-attack-wall-street-alexander-rose/ https://web.archive.org/save/https://www.nationalreview.com/2004/07/was-there-another-911-attack-wall-street-alexander-rose/ https://www.nu.nl/economie/121163/nina-brink-verkocht-aandelen-op-11-september-net-voor-ramp.html http://archive.fo/loU5O http://archive.fo/tTHEB http://archive.fo/7Py9j https://rense.com/general46/911.html http://archive.fo/Vr6OC https://www.zerohedge.com/article/sec-government-destroyed-documents-regarding-pre-911-put-options http://archive.fo/Dh5ql https://www.cbsnews.com/news/profiting-from-disaster/ http://archive.fo/P9FXb https://www.sfgate.com/news/article/Suspicious-profits-sit-uncollected-Airline-2874054.php http://archive.fo/CGCE http://archive.fo/Bn8J8 http://archive.fo/JCzGr http://911research.wtc7.net/sept11/stockputs.html http://archive.fo/UkjZX f6d061  No.11980923 File: 6cbe4183874a707⋯.jpg (187.27 KB, 1024x722, 512:361, 911 Omen Vice Magazine.jpg) 9/11 and the evacuation of the Bin Laden family from the US by the government. http://archive.li/MbN0i https://www.nytimes.com/2005/03/27/washington/world/the-reach-of-war-arranged-departures-new-details-on-fbi.html http://archive.li/OAaak https://www.democracynow.org/2003/9/16/cheney_claims_no_knowledge_that_white http://archive.li/iKU5c http://archive.li/DfmQk http://archive.li/tfXBn 2a2fed  No.11980925 >>11980891 Sibel Edmonds and newsbud is a known jewish disinformation outlet though. f6d061  No.11980939 File: 0e8724555e76393⋯.png (152.78 KB, 438x620, 219:310, CIA National Clandestine S….png) >>11980925 Post proof and I'll read it. 
Lucky Larry http://www.historycommons.org/entity.jsp?entity=larry_silverstein http://archive.fo/nqEte http://www.nydailynews.com/archives/money/gov-backs-wtc-developer-double-insurance-claim-article-1.485245 http://archive.fo/pdfqg http://archive.fo/arJZP http://articles.latimes.com/2004/dec/07/nation/na-insurance7 http://archive.fo/7Vc6s http://www.cnn.com/2004/LAW/12/06/wtc.trial/ http://archive.fo/c8v9V http://archive.fo/stIfY https://www.nytimes.com/2008/03/27/nyregion/27rebuild.html http://archive.fo/WCPdy https://www.reuters.com/article/us-usa-sept11-wtc-damages-idUSKCN0RH2MA20150917 http://archive.fo/nHB4u Tower 7 had a CIA NRD office that coordinated with banks, business, and other government agencies. CIA personnel were the first through the rubble. https://www.cbsnews.com/news/report-cia-lost-office-in-wtc/ http://archive.fo/IygdG https://www.globalresearch.ca/the-mysterious-collapse-of-wtc-seven/15201 http://archive.fo/inRDP https://www.gaia.com/lp/content/911-false-flag/ http://archive.fo/IvDx2 https://www.express.co.uk/news/world/736223/9-11-tower-Building-7-collapse-fire-conspiracy http://archive.fo/lF3BJ http://archive.fo/gQGak https://archive.fo/04EJ7 http://www.dailymail.co.uk/news/article-2056088/Footage-kills-conspiracy-theories-Rare-footage-shows-WTC-7-consumed-fire.html http://archive.fo/kkXVM f6d061  No.11980943 File: b95120ccacc8728⋯.jpg (219.17 KB, 1000x750, 4:3, Netanyahu 911 and Iraq.jpg) Israeli messaging company Odigo was warned about 911 prior to the attacks. https://www.haaretz.com/1.5410231 http://archive.fo/rl01M http://www.bollyn.com/the-odigo-warnings-the-4000-israelis-saved-on-9-11/ https://archive.fo/ovGWT http://www.historycommons.org/entity.jsp?entity=odigo_inc. 
http://archive.fo/mraQS https://rense.com/general66/pre11.htm https://archive.fo/l8rMw http://www.theinsider.org/news/article.asp?id=520 http://archive.fo/caHAB f6d061  No.11980951 File: 0f4b4d90afb3bdb⋯.jpg (145.42 KB, 700x926, 350:463, 911 Ash.jpg) The FBI involvement in the 1993 WTC bombing. https://www.nytimes.com/1993/10/31/nyregion/bomb-informer-s-tapes-give-rare-glimpse-of-fbi-dealings.html http://archive.fo/aZCuc The only film of the 9/11 plane crash was recorded by a man with a Masters Degree in Visual Studies from MIT. That means he’s a whiz at photoshopping. https://en.wikipedia.org/wiki/Luc_Courchesne http://archive.fo/HCygK Get Osama! Now! Or Else… Pre-9/11 article predicting pretext with Bin Laden. https://www.sott.net/article/280235-Get-Osama-Now-Or-else http://archive.fo/OsJaL Steven Emerson https://en.wikipedia.org/wiki/Steven_Emerson http://archive.li/EghUh 1994 Documentary on Islamic Terrorists in the US http://archive.li/lh7sg http://iona.ghandchi.com/emerson.htm http://archive.li/7Osk f6d061  No.11981090 File: cc1ffabf1a4cce3⋯.jpg (106.18 KB, 765x566, 765:566, John Bolton's Sex Scandal.jpg) >>11974894 >>11974903 >>11977297 Bolton is a literal cuck. In the late 1970s and early 1980s he attended a depraved sex club in NYC called Plato's Retreat. During this time he would bring men home from there to screw his wife, very likely raping her. She ran away on an international trip after this. https://www.rawstory.com/exclusives/byrne/larry_flynt_bolton_511 [defunct use archive] http://archive.li/7fZn7 https://web.archive.org/web/20050601040847/https://www.rawstory.com/exclusives/byrne/larry_flynt_bolton_511 I was also able to personally get ahold of two surviving attendees of this club who witnessed Bolton taking men home to screw his wife. 
b87ce4  No.11981745 File: f8b525851467530⋯.jpg (33.52 KB, 600x494, 300:247, hasbara Is v USA cro.jpg) File: 38b04486e8bd954⋯.png (318.69 KB, 777x777, 1:1, Jew Is.png) File: 56030988c380156⋯.jpg (43.69 KB, 656x522, 328:261, hasbara ACT_IL 02dou.jpg) Kikes are Flooding this thread with link lists - - WE'VE HIT A NERVE Guilty Guilty Guilty 2d9695  No.11990432 >>11977799 None of you fags will even comment on this evidence? Maybe I'm paranoid, but my comments seem to have been purposefully buried by the epic link nigger. 7d8a1d  No.11991137 >>11977799 Disinfo garbage 2d9695  No.11992363 >>11991137 (((Ignores evidence))) dd9ef5  No.11994051 File: a6d1f099de1ee91⋯.jpeg (39.6 KB, 567x540, 21:20, D0E13DC7-57AF-4B8C-A500-3….jpeg) >>11978079 >endless coinops is a good geopolitical strategy for oil. >trick all our most Chad men into going to war and getting killed, fucked up, or chronic illnesses that can’t be traced back to the war. >trust in CIAniggers >muh stratigic oil Day Of The Pillow for you boomer faggot. What are oil and gas prices now compared to 2001 you dumb gorilla nigger? GTFO you’re to fucking stupid to post here. Those trillions could’ve been spent on literally anything else and it would’ve been better for everyone except (((your))) greatest ally. Cesium nuclear and geothermal plants for example. Go to the wailing wall and man the glory hole you retarded treacherous fuck. b0cf9b  No.11994089 >>11994051 You do know we DIDN'T went to Iraq for our oil right? The only country that got cheaper oil was Israel, they get 75% of their oil from the Kurd region of Iraq. Really makes you think though, why is it that "we went to Iraq for oil" meme spread so quickly and uncontested? fb33cb  No.11994709 The real redpill on 9/11 The official story is perfect for White Nationalist purposes, because 1. It's a no-brainer that 9/11 could not have happened if we would simply not let any Muslims into our countries 2. 
Osama Bin Laden has said in numerous letters and interviews that the best thing for America to do (short of converting to Islam) in order to improve relations with the Muslim world would be to get a nationalist government that actually acts in the interests of the American people instead of serving the Jews. If you faggots had any brains, you would have used 9/11 to redpill normies about muslims and ZOG, instead of wasting your time on conspiracy theories that make you look like a weirdo. d45e58  No.11994743 >>11994709 I suggest you review the evidence of drone strikes and controlled demolition. September 11: The New Pearl Harbor (2013) d45e58  No.11994748 >>11937232 find out what happened to larry jennings after his video interview right after it happened. he went to work and there was nobody on the office floor! d45e58  No.11994753 >>11994748 >larry jennings *Barry Jennings b87ce4  No.11995085 File: bc61a1e1a4496e1⋯.jpeg (153.47 KB, 736x647, 736:647, 911 Israel Gb.jpeg) File: e75d3be6788a4dc⋯.jpg (35.42 KB, 512x370, 256:185, 911 Larry cro.jpg) >>11994709 Your logic has some worthwhile sequence to it, but if u had any brains u would realize da Goyim know "9/11 could not have happened if we would simply not let any jews into our countries" >ftfy e7e446  No.12002924 >>11932344 >second (vid) (((natural and organic collapse))) e7e446  No.12002929 >>11994709 >lies are perfect for white nationalism kek'd… I love where this is going already e7e446  No.12002936 >>11994051 but anon, gas became cheaper… in Israel 193e32  No.12002993 8d6136  No.12004446 Tens of thousands of computer hard drives containing secure data were in those towers. Tough little things, each encased in a hard metal sarcophagus, inside metal-shelled PCs (laptop use was only beginning to rise in 2001). Perhaps more than anything else, hard drives had a chance to survive. Not all, sure. But many. They're never mentioned. Were these simply, recklessly, put on a boat to China? 
>>12004446 Too bad thermitic material was seen pouring from one of the walls before the towers collapsed. 9c3cd4  No.12005150 >Hey anons train our AI to instantly ban 9/11 truth for us in this datamining thread No faggot. 4d5013  No.12006431 >>11932654 you're not alone. Bollyn posted his audiobook on youtube, you should check it out d45e58  No.12006442 >>12004446 Why were temperatures under the base of the fallen towers thousands of degrees celsius for weeks despite no oxygen source? The diesel containers were eventually found to be in one piece, not leaking, and safely drained. http://xoomer.virgilio.it/911_subito/misteriosi_ritrovamenti_macerie.html 323a49  No.12006510 YouTube embed. Click thumbnail to play. >>11932331 Best analysis of the physics that I've seen. ee95ea  No.12006547 File: 86930eeca0e55fd⋯.jpg (673.41 KB, 3018x2388, 503:398, Personnel_at_Pentagon_Foll….jpg) >>11966285 The drone or cruise missile or whatever it was blew a small hole in the facade that went right through the building. The facade collapsed about half an hour later. The flight recorder recovered at the Pentagon crash site said there was still 30,000 liters of fuel on board at moment of impact. Where did the fuel go? Did it burn? Where is the evidence of this great 30,000 liter fuel fire? ee95ea  No.12006564 File: b0f76d3b6d47dc2⋯.jpg (255.33 KB, 864x864, 1:1, WTCinsertendbuildingsix.jpg) Building One, Building Two, Building Seven WTC…but let's not forget little Building Six WTC. You know the one with the big square hole in the roof that goes all the way down to the foundations. That one. I know, one of the towers collapsed on it and caved in the roof, right? wrong. The roof blew off Building six about 15 minutes before the first tower collapsed. There's video evidence of this and photo evidence too; as you can see in the pic I've posted there's Bldg Six in the foreground with the towers still standing in the background and Building Six is clearly shattered by some massive blast. 
Rick Siegel a Wall St quant videotaped it all with a radio playing a news station giving it a time stamp then he took his footage home and analyzed it all. The seismograph at U of Columbia recorded the blast that blew Building Six's roof off. https://youtu.be/vCrLDQwNlqw ee95ea  No.12006587 File: 3508557d599ed56⋯.jpg (121.03 KB, 555x432, 185:144, wtc6_outline.jpg) Building Six roof Lots of bombs in those buildings https://youtu.be/1l2sjSxf19I https://youtu.be/0YvrKfWkxdw b87ce4  No.12006631 File: b520785ffda3bbd⋯.png (422.57 KB, 809x448, 809:448, Bollyn who dunnit.png) File: a6a0e2270259e01⋯.png (356 KB, 521x389, 521:389, army Jews Israel.png) >>12006431 I punched Bollyn audiobook into youtube's search field and various chapters from the audiobook came up, along with a potpourri other videos by him came up Some are from his recent speaking tour - =="The Dual Deception: 9/11 and the War on Terror"== the Zionist / Sayanim crime against us continues as long as we have a single soldier in harm's way on behalf of nasty little Israel 616c09  No.12006758 File: 465efaf7fa1c837⋯.png (1.92 MB, 1277x911, 1277:911, 2018-08-14 18_42_57-Clipbo….png) >>12006631 i bought his book and reached out to him requesting his powerpoints, he's got some great slides from them I want to share. I'll prepare them tomorrow ee95ea  No.12007023 >>12006758 Chris Bollyn is a hero. Looking forward to your powerpoint slides. b87ce4  No.12007509 >>12006758 Bump. - I'm very curious about them. 616c09  No.12011052 File: 1e5f07adebdbeae⋯.png (1.46 MB, 960x720, 4:3, Slide1 (2).PNG) >>12007509 >>12007023 i took all his power points, converted to png, then found the duplicates and removed them. What follows are original Bollyn powerpoint slides in png form. Maybe i should make my own thread? b87ce4  No.12011355 YouTube embed. Click thumbnail to play. 
>While we now gaslighted anons wait for those powerpoints to materialize like ectoplasm, amuse yourself with the following, anons Nobody who is curious about the Zionist hand in 9/11 should miss Lucky Larry Silverstein's tongue tied reaction the first time he is asked on TV to account for exactly WHERE he was on that murderous morning. Can you tell if Somebody is Lying? check out this 30 second clip of Lucky Larry Silverstein from the 4:18 to 4:48 of the linked video from Brother Nathanael Is Silverstein hiding something? I just don't know what to think. :D >failsafe - https://youtu.be/3tXbSggcyHo 4d5013  No.12013843 File: 2c43d396c96a56b⋯.png (562.59 KB, 960x720, 4:3, Slide1.PNG) File: d74bf73fcbdbefc⋯.png (849.1 KB, 960x720, 4:3, Slide2 (2).PNG) File: b280e40942d34d0⋯.png (847.77 KB, 960x720, 4:3, Slide2.PNG) File: c44d9027ad43991⋯.png (531.72 KB, 960x720, 4:3, Slide3 (2).PNG) File: 529f781ffe6f9ba⋯.png (1.33 MB, 960x720, 4:3, Slide3.PNG) >>12011052 posting bollyn powerpoints 4d5013  No.12013844 File: 42f1a80992b520c⋯.png (672.58 KB, 960x720, 4:3, Slide4 (2).PNG) File: 14f66edd5633764⋯.png (537.94 KB, 960x720, 4:3, Slide4 (3).PNG) File: 19c25b378cccafc⋯.png (1.05 MB, 960x720, 4:3, Slide4.PNG) File: 71ef2c731fa2abc⋯.png (678.23 KB, 960x720, 4:3, Slide5 (2).PNG) 4d5013  No.12013846 File: 06f2f9ad051b2de⋯.png (670.22 KB, 960x720, 4:3, Slide5.PNG) File: 5cdb705a3de40bd⋯.png (1.35 MB, 960x720, 4:3, Slide6 (2).PNG) File: 7f907130640126c⋯.png (1.8 MB, 960x720, 4:3, Slide6 (3).PNG) File: 097a53ae81c7e85⋯.png (763.73 KB, 960x720, 4:3, Slide6.PNG) File: 4410fa355536368⋯.png (1.6 MB, 960x720, 4:3, Slide7 (2).PNG) 4d5013  No.12013848 File: c80f433ff174f87⋯.png (1.05 MB, 960x720, 4:3, Slide7.PNG) File: ff26474c60c86f8⋯.png (1.57 MB, 960x720, 4:3, Slide8 (2).PNG) File: d9c31af9fe47778⋯.png (1.6 MB, 960x720, 4:3, Slide8.PNG) File: e8cfd8b42b357d5⋯.png (1.03 MB, 960x720, 4:3, Slide9 (2).PNG) File: 93cc7bad631c50c⋯.png (685.18 KB, 960x720, 4:3, Slide9.PNG) 4d5013  No.12013851 
File: 88839b2a5e78813⋯.png (847.83 KB, 960x720, 4:3, Slide10 (2).PNG) File: ddaf213317a1885⋯.png (824.32 KB, 960x720, 4:3, Slide10.PNG) File: 3ca41f6ce1dd3ee⋯.png (942.32 KB, 960x720, 4:3, Slide11 (2).PNG) File: 5a056f204714ec0⋯.png (1.35 MB, 960x720, 4:3, Slide11.PNG) File: 039eb41fabdba7c⋯.png (1.03 MB, 960x720, 4:3, Slide12 (2).PNG) 4d5013  No.12013853 File: 3bdbb7970e8f53b⋯.png (374.88 KB, 960x720, 4:3, Slide12 (3).PNG) File: 4f2bef38ca49215⋯.png (1.46 MB, 960x720, 4:3, Slide12.PNG) File: b41fa14e59c8937⋯.png (685.55 KB, 960x720, 4:3, Slide13 (2).PNG) File: 69583f128124aaa⋯.png (791.07 KB, 960x720, 4:3, Slide13.PNG) File: 6806ddee9213f75⋯.png (762.53 KB, 960x720, 4:3, Slide14 (2).PNG) 4d5013  No.12013856 File: 02a9e352bd3e402⋯.png (363.01 KB, 960x720, 4:3, Slide14.PNG) File: fb1fc84ee546645⋯.png (1.05 MB, 960x720, 4:3, Slide15 (2).PNG) File: 5065798394c47e9⋯.png (338.25 KB, 960x720, 4:3, Slide15.PNG) File: ba4ca44d2e00b7c⋯.png (645.89 KB, 960x720, 4:3, Slide16.PNG) File: 7b00c89d2cb90fe⋯.png (872.49 KB, 960x720, 4:3, Slide17.PNG) 4d5013  No.12013858 File: bbfc2bd4e6fc5fb⋯.png (1.1 MB, 960x720, 4:3, Slide18 (2).PNG) File: 738a9d250a7195d⋯.png (1.49 MB, 960x720, 4:3, Slide18 (3).PNG) File: 58c5cd2f505deeb⋯.png (1.27 MB, 960x720, 4:3, Slide18.PNG) File: 015506e3ed9c9e6⋯.png (616.86 KB, 960x720, 4:3, Slide19 (2).PNG) File: 010b8525be000ca⋯.png (976.59 KB, 960x720, 4:3, Slide19.PNG) 4d5013  No.12013859 File: 4bfc01713caae82⋯.png (900.52 KB, 960x720, 4:3, Slide20.PNG) File: d425c57c5f1f5a3⋯.png (1.15 MB, 960x720, 4:3, Slide21 (2).PNG) File: 98a4a146697a2de⋯.png (1.44 MB, 960x720, 4:3, Slide21 (3).PNG) File: 74da1866564aa8f⋯.png (1.11 MB, 960x720, 4:3, Slide21.PNG) File: c3653909466c753⋯.png (956.99 KB, 960x720, 4:3, Slide22 (2).PNG) 4d5013  No.12013862 File: 70cafaa8b4b8278⋯.png (1.02 MB, 960x720, 4:3, Slide22.PNG) File: 73d7820af4e79b3⋯.png (816.37 KB, 960x720, 4:3, Slide23 (2).PNG) File: 63d1a322fc27cbf⋯.png (454.23 KB, 960x720, 4:3, Slide23.PNG) File: 
d9ae8cb08254bbc⋯.png (898.7 KB, 960x720, 4:3, Slide24 (2).PNG) File: a3bd344d0c7c19a⋯.png (960.17 KB, 960x720, 4:3, Slide24 (3).PNG) 4d5013  No.12013865 File: 2e2e05b648cbab4⋯.png (543.51 KB, 960x720, 4:3, Slide24.PNG) File: ce9edbf935dbfdf⋯.png (1.56 MB, 960x720, 4:3, Slide25 (2).PNG) File: e8c159bc64d7e18⋯.png (615.81 KB, 960x720, 4:3, Slide25.PNG) File: 2107269a760c07d⋯.png (1.66 MB, 960x720, 4:3, Slide26 (2).PNG) File: 5acc78aa2d4554e⋯.png (370.89 KB, 960x720, 4:3, Slide26 (3).PNG) 4d5013  No.12013866 File: 6d2428f6d235784⋯.png (1.67 MB, 960x720, 4:3, Slide26.PNG) File: 61f6f074f2fee89⋯.png (1.2 MB, 960x720, 4:3, Slide27 (2).PNG) File: f67ba0c65bd6c8f⋯.png (978.14 KB, 960x720, 4:3, Slide27 (3).PNG) File: ddf605f22908e9b⋯.png (848.78 KB, 960x720, 4:3, Slide27.PNG) File: 14cc2514ae0578e⋯.png (1.17 MB, 960x720, 4:3, Slide28 (2).PNG) 4d5013  No.12013867 File: 583adb84075dc9e⋯.png (693.39 KB, 960x720, 4:3, Slide28 (3).PNG) File: e707d5ae2012cb8⋯.png (184.63 KB, 960x720, 4:3, Slide28.PNG) File: 834b7c7da06de16⋯.png (1.11 MB, 960x720, 4:3, Slide29 (2).PNG) File: 715d4957471acc2⋯.png (637.35 KB, 960x720, 4:3, Slide29 (3).PNG) File: 82f956b47102361⋯.png (907.84 KB, 960x720, 4:3, Slide29.PNG) 4d5013  No.12013868 File: be5b647c849becd⋯.png (1.56 MB, 960x720, 4:3, Slide30 (2).PNG) File: 9d807189a6bbce1⋯.png (1.22 MB, 960x720, 4:3, Slide30.PNG) File: 733775ef112e1ad⋯.png (1.05 MB, 960x720, 4:3, Slide31 (2).PNG) File: ea6dd3591d6f222⋯.png (1.03 MB, 960x720, 4:3, Slide31.PNG) File: 5a9294762688717⋯.png (1.85 MB, 960x720, 4:3, Slide32 (2).PNG) 4d5013  No.12013869 File: f1bd1ec10f5bb72⋯.png (1.01 MB, 960x720, 4:3, Slide32.PNG) File: 3ddef3b4e0d4325⋯.png (1.64 MB, 960x720, 4:3, Slide33 (2).PNG) File: 148485ec3630ac0⋯.png (817.87 KB, 960x720, 4:3, Slide33 (3).PNG) File: 88ca3ca53d6e21f⋯.png (1.34 MB, 960x720, 4:3, Slide33.PNG) File: fd633f604246d2a⋯.png (1.67 MB, 960x720, 4:3, Slide34 (2).PNG) 4d5013  No.12013874 File: a8985619b7723e2⋯.png (902.28 KB, 960x720, 4:3, Slide34 
(3).PNG) File: e139f03f541d7b3⋯.png (841.41 KB, 960x720, 4:3, Slide34.PNG) File: b5a5d9aff3d6f39⋯.png (208.83 KB, 960x720, 4:3, Slide35 (2).PNG) File: 70d416a87f473d9⋯.png (1.74 MB, 960x720, 4:3, Slide35 (3).PNG) File: 4526a419836f591⋯.png (791.04 KB, 960x720, 4:3, Slide35.PNG) 4d5013  No.12013877 File: 8b155fc2bc2994e⋯.png (1.57 MB, 960x720, 4:3, Slide36 (2).PNG) File: 14ed620a2365b23⋯.png (1.22 MB, 960x720, 4:3, Slide36 (3).PNG) File: 182af81ed7e8ebd⋯.png (362.69 KB, 960x720, 4:3, Slide36.PNG) File: b473c3d7eb5bebe⋯.png (1.8 MB, 960x720, 4:3, Slide37 (2).PNG) File: 593745c54d0cea7⋯.png (1.46 MB, 960x720, 4:3, Slide37.PNG) 4d5013  No.12013880 File: 41ca96a3ebf62d9⋯.png (1.2 MB, 960x720, 4:3, Slide38 (2).PNG) File: 36ab4e307432996⋯.png (1.26 MB, 960x720, 4:3, Slide38.PNG) File: 02ccd651c3fcb80⋯.png (1.27 MB, 960x720, 4:3, Slide39.PNG) File: 5844ad347932b0e⋯.png (1.43 MB, 960x720, 4:3, Slide40 (2).PNG) File: f62a78e0a78dbdd⋯.png (1.36 MB, 960x720, 4:3, Slide40 (3).PNG) 4d5013  No.12013881 File: a15c6060c42593c⋯.png (1.07 MB, 960x720, 4:3, Slide40.PNG) File: 3c8901bb1bc3ce2⋯.png (597.47 KB, 960x720, 4:3, Slide41 (2).PNG) File: 4b70e5124573a06⋯.png (1.14 MB, 960x720, 4:3, Slide41.PNG) File: b60c58c54a8ee73⋯.png (1.05 MB, 960x720, 4:3, Slide42.PNG) File: b7433a50bc11118⋯.png (224.79 KB, 960x720, 4:3, Slide43 (2).PNG) 4d5013  No.12013882 File: db4b707db1cc1fa⋯.png (1.02 MB, 960x720, 4:3, Slide43.PNG) File: 312910c84383194⋯.png (1.1 MB, 960x720, 4:3, Slide44 (2).PNG) File: bd0017cad1415f3⋯.png (703.13 KB, 960x720, 4:3, Slide44.PNG) File: 1d6b588704d1138⋯.png (812.78 KB, 960x720, 4:3, Slide45 (2).PNG) File: 930351054367359⋯.png (501 KB, 960x720, 4:3, Slide45.PNG) 4d5013  No.12013884 File: 40371ba0f970a05⋯.png (882.53 KB, 960x720, 4:3, Slide46 (2).PNG) File: c145069ed25d6c2⋯.png (977.52 KB, 960x720, 4:3, Slide46 (3).PNG) File: d86f06992c1da1b⋯.png (611.45 KB, 960x720, 4:3, Slide46.PNG) File: 5e3f6e1cd1ae961⋯.png (1.29 MB, 960x720, 4:3, Slide47 (2).PNG) File: 
85492868407d0e4⋯.png (990.33 KB, 960x720, 4:3, Slide47 (3).PNG) 4d5013  No.12013885 File: cc52d8dc4238a38⋯.png (1.64 MB, 960x720, 4:3, Slide47.PNG) File: 7a0565c987401c6⋯.png (454.41 KB, 960x720, 4:3, Slide48 (2).PNG) File: a9d7e0ce4cdb0fd⋯.png (1.12 MB, 960x720, 4:3, Slide48.PNG) File: a13ffde3a149f0b⋯.png (938.24 KB, 960x720, 4:3, Slide49.PNG) File: 7a9bed352463e2c⋯.png (1.22 MB, 960x720, 4:3, Slide50 (2).PNG) 4d5013  No.12013889 File: 72c6fe5aa670123⋯.png (1.36 MB, 960x720, 4:3, Slide50.PNG) File: ab45137fbb99a8b⋯.png (567.46 KB, 960x720, 4:3, Slide51 (2).PNG) File: 2bc6434c25bc065⋯.png (548.58 KB, 960x720, 4:3, Slide51 (3).PNG) File: 65d2491b60d1e40⋯.png (810.93 KB, 960x720, 4:3, Slide51.PNG) File: 957ce5f01b16e84⋯.png (875.53 KB, 960x720, 4:3, Slide52 (2).PNG) 4d5013  No.12013890 File: edba2043d710b65⋯.png (597.54 KB, 960x720, 4:3, Slide52.PNG) File: 56b372e790e95b7⋯.png (794.79 KB, 960x720, 4:3, Slide53.PNG) File: 7733a40393ca10e⋯.png (885.13 KB, 960x720, 4:3, Slide54 (2).PNG) File: 4b5c1d88b00d6a2⋯.png (864.79 KB, 960x720, 4:3, Slide54.PNG) File: 783155d19b09128⋯.png (1.31 MB, 960x720, 4:3, Slide55 (2).PNG) 4d5013  No.12013892 File: 13b87e58fe1523d⋯.png (1.44 MB, 960x720, 4:3, Slide55.PNG) File: a31690acda3a414⋯.png (1.11 MB, 960x720, 4:3, Slide56.PNG) File: 980e1d11d75e632⋯.png (1.08 MB, 960x720, 4:3, Slide57 (2).PNG) File: 4d472c4997479f4⋯.png (1.23 MB, 960x720, 4:3, Slide57.PNG) File: 4f42c9515e707b8⋯.png (1.01 MB, 960x720, 4:3, Slide58 (2).PNG) 4d5013  No.12013896 File: 0e09768caf8db74⋯.png (1.07 MB, 960x720, 4:3, Slide58.PNG) File: 673c44fba6d8ddb⋯.png (1.34 MB, 960x720, 4:3, Slide59 (2).PNG) File: 419324fd3c05e85⋯.png (949.22 KB, 960x720, 4:3, Slide59.PNG) File: 5932844ae892bdc⋯.png (1.16 MB, 960x720, 4:3, Slide60 (2).PNG) File: b031a1d82d78eb8⋯.png (1.03 MB, 960x720, 4:3, Slide60.PNG) 4d5013  No.12013897 File: ee4a752e832d56c⋯.png (782.32 KB, 960x720, 4:3, Slide61 (2).PNG) File: 817861cf1723b7f⋯.png (271.59 KB, 960x720, 4:3, Slide61.PNG) File: 
d4f9023a4e13c3f⋯.png (849.34 KB, 960x720, 4:3, Slide62 (2).PNG) File: f5ace19cfb478e3⋯.png (1.35 MB, 960x720, 4:3, Slide62.PNG) File: 55a26f85ead39c2⋯.png (958.54 KB, 960x720, 4:3, Slide63 (2).PNG) 4d5013  No.12013898 File: 6c48ca5ccb2fc89⋯.png (710.81 KB, 960x720, 4:3, Slide63.PNG) File: ee19d7d2398952e⋯.png (1.68 MB, 960x720, 4:3, Slide64 (2).PNG) File: 36d0e5fe11c36ef⋯.png (1.12 MB, 960x720, 4:3, Slide64.PNG) File: 281b711ea638170⋯.png (649.31 KB, 960x720, 4:3, Slide65 (2).PNG) File: b75c38f1fc5c11d⋯.png (880.2 KB, 960x720, 4:3, Slide65 (3).PNG) 4d5013  No.12013901 File: 80734e4924f95d6⋯.png (338.92 KB, 960x720, 4:3, Slide65.PNG) File: d69a3e20a4e9f96⋯.png (901.5 KB, 960x720, 4:3, Slide66 (2).PNG) File: 3199e23e304f458⋯.png (847.61 KB, 960x720, 4:3, Slide66.PNG) File: f88125d5828e419⋯.png (594.61 KB, 960x720, 4:3, Slide67 (2).PNG) File: 62dae44b7838ff8⋯.png (291.53 KB, 960x720, 4:3, Slide67 (3).PNG) 4d5013  No.12013902 File: 2a7f6b016b2e1dc⋯.png (462.82 KB, 960x720, 4:3, Slide67.PNG) File: 1e6d5718b1e42ce⋯.png (1.39 MB, 960x720, 4:3, Slide68 (2).PNG) File: b08f20b03520b92⋯.png (1017.88 KB, 960x720, 4:3, Slide68.PNG) File: f26779e92d75c82⋯.png (508.2 KB, 960x720, 4:3, Slide69 (2).PNG) File: 8aa7306117270de⋯.png (781.24 KB, 960x720, 4:3, Slide69.PNG) 4d5013  No.12013903 File: 828c47ad53a6bd3⋯.png (1002.6 KB, 960x720, 4:3, Slide70.PNG) File: 597b2553097d0e8⋯.png (944.03 KB, 960x720, 4:3, Slide71 (2).PNG) File: 70aa825240eedfb⋯.png (1.13 MB, 960x720, 4:3, Slide71.PNG) File: 89f0f8a7580496e⋯.png (1.46 MB, 960x720, 4:3, Slide72 (2).PNG) File: d71d835bfe12bd9⋯.png (923.34 KB, 960x720, 4:3, Slide72.PNG) 4d5013  No.12013904 File: 7273034f9d29c8e⋯.png (793.72 KB, 960x720, 4:3, Slide73 (2).PNG) File: e63aad67144614c⋯.png (856.03 KB, 960x720, 4:3, Slide73.PNG) File: e463fda408ef03f⋯.png (600.4 KB, 960x720, 4:3, Slide74 (2).PNG) File: 4c29427aaddbf08⋯.png (875.74 KB, 960x720, 4:3, Slide74.PNG) File: 2f6d08157987e19⋯.png (1.06 MB, 960x720, 4:3, Slide75.PNG) 4d5013  
No.12013905 File: 32f23a534400b33⋯.png (803.48 KB, 960x720, 4:3, Slide76.PNG) File: 7327be99e218397⋯.png (1.15 MB, 960x720, 4:3, Slide77 (2).PNG) File: 6984f54bf4325c0⋯.png (1.36 MB, 960x720, 4:3, Slide77.PNG) File: 332d7c286df9131⋯.png (502.1 KB, 960x720, 4:3, Slide78 (2).PNG) File: 19228aae120d395⋯.png (1.22 MB, 960x720, 4:3, Slide78.PNG) 4d5013  No.12013907 File: 4aa9b2425d08338⋯.png (907.68 KB, 960x720, 4:3, Slide79 (2).PNG) File: f9b4ebb4b1ff9f6⋯.png (1.24 MB, 960x720, 4:3, Slide79.PNG) File: 135c71ef99af062⋯.png (1.22 MB, 960x720, 4:3, Slide80 (2).PNG) File: 8c3243d07e4051a⋯.png (344.74 KB, 960x720, 4:3, Slide80.PNG) File: 709f904e0ed27e3⋯.png (1.74 MB, 960x720, 4:3, Slide81 (2).PNG) 4d5013  No.12013909 File: fa97b2e658eee2b⋯.png (1.09 MB, 960x720, 4:3, Slide81.PNG) File: daec279c7bc005a⋯.png (949.1 KB, 960x720, 4:3, Slide82 (2).PNG) File: 02b08dc15304c08⋯.png (1.07 MB, 960x720, 4:3, Slide82 (3).PNG) File: 04e3b4fbc8d562a⋯.png (1.27 MB, 960x720, 4:3, Slide82.PNG) File: fbf70fa9e535e21⋯.png (913.73 KB, 960x720, 4:3, Slide83.PNG) 4d5013  No.12013911 File: 18ad161c4cf98e6⋯.png (1.37 MB, 960x720, 4:3, Slide84.PNG) File: dbc6b9737faff03⋯.png (666.84 KB, 960x720, 4:3, Slide85.PNG) File: 2a9d3b5df2029fc⋯.png (1 MB, 960x720, 4:3, Slide86.PNG) File: d01098386268164⋯.png (1.12 MB, 960x720, 4:3, Slide87 (2).PNG) File: 0a3498f518f5587⋯.png (1.27 MB, 960x720, 4:3, Slide87.PNG) 4d5013  No.12013912 File: 4f126eb4b739f33⋯.png (1.02 MB, 960x720, 4:3, Slide88.PNG) File: 066ac0a273d7157⋯.png (1023.11 KB, 960x720, 4:3, Slide89.PNG) File: 95d5710e8047df3⋯.png (462.65 KB, 960x720, 4:3, Slide90 (2).PNG) File: 07a3f10021b7116⋯.png (1.41 MB, 960x720, 4:3, Slide90.PNG) File: 05bda4b3dade380⋯.png (1.67 MB, 960x720, 4:3, Slide91.PNG) 4d5013  No.12013914 File: e5b135a08257799⋯.png (1.01 MB, 960x720, 4:3, Slide92.PNG) File: 02841d50113a897⋯.png (1002.32 KB, 960x720, 4:3, Slide93 (2).PNG) File: b3d21ea5fa5f29d⋯.png (1.18 MB, 960x720, 4:3, Slide93.PNG) File: a1503b602b091fb⋯.png (1.12 MB, 
960x720, 4:3, Slide94.PNG) File: f37000b8f74f7a4⋯.png (1.02 MB, 960x720, 4:3, Slide95.PNG) 4d5013  No.12013915 File: 93150528704d934⋯.png (1.03 MB, 960x720, 4:3, Slide96 (2).PNG) File: 49d1aa1b4802169⋯.png (1.54 MB, 960x720, 4:3, Slide96.PNG) File: 7c3c75c04273bfd⋯.png (1.18 MB, 960x720, 4:3, Slide97.PNG) File: fa9f017dd902ee4⋯.png (1.38 MB, 960x720, 4:3, Slide98.PNG) File: f128e65f63de6f6⋯.png (803.3 KB, 960x720, 4:3, Slide99 (2).PNG) 4d5013  No.12013918 File: a8a0e53daed8e5c⋯.png (1.35 MB, 960x720, 4:3, Slide99 (3).PNG) File: 15d37c9a1ead60c⋯.png (1.05 MB, 960x720, 4:3, Slide99.PNG) File: 2fc9bb8fd213510⋯.png (400.7 KB, 960x720, 4:3, Slide100 (2).PNG) File: 88a84e234d5d3b3⋯.png (953.15 KB, 960x720, 4:3, Slide100 (3).PNG) File: bab48baad1baa15⋯.png (681.85 KB, 960x720, 4:3, Slide100.PNG) 4d5013  No.12013920 File: 53d87b730e0fa51⋯.png (1.22 MB, 960x720, 4:3, Slide101 (2).PNG) File: b200f44695ad509⋯.png (1.03 MB, 960x720, 4:3, Slide101 (3).PNG) File: f86a1e3d5aca9a1⋯.png (199.37 KB, 960x720, 4:3, Slide101.PNG) File: 31b44d4b686f6d8⋯.png (1.24 MB, 960x720, 4:3, Slide102 (2).PNG) File: 224fb9127e78fd3⋯.png (972.45 KB, 960x720, 4:3, Slide102.PNG) 4d5013  No.12013923 File: c5db7023e59fcb7⋯.png (968.38 KB, 960x720, 4:3, Slide103.PNG) File: 16a335c5e0f563c⋯.png (1.16 MB, 960x720, 4:3, Slide104.PNG) File: a372db80f11a28e⋯.png (1.27 MB, 960x720, 4:3, Slide105 (2).PNG) File: 63018c67566859f⋯.png (1.24 MB, 960x720, 4:3, Slide105.PNG) File: abbd05d30005c7a⋯.png (913.6 KB, 960x720, 4:3, Slide106 (2).PNG) 4d5013  No.12013925 File: a9525e9d4347f4b⋯.png (1.02 MB, 960x720, 4:3, Slide106.PNG) File: cec48d85d0c588f⋯.png (1.37 MB, 960x720, 4:3, Slide107 (2).PNG) File: ecb1b8b29ce8a91⋯.png (1.5 MB, 960x720, 4:3, Slide107.PNG) File: 0f0d20a15b6187a⋯.png (1.76 MB, 960x720, 4:3, Slide108 (2).PNG) File: 67e53f69c780319⋯.png (907.34 KB, 960x720, 4:3, Slide108.PNG) 4d5013  No.12013927 File: dfbacbe1aa3719f⋯.png (1023.48 KB, 960x720, 4:3, Slide109 (2).PNG) File: f181c423350af9b⋯.png (1005.77 
KB, 960x720, 4:3, Slide109.PNG) File: b76f6e97fe2ee5b⋯.png (1.27 MB, 960x720, 4:3, Slide110.PNG) File: 1eaf04faa5032e8⋯.png (562.84 KB, 960x720, 4:3, Slide111.PNG) File: 539fa274a16454f⋯.png (913.39 KB, 960x720, 4:3, Slide112 (2).PNG) 4d5013  No.12013929 File: 305eabd8b017bd3⋯.png (679.56 KB, 960x720, 4:3, Slide112.PNG) File: 54cc0ad3b766126⋯.png (1004.18 KB, 960x720, 4:3, Slide113 (2).PNG) File: b764e6b7d61ce55⋯.png (1.08 MB, 960x720, 4:3, Slide113.PNG) File: b97a3f38fd11c30⋯.png (1.13 MB, 960x720, 4:3, Slide114 (2).PNG) File: cc36bbab4dcdfe6⋯.png (1.06 MB, 960x720, 4:3, Slide114.PNG) 4d5013  No.12013933 File: 81804323485599a⋯.png (1.01 MB, 960x720, 4:3, Slide115 (2).PNG) File: 1edaaf219c1319c⋯.png (785.49 KB, 960x720, 4:3, Slide115.PNG) File: 533720dd9e48bc1⋯.png (1.73 MB, 960x720, 4:3, Slide116.PNG) File: 4700adc914e581b⋯.png (1.21 MB, 960x720, 4:3, Slide117.PNG) File: 1140e70478a9cd0⋯.png (1.02 MB, 960x720, 4:3, Slide118 (2).PNG) 4d5013  No.12013936 File: e5899dec7e390f5⋯.png (691.57 KB, 960x720, 4:3, Slide118.PNG) File: 34132c1e8a05f24⋯.png (1.26 MB, 960x720, 4:3, Slide119 (2).PNG) File: 70ec6a1d80aa8d7⋯.png (637.19 KB, 960x720, 4:3, Slide119.PNG) File: 90cd2a2ace75539⋯.png (1.23 MB, 960x720, 4:3, Slide120 (2).PNG) File: 048813a4b2148f8⋯.png (1.18 MB, 960x720, 4:3, Slide120.PNG) 4d5013  No.12013938 File: 35e0e7db96756b2⋯.png (1.01 MB, 960x720, 4:3, Slide121 (2).PNG) File: 15e6acac333169c⋯.png (1.58 MB, 960x720, 4:3, Slide121.PNG) File: a5d60e06db10ec3⋯.png (1.31 MB, 960x720, 4:3, Slide122.PNG) File: 30691f023631572⋯.png (1.14 MB, 960x720, 4:3, Slide123.PNG) File: 6aaf6af22a9b9a9⋯.png (382.93 KB, 960x720, 4:3, Slide124 (2).PNG) 4d5013  No.12013939 File: 6d6f4d92050ab2e⋯.png (277.4 KB, 960x720, 4:3, Slide124.PNG) File: a22aed50fb8a6e6⋯.png (1 MB, 960x720, 4:3, Slide125 (2).PNG) File: d17037f452b35a6⋯.png (948.34 KB, 960x720, 4:3, Slide125.PNG) File: 04c82a05f4923a2⋯.png (1.06 MB, 960x720, 4:3, Slide126.PNG) File: a790fead8f694d3⋯.png (1.16 MB, 960x720, 4:3, 
Slide127.PNG) 4d5013  No.12013942 File: 19bbd2135e27878⋯.png (1.02 MB, 960x720, 4:3, Slide129.PNG) File: 742d01c695b3bde⋯.png (1.5 MB, 960x720, 4:3, Slide130.PNG) File: 2aabd61c4cbe930⋯.png (1.24 MB, 960x720, 4:3, Slide131.PNG) File: 39888d2135db13c⋯.png (1005.32 KB, 960x720, 4:3, Slide132.PNG) File: 0c5f2ef4d3ece75⋯.png (1.27 MB, 960x720, 4:3, Slide134.PNG) 4d5013  No.12013943 File: 5c792d75ebe756c⋯.png (1.06 MB, 960x720, 4:3, Slide137.PNG) File: 160a291c4ef6486⋯.png (1.74 MB, 960x720, 4:3, Slide140 (2).PNG) File: cb34f6aadd28107⋯.png (1.21 MB, 960x720, 4:3, Slide140.PNG) File: 88c474bad254af5⋯.png (691.68 KB, 960x720, 4:3, Slide141.PNG) File: 0ddc0ab42813770⋯.png (1.18 MB, 960x720, 4:3, Slide143.PNG) 4d5013  No.12013944 File: f4c8566b9de1242⋯.png (1.38 MB, 960x720, 4:3, Slide144 (2).PNG) File: 0411305b9763bfa⋯.png (1.58 MB, 960x720, 4:3, Slide144.PNG) File: 4e88c33730f8c42⋯.png (276.19 KB, 960x720, 4:3, Slide147.PNG) File: be0f93653d83c05⋯.png (779.36 KB, 960x720, 4:3, Slide150.PNG) acdaca  No.12013984 >>11945552 Nice post, clean and simple, clear as day. b87ce4  No.12014062 File: badb29bf8d29664⋯.jpg (62.61 KB, 555x829, 555:829, 911 Woman in WTC hole.jpg) Fantastic memes from Bollyn's power points Let's bring the horrific murderers to justice on behalf of the people who were roasted alive or chose the terrifying jump. Let's get justice for this goy woman who was photographed in the tower, this shiksa, who simply went to work in NYC on a sunny Tuesday morning, and was burned to death or crushed on behalf of Israel's ugly schemes. Let's do meme war for all the murdered people who can't seek justice for themselves. acdaca  No.12014071 File: a86407cefeacf91⋯.jpg (47.12 KB, 990x566, 495:283, Capture.JPG) >>12011355 Hmmm, so White Rights, and Men's Rights are the definition of the hate speech index. Nothing to see here…….. 
7d993b  No.12014108 >>11932337 non-existent apparently ca155e  No.12036306 >>12014062 B-but NIST told me that the jet fuel made the building very very hot resulting in a synchronized global collapsed :( strange how this thread got slid by normies current events in the first page af9363  No.12039215 0718d8  No.12060193 9/11 was done by trump 16ee52  No.12060315 >>12014062 What happens when females override their stay at home and rise kids commands… de3750  No.12069868 >>12060193 kek his close friend Larry Silverman saved a billion dollars in asbestos abatement c125b2  No.12069900 >>11932331 Here is a more rare one It was an energy weapon Check Earth's emp field readings the day of Jets or bombs can't do that http://www.drjudywood.com/wp/ 8f946f  No.12069903 File: 20044e4f7b1eefe⋯.png (140.92 KB, 258x371, 258:371, 1206370099726.png) I clearly remember seeing the original footage from the gas station where the "Air Plane" (that was mysteriously shaped like a missile) flew by and hit the pentagon. 87df6e  No.12070221 >>11933274 ffs give us an archive 95f8d2  No.12070233 A likely scenario that lays out exactly how the Neocons and Jews did 9/11. All the details. https://pastebin.com/yCwrSNEL 4e8eb9  No.12070349 >>12070233 A few grammatical/spelling errors here and there that could be cleaned up but this is the most in-depth story of 9/11 I've ever read b87ce4  No.12070364 >>12070233 >Neocons and Jews most Neocons are Jews >>12070349 >this is the most in-depth story of 9/11 I've ever read Bollyn's material is the best in-depth account. dcedaa  No.12094035 >>12070233 nice dubs >>11932331 Lucky Larry Silverstein is BACK!!! Published on Sep 4, 2018 https://www.bitchute.com/video/JEZaaZFyZQk/ a9d41b  No.12104882 >>12104807 That video by itself proves (((Silverstein))) is a traitor if not a mass murderer! 
Americans don't give a fuck about other americans, you either have the controlled opposition crowd blaming "elites" or the bluepilled betafucks that think 9/11 was a natural and organic event. b5fcd4  No.12104900 File: b7146222a9dc9c2⋯.mp4 (1.43 MB, 640x360, 16:9, Donald Trump Is Good Frien….mp4) >>12104882 Kill yourself, shariahblue. Larry Silverstein is a great guy, a good guy, a friend of the G-d Emperor. acdaca  No.12105202 File: aab92bd85588df1⋯.jpg (18.79 KB, 167x189, 167:189, folded50.jpg) >>11943829 here, have a 50 e15f05  No.12105288 >>12104900 Spotted the kike. Know how I can tell? Because your reverse psychology is nigger IQ tier and you feign ignorance of the most basic mob tactics/Sun Tzu. b5fcd4  No.12105400 >>12105288 >shilling for the shabbos goy grandfather of yidlets who's friends with Lucky Larry and has a 9/11 conspirator for a lawyer >y-you're the kike, goy! b3b796  No.12105589 File: 23ec2752ba0c8cb⋯.png (241.84 KB, 2416x1928, 302:241, Defense_spending.png) >Can't let the military-industrial-congressional complex go belly up now that the cold war is over. What would happen to X million Red Sea pedestrians without it? Call your Governor every hour and tell him to support us more goy!!! f98530  No.12106016 File: 07c52481f15d345⋯.jpg (553.05 KB, 477x724, 477:724, 0% Survivors.jpg) >>11967290 >Now imagine what we'd do to Israel based on the truth. YES b5fcd4  No.12106045 File: b0a16e7b9dc1511⋯.jpg (94.21 KB, 449x353, 449:353, sabrosky-quote-on-mark-gle….jpg) File: b72a961acc74d28⋯.webm (7.59 MB, 640x360, 16:9, Israel Did 911 - All the ….webm) ee95ea  No.12106249 File: 049020a8329d7cb⋯.gif (1.78 MB, 350x255, 70:51, vqGeMsr.gif) >>12013843 >https://youtu.be/3tXbSggcyHo >>12013843 thx 12eda1  No.12107251 >>12070233 >https://pastebin.com/yCwrSNEL I don't get the part where cheney is furious when the WTC gets demolished. what would he have been expecting? 
Also I was confused by the stuff about the first instance of Mohammed Atta, he was killed in SA just to get his passport? The other Attas are doubles assuming a dead man's identity? 8d6136  No.12107267 >>12107251 The "voice duplication" stuff was a bit heavy on the tinfoil. 1a8cb2  No.12107312 Side mission: Been digging into the death of Dan Wallace & Luke Rudkowski of WeAreChange. I would suggest you dig too. Not only is Dan's death suspcious as fuck, but Luke's takeover of WeAreChange stinks of controlled opposition. Dan's GF at the time was cheating on him with Luke for a few years prior too apparently. Did Dan get too close to some questions that made (((them))) very uneasy, therefore requiring silencing? 7e10b5  No.12107647 >>11942597 For you my good man. Not saying you are wrong though. Would only take a small team a week or so to place explosives after being built. b3b796  No.12108214 YouTube embed. Click thumbnail to play. 5d951a  No.12108320 >>11932331 Brainlet here: What DOES a naturally collapsing building look like? a9757d  No.12110887 >>11932456 "Our job was to document." Sure, because you can't document a situation if you don't know what is going to happen. 04b21f  No.12111423 >>11936974 Let me get this straight, they probably set up the whole thing to pursue israel's interests and defending a foreign country even though that same country attacked our american soil and we are definitely in the right to declare war on the kikes and kicking them out, but we also have to mention the fact that said planes don,t exist, but buffed the security in airports? And why would they buff security in an airport and for what purpose? I don't get it guys the planes were cgi, unless they are doing this to take freedoms from us. 04b21f  No.12111426 >>12111423 I mean is not only for israel's interests, but something more sinister, there has to be a reason why they buffed the security systems in airports. Can anyone explain? 
bdd1a4  No.12111493 File: 8fae6f3febfeb6f⋯.jpg (59.53 KB, 640x403, 640:403, bushtellstruth.jpg) File: 08659f86852be84⋯.png (75.5 KB, 461x590, 461:590, PNAC-Rebuilding-Americas-D….png) In order of most highly suspicious; -Building 7 controlled demolition. -Donald Rumsfeld held press conference Sept. 10, 2001 declaring over 3 trillion in unaccounted Naval Defense Spending->Offices hit in Pentagon held Naval Defense records in question. -Nano thermite, molten steel at ground zero. Smoldered for weeks after 9-11. -(((Larry Silverstein))) purchase of World Trade Center complex just months before and multi-billion insurance policy for terrorist attacks. -Patriot Act, 363 pages of sweeping legislation, drafted and passed through Congress in a matter of days. Pre-written. -PNAC, Project for a New American Century "new Pear Harbor" Motive abounds with these facts. 9-11was my first red pill. The evil of the creatures that coordinated this event has no limits. They must be destroyed. cbbdec  No.12111541 File: d7d711741de300b⋯.png (2.56 MB, 2560x1060, 128:53, 2A5FBCBF-C088-4710-83DC-C5….png) >>12110887 >our job was to document “the event” is an equally important part of the quote anon. Since then, I’ve heard Israeli politicians and PM’s refer to other false flags as “the event”. Pic related, toasting the Passover (((gas attack))) by Assad in 2017. Netanyahoo says “the event” quite a bit since 9/11. >>12107647 He’s not wrong, they broke ground on the towere (((33))) years before 9/11. Replacing 2 columns(Jachin and Boaz) with one light column from an inverted tephilin is highly symbolic. >>12111426 >buffed As in made stronger? Prior to 9/11 most airport security was privatized. The company that was in charge of the airports the planes took of from was Israeli owned, ICT Security iirc. Now, with the TSA and Homeland Security ZOG can funnel more taxes directly to the Israeli government instead of using corporate middlemen. 
Plus with more entities they can spread load the budget and hid more skimming. Additionally, beefing up security at the airports normalizes adding more cameras to street corners. These long cons by the kikes are always multipurpose. Las Vegas was the next large step in, (((blood sacrifice))), increasing security, fun grabbing, conspiracy theory well poisoning, and D&C misdirection psyops visa vie, (((Deep State))) vs (((Trump))) vs (((House of Saud/Saul))). cbbdec  No.12111652 Also, until 1994 two companies have had a monopoly on elevator installation and maintenance since the invention of them, Ottis and Thyssonkrupp. Ottis built the elevators in the WTC but then Atlas swooped in and took the maintenance contract giving them all acces to the whole building at all hours so as not to disturb normal employees. Then after 9/11 ACE Elevators disappeared. Additionally, the security for the WTC was (((Kroll Security))). http://aneta.org/911experiments_com/AceElevator/ http://archive.fo/7uSr2 de69bf  No.12111753 File: 21ca06142555b00⋯.webm (3.02 MB, 608x360, 76:45, 911 and War by Deception_….webm) File: 57ef3ad26ea3f4e⋯.webm (3.84 MB, 608x360, 76:45, 911 and War by Deception_….webm) File: 1013dcba93c9f3f⋯.webm (3.88 MB, 608x360, 76:45, 911 and War by Deception_….webm) 8d6136  No.12116492 >>12110887 >our job was to document >>12111541 >“the event” is an equally important part of the quote anon. Very true. "Job" is the damning word for me. 04b21f  No.12116529 >>11946299 Hello cuckchanner. b87ce4  No.12118503 File: c7dc47f3635ec28⋯.jpg (27.1 KB, 594x480, 99:80, 911 2 Israeis GeoW Bridge.jpg) Two of the crew from Mossad who were captured that fateful day near the George Washington bridge in a van that was found by forensics to contain a telltale residue of explosives . members of Urban Moving System's jewish dance troupe who were celebrating while Americans burned alive in the Twin Towers. Our pals, the jews…. 
5a1e13  No.12129618 bump for upcoming 5a1e13  No.12137265 Bump again for the big day today. 97dc09  No.12137890 bump 2d7fec  No.12137901 YouTube embed. Click thumbnail to play. This is a good one 2d7fec  No.12137926 File: 6e9817e11f6dea8⋯.jpg (100.06 KB, 792x532, 198:133, 911 (26).jpg) File: c94d0081d49faaa⋯.jpg (29.83 KB, 320x320, 1:1, 911 (29).jpg) File: ef4cdbb74acaa34⋯.jpg (97.82 KB, 851x638, 851:638, 911 (16).jpg) File: c8b6238c7b8f33a⋯.jpg (167.89 KB, 1440x1093, 1440:1093, 911 (14).jpg) File: a44fb35e07cef3a⋯.jpg (91.27 KB, 522x739, 522:739, 911 (15).jpg) 2d7fec  No.12137930 File: 43615d4b4a0bede⋯.jpg (166.07 KB, 960x960, 1:1, hoaxed (32).jpg) File: d0a8b7a3c04643b⋯.jpg (539 KB, 866x651, 866:651, hoaxed (30).jpg) File: b8d65bc6e20427c⋯.jpg (512.6 KB, 816x2880, 17:60, 911 hoaxed (2).jpg) File: aa76768e1dfa242⋯.jpg (88.47 KB, 599x742, 599:742, 911 hoaxed (4).jpg) File: 12b8181d11b6c78⋯.jpg (63.79 KB, 480x563, 480:563, 911 hoaxed (6).jpg) 5bba67  No.12138053 80b7d1  No.12138185 >>11937699 is this pasta? I like it. Guaranteed You's 2cd4a6  No.12139083 >>11994709 >instead of wasting your time on conspiracy theories that make you look like a weirdo. Filtered. fe2a95  No.12140213 >>12111541 not one of these classless faggots know how to properly hold a wine glass. Forever Neanderthals. Fuck, learn some class you filthy rats. 9bc875  No.12162086 Osama did nothing wrong fac96c  No.12162186 YouTube embed. Click thumbnail to play. >>11932331 NBC reports Melted Cars from 9/11, also Judy Wood's research is worth looking into. 91bff7  No.12162693 >>12162186 reporter calls the attack a bombing… wtf cars on top of each other and melded together, steel was literally melted. this clip has it all! 143a1b  No.12162715 >>12111753 fuck off shill
http://www-old.newton.ac.uk/programmes/PFD/seminars/2005092815001.html
# PFD

## Seminar

### Patterns of synchrony in lattice dynamical systems

Antoneli, FM (Sao Paulo)

Wednesday 28 September 2005, 15:00-15:30

Seminar Room 1, Newton Institute

#### Abstract

From the point of view of coupled systems developed by Stewart, Golubitsky, and Pivato, lattice differential equations consist of choosing a phase space for each point in a lattice and a system of differential equations on each of these spaces such that the whole system is translation invariant. The architecture of a lattice differential equation is the specification of which sites are coupled to which (nearest-neighbor coupling is a standard example). A polydiagonal is a finite-dimensional subspace of phase space obtained by setting coordinates in different phase spaces equal. There is a coloring of the network associated to each polydiagonal, obtained by coloring any two cells that have equal coordinates with the same color. A pattern of synchrony is a coloring associated to a polydiagonal that is flow-invariant for every lattice differential equation with a given architecture. We prove that every pattern of synchrony for a fixed architecture in planar lattice differential equations is spatially doubly periodic, assuming that the couplings are sufficiently extensive. For example, nearest and next-nearest neighbor couplings are needed for square and hexagonal lattices, and a third level of coupling is needed for the corresponding result to hold in rhombic and primitive cubic lattices. On planar lattices this result is known to fail if the network architecture consists only of nearest-neighbor coupling. The techniques we develop to prove spatial periodicity and finiteness can be applied to other lattices.
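The flow-invariance condition described in the abstract can be made concrete: in the Stewart-Golubitsky-Pivato setting, a coloring is a pattern of synchrony exactly when it is "balanced", i.e. any two cells of the same color receive the same multiset of colors from their coupled neighbors. The sketch below checks this on a small lattice; the lattice size, periodic boundary, and example colorings are illustrative assumptions, not taken from the talk.

```python
def neighbours(i, j, n):
    """Nearest-neighbour coupling on an n x n lattice with periodic boundaries."""
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

def is_balanced(colour, n):
    """A colouring is balanced (a pattern of synchrony) if cells of equal
    colour see equal multisets of neighbour colours."""
    seen = {}
    for i in range(n):
        for j in range(n):
            c = colour[(i, j)]
            inputs = tuple(sorted(colour[p] for p in neighbours(i, j, n)))
            if c in seen and seen[c] != inputs:
                return False
            seen.setdefault(c, inputs)
    return True

n = 4
# Stripe pattern: colour depends only on column parity -- doubly periodic.
stripes = {(i, j): j % 2 for i in range(n) for j in range(n)}
# A single off-colour cell, for contrast: not balanced.
blob = {(i, j): 1 if (i, j) == (0, 0) else 0 for i in range(n) for j in range(n)}

print(is_balanced(stripes, n))  # True
print(is_balanced(blob, n))     # False
```

The stripe colouring is spatially doubly periodic, consistent with the theorem the abstract states for sufficiently extensive couplings.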
https://math-on-web.com/Cross-Browser-Solution/test/mathml3/3-6-1-StacksOfCharactersMstack1.html
# 3.6.1 MathML - Stacks of Characters: mstack

Mstack is used to lay out rows of numbers that are aligned on each digit. The children of an mstack represent rows, or groups of them, to be stacked each below the previous row; there can be any number of rows. An msrow represents a row; an msgroup groups a set of rows together so that their horizontal alignment can be adjusted together; an mscarries represents a set of carries to be applied to the following row; an msline represents a line separating rows. Any other element is treated as if implicitly surrounded by msrow.

### $723 + 123 + 456 = 1302$

#### MathML

```
<math>
  <mstack style="border:1px">
    <mn style="border:1px">723</mn>
    <mn style="border:1px" mathsize="70">123</mn>
    <msrow style="border:5px">
      <mn>456</mn>
      <mo>+</mo>
    </msrow>
    <msline style="border:1px"/>
    <mn style="border:1px">1302</mn>
  </mstack>
</math>
```
https://socratic.org/questions/what-is-the-final-concentration-of-a-solution-prepared-by-adding-water-to-50-0-m
# What is the final concentration of a solution prepared by adding water to 50.0 mL of 1.5 M NaOH to make 1.00 L of solution?

Dec 10, 2015

The moles of NaOH are unchanged on dilution, so the final concentration is moles divided by the final volume of solution:

$\frac{50.0 \times 10^{-3}\ \cancel{\text{L}} \times 1.50\ \text{mol} \cdot \cancel{\text{L}^{-1}}}{1.00\ \text{L}} = 0.075\ \text{mol} \cdot \text{L}^{-1}$

The quotient is dimensionally consistent: the answer has units of $\text{mol} \cdot \text{L}^{-1}$, as required, and the final concentration is 0.075 M.
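The dilution arithmetic can be checked numerically; moles of solute are conserved, so c_final = (V_initial × c_initial) / V_final, with the values from the question.

```python
# Numeric check of the dilution: 50.0 mL of 1.5 M NaOH diluted to 1.00 L.
v_initial = 50.0e-3   # 50.0 mL, in litres
c_initial = 1.5       # mol/L
v_final = 1.00        # litres

moles_naoh = v_initial * c_initial   # moles are conserved on dilution
c_final = moles_naoh / v_final       # mol/L

print(round(c_final, 3), "M")   # 0.075 M
```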
https://www.mbmlbook.com/MurderMystery_A_model_of_a_murder.html
1.2 A model of a murder Searching carefully around the library, Dr Bayes spots a bullet lodged in the book case. “Hmm, interesting”, he says, “I think this could be an important clue”. So it seems that the murder weapon was the revolver, not the dagger. Our intuition is that this new evidence points more strongly towards Major Grey than it does to Miss Auburn, since the Major, with his military background, is more likely to have experience with a revolver than Miss Auburn. But how can we use this information? A convenient way to think about the probabilities we have looked at so far is as a description of the process by which we believe the murder took place, taking account of the various sources of uncertainty. So, in this process, we first pick the murderer with the help of Figure 1.1. This shows that there is a 30% chance of choosing Major Grey and a 70% chance of choosing Miss Auburn. Let us suppose that Miss Auburn was the murderer. We can then refer to Figure 1.3 to pick which weapon she used. There is a 20% chance that she would have used the revolver and an 80% chance that she would have used the dagger. Let’s consider the event of Miss Auburn picking the revolver. The probability of choosing Miss Auburn and the revolver is therefore 70% $\times$ 20% = 14%. This is the joint probability of choosing Auburn and revolver. If we repeat this exercise for the other three combinations of murderer and weapon we obtain the joint probability distribution over the two random variables, which we can illustrate pictorially as seen in Figure 1.4. Figure 1.5 below shows how this joint distribution was constructed from the previous distributions we have defined. We have taken the left-hand slice of the $P(\var{murderer})$ square corresponding to Major Grey, and divided it vertically in proportion to the two regions of the conditional probability square for Grey. 
Likewise, we have taken the right-hand slice of the $P(\var{murderer})$ square corresponding to Miss Auburn, and divided it vertically in proportion to the two regions of the conditional probability square for Auburn.

Figure 1.5: The joint distribution for our two-variable model, shown as a product of two factors: $P(\var{weapon}, \var{murderer}) = P(\var{murderer}) \times P(\var{weapon} | \var{murderer})$.

We denote this joint probability distribution by $P(\var{weapon}, \var{murderer})$, which should be read as “the probability of weapon and murderer”. In general, the joint distribution of two random variables A and B can be written $P(\var{A}, \var{B})$ and specifies the probability for each possible combination of settings of A and B. Because probabilities must sum to one, we have

$\sum_{\var{A}} \sum_{\var{B}} P(\var{A}, \var{B}) = 1.$

Here the notation $\sum_A$ denotes a sum over all possible states of the random variable A, and likewise for B. This corresponds to the total area of the square in Figure 1.4 being 1, and arises because we assume the world consists of one, and only one, combination of murderer and weapon. Picking a point randomly in this new square corresponds to sampling from the joint probability distribution.

Probabilistic models

We can now introduce the central concept of this book, the probabilistic model. A probabilistic model consists of:

- a set of random variables, and
- a joint probability distribution over those variables, assigning a probability to every possible configuration of their values.

Once we have a probabilistic model, we can reason about the variables it contains, make predictions, learn about the values of some random variables given the values of others, and in general answer any possible question that can be stated in terms of the random variables included in the model. This makes a probabilistic model an incredibly powerful tool for doing machine learning. We can think of a probabilistic model as a set of assumptions we are making about the problem we are trying to solve, where any assumptions involving uncertainty are expressed using probabilities.
The best way to understand how this is done, and how the model can be used to reason and make predictions, is by looking at example models. In this chapter, we give the example of a probabilistic model of a murder. In later chapters, we shall build a variety of more complex models for other applications. All the machine learning applications in this book will be solved solely through the use of probabilistic models.

Two rules for working with probabilistic models

So for our murder mystery, we have a probabilistic model with two variables murderer and weapon where the joint probability distribution over those variables is the one shown in Figure 1.4. To use this model, we now need to introduce two key rules which allow us to manipulate the probability distributions in a model.

From the discussion above, we see that the joint probability distribution for our model is obtained by taking the probability distribution over murderer and multiplying by the conditional distribution of weapon. This can be written in the form

$P(\var{weapon}, \var{murderer}) = P(\var{murderer}) \, P(\var{weapon} | \var{murderer}). \tag{1.15}$

Equation (1.15) is an example of a very important result called the product rule of probability. The product rule says that the joint distribution of A and B can be written as the product of the distribution over A and the conditional distribution of B conditioned on the value of A, in the form

$P(\var{A}, \var{B}) = P(\var{A}) \, P(\var{B} | \var{A}). \tag{1.16}$

Now suppose we sum up the values in the two left-hand regions of Figure 1.4 corresponding to Major Grey. Their total area is 0.3, as we expect because we know that the probability of Grey being the murderer is 0.3. The sum is over the different possibilities for the choice of weapon, so we can express this in the form

$\sum_{\var{weapon}} P(\var{weapon}, \var{murderer} = \state{Grey}) = 0.3.$

Similarly, the entries in the second column, corresponding to the murderer being Miss Auburn, must add up to 0.7.
Combining these together we can write

$P(\var{murderer}) = \sum_{\var{weapon}} P(\var{weapon}, \var{murderer}).$

This is an example of the sum rule of probability, which says that the probability distribution over a random variable A is obtained by summing the joint distribution $P(\var{A}, \var{B})$ over all values of B:

$P(\var{A}) = \sum_{\var{B}} P(\var{A}, \var{B}). \tag{1.19}$

In this context, the distribution $P(\var{A})$ is known as the marginal distribution for A and the act of summing out B is called marginalisation. We can equally apply the sum rule to marginalise over the murderer to find the probability that each of the weapons was used, irrespective of who used them. If we sum the areas of the top two regions of Figure 1.4 we see that the probability of the weapon being the revolver was $0.27 + 0.14 = 0.41$, or 41%. Similarly, if we add up the areas of the bottom two regions we see that the probability that the weapon was the dagger is $0.03 + 0.56 = 0.59$, or 59%. The two marginal probabilities then add up to $1$, as we expect since the weapon must have been either the revolver or the dagger.

The sum and product rules are very general. They apply not just when A and B are binary random variables, but also when they are multi-state random variables, and even when they are continuous (in which case the sums are replaced by integrations). Furthermore, A and B could each represent sets of several random variables. For example, if $\var{B} \equiv \{ \var{C}, \var{D} \}$, then from the product rule (1.16) we have

$P(\var{A}, \var{C}, \var{D}) = P(\var{A}) \, P(\var{C}, \var{D} | \var{A}),$

and similarly the sum rule (1.19) gives

$P(\var{A}) = \sum_{\var{C}} \sum_{\var{D}} P(\var{A}, \var{C}, \var{D}).$

The last result is particularly useful since it shows that we can find the marginal distribution for a particular random variable in a joint distribution by summing over all the other random variables, no matter how many there are. Together, the product rule and sum rule provide the two key results that we will need throughout the book in order to manipulate and calculate probabilities. It is remarkable that the rich and powerful complexity of probabilistic modelling is all founded on these two simple rules.
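The product and sum rules can be checked numerically with the murder-mystery numbers. The prior and Miss Auburn's conditional values are quoted in the text; Major Grey's conditional column (0.9 revolver, 0.1 dagger) is inferred here from the quoted joint values 0.27 and 0.03.

```python
# Product and sum rules on the murder-mystery model.
prior = {"Grey": 0.3, "Auburn": 0.7}
conditional = {                      # P(weapon | murderer)
    "Grey":   {"revolver": 0.9, "dagger": 0.1},   # inferred from 0.27 / 0.03
    "Auburn": {"revolver": 0.2, "dagger": 0.8},
}
weapons = ("revolver", "dagger")

# Product rule: P(weapon, murderer) = P(murderer) * P(weapon | murderer)
joint = {(w, m): prior[m] * conditional[m][w] for m in prior for w in weapons}

# Sum rule: marginalise out the murderer to get P(weapon)
p_weapon = {w: sum(joint[(w, m)] for m in prior) for w in weapons}

print({k: round(v, 2) for k, v in joint.items()})
print({w: round(p, 2) for w, p in p_weapon.items()})  # revolver 0.41, dagger 0.59
```

The four joint values reproduce Figure 1.4, and the two marginals sum to 1, as the text requires.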
Inference using the joint distribution

We now have the tools that we need to incorporate the fact that the weapon was the revolver. Intuitively, we expect that this should increase the probability that Grey was the murderer, but to confirm this we need to calculate that updated probability. The process of computing revised probability distributions after we have observed the values of some of the random variables is called inference. Inference is the cornerstone of model-based machine learning – it can be used for reasoning about a model, learning from data, making predictions with a model – in fact, any machine learning task can be achieved using inference.

We can do inference in our model using the joint probability distribution shown in Figure 1.4. Our model says that, before we observe which weapon was used to commit the crime, all points within this square are equally likely. However, we now know that the weapon was the revolver. We can therefore rule out the two lower regions which correspond to the weapon being the dagger, as illustrated in Figure 1.6. Because all points in the remaining two regions are equally likely, we see that the probability of the murderer being Major Grey is given by the fraction of the remaining area occupied by the grey box on the left:

$P(\var{murderer} = \state{Grey}| \var{weapon} = \state{revolver}) = \frac{0.27}{0.27 + 0.14} \simeq 0.66,$

in other words, a 66% probability. This is significantly higher than the 30% probability we had before observing that the weapon used was the revolver. We see that our intuition is therefore correct and it now looks more likely that Grey is the murderer rather than Auburn. The probability that we assigned to Grey being the murderer before seeing the evidence of the bullet is sometimes called the prior probability (or just the prior), while the revised probability after seeing the new evidence is called the posterior probability (or just the posterior).
The probability that Miss Auburn is the murderer is similarly given by

$P(\var{murderer} = \state{Auburn}| \var{weapon} = \state{revolver}) = \frac{0.14}{0.27 + 0.14} \simeq 0.34.$

Because the murderer is either Grey or Auburn, these two probabilities again sum to 1. We can capture this pictorially by re-scaling the regions in Figure 1.6 to give the diagram shown in Figure 1.7. We have seen that, as new data, or evidence, is collected, we can use the product and sum rules to revise the probabilities to reflect changing levels of uncertainty. The system can be viewed as having learned from that data.

So, after all this hard work, have we finally solved our murder mystery? Well, given the evidence so far it appears that Grey is more likely to be the murderer, but the probability of his guilt currently stands at 66%, which feels too small for a conviction. But how high a probability would we need? To find an answer we turn to William Blackstone’s principle of 1765: “Better that ten guilty persons escape than one innocent suffer.” We therefore need a probability of guilt for our murderer which exceeds $\frac{10}{10+1} \approx 91\%$. To achieve this level of proof we will need to gather more evidence from the crime scene, and to make a corresponding extension to our model in order to incorporate this new evidence. We’ll look at how to do this in the next section.

Self assessment 1.2

The following exercises will help embed the concepts you have learned in this section. It may help to refer back to the text or to the concept summary below.

1. Check for yourself that the joint probabilities for the four areas in Figure 1.4 are correct and confirm that their total is 1. Use this figure to compute the posterior probability over murderer, if the murder weapon had been the dagger rather than the revolver.

2. Choose one of the following scenarios (continued from the previous self assessment) or choose your own scenario:
   - Whether you are late for work, depending on whether or not traffic is bad.
   - Whether a user replies to an email, depending on whether or not he knows the sender.
   - Whether it will rain on a particular day, depending on whether or not it rained on the previous day.

   For your selected scenario, pick a suitable prior probability for the conditioning variable (for example, whether the traffic is bad, whether the user knows the sender, whether it rained the previous day). Recall the conditional probability table that you estimated in the previous self assessment. Using the prior and this conditional distribution, use the product rule to calculate the joint distribution over the two variables in the scenario. Draw this joint distribution pictorially, like the example of Figure 1.4. Make sure you label each area with the probability value, and that these values all add up to 1.

3. Now assume that you know the value of the conditioned variable; for example, assume that you are late for work on a particular day. Now compute the posterior probability of the conditioning variable, for example, the probability that the traffic was bad on that day. You can achieve this using your diagram from the previous question, by crossing out the areas that don’t apply and finding the fraction of the remaining area where the conditioning event happened.

4. For your joint probability distribution, write a program to print out 1,000 joint samples of both variables. Compute the fraction of samples that have each possible pair of values. Check that this is close to your joint probability table. Now change the program to only print out those samples which are consistent with your known value from the previous question (for example, samples where you are late for work). What fraction of these samples have each possible pair of values now? How does this compare to your answer to the previous question?
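The inference and sampling steps in the self-assessment can be sketched on the murder model itself. As before, the prior and Auburn's conditionals are quoted in the text, while Grey's conditional column (0.9/0.1) is inferred from the quoted joint values; the sample count and seed are arbitrary choices.

```python
import random

prior = {"Grey": 0.3, "Auburn": 0.7}
conditional = {"Grey": {"revolver": 0.9, "dagger": 0.1},
               "Auburn": {"revolver": 0.2, "dagger": 0.8}}

# Posterior over the murderer given weapon = revolver: cross out the
# dagger regions of the joint and renormalise what remains.
joint_revolver = {m: prior[m] * conditional[m]["revolver"] for m in prior}
total = sum(joint_revolver.values())
posterior = {m: p / total for m, p in joint_revolver.items()}
print(round(posterior["Grey"], 2))   # 0.66, matching the text

# Ancestral sampling: draw (murderer, weapon) pairs, keep those consistent
# with the observed weapon, and compare frequencies against the posterior.
random.seed(0)
kept = []
for _ in range(100_000):
    m = "Grey" if random.random() < prior["Grey"] else "Auburn"
    w = "revolver" if random.random() < conditional[m]["revolver"] else "dagger"
    if w == "revolver":
        kept.append(m)
print(round(kept.count("Grey") / len(kept), 2))   # close to 0.66
```

The fraction of retained samples naming Grey converges on the exact posterior 0.27/0.41, which is the point of question 4.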
joint probability: A probability distribution over multiple variables which gives the probability of the variables jointly taking a particular configuration of values. For example, $P(\var{A},\var{B},\var{C})$ is a joint distribution over the random variables A, B, and C.

probabilistic model: A set of random variables combined with a joint distribution that assigns a probability to every configuration of these variables. The model is often represented using a graph, such as a factor graph, making it a graphical model.

product rule of probability: The rule that the joint distribution of A and B can be written as the product of the distribution over A and the conditional distribution of B conditioned on the value of A, in the form $P(\var{A}, \var{B}) = P(\var{A}) \, P(\var{B} | \var{A})$.

sum rule of probability: The rule that the probability distribution over a random variable A is obtained by summing the joint distribution $P(\var{A}, \var{B})$ over all values of B, so that $P(\var{A}) = \sum_{\var{B}} P(\var{A}, \var{B})$.

marginal distribution: The distribution over a random variable computed by using the sum rule to sum a joint distribution over all other variables in the distribution.

marginalisation: The process of summing a joint distribution to compute a marginal distribution.

inference: The process of computing probability distributions over certain specified random variables, usually after observing the value of some other variables in the model.

prior probability: The probability distribution over a random variable prior to seeing any data. Careful choice of prior distributions is an important part of model design.

posterior probability: The updated probability distribution over a random variable after some data has been taken into account. The aim of inference is to compute posterior probability distributions over variables of interest.
https://www.physicsforums.com/threads/force-between-magnetic-poles.307728/
# Force between magnetic poles

1. Apr 16, 2009

### Jack123

Hi, I was wondering if anyone could tell me how to calculate the attractive force between a pair of magnets. I originally thought that this would involve a really simple formula (something of the 1-over-r-squared variety) but have struggled to find any equations dealing with the force between poles; they all seem to associate magnetic forces with charged particles. The only formula I have found is located at this address:

http://geophysics.ou.edu/solid_earth/notes/mag_basic/mag_basic.html [Broken]

In my experiment I was examining how the force of attraction between a solenoid and a bar magnet of known strength (0.01 T) depended on the current and number of turns of the solenoid, as well as the distance between the two. I reasoned that the field of a solenoid is in effect the same as that of a bar magnet, so I should be able to use the above formula. However, the force I calculated was tiny, despite the fact that I could physically feel the attraction when I suspended the magnet over the solenoid. When I measured the force I found it to be on the order of a tenth of a newton, hundreds of times greater than the number I had obtained from the above equation. So what am I doing wrong?

Last edited by a moderator: May 4, 2017

2. Dec 1, 2010

### MagnetDave

The result is heavily dependent on the geometry and material of the situation. A "bar" magnet encompasses a wide variety of things that all perform very differently. If you envision your bar as a thin sheet, with the direction of magnetization in the thin axis, it will be close to useless. Conversely, if you make a baton, with the DOM in the long direction, it's quite powerful. Similarly, a Neo magnet will be different than a Samarium magnet, which will be different than an Alnico...

In short, the situation is not very amenable to a quick-and-dirty formula.
You should look into FEMM, which is a simple, free, 2D finite-element code, so that you can at least get an order-of-magnitude calculation done.

3. Dec 1, 2010

### Meir Achuz

That formula is for a pole far away from another pole. The force depends on just where the magnet is put. If the bar magnet is placed right at the end of the solenoid, then the force is given by $F = BB'A/(2\pi)$, where $B$ and $B'$ are the field strengths in gauss, and $A$ is the cross-sectional area of the bar magnet (in cm²); the force then comes out in dynes.

Last edited: Dec 1, 2010
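Meir Achuz's end-on formula is easy to sanity-check numerically. The sketch below (Python; the solenoid parameters are made-up illustrative values, not the original poster's) converts SI fields to gauss, applies $F = BB'A/(2\pi)$ in dynes, and converts the result back to newtons:

```python
# Hedged sketch of the pole-face formula F = B*B'*A/(2*pi) from the reply above
# (Gaussian units: B, B' in gauss, A in cm^2, F in dynes; 1 dyne = 1e-5 N).
# The solenoid parameters below are hypothetical illustrative values.
import math

mu0 = 4 * math.pi * 1e-7            # vacuum permeability, T*m/A

def solenoid_field_tesla(n_turns_per_m, current_A):
    """Field inside a long solenoid, B = mu0 * n * I (SI units)."""
    return mu0 * n_turns_per_m * current_A

def pole_face_force_N(B_tesla, Bprime_tesla, area_cm2):
    """F = B*B'*A/(2*pi) evaluated in Gaussian units, converted to newtons."""
    B_gauss = B_tesla * 1e4          # 1 T = 10^4 G
    Bp_gauss = Bprime_tesla * 1e4
    F_dynes = B_gauss * Bp_gauss * area_cm2 / (2 * math.pi)
    return F_dynes * 1e-5            # dynes -> newtons

Bp = solenoid_field_tesla(1000, 2.0)   # 1000 turns/m carrying 2 A
F = pole_face_force_N(0.01, Bp, 1.0)   # 0.01 T bar magnet, 1 cm^2 face
print(f"B' = {Bp*1e4:.1f} G, F = {F:.2e} N")   # → B' = 25.1 G, F = 4.00e-03 N
```

With these made-up numbers the force comes out at a few millinewtons, which illustrates how sensitive the answer is to the solenoid field and the contact area — consistent with the point above that geometry dominates.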
http://www.physicsforums.com/showthread.php?t=239779
# Series identity

by mhill
Tags: identity, series

P: 193

For every sequence of numbers $a_n$, $E_n$, is this identity correct?

$$\sum_{n=-\infty}^{\infty} a_n\, e^{2\pi i E_n x} = \sum_{n=-\infty}^{\infty} a_n\, \delta(x - E_n)$$

P: 2,251

This is true:

$$\sum_{n=-\infty}^{+\infty} e^{i 2 \pi n x} = \sum_{k=-\infty}^{+\infty} \delta(x-k),$$

but the generalization with arbitrary coefficients (placed on both sides) is not a true equality.

P: 193

rbj and matt were right: only

$$\sum_{n=-\infty}^{+\infty} e^{i 2 \pi n x} = \sum_{k=-\infty}^{+\infty} \delta(x-k) \qquad (1)$$

is correct. However, my question is whether, using Fourier analysis, we could generalize it to an identity

$$\sum_{n=-\infty}^{+\infty} a_n\, e^{i 2 \pi n x} = \sum_{k=-\infty}^{+\infty} b_k\, \delta(x-k),$$

where the $a_n$ and $b_k$ are related in some way. This is interesting in view of an article on functional equations for Dirichlet series: using (1), the author was able to prove the functional equation for the Riemann zeta function. My idea was to develop a functional equation for almost every Dirichlet series, to see where they have their 'poles'.

P: 193

If we have, in the general case,

$$\sum_{n=-\infty}^{+\infty} b_n\, e^{i 2 \pi n x} = D A(x),$$

where $A(x)$ is the partial sum of the $a_n$ and $D$ is the derivative operator (in the case $A(x) = [x]$ we recover the usual delta identity), then I believe we can calculate the $b_n$ by the Fourier integral

$$b_n = \int_0^1 DA(x)\, e^{-2 i \pi n x}\, dx.$$
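Identity (1) can be illustrated numerically: the symmetric partial sum of the exponentials is the Dirichlet kernel, which behaves like a unit Dirac comb as $N \to \infty$. A small self-contained sketch (Python; the parameter values are arbitrary):

```python
# Numerical sketch of sum_n e^{i 2 pi n x} = sum_k delta(x-k).
# The symmetric partial sum is the Dirichlet kernel
#   D_N(x) = sum_{n=-N}^{N} e^{i 2 pi n x} = sin((2N+1) pi x) / sin(pi x),
# which peaks with height 2N+1 at integer x, stays O(1) elsewhere, and
# integrates to 1 over one period -- exactly the unit-comb behaviour.
import cmath
import math

def dirichlet(x, N):
    """Partial sum of the exponential series, evaluated directly."""
    return sum(cmath.exp(2j * math.pi * n * x) for n in range(-N, N + 1)).real

N = 50
peak = dirichlet(0.0, N)      # equals 2N + 1 = 101
off = dirichlet(0.37, N)      # bounded, does not grow with N
# crude Riemann sum of D_N over one period: only the n = 0 term survives
M = 2000
integral = sum(dirichlet((k + 0.5) / M, N) for k in range(M)) / M
print(peak, off, integral)
```

The peak grows linearly with $N$ while the integral stays pinned at 1, which is the sense in which the partial sums converge (as distributions) to the comb.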
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-rm-doi-10_4064-dm482-0-1
Unitary equivalence and decompositions of finite systems of closed densely defined operators in Hilbert spaces

Series: Rozprawy Matematyczne (Dissertationes Mathematicae), vol. 482
Published: Warszawa, 2012
Pages: 106
Language: EN

Abstract (EN): An ideal of N-tuples of operators is a class invariant with respect to unitary equivalence which contains direct sums of arbitrary collections of its members as well as their (reduced) parts. New decomposition theorems (with respect to ideals) for N-tuples of closed densely defined linear operators acting in a common (arbitrary) Hilbert space are presented. Algebraic and order (with respect to containment) properties of the class $CDD_{N}$ of all unitary equivalence classes of such N-tuples are established, and certain ideals in $CDD_{N}$ are distinguished. It is proved that infinite operations in $CDD_{N}$ may be reconstructed from the direct sum operation on pairs. A prime decomposition in $CDD_{N}$ is proposed and its uniqueness (in a certain sense) is established. The classification of ideals in $CDD_{N}$ (up to isomorphism) is discussed. A model for $CDD_{N}$ is described and a concrete realization of it is presented. A new partial order on N-tuples of operators is introduced and its fundamental properties are established. The importance of unitary disjointness of N-tuples, and the way it 'tidies up' the structure of $CDD_{N}$, are emphasized.

Author affiliation: Institute of Mathematics, Jagiellonian University, Łojasiewicza 6, 30-348 Kraków, Poland
https://farside.ph.utexas.edu/teaching/336L/Fluid/node256.html
# Orthogonal Curvilinear Coordinates

Let $x_1$, $x_2$, $x_3$ be a set of standard right-handed Cartesian coordinates. Furthermore, let $u_1$, $u_2$, $u_3$ be three independent functions of these coordinates which are such that each unique triplet of $x_1$, $x_2$, $x_3$ values is associated with a unique triplet of $u_1$, $u_2$, $u_3$ values. It follows that $u_1$, $u_2$, $u_3$ can be used as an alternative set of coordinates to distinguish different points in space. Because the surfaces of constant $u_1$, $u_2$, and $u_3$ are not generally parallel planes, but rather curved surfaces, this type of coordinate system is termed curvilinear.

Let ${\bf e}_1 = \nabla u_1/|\nabla u_1|$, ${\bf e}_2 = \nabla u_2/|\nabla u_2|$, and ${\bf e}_3 = \nabla u_3/|\nabla u_3|$. It follows that ${\bf e}_1$, ${\bf e}_2$, and ${\bf e}_3$ are a set of unit basis vectors that are normal to surfaces of constant $u_1$, $u_2$, and $u_3$, respectively, at all points in space. Note, however, that the direction of these basis vectors is generally a function of position. Suppose that the ${\bf e}_i$, where $i$ runs from 1 to 3, are mutually orthogonal at all points in space: that is,

$${\bf e}_i\cdot{\bf e}_j = \delta_{ij}. \tag{C.1}$$

In this case, $u_1$, $u_2$, $u_3$ are said to constitute an orthogonal coordinate system. Suppose, further, that

$${\bf e}_1\cdot{\bf e}_2\times{\bf e}_3 = 1 \tag{C.2}$$

at all points in space, so that $u_1$, $u_2$, $u_3$ also constitute a right-handed coordinate system. It follows that

$${\bf e}_i\times{\bf e}_j = \sum_k \epsilon_{ijk}\,{\bf e}_k. \tag{C.3}$$

Finally, a general vector ${\bf A}$, associated with a particular point in space, can be written

$${\bf A} = \sum_i A_i\,{\bf e}_i, \tag{C.4}$$

where the ${\bf e}_i$ are the local basis vectors of the $u_1$, $u_2$, $u_3$ system, and $A_i$ is termed the $i$th component of ${\bf A}$ in this system.

Consider two neighboring points in space whose coordinates in the $u_1$, $u_2$, $u_3$ system are $u_1$, $u_2$, $u_3$ and $u_1+du_1$, $u_2+du_2$, $u_3+du_3$. It is easily shown that the vector directed from the first to the second of these points takes the form

$$d{\bf x} = h_1\,du_1\,{\bf e}_1 + h_2\,du_2\,{\bf e}_2 + h_3\,du_3\,{\bf e}_3. \tag{C.5}$$

Hence, from (C.1), an element of length (squared) in the $u_1$, $u_2$, $u_3$ coordinate system is written

$$ds^2 = h_1^{\,2}\,du_1^{\,2} + h_2^{\,2}\,du_2^{\,2} + h_3^{\,2}\,du_3^{\,2}. \tag{C.6}$$

Here, the $h_i$, which are generally functions of position, are known as the scale factors of the system. Elements of area that are normal to ${\bf e}_1$, ${\bf e}_2$, and ${\bf e}_3$, at a given point in space, take the form $h_2\,h_3\,du_2\,du_3$, $h_1\,h_3\,du_1\,du_3$, and $h_1\,h_2\,du_1\,du_2$, respectively.
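As a concrete illustration of the scale factors introduced in (C.5)-(C.6), the following sketch (Python; not part of the original text) computes $h_i = |\partial{\bf x}/\partial u_i|$ numerically for spherical coordinates $(r,\theta,\phi)$, for which the known values are $h_r = 1$, $h_\theta = r$, $h_\phi = r\sin\theta$:

```python
# Hedged sketch: scale factors h_i = |d x / d u_i| computed by central
# differences for spherical coordinates (r, theta, phi).
# Expected: h = (1, r, r sin theta).
import math

def cart(r, t, p):
    """Cartesian position for spherical coordinates (r, theta, phi)."""
    return (r * math.sin(t) * math.cos(p),
            r * math.sin(t) * math.sin(p),
            r * math.cos(t))

def scale_factors(r, t, p, eps=1e-6):
    """Numerical h_i via |x(u_i + eps) - x(u_i - eps)| / (2 eps)."""
    hs = []
    for i in range(3):
        u = [r, t, p]
        u[i] += eps
        xp = cart(*u)
        u[i] -= 2 * eps
        xm = cart(*u)
        hs.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(xp, xm))) / (2 * eps))
    return hs

r, t, p = 2.0, 0.8, 1.3
print(scale_factors(r, t, p))   # ~ [1, 2.0, 2*sin(0.8)]
```

The same finite-difference recipe works for any orthogonal coordinate map, which makes it a convenient check when deriving the $h_i$ by hand.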
Finally, an element of volume, at a given point in space, is written $dV = h\,du_1\,du_2\,du_3$, where

$$h = h_1\,h_2\,h_3. \tag{C.7}$$

It can be seen that [see Equation (A.176)]

$$\nabla u_i = \frac{{\bf e}_i}{h_i} \tag{C.8}$$

and

$$\nabla u_j\times\nabla u_k = \frac{{\bf e}_i}{h_j\,h_k}, \tag{C.9}$$

where $i$, $j$, $k$ are a cyclic permutation of 1, 2, 3. The latter result follows from Equations (A.175) and (A.176) because $\nabla u_j = {\bf e}_j/h_j$, et cetera. Finally, it is easily demonstrated from Equations (C.8) and (C.9) that

$$\nabla\times\!\left(\frac{{\bf e}_i}{h_i}\right) = {\bf 0}, \tag{C.10}$$

$$\nabla\cdot\!\left(\frac{{\bf e}_i}{h_j\,h_k}\right) = 0. \tag{C.11}$$

Consider a scalar field $\psi({\bf r})$. It follows from the chain rule, and the relation ${\bf e}_i = h_i\,\nabla u_i$, that

$$\nabla\psi = \sum_i \frac{1}{h_i}\,\frac{\partial\psi}{\partial u_i}\,{\bf e}_i. \tag{C.12}$$

Hence, the components of $\nabla\psi$ in the $u_1$, $u_2$, $u_3$ coordinate system are

$$(\nabla\psi)_i = \frac{1}{h_i}\,\frac{\partial\psi}{\partial u_i}. \tag{C.13}$$

Consider a vector field ${\bf A}({\bf r})$. We can write

$$\nabla\cdot{\bf A} = \sum_i \nabla\cdot(A_i\,{\bf e}_i) = \sum_i \nabla\cdot\!\left[(h_j\,h_k\,A_i)\,\frac{{\bf e}_i}{h_j\,h_k}\right] = \sum_i \frac{{\bf e}_i}{h_j\,h_k}\cdot\nabla(h_j\,h_k\,A_i), \tag{C.14}$$

where $i$, $j$, $k$ run over cyclic permutations of 1, 2, 3, and use has been made of Equations (A.174), (C.9), and (C.11). Thus, the divergence of ${\bf A}$ in the $u_1$, $u_2$, $u_3$ coordinate system takes the form

$$\nabla\cdot{\bf A} = \frac{1}{h_1\,h_2\,h_3}\left[\frac{\partial(h_2\,h_3\,A_1)}{\partial u_1} + \frac{\partial(h_1\,h_3\,A_2)}{\partial u_2} + \frac{\partial(h_1\,h_2\,A_3)}{\partial u_3}\right]. \tag{C.15}$$

We can write

$$\nabla\times{\bf A} = \sum_i \nabla\times(A_i\,{\bf e}_i) = \sum_i \nabla\times\!\left[(h_i\,A_i)\,\frac{{\bf e}_i}{h_i}\right], \tag{C.16}$$

where use has been made of Equations (A.178), (C.8), and (C.12). It follows from Equation (C.10) that

$$\nabla\times{\bf A} = \sum_i \nabla(h_i\,A_i)\times\frac{{\bf e}_i}{h_i} = \sum_{i,j} \frac{1}{h_i\,h_j}\,\frac{\partial(h_i\,A_i)}{\partial u_j}\,{\bf e}_j\times{\bf e}_i. \tag{C.17}$$

Hence, the components of $\nabla\times{\bf A}$ in the $u_1$, $u_2$, $u_3$ coordinate system are

$$(\nabla\times{\bf A})_1 = \frac{1}{h_2\,h_3}\left[\frac{\partial(h_3\,A_3)}{\partial u_2} - \frac{\partial(h_2\,A_2)}{\partial u_3}\right], \tag{C.18}$$

with the other components obtained by cyclic permutation of the indices. Now, $\nabla^2\psi = \nabla\cdot\nabla\psi$ [see Equation (A.172)], so Equations (C.12) and (C.15) yield the following expression for $\nabla^2\psi$ in the $u_1$, $u_2$, $u_3$ coordinate system:

$$\nabla^2\psi = \frac{1}{h_1\,h_2\,h_3}\left[\frac{\partial}{\partial u_1}\!\left(\frac{h_2\,h_3}{h_1}\,\frac{\partial\psi}{\partial u_1}\right) + \frac{\partial}{\partial u_2}\!\left(\frac{h_1\,h_3}{h_2}\,\frac{\partial\psi}{\partial u_2}\right) + \frac{\partial}{\partial u_3}\!\left(\frac{h_1\,h_2}{h_3}\,\frac{\partial\psi}{\partial u_3}\right)\right]. \tag{C.19}$$

The vector identities (A.171) and (A.179) can be combined to give the following expression for $({\bf A}\cdot\nabla){\bf A}$ that is valid in a general coordinate system:

$$({\bf A}\cdot\nabla){\bf A} = \nabla\!\left(\frac{{\bf A}\cdot{\bf A}}{2}\right) - {\bf A}\times(\nabla\times{\bf A}). \tag{C.20}$$

Making use of Equations (C.13), (C.15), and (C.18), as well as the easily demonstrated results

$$\frac{\partial{\bf e}_i}{\partial u_j} = {\bf e}_j\,\frac{1}{h_i}\,\frac{\partial h_j}{\partial u_i}\quad (i\neq j), \tag{C.21}$$

$$\frac{\partial{\bf e}_i}{\partial u_i} = -\sum_{j\neq i} {\bf e}_j\,\frac{1}{h_j}\,\frac{\partial h_i}{\partial u_j}, \tag{C.22}$$

and the tensor identity (B.16), Equation (C.20) reduces (after a great deal of tedious algebra) to the following expression for the components of $({\bf A}\cdot\nabla){\bf A}$ in the $u_1$, $u_2$, $u_3$ coordinate system:

$$[({\bf A}\cdot\nabla){\bf A}]_i = \sum_j \frac{A_j}{h_j}\,\frac{\partial A_i}{\partial u_j} + \sum_j \frac{A_j}{h_i\,h_j}\left(A_i\,\frac{\partial h_i}{\partial u_j} - A_j\,\frac{\partial h_j}{\partial u_i}\right). \tag{C.23}$$

Note, incidentally, that the commonly quoted result $[({\bf A}\cdot\nabla){\bf A}]_i = ({\bf A}\cdot\nabla)A_i$ is only valid in Cartesian coordinate systems (for which $h_1 = h_2 = h_3 = 1$).
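The divergence formula (C.15) can be checked against a direct Cartesian computation. The sketch below (Python; an illustration, not part of the original text) uses spherical coordinates, where $h_r = 1$, $h_\theta = r$, $h_\phi = r\sin\theta$: for the radial field ${\bf A} = r^2\,{\bf e}_r$, (C.15) gives $\nabla\cdot{\bf A} = (1/r^2)\,\partial(r^2\cdot r^2)/\partial r = 4r$, and a finite-difference Cartesian divergence of the same field agrees:

```python
# Hedged numerical check of the curvilinear divergence formula (C.15).
# In spherical coordinates, for A = r^2 e_r, (C.15) predicts div A = 4r.
# In Cartesian form the same field is A = r * (x, y, z), whose divergence
# we evaluate by central differences and compare with 4r.
import math

def div_cartesian(x, y, z, eps=1e-5):
    def A(px, py, pz):
        r = math.sqrt(px * px + py * py + pz * pz)
        return (r * px, r * py, r * pz)      # r^2 e_r = r * position vector
    dAx = (A(x + eps, y, z)[0] - A(x - eps, y, z)[0]) / (2 * eps)
    dAy = (A(x, y + eps, z)[1] - A(x, y - eps, z)[1]) / (2 * eps)
    dAz = (A(x, y, z + eps)[2] - A(x, y, z - eps)[2]) / (2 * eps)
    return dAx + dAy + dAz

x, y, z = 0.3, -0.7, 1.1
r = math.sqrt(x * x + y * y + z * z)
print(div_cartesian(x, y, z), 4 * r)   # both ~ 5.35
```

Any vector field and any orthogonal coordinate system could be substituted here; the agreement of the two computations is exactly the content of (C.15).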
Let us define the gradient $\nabla{\bf A}$ of a vector field ${\bf A}$ as the tensor whose components in a Cartesian coordinate system take the form

$$(\nabla{\bf A})_{ij} = \frac{\partial A_j}{\partial x_i}. \tag{C.24}$$

In an orthogonal curvilinear coordinate system, the previous expression generalizes to

$$(\nabla{\bf A})_{ij} = \frac{1}{h_i}\,\frac{\partial A_j}{\partial u_i} + \delta_{ij}\sum_k \frac{A_k}{h_i\,h_k}\,\frac{\partial h_i}{\partial u_k} - \frac{A_i}{h_i\,h_j}\,\frac{\partial h_i}{\partial u_j}. \tag{C.25}$$

It thus follows from Equation (C.23), and the relation $({\bf A}\cdot\nabla){\bf A} = {\bf A}\cdot\nabla{\bf A}$, that

$$[({\bf A}\cdot\nabla){\bf A}]_j = \sum_i A_i\,(\nabla{\bf A})_{ij}. \tag{C.26}$$

The vector identity (A.177) yields the following expression for $\nabla^2{\bf A}$ that is valid in a general coordinate system:

$$\nabla^2{\bf A} = \nabla(\nabla\cdot{\bf A}) - \nabla\times(\nabla\times{\bf A}). \tag{C.27}$$

Substituting Equations (C.13), (C.15), and (C.18) into the previous equation yields the components of $\nabla^2{\bf A}$ in the $u_1$, $u_2$, $u_3$ coordinate system:

$$(\nabla^2{\bf A})_i = \frac{1}{h_i}\,\frac{\partial(\nabla\cdot{\bf A})}{\partial u_i} - [\nabla\times(\nabla\times{\bf A})]_i, \tag{C.28}$$

where $\nabla\cdot{\bf A}$ is given by Equation (C.15) and the curl components by Equation (C.18). Note, again, that the commonly quoted result $(\nabla^2{\bf A})_i = \nabla^2 A_i$ is only valid in Cartesian coordinate systems (for which $h_1 = h_2 = h_3 = 1$).

Richard Fitzpatrick 2016-01-22