url stringlengths 14 1.76k | text stringlengths 100 1.02M | metadata stringlengths 1.06k 1.1k |
|---|---|---|
https://openstax.org/books/introductory-statistics/pages/11-1-facts-about-the-chi-square-distribution | Introductory Statistics
# 11.1 Facts About the Chi-Square Distribution
The notation for the chi-square distribution is:
$\chi \sim \chi^2_{df}$
where df = degrees of freedom which depends on how chi-square is being used. (If you want to practice calculating chi-square probabilities then use df = n - 1. The degrees of freedom for the three major uses are each calculated differently.)
For the χ2 distribution, the population mean is μ = df and the population standard deviation is $\sigma = \sqrt{2(df)}$.
The random variable is shown as χ2, but may be any upper case letter.
The random variable for a chi-square distribution with k degrees of freedom is the sum of k independent, squared standard normal variables.
$\chi^2 = (Z_1)^2 + (Z_2)^2 + \dots + (Z_k)^2$
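These facts can be checked numerically; the short simulation below is an illustration added to this excerpt (not part of the OpenStax text) and assumes NumPy is available.

```python
# Illustrative sketch: build chi-square values as sums of squared standard normals
# and check that the sample mean is close to df and the sample standard deviation
# is close to sqrt(2*df).
import numpy as np

rng = np.random.default_rng(0)
df = 10                                   # an arbitrary choice of degrees of freedom
z = rng.standard_normal((100_000, df))    # 100,000 draws of df independent N(0, 1) variables
chi_sq = (z ** 2).sum(axis=1)             # each row is (Z1)^2 + (Z2)^2 + ... + (Zdf)^2

print(chi_sq.mean())   # close to mu = df = 10
print(chi_sq.std())    # close to sigma = sqrt(2*df), about 4.47
```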
1. The curve is nonsymmetrical and skewed to the right.
2. There is a different chi-square curve for each df.
Figure 11.2
3. The test statistic for any test is always greater than or equal to zero.
4. When df > 90, the chi-square curve approximates the normal distribution. For X ~ $\chi^2_{1,000}$ the mean, μ = df = 1,000 and the standard deviation, σ = $\sqrt{2(1,000)}$ = 44.7. Therefore, X ~ N(1,000, 44.7), approximately.
5. The mean, μ, is located just to the right of the peak.
Figure 11.3
As an Amazon Associate we earn from qualifying purchases. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9350587129592896, "perplexity": 1319.9530216711064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00174.warc.gz"} |
http://mathoverflow.net/questions/38448/a-priori-energy-estimates-for-burgers-equation-with-dissipation?sort=newest | # A priori energy estimates for Burger's equation with dissipation
I've proven existence using the Galerkin method for Burgers' equation with dissipation: $u_t + uu_x - u_{xx} = 0$ on $[0,L] \times [0,T]$ and now am trying to prove regularity.
Clarification: I have proven existence for $u \in L^2([0,T];H_0^1(\Omega))$, $u_t \in L^2([0,T];H^{-1}(\Omega))$.
I start by trying to show that $u_t \in L^2([0,T];L^2[0,L])$ but am having some trouble with this. I multiply by $u_t$ and I obtain after an integration,
$\int_0^T \int_0^L u_t^2dxdt + \int_0^L u_x(t,x)^2dx = \int_0^L u_x(0,x)^2dx + \int_0^T \int_0^L uu_xu_t dxdt$.
I can't see what I could possibly do with the second term on the right-hand side of the equation. Perhaps this is not the correct way to proceed concerning $L^2$ regularity for Burgers' equation with dissipation? Or perhaps I'm missing something obvious?
-
Usually it is much easier to establish the space regularity first. This case is no exception. – fedja Sep 12 '10 at 5:30
Well I've already established control in $L^2([0,T];H_0^1(\Omega))$ but this is just the standard energy estimate needed for existence. Usually one then shows that $u_t \in L^2([0,T];L^2(\Omega))$ and then uses that to reduce the problem to the elliptic problem: $uu_x - u_{xx} = -u_t$. I'm not sure what other method you had in mind. – Dorian Sep 12 '10 at 13:24
Show spatial $C^\infty$. To this end just compute the derivative of the integral of the square of $\Delta_x^s u$ with large $s$, use appropriate Gagliardo-Nirenberg to estimate the trilinear term and conclude that the high $H^s$ norm decays (and very fast so) if it is large. This trick works perfectly well in the periodic setting (the key is that $\int ff_x=0$ for all $f$). You may need some adjustments for the interval case but to say more, I need to know how exactly you pose your question (the solution on the interval is far from unique unless you impose some boundary conditions). – fedja Sep 12 '10 at 13:48
I forgot that from the $L^2([0,T];H_0^1(\Omega))$ bounds (btw I'm assuming Dirichlet boundary conditions $u(t,0) = u(t,L)=0$) we get immediately uniform $1/2$-Hölder continuity in time. That's enough to deal with the $\int u u_x u_t$ term when estimating $\int (u_t)^2$ since I can put an $L^{\infty}$ bound on $u$ and use Cauchy's inequality to take out the other two terms. – Dorian Sep 12 '10 at 17:37
Since $u_x(0,\cdot)$ is $L^2$, $u(0,\cdot)$ is continuous over $[0,L]$. The Burgers equation satisfies the maximum principle, thus $\|u(t)\|_{L^\infty}\le\|u_0\|_{L^\infty}$. Then the last integral $I$ can be bounded by a use of the Young inequality: $$I\le\frac12\int_0^T\int_0^Lu_t^2dx dt+\frac{\|u_0\|_{L^\infty}^2}{2}\int_0^T\int_0^Lu_x^2dx dt.$$ The first term above is absorbed by your left-hand side. The second one is bounded because of $u\in L^2(0,T;H^1_0)$. Whence the required estimate.
I am not fond of the Galerkin method for proving the existence to the Cauchy problem for the Burgers equation and related ones. It does not give uniqueness, and it is hard to get the maximum principle that way. I prefer Picard fixed point iteration, applied to the mild formulation $$u(t)=K^t*u_0-\int_0^tK^{t-s}*(uu_x(s))ds,$$ where $K$ is the heat kernel. You may find estimates in Chapter 6 of my book Hyperbolic conservation laws I, Cambridge University Press (1999). This method has the advantage to provide regularization for $t>0$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791015982627869, "perplexity": 156.34853741890612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011335666/warc/CC-MAIN-20140305092215-00058-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://devmaster.net/posts/6444/2-1 | 0
101 Nov 04, 2004 at 18:50
here’s an interesting proof. see if you can see where it breaks down:
a = b given
aa = ab multiply both sides by a
aa - bb = ab - bb subtract (b * b) from both sides
(a + b)(a - b) = b(a - b) factor both sides
a + b = b cancel out common factors
b + b = b substitute b for a (from line 1)
2b = b combine the left side
2 = 1 divide both sides by b
the above was taken from Paul Bourke’s site
#### 10 Replies
0
101 Nov 04, 2004 at 18:59
(a-b) = 0, therefore you cannot divide both sides by it.
0
101 Nov 10, 2004 at 04:32
wouldn't bb be b^2?
0
101 Nov 10, 2004 at 04:48
bb is shorter to write
0
102 Nov 10, 2004 at 05:12
So where exactly was the catch in this question?
0
101 Nov 10, 2004 at 16:00
He divided by zero, so answers go out the roof at that point. I think there’s some more odd algebra in it, but that’s the first I found.
0
102 Nov 10, 2004 at 16:11
Yes, i know what the error in the transformation is but why is he giving it here at DevMaster? I have it in my old 6th grade mathbooks…
0
101 Nov 10, 2004 at 17:00
Bah, there are a lot more proofs of 2=1 that are actually a *lot* harder to disprove =)
I’ll see if I can remember/dig-up a few just for kicks ;)
0
101 Nov 11, 2004 at 07:36
It is just for the fun here. Just to keep the gray matter working but if u have better examples that are harder to disprove please go ahead and place them here.
Or other strange things that are not correct but appear to be correct i just love them :)
0
101 Nov 19, 2004 at 13:05
Another example using complex numbers:
1 = sqrt(1) = sqrt(1^2) = sqrt((-1)^2) = sqrt(-1) * sqrt(-1) = i * i = i^2 = -1
So, always be careful with complex numbers and roots in general, or something like that could happen ^^.
Ah, and please forgive me the misuse of the equals sign ^^.
0
101 Jan 21, 2005 at 20:52
Something similiar to the original post, but also flawed by the divide by zero, is this:
2(1-1) = (2-2)
2 = (2-2)/(1-1)
2 = 0
:) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.600961446762085, "perplexity": 1388.634814980362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164047228/warc/CC-MAIN-20131204133407-00027-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/40129/amsmath-multiple-labels-in-one-equation | # amsmath - multiple labels in one equation
I would like to use multiple labels in the same equation when using amsmath. Basic latex already has this feature, but not amsmath, which seems to manipulate labels in complicated ways. Is it possible to alter the amsmath.sty code to restore this feature?
Or, alternatively, how to temporarily unload the amsmath package? This issue arises when compiling together different articles with \include, some using amsmath and not others. But I found that unloading packages is not possible in latex.
• Welcome to TeX.SE! You can use backticks (`) to "highlight" words and other items that are TeX instructions or filenames. I've done so in your posting.
– Mico
Jan 5 '12 at 12:15
• What exactly do you mean by "labels"? Do you want to assign multiple \label{...} commands to one equation, or do you want more than one "tag" (equation number, etc) to show up next to the object of interest? The amsmath package provides the command \tag{...}, which lets users create fairly elaborate tags.
– Mico
Jan 5 '12 at 12:24
• I want to assign multiple \label{...} commands to one equation. I use one label to number the equation, and other labels to number the parameters appearing in it (I use stepcounter, etc..). Labels are used to refer to those parameters (I use ~100 parameters). I would prefer not to alter my document, because I already wrote it without amsmath, but now I need to compile it with another document that uses amsmath... Jan 5 '12 at 14:13
• Have a look at the question tex.stackexchange.com/questions/9939/… and the answer that Martin Scharrer provided.
– Mico
Jan 5 '12 at 15:09
• I suggest you post a MWE (minimum working example) of your code, containing maybe two equations and the various \label commands you're trying to make compatible with amsmath. Without such a MWE, it's very difficult (impossible?) to figure out what's going one and what may have to be done. Thanks.
– Mico
Jan 5 '12 at 20:12
According to p. 86 of the cleveref user guide,
With amsmath, the original \label command is stored in \ltx@label, and \label@in@display replaces \label inside [single-line] equations. \label@in@display just saves the label for later, and defining it is left until the end of the equation, when \ltx@label is finally called.
Hence, you may want to include the following code in the preamble, after loading the amsmath package:
\makeatletter
\let\ltxxlabel\ltx@label
\makeatother
so that you have a command that doesn't contain the "secret letter" @. Alternatively, you could execute the command \let\ltxxlabel\label before loading the amsmath package. Then, replace all \label commands in your document -- except, of course, those that are actually associated with equation numbers -- with \ltxxlabel.
I cannot try out this suggested solution myself since I don't have a clear idea as to how you use the \label command in your document for purposes other than creating associations with equation numbers. Nevertheless, I would encourage you to try out this method.
Addendum: The cleveref user guide has the following to say about the treatment of the \label macro in the multiline equation environments (such as gather, align, and multline) of the amsmath environment:
The amsmath multi-line equation environments scan their bodies twice: Once to measure, once to typeset. In the measure phase, the \label command is disabled by letting it to \@gobble. ... Unfortunately, amsmath wasn't designed with redefinitions of \label in mind ... The multline environment works a bit differently to the other amsmath environments, in that \label is disabled during the typesetting phase, and enabled during the measuring phase.
Given these observations, it would seem that only the adventurous and daring may want to delve into redefining the ways that amsmath works with the \label command in its multiline equation environments. I must admit to not being sufficiently daring, at least not in this category...
\documentclass{scrartcl}
\usepackage{amsmath}
\begin{document}
\begin{align}\label{foobar}
y &= x \tag{foo}\\
y &= x \tag*{[bar]}\\
y &= x \tag{baz}\\
y &= x \label{foobarbaz}
\end{align}
See Equation~\ref{foobar} and \ref{foobarbaz}.
\end{document}
• thank you, but it does not exactly reply to what I want: I would prefer not to alter my document, because I already wrote it without amsmath, but now I need to compile it with another document that uses amsmath...I use \label in combination with \stepcounter,\incr... to number the parameters inside the equation. Jan 5 '12 at 14:24
The reason for this is that amsmath's align environment actually collects (gathers) the entire contents before typesetting it, for horizontal-alignment purposes. At least one way around this would be to reverse your thinking, and therefore reference the parameters in the equation and label them within your text. This way there is no conflict in labelling an entry twice within align.
Here is a minimal example that illustrates this concept:
\documentclass{article}
\usepackage{amsmath}% http://ctan.org/pkg/amsmath
\newcounter{parms} \renewcommand{\theparms}{[\arabic{parms}]}
\newcommand{\newparm}[1]{%
\refstepcounter{parms}\arabic{parms}\label{#1}%
}
\begin{document}
\begin{align}
y &= ax^2+bx+c \label{eq1}\\
z &= i_{\ref{eq-i}}+j_{\ref{eq-j}}+k_{\ref{eq-k}} \label{eq2}
\end{align}
See~\eqref{eq1} and~\eqref{eq2}.
Specifically,~\eqref{eq2} has parameters~\newparm{eq-i},~\newparm{eq-j} and~\newparm{eq-k}.
\end{document}
The command \newparm{<label>} defines the labelling of a parameter (using \refstepcounter). It also prints the parameters number and then labels it as <label>. The display of the parameter is set by \theparms where parms is the parameter counter. All this can be modified. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688543677330017, "perplexity": 2083.0201071937563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358903.73/warc/CC-MAIN-20211130015517-20211130045517-00079.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-6-2-1-6-using-order-of-operations | Algebra
Topics
# How do you simplify 6/2-1*6² using order of operations?
Dec 22, 2015
#### Answer:
$\frac{6}{2} - 1 \cdot {6}^{2} = - 33$
#### Explanation:
The order of operations is:
Parentheses
Exponents
Multiplication
Division
Addition
Subtraction
Now, multiplication and division actually have the same priority, as do addition and subtraction, but it won't hurt to follow the given order, either.
Let's see how it applies to the expression $\frac{6}{2} - 1 \cdot {6}^{2}$
Parentheses:
There are no parentheses in the given expression, so we can skip this.
Exponents:
We have one exponent to evaluate: $\frac{6}{2} - 1 \cdot {6}^{\textcolor{red}{2}}$
As ${6}^{2} = 6 \cdot 6 = 36$ we have
$\frac{6}{2} - 1 \cdot {6}^{2} = \frac{6}{2} - 1 \cdot 36$
Multiplication:
We have one instance of multiplication to evaluate: $\frac{6}{2} - 1 \textcolor{red}{\cdot} 36$
As $1 \cdot 36 = 36$ we have
$\frac{6}{2} - 1 \cdot 36 = \frac{6}{2} - 36$
Division:
We have one instance of division to evaluate: $6 \textcolor{red}{\div} 2 - 36$
As $\frac{6}{2} = 3$ we have
$\frac{6}{2} - 36 = 3 - 36$
Addition:
There is no addition in the given expression, so we can skip this.
Subtraction:
We have one instance of subtraction to evaluate: $3 \textcolor{red}{-} 36$
As $3 - 36 = - 33$ we have our final result.
$\frac{6}{2} - 1 \cdot {6}^{2} = - 33$
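As a quick sanity check (an aside added to this excerpt, not part of the original answer), the same expression can be typed into Python, where `**` is exponentiation and the usual order of operations applies:

```python
# 6/2 - 1*6^2 written with Python operators
result = 6 / 2 - 1 * 6 ** 2
print(result)   # -33.0
```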
Creative Commons License | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510599970817566, "perplexity": 1673.6404900320683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575513.97/warc/CC-MAIN-20190922114839-20190922140839-00449.warc.gz"} |
https://gateoverflow.in/97608/probability | 1) For three events A, B and C, we know that
• A and C are independent
• B and C are independent
• A and B are disjoint
• P(A∪C) = 2/3, P(B∪C) = 3/4, P(A∪B∪C) = 11/12
P(A)=___________ ans 1/3
2) Consider independent trials consisting of rolling a pair of fair dice, over and over. What is the probability that a sum of 5 appears before a sum of 7? ans 2/5
Answer to question no 1 :
Given A and B are disjoint , so P( A ∩ B ) = 0
Given B and C are independent = P(B ∩ C) = P(B) . P(C)
A and C are independent = P(A ∩ C) = P(A) . P(C)
As A and B are disjoint, A ∩ B ∩ C ⊆ A ∩ B = ∅, so P(A ∩ B ∩ C) = 0 as well, which is the trick of the question.
We know ,
P(A U B U C) = P(A) + P(B) + P(C) - P( A ∩ B ) - P(B ∩ C) - P(A ∩ C) + P(A ∩ B ∩ C)
==> 11 / 12 = x + y + z - yz - xz [ writing x = P(A), y = P(B), z = P(C) ] ..............(1)
P(B U C) = 3 /4
==> P(B) + P(C) - P(B ∩ C) = 3 /4
==> y + z - yz = 3 / 4 .....(2)
Substituting in (1) , we have :
==> 11 / 12 = x + ( 3 / 4 ) - xz .............(3)
Also P(A U C) = 2 / 3
==> P(A) + P(C) - P(A).P(C) = 2 / 3
==> x + z - xz = 2 / 3
==> x - xz = 2/3 - z
So substituting this in (3) , we have :
11 / 12 = (2 / 3 - z) + (3 / 4)
==> z = (2 / 3 + 3 / 4 - 11 / 12)
==> z = 6 / 12
Now x - xz = 2/3 - z
==> x - (1/2)x = 2/3 - 1/2
==> (1/2)x = 1/6
==> x = 1 / 3
Therefore P(A) = 1 / 3
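The algebra above can also be checked symbolically. The sketch below is an illustration added to this excerpt (it assumes SymPy is available) and solves the three equations in x = P(A), y = P(B), z = P(C):

```python
# Verify: x + z - xz = 2/3, y + z - yz = 3/4, x + y + z - yz - xz = 11/12
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
equations = [
    sp.Eq(x + z - x * z, sp.Rational(2, 3)),                 # P(A U C) = 2/3
    sp.Eq(y + z - y * z, sp.Rational(3, 4)),                 # P(B U C) = 3/4
    sp.Eq(x + y + z - y * z - x * z, sp.Rational(11, 12)),   # P(A U B U C) = 11/12
]
print(sp.solve(equations, [x, y, z], dict=True))
# expected solution: x = 1/3, y = 1/2, z = 1/2, so P(A) = 1/3
```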
You are interested in the event that the game ends with the first sum of 5, i.e. that it happens before the first sum of 7. Note that the first event has 4 elementary outcomes (those giving a sum of 5) while the second has 6 (those giving a sum of 7).
So we can require that in the first n-1 trials only the remaining 26 (= 36 - 4 - 6) outcomes occur, and that in the nth trial one of the 4 outcomes giving a sum of 5 occurs. That way the sum of 5 is guaranteed to come before the sum of 7. In the very first trial we might already get a sum of 5, but we need to consider all cases, and since n can be any natural number from 1 to infinity, an infinite G.P. forms:
P(5 comes before 7 as sum on 2 dice) = (4 / 36) + (26 / 36) * (4 / 36) + (26 / 36)^2 * (4 / 36) .............. to infinity
= (4 / 36) [ 1 + (26 / 36) + (26 / 36)^2 + .................. ]
= ( 1 / 9 ) * [ 1 / ( 1 - (26 / 36) ) ]
= 1 / 9 * (36 / 10)
= 4 / 10
= 2 / 5
Hence 2 / 5 is required probability here..
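A quick Monte Carlo check of this result (an illustration added to this excerpt, not part of the original answer):

```python
# Estimate P(a sum of 5 appears before a sum of 7) by simulation.
import random

def five_before_seven(rng):
    while True:
        total = rng.randint(1, 6) + rng.randint(1, 6)
        if total == 5:
            return True
        if total == 7:
            return False

rng = random.Random(1)
trials = 200_000
estimate = sum(five_before_seven(rng) for _ in range(trials)) / trials
print(estimate)   # close to 2/5 = 0.4
```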
selected | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8497095108032227, "perplexity": 5939.488548870405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823731.36/warc/CC-MAIN-20171020044747-20171020064747-00065.warc.gz"} |
https://learnzillion.com/lesson_plans/1560-day-5-excerpt-from-thomas-paine-s-common-sense | # Day 5: Excerpt from Thomas Paine's "Common Sense"
Card of | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488048315048218, "perplexity": 21988.960585425135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720026.81/warc/CC-MAIN-20161020183840-00499-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://www.mathworks.com/help/matlab/ref/ellipke.html?requestedDomain=www.mathworks.com&nocookie=true | # Documentation
# ellipke
Complete elliptic integrals of first and second kind
## Syntax
• ``K = ellipke(M)``
• ``````[K,E] = ellipke(M)``````
• ``````[K,E] = ellipke(M,tol)``````
## Description
`K = ellipke(M)` returns the complete elliptic integral of the first kind for each element in `M`.
`[K,E] = ellipke(M)` returns the complete elliptic integral of the first and second kind.
`[K,E] = ellipke(M,tol)` computes the complete elliptic integral to accuracy `tol`. The default value of `tol` is `eps`. Increase `tol` for a less accurate but more quickly computed answer.
## Examples
### Find Complete Elliptic Integrals of First and Second Kind
Find the complete elliptic integrals of the first and second kind for `M = 0.5`.
```M = 0.5; [K,E] = ellipke(M) ```
```K = 1.8541 E = 1.3506 ```
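As an aside added to this excerpt (not part of the MathWorks page), the same values can be cross-checked in Python with `scipy.special`, which uses the same parameter convention `m`:

```python
# Cross-check of the example above using SciPy (parameter m, not the modulus k).
from scipy.special import ellipk, ellipe

m = 0.5
print(ellipk(m))   # ~1.8541, complete elliptic integral of the first kind
print(ellipe(m))   # ~1.3506, complete elliptic integral of the second kind
```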
### Plot Complete Elliptic Integrals of First and Second Kind
Plot the complete elliptic integrals of the first and second kind for the allowed range of `M`.
```M = 0:0.01:1; [K,E] = ellipke(M); plot(M,K,M,E) grid on xlabel('M') title('Complete Elliptic Integrals of First and Second Kind') legend('First kind','Second kind') ```
### Faster Calculations of the Complete Elliptic Integrals by Changing the Tolerance
The default value of `tol` is `eps`. Find the runtime with the default value for arbitrary `M` using `tic` and `toc`. Increase `tol` by a factor of thousand and find the runtime. Compare the runtimes.
```tic ellipke(0.904561) toc tic ellipke(0.904561,eps*1000) toc ```
```ans = 2.6001 Elapsed time is 0.004952 seconds. ans = 2.6001 Elapsed time is 0.001266 seconds. ```
`ellipke` runs significantly faster when tolerance is significantly increased.
## Input Arguments
### `M` — Input array
scalar | vector | matrix | multidimensional array
Input array, specified as a scalar, vector, matrix, or multidimensional array. `M` is limited to values 0≤m≤1.
Data Types: `single` | `double`
### `tol` — Accuracy of result
`eps` (default) | nonnegative real number
Accuracy of result, specified as a nonnegative real number. The default value is `eps`.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
## Output Arguments
### `K` — Complete elliptic integral of first kind
scalar | vector | matrix | multidimensional array
Complete elliptic integral of the first kind, returned as a scalar, vector, matrix, or multidimensional array.
### `E` — Complete elliptic integral of second kind
scalar | vector | matrix | multidimensional array
Complete elliptic integral of the second kind, returned as a scalar, vector, matrix, or multidimensional array.
### Complete Elliptic Integrals of the First and Second Kind
The complete elliptic integral of the first kind is
`$K(m)=\int_0^1 \left[(1-t^2)(1-mt^2)\right]^{-1/2}\,dt.$`
where m is the first argument of `ellipke`.
The complete elliptic integral of the second kind is
`$E(m)=\int_0^1 (1-t^2)^{-1/2}(1-mt^2)^{1/2}\,dt.$`
Some definitions of the elliptic functions use the elliptical modulus k or modular angle α instead of the parameter m. They are related by
`$k^2 = m = \sin^2\alpha.$`
## References
[1] Abramowitz, M., and I. A. Stegun. Handbook of Mathematical Functions. Dover Publications, 1965. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.866932213306427, "perplexity": 1991.073200574715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982958896.58/warc/CC-MAIN-20160823200918-00243-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?s=9ff9cef1016c1863e2f5b5547cf4b380&p=4547387 | # Geometry Question About A Sphere
by bodykey
Tags: geometry, sphere
P: 31 If I had a sphere with a radius of 100 meters, a diameter of 200 meters, a volume of 4,188,790.20 square meters, and I wanted to place within this sphere a single dot (one dimensional so it doesn't take up any extra space and there is no displacement --if you're thinking in terms of water--), and I need to have one dot every 50 meters, what is the formula I would use to determine that? I thought it was just divide the volume by the number 50, but that comes out with a large number like 83,775.80, which seems insanely huge for something with just a diameter of 100 meters. What am I doing wrong here? This isn't a homework question, just something I'm trying to throw together for an experiment I'm doing in my personal time.
Sci Advisor P: 5,942 I presume you are trying to fill the volume of the sphere. You need to divide by 125000 (50³) not 50. I got 33. Note volume is cubic meters, not square meters.
P: 31 Ah ok....that makes SO much more sense! lol
Mentor
P: 21,059
## Geometry Question About A Sphere
Quote by mathman I presume you are trying to fill the volume of the sphere. You need to divide by 125000 (503) not 50. I got 33. Note volume is cubic meters, not square meters.
Since the "dots" are to be 50 m. apart, each dot could be thought of as the center of a sphere 25 m. in radius, so you need to divide the volume of the large sphere by (4/3)##\pi (25)^3##.
This wouldn't give you the exact number of points inside the sphere, as it doesn't take into account how the small spheres are arranged inside the larger one. One of the areas of mathematics deals with sphere packing inside of geometric objects. Mathematicians who work in this area consider such simple examples as how oranges are stacked in a pyramidal pile on up to how spheres can be packed in much higher dimensions, which has application in the area of digital communications.
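For reference, the two counting estimates discussed in this thread can be reproduced with a few lines of Python (an illustration added to this excerpt, not part of the original posts):

```python
# Volume of the 100 m sphere and the two ways of counting 50 m spacings inside it.
import math

volume = 4 / 3 * math.pi * 100 ** 3              # about 4,188,790 cubic metres
print(volume / 50 ** 3)                          # ~33.5: one dot per 50 m cube (mathman's estimate)
print(volume / (4 / 3 * math.pi * 25 ** 3))      # 64.0: one dot per sphere of radius 25 m (Mark44's estimate)
```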
Related Discussions Calculus & Beyond Homework 1 Advanced Physics Homework 0 Differential Geometry 16 General Math 3 Introductory Physics Homework 4 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012392520904541, "perplexity": 412.4201195028394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00472-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.openfoam.com/documentation/guides/latest/api/zeroField_8H_source.html | The open source CFD toolbox
zeroField.H
Go to the documentation of this file.
1/*---------------------------------------------------------------------------*\
2 ========= |
3 \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
4 \\ / O peration |
5 \\ / A nd | www.openfoam.com
6 \\/ M anipulation |
7-------------------------------------------------------------------------------
8 Copyright (C) 2011-2017 OpenFOAM Foundation
9 Copyright (C) 2019 OpenCFD Ltd.
10-------------------------------------------------------------------------------
12 This file is part of OpenFOAM.
13
14 OpenFOAM is free software: you can redistribute it and/or modify it
15 under the terms of the GNU General Public License as published by
16 the Free Software Foundation, either version 3 of the License, or
17 (at your option) any later version.
18
19 OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
20 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
21 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
22 for more details.
23
24 You should have received a copy of the GNU General Public License
25 along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.
26
27Class
28 Foam::zeroField
29
30Description
31 A class representing the concept of a field of 0 used to avoid unnecessary
32 manipulations for objects which are known to be zero at compile-time.
33
34 Used for example as the argument to a function in which certain terms are
35 optional, see source terms in the MULES solvers.
36
37\*---------------------------------------------------------------------------*/
38
39#ifndef zeroField_H
40#define zeroField_H
41
42#include "zero.H"
43#include "scalar.H"
44
45// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
46
47namespace Foam
48{
49
50/*---------------------------------------------------------------------------*\
51 Class zeroField Declaration
52\*---------------------------------------------------------------------------*/
54class zeroField
55:
56 public zero
57{
58public:
59
60 // Constructors
61
62 //- Default construct
63 zeroField() noexcept = default;
64
65
66 // Member Functions
67
68 zeroField field() const noexcept
69 {
70 return zeroField{};
71 }
72
73
74 // Member Operators
76 scalar operator[](const label) const noexcept
77 {
78 return scalar(0);
79 }
80
81 zeroField operator()() const noexcept
82 {
83 return zeroField{};
84 }
85
86 zeroField operator-() const noexcept
87 {
88 return zeroField{};
89 }
90};
91
92
93// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
94
95} // End namespace Foam
96
97// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
98
99// Global Operators
100
101#include "zeroFieldI.H"
102
103// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
104
105#endif
106
107// ************************************************************************* //
A class representing the concept of a field of 0 used to avoid unnecessary manipulations for objects ...
Definition: zeroField.H:56
scalar operator[](const label) const noexcept
Definition: zeroField.H:75
zeroField field() const noexcept
Definition: zeroField.H:67
zeroField operator()() const noexcept
Definition: zeroField.H:80
zeroField operator-() const noexcept
Definition: zeroField.H:85
zeroField() noexcept=default
Default construct.
A class representing the concept of 0 (zero) that can be used to avoid manipulating objects known to ...
Definition: zero.H:63
Namespace for OpenFOAM.
const direction noexcept
Definition: Scalar.H:223 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037370681762695, "perplexity": 998.7925903123869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710902.80/warc/CC-MAIN-20221202114800-20221202144800-00204.warc.gz"} |
http://de.vroniplag.wikia.com/wiki/Nm/Fragment_097_09 | # Nm/Fragment 097 09
Type: Verschleierung. Editor: Hindemith. Status: Gesichtet (reviewed)
Examined thesis: page 97, lines 9-19
Source: Brandes_Erlebach_2005, page(s): 7, 8, lines: p. 7: 30 ff.; p. 8: 1 ff.
Graphs can be undirected or directed. The adjacency matrix of an undirected graph (as shown in Figure 3.2) is symmetric. An undirected edge joining vertices $u, v \in V$ is denoted by $\{u, v\}$.
In directed graphs, each directed edge (arc) has an origin (tail) and a destination (head). An edge with origin $u \in V$ is represented by an order pair $(u, v)$. As a shorthand notation, an edge $\{u, v\}$ can also be denoted by $uv$. It is to note that, in a directed graph, $uv$ is short for $(u, v)$, while in an undirected graph, $uv$ and $vu$ are the same and both stands for $\{u, v\}$. Graphs that can have directed as well undirected edges are called mixed graphs, but such graphs are encountered rarely.
Graphs can be undirected or directed. In undirected graphs, the order of the endvertices of an edge is immaterial. An undirected edge joining vertices $u, v \in V$ is denoted by $\{u, v\}$. In directed graphs, each directed edge (arc) has an origin (tail) and a destination (head). An edge with origin $u \in V$ and destination $v \in V$ is represented by an ordered pair $(u, v)$. As a shorthand notation, an edge $\{u, v\}$ or $(u, v)$ can also be denoted by $uv$. In a directed graph, $uv$ is short for $(u, v)$, while in an undirected graph, $uv$ and $vu$ are the same and both stand for $\{u, v\}$. [...]. Graphs that can have directed edges as well as undirected edges are called mixed graphs, but such graphs are encountered rarely [...]
Anmerkungen The source is not mentioned anywhere in the thesis. The definitions given here are certainly standard and don't need to be quoted. However, Nm uses for several passages the same wording as the source. Note also that "An edge with origin $u \in V$ is represented by an order pair $(u, v)$" is a curious abbreviation of the statement "An edge with origin $u \in V$ and destination $v \in V$ is represented by an ordered pair $(u, v)$" in the source. Sichter (Hindemith). WiseWoman | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035648465156555, "perplexity": 542.713736030325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00451-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/2412033/how-do-i-know-that-these-definitions-of-inner-product-of-k-vectors-and-hodge-s | # How do I know that these definitions of inner product of $k$-vectors and Hodge star are well-defined?
For my background, I read through Spivak's Calculus on Manifolds up to Integration on Chains.
Let's say $V$ is a finite-dimensional real inner product space with $\dim V=n$.
Take a look at these definitions of inner product of $k$-vectors and Hodge star repsectively:
https://en.wikipedia.org/wiki/Exterior_algebra#Inner_product
https://en.wikipedia.org/wiki/Hodge_isomorphism#Formal_definition_of_the_Hodge_star_of_k-vectors
I feel uneasy about these two definitions of the functions. There are some common features in these definitions: they do not use explicit formula to define the functions*, and they just assume that there are really unique functions that satisfy the respective defining property.
The inner product is slightly better than Hodge star, that we can extend by linearity to calculate, but is the inner product well-defined, that the calculation result is independent of the decomposition of k-vectors? Hodge star is worse, that it is rather defining how $\star\beta$ should interact with other $k$-vectors upon wedge product.
So my questions are:
Do we have to show that the two definitions are well-defined, that there really exist unique functions satisfying the respective defining property? Or are they actually intrinsically well-defined but I don't see it? If we have to show the well-definedness, how should we proceed?
If we have to show the well-definedness, I propose that we should first pick particular bases for $\wedge^k(V)$ and $\wedge^{n-k}(V)$, say $\{e_{i_1}\wedge...\wedge e_{i_k}:1\le i_1\lt...\lt i_k\le n\}$ and $\{e_{i_1}\wedge...\wedge e_{i_{n-k}}:1\le i_1\lt...\lt i_{n-k}\le n\}$ where $\{e_1,...,e_n\}$ is an orthonormal basis for $V$, then assume such a bilinear function/linear map satisfying the respective defining property exists and evaluate its values at the basis. Since the values at the basis are determined, the two functions, if they exist, are determined uniquely. To show existence, we simply take the bilinear function/linear map defined by the values at the basis and show that they satisfy the respective defining property.
Is there anything wrong with the above argument? Is it circular?
*I clarify what is meant by an explicit formula in the context of linear algebra with examples. For instance, for the matrix product, we have an explicit formula $\sum_j a_{ij}b_{jk}$ to compute the entries; for the tensor product, we can find how $\phi_1\otimes\phi_2$ acts on $(v_1,v_2)$ by the formula $\phi_1\otimes\phi_2(v_1,v_2)=\phi_1(v_1)\phi_2(v_2)$; for alternation, albeit a long formula, we can still calculate with an explicit formula $Alt(T)(v_1,...,v_k)=\frac{1}{k!}\sum_{\sigma\in S_k}\operatorname{sgn}(\sigma)\,T(v_{\sigma(1)},...,v_{\sigma(k)})$. Since tensor product and alternation output functions, I think writing down how the output function acts on $(v_1,...,v_k)$ is already explicit enough.
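For concreteness, here is what the proposed orthonormal-basis evaluation gives (an illustration added to this excerpt, not part of the original post). With $\{e_1,...,e_n\}$ orthonormal, $\omega = e_1\wedge\cdots\wedge e_n$, and increasing multi-indices, the two definitions take the values
$$\langle e_{i_1}\wedge\cdots\wedge e_{i_k},\, e_{j_1}\wedge\cdots\wedge e_{j_k}\rangle = \det\big(\langle e_{i_a}, e_{j_b}\rangle\big)_{a,b} = \begin{cases}1 & \text{if } \{i_1,\dots,i_k\}=\{j_1,\dots,j_k\},\\ 0 & \text{otherwise,}\end{cases}$$
$$\star\big(e_{i_1}\wedge\cdots\wedge e_{i_k}\big) = \pm\, e_{j_1}\wedge\cdots\wedge e_{j_{n-k}}, \qquad \{j_1,\dots,j_{n-k}\}=\{1,\dots,n\}\setminus\{i_1,\dots,i_k\},$$
with the sign fixed by requiring $e_{i_1}\wedge\cdots\wedge e_{i_k}\wedge \star\big(e_{i_1}\wedge\cdots\wedge e_{i_k}\big) = \omega$.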
In case of the scalar product, your problem abstracts to the following: Given a vector space $W$ (In your case $\Lambda^kV$) and a spanning set $S$ of $W$ (in your case the set of all pure tensors) and a function $$\langle \cdot \vert \cdot \rangle_0 : S \times S \rightarrow \mathbb{R},$$ under which conditions is there a bilinear extension $\langle \cdot \vert \cdot \rangle$ of $\langle \cdot \vert \cdot \rangle_0$ to all of $W$? Note that if $S$ is a basis, there is no condition at all. What do you need if this is not the case?
First recall that in order to define the Hodge star you have to choose a nonzero element $\omega \in \Lambda^nV$ (this will later be $*1$). Since $\Lambda^nV$ has dimension $1$ this provides you with an isomorphism to $\mathbb{R}$, to make it explicit: $$F: \Lambda^nV \rightarrow \mathbb{R}, \quad \lambda \omega \mapsto \lambda$$ Now you can define linear maps $$S_j: \Lambda^jV \rightarrow (\Lambda^{n-j}V)^\ast, \quad v \mapsto (S_jv: w \mapsto F(w\wedge v))$$ These maps allow us to rewrite the defining equation $\alpha \wedge \ast \beta = \langle \alpha \vert \beta \rangle \omega$ as follows: $$S_{n-k}(*\beta) = \langle \cdot \vert \beta \rangle$$ I.e. defining $\ast \beta$ amounts to the following problem. Given $\beta \in \Lambda^k V$, does there exist a unique element $\gamma\in\Lambda^{n-k}V$ such that $$S_{n-k}(\gamma) = \langle \cdot \vert \beta \rangle.$$ This question can be answered with yes if and only if the map $S_{n-k}$ are isomorphisms. Prove this! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9807052612304688, "perplexity": 150.99300170652316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00136.warc.gz"} |
https://proofwiki.org/wiki/Category:Multiplicative_Groups_of_Reduced_Residues | # Category:Multiplicative Groups of Reduced Residues
This category contains results about Multiplicative Groups of Reduced Residues.
Let $m \in \Z_{> 0}$ be a (strictly) positive integer.
Let $\Z'_m$ denote the reduced residue system modulo $m$.
Consider the algebraic structure:
$\struct {\Z'_m, \times_m}$
where $\times_m$ denotes multiplication modulo $m$.
Then $\struct {\Z'_m, \times_m}$ is referred to as the multiplicative group of reduced residues modulo $m$.
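As a concrete illustration (added to this excerpt, not part of the ProofWiki page), the reduced residue system and closure under multiplication modulo $m$ can be listed directly, here for $m = 12$:

```python
# Reduced residues mod m, and closure of multiplication mod m on that set.
from math import gcd

m = 12
residues = [a for a in range(1, m) if gcd(a, m) == 1]
print(residues)   # [1, 5, 7, 11]

products = {(a, b): (a * b) % m for a in residues for b in residues}
print(all(p in residues for p in products.values()))   # True: the set is closed under x_m
```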
## Subcategories
This category has the following 2 subcategories, out of 2 total.
## Pages in category "Multiplicative Groups of Reduced Residues"
The following 5 pages are in this category, out of 5 total. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9978974461555481, "perplexity": 822.3715059466725}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518882.71/warc/CC-MAIN-20191209121316-20191209145316-00557.warc.gz"} |
https://sherpa.readthedocs.io/en/latest/quick.html | # A quick guide to modeling and fitting in Sherpa¶
Here are some examples of using Sherpa to model and fit data. It is based on some of the examples used in the astropy.modeling documentation.
## Getting started¶
The following modules are assumed to have been imported:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
The basic process, which will be followed below, is:

1. create a data object;
2. define the model;
3. select the statistic;
4. select the optimisation routine;
5. fit the data;
6. extract the parameter values and estimate errors.

Although presented as a list, it is not necessarily a linear process, in that the order can be different to that above, and various steps can be repeated. The above list also does not include any visualization steps needed to inform and validate any choices.
## Fitting a one-dimensional data set¶
The following data - where x is the independent axis and y the dependent one - is used in this example:
>>> np.random.seed(0)
>>> x = np.linspace(-5., 5., 200)
>>> ampl_true = 3
>>> pos_true = 1.3
>>> sigma_true = 0.8
>>> err_true = 0.2
>>> y = ampl_true * np.exp(-0.5 * (x - pos_true)**2 / sigma_true**2)
>>> y += np.random.normal(0., err_true, x.shape)
>>> plt.plot(x, y, 'ko');
The aim is to fit a one-dimensional gaussian to this data and to recover estimates of the true parameters of the model, namely the position (pos_true), amplitude (ampl_true), and width (sigma_true). The err_true term adds in random noise (using a Normal distribution) to ensure the data is not perfectly-described by the model.
### Creating a data object¶
Rather than pass around the arrays to be fit, Sherpa has the concept of a “data object”, which stores the independent and dependent axes, as well as any related metadata. For this example, the class to use is Data1D, which requires a string label (used to identify the data), the independent axis, and then dependent axis:
>>> from sherpa.data import Data1D
>>> d = Data1D('example', x, y)
>>> print(d)
name = example
x = Float64[200]
y = Float64[200]
staterror = None
syserror = None
At this point no errors are being used in the fit, so the staterror and syserror fields are empty. They can be set either when the object is created or at a later time.
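For example, an error column could be attached to an existing data object by assigning to the staterror field; the line below is a sketch added to this excerpt (it assumes a constant per-bin error of 0.2, matching err_true above) rather than part of the original page:

>>> d.staterror = np.full(x.size, 0.2)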
### Plotting the data¶
The sherpa.plot module provides a number of classes that create pre-canned plots. For example, the sherpa.plot.DataPlot class can be used to display the data. The steps taken are normally:
1. create the object;
2. call the prepare() method with the appropriate arguments, in this case the data object;
3. and then call the plot() method.
Sherpa has one plotting backend, matplotlib, which is used to display plots. There is limited support for customizing these plots - such as always drawing the Y axis with a logarithmic scale - but extensive changes will require calling the plotting back-end directly.
As an example of the DataPlot output:
>>> from sherpa.plot import DataPlot
>>> dplot = DataPlot()
>>> dplot.prepare(d)
>>> dplot.plot()
It is not required to use these classes and in the following, plots will be created either via these classes or directly via matplotlib.
### Define the model¶
In this example a single model is used - a one-dimensional gaussian provided by the Gauss1D class - but more complex examples are possible: these include multiple components, sharing models between data sets, and adding user-defined models. A full description of the model language and capabilities is provided in Creating model instances:
>>> from sherpa.models.basic import Gauss1D
>>> g = Gauss1D()
>>> print(g)
gauss1d
Param Type Value Min Max Units
----- ---- ----- --- --- -----
gauss1d.fwhm thawed 10 1.17549e-38 3.40282e+38
gauss1d.pos thawed 0 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 1 -3.40282e+38 3.40282e+38
It is also possible to restrict the range of a parameter, toggle parameters so that they are fixed or fitted, and link parameters together.
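For instance, limits, frozen parameters, and links can be set through the parameter attributes; the lines below are a sketch added to this excerpt (based on the standard Sherpa parameter interface, so treat the exact attribute names as assumptions) rather than part of the original page:

>>> g.ampl.min = 0          # restrict the amplitude to non-negative values
>>> g.pos.frozen = True     # keep the position fixed during the fit
>>> g2 = Gauss1D('g2')
>>> g2.fwhm = g.fwhm        # link g2.fwhm so that it always follows g.fwhm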
The sherpa.plot.ModelPlot class can be used to visualize the model. The prepare() method takes both a data object and the model to plot:
>>> from sherpa.plot import ModelPlot
>>> mplot = ModelPlot()
>>> mplot.prepare(d, g)
>>> mplot.plot()
There is also a sherpa.plot.FitPlot class which will combine the two plot results, but it is often just-as-easy to combine them directly:
>>> dplot.plot()
>>> mplot.overplot()
The model parameters can be changed - either manually or automatically - to try and start the fit off closer to the best-fit location, but for this example we shall leave the initial parameters as they are.
### Select the statistics¶
In order to optimise a model - that is, to change the model parameters until the best-fit location is found - a statistic is needed. The statistic calculates a numerical value for a given set of model parameters; this is a measure of how well the model matches the data, and can include knowledge of errors on the dependent axis values. The optimiser (chosen below) attempts to find the set of parameters which minimises this statistic value.
For this example, since the dependent axis (y) has no error estimate, we shall pick the least-square statistic (LeastSq), which calculates the numerical difference of the model to the data for each point:
>>> from sherpa.stats import LeastSq
>>> stat = LeastSq()
### Select the optimisation routine¶
The optimiser is the part that determines how to minimise the statistic value (i.e. how to vary the parameter values of the model to find a local minimum). The main optimisers provided by Sherpa are NelderMead (also known as Simplex) and LevMar (Levenberg-Marquardt). The latter is often quicker, but less robust, so we start with it (the optimiser can be changed and the data re-fit):
>>> from sherpa.optmethods import LevMar
>>> opt = LevMar()
>>> print(opt)
name = levmar
ftol = 1.19209289551e-07
xtol = 1.19209289551e-07
gtol = 1.19209289551e-07
maxfev = None
epsfcn = 1.19209289551e-07
factor = 100.0
verbose = 0
### Fit the data¶
The Fit class is used to bundle up the data, model, statistic, and optimiser choices. The fit() method runs the optimiser, and returns a FitResults instance, which contains information on how the fit performed. This information includes the succeeded attribute, to determine whether the fit converged, as well as information on the fit (such as the start and end statistic values) and best-fit parameter values. Note that the model expression can also be queried for the new parameter values.
>>> from sherpa.fit import Fit
>>> gfit = Fit(d, g, stat=stat, method=opt)
>>> print(gfit)
data = example
model = gauss1d
stat = LeastSq
method = LevMar
estmethod = Covariance
To actually fit the data, use the fit() method, which - depending on the data, model, or statistic being used - can take some time:
>>> gres = gfit.fit()
>>> print(gres.succeeded)
True
One useful method for interactive analysis is format(), which returns a string representation of the fit results, as shown below:
>>> print(gres.format())
Method = levmar
Statistic = leastsq
Initial fit statistic = 180.71
Final fit statistic = 8.06975 at function evaluation 30
Data points = 200
Degrees of freedom = 197
Change in statistic = 172.641
gauss1d.fwhm 1.91572 +/- 0.165982
gauss1d.pos 1.2743 +/- 0.0704859
gauss1d.ampl 3.04706 +/- 0.228618
Note
The LevMar optimiser calculates the covariance matrix at the best-fit location, and the errors from this are reported in the output from the call to the fit() method. In this particular case - which uses the LeastSq statistic - the error estimates do not have much meaning. As discussed below, Sherpa can make use of error estimates on the data to calculate meaningful parameter errors.
The sherpa.plot.FitPlot class will display the data and model. The prepare() method requires data and model plot objects; in this case the previous versions can be re-used, although the model plot needs to be updated to reflect the changes to the model parameters:
>>> from sherpa.plot import FitPlot
>>> fplot = FitPlot()
>>> mplot.prepare(d, g)
>>> fplot.prepare(dplot, mplot)
>>> fplot.plot()
As the model can be evaluated directly, this plot can also be created manually:
>>> plt.plot(d.x, d.y, 'ko', label='Data')
>>> plt.plot(d.x, g(d.x), linewidth=2, label='Gaussian')
>>> plt.legend(loc=2);
### Extract the parameter values¶
The fit results include a large number of attributes, many of which are not relevant here (as the fit was done with no error values). The following relation is used to convert from the full-width half-maximum value, used by the Gauss1D model, to the Gaussian sigma value used to create the data: $$\mathrm{FWHM} = 2 \sqrt{2\ln(2)}\, \sigma$$:
>>> print(gres)
datasets = None
itermethodname = none
methodname = levmar
statname = leastsq
succeeded = True
parnames = ('gauss1d.fwhm', 'gauss1d.pos', 'gauss1d.ampl')
parvals = (1.915724111406394, 1.2743015983545247, 3.0470560360944017)
statval = 8.069746329529591
istatval = 180.71034547759984
dstatval = 172.64059914807027
numpoints = 200
dof = 197
qval = None
rstat = None
message = successful termination
nfev = 30
>>> conv = 2 * np.sqrt(2 * np.log(2))
>>> ans = dict(zip(gres.parnames, gres.parvals))
>>> print("Position = {:.2f} truth= {:.2f}".format(ans['gauss1d.pos'], pos_true))
Position = 1.27 truth= 1.30
>>> print("Amplitude= {:.2f} truth= {:.2f}".format(ans['gauss1d.ampl'], ampl_true))
Amplitude= 3.05 truth= 3.00
>>> print("Sigma = {:.2f} truth= {:.2f}".format(ans['gauss1d.fwhm']/conv, sigma_true))
Sigma = 0.81 truth= 0.80
The model, and its parameter values, can also be queried directly, as they have been changed by the fit:
>>> print(g)
gauss1d
Param Type Value Min Max Units
----- ---- ----- --- --- -----
gauss1d.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gauss1d.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
>>> print(g.pos)
val = 1.2743015983545247
min = -3.4028234663852886e+38
max = 3.4028234663852886e+38
units =
frozen = False
default_val = 0.0
default_min = -3.4028234663852886e+38
default_max = 3.4028234663852886e+38
## Including errors¶
For this example, the error on each bin is assumed to be the same, and equal to the true error:
>>> dy = np.ones(x.size) * err_true
>>> de = Data1D('with-errors', x, y, staterror=dy)
>>> print(de)
name = with-errors
x = Float64[200]
y = Float64[200]
staterror = Float64[200]
syserror = None
The statistic is changed from least squares to chi-square (Chi2), to take advantage of this extra knowledge (i.e. the Chi-square statistic includes the error value per bin when calculating the statistic value):
>>> from sherpa.stats import Chi2
>>> ustat = Chi2()
>>> ge = Gauss1D('gerr')
>>> gefit = Fit(de, ge, stat=ustat, method=opt)
>>> geres = gefit.fit()
>>> print(geres.format())
Method = levmar
Statistic = chi2
Initial fit statistic = 4517.76
Final fit statistic = 201.744 at function evaluation 30
Data points = 200
Degrees of freedom = 197
Probability [Q-value] = 0.393342
Reduced statistic = 1.02408
Change in statistic = 4316.01
gerr.fwhm 1.91572 +/- 0.0331963
gerr.pos 1.2743 +/- 0.0140972
gerr.ampl 3.04706 +/- 0.0457235
>>> if not geres.succeeded: print(geres.message)
Since the error value is independent of bin, the fit results should be the same here (that is, the parameters in g are the same as those in ge):
>>> print(g)
gauss1d
Param Type Value Min Max Units
----- ---- ----- --- --- -----
gauss1d.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gauss1d.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gauss1d.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
>>> print(ge)
gerr
Param Type Value Min Max Units
----- ---- ----- --- --- -----
gerr.fwhm thawed 1.91572 1.17549e-38 3.40282e+38
gerr.pos thawed 1.2743 -3.40282e+38 3.40282e+38
gerr.ampl thawed 3.04706 -3.40282e+38 3.40282e+38
The difference is that more of the fields in the result structure are populated: in particular the rstat and qval fields, which give the reduced statistic and the probability of obtaining this statistic value, respectively:
>>> print(geres)
datasets = None
itermethodname = none
methodname = levmar
statname = chi2
succeeded = True
parnames = ('gerr.fwhm', 'gerr.pos', 'gerr.ampl')
parvals = (1.9157241114064163, 1.2743015983545292, 3.047056036094392)
statval = 201.74365823823976
istatval = 4517.758636940002
dstatval = 4316.014978701763
numpoints = 200
dof = 197
qval = 0.3933424667915623
rstat = 1.0240794834428415
message = successful termination
nfev = 30
### Error analysis¶
The default error estimation routine is Covariance, which will be replaced by Confidence for this example:
>>> from sherpa.estmethods import Confidence
>>> gefit.estmethod = Confidence()
>>> print(gefit.estmethod)
name = confidence
sigma = 1
eps = 0.01
maxiters = 200
soft_limits = False
remin = 0.01
fast = False
parallel = True
numcores = 4
maxfits = 5
max_rstat = 3
tol = 0.2
verbose = False
openinterval = False
Running the error analysis can take time, for particularly complex models. The default behavior is to use all the available CPU cores on the machine, but this can be changed with the numcores attribute. Note that a message is displayed to the screen when each bound is calculated, to indicate progress:
>>> errors = gefit.est_errors()
gerr.fwhm lower bound: -0.0326327
gerr.fwhm upper bound: 0.0332578
gerr.pos lower bound: -0.0140981
gerr.pos upper bound: 0.0140981
gerr.ampl lower bound: -0.0456119
gerr.ampl upper bound: 0.0456119
The results can be displayed:
>>> print(errors.format())
Confidence Method = confidence
Iterative Fit Method = None
Fitting Method = levmar
Statistic = chi2
confidence 1-sigma (68.2689%) bounds:
Param Best-Fit Lower Bound Upper Bound
----- -------- ----------- -----------
gerr.fwhm 1.91572 -0.0326327 0.0332578
gerr.pos 1.2743 -0.0140981 0.0140981
gerr.ampl 3.04706 -0.0456119 0.0456119
The ErrorEstResults instance returned by est_errors() contains the parameter values and limits:
>>> print(errors)
datasets = None
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2
sigma = 1
percent = 68.26894921370858
parnames = ('gerr.fwhm', 'gerr.pos', 'gerr.ampl')
parvals = (1.9157241114064163, 1.2743015983545292, 3.047056036094392)
parmins = (-0.0326327431233302, -0.014098074065578947, -0.045611913713536456)
parmaxes = (0.033257800216357714, 0.014098074065578947, 0.045611913713536456)
nfits = 29
The data can be accessed, e.g. to create a dictionary where the keys are the parameter names and the values represent the parameter ranges:
>>> dvals = zip(errors.parnames, errors.parvals, errors.parmins,
... errors.parmaxes)
>>> pvals = {d[0]: {'val': d[1], 'min': d[2], 'max': d[3]}
...              for d in dvals}
>>> pvals['gerr.pos']
{'min': -0.014098074065578947, 'max': 0.014098074065578947, 'val': 1.2743015983545292}
### Screen output¶
The default behavior - when not using the default Covariance method - is for est_errors() to print out the parameter bounds as it finds them, which can be useful in an interactive session since the error analysis can be slow. This can be controlled using the Sherpa logging interface.
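For example, since Sherpa reports these messages through the standard Python logging module, they can be hidden for the duration of the call and then restored along the following lines (a sketch; the choice of logging level is up to you):

>>> import logging
>>> logging.getLogger('sherpa').setLevel(logging.WARNING)
>>> errors = gefit.est_errors()
>>> logging.getLogger('sherpa').setLevel(logging.INFO)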
### A single parameter¶
It is possible to investigate the error surface of a single parameter using the IntervalProjection class. The following shows how the error surface changes with the position of the Gaussian. The prepare() method is given the range over which to vary the parameter (the range is chosen to be close to the three-sigma limit from the confidence analysis above, and the dotted line is added to indicate the three-sigma limit above the best-fit for a single parameter):
>>> from sherpa.plot import IntervalProjection
>>> iproj = IntervalProjection()
>>> iproj.prepare(min=1.23, max=1.32, nloop=41)
>>> iproj.calc(gefit, ge.pos)
This can take some time, depending on the complexity of the model and number of steps requested. The resulting data looks like:
>>> iproj.plot()
>>> plt.axhline(geres.statval + 9, linestyle='dotted');
The curve is stored in the IntervalProjection object (in fact, these values are created by the call to calc() and so can be accessed without needing to create the plot):
>>> print(iproj)
x = [ 1.23 , 1.2323, 1.2345, 1.2368, 1.239 , 1.2412, 1.2435, 1.2457, 1.248 ,
1.2503, 1.2525, 1.2548, 1.257 , 1.2592, 1.2615, 1.2637, 1.266 , 1.2683,
1.2705, 1.2728, 1.275 , 1.2772, 1.2795, 1.2817, 1.284 , 1.2863, 1.2885,
1.2908, 1.293 , 1.2953, 1.2975, 1.2997, 1.302 , 1.3043, 1.3065, 1.3088,
1.311 , 1.3133, 1.3155, 1.3177, 1.32 ]
y = [ 211.597 , 210.6231, 209.6997, 208.8267, 208.0044, 207.2325, 206.5113,
205.8408, 205.2209, 204.6518, 204.1334, 203.6658, 203.249 , 202.883 ,
202.5679, 202.3037, 202.0903, 201.9279, 201.8164, 201.7558, 201.7461,
201.7874, 201.8796, 202.0228, 202.2169, 202.462 , 202.758 , 203.105 ,
203.5028, 203.9516, 204.4513, 205.0018, 205.6032, 206.2555, 206.9585,
207.7124, 208.5169, 209.3723, 210.2783, 211.235 , 212.2423]
min = 1.23
max = 1.32
nloop = 41
delv = None
fac = 1
log = False
### A contour plot of two parameters¶
The RegionProjection class supports the comparison of two parameters. The contours indicate the one-, two-, and three-sigma levels.
>>> from sherpa.plot import RegionProjection
>>> rproj = RegionProjection()
>>> rproj.prepare(min=[2.8, 1.75], max=[3.3, 2.1], nloop=[21, 21])
>>> rproj.calc(gefit, ge.ampl, ge.fwhm)
As with the interval projection, this step can take time.
>>> rproj.contour()
As with the single-parameter case, the statistic values for the grid are stored in the RegionProjection object by the calc() call, and so can be accessed without needing to create the contour plot. Useful fields include x0 and x1 (the two parameter values), y (the statistic value), and levels (the values used for the contours):
>>> lvls = rproj.levels
>>> print(lvls)
[ 204.03940717 207.92373254 213.57281632]
>>> nx, ny = rproj.nloop
>>> x0, x1, y = rproj.x0, rproj.x1, rproj.y
>>> x0.resize(ny, nx)
>>> x1.resize(ny, nx)
>>> y.resize(ny, nx)
>>> plt.imshow(y, origin='lower', cmap='viridis_r', aspect='auto',
... extent=(x0.min(), x0.max(), x1.min(), x1.max()))
>>> plt.colorbar()
>>> plt.xlabel(rproj.xlabel)
>>> plt.ylabel(rproj.ylabel)
>>> cs = plt.contour(x0, x1, y, levels=lvls)
>>> lbls = [(v, r"${}\sigma$".format(i+1)) for i, v in enumerate(lvls)]
>>> plt.clabel(cs, lvls, fmt=dict(lbls));
## Fitting two-dimensional data¶
Sherpa has support for two-dimensional data - that is data defined on the independent axes x0 and x1. In the example below a contiguous grid is used, that is the pixel size is constant, but there is no requirement that this is the case.
>>> np.random.seed(0)
>>> x1, x0 = np.mgrid[:128, :128]
>>> y = 2 * x0**2 - 0.5 * x1**2 + 1.5 * x0 * x1 - 1
>>> y += np.random.normal(0, 0.1, y.shape) * 50000
### Creating a data object¶
To support irregularly-gridded data, the multi-dimensional data classes require that the coordinate arrays and data values are one-dimensional. For example, the following code creates a Data2D object:
>>> from sherpa.data import Data2D
>>> x0axis = x0.ravel()
>>> x1axis = x1.ravel()
>>> yaxis = y.ravel()
>>> d2 = Data2D('img', x0axis, x1axis, yaxis, shape=(128, 128))
>>> print(d2)
name = img
x0 = Int64[16384]
x1 = Int64[16384]
y = Float64[16384]
shape = (128, 128)
staterror = None
syserror = None
### Define the model¶
Creating the model is the same as the one-dimensional case; in this case the Polynom2D class is used to create a low-order polynomial:
>>> from sherpa.models.basic import Polynom2D
>>> p2 = Polynom2D('p2')
>>> print(p2)
p2
Param Type Value Min Max Units
----- ---- ----- --- --- -----
p2.c thawed 1 -3.40282e+38 3.40282e+38
p2.cy1 thawed 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y2 thawed 0 -3.40282e+38 3.40282e+38
### Control the parameters being fit¶
To reduce the number of parameters being fit, the frozen attribute can be set:
>>> for n in ['cx1', 'cy1', 'cx2y1', 'cx1y2', 'cx2y2']:
...     getattr(p2, n).frozen = True
...
>>> print(p2)
p2
Param Type Value Min Max Units
----- ---- ----- --- --- -----
p2.c thawed 1 -3.40282e+38 3.40282e+38
p2.cy1 frozen 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 0 -3.40282e+38 3.40282e+38
p2.cx1y2 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 0 -3.40282e+38 3.40282e+38
p2.cx2y1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2y2 frozen 0 -3.40282e+38 3.40282e+38
### Fit the data¶
Fitting is no different (the same statistic and optimisation objects used earlier could have been re-used here):
>>> f2 = Fit(d2, p2, stat=LeastSq(), method=LevMar())
>>> res2 = f2.fit()
>>> if not res2.succeeded: print(res2.message)
>>> print(res2)
datasets = None
itermethodname = none
methodname = levmar
statname = leastsq
succeeded = True
parnames = ('p2.c', 'p2.cy2', 'p2.cx1y1', 'p2.cx2')
parvals = (-80.28947555488139, -0.48174521913599017, 1.5022711710872119, 1.9894112623568638)
statval = 400658883390.66907
istatval = 6571471882611.967
dstatval = 6170812999221.298
numpoints = 16384
dof = 16380
qval = None
rstat = None
message = successful termination
nfev = 45
>>> print(p2)
p2
Param Type Value Min Max Units
----- ---- ----- --- --- -----
p2.c thawed -80.2895 -3.40282e+38 3.40282e+38
p2.cy1 frozen 0 -3.40282e+38 3.40282e+38
p2.cy2 thawed -0.481745 -3.40282e+38 3.40282e+38
p2.cx1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx1y1 thawed 1.50227 -3.40282e+38 3.40282e+38
p2.cx1y2 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2 thawed 1.98941 -3.40282e+38 3.40282e+38
p2.cx2y1 frozen 0 -3.40282e+38 3.40282e+38
p2.cx2y2 frozen 0 -3.40282e+38 3.40282e+38
### Display the model¶
The model can be visualized by evaluating it over a grid of points and then displaying it:
>>> m2 = p2(x0axis, x1axis).reshape(128, 128)
>>> def pimg(d, title):
... plt.imshow(d, origin='lower', interpolation='nearest',
... vmin=-1e4, vmax=5e4, cmap='viridis')
... plt.axis('off')
... plt.colorbar(orientation='horizontal',
... ticks=[0, 20000, 40000])
... plt.title(title)
...
>>> plt.figure(figsize=(8, 3))
>>> plt.subplot(1, 3, 1);
>>> pimg(y, "Data")
>>> plt.subplot(1, 3, 2)
>>> pimg(m2, "Model")
>>> plt.subplot(1, 3, 3)
>>> pimg(y - m2, "Residual")
Note
The sherpa.image module provides support for interactive image visualization, but this only works if the DS9 image viewer is installed. For the examples in this document, matplotlib plots will be created to view the data directly.
## Simultaneous fits¶
Sherpa allows multiple data sets to be fit at the same time, although there is only really a benefit if there is some model component or value that is shared between the data sets. In this example we have a dataset containing a Lorentzian signal with a background component, and another with just the background component. Fitting both together can improve the constraints on the parameter values.
First we start by simulating the data, where the Polynom1D class is used to model the background as a straight line, and Lorentz1D for the signal:
>>> from sherpa.models import Polynom1D
>>> from sherpa.astro.models import Lorentz1D
>>> tpoly = Polynom1D()
>>> tlor = Lorentz1D()
>>> tpoly.c0 = 50
>>> tpoly.c1 = 1e-2
>>> tlor.pos = 4400
>>> tlor.fwhm = 200
>>> tlor.ampl = 1e4
>>> x1 = np.linspace(4200, 4600, 21)
>>> y1 = tlor(x1) + tpoly(x1) + np.random.normal(scale=5, size=x1.size)
>>> x2 = np.linspace(4100, 4900, 11)
>>> y2 = tpoly(x2) + np.random.normal(scale=5, size=x2.size)
>>> print("x1 size {} x2 size {}".format(x1.size, x2.size))
x1 size 21 x2 size 11
There is no requirement that the data sets have a common grid, as can be seen in a raw view of the data:
>>> plt.plot(x1, y1)
>>> plt.plot(x2, y2)
The fits are set up as before; a data object is needed for each data set, and model instances are created:
>>> d1 = Data1D('a', x1, y1)
>>> d2 = Data1D('b', x2, y2)
>>> fpoly, flor = Polynom1D(), Lorentz1D()
>>> fpoly.c1.thaw()
>>> flor.pos = 4500
To help the fit, we use a simple algorithm to estimate the starting point for the source amplitude, by evaluating the model on the data grid and calculating the change in the amplitude needed to make it match the data:
>>> flor.ampl = y1.sum() / flor(x1).sum()
For simultaneous fits, the same optimisation method and statistic need to be used for each fit (this is an area we are looking to improve):
>>> from sherpa.optmethods import NelderMead
>>> stat, opt = LeastSq(), NelderMead()
Set up the fits to the individual data sets:
>>> f1 = Fit(d1, fpoly + flor, stat, opt)
>>> f2 = Fit(d2, fpoly, stat, opt)
and a simultaneous (i.e. to both data sets) fit:
>>> from sherpa.data import DataSimulFit
>>> from sherpa.models import SimulFitModel
>>> sdata = DataSimulFit('all', (d1, d2))
>>> smodel = SimulFitModel('all', (fpoly + flor, fpoly))
>>> sfit = Fit(sdata, smodel, stat, opt)
Note that there is a simulfit() method that can be used to fit using multiple sherpa.fit.Fit objects, which wraps the above (using individual fit objects allows some of the data to be fit first, which may help reduce the parameter space needed to be searched):
>>> res = sfit.fit()
>>> print(res)
datasets = None
itermethodname = none
statname = leastsq
succeeded = True
parnames = ('polynom1d.c0', 'polynom1d.c1', 'lorentz1d.fwhm', 'lorentz1d.pos', 'lorentz1d.ampl')
parvals = (36.829217311393585, 0.012540257025027028, 249.55651534213359, 4402.7031194359088, 12793.559398547319)
statval = 329.6525419378109
istatval = 3813284.1822045334
dstatval = 3812954.52966
numpoints = 32
dof = 27
qval = None
rstat = None
message = Optimization terminated successfully
nfev = 1152
The values of the numpoints and dof fields show that both datasets have been used in the fit.
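As an aside, the simulfit() route mentioned above would be along the following lines (a sketch, re-using the f1 and f2 objects created earlier; it is not run here):

>>> res_both = f1.simulfit(f2)
>>> print(res_both.format())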
The data can then be viewed (in this case a separate grid is used, but the data objects could be used to define the grid):
>>> plt.plot(x1, y1, label='Data 1')
>>> plt.plot(x2, y2, label='Data 2')
>>> x = np.arange(4000, 5000, 10)
>>> plt.plot(x, (fpoly + flor)(x), linestyle='dotted', label='Fit 1')
>>> plt.plot(x, fpoly(x), linestyle='dotted', label='Fit 2')
>>> plt.legend();
How do we do error analysis? We could call sfit.est_errors(), but that will fail with the current statistic (LeastSq), so the statistic needs to be changed first. The error is 5 per bin, which has to be set up:
>>> print(sfit.calc_stat_info())
name =
ids = None
bkg_ids = None
statname = leastsq
statval = 329.6525419378109
numpoints = 32
dof = 27
qval = None
rstat = None
>>> d1.staterror = np.ones(x1.size) * 5
>>> d2.staterror = np.ones(x2.size) * 5
>>> sfit.stat = Chi2()
>>> check = sfit.fit()
How much did the fit change?:
>>> check.dstatval
0.0
Note that since the error on each bin is the same value, the best-fit value is not going to be different to the LeastSq result (so dstatval should be 0):
>>> print(sfit.calc_stat_info())
name =
ids = None
bkg_ids = None
statname = chi2
statval = 13.186101677512438
numpoints = 32
dof = 27
qval = 0.988009259609
rstat = 0.48837413620416437
>>> sres = sfit.est_errors()
>>> print(sres)
datasets = None
methodname = covariance
iterfitname = none
statname = chi2
sigma = 1
percent = 68.2689492137
parnames = ('polynom1d.c0', 'polynom1d.c1', 'lorentz1d.fwhm', 'lorentz1d.pos', 'lorentz1d.ampl')
parvals = (36.829217311393585, 0.012540257025027028, 249.55651534213359, 4402.7031194359088, 12793.559398547319)
parmins = (-4.9568824809960628, -0.0011007470586726147, -6.6079122387075824, -2.0094070026087474, -337.50275154547768)
parmaxes = (4.9568824809960628, 0.0011007470586726147, 6.6079122387075824, 2.0094070026087474, 337.50275154547768)
nfits = 0
Error estimates on a single parameter are as above:
>>> iproj = IntervalProjection()
>>> iproj.prepare(min=6000, max=18000, nloop=101)
>>> iproj.calc(sfit, flor.ampl)
>>> iproj.plot() | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47856250405311584, "perplexity": 10247.194351599148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00564.warc.gz"} |
http://nrich.maths.org/6755/solution | All in the Mind
Imagine you are suspending a cube from one vertex (corner) and allowing it to hang freely. Now imagine you are lowering it into water until it is exactly half submerged. What shape does the surface of the water make around the cube?
Painting Cubes
Imagine you have six different colours of paint. You paint a cube using a different colour for each of the six faces. How many different cubes can be painted using the same set of six colours?
Tic Tac Toe
In the game of Noughts and Crosses there are 8 distinct winning lines. How many distinct winning lines are there in a game played on a 3 by 3 by 3 board, with 27 cells?
Weekly Problem 1 - 2010
Stage: 3 Short Challenge Level:
One can see the greatest number of cubes when looking at three faces at once. We count the cubes on each face, giving $3\times 11^2=363$ cubes, but have to subtract from this the cubes along the three edges that have been counted twice, and then add back for the cube at the corner for which three faces are visible. The final quantity is $363-(3\times 11)+1=331$ cubes.
This problem is taken from the UKMT Mathematical Challenges.
View the current weekly problem | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28031012415885925, "perplexity": 570.5163186573926}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021511414/warc/CC-MAIN-20140305121151-00028-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://dennybritz.com/posts/wildml/implementing-a-neural-network-from-scratch/ | # Implementing a Neural Network from Scratch in Python
All the code is also available as a Jupyter notebook on GitHub.
In this post we will implement a simple 3-layer neural network from scratch. We won’t derive all the math that’s required, but I will try to give an intuitive explanation of what we are doing. I will also point to resources for you to read up on the details.
Here I’m assuming that you are familiar with basic Calculus and Machine Learning concepts, e.g. you know what classification and regularization is. Ideally you also know a bit about how optimization techniques like gradient descent work. But even if you’re not familiar with any of the above this post could still turn out to be interesting!
But why implement a Neural Network from scratch at all? Even if you plan on using Neural Network libraries like PyBrain in the future, implementing a network from scratch at least once is an extremely valuable exercise. It helps you gain an understanding of how neural networks work, and that is essential for designing effective models.
One thing to note is that the code examples here aren’t terribly efficient. They are meant to be easy to understand. In an upcoming post I will explore how to write an efficient Neural Network implementation using Theano. (Update: now available)
## Generating a dataset #
Let’s start by generating a dataset we can play with. Fortunately, scikit-learn has some useful dataset generators, so we don’t need to write the code ourselves. We will go with the make_moons function.
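(The code in this post assumes the usual imports, which are not shown in this excerpt; something along these lines:)

import numpy as np
import sklearn.datasets
import sklearn.linear_model
import matplotlib.pyplot as plt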
# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)
The dataset we generated has two classes, plotted as red and blue points. You can think of the blue dots as male patients and the red dots as female patients, with the x- and y- axis being medical measurements.
Our goal is to train a Machine Learning classifier that predicts the correct class (male or female) given the x- and y- coordinates. Note that the data is not linearly separable: we can’t draw a straight line that separates the two classes. This means that linear classifiers, such as Logistic Regression, won’t be able to fit the data unless you hand-engineer non-linear features (such as polynomials) that work well for the given dataset.
In fact, that’s one of the major advantages of Neural Networks. You don’t need to worry about feature engineering. The hidden layer of a neural network will learn features for you.
## Logistic Regression #
To demonstrate the point let’s train a Logistic Regression classifier. Its input will be the x- and y-values and the output the predicted class (0 or 1). To make our life easy we use the Logistic Regression class from scikit-learn.
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)
# Plot the decision boundary
plot_decision_boundary(lambda x: clf.predict(x))
plt.title("Logistic Regression")
The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as well as it can using a straight line, but it’s unable to capture the “moon shape” of our data.
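The helper plot_decision_boundary used here (and again later in the post) is defined in the accompanying notebook rather than in this excerpt; a minimal version consistent with how it is called might look like the following sketch (the grid step size and padding are arbitrary choices):

# Sketch of the plotting helper used in this post (not from the original text).
# It evaluates pred_func on a grid covering the data and draws the predicted regions.
def plot_decision_boundary(pred_func):
    # Set min and max values and give the grid some padding
    x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
    y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the class for every point on the grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and the training examples
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)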
## Training a Neural Network #
Let’s now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. (Because we only have 2 classes we could actually get away with only one output node predicting 0 or 1, but having 2 makes it easier to extend the network to more classes later on). The input to the network will be x- and y- coordinates and its output will be two probabilities, one for class 0 (“female”) and one for class 1 (“male”). It looks something like this:
We can choose the dimensionality (the number of nodes) of the hidden layer. The more nodes we put into the hidden layer the more complex functions we will be able to fit. But higher dimensionality comes at a cost. First, more computation is required to make predictions and learn the network parameters. A bigger number of parameters also means we become more prone to overfitting our data.
How to choose the size of the hidden layer? While there are some general guidelines and recommendations, it always depends on your specific problem and is more of an art than a science. We will play with the number of nodes in the hidden layer later on and see how it affects our output.
We also need to pick an activation function for our hidden layer. The activation function transforms the inputs of the layer into its outputs. A nonlinear activation function is what allows us to fit nonlinear hypotheses. Common choices for activation functions are tanh, the sigmoid function, or ReLUs. We will use tanh, which performs quite well in many scenarios. A nice property of these functions is that their derivative can be computed using the original function value. For example, the derivative of $$\tanh x$$ is $$1-\tanh^2 x$$. This is useful because it allows us to compute $$\tanh x$$ once and re-use that same value later to get the derivative.
Because we want our network to output probabilities, the activation function for the output layer will be the softmax, which is a way to convert raw scores to probabilities. If you’re familiar with the logistic function you can think of softmax as its generalization to multiple classes.
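As a quick illustration (not part of the original post), softmax just exponentiates the raw scores and normalizes them so that they sum to one:

scores = np.array([2.0, 1.0, 0.1])
probs = np.exp(scores) / np.sum(np.exp(scores))
# probs is approximately [0.659, 0.242, 0.099] and sums to 1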
#### How our network makes predictions #
Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s) we defined above. If x is the 2-dimensional input to our network then we calculate our prediction $$\hat{y}$$ (also two-dimensional) as follows:
\begin{aligned} z_1 & = xW_1 + b_1 \\ a_1 & = \tanh(z_1) \\ z_2 & = a_1W_2 + b_2 \\ a_2 & = \hat{y} = \mathrm{softmax}(z_2) \end{aligned}
$$z_i$$ is the input of layer $$i$$ and $$a_i$$ is the output of layer $$i$$ after applying the activation function. $$W_1, b_1, W_2, b_2$$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $$W_1 \in \mathbb{R}^{2\times500}$$, $$b_1 \in \mathbb{R}^{500}$$, $$W_2 \in \mathbb{R}^{500\times2}$$, $$b_2 \in \mathbb{R}^{2}$$. Now you see why we have more parameters if we increase the size of the hidden layer.
#### Learning the Parameters #
Learning the parameters for our network means finding parameters ($$W_1, b_1, W_2, b_2$$) that minimize the error on our training data. But how do we define the error? We call the function that measures our error the loss function. A common choice with the softmax output is the categorical cross-entropy loss (also known as negative log likelihood). If we have $$N$$ training examples and $$C$$ classes then the loss for our prediction $$\hat{y}$$ with respect to the true labels $$y$$ is given by:
\begin{aligned} L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i} \end{aligned}
The formula looks complicated, but all it really does is sum over our training examples and add to the loss if we predicted the incorrect class. The further away the two probability distributions $$y$$ (the correct labels) and $$\hat{y}$$ (our predictions) are, the greater our loss will be. By finding parameters that minimize the loss we maximize the likelihood of our training data.
We can use gradient descent to find this minimum. We will implement the most vanilla version of gradient descent, also called batch gradient descent with a fixed learning rate. Variations such as SGD (stochastic gradient descent) or minibatch gradient descent typically perform better in practice. So if you are serious you’ll want to use one of these, and ideally you would also decay the learning rate over time.
As an input, gradient descent needs the gradients (vector of derivatives) of the loss function with respect to our parameters: $$\frac{\partial{L}}{\partial{W_1}}$$, $$\frac{\partial{L}}{\partial{b_1}}$$, $$\frac{\partial{L}}{\partial{W_2}}$$, $$\frac{\partial{L}}{\partial{b_2}}$$. To calculate these gradients we use the famous backpropagation algorithm, which is a way to efficiently calculate the gradients starting from the output. I won’t go into detail about how backpropagation works, but there are many excellent explanations floating around the web.
Applying the backpropagation formula we find the following (just trust me on this for now):
\begin{aligned} & \delta_3 = \hat{y} - y \\ & \delta_2 = (1 - \tanh^2 z_1) \circ \delta_3W_2^T \\ & \frac{\partial{L}}{\partial{W_2}} = a_1^T \delta_3 \\ & \frac{\partial{L}}{\partial{b_2}} = \delta_3 \\ & \frac{\partial{L}}{\partial{W_1}} = x^T \delta_2 \\ & \frac{\partial{L}}{\partial{b_1}} = \delta_2 \\ \end{aligned}
#### Implementation #
Now we are ready for our implementation. We start by defining some useful variables and parameters for gradient descent:
num_examples = len(X) # training set size
nn_input_dim = 2 # input layer dimensionality
nn_output_dim = 2 # output layer dimensionality
# Gradient descent parameters (I picked these by hand)
epsilon = 0.01 # learning rate for gradient descent
reg_lambda = 0.01 # regularization strength
First let’s implement the loss function we defined above. We use this to evaluate how well our model is doing:
# Helper function to evaluate the total loss on the dataset
def calculate_loss(model):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation to calculate our predictions
    z1 = X.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    # Calculating the loss
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs)
    # Add regularization term to loss (optional)
    data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
    return 1./num_examples * data_loss
We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.
# Helper function to predict an output (0 or 1)
def predict(model, x):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation
    z1 = x.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    return np.argmax(probs, axis=1)
Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation derivates we found above.
# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
    # Initialize the parameters to random values. We need to learn these.
    np.random.seed(0)
    W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
    b1 = np.zeros((1, nn_hdim))
    W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
    b2 = np.zeros((1, nn_output_dim))
    # This is what we return at the end
    model = {}
    # Gradient descent. For each batch...
    for i in range(0, num_passes):
        # Forward propagation
        z1 = X.dot(W1) + b1
        a1 = np.tanh(z1)
        z2 = a1.dot(W2) + b2
        exp_scores = np.exp(z2)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
        # Backpropagation
        delta3 = probs
        delta3[range(num_examples), y] -= 1
        dW2 = (a1.T).dot(delta3)
        db2 = np.sum(delta3, axis=0, keepdims=True)
        delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)
        # Add regularization terms (b1 and b2 don't have regularization terms)
        dW2 += reg_lambda * W2
        dW1 += reg_lambda * W1
        W1 += -epsilon * dW1
        b1 += -epsilon * db1
        W2 += -epsilon * dW2
        b2 += -epsilon * db2
        # Assign new parameters to the model
        model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}
        # Optionally print the loss.
        # This is expensive because it uses the whole dataset, so we don't want to do it too often.
        if print_loss and i % 1000 == 0:
            print("Loss after iteration %i: %f" % (i, calculate_loss(model)))
    return model
## A network with a hidden layer of size 3 #
Let’s see what happens if we train a network with a hidden layer size of 3.
# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
Yay! This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.
## Varying the hidden layer size #
In the example above we picked a hidden layer size of 3. Let’s now get a sense of how varying the hidden layer size affects the result.
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
    plt.subplot(5, 2, i+1)
    plt.title('Hidden Layer size %d' % nn_hdim)
    model = build_model(nn_hdim)
    plot_decision_boundary(lambda x: predict(model, x))
plt.show()
We can see that a hidden layer of low dimensionality nicely captures the general trend of our data. Higher dimensionalities are prone to overfitting. They are “memorizing” the data as opposed to fitting the general shape. If we were to evaluate our model on a separate test set (and you should!) the model with a smaller hidden layer size would likely perform better due to better generalization. We could counteract overfitting with stronger regularization, but picking a correct size for the hidden layer is a much more economical solution.
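As a rough sketch of that kind of check (not in the original post), we can draw a fresh sample from the same distribution and measure the accuracy of the trained network on it, re-using the build_model and predict functions defined above:

# Evaluate the trained model on a freshly generated "test" sample
X_test, y_test = sklearn.datasets.make_moons(200, noise=0.20)
model = build_model(3)
test_accuracy = np.mean(predict(model, X_test) == y_test)
print("Accuracy on held-out data: %.3f" % test_accuracy)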
## Exercises #
Here are some things you can try to become more familiar with the code:
1. Instead of batch gradient descent, use minibatch gradient descent (mentioned above) to train the network. Minibatch gradient descent typically performs better in practice.
2. We used a fixed learning rate $$\epsilon$$ for gradient descent. Implement an annealing schedule for the gradient descent learning rate.
3. We used a $$\tanh$$ activation function for our hidden layer. Experiment with other activation functions (some are mentioned above). Note that changing the activation function also means changing the backpropagation derivative. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426321148872375, "perplexity": 1079.3577336239339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00176.warc.gz"} |
http://math.stackexchange.com/questions/72872/flow-on-manifolds-and-lie-bracket | Flow on manifolds and Lie bracket.
I'm currently reading through some notes on Lie Theory online, and I've stumbled across the following question, which I'm totally stumped by.
"Let X,Y be two commuting complete vector fields on a manifold M, that is $[X,Y]=0$. Show that the vector field X+Y is complete and that the flow of X+Y is given by $\phi_{t,X+Y}(p)=\phi_{X,t}\circ \phi_{Y,t}(p)$, where $\phi_{t,X}$ stands for the flow of the vector field X,and so on."
I have no problems showing that the vector field is complete. However, it's the flow part that bugs me. So far I've tried the following: look at $h(s,t,p) = \phi_{s,X}\circ \phi_{t,Y}(p)$, for some point p. Set $c(t,p) = h(t,t,p)$. We then have, after differentiating, that $\frac{d}{dt}\big|_{t=0}c(t,p) = D_1h(0,0,p)+D_2h(0,0,p)$. Since $h(t,0,p)=\phi_{t,X}(p)$ and $h(0,t,p) = \phi_{t,Y}(p)$ we get that $D_1h(0,0,p) = X(p)$ and $D_2h(0,0,p) = Y(p)$, and thus the velocity of $c(\cdot,p)$ at $t=0$ is $X(p)+Y(p)$.
It would be good if you showed the reasoning you used to prove that $X+Y$ is complete. – Mariano Suárez-Alvarez Oct 16 '11 at 3:16
It is easy to show that if $\phi_t$ is a one-parameter group of diffeomorphisms with the property that $\frac{d}{dt}\vert_{t=0} \phi_t(p) = X_p$ for all $p$, then $\phi_t$ is the flow of $X$. So based on what you've shown, you now need to show that $\phi_{t,X} \circ \phi_{t,Y}$ is a one-parameter group of diffeomorphisms. To prove this you will need to show that the flows of $X$ and $Y$ commute, which follows from the commutativity of $X$ and $Y$ (I haven't worked out all the details but this is a known fact).
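(For reference, a sketch of the group-property step, granting that the flows of commuting vector fields commute: writing $\psi_t = \phi_{t,X}\circ\phi_{t,Y}$, we have
$$\psi_{t+s} = \phi_{t+s,X}\circ\phi_{t+s,Y} = \phi_{t,X}\circ\phi_{s,X}\circ\phi_{t,Y}\circ\phi_{s,Y} = \phi_{t,X}\circ\phi_{t,Y}\circ\phi_{s,X}\circ\phi_{s,Y} = \psi_t\circ\psi_s,$$
and $\frac{d}{dt}\vert_{t=0}\psi_t(p) = X_p + Y_p$ by the computation in the question, so $\psi_t$ is indeed the flow of $X+Y$.)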
Dear Eric, if am not wrong, then your second paragraph should be So based on what you've shown, you now need to show that $\phi_{t,X}\circ\phi_{t,Y}$ is a one-parameter group of diffeomorphisms''. – Giuseppe Oct 16 '11 at 7:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182591438293457, "perplexity": 70.27632197318606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824230.71/warc/CC-MAIN-20160723071024-00191-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://kyushu-u.pure.elsevier.com/en/publications/a-viable-explanation-of-the-cmb-dipolar-statistical-anisotropy | # A viable explanation of the CMB dipolar statistical anisotropy
Sugumi Kanno, Misao Sasaki, Takahiro Tanaka
Research output: Contribution to journalArticlepeer-review
36 Citations (Scopus)
## Abstract
The presence of a dipolar statistical anisotropy in the spectrum of cosmic microwave background (CMB) fluctuations was reported by the Wilkinson Microwave Anisotropy Probe (WMAP), and has recently been confirmed in the Planck 2013 analysis of the temperature anisotropies. At the same time, the Planck 2013 results report a stringent bound on the amplitude of the local-type non-Gaussianity. We show that the non-linear effect of the dipolar anisotropy generates not only a quadrupole moment in the CMB but also a local-type non-Gaussianity. Consequently, it is not easy to build models having a large dipolar modulation and at the same time a sufficiently small quadrupole and level of local bispectral anisotropy to agree with the present data. In particular, most models proposed so far are almost excluded, or are at best marginally consistent with observational data. We present a simple alternative scenario that may explain the dipolar statistical anisotropy while satisfying the observational bounds on both the quadrupole moment and local-type non-Gaussianity.
Original language: English
Article number: 111E01
Journal: Progress of Theoretical and Experimental Physics
Volume: 2013
Issue number: 11
DOIs: https://doi.org/10.1093/ptep/ptt093
Publication status: Published - Nov 1 2013
Externally published: Yes
## All Science Journal Classification (ASJC) codes
• Physics and Astronomy(all)
## Fingerprint
Dive into the research topics of 'A viable explanation of the CMB dipolar statistical anisotropy'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599047660827637, "perplexity": 2496.851092715365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00467.warc.gz"} |
https://sufficientlyminimal.netlify.app/2017/05/01/a-comparison-of-imputation-algorithms-for-modeling-water-quality/ | A Comparison of Imputation Algorithms for Modeling Water Quality
Approved by:
Primary Advisor: Dr. Edsel Peña; Professor, Dept. of Statistics
Second Reader: Dr. John Grego; Professor & Chair, Dept. of Statistics
Submitted in partial fulfillment of the requirements for
graduation with Honors from the South Carolina Honors College.
Abstract
This project addresses the need for predictive modeling tools to forecast expected concentrations of fecal bacteria in recreational waters in the Charleston, SC area. Data was provided by Charleston Waterkeeper, a water quality monitoring organization that has been measuring Enterococcus faecalis concentrations at 15 recreational sites since 2013. The data contain a non-negligible number of censored and missing observations, so three distinct imputation methods were developed and compared in terms of their effect on final predictive model characteristics. The best performing method relied on drawing samples from a truncated normal distribution to replace censored values, and using a partial model built from all non-missing observations to impute missing values. Finally, a predictive model of Enterococcus in terms of precipitation in the past 72 hours, tidal stage, and sample site was developed. Results from this project may be used for forecasting Enterococcus concentrations in practice, as well as for informing the imputation phases of future studies.
Personal Motivation
The origins of this research date back to the beginning of my involvement with Charleston Waterkeeper, a non-profit water quality monitoring organization based in Charleston, SC. I first discovered Charleston Waterkeeper during the Spring semester of my sophomore year at the University of South Carolina after reading The Riverkeepers by Robert F. Kennedy Jr. and John Cronin, a book that details the formation of the now international Waterkeeper Alliance [1]. Charleston Waterkeeper, a member of the Waterkeeper Alliance, relies on volunteer service to sustain their water quality monitoring program, a volunteer program through which I first became involved with the organization.
The following fall semester, I began taking upper level classes for my major in statistics at the University of South Carolina. The material covered in these introductory courses was foundational to completing this project, and highly applicable to the work I was doing the previous summer with Charleston Waterkeeper. Over the course of that year, I became familiar with statistical methods and tools that struck me as being highly applicable to Charleston Waterkeeper’s water quality monitoring program. With a lifelong interest in environmental sciences and a major in statistics, I decided that building a predictive water quality model would be a perfect application of my major to a meaningful problem.
To me, this thesis embodies the ability for research in statistics to occupy the intersection of natural science, mathematics, and computer science, among other areas. As a truly interdisciplinary subject, research in statistics often requires proficiency in programming languages to run simulations and analyses, knowledge of mathematics to prove underlying principles, and a familiarity with the project’s area of application to guide and contextualize the analysis. Most importantly, a grasp of fundamental statistical theorems and methods is necessary to perform the modeling of natural phenomena. These are all skills that I was introduced to through coursework at USC. I am grateful to my professors and mentors who have challenged me to learn this material both in and out of the classroom.
A Note on Reproducibility
This thesis was written entirely using R Markdown, a document format provided by R Studio for creating technical documents in R [2]. R Markdown allows for seamless integration of R code, output of R code, and LaTeX typesetting. Since R was the only programming language used for this project, writing with R Markdown was a natural choice. Beyond aesthetic reasons, and most importantly, R Markdown allows for the reproducibility of complex analyses performed with R. Reproducibility in this context refers to the ability to realize the same results as a given study, using the code and data from that study. By writing in R Markdown and providing the data used in this study, I hope to achieve this standard of reproducibility. To reproduce the results discussed here, visit the project's webpage and download the Thesis.rmd file provided there. All dependent data sets are read directly from this webpage by the Thesis.rmd file, so it is not necessary to download data in order to reproduce this analysis.
Visit the project's webpage to obtain the necessary materials.
My hope is that in communicating my research this way, I can invite input and feedback on the project that can move it forward in a productive way.
Introduction
The Recreational Water Quality Monitoring Program
As alluded to earlier, the primary goal of this project is to build a reliable predictive model for water quality in Charleston, SC. Such a model will be built from data obtained through Charleston Waterkeeper’s Recreational Water Quality Monitoring Program (RWQMP), a sampling program that is consistent with protocol stipulated by the South Carolina Department of Health and Environmental Control, as to preserve the compatibility of data collected by the two organizations [3]. The RWQMP measures the concentration of Enterococcus faecalis bacteria in units of most probable number of individuals per 100 mL of sample (MPN/100 mL) [3]. Samples are taken from fifteen different sampling sites both in and around the Charleston Harbor on a weekly basis during the months of May through October, hereafter referred to as the sampling season [3]. Along with the concentration of Enterococcus bacteria, inches of rainfall in the past 24 hours and tidal stage at the time of sampling are recorded for each sample. As such, Charleston Waterkeeper holds a substantial collection of water quality records from the sampling seasons of 2013, 2014, and 2015 that are ripe for statistical analysis, and in this case, predictive modeling. Using the packages ggmap and ggplot2, the latter of which will be the default plotting package used throughout this paper, the locations of each sampling site are shown in Figure 1.
(#tab:site.descriptions)Meaning of Each Site ID
ID Description
AR1 Ashley River
AR2 Brittlebank Park
CC1 Cove Creek
CH1 Melton Peter Demetre Park
CH2 College of Charleston Sailing
CH3 Battery Beach
FB1 Folly Beach Boat Landing
HC1 Hobcaw Creek
HC2 Hobcaw Creek
JC1 James Island Creek
JC2 James Island Creek
SC1 Shem Creek
SC2 Shem Creek
SC3 Shem Creek
WC1 Wappoo Cut Boat Ramp
Modeling Intuition
Sources from coastal ecosystems literature suggest that wetland watersheds often experience a degradation in water quality after significant rainfall events [4]. The suspected mechanism driving this phenomenon is storm water runoff from impervious land surfaces surrounding a body of water. Sample sites in areas with a high percentage of impervious surface coverage are expected to be more severely impacted by rainfall events than those with relatively permeable surrounding land. The variety of surrounding landscapes among the 15 sample sites can be seen in Figure 1. Thus sample site may be a meaningful variable for predicting Enterococcus concentrations.
Similarly, the tidal stage at the time of sampling is suspected to have an impact on the concentration of Enterococcus, with ebb tides tending to be associated with higher bacteria readings than flood tides. This suspicion is grounded more in experience working in the field and exploratory data analysis than ecological literature. However, one might imagine what is occurring during ebb and flood tide while studying Figure 1 and agree that the hypothesis is reasonable. To speak in generalizations, water from the inland rivers and creeks flows out of the harbor and into the ocean during an ebb tide. Compared to ocean water, this ebb tide is relatively “dirty”, in terms of the Enterococcus metric, as it has been subjected to storm water runoff from surrounding impervious surfaces. Conversely, the ocean water that fills the harbor during a flood tide has not been polluted with runoff and thus it dilutes any of the pollutants found in the more estuarine waters. Rationalizations such as this are not meant to conjecture ecological phenomena, but rather to serve as a sanity check for later in the analysis.
Types of Missingness
Before developing and analyzing imputation methods, it is important to first consider the mechanisms behind loss of data from a study. Gelman and Hill develop several classifications of missingness, each resulting from unique “missing mechanisms” [5]. This project deals with two of Gelman and Hill’s classes of missingness: missingness that depends on the missing value itself, and missingness that depends on unobserved predictors. Of the 969 observations in the 2013-2015 data set, 140 (~15%) are censored and 28 (~3%) are missing.
Missingness that depends on the missing value itself has been referred to as "censoring" thus far. Censoring is a sub-class of missingness, as the missingness itself is determined by whether or not the value is within a certain interval, which in this case is $$(0,10)$$. There is information to be gained from censored values and there is risk in treating them with haste. To see the risk in mishandling censored values, consider the three logical options for dealing with censored values in the context of predictive modeling:
1. Ignore censored values by removing them from the data set, then proceed with modeling these non-censored data.
2. Choose an arbitrary value within the range of missingness to replace all censored values with, say $$0$$ in this case.
3. Develop a protocol that attempts to gain all possible information that censored values have to offer, while avoiding bias and inefficiency.
An argument will now be presented as to why the first two of these three options are not ideal, motivating the development of possible methods that fit the criteria of option (3). For the purpose of demonstration, take a small subset of the full 2013-2015 data, specifically all samples taken at Folly Beach Boat Landing (FB1) during the 2015 sampling season.
fb1.2015 = wq.data[(site == "FB1" & grepl("2015",date)),]
Now, in order to make meaningful comparisons, the response variable, Enterococcus, will be replaced with values from a deterministic equation in precipitation, tide, and sample site. In other words, the values of precipitation, tide, and sample site present in fb1.2015 will deterministically generate artificial Enterococcus observations according to the following equation.
$\vec{Y}_i = {\bf X}_{i\times j}\vec{\beta}_j$ $i \in [1,25], \space j \in [1,4]$
Here $$\vec{Y}$$ is a vector of artificial observations, $${\bf X}$$ is a design matrix corresponding to values of the predictor variables, and $$\vec{\beta}$$ is a vector of parameter coefficients. Note that the columns of $${\bf X}$$ correspond to the equation’s intercept, the values of precipitation in the past 72 hours, a dummy variable for the 2-stage tide factor, and a dummy variable for sample site. It is somewhat redundant to include this dummy variable for sample site, since in this case the only sample site is FB1, but it will be kept to remain consistent with later conventions. Lastly, although a negative value for the response variable is meaningless in reality, Enterococcus will be allowed below $$0$$ for sake of argument.
$\mathbf{X} = \left[\begin{matrix} 1 & 0.14 & 0 & 1\\ 1 & 0.00 & 0 & 1\\ 1 & 1.01 & 1 & 1\\ \vdots & \vdots & \vdots & \vdots\\ 1 & 0.00 & 1 & 1 \end{matrix}\right]$
$\vec{\beta} = \left[\begin{array} {rrr} 10.00\\ 16.50\\ -1.75\\ -1.11 \end{array}\right]$
Once $${\bf X}$$ has been loaded into a matrix, and $$\vec{\beta}$$ into a vector, $$\vec{Y}$$ can be computed and used to replace the true Enterococcus observations. The top fifth of the resultant data frame is given below.
(#tab:sim.dat)Preview of Folly Beach Data from 2015
date site ent pre tide fix Y
499 10/28/2015 FB1 98 0.14 ebb fixed 11.200
512 10/21/2015 FB1 41 0.00 ebb fixed 8.890
527 10/14/2015 FB1 10 1.01 flood fixed 23.805
555 09/23/2015 FB1 63 0.00 ebb fixed 8.890
581 09/09/2015 FB1 63 1.13 ebb fixed 27.535
596 09/02/2015 FB1 10 3.46 flood fixed 64.230
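For concreteness, one way the matrix $${\bf X}$$ and vector $$\vec{\beta}$$ described above could be constructed in R is sketched below (this is an illustration rather than code from the thesis; it assumes the column names shown in the preview and that ebb is the reference level of the tide factor):

X <- model.matrix(~ pre + tide, data = fb1.2015)   # intercept, pre, tideflood
X <- cbind(X, site = 1)                            # dummy variable for the single sample site, FB1
beta <- c(10.00, 16.50, -1.75, -1.11)
fb1.2015$Y <- as.vector(X %*% beta)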
Now consider excluding all censored observations from the model. This would amount to grouping the data by value of $$\vec{Y}_i$$, specifically if $$\vec{Y}_i <10$$. This grouping can be accomplished with a single command.
fb1.2015.cens = fb1.2015[fb1.2015$Y > 10,]

From this abbreviated data frame, a model can be built to observe the effects of discarding censored observations. Information such as p-values and t-test statistics will be omitted, as they are meaningless when fitting a model to deterministic data.

cens.mod = glm(Y ~ pre + tide, data = fb1.2015.cens)

(#tab:censored.model.2)Coefficients When Discarding Censored Values
term estimate
(Intercept) 8.89
pre 16.50
tideflood -1.75

So, the model made from ignoring censored data correctly identified the underlying parameters of the generating model (note that the intercept coefficient is $$10.00 - 1.11 = 8.89$$, or the sum of the intercept and sample site coefficients in $$\vec{\beta}$$). However, this scenario is not ideal, as it has shielded the interval $$(0,10)$$ from being included in the model, leading to a model that would incorrectly predict ~15% of the data. To see why picking arbitrary values for censored observations should be avoided, take the data set formed by replacing all $$\vec{Y}_i <10$$ with $$0$$, and the resultant model.

fb1.2015.zero = fb1.2015
fb1.2015.zero[(fb1.2015.zero$Y < 10),7] = 0
zero.mod = glm(Y ~ pre + tide, data = fb1.2015.zero)
Table: Coefficients When Replacing With 0
term estimate
(Intercept) 3.308305
pre 19.609164
tideflood -2.110561
Hence, the parameter estimates of the model can be corrupted by replacing censored observations with arbitrary constant values from within the interval of censoring. With these two cautions in mind, it is sensible to search for other ways of dealing with censored data. A collection of such methods will be considered later in this paper.
Missingness that depends on unobserved predictors is the second, and much less prevalent, type of missingness present in these data. Gelman and Hill describe this type of missingness as that which results from variables not observed and thus not possible to include in the model. In this case, the unobserved variable responsible for missingness is the event of dangerous weather during the sampling window that prevents samples from being taken at some or all of the fifteen sample sites in a given week. As in the censored case, there are a few general courses of action to take with missing data. For the same reasons as presented above, ignoring or arbitrarily replacing missing values are approaches that should be avoided. An available method for dealing with missing observations is to build a model from all non-missing observations, and then use that model to predict missing observations based on their associated values of the predictor variables. This method has model-stabilizing characteristics, but does not yield any new information. A simple proof of this claim is as follows.
Consider the case of simple linear regression on a data set with $$n$$ total observations, of which $$m$$ are missing. The model made from all $$n-m$$ non-missing observations will be of the usual form $$E(y_i) = {\beta_0}+\beta_1x_i$$ for all $$1 \le i \le n-m$$, where $$\beta_0$$ and $$\beta_1$$ are estimated by minimizing
$SSE_0 = \sum_{i=1}^{n-m} (y_i - \hat{y_i})^2 = \sum_{i=1}^{n-m}(y_i-(\hat{\beta_0}+\hat{\beta_1}x_i))^2$ Solving the least squares equations for each of $$\hat{\beta_0}$$ and $$\hat{\beta_1}$$ gives
$\hat{\beta_0} = \bar{y}-\hat{\beta_1}\bar{x}$
$\hat{\beta_1} = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n-m} (x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n-m} (x_i-\bar{x})^2}$
Let $$\vec{y^*}$$ be the set of predictions $$y^*_j$$, for all $$1 \le j \le m$$, obtained from this model using the $$m$$ values of $$\vec{x}$$ associated with missing observations. Label these values of $$\vec{x}$$ as $$x^*_j$$ for all $$1\le j \le m$$. Explicitly, $$y^*_j = \hat{\beta_0}+\hat{\beta_1}x^*_j$$.
Now, the new model made from all $$n$$ observations will have parameter coefficients obtained from solving the least squares equations given by minimizing
$SSE_1 = \sum_{k=1}^{n}(y_k-(\hat{\beta_0}+\hat{\beta_1}x_k))^2$ $= \sum_{i=1}^{n-m}(y_i-(\hat{\beta_0}+\hat{\beta_1}x_i))^2+\sum_{j=1}^{m}(y_j^*-(\hat{\beta_0}+\hat{\beta_1}x_j^*))^2$ $= \sum_{i=1}^{n-m}(y_i-(\hat{\beta_0}+\hat{\beta_1}x_i))^2 + 0 = \sum_{i=1}^{n-m}(y_i-(\hat{\beta_0}+\hat{\beta_1}x_i))^2 = SSE_0$
Hence, at the original estimates $$(\hat{\beta_0},\hat{\beta_1})$$, the imputed points contribute nothing to the error sum of squares, so $$SSE_1 = SSE_0$$ and the parameter coefficient estimates obtained by minimizing the least squares equations for $$SSE_1$$ are identical to those obtained by minimizing the least squares equations for $$SSE_0$$. Imputing missing values in this manner would only serve to introduce data points that lie exactly on the regression line.
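A quick numerical illustration of this claim, using simulated data rather than the water quality data, is given below.
# Imputing missing responses with predictions from the observed-data fit
# leaves the least squares coefficient estimates unchanged.
set.seed(1)
x <- runif(30)
y <- 2 + 3 * x + rnorm(30, sd = 0.5)
miss <- sample(30, 5)                                    # rows treated as "missing"
fit0 <- lm(y[-miss] ~ x[-miss])                          # fit on observed rows only
y.imp <- y
y.imp[miss] <- coef(fit0)[1] + coef(fit0)[2] * x[miss]   # impute on the fitted line
fit1 <- lm(y.imp ~ x)                                    # refit on "completed" data
rbind(observed.only = coef(fit0), after.imputation = coef(fit1))  # identical rows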
Now that it has been established that care should be taken when imputing missing and censored values, the criteria for what makes some imputation methods better than others should be discussed.
Method Comparison Criteria
As the title suggests, a main goal of this paper is to compare the effects of different imputation methods on a final predictive model for Enterococcus. There are several criteria that could be used to compare methods, but the following metrics will be reported and discussed.
• Ability to faithfully reproduce model parameters: When a test data set is produced deterministically according to an equation with known parameters, and that test data set is artificially made to have various degrees of censoring and missingness, does the imputation method produce a data set upon which a model can be built that closely resembles the underlying deterministic equation?
• Predictive Bias: Does the resultant data set lead to inference that is, on average, overstating or understating reality? In the context of modeling, is it the case that the expected value of Enterococcus given by a certain model is equal to the true value of Enterococcus?
• Predictive Variance: What is the variance of the difference between the predictions given by models built from imputed data and the true values?
• Convergence properties: Does the algorithm converge to a stable state? If so, does it do so quickly or only after many iterations? Note that for imputation methods that rely on repeated random sampling from a given distribution, the issue of convergence is not applicable.
• Computational efficiency: Is the algorithm scalable to larger data sets?
This list is not exhaustive, but it will allow for the comparison of each imputation method.
Methodology
Initial Imputation
Before visualizing and exploring the data, a preliminary attempt at imputing missing and censored values of the response variable, Enterococcus, must be made. More details will be provided in a subsequent section of this paper, but a short description of this initial imputation method is warranted here. An estimate of censored observations can be obtained by sampling from a $$Uniform(0,10)$$ distribution. Missing values will be replaced with the overall mean of Enterococcus for that particular sample site and year. Once necessary imputations are performed, a “complete” data set can be saved and used for exploratory analysis.
First, examining the first five rows of the raw data saved as a data frame named wq.data:
Table 1: Preview of Full 2013-2015 Data
date site ent pre tide fix
10/30/2013 AR1 10 0.00 flood fixed
10/30/2013 AR2 213 1.46 ebb fixed
10/30/2013 CC1 161 1.46 ebb fixed
10/30/2013 CH1 135 1.46 ebb fixed
10/30/2013 CH2 10 0.02 flood fixed
10/30/2013 FB1 20 0.00 ebb fixed
Sub-setting the wq.data data frame into complete.set, censored.set, and missing.set.
complete.set = wq.data[(ent != "--" & ent != "<10"),]
censored.set = wq.data[(ent == "<10"),]
missing.set = wq.data[(ent == "--"),]
Now samples can be taken from the $$Uniform(0,10)$$ distribution.
cens.sample = ceiling(runif(length(censored.set[,1]),0,10))
censored.set[,3] = cens.sample
The task of computing appropriate means as specified above is slightly more tedious, so that code will be included in the appendix. Essentially, once data is grouped by sample site and sampling season using a similar approach as above, computing means is trivial. Finally, the separate complete.set, censored.set, and missing.set data frames can be combined back into wq.data.
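The following sketch indicates one way the site/season means could be computed and applied; it is illustrative only, and the extraction of the sampling year from the end of the date string is an assumption made here rather than the appendix code.
# Impute each missing Enterococcus value with the mean of the non-missing values
# observed at the same site in the same sampling year.
observed <- rbind(complete.set, censored.set)
obs.year <- substr(observed$date, nchar(observed$date) - 3, nchar(observed$date))
mis.year <- substr(missing.set$date, nchar(missing.set$date) - 3, nchar(missing.set$date))

for (i in seq_len(nrow(missing.set))) {
  same.group <- observed$site == missing.set$site[i] & obs.year == mis.year[i]
  missing.set$ent[i] <- mean(as.numeric(observed$ent[same.group]))
}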
wq.data = rbind(complete.set,censored.set,missing.set)
The data are now prepared for exploratory analysis.
Visual Exploration
As was mentioned in the introduction, a simple hypothesis made after working in the field is that Enterococcus behaves differently depending on the tidal stage at the time of sampling. Namely, Enterococcus concentrations are thought to be higher during ebb tides than flood tides. Using R and ggplot2, the conditional density curve of Enterococcus for each tidal stage can be visualized. The following plot is called a “violin” plot for its ability to sometimes resemble the outline of a violin. Violin plots are a succinct way to compare two or more density curves.
ggplot(wq.data,aes(tide,log(ent))) + geom_violin(aes(fill=tide)) + labs(x="Tidal Stage",y="ln(Ent.)")
Note that a natural logarithm transformation was applied to the response variable, Enterococcus. Due to its extreme variability, an untransformed Enterococcus variable cannot be easily visualized or modeled. The natural logarithm transformation will be used throughout this paper; unless otherwise noted, $$log(Enterococcus)$$ should be read as $$ln(Enterococcus)$$.
Another suspected phenomenon is that Enterococcus levels are expected to be higher at some sample sites than at others, all other factors adjusted for. A similar violin plot can be constructed for the density curve of Enterococcus at each sample site, as seen in Figure 3.
ggplot(wq.data,aes(site,log(ent))) + geom_violin(aes(fill=site)) + labs(x="Sample Site",y="ln(Ent.)")
Of course, whatever effect that tidal stage may have on Enterococcus is dependent on the sampling site. The converse is also true. Since less voluminous waters are more tidal by nature, it is reasonable to expect that the smaller creeks in the bevy of sample sites will be more dramatically impacted by tidal stage than those in more open waters. Combining Figures 2 and 3 gives the conditional density curves of Enterococcus for each combination of sample site and tidal stage shown in Figure 4.
ggplot(wq.data,aes(tide,log(ent))) + geom_violin(aes(fill=site)) + labs(x="Tidal Stage",y="ln(Ent.)")
Visualizing the relationship between $$ln(Ent.)$$ and precipitation is slightly more complicated, as it is best considered individually for each site. In Figure 5, $$ln(Ent.)$$ is plotted versus $$ln(precip.)$$ and faceted by sample site.
ggplot(wq.data,aes(log(pre),log(ent))) + geom_point(aes(color=tide)) + facet_wrap(~ site)
Data for precipitation 72 hours prior to sampling was obtained from a collection of 18 National Oceanographic and Atmospheric Administration (NOAA) gauges in the Charleston area. Charleston Waterkeeper records precipitation levels for the 24 hours prior to sampling, but this window was thought to not be large enough to encompass all of the rainfall events that may affect an Enterococcus measurement. Each of the 15 sample sites was paired with the closest of NOAA’s rain gauges, and the amount of precipitation in inches was retroactively appended to the data set. Unfortunately, there was not complete continuity in NOAA’s records, so it was not always possible to follow this rule. In the event of a missing precipitation record for the closest gauge to a given sample site, the next closest gauge was consulted instead. Due to the need to make a decision on a case by case basis about which rain gauge to consult, this process could not have been easily automated. Ideally, each nearest rain gauge would have continuous and complete temporal coverage over the timeline of this analysis, but that was sadly not the case. Figure 6 shows a map of the locations of NOAA’s rain gauges.
Generating Test Data
To compare the effects of each imputation method, it is important to know the characteristics of the data set being used. Thus, an artificial test data set must be constructed in R using a similar approach as in the Types of Missingness section of this paper. However, here the test data set will be made to resemble the full data set with all 969 observations. The function genTestData() will be defined to generate a test data frame. The function will take in parameters in.set for the input set to be replicated, cens.limit for the limit of left censoring, and percent.missing corresponding to the percent of observations in the test data set that will artificially be made missing. Refer to the appendix for the full code behind the function. The output of the function will be saved to test.df, which will act as the data set to test each of the following imputation methods on.
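To fix ideas, one possible shape of genTestData() is sketched below; the stand-in coefficient values, the use of the raw-data conventions "<10" and "--" to flag censored and missing rows, and other details are assumptions for illustration, and the appendix version differs.
# One possible shape of genTestData(); the appendix version differs in its details.
genTestData <- function(in.set, cens.limit, percent.missing) {
  out <- in.set
  X <- model.matrix(~ pre + tide + site, data = out)
  beta <- c(2.8, 1.5, -1.75, rep(1, ncol(X) - 3))   # stand-in "true" coefficients
  out$ent <- exp(as.vector(X %*% beta))             # deterministic Enterococcus values
  out$ent[out$ent < cens.limit] <- paste0("<", cens.limit)   # flag censored rows
  miss <- sample(nrow(out), round(nrow(out) * percent.missing / 100))
  out$ent[miss] <- "--"                             # flag artificially missing rows
  out
}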
test.df = genTestData(wq.data, 10, 5)
Method 1: A naive approach.
The first method was introduced briefly in the Initial Imputation subsection and will be encoded in the basicImputation() function given in the appendix. The general goal of this technique is to make a preliminary attempt at imputing missing and censored values, but to keep the strategy simple to allow for meaningful comparison to more sophisticated methods later on. Specifically, censored values will be replaced with a random sample from a $$Uniform(0,10)$$ distribution, and the ceiling function will be applied to ensure all values are in the domain of the natural logarithm function. Missing values in this method will be imputed with the overall mean of Enterococcus, after censored observations have been sampled. Once this is done, a model will be built from the now complete data set, and model parameters will be stored.
The model equation used here, and for each subsequent imputation method, will be as follows.
$E(ln(Ent.)) = \beta_0 + \beta_{precip}(precip)+\beta_{flood}I(flood)+\beta_{AR2}I(site=AR2)+...+\beta_{WC1}I(site=WC1)$
This procedure will be repeated N times, where N is passed through via parameter. The result will be an $$N\times17$$ data frame of model parameter estimates returned by the function. Refer to the appendix for the R code used to accomplish this.
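For orientation, a condensed sketch of what basicImputation() might look like is given below; it assumes censored and missing rows are flagged with "<10" and "--" as in the raw data, and the appendix version differs in its details.
basicImputation <- function(N, in.set) {
  coef.mat <- matrix(NA, nrow = N, ncol = 17)
  for (n in 1:N) {
    dat <- in.set
    cens <- dat$ent == "<10"
    miss <- dat$ent == "--"
    dat$ent[cens] <- ceiling(runif(sum(cens), 0, 10))   # Uniform(0,10) draws
    dat$ent <- suppressWarnings(as.numeric(dat$ent))    # "--" becomes NA
    dat$ent[miss] <- mean(dat$ent[!miss])               # overall mean imputation
    fit <- glm(log(ent) ~ pre + tide + site, data = dat)
    coef.mat[n, ] <- coef(fit)
  }
  as.data.frame(coef.mat)
}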
basic.df = basicImputation(100,test.df)
Method 2: Sampling from a Truncated Normal.
Here, a slightly more refined method will be employed, namely by introducing the Truncated Normal distribution to sample replacement values for censored observations. The truncated normal distribution is more tightly centered about its mean than the uniform distribution, so overall, the samples used to replace censored values will be expected to have a lower overall variability than in the previous method. Using the truncnorm package in R, and in particular the function rtruncnorm(), truncated normal random variables can be easily generated.
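For reference, a call to rtruncnorm() looks like the following; the mean and standard deviation shown here are placeholders, not the values used later by truncNormSampler().
# install.packages("truncnorm")   # if the package is not yet installed
library(truncnorm)

# Ten draws from a Normal(5, 2^2) distribution truncated to the interval (0, 10)
rtruncnorm(10, a = 0, b = 10, mean = 5, sd = 2)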
In pursuit of stable estimates of model parameters over many simulations, missing values will be imputed by the prediction given by a model built from all non-missing data, after censored observations have been sampled from a truncated normal distribution. It was shown previously in the Types of Missingness section that doing so has no effect on the estimate of model parameter coefficients, because no “new information” is obtained.
The algorithm for implementing this method will proceed similarly to Method 1. The data will be separated into a complete set, a censored set, and a missing set. For each of the N iterations, censored values will be sampled from a truncated normal distribution, and added back to the complete set. Then, a model will be built from all non-missing values and used to predict missing observations. Finally, these newly predicted values will be added back to the complete set and a final model will be built on this complete set. At each step of the simulation, estimates of model parameter coefficients will be stored. These steps are performed by the function truncNormSampler(), and the full code is given in the appendix.
The parameters accepted by truncNormSampler() are N for simulation size, in.set for the test data set, t.a for the left bound of truncation, t.b for the right bound of truncation, t.mean for the mean of the truncated normal, and t.sd for the standard deviation of the truncated normal. A $$N\times17$$ data frame of parameter coefficient estimates is returned by the function.
truncSamp.df <- truncNormSampler(100,test.df)
Method 3: Imputation from the expected value of a Truncated Normal
The third and final method will again involve the truncated normal distribution, but instead of randomly sampling from a truncated normal random variable, censored values will be replaced with its expected value. More precisely, if $$Y$$ is a random variable corresponding to the natural logarithm of Enterococcus, then the following assumptions are being made.
$Y \sim logNormal(\mu,\sigma^2) \implies ln(Y) \sim Normal(\mu,\sigma^2)$
Note $$ln(Y)$$ is the variable to be truncated below some constant $$a$$ and above some constant $$b$$. Ultimately, $$E[ln(Y)|a<ln(Y)<b]$$ is desired. Standardizing,
$E[ln(Y)|a<ln(Y)<b] = E[(\frac{ln(Y)-\mu}{\sigma}+\frac{\mu}{\sigma})\sigma | \frac{a-\mu}{\sigma}<\frac{ln(Y)-\mu}{\sigma}<\frac{b-\mu}{\sigma}].$ Letting $$Z$$ denote a standard normal random variable, $$\alpha = \frac{a-\mu}{\sigma}$$, and $$\beta=\frac{b-\mu}{\sigma}$$, this expected value can be simplified to
$\sigma E[Z|\alpha<Z<\beta]+\mu.$
From the definition of a truncated normal random variable,
$E[Z|\alpha<Z<\beta] = \int_{\alpha}^{\beta}z{\frac{\phi(z)}{\Phi(\beta)-\Phi(\alpha)}}dz = \frac{\int_{\alpha}^{\beta}z\phi(z)dz}{\Phi(\beta)-\Phi(\alpha)},$ where $$\phi(z)$$ and $$\Phi(z)$$ denote the probability density function and cumulative distribution function of $$Z$$ respectively. Considering the numerator of this last fraction,
$\int_{\alpha}^{\beta}z\phi(z)dz=\int_{\alpha}^{\beta}z\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^2}dz,$ and letting $$w = \frac{1}{2}z^2$$, the following observations can be made.
$dw = zdz$ $z=\alpha \implies w= \frac{1}{2}\alpha^2$ $z = \beta \implies w = \frac{1}{2}\beta^2$
As a result,
$\int_{\alpha}^{\beta}z\phi(z)dz=\int_{\frac{1}{2}\alpha^2}^{\frac{1}{2}\beta^2} \frac{1}{\sqrt{2\pi}}e^{-w}dw = \frac{1}{\sqrt{2\pi}}[e^{-\frac{1}{2}\alpha^2}-e^{-\frac{1}{2}\beta^2}] = \phi(\alpha)-\phi(\beta)$
Thus $$E[Z|\alpha<Z<\beta]= \frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)}$$, and finally
$E[ln(Y)|a<ln(Y)<b] = \mu+\sigma{\frac{\phi(\frac{a-\mu}{\sigma})-\phi(\frac{b-\mu}{\sigma})}{\Phi(\frac{b-\mu}{\sigma})-\Phi(\frac{a-\mu}{\sigma})}}.$
For the purposes of imputing censored values in this context, $$Y$$ will be restricted to be between $$0$$ and $$10$$ and therefore $$ln(Y)$$ will have support $$(-\infty,ln(10))$$. When letting $$a = -\infty$$ and $$b = ln(10)$$, the above equation simplifies to
$E(ln(Y)|ln(Y)<ln(10))=\mu-\sigma{\frac{\phi(\frac{ln(10)-\mu}{\sigma})}{\Phi(\frac{ln(10)-\mu}{\sigma})}}.$
To compute this expected value, estimates for $$\mu$$ and $$\sigma$$ must be obtained. The approach that will be taken here is to build a starter model from all non-censored and non-missing values, then use that starter model to obtain initial estimates for $$\mu$$ and $$\sigma$$. More formally, each censored observation $$Y_c$$, will be replaced by
$Y^* = \hat{\mu}-\hat{\sigma}{\frac{\phi(\frac{ln(10)-\hat{\mu}}{\hat{\sigma}})}{\Phi(\frac{ln(10)-\hat{\mu}}{\hat{\sigma}})}},$
where $$\hat{\mu}={\bf X}\hat{\vec{\beta}}$$, the prediction given by the starter model for the set of predictor variable values associated with $$Y_c$$, and $$\hat{\sigma}=\sqrt{\frac{\sum_i(\hat{Y_i}-Y_i)^2}{n^*}}$$, the root mean squared error of the starter model, where $$n^*$$ is the number of complete observations in the data set.
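In R, the replacement value in the display above can be computed with a small helper like the one below; the numeric arguments in the example call are placeholders, not estimates from the data.
# Expected value of a Normal(mu, sigma^2) variable truncated above at cut = ln(10)
trunc.exp.val <- function(mu.hat, sigma.hat, cut = log(10)) {
  z <- (cut - mu.hat) / sigma.hat
  mu.hat - sigma.hat * dnorm(z) / pnorm(z)
}

trunc.exp.val(mu.hat = 2.5, sigma.hat = 1.2)   # illustrative values only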
Missing values will then be predicted by the model made from all non-missing data, and then added back to the complete data set. Then a final model will be built from this full data set, and the models parameter coefficient estimates will be stored. The function truncNormExpVal() performs these steps, and full code for the function is included in the appendix. Only the parameters N for simulation size, and in.set for the test data set are needed.
truncExp.df <- truncNormExpVal(100,test.df)
Modeling
Once the results from each imputation algorithm are compared, the modeling phase of this project may begin. Of the many possibilities for model choices, two types of models will be pursued: those that model $$ln(Enterococcus)$$ as a continuous numeric random variable, and those that model the probability of Enterococcus exceeding the state-mandated safety threshold for recreational waters.
Continuous numeric outcome: A general linear model will be used to model $$ln(Enterococcus)$$ in this case. The standard assumptions that the model errors, $$\epsilon_i \stackrel{iid}{\sim} Normal(0,\sigma^2)$$ will be checked and attention will be paid to the significance of each parameter coefficient estimate as well as the coefficient of determination for the model.
Binary categorical outcome: Here, logistic regression will be used to estimate the probability of Enterococcus exceeding a certain threshold. Such an estimate may be favorable over the continuous numeric case, as it has easy interpretability in terms of the risk of coming in contact with water from a certain waterway under a given set of conditions. The assumption that each $$Y_i \stackrel{ind}{\sim} Binomial(n_i,\pi_i)$$ will be evaluated as well.
The results of each modeling attempt will be analyzed and used to determine what the best model of Enteroccocus concentrations is for the purposes of this study.
Results
Since each of the three methods considered is an iterative process, and parameter estimates were obtained at each step of the iteration, one way to track the behavior of each method is to plot the estimate of each model parameter at each $$n=1,2,...,N$$. The following plots display the estimate of each model coefficient at each step of the simulation. By themselves, these plots are not very meaningful, but they become informative when viewed in conjunction with the criteria established in the Method Comparison Criteria section and the true parameter coefficient values used in the genTestData() function.
Figure: Method 3 simulation results (parameter coefficient estimates at each step of the simulation).
These graphical representations of the simulations performed for each imputation method can be supplemented with numerical results as well. Specifically, the absolute difference between the estimates of model parameter coefficients and true coefficients for each imputation method are documented. What follows is a summary containing the sum of these coefficient errors, the predictive bias, and the predictive variance.
Table 2: Coefficient Estimates of Each Imputation Method
Coeff. True M1 Med. M1 Err. M2 Med. M2 Err. M3 Final M3 Err.
(Intercept) 2.8 3.804 1.004 2.816 0.016 2.718 0.082
pre 1.5 1.401 0.099 1.502 0.002 1.629 0.129
flood -1.75 0.118 1.868 0.002 1.752 0.013 1.763
AR2 1.6 1.191 0.409 1.584 0.016 1.574 0.026
CC1 -0.1 -0.728 0.628 -0.107 0.007 -0.066 0.034
CH1 0.1 -0.123 0.223 0.105 0.005 0.131 0.031
CH2 -0.95 -2.042 1.092 -1.137 0.187 -4.936 3.986
CH3 0.14 -0.318 0.458 0.133 0.007 0.162 0.022
FB1 -1.11 -2.243 1.133 -1.185 0.075 -4.139 3.029
HC1 -0.04 -0.79 0.75 -0.046 0.006 -0.032 0.008
HC2 1.34 1.455 0.115 1.325 0.015 1.33 0.01
JC1 1.5 1.04 0.46 1.484 0.016 1.466 0.034
JC2 2.7 2.01 0.69 2.682 0.018 2.645 0.055
SC1 0.56 0.045 0.515 0.549 0.011 0.549 0.011
SC2 1.9 1.378 0.522 1.882 0.018 1.872 0.028
SC3 2.8 2.447 0.353 2.783 0.017 2.781 0.019
WC1 0.8 0.815 0.015 0.788 0.012 0.763 0.037
Table 2: Summary of Imputation Methods
Total Abs. Coeff. Error Predictive Bias Predictive Variance
Method 1 10.334 0.448 0.135
Method 2 2.180 -0.010 0.002
Method 3 9.304 -0.492 1.567
The above results suggest that of the three imputation methods considered, Method 2 is best able to faithfully replicate underlying model parameters, while minimizing predictive bias and variance. Therefore Method 2 will be used to perform imputation of missing and censored values before modeling.
model.df <- truncNormSampler(1,wq.data,ret = "data")
Modeling
Now that imputation has been performed according to Method 2, a model of $$ln(Ent.)$$ as a continuous numeric random variable in terms of precipitation in the past 72 hours, tidal stage, and sample site can be built.
model.glm = glm(log(ent) ~ pre + tide + site, data = model.df)
Below is a summary of the model characteristics for model.glm as well as appropriate plots for verifying regression assumptions.
library(ggfortify)
autoplot(model.glm)
It can be seen from Figure 8 that while the spread of the residuals divided by standardized residuals remains roughly constant across values of the precipitation variable, there is a downward pattern that may suggest trends affecting the relationship between precipitation and $$ln(Enterococcus)$$ that are not explained by the model.
res.glm = model.glm$residuals
stdresid.glm = rstandard(model.glm)
stdres.vs.precip = as.data.frame(cbind((res.glm/stdresid.glm), model.glm$data$pre))
colnames(stdres.vs.precip) = c("Res.StdRes", "Precip")
ggplot(data = stdres.vs.precip, aes(y = Res.StdRes, x = Precip)) + geom_point() +
  ylab("Residuals/Standardized Residuals") + xlab("Precipitation") + geom_smooth()
kable(summary(model.glm)$coeff, digits = 3, caption = "GLM Results")
Table 3: GLM Results
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.324 0.135 24.579 0.000
pre 0.708 0.036 19.889 0.000
tideflood -1.066 0.067 -15.825 0.000
siteAR2 1.016 0.175 5.814 0.000
siteCC1 0.112 0.176 0.636 0.525
siteCH1 0.021 0.174 0.120 0.904
siteCH2 -0.272 0.182 -1.499 0.134
siteCH3 0.115 0.174 0.657 0.511
siteFB1 -0.426 0.174 -2.441 0.015
siteHC1 0.162 0.196 0.829 0.407
siteHC2 0.937 0.189 4.965 0.000
siteJC1 0.946 0.175 5.402 0.000
siteJC2 2.044 0.176 11.580 0.000
siteSC1 0.538 0.175 3.078 0.002
siteSC2 1.068 0.175 6.110 0.000
siteSC3 2.139 0.176 12.149 0.000
siteWC1 0.460 0.188 2.445 0.015
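As an illustration of how the fitted model can be used, the snippet below predicts expected $$ln(Ent.)$$ for a hypothetical set of conditions and back-transforms it to the original scale (ignoring the lognormal bias correction); the chosen values are placeholders, not observations from the data.
# Expected Enterococcus at James Island Creek (JC2) after 2 inches of rain on an ebb tide
new.obs <- data.frame(pre = 2, tide = "ebb", site = "JC2")
exp(predict(model.glm, newdata = new.obs))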
Also of interest is a model relating the exceedance of Enterococcus beyond a certain predetermined safety threshold. A binary variable for whether or not Enterococcus exceeds this safety limit can be retroactively created and modeled in terms of the original data set.
safety.lim = 40
exceed = as.numeric(model.df[, 3] > safety.lim)
model.logist = glm(exceed ~ pre + tide + site, data = model.df,
family = binomial())
Finally, a summary of the model characteristics for model.logist and regression assumption diagnostic plots are given below.
kable(summary(model.logist)\$coeff, digits = 3, caption = "Logistic Regression Results")
Table: Logistic Regression Results
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.469 0.347 -1.351 0.177
pre 1.383 0.162 8.553 0.000
tideflood -1.921 0.188 -10.226 0.000
siteAR2 1.803 0.452 3.988 0.000
siteCC1 -0.171 0.467 -0.366 0.714
siteCH1 -0.143 0.468 -0.305 0.761
siteCH2 -0.212 0.499 -0.424 0.671
siteCH3 0.155 0.458 0.339 0.735
siteFB1 -0.762 0.492 -1.547 0.122
siteHC1 0.530 0.505 1.049 0.294
siteHC2 2.030 0.485 4.186 0.000
siteJC1 1.934 0.463 4.179 0.000
siteJC2 4.080 0.810 5.039 0.000
siteSC1 0.868 0.444 1.956 0.050
siteSC2 1.877 0.457 4.110 0.000
siteSC3 3.595 0.637 5.647 0.000
siteWC1 0.902 0.474 1.904 0.057
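The fitted logistic model can be queried in the same way; the example below estimates the probability of exceeding the safety limit under a hypothetical set of conditions (the values shown are placeholders).
# Estimated probability that Enterococcus exceeds the safety limit at Shem Creek (SC3)
# after 1.5 inches of rain on an ebb tide
new.obs <- data.frame(pre = 1.5, tide = "ebb", site = "SC3")
predict(model.logist, newdata = new.obs, type = "response")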
Conclusions
Imputation Methods
While there were noticeable differences in the results produced by each imputation method, all three methods performed fairly well. Of the three methods considered, Method 2 performed best in terms of minimizing the overall absolute value errors of the parameter coefficient estimates, the predictive bias, and the predictive variance of the resultant model. In methods 1 and 2, the issue of convergence was not relevant, since both rely on random sampling at each iteration of the simulation. Method 3 did converge to a stable set of model parameter coefficient estimates after roughly $$N=10$$ simulations, but convergence was not considered as important as the first three criteria mentioned. Method 2 was therefore chosen to perform imputation in preparation for the modeling phase.
Modeling
Modeling $$ln(Enterococcus)$$ as a continuous numeric random variable in terms of precipitation, tidal stage, and sample site was achieved with adequate success. Splitting the 15-level sample site factor variable into 14 dummy variables for this model may have been the reason that four of them were not significant at any reasonable significance level. However, the main variables of interest, precipitation and tide, were highly significant. In terms of regression assumptions, this model did not display any flagrant violations of the distributional assumptions for the model errors. Even after the natural logarithm transformation, there were still outliers in the distribution of water quality - a testament to the high variability of Enterococcus in the field. These outliers were not influential enough to severely alter the model, so it was not necessary to consider removing them from this analysis.
The results from the modeling component of this project suggest that overall, increased precipitation in the past 72 hours has a detrimental effect on water quality around Charleston. Similarly, ebb tides tend to feature higher bacteria levels than flood tides. Both the precipitation and tidal stage variables were highly significant in the general linear model. The sample sites that are the most prone to high bacteria counts, such as those in Shem Creek and James Island Creek, have significant coefficient estimates in the general linear model, suggesting that these sites have significantly higher bacteria levels on average. Sample sites that do not have significant coefficients in the general linear model are those that also rarely feature dangerously high bacteria levels. The model suggests that these sites, such as Hobcaw Creek and Charleston Harbor, are not expected to have significantly different bacteria levels, adjusting for precipitation and tidal stage.
Compared with the continuous numeric model, using logistic regression to model the exceedance of Enterococcus beyond a certain safety limit was not as successful. In the logistic model, there were several non-significant coefficient estimates, suggesting the model’s credibility may be doubtful. In light of this and the fact that it is generally more desirable to be able to predict Enterococcus as a continuous numeric variable than a binary one, it is not recommended that the logistic model be used in practice to model Enterococcus concentrations. This leaves the continuous numeric model as the chosen model for future use.
Discussion
Critiques
There are several areas in which this project could be improved. Below is a brief list of critiques that are acknowledged.
1. This project relied on data from the 2013-2015 sampling seasons. Data for the 2016 season were available at the time of this writing, but were not included because work on this project began before these most recent observations were released. In the interest of time, the analyses presented here were not redone to incorporate 2016 data, but this could be a future area of improvement.
2. The process of associating NOAA rain gauges with Charleston Waterkeeper sample sites, as described in the Visual Exploration section, invited the possibility of human error. Ideally, there would be a more standardized way to assign precipitation measurements to Enterococcus observations.
3. While the chosen model for Enterococcus assumed a linear relation between $$ln(Ent.)$$ and precipitation, controlling for both tidal stage and sample site, there were some instances of slight violations of this assumption, as evidenced by Figure 5.
4. Decisions regarding the procedural details of each imputation method were more grounded in intuition than statistical literature. If time had permitted, the literature could have been more thoroughly mined for studies that considered similar methods.
5. This project was almost entirely reliant upon R programming. As with most extensive programming endeavors, there are ample opportunities for improving efficiency and eliminating any bugs that may not be immediately visible. Serious effort was addressed to both of these concerns, as well as to the desire to make this project fully reproducible by using R Markdown and posting code and data to the website for this project.
6. Because of extreme weather events contained in the 2013-2015 records, such as the historic rain that occurred in October of 2015, there are outliers in both the $$Enterococcus$$ and precipitation variables. The final general linear model would benefit from outlier analysis and possibly outlier removal, since the intention of the model is not to predict bacteria counts under such extreme and rare conditions.
In light of these drawbacks, the final model built during this project can be immediately useful to both Charleston Waterkeeper and those who would like more information about the expected safety of recreational sites in the Charleston area under certain conditions. Additionally, there is a possibility that the imputation methods developed and analyzed here may be applicable to future studies.
The last component of this project was to encode the results from this paper into an interactive web application that can be used to predict the hazard associated with recreation at each of the fifteen sample sites in this study based on a certain set of environmental conditions specified by inches of rainfall in the past 72 hours and tidal stage at the time of recreation. Since this project will be finished in time for the beginning of the 2017 sampling season, the interactive water quality model that came from this analysis can be used by Charleston Waterkeeper to better inform the public about expected risks from recreation. To access the interactive water quality model, visit the project website, https://carter-allen.github.io, listed in the Appendices.
References
1. The Riverkeepers: Two Activists Fight to Reclaim Our Environment as a Basic Human Right by John Cronin and Robert F. Kennedy Jr.
2. R Studio. rmarkdown.rstudio.com.
3. Carmack, Cheryl & Wunderly, Andrew. Charleston Waterkeeper. Recreational Water Quality Monitoring Program Quality Assurance Project Plan. 2015.
4. Holland, A.Frederick, Denise M. Sanger, Christopher P. Gawle, Scott B. Lerberg, Marielis Sexto Santiago, George H.m Riekerk, Lynn E. Zimmerman, and Geoffrey I. Scott. “Linkages between Tidal Creek Ecosystems and the Landscape and Demographic Attributes of Their Watersheds.” Journal of Experimental Marine Biology and Ecology 298.2 (2004): 151-78. Web.
5. Gelman, Andrew, and Jennifer Hill. Data Analysis Using Regression and Multilevel/hierarchical Models. 1st ed. Cambridge: Cambridge UP, 2007. Print.
This research was supported financially by the University of South Carolina’s Magellan Scholars grant. Thank you to the University for this support and to the Magellan Scholars program for continued feedback and advice.
Appendices
Please refer to https://carter-allen.github.io for supporting materials and the interactive application built with the modeling results from this project. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6654908061027527, "perplexity": 1180.7947970651892}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00090.warc.gz"} |
https://www.physicsforums.com/threads/whats-the-general-solution-to-this-ivp.798503/ | # What's the General Solution to This IVP?
1. Feb 17, 2015
### logan3
$\frac {dy}{dt} = y^3 + t^2, y(0) = 0$
My teacher said this IVP couldn't be expressed in terms of functions we commonly know. I was wondering what the general solution is?
Thank-you
2. Feb 18, 2015
### Raghav Gupta
For solving this:
It is a Bernoulli form.
Watch this video and you will see how to solve it.
3. Feb 19, 2015
### LCKurtz
$\frac {dy}{dt} = y^3 + t^2, y(0) = 0$ is not of the form $y' + f(t)y = g(t)y^n$ so it is not the Bernoulli equation.
4. Feb 19, 2015
### Raghav Gupta
Yes, it's not Bernoulli form, sorry. I don't know how to solve it, but WolframAlpha was showing the simple answer x = 1 for y = 0.
I don't know how it solved that.
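Although a closed-form solution is not available, the IVP can still be studied numerically. A minimal sketch in R with the deSolve package (the package choice is an assumption, not part of the thread):
library(deSolve)

# dy/dt = y^3 + t^2 with y(0) = 0, integrated over a short interval
rhs <- function(t, y, parms) list(y^3 + t^2)
sol <- ode(y = c(y = 0), times = seq(0, 1, by = 0.01), func = rhs, parms = NULL)
tail(sol, 3)  # the solution grows slowly near t = 0, then steepens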
Similar Discussions: What's the General Solution to This IVP? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925064206123352, "perplexity": 1993.7650332063545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00544.warc.gz"} |
https://gmatclub.com/forum/what-is-the-remainder-when-7-100-is-divided-by-266392.html |
# What is the remainder when 7^100 is divided by 50?
Math Revolution GMAT Instructor
What is the remainder when 7^100 is divided by 50? [#permalink]
24 May 2018, 18:29
[GMAT math practice question]
What is the remainder when $$7^{100}$$ is divided by $$50$$?
$$A. 0$$
$$B. 1$$
$$C. 7$$
$$D. 21$$
$$E. 49$$
##### General Discussion
RC Moderator
Re: What is the remainder when 7^100 is divided by 50? [#permalink]
24 May 2018, 19:03
MathRevolution wrote:
[GMAT math practice question]
What is the remainder when $$7^{100}$$ is divided by $$50$$?
$$A. 0$$
$$B. 1$$
$$C. 7$$
$$D. 21$$
$$E. 49$$
This question is essentially asking for the last two digits of $$7^{100}$$: whatever the higher digits are, the remainder on division by 50 depends only on the last two digits, since any multiple of 100 is divisible by 50.
Now the cyclicity of 7 is 4, and the units digits cycle through 7, 9, 3, 1, so the units digit of $$7^{100}$$ is 1 because $$100 = 25*4$$.
Now $$7^{4}$$ = 7*7*7*7 = 2401.
And the expression is $$2401^{25}$$; a number ending in 01 still ends in 01 after being raised to any positive power (this can be tested quickly with $$101^{2}$$ and $$101^{3}$$).
Hence the remainder is 1, so I would go for option B.
Manager
Re: What is the remainder when 7^100 is divided by 50? [#permalink]
24 May 2018, 19:04
Option B
$$7^1$$ = 7
$$7^2$$ = 49
$$7^3$$ = 343
$$7^4$$ = 2401
$$7^5$$ = 16807
$$7^6$$ = 117649
$$7^7$$ = 823543
$$7^8$$ = 5764801
...
So, 7^100 is something that ends with 01.
The remainder is 1.
Math Revolution GMAT Instructor
Re: What is the remainder when 7^100 is divided by 50? [#permalink]
27 May 2018, 18:21
=>
The remainder when $$7^{100}$$ is divided by $$50$$ depends only on the units and tens digits.
The units digits of $$7^n$$ cycle through the four values $$7, 9, 3$$, and $$1$$.
The tens digits of $$7^n$$ cycle through the four values $$0, 4, 4$$, and $$0$$.
We have the following sequence of units and tens digits for $$7^n$$:
$$7^1 = 7$$ (last two digits 07)
$$7^2 = 49$$ (last two digits 49)
$$7^3 = 343$$ (last two digits 43)
$$7^4 = 2401$$ (last two digits 01)
$$7^5 = 16807$$ (last two digits 07)
So, $$7^{100} = (7^4)^{25}$$ has the same units and tens digits as $$7^4$$, that is, $$01$$.
Thus, the remainder when $$7^{100}$$ is divided by $$50$$ is $$1$$.
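As a quick computational cross-check (not part of the original solution), the same remainder can be verified in R using repeated multiplication modulo 50, which avoids large integers:
r <- 1
for (i in 1:100) r <- (r * 7) %% 50
r
#> [1] 1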
Target Test Prep Representative
Re: What is the remainder when 7^100 is divided by 50? [#permalink]
29 May 2018, 09:24
MathRevolution wrote:
[GMAT math practice question]
What is the remainder when $$7^{100}$$ is divided by $$50$$?
$$A. 0$$
$$B. 1$$
$$C. 7$$
$$D. 21$$
$$E. 49$$
We see that 7^2 = 49, which is 50 - 1. Although 49/50 = 0 R 49, rather than using the remainder of 49, let’s call the remainder “-1”.
Since 7^100 = (7^2)^50 = 49^50, and 49 behaves like -1 with respect to division by 50, 7^100 behaves like (-1)^50 = 1. Since 1 divided by 50 leaves a remainder of 1, the remainder is 1.
Senior Manager
What is the remainder when 7^100 is divided by 50? [#permalink]
27 Aug 2018, 06:43
$$7^{100}/50=49^{50}/50=(50-1)^{50}/50$$ - in the binomial expansion of $$(50-1)^{50}$$, every term except $$(-1)^{50} = 1^{50}=1$$ contains a factor of 50, so only that final 1 is not divisible by 50. The remainder is 1.
Manager
Re: What is the remainder when 7^100 is divided by 50? [#permalink]
23 Sep 2018, 05:00
ScottTargetTestPrep wrote:
MathRevolution wrote:
[GMAT math practice question]
What is the remainder when $$7^{100}$$ is divided by $$50$$?
$$A. 0$$
$$B. 1$$
$$C. 7$$
$$D. 21$$
$$E. 49$$
We see that 7^2 = 49, which is 50 - 1. Although 49/50 = 0 R 49, rather than using the remainder of 49, let’s call the remainder “-1”.
Since 7^100 = (7^2)^50 = 49^50, which is equivalent to (-1)^50 when it’s divided by 50, and since (-1)^50 = 1, so when (-1)^50 is divided by 50, the remainder is 1.
Hi,
Thanks for this solution, but I have a doubt: the question doesn't say that there is an exponent of 50, so how can we take (-1)^50?
Regards,
Kishlay
Display posts from previous: Sort by | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5570899844169617, "perplexity": 4302.352754725674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256763.42/warc/CC-MAIN-20190522043027-20190522065027-00226.warc.gz"} |
https://ctan.org/ctan-ann/id/mailman.1754.1475869144.1426.ctan-ann@ctan.org | New on CTAN: spalign
Date: October 7, 2016 8:39:05 PM CEST
Joseph Rabinoff submitted the spalign package. Version number: 2016-10-05 License type: lppl1.3 Summary description: Typeset matrices and arrays with spaces and semicolons as delimiters. Announcement text:
The purpose of this package is to decrease the number of keystrokes needed to typeset small amounts of aligned material (matrices, arrays, etc.). It provides a facility for typing alignment environments and macros with spaces as the alignment delimiter and semicolons (by default) as the end-of-row indicator. For instance, typeset a matrix using \spalignmat{1 12 -3; 24 -2 2; 0 0 1}, or a vector using \spalignvector{22 \frac{1}{2} -14}. This package also contains utility macros for typesetting augmented matrices, vectors, arrays, systems of equations, and more, and is easily extendable to other situations that use alignments. People who have to typeset a large number of matrices (like linear algebra teachers) should find this package to be a real time saver.
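For illustration, a minimal document using the macros quoted above might look like the following; this is an untested sketch based only on the announcement, so consult the package documentation for the full interface.
\documentclass{article}
\usepackage{spalign}
\begin{document}
\[
  A = \spalignmat{1 12 -3; 24 -2 2; 0 0 1}
  \qquad
  \vec{v} = \spalignvector{22 \frac{1}{2} -14}
\]
\end{document}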
This package is located at http://mirror.ctan.org/macros/latex/contrib/spalign More information is at http://www.ctan.org/pkg/spalign We are supported by the TeX User Groups. Please join a users group; see http://www.tug.org/usergroups.html .
Thanks for the upload. For the CTAN Team Ina Dau
spalign – Typeset matrices and arrays with spaces and semicolons as delimiters
Package spalign Version 2016-10-05 Copyright 2016 Joseph Rabinoff Maintainer Joseph Rabinoff
more | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9753604531288147, "perplexity": 5232.148844103316}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250620381.59/warc/CC-MAIN-20200124130719-20200124155719-00358.warc.gz"} |
http://shacktoms.org/perplexed/Local_Reality_vs_Bell-mathjax.html | # Local Reality vs Bell's Theorem
Shack Toms
Updated: 2009-06-27, MathJax version: 2011-02-14
1. Reality
Before we suggest an experimental way to investigate "reality", it is necessary to have an objective (i.e. independently testable) definition of that term. We might call this "objective reality", except that science is limited to the objective, so here it is simply called "reality". The motivation for "reality" is that there is some substance or substances that exist apart from the processes of our observations, and these substances give rise to the phenomena we observe. Furthermore, note that our idea of such substances is based on the reliability of the observations we make. An independently testable notion of these substances can only be based on the results of measurement of physical attributes.
So the question of "reality" comes down to the question of whether physical attributes have defined values apart from our measurements of those attributes. For example, if your notion of objective reality involves a substance called "matter", then your belief in this substance will be based on observable physical attributes which are held to arise from matter. So, if matter exists independent of observation, then these attributes should have definite values even when matter is not being observed.
So, we may define "reality" (for the purposes of scientific inquiry) as the extent to which physical properties have defined values apart from observation.
One may object that we can never know whether physical properties have defined values apart from observation. It turns out, however, that we might be able to find some observable consequence of there being definite values before a measurement takes place—this is a tricky proposition, which is why it has only been done recently, and only with additional assumptions.
2. Locality
"Locality" is the idea that effects can only arise from local (i.e. nearby) causes. For example, you might react to something you see, but that is only because light has traveled from the remote object to your eye. The effect of your response arises from the local cause of this nearby light. Under "locality" if there were no mechanism for the remote information to be available to you, then you wouldn't be able to respond to it.
A premise of the (very successful) theories of Relativity is that there is a limit to how fast information can travel. The limit is exactly 299,792,458 meters per second; the length of the meter is actually defined in terms of the second so that this value is exact. It turns out that light, in a vacuum, travels at this speed, so the speed is usually called the speed of light, and given the symbol "c". When this is taken together with the principle of "locality", it implies that in order for a remote cause to have a local effect, there must have been sufficient time for the information to have traveled from the remote cause to the local effect.
The alternative to "locality" is sometimes called "non-locality", it is also known as "action at a distance".
3. Local realism
The combination of "locality" with "reality" is sometimes called "local realism".
The materialist view is that reality as we observe it is based on divisible substances which interact according to local causes. However, this view is inconsistent with Quantum Mechanics (in ways I'll discuss below). This led Einstein to be very skeptical that Quantum Mechanics would hold up to certain kinds of experiments.
"I cannot seriously believe in quantum theory because it cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance." - Albert Einstein
4. Uncertainty.
The "problem" really begins with Quantum Mechanics. After some refinement, Quantum Mechanics started to conform extremely well with some otherwise very puzzling experimental results. So it became hard to reject the theory even if it leads to very bizarre conclusions—the bizarre conclusions, where they could be tested, kept getting confirmed by the experimental evidence, which added to the list of things that Quantum Mechanics could predict but alternative theories could not. So part of the problem is that Quantum Mechanics is a very successful theory. The other part of the problem is that Quantum Mechanics leads to very bizarre conclusions.
For one thing (seemingly not so bizarre, at first), most of the predictions of Quantum Mechanics are probabilistic. There are properties whose values cannot be fully predicted by the theory—the theory just gives you a probability distribution so you know which values are likely and which are unlikely but there is no single, certain answer.
Let me get more specific. We might have something like an electron, but the theory may not be able to tell us exactly where it is, or how fast it is moving (it usually cannot answer either of these questions precisely and it can never answer both of those questions precisely at the same time). So we say there is uncertainty in its position (and also in its momentum).
When you measure these properties, you always get a definite answer, but the theory cannot predict exactly what that answer will be. The problem is uncertainty, not any inability to do a precise measurement.
(an aside: Note that this is the same kind of uncertainty that has been popularized in the famous Heisenberg Uncertainty Principle—that principle is a theorem that relates pairs of uncertainties in things called conjugate properties. But uncertainty is found in many places in Quantum Mechanics, not just in the Heisenberg Uncertainty Principle.)
5. Hidden variables.
Initially, there was a lot of concern that quantum mechanics seemed to be such a good (i.e. highly predictive) theory, but so incomplete. The great question was why the exact values of these particular properties should be uncertain within the theory. The exact values were assumed to exist in reality, but for some reason the theory could not precisely predict the result of a measurement.
They were thus called "hidden variables", and there was a lot of interest in why they were hidden, how they were hidden, and if there might be any indirect way to predict them.
The definition of "hidden variables" is equivalent to the definition of "reality" we gave above.
6. Maybe there are no hidden variables.
This is going to seem weird, and the only reason I bring it up is that this turns out to be the correct view (in mainstream physics).
Actually, it was worse than I put it above. Quantum mechanics isn't simply unable to predict the values of certain properties, it actually turns out to be inconsistent with these properties having precise values in reality, apart from a measurement. So either Quantum Mechanics is wrong, or there are no hidden variables. I'll get to why hidden variables are inconsistent with Quantum Mechanics below.
But in Quantum Mechanics, it has to be that a property (such as the position of the electron), under most circumstances, simply does not have an exact value until it is measured (even though the measurement can be very precise).
If you are having trouble picturing this, you are in good company. It is hard to picture it. It is even hard to know fully what it means for something like the position of the electron to not have an exact value. How to picture it is something that nobody has a really good answer to, as far as I know. There are ways to picture some of these things, which are helpful, but there is always something wrong with the picture.
7. Conservation laws.
Some properties in physics act as though they are substances in the sense that the total of those properties remains constant in any interaction. Such properties may be thought of as "quantities" and we say that those quantities are "conserved".
It isn't that there are necessarily any substances involved, it is more by analogy that the term arises. What it means for a quantity to be conserved is that you always end up with the same total that you started with.
For example, momentum is a conserved quantity (and not generally thought of intuitively as a substance). If you push on a freely movable object (no friction) with a given force for a given amount of time, you will give it a certain momentum in the direction of your push. To stop the object you will have to cancel that momentum by pushing in the other direction. Momentum is always conserved, even in the case with friction, but if there are other forces, such as frictional forces, the change in momentum is determined by the net force rather than just the force you applied with your push.
If you try to create momentum by pushing something in one direction, it will push back with an equal and opposite force, so all the momentum you create in the object will be matched by an equal and opposite momentum in the other direction (on you, and on the Earth if you are standing on the Earth). Usually you don't notice this other momentum because it takes a lot of momentum to produce a noticeable change in the velocity of the Earth. But when a car accelerates from a stop light, the car is pushed in one direction and the surface of the Earth is pushed in the other. Equal and opposite forces. Equal and opposite changes in momentum. The change in velocity of the car is much greater because it is much less massive.
In any case, the important thing here to keep in mind about the strict conservation laws, such as conservation of momentum, is that they are exact. Never is there the slightest observed violation.
So one of the very confusing things about uncertainty is that, even though the values of certain properties of individual particles are uncertain, the total of all of these values may be precisely known, if there is a conservation law involved.
Let me restate this in its full oddness, just to make sure it is clear how strange this is. If we have two electrons, their individual momentums (momenta, I guess) may be uncertain, and yet we know that if we do measure the momenta they will add to an exactly known value. Uncertainty in the parts, but not in the sum of the parts.
8. Measurement error versus fundamental uncertainty
Because of the conservation laws, there is a problem with considering the uncertainty in quantum mechanics to be caused by the process of making the measurement. That is, if the problem is that the measurement itself is disturbing the system, in a random way, then why does the total of these measurements on an ensemble of particles add up to a precisely predictable value?
In this kind of case, where the properties of multiple particles are individually uncertain, but correlated (usually by a conservation law), the particles are said to be "entangled".
Einstein, along with Podolsky and Rosen, proposed an experiment, called the EPR Paradox, which was intended to show that Quantum Mechanics was wrong because the idea of maintaining conservation laws through entanglement was inconsistent with local realism. Unfortunately for Einstein, experiments showed that the conservation laws were upheld, seemingly in violation of local realism. This is an example of why Quantum Mechanics is such a strong theory. Here Einstein proposed a result that was predicted by Quantum Mechanics but which was "obviously" untenable. And experiment confirmed the surprising predictions of Quantum Mechanics.
So there was something of a search for a way to reconcile local realism with the observed results.
9. Spin (not the political kind).
Many of the experiments involving this kind of subject (that I know of) involve a property called spin. Imagine the way the earth spins. The name "spin" is kind of by analogy with that, except that there might not be any actual spinning going on in the property of that name. But there is a property of spinning objects called "angular momentum" (which I will discuss below), and the property called "spin" refers to an intrinsic angular momentum.
Angular momentum is intuitively something like regular momentum, but it involves spinning rather than moving in straight lines. If you start spinning a bicycle wheel you will have to cancel that by applying a twisting force (called a torque) in the other direction. Like regular momentum, angular momentum is also conserved (when you apply a torque to spin the bicycle tire, the tire also pushes on you with an equal and opposite torque).
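In symbols (again, just the ordinary classical formulas): for a small mass $m$ going around a circle of radius $r$ at angular speed $\omega$, the angular momentum is

$$L = m r^2 \omega,$$

and a torque changes it in the same way a force changes ordinary momentum, $\tau = dL/dt$.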
Electrons have a small amount of intrinsic angular momentum, as though they were spinning. You might visualize this as though they were tiny, spinning balls. It isn't right to visualize them that way, but it might be helpful for now (there really isn't a good way to visualize them, but it is hard to think about them without some kind of visualization unless you are comfortable thinking of them as abstract manifestations of the equations that describe them).
One more thing you need to know about angular momentum. You know that it describes a property of something that is moving around an axis (think of a spinning ball or bicycle tire). So how do you describe the direction of motion of something moving around an axis? You might think you would just say "clockwise" or "counter-clockwise", but you also have to say which direction the axis is pointing. Since the axis is a line, we just consider the direction as being along that line, and we point along that line toward one end or the other, depending on whether the rotation is clockwise or counter-clockwise.
Again, it may help you visualize this by considering a spinning ball. If you spin the ball about a vertical axis so that when you look down on the ball, it is spinning counter-clockwise, then the angular momentum is considered to be "up". If, when you look down on the ball, it is spinning clockwise, the angular momentum is considered to be "down". This is called the "right hand rule", if you point your right thumb in the direction of the angular momentum, the spinning motion is in the direction that the relaxed fingers on that hand curl. There isn't a lot of "why" about this, it is mostly just a way of talking about it that physicists find convenient (it turns out you can do math on rotations conveniently if you adopt this convention). With this way of speaking about the direction of angular momentum, there is a direction that you can point in which will let people know which direction a spinning object is spinning (i.e. the direction of the axis of rotation and which way the object is spinning about that axis).
An odd thing about this is that every electron has the same magnitude of intrinsic angular momentum. The amount is the value of Planck's constant (usually written as h) divided by $4\pi$. Only the direction can be different. Note that the $\pi$ in that expression is just the ordinary $\pi$ that is the ratio between the circumference and diameter of a circle, about $3.14$.
Some people get overwhelmed when named things start getting thrown into the description, things like Planck's constant. Don't get overwhelmed. Think about the kilogram. It is defined by an actual block of something kept in a vault in France. So when you say that you are buying two kilograms of flour you are just buying two of that defined amount. Planck's constant is just a defined amount of angular momentum. It turns out that this particular defined amount is oddly significant, unlike the kilogram which was chosen somewhat arbitrarily. It is so significant that we like to honor the person who first described it. But it is really just a particular, defined amount of angular momentum, just in the same way that a kilogram is a particular, defined amount of mass.
The value of Planck's constant is a very small amount of angular momentum. I mean *tiny*. If you had a mass of 1 gram, and spun it around an axis at a distance of 1 centimeter, and at a rate of 1 revolution per second, it would have an angular momentum about 1 billion times 1 billion times 1 billion times greater than Planck's constant, (i.e. about 1,000,000,000,000,000,000,000,000,000 times Planck's constant.) I think I have that right. Certainly Planck's constant represents a small, but measurable, amount of angular momentum.
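That estimate does check out. Using $L = m r^2 \omega$ with $m = 1\ \text{g}$, $r = 1\ \text{cm}$, and one revolution per second ($\omega = 2\pi\ \text{rad/s}$):

$$L = (10^{-3}\,\text{kg})(10^{-2}\,\text{m})^2(2\pi\,\text{s}^{-1}) \approx 6.3\times 10^{-7}\ \text{kg·m}^2/\text{s},\qquad \frac{L}{h} \approx \frac{6.3\times 10^{-7}}{6.6\times 10^{-34}} \approx 10^{27},$$

which is indeed about a billion times a billion times a billion.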
The fundamental unit of angular momentum is actually $h/2\pi$, and electrons have half that spin, so they are called "spin $1/2$" particles. The "$1/2$" means half of $h/2\pi$. Because, the way modern Physics is done, Planck's constant almost always has to be divided by $2\pi$, Physicists have a special symbol for that. They put a little bar through the "h" (almost as though they were crossing a lower-case "t"), and call it "h bar". I think I can do that: "$\hbar$". If that doesn't look like an h with a bar through it, let me know.
Now here is the *really* weird part. You can measure the intrinsic angular momentum of the electron about any axis you like and you will always get the same magnitude, the only difference will be which way it points along that axis (e.g. up or down).
I am talking about this concretely, in terms of $\hbar$, just because I want you to be able to see that we are talking about concrete concepts. Electrons intrinsically have angular momentum, angular momentum is something you experience in ordinary life (e.g. the spinning tire), the amount of intrinsic angular momentum that electrons have is very small, but it is conceptually just some value that can be measured. It has been measured, and it always has this one particular very small value, only the direction varies.
This intrinsic angular momentum is the property of electrons known as "spin". Sometimes you might see it as a value in units of angular momentum, or sometimes it will just be "up" or "down" because the magnitude is known. Remember that it applies to any axis, so it might really be "left" or "right" instead of "up" or "down". Sometimes people measure with respect to a particular direction (e.g. "up"), so that positive values are considered "up" and negative values are considered "down". Sometimes you will see it given the value $1/2$, but that means $1/2$ of $\hbar$.
Just to be sure you have the full oddness of this, suppose you measure the vertical spin—you get a value of $\hbar/2$ up or down. Then if you measure the horizontal spin, you will get a value of $\hbar/2$ left or right. (I warned you it was not really right to imagine it as a spinning ball.)
Other particles also have "spin". A photon is a particle of light. All photons have an equal magnitude of angular momentum, in this case it is $\hbar$ (not $\hbar/2$), and it always points either along their direction of motion or in the opposite direction. The article discusses photon spin, but I think electrons are easier to think about. So I am hoping that the discussion of electrons helped you visualize this, I thought it might have been harder if we started with the angular momentum of a little massless ball of light. As I said, the reality of these things is always hard to visualize, if you really think about them correctly.
(An aside. Every electron has the same magnitude of spin, and every photon has the same magnitude of spin, but this is not a general property of particles. There are some particles that have more choices, but there are always a limited number of choices.)
(Another aside: The conjugate properties referred to in the Heisenberg Uncertainty Principle are always related to each other in the way that the product of their units will have the units of angular momentum. There is a certain statistical measure of the uncertainty of prediction that is called the "root mean square deviation" or "RMS deviation". It is found by taking measurements, squaring the deviation of those measurements from the predicted mean (i.e. average) value, averaging the value of those squared deviations, and then taking the square root of the result. Yes, it is tedious work. The Heisenberg Uncertainty Principle states that, if you do this over a large enough number of measurements, the product of the RMS deviations found in each of the conjugate properties will not be less than $\hbar/2$. That is, it isn't really a statement about a limitation on measurement, as is widely believed, rather it is a statement about uncertainty. The measurements come into it as a way to measure the uncertainty.)
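(In symbols, for the record: the RMS deviation of measurements $a_1,\dots,a_N$ with mean $\bar a$ is

$$\sigma_A = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(a_i - \bar a\right)^2},$$

and for a conjugate pair such as position and momentum the Heisenberg Uncertainty Principle reads $\sigma_x\,\sigma_p \ge \hbar/2$.)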
10. Successive measurements of spin.
Suppose you measure the spin of an electron in a vertical axis and happen to find it is "up", or $+\hbar/2$. If you measure in a second direction you will find that the value is always $+\hbar/2$ or $-\hbar/2$ but usually not with equal probability. If that second direction is also vertical, you will always measure "up" (two successive measurements along the same axis will match). If it is nearly vertical then you will be very likely to measure $+\hbar/2$, but occasionally you will measure $-\hbar/2$. If the second direction is horizontal, then the second measurement will be uncorrelated with the first, you will be equally likely to measure $+\hbar/2$ as $-\hbar/2$. (This is a crucial point.)
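Quantum mechanics makes "usually not with equal probability" quantitative: if the electron is known to be "up" along one axis, the probability of getting $+\hbar/2$ along a second axis tilted by an angle $\theta$ from the first is

$$P(+) = \cos^2(\theta/2),$$

which is $1$ for $\theta = 0$ (same axis), $1/2$ for $\theta = 90^\circ$ (horizontal), and $0$ for $\theta = 180^\circ$.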
11. Spin uncertainty and quantum entanglement.
If all particles of some kind, such as an electron, have the same magnitude of spin, then how can the value be uncertain? Because the direction might not be known.
For example, suppose a process creates two photons, moving in opposite directions, one to the left and the other to the right. Suppose it is known, because angular momentum is conserved, that the total angular momentum must be zero. In that case you might either measure the spin of both photons pointing along their individual direction of travel, or both pointing opposite to their direction of travel, but you wouldn't know which of these two situations you were going to get until you had measured the spin of at least one of the photons.
So now we can go back to the question of hidden variables, but in this case the hidden variable is the spin of an individual particle, before the spin of either entangled particle has been measured.
12. Bell's theorem
So far, we know that Quantum Mechanics requires that uncertainties be, well, uncertain, but common sense says that these things have definite values (hidden variables) and so we might suspect that there is some flaw in the theory. The last hope for the common sense view was laid to rest after experimental verification that followed a theorem by John Bell, which he published in 1964.
What Bell was able to show (when you put his theory together with the experimental results of others) was that if you looked at a large number of entangled pairs of particles, you could decide whether there were hidden variables on the basis of a statistical analysis. That is, you could decide whether a particle had its own definite value of spin, before it was measured.
Imagine a device that emits two entangled electrons at a time, one to the left and the other to the right. Suppose it is known (from the conservation laws) that their total spin is zero, along any axis. That is, if we measure the spin of the left-going and right-going electron along a similar axis (e.g. both vertical, or both horizontal), the sum of the spins will be zero.
As noted above, a measurement along one axis introduces randomness into a measurement along a different axis. But we can use entanglement to effectively take a measurement of the spin along two independent directions, under the assumption of local realism. That is, we might measure the left-going electron along a vertical axis (having satisfied ourselves as to what this would imply for a vertical spin measurement of the right-going electron) and we could measure the horizontal spin of the right-going electron. If the results of the measurements depended on hidden variables, then these independent measurements would let us know the values of those variables, for any particular pair, for their spin in two independent directions.
Bell's Theorem is, however, based on an inequality involving three criteria, not just two. What Bell's inequality says (and this can sometimes be a useful fact to know in ordinary life) is that "the number of items that possess property A and not property B, plus the number of items that possess property B and not property C, must always be greater than or equal to the number of items that possess property A and not property C". An example might be that, in any group of people, considering left-handed vs right-handed, tall vs short, men vs women (that is A = right-handed, B = short, C = male), the number of tall right-handers plus the number of short women cannot be less than the number of right-handed women.
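The inequality itself takes only a line to check: any item that has $A$ and not $C$ either lacks $B$, in which case it is counted in the first group, or has $B$ (and lacks $C$), in which case it is counted in the second group. Hence

$$N(A,\text{not }B) + N(B,\text{not }C) \;\ge\; N(A,\text{not }C).$$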
So this is just a mathematical fact. But the problem is that Bell's inequality involves three attributes, and we can only measure electron spin, even using entanglement, in two different orientations.
What Bell proposed was to choose randomly selected orientations of the spin detectors along three different axes and look at the correlations among large numbers of pairs to see if the inequality held in the aggregate.
So, if the spin orientations along the three axes were decided by hidden variables, when the electrons were first emitted, then in the aggregate Bell's inequality should hold. Note that this doesn't involve Quantum Mechanics at all; all it involves is the idea that the total spin is a conserved quantity and that measurements of spin along any axis always give $+\hbar/2$ or $-\hbar/2$. These are observed results, confirmed by experiment, independent of any theoretical basis.
It turned out, however, that Bell's Inequality was violated. Thus, either the electron spin is not determined when the electron is emitted or the measurement of the spin of one electron affects the hidden variables of the other electron (e.g. measuring the vertical spin of the right-moving electron changes the horizontal spin of the left-moving electron). You cannot have both reality (i.e. hidden variables) and locality.
Essentially, what it seemed to show is that if you measure the vertical spin of one of the electrons, it instantly randomizes the horizontal spin of the other electron (and vice-versa).
That is, Bell's theorem (together with the confirming experimental results of Aspect and others) showed that local, hidden variables are not only inconsistent with Quantum Mechanics, they are much more generally inconsistent with the experimental results.
13. Locality vs. "Spooky action at a distance."
Bell's theorem showed that, if you assume reality, then the information that arises from measuring one of the entangled particles is instantly available to a measurement of the spin of the other particle (so that the measurements can retain correlation and yet violate the inequality). This kind of "action at a distance", the kind which Einstein called "spooky", violates locality.
So, somehow the conservation laws are satisfied and yet the local values of these properties are not determined until a measurement is made. So there is some kind of action at a distance that happens.
The kind of interaction that happens in "spooky action at a distance" is limited. It doesn't allow people to send other information between the entangled pairs, it only sends this specific information that is involved in maintaining the correlation.
14. The new results.
People who have been trying to resurrect some kind of materialist world-view from the wreckage of the Aspect experiments and Bell's theorem have thus taken to considering non-local models, in which there is still a material objective reality, but it has some non-local hidden variables whose values somehow are available throughout the universe.
The new experiments (if I am interpreting the article correctly) rule out additional large classes of hidden variables (i.e. objective reality existing apart from measurement) even of the non-local kind.
Leggett's paper doesn't rule out all forms of nonlocal reality, but rather a large class of nonlocal reality, which he calls "crypto-nonlocal". The kind of locality he has relaxed is a possible dependence on the non-local experimental set-up. That is, he has shown the inconsistency is a matter of holding both to reality and to locality with respect to the measured outcomes alone. The non-locality with respect to the apparatus does not manifest itself as a non-locality with respect to the results of measurements; it remains hidden.
In other words, the "action at a distance" implied by the new experiment must involve sending information about the results of the measurement and not merely sending information about the settings on the equipment.
Leggett's paper is, A.J. Leggett (2003) Nonlocal Hidden-Variable Theories and Quantum Mechanics: An Incompatibility Theorem Foundations of Physics, 33 (10), 1469-1493. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089621305465698, "perplexity": 299.14869633227016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.75/warc/CC-MAIN-20171021224608-20171022004608-00096.warc.gz"} |
http://math.iisc.ac.in/seminars/2016/2016-01-27-tulasi-ram-reddy-a.html | ##### Venue: Lecture Hall 3, Department of Mathematics
In the first part we study critical points of random polynomials. We choose two deterministic sequences of complex numbers,whose empirical measures converge to the same probability measure in complex plane. We make a sequence of polynomials whose zeros are chosen from either of sequences at random. We show that the limiting empirical measure of zeros and critical points agree for these polynomials. As a consequence we show that when we randomly perturb the zeros of a deterministic sequence of polynomials, the limiting empirical measures of zeros and critical points agree. This result can be interpreted as an extension of earlier results where randomness is reduced. Pemantle and Rivin initiated the study of critical points of random polynomials. Kabluchko proved the result considering the zeros to be i.i.d. random variables.
Contact: +91 (80) 2293 2711, +91 (80) 2293 2625
E-mail: chairman.math[at]iisc[dot]ac[dot]in | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556646347045898, "perplexity": 448.91684929977794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00399.warc.gz"} |
https://forum.math.toronto.edu/index.php?PHPSESSID=l0j274usu54jpkf1cl5m6qfd37&action=printpage;topic=136.0 | # Toronto Math Forum
## APM346-2012 => APM346 Math => Misc Math => Topic started by: Kun Guo on November 14, 2012, 10:41:56 PM
Title: 2011 term test 2 Problem 3. d
Post by: Kun Guo on November 14, 2012, 10:41:56 PM
$u_t - ku_{xx} = 0$ for $x \in (0,\pi)$
with the boundary conditions $u_x(0,t)=0$ and $u(\pi,t)=0$ and the initial condition $u(x,0)=x$.
Part d) Write the solution in the form of a series.
If we use separation of variables, U=X*T, I found that T0 is t dependent. Then A0*T0 cannot be just a constant or zero (A0 is for X).
Are there any mistakes regarding that part in both solutions posted last year?
Title: Re: 2011 term test 2 Problem 3. d
Post by: Victor Ivrii on November 15, 2012, 01:33:06 AM
If we use separation of variables, U=X*T, I found that T0 is t dependent. Then A0*T0 cannot be just a constant or zero (A0 is for X).
Are there any mistakes regarding that part in both solutions posted last year?
Your observation here is wrong and solutions are correct $X_0(t)T_0(t)= \cos(x/2) e^{-\frac{1}{2}t}$ satisfies boundary conditions.
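For completeness, a sketch of the series asked for in part (d), assuming the problem exactly as stated ($u_t - ku_{xx}=0$ on $(0,\pi)$, $u_x(0,t)=0$, $u(\pi,t)=0$, $u(x,0)=x$): the boundary conditions select the eigenfunctions $\cos\big((n+\frac12)x\big)$ for $n=0,1,2,\dots$, giving

$$u(x,t)=\sum_{n=0}^{\infty} A_n \cos\Big(\big(n+\tfrac12\big)x\Big)\, e^{-k\left(n+\frac12\right)^2 t},\qquad A_n=\frac{2}{\pi}\int_0^{\pi} x\,\cos\Big(\big(n+\tfrac12\big)x\Big)\,dx.$$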
Title: Re: 2011 term test 2 Problem 3. d
Post by: Kun Guo on November 15, 2012, 11:49:27 AM
Yes I got X0(t)T0(t)=cos(1/2*x)*exp(-1/2*t). But one solutions posted last year have either 0 or pi/2...
Title: Re: 2011 term test 2 Problem 3. d
Post by: Victor Ivrii on November 15, 2012, 12:46:45 PM
Yes I got X0(t)T0(t)=cos(1/2*x)*exp(-1/2*t). But one solutions posted last year have either 0 or pi/2...
So what? Someone posted a wrong solution. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.942096471786499, "perplexity": 6255.556604059131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613453.9/warc/CC-MAIN-20210614201339-20210614231339-00621.warc.gz"} |
https://cstheory.stackexchange.com/tags/pr.probability/hot?filter=month | # Tag Info
## Hot answers tagged pr.probability
17
They are separate (assuming $P \ne NP$). Consider the following property $P(x)$: $x$ is a $2n$-bit string, where either the first $n$ bits are not all zeros, or the last $n$ bits are a yes-instance of 3SAT. It's clear that testing whether $x$ satisfies $P$ is NP-hard, yet almost all strings satisfy it: the density $\to 1$ as $n \to \infty$.
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5321601033210754, "perplexity": 912.7916753296159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250620381.59/warc/CC-MAIN-20200124130719-20200124155719-00087.warc.gz"} |
https://www.webassign.net/manual/instructor_guide/t_i_creating_questions_scored_tutorial.htm | # Create Scored Tutorial Questions
You can transform part or all of a multi-part or multi-mode question into a scored tutorial that guides your students step by step through the solution of a problem. Scored tutorial questions are shown in the assignment itself and count toward the assignment score. In the Assignment Editor, you can set the point value for the entire tutorial or for each question part.
Important:
• Do not use the <accordion> and <tutorial> tags in the same question. The one exception to this rule is that in a question that uses the <accordion> tag, you can create a popup tutorial.
• For scored tutorial questions to work correctly, you must allow question part submission in your assignment.
Your tutorial might be the entire question or only part of the question. You can create more than one tutorial in a single question, and you can mix scored and popup tutorials in the same question.
## Skipping and Points
For scored tutorials, the decision about whether to allow students to skip tutorial steps affects not only your students' learning experience, but also the points that they can earn for the tutorial.
By default, your students are allowed to skip tutorial steps, but they cannot go back later to complete the skipped steps. This means that students who skip a step permanently forgo any points they could have earned on the step, but they have an opportunity to earn points on any remaining steps in the tutorial.
If you disallow skipping, your students must either answer each step correctly or use all of their submissions for the step before going on to the next step.
Each method has its merits. Skipping steps gives your students an opportunity to move through the tutorial more quickly if they do not understand a step. Disallowing skipping encourages your students to attempt each step, even if only by guessing.
Tip: If you disallow skipping, the number of allowed submissions for the tutorial question is very important. Too many submissions might cause students to give up on a step that they do not understand; too few submissions might not give students enough opportunity to figure out a step for themselves before showing the correct answer.
## To create a scored tutorial from a multi-part or multi-mode question:
1. Open your question in the Question Editor.
2. In Question, add the <tutorial> tag at the beginning of your tutorial.
You can set several attributes to change the way your tutorial behaves.
• order="ascending": Shows steps in ascending order with the current step at the bottom. (By default, steps are displayed in ascending order with the current step at the top.)
• order="descending": Shows steps in descending order with the current step at the top. (By default, steps are displayed in ascending order with the current step at the top.)
• skip="no": Requires students to answer each step correctly or use all their submissions before going on to the next step. (By default, students can skip tutorial steps.)
• skip_text="text": Renames the Skip button to text (if you allow students to skip tutorial steps.)
For example:
<tutorial order="ascending" skip_text="Show the answer (no points
earned) and move to the next step">
3. After the <tutorial> tag, use the <premise> tag to set a title for the tutorial and display the overall problem or concept the tutorial addresses. You must use the closing </premise> tag at the end of the premise.
Note:
• You must specify a title attribute for the <premise> tag.
• Do not include any answer boxes in the premise.
• The premise is always displayed at the top of the tutorial.
• The premise is optional, but strongly recommended.
For example:
<premise title="Multiplying Fractions">
When you multiply fractions, you multiply the numerators and you
multiply the denominators.<br><br>
<watex>$\frac{3}{4} * \frac{13}{16} =$</watex>
</premise>
4. Enclose each tutorial step with the <step> tag. You must use the closing </step> tag at the end of each step.
You can set several attributes to change the way each step is displayed.
• button="text": Requires students to click a button with the specified text in order to see the step. (By default, each step is displayed as soon as the student either correctly answers or skips the previous step.)
• label="text": Replaces the default label Step n of m with the specified text.
• title="text": Displays the specified text after the step label.
• skip_text="text": Renames the Skip button to text (if you allow students to skip tutorial steps.)
Note: The <step> and <SECTION> tags are not interchangeable. You must specify the <SECTION> tag wherever the question mode changes.
For example:
<step button="Start" label="Part I" title="Multiply the
Numerators">
3 · 13 = <_>
</step>
5. Optionally, add tutorial hints in any step with the <hint> tag. You must end each hint with the closing </hint> tag.
Tutorial hints are shown as a lightbulb icon and display either Hint or a label that you specify with the label attribute. When your student clicks the icon, the contents of the <hint> tag are displayed in place of the label.
Note:
• The <hint> tag can be used only in <step>.
• Each step can contain only one <hint> tag.
• The <hint> and <HINT> tags are not interchangeable.
For example:
<hint label="Show hint">Use the Pythagorean Theorem.</hint>
6. Optionally, use the <conclusion> tag to display information after your students complete or skip the last step.
You must end the conclusion with the closing </conclusion> tag.
Note: You must specify a title attribute for the <conclusion> tag.
For example:
<conclusion title="Conclusion">You have finished the
tutorial.</conclusion>
7. End the tutorial with the closing </tutorial> tag.
8. Click Test/Preview to test the appearance and behavior of the question. See Test Questions.
9. When your question displays and functions correctly, click Save.
WebAssign assigns it a unique question ID (QID), which is displayed in parentheses after the question name.
You can use your question in an assignment and see it in your My Questions list only after it is saved.
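Putting steps 2 through 7 together, here is a sketch of what the source of a complete scored tutorial might look like. It uses only the tags and attributes described above, and its content mirrors the substitution-method example in the next section; the wording and labels are illustrative only, and the answer-key fields and any <SECTION> markers a real multi-mode question would need are omitted:

<tutorial skip_text="Show the answer and move to the next step">
<premise title="Solving a System by Substitution">
Use substitution to solve the system<br><br>
<watex>$x + y = 6 \\ x - y = 2$</watex>
</premise>
<step label="Step 1" title="Solve for x in terms of y">
x + y = 6, so x = <_>
<hint label="Hint">Subtract y from both sides of the first equation.</hint>
</step>
<step label="Step 2" title="Substitute into the second equation">
Replacing x in the second equation: <_> = 2
</step>
<step label="Step 3" title="Solve for y">
6 - y - y = 2, so y = <_>
</step>
<step label="Step 4" title="Solve for x">
Substitute your value of y into either equation: x = <_>
</step>
<conclusion title="Conclusion">The solution of the system is x = 4 and y = 2.</conclusion>
</tutorial>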
## Example Tutorial Question
The following table summarizes an actual question.
QID: 1251029
Name: Template2 3.TUT.01.
Mode: Multi-Mode...QN
Question: For simple systems of equations, you can often use the substitution method to solve for $x$ and $y$.\vspace{1em}$x + y = 6 \\ x - y = 2$ Solve for $x$ in terms of $y$.\vspace{1em} $x + y = 6 \\ x = <_>$ Rewrite the second equation, substituting $6-y$ for $x$. \vspace{1em}$x - y = 2 \\ <_> = 2$ Solve for $y$.\vspace{1em}$6 - y - y = 2 \\ y = <_>$ Substitute 2 for $y$ in either equation and solve for $x$.\vspace{1em} $x + 2 = 6 \\x - 2 = 2 \\ x = <_>$
Answer: y:6-y; y:(6-y)-y; 2; 4
Display to Students | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4495760202407837, "perplexity": 2092.304986863754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738004493.88/warc/CC-MAIN-20151001222004-00003-ip-10-137-6-227.ec2.internal.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2018_AMC_8_Problems/Problem_22&oldid=102838 | 2018 AMC 8 Problems/Problem 22
Problem 22
Point $E$ is the midpoint of side $\overline{CD}$ in square $ABCD$, and $\overline{BE}$ meets diagonal $\overline{AC}$ at $F$. The area of quadrilateral $AFED$ is $45$. What is the area of $ABCD$?
Solution 1
Let the area of $\triangle CEF$ be $x$. Thus, the area of triangle $ACD$ is $45+x$ and the area of the square is $2(45+x)=90+2x$.
By AAA similarity, $\triangle CEF \sim \triangle ABF$ with a 1:2 ratio, so the area of triangle $ABF$ is $4x$. Now consider trapezoid $ABED$. Its area is $45+4x$, which is three-fourths the area of the square. We set up an equation in $x$: $45+4x=\tfrac{3}{4}(90+2x)$.
Solving, we get $x=9$. The area of square $ABCD$ is $90+2(9)=108$.
Solution 2
We can use analytic geometry for this problem.
Let us start by giving $A$ the coordinate $(0,0)$, $B$ the coordinate $(1,0)$, and so forth, with the square taken to have side length 1. $\overline{AC}$ and $\overline{BE}$ can be represented by the equations $y=x$ and $y=-2x+2$, respectively. Solving for their intersection gives point $F$ coordinates $\left(\tfrac{2}{3},\tfrac{2}{3}\right)$.
Now, $\triangle CEF$'s area is simply $\tfrac{1}{2}\cdot\tfrac{1}{2}\cdot\tfrac{1}{3}$ or $\tfrac{1}{12}$. This means that pentagon $ABCEF$'s area is $\tfrac{1}{2}+\tfrac{1}{12}=\tfrac{7}{12}$ of the entire square, and it follows that quadrilateral $AFED$'s area is $\tfrac{5}{12}$ of the square.
The area of the square is then $45 \div \tfrac{5}{12} = 108$.
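As a cross-check on the $\tfrac{5}{12}$ (using the coordinates from Solution 2), the shoelace formula on $A(0,0)$, $F\left(\tfrac{2}{3},\tfrac{2}{3}\right)$, $E\left(\tfrac{1}{2},1\right)$, $D(0,1)$ gives

$$[AFED]=\frac{1}{2}\left(\tfrac{2}{3}(1-0)+\tfrac{1}{2}\left(1-\tfrac{2}{3}\right)\right)=\frac{1}{2}\left(\tfrac{2}{3}+\tfrac{1}{6}\right)=\frac{5}{12},$$

so the square has area $45\cdot\tfrac{12}{5}=108$ either way.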
Solution 3
There is a trivial solution. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9671759009361267, "perplexity": 527.2394342146833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107912807.78/warc/CC-MAIN-20201031032847-20201031062847-00455.warc.gz"} |
https://scipost.org/submissions/scipost_202005_00009v1/ | # Classification of magnetohydrodynamic transport at strong magnetic field
### Submission summary
As Contributors: Bartosz Benenowski · N Poovuttikul
Preprint link: scipost_202005_00009v1
Date submitted: 2020-05-21
Submitted by: Poovuttikul, N
Submitted to: SciPost Physics
Discipline: Physics
Subject area: Fluid Dynamics
Approach: Theoretical
### Abstract
Magnetohydrodynamics is a theory of long-lived, gapless excitations in plasmas. It was argued from the point of view of a fluid with higher-form symmetry that magnetohydrodynamics remains a consistent, non-dissipative theory even in the limit where temperature is negligible compared to the magnetic field. In this limit, leading-order corrections to the ideal magnetohydrodynamics arise at the second order in the gradient expansion of relevant fields, not at the first order as in the standard hydrodynamic theory of dissipative fluids and plasmas. In this paper, we classify the non-dissipative second-order transport by constructing the appropriate non-linear effective action. We find that the theory has eleven independent charge and parity invariant transport coefficients for which we derive a set of Kubo formulae. The relation between hydrodynamics with higher-form symmetry and the theory of force-free electrodynamics, which has recently been shown to correspond to the zero-temperature limit of the ideal magnetohydrodynamics, as well as simple astrophysical applications are also discussed.
###### Current status:
Editor-in-charge assigned
### Submission & Refereeing History
Submission scipost_202005_00009v1 on 21 May 2020 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826739490032196, "perplexity": 2651.7038747834913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151197.83/warc/CC-MAIN-20200714181325-20200714211325-00075.warc.gz"} |
http://mathhelpforum.com/pre-calculus/272514-factoring-1-a.html | 1. ## Factoring...1
What is the first step in factoring the following problem?
(ax + b)^(-1/2) - root {(ax + b)/b}
2. ## Re: Factoring...1
Seems you'd start by using this rule: a^(-p) = 1 / a^p
1 / (ax + b)^(1/2) - {(ax + b)/b}^(1/2)
Remember that root n = n^(1/2)
Also, if you continued the factoring process, I suggest
setting ax + b = k, to get something simpler to work with:
1 / k^(1/2) - (k/b)^(1/2)
3. ## Re: Factoring...1
Originally Posted by DenisB
Seems you'd start by using this rule: a^(-p) = 1 / a^p
1 / (ax + b)^(1/2) - {(ax + b)/b}^(1/2)
Remember that root n = n^(1/2)
Also, if you continued the factoring process, I suggest
setting ax + b = k, to get something simpler to work with:
1 / k^(1/2) - (k/b)^(1/2)
I'll work on this problem using k as a substitute for
(ax + b) as you suggested and post my work this weekend.
4. ## Re: Factoring...1
I tried but couldn't factor completely. Can someone work this out for me?
5. ## Re: Factoring...1
1 / k^(1/2) - (k/b)^(1/2)
right side: (k/b)^(1/2) = k^(1/2) / b^(1/2)
so we now have:
1 / k^(1/2) - k^(1/2) / b^(1/2)
which is of style: 1/u - u/v
I'll let you continue...
6. ## Re: Factoring...1
Originally Posted by DenisB
1 / k^(1/2) - (k/b)^(1/2)
right side: (k/b)^(1/2) = k^(1/2) / b^(1/2)
so we now have:
1 / k^(1/2) - k^(1/2) / b^(1/2)
which is of style: 1/u - u/v
I'll let you continue...
In place of v, you meant b.
1/u - u/v = 1/u - u/b, right?
This looks like a subtraction of two fractions.
Do I proceed in terms of subtracting two fractions and then simplify some more?
7. ## Re: Factoring...1
In place of v, you meant b.
1/u - u/v = 1/u - u/b, right?
No. I replaced k^(1/2) with u and b^(1/2 with v, to get 1/u - u/v
Simplify that by multiplying by uv to get one fraction only.
Then you're finished. You can substitute back in if you wish.
8. ## Re: Factoring...1
Originally Posted by DenisB
No. I replaced k^(1/2) with u and b^(1/2 with v, to get 1/u - u/v
Simplify that by multiplying by uv to get one fraction only.
Then you're finished. You can substitute back in if you wish.
Great. I can take it from here. I will post my work tomorrow.
9. ## Re: Factoring...1
1/u - 1/v becomes (v - u)/uv.
What value for u and v must I substitute back in?
10. ## Re: Factoring...1
1/u - 1/v becomes (v - u)/uv.
What value for u and v must I substitute back in? Denis answered this in the quote box below, which was a couple of posts prior to this.
That last denominator needs grouping symbols around it, such as:
(v - u)/(uv)
Originally Posted by DenisB
No. I replaced k^(1/2) with u and b^(1/2)with v, to get 1/u - u/v
$\dfrac{b^{1/2} - k^{1/2}}{k^{1/2}b^{1/2}} \ =$
11. ## Re: Factoring...1
Originally Posted by greg1313
That last denominator needs grouping symbols around it, such as:
(v - u)/(uv)
$\dfrac{b^{1/2} - k^{1/2}}{k^{1/2}b^{1/2}} \ =$
So, do I now factor or is this the final answer?
12. ## Re: Factoring...1
So, do I now factor or is this the final answer?
Although k = ax + b in this method, I don't see a direct/handy route for showing a factor being taken out.
Starting over:
(ax + b)^(-1/2) - root {(ax + b)/b}
$(ax + b)^{-1/2} - \dfrac{(ax + b)^{1/2}}{b^{1/2}} =$
$(ax + b)^{-1/2} - (ax + b)^{1/2}b^{-1/2} =$
$(ax + b)^{-1/2}{b^{-1/2}}[b^{1/2} - (ax + b)] =$
$[b(ax + b)]^{-1/2}(b^{1/2} - ax - b)$
After this post, I don't expect to post further on this thread.
13. ## Re: Factoring...1
Originally Posted by greg1313
Although k = ax + b in this method, I don't see a direct/handy route for showing a factor being taken out.
Starting over:
(ax + b)^(-1/2) - root {(ax + b)/b}
$(ax + b)^{-1/2} - \dfrac{(ax + b)^{1/2}}{b^{1/2}} =$
$(ax + b)^{-1/2} - (ax + b)^{1/2}b^{-1/2} =$
$(ax + b)^{-1/2}{b^{-1/2}}[b^{1/2} - (ax + b)] =$
$[b(ax + b)]^{-1/2}(b^{1/2} - ax - b)$
After this post, I don't expect to post further on this thread.
I appreciate the breakdown and quick replies. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.730945348739624, "perplexity": 7387.355793373367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00080-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://clay6.com/qa/50645/kcl-crystallizes-in-the-same-type-of-lattice-as-does-nacl-givn-that-large-f | # $KCl$ crystallizes in the same type of lattice as does $NaCl$.Givn that $\large\frac{r_{Na^+}}{r_{Cl^-}}$$=0.5 and \large\frac{r_{Na^+}}{r_{K^+}}$$=0.7$.Calculate the ratio of the side unit cell for $KCl$ to that for $NaCl$,and the ratio of density of $NaCl$ to that of $KCl$
$\begin{array}{1 1}1.143,1.172\\2.143,2.172\\3.143,3.172\\4.143,4.172\end{array}$
why do we need consider 1.5 and 1.2
NaCl crystallizes in the face-centred cubic unit cell,such that
$r_{Na^+}+r_{Cl^-}=\large\frac{a}{2}$
Where $a$ is the edge length of unit cell.Now since $r_{Na^+}/r_{Cl^-}$$=0.5 and r_{Na^+}/r_{K^+}$$=0.7$,we will have
$\large\frac{r_{Na^+}+r_{Cl^-}}{r_{Cl^-}}$$=1.5 And \large\frac{r_{K^+}}{r_{Cl^-}}=\frac{r_{K^+}}{r_{Na^+}/0.5}=\frac{0.5}{r_{Na^+}r_K^+}=\frac{0.5}{0.7} \large\frac{r_k^++r_{Cl^-}}{r_{Na^+}+r_{Cl^-}}=\frac{1.2}{0.7}\times \frac{1}{1.5} Or \large\frac{a_{KCl}/2}{a_{NaCl}/2}=\frac{1.2}{0.7\times 1.5} Or \large\frac{a_{KCl}}{a_{NaCl}}=\frac{1.2}{1.05}$$=1.143$
Now since $\rho=\large\frac{N}{a^3}\big(\frac{M}{N_A}\big)$,we will have
$\large\frac{\rho_{NaCl}}{\rho_{KCl}}=\big(\frac{a_{KCl}}{a_{NaCl}}\big)^3\big(\frac{M_{NaCl}}{M_{KCl}}\big)=$$(1.143)^2\big(\large\frac{58.5}{74.5}\big)$$=1.172$
answered Jul 16, 2014 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617041349411011, "perplexity": 3025.510263154476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948599156.77/warc/CC-MAIN-20171217230057-20171218012057-00747.warc.gz"} |
https://arxiv.org/abs/hep-lat/9404002 | hep-lat
# Title: Critical Point Correlation Function for the 2D Random Bond Ising Model
Abstract: High accuracy Monte Carlo simulation results for 1024*1024 Ising system with ferromagnetic impurity bonds are presented. Spin-spin correlation function at a critical point is found to be numerically very close to that of a pure system. This is not trivial since a critical temperature for the system with impurities is almost two times lower than pure Ising $T_c$. Finite corrections to the correlation function due to combined action of impurities and finite lattice size are described.
Comments: 7 pages, 2 figures after LaTeX file
Subjects: High Energy Physics - Lattice (hep-lat); Condensed Matter (cond-mat)
DOI: 10.1209/0295-5075/27/3/004
Cite as: arXiv:hep-lat/9404002 (or arXiv:hep-lat/9404002v2 for this version)
## Submission history
From: Lev N. Shchur [view email]
[v1] Tue, 5 Apr 1994 21:36:52 GMT (0kb,I)
[v2] Tue, 5 Apr 1994 22:40:53 GMT (39kb) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30081072449684143, "perplexity": 3826.6603055797686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690307.45/warc/CC-MAIN-20170925021633-20170925041633-00471.warc.gz"} |
https://geo.libretexts.org/Bookshelves/Geography_(Physical)/Book%3A_Physical_Geology_(Huth)/11%3A_Structural_Geology/11.02%3A_Assignment-_Identifying_Structural_Features_in_a_Geological_Landscape | # 11.2: Assignment- Identifying Structural Features in a Geological Landscape
## Module 11 Assignment
Identifying Structural Features in a Geological Landscape
Figure 1. Swartberg Pass rock formation, South Africa
### Overview
#### Stress versus Strain: A Review
Rocks change as they undergo stress, which is just a force applied to a given area. Since stress is a function of area, changing the area to which stress is applied makes a difference. For example, imagine the stress that is created both at the tip of high heeled shoes and the bottom of athletic shoes. In the high heeled shoe, the area is very small, so that stress is concentrated at that point, while the stress is more spread out in an athletic shoe. Rocks are better able to handle stress that is not concentrated in one point. There are three main types of stress: compression, tension, and shear. When compressional forces are at work, rocks are pushed together. Tensional forces operate when rocks pull away from each other. Shear forces are created when rocks move horizontally past each other in opposite directions. Rocks can withstand compressional stress more than tensional stress.
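In symbols, stress is force per unit area, $\sigma = F/A$. With made-up numbers chosen only for scale: a 600 N person concentrating their weight on a 1 cm² heel tip produces $\sigma = 600\ \text{N} / 10^{-4}\ \text{m}^2 = 6\times10^{6}\ \text{Pa}$, while the same weight spread over roughly 150 cm² of athletic-shoe sole gives only about $4\times10^{4}\ \text{Pa}$.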
Applying stress creates a deformation of the rock, also known as strain. As rocks are subjected to increased stress and strain, they at first behave in an elastic manner, which means they return to their original shape after deformation (see figure below). This elastic behavior continues until rocks reach their elastic limit (point X on the figure below), at which point plastic deformation commences. The rocks may bend into folds, or behave in a brittle manner by fracturing (brittle behavior can be easily envisioned if you think of a hammer hitting glass), but regardless they do not return to their original shape when the stress is removed in plastic deformation. The resulting deformation from applied stress depends on many factors, including the type of stress, the type of rock, the depth of the rock and pressure and temperature conditions, and the length of time the rock endures the stress. Rocks behave very differently at depth than at the surface. Rocks tend to deform in a more plastic manner at depth, and in a more brittle manner near the Earth’s surface.
A graph of strain versus stress. As stress and strain increase, rocks first experience elastic deformation that allows them to return to their original shape, until point X is reached. After this point, rocks will either experience plastic deformation or they will fracture, and they cannot return to their original shape.
### Instructions
In this lab you will be applying what you have learned about folds and faults and identifying these features from photographs. I have also included unconformities in this lab as a review, because these are often confused with faults. You may wish to go back and review some of the information about unconformities that you learned in the module on Geologic Time.
For each of the photographs below, please answer the following questions:
1. What type of feature is in the photograph? Is it a fold, a fault, or an unconformity?
2. After you have identified the type of feature, provide some more specifics on the feature. For example, if it is a fold, is it an anticline, syncline, or monocline? If it is a fault, is it a strike-slip, normal, or reverse (thrust) fault? If it is an unconformity, it is a nonconformity, a disconformity, or an angular unconformity?
3. Please describe in some detail how you were able to determine what type of feature was in the photograph— what visual cues did you use to help you?
4. If the feature is a fold or a fault, was it a result of tensional, compressional, or shear stress?
Your assessment should be between 1.5 and 2 pages in length.
An important note about folds:
Although anticlines most often occur “convex up” (meaning they look like a dome), their defining feature is that the oldest rocks will be at the center of the fold and the youngest rocks on the outside of the fold. If rocks have undergone extreme deformation and have completely overturned, it is possible to find an overturned anticline, in which case it appears “concave up,” but the oldest rocks are still in the middle. This is still considered an anticline, as they are truly defined by the placement of the oldest and youngest rocks. Conversely, a syncline is defined as having the youngest rocks at the center and the oldest rocks on the outside of the fold. These most often occur as a concave-upward fold, in which the layered strata are inclined up (resembling a smile). However, for the purpose of this lab, assume that the anticlines are convex-up and the synclines are concave-up.
15 points: All questions were answered thoroughly and accurately. Complete sentences were used to answer the sample questions.
12 points: All questions were answered and were mostly accurate. Only two or three minor errors.
8 points: Answers were too brief and 1 – 2 questions were unanswered
5 points: Only very partial information provided.
0 points: Did not complete the assignment.
11.2: Assignment- Identifying Structural Features in a Geological Landscape is shared under a CC BY-SA license and was authored, remixed, and/or curated by LibreTexts. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.491603285074234, "perplexity": 1284.8041767879997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00496.warc.gz"} |
http://nrich.maths.org/10099/solution | # Weekly Problem 3 - 2014
##### Stage: 3 and 4 Challenge Level:
Notice that
$$RRR=111 \times R$$
and
$$PQPQ=1000 \times P+100 \times Q +10\times P +Q=101 \times (10\times P+Q)$$
so $PQPQ\times RRR=111 \times 101 \times R\times(10 \times P+Q)$.
And we have $639027=PQPQ\times RRR=111 \times 101 \times R\times(10 \times P+Q)$ so we can divide both sides by $111\times101$ to give $$57=R\times(10\times P+Q)\;.$$
The only factors of $57$ are $1,3,19$ and $57$.
$R$ must divide $57$ and because $R$ must be a single digit number it can either be $1$ or $3$.
If $R$ is $3$ then $10\times P+Q=19$ so $P=1$ and $Q=9$.
If $R$ is $1$ then $10\times P+Q=57$ so $P=5$ and $Q=7$.
So there are two solutions (check that these both work) and in both cases $P+Q+R=13$.
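As a quick check that both candidate solutions do work:
$$1919 \times 333 = 639027 \qquad \text{and} \qquad 5757 \times 111 = 639027\;.$$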
This problem is taken from the UKMT Mathematical Challenges.
View the previous week's solution
View the current weekly problem | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40944892168045044, "perplexity": 416.055099239697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661733.69/warc/CC-MAIN-20150417045741-00027-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://mathematica.stackexchange.com/questions/29684/rearranging-a-simple-algebraic-equation | # Rearranging a simple algebraic equation
Suppose I have a simple algebraic equation like:
ChebyshevT[4, p] == 0
1 - 8 p^2 + 8 p^4 == 0
and I want to solve for the term p^4 by simple rearrangement:
p^4 == -(1 - 8 p^2)/8
How do I do that in Mathematica? And how can I then assign the solution so that I can use it as a replacement?
I am not sure what you are aiming for but you could get what I think is your desired form by the following:
sol=Solve[1 - 8 p^2 + 8 p^4 == 0 /. p^4 -> u, u]
u/.sol[[1]]
This yields:
1/8 (-1 + 8 p^2)
There are several ways to do that. For example, this:
Clear[eq1, eq2, p];
eq1 = ChebyshevT[4, p] == 0;
eq2 = eq1 /. a_*p^4 -> a*x;
p^4 == Solve[eq2, x][[1, 1, 2]]
p^4 == 1/8 (-1 + 8 p^2)
or this:
Clear[eq1, eq2, p];
eq1 = ChebyshevT[4, p] == 0;
eq2 = Map[Divide[#, -8] &, Map[Subtract[#, eq1[[1, 3]]] &, eq1]]
1/8 (-1 + 8 p^2) == p^4
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17338138818740845, "perplexity": 4212.226526280737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00089-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://rup.silverchair.com/jcb/article/111/6/2417/55882/Identification-of-functional-regions-on-the-tail | We used purified fusion proteins containing parts of the Acanthamoeba myosin-II tail to localize those regions of the tail responsible for each of the three steps in the successive dimerization mechanism (Sinard, J. H., W. F. Stafford, and T. D. Pollard. 1989. J. Cell Biol. 107:1537-1547) for Acanthamoeba myosin-II minifiliment assembly. Fusion proteins containing the terminal approximately 90% of the myosin-II tail assemble normally, but deletions within the last 100 amino acids of the tail sequence alter or prevent assembly. The first step in minifilament assembly, formation of antiparallel dimers, requires the COOH-terminal approximately 30 amino acids that are thought to form a nonhelical domain at the end of the coiled-coil. The second step, formation of antiparallel tetramers, requires the last approximately 40 residues in the coiled-coil. The final step, the association of two antiparallel tetramers to form the completed octameric minifilament, requires residues approximately 40-70 from the end of the coiled-coil. A region of the tail near the junction with the heads is important for tight packing of the tails in the minifilaments. Divalent cations induce the lateral aggregation of minifilaments formed from native myosin-II or fusion proteins containing a nonmyosin "head," but under the same conditions fusion proteins composed essentially only of myosin tail sequences with very little nonmyosin sequences form paracrystals. The region of the tail necessary for this paracrystal formation lies NH2-terminal to amino acid residue 1,468 in the native myosin-II sequence.
This content is only available as a PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8317863345146179, "perplexity": 5232.709690663569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00370.warc.gz"} |
http://docs.menpo.org/en/stable/api/io/import_image.html | import_image¶
menpo.io.import_image(filepath, landmark_resolver=<function same_name>, normalize=None, normalise=None)[source]
Single image (and associated landmarks) importer.
If an image file is found at filepath, returns an Image or subclass representing it. By default, landmark files sharing the same filename stem will be imported and attached with a group name based on the extension of the landmark file, although this behavior can be customised (see landmark_resolver). If the image defines a mask, this mask will be imported.
Parameters
• filepath (pathlib.Path or str) – A relative or absolute filepath to an image file.
• landmark_resolver (function or None, optional) – This function will be used to find landmarks for the image. The function should take one argument (the path to the image) and return a dictionary of the form {'group_name': 'landmark_filepath'} Default finds landmarks with the same name as the image file. If None, landmark importing will be skipped.
• normalize (bool, optional) – If True, normalize the image pixels between 0 and 1 and convert to floating point. If false, the native datatype of the image will be maintained (commonly uint8). Note that in general Menpo assumes Image instances contain floating point data - if you disable this flag you will have to manually convert the images you import to floating point before doing most Menpo operations. This however can be useful to save on memory usage if you only wish to view or crop images.
• normalise (bool, optional) – Deprecated version of normalize. Please use the normalize arg.
Returns
images (Image or list of) – An instantiated Image or subclass thereof or a list of images. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1997346729040146, "perplexity": 3483.944635281717}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00632.warc.gz"} |
http://projecteuclid.org/euclid.aos/1201012976 | ## The Annals of Statistics
### A constructive approach to the estimation of dimension reduction directions
Yingcun Xia
#### Abstract
In this paper we propose two new methods to estimate the dimension-reduction directions of the central subspace (CS) by constructing a regression model such that the directions are all captured in the regression mean. Compared with the inverse regression estimation methods [e.g., J. Amer. Statist. Assoc. 86 (1991) 328–332, J. Amer. Statist. Assoc. 86 (1991) 316–342, J. Amer. Statist. Assoc. 87 (1992) 1025–1039], the new methods require no strong assumptions on the design of covariates or the functional relation between regressors and the response variable, and have better performance than the inverse regression estimation methods for finite samples. Compared with the direct regression estimation methods [e.g., J. Amer. Statist. Assoc. 84 (1989) 986–995, Ann. Statist. 29 (2001) 1537–1566, J. R. Stat. Soc. Ser. B Stat. Methodol. 64 (2002) 363–410], which can only estimate the directions of CS in the regression mean, the new methods can detect the directions of CS exhaustively. Consistency of the estimators and the convergence of corresponding algorithms are proved.
#### Article information
Source
Ann. Statist. Volume 35, Number 6 (2007), 2654-2690.
Dates
First available in Project Euclid: 22 January 2008
http://projecteuclid.org/euclid.aos/1201012976
Digital Object Identifier
doi:10.1214/009053607000000352
Mathematical Reviews number (MathSciNet)
MR2382662
Zentralblatt MATH identifier
05241119
Subjects
Primary: 62G08: Nonparametric regression
Secondary: 62G09: Resampling methods 62H05: Characterization and structure theory
#### Citation
Xia, Yingcun. A constructive approach to the estimation of dimension reduction directions. Ann. Statist. 35 (2007), no. 6, 2654--2690. doi:10.1214/009053607000000352. http://projecteuclid.org/euclid.aos/1201012976.
#### References
• Ando, T. (1987). Totally positive matrices. Linear Algebra Appl. 90 165–219.
• Bai, Z. D., Miao, B. Q. and Rao, C. R. (1991). Estimation of directions of arrival of signals: Asymptotic results. In Advances in Spectrum Analysis and Array Processing (S. Haykin, ed.) 1 327–347. Prentice Hall, Englewood Cliffs, NJ.
• Chen, C.-H. and Li, K.-C. (1998). Can SIR be as popular as multiple linear regression? Statist. Sinica 8 289–316.
• Chow, Y. S. and Teicher, H. (1978). Probability Theory. Independence, Interchangeability, Martingales. Springer, New York.
• Chung, K. L. (1968). A Course in Probability Theory. Harcourt, Brace and World, New York.
• Cook, R. D. (1998). Regression Graphics. Wiley, New York.
• Cook, R. D. and Li, B. (2002). Dimension reduction for conditional mean in regression. Ann. Statist. 30 455–474.
• Cook, R. D. and Weisberg, S. (1991). Comment on “Sliced inverse regression for dimension reduction,” by K.-C. Li. J. Amer. Statist. Assoc. 86 328–332.
• de la Peña, V. H. (1999). A general class of exponential inequalities for martingales and ratios. Ann. Probab. 27 537–564.
• Delecroix, M., Hristache, M. and Patilea, V. (2005). On semiparametric $M$-estimation in single-index regression. J. Statist. Plann. Inference 136 730–769.
• Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Chapman and Hall, London.
• Fan, J. and Yao, Q. (2003). Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York.
• Fan, J., Yao, Q. and Tong, H. (1996). Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika 83 189–206.
• Härdle, W., Hall, P. and Ichimura, H. (1993). Optimal smoothing in single-index models. Ann. Statist. 21 157–178.
• Härdle, W. and Stoker, T. M. (1989). Investigating smooth multiple regression by the method of average derivatives. J. Amer. Statist. Assoc. 84 986–995.
• Härdle, W. and Tsybakov, A. B. (1991). Comment on “Sliced inverse regression for dimension reduction,” by K.-C. Li. J. Amer. Statist. Assoc. 86 333–335.
• Hoeffding, W. (1961). The strong law of large numbers for U-statistics. Mimeo Report No. 302, Inst. Statist., Univ. North Carolina.
• Horowitz, J. L. and Härdle, W. (1996). Direct semiparametric estimation of single-index models with discrete covariates. J. Amer. Statist. Assoc. 91 1632–1640.
• Hristache, M., Juditsky, A., Polzehl, J. and Spokoiny, V. (2001). Structure adaptive approach for dimension reduction. Ann. Statist. 29 1537–1566.
• Hristache, M., Juditsky, A. and Spokoiny, V. (2001). Direct estimation of the index coefficient in a single-index model. Ann. Statist. 29 595–623.
• Li, B., Zha, H. and Chiaromonte, F. (2005). Contour regression: A general approach to dimension reduction. Ann. Statist. 33 1580–1616.
• Li, K.-C. (1991). Sliced inverse regression for dimension reduction (with discussion). J. Amer. Statist. Assoc. 86 316–342.
• Li, K.-C. (1992). On principal Hessian directions for data visualization and dimension reduction: Another application of Stein's lemma. J. Amer. Statist. Assoc. 87 1025–1039.
• Lue, H.-H. (2004). Principal Hessian directions for regression with measurement error. Biometrika 91 409–423.
• Mack, Y. P. and Silverman, B. W. (1982). Weak and strong uniform consistency of kernel regression estimates. Z. Wahrsch. Verw. Gebiete 61 405–415.
• Samarov, A. M. (1993). Exploring regression structure using nonparametric functional estimation. J. Amer. Statist. Assoc. 88 836–847.
• Scott, D. W. (1992). Multivariate Density Estimation: Theory, Practice and Visualization. Wiley, New York.
• Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
• World Health Organization (2003). Reports on a WHO/HEI working group. Bonn, Germany.
• Xia, Y. (2006). Asymptotic distributions for two estimators of the single-index model. Econometric Theory 22 1112–1137.
• Xia, Y. (2006). A constructive approach to the estimation of dimension reduction directions. Technical report, Dept. Statistics and Applied Probability, National Univ. Singapore.
• Xia, Y., Tong, H. and Li, W. K. (2002). Single-index volatility models and estimation. Statist. Sinica 12 785–799.
• Xia, Y., Tong, H., Li, W. K. and Zhu, L. (2002). An adaptive estimation of dimension reduction space (with discussion). J. R. Stat. Soc. Ser. B. Stat. Methodol. 64 363–410.
• Yin, X. and Cook, R. D. (2002). Dimension reduction for the conditional $k$th moment in regression. J. R. Stat. Soc. Ser. B Stat. Methodol. 64 159–175.
• Yin, X. and Cook, R. D. (2005). Direction estimation in single-index regressions. Biometrika 92 371–384. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.533715009689331, "perplexity": 6596.314116267352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541142.66/warc/CC-MAIN-20161202170901-00368-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://debasishg.blogspot.ie/2010/ | ## Wednesday, December 29, 2010
### DHL delivered a box of these at my doorstep today
Happy New Year! Have a blast .. See ya all in 2011 ..
## Monday, December 27, 2010
### A case study of cleaner composition of APIs with the Reader monad
In my earlier post on composable domain models, I wrote about the following DSL that captures the enrichment of a security trade by computing the applicable tax/fees and then the net cash value of the trade. It uses chained composition of scalaz functors .. In this post we are going to improve upon the compositionality, introduce a new computation structure and make our APIs leaner with respect to type signatures ..
scala> trd1 ° forTrade ° taxFees ° enrichWith ° netAmount
res0: Option[scala.math.BigDecimal] = Some(3307.5000)
Here are the building blocks for the above .. the individual functions and the type definitions for each of them ..
and here's the chaining in action with wiring made explicit ..
Note how we explicitly wire the types up so as to make the entire computation composable. Composability is a worthwhile quality to have for your abstractions. However in order for your functions to compose, the types for input and output for each of them must match. In the above example, we need to have forTrade spit out a Trade object along with the list of tax/fee id, in order for it to compose with taxFees.
For an API to be usable, the secret sauce is to make it lean. Never impose any additional burden on to your API's interface that smells of incidental complexity to the user. This is exactly what we are doing in the above composition. Note we are carrying around the Trade argument pipelining it through each of the above functions. In our use case the Trade is a read-only state and needs to be shared amongst all functions to read the information from the object.
Refactor the above into the Reader monad. A Reader is meant to be used as an environment (it's also known as the Environment monad) for all the participating components of the computation. What we need to do for this is to set up a monadic structure for our computation. Here are the modified function signatures .. I have changed some of the names for better adaptability with the domain, but you get the idea ..
val taxFeeCalculate: Trade => List[TaxFeeId] => List[(TaxFeeId, BigDecimal)] = //..
Every function takes the Trade but we no longer have to do an explicit chaining by emitting the Trade also as an output. This is where a monad shines. A monad gives you a shared interface to many libraries where you don't need to implement sequencing explicitly within your DSELs.
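To see why the explicit threading goes away, here is a hand-rolled sketch of the Reader idea (illustrative only, not the scalaz source; the name Reader and its methods exist only for this sketch): a step is a function from the shared environment (the Trade) to a value, and flatMap hands the same environment to every step.

// illustrative sketch of a Reader: E is the read-only environment (here, the Trade)
case class Reader[E, A](run: E => A) {
  def map[B](f: A => B): Reader[E, B] =
    Reader(e => f(run(e)))
  def flatMap[B](f: A => Reader[E, B]): Reader[E, B] =
    Reader(e => f(run(e)).run(e))   // the same environment e is handed to both steps
}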
And here's our DSEL that runs through the sequence of enriching a trade while using the passed in trade as an environment .. (thanks @runarorama for the help with the Reader in scalaz)
val enrich = for {
  taxFeeIds    <- forTrade          // get the tax/fee ids for a trade (generator name assumed)
  taxFeeValues <- taxFeeCalculate   // calculate tax fee values
  netAmount    <- enrichTradeWith   // enrich the trade with the net cash value (generator name assumed)
}
yield((taxFeeIds ° taxFeeValues) ° netAmount)
This is a comprehension in Scala which is like the do notation of Haskell. Desugar it as an exercise and explore how flatMap does the sequencing.
Here's what the type of enrich looks like ..
scala> enrich
enrich is monadic in nature and follows the usual structure of a monad that sequences its operations through bind to give it an imperative look and feel. If any of the above sub-computations fail, the whole computation fails. But show it to a person who knows the domain of security trading - the steps in enrich nicely models the ubiquitous language.
I have the entire DSL in my github repo. You can get the use of enrich here in the test case ..
## Monday, December 20, 2010
### DSLs In Action is out!
It all started with a mail from Christina of Manning expressing their interest in publishing a book on building parsers in Scala. She mentioned that they have come across a blog post of mine titled External DSLs made easy with Scala Parser Combinators and Marjan Bace (Publisher) would like to talk to me on this possible venture. I still remember that hour long discussion with Marjan that ultimately led to the thoughts of coming out with a book on DSL design.
The ebook of DSLs In Action (buy here)* has been published, the print version is due Dec 24, just in time for Christmas. The final book is sized at 377 pages with 9 chapters and 7 appendices. Before publication the book was reviewed at various levels by an illustrious panel of reviewers. I received lots of feedbacks and suggestions at Manning online Author's Forum. The current incarnation of the book is the result of assimilating all suggestions, feedbacks and critiques aided by the constructive editing process of the Manning crew. Jonas Boner has been kind enough to write the foreword of the book. The result is now a reality for all of you to read, enjoy and review.
Some one asked to me pen down my thoughts on DSLs In Action in one page. Not the literal summary of what it contains, but the impressions of some of the thought snippets that it can potentially induce into the minds of its readers. This post is an attempt towards that.
DSL and polyglotism
A DSL is targeted for a specific domain. Your implementation should be expressive enough to model the ubiquitous language that your domain users use every day. Hence the practice of DSL driven development has an aura of polyglotism in it. Assuming you have the relevant expertise in your team, target the implementation language that's best suited for developing the syntax of the domain. DSLs In Action considers case studies in Ruby, Groovy, Clojure and Scala and focuses its readers on the implementation patterns of DSLs in each of them.
A DSL as a collaboration medium
The one most standout dimension that DSL driven development adds to a team is an increased emphasis on collaboration. The main value proposition that a DSL brings to table is a better collaboration model between the developers and the domain users. It's no secret that most of today's failed projects attribute their debacle to the huge gap in the developers' understanding of the domain. This mainly results from the lack of communication between the developers and the domain users and also between the different groups of developers. When you have code snippets like the following as part of your domain model ..
import TaxFeeImplicits._
// header assumed for illustration; only the case clauses appear in the quoted snippet
def taxFees(trade: Trade): PartialFunction[TaxFee, BigDecimal] = {
  case Commission => 20. percent_of trade.principal
  case Surcharge => 7. percent_of trade.principal
  case VAT => 7. percent_of trade.principal
}
you can collaborate very well with your domain experts to verify the correctness of implementation of the business rule for tax calculation on a security trade. DSLs In Action dispels the myth that a DSL will make your domain analysts programmers - it will not. A DSL best serves as the collaboration bridge to discuss business rules' implementation with your analysts.
Importance of the semantic model
A DSL is a thin layer of linguistic abstraction atop a semantic model. I have blogged on this topic some time ago. The semantic model is the set of abstractions on which you grow your DSL syntax. The semantic model can potentially be reused for purposes other than developing your DSL layer - hence it needs to be as loosely coupled as possible with the syntax that your DSL publishes. In DSLs In Action I discuss the concept of a DSL Facade that helps you decouple basic abstractions of the domain model from the language itself.
DSL design is also abstraction design
Throughout DSLs In Action I have highlighted many of the good qualities that a well-designed abstraction needs to have. In fact Appendix A is totally dedicated to discussing the qualities of good abstractions. In almost every chapter I talk about how these qualities (minimalism, distillation, composability and extensibility) apply in every step of your DSL design. In real world when you implement DSLs, you will see that entire DSLs also need to be composed as abstractions. Use a proper language that supports good models of abstraction composition
External and Internal is only one dimension of classifying DSLs
Though broadly I talk about external and internal DSLs, actually there are lots of different implementation patterns even within them. Internal DSLs can be embedded, where you embed the domain language within the type system of the host language. Or they can be generative, where you allow the language runtime to generate code for you, like you do in Ruby or Groovy using dynamic meta-programming. With the Lisp family you can create your own language using macros. Some time back I blogged about all these varying implementation patterns with internal and external DSLs.
Hope you all like DSLs In Action. If you have reached this far in the post, there are chapters 1 and 4 free on the Manning site that would help you get a sneak peek of some of the stuff that I talked about .. Happy Holidays!
## Monday, December 13, 2010
### Monads, Applicative Functors and sequencing of effects
Monads and applicative functors are both used to model computations - yet it's interesting to note the subtle differences in the way they handle sequencing of effects. Both of them support an applicative style of effectful programming that lets you write code in pointfree style (in Haskell) making your code look so expressive.
Applicative functors are more general than monads and hence have a broader area of application in computation. Haskell does not define monads to be a derivative of applicative functors, possibly for some historical reasons. Scalaz does it and does it correctly and conveniently for you to reduce the number of abstractions that you need to have.
In Haskell we have sequence and sequenceA implemented for monads and applicatives separately ..
sequence :: Monad m => [m a] -> m [a]
-- in Data.Traversable
class (Functor t, Foldable t) => Traversable t where
sequenceA :: Applicative f => t (f a) -> f (t a)
In scalaz we have the following ..
// defined as pimped types on type constructors
def sequence[N[_], B](implicit a: A <:< N[B], t: Traverse[M], n: Applicative[N]): N[M[B]] =
traverse((z: A) => (z: N[B]))
sequence is defined on applicatives that works for monadic structures as well ..
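For example (a sketch of the usage only; depending on the scalaz version the type parameters may need to be spelled out explicitly):

val xs: List[Option[Int]] = List(some(1), some(2), some(3))
val flipped = xs.sequence   // Some(List(1, 2, 3)); None if any element were none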
Monads are more restrictive than applicatives. But there are quite a few use cases where you need to have a monad and NOT an applicative. One such use case is when you would like to change the sequence of an effectful computation depending on a previous outcome.
Have a look at the function ifM (monadic if) defined in scalaz ..
// executes the true branch of ifM
scala> true.some ifM(none[Int], 4.some)
res8: Option[Int] = None
Here's how ifM is defined ..
def ifM[B](t: => M[B], f: => M[B])(implicit a: Monad[M], b: A <:< Boolean): M[B] =
>>= ((x: A) => if (x) t else f)
It uses the monadic bind that can influence a subsequent computation depending on the outcome of a previous computation.
(>>=) :: m a -> (a -> m b) -> m b
Now consider the same computation implemented using an applicative ..
scala> val ifA = (b: Boolean) => (t: Int) => (f: Int) => (if (b) t else f)
ifA: (Boolean) => (Int) => (Int) => Int = <function1>
// good!
scala> none <*> (some(12) <*> (some(false) map ifA))
res41: Option[Int] = None
scala> none <*> (some(12) <*> (some(true) map ifA))
res42: Option[Int] = None
<*> just sequences the effects through all computation structures - hence we get the last effect as the return value which is the one for the else branch. Have a look at the last snippet where even though the condition is true, we have the else branch returned.
(<*>) :: f (a -> b) -> f a -> f b
So <*> cannot change the structure of the computation which remains fixed - it just sequences the effects through the chain.
Monads are the correct way to model your effectful computation when you would like to control the structure of computation depending on the context.
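A tiny sketch of that extra power (plain Scala, purely illustrative): with flatMap the next step is chosen by looking at the previous value.

def nextStep(n: Int): Option[Int] = if (n > 0) Some(n * 2) else None   // the branch taken depends on n
val r1 = Some(3).flatMap(nextStep)    // Some(6)
val r2 = Some(-1).flatMap(nextStep)   // None: a different shape of computation, picked at runtime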
A use case for applicatives
scalaz implements Validation as an applicative functor. This is because here we need to accumulate all the effects that the validation can produce. As I noted in my last post on composing abstractions in scalaz, the following snippet will accumulate all validation errors before bailing out .. quite unlike a monadic API ..
def makeTrade(account: Account, instrument: Instrument, refNo: String, market: Market,
unitPrice: BigDecimal, quantity: BigDecimal) =
(validUnitPrice(unitPrice).liftFailNel |@|
validQuantity(quantity).liftFailNel) { (u, q) => Trade(account, instrument, refNo, market, u, q) }
Let's look into it in a bit more detail ..
sealed trait Validation[+E, +A] {
//..
}
final case class Success[E, A](a: A) extends Validation[E, A]
final case class Failure[E, A](e: E) extends Validation[E, A]
Note in case of success only the actual computation value gets propagated, as in the following ..
trait Validations {
//..
def success[E, A](a: A): Validation[E, A] = Success(a)
def failure[E, A](e: E): Validation[E, A] = Failure(e)
//..
}
With a monadic bind, the computation will be aborted on the first error as we don't have the computation value to be passed to the second argument of >>=. Applicatives allow us to sequence through the computation chain nicely and accumulate all the effects. Hence we can get effects like ..
// failure case
Failure(NonEmptyList(price must be > 0, qty must be <= 500))
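The accumulation can be seen in a stripped-down sketch (illustrative only, not the scalaz implementation; all names here are made up): the applicative combination concatenates the failures from both sides instead of stopping at the first one.

sealed trait Checked[+E, +A]
case class Ok[A](a: A) extends Checked[Nothing, A]
case class Errors[E](es: List[E]) extends Checked[E, Nothing]

// applicative-style combination: both sides are inspected and their failures are concatenated
def map2[E, A, B, C](ca: Checked[E, A], cb: Checked[E, B])(f: (A, B) => C): Checked[E, C] =
  (ca, cb) match {
    case (Ok(a), Ok(b))           => Ok(f(a, b))
    case (Errors(e1), Errors(e2)) => Errors(e1 ++ e2)   // keep both error lists
    case (Errors(e), _)           => Errors(e)
    case (_, Errors(e))           => Errors(e)
  }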
One other interesting abstraction to manipulate computation structures is an Arrow, which makes an interesting comparison with Applicative Functors. But that's for some other day, some other post ..
## Wednesday, December 01, 2010
### Composable domain models using scalaz
I have been having some solid fun working through scalaz - it's possibly as close as you can get to Haskell with a postfunctional language like Scala, which also supports object oriented paradigms. One of the ways I learn languages is by developing domain models using the idioms that the language offers and trying to make the model as expressive as possible. I pick up domains on which I have worked before - so I have an idea of how much I can gain in expressivity using the new language compared to implementations in older languages.
Securities trading is a domain on which I have been working since the last 10 years. I have implemented domain models of securities trading back office systems in Java and Scala. It's time to add scalaz to the mix and see how much more functional my model turns out to be. I have created a playground for this - tryscalaz is a repository on my github that hosts some of my experiments with scalaz. I have started building a domain model for trading systems. It's far from being a realistic one for production use - its main purpose is to make myself more conversant with scalaz.
Scalaz is a wonderful experiment - it's definitely what functional Scala programs should look like. It has a small but wonderful community - Jason (@retronym) and Runar (@runarorama) always help me proactively both on the mailing list and on Twitter.
I am not going into every detail of how my trade domain model shapes up with Scalaz. I implemented a similar domain model in Haskell very recently and documented it here, here and here on my blog. If nothing else, it will help you compare the various aspects of both the implementations.
In this post let me go through some of the features of Scalaz that I found wonderfully expressive to model your domain constraints. You can get a lot out of using Scala only. But with Scalaz, you can take your composition at a much higher level through the various combinators that it offers as part of implementing typeclasses for functors, applicatives, monads and arrows. I haven't yet explored all of these abstractions - yet many of those are already very useful in making your domain models concise, yet expressive.
Here's some example of composition using the higher order functions that Scalaz offers ..
Note how we can compose the functions much like the Haskell way that I described in the earlier posts. In the above composition, I used map, which we can do in Scala for lists or options which explicitly support a map operation that maps a function over the collection. With scalaz we can use mapping of a function over any A of kind *->* for which there exists a Functor[A]. Scala supports higher kinds and scalaz uses it to make map available more generally than what you get in the Scala standard library.
Now let's infuse some more Scalaz magic into it. Frequently we need to do the same operations on a list of trades, which means that instead of just a map, we need to lift the functions through another level of indirection. Much like ..
Note how the functions forTrade, taxFees etc. get lifted into the List of Options.
Another nice feature that becomes extremely useful with scalaz in a domain model is the use of non-breaking error handling. This is made elegant by designing the Validation[] abstraction as an applicative functor. You can design your validation functions of the domain model as returning an instance of Validation[]. They can then be wired together in a variety of ways to implement accumulation of all failures before reporting to the user .. Here's a simple example from the Trade domain model ..
Validation[] in scalaz works much like Either[], but has a more expressive interface that specifies the success and error types explicitly ..
You can use Validation[] in comprehensions or as an applicative functor and wire up your domain validation logic in a completely functional way. Here's how our above validations work on the REPL ..
When we have invalid trade arguments all validation errors are accumulated and then reported to the user. If all arguments are valid, then we have a valid Trade instance as success. Cool stuff .. a recipe that I would like to have as part of my domain modeling everytime I start a new project ..
## Monday, November 22, 2010
### Exploring scalaz
I have been playing around with Scalaz for a couple of weekends now. Scalaz brings to Scala some generic functions and abstractions that are not there in the current Scala API. These are mostly *functional* higher order structures that Haskell boasts as being part of its standard distribution. Using Scalaz you can write Scala code in an applicative style that often comes out as being more expressive than the imperative one.
Functors, applicatives, arrows, monads and many such abstractions form part of the Scalaz repertoire. It’s no wonder that using Scalaz demands a different way of thinking than standard Scala does. You need to think more as if you’re modeling in Haskell rather than in an object-oriented language.
Typeclasses are the cornerstone of Scalaz distribution. Instead of thinking polymorphically in inheritance hierarchies, think in terms of designing APIs for the open world using typeclasses. Scalaz implements the Haskell hierarchy of typeclasses - Functors, Pointed, Applicative, Monad and the associated operations that come with them.
How is this different from the normal way of thinking ? Let’s consider an example from the current Scala point of view.
We say that with Scala we can design monadic abstractions. flatMap is the bind which helps us glue abstractions just like you would do with >>= of Haskell. But does the Scala standard library really have a monad abstraction ? No! If it had a monad then we would have been able to abstract over it as a separate type and implement APIs like sequence in Haskell ..
sequence :: Monad m => [m a] -> m [a]
We don’t have this in the Scala standard library. Consider another example of a typeclass, Applicative, which is defined in Haskell as
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
…
Here pure lifts a into the effectful environment of the functor, while (<*>) takes a function from within the functor and applies it over the values of the functor. Scalaz implements Applicative in Scala, so that you can write the following:
scala> import scalaz._
import scalaz._
scala> import Scalaz._
import Scalaz._
scala> List(10, 20, 30) <*> (List(1, 2, 3) map ((_: Int) * (_: Int)).curried)
res14: List[Int] = List(10, 20, 30, 20, 40, 60, 30, 60, 90)
Here we have a pure function that multiplies 2 Ints. We curry the function and partially apply to the members of the List(1, 2, 3). Note List is an instance of Applicative Functor. Then we get a List of partial applications. Finally <*> takes that List and applies to every member of List(10, 20, 30) as a cartesian product. Of course the Haskell variant is much less verbose ..
(*) <$> [1, 2, 3] <*> [10, 20, 30]

and this is due to better type inference and curry by default strategy of function application. You can get a more succinct variant in Scalaz using the |@| combinator ..

scala> List(10, 20, 30) |@| List(1, 2, 3) apply (_ * _)
res17: List[Int] = List(10, 20, 30, 20, 40, 60, 30, 60, 90)

You can have many instances of Applicatives so long you implement the contract that the above definition mandates. Typeclasses give you the option to define abstractions for the open world. Like List, there are many other applicatives implemented in Scalaz like options, tuples, function applications etc. The beauty of this implementation is that you can abstract over them in a uniform way through the power of the Scala type system. Just like List, you can apply <*> over options as well ..

scala> some(10) <*> (some(20) map ((_: Int) * (_: Int)).curried)
res18: Option[Int] = Some(200)

And since all Applicatives can be abstracted over without looking at the exact concrete type, here's one that mixes an option with a function application through <*> ..

scala> some(9) <*> some((_: Int) + 3)
res19: Option[Int] = Some(12)

The Haskell equivalent of this one is ..

Just (+3) <*> Just 9

Scalaz uses two features of Scala to the fullest extent - higher kinds and implicits. The entire design of Scalaz is quite unlike the usual Scala based design that you would encounter elsewhere. Sometimes you will find these implementations quite opaque and verbose. But most of the verbosity comes from the way we encode typeclasses in Scala using implicits. Consider the following definition of map, which is available as a pimp defined in the trait MA ..

sealed trait MA[M[_], A] extends PimpedType[M[A]] {
  import Scalaz._
  //..
  def map[B](f: A => B)(implicit t: Functor[M]): M[B] = //..
  //..
}

map takes a pure function (f: A => B) and can be applied on any type constructor M so long it gets an instance of a Functor[M] in its implicit context. Using the trait we pimp the specific type constructor with the map function.

Here are some examples of using applicatives and functors in Scalaz. For fun I had translated a few examples from Learn You a Haskell for Great Good. I also mention the corresponding Haskell version for each of them ..

// pure (+3) <*> Just 10 << from lyah
10.pure[Option] <*> some((_: Int) + 3) should equal(Some(13))

// pure (+) <*> Just 3 <*> Just 5 << lyah
// Note how pure lifts the function into Option applicative
// scala> p2c.pure[Option]
// res6: Option[(Int) => (Int) => Int] = Some(<function1>)
// scala> p2c
// res7: (Int) => (Int) => Int = <function1>
val p2c = ((_: Int) * (_: Int)).curried
some(5) <*> (some(3) <*> p2c.pure[Option]) should equal(Some(15))

// none if any one is none
some(9) <*> none should equal(none)

// (++) <$> Just "johntra" <*> Just "volta" << lyah
some("volta") <*> (some("johntra") map (((_: String) ++ (_: String)).curried))
should equal(Some("johntravolta"))
// more succinct
some("johntra") |@| (some("volta") apply (++ _) should equal(Some("johntravolta"))
Scalaz is mind bending. It makes you think differently. In this post I have only scratched the surface and talked about a couple of typeclasses. But the only way to learn Scalaz is to run through the codebase. It's dense but if you like functional programming you will get lots of aha! moments going through it.
In the next post I will discuss how I translated part of a domain model of a financial trade system which I wrote in Haskell (Part 1, Part 2 and Part 3) into Scalaz. It has been a fun exercise for me and shows how you can write your Scala code in an applicative style.
## Monday, November 01, 2010
### Domain Modeling in Haskell - Applicative Functors for Expressive Business Rules
In my last post on domain modeling in Haskell, we had seen how to create a factory for creation of trades that creates Trade from an association list. We used monadic lifts (liftM) and monadic apply (ap) to chain our builder that builds up the Trade data. Here's what we did ..
makeTrade alist =
    Trade `liftM` lookup1 "account" alist
          `ap` lookup1 "instrument" alist
          `ap` (read `liftM` (lookup1 "market" alist))
          `ap` lookup1 "ref_no" alist
          `ap` (read `liftM` (lookup1 "unit_price" alist))
          `ap` (read `liftM` (lookup1 "quantity" alist))

lookup1 key alist = case lookup key alist of
    Just (Just s@(_:_)) -> Just s
    _ -> Nothing
Immediately after the post, I got some useful feedback on Twitter suggesting the use of applicatives instead of monads. A Haskell newbie, that I am, this needed some serious explorations into the wonderful world of functors and applicatives. In this post let's explore some of the goodness that applicatives offer, how using applicative style of programming encourages a more functional feel and why you should always use applicatives unless you need the special power that monads offer.
Functors and Applicatives
In Haskell a functor is a typeclass defined as
class Functor f where
fmap :: (a -> b) -> f a -> f b
fmap lifts a pure function into a computational context. For more details on functors, applicatives and all of typeclasses, refer to the Typeclassopedia that Brent Yorgey has written. In the following text, I will be using examples from our domain model of securities trading. After all, I am trying to explore how Haskell can be used to build expressive domain models with a very popular domain at hand.
Consider the association list rates from our domain model in my last post, which stores pairs of tax/fee and the applicable rates as percentage on the principal.
*Main> rates
We would like to increase all rates by 10%. fmap is our friend here ..
*Main> fmap (\(tax, amount) -> (tax, amount * 1.1)) rates
This works since List is an instance of the Functor typeclass. The anonymous function gets lifted into the computational context of the List data type. But the code looks too verbose, since we have to destructure the tuple within the anonymous function and thread the increment logic manually within it. We can increase the level of abstraction by making the tuple itself a functor. In fact it's so in Haskell and the function gets applied to the second component of the tuple. Here's the more idiomatic version ..
*Main> ((1.1*) <$>) <$> rates
Note how we lift the function across 2 levels of functors - a list and within that, a tuple. fmap does the magic! The domain model becomes expressive through the power of Haskell's higher level of abstractions.
Applicatives add more power to functors. While functors lift pure functions, with applicatives you can lift functions from one context into another. Here's how you define the Applicative typeclass.
class Functor f => Applicative f where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
Note pure is the equivalent of a return in monads, while <*> equals ap.
Control.Applicative also defines a helper function <$>, which is basically an infix version of fmap. The idea is to help write functions in applicative style.

(<$>) :: Applicative f => (a -> b) -> f a -> f b
Follow the types and see that f <$> u is the same as pure f <*> u. Note also the following equivalence of fmap and <$> ..
*Main> fmap (+3) [1,2,3,4]
[4,5,6,7]
*Main> (+3) <$> [1,2,3,4]
[4,5,6,7]

Using the applicative style our makeTrade function becomes the following ..

makeTrade :: [(String, Maybe String)] -> Maybe Trade
makeTrade alist =
    Trade <$> lookup1 "account" alist
          <*> lookup1 "instrument" alist
          <*> (read <$> (lookup1 "market" alist))
          <*> lookup1 "ref_no" alist
          <*> (read <$> (lookup1 "unit_price" alist))
          <*> (read <$> (lookup1 "quantity" alist))

Can you figure out how the above invocation works ? As I said before, follow the types .. Trade is a pure function and <$> lifts Trade onto the first invocation of lookup1, which is a Maybe functor. The result is another Maybe, which BTW is an Applicative functor as well. Then the rest of the chain continues through partial application and an applicative lifting from one context to another.
Why choose Applicatives over Monads ?
One reason is that there are more applicatives than monads. Monads are more powerful - using (>>=) :: (Monad m) => m a -> (a -> m b) -> m b you can influence the structure of your overall computation, while with applicatives your structure remains fixed. You only get sequencing of effects with an applicative functor. As an exercise try exploring the differences in the behavior of makeTrade function implemented using monadic lifts and applicatives when lookup1 has some side-effecting operations. Conor McBride and Ross Paterson has a great explanation in their functional pearl paper Applicative Programming with Effects. Applicatives being more in number, you have more options of abstracting your domain model.
In our example domain model, suppose we have the list of tax/fees and the list of rates for each of them. And we would like to build our rates data structure. ZipList applicative comes in handy here .. ZipList is an applicative defined as follows ..
instance Applicative ZipList where
    pure x = ZipList (repeat x)
    ZipList fs <*> ZipList xs = ZipList (zipWith (\f x -> f x) fs xs)

*Main> getZipList $ (,) <$> ZipList [TradeTax, Commission, VAT] <*> ZipList [0.2, 0.15, 0.1]
ZipList is an applicative and NOT a monad.
Another important reason to choose applicatives over monads (but only when possible) is that applicatives compose, monads don't (except for certain pairs). The McBride and Paterson paper has lots of discussions on this.
Finally programs written with applicatives often have a more functional feel than some of the monads with the do notation (that has an intrinsically imperative feel). Have a look at the following snippet which does the classical do-style first and then follows it up with the applicative style using applicative functors.
-- classical imperative IO
notice = do
putStrLn $"Trade " ++ trade ++ forClient -- using applicative style notice = do details <- (++) <$> getTradeStr <*> getClientStr
putStrLn $"Trade " ++ details McBride and Paterson has the final say on how to choose monads or applicatives in your design .. "The moral is this: if you’ve got an Applicative functor, that’s good; if you’ve also got a Monad, that’s even better! And the dual of the moral is this: if you want a Monad, that’s good; if you only want an Applicative functor, that’s even better!" Applicative Functors for Expressive Business Rules As an example from our domain model, we can write the following applicative snippet for calculating the net amount of a trade created using makeTrade .. *Main> let trd = makeTrade [("account", Just "a-123"), ("instrument", Just "IBM"), ("market", Just "Singapore"), ("ref_no", Just "r-123"), ("unit_price", Just "12.50"), ("quantity", Just "200" )] *Main> netAmount <$> enrichWith . taxFees . forTrade <$> trd Just 3625.0 Note how the chain of functions get lifted into trd (Maybe Trade) that's created by makeTrade. This is possible since Maybe is an applicative functor. The beauty of applicative functors is that you can abstract this lifting into any of them. Let's lift the chain of invocation into a list of trades generating a list of net amount values for each of them. Remember List is also another applicative functor in Haskell. For a great introduction to applicatives and functors, go read Learn Yourself a Haskell for Great Good. *Main> (netAmount <$> enrichWith . taxFees . forTrade <$>) <$> [trd2, trd3] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44383305311203003, "perplexity": 601.0079055768996}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607849.21/warc/CC-MAIN-20170524173007-20170524193007-00537.warc.gz"} |
https://groupprops.subwiki.org/w/index.php?title=Direct_factor_is_not_finite-join-closed&oldid=16458 | # Direct factor is not finite-join-closed
This article gives the statement, and possibly proof, of a subgroup property (i.e., direct factor) not satisfying a subgroup metaproperty (i.e., finite-join-closed subgroup property). This also implies that it does not satisfy the subgroup metaproperty: join-closed subgroup property.
## Statement
A join of finitely many direct factors of a group need not be a direct factor. More specifically, it is possible to have a group $G$ and two subgroups $H, K$ of $G$ such that both $H$ and $K$ are direct factors and the join $HK$ is not a direct factor.
## Proof
### An abelian group example
Suppose $C_n$ denotes the cyclic group of order $n$. Define:
$G = C_4 \times C_2$.
Consider the following subgroups:
$H = 0 \times C_2, \qquad K = \{ (2,1), (0,0) \}, L = C_4 \times 0$.
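One can check directly (writing the group operation additively) that $L$ is a complement to both $H$ and $K$:
$H \cap L = \{(0,0)\}, \quad H + L = G, \qquad K \cap L = \{(0,0)\}, \quad K + L = G$.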
Then, both $H$ and $K$ are direct factors of $G$, with a common direct factor complement $L$. On the other hand, we have:
$HK = \{ (2,1), (2,0), (0,1), (0,0) \}$.
This is not a direct factor of $G$, because if a complement exists, it must have order two, but all elements of $G$ outside $HK$ have order four. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7555062770843506, "perplexity": 1211.3186636744124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704832583.88/warc/CC-MAIN-20210127183317-20210127213317-00243.warc.gz"} |
http://lists.gnu.org/archive/html/lilypond-user/2009-06/msg00212.html | lilypond-user
[Top][All Lists]
## Re: Center text over a tie?
From: Matt Boersma Subject: Re: Center text over a tie? Date: Wed, 10 Jun 2009 15:49:59 -0600
On Wed, Jun 10, 2009 at 3:31 PM, Gilles THIBAULT<address@hidden> wrote:
>
>> nineroll = \drummode { sn4:32^~ sn }
>> But to help beginning drummers (like me), the original sheet music
>> puts a small "9" just above the center of the tie
>
> Try that.
>
> %%%%%%%
> #(define ((myCallBack number) grob)
> (grob-interpret-markup grob
> [snip]
That's pretty close! Somehow the tie itself gets shifted left, so the
number 9 collides with the cross-hatches on the note stem. But I
think I understand the Scheme you wrote, more or less. I'm using a
2.13.1 build here, but let me try it again when I'm home with the
released version and see if I can improve that. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9379323720932007, "perplexity": 16525.236813796582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449160.83/warc/CC-MAIN-20151124205409-00146-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://www.investopedia.com/articles/forex/07/profit_loss.asp | When trading the forex market or other markets, we are often told of a common money management strategy that requires that the average profit be more than the average loss per trade. It's easy to assume that such common advice must be true. However, if we take a deeper look at the relationship between profit and loss, it is clear that the "old," commonly held ideas may need to be adjusted.
### Key Takeaways
• Traders often look to the profit/loss ratio - that is, the proportion of the size of winning trades to losers - as a sign of success and profitability.
• A profit/loss ratio in excess of 2-to-1 is often sought-after, but this simple metric can be a bit misleading since some trades are inherently riskier than others.
• Average profit per trade (APPT) is perhaps a better measure of trading skill as it factors in the statistical probability that a trade will be profitable.
## Profit/Loss Ratio
A profit/loss ratio refers to the size of the average profit compared to the size of the average loss per trade. For example, if your expected profit is $900 and your expected loss is $300 for a particular trade, your profit/loss ratio is 3:1 - which is $900 divided by $300.
Many trading books and "gurus" advocate a profit/loss ratio of at least 2:1 or 3:1, which means that for every $200 or $300 you make per trade, your potential loss should be capped at $100. At first glance, most people would agree with this recommendation. After all, shouldn't any potential loss be kept as small as possible and any potential profit be as large as possible? The answer is, not always. In fact, this common piece of advice can be misleading, and can cause harm to your trading account.

The blanket advice of having a profit/loss ratio of at least 2:1 or 3:1 per trade is over-simplistic because it does not take into account the practical realities of the forex market (or any other markets), the individual's trading style and the individual's average profitability per trade (APPT) factor, which is also referred to as statistical expectancy.

## The Importance of Average Profitability Per Trade

Average profitability per trade (APPT) basically refers to the average amount you can expect to win or lose per trade. Most people are so focused on either balancing their profit/loss ratios or on the accuracy of their trading approach that they are unaware that a bigger picture exists: Your trading performance depends largely on your APPT.

This is the formula for average profitability per trade:

\begin{aligned}
&APPT\ =\ (PW \times AW)\ -\ (PL \times AL)\\
&\textbf{where:}\\
&PW\ =\ \text{Probability of win}\\
&AW\ =\ \text{Average win}\\
&PL\ =\ \text{Probability of loss}\\
&AL\ =\ \text{Average loss}
\end{aligned}

Let's explore the APPT of the following hypothetical scenarios:

## Scenario A:

Let's say that out of 10 trades you place, you profit on three of them and you realize a loss on seven. Your probability of a win is therefore 30%, or 0.3, while your probability of loss is 70%, or 0.7. Your average winning trade makes $600 and your average loss is $300. In this scenario, the APPT is:

$(0.3 \times \$600) - (0.7 \times \$300) = -\$30$

As you can see, the APPT is a negative number, which means that for every trade you place, you are likely to lose $30. That's a losing proposition!
Even though the profit/loss ratio is 2:1, this trading approach produces winning trades only 30% of the time, which negates the supposed benefit of having a 2:1 profit/loss ratio.
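For readers who like to check the arithmetic, here is a minimal Python sketch of the APPT formula applied to Scenario A; the function name and layout are my own illustration, not something from a trading library.

```python
def appt(prob_win, avg_win, prob_loss, avg_loss):
    """Average profitability per trade: (PW * AW) - (PL * AL)."""
    return prob_win * avg_win - prob_loss * avg_loss

# Scenario A: 3 winners and 7 losers out of 10 trades,
# average win $600, average loss $300.
print(appt(0.3, 600, 0.7, 300))   # -30.0 -> expect to lose $30 per trade
```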
## Scenario B:
Now let's explore the APPT of a trading approach that has a profit/loss ratio of 1:3, but has more winning trades than losing ones. Let's say out of the 10 trades you place, you make a profit on eight of them, and you realize a loss on two trades.
Here is the APPT:
$(0.8 \times \$100) - (0.2 \times \$300) = \$20$
In this case, even though this trading approach has a profit/loss ratio of 1:3, the APPT is positive, which means you can be profitable over time. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9193485975265503, "perplexity": 2116.7060293580003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00179.warc.gz"} |
https://casmusings.wordpress.com/tag/square/ | # Tag Archives: square
## Squares and Octagons, A compilation
My last post detailed my much-too-long trigonometric proof of why the octagon formed by connecting the midpoints and vertices of the edges of a square into an 8-pointed star is always 1/6 of the area of the original square.
My proof used trigonometry, and responses to the post on Twitter and on my ‘blog showed many cool variations. Dave Radcliffe thought it would be cool to have a compilation of all of the different approaches. I offer that here in the order they were shared with me.
Method 1: My use of trigonometry in a square. See my original post.
Method 2: Using medians in a rectangle from Tatiana Yudovina, a colleague at Hawken School.
Below, the Area(axb rectangle) = ab = 16 blue triangles, and
Area(octagon) = 4 blue triangles – 2 red deltas.
Now look at the two green, similar triangles. They are similar with ratio 1/2, making
Area(red delta) = $\displaystyle \frac{b}{4} \cdot \frac{a}{6} = \frac{ab}{24}$, and
Area(blue triangle) = $\displaystyle \frac{1}{16} ab$
So, Area(octagon) = $\displaystyle 4 \cdot \frac{ab}{16}-2 \cdot \frac{ab}{24}=\frac{1}{6}ab$.
QED
Method 3: Using differences in triangle areas in a square (but easily extended to rectangles)from @Five_Triangles (‘blog here).
Full solution here.
Method 4: Very clever shorter solution using triangle area similarity in a square also from @Five_Triangles (‘blog here).
Full second solution here.
Method 5: Great option Using dilated kitesfrom Dave Radcliffe posting as @daveinstpaul.
Full pdf and proof here.
Method 6: Use fact that triangle medians trisect each other from Mike Lawler posting as @mikeandallie.
Tweet of solution here.
Method 7: Use a coordinate proof on a specific square from Steve Ingrassia, a colleague at Hawken School. Not a quick proof like some of the geometric solutions, but it’s definitely different than the others.
If students know the formula for finding the area of any polygon using its coordinates, then they can prove this result very simply with nothing more than simple algebra 1 techniques. No trig is required.
The area of a polygon with vertices (in either clockwise or counterclockwise order, starting at any vertex) of $(x_1, y_1)$, $(x_2, y_2)$, …, $(x_n, y_n)$ is

$\displaystyle Area = \left| \frac{(x_1y_2-x_2y_1)+(x_2y_3-x_3y_2)+...+(x_{n-1}y_n-x_ny_{n-1})+(x_ny_1-x_1y_n)}{2} \right|$
Use a 2×2 square situated with vertices at (0,0), (0,2), (2,2), and (2,0). Construct segments connecting each vertex with the midpoints of the sides of the square, and find the equations of the associated lines.
• L1 (connecting (0,0) and (2,1): y = x/2
• L2 (connecting (0,0) and (1,2): y=2x
• L3 (connecting (0,1) and (2,0): y= -x/2 + 1
• L4 (connecting (0,1) and (2,2): y= x/2 + 1
• L5 (connecting (0,2) and (1,0): y = -2x + 2
• L6 (connecting (0,2) and (2,1): y= -x/2 + 2
• L7 (connecting (1,2) and (2,0): y = -2x + 4
• L8 (connecting (2,2) and (1,0): y = 2x – 2
The 8 vertices of the octagon come at pairwise intersections of some of these lines, which can be found with simple substitution:
• Vertex 1 is at the intersection of L1 and L3: (1, 1/2)
• Vertex 2 is at the intersection of L3 and L5: (2/3, 2/3)
• Vertex 3 is at the intersection of L2 and L5: (1/2, 1)
• Vertex 4 is at the intersection of L2 and L4: (2/3, 4/3)
• Vertex 5 is at the intersection of L4 and L6: (1, 3/2)
• Vertex 6 is at the intersection of L6 and L7: (4/3, 4/3)
• Vertex 7 is at the intersection of L7 and L8: (3/2, 1)
• Vertex 8 is at the intersection of L1 and L8: (4/3, 2/3)
Using the coordinates of these 8 vertices in the formula for the area of the octagon, gives
$\displaystyle \frac{ \left| 1/3 +1/3+0+(-1/3)+(-2/3)+(-2/3)+(-1/3)+0 \right|}{2} = \frac{2}{3}$
Since the area of the original square was 4, the area of the octagon is exactly 1/6th of the area of the square.
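If you'd like to verify Steve's coordinate computation numerically, here is a short Python sketch of the same shoelace formula applied to the eight vertices above; it is only a checking aid, not part of his proof.

```python
from fractions import Fraction as F

# Octagon vertices, in order, from the 2x2 square construction above.
verts = [(F(1), F(1, 2)), (F(2, 3), F(2, 3)), (F(1, 2), F(1)), (F(2, 3), F(4, 3)),
         (F(1), F(3, 2)), (F(4, 3), F(4, 3)), (F(3, 2), F(1)), (F(4, 3), F(2, 3))]

def shoelace_area(pts):
    """Polygon area via the shoelace formula, including the wrap-around term."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

area = shoelace_area(verts)
print(area)          # 2/3
print(F(4) / area)   # 6 -> the square is 6 times the octagon
```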
## Squares and Octagons
Following is a really fun problem Tom Reardon showed my department last May as he led us through some TI-Nspire CAS training. Following the introduction of the problem, I offer a mea culpa, a proof, and an extension.
THE PROBLEM:
Take any square and construct midpoints on all four sides.
Connect the four midpoints and four vertices to create a continuous 8-pointed star as shown below. The interior of the star is an octagon. Construct this yourself using your choice of dynamic geometry software and vary the size of the square.
Compare the areas of the external square and the internal octagon.
You should find that the area of the original square is always 6 times the area of the octagon.
I thought that was pretty cool. Then I started to play.
MINOR OBSERVATIONS:
Using my Nspire, I measured the sides of the octagon and found it to be equilateral.
As an extension of Tom’s original problem statement, I wondered if the constant square:octagon ratio occurred in any other quadrilaterals. I found the external quadrilateral was also six times the area of the internal octagon for parallelograms, but not for any more general quadrilaterals. Tapping my understanding of the quadrilateral hierarchy, that means the property also holds for rectangles and rhombi.
MEA CULPA:
Math teachers always warn students to never, ever assume what they haven’t proven. Unfortunately, my initial exploration of this problem was significantly hampered by just such an assumption. I obviously know better (and was reminded afterwards that Tom actually had told us that the octagon was not equiangular–but like many students, I hadn’t listened). After creating the original octagon, measuring its sides and finding them all equivalent, I errantly assumed the octagon was regular. That isn’t true.
That false assumption created flaws in my proof and generalizations. I discovered my error when none of my proof attempts worked out, and I eventually threw everything out and started over. I knew better than to assume. But I persevered, discovered my error through back-tracking, and eventually overcame. That’s what I really hope my students learn.
THE REAL PROOF:
Goal: Prove that the area of the original square is always 6 times the area of the internal octagon.
Assume the side length of a given square is $2x$, making its area $4x^2$.
The octagon’s area obviously is more complicated. While it is not regular, the square’s symmetry guarantees that it can be decomposed into four congruent kites in two different ways. Kite AFGH below is one such kite.
Therefore, the area of the octagon is 4 times the area of AFGH. One way to express the area of any kite is $\frac{1}{2}D_1\cdot D_2$, where $D_1$ and $D_2$ are the kite’s diagonals. If I can determine the lengths of $\overline{AG}$ and $\overline {FH}$, then I will know the area of AFGH and thereby the ratio of the area of the square to the area of the octagon.
The diagonals of every kite are perpendicular, and the diagonal between a kite’s vertices connecting its non-congruent sides is bisected by the kite’s other diagonal. In terms of AFGH, that means $\overline{AG}$ is the perpendicular bisector of $\overline{FH}$.
The square and octagon are concentric at point A, and point E is the midpoint of $\overline{BC}$, so $\Delta BAC$ is isosceles with vertex A, and $\overline{AE}$ is the perpendicular bisector of $\overline{BC}$.
That makes right triangles $\Delta BEF \sim \Delta BCD$. Because $\displaystyle BE=\frac{1}{2} BC$, similarity gives $\displaystyle AF=FE=\frac{1}{2} DC=\frac{x}{2}$. I know one side of the kite.
Let point I be the intersection of the diagonals of AFGH. $\Delta BEA$ is right isosceles, so $\Delta AIF$ is, too, with $m\angle{IAF}=45$ degrees. With $\displaystyle AF=\frac{x}{2}$, the Pythagorean Theorem gives $\displaystyle IF=\frac{x}{2\sqrt{2}}$. Point I is the midpoint of $\overline{FH}$, so $\displaystyle FH=\frac{x}{\sqrt{2}}$. One kite diagonal is accomplished.
Construct $\overline{JF} \parallel \overline{BC}$. Assuming degree angle measures, if $m\angle{FBC}=m\angle{FCB}=\theta$, then $m\angle{GFJ}=\theta$ and $m\angle{AFG}=90-\theta$. Knowing two angles of $\Delta AGF$ gives the third: $m\angle{AGF}=45+\theta$.
I need the length of the kite’s other diagonal, $\overline{AG}$, and the Law of Sines gives
$\displaystyle \frac{AG}{sin(90-\theta )}=\frac{\frac{x}{2}}{sin(45+\theta )}$, or
$\displaystyle AG=\frac{x \cdot sin(90-\theta )}{2sin(45+\theta )}$.
Expanding using cofunction and angle sum identities gives
$\displaystyle AG=\frac{x \cdot sin(90-\theta )}{2sin(45+\theta )}=\frac{x \cdot cos(\theta )}{2 \cdot \left( sin(45)cos(\theta ) +cos(45)sin( \theta) \right)}=\frac{x \cdot cos(\theta )}{\sqrt{2} \cdot \left( cos(\theta ) +sin( \theta) \right)}$
From right $\Delta BCD$, I also know $\displaystyle sin(\theta )=\frac{1}{\sqrt{5}}$ and $\displaystyle cos(\theta)=\frac{2}{\sqrt{5}}$. Therefore, $\displaystyle AG=\frac{x\sqrt{2}}{3}$, and the kite’s second diagonal is now known.
So, the octagon’s area is four times the kite’s area, or
$\displaystyle 4\left( \frac{1}{2} D_1 \cdot D_2 \right) = 2FH \cdot AG = 2 \cdot \frac{x}{\sqrt{2}} \cdot \frac{x\sqrt{2}}{3} = \frac{2}{3}x^2$
Therefore, the ratio of the area of the square to the area of its octagon is
$\displaystyle \frac{area_{square}}{area_{octagon}} = \frac{4x^2}{\frac{2}{3}x^2}=6$.
QED
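As a quick numerical sanity check of the result (separate from the proof itself), the two kite diagonals and the resulting ratio can be evaluated in a few lines of Python for a particular value of x, say x = 1:

```python
import math

x = 1.0
FH = x / math.sqrt(2)          # first kite diagonal
AG = x * math.sqrt(2) / 3      # second kite diagonal
octagon = 4 * (0.5 * FH * AG)  # four congruent kites
square = 4 * x**2              # side length 2x
print(square / octagon)        # 6.0
```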
EXTENSIONS:
This was so nice, I reasoned that it couldn’t be an isolated result.
I have extended and proved that the result is true for other modulo-3 stars like the 8-pointed star in the square for any n-gon. I’ll share that very soon in another post.
I proved the result above, but I wonder if it can be done without resorting to trigonometric identities. Everything else is simple geometry. I also wonder if there are other more elegant approaches.
Finally, I assume there are other constant ratios for other modulo stars inside larger n-gons, but I haven’t explored that idea. Anyone?
## Two Squares, Two Triangles, and some Circles
Here’s another fun twist on another fun problem from the Five Triangles ‘blog. A month ago, this was posted.
What I find cool about so many of the Five Triangles problems is that most permit multiple solutions. I also like that several Five Triangles problems initially appear to not have enough information. This one is no different until you consider the implications of the squares.
I’ve identified three unique ways to approach this problem. I’d love to hear if any of you see any others. Here are my solutions in the order I saw them. The third is the shortest, but all offer unique insights.
Method 1: Law of Cosines
This solution goes far beyond the intended middle school focus of the problem, but it is what I saw first. Sometimes, knowing more gives you additional insights.
Because DEF is a line and EF is a diagonal of a square, I know $m\angle CEF=45^{\circ}$, and therefore $m\angle CED=135^{\circ}$. $\Delta CEF$ is a 45-45-90 triangle with hypotenuse 6, so its leg, CE has measure $\frac{6}{\sqrt{2}}=3\sqrt{2}$. Knowing two sides and an angle in $\Delta DEC$ means I could apply the Law of Cosines.
$DC^2 = 4^2 + (3\sqrt{2})^2 - 2\cdot 4 \cdot (3\sqrt{2}) \cdot \cos(135^{\circ})=58$
Because I’m looking for the area of ABCD, and that is equivalent to $DC^2$, I don’t need to solve for the length of DC to know the area I seek is 58.
Method 2: Use Technology
I doubt many would want to solve using this approach, but if you don’t see (or know) trigonometry, you could build a solution from scratch if you are fluent with dynamic geometry software (GeoGebra, TI-Nspire, GSP). My comfort with this made finding the solution via construction pretty straight-forward.
1. Construct segment EF with fixed length 6.
2. Build square CEGF with diagonal EF. (This can be done several ways. I was in a transformations mood, so I rotated EF $90^{\circ}$ to get the other endpoints.)
3. Draw line EF and then circle with radius 4 through point E.
4. Mark point D as the intersection of circle and line EF outside CEGF .
5. Draw a segment through points and C. (The square of the length of CD is the answer, but I decided to go one more step.)
6. Construct square ABCD with sides congruent to CD. (Again, there are several ways to do this. I left my construction marks visible in my construction below.)
7. Compute the area of ABCD.
Here is my final GeoGebra construction.
Method 3: The Pythagorean Theorem
Sometimes, changing a problem can make it much easier to solve.
As soon as I saw the problem, I forwarded it to some colleagues at my school. Tatiana wrote back with a quick solution. In the original image, draw diagonal, CG, of square CEGF. Because the diagonals of a square perpendicularly bisect each other, that creates right $\Delta DHC$ with legs 3 and 7. That means the square of the hypotenuse of $\Delta DHC$ (and therefore the area of the square) can be found via the Pythagorean Theorem.
$DC^2 = 7^2+3^2 = 58$
Method 4: Coordinate Geometry
OK, I said three solutions, and perhaps this approach is completely redundant given the Pythagorean Theorem in the last approach, but you could also find a solution using coordinate geometry.
Because the diagonals of a square are perpendicular, you could construct ECFG with its center at the origin. I placed point C at (0,3) and point E at (3,0). That means point D is at (7,0), making the solution to the problem the square of the length of the segment from (0,3) to (7,0). Obviously, that can be done with the Pythagorean Theorem, but in the image below, I computed number i in the upper left corner of this GeoGebra window as the square of the length of that segment.
Fun.
## Area 10 Squares – Proof & Additional Musings
Additional musings on the problem of Area 10 Squares:
Thanks, again to Dave Gale‘s inspirations and comments on my initial post. For some initial clarifications, what I was asking in Question 3 was whether these square areas ultimately can all be found after a certain undetermined point, thereby creating a largest area that could not be drawn on a square grid. I’m now convinced that the answer to this is a resounding NO–there is no area after which all integral square areas can be constructed using square grid paper. This is because there is no largest un-constructable area (proof below). This opens a new question.
Question 4:
Is there some type of 2-dimensional grid paper which does allow the construction of all square areas?
The 3-dimensional version of this question has been asked previously, and this year in the College Math Journal, Rick Parris of Exeter has "proved that if a cube has all of its vertices in $\mathbb{Z}^3$ then the edge length is an integer."
Dave’s proposition above about determining whether an area 112 (or any other) can be made is very interesting. (BTW, 112 cannot be made.) I don’t have any thoughts at present about how to approach the feasibility of a random area. As a result of my searches, I still suspect (but haven’t proven) that non-perfect square multiples of 3 that aren’t multiples of pre-existing squares seem to be completely absent. This feels like a number theory question … not my area of expertise.
Whether or not you decide to read the following proof for why there are an infinite number of impossible-to-draw square areas using square grids, I think one more very interesting question is now raised.
Question 5:
Like the prime numbers, there is an infinite number of impossible-to-draw square areas. Is there a pattern to these impossible areas? (Remember that the pattern of the primes is one of the great unanswered questions in all of mathematics.)
THE PROOF
My proof does not feel the most elegant to me. But I do like how it proves the infinite nature of these numbers without ever looking at the numbers themselves. It works by showing that there are far more integers than there are ways to arrange them on a square grid, basically establishing that there is simply not enough room for all of the integers forcing some to be impossible. I don’t know the formal mathematics name for this principle, but I think of it as a reverse Pigeonhole Principle. Rather than having more pigeons than holes (guaranteeing duplication), in this case, the number of holes (numbers available to be found) grows faster than the number of pigeons (the areas of squares that can actually be determined on a square grid), guaranteeing that there will always be open holes (areas of squares that cannot be determined on using a square grid).
This exploration and proof far exceeds most (all?) textbooks, but the individual steps require nothing more than the ability to write an equation for an exponential function and find the sum a finite arithmetic sequence. The mathematics used here is clearly within the realm of what high school students CAN do. So will we allow them to explore, discover, and prove mathematics outside our formal curricula? I’m not saying that students should do THIS problem (although they should be encouraged in this direction if interested), but they must be encouraged to do something real to them.
Now on to a proof for why there must be an infinite number of impossible-to-draw square areas on a square grid.
This chart shows all possible areas that can be formed on a square grid. The level 0 squares are the horizontal squares discussed earlier. It is lower left-upper right symmetric (as noted on Dave’s ‘blog), so only the upper triangle is shown.
From this, the following can be counted.
Level 1 – Areas 1-9: 6 of 9 possibilities found (yellow)
Level 2 – Areas 10-99: 40 of 90 possibilities found (orange)
Level 3 – Areas 100-999: 342 of 900 possibilities found (blue)
The percentage of possible numbers appears to be declining and is always less than the possible number of areas. But a scant handful of data points does not always definitively describe a pattern.
Determining the total number of possible areas:
Level 1 has 9 single-digit areas. Level 2 has 90 two-digit areas, and Level 3 has 900 three-digit areas. By this pattern, Level M has $9 \cdot 10^{M-1}$ M-digit areas. This is the number of holes that need to be filled by the squares we can find on the square grid.
Determining an upper bound for the number of areas that can be accommodated on a square grid:
Notice that if a horizontally-oriented square has area of Level M, then every tilted square in its column has area AT LEAST of Level M. Also, the last column that contains any Level M areas is column where floor is the floor function.
In the chart, Column 1 contains 2 areas, and every Column N contains exactly (N+1) areas. The total number of areas represented for Columns 1 through N is an arithmetic sequence, so an upper bound for the number of distinct square areas represented in Columns 1 to N (assuming no duplication, which of course there is) is .
The last column that contains any Level M areas has column number . Assuming all of the entries in the data chart up to column are Level M (another overestimate if is not an integer), then there are
maximum area values to fill the Level M area holes. This is an extreme over-estimate as it ignores the fact that this chart also contains all square areas from Level 1 through Level (M-1), and it also contains a few squares which can be determined multiple ways (e.g., area 25 squares).
Conclusion:
Both of these are dominated by base-10 exponential functions, but the number of areas to be found has a coefficient of 9 and the number of squares that can be found has coefficient 1/2. Further, the number of squares that can be found is decreased by an exponential function of base , accounting in part for the decreasing percentage of found areas noted in the data chart. That is, the number of possible areas grows faster than the number of areas that actually can be created on square grid paper.
While this proof does not say WHICH areas are possible (a great source for further questions and investigation!), it does show that the number of areas of squares impossible to find using a square grid grows without bound. Therefore, there is no largest area possible. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 58, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7527819871902466, "perplexity": 958.0472399957872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00627.warc.gz"} |
https://www.amtoolbox.org/amt-1.1.0/doc/models/verhulst2018.php | # THE AUDITORY MODELING TOOLBOX
Applies to version: 1.1.0
# VERHULST2018 - Cochlear transmission-line model (improved, incl. brainstem)
## Usage
output = verhulst2018(insig,fs,fc_flag)
[output, cf] = verhulst2018(insig,fs,fc_flag, ...)
## Input parameters
insig : the input signal to be processed. Each column is processed in parallel, so it is possible to run several simulations in parallel.
fs : sampling rate (Hz)
fc_flag : list of frequencies specifying the probe positions along the basilar membrane, or 'all' to probe all 1000 cochlear sections, or 'abr' to probe 401 locations between 112 and 12000 Hz.
numH : number of high-SR fibers. Must be larger than zero.
numM : number of medium-SR fibers. Can be zero.
numL : number of low-SR fibers. Can be zero.
'ic' : calculate the IC responses
## Output parameters
cf : Center frequencies (Hz) of the probed basilar membrane sections.
output : Structure with the following fields:
  fs_an : Sample rate (Hz) of the output.
  fs_abr : Sample rate (Hz) of the brainstem sections (IC, CN, W1, W3, and W5).
  w1 : Wave 1, output of the AN model.
  w3 : Wave 3, output of the CN model.
  w5 : Wave 5, output of the IC model.
  an_summed : Sum of HSR, MSR and LSR responses (per channel) and the input to the CN (modelled by verhulst2015_cn). Provided by default. Can be disabled by the flag no_an.
  ihc : IHC receptor potential. Provided by default. Can be disabled by the flag no_ihc.
  cn : Detailed output of the CN. Provided by default. Can be disabled by the flag no_cn.
  ic : Detailed output of the IC. Provided by default. Can be disabled by the flag no_ic.
## Description
The output can optionally provide the following information:
anfH : responses of the high-SR fibers. Optional, only when called with flag anfH.
anfM : responses of the medium-SR fibers. Optional, only when called with flag anfM.
anfL : responses of the low-SR fibers. Optional, only when called with flag anfL.
v : velocity of the basilar membrane sections [time section channel] and input to the AN (modelled by verhulst2018_ihctransduction). Optional, provided only when called with flag v.
y : displacement of the basilar membrane sections [time section channel]. Can be disabled by the flag no_y.
oae : otoacoustic emission as sound pressure at the middle ear. Can be disabled by the flag oae.
## License:
This model is licensed under the UGent Academic License. Further usage details are provided in the UGent Academic License which can be found in the AMT directory "licences" and at <https://raw.githubusercontent.com/HearingTechnology/Verhulstetal2018Model/master/license.txt>.
## References:
S. Verhulst, A. Altoè, and V. Vasilkov. Functional modeling of the human auditory brainstem response to broadband stimulation. hearingresearch, 360:55--75, 2018. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4454224705696106, "perplexity": 10600.728398572628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00803.warc.gz"} |
https://issarice.com/material-implication | # Material implication
This document outlines some of the common explanations surrounding the material implication connective in propositional logic. The connective is most commonly represented using an arrow such as → or ⊃ (the latter common in older texts), and when placed between two propositions as in "P → Q", is supposed to mean "if P, then Q". Moreover, this connective is defined as a truth function as follows.
• P → Q is true when P is false or when Q is true
• P → Q is false when P is true and Q is false
In other words, P → Q is equivalent to writing ¬P ∨ Q. A problem arises when working off this definition, however, because the conditions for truth given above do not correspond to the natural language sense of "if P, then Q". The purpose of this document then is to explain why the material implication is defined as it is, and to aid in an intuitive understanding of the connective.
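To make the truth-functional definition concrete, the following small Python sketch (my own illustration, not part of the original discussion) tabulates the four cases and confirms that the material conditional agrees with "not P or Q":

```python
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q))
# True  True  -> True
# True  False -> False
# False True  -> True
# False False -> True
```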
## Two Naive Examples
Consider the conditional proposition “if one is in Berkeley, then one is in California”. We wish to know when this proposition is false and when it is true. To analyze this proposition, we consider various cases:
• Consider first when one is in Berkeley and one is in California. Is “if one is in Berkeley, then one is in California” true? It seems so, since this is exactly what the proposition claims.
• Consider next when one is in Berkeley but not in California. Since Berkeley is located within California, this is impossible; however, if it were true, it would certainly be an anomaly. We might thus reason that in this case, the conditional proposition would be false.
• Finally consider when one is not in Berkeley (regardless of whether one is in California; one could be in Los Angeles, in which case one would be outside of Berkeley but still in California, or one could be in London, in which case one would be outside of both Berkeley and California). In this case, one could still entertain the idea of if one is in Berkeley; i.e., one can assume that one is in Berkeley, and conclude that in this case, one would also be in California.
From the above analysis, we might conclude that the conditional proposition indeed is the same as the material conditional; the conditional proposition “if one is in Berkeley, then one is in California” was seen to be false only when the antecedent is true and the consequent is false, and true otherwise. As the title of this section implies, however, this analysis is naive in that it works in favor of human intuition, and conveniently hides problems with the approach. The next example shows how a similar example can work against human intuition.
Consider the conditional proposition “if one is in Berkeley, then one is in Massachusetts”. We present a similar analysis of this proposition:
• Consider when one is in Berkeley and one is in Massachusetts. We might say this is precisely what the proposition claims, and therefore it is true. We might also say that this situation is impossible, and so the proposition is false. We will give the benefit of the doubt to the proposition and say it is true in this case.
• Consider next when one is in Berkeley but not in Massachusetts. The situation is possible, but it directly contradicts what is claimed by the proposition, so we can conclude that it is false.
• Finally consider when one is not in Berkeley. One can still proceed, as in the last example, by thinking if one is in Berkeley, and asking whether one is in Massachusetts in that case. One certainly is not in Massachusetts even if one assumes one is in Berkeley, so we conclude the proposition to be false in this case.
From our preceding analysis, we obtain a new truth table for the conditional proposition:
P Q "if P, then Q"
T T T
T F F
F T F
F F F
This seems strange, and indeed especially strange if we want to use the conditional proposition in a mathematical context—for this, as mentioned earlier, the proposition should only depend on properties of a formal system, presumably only the truth values of the constituent propositions, i.e., the truth values of and . We presumably also want the conditional proposition to have a systematic method of evaluating the truth; one thing to note is that the material conditional already achieves this. To attempt to resolve this issue, we will now move on and consider various other analyses. We may conclude this section by saying that perhaps our examples were not well chosen; better examples could have been found within pure mathematics. This remark, however, simply cements the title for this section.
## Tarski
The divergency between the usage of the phrase “if…, then…” in ordinary language and its usage in mathematical logic has been at the root of lengthy and even passionate discussions,—in which, by the way, professional logicians took only a minor part. (It is perhaps surprising, that considerably less attention was paid to the analogous divergency in the case of the word “or”.) It has been objected that logicians, on account of their adoption of the concept of material implication, arrived at paradoxes and even at plain nonsense. This has resulted in an outcry for a reform of logic, and in particular, for bringing about a far-reaching rapprochement between logic and ordinary language with regard to the use of implication.
It would be hard to grant that these criticisms are well founded. There is no phrase in ordinary language which has a precisely determined meaning. It would scarcely be possible to find two people who would use every word with exactly the same meaning, and even in the language of a single person the meaning of a given word may vary from one period of the person’s life to another. Moreover, the meaning of words of everyday language is usually very complicated; it depends not only on the external form of the word, but also on the circumstances in which it is uttered, and sometimes even on subjective psychological factors. If a scientist wants to transfer a concept from everyday life into a science and to establish general laws concerning this concept, he (or she) must always make its content clearer, more precise, and simpler, and free it from inessential attributes; it does not matter here whether he is a logician who is concerned with the phrase “if…, then…”, or, for instance, a physicist wanting to establish the exact meaning of the word “metal”. In whatever way the scientist realizes his task, the resulting usage of the term will deviate more or less from the practice of everyday language. If, however, he states explicitly in what sense he decides to use the term, and if afterwards he acts always in accordance with this decision, then nobody will be in a position to object, or to argue that his procedure leads to nonsensical results.
## Ramsey
The following is taken from the Stanford Encyclopedia of Philosophy1 and written by Ramsey. Note that the notation for negation has been changed.
If two people are arguing 'If p, then q?' and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q; so that in a sense 'If p, q' and 'If p, ¬q' are contradictories. We can say that they are fixing their degree of belief in q given p. If p turns out false, these degrees of belief are rendered void. If either party believes ¬p for certain, the question ceases to mean anything to him except as a question about what follows from certain laws or hypotheses.
## The number analogy
[sec:The number analogy]
One can also construct the following analogy. Let T and F be and respectively; so to each proposition a number is assigned. Then define ; define ; define ; define and finally define
### Furthering the analogy
[sub:Furthering the analogy] The following is originally from a Google Buzz that Terence Tao posted2. The conditional proposition “if , then ” is the same as “ is at most as true as ” and “ is at least as true as ”. This can easily be seen using the number analogy and some logical equivalences. First note that because for any and , each produces the same digit. (Note in particular that each only produces a when and .) Consider what is stating. It says that is the same as the lesser of and ; so even if is large, it cannot exceed ; i.e., is at most .
Now note that . We see that says is the same as the greater of and ; so even if is small, it cannot be smaller than ; i.e., is at least .
In fact, something exactly like this happens in fuzzy logic.
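One way to make the "at most as true as" reading concrete: encode T and F as 1 and 0, in which case the conditional comes out true exactly when the antecedent's value does not exceed the consequent's. A minimal sketch (my own illustration):

```python
vals = [0, 1]  # F, T
for p in vals:
    for q in vals:
        conditional = max(1 - p, q)   # "not p or q" in 0/1 arithmetic
        print(p, q, conditional, conditional == int(p <= q))
# The final column is True in all four cases.
```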
## Subset analogy
It’s possible to think of as . Indeed, we need only define , so that we have the correspondence:
T T T T
T F F F
F T T T
F F T T
In this sense, the vacuous implication F → P for any proposition P becomes the similarly vacuous ∅ ⊆ A for any set A; see Halmos's explanation for the latter.
## Halmos
As Halmos states in his Naive Set Theory (page 8),
The empty set is a subset of every set, or, in other words, for every . To establish this, we might argue as follows. It is to be proved that every element in belongs to ; since there are no elements in , the condition is automatically fulfilled. The reasoning is correct but perhaps unsatisfying. Since it is a typical example of a frequent phenomenon, a condition holding in the “vacuous” sense, a word of advice to the inexperienced reader might be in order. To prove that something is true about the empty set, prove that it cannot be false. How, for instance, could it be false that ? It could be false only if had an element that did not belong to . Since has no elements at all, this is absurd. Conclusion: is not false, and therefore for every . | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9250115752220154, "perplexity": 453.54496385117324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00080.warc.gz"} |
https://mcl.mse.utah.edu/determination-of-crystallinity-in-polymers/ | *All of the information for this post was gathered from the sources listed below. Please use them for further reading on the subject.
The physical properties of a polymeric sample, such as density, ductility, and yield strength, are all highly dependent on the amount of crystalline material (often called percent crystallinity) in the sample. There are two principal methods by which we are able to determine the percent crystallinity of a sample in the MCL: density measurements, and differential scanning calorimetry (DSC). Both of these methods compare the unknown sample to a fully crystalline or amorphous sample of the same polymer, and require the user to have some reference data. Additionally, DSC is a destructive technique, meaning that the provided sample should be one which is not necessary for further testing.
The most common method for determining the percent crystallinity of a sample involves comparing the density of the sample to the fully crystalline and fully amorphous densities. The MCL has an Archimedes Density Measurement tool which can be used in conjunction with a regular scale in order to determine the density of a sample. The figure below shows the proper setup for the attachment.
In order to measure the density using this apparatus, first place your sample on the top basket and record the value. While the sample is still on the top basket, tare the scale and then move the sample to the bottom basket such that the sample is fully submerged in water and record that value. Finally, use a thermometer to determine the temperature of the water so you can find an accurate value for its density. To calculate the density of your sample, take the first value (dry mass), divide by the second value (buoyancy) and multiply the result by the density of the water. It is important to note that this process does not work if your sample floats or dissolves in water. In that case, a different liquid could be used. Once the density of your sample has been determined, use the equation below to find the weight fraction crystallinity of your sample.
$\chi_c = \frac{(\rho-\rho_a)\rho_c}{(\rho_c-\rho_a)\rho}$
• χc is the weight fraction of crystalline material in your sample
• ρ is the total density of your sample
• ρa is the density of the fully amorphous polymer
• ρc is the density of the fully crystalline polymer
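As a convenience, the density-based calculation is easy to script. The sketch below is only an illustration; the reference densities shown are placeholders that you would replace with literature values for your particular polymer.

```python
def weight_fraction_crystallinity(rho, rho_a, rho_c):
    """Weight-fraction crystallinity from sample, amorphous, and crystalline densities."""
    return (rho - rho_a) * rho_c / ((rho_c - rho_a) * rho)

# Hypothetical numbers for illustration only.
print(weight_fraction_crystallinity(rho=0.94, rho_a=0.85, rho_c=1.00))
```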
It is also possible to determine the percent crystallinity of a polymer using DSC measurements. In this case, you will need to know the specific heat capacities of purely crystalline and purely amorphous samples of the polymer in order to complete the calculations. It is also helpful to know approximately what the melting temperature of your sample will be. In order to find percent crystallinity, run a DSC scan of the sample from room temperature to a temperature above the melting point. The result of this scan will be a curve with a few humps or peaks. There are three main pieces of data from this curve. The first (ΔH12) will the the integral across the whole temperature range, from the starting to the ending temperature. The second (ΔHf,m) will be the enthalpy of fusion at the melting point, determined by the area under the melting peak. The third (Tm) will be the melting point temperature, usually reported as the onset of the melting peak.
Along with these three pieces of data, you will also need to perform a few calculations. First, find ΔH21 using the amorphous sample’s specific heat capacity.
$\Delta H_{21} = \int_{T_2}^{T_1}C_{p,a} dT$
Next, find the enthalpy of fusion at room temperature using the enthalpy of fusion at the melting point, the difference between the amorphous and crystalline specific heat capacities, and the melting temperature.
$\Delta H_{f,T1} = \Delta H_{f,m}-\int_{T_1}^{T_m}\left(C_{p,a}-C_{p,c}\right) dT$
Once those values are calculated, they can be plugged into a final equation in order to determine the percent crystallinity by weight at room temperature.
$\chi_{c,T1} = \frac{\Delta H_{12}+\Delta H_{21}}{\Delta H_{f,T1}}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6952903866767883, "perplexity": 545.196453459922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00798.warc.gz"} |
http://leancrew.com/all-this/2012/12/khan/ | # Khan!
I’d like to like Khan Academy. I like its goals. I like how its website works (not surprising, given that John Resig works there). And I like how it makes its videos available in different ways: streaming on its site, streaming on YouTube, and downloadable for playback anytime. But every time I’ve watched a Khan Academy video on a topic I know something about, I’ve been disappointed.
It’s not the rough-hewn nature of the videos—that makes them more personal, more accessible. No, it’s that they’re wrong. Sometimes they’re just factually wrong, with minor mistakes of the kind that are easy to make when teaching live but which shouldn’t be left in a permanent record. More often, though, they’re pedagogically wrong, taking an approach that won’t serve the student well.
Here’s the video on the behavior of springs and Hooke’s Law, a topic I’ve been dealing with as a student, a teacher, and a practitioner for about 35 years. Certainly, it’s worth 10 minutes of your time. Give it a look, and then we’ll discuss it.
Let’s start with the numerical examples. They are bizarre and don’t come close to describing the kinds of springs the viewer is likely to have experience with. A 10-meter compression under a 5-newton load? It may not seem weird if you don’t have a good feeling for what a newton is, but 5 newtons is the weight of roughly half a kilogram,1 or a bit more than a pound. What kind of spring deflects 33 feet under a one-pound load?
You might think this doesn’t make any difference, that the numbers are just there for illustration, not meant to represent any real spring. But students tend to learn better when examples are concrete and realistic. The deflection in the example could just as easily be 10 millimeters, and then you’d have a spring that makes sense.
And while we’re talking about units, you must have noticed that the video commits the cardinal sin of physics and engineering problems: it gives answers without units. Is the spring constant in the example ½, as the narrator says and writes? No. The spring constant is ½ N/m. You might say I’m being picky here, and you’d be right, but I’m not being any pickier than every science and engineering instructor I’ve ever run across. If you’re a student—and I have to believe many of Khan Academy’s users are students looking for a little help—this video is going to give you the impression that units aren’t that important. Tell that to the designers of the Mars Climate Orbiter.
And then there’s the sign convention. If you ask any engineer to write down Hooke’s Law, they’ll write
not
I recognize, and appreciate, that the narrator wants to emphasize that the $F$ in the equation is the restoring force of the spring rather than the applied force, but that minus sign isn’t the right way to do it. All it does is add something to the equation that’s easy to get wrong. How easy? In the video’s second example, which starts at about the 8:30 mark, the narrator himself forgets his sign convention and ends up calculating a spring constant of -2, an absurd answer (and not just because he forgot the units).
I’m not blaming the narrator for messing up his signs.2 I’ve made countless sign errors; it’s the easiest kind of mistake to make. Which is why you shouldn’t make life more difficult by forcing yourself to keep track of more signs than is absolutely necessary.
Here’s why Hooke’s Law shouldn’t be written with a minus sign. There is no single restoring force—which is better referred to as the internal force—in a spring. In the video examples, the spring is not just exerting a force at the right end, it’s also exerting a force on the wall at the left end. And that force is in the opposite direction of the one at the right end. How does the minus sign handle that situation?
In fact, if we go by what the video says, the force at the wall is a mystery. That end of the spring doesn’t move at all, $x = 0$, and yet we know there’s a force there—it’s necessary for equilibrium.
Which leads to my main criticism: Hooke’s law should not be taught as a vector equation. The $F$ and the $x$ in the equation should be treated as signed scalars, not as vectors. The $x$ represents the change in length of the spring from its natural (unloaded) length, with positive values for stretching and negative values for shortening. The $F$ represents the internal force in the spring, with positive values for tension and negative values for compression. With these definitions, the equation $F = kx$ doesn’t need a negative sign.
Defining and writing Hooke’s Law this way makes it easy to apply in every case. It doesn’t matter which end of the spring moves, or if both ends move, the force in the spring is determined entirely by its change in length. And the directions of the forces exerted by the spring on whatever it’s touching are simple: a spring in compression pushes out; a spring in tension pulls back in. Apply those ideas to the examples in the video and you’ll see how straightforward these problems can be.
More important, this view of Hooke’s Law is the one that will serve the student better when she moves on from elementary problems. The way it’s taught in the video will have to be unlearned at some point in the future.
As I said at the top, I don’t want to be mean to Khan Academy. It’s heart is in the right place. But I’m not convinced poor teaching is better than no teaching.
1. “The weight of half a kilogram” is an awkward construction, but I’m trying to be colloquial and scientific at the same time. The kilogram is commonly thought of as a unit of weight, which is a force, but it is, strictly speaking, a unit of mass.
2. I am, however, blaming him for not immediately realizing that he’d made a mistake. Spring constants aren’t negative. | {"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5863039493560791, "perplexity": 488.19192407541243}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193284.93/warc/CC-MAIN-20170322212953-00100-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://www.ms.unimelb.edu.au/research/seminars.php?id=748 | # Black holes and conformal field theory
#### by Omar Foda
Institution: Mathematics and Statistics Department, The University of Melbourne
Date: Mon 31st August 2009
Time: 1:00 PM
Location: Old Geology Theatre 2, The University of Melbourne
Abstract: In 1997, J Maldacena proposed a correspondence
between gravitational theories near black holes in the bulk of anti-de Sitter (AdS) spacetimes and conformally invariant gauge field theories (CFT's) on the boundaries.
The AdS/CFT correspondence is by now regarded as one of the deepest insights into physical theory, and with almost 10,000 citations, Maldacena's paper is one of the most cited papers of all time.
Recently, J Rasmussen brought to my attention the fact that there are examples of AdS/CFT where the CFT's are of the same nature as those used to describe surface critical phenomena.
The limited aim of this talk is to outline a derivation of Cardy's formula for the entropy of a conformal field theory and to explain its role in this very vast subject. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8025614619255066, "perplexity": 1114.2930787147918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824226.31/warc/CC-MAIN-20171020154441-20171020174441-00167.warc.gz"} |
https://industrial-hoses.info/en/technical-info/acid-concentration/ | # Acid concentration
Besides indication in percentages the acid concentration of chemicals is also expressed in degrees Baumé (°Bé). For the four most common acids the graph below can be used to see how many degrees Baumé correspond to a certain concentration in percentages (and vice versa!).
HCl hydrochloric acid, H2SO4 sulfuric acid, H3PO4 phosphoric acid, HNO3 nitric acid
When the density is known the acid concentration can also be found using the formula:
### pH
The degree of acidity is also given in practice with the hydrogen exponent -pH- (negative logarithm of the concentration of hydrogen ions). The pH value of clean water is 7. Solutions with a pH of 0-7 are acid. The lower the figure, the greater the degree of acidity. Solutions with a pH of 7-14 are alkaline. The higher the figure, the greater the concentration.
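Since pH is the negative base-10 logarithm of the hydrogen-ion concentration, converting between the two is a one-line calculation; the concentration below is just an example value.

```python
import math

h_ion = 1e-3                  # mol/L, example value
print(-math.log10(h_ion))     # pH = 3.0
```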
Directly to the web shop | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518547415733337, "perplexity": 3208.08874245423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00510.warc.gz"} |
http://en.wikipedia.org/wiki/Cogeneration | # Cogeneration
Trigeneration cycle
Cogeneration or combined heat and power (CHP) is the use of a heat engine[1] or power station to simultaneously generate electricity and useful heat. Trigeneration or combined cooling, heat and power (CCHP) refers to the simultaneous generation of electricity and useful heating and cooling from the combustion of a fuel or a solar heat collector.
Cogeneration is a thermodynamically efficient use of fuel. In separate production of electricity, some energy must be discarded as waste heat, but in cogeneration this thermal energy is put to use. All thermal power plants emit heat during electricity generation, which can be released into the natural environment through cooling towers, flue gas, or by other means. In contrast, CHP captures some or all of the by-product for heating, either very close to the plant, or—especially in Scandinavia and Eastern Europe—as hot water for district heating with temperatures ranging from approximately 80 to 130 °C. This is also called combined heat and power district heating (CHPDH). Small CHP plants are an example of decentralized energy.[2] By-product heat at moderate temperatures (100–180 °C, 212–356 °F) can also be used in absorption refrigerators for cooling.
The supply of high-temperature heat first drives a gas or steam turbine-powered generator and the resulting low-temperature waste heat is then used for water or space heating as described in cogeneration. At smaller scales (typically below 1 MW) a gas engine or diesel engine may be used. Trigeneration differs from cogeneration in that the waste heat is used for both heating and cooling, typically in an absorption refrigerator. CCHP systems can attain higher overall efficiencies than cogeneration or traditional power plants. In the United States, the application of trigeneration in buildings is called building cooling, heating and power (BCHP). Heating and cooling output may operate concurrently or alternately depending on need and system construction.
Cogeneration was practised in some of the earliest installations of electrical generation. Before central stations distributed power, industries generating their own power used exhaust steam for process heating. Large office and apartment buildings, hotels and stores commonly generated their own power and used waste steam for building heat. Due to the high cost of early purchased power, these CHP operations continued for many years after utility electricity became available.[3]
## Overview
Masnedø CHP power station in Denmark. This station burns straw as fuel. The adjacent greenhouses are heated by district heating from the plant.
Thermal power plants (including those that use fissile elements or burn coal, petroleum, or natural gas), and heat engines in general, do not convert all of their thermal energy into electricity. In most heat engines, a bit more than half is lost as excess heat (see: Second law of thermodynamics and Carnot's theorem). By capturing the excess heat, CHP uses heat that would be wasted in a conventional power plant, potentially reaching an efficiency of up to 80%,[4] higher than the best conventional plants achieve. This means that less fuel needs to be consumed to produce the same amount of useful energy.
Steam turbines for cogeneration are designed for extraction of steam at lower pressures after it has passed through a number of turbine stages, or they may be designed for final exhaust at back pressure (non-condensing), or both.[5] A typical power generation turbine in a paper mill may have extraction pressures of 160 psig (1.103 MPa) and 60 psig (0.41 MPa). A typical back pressure may be 60 psig (0.41 MPa). In practice these pressures are custom designed for each facility. The extracted or exhaust steam is used for process heating, such as drying paper, evaporation, heat for chemical reactions or distillation. Steam at ordinary process heating conditions still has a considerable amount of enthalpy that could be used for power generation, so cogeneration has lost opportunity cost. Conversely, simply generating steam at process pressure instead of high enough pressure to generate power at the top end also has lost opportunity cost. (See: Steam turbine#Steam supply and exhaust conditions) The capital and operating cost of high pressure boilers, turbines and generators are substantial, and this equipment is normally operated continuously, which usually limits self-generated power to large-scale operations.
A cogeneration plant in Metz, France. The 45MW boiler uses waste wood biomass as energy source, and provides electricity and heat for 30,000 dwellings.
Some tri-cycle plants have used a combined cycle in which several thermodynamic cycles produced electricity, then a heating system was used as a condenser of the power plant's bottoming cycle. For example, the RU-25 MHD generator in Moscow heated a boiler for a conventional steam powerplant, whose condensate was then used for space heat. A more modern system might use a gas turbine powered by natural gas, whose exhaust powers a steam plant, whose condensate provides heat. Tri-cycle plants can have thermal efficiencies above 80%.
The viability of CHP (sometimes termed utilisation factor), especially in smaller CHP installations, depends on a good baseload of operation, both in terms of an on-site (or near site) electrical demand and heat demand. In practice, an exact match between the heat and electricity needs rarely exists. A CHP plant can either meet the need for heat (heat driven operation) or be run as a power plant with some use of its waste heat, the latter being less advantageous in terms of its utilisation factor and thus its overall efficiency. The viability can be greatly increased where opportunities for Trigeneration exist. In such cases, the heat from the CHP plant is also used as a primary energy source to deliver cooling by means of an absorption chiller.
CHP is most efficient when heat can be used on-site or very close to it. Overall efficiency is reduced when the heat must be transported over longer distances. This requires heavily insulated pipes, which are expensive and inefficient; whereas electricity can be transmitted along a comparatively simple wire, and over much longer distances for the same energy loss.
A car engine becomes a CHP plant in winter when the reject heat is useful for warming the interior of the vehicle. The example illustrates the point that deployment of CHP depends on heat uses in the vicinity of the heat engine.
Thermally enhanced oil recovery (TEOR) plants often produce a substantial amount of excess electricity. After generating electricity, these plants pump leftover steam into heavy oil wells so that the oil will flow more easily, increasing production. TEOR cogeneration plants in Kern County, California produce so much electricity that it cannot all be used locally and is transmitted to Los Angeles[citation needed].
CHP is one of the most cost-efficient methods of reducing carbon emissions from heating systems in cold climates [6] and is recognized to be the most energy efficient method of transforming energy from fossil fuels or biomass into electric power.[7] Cogeneration plants are commonly found in district heating systems of cities, central heating systems from buildings, hospitals, prisons and are commonly used in the industry in thermal production processes for process water, cooling, steam production or CO2 fertilization.
## Types of plants
Topping cycle plants primarily produce electricity from a steam turbine. The exhausted steam is then condensed and the low temperature heat released from this condensation is utilized for e.g. district heating or water desalination.
Bottoming cycle plants produce high temperature heat for industrial processes, then a waste heat recovery boiler feeds an electrical plant. Bottoming cycle plants are only used when the industrial process requires very high temperatures such as furnaces for glass and metal manufacturing, so they are less common.
Large cogeneration systems provide heating water and power for an industrial site or an entire town. Common CHP plant types are:
• Gas turbine CHP plants using the waste heat in the flue gas of gas turbines. The fuel used is typically natural gas
• Gas engine CHP plants use a reciprocating gas engine which is generally more competitive than a gas turbine up to about 5 MW. The gaseous fuel used is normally natural gas. These plants are generally manufactured as fully packaged units that can be installed within a plantroom or external plant compound with simple connections to the site's gas supply and electrical distribution and heating systems. For typical outputs and efficiencies see [8]; for a typical large example see [9]
• Biofuel engine CHP plants use an adapted reciprocating gas engine or diesel engine, depending upon which biofuel is being used, and are otherwise very similar in design to a Gas engine CHP plant. The advantage of using a biofuel is one of reduced hydrocarbon fuel consumption and thus reduced carbon emissions. These plants are generally manufactured as fully packaged units that can be installed within a plantroom or external plant compound with simple connections to the site's electrical distribution and heating systems. Another variant is the wood gasifier CHP plant whereby a wood pellet or wood chip biofuel is gasified in a zero oxygen high temperature environment; the resulting gas is then used to power the gas engine. For a typical smaller-size biogas plant see [10]
• Combined cycle power plants adapted for CHP
• Molten-carbonate fuel cells and solid oxide fuel cells have a hot exhaust, very suitable for heating.
• Steam turbine CHP plants that use the heating system as the steam condenser for the steam turbine.
• Nuclear power plants, similar to other steam turbine power plants, can be fitted with extractions in the turbines to bleed partially expanded steam to a heating system. With a heating system temperature of 95 °C it is possible to extract about 10 MW heat for every MW electricity lost. With a temperature of 130 °C the gain is slightly smaller, about 7 MW for every MWe lost.[11]
Smaller cogeneration units may use a reciprocating engine or Stirling engine. The heat is removed from the exhaust and radiator. The systems are popular in small sizes because small gas and diesel engines are less expensive than small gas- or oil-fired steam-electric plants.
Some cogeneration plants are fired by biomass,[12] or industrial and municipal waste (see incineration).
Some cogeneration plants combine gas and solar photovoltaic generation to further improve technical and environmental performance.[13] Such hybrid systems can be scaled down to the building level[14] and even individual homes.[15]
### MicroCHP
Micro combined heat and power or 'Micro cogeneration" is a so-called distributed energy resource (DER). The installation is usually less than 5 kWe in a house or small business. Instead of burning fuel to merely heat space or water, some of the energy is converted to electricity in addition to heat. This electricity can be used within the home or business or, if permitted by the grid management, sold back into the electric power grid.
Delta-ee consultants stated in 2013 that with 64% of global sales the fuel cell micro-combined heat and power passed the conventional systems in sales in 2012.[16] 20,000 units were sold in Japan in 2012 overall within the Ene Farm project, with a lifetime of around 60,000 hours. For PEM fuel cell units, which shut down at night, this equates to an estimated lifetime of between ten and fifteen years,[17] at a price of $22,600 before installation.[18] For 2013 a state subsidy for 50,000 units is in place.[17]
The development of small-scale CHP systems has provided the opportunity for in-house power backup of residential-scale photovoltaic (PV) arrays.[15] The results of a 2011 study show that a PV+CHP hybrid system not only has the potential to radically reduce energy waste in the status quo electrical and heating systems, but it also enables the share of solar PV to be expanded by about a factor of five.[15] In some regions, in order to reduce waste from excess heat, an absorption chiller has been proposed to utilize the CHP-produced thermal energy for cooling of PV-CHP system. [19] These trigeneration+photovoltaic systems have the potential to save even more energy and further reduce emissions compared to conventional sources of power, heating and cooling.[20]
MicroCHP installations use five different technologies: microturbines, internal combustion engines, Stirling engines, closed cycle steam engines and fuel cells. One author indicated in 2008 that MicroCHP based on Stirling engines is the most cost-effective of the so-called microgeneration technologies in abating carbon emissions.[21] A 2013 UK report from Ecuity Consulting stated that MCHP is the most cost-effective method of utilising gas to generate energy at the domestic level.[22][23] However, advances in reciprocating engine technology are adding efficiency to CHP plants, particularly in the biogas field.[24] As both MiniCHP and CHP have been shown to reduce emissions,[25] they could play a large role in the field of CO2 reduction from buildings, where more than 14% of emissions can be saved using CHP in buildings.[26] The ability to reduce emissions is particularly strong for new communities in emission-intensive grids that utilize a combination of CHP and photovoltaic systems.[27]
### Trigeneration
A plant producing electricity, heat and cold is called a trigeneration[28] or polygeneration plant. Cogeneration systems linked to absorption chillers use waste heat for refrigeration.[29]
### Combined heat and power district heating
In the United States, Consolidated Edison distributes 66 billion kilograms of 350 °F (180 °C) steam each year through its seven cogeneration plants to 100,000 buildings in Manhattan—the biggest steam district in the United States. The peak delivery is 10 million pounds per hour (or approximately 2.5 GW).[30][31] Other major cogeneration companies in the United States include Recycled Energy Development,[32] and leading advocates include Tom Casten and Amory Lovins.
### Industrial CHP
Cogeneration is still common in pulp and paper mills, refineries and chemical plants. In this "industrial cogeneration/CHP", the heat is typically recovered at higher temperatures (above 100 deg C) and used for process steam or drying duties. This is more valuable and flexible than low-grade waste heat, but there is a slight loss of power generation. The increased focus on sustainability has made industrial CHP more attractive, as it substantially reduces carbon footprint compared to generating steam or burning fuel on-site and importing electric power from the grid.
#### Utility pressures versus self generating industrial
Industrial cogeneration plants normally operate at much lower boiler pressures than utilities. Among the reasons are:
• Cogeneration plants face possible contamination of returned condensate. Because boiler feed water from cogeneration plants has much lower return rates than 100% condensing power plants, industries usually have to treat proportionately more boiler make-up water. Boiler feed water must be completely oxygen-free and de-mineralized, and the higher the pressure the more critical the level of purity of the feed water.[5]
• Utilities typically generate power on a larger scale than industry, which helps offset the higher capital costs of high pressure.
• Utilities are less likely to have sharp load swings than industrial operations, which deal with shutting down or starting up units that may represent a significant percentage of either steam or power demand.
### Heat recovery steam generators
A heat recovery steam generator (HRSG) is a steam boiler that uses hot exhaust gases from the gas turbines or reciprocating engines in a CHP plant to heat up water and generate steam. The steam, in turn, drives a steam turbine or is used in industrial processes that require heat.
HRSGs used in the CHP industry are distinguished from conventional steam generators by the following main features:
• The HRSG is designed based upon the specific features of the gas turbine or reciprocating engine that it will be coupled to.
• Since the exhaust gas temperature is relatively low, heat transmission is accomplished mainly through convection.
• The exhaust gas velocity is limited by the need to keep head losses down. Thus, the transmission coefficient is low, which calls for a large heating surface area.
• Since the temperature difference between the hot gases and the fluid to be heated (steam or water) is low, and with the heat transmission coefficient being low as well, the evaporator and economizer are designed with plate fin heat exchangers.
## Comparison with a heat pump
A heat pump may be compared with a CHP unit as follows: for a condensing steam plant, as it switches to producing heat, electrical power is lost or becomes unavailable, just as the power used in a heat pump becomes unavailable. Typically, for every unit of power lost, about 6 units of heat are made available at about 90 °C. Thus CHP has an effective Coefficient of Performance (COP) of 6 compared to a heat pump.[33] It is noteworthy that the unit of power lost by the CHP plant is lost at the high-voltage network and therefore incurs no distribution losses, whereas the unit of power used by the heat pump is drawn at the low-voltage part of the network and incurs on average a 6% loss. Because the losses are proportional to the square of the current, losses during peak periods are much higher than this, and it is likely that widespread, i.e. city-wide, application of heat pumps would cause overloading of the distribution and transmission grids unless they are substantially reinforced.
It is also possible to run a heat driven operation combined with a heat pump, where the excess electricity (as heat demand is the defining factor on utilization) is used to drive a heat pump. As heat demand increases, more electricity is generated to drive the heat pump, with the waste heat also heating the heating fluid.
## Distributed generation
Trigeneration has its greatest benefits when scaled to fit buildings or complexes of buildings where electricity, heating and cooling are perpetually needed. Such installations include but are not limited to: data centers, manufacturing facilities, universities, hospitals, military complexes and colleges. Localized trigeneration has additional benefits as described by distributed generation. Redundancy of power in mission critical applications, lower power usage costs and the ability to sell electrical power back to the local utility are a few of the major benefits. Even for small buildings such as individual family homes trigeneration systems provide benefits over cogeneration because of increased energy utilization.[34] This increased efficiency can also provide significantly reduced greenhouse gas emissions, particularly for new communities.[35]
Most industrial countries generate the majority of their electrical power needs in large centralized facilities with capacity for large electrical power output. These plants have excellent economies of scale, but usually transmit electricity long distances, resulting in sizable losses and negatively affecting the environment. Large power plants can use cogeneration or trigeneration systems only when sufficient need exists in the immediate geographic vicinity, such as an industrial complex, an additional power plant or a city. An example of cogeneration with trigeneration applications in a major city is the New York City steam system.
## Thermal efficiency
Every heat engine is subject to the theoretical efficiency limits of the Carnot cycle. When the fuel is natural gas, a gas turbine following the Brayton cycle is typically used.[36] Mechanical energy from the turbine drives an electric generator. The low-grade (i.e. low temperature) waste heat rejected by the turbine is then applied to space heating or cooling or to industrial processes. Cooling is achieved by passing the waste heat to an absorption chiller.
Thermal efficiency in a trigeneration system is defined as:
$\eta_{th} \equiv \frac{W_{out}}{Q_{in}} \equiv \frac{\text{Electrical Power Output + Heat Output + Cooling Output}}{\text{Total Heat Input}}$
Where:
$\eta_{th}$ = Thermal efficiency
$W_{out}$ = Total work output by all systems
$Q_{in}$ = Total heat input into the system
Typical trigeneration models have losses as in any system. The energy distribution below is represented as a percent of total input energy:[37]
Electricity = 45%
Heat + Cooling = 40%
Heat Losses = 13%
Electrical Line Losses = 2%
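As a rough illustration, plugging the representative distribution above into the definition (counting electricity, heat and cooling as the useful outputs):
$\eta_{th} = \frac{45\% + 40\%}{100\%} = 85\%,$
with the remaining 15% made up of the heat losses and electrical line losses listed above.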
Conventional central coal- or nuclear-powered power stations convert only about 33% of their input heat to electricity. The remaining 67% emerges from the turbines as low-grade waste heat with no significant local uses so it is usually rejected to the environment. These low conversion efficiencies strongly suggest that productive uses could be found for this waste heat, and in some countries these plants do collect byproduct heat that can be sold to customers.
But if no practical uses can be found for the waste heat from a central power station, e.g., due to distance from potential customers, then moving generation to where the waste heat can find uses may be of great benefit. Even though the efficiency of a small distributed electrical generator may be lower than a large central power plant, the use of its waste heat for local heating and cooling can result in an overall use of the primary fuel supply as great as 80%. This provides substantial financial and environmental benefits.
## Costs
Typically, for a gas-fired plant the fully installed cost per kW electrical is around £400/kW, which is comparable with large central power stations.[10]
## History
### Cogeneration in Europe
A cogeneration thermal power plant in Ferrera Erbognone (PV), Italy
The EU has actively incorporated cogeneration into its energy policy via the CHP Directive. In September 2008 at a hearing of the European Parliament’s Urban Lodgment Intergroup, Energy Commissioner Andris Piebalgs is quoted as saying, “security of supply really starts with energy efficiency.”[38] Energy efficiency and cogeneration are recognized in the opening paragraphs of the European Union’s Cogeneration Directive 2004/08/EC. This directive intends to support cogeneration and establish a method for calculating cogeneration abilities per country. The development of cogeneration has been very uneven over the years and has been dominated throughout the last decades by national circumstances.
As a whole, the European Union generates 11% of its electricity using cogeneration, saving Europe an estimated 35 Mtoe per annum.[39] However, there is a large difference between Member States, with variations of the energy savings between 2% and 60%. Europe has the three countries with the world’s most intensive cogeneration economies: Denmark, the Netherlands and Finland.[40] Of the 28.46 TWh of electrical power generated by conventional thermal power plants in Finland in 2012, 81.80% was cogeneration.[41]
Other European countries are also making great efforts to increase efficiency. Germany reported that at present, over 50% of the country’s total electricity demand could be provided through cogeneration. So far, Germany has set the target to double its electricity cogeneration from 12.5% of the country’s electricity to 25% of the country’s electricity by 2020 and has passed supporting legislation accordingly.[42] The UK is also actively supporting combined heat and power. In light of UK’s goal to achieve a 60% reduction in carbon dioxide emissions by 2050, the government has set the target to source at least 15% of its government electricity use from CHP by 2010.[43] Other UK measures to encourage CHP growth are financial incentives, grant support, a greater regulatory framework, and government leadership and partnership.
According to the IEA 2008 modeling of cogeneration expansion for the G8 countries, the expansion of cogeneration in France, Germany, Italy and the UK alone would effectively double the existing primary fuel savings by 2030. This would increase Europe’s savings from today’s 155.69 TWh to 465 TWh in 2030. It would also result in a 16% to 29% increase in each country’s total cogenerated electricity by 2030.
Governments are being assisted in their CHP endeavors by organizations like COGEN Europe who serve as an information hub for the most recent updates within Europe’s energy policy. COGEN is Europe’s umbrella organization representing the interests of the cogeneration industry.
The European public–private partnership Fuel Cells and Hydrogen Joint Undertaking Seventh Framework Programme project ene.field will deploy by 2017[44] up to 1,000 residential fuel cell Combined Heat and Power (micro-CHP) installations in 12 states. As of 2012 the first 2 installations had taken place.[45][46][47]
### Cogeneration in the United States
A 250 MW cogeneration plant in Cambridge, Massachusetts
Perhaps the first modern use of energy recycling was done by Thomas Edison. His 1882 Pearl Street Station, the world’s first commercial power plant, was a combined heat and power plant, producing both electricity and thermal energy while using waste heat to warm neighboring buildings.[48] Recycling allowed Edison’s plant to achieve approximately 50 percent efficiency.
By the early 1900s, regulations emerged to promote rural electrification through the construction of centralized plants managed by regional utilities. These regulations not only promoted electrification throughout the countryside, but they also discouraged decentralized power generation, such as cogeneration. As Recycled Energy Development CEO Sean Casten testified to Congress, they even went so far as to make it illegal for non-utilities to sell power.[49]
By 1978, Congress recognized that efficiency at central power plants had stagnated and sought to encourage improved efficiency with the Public Utility Regulatory Policies Act (PURPA), which encouraged utilities to buy power from other energy producers.
#### Diffusion
Cogeneration plants proliferated, soon producing about 8% of all energy in the United States.[50] However, the bill left implementation and enforcement up to individual states, resulting in little or nothing being done in many parts of the country.[citation needed]
In 2008 Tom Casten, chairman of Recycled Energy Development, said that "We think we could make about 19 to 20 percent of U.S. electricity with heat that is currently thrown away by industry."[51]
The United States Department of Energy has an aggressive goal of having CHP constitute 20% of generation capacity by the year 2030. Eight Clean Energy Application Centers[52] have been established across the nation whose mission is to develop the required technology application knowledge and educational infrastructure necessary to lead "clean energy" (combined heat and power, waste heat recovery and district energy) technologies as viable energy options and reduce any perceived risks associated with their implementation. The focus of the Application Centers is to provide an outreach and technology deployment program for end users, policy makers, utilities, and industry stakeholders.
Outside of the United States, energy recycling is more common. Denmark is probably the most active energy recycler, obtaining about 55% of its energy from cogeneration and waste heat recovery.[citation needed] Other large countries, including Germany, Russia, and India, also obtain a much higher share of their energy from decentralized sources.[50][51]
## Applications in power generation systems
### Non-renewable
Any of the following conventional power plants may be converted to a CCHP system:[53]
### Renewable
• Steam, its generation and use. Babcock & Wilcox. (Numerous editions). An engineering handbook widely used by those involved with various types of boilers. Contains numerous illustrations, graphs and useful formulas. (Not specific to cogeneration). The link leads to an entire free e-Book of an early edition. For current practice a more modern edition is recommended.
## References
1. ^ Cogeneration and Cogeneration Schematic, www.clarke-energy.com, retrieved 26.11.11
2. ^ "What is Decentralised Energy?". The Decentralised Energy Knowledge Base.
3. ^ Hunter, Louis C.; Bryant, Lynwood (1991). A History of Industrial Power in the United States, 1730-1930, Vol. 3: The Transmission of Power. Cambridge, Massachusetts, London: MIT Press. ISBN 0-262-08198-9.
4. ^ "Combined Heat and Power – Effective Energy Solutions for a Sustainable Future". Oak Ridge National Laboratory. 1 December 2008. Retrieved 9 September 2011.
5. ^ a b Steam - its generation and use. Babcock & Wilcox. (Numerous editions).
6. ^ "Carbon footprints of various sources of heat – biomass combustion and CHPDH comes out lowest". Claverton Energy Research Group.
7. ^
8. ^ http://www.claverton-energy.com/finning-caterpillar-gas-engine-chp-ratings-and-thermal-outputs.html
9. ^ "Complete 7 MWe Deutz ( 2 x 3.5MWe) gas engine CHP power plant for sale". Claverton Energy Research Group.
10. ^ a b 38% HHV Caterpillar Bio-gas Engine Fitted to Sewage Works | Claverton Group
11. ^
12. ^ "High cogeneration performance by innovative steam turbine for biomass-fired CHP plant in Iislami, Finland". OPET. Retrieved 13 March 2011.
13. ^ A.C. Oliveira, C. Afonso, J. Matos, S. Riffat, M. Nguyen and P. Doherty, "A Combined Heat and Power System for Buildings driven by Solar Energy and Gas", Applied Thermal Engineering, vol. 22, Iss. 6, pp. 587-593 (2002).
14. ^ Yagoub, W., Doherty, P., & Riffat, S. B. (2006). Solar energy-gas driven micro-CHP system for an office building. Applied thermal engineering, 26(14), 1604-1610.
15. ^ a b c J. M. Pearce, “Expanding Photovoltaic Penetration with Residential Distributed Generation from Hybrid Solar Photovoltaic + Combined Heat and Power Systems”, Energy 34, pp. 1947-1954 (2009). [1] Open access
16. ^ The fuel cell industry review 2013
17. ^ a b Latest developments in the Ene-Farm scheme
18. ^ Launch of new 'Ene-Farm' home fuel cell product more affordable and easier to install
19. ^ A. Nosrat and J. M. Pearce, “Dispatch Strategy and Model for Hybrid Photovoltaic and Combined Heating, Cooling, and Power Systems”, Applied Energy 88 (2011) 3270–3276. [2] Open access
20. ^ A.H. Nosrat, L.G. Swan, J.M. Pearce, "Improved Performance of Hybrid Photovoltaic-Trigeneration Systems Over Photovoltaic-Cogen Systems Including Effects of Battery Storage", Energy 49, pp. 366-374 (2013). DOI, open access.
21. ^ What is microgeneration? Jeremy Harrison, Claverton Energy Group Conference, Bath, Oct 24th 2008
22. ^ The role of micro CHP in a smart energy world
23. ^ Micro CHP report powers heated discussion about UK energy future
24. ^ MiniCHP ranges and efficiencies Aug 15 2009
25. ^ Pehnt, M. (2008). Environmental impacts of distributed energy systems—The case of micro cogeneration. Environmental science & policy, 11(1), 25-37.
26. ^ http://alfagy.com/what-is-chp/133-kaarsberg-t-rfiskum-jromm-a-rosenfeld-j-koomey-and-wpteagan-1998-qcombined-heat-and-power-chp-or-cogeneration-for-saving-energy-and-carbon-in-commercial-buildingsq.html "Combined Heat and Power (CHP or Cogeneration) for Saving Energy and Carbon in Commercial Buildings."
27. ^ A.H. Nosrat, L.G. Swan, J. M. Pearce, Simulations of greenhouse gas emission reductions from low-cost hybrid solar photovoltaic and cogeneration systems for new communities, Sustainable Energy Technologies and Assessments, 8, 2014, 34-41. DOI:10.1016/j.seta.2014.06.008 open access
28. ^
29. ^ Fuel Cells and CHP
30. ^ "Newsroom: Steam". ConEdison. Retrieved 2007-07-20.
31. ^ Bevelhymer, Carl (2003-11-10). "Steam". Gotham Gazette. Retrieved 2007-07-20.
32. ^
33. ^ Lowe, R. (2011). "Combined heat and power considered as a virtual steam cycle heat pump". Energy Policy 39 (9): 5528–5534. doi:10.1016/j.enpol.2011.05.007. edit
34. ^ A.H. Nosrat, L.G. Swan, J.M. Pearce, "Improved Performance of Hybrid Photovoltaic-Trigeneration Systems Over Photovoltaic-Cogen Systems Including Effects of Battery Storage", Energy 49, pp. 366-374 (2013).doi:10.1016/j.energy.2012.11.005
35. ^ Amir H. Nosrat, Lukas G. Swan, Joshua M. Pearce, Simulations of greenhouse gas emission reductions from low-cost hybrid solar photovoltaic and cogeneration systems for new communities, Sustainable Energy Technologies and Assessments, Volume 8, December 2014, Pages 34-41. http://dx.doi.org/10.1016/j.seta.2014.06.008 open access
36. ^ Hodge, B.K. (2009). Alternative Energy Systems & Applications. New York: Wiley-IEEE Press.
37. ^ "Trigeneration Systems with Fuel Cells". Research Paper. Retrieved 18 April 2011.
38. ^
39. ^
40. ^
41. ^
42. ^
43. ^
44. ^ 5th stakeholders general assembly of the FCH JU
45. ^ ene.field
46. ^ European-wide field trials for residential fuel cell micro-CHP
47. ^ ene.field Grant No 303462
48. ^
49. ^
50. ^ a b "World Survey of Decentralized Energy". May 2006.
51. ^ a b 'Recycling' Energy Seen Saving Companies Money. By David Schaper. May 22, 2008. Morning Edition. National Public Radio.
52. ^ Eight Clean Energy Application Centers
53. ^ Masters, Gilbert (2004). Renewable and efficient electric power systems. New York: Wiley-IEEE Press. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48035645484924316, "perplexity": 4585.622010867766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444465.10/warc/CC-MAIN-20141017005724-00050-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://pos.sissa.it/390/513/ | Volume 390 - 40th International Conference on High Energy physics (ICHEP2020) - Parallel: Strong Interactions and Hadron Physics
HERA data on azimuthal decorrelation and charged particle multiplicity spectra probing QCD dynamics and quantum entanglement effects
Z. Tu
Full text: Not available
Abstract
The azimuthal decorrelation angle between the leading jet and scattered lepton in deep inelastic scattering is studied with the ZEUS detector at HERA. The data was taken in the HERA II data-taking period and corresponds to an integrated luminosity of 330 pb^{-1}. Azimuthal angular decorrelation has been proposed to study the Q2 dependence of the evolution of the transverse momentum distributions (TMDs) and understand the small-x region, providing unique insight to nucleon structure. Previous decorrelation measurements of two jets have been performed in proton-proton collisions at very high transverse momentum; these measurements are well described by perturbative QCD at next-to-leading order. The azimuthal decorrelation angle obtained in these studies shows good agreement with predictions from Monte Carlo models including leading order matrix elements and parton showers.
New experimental data on charged particle multiplicity distributions are presented, covering the kinematic ranges in momentum transfer 5 < Q^{2} < 100 GeV^{2} and inelasticity 0.0375 < y < 0.6. The data was recorded with the H1 experiment at the HERA collider in positron-proton collisions at a centre-of-mass energy of 320 GeV. Charged particles are counted with transverse momenta larger than 150 MeV and pseudorapidity -1.6 < η_lab < 1.6 in the laboratory frame, corresponding to high acceptance in the current hemisphere of the hadronic centre-of-mass frame. Charged particle multiplicities are reported on a two-dimensional grid of Q^{2}, y and on a three-dimensional grid of Q^{2}, y and η_lab. The observable is the probability P(N) to observe N particles in the given region. The data are confronted with predictions from Monte Carlo generators, and with a simplistic model based on quantum entanglement and strict parton-hadron duality.
Open Access | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046241641044617, "perplexity": 2277.579723501422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197593.33/warc/CC-MAIN-20201129093434-20201129123434-00228.warc.gz"} |
https://pillar.r-lib.org/reference/tbl_sum.html | tbl_sum() gives a brief textual description of a table-like object, which should include the dimensions and the data source in the first element, and additional information in the other elements (such as grouping for dplyr). The default implementation forwards to obj_sum().
tbl_sum(x)
## Arguments
x
Object to summarise.
## Value
A named character vector, describing the dimensions in the first element and the data source in the name of the first element.
## See also
type_sum()
## Examples
tbl_sum(1:10)
#> Description
#> "int [10]"
tbl_sum(matrix(1:10))
#> Description
#> "int [10 × 1]"
tbl_sum(data.frame(a = 1))
#> Description
#> "df [1 × 1]"
tbl_sum(Sys.Date())
#> Description
#> "date [1]"
tbl_sum(Sys.time())
#> Description
#> "dttm [1]"
tbl_sum(mean)
#> Description
#> "fn" | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2138785421848297, "perplexity": 13168.778459940537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00406.warc.gz"} |
http://mathhelpforum.com/calculus/179317-washer-method.html | # Math Help - Washer Method
1. ## Washer Method
I did this problem, but I got $\frac{32\pi}{3}$. I think it might be right, but I'm not sure.
Find the volume of the solid obtained by rotating the region R bounded by the curves
y = x and y = $2\sqrt{x}$ and the x-axis.
2. Originally Posted by atearney91
I did this problem, but I got $\frac{32\pi}{3}$. I think it might be right, but I'm not sure.
Find the volume of the solid obtained by rotating the region R bounded by the curves
y = x and y = $2\sqrt{x}$ and the x-axis.
Yes that is correct if you rotate around the x-axis | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8891534805297852, "perplexity": 388.143439256201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010564986/warc/CC-MAIN-20140305090924-00002-ip-10-183-142-35.ec2.internal.warc.gz"} |
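For reference, the washer-method computation behind that answer: the curves $y = 2\sqrt{x}$ and $y = x$ meet at $x = 0$ and $x = 4$, with $2\sqrt{x} \ge x$ on $[0,4]$, so rotating the enclosed region about the x-axis gives
$V=\pi\int_0^4\left[\left(2\sqrt{x}\right)^2-x^2\right]dx=\pi\int_0^4\left(4x-x^2\right)dx=\pi\left[2x^2-\frac{x^3}{3}\right]_0^4=\frac{32\pi}{3}.$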
http://tug.org/pipermail/macostex-archives/2007-January/027690.html | # [OS X TeX] old pdf's vs. Reader 8
Herbert Schulz herbs at wideopenwest.com
Mon Jan 8 16:31:14 CET 2007
On Jan 8, 2007, at 9:01 AM, Gianluca Gorni wrote:
>
> Hello!
>
> I have just opened an old pdf with the new Adobe Reader 8, and
> it looks terrible:
>
> http://www.dimi.uniud.it/gorni/Analisi1/An1.2004.12.10.pdf
>
> Large round parentheses are rendered as curly brackets,
> minus signs as \ge, etc.
>
> The same file looks as expected with Acrobat 7 and Preview and
> Texnicscope.
>
> The file was made with Textures and Acrobat 3.02 under MacOS 9.
>
> I am afraid I'll have to refresh my pdf's from the TeX sources.
>
> So much for pdf being an archival format!
>
> Cheers,
> Gianluca Gorni
>
Howdy,
I see the same thing... amazing. Of course, you can open the file in
Preview or several other pdf viewers and it's fine! It appears that
Reader, not the pdf file, is at fault here. Waiting for 8.1 :-)!
Good Luck,
Herb Schulz
(herbs at wideopenwest.com) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960189700126648, "perplexity": 16174.05824452282}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462709.88/warc/CC-MAIN-20151124205422-00299-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://mathoverflow.net/questions/86419/equivariant-differential-forms | # Equivariant differential forms.
I have some question about the equivariant differential forms on a smooth manifold: \ The equivariant differential forms over some smooth manifold $M$, on which the compact Lie group $G$ acts, are defined to be $$\Omega_{G}^q(M)= \oplus_{2i+j=q} (S^{i}(g^{'}) \otimes \Omega^j(M))^G$$ where $g^{'}$ denotes the dual of the Lie algebra $\mathfrak g$ of $G$. Then many authors say that these forms can be considered as polynomial functions on the Lie algebra $\mathfrak g$ of $G$, but I am not sure how this is to be done. For example if we consider the element $(x_1 \otimes... \otimes x_i) \otimes \omega$ where $x_1,..., x_i$ are elements of $g^{'}$ and $\omega$ is a differential form, what is the evaluation of this element on some $a \in \mathfrak g$ in the Lie algebra of $G$.
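A sketch of the standard identification, for reference: an element of $S^{i}(g^{'})$ is the same thing as a homogeneous degree-$i$ polynomial function on $\mathfrak g$, a symmetric product $x_1 \cdots x_i$ of elements of $g^{'}$ being the function $a \mapsto x_1(a)\cdots x_i(a)$. Under this identification an element $p \otimes \omega \in S^{i}(g^{'}) \otimes \Omega^j(M)$ corresponds to the polynomial map $$\mathfrak g \to \Omega^j(M), \qquad a \mapsto p(a)\,\omega,$$ so the evaluation of the element in the example at $a \in \mathfrak g$ is $x_1(a) x_2(a) \cdots x_i(a)\,\omega$. An equivariant form of total degree $q$ is thus a $G$-equivariant polynomial map $\alpha : \mathfrak g \to \Omega(M)$, the grading $2i+j=q$ counting each polynomial degree twice.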
-
Can you see why elements of $S^r(\mathfrak g)$ can be seen as homogeneous polynomial functions on~$\mathfrak g$ of degree $r$? – Mariano Suárez-Alvarez Jan 23 '12 at 3:36
The same question was originally posted on MathStackexchange, where it has received a comment by MattE, which is even more explicit than the one one by Mariano Suárez-Alvarez. – Giuseppe Tortorella Jan 23 '12 at 8:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964012622833252, "perplexity": 116.36066250583383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298177.21/warc/CC-MAIN-20150323172138-00225-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/76181-third-degree-ploynomial-question-2-a-print.html | # Third degree ploynomial question : 2
• February 28th 2009, 11:03 AM
champrock
Third degree ploynomial question : 2
if a,b,c are the roots of the equation: x^3 + 2x^2 + 3x + 3 = 0 then the value of
[a/(a+1)]^3 + [b/(b+1)]^3 + [c/(c+1)]^3 is?
----------
I know that this equation has one real root and two imaginary roots. the real root lies between -2 and 0. But unable to proceed further.
• February 28th 2009, 01:30 PM
red_dog
Let $y_1=\frac{a}{a+1}, \ y_2=\frac{b}{b+1}, \ y_3=\frac{c}{c+1}$
We form the equation with the roots $y_1, \ y_2, \ y_3$
$y=\frac{x}{x+1}\Rightarrow x=\frac{y}{1-y}$
Replace x in the x-equation and we get the equation in y: $y^3-5y^2+6y-3=0$
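(To spell out that substitution step: plugging $x=\frac{y}{1-y}$ into $x^3+2x^2+3x+3=0$ and multiplying through by $(1-y)^3$ gives $y^3+2y^2(1-y)+3y(1-y)^2+3(1-y)^3=0$, which expands to $-y^3+5y^2-6y+3=0$, i.e. $y^3-5y^2+6y-3=0$.)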
$S_1=y_1+y_2+y_3=5$
$S_2=y_1^2+y_2^2+y_3^2=(y_1+y_2+y_3)^2-2(y_1y_2+y_1y_3+y_2y_3)=25-12=13$
Let $S_3=y_1^3+y_2^3+y_3^3$
We have
$y_1^3-5y_1^2+6y_1-3=0$
$y_2^3-5y_2^2+6y_2-3=0$
$y_3^3-5y_3^2+6y_3-3=0$
Adding the three relations we have
$S_3-5S_2+6S_1-9=0\Rightarrow S_3=5S_2-6S_1+9=65-30+9=44$
• February 28th 2009, 07:39 PM
champrock
thanks for that solution.
can u please tell me some good resource on the net to practice solved question on third degree polynomials? I cant find good resources.. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8783426284790039, "perplexity": 1240.664117541744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098924.1/warc/CC-MAIN-20150627031818-00052-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Definition:Strictly_Increasing_Real_Sequence | # Definition:Strictly Increasing/Sequence/Real Sequence
## Definition
Let $\sequence {x_n}$ be a sequence in $\R$.
Then $\sequence {x_n}$ is strictly increasing if and only if:
$\forall n \in \N: x_n < x_{n + 1}$
## Also known as
A strictly increasing sequence is also referred to as ascending or strictly ascending.
Some sources refer to a strictly increasing sequence as an increasing sequence, and refer to an increasing sequence which is not strictly increasing as a monotonic increasing sequence to distinguish it from a strictly increasing sequence.
That is, monotonic is being used to mean an increasing sequence in which consecutive terms may be equal.
$\mathsf{Pr} \infty \mathsf{fWiki}$ does not endorse this viewpoint.
## Examples
### Example: $\sequence {2^n}$
The first few terms of the real sequence:
$S = \sequence {2^n}_{n \mathop \ge 1}$
are:
$2, 4, 8, 16, \dotsc$
$S$ is strictly increasing. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9954797029495239, "perplexity": 384.9790769497378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00334.warc.gz"} |
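(Indeed, for every $n \ge 1$ we have $2^{n + 1} = 2 \cdot 2^n > 2^n$, since $2^n > 0$.)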
https://discourse.mc-stan.org/t/regression-model-with-indicators-for-groups-of-size-1-what-does-loo-approximate/12932 | # Regression model with indicators for groups of size 1: what does loo() approximate?
One of the advantages of multilevel modeling is of course predictions for new groups. For classical regression with group indicators (fixed effects), prediction for a new group is ill-defined. As a result, I would think that leave-one-out cross validation for such a classical model with groups of size 1 would also be ill-defined when the “one” is the only member of its group.
However, if I fit such a model with stan_glm() and then loo() I get an answer (after following the suggestion to set a k_threshold). So my question is: what is loo() approximating in this case?
Sorry, short on time, so a short answer - in all cases it is approximating Leave-one-out cross validation. This should make sense for most linear models. Could you elaborate on why you don’t think it would make sense for your model? Maybe @jonah has time to explain in more detail.
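For reference, the quantity loo() approximates in all of these cases is the expected log pointwise predictive density under leave-one-out cross-validation,
$$\text{elpd}_{\text{loo}}=\sum_{i=1}^{n}\log p(y_i\mid y_{-i}),\qquad p(y_i\mid y_{-i})=\int p(y_i\mid\theta)\,p(\theta\mid y_{-i})\,d\theta,$$
estimated from the full-data posterior draws by Pareto-smoothed importance sampling rather than by refitting; the question here is really about what $p(\theta\mid y_{-i})$ looks like when observation $i$ is the only member of its group.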
Thanks @martinmodrak let me clarify with an example. Consider the mtcars dataset:
mpg carb
Mazda RX4 21.0 4
Mazda RX4 Wag 21.0 4
Maserati Bora 15.0 8
Volvo 142E 21.4 2
We can fit the unpooled varying intercept (fixed effects) model predicting mpg from carb with this formula:
mpg ~ factor(carb) - 1
Now suppose we were doing leave-one-out cross validation and the hold-out was “Maserati Bora” which is the only car in the dataset with carb=8. We’d have an issue:
train <- mtcars %>% filter(carb != 8)
test <- mtcars %>% filter(carb == 8)
fit <- stan_glm(mpg ~ factor(carb) - 1, data=train)
posterior_predict(fit, newdata=test)
Which appropriately gives the error:
Error in model.frame.default(Terms, newdata, xlev = object\$xlevels) :
factor factor(carb) has new level 8
This is the issue with leave-one-out in this setting. But loo(fit, k_threshold=0.7) works. So my question is how should I think about the loo approximation in this setting?
Thanks for clarifying and sorry for taking so long to get back to you. First thing to note is that you can predict for new levels in a multilevel model - since you fit the sd of the intercepts, you just draw a new intercept from this fitted distribution.
Now you can’t predict for new levels of fixed effects, but I think what loo does will be similar. I would guess that loo assumes that you still have the level 8 in the model (there will be an appropriate number of coefficients in the parameters) but you don’t observe any data for it, so it will essentially be drawn from its prior. This is the same thing that would happen if you coded your model manually in Stan and provided no data for one fixed predictor.
I would however ask @avehtari to check my reasoning, if he’s not busy.
Thanks @martinmodrak, your suggestion that the coefficient for the new level’s indicator variable is simply equal to the prior sounds right! The default prior for the coefficient of a centered binary variable in stan_glm() is currently \text{N}(0, 2.5^2\cdot \text{var}(y)). Since the indicator was effectively zero in the training data in this case, perhaps loo() takes zero to be the center… is that right @avehtari?
Yes, if the likelihood contribution for some parameter is removed then sampling is using just the prior. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7080051302909851, "perplexity": 1819.4256834249484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145960.92/warc/CC-MAIN-20200224132646-20200224162646-00167.warc.gz"} |
https://www.mail-archive.com/everything-list@googlegroups.com/msg13744.html | # Re: No(-)Justification Justifies The Everything Ensemble
On Thu, Sep 13, 2007 at 03:04:34PM +0200, Bruno Marchal wrote:
>
>
> Le 13-sept.-07, à 00:48, Russell Standish a écrit :
>
> > These sorts of discussions "No-justification", "Zero-information
> > principle", "All of mathematics" and Hal Ruhl's dualling All and
> > Nothing (or should that be "duelling") are really just motivators for
> > getting at the ensemble, which turns out remarkably to be the same in
> > each case - the set of 2^\aleph_0 infinite strings or histories.
>
>
> Once you fix a programming language or a universal machine, then I can
You don't even need a universal machine. All you need is a mapping
from infinite strings to integers. And that can be given by the
observer, where the integers are an enumeration of the oberver's
possible interpretations.
> imagine how to *represent* an history by an infinite string. But then
> you are using comp and you know the consequences. Unless like some
> people (including Schmidhuber) you don't believe in the difference
> between first and third person points of view.
>
>
> (Youness Ayaita wrote:
>
> > When I first wanted to capture mathematically the Everything, I tried
> > several mathematicalist approaches. But later, I prefered the
> > Everything ensemble that is also known here as the Schmidhuber
> > ensemble.
>
>
> Could you Youness, or Russell, give a definition of "Schmidhuber
The set of all infinite length strings in some chosen alphabet.
> Also I still don't know if the "physical universe" is considered as an
> ouptut of a program, or if it is associated to the running of a
> program.)
No, it is considered to be the stable, sharable dream, as you
sometimes put it. It is the interpretation of the observer, but it
isn't arbitrary.
>
>
> Russell Standish wrote :
>
>
> > Where differences lie is in the measure attached to these strings. I
> > take each string to be of equal weight to any other, so that there are
> > twice the measure of strings satisfying 01* as 011*. This leads
> > naturally to a universal prior.
>
>
> I don't understand. If all infinite strings have the same measure, what
> is the meaning of "universal prior"?
>
The universal prior is a measure on certain sets of strings.
>
> > Neither Bruno's nor Max's theories give a measure,
>
>
> > but remarkably the
> > Occam's razor theorem and White Rabbit result is fairly insensitive to
> > the measure chosen (so long as it's not too pathological!).
>
>
> I don't understand this either.
>
The measure induced by the process of observation is enough to turn a
uniform measure, which is wabbity into one that is not (universal
prior). If the ensemble measure chosen was less wabbity (eg
Schmidhuber's speed prior for instance), then the observer measure
will also be non-wabbity. It is hard to imagine a more wabbity
distribution than the uniform one, but perhaps a delta function on an
extremely wabbity string might do the trick.
>
> > On your comment on permitting infinite strings - the ensemble I
> > describe in my book has only infinite strings, which belong to
> > syntactic space.
>
>
> ?
>
I explain syntactic and semantic spaces in my book - it's better to
read that than to try to reproduce it here. These concepts are known
by other names microscopic/macroscopic, L_1/L_2 and so on, but
syntactic/semantic seemed to capture the concept best in the most generality.
>
> > It would be possible to construct an ensemble of purely finite strings
> > (all strings of length googol bits, say). This wouldn't satisfy the
> > zero information principle, or your no-justification, as you still
> > have the finite string size to justify (why googol and not googol+1,
> > for instance). I suspect the observable results would be
> > indistinguishable from the infinite string ensembles for large enough
> > string string size, however.
>
> Hmmm... I think that once we do care about the difference between 3-pov
> and 1-pov, such difference (between ensemble of finite and infinite
> strings) does become palpable (empirically), unless you take special
> infinite set of arbitrarily long (but finite) strings, but then all
> will depends on the chosen representations.
>
As they say in "Grease" - "Tell me more, tell me more...." I suspect
that it would only be detected empirically if your instruments were
accurate enough, which is why I chose a googol, rather than say a
hundred million (which Borges chose for his Library of Babel).
> Bruno
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
For more options, visit this group at | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8439711928367615, "perplexity": 4057.008595377338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00167.warc.gz"} |
http://alpmestan.com/posts/2014-06-18-testing-attoparsec-parsers-with-hspec.html | Almost all haskellers end up, some day, having to write a parser. But then, that’s not really a problem because writing parsers in Haskell isn’t really annoying, like it tends to be elsewhere. Of special interest to us is attoparsec, a very fast parser combinator library. It lets you combine small, simple parsers to express how data should be extracted from that specific format you’re working with.
# Getting our feet wet with attoparsec
For example, suppose you want to parse something of the form |<any char>| where <any char> can be… well, any character. We obviously only care about that precise character sitting there – once the input is processed, we don’t really care about these | anymore. This is a no-brainer with attoparsec.
module Parser where

import Data.Attoparsec.Text

weirdParser :: Parser Char
weirdParser = do -- attoparsec's Parser type has a useful monad instance
  char '|'       -- matches just '|', fails on any other char
  c <- anyChar   -- matches any character and returns it
  char '|'       -- matches just '|', like on the first line
  return c       -- return the inner character we parsed
Here we go, we have our parser. If you’re a bit lost with these combinators, feel free to switch back and forth between this article and the documentation of Data.Attoparsec.Text.
This parser will fail if any of the 3 smaller parsers I’m using fail. If there’s more input than just what we’re interested in, the additional content will be left unconsumed.
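As a small aside (this variant is not part of the parser we will test below), if you ever wanted to reject trailing input instead of leaving it unconsumed, you could in principle combine the parser with attoparsec's endOfInput:

weirdParserOnly :: Parser Char
weirdParserOnly = do
  c <- weirdParser -- parse |<any char>| exactly as before
  endOfInput       -- then insist that the input stops here
  return c

With that variant, the "hello world" example further down would make the parser fail rather than succeed with leftovers.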
Let’s now see our parser in action, by loading it in ghci and trying to feed it various inputs.
First, we want to be able to type in Text values directly without using conversions functions from/to Strings. For that reason, we enable the OverloadedStrings extension. We also import Data.Attoparsec.Text because in addition to containing char and anyChar it also contains the functions that let us run a parser on some input (make sure attoparsec is installed).
λ> :set -XOverloadedStrings
λ> import Data.Attoparsec.Text
Data.Attoparsec.Text contains a parse function, which takes a parser and some input, and yields a Result. A Result will just let us know whether the parser failed, with some diagnostic information, or if it was on its way to successfully parsing a value but didn’t get enough input (imagine we just feed "|x" to our parser: it won’t fail, because it looks almost exactly like what we want to parse, except that it doesn’t have that terminal '|', so attoparsec will just tell us it needs more input to complete – or fail), or, finally, if everything went smoothly and it actually hands back to us a successfully parser Char in our case, along with some possibly unconsumed input.
Why do we care about this? Because when we’ll test our parsers with hspec-attoparsec, we’ll be able to test the kind of Result our parsers leaves us with, among other things.
Back to concrete things, let’s run our parser on a valid input.
λ> parse weirdParser "|x|"
Done "" 'x'
That means it successfully parsed our inner 'x' between two '|'s. What if we have more input than necessary for the parser?
λ> parse weirdParser "|x|hello world"
Done "hello world" 'x'
Interesting! It successfully parsed our 'x' and also tells us "hello world" was left unconsumed, because the parser didn’t need to go that far in the input string to extract the information we want.
But, if the input looks right but lets the parser halfway through completing, what happens?
λ> parse weirdParser "|x"
Partial _
Here, the input is missing the final | that would make the parser succeed. So we're told that the parser has partially succeeded, meaning that with that input, it's been running successfully but hasn't yet parsed everything it's supposed to. What that Partial holds isn't just an underscore but a function to resume the parsing with some more input (a continuation). The Show instance for parsers just writes a _ in place of functions.
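Since that Partial value is a continuation, attoparsec's feed function can hand it the missing input and resume parsing. A quick ghci sketch of what that could look like (binding the partial result to a name first):

λ> let partialResult = parse weirdParser "|x"
λ> feed partialResult "|"
Done "" 'x'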
Ok, and now, how about we feed some “wrong data” to our parser?
λ> parse weirdParser "bbq"
Fail "bbq" ["'|'"] "Failed reading: satisfy"
Alright! Equipped with this minimal knowledge of attoparsec, we’ll now see how we can test our parser.
# Introducing hspec-attoparsec
Well, I happen to be working on an HTML parsing library based on attoparsec, and I’ve been using hspec for all my testing needs these past few months – working with the author surely helped, hello Simon! – so I wanted to check whether I could come up with a minimalist API for testing attoparsec parsers.
If you don’t know how to use hspec, I warmly recommend visititing hspec.github.io, it is well documented.
So let’s first get the boilerplate out of our way.
{-# LANGUAGE OverloadedStrings #-}
module ParserSpec where
-- we import Text, this will be our input type
import Data.Text (Text)
-- we import hspec, to run the test suite
import Test.Hspec
-- we import 'hspec-attoparsec'
import Test.Hspec.Attoparsec
-- we import the module where our parser is defined
import Parser
main :: IO ()
main = hspec spec
spec :: Spec
spec = return () -- this is temporary, we'll write our tests here
And sure enough, we can already get this running in ghci (ignore the warnings, they are just saying that we’re not yet using our parser or hspec-attoparsec), although it’s quite useless:
λ> :l example/Parser.hs example/ParserSpec.hs
[1 of 2] Compiling Parser ( example/Parser.hs, interpreted )
[2 of 2] Compiling ParserSpec ( example/ParserSpec.hs, interpreted )
example/ParserSpec.hs:8:1: Warning:
The import of ‘Test.Hspec.Attoparsec’ is redundant
except perhaps to import instances from ‘Test.Hspec.Attoparsec’
To import instances alone, use: import Test.Hspec.Attoparsec()
example/ParserSpec.hs:10:1: Warning:
The import of ‘Parser’ is redundant
except perhaps to import instances from ‘Parser’
To import instances alone, use: import Parser()
λ> ParserSpec.main
Finished in 0.0001 seconds
0 examples, 0 failures
Alright, let’s first introduce a couple of tests where our parser should succeed.
spec :: Spec
spec = do
describe "weird parser - success cases" $do it "successfully parses |a| into 'a'"$
("|a|" :: Text) ~> weirdParser
shouldParse 'a'
it "successfully parses |3| into '3'" $("|3|" :: Text) ~> weirdParser shouldParse '3' it "successfully parses ||| into '|'"$
("|||" :: Text) ~> weirdParser
shouldParse '|'
We’re using two things from hspec-attoparsec:
• (~>), which connects some input to a parser and extracts either an error string or an actual value, depending on how the parsing went.
• shouldParse, which takes the result of (~>) and compares it to what you expect the value to be. If the parsing fails, the test won’t pass, obviously, and hspec-attoparsec will report that the parsing failed. If the parsing succeeds, the parsed value is compared to the expected one and a proper error message is reported with both values printed out.
(~>) :: Source parser string string' result
=> string -- ^ the input
-> parser string' a -- ^ the parser to run
-> Either String a -- ^ either an error or a parsed value
shouldParse :: (Eq a, Show a)
=> Either String a -- ^ result of a call to ~>
-> a -- ^ expected value
-> Expectation -- ^ resulting hspec "expectation"
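To build an intuition for what shouldParse is doing under the hood, here is a minimal sketch of the idea written against plain hspec primitives. This is not the library's actual implementation, and the name shouldParse' is only used to avoid a clash; expectationFailure and shouldBe both come from Test.Hspec, which our spec module already imports.

shouldParse' :: (Eq a, Show a) => Either String a -> a -> Expectation
shouldParse' (Left err) _            = expectationFailure ("parsing failed: " ++ err)
shouldParse' (Right actual) expected = actual `shouldBe` expected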
Running them gives:
λ> ParserSpec.main
weird parser - success cases
- successfully parses |a| into 'a'
- successfully parses |3| into '3'
- successfully parses ||| into '|'
Finished in 0.0306 seconds
3 examples, 0 failures
If we modify our first test case by expecting 'b' instead of 'a', while still having "|a|" as input, we get:
λ> ParserSpec.main
weird parser - success cases
- successfully parses |a| into 'b' FAILED [1]
- successfully parses |3| into '3'
- successfully parses ||| into '|'
- successfully parses a digit character from |3|
1) weird parser - success cases successfully parses |a| into 'b'
expected: 'b'
but got: 'a'
Randomized with seed 1330009810
Finished in 0.0267 seconds
4 examples, 1 failure
*** Exception: ExitFailure 1
Nice! But what else can we test? Well, we can test that what we parse satisfies some predicate, for example. Let’s add the following to spec:
-- you have to add: import Data.Char (isDigit)
-- in the import list
it "successfully parses a digit character from |3|" $("|3|" :: Text) ~> weirdParser parseSatisfies isDigit where parseSatisfies :: Show a => Either String a -- ^ result of ~> -> (a -> Bool) -- ^ predicate the parsed value should satisfy -> Expectation -- ^ resulting hspec expectation And we get: λ> ParserSpec.main weird parser - success cases - successfully parses |a| into 'a' - successfully parses |3| into '3' - successfully parses ||| into '|' - successfully parses a digit character from |3| Finished in 0.0012 seconds 4 examples, 0 failures Great, what else can we do? Well, sometimes we don’t really care about the concrete values produced, we just want to test that the parser succeeds or fails on some precise inputs we have, because that’s how it’s supposed to behave and we want to have a way that changes in the future won’t affect the parser’s behavior on these inputs. This is what shouldFailOn and shouldSucceedOn are for. Let’s add a couple more tests: spec :: Spec spec = do describe "weird parser - success cases"$ do
it "successfully parses |a| into 'a'" $("|a|" :: Text) ~> weirdParser shouldParse 'a' it "successfully parses |3| into '3'"$
("|3|" :: Text) ~> weirdParser
shouldParse '3'
it "successfully parses ||| into '|'" $("|||" :: Text) ~> weirdParser shouldParse '|' it "successfully parses a digit character from |3|"$
("|3|" :: Text) ~> weirdParser
parseSatisfies isDigit
-- NEW
it "successfully parses |\160|" $weirdParser shouldSucceedOn ("|\160|" :: Text) -- NEW describe "weird parser - failing cases"$ do
it "fails to parse |x-" $weirdParser shouldFailOn ("|x-" :: Text) it "fails to parse ||/"$
weirdParser shouldFailOn ("||/" :: Text)
where
shouldSucceedOn :: (Source p s s' r, Show a)
=> p s' a -- ^ parser to run
-> s -- ^ input string
-> Expectation
shouldFailOn :: (Source p s s' r, Show a)
=> p s' a -- ^ parser to run
-> s -- ^ input string
-> Expectation
And we run our new tests:
λ> :l example/Parser.hs example/ParserSpec.hs
[1 of 2] Compiling Parser ( example/Parser.hs, interpreted )
[2 of 2] Compiling ParserSpec ( example/ParserSpec.hs, interpreted )
λ> ParserSpec.main
weird parser - success cases
- successfully parses |a| into 'a'
- successfully parses |3| into '3'
- successfully parses ||| into '|'
- successfully parses a digit character from |3|
- successfully parses | |
weird parser - failing cases
- fails to parse |x-
- fails to parse ||/
Finished in 0.0015 seconds
7 examples, 0 failures
I think by now you probably understand how to use the library, so I’ll just show the last useful function: leavesUnconsumed. This one will just let you inspect the unconsumed part of the input if there’s any. Using it, you can easily describe how eager in consuming the input your parsers should be.
describe "weird parser - leftovers" $it "leaves \"fooo\" unconsumed in |a|fooo"$
("|a|fooo" :: Text) ~?> weirdParser
leavesUnconsumed "fooo"
Right now, hspec-attoparsec will only consider leftovers when the parser succeeds. I’m not really sure whether we should return Fail’s unconsumed input or not.
# Documentation
The code lives at github.com/alpmestan/hspec-attoparsec, the package is on hackage here where you can also view the documentation. A good source of examples is the package’s own test suite, that you can view in the repo. The example used in this article also lives in the repo, see example/. Let me know through github or by email about any question, feedback, PR, etc. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33771055936813354, "perplexity": 8631.204757726819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820466.2/warc/CC-MAIN-20171016214209-20171016234209-00351.warc.gz"} |
https://kb.firedaemon.com/support/solutions/articles/4000086975-left-4-dead-2-source-as-a-service | # How to run Left 4 Dead 2: Source as a Windows Service Service with FireDaemon Pro.
Left 4 Dead 2 is a first person action game. The dedicated server component can be run as a Windows Service using FireDaemon Pro, which allows you to start the dedicated server automatically at boot prior to login, start multiple instances of the dedicated server and more. This HOWTO will show you how to set it up. You can also use FireDaemon Fusion to manage FireDaemon and other Windows services via a web browser.
## Left 4 Dead 2 Dedicated Server Setup Under FireDaemon Pro
Go to the directory where you installed SteamCMD and create a shortcut to "SteamCMD.exe". Next, edit the properties of the shortcut and, at the end of the Target box (with a space before the following), add:
+login anonymous +force_install_dir "C:\L4D2" +app_update 222860 validate +quit
The target box should now look something like:
C:\SteamCMD\steamcmd.exe +login anonymous +force_install_dir "C:\L4D2" +app_update 222860 validate +quit
Now click the shortcut you created and let it run to download the Left 4 Dead 2 server files. It might take a few hours to update everything. You should also run the shortcut every week or so to grab the latest server updates. Make sure to stop your server first.
Left 4 Dead 2 uses one configuration file to store its settings. By default the configuration file is not included, so create a file named "server.cfg" and put it in the CFG folder of Left 4 Dead 2 (e.g. "C:\L4D2\left4dead2\left4dead2\cfg\server.cfg"). An example configuration file is included at the end of this HOWTO. Go ahead and modify this file to your preferences.
Next start the FireDaemon GUI from the desktop shortcut. Click on the "Create a new service definition" button in the toolbar (or type Ctrl+N) and enter the information into the fields as you see below. Adjust the paths to suit your installation. Note the required parameters.
Executable: The path to your srcds.exe file. For the purposes of this HOWTO, the path is C:\L4D2\left4dead2\srcds.exe.
Working Directory: The directory containing your srcds.exe file. For the purposes of this HOWTO, the path is C:\L4D2\left4dead2.
The most important field on the tab is the Parameters. The Parameters define the initial setup of your server. Here’s the full parameter list you should have:
-console -game left4dead2 -secure +map c1m1_hotel -autoupdate +log on +maxplayers 4 -port 27015 +ip 1.2.3.4 +exec server.cfg
• "-console” enables text base server display. The server can only be automatically restarted in text based mode.
• “-secure” enables VAC (Valve Anti Cheat) protection of your server. You can remove this command if you do not want to use VAC.
• “+map” loads a specified map on server startup. You can change “c1m1_hotel” to whatever map you want. This command should never be removed.
• “-autoupdate” Enables auto update of the server. Valve has not implemented this in Windows so you will have to manually update Left 4 Dead 2 yourself. It’s here for the sake of legacy support if Valve ever decides to add it.
• “+log on” Displays the output of information on the screen. You may turn it off (+log off), but keeping it on makes it easier to debug any errors you might encounter.
• "+maxplayers 4" controls the maximum number of players you want your server to run. You can only set the max players on server startup. This command should never be removed. Left 4 Dead 2 can only handle up to 4 players in Coop mode and 8 players in Versus mode.
• “-port 27015” This is the default server port. You can change it to anywhere from 27015 to 27020. Changing it is generally used when you host multiple servers (as each server has to use its own port when using the same IP). This command should never be removed.
• "+ip" should be set to your computer's IP address (not 127.0.0.1). This command should never be removed.
• "+exec server.cfg" executes your server.cfg file on server startup. If you run multiple servers from the same installation, you can specify other config files (e.g. server2.cfg), as in the example below.
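For instance, a second server instance run from the same installation might use a parameter line along these lines (the port, IP, player count and config file name are placeholders you would adjust to your own setup):

-console -game left4dead2 -secure +map c1m1_hotel +log on +maxplayers 8 -port 27016 +ip 1.2.3.4 +exec server2.cfg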
Now click on the Settings tab. If you DON'T want to see your dedicated server running, uncheck the Interact with Desktop check box & select “Hidden” from the “Show Window” dropdown. You can optionally run Left 4 Dead 2 as the user you installed it as. In the Logon Account field type your username (e.g. Administrator) and then enter the user's password twice in the Password and Confirm fields. You can change the Process Priority to allocate more CPU time to the dedicated server or specify which CPU or core the dedicated server will run on (in the case of multi-processor, hyperthreaded or multi-core CPUs).
Now click on the Lifecycle tab. Uncheck Graceful Shutdown as Left 4 Dead 2 doesn't respond to it.
Now click OK to finish setup and start your Left 4 Dead 2 server!
## Example Configuration File
Below is an example server.cfg file:
// Server Name
hostname "Left 4 Dead 2 Server"
// Rcon Cvars
// Server Cvars
mp_disable_autokick 1 //Prevents a userid from being auto-kicked
sv_allow_color_correction 0 //Allow or disallow clients to use color correction on this server.
sv_allow_wait_command 0 //Allow or disallow the wait command on clients connected to this server.
sv_alltalk 0 //Players can hear all other players, no team restrictions
sv_alternateticks 0 //If set, server only simulates entities on even numbered ticks.
sv_cheats 0 //Allow cheats on server
sv_clearhinthistory 0 //Clear memory of server side hints displayed to the player.
sv_consistency 1 //Whether the server enforces file consistency for critical files
sv_contact "************" //Contact email for server sysop
sv_pausable 0 //Is the server pausable.
sv_steamgroup_exclusive 1 // Setting to 0 will not connect to the Match Making Service
sv_steamgroup // Family Steam Group Number
// Lan or internet play, Server region cvars
sv_lan 0 //Server is a lan server ( no heartbeat, no authentication, no non-class C addresses )
sv_region 3 // Region Codes: 0 - US East coast, 1 - US West coast, 2 - South America, 3 - Europe, 4 - Asia, 5 - Australia, 6 - Middle East, 7 - Africa, 255 - world
// HTTP Redirect
sv_filetransfercompression 0
// Server Logging
sv_log_onefile 0 //Log server information to only one file.
sv_logbans 0 //Log server bans in the server logs.
sv_logfile 1 //Log server information in the log file.
sv_logflush 0 //Flush the log file to disk on each write (slow).
sv_logsdir "logs" //Folder in the game directory where server logs will be stored.
// bandwidth rates/settings
sv_minrate 0
sv_maxrate 25000
sv_minupdaterate 10
sv_maxupdaterate 33
sv_mincmdrate 10
sv_maxcmdrate 33
sv_client_cmdrate_difference 1
sv_client_predict 1
sv_client_interpolate 1
sv_client_min_interp_ratio -1
sv_client_max_interp_ratio -1
sv_rcon_banpenalty 60
sv_rcon_maxfailures 10
sv_rcon_minfailures 5
sv_rcon_minfailuretime 45
sv_allow_lobby_connect_only 0
// Voice Comm
sv_voiceenable "1"
sv_voicecodec vaudio_speex
// Exec Configs
exec banned_user.cfg
mapchangecfgfile server.cfg
mapcyclefile mapcycle.txt | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15797141194343567, "perplexity": 8132.141777341548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00373.warc.gz"} |
https://civilengineering365.com/modulating-electrophysiology-of-motor-neural-networks-via-optogenetic-stimulation-during-neurogenesis-and-synaptogenesis/ | # Modulating electrophysiology of motor neural networks via optogenetic stimulation during neurogenesis and synaptogenesis
Jul 27, 2020
Optogenetic stimulation was used on mESC-derived MEBs to implement training regimens during two important stages of neural development: neurogenesis (while still in suspension) and synaptogenesis (seeded on functionalized glass or MEAs) (Fig. 1a). Training regimens consisted of periodic stimulation with 5 ms pulses at 20 Hz in 1 s intervals for an hour (Supplementary Fig. 1a). This regimen has been shown to enhance axonal growth30, and thus would suggest that it could lead to a shift in structural potentiation in a neural network. The regimen was repeated every 24 h as differentiation occurred within the EBs, with an expectation that consistent repetition would enhance the potentiation and cause long-term changes in the firing patterns of the network. Following established differentiation protocols of mESC towards mature motor neurons31,32,33, the described training regimen was started at D2 of differentiation, at which point stem cells have been induced towards neuronal lineages, and specialization and maturation of motor neurons has been shown to take place in the subsequent 7 days (Fig. 1b). Since one of the transcription factors that drove differentiation, retinoic acid, is light sensitive, media was changed every single day immediately after stimulation to ensure that stimulation effects on MEBs were not artifacts (i.e. false positives) caused by photodegradation of factors (Supplementary Fig. 1b)34. Furthermore, since the differentiation was monitored with the expression of the motor neuronal marker Hb9 through a GFP reporter, we used the plateau of GFP expression between D8 and D9, as an indicator that D9 was an appropriate time point for seeding the MEBs on glass (Supplementary Fig. 1c). Thus, after these 7 days (D2-D9) of differentiation, stimulated (S) and non-stimulated (NS) cultures were seeded on MEA chips (Fig. 1c). Careful seeding practices were applied to ensure that ~ 20 MEBs were seeded within the sensing area of the MEAs for a ~ 50% coverage by the MEBs (Supplementary Fig. 2). Seeding in this manner ensured empty space between clusters for the extension of processes, even though some nearby clusters would start fusing into larger clusters. The resulting two groups of samples seeded on MEAs were further subdivided into two more experimental groups, referring to whether or not a training regimen was continued during network formation on chip for the consequent 15 days (D10-D25). For ease of discussion, S or NS prior to a colon (e.g. S:X or NS:X) will refer to the presence or lack thereof of stimulation, during neurogenesis, while S or NS written after a colon (e.g. X:S or X:NS), indicates the presence or absence of stimulation during synaptogenesis (Fig. 1a).
The electrical activity of the resulting neuronal cultures was measured with the MEA system and the raw data was filtered to remove low frequencies (< 200 Hz), to remove undesired voltage artifacts (e.g. stimulation artifacts), and extract action potentials recorded as spiking events (Fig. 1d). A two-step procedure was used to remove false positives from the analyzed data: (1) the detection threshold was set at a value at which no positives would be detected from the ground electrode, then (2) the recorded spikes at each electrode were inspected to ensure that the detected spikes had the appropriate voltage phases relating to action potentials: depolarization, repolarization and refractory period.
### MEB cultures form active neural networks with excitatory and inhibitory populations
In this work, neural networks were cultured from intact MEBs, in contrast to growing them as a monolayer after dissociation. The long-term goal of our study is the modulation of electrical activity of the MEBs towards downstream implantation in in-vivo or in-vitro experimental systems and modulating the functionality of such systems through the resulting interaction. When cultured in their intact form, MEBs tend to keep their spheroid shape, while extending processes which contain neurites that form networks as they undergo synaptogenesis (Fig. 2a). Furthermore, dense web-like neurite structures form within the spheroid itself (Fig. 2b) and both excitatory (vGlut) and inhibitory (GAD65/67) receptors stain positively (Fig. 2c).
Network formation was validated by exposing MEB cultures grown on MEAs (Fig. 2d) to varying concentrations of commonly used excitatory and inhibitory signaling molecules for 5 min: L-glutamate, acetylcholine, cyclic AMP, cyclic GMP, norepinephrine and GABA (Fig. 2e). As expected, L-glutamate evoked a statistically significant (repeated measures ANOVA with a Greenhouse–Geisser correction, n = 15; F(1.28,17.89) = 18.78, p = 1.88E-4) response in the network. A post hoc Tukey test showed a statistically significant positive difference at p < 0.05 between 0 µM and 10 µM, while higher concentrations, 100 µM and 250 µM, showed a decrease in firing rate, with the latter showing a statistically significant negative difference to the spontaneous firing rate, most likely related to excitotoxicity35. The other excitatory signaling molecules, acetylcholine and cyclic AMP, evoked a continuously excitatory response (repeated measures ANOVA; ACh (with Greenhouse–Geisser correction), n = 15: F(2.13,29.78) = 16.14, p = 1.31E-5 and cAMP: F(3,42) = 125.49, p = 4.20E-15), with a gradual increase in firing rate at increasing concentrations. Cyclic GMP, another cyclic nucleotide similar in function to cAMP, failed to evoke any statistically significant effect on firing rate (repeated measures ANOVA with a Greenhouse–Geisser correction, n = 15; F(2.08,29.18) = 2.86, p = 0.07). On the other hand, the inhibitory neurotransmitters evoked statistically significant effects on the MEB-derived networks: norepinephrine (repeated measures ANOVA, n = 15; F(3,42) = 81.43, p = 1.53E-17) showed a statistically significant decrease at p < 0.05 in a post hoc Tukey test from 0 µM to 10 µM, and 100 µM to 250 µM, while GABA (repeated measures ANOVA, n = 15; F(3,42) = 191.55, p = 1.60E-24) showed a statistically significant decrease in firing rate at p < 0.05 in a post hoc Tukey test at each concentration. The responses corroborated the development of endogenously active neural networks expressing different kinds of receptors. The observation that MEBs extend processes within the body itself while responding to both excitatory and inhibitory signaling molecules leads to the hypothesis that these MEBs could be forming intrabody circuits which could be "trained" during differentiation and have these changes last after network formation.
#### Stimulation during neurogenesis results in morphological changes in MEB cultures
The effects of stimulation during differentiation were initially observed in neurite extension and presynaptic protein clustering. While it has been reported that neurite outgrowth can be enhanced if neural populations simultaneously undergo optogenetic stimulation30, it was not clear whether the effects of stimulation applied to MEBs in suspension would still result in an increase of neurite extension when later seeded on chips, as this would indicate stable long-term changes in the neuronal system. To quantify this, S:NS and NS:NS MEBs were seeded at low confluence on gridded coverslips and imaged 6 times every two hours on D10 (1 DIV) to quantify the number of extending neurites (Fig. 3a). Observations showed a consistently statistically significant positive difference (ANOVA, n = 20; 14hrs: F(1,38) = 215.44, p = 0.0; 16hrs: F(1,38) = 148.40, p = 1.08E-2; 18hrs: F(1,38) = 257.32, p = 0.0; 20hrs: F(1,38) = 199.14, p = 1.11E-2; 22hrs: F(1,38) = 221.35, p = 0.0; 24hrs: F(1,38) = 76.11, p = 1.31E-2) in the number of neurites extended for S:NS samples compared to NS:NS, for each of the six hours at which the two groups were measured and compared. This indicates an increased rate of neurite extension as a result of the stimulation during neurogenesis (Fig. 3b). Next, we wanted to observe the effect of stimulation during differentiation on the propensity of the network to form synapses. To quantify this, the clustering of presynaptic synaptophysin, stained with anti-SY38, was counted along individual neurites as well as per unit area for the groups NS:NS and S:S (Fig. 3c). By D11 (2 DIV), S:S samples showed a statistically significant ~ twofold increase (ANOVA, n = 10; F(1,18) = 24.58, p = 1.02E-4) in synaptophysin clusters per neurite over NS:NS samples (Fig. 3d). This increase of pre-synaptic clusters per neurite, combined with the increase in neurite extension, resulted in S:S samples presenting statistically significantly higher synaptophysin clusters per unit area than NS:NS counterparts at D11 (ANOVA, n = 10; F(1,18) = 40.18, p = 5.68), D13 (ANOVA, n = 10; F(1,18) = 131.58, p = 1.04E-9) and D15 (ANOVA, n = 10; F(1,18) = 74.87, p = 7.88E-8) (Fig. 3e). The statistically significant difference in pre-synaptic clusters per unit area monitored at D13 and D15 indicated that optogenetic stimulation during neurogenesis evoked physiological responses in two important aspects of neural network development: neurite extension and presynaptic clustering (Fig. 3e).
#### MEB network synchronicity is amplified by stimulation during neurogenesis and synaptogenesis
Network synchrony is a common parameter used to characterize a developing neural network, as it gives information on the network’s plasticity and connectivity. Various studies have successfully shown that the presence of chronic stimulation results in improved network synchrony36,37,38. In our study, we wanted to observe the long-term effects of stimulation regimens on the network synchrony and determine if these effects were amplified or shifted when the training regimen during neurogenesis was extended during synaptogenesis. From the raster plots of the spontaneous activity recorded at D21, the increased level of synchronous activity was notable between NS:S and S:S samples versus S:NS and NS:NS (Fig. 4a). This can be appreciated by the peaks above the raster plots, which correspond to a summation of the activity across all electrodes, where synchronous networks would result in discrete peaks whereas in samples that lacked coordinated firing, the resulting line plot seemed to lack any peaks.
Similarity between electrode recordings was quantified with cross-correlation in order to characterize synchronous behavior. Values for the similarity across the network were obtained by calculating cross-correlation for all electrode combinations (Supplementary Fig. 3). For this analysis, only spontaneous recordings of active electrodes (electrodes detecting at least 10 spikes/min) were used to quantify the long-term effects of the training regimen on steady state synchrony. When average correlation values per electrode were mapped to their position on the chip, NS:S and S:S samples showed a high synchrony level ($\bar{\chi}$ > 0.5) across the entire network for spontaneous recordings at D21 (Fig. 4b). This showed that synchronous behavior extended across the entire network and was markedly higher for networks that were stimulated during synaptogenesis.
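For readers who want the quantity written out: the text does not give the exact estimator, but a common choice for this kind of analysis is the zero-lag normalized cross-correlation of the binned spike-count trains $x_i(t)$ and $x_j(t)$ of electrodes $i$ and $j$, averaged over all electrode pairs to give the network-wide value $\bar{\chi}$:

$$C_{ij} = \frac{\sum_{t}\big(x_i(t)-\bar{x}_i\big)\big(x_j(t)-\bar{x}_j\big)}{\sqrt{\sum_{t}\big(x_i(t)-\bar{x}_i\big)^2\,\sum_{t}\big(x_j(t)-\bar{x}_j\big)^2}}, \qquad \bar{\chi} = \frac{2}{N(N-1)}\sum_{i<j} C_{ij},$$

where $N$ is the number of active electrodes; this is a sketch of one standard definition, not necessarily the exact formula used in the study.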
Interestingly, when the network wide mean synchronicity was calculated for each recording day, a trend of higher synchrony was observed for samples that had been exposed to some form of training regimen (NS:S, S:NS or S:S), but no statistical significance was observed at D11 (ANOVA, n = 3; F(3,8) = 3.42, p = 0.073) and D13 (ANOVA, n = 3; F(3,8) = 1.77, p = 0.23). At D15, a statistically significant difference (ANOVA, n = 3; F(3,8) = 7.47, p = 0.010) was observed, with a post hoc Tukey test performed at p < 0.05 showing statistical significance between NS:S and S:NS $\bar{\chi}$ values. Subsequently, while no statistical significance was observed for D17 (ANOVA, n = 3; F(3,8) = 3.88, p = 0.055), D19 (ANOVA, n = 3; F(3,8) = 3.58, p = 0.066) and D21 (ANOVA, n = 3; F(3,8) = 3.61, p = 0.065), a gradual trend was observed for the synchronicity of networks undergoing training during synaptogenesis (NS:S and S:S) being larger than their counterparts (NS:NS and S:NS). At D23, there was a statistically significant difference among the experimental groups (ANOVA, n = 3; F(3,8) = 8.73, p = 6.6E-3). Post hoc comparisons using a Tukey test at p < 0.05 indicated that the $\bar{\chi}$ values for NS:S and S:S were higher than those of both the NS:NS and S:NS groups. This statistical significance was sustained for D25 (ANOVA, n = 3; F(3,8) = 6.46, p = 0.016), with the post hoc Tukey test showing a significant difference between $\bar{\chi}$ for S:S and $\bar{\chi}$ for NS:NS as well as S:NS (Fig. 4c).
#### Spectral density elucidates changes in steady state firing
Conventionally, electrophysiological behavior is characterized by firing rate during set epochs and by burst parameters (Supplementary Fig. 4). However, when analyzing these parameters during spontaneous firing, there was no discernable trend in the change of long-term firing rate or burst parameters between experimental groups. Yet when observing the spike data during the steady state of a more mature neural network (D25), there were deviations in how the spike firing clustered into bursts, despite the fact that no clear change in the number of spikes was observed (Fig. 5a). We attributed this apparent conflict between the quantitative and qualitative data to the selection method of the burst detection parameters (see Quantification and statistical analysis). In order to avoid arbitrariness in the selection of these parameters, we decided to characterize the data in the frequency domain. For this reason, we focused on characterizing spontaneous firing recorded on MEAs by comparing changes in the power spectra of recorded signals calculated through Fourier transforms (Fig. 5b). To obtain spectral profiles, binned spike counts were divided into 10-s-long contiguous windows and transformed to the frequency domain, thus representing the power spectrum as a function of time (Fig. 5b). When initially calculating the power spectral density (PSD) and observing between the DC frequency and the Nyquist frequency, we noticed that most of the components appeared below 7 Hz for all samples. For this reason, we compared samples between 0.1 Hz (to remove the DC component) and 5 Hz. Focusing between 0.1–5 Hz, all samples except S:S showed frequency profiles of their respective firing patterns with components across the entire bandwidth of interest. Such spontaneous heterogeneous firing patterns can be expected from these cultures formed from MEBs, as they are a super-network composed of individual networks from within each MEB. On the other hand, S:S samples showed a clear change in their frequency profile, where most of the spectral power fell within 0.1-1 Hz.
Moreover, if the signal power is summed over the frequency range of 0.1-1 Hz, the training regimen pattern had a statistically significant effect at p < 0.05 on the power magnitude within this frequency interval (ANOVA, n = 3; F(3,8) = 20.15, p = 4.37E-4). Post hoc comparisons using a Tukey test at p < 0.05 showed a statistically significant difference in power magnitude within 0.1-1 Hz between samples not stimulated during synaptogenesis (NS:NS, S:NS) and samples stimulated throughout development (S:S) (Fig. 5c). Moreover, the post hoc Tukey test indicated a statistically significant difference between the power spectra values of NS:S and S:S, implying that combined stimulation during both neurogenesis and synaptogenesis had a stronger effect on modulating the power spectra of the networks than stimulation during synaptogenesis alone. This statistical significance was not observed in the mature networks (D25: ANOVA, n = 3; F(3,8) = 0.063, p = 0.98) if the power was summed for the whole frequency interval of interest (0.1-5 Hz) (Supplementary Fig. 5).
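As a reference for the spectral quantities discussed above (the exact normalization used by the authors is not stated), the periodogram of a 10-s window of binned spike counts $x_0,\dots,x_{N-1}$ and the low-frequency band power would typically be computed as

$$P(f_k) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n/N}\right|^2, \qquad P_{0.1\text{-}1\,\mathrm{Hz}} = \sum_{0.1\,\mathrm{Hz}\,\le\, f_k\,\le\, 1\,\mathrm{Hz}} P(f_k),$$

with $f_k = k/(N\,\Delta t)$ for bin width $\Delta t$.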
#### Neurogenetic stimulation changes the opto-response of MEB networks
Another aspect of consideration on the effect of training MEBs during neurogenesis was whether the early stage perturbation had some effects on how the later-stage network would respond to the same perturbation. To study this, we recorded responses to optogenetic stimulation from sets of samples that had not undergone the training regimen during neurogenesis (Fig. 6a) and compared them to those sets that had undergone such a regimen (Fig. 6b). Initial observation showed a difference between how the networks responded when stimulated early in the network development (D11) versus more mature networks (D25). For example, when early networks, which had a low spontaneous firing rate (D11), were stimulated, there would be a very notable evoked response during stimulation followed by a quiescent state, where the network would barely fire before returning to the baseline spontaneous firing rate. In contrast, more mature networks (D25) would still show an evoked response during stimulation but would automatically return to the baseline firing rate right after stimulation ceased. What was interesting was that the quiescent time after stimulation for early S:S networks was notably shorter than that of the NS:S samples (Fig. 6a-b). Moreover, at D25, while NS:S samples would return to the same baseline firing rate right after stimulation stopped, S:S samples showed a transient change in firing rate for several seconds after the stimulation stopped (Fig. 6a-b).
To quantify this behavior, the evoked firing rate during stimulation (FRstim) and the post-response firing rate (FRpost) were compared to the firing rate prior to stimulation (FRpre) for the three instances of stimulation within recording for each of the three MEA networks for both experimental groups (Fig. 6c). While the fold-change increase of firing rate from FRpre to FRstim decreased with time for both NS:S (repeated measures ANOVA with Greenhouse–Geisser correction, n = 3; F(1.48, 11.83) = 14.79, p = 1.12E-3) and S:S (repeated measures ANOVA with Greenhouse–Geisser correction, n = 3; F(1.88, 15.02) = 11.02, p = 1.31E-3) (because more mature networks would have a higher baseline firing rate), when comparing the number of evoked action potentials during stimulation (FRstim/FRpre), S:S samples seemed to respond more strongly to stimulation than NS:S samples (Fig. 6d). One-way ANOVA determined a statistically significant difference between NS:S and S:S FRstim/FRpre values for D13 (n = 9; F(1, 16) = 5.55, p = 0.031), D15 (n = 9; F(1,16) = 5.90, p = 0.027), D17 (n = 9; F(1,16) = 11.30, p = 4E-3), D19 (n = 9; F(1,16) = 8.78, p = 9.2E-3), D23 (n = 9; F(1,16) = 10.81, p = 4.6E-3) and D25 (n = 9; F(1,16) = 9.94, p = 6.2E-3), while only showing a trend (not statistically significant) of higher S:S FRstim/FRpre values for D11 (n = 9; F(1,16) = 4.48, p = 0.05) and D21 (n = 9; F(1,16) = 1.1, p = 0.31).
Additionally, the quiescent state response post-stimulation observed in early days (D11, D13 and D15) reflected itself in FRpost being less than FRpre, resulting in FRpost/FRpre < 1 for NS:S and S:S samples. We observed that this transient decrease in firing rate was statistically significantly shorter for the S:S samples than the NS:S for D11 (ANOVA, n = 9; F(1,16) = 19.95, p = 3.9E-4) and D13 (ANOVA, n = 9; F(1,16) = 9.49, p = 7.2E-3) (Fig. 6e). Repeated measures ANOVA indicated that FRpost/FRpre ratios increased for both NS:S (Greenhouse–Geisser corrected, n = 9; F(3.06, 24.48) = 36.92, p = 2.69E-9) and S:S (n = 9; F(7,56) = 5.66, p = 5.63E-5). Furthermore, at later days of network development, it was notable that FRpost/FRpre was ~ 1 for NS:S, meaning that the steady state firing rate was indistinguishable from that immediately following the termination of stimulation. On the other hand, S:S samples showed FRpost/FRpre values above 1 from D17 forward, indicating that the network would transiently increase in firing rate right after stimulation. One-way ANOVA showed that this difference in FRpost/FRpre values between S:S and NS:S was statistically significant for D17 (n = 9; F(1,16) = 12.19, p = 3E-3), D21 (n = 9; F(1,16) = 6.94, p = 0.018) and D23 (n = 9; F(1,16) = 9.91, p = 6.23E-3), while only showing a non-statistically significant trend for D19 (n = 9; F(1,16) = 2.16, p = 0.16) and D25 (n = 9; F(1,16) = 3.76, p = 0.071). It is relevant to mention that these effects were observed while there was no perceivable change in the efficiency of the blue light to activate the ChR2 ion channels and evoke a response in the networks (Supplementary Fig. 6). These observations were corroborated by repeated measures ANOVA performed at p < 0.05, which showed no statistically significant change in efficiency (repeated measures ANOVA, n = 12; F(2,22) = 1.25, p = 0.31).
To further study how the training regimens affected network response, we also quantified the evoked response reflected in the network's synchronicity for the initial stimulation done on the initial spontaneous interval of recording. For this purpose, raster-plots of the average values of cross-correlation (as calculated for the analysis in Fig. 4) were calculated using 10 s bins across the entire 20 min of recording (Fig. 6f). When quantifying the short-term effect that stimulation during recording had on network synchronicity, by comparing $\bar{\chi}_{post}$ to $\bar{\chi}_{pre}$, a trend was observed where the presence of a training regimen during neurogenesis seemed to cause the correlation fold-change ($\bar{\chi}_{post}$/$\bar{\chi}_{pre}$) for S:S samples to be higher than for NS:S samples. One-way ANOVA detected a statistically significant difference between $\bar{\chi}_{post}$/$\bar{\chi}_{pre}$ for S:S and NS:S for days D19 (n = 3; F(1,4) = 16.49, p = 0.015) and D23 (n = 3; F(1,4) = 11.12, p = 0.029) (Fig. 6g).
#### Changes evoked by stimulation during neurogenesis result in genetic changes
Given the effects on neurite extension, presynaptic clustering, frequency profiles and network response to stimulation that were observed as a result of the presence of training regimens on MEBs during neurogenesis, we proceeded to determine genetic changes that could provide possible mechanistic explanations. Total messenger RNA sequencing was performed and analyzed for stimulated (S) and non-stimulated (NS) MEBs at D9, as well as EBs at D2. The differentially expressed genes in MEBs that underwent training regimens during neurogenesis were compared to those that did not, both with respect to the genetic expression of EBs sampled prior to differentiation (at D2). A total of 749 differentially expressed genes between S and NS with p < 0.05 were detected and clustered and color coded with respect to the differential expression of D2 (Fig. 7a). There were 200 genes that were upregulated during control differentiation, but this upregulation was lessened for samples that underwent training regimen (black bar), while the upregulation of 172 genes was amplified for those same samples (red bar). On the other hand, there were 202 genes whose downregulation was stagnated for samples with training regimen (yellow bar). For 173 genes, the control downregulation was further amplified after stimulation (blue bar). Something important to note was that this observed differential expression did not include changes in phenotype populations, matching the immunostaining observations (Supplementary Fig. 7). This indicated that training regimen during differentiation did not seem to noticeably disrupt the rate of phenotype specification or generation of the neural populations that generally result from the differentiation protocol (Table 1). This suggests that training regimens affected other functional pathways rather than altering the differentiation of populations. For further analysis, a more stringent threshold (p < 0.0005) was set to detect the most promising genes as key factors for the behavioral changes seen in stimulated MEB cultures. This threshold resulted in 97 differentially expressed genes for the black cluster (Fig. 7b), 63 differentially expressed genes for the red cluster (Fig. 7c), 77 differentially expressed genes for the yellow cluster (Fig. 7d) and 71 differentially expressed genes for the blue cluster (Fig. 7e). From this pool, a thorough literature study was used to identify gene targets that had been reported to be related to known neural development and function (Table 2, Supplementary Fig. 9). 
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8530715703964233, "perplexity": 3437.733538998824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00053.warc.gz"} |
http://mathhelpforum.com/pre-calculus/69123-coin-problem.html | # Math Help - Coin problem
1. ## Coin problem
The value (in cents) of the change in a purse that contains twice as many nickels as pennies, four more dimes than nickels, and as many quarters as dimes and nickels combined; p = number of pennies?
2. ## Coin problem
Hello tmac11522
Originally Posted by tmac11522
The value (in cents) of the change in a purse that contains twice as many nickels as pennies,
four more dimes than nickels, and as many quarters as dimes and nickels combined; p = number of pennies?
Number of pennies = $p$. Each penny is worth one cent. So the value of the pennies = $1\times p = p$ cents
Number of nickels = twice as many as number of pennies = $2p$. Each nickel is worth 5 cents. So the value of the nickels = $5\times 2p = 10p$ cents
Number of dimes = four more than the number of nickels = $(2p + 4)$. Each dime is worth 10 cents. So the value of the dimes = ??? cents.
Number of quarters = number of dimes plus number of nickels = ??. Each quarter is worth 25 cents, so the value of the quarters is ??? cents.
Add together all four value totals; remove the brackets and simplify the result. You'll then arrive at an expression like $Ap + B$, where $A$ and $B$ are two whole numbers. That will be your final answer, if you don't have any more information.
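If you would like to check your final expression once you have worked it through yourself, the four totals (using the cent values given above) combine as

$$p + 5(2p) + 10(2p+4) + 25\big[(2p+4) + 2p\big] = p + 10p + 20p + 40 + 100p + 100 = 131p + 140 \text{ cents},$$

so here $A = 131$ and $B = 140$.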
Can you complete it now? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6908248662948608, "perplexity": 1764.1596236806756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.researchgate.net/profile/Alexander-Zheleznyak | # Alexander P. ZheleznyakV. N. Karazin Kharkiv National University · Institute of Astronomy
PhD
38 Publications
1,454 Reads
225 Citations
## Publications
Publications (38)
Article
Full-text available
Quasar microlensing offers a unique opportunity to resolve tiny sources in distant active galactic nuclei and study compact object populations in lensing galaxies. We therefore searched for microlensing-induced variability of the gravitationally lensed quasar QSO 2237+0305 (Einstein Cross) using 4374 optical frames taken with the 2.0 m Liverpool Te...
Preprint
Full-text available
Quasar microlensing offers a unique opportunity to resolve tiny sources in distant active galactic nuclei and study compact object populations in lensing galaxies. We therefore searched for microlensing-induced variability of the gravitationally lensed quasar QSO 2237+0305 (Einstein Cross) using 4374 optical frames taken with the 2.0 m Liverpool Te...
Article
Full-text available
We report the results of an optical spectroscopic follow-up of four double quasar candidates in the Sloan Digital Sky Survey (SDSS) database. SDSS J1617+3827 is most likely a lensed quasar at z = 2.079, consisting of two images with r ∼ 19−21 and separated by ∆θ ∼ 2. We identify an extended source northeast of the brightest image as an early-type l...
Preprint
Full-text available
We report the results of an optical spectroscopic follow-up of four double quasar candidates in the Sloan Digital Sky Survey (SDSS) database. SDSS J1617+3827 is most likely a lensed quasar at z = 2.079, consisting of two images with r ~ 19-21 and separated by ~ 2 arcsec. We identify an extended source northeast of the brightest image as an early-ty...
Article
Full-text available
The optical spectroscopic follow-up of SDSS-III selected targets (Sergeyev et al. 2016MNRAS.456.1948S) has led to the discovery of the double quasar SDSS J1617+3827AB at redshift z = 2.079.
Article
We present new photometric observations of H1413+117 acquired during seasons between 2001 and 2008 in order to estimate the time delays between the lensed QSO images and to characterize at best the on-going micro-lensing events. We propose a highly performing photometric method called the adaptive PSF fitting and have successfully tested this metho...
Article
Full-text available
Optically bright, wide separation double (gravitationally lensed) quasars can be easily monitored, leading to light curves of great importance in determining the Hubble constant and other cosmological parameters, as well as the structure of active nuclei and haloes of galaxies. Searching for new double quasars in the Sloan Digital Sky Survey III (S...
Article
Full-text available
The AZT-22 telescope, equipped with the ML09000-65 CCD camera, has shown to be capable of obtaining high-quality images of astronomical objects and thus solving a wide range of observational tasks. Diffraction-limited telescope optics together with the excellent seeing conditions at Mt. Maidanak Observatory allows recording images with the seeing q...
Article
We report the results of our multicolour observations of PG 1115+080 with the 1.5-m telescope of the Maidanak Observatory (Uzbekistan, Central Asia) in 2001–2006. Monitoring data in filter R spanning the 2004, 2005 and 2006 seasons (76 data points) demonstrate distinct brightness variations of the source quasar with the total amplitude of almost 0....
Article
Full-text available
We present and analyse new R-band frames of the gravitationally lensed double quasar FBQ 0951+2635. These images were obtained with the 1.5-m AZT-22 Telescope at Maidanak (Uzbekistan) during the 2001–2006 period. Previous results in the R band (1999–2001 period) and the new data allow us to discuss the dominant kind of microlensing variability in F...
Article
Full-text available
An observing campaign with 10 participating observatories has undertaken to monitor the optical brightness of the Q0957 gravitationally lensed quasar for 10 consecutive nights in 2000 January. The resulting A image brightness curve has significant brightness fluctuations and makes a photometric prediction for the B image light curve for a second ca...
Article
Full-text available
To go into the details about the variability of the double quasar SBS 0909+532, we designed a monitoring programme with the 2 m Liverpool Robotic Telescope in the r Sloan filter, spanning 1.5 years from 2005 January to 2006 June. The r-band light curves of the A and B components, several cross-correlation techniques and a large number of simulation...
Article
Full-text available
The time delays between the components of a lensed quasar are basic tools to analyze the expansion of the Universe and the structure of the main lens galaxy halo. In this paper, we focus on the variability and time delay of the double system SBS 0909+532A,B as well as the time behaviour of the field stars. We use VR optical observations of SBS 0909...
Article
Full-text available
We present the Rc-band light curves for components A and B of the gravitationally lensed quasar SBS 1520+530 obtained during 2001–2005 with the 1.5-m Russian-Turkish Telescope (RTT-150) at the TUBITAK National Observatory (Turkey). Based on an analysis of the data for the period 2001–2002, we have estimated the time delay of the brightness fluctua...
Article
Spatially resolved CCD photometry of the Q2237+0305 gravitationally lensed quasar in V, R and I spectral bands from observations with the Maidanak 1.5-m telescope made in 1995-2000 is presented. For each of the four quasar components, - A, B, C and D, - stellar magnitudes for 78 dates in R filter, and for 17 dates in V and I and shown in Tables 4-6...
Article
Full-text available
Photometry of the Q2237+0305 gravitational lens in VRI spectral bands with the 1.5-m telescope of the high-altitude Maidanak observatory in 1995-2000 is presented. Monitoring of Q2237+0305 in July-October 2000, made on a nearly daily basis, did not reveal rapid (night-to-night and intranight) variations of brightness of the components during this time...
Article
Observations of the gravitationally lensed quasar SBS 1520+530 obtained in 2000–2001 on the 1.5-m telescope of the Maidanak Observatory (Uzbekistan) are presented. The photometric algorithms used to observe the components of SBS 1520+530 are discussed. The images have a resolu...
Article
Full-text available
We obtained a series of more than two hundred R-band CCD images for the crowded central (115″×77″) region of the metal-poor globular cluster M 15 with an angular resolution of 0.″5–0.″9 in most images. Optimal image subtraction was used to identify variable stars. Brightness v...
Article
Full-text available
We report on an observing campaign in March 2001 to monitor the brightness of the later arriving Q0957+561 B image in order to compare with the previously published brightness observations of the (first arriving) A image. The 12 participating observatories provided 3543 image frames which we have analyzed for brightness fluctuations. From our class...
Article
Full-text available
Since 1997, a program of observations of gravitational lens systems (GLS) with the 1.5-m telescope of the high-altitude Maidanak Observatory has been carried out by joint efforts of seven institutions from five countries. The Q2237+0305, Q0957+561, SBS 1520+530, and other GLS were observed in VRI spectral bands using the TI 800×800, Pictor 416 and...
Article
The results of two-year observations of the known Q2237+0305 gravitational lens system (Einstein Cross) with the Maidanak 1.5-meter telescope are presented. A contribution of these observations to the existing R lightcurves of four quasar components consists of 12 dates during 133 days in 1997, and of 15 dates during 115 days in 1998. Three-colour...
Article
The brightness of the four components of the gravitational lens system QSO 2237+0305 was measured in the R band from observations made 2 Jul - 14 Nov 1997. The image processing algorithm is described, and possible sources of error are discussed. Brightness variations are observed. Based on a very simple microlensing model, the authors estimate a typical mass of t...
Article
The peculiar T Tauri type star V1331 Cyg = LkH 120, located in the dark cloud Lynds 984, is a FU Orionis pre-outburst candidate (McMuldroch et al. , 1993). This star embedded in circumstellar bright nebulosity is also surrounded by a helix-shaped nebula originated from the star. We obtained a series of speckle images of V1331 Cyg on the standard VB...
Article
Preliminary results of imaging of the unusual T Tauri star V1331 Cyg and its environment made in the frame of our multiobject programme for the study of nebulae, outflows and jets from young stars and related objects are presented. The fine structure of the nebula was determined and its parameters were evaluated. The evidence for optical outflows w...
Article
Full-text available
Observations of the gravitationally lensed quasar Q2237+0305 were made with the 1.5-meter AZT-22 telescope at Maidanak (Uzbekistan) on 17-19 September 1995. All four components of the quasar are clearly resolved. The results of photometric measurements of the components are presented. It is confirmed that the component A again became the brightest...
Article
The peculiar T Tauri type star V1331 Cyg = LkH 120, located in the dark cloud Lynds 984, is known as the FU Orionis pre-outburst candidate (McMuldroch et al., 1993). This star embedded in circumstellar bright nebulosity is also surrounded by helix-shaped nebula originated from the star. We obtained the series of speckle images of V1331 Cyg in stand...
Article
We present the results of the observations of the comet P/Shoemaker-Levy 9 impact on Jupiter at Maydanak Observatory (Uzbekistan) under excellent seeing conditions (see, e.g., W. B. Scott, Aviation Week & Space Technology, 1995, v. 142, N20, p. 68). The photographic, polarimetric and speckle imaging observations of Jupiter were carried out on the 1.5m (A...
Article
Preliminary results of observations at Maidanak are presented.
Article
A fast microphotometer with one-dimensional CCD for fast photometry of photographic images is described. Optical and electric schemes are discussed. Main phototechnical characteristics are presented. The results of processing Mars composite images in B, V, R passbands demonstrate the device possibilities.
Article
A review is given of main results on the problem of angular resolution increase for ground-based observations obtained at the Kharkov Astronomical Observatory since 1980. The general conception of solving this problem is formulated on the basis of the authors' original works. Several specific examples of image processing are presented including spe...
Article
Speckle interferometric data for Vesta and diffraction-limited images of Vesta's disk are presented for three measurements near the opposition of 1988. It is argued that polarimetric data can be utilized to estimate a contribution of albedo features to Vesta's lightcurve, and thus to derive the geometric component of the lightcurve. The analysis of...
Article
The speckle camera based on the image intensifier YM-92 is described. It is applied to observations of binary stars. The field of view is 30 arcsec, and the limiting stellar magnitude is 6m. The possibility of high-precision speckle interferometric measurements of binary stars with separations up to 30 arcsec is demonstrated.
Article
Basic observational data on two SX Phe stars, isolated for the first time in the innermost region of the globular cluster M15, are presented.
## Projects
Project (1)
Project
Gravitational lensing is a unique astrophysical tool to investigate the dark side of the Universe, since analyses of multiple quasars (which have undergone a gravitational lens effect) reveal the structure and composition of tiny regions in distant active galactic nuclei, halos of lensing galaxies and the intergalactic space. Thus, in the framework of the Gravitational LENses and DArk MAtter (GLENDAMA) project, we are building a publicly-available online database for a sample of ten bright gravitationally lensed quasars (GLQs) in the northern hemisphere (see http://grupos.unican.es/glendama). Although we used many telescopes and a varied instrumentation throughout the past 20 years, the Liverpool Telescope (LT) and the Gran Telescopio CANARIAS (GTC) played a key role, and we will continue using the LT & GTC in the next decade. Our long-term observing programme with the LT is a singular and legacy initiative, which offers a unique opportunity to complete a detailed monitoring of each GLQ over 10-30 years. These timescales are crucial to, among other things, detect significant microlensing effects in practically all objects in the sample and accurately measure 10-12 time delays between quasar images. In addition to optical light curves, the LT & GTC allows us to follow-up the spectroscopic activity of GLQs. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6220833659172058, "perplexity": 4137.526168472237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103624904.34/warc/CC-MAIN-20220629054527-20220629084527-00607.warc.gz"} |
https://chemistry.stackexchange.com/questions/37501/what-is-the-significance-of-the-standard-temperature-in-standard-enthalpies-of/37518 | # What is the significance of the “Standard Temperature” in Standard Enthalpies of Formation tables?
In my data book, there's a list of common compounds and their molar enthalpies of formation -- at 298.15 K. What's the meaning of this given temperature value (is it the final temperature of the compound in question?) and how does it affect the molar enthalpy of formation of a substance?
What, for example, does it mean when the molar enthalpy of formation of H2O(g) at 298.15 K is given as -241.8 kJ/mol, even though water should be in a liquid state at 298.15 K? And what would the molar enthalpy of formation of H2O(g) be at, for instance, 500 K?
• Seems you forgot about water vapor. – Mithoron Sep 17 '15 at 23:41
• Sorry, I still don't understand. Isn't water vapour H2O(g)? – the real deal Sep 17 '15 at 23:45
• Yes, so there's no problem with it at 25C – Mithoron Sep 17 '15 at 23:47
• So is the standard temperature listed in these molar enthalpy tables the temperature of the system upon losing/gaining the given amount of enthalpy? – the real deal Sep 17 '15 at 23:50
• It's simply enthalpy in this temp. you have to pick temp. so... Oh whatever, I need to go to sleep – Mithoron Sep 17 '15 at 23:56
Most enthalpies (or thermodynamic observables, whatever their type) are not constant across the temperature range, not even across the range of one phase. So the enthalpy of vapourisation of water is different at $273\,\mathrm{K}$ from what it is at $295\,\mathrm{K}$ or $350\,\mathrm{K}$.
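For the 500 K part of the question, the usual way to move a tabulated formation enthalpy to another temperature is Kirchhoff's law, $\Delta_f H^\circ(T_2) = \Delta_f H^\circ(T_1) + \int_{T_1}^{T_2} \Delta C_p\,\mathrm{d}T$, where $\Delta C_p$ is the heat-capacity change of the formation reaction. Below is a minimal sketch of that estimate for H2O(g); the constant mean $C_p$ values are rough assumptions chosen for illustration rather than tabulated data, so treat the result as approximate.

```python
# Minimal Kirchhoff's-law sketch: shift Delta_f H of H2O(g) from 298.15 K to 500 K.
# The Cp numbers below are assumed constant mean values (illustrative, not table data).

dH_298 = -241.8e3  # J/mol, standard enthalpy of formation of H2O(g) at 298.15 K

cp = {             # assumed mean heat capacities in J/(mol*K)
    "H2O(g)": 33.6,
    "H2(g)": 28.8,
    "O2(g)": 29.4,
}

# Formation reaction: H2(g) + 1/2 O2(g) -> H2O(g)
dCp = cp["H2O(g)"] - cp["H2(g)"] - 0.5 * cp["O2(g)"]  # J/(mol*K)

T1, T2 = 298.15, 500.0
dH_500 = dH_298 + dCp * (T2 - T1)  # Kirchhoff's law with constant dCp

print(f"dCp = {dCp:.1f} J/(mol K)")
print(f"Delta_f H(500 K) is roughly {dH_500 / 1000:.1f} kJ/mol")
```

With these assumed numbers the correction from 298.15 K to 500 K comes out to only a few kJ/mol, so the shift is small even though it is not zero.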
Of course, standard conditions have been defined. However, the definition of standard conditions does not explicitly include temperature! Likely because some wanted $273\,\mathrm{K}$, others wanted $295\,\mathrm{K}$. So the temperature at which a certain enthalpy is valid must always be specified. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5001116991043091, "perplexity": 970.5084229567867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573533.49/warc/CC-MAIN-20190919142838-20190919164838-00517.warc.gz"} |
https://thomas1111.wordpress.com/2010/01/15/average-over-quadruples-of-the-first-1124-sequence/ | ## Average over quadruples of the first 1124 sequence
Here is a plot corresponding to the computation for the first 1124 sequence of the average of $x_ax_bx_cx_d$ over all quadruples $a,b,c,d \leq N$ such that $ab=cd$ (Note: to cut the computation time I've only taken those with $a<b$ and $c<d$, without loss of generality, thanks to the commutativity of the quadruple product, which doesn't care about the order, and to the additivity of the average) (click to enlarge).
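As an aside, for anyone who wants to reproduce this kind of computation, here is a minimal sketch in Python: grouping the pairs $(a,b)$ with $a<b$ by their product reduces the double sum over quadruples with $ab=cd$ to a sum of squared pair-sums per product, which keeps the cost manageable. The sequence `x` below is only a placeholder; the values of the actual first 1124 sequence are not reproduced here.

```python
from collections import defaultdict
import random

def quadruple_average(x, N):
    """Average of x_a*x_b*x_c*x_d over quadruples with a<b, c<d, all <= N and ab == cd.

    x is 1-indexed via x[i-1]; grouping pairs by their product turns the double
    sum over (a,b) and (c,d) into a sum of squared pair-sums per product value.
    """
    pair_sum = defaultdict(float)   # product -> sum of x_a*x_b over pairs a<b
    pair_count = defaultdict(int)   # product -> number of such pairs
    for a in range(1, N + 1):
        for b in range(a + 1, N + 1):
            p = a * b
            pair_sum[p] += x[a - 1] * x[b - 1]
            pair_count[p] += 1
    total = sum(s * s for s in pair_sum.values())
    count = sum(c * c for c in pair_count.values())
    return total / count

# Placeholder +/-1 sequence standing in for the first 1124 sequence.
random.seed(0)
x = [random.choice([-1, 1]) for _ in range(1124)]
print(quadruple_average(x, 200))
```

The plots in the post presumably sweep the cutoff (or plot against $a$ for $b=1$) and record this average at each step; the sketch above only evaluates it for a single cutoff.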
Here is the final plot. Note the change of axis: I now prefer to plot as a function of $a$ each time $b=1$. (click to enlarge)
Below are previous plots just for the record.
After running a while longer : | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651148080825806, "perplexity": 542.0190649501423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00097-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://forums.techguy.org/virus-other-malware-removal/300882-ive-been-owned-various-spywares.html |
Solved: I've been owned by various Spywares (hijack this)
OhNos111
Member with 125 posts.
Join Date: Nov 2003
25-Nov-2004, 09:53 PM #1
Solved: I've been owned by various Spywares (hijack this)
Just yesterday, my internet stopped working on my PC. I know it's not my connection as Xbox Live still works fine. I've run Ad-Aware several times and it keeps finding the same spyware: CoolWebSearch (dealt with a variant of this before), HttpFilter, and Possible hijack attempt. It cleans them out and they just come right back. Every so often, a new one will appear even though I haven't gone online since the last sweep.
The really weird thing is that this spyware screwed up my Task Manager. I can Ctrl-Alt-Delete and bring up the Task Manager, but it will not be fully there. It is missing the tabs to go to Processes, Performance, etc., so I can't see what is running in the background. Truly weird. System Restores and Spy Sweeper scans do nothing.
I'm at a loss. Any and all help would be appreciated.
Oh yeah...attached is a screenshot of my messed up Task Manager.
Here is the Hijack This log:
Quote:
Find all log:
Quote:
--==***@@@ 'FIND-ALL' »»*Original*»» VERSION *9.3 -6/07 @@@***==-- Thu Nov 25 03:07:17 2004 -- ++Results: »»System Info: Microsoft Windows XP [Version 5.1.2600] 'Find-All' is running from Drive: C: "Primary" (DCE7:99F0) - FS:NTFS clusters:4k Total: 120 023 252 992 [112G] - Free: 10 174 009 344 [9.5G] »»IE version and Service packs: 6.0.2800.1106 C:\Program Files\Internet Explorer\Iexplore.exe --a-- W32i APP ENU 6.0.2800.1106 shp 91,136 08-29-2002 iexplore.exe ! REG.EXE VERSION 2.0 HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Internet Settings MinorVersion REG_SZ ;SP1;Q818529;Q330994;Q822925;Q828750;Q832894; »»Google: »»UserAgent: REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Post Platform] »»Wmplayer version: 10.0.0.3646 C:\Program Files\Windows Media Player\wmplayer.exe --a-- W32i APP ENU 10.0.0.3646 shp 73,728 09-22-2004 wmplayer.exe 6.4.9.1125 C:\Program Files\Windows Media Player\mplayer2.exe --a-- W32i APP ENU 6.4.9.1125 shp 4,639 08-29-2002 mplayer2.exe »»M$Java version: 5.0.3810.0 C:\WINDOWS\System32\msjava.dll --a-- W32i DLL ENU 5.0.3810.0 shp 947,472 02-28-2003 msjava.dll »»NotePad(s) version(s)... Tnx,shadoWWWW 5.1.2600.0 C:\WINDOWS\notepad.exe --a-- W32i APP ENU 5.1.2600.0 shp 66,048 11-01-2004 notepad.exe »» Regedit* version(s): 5.1.2600.1106 C:\WINDOWS\regedit.exe --a-- W32i APP ENU 5.1.2600.1106 shp 134,144 08-29-2002 regedit.exe 5.1.2600.0 C:\WINDOWS\System32\regedt32.exe --a-- W32i APP ENU 5.1.2600.0 shp 3,584 08-18-2001 regedt32.exe »»PC uptime: 3:07am up 0 days, 0:15 »»Locked or 'Suspect' file(s) found... »»Tasks (services): 0 System Process 4 System 848 smss.exe 956 csrss.exe Title: 980 winlogon.exe Title: NetDDE Agent 1024 services.exe Svcs: Eventlog,PlugPlay 1036 lsass.exe Svcs: ProtectedStorage,SamSs 1220 svchost.exe Svcs: RpcSs 1368 svchost.exe Svcs: AudioSrv,Browser,CryptSvc,Dhcp,dmserver,ERSvc,EventSystem,FastUserSwitching Compatibility,helpsvc,HidServ,Iprip,lanmanserver,lanmanworkstation,Netman,N la,RasAuto,RasMan,Schedule,seclogon,SENS,SharedAccess,ShellHWDetection,srse rvice,TapiSrv,TermService,Them 1488 svchost.exe Svcs: Dnscache 1620 svchost.exe Svcs: Alerter,LmHosts,RemoteRegistry,SSDPSRV,WebClient 1820 spoolsv.exe Svcs: Spooler 936 explorer.exe Title: Program Manager 1316 NAVAPW32.EXE Title: Norton AntiVirus 1328 WFXSWTCH.exe Title: LOGINOUTTEST 1336 WFXSNT40.EXE Title: WinFax Port Starter 1352 Imgicon.exe Title: 1420 DAP.exe Title: Dialog 1472 CTHELPER.EXE Title: CtHelper 1484 CTNotify.exe Title: Disc Detector 1520 realsched.exe Title: Notification Wnd for RNAdmin 1412 rundll32.exe Title: MediaCenter 1612 ctfmon.exe Title: 1648 SpySweeper.exe Title: 1652 Mediadet.exe Title: Dialog 1676 CTLTray.exe Title: 652 alg.exe Svcs: ALG 664 CTsvcCDA.EXE Svcs: Creative Service for CDROM Access 688 cvpnd.exe Svcs: CVPND 740 NAVAPSVC.EXE Svcs: navapsvc 888 NPROTECT.EXE Svcs: NProtectService 2016 nvsvc32.exe Svcs: NVSvc 244 scagent.exe Svcs: scagent 260 tcpsvcs.exe Svcs: SimpTcp 280 snmp.exe Svcs: SNMP 348 NOPDB.EXE Svcs: Speed Disk service 396 svchost.exe Svcs: stisvc 444 wdfmgr.exe Svcs: UMWdf 508 MsPMSPSv.exe Svcs: WMDM PMSP Service 2492 Ad-Aware.exe Title: Ad-Aware SE 2508 taskmgr.exe Title: 2204 notepad.exe Title: hijackthisNEW011111 - Notepad 2308 Photoshop.exe Title: Adobe Photoshop 2996 cmd.exe Title: C:\WINDOWS\System32\cmd.exe 3048 ntvdm.exe 3548 tlist.exe REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows] "DeviceNotSelectedTimeout"="15" 
"GDIProcessHandleQuota"=dword:00002710 "Spooler"="yes" "swapdisk"="" "TransmissionRetryTimeout"="90" "USERProcessHandleQuota"=dword:00002710 REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects] @="" [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{0000CC75-ACF3-4cac-A0A9-DD3868E06852}] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{06849E9F-C8D7-4D59-B87D-784B7D6BE0B3}] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{9C691A33-7DDA-4C2F-BE4C-C176083F35CF}] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{AA58ED58-01DD-4d91-8333-CF10577473F7}] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{B31BB2AA-FCA3-448A-9718-278B636BC42A}] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Brows er Helper Objects\{BDF3E430-B101-42AD-A544-FADC6B084872}] @="NAV Helper" REGEDIT4 [HKEY_CLASSES_ROOT\PROTOCOLS\Filter] [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\Class Install Handler] @="AP Class Install Handler filter" "CLSID"="{32B533BB-EDAE-11d0-BD5A-00AA00B92AF1}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\deflate] @="AP Deflate Encoding/Decoding Filter " "CLSID"="{8f6b0360-b80d-11d0-a9b3-006097942311}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\gzip] @="AP GZIP Encoding/Decoding Filter " "CLSID"="{8f6b0360-b80d-11d0-a9b3-006097942311}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\lzdhtml] @="AP lzdhtml encoding/decoding Filter" "CLSID"="{8f6b0360-b80d-11d0-a9b3-006097942311}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\text/html] "CLSID"="{EE7A946E-61FA-4979-87B8-A6C462E6FA62}" "CLSID_"="{2EEAAC9F-8E16-441B-9F81-B8071DA9E088}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\text/plain] "CLSID"="{2EEAAC9F-8E16-441B-9F81-B8071DA9E088}" [HKEY_CLASSES_ROOT\PROTOCOLS\Filter\text/webviewhtml] @="WebView MIME Filter" "CLSID"="{733AC4CB-F1A4-11d0-B951-00A0C90312E1}" REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\ShellServiceOb jectDelayLoad] "PostBootReminder"="{7849596a-48ea-486e-8937-a2a3009f31a9}" "CDBurn"="{fbeb8a05-beee-4442-804e-409d6c4515e9}" "WebCheck"="{E6FB5E20-DE35-11CF-9C87-00AA005127ED}" "SysTray"="{35CEC8A3-2BE6-11D2-8773-92E220524153}" "UPnPMonitor"="{e57ce738-33e8-4c51-8354-bb4de9d215d1}" »»Security settings for 'Windows' key: RegDACL 5.1 - Permissions Manager for Registry keys for Windows NT 4 and above Copyright (c) 1999-2001 Frank Heyne Software (http://www.heysoft.de) This program is Freeware, use it on your own risk! Access Control List for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: (NI) ALLOW Read BUILTIN\Users (IO) ALLOW Read BUILTIN\Users (NI) ALLOW Read BUILTIN\Power Users (IO) ALLOW Read BUILTIN\Power Users (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access BUILTIN\Administrators (NI) ALLOW Full access NT AUTHORITY\SYSTEM (IO) ALLOW Full access NT AUTHORITY\SYSTEM (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access CREATOR OWNER Effective permissions for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: Read BUILTIN\Users Read BUILTIN\Power Users Full access BUILTIN\Administrators Full access NT AUTHORITY\SYSTEM »»Size of 'Windows' key: (Default-450;No'AppInit'-398;*Fake-~448!) 
Size of HKEY_LOCAL_MACHINE\software\microsoft\Windows NT\CurrentVersion\Windows: 398 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\win.ini\Windows\SYS:Microsoft\Windows NT\CurrentVersion\Windows : AppInit_DLLs »»Winlogon\notify: ! REG.EXE VERSION 2.0 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\AtiExtEvent HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\crypt32chain HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\cryptnet HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\cscdll HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\ScCertProp HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\Schedule HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\sclgntfy HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\SensLogn HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\termsrv HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify\wlballoon Size of HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\Notify: 5056 »»UserInit value: ! REG.EXE VERSION 2.0 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon Userinit REG_SZ C:\WINDOWS\system32\userinit.exe, 5.1.2600.1106 C:\WINDOWS\System32\userinit.exe --a-- W32i APP ENU 5.1.2600.1106 shp 22,016 08-29-2002 userinit.exe »»Group/user settings: User: [DRAGONSPIRIT\Guy1], is a member of: BUILTIN\Administrators \Everyone User is a member of group DRAGONSPIRIT\None. User is a member of group \Everyone. User is a member of group BUILTIN\Administrators. User is a member of group BUILTIN\Users. User is a member of group \LOCAL. User is a member of group NT AUTHORITY\INTERACTIVE. User is a member of group NT AUTHORITY\Authenticated Users. »»ACLs list: C:\junkxxx BUILTIN\AdministratorsOI)(CI)F NT AUTHORITY\SYSTEMOI)(CI)F DRAGONSPIRIT\Guy1:F CREATOR OWNEROI)(CI)(IO)F BUILTIN\UsersOI)(CI)R BUILTIN\UsersCI)(special access FILE_APPEND_DATA BUILTIN\UsersCI)(special access FILE_WRITE_DATA ERROR: There are no more files. »»File(s) in 'junkxxx' folder: »»Md5sums MD5sums 1.1 freeware for Win9x/ME/NT/2000/XP+ Copyright (C) 2001-2002 Jem Berkes - http://www.pc-tools.net/ 0 bytes, 0 ms = 0.00 MB/sec »»hosts file: File not found - C:\WINDOWS\System32\Drivers\etc\hosts ------ »»Rehash: »Strings found: Thu Nov 25 03:07:44 2004 -- ++Find-All backups: c:\findal~1\winBackup.hiv --a-- - - - - - 8,192 11-25-2004 winbackup.hiv c:\findal~1\windows.txt --a-- - - - - - 8,192 11-25-2004 windows.txt A C:\FindallwinBackup.hiv --a-- - - - - - 8,192 06-07-2004 findallwinbackup.hiv A C:\findallappinit.reg --a-- - - - - - 632 06-07-2004 findallappinit.reg ***Next Registry run should open this key directly: ! REG.EXE VERSION 2.0 HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Applets\Regedit LastKey REG_SZ My Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows Attachment Blocked Attachments in the HJT forum are often designed to solve a specific issue and not meant to be used without instructions specific to your computer. If you want help specific to your computer, please post a HiJackThis Log. If you started this thread, please make sure you are logged in to be able to view attachments. 
Last edited by OhNos111; 25-Nov-2004 at 10:13 PM.. Flrman1 (Mark) Member with 46,322 posts. Join Date: Jul 2002 Location: Thomasville, NC 25-Nov-2004, 11:05 PM #2 Please do this: Click here to download FindNFix. Extract it (it should autoextract to C:\FindnFix when you double click it) Go to the C:\FindnFix folder and doubleclick on !LOG!.BAT and let it run. It will generate a log.txt file. Copy and paste log.txt back here in your next reply. Also a new version of Hijack This has been released so get rid of the old one and Click here to download the new one, come back here and post the log from it. __________________ If I have helped solve your problem, please Click Here and make a donation to help keep this great site running. 100% goes directly to this site. OhNos111 Member with 125 posts. THREAD STARTER Join Date: Nov 2003 26-Nov-2004, 11:34 PM #3 Ok...This is what I got. Hijack This log Quote: Logfile of HijackThis v1.98.2 Scan saved at 10:05:59 PM, on 11/26/2004 Platform: Windows XP SP1 (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\system32\spoolsv.exe C:\WINDOWS\system32\CTSVCCDA.EXE C:\Program Files\Northern Trust\VPN\cvpnd.exe C:\Program Files\Norton SystemWorks\Norton AntiVirus\navapsvc.exe C:\Program Files\Norton SystemWorks\Norton Utilities\NPROTECT.EXE C:\WINDOWS\System32\nvsvc32.exe C:\WINDOWS\system32\scagent.exe C:\WINDOWS\System32\tcpsvcs.exe C:\WINDOWS\System32\snmp.exe C:\PROGRA~1\NORTON~1\SPEEDD~1\nopdb.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\System32\MsPMSPSv.exe C:\WINDOWS\Explorer.EXE C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe C:\Program Files\Iomega\DriveIcons\ImgIcon.exe C:\PROGRA~1\DAP\DAP.EXE C:\WINDOWS\System32\CTHELPER.EXE C:\Program Files\Creative\ShareDLL\CtNotify.exe C:\Program Files\Common Files\Real\Update_OB\realsched.exe C:\WINDOWS\System32\RUNDLL32.EXE C:\Program Files\Creative\ShareDLL\Mediadet.exe C:\Program Files\Common Files\Symantec Shared\Security Center\UsrPrmpt.exe C:\WINDOWS\System32\ctfmon.exe C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe C:\WINDOWS\System32\wuauclt.exe C:\Hijack This\hijackthis.exe N1 - Netscape 4: user_pref("browser.startup.homepage", "http://registration.excite.com/excitereg/login.jsp?app=em&return_url=http://e6.email.excite.com/"); (C:\Program Files\Netscape\Users\someguy\prefs.js) O2 - BHO: DAPHelper Class - {0000CC75-ACF3-4cac-A0A9-DD3868E06852} - C:\Program Files\DAP\DAPBHO.dll O2 - BHO: AcroIEHlprObj Class - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Adobe\Acrobat 5.0\Reader\ActiveX\AcroIEHelper.ocx O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - c:\windows\googletoolbar2.dll O2 - BHO: (no name) - {B31BB2AA-FCA3-448A-9718-278B636BC42A} - C:\WINDOWS\mindep.dll (file missing) O2 - BHO: NAV Helper - {BDF3E430-B101-42AD-A544-FADC6B084872} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: Norton AntiVirus - {42CDD1BF-3FFB-4238-8AD1-7859DF00B1D6} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: DAP Bar - {62999427-33FC-4baf-9C9C-BCE6BD127F08} - C:\PROGRA~1\DAP\dapiebar.dll O3 - Toolbar: &Google - 
{2318C2B1-4965-11d4-9B18-009027A5CD4F} - c:\windows\googletoolbar2.dll O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\System32\msdxm.ocx O4 - HKLM\..\Run: [NAV Agent] C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe O4 - HKLM\..\Run: [WFXSwtch] C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe O4 - HKLM\..\Run: [Iomega Startup Options] C:\Program Files\Iomega\Common\ImgStart.exe O4 - HKLM\..\Run: [Iomega Drive Icons] C:\Program Files\Iomega\DriveIcons\ImgIcon.exe O4 - HKLM\..\Run: [CTStartup] C:\Program Files\Creative\Splash Screen\CTEaxSpl.EXE /run O4 - HKLM\..\Run: [DownloadAccelerator] C:\PROGRA~1\DAP\DAP.EXE /STARTUP O4 - HKLM\..\Run: [CTHelper] CTHELPER.EXE O4 - HKLM\..\Run: [Disc Detector] C:\Program Files\Creative\ShareDLL\CtNotify.exe O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files\QuickTime\qttask.exe" -atboottime O4 - HKLM\..\Run: [TkBellExe] "C:\Program Files\Common Files\Real\Update_OB\realsched.exe" -osboot O4 - HKLM\..\Run: [Symantec NetDriver Monitor] C:\PROGRA~1\SYMNET~1\SNDMon.exe O4 - HKLM\..\Run: [NvCplDaemon] RUNDLL32.EXE C:\WINDOWS\System32\NvCpl.dll,NvStartup O4 - HKLM\..\Run: [nwiz] nwiz.exe /install O4 - HKLM\..\Run: [NvMediaCenter] RUNDLL32.EXE C:\WINDOWS\System32\NvMcTray.dll,NvTaskbarInit O4 - HKLM\..\Run: [SSC_UserPrompt] C:\Program Files\Common Files\Symantec Shared\Security Center\UsrPrmpt.exe O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\System32\ctfmon.exe O4 - HKCU\..\Run: [SpySweeper] C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe /0 O4 - HKCU\..\Run: [TaskTray] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe" O4 - HKCU\..\Run: [TaskBar] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe" O4 - HKCU\..\Run: [wmvdmod] C:\WINDOWS\System32\wmvdmod.exe O6 - HKCU\Software\Policies\Microsoft\Internet Explorer\Control Panel present O8 - Extra context menu item: &Download with &DAP - C:\PROGRA~1\DAP\dapextie.htm O8 - Extra context menu item: &Google Search - res://c:\windows\GoogleToolbar2.dll/cmsearch.html O8 - Extra context menu item: Backward Links - res://c:\windows\GoogleToolbar2.dll/cmbacklinks.html O8 - Extra context menu item: Cached Snapshot of Page - res://c:\windows\GoogleToolbar2.dll/cmcache.html O8 - Extra context menu item: Download &all with DAP - C:\PROGRA~1\DAP\dapextie2.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office10\EXCEL.EXE/3000 O8 - Extra context menu item: Similar Pages - res://c:\windows\GoogleToolbar2.dll/cmsimilar.html O8 - Extra context menu item: Translate into English - res://c:\windows\GoogleToolbar2.dll/cmtrans.html O9 - Extra button: Run DAP - {669695BC-A811-4A9D-8CDF-BA8C795F261C} - C:\PROGRA~1\DAP\DAP.EXE O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O12 - Plugin for .spop: C:\Program Files\Internet Explorer\Plugins\NPDocBox.dll O16 - DPF: Yahoo! Chat - http://us.chat1.yimg.com/us.yimg.com.../c381/chat.cab O16 - DPF: {0A5FD7C5-A45C-49FC-ADB5-9952547D5715} (Creative Software AutoUpdate) - http://www.creative.com/SU/ocx/12119/CTSUEng.cab O16 - DPF: {1D0D9077-3798-49BB-9058-393499174D5D} - file://c:\counter.cab O16 - DPF: {27527D31-447B-11D5-A46E-0001023B4289} (CoGSManager Class) - http://gamingzone.ubisoft.com/dev/pa.../GSManager.cab O16 - DPF: {2B323CD9-50E3-11D3-9466-00A0C9700498} (Yahoo! 
Audio Conferencing) - http://cs5.chat.sc5.yahoo.com/v45/yacscom.cab O16 - DPF: {39B0684F-D7BF-4743-B050-FDC3F48F7E3B} (FilePlanet Download Control Class) - http://www.fileplanet.com/fpdlmgr/ca...C_1_0_0_44.cab O16 - DPF: {56336BCB-3D8A-11D6-A00B-0050DA18DE71} (RdxIE Class) - http://software-dl.real.com/20e9126e...p/RdxIE601.cab O16 - DPF: {6414512B-B978-451D-A0D8-FCFDF33E833C} (WUWebControl Class) - http://v5.windowsupdate.microsoft.co...?1099292765625 O16 - DPF: {8EDAD21C-3584-4E66-A8AB-EB0E5584767D} - http://toolbar.google.com/data/GoogleActivate.cab O16 - DPF: {C2FCEF52-ACE9-11D3-BEBD-00105AA9B6AE} (Symantec RuFSI Registry Information Class) - http://security.symantec.com/sscv6/S.../bin/cabsa.cab O16 - DPF: {F6ACF75C-C32C-447B-9BEF-46B766368D29} (Creative Software AutoUpdate Support Package) - http://www.creative.com/SU/ocx/12119/CTPID.cab O18 - Filter: text/html - {EE7A946E-61FA-4979-87B8-A6C462E6FA62} - C:\WINDOWS\httpfilter.dll FindnFix log Quote: Fri 26 Nov 04 22:08:12 »»»»»»»»»»»»»»»»»»***LOG!***(*updated *9/1*)»»»»»»»»»»»»»»»» *System: Microsoft Windows XP Professional 5.1 Service Pack 1 (Build 2600) *IE version: 6.0.2800.1106 SP1-Q818529-Q330994-Q822925-Q828750-Q832894 The type of the file system is NTFS. MS-DOS Version 5.00.500 *command.com test passed! __________________________________ !!*Creating backups...!! The operation completed successfully 22:08:12.51 Fri 11/26/2004 __________________________________ *Local time: Friday, November 26, 2004 (11/26/2004) 10:08 PM, Central Standard Time *Uptime: 22:08:13 up 0 days, 0:07:07 *Path: C:\FINDnFIX ---------------------------------------------------- »»Member of...: ("ADMIN" logon + group match required!) User is a member of group DRAGONSPIRIT\None. User is a member of group \Everyone. User is a member of group BUILTIN\Administrators. User is a member of group BUILTIN\Users. User is a member of group \LOCAL. User is a member of group NT AUTHORITY\INTERACTIVE. User is a member of group NT AUTHORITY\Authenticated Users. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Group BUILTIN\Administrators matches list. Group BUILTIN\Users matches list. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! User: [DRAGONSPIRIT\Guy1], is a member of: BUILTIN\Administrators \Everyone Running in WORKSTATION MODE. SystemDrive is C: SystemRoot is C:\WINDOWS Logon Domain is DRAGONSPIRIT Administrator's Name is Guy1 Computer Name is DRAGONSPIRIT LOGON SERVER is \\DRAGONSPIRIT »»»»»»»»»»»»»»»»»»*** Note! ***»»»»»»»»»»»»»»»» The list will produce a small database of files that will match certain criteria. Ex: read only files, s/h files, last modified date. size, etc. The filters provided and registry scan should match the corresponding file(s) listed. »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» Unless the file match the entire criteria, it should not be pointed to remove without attempting to confirm it's nature! »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» At times there could be several (legit) files flagged, and/or duplicate culprit file(s)! If in doubt, always search the file(s) and properties according to criteria! The file(s) found should be moved to \FINDnFIX\"junkxxx" Subfolder ___________________________________________________________________________ ___ ***YOU NEED TO DISABLE YOUR ACTIVE ANTI VIRUS PROTECTION TO AVOID CONFLICTS!*** ___________________________________________________________________________ ___ ......Scanning for file(s)... *Note! The list(s) may include legitimate files! »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»» (*1*) »»»»» ......... 
»»Read access error(s)... »»»»» (*2*) »»»»»........ »»»»» (*3*) »»»»»........ No matches found. unknown/hidden files... No matches found. »»»»» (*4*) »»»»»......... Sniffing.......... Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»»»(*5*)»»»»» »»»»»(*6*)»»»»» »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»»Search by size... *List of files and specs according to 'size' : *Note: Not all files listed here are infected, but *may include* the name and spces of the offending file... ___________________________________________________________________________ Path: C:\WINDOWS\SYSTEM32 Including: *.DLL ___________________________________________________________________________ _ *By size and date... No matches found. No matches found. No matches found. Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» BHO search and other files... No matches found. No matches found. --*sp.html in temp folder was NOT FOUND!-- *Filter keys search... HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/html CLSID = {EE7A946E-61FA-4979-87B8-A6C462E6FA62} REGDMP: Unable to open key 'HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/plain' (2) --(*text/plain Subkey was NOT FOUND!)-- »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»Size of Windows key: (*Default-450 *No AppInit-398 *fake(infected)-448,504,512...) Size of HKEY_LOCAL_MACHINE\software\microsoft\Windows NT\CurrentVersion\Windows: 398 »»Checking for AppInit_DLLs (empty) value... ________________________________ !"AppInit_DLLs"=""! Value does not exist ________________________________ »»Comparing *saved* key with *original*... REGDIFF 2.1 - Freeware written by Gerson Kurz (http://www.p-nand-q.com) Comparing File #1 (Keys1\winkey.reg) with File #2 (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows). No differences found. »»Dumping Values........ 
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\DeviceNotSelectedTimeout SZ 15 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\GDIProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\Spooler SZ yes HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\swapdisk SZ HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\TransmissionRetryTimeout SZ 90 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\USERProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows DeviceNotSelectedTimeout = 15 GDIProcessHandleQuota = REG_DWORD 0x00002710 Spooler = yes swapdisk = TransmissionRetryTimeout = 90 USERProcessHandleQuota = REG_DWORD 0x00002710 »»Security settings for 'Windows' key: RegDACL 5.1 - Permissions Manager for Registry keys for Windows NT 4 and above Copyright (c) 1999-2001 Frank Heyne Software (http://www.heysoft.de) This program is Freeware, use it on your own risk! Access Control List for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: (NI) ALLOW Read BUILTIN\Users (IO) ALLOW Read BUILTIN\Users (NI) ALLOW Read BUILTIN\Power Users (IO) ALLOW Read BUILTIN\Power Users (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access BUILTIN\Administrators (NI) ALLOW Full access NT AUTHORITY\SYSTEM (IO) ALLOW Full access NT AUTHORITY\SYSTEM (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access CREATOR OWNER Effective permissions for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: Read BUILTIN\Users Read BUILTIN\Power Users Full access BUILTIN\Administrators Full access NT AUTHORITY\SYSTEM »»Performing string scan.... 00001150: ? 00001190: vk U 000011D0eviceNotSelectedTimeout 1 5 ( W vk ' 00001210: zGDIProcessHandleQuota" 9 0 ! vk 00001250: Spooler2 y e s vk =pswapdisk 00001290: @ p vk 0 R TransmissionRetr 000012D0:yTimeout vk ' USERProcessHandleQuota 00001310: @ p 00001350: 00001390: 000013D0: 00001410: 00001450: 00001490: 000014D0: 00001510: 00001550: 00001590: 000015D0: ---------- WIN.TXT -------------- --------------$011CF: UDeviceNotSelectedTimeout $01217: zGDIProcessHandleQuota$012C0: TransmissionRetryTimeout $012F0: USERProcessHandleQuota -------------- -------------- No strings found. -------------- -------------- REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows] "DeviceNotSelectedTimeout"="15" "GDIProcessHandleQuota"=dword:00002710 "Spooler"="yes" "swapdisk"="" "TransmissionRetryTimeout"="90" "USERProcessHandleQuota"=dword:00002710 ............. A handle was successfully obtained for the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows key. This key has 0 subkeys. The AppInitDLLs value entry was NOT found! ----------------------- »»»»»»Backups list...»»»»»» 22:09:22 up 0 days, 0:08:16 ----------------------- Fri 26 Nov 04 22:09:22 C:\FINDNFIX\ keyback.hiv Fri Nov 26 2004 10:08:14p A.... 8,192 8.00 K 1 item found: 1 file, 0 directories. Total of file sizes: 8,192 bytes 8.00 K C:\FINDNFIX\KEYS1\ winkey.reg Fri Nov 26 2004 10:08:14p A.... 268 0.26 K 1 item found: 1 file, 0 directories. Total of file sizes: 268 bytes 0.26 K *Temp backups... "C:\Documents and Settings\Guy1\Local Settings\Temp\Backs2\" keyback2.hi_ Nov 26 2004 8192 "keyback2.hi_" winkey2.re_ Nov 26 2004 268 "winkey2.re_" 2 items found: 2 files, 0 directories. 
Total of file sizes: 8,460 bytes 8.26 K -D---- JUNKXXX 00000000 22:08.14 26/11/2004 A----- STARTIT .BAT 00000060 22:08.14 26/11/2004 ___________________________________________________________________________ _____ ***THE FIX IS NOT COMPATIBLE WITH EARLIER;UNPATCHED VERSIONS OF WIN2K'(SP3 and BELLOW)' AND/OR LAX OF SECURITY UPDATES AND SERVICE PACKS FOR ALL PLATFORMS! MINIMAL REQUIREMENTS INCLUDE: _________XP HOME/PRO; SP1; IE6/SP1 _________2K/SP4; IE6/SP1 ___________________________________________________________________________ _____ »»»»»*** www10.brinkster.com/expl0iter/freeatlast/FNF/ ***»»»»» -----END------ Fri 26 Nov 04 22:09:23 Last edited by OhNos111; 27-Nov-2004 at 02:58 AM.. Flrman1 (Mark) Member with 46,322 posts. Join Date: Jul 2002 Location: Thomasville, NC 27-Nov-2004, 08:00 AM #4 Click here to download CWSinstall.exe. Click on the CWSinstall.exe file and it will install CWShredder. Close all browser windows, click on the cwshredder.exe then click "Fix" (Not "Scan only") and let it do it's thing. When it is finished restart your computer. Go here and download Ad-Aware SE. Install the program and launch it. First in the main window look in the bottom right corner and click on Check for updates now then click Connect and download the latest reference files. From main window :Click Start then under Select a scan Mode tick Perform full system scan. Next deselect Search for negligible risk entries. Now to scan just click the Next button. When the scan is finished mark everything for removal and get rid of it.(Right-click the window and choose select all from the drop down menu and click Next) Restart your computer. Come back here and post another Hijack This log and we'll get rid of what's left. OhNos111 Member with 125 posts. THREAD STARTER Join Date: Nov 2003 27-Nov-2004, 06:28 PM #5 New Hijack This Log Quote: Logfile of HijackThis v1.98.2 Scan saved at 5:15:56 PM, on 11/27/2004 Platform: Windows XP SP1 (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\System32\Ati2evxx.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\system32\spoolsv.exe C:\WINDOWS\system32\CTSVCCDA.EXE C:\Program Files\Northern Trust\VPN\cvpnd.exe C:\Program Files\Norton SystemWorks\Norton AntiVirus\navapsvc.exe C:\Program Files\Norton SystemWorks\Norton Utilities\NPROTECT.EXE C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMemCrdMgr.exe C:\WINDOWS\system32\scagent.exe C:\WINDOWS\system32\Ati2evxx.exe C:\WINDOWS\Explorer.EXE C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe C:\Program Files\Iomega\DriveIcons\ImgIcon.exe C:\PROGRA~1\DAP\DAP.EXE C:\WINDOWS\System32\CTHELPER.EXE C:\Program Files\Common Files\Real\Update_OB\realsched.exe C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMon.exe C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DTskbr.exe C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe C:\WINDOWS\System32\ctfmon.exe C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe C:\Program Files\Rage3DTweak\RegTwk.exe C:\WINDOWS\System32\tcpsvcs.exe C:\WINDOWS\System32\snmp.exe C:\PROGRA~1\NORTON~1\SPEEDD~1\nopdb.exe C:\Program Files\rage3dtweak\gameutil.exe C:\WINDOWS\System32\svchost.exe 
C:\WINDOWS\System32\MsPMSPSv.exe C:\Program Files\Internet Explorer\iexplore.exe C:\Hijack This\hijackthis.exe N1 - Netscape 4: user_pref("browser.startup.homepage", "http://registration.excite.com/excitereg/login.jsp?app=em&return_url=http://e6.email.excite.com/"); (C:\Program Files\Netscape\Users\someguy\prefs.js) O2 - BHO: DAPHelper Class - {0000CC75-ACF3-4cac-A0A9-DD3868E06852} - C:\Program Files\DAP\DAPBHO.dll O2 - BHO: AcroIEHlprObj Class - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Adobe\Acrobat 5.0\Reader\ActiveX\AcroIEHelper.ocx O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - c:\windows\googletoolbar2.dll O2 - BHO: (no name) - {B31BB2AA-FCA3-448A-9718-278B636BC42A} - C:\WINDOWS\mindep.dll (file missing) O2 - BHO: NAV Helper - {BDF3E430-B101-42AD-A544-FADC6B084872} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: Norton AntiVirus - {42CDD1BF-3FFB-4238-8AD1-7859DF00B1D6} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: DAP Bar - {62999427-33FC-4baf-9C9C-BCE6BD127F08} - C:\PROGRA~1\DAP\dapiebar.dll O3 - Toolbar: &Google - {2318C2B1-4965-11d4-9B18-009027A5CD4F} - c:\windows\googletoolbar2.dll O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\System32\msdxm.ocx O4 - HKLM\..\Run: [NAV Agent] C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe O4 - HKLM\..\Run: [WFXSwtch] C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe O4 - HKLM\..\Run: [Iomega Startup Options] C:\Program Files\Iomega\Common\ImgStart.exe O4 - HKLM\..\Run: [Iomega Drive Icons] C:\Program Files\Iomega\DriveIcons\ImgIcon.exe O4 - HKLM\..\Run: [CTStartup] C:\Program Files\Creative\Splash Screen\CTEaxSpl.EXE /run O4 - HKLM\..\Run: [DownloadAccelerator] C:\PROGRA~1\DAP\DAP.EXE /STARTUP O4 - HKLM\..\Run: [CTHelper] CTHELPER.EXE O4 - HKLM\..\Run: [Disc Detector] C:\Program Files\Creative\ShareDLL\CtNotify.exe O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files\QuickTime\qttask.exe" -atboottime O4 - HKLM\..\Run: [TkBellExe] "C:\Program Files\Common Files\Real\Update_OB\realsched.exe" -osboot O4 - HKLM\..\Run: [Symantec NetDriver Monitor] C:\PROGRA~1\SYMNET~1\SNDMon.exe O4 - HKLM\..\Run: [SSC_UserPrompt] C:\Program Files\Common Files\Symantec Shared\Security Center\UsrPrmpt.exe O4 - HKLM\..\Run: [PDUiP6000DMon] C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMon.exe O4 - HKLM\..\Run: [PDUiP6000DTskbr] C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DTskbr.exe O4 - HKLM\..\Run: [ATIPTA] C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe O4 - HKLM\..\Run: [Clocks] RunDll32.exe OCpp.dll,SetClocks 429.75 369.00 O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\System32\ctfmon.exe O4 - HKCU\..\Run: [SpySweeper] C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe /0 O4 - HKCU\..\Run: [TaskTray] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe" O4 - HKCU\..\Run: [TaskBar] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe" O4 - HKCU\..\Run: [wmvdmod] C:\WINDOWS\System32\wmvdmod.exe O4 - HKCU\..\Run: [RegTweak] C:\Program Files\Rage3DTweak\RegTwk.exe O4 - Startup: gameutil.exe.lnk = ? 
O6 - HKCU\Software\Policies\Microsoft\Internet Explorer\Control Panel present O8 - Extra context menu item: &Download with &DAP - C:\PROGRA~1\DAP\dapextie.htm O8 - Extra context menu item: &Google Search - res://c:\windows\GoogleToolbar2.dll/cmsearch.html O8 - Extra context menu item: Backward Links - res://c:\windows\GoogleToolbar2.dll/cmbacklinks.html O8 - Extra context menu item: Cached Snapshot of Page - res://c:\windows\GoogleToolbar2.dll/cmcache.html O8 - Extra context menu item: Download &all with DAP - C:\PROGRA~1\DAP\dapextie2.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office10\EXCEL.EXE/3000 O8 - Extra context menu item: Similar Pages - res://c:\windows\GoogleToolbar2.dll/cmsimilar.html O8 - Extra context menu item: Translate into English - res://c:\windows\GoogleToolbar2.dll/cmtrans.html O9 - Extra button: Run DAP - {669695BC-A811-4A9D-8CDF-BA8C795F261C} - C:\PROGRA~1\DAP\DAP.EXE O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O12 - Plugin for .spop: C:\Program Files\Internet Explorer\Plugins\NPDocBox.dll O16 - DPF: Yahoo! Chat - http://us.chat1.yimg.com/us.yimg.com.../c381/chat.cab O16 - DPF: {0A5FD7C5-A45C-49FC-ADB5-9952547D5715} (Creative Software AutoUpdate) - http://www.creative.com/SU/ocx/12119/CTSUEng.cab O16 - DPF: {1D0D9077-3798-49BB-9058-393499174D5D} - file://c:\counter.cab O16 - DPF: {27527D31-447B-11D5-A46E-0001023B4289} (CoGSManager Class) - http://gamingzone.ubisoft.com/dev/pa.../GSManager.cab O16 - DPF: {2B323CD9-50E3-11D3-9466-00A0C9700498} (Yahoo! Audio Conferencing) - http://cs5.chat.sc5.yahoo.com/v45/yacscom.cab O16 - DPF: {39B0684F-D7BF-4743-B050-FDC3F48F7E3B} (FilePlanet Download Control Class) - http://www.fileplanet.com/fpdlmgr/ca...C_1_0_0_44.cab O16 - DPF: {56336BCB-3D8A-11D6-A00B-0050DA18DE71} (RdxIE Class) - http://software-dl.real.com/20e9126e...p/RdxIE601.cab O16 - DPF: {6414512B-B978-451D-A0D8-FCFDF33E833C} (WUWebControl Class) - http://v5.windowsupdate.microsoft.co...?1099292765625 O16 - DPF: {8EDAD21C-3584-4E66-A8AB-EB0E5584767D} - http://toolbar.google.com/data/GoogleActivate.cab O16 - DPF: {C2FCEF52-ACE9-11D3-BEBD-00105AA9B6AE} (Symantec RuFSI Registry Information Class) - http://security.symantec.com/sscv6/S.../bin/cabsa.cab O16 - DPF: {F6ACF75C-C32C-447B-9BEF-46B766368D29} (Creative Software AutoUpdate Support Package) - http://www.creative.com/SU/ocx/12119/CTPID.cab O18 - Filter: text/html - {EE7A946E-61FA-4979-87B8-A6C462E6FA62} - C:\WINDOWS\httpfilter.dll FindnFix Log Quote: Sat 27 Nov 04 17:16:49 »»»»»»»»»»»»»»»»»»***LOG!***(*updated *9/1*)»»»»»»»»»»»»»»»» *System: Microsoft Windows XP Professional 5.1 Service Pack 1 (Build 2600) *IE version: 6.0.2800.1106 SP1-Q818529-Q330994-Q822925-Q828750-Q832894 The type of the file system is NTFS. MS-DOS Version 5.00.500 *command.com test passed! __________________________________ !!*Creating backups...!! (*Backup already exist!) 17:16:49.67 Sat 11/27/2004 __________________________________ *Local time: Saturday, November 27, 2004 (11/27/2004) 5:16 PM, Central Standard Time *Uptime: 17:16:51 up 0 days, 0:32:28 *Path: C:\FINDnFIX ---------------------------------------------------- »»Member of...: ("ADMIN" logon + group match required!) User is a member of group DRAGONSPIRIT\None. User is a member of group \Everyone. User is a member of group BUILTIN\Administrators. 
User is a member of group BUILTIN\Users. User is a member of group \LOCAL. User is a member of group NT AUTHORITY\INTERACTIVE. User is a member of group NT AUTHORITY\Authenticated Users. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Group BUILTIN\Administrators matches list. Group BUILTIN\Users matches list. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! User: [DRAGONSPIRIT\Guy1], is a member of: BUILTIN\Administrators \Everyone Running in WORKSTATION MODE. SystemDrive is C: SystemRoot is C:\WINDOWS Logon Domain is DRAGONSPIRIT Administrator's Name is Guy1 Computer Name is DRAGONSPIRIT LOGON SERVER is \\DRAGONSPIRIT »»»»»»»»»»»»»»»»»»*** Note! ***»»»»»»»»»»»»»»»» The list will produce a small database of files that will match certain criteria. Ex: read only files, s/h files, last modified date. size, etc. The filters provided and registry scan should match the corresponding file(s) listed. »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» Unless the file match the entire criteria, it should not be pointed to remove without attempting to confirm it's nature! »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» At times there could be several (legit) files flagged, and/or duplicate culprit file(s)! If in doubt, always search the file(s) and properties according to criteria! The file(s) found should be moved to \FINDnFIX\"junkxxx" Subfolder ___________________________________________________________________________ ___ ***YOU NEED TO DISABLE YOUR ACTIVE ANTI VIRUS PROTECTION TO AVOID CONFLICTS!*** ___________________________________________________________________________ ___ ......Scanning for file(s)... *Note! The list(s) may include legitimate files! »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»» (*1*) »»»»» ......... »»Read access error(s)... »»»»» (*2*) »»»»»........ »»»»» (*3*) »»»»»........ No matches found. unknown/hidden files... No matches found. »»»»» (*4*) »»»»»......... Sniffing.......... Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»»»(*5*)»»»»» »»»»»(*6*)»»»»» »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»»Search by size... *List of files and specs according to 'size' : *Note: Not all files listed here are infected, but *may include* the name and spces of the offending file... ___________________________________________________________________________ Path: C:\WINDOWS\SYSTEM32 Including: *.DLL ___________________________________________________________________________ _ *By size and date... No matches found. No matches found. No matches found. Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» BHO search and other files... "C:\WINDOWS\system32\" ati2edxx.dll Oct 26 2004 30720 "ati2edxx.dll" 1 item found: 1 file, 0 directories. Total of file sizes: 30,720 bytes 30.00 K No matches found. 
--*sp.html in temp folder was NOT FOUND!-- *Filter keys search... HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/html CLSID = {EE7A946E-61FA-4979-87B8-A6C462E6FA62} REGDMP: Unable to open key 'HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/plain' (2) --(*text/plain Subkey was NOT FOUND!)-- »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»Size of Windows key: (*Default-450 *No AppInit-398 *fake(infected)-448,504,512...) Size of HKEY_LOCAL_MACHINE\software\microsoft\Windows NT\CurrentVersion\Windows: 398 »»Checking for AppInit_DLLs (empty) value... ________________________________ !"AppInit_DLLs"=""! Value does not exist ________________________________ »»Comparing *saved* key with *original*... REGDIFF 2.1 - Freeware written by Gerson Kurz (http://www.p-nand-q.com) Comparing File #1 (Keys1\winkey.reg) with File #2 (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows). No differences found. »»Dumping Values........ HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\DeviceNotSelectedTimeout SZ 15 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\GDIProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\Spooler SZ yes HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\swapdisk SZ HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\TransmissionRetryTimeout SZ 90 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\USERProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows DeviceNotSelectedTimeout = 15 GDIProcessHandleQuota = REG_DWORD 0x00002710 Spooler = yes swapdisk = TransmissionRetryTimeout = 90 USERProcessHandleQuota = REG_DWORD 0x00002710 »»Security settings for 'Windows' key: RegDACL 5.1 - Permissions Manager for Registry keys for Windows NT 4 and above Copyright (c) 1999-2001 Frank Heyne Software (http://www.heysoft.de) This program is Freeware, use it on your own risk! Access Control List for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: (NI) ALLOW Read BUILTIN\Users (IO) ALLOW Read BUILTIN\Users (NI) ALLOW Read BUILTIN\Power Users (IO) ALLOW Read BUILTIN\Power Users (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access BUILTIN\Administrators (NI) ALLOW Full access NT AUTHORITY\SYSTEM (IO) ALLOW Full access NT AUTHORITY\SYSTEM (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access CREATOR OWNER Effective permissions for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: Read BUILTIN\Users Read BUILTIN\Power Users Full access BUILTIN\Administrators Full access NT AUTHORITY\SYSTEM »»Performing string scan.... 00001150: ? 00001190: vk U 000011D0eviceNotSelectedTimeout 1 5 ( W vk ' 00001210: zGDIProcessHandleQuota" 9 0 ! vk 00001250: Spooler2 y e s vk =pswapdisk 00001290: @ p vk 0 R TransmissionRetr 000012D0:yTimeout vk ' USERProcessHandleQuota 00001310: @ p 00001350: 00001390: 000013D0: 00001410: 00001450: 00001490: 000014D0: 00001510: 00001550: 00001590: 000015D0: ---------- WIN.TXT -------------- --------------$011CF: UDeviceNotSelectedTimeout $01217: zGDIProcessHandleQuota$012C0: TransmissionRetryTimeout $012F0: USERProcessHandleQuota -------------- -------------- No strings found. 
-------------- -------------- REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows] "DeviceNotSelectedTimeout"="15" "GDIProcessHandleQuota"=dword:00002710 "Spooler"="yes" "swapdisk"="" "TransmissionRetryTimeout"="90" "USERProcessHandleQuota"=dword:00002710 ............. A handle was successfully obtained for the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows key. This key has 0 subkeys. The AppInitDLLs value entry was NOT found! ----------------------- »»»»»»Backups list...»»»»»» 17:18:22 up 0 days, 0:33:59 ----------------------- Sat 27 Nov 04 17:18:22 C:\FINDNFIX\ keyback.hiv Fri Nov 26 2004 10:08:14p A.... 8,192 8.00 K 1 item found: 1 file, 0 directories. Total of file sizes: 8,192 bytes 8.00 K C:\FINDNFIX\KEYS1\ winkey.reg Fri Nov 26 2004 10:08:14p A.... 268 0.26 K 1 item found: 1 file, 0 directories. Total of file sizes: 268 bytes 0.26 K *Temp backups... "C:\Documents and Settings\Guy1\Local Settings\Temp\Backs2\" keyback2.hi_ Nov 26 2004 8192 "keyback2.hi_" winkey2.re_ Nov 26 2004 268 "winkey2.re_" 2 items found: 2 files, 0 directories. Total of file sizes: 8,460 bytes 8.26 K -D---- JUNKXXX 00000000 22:08.14 26/11/2004 A----- STARTIT .BAT 00000060 17:16.50 27/11/2004 ___________________________________________________________________________ _____ ***THE FIX IS NOT COMPATIBLE WITH EARLIER;UNPATCHED VERSIONS OF WIN2K'(SP3 and BELLOW)' AND/OR LAX OF SECURITY UPDATES AND SERVICE PACKS FOR ALL PLATFORMS! MINIMAL REQUIREMENTS INCLUDE: _________XP HOME/PRO; SP1; IE6/SP1 _________2K/SP4; IE6/SP1 ___________________________________________________________________________ _____ »»»»»*** www10.brinkster.com/expl0iter/freeatlast/FNF/ ***»»»»» -----END------ Sat 27 Nov 04 17:18:23 Flrman1 (Mark) Member with 46,322 posts. Join Date: Jul 2002 Location: Thomasville, NC 27-Nov-2004, 06:46 PM #6 Do you know what this is?: O4 - HKLM\..\Run: [Clocks] RunDll32.exe OCpp.dll,SetClocks 429.75 369.00 Click Start > Run > and type in: services.msc Click OK. In the services window find Security Agent. Rightclick and choose "Properties". On the "General" tab under "Service Status" click the "Stop" button to stop the service. Beside "Startup Type" in the dropdown menu select "Disabled". Click Apply then OK. Exit the Services utility. If this service isn't there then skip this part and move on. Download Pocket Killbox from here: http://www.downloads.subratam.org/KillBox.zip Unzip the files to the folder of your choice. Double-click on Killbox.exe to run it. Now put a tick by Delete on reboot. In the "Paste Full Path of File to Delete" box, copy and paste each of the following lines one at a time. After each one it will ask for confimation to delete the file on next reboot. Click Yes. It will then ask if you want to reboot now. Click No. Continue with that same procedure until you have copied and pasted all of these in the "Paste Full Path of File to Delete" box. C:\WINDOWS\system32\scagent.exe C:\WINDOWS\httpfilter.dll C:\WINDOWS\httpfilter2.dll C:\WINDOWS\httpfilter1.dll C:\WINDOWS\System32\wmvdmod.exe Exit the Killbox. Next run Hijack This again and put a check by these. 
Close ALL windows except HijackThis and click "Fix checked" O2 - BHO: (no name) - {B31BB2AA-FCA3-448A-9718-278B636BC42A} - C:\WINDOWS\mindep.dll (file missing) O4 - HKCU\..\Run: [wmvdmod] C:\WINDOWS\System32\wmvdmod.exe O6 - HKCU\Software\Policies\Microsoft\Internet Explorer\Control Panel present O16 - DPF: {1D0D9077-3798-49BB-9058-393499174D5D} - file://c:\counter.cab O16 - DPF: {56336BCB-3D8A-11D6-A00B-0050DA18DE71} (RdxIE Class) - http://software-dl.real.com/20e9126...ip/RdxIE601.cab O18 - Filter: text/html - {EE7A946E-61FA-4979-87B8-A6C462E6FA62} - C:\WINDOWS\httpfilter.dll Now restart your computer. Let the computer fully reboot and then restart again into safe mode: How to start your computer in safe mode In safe mode navigate to the C:\Windows\Temp folder. Open the Temp folder and go to Edit > Select All then Edit > Delete to delete the entire contents of the Temp folder. Go to Start > Run and type %temp% in the Run box. The Temp folder will open. Click Edit > Select All then Edit > Delete to delete the entire contents of the Temp folder. Finally go to Control Panel > Internet Options. On the General tab under "Temporary Internet Files" Click "Delete Files". Put a check by "Delete Offline Content" and click OK. Click on the Programs tab then click the "Reset Web Settings" button. Click Apply then OK. Empty the Recycle Bin obyone Member with 39 posts. Join Date: Jun 2003 Experience: Advanced 02-Dec-2004, 05:29 AM #7 O4 - HKLM\..\Run: [Clocks] RunDll32.exe OCpp.dll,SetClocks 429.75 369.00 OCpp.dll is a dll file for R3D Tweak; a video card overclocking utility. OK...did everything you asked but I couldn't get "Service Agent" to stop. It just returned an error. I've attached the error as a screenshot. I did everything else but I'm still getting the httpfilter and CWS:about. Here is the new Hijack this Log. 
Quote: Logfile of HijackThis v1.98.2 Scan saved at 3:41:07 AM, on 12/2/2004 Platform: Windows XP SP1 (WinNT 5.01.2600) MSIE: Internet Explorer v6.00 SP1 (6.00.2800.1106) Running processes: C:\WINDOWS\System32\smss.exe C:\WINDOWS\system32\winlogon.exe C:\WINDOWS\system32\services.exe C:\WINDOWS\system32\lsass.exe C:\WINDOWS\System32\Ati2evxx.exe C:\WINDOWS\system32\svchost.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\system32\spoolsv.exe C:\WINDOWS\system32\CTSVCCDA.EXE C:\Program Files\Northern Trust\VPN\cvpnd.exe C:\Program Files\Norton SystemWorks\Norton AntiVirus\navapsvc.exe C:\Program Files\Norton SystemWorks\Norton Utilities\NPROTECT.EXE C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMemCrdMgr.exe C:\WINDOWS\system32\scagent.exe C:\WINDOWS\System32\tcpsvcs.exe C:\WINDOWS\System32\snmp.exe C:\PROGRA~1\NORTON~1\SPEEDD~1\nopdb.exe C:\WINDOWS\System32\svchost.exe C:\WINDOWS\System32\MsPMSPSv.exe C:\WINDOWS\system32\Ati2evxx.exe C:\WINDOWS\Explorer.EXE C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe C:\Program Files\Iomega\DriveIcons\ImgIcon.exe C:\PROGRA~1\DAP\DAP.EXE C:\WINDOWS\System32\CTHELPER.EXE C:\Program Files\Creative\ShareDLL\CtNotify.exe C:\Program Files\Common Files\Real\Update_OB\realsched.exe C:\Program Files\Creative\ShareDLL\Mediadet.exe C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMon.exe C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DTskbr.exe C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe C:\Program Files\Rage3DTweak\RegTwk.exe C:\WINDOWS\System32\ctfmon.exe C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe C:\Program Files\rage3dtweak\gameutil.exe C:\Spyware Killers\Hijack This\hijackthis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Bar = about:NavigationFailure R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = about:NavigationFailure R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Bar = about:NavigationFailure R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = about:NavigationFailure R1 - HKCU\Software\Microsoft\Internet Explorer\Search,SearchAssistant = about:NavigationFailure R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = about:NavigationFailure R1 - HKCU\Software\Microsoft\Internet Explorer\Main,HomeOldSP = about:blank R1 - HKLM\Software\Microsoft\Internet Explorer\Main,HomeOldSP = about:blank R3 - Default URLSearchHook is missing N1 - Netscape 4: user_pref("browser.startup.homepage", "http://registration.excite.com/excitereg/login.jsp?app=em&return_url=http://e6.email.excite.com/"); (C:\Program Files\Netscape\Users\someguy\prefs.js) O2 - BHO: DAPHelper Class - {0000CC75-ACF3-4cac-A0A9-DD3868E06852} - C:\Program Files\DAP\DAPBHO.dll O2 - BHO: AcroIEHlprObj Class - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Adobe\Acrobat 5.0\Reader\ActiveX\AcroIEHelper.ocx O2 - BHO: (no name) - {15F2721F-8B6E-4CF4-905F-9AFB3C2D311B} - C:\WINDOWS\mindep.dll O2 - BHO: (no name) - {53707962-6F74-2D53-2644-206D7942484F} - C:\Program Files\Spybot - Search & Destroy\SDHelper.dll O2 - BHO: Google Toolbar Helper - {AA58ED58-01DD-4d91-8333-CF10577473F7} - c:\windows\googletoolbar2.dll O2 - BHO: NAV Helper - {BDF3E430-B101-42AD-A544-FADC6B084872} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: Norton AntiVirus - 
{42CDD1BF-3FFB-4238-8AD1-7859DF00B1D6} - C:\Program Files\Norton SystemWorks\Norton AntiVirus\NavShExt.dll O3 - Toolbar: DAP Bar - {62999427-33FC-4baf-9C9C-BCE6BD127F08} - C:\PROGRA~1\DAP\dapiebar.dll O3 - Toolbar: &Google - {2318C2B1-4965-11d4-9B18-009027A5CD4F} - c:\windows\googletoolbar2.dll O3 - Toolbar: &Radio - {8E718888-423F-11D2-876E-00A0C9082467} - C:\WINDOWS\System32\msdxm.ocx O4 - HKLM\..\Run: [NAV Agent] C:\PROGRA~1\NORTON~1\NORTON~1\navapw32.exe O4 - HKLM\..\Run: [WFXSwtch] C:\PROGRA~1\NORTON~1\WinFax\WFXSWTCH.exe O4 - HKLM\..\Run: [Iomega Startup Options] C:\Program Files\Iomega\Common\ImgStart.exe O4 - HKLM\..\Run: [Iomega Drive Icons] C:\Program Files\Iomega\DriveIcons\ImgIcon.exe O4 - HKLM\..\Run: [CTStartup] C:\Program Files\Creative\Splash Screen\CTEaxSpl.EXE /run O4 - HKLM\..\Run: [DownloadAccelerator] C:\PROGRA~1\DAP\DAP.EXE /STARTUP O4 - HKLM\..\Run: [CTHelper] CTHELPER.EXE O4 - HKLM\..\Run: [Disc Detector] C:\Program Files\Creative\ShareDLL\CtNotify.exe O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files\QuickTime\qttask.exe" -atboottime O4 - HKLM\..\Run: [TkBellExe] "C:\Program Files\Common Files\Real\Update_OB\realsched.exe" -osboot O4 - HKLM\..\Run: [Symantec NetDriver Monitor] C:\PROGRA~1\SYMNET~1\SNDMon.exe O4 - HKLM\..\Run: [SSC_UserPrompt] C:\Program Files\Common Files\Symantec Shared\Security Center\UsrPrmpt.exe O4 - HKLM\..\Run: [PDUiP6000DMon] C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DMon.exe O4 - HKLM\..\Run: [PDUiP6000DTskbr] C:\Program Files\Canon\Memory Card Utility\PIXMA iP6000D\PDUiP6000DTskbr.exe O4 - HKLM\..\Run: [ATIPTA] C:\Program Files\ATI Technologies\ATI Control Panel\atiptaxx.exe O4 - HKLM\..\Run: [RegTweak] C:\Program Files\Rage3DTweak\RegTwk.exe O4 - HKCU\..\Run: [ctfmon.exe] C:\WINDOWS\System32\ctfmon.exe O4 - HKCU\..\Run: [SpySweeper] C:\Program Files\Webroot\Spy Sweeper\SpySweeper.exe /0 O4 - HKCU\..\Run: [TaskTray] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTray.exe" O4 - HKCU\..\Run: [TaskBar] "C:\Program Files\Creative\SBAudigy\TaskBar\CTLTask.exe" O4 - Global Startup: gameutil.exe.lnk = ? O8 - Extra context menu item: &Download with &DAP - C:\PROGRA~1\DAP\dapextie.htm O8 - Extra context menu item: &Google Search - res://c:\windows\GoogleToolbar2.dll/cmsearch.html O8 - Extra context menu item: Backward Links - res://c:\windows\GoogleToolbar2.dll/cmbacklinks.html O8 - Extra context menu item: Cached Snapshot of Page - res://c:\windows\GoogleToolbar2.dll/cmcache.html O8 - Extra context menu item: Download &all with DAP - C:\PROGRA~1\DAP\dapextie2.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office10\EXCEL.EXE/3000 O8 - Extra context menu item: Similar Pages - res://c:\windows\GoogleToolbar2.dll/cmsimilar.html O8 - Extra context menu item: Translate into English - res://c:\windows\GoogleToolbar2.dll/cmtrans.html O9 - Extra button: Run DAP - {669695BC-A811-4A9D-8CDF-BA8C795F261C} - C:\PROGRA~1\DAP\DAP.EXE O9 - Extra button: Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O9 - Extra 'Tools' menuitem: Windows Messenger - {FB5F1910-F110-11d2-BB9E-00C04F795683} - C:\Program Files\Messenger\MSMSGS.EXE O12 - Plugin for .spop: C:\Program Files\Internet Explorer\Plugins\NPDocBox.dll O16 - DPF: Yahoo! 
Chat - http://us.chat1.yimg.com/us.yimg.com.../c381/chat.cab O16 - DPF: {0A5FD7C5-A45C-49FC-ADB5-9952547D5715} (Creative Software AutoUpdate) - http://www.creative.com/SU/ocx/12119/CTSUEng.cab O16 - DPF: {27527D31-447B-11D5-A46E-0001023B4289} (CoGSManager Class) - http://gamingzone.ubisoft.com/dev/pa.../GSManager.cab O16 - DPF: {2B323CD9-50E3-11D3-9466-00A0C9700498} (Yahoo! Audio Conferencing) - http://cs5.chat.sc5.yahoo.com/v45/yacscom.cab O16 - DPF: {39B0684F-D7BF-4743-B050-FDC3F48F7E3B} (FilePlanet Download Control Class) - http://www.fileplanet.com/fpdlmgr/ca...C_1_0_0_44.cab O16 - DPF: {6414512B-B978-451D-A0D8-FCFDF33E833C} (WUWebControl Class) - http://v5.windowsupdate.microsoft.co...?1099292765625 O16 - DPF: {8EDAD21C-3584-4E66-A8AB-EB0E5584767D} - http://toolbar.google.com/data/GoogleActivate.cab O16 - DPF: {C2FCEF52-ACE9-11D3-BEBD-00105AA9B6AE} (Symantec RuFSI Registry Information Class) - http://security.symantec.com/sscv6/S.../bin/cabsa.cab O16 - DPF: {F6ACF75C-C32C-447B-9BEF-46B766368D29} (Creative Software AutoUpdate Support Package) - http://www.creative.com/SU/ocx/12119/CTPID.cab O18 - Filter: text/html - {EE7A946E-61FA-4979-87B8-A6C462E6FA62} - C:\WINDOWS\httpfilter.dll O18 - Filter: text/plain - {F8CE07CE-45A0-4ED4-A8E3-F70CEEECD86E} - C:\WINDOWS\mindep.dll FindnFix log Quote: Sat 27 Nov 04 17:16:49 »»»»»»»»»»»»»»»»»»***LOG!***(*updated *9/1*)»»»»»»»»»»»»»»»» *System: Microsoft Windows XP Professional 5.1 Service Pack 1 (Build 2600) *IE version: 6.0.2800.1106 SP1-Q818529-Q330994-Q822925-Q828750-Q832894 The type of the file system is NTFS. MS-DOS Version 5.00.500 *command.com test passed! __________________________________ !!*Creating backups...!! (*Backup already exist!) 17:16:49.67 Sat 11/27/2004 __________________________________ *Local time: Saturday, November 27, 2004 (11/27/2004) 5:16 PM, Central Standard Time *Uptime: 17:16:51 up 0 days, 0:32:28 *Path: C:\FINDnFIX ---------------------------------------------------- »»Member of...: ("ADMIN" logon + group match required!) User is a member of group DRAGONSPIRIT\None. User is a member of group \Everyone. User is a member of group BUILTIN\Administrators. User is a member of group BUILTIN\Users. User is a member of group \LOCAL. User is a member of group NT AUTHORITY\INTERACTIVE. User is a member of group NT AUTHORITY\Authenticated Users. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Group BUILTIN\Administrators matches list. Group BUILTIN\Users matches list. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! User: [DRAGONSPIRIT\Guy1], is a member of: BUILTIN\Administrators \Everyone Running in WORKSTATION MODE. SystemDrive is C: SystemRoot is C:\WINDOWS Logon Domain is DRAGONSPIRIT Administrator's Name is Guy1 Computer Name is DRAGONSPIRIT LOGON SERVER is \\DRAGONSPIRIT »»»»»»»»»»»»»»»»»»*** Note! ***»»»»»»»»»»»»»»»» The list will produce a small database of files that will match certain criteria. Ex: read only files, s/h files, last modified date. size, etc. The filters provided and registry scan should match the corresponding file(s) listed. »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» Unless the file match the entire criteria, it should not be pointed to remove without attempting to confirm it's nature! »»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»»» At times there could be several (legit) files flagged, and/or duplicate culprit file(s)! If in doubt, always search the file(s) and properties according to criteria! 
The file(s) found should be moved to \FINDnFIX\"junkxxx" Subfolder ___________________________________________________________________________ ___ ***YOU NEED TO DISABLE YOUR ACTIVE ANTI VIRUS PROTECTION TO AVOID CONFLICTS!*** ___________________________________________________________________________ ___ ......Scanning for file(s)... *Note! The list(s) may include legitimate files! »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»» (*1*) »»»»» ......... »»Read access error(s)... »»»»» (*2*) »»»»»........ »»»»» (*3*) »»»»»........ No matches found. unknown/hidden files... No matches found. »»»»» (*4*) »»»»»......... Sniffing.......... Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»»»(*5*)»»»»» »»»»»(*6*)»»»»» »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»»»»Search by size... *List of files and specs according to 'size' : *Note: Not all files listed here are infected, but *may include* the name and spces of the offending file... ___________________________________________________________________________ Path: C:\WINDOWS\SYSTEM32 Including: *.DLL ___________________________________________________________________________ _ *By size and date... No matches found. No matches found. No matches found. Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL Power SNiF 1.34 - The Ultimate File Snifferdog. Created Mar 16 1992, 21:09:15. SNiF 1.34 statistics Matching files : 0 Amount in bytes : 0 Directories searched : 1 Commands executed : 0 Masks sniffed for: *.DLL »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» BHO search and other files... "C:\WINDOWS\system32\" ati2edxx.dll Oct 26 2004 30720 "ati2edxx.dll" 1 item found: 1 file, 0 directories. Total of file sizes: 30,720 bytes 30.00 K No matches found. --*sp.html in temp folder was NOT FOUND!-- *Filter keys search... HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/html CLSID = {EE7A946E-61FA-4979-87B8-A6C462E6FA62} REGDMP: Unable to open key 'HKEY_LOCAL_MACHINE\SOFTWARE\Classes\PROTOCOLS\Filter\text/plain' (2) --(*text/plain Subkey was NOT FOUND!)-- »»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»»*»»» »»Size of Windows key: (*Default-450 *No AppInit-398 *fake(infected)-448,504,512...) Size of HKEY_LOCAL_MACHINE\software\microsoft\Windows NT\CurrentVersion\Windows: 398 »»Checking for AppInit_DLLs (empty) value... ________________________________ !"AppInit_DLLs"=""! Value does not exist ________________________________ »»Comparing *saved* key with *original*... REGDIFF 2.1 - Freeware written by Gerson Kurz (http://www.p-nand-q.com) Comparing File #1 (Keys1\winkey.reg) with File #2 (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows). No differences found. »»Dumping Values........ 
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\DeviceNotSelectedTimeout SZ 15 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\GDIProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\Spooler SZ yes HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\swapdisk SZ HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\TransmissionRetryTimeout SZ 90 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\USERProcessHandleQuota DWORD 00002710 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows DeviceNotSelectedTimeout = 15 GDIProcessHandleQuota = REG_DWORD 0x00002710 Spooler = yes swapdisk = TransmissionRetryTimeout = 90 USERProcessHandleQuota = REG_DWORD 0x00002710 »»Security settings for 'Windows' key: RegDACL 5.1 - Permissions Manager for Registry keys for Windows NT 4 and above Copyright (c) 1999-2001 Frank Heyne Software (http://www.heysoft.de) This program is Freeware, use it on your own risk! Access Control List for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: (NI) ALLOW Read BUILTIN\Users (IO) ALLOW Read BUILTIN\Users (NI) ALLOW Read BUILTIN\Power Users (IO) ALLOW Read BUILTIN\Power Users (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access BUILTIN\Administrators (NI) ALLOW Full access NT AUTHORITY\SYSTEM (IO) ALLOW Full access NT AUTHORITY\SYSTEM (NI) ALLOW Full access BUILTIN\Administrators (IO) ALLOW Full access CREATOR OWNER Effective permissions for Registry key hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows: Read BUILTIN\Users Read BUILTIN\Power Users Full access BUILTIN\Administrators Full access NT AUTHORITY\SYSTEM »»Performing string scan.... 00001150: ? 00001190: vk U 000011D0eviceNotSelectedTimeout 1 5 ( W vk ' 00001210: zGDIProcessHandleQuota" 9 0 ! vk ` 00001250: Spooler2 y e s vk =pswapdisk 00001290: @ p vk 0 R TransmissionRetr 000012D0:yTimeout vk ' USERProcessHandleQuota 00001310: @ p 00001350: 00001390: 000013D0: 00001410: 00001450: 00001490: 000014D0: 00001510: 00001550: 00001590: 000015D0: ---------- WIN.TXT -------------- --------------$011CF: UDeviceNotSelectedTimeout $01217: zGDIProcessHandleQuota$012C0: TransmissionRetryTimeout \$012F0: USERProcessHandleQuota -------------- -------------- No strings found. -------------- -------------- REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows] "DeviceNotSelectedTimeout"="15" "GDIProcessHandleQuota"=dword:00002710 "Spooler"="yes" "swapdisk"="" "TransmissionRetryTimeout"="90" "USERProcessHandleQuota"=dword:00002710 ............. A handle was successfully obtained for the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows key. This key has 0 subkeys. The AppInitDLLs value entry was NOT found! ----------------------- »»»»»»Backups list...»»»»»» 17:18:22 up 0 days, 0:33:59 ----------------------- Sat 27 Nov 04 17:18:22 C:\FINDNFIX\ keyback.hiv Fri Nov 26 2004 10:08:14p A.... 8,192 8.00 K 1 item found: 1 file, 0 directories. Total of file sizes: 8,192 bytes 8.00 K C:\FINDNFIX\KEYS1\ winkey.reg Fri Nov 26 2004 10:08:14p A.... 268 0.26 K 1 item found: 1 file, 0 directories. Total of file sizes: 268 bytes 0.26 K *Temp backups... "C:\Documents and Settings\Guy1\Local Settings\Temp\Backs2\" keyback2.hi_ Nov 26 2004 8192 "keyback2.hi_" winkey2.re_ Nov 26 2004 268 "winkey2.re_" 2 items found: 2 files, 0 directories. 
Total of file sizes: 8,460 bytes 8.26 K -D---- JUNKXXX 00000000 22:08.14 26/11/2004 A----- STARTIT .BAT 00000060 17:16.50 27/11/2004 ___________________________________________________________________________ _____ ***THE FIX IS NOT COMPATIBLE WITH EARLIER;UNPATCHED VERSIONS OF WIN2K'(SP3 and BELLOW)' AND/OR LAX OF SECURITY UPDATES AND SERVICE PACKS FOR ALL PLATFORMS! MINIMAL REQUIREMENTS INCLUDE: _________XP HOME/PRO; SP1; IE6/SP1 _________2K/SP4; IE6/SP1 ___________________________________________________________________________ _____ »»»»»*** www10.brinkster.com/expl0iter/freeatlast/FNF/ ***»»»»» -----END------ Sat 27 Nov 04 17:18:23
obyone
Member with 39 posts.
Join Date: Jun 2003
02-Dec-2004, 05:30 AM #8
oops..here's the error.
BTW...My Task manager still looks like this.
Attachment Blocked
Flrman1 (Mark)
Member with 46,322 posts.
Join Date: Jul 2002
Location: Thomasville, NC
02-Dec-2004, 04:14 PM #9
Let's do this again!
Click here to download CWSinstall.exe. Click on the CWSinstall.exe file and it will install CWShredder. Close all browser windows, click on the cwshredder.exe then click "Fix" (Not "Scan only") and let it do its thing.
When it is finished restart your computer.
Install the program and launch it.
First in the main window look in the bottom right corner and click on Check for updates now then click Connect and download the latest reference files.
From main window :Click Start then under Select a scan Mode tick Perform full system scan.
Next deselect Search for negligible risk entries.
Now to scan just click the Next button.
When the scan is finished mark everything for removal and get rid of it.(Right-click the window and choose select all from the drop down menu and click Next)
Come back here and post another Hijack This log and we'll get rid of what's left.
OhNos111
Member with 125 posts.
Join Date: Nov 2003
03-Dec-2004, 05:37 PM #10
I did exactly that and still came out with the httpfilter and CWS malwares.
The last Hijack this Log is the most recent one, after the CWShredder and Adaware sweeps.
Flrman1 (Mark)
Member with 46,322 posts.
Join Date: Jul 2002
Location: Thomasville, NC
04-Dec-2004, 10:37 AM #11
Please post a current HJT scan.
OhNos111
Member with 125 posts.
Join Date: Nov 2003
04-Dec-2004, 04:39 PM #12
Hijack this:
Quote:
Flrman1 (Mark)
Member with 46,322 posts.
Join Date: Jul 2002
Location: Thomasville, NC
04-Dec-2004, 05:22 PM #13
Click Start > Run > and type in:
services.msc
Click OK.
In the services window find Security Agent.
Rightclick and choose "Properties". On the "General" tab under "Service Status" click the "Stop" button to stop the service. Beside "Startup Type" in the dropdown menu select "Disabled". Click Apply then OK. Exit the Services utility.
If this service isn't there then skip this part and move on.
Unzip the files to the folder of your choice.
Double-click on Killbox.exe to run it. Now put a tick by Delete on reboot. In the "Paste Full Path of File to Delete" box, copy and paste each of the following lines one at a time. After each one it will ask for confirmation to delete the file on next reboot and if you want to reboot now. Click No then OK on the next prompt. Continue with that same procedure until you have copied and pasted all of these in the "Paste Full Path of File to Delete" box.
C:\WINDOWS\system32\scagent.exe
C:\WINDOWS\httpfilter.dll
C:\WINDOWS\httpfilter2.dll
C:\WINDOWS\httpfilter1.dll
Exit the Killbox.
Next run Hijack This again and put a check by these. Close ALL windows except HijackThis and click "Fix checked"
O2 - BHO: (no name) - {15F2721F-8B6E-4CF4-905F-9AFB3C2D311B} - C:\WINDOWS\mindep.dll (file missing)
O18 - Filter: text/html - {EE7A946E-61FA-4979-87B8-A6C462E6FA62} - C:\WINDOWS\httpfilter.dll
OhNos111
Member with 125 posts.
Join Date: Nov 2003
07-Dec-2004, 03:28 AM #14
OK...this is what I got.
Quote:
Flrman1 (Mark)
Member with 46,322 posts.
Join Date: Jul 2002
Location: Thomasville, NC
07-Dec-2004, 08:08 AM #15
Fix this one:
R3 - Default URLSearchHook is missing
Restart to safe mode.
How to start your computer in safe mode
In safe mode navigate to the C:\Windows\Temp folder. Open the Temp folder and go to Edit > Select All then Edit > Delete to delete the entire contents of the Temp folder.
Go to Start > Run and type %temp% in the Run box. The Temp folder will open. Click Edit > Select All then Edit > Delete to delete the entire contents of the Temp folder.
Finally go to Control Panel > Internet Options. On the General tab under "Temporary Internet Files" Click "Delete Files". Put a check by "Delete Offline Content" and click OK. Click on the Programs tab then click the "Reset Web Settings" button. Click Apply then OK.
Empty the Recycle Bin
techguy.org/300882
Currently Active Users Viewing This Thread: 1 (0 members and 1 guests) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37191253900527954, "perplexity": 25984.929446505445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010557037/warc/CC-MAIN-20140305090917-00020-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Slurry | # Slurry
A slurry composed of glass beads in silicone oil flowing down an inclined plane
A slurry is a mixture of solids denser than water suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids, the liquid being a carrier that is pumped using a device such as a centrifugal pump. The size of solid particles may vary from 1 micron up to hundreds of millimeters.
The particles may settle below a certain transport velocity and the mixture can behave as a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive.
## Examples
Examples of slurries include cement slurry, coal slurry pumped through pipelines, drilling mud, and liquid manure.
## Calculations
### Determining solids fraction
To determine the percent solids (or solids fraction) of a slurry from the density of the slurry, solids and liquid[7]
$\phi_{sl} = \dfrac{\rho_s(\rho_{sl} - \rho_l)}{\rho_{sl}(\rho_s - \rho_l)}$
where
$\phi_{sl}$ is the solids fraction of the slurry (by mass)
$\rho_s$ is the solids density
$\rho_{sl}$ is the slurry density
$\rho_l$ is the liquid density
In aqueous slurries, as is common in mineral processing, the specific gravity of the species is typically used, and since $SG_{water}$ is taken to be 1, this relation is typically written:
$\phi_{sl} = \dfrac{\rho_s(\rho_{sl} - 1)}{\rho_{sl}(\rho_s - 1)}$
even though specific gravity with units of tonnes/m^3 (t/m^3) is used instead of the SI density unit, kg/m^3.
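As a quick numerical check of this formula (my own sketch, not part of the original article; the densities below are assumed illustrative values, with the solids taken as silica at a specific gravity of about 2.65):

def solids_mass_fraction(rho_s, rho_sl, rho_l=1.0):
    # densities in t/m^3 (i.e. specific gravities); returns the solids fraction by mass
    return rho_s * (rho_sl - rho_l) / (rho_sl * (rho_s - rho_l))

print(solids_mass_fraction(2.65, 1.33))  # ~0.40, i.e. roughly 40% solids by mass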
### Liquid mass from mass fraction of solids
To determine the mass of liquid in a sample given the mass of solids and the mass fraction: By definition
$\phi_{sl} = \dfrac{M_s}{M_{sl}}$
therefore
$M_{sl} = \dfrac{M_s}{\phi_{sl}}$
and
$M_s + M_l = \dfrac{M_s}{\phi_{sl}}$
then
$M_l = \dfrac{M_s}{\phi_{sl}} - M_s$
and therefore
$M_l = \dfrac{1 - \phi_{sl}}{\phi_{sl}}\,M_s$
where
$\phi_{sl}$ is the solids fraction of the slurry
$M_s$ is the mass or mass flow of solids in the sample or stream
$M_{sl}$ is the mass or mass flow of slurry in the sample or stream
$M_l$ is the mass or mass flow of liquid in the sample or stream
### Volumetric fraction from mass fraction
$\phi_{sl,m} = \dfrac{M_s}{M_{sl}}$
Equivalently
$\phi_{sl,v} = \dfrac{V_s}{V_{sl}}$
and in a minerals processing context where the specific gravity of the liquid (water) is taken to be one:
$\phi_{sl,v} = \dfrac{M_s/SG_s}{M_s/SG_s + M_l/1}$
So
$\phi_{sl,v} = \dfrac{M_s}{M_s + M_l\,SG_s}$
and
$\phi_{sl,v} = \dfrac{1}{1 + \frac{M_l\,SG_s}{M_s}}$
Then combining with the first equation:
$\phi_{sl,v} = \dfrac{1}{1 + \frac{M_l\,SG_s}{\phi_{sl,m}\,M_s}\,\frac{M_s}{M_s + M_l}}$
So
$\phi_{sl,v} = \dfrac{1}{1 + \frac{SG_s}{\phi_{sl,m}}\,\frac{M_l}{M_s + M_l}}$
Then since
$\phi_{sl,m} = \dfrac{M_s}{M_s + M_l} = 1 - \dfrac{M_l}{M_s + M_l}$
we conclude that
$\phi_{sl,v} = \dfrac{1}{1 + SG_s\left(\frac{1}{\phi_{sl,m}} - 1\right)}$
where
$\phi_{sl,v}$ is the solids fraction of the slurry on a volumetric basis
$\phi_{sl,m}$ is the solids fraction of the slurry on a mass basis
$M_s$ is the mass or mass flow of solids in the sample or stream
$M_{sl}$ is the mass or mass flow of slurry in the sample or stream
$M_l$ is the mass or mass flow of liquid in the sample or stream
${\displaystyle SG_{s}}$ is the bulk specific gravity of the solids | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487261772155762, "perplexity": 2080.4377140288075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00101.warc.gz"} |
https://goodboychan.github.io/python/tensorflow/machine_learning/2020/09/11/01-Application-and-Tips-for-Machine-Learning.html | import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['text.usetex'] = True
plt.rc('font', size=15)
## Learning Rate
We used the gradient descent method to find the weights that minimize the cost. At each step in training, each element of the weight vector is updated in the direction of the negative gradient.
$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$
Here we can see the constant term $\alpha$, also known as the learning rate. The learning rate is a hyper-parameter that controls how much we adjust the weights with respect to the loss gradient. Simply speaking, it controls the speed at which we move toward the answer we are looking for.
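As a minimal sketch of that update (mine, not from the original notebook), one gradient-descent step on a parameter vector looks like this, where grad is assumed to be the gradient of the cost at the current parameters:

import numpy as np

def gradient_step(theta, grad, alpha=0.01):
    # move against the gradient, scaled by the learning rate alpha
    return theta - alpha * np.asarray(grad)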
### Good and Bad learning rate
Suppose there is an optimal point and we are searching for it manually. If the speed is too fast, we may overshoot the optimal point and then have to go back and search again, or we may miss it entirely and keep going; sometimes the search ends up bouncing forward and backward repeatedly. If the speed is too slow, it simply takes a long time to find the point.
Actually, there is no universally correct learning rate. It differs from case to case and environment to environment, and different optimizers favor different learning rates. Andrej Karpathy has (half-jokingly) suggested that $3 \times 10^{-4}$ is the best learning rate for the Adam optimizer.
### Annealing the learning rate
So it is a good idea to change the learning rate as training progresses. This is called annealing (or decaying) the learning rate. In the example above, the model may keep stepping forward and backward over the optimum; if we shrink the learning rate in that situation, it can stop struggling around the saddle point and keep homing in on the optimum.
There are various ways to anneal the learning rate (a small sketch of these schedules follows the list):
• Step decay: Decay the rate every N epochs.
• Exponential decay: Decay the rate with exponential function ($\alpha = \alpha_0 e^{-kt}$)
• ${1 \over t}$ decay: Decay the rate with fraction ($\alpha = \frac{\alpha_0}{1 + kt}$)
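As a rough sketch of these schedules (my own illustration, not code from the original notebook), each one can be written as a function of the epoch t, with alpha_0 the initial rate and k an assumed decay constant:

import numpy as np

alpha_0, k = 0.1, 0.05

def step_decay(t, drop=0.5, every=10):
    # assumed: halve the rate every 10 epochs
    return alpha_0 * drop ** (t // every)

def exponential_decay(t):
    return alpha_0 * np.exp(-k * t)

def one_over_t_decay(t):
    return alpha_0 / (1 + k * t)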
## Data Preprocessing
When we implemented the logistic regression model for predicting diabetes, we found that the scale of each column differs from the others. Such a dataset is hard to train on because the model responds to the raw values themselves; for example, it may learn that a feature with large values is always important for the prediction. The mean and variance may also differ between columns, in which case the model can misread the data and treat perfectly valid points as outliers.
For this problem, we apply a method called feature scaling, which puts the data on a common standard. There are two common ways to do feature scaling:
• Standardization : If you have taken a statistics class, you have probably heard of the standard normal distribution. To convert a normal distribution into the standard normal distribution, we calculate the Z-score (also known as the standard score). This produces a distribution with mean 0 and standard deviation 1. The same approach applies here: given a dataset, computing the Z-score of each value rescales it to a standard distribution. The Z-score is calculated as
$$x_{new} = \frac{x - \mu}{\sigma}$$
• Normalization : Unlike standardization, normalization maps the dataset into a finite range, such as 0 to 1. To do this, we treat the maximum value of the dataset as 1 and the minimum value as 0, and compute where each measured point falls between them as a ratio. For this reason it is also called min-max scaling (both rescalings are sketched in code just below).
$$x_{new} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
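Both rescalings are one-liners with NumPy. This is just an illustrative sketch on a made-up column x, not code from the original notebook:

import numpy as np

x = np.array([1.0, 2.0, 5.0, 10.0, 120.0])

standardized = (x - x.mean()) / x.std()            # z-scores: mean 0, standard deviation 1
normalized = (x - x.min()) / (x.max() - x.min())   # min-max scaling: values in [0, 1]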
## Overfitting
Usually, the learning process of updating the weights and bias is called fitting, so training is also referred to as fitting the model. So what does overfitting mean?
If we train the model on the given data, we can see that it is learning by monitoring the cost or loss at each epoch. The problem is: what happens if we feed unknown data to the trained model? Does it still work? We have no information about unknown data (its mean, standard deviation, etc.), and it may contain plenty of outliers the model never saw during training.
As a result, a well-performing model must work well not only on the training data but also on unknown data. That is why we separate a training dataset and a test dataset in the data preprocessing step.
Overfitting and underfitting are the problems described above. Overfitting is the situation where the model performs well on the training dataset but poorly on the test dataset. This is often described as high variance: in statistical terms, high variance means the model (or estimator) changes a lot depending on which data you happen to get, and it tends to occur when the model has many parameters.
Underfitting is the situation where the model does not even work well on the training dataset. It has high bias, since the model lacks the capacity to capture the differences within the data; underfitting models typically have too few parameters.
So, to avoid overfitting or underfitting, we need to control the number of parameters. The most effective remedy for both is to get more, and more varied, training data, but that is costly and the amount of data is never infinite. A second approach is Principal Component Analysis (PCA for short), an unsupervised learning method for dimensionality reduction. With PCA we can reduce the number of features (keeping the components that explain most of the variation in the data) to combat high variance. If we suffer from underfitting instead, we can add more features.
Another way is to add a regularization term to the loss. Consider the hypothesis and cost function for linear regression.
$$H_{\theta}(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \dots \\ J(\theta) = {1 \over 2m} \sum_{i=1}^m (H_{\theta}(x_i) - y_i)^2$$
We can add a term to the cost function that expresses the size of the weight vector, which modifies it like this:
$$J(\theta) = {1 \over 2m} \sum_{i=1}^m (H_{\theta}(x_i) - y_i)^2 + {\lambda \over 2m} \sum_{j=1}^m \theta_j^2$$
The case above is an example of L2 loss (squared error) with L2 regularization. If you want to use L1 loss (absolute error) with L1 regularization, you can write it like this:
$$J(\theta) = {1 \over m} \sum_{i=1}^m \vert H_{\theta}(x_i) - y_i \vert + {\lambda \over m} \sum_{j=1}^m \vert \theta_j \vert$$
It's up to you which loss function and regularization term you choose.
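For reference, an L1 penalty can be implemented in the same style as the L2 penalty used later in this post. This is a sketch of mine, assuming the weight matrix W defined further below:

def cost_with_l1_regularizer(loss, beta=0.01):
    # sum of absolute weights instead of tf.nn.l2_loss(W)
    return tf.reduce_mean(loss + beta * tf.reduce_sum(tf.abs(W)))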
There are other techniques as well, such as data augmentation and dropout. Again, there is no single best way to handle overfitting and underfitting: you have to experiment with the features and tune the model based on its error.
## Application and tips with Tensorflow
### Data Preprocess
Let's look at an example of data preprocessing, focusing on normalization. We have a dataset whose columns are on very different scales:
data = np.array([[828.659973, 833.450012, 908100, 828.349976, 831.659973],
[823.02002, 828.070007, 1828100, 821.655029, 828.070007],
[819.929993, 824.400024, 1438100, 818.97998, 824.159973],
[816, 820.958984, 1008100, 815.48999, 819.23999],
[819.359985, 823, 1188100, 818.469971, 818.97998],
[819, 823, 1198100, 816, 820.450012],
[811.700012, 815.25, 1098100, 809.780029, 813.669983],
[809.51001, 816.659973, 1398100, 804.539978, 809.559998]])
x_train = data[:, :-1]
y_train = data[:, [-1]]
plt.plot(x_train, 'ro')
plt.plot(y_train)
plt.show()
As you can see, most of the values cluster near 800, but some are over 1,000,000. To handle this, we can apply normalization.
def normalize(data):
    return (data - np.min(data, 0)) / (np.max(data, 0) - np.min(data, 0))
data_normalized = normalize(data)
x_train = data_normalized[:, :-1]
y_train = data_normalized[:, -1]
plt.plot(x_train, 'ro')
plt.plot(y_train)
plt.show()
### Regularization term
Now we will build a linear regression model. It will be helpful to add a regularization term to the cost function to avoid overfitting.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(len(x_train))
# Initialize Weight and bias
W = tf.Variable(tf.random.normal([x_train.shape[1], 1]), dtype=tf.float32)
b = tf.Variable(tf.random.normal([1]), dtype=tf.float32)
variables = [W, b]
We can define the hypothesis of linear regression and regularization term.
$$J(\theta) = {1 \over 2m} \sum_{i=1}^m (H_{\theta}(x_i) - y_i)^2 + {\lambda \over 2m} \sum_{j=1}^m \theta_j^2$$
def hypothesis(X):
    return tf.matmul(X, W) + b

# cost with regularizer
def cost_with_regularizer(loss, beta=0.01):
    W_reg = tf.nn.l2_loss(W)
    return tf.reduce_mean(loss + beta * W_reg)

# Loss function
def loss_fn(h, y, flag=False):
    cost = tf.reduce_mean(tf.square(h - y))
    if flag:
        cost = cost_with_regularizer(cost)
    return cost
### Learning rate Decay
We can also implement learning rate decay. In this example, we will apply exponential decay using TensorFlow's built-in schedule.
start_lr = 0.1
# Learning rate with exponential decay
learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate=start_lr,
                                                               decay_steps=50,
                                                               decay_rate=0.99,
                                                               staircase=True)
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
### Training process
Then we can build the gradient function, and finalize the training process
def grad(X, y, flag):
    # gradients of the (optionally regularized) loss with respect to W and b
    with tf.GradientTape() as tape:
        loss = loss_fn(hypothesis(X), y, flag)
    return tape.gradient(loss, variables)

for e in range(500):
    for X, y in dataset:
        X = tf.cast(X, tf.float32)
        y = tf.cast(y, tf.float32)
        grads = grad(X, y, True)
        optimizer.apply_gradients(zip(grads, variables))
        loss = loss_fn(hypothesis(X), y, True).numpy()  # scalar value for logging
    if e % 50 == 0:
        print('epoch: {}, loss: {:.4f}'.format(e, loss))
epoch: 0, loss: 0.8063
epoch: 50, loss: 0.1472
epoch: 100, loss: 0.1113
epoch: 150, loss: 0.1031
epoch: 200, loss: 0.1005
epoch: 250, loss: 0.0993
epoch: 300, loss: 0.0985
epoch: 350, loss: 0.0978
epoch: 400, loss: 0.0972
epoch: 450, loss: 0.0968
## Summary
We covered the some application and tips for regression model. Especially, we talked about learning rate, and data preprocessing. And also to avoid overfitting problem, we can apply regularization term. And through the example, (and thanks to tensorflow) we can implement it easily. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7390015721321106, "perplexity": 1522.0154102142328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00584.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=502897 | MathSciNet bibliographic data MR502897 58C20 (41A50) Abatzoglou, Theagenis J. The minimum norm projection on $C^2$-manifolds in $\mathbf{R}^n$. Trans. Amer. Math. Soc. 243 (1978), 115–122. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965233206748962, "perplexity": 7050.419135604503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607998.27/warc/CC-MAIN-20170525044605-20170525064605-00071.warc.gz"} |
https://blog.jverkamp.com/2016/12/15/aoc-2016-day-15-capsule-dropper/ | # AoC 2016 Day 15: Capsule Dropper
### Source: Timing is Everything
Part 1: Given a series of openings one second apart, each with n positions that advance one position per second, what is the first time you can start the simulation so that you pass each in position 0.
Not much to say about this one. Load the disks and then check each disk. The most interesting part is using modular arithmetic so that you don’t have to actually determine the current position–just check if it’s 0.
import fileinput
import re

# 'args' and naturals() are assumed to come from the blog's shared helper module
discs = []
for line in fileinput.input(args.files):
    if not line.strip() or line.startswith('#'):
        continue

    m = re.match(r'Disc #(\d+) has (\d+) positions; at time=0, it is at position (\d+).', line)
    index, count, current = m.groups()
    discs.append((int(index), int(count), int(current)))

def solve():
    for button_press in naturals():  # naturals() yields successive integers
        success = True
        for (index, count, current) in discs:
            if (button_press + index + current) % count != 0:
                success = False
                break
        if success:
            return button_press

print('Press the button at t = {}'.format(solve()))
Part 2: Add a new opening at the end with 11 positions.
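In code this is just one more entry appended to discs before calling solve(). A sketch of mine, assuming (per the puzzle statement) that the new disc starts at position 0:

discs.append((len(discs) + 1, 11, 0))  # one past the last disc, 11 positions, starting at 0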
This doesn’t actually change the problem. It just makes it slower. But it still finishes well within a minute, so not much point in optimizing it. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27817851305007935, "perplexity": 3787.8331628112724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00066.warc.gz"} |
http://www.physicsforums.com/showthread.php?p=3883204 | # Two Stefan Boltzmann Laws
by PatF
Tags: boltzmann, laws, stefan
P: 17 In Reif's book Fundamentals of Statistical and Thermal Physics, he labels two formulas as the Stefan-Boltzmann Law. They both involve T^4 but the constant is different. In one, on page 376, the law is given as $(\pi^2/15)\,(kT)^4/(c\hbar)^3$. The other, on page 388, is $(\pi^2/60)\,(kT)^4/(c^2\hbar^3)$. The second formula is c/4 times the first. I have looked carefully through the text and I can't see why there should be a difference. What have I overlooked?
P: 2,035 I don't have the book, but probably they are not the same - one is referring to the intensity of emitted radiation, and the other is referring to the energy density of radiation. The two differ by a factor of c/4. Try this: http://hyperphysics.phy-astr.gsu.edu...m/raddens.html
P: 17 Thanks much! I just checked and I think you are right.
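A quick numerical check of that factor (my own sketch, using CODATA values for the constants): the page-388 constant evaluates to the familiar Stefan-Boltzmann constant, and the page-376 (energy density) constant is exactly 4/c times larger.

import math

k = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

sigma = math.pi**2 * k**4 / (60 * hbar**3 * c**2)  # intensity form (p. 388)
a = math.pi**2 * k**4 / (15 * hbar**3 * c**3)      # energy-density form (p. 376)

print(sigma)               # ~5.67e-8 W m^-2 K^-4
print(a * c / 4 / sigma)   # 1.0, confirming the factor of c/4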
Related Discussions Introductory Physics Homework 4 Advanced Physics Homework 2 Introductory Physics Homework 1 General Physics 6 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398767948150635, "perplexity": 558.9454467471734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654390/warc/CC-MAIN-20140305060734-00086-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://bestcase.wordpress.com/2015/06/11/capturerecapture-part-one/ | ## Capture/Recapture Part One
Kids doing capture/recapture. From Dan Meyer.
If you’ve been awake and paying attention to stats education, you must have come across capture/recapture and associated classroom activities.
The idea is that you catch 20 fish in a lake and tag them. The next day, you catch 25 fish and note that 5 are tagged. The question is, how many fish are in the lake? The canonical answer is 100: having 5 tagged in the 25 suggests that 1/5 of all fish are tagged; if 20 fish are tagged, then the total number must be 100. Right?
Sort of. After all, we’ve made a lot of assumptions, such as that the fish instantly and perfectly mix, and that when you fish you catch a random sample of the fish in the lake. Not likely. But even supposing that were true, there must be sampling variability: if there were 20 out of 100 tagged, and you catch 25, you will not always catch 5 tagged fish; and then, looking at it the twisted, Bayesian-smelling other way, if you did catch 5, there are lots of other plausible numbers of fish there might be in the lake.
Let’s do those simulations.
## Easy one first
We assume there are actually 100 fish in the lake, of which 20 are tagged. We sample 25 and count the tagged fish, then infer the size of the population.
Let T be total number of tagged fish (20), R be the number that you recapture (25), P be the population, and Q the number of tagged fish in the recaptured set. Our estimate for P is:
$P = T\frac{R}{Q}$
So if Q = 5, our estimate is 100.
In the simulation, then, we’ll let Q be a random binomial, choosing from R events with a probability of (T/100). Here are 100 estimates for the population P:
100 estimates of the population. True value = 100.
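The simulation is easy to reproduce. Here is a sketch in Python with NumPy (my code, not the tool the post itself used):

import numpy as np

T, R, true_pop = 20, 25, 100
Q = np.random.binomial(R, T / true_pop, size=100)  # tagged fish in each recapture of 25
Q = Q[Q > 0]                                       # drop the rare draws with zero tagged fish
estimates = T * R / Q                              # P = T * R / Q
print(np.percentile(estimates, [5, 50, 95]))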
I look at this and think, crikey, that’s terrible! It’s really easy to be off by 50% or more.
Of course, things change if you change the setup. For example, if you tag a large fraction of the fish in the lake, the estimates get better and better. But the point is to be able to estimate the population well without actually counting them, right? (Well, that partly misses the point, as we will see in the next post. But that’s what I thought for a long time.)
## Now the Harder Version
Now we do it the other way. Suppose we know the population P and the two sample sizes T and R. That is, suppose we know there are 100 fish in the lake, that we will tag 20 of them, then, later, catch 25. We’ll get some number of tagged fish in the 25. We expect about 5, but it will vary.
The distribution of Q. How many tagged fish will we get out of 25, assuming the total population is 100?
That’s what you see in the figure: 100 examples of the number of tagged fish you will “recapture” under our initial assumption that there are 100 fish in the lake. The vertical lines at 2 and 8 are at the 5th and 95th percentiles. Notice that the expected value, 5, is in the middle of the distribution.
We wonder (à la doing a confidence interval): What is the range of populations for which getting 5 tagged fish out of 25 is plausible? We’ll vary the true population (P will vary up and down from 100) and see where that “5” is in the distribution.
We could figure it out exactly using the binomial distribution, but if we just simulate it, we get an answer of “between about 60 and 200,” which is also terrible. (I put the population on a slider, and slid it back and forth until the 5th and 95th percentiles stayed more or less at 5 fish. That gives me a 90% confidence interval.)
The next two illustrations show the distribution of Q, the number of tagged fish in the sample, for populations P of 60 and 200.
Distribution of number of recaptured tagged fish when the population is 60. The fifth percentile is at 5.
Distribution when the population is 200. Now the 95th percentile is at 5.
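The slider search can also be scripted. A rough sketch of the same idea (again NumPy, not the original tool): keep every population for which observing 5 tagged fish falls inside the middle 90% of the distribution of Q.

import numpy as np

plausible = []
for pop in range(40, 400):
    Q = np.random.binomial(25, 20 / pop, size=10000)
    lo, hi = np.percentile(Q, [5, 95])
    if lo <= 5 <= hi:
        plausible.append(pop)
print(plausible[0], plausible[-1])  # roughly 60 and 200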
So sure, playing with and eating goldfish crackers in class is great, and you certainly could do it several times and average as in this fine post from ispeakmath. And no question that the proportional reasoning here is just the kind of thing we want kids to do. But doesn’t it bother you that a real fisheries person would probably not repeat this procedure and average in order to get a better estimate?
I was tempted to find out what real fisheries people DO do, but never got around to it. But then, a couple days ago, I came across a Very Compelling Real-Life Application Of This Technique. Stay tuned. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5342565774917603, "perplexity": 795.6626047999056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00437-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/viscosity-of-water.152294/ | # Viscosity of Water
1. Jan 20, 2007
1. The problem statement, all variables and given/known data
Hi guys, I need to find the flow of water at temperature of 350K. I had to do this, but for air in the same problem...but I can't find an equation to account for the viscosity change, due to temperature rise. My book lists it at 86.0 x 10^-5 Pa*s @ temp 300K.
2. Relevant equations
3. The attempt at a solution
I've searched the web for some time now, and can't find any equations for water. Any help would be great.
Thanks,
2. Jan 20, 2007
### AlephZero
3. Jan 20, 2007
Ok, so I found the page(thanks)....and the first link takes me right to what I need. But....the viscosity is dynamic(is that the same as just plain viscosity?).......and also, they have it measured in kg/m*s and I need it measured in Pa*s. Again, I've looked in my book and on the net, and I can't find out how you convert between those two.
The only conversion that I found(and I don't think it works)...but its 1kg(force)/m^2 = 9.806650 Pa.
4. Jan 20, 2007
Does anyone know about these conversions.....???
5. Jan 21, 2007
### AlephZero
1 Pascal (Pa) is 1 Newton/m^2
1 Newton (N) is the force to accelerate 1 Kg at 1 m/s^2
So 1 Pa.s = 1 N.s/m^2 = 1 (Kg.m/s^2).(s/m^2) = 1 Kg/(m.s)
BTW There are two different units for viscosity, dynamic and kinematic. Kinematic viscosity = dynamic viscosity / density. The one you want is dynamic viscosity.
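If it helps, one quick way to get a number at 350 K is to interpolate a small table. The values below are approximate handbook figures for the dynamic viscosity of water (not from your textbook), so double-check against your own table:

T_K = [300 320 340 360];               % temperature in K
mu  = [8.5e-4 5.8e-4 4.2e-4 3.2e-4];   % dynamic viscosity in Pa*s (approximate)
mu_350 = interp1(T_K, mu, 350)         % about 3.7e-4 Pa*s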
6. Jan 21, 2007
Oh ok.....so they are the same units then (or equivalent)....I guess I didn't put too much time into it, seems pretty obvious now, duh. ha. Well thanks though, I really appreciate you showing me the right direction!
http://blogs.mathworks.com/seth/2010/01/21/building-models-with-matlab-code/ | # Building Models with MATLAB Code
Posted by Seth Popinchalk,
Occasionally I get questions about how to build, modify, and add blocks to Simulink models using MATLAB commands. In this post, I will give a basic overview of the common model construction commands.
Because you need to refer to the system so often when doing model construction from M-code, I immediately save that off in a variable called sys. The new_system command created the empty model in memory, and you have to call open_system to display it on-screen.
sys = 'testModel';
new_system(sys) % Create the model
open_system(sys) % Open the model
When I add blocks to the canvas, I specify the position to provide proper layout. The position parameter provides the top left (x,y) and lower right (x+w,y+h) corners of the block. The x and y values are relative to the origin (0,0) in the upper left corner of the canvas; x increases to the right, and y increases down. To keep my layout organized, I use a standard blocks size of 30 by 30, and offsets of 60.
x = 30;
y = 30;
w = 30;
h = 30;
offset = 60;
I like my ports with slightly different proportions, so I define them to be half the height of the other blocks. add_block specifies the source block and the destination path, which defines the block name. Block names must be unique for a given system so add_block provides a MakeNameUnique option. (not used here)
pos = [x y+h/4 x+w y+h*.75];
add_block('built-in/Inport',[sys '/In1'],'Position',pos);
I'll add an integrator block, offset to the right of the inport.
pos = [(x+offset) y (x+offset)+w y+h];
add_block('built-in/Integrator',[sys '/Int1'],'Position',pos)
To connect the blocks, call add_line and provide the system name, source port and destination port. The ports are designated by the 'blockname/PortNum' format. Default line routing is a direct line connection from the source to destination. I prefer to use the autorouting option.
add_line(sys,'In1/1','Int1/1','autorouting','on')
pos = [(x+offset*2) y (x+offset*2)+w y+h];
add_block('built-in/Integrator',[sys '/Int2'],'Position',pos)
add_line(sys,'Int1/1','Int2/1','autorouting','on')
pos = [(x+offset*2) y+offset (x+offset*2)+w (y+offset)+h];
add_block('built-in/Scope',[sys '/Scope1'],'Position',pos)
add_line(sys,'Int1/1','Scope1/1','autorouting','on')
### Deleting Blocks and Lines
When deleting blocks, I call delete_line before delete_block. This is the reverse of what I did before. The commands are grouped in delete_line/delete_block pairs. For this example, I'll delete the integrator Int2, and add an outport.
delete_line(sys,'Int1/1','Int2/1')
delete_block([sys '/Int2'])
pos = [(x+offset*2) y+h/4 (x+offset*2)+w y+h*.75];
add_block('built-in/Outport',[sys '/Out1'],'Position',pos);
add_line(sys,'Int1/1','Out1/1')
### Replacing Blocks
Sometimes you don't really want to delete a block, you are just going to replace it. replace_block gives you the capability to replace all blocks that match specific criteria. I recommend carefully reading the documentation to better understand this function.
replace_block(sys,'Name','In1','built-in/Sin','noprompt');
set_param([sys '/In1'],'Position',[x y x+w y+h],'Name','Sine Wave');
[t,x,y] = sim(sys);
plot(t,y), ylim([-.5 2.5]), grid on
Do you use model construction commands? Why doesn't the cosine on that scope cross below zero? Leave a comment here with your thoughts.
Published with MATLAB® 7.9
Bob replied on : 1 of 71
Yes, I’ve used the model construction commands.
Occasionally to build up models.
More interestingly to use MATLAB and library blocks with the option turned on to modify contents to populate the block based on mask parameters. Amazing what one can do that way, I’ve been able to avoid writing S-Functions with the approach.
I’ve occasionally needed to replace blocks, but never quite had replace_block work. Problems having lines reconnect with more than one inport/outport. Problems because dialog parameters need to be transformed for the new block.
Jim replied on : 2 of 71
I have done this extensively. I’ve also used commands to create/populate Stateflow flowcharts.
One great use I’ve found is to automatically connect multiple component models into a complete system model. Once the script works, for large models this approach is much faster and much more accurate than connecting ports by hand.
Seth replied on : 3 of 71
@Bob – I like you example of using self modifying blocks in place of S-functions. I find REPLACE_BLOCK is a really powerful tool for updating models that might use an obsolete utility block.
@Jim – I have never written code to connect up large system models into components, but I have seen the results. This is when auto line routing can really improve the look of the diagram.
Seth, thank you very much!
I have additional question. Is it possible to alter Simulink model built into executable? I need a way for changing Simulink model on machine where I have only Matlab Runtime Environment.
Seth replied on : 5 of 71
@Aleksandar – If you build a Simulink model into an executable using Real-Time Workshop, you will only be able to modify the run-time parameters of the model. Using the Rapid Simulation Target (RSIM), you can use RSIMGETRTP to get a run-time parameter structure. This can be modified in MATLAB, saved to a MAT-file and passed into your executable as an argument. Look at the Accelerated Simulations Demos in Real-Time Workshop for examples of how to do this.
If your goal is to modify the MDL file and use the SIM command to run it, that will not work in a deployed MATLAB application.
Marko replied on : 6 of 71
I have one question, but I am not sure if this is the right address for it. Anyway, I've been working in SimPowerSystems for a short period of time. I would like to know if there is any possibility of speeding up a simulation, since any model I have run seems to work in real time. To be more specific, I would like to model how much energy (kWh) a wind turbine would produce in a year, using some of the prefabricated models inside SimPowerSystems. So, if I put that time (365*24*3600) in the simulation, it seems it would run for a month.
Guy replied on : 7 of 71
@Marko – Simulink includes Accelerator and Rapid Accelerator modes which can help speed up models. However, in your case I am not sure this is the appropriate option.
You probably noticed that SimPowerSystems includes Wind Farm demos implemented in two ways: detailed and average. For most applications, I recommend having a detailed model to study the transitional dynamics of the system over a short period of time. For the long term you want an average model which will skip details but will run fast.
Marko replied on : 8 of 71
@Guy – Thank you for your effort, but as far as accelerators are concerned, they compile the model into C code and then use some techniques to accelerate the model, but in my application that is not very suitable.
Regarding those average models, you will notice that both detailed and average run for 0.2 sec at approximately the same speed (I have put, for example, 100 sec as the simulation time, and it seems to run forever). However, I was wondering if there is any model that could be run for a much longer time.
Thank you anyway!
Marko replied on : 9 of 71
I am trying to deploy a model on the web. To do so, first I have to build the model using RTW, but when I try to build it, it shows me the following message:
Algebraic loops are not supported in generated code. Use the ‘ashow’ command in the Simulink Debugger to see the algebraic loops.
I started with very simple model, which has a loop. I desperately need to work this through. Is there any alternative solution for this? I would be very grateful for any answer! Thank you!
Seth replied on : 10 of 71
@Marko – Real-Time Workshop doesn’t generate code for models that contain algebraic loops. The only option is to remove them from your model. There is a technical support solution entitled: What are algebraic loops in Simulink and how do I solve them?
I recommend you review that information and see if you can resolve the algebraic loop. Good luck!
Mark replied on : 11 of 71
Seth, thank you very much for this post!
I have additional question. Is it possible to rotate/flip blocks with model construction commands? Thank you!
Mark replied on : 13 of 71
Marko replied on : 14 of 71
@Seth, I want to thank you on your valuable comment and I would appreciate if you could tell me if Real-Time Workshop is going to support that functionality (“generate code for models that contain algebraic loops”) in future releases.
Thank you very much again!
Jesus replied on : 15 of 71
Thanks for your post on Simulink model construction commands, but where could we obtain the commands for Simscape models. I infer it uses the same commands, but how can they be parametrized? Could you give us some examples?
Thanks
Guy replied on : 16 of 71
@Jesus – Construction commands are not fully supported for Simscape blocks. For example ADD_BLOCK works fine, but the way to use ADD_LINE is not documented and consequently not supported.
It is possible to use set_param and get_param on Simscape blocks, however the following documentation page warns you:
http://www.mathworks.com/access/helpdesk/help/toolbox/physmod/simscape/ug/bqqjdvg-1.html
It mentions that “You can use the Simulink set_param and get_param commands to set or get Simscape block parameters. The MathWorks does not recommend that you use these commands for this purpose.”
Based on experience, I use set_param with Simscape blocks only to set the value of some dialog parameters, for example to set the resistance of a Resistor block you can do "set_param(gcb,'R','100')".
Seth replied on : 17 of 71
@Marko – As you know, algebraic loop solving is an iterative process and doesn’t guarantee a solution. It is also not appropriate within an embedded system without some kind of control for performance.
I have heard other requests for support to generate code from models that contain algebraic loops, but these are generally in the context of rapid simulation/batch simulation.
I wonder if your situation is the same? I have passed this on to our developers to consider for a future release. Thanks!
Alex replied on : 18 of 71
Seth – I would like your advice on using systems for rapid prototype development.
rajshekhar replied on : 19 of 71
I have a question: can output from m-file code be fed into an input port of a Simulink model?
Hadi Ariakia replied on : 20 of 71
Hi Seth
Thanks a lot, it is really useful.
I’ve a question regarding the add_line, delete_line etc.
I always have to use power Sim (SimPowerSystems), but I cannot use delete_line to delete lines between some blocks. For example, I cannot delete lines connecting two "Distributed Parameters Line" blocks, nor can I delete or add lines to/from most of the ports of the "Three-Phase V-I Measurement" block.
I’ve tried all the below commands:
delete_line('Ps_circuit','Section 2/1','Section 3/1')
delete_line('Ps_circuit','Three-Phase V-I Measurement1/3','Section 1/1')
Whilst the command works for some ports such as Three-Phase V-I Measurement1/1 !
Actually, it returns error messages such as “Invalid Simulink object name: Three-Phase V-I Measurement1/3.”
Could you please give some idea what the problem is?
Regards,
Guy replied on : 21 of 71
Please see my post above. Construction commands like add_line and delete_line are not documented and consequently not supported for physical signals, including SimPowerSystems.
For these blocks, you will be able to use add_line for simulink ports, but not for physical ports.
Guy
Hadi replied on : 23 of 71
Hi Guy
Thank you very much indeed.
raju replied on : 24 of 71
hi
thank you seth and others discussed here.
can you help me in reading the propagated signal names of the vector muxed with n lines one by one through a code.
raju replied on : 25 of 71
hi
can i assign signal names of the lines composed in a vector by naming the vector directly.
wil be looking for answers eagerly.
tong replied on : 26 of 71
Hello!
I have just installed MATLAB 2010. But in the Simulink part, the Embedded MATLAB Function block cannot be used.
It always says 'the block can't be found'. However, the other blocks in User-Defined Functions can be used.
So please give me a hand. Thank you!
Rajshekar replied on : 27 of 71
Hi,
How can I connect the output of one block to several other blocks, i.e., one signal to many blocks?
Guy replied on : 28 of 71
Rajshekar,
ADD_LINE can be used for that. For example:
add_line(gcs,'Constant/1','Display1/1')
add_line(gcs,'Constant/1','Display2/1')
will first connect "Constant" with "Display1". The second line will branch out from the line just created and finally "Constant" will be connected to both "Display1" and "Display2".
Guy
Jörg replied on : 29 of 71
Seth,
way back in the good old days of R13 there was the
save_as(..., 'm')
command. Given a model, it could write an m-script, that when executed, let the model reappear “by magic” (nowadays this works for figures only).
In this spirit, I’d like to know, whether there is something like a “macro recorder” in Simulink: Switch it on, tell it where to store the result and start to draw your model. I expect it to record all my gestures and moves with the mouse and the keyboard as model construction commands … and after stopping the recording, we get that model-constructing m-file. No typing, no calculation. Edit the resulting m-file to polish it according to your needs.
Is there such a feature in Simulink? I could not find it.
pydiraju replied on : 30 of 71
Hi seth,
thank you verymuch for your nice tutorial
pydiraju
Thomas replied on : 31 of 71
Tong,
In 2007b, it is possible. Reminder, Matlab is Case Sensitive
I tried :
and it works.
It’s a bit more complex to fill though.
Emile replied on : 32 of 71
Hi Seth,
Thanks a million for this great tutorial. This is exactly what I needed. First I tried doing this with the get_param and set_param commands, but this is WAY easier. Fixed in a minute what I was trying to do for the last half an hour!
BTW, in response of your question: The cosine comes out of the integrator, which has initial condition zero, which explains why the signal does not cross zero.
Cheers,
Emile
Seth replied on : 33 of 71
@Emile – I’m glad it worked! I’m also glad to see you noticed why the Cosine doesn’t cross zero! You are the first to respond to that question. THANKS!
Sneha replied on : 34 of 71
Hello,
My name is Sneha.
I have following question.
I have two subsystems at the top most level in model and the first subsystem has 2 output ports and the second one has two input ports.
I want to join outputs of first to inputs of second by add_line. But I am getting an error ‘??? Error using ==> add_line. Invalid Simulink object specifier.’ for the second command in the following commands
I tried to add line as follows:
Is this approach correct?
Can you suggest a solution?
Zohaib replied on : 35 of 71
Hi everyone,
Is there a way to connect two ports based on their port names rather than numbers?
For instance, in System A I have a port named A_port and in System B I have a port named B_port. I want to connect them based on their names, NOT the port numbers.
Thanks,
-Z
vahid replied on : 36 of 71
Hello Seth,
I'm a master's student in electrical engineering. I want to work on control of wind turbines with MATLAB/Simulink.
I will be so thankful if you can help me in this area.
Best regards
vahid
Joe Heinrichs replied on : 37 of 71
Thanks Seth, this is great and it just saved me a ton of time and effort.
Seth Popinchalk replied on : 38 of 71
@Sneha – I'm not sure why it isn't working. Make sure the block names are exactly the same. You can select the block, then type gcb to see what the block name is.
@Zohaib – Connecting by name is not available in the ADD_LINE function, but it is pretty easy to accomplish with a few lines of code. Get the port names of the subsystems, and the port numbers, then search for matches. When a match is found, connect those ports.
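A minimal sketch of that approach (assuming a model whose name is in sys, containing two subsystems 'A' and 'B' whose Outport and Inport block names match up):

outs = find_system([sys '/A'],'SearchDepth',1,'BlockType','Outport');
ins  = find_system([sys '/B'],'SearchDepth',1,'BlockType','Inport');
for i = 1:numel(outs)
    oName = get_param(outs{i},'Name');
    oPort = get_param(outs{i},'Port');        % port number, returned as a string
    for j = 1:numel(ins)
        if strcmp(oName, get_param(ins{j},'Name'))
            iPort = get_param(ins{j},'Port');
            add_line(sys,['A/' oPort],['B/' iPort],'autorouting','on');
        end
    end
end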
@Joe Heinrichs – I am glad you found this post!
M Hakkeem replied on : 39 of 71
Hi,
I want to construct a Stateflow model (inside the chart: states, transitions, junctions and the like) from an m-script.
Is it possible?
M Hakkeem replied on : 41 of 71
Thanks(Nantree in Tamil Language) Guy Rouleau.
M Hakkeem replied on : 42 of 71
Hi
I want to track my transition action from default transition to end active transition while running the model.
Can we do this? And if so, how?
M Hakkeem replied on : 43 of 71
Hi
Thanks
M Hakkeem replied on : 44 of 71
I want to track my transition action from default transition to end active transition while running the model.
chandan replied on : 45 of 71
Good tutorial!
I am constructing a big model from matlab code.
It involves around 2000 blocks. I need to set position of these blocks.
What is the limitation on the 'Position' parameter? I want to know what the boundary values are for positioning blocks.
Seth Popinchalk replied on : 46 of 71
@Chandan – The limit for a Model window coordinate is 32768, BUT I strongly recommend keeping them within the limits of your screen. Subsystems and model reference hierarchy make models much easier to work with. Every time I have seen a Simulink model that looks like an Eye-exam-chart, there have been problems lurking… often simple things like unconnected lines, or extra blocks. Those kinds of errors can drive you nuts, and models that you can't easily browse will drive you insane when debugging. There is a documentation page dedicated to Simulink Limits here.
chandan replied on : 47 of 71
Taking your suggestion, I will design my model using subsystems in a hierarchical manner.
Dhiraj replied on : 48 of 71
Hi,
It was great to go thru the post and the discussions.
I was trying to take the subsystem of the model I have constructed and make it an S-function for my next model. I just could not make it work. If anyone has tried or done this, it would be of great help to hear how.
Dhiraj replied on : 49 of 71
I am trying to generate it using the model construction commands…
Snehal replied on : 50 of 71
How do I 'Fit System to Display' in Simulink dynamically? It is an option available in the View menu of the Simulink model window.
Seth replied on : 51 of 71
@Snehal – I use the Space Bar to fit the system to view.
Wintersprite replied on : 52 of 71
Gentlemen, your posts are great and touch on odd topics that are not explained so well in the documentation. You never answered the guy last year who wondered whether there was any possibility of a “model construction recording mode,” like recording a macro in Excel. You would record everything as you built the model, and then it could generate the corresponding m-script from it. What are the prospects for this?
Guy Rouleau replied on : 53 of 71
@Wintersprite: This is a good point. Currently, MathWorks does not provide such feature. However, you are not the first person asking for that. Our development team is aware of this and considering it.
Can you let us know a bit more details on your use case? Why this feature would be useful for you?
Sriharsha replied on : 54 of 71
Hi,
i have an application where i need to create a subsystem using a GUI.
I have a model with a gain block and a filter block , followed by a scope, where my input is a sine wave.
Now i need to create a subsystem , in which i should include the gain and filter blocks into it (subsystem).
I need to do this using GUI.
The implementation flow is like:
I should select blocks and then i need to click a button on GUI, and then a subsystem should be created in place of blocks.
Regards
Sriharsha S
Wintersprite replied on : 55 of 71
Gentlemen, I have learned a lot since I inquired further about the keystroke recording in Simulink. I thought it would be useful for an application where I need to first have a bunch of display scopes present, and then remove them. I thought this repetitive operation could be learned in a macro so I didn’t have to write the script. But then I got to wondering about a different method of removing lines, and now I have colored all the lines to be removed every time as green, and learned how to select and delete all the green lines and blocks with about four lines of script. But one thing I could still use advice on is where to find a complete list of line properties, because I could really extend the utility of this script by knowing more about the line properties.
Sobhi replied on : 56 of 71
Hello,
Is there a way you can replace a subsystem from one model with a subsystem from a different model, keeping all the connections after you replace it? I am trying to do this in an M script.
Regards,
Sobhi
Seth replied on : 57 of 71
@Sobhi – I think the function you are looking for is replace_block. Try using the full path to the subsystem (‘model/Block_name’) for your new_blk value.
K replied on : 58 of 71
I’ve managed to connect physical ports using the add_line documnetation Guy provided. It’s when I try to delete lines connecting physical ports that I get errors.
I’ve tryied the following command:
>>> delete_line(sys,hblk(2,i).RConn,hblk(4,i).RConn);
and got the following error
??? Invalid line specifier.
1. Is it possible to delete lines connecting physical ports using the delete_line() function?
2. If so, how exactly?
K
K replied on : 59 of 71
@K – The solution I came up with is
delete_line(find_system(sys,'findall','on','type','line','SrcBlockHandle',sbh))
where sbh is the SrcBlockHandle in the source block.
Merle replied on : 60 of 71
Is there a way to rotate a block with a command out of the command window, so the model looks more well-arranged?
Patrick replied on : 61 of 71
@Merle I use the block parameters BlockRotation, Orientation, and Position to make my models look clean. If you use the get_param function on a block that you have set up by hand, you can find out the exact settings for that block and add them to your add_block code.
Example:
% Start by adding the desired block to an empty system and stretching, positioning and rotating it until you are content. Then use the three lines below to find out the parameters.
blk_rot = get_param([path '/blkName'],'BlockRotation')
blk_O = get_param([path '/blkName'],'Orientation')
blk_pos = get_param([path '/blkName'],'Position')
% Now that you have those parameters, either hard-code them into your add_block code or make them variables in your code for easy changes. One other option, if you want to just change an existing block's look, is to simply use the set_param function.
set_param([path '/blkName'],'Orientation','Right');
% or
set_param([path '/blkName'],'BlockRotation',blk_rot);
set_param([path '/blkName'],'Orientation',blk_O);
set_param([path '/blkName'],'Position',blk_pos);
Hope this helps!
-Patrick
samy replied on : 62 of 71
Hello,
I am having a problem with the Three-Phase V-I Measurement connected to one terminal of the generator, with the generator's other terminal rectified and then connected to a DC load (the same problem also occurs when the generator is unloaded).
the error is (Input two of Voltage Measurement block ‘SEIG/Three-Phase V-I Measurement/Model/VI/Va’ is not properly connected to the network.)
Quang replied on : 63 of 71
Hello guys,
Let’s say I have two blocks A and B that are connected by a signal. I would like to copy A, B AND the signal to some subsystem in the SAME model using Matlab APIs. Any ideas?
Henning replied on : 64 of 71
Hello,
How can I change the "signal name" of a line after inserting it with ADD_LINE?
I would like to create a bus with a "Bus Creator", which needs the signal name.
Seth replied on : 65 of 71
@Henning – You can update the label of a line object by using SET_PARAM(h,’Name’,’My Custom Label Name’). Good luck!
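For example, a minimal sketch (the block names and the signal name 'mySignal' below are just placeholders):

h = add_line(sys,'Gain1/1','Bus Creator1/1');   % add_line returns the handle of the new line
set_param(h,'Name','mySignal')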
ashwani replied on : 66 of 71
hello guys
I have designed a DFIG and I am facing a problem in connecting it to the grid and the power converter device, because the connection port in my DFIG block is not connected to any signal; that is why the square connection ports are not formed on the boundary of the subsystem. I hope you can provide some relevant solution to this problem.
Chiara replied on : 67 of 71
I am using R2010a.
I have to ask you two things.
1) I am building a model by command lines and, although I know the functions to get/set block positions, I would like the script to place blocks automatically, avoiding blocks that are too close to each other.
2) When I add a block that already exists, I know how to change the name so that it's unique, but is there a way to avoid having the new block appear on top of the old one?
Thank you
chitra replied on : 68 of 71
Hello sir, I am getting 'algebraic loops are not supported in generated code'. I have added a Unit Delay block in the feedback path but I am still getting this error, so please tell me the solution. It works fine while simulating, but not while running on the BeagleBoard.
olinda replied on : 69 of 71
hi,
How do I add masked blocks in the m-file? How do I get the block name of masked blocks, e.g. the Bernoulli Binary Generator?
Thank you
olinda replied on : 70 of 71
Also, I have tried this command line but it works only for built-in blocks. What is the syntax for blocks that are not built-in, like the Bernoulli Binary Generator, Interleaver, CRC Generator, Modulator, etc.? And how do I find the name of the block to be used in the syntax for blocks that are not built-in?
shilpa replied on : 71 of 71
Can we do the alignment of ports and blocks automatically in the subsystem?
These postings are the author's and don't necessarily represent the opinions of MathWorks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4850875735282898, "perplexity": 1840.479109781985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131295084.53/warc/CC-MAIN-20150323172135-00160-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://rd.springer.com/article/10.1007/JHEP11%282011%29077 | Journal of High Energy Physics
2011:77
The exact 8d chiral ring from 4d recursion relations
• M. Billò
• M. Frau
• L. Gallot
• A. Lerda
Open Access
Article
Abstract
We consider the local F-theory set-up corresponding to four D7 branes in type I′ theory, in which the exact axio-dilaton background τ(z) is identified with the low-energy effective coupling of the four-dimensional $$\mathcal{N} = 2$$ super Yang-Mills theory with gauge group SU(2) and N_f = 4 flavours living on a probe D3 brane placed at position z. Recently, an intriguing relation has been found between the correlators forming the chiral ring of the eight-dimensional theory on the D7 branes and the large-z expansion of the τ profile. Here we apply to the SU(2) N_f = 4 theory some recursion techniques that allow us to derive the coefficients of the large-z expansion of τ in terms of modular functions of the UV coupling τ_0. In this way we obtain exact expressions for the elements of the eight-dimensional chiral ring that resum their instanton expansions, previously known only up to the first few orders by means of localization techniques.
Keywords
D-branes F-Theory Supersymmetric Effective Theories
https://homework.cpm.org/category/CON_FOUND/textbook/ac/chapter/7/lesson/7.3.1/problem/7-90 | ### Problem 7-90
7-90.
Without graphing, identify the slope and y-intercept of each equation below.
1. $y=3x+5$
See the hint above.
Slope $= 3$
$y$-intercept $= \left(0,5\right)$
1. $y=\frac{5}{-4}x$
This equation is the same as:
$y=\frac{5}{-4}x+0.$
$\text{Slope }=\frac{5}{-4}=-\frac{5}{4}$
$y$-intercept $= \left(0,0\right)$
1. $y=3$
If there is no $x$-value in the equation, can there be a slope or is it $0$?
Slope $= 0$
$y$-intercept $= \left(0,3\right)$
1. $y=7+4x$
See part (a) and the first hint in the problem.
1. $3x+4y=−4$
Solve the equation for $y$. By doing this, you put the equation in $y = mx + b$ form.
Subtract $3x$ from each side.
$4y = −3x − 4$
Divide by $4$ on both sides.
$y=\frac{-3}{4}x-1$
1. $x+5y=30$
See part (e) and the first hint in the problem. | {"extraction_info": {"found_math": true, "script_math_tex": 24, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9292029738426208, "perplexity": 1639.3793046260546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00210.warc.gz"} |
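For example, following the same steps as part (e): subtract $x$ from each side to get $5y=-x+30$, then divide both sides by $5$ to get $y=-\frac{1}{5}x+6$. So the slope is $-\frac{1}{5}$ and the $y$-intercept is $(0,6)$.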
http://www.computer.org/csdl/trans/ts/1985/09/01702110-abs.html | # Stochastic Modeling of Branch-and-Bound Algorithms with Best-First Search
Issue No.09 - September (1985 vol.11)
pp: 922-934
B.W. Wah , Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University of Illinois
ABSTRACT
Branch-and-bound algorithms are organized and intelligently structured searches of solutions in a combinatorially large problem space. In this paper, we propose an approximate stochastic model of branch-and-bound algorithms with a best-first search. We have estimated the average memory space required and have predicted the average number of subproblems expanded before the process terminates. Both measures are exponentials of sublinear exponent. In addition, we have also compared the number of subproblems expanded in a best-first search to that expanded in a depth-first search. Depth-first search has been found to have computational complexity comparable to best-first search when the lower-bound function is very accurate or very inaccurate; otherwise, best-first search is usually better. The results obtained are useful in studying the efficient evaluation of branch-and-bound algorithms in a virtual memory environment. They also confirm that approximations are very effective in reducing the total number of iterations.
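As an illustration only (not taken from the paper): a minimal best-first branch-and-bound sketch on a toy 0/1 knapsack. The optimistic bound here plays the role of the paper's lower-bound function (this toy problem is a maximisation); choosing the most recently added node instead of the most promising one would give the depth-first variant being compared.

v = [60 100 120];   % item values (illustrative)
w = [10 20 30];     % item weights
W = 50;             % knapsack capacity
n = numel(v);
best = 0;                                              % best complete solution so far
nodes = struct('level',0,'value',0,'weight',0,'bound',sum(v));
expanded = 0;
while ~isempty(nodes)
    [~,k] = max([nodes.bound]);                        % best-first: expand the most promising node
    cur = nodes(k);   nodes(k) = [];
    expanded = expanded + 1;
    if cur.bound <= best || cur.level == n
        continue                                       % prune, or nothing left to branch on
    end
    i = cur.level + 1;
    for take = [1 0]                                   % branch: include / exclude item i
        child = cur;
        child.level = i;
        if take
            child.weight = cur.weight + w(i);
            child.value  = cur.value  + v(i);
            if child.weight > W, continue, end         % infeasible branch
        end
        child.bound = child.value + sum(v(i+1:end));   % crude optimistic bound
        best = max(best, child.value);                 % any partial selection is itself feasible
        nodes(end+1) = child;                          %#ok<AGROW>
    end
end
fprintf('best value %d after expanding %d nodes\n', best, expanded);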
INDEX TERMS
subproblem, Approximations, best-first search, branch-and-bound algorithms, depth-first search, iterations, memory space
CITATION
B.W. Wah, Chee Fen Yu, "Stochastic Modeling of Branch-and-Bound Algorithms with Best-First Search", IEEE Transactions on Software Engineering, vol. 11, no. 9, pp. 922-934, September 1985, doi:10.1109/TSE.1985.232550
https://link.springer.com/article/10.1007/s10162-017-0641-9 | # Attentional Modulation of Envelope-Following Responses at Lower (93–109 Hz) but Not Higher (217–233 Hz) Modulation Rates
• Emma Holmes
• David W. Purcell
• Robert P. Carlyon
• Hedwig E. Gockel
• Ingrid S. Johnsrude
Open Access
Research Article
## Abstract
Directing attention to sounds of different frequencies allows listeners to perceive a sound of interest, like a talker, in a mixture. Whether cortically generated frequency-specific attention affects responses as low as the auditory brainstem is currently unclear. Participants attended to either a high- or low-frequency tone stream, which was presented simultaneously and tagged with different amplitude modulation (AM) rates. In a replication design, we showed that envelope-following responses (EFRs) were modulated by attention only when the stimulus AM rate was slow enough for the auditory cortex to track—and not for stimuli with faster AM rates, which are thought to reflect ‘purer’ brainstem sources. Thus, we found no evidence of frequency-specific attentional modulation that can be confidently attributed to brainstem generators. The results demonstrate that different neural populations contribute to EFRs at higher and lower rates, compatible with cortical contributions at lower rates. The results further demonstrate that stimulus AM rate can alter conclusions of EFR studies.
## Keywords
attention FFR EFR EEG brainstem
## Introduction
Understanding spoken language in the presence of other background sounds requires listeners to direct attention flexibly to distinguishing acoustic characteristics (e.g. the fundamental frequency of someone’s voice), an ability likely underpinned by dynamic interactions between basic auditory and higher-level cognitive processes (Carlyon et al. 2001; Davis and Johnsrude 2007; Billig et al. 2013). However, whether directing attention to particular sound frequencies alters processing at the earliest stages of the auditory system is unclear. Improving knowledge of how attention changes the representation of sounds at different stages of auditory processing is fundamental to understanding how listeners hear a sound of interest among a mixture of competing sounds.
Directing attention to sounds at different spatial locations affects cortical activity recorded using electroencephalography (EEG) (Hillyard et al. 1973; Parasuraman et al. 1982; Woldorff et al. 1987; Anourova et al. 2001; Bharadwaj et al. 2014), magnetoencephalography (MEG) (Woldorff et al. 1993; Xiang et al. 2010; Ding and Simon 2012), and functional magnetic resonance imaging (fMRI) (Petkov et al. 2004; Voisin et al. 2006; Formisano et al. 2008). Additionally, fMRI studies have demonstrated that auditory cortex activity is modulated by frequency-specific attention (Paltoglou et al. 2009; Da Costa et al. 2013; Riecke et al. 2016). However, whether top-down projections enable filtering of responses at lower stages of the auditory pathway, potentially facilitating perceptual segregation of sounds of different frequencies, is unclear. Descending anatomical projections from the cortex to the cochlea are, at least broadly, organised by frequency (Winer and Schreiner 2005), so it is anatomically plausible that attending to particular frequencies may enhance tuning or gain in a frequency-specific fashion at the earliest levels of auditory processing. Ongoing tuning of brainstem processing based on expectations and goals would permit auditory processing to rapidly adapt to changes in listening environment and would allow listeners to flexibly enhance processing of target sounds.
The envelope-following response (EFR) is a steady-state electrophysiological response that tracks periodic features of the amplitude envelope of a stimulating sound. EFRs differ depending on whether participants direct attention to auditory or visual stimuli (Galbraith and Arroyo 1993; Hoormann et al. 2000; Galbraith et al. 2003), but results are inconsistent. When participants direct attention away from auditory stimuli (towards visual stimuli), some have observed a decrease in EFR amplitudes with no effect on latencies (Galbraith and Arroyo 1993; Galbraith et al. 2003), whereas others report an increase in latencies with no effect on amplitudes (Hoormann et al. 2000). Studies measuring frequency-following responses when attention is directed to different sounds are also inconsistent: frequency-following responses to the temporal fine structure of sounds were modulated by attention in one study (Galbraith and Doan 1995), but not in two others (Lehmann and Schönwiesner 2014; Varghese et al. 2015). Previous experiments vary in several ways, including the type of acoustic stimulus presented, the frequency of the stimulus eliciting EFRs, and the signal-to-noise ratio of measured EFRs, which may explain the inconsistencies.
EFRs are most commonly recorded using EEG. EFRs elicited by stimuli that have amplitude modulation rates of 70–200 Hz are commonly assumed to reflect neural activity within the rostral brainstem. This assumption is based on electrophysiological recordings (Worden and Marsh 1968; Marsh et al. 1974; Smith et al. 1975), measurements of group delay (Kiren et al. 1994; Herdman et al. 2002; King et al. 2016), and the long-standing belief that auditory cortex frequency-following drops off above about 50–70 Hz (implying that phase-locking at 70 Hz and higher must originate subcortically). However, studies using electrocorticography (ECoG)—an intracranial method that is sensitive to local electrical activity likely not present in EEG (Buzsáki et al. 2012)—have shown that the auditory cortex is capable of tracking frequencies up to 200 Hz (Brugge et al. 2009; Nourski et al. 2013; Behroozmand et al. 2016). In addition, two recent studies using MEG (Coffey et al. 2016) and EEG (Coffey et al. 2017) indicate that cortical generators likely contribute to frequency-following activity at 98 Hz. Nevertheless, it remains unclear whether such cortical contributions are sufficiently large, relative to brainstem generators, to influence the outcomes of EEG studies measuring EFRs at 70–200 Hz.
We recorded EFRs at two sets of modulation frequencies—one within the range traditionally used for EFR recordings (70–200 Hz; experiment 1) and one at higher rates that only brainstem but not cortex is able to track (> 200 Hz; experiment 2). Using EEG, we compared EFRs when participants attended to concurrently presented tone streams of different frequencies. If frequency-specific attention modulates brainstem processing, then EFRs should be modulated by frequency-specific attention at both sets of modulation rates.
## Methods
### Participants
Participants in experiment 1 were 30 right-handed young adults. Experiment 1 included two separate versions of the attend-auditory task (as described below). Pre-established criteria for excluding participants included audiometric thresholds outside of the normal hearing range or poor task performance (negative d′ for the auditory or visual detection tasks). We excluded six participants due to poor auditory task performance. The remaining 24 participants (12 male) were aged 18–27 years ($$\overline{\mathrm{X}}$$ = 20.5, s = 2.7). Participants had average pure-tone hearing levels of 20 dB HL or better (at six octave frequencies between 0.5 and 8 kHz).
Participants in experiment 2 were 14 right-handed young adults. We excluded one participant due to poor auditory task performance and one due to high audiometric thresholds in the left ear at 4 and 8 kHz. The remaining 12 participants (4 male) were aged 19–26 years ($$\overline{\mathrm{X}}$$ = 22.3, s = 2.2) and had average pure-tone hearing levels of 20 dB HL or better (at six octave frequencies between 0.5 and 8 kHz).
Both experiments were cleared by Western University’s Health Sciences Research Ethics Board. Informed consent was obtained from all participants.
### Apparatus
The experiments were conducted in a sound-insulated and electromagnetically shielded double-walled test booth (Eckoustic model C-26 R.F.). Participants sat in a comfortable chair facing a 22-in. visual display unit (ViewSonic VS2263SMHL).
Acoustic stimuli were presented through a LynxTWO-A sound card (Lynx Studio Technology, Inc.). Stimuli were delivered binaurally through Intelligent Hearing Systems mu-metal shielded Etymotic Research ER2 earphones, which were clipped to the chair and sealed in the ear canal of the listener with disposable foam inserts.
### Stimuli
#### Acoustic Stimuli
Acoustic stimuli for both experiments were three simultaneous streams of tones at three perceptually distinct carrier frequencies (1027, 1343, and 2913 Hz in Experiment 1, and 1753, 2257, and 4537 Hz in Experiment 2) that we trained the listeners to think of as ‘low’, ‘middle’, and ‘high’ frequencies. Each tone stream was ‘tagged’ with a unique AM rate, so that we could isolate the EFR to each stream separately. In Experiment 1, the AM rates for the low-, middle-, and high-frequency streams were 93, 99, and 109 Hz, respectively, whereas in Experiment 2, they were 217, 223, and 233 Hz.
To promote perceptual segregation, tones in the three different streams also had three different durations (1036, 1517, and 1052 ms, for the low-, middle-, and high-frequency streams, respectively) and unique inter-stimulus intervals (51, 63, and 71 ms, respectively), so that the onsets of tones from the three streams occurred at different times (see Fig. 1a). All tones had cosine onset ramps of 10 ms and were sampled at 32,000 samples/s. The level of each stream was set to 70 phons, according to the ISO 226 normal equal-loudness-level contours (ISO 226 2003). On half the trials, the polarity of the temporal fine structure was inverted, so that averaging responses across stimuli would emphasise the envelope response and cancel any artefact related to the stimulus temporal fine structure (Picton and John 2004; Small and Stapells 2004).
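A minimal sketch of one such AM-'tagged' tone (the low-frequency stream of Experiment 1; the 100 % modulation depth and the offset ramp are assumptions, since the text specifies only the onset ramps):

fs   = 32000;                    % samples per second
dur  = 1.036;                    % tone duration (s)
fc   = 1027;                     % carrier frequency (Hz)
fm   = 93;                       % amplitude-modulation rate (Hz)
t    = (0:round(dur*fs)-1)/fs;
tone = (1 + sin(2*pi*fm*t)) .* sin(2*pi*fc*t) / 2;          % AM tone, 100 % depth (assumed)
nRamp = round(0.010*fs);                                    % 10-ms cosine ramp
win  = ones(size(t));
win(1:nRamp)         = (1 - cos(pi*(0:nRamp-1)/nRamp))/2;   % onset ramp
win(end-nRamp+1:end) = fliplr(win(1:nRamp));                % offset ramp (assumed)
tone    = tone .* win;
toneInv = -tone;                 % opposite-polarity version used on half the trials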
On each trial, either the high- or the low-frequency stream started first (compare Fig. 1a, b), which cued the listener to attend to that stream to perform the detection task in the Attend-Auditory condition that is described below. The other two streams started 700 and 1000 ms (in a randomly determined order) after the onset of the first tone from the first stream. The low-, middle-, and high-frequency streams contained 18, 12, and 17 tones, respectively, except that the stream that started first also contained an additional tone, so that the streams ended at approximately the same time. Overall, the acoustic stimulus for each trial lasted approximately 21 s.
During the main parts of the experiment, participants performed a deviant-detection task on whichever stream they were instructed to selectively attend. Both the high- and the low-frequency streams contained three to four shorter deviant tones on every trial. The middle-frequency stream was never the target stream and did not contain deviant stimuli; the purpose of the middle-frequency stream was to make the auditory task more difficult. The durations of shorter deviants depended on each participant’s deviant-detection threshold, which was determined during a preliminary phase of the experiment, described below.
#### Visual Stimuli
The visual stimulus on each trial consisted of five digits selected from the numbers 1–9. One digit was presented in the centre of the screen and four digits were presented above, below, to the left, and to the right of the central digit, as illustrated in Fig. 1c. The digits were selected with replacement. A new array of digits was presented every 750 ms, lasting throughout the full duration of the acoustic stimulus for each trial (28 arrays of digits per trial).
### Procedures
For both experiments, participants were first trained to perform the tasks. They first heard examples of the high- and low-frequency tone streams alone (three tones per example). Participants were allowed to listen to the examples as many times as they liked. Next, participants completed 16 training trials, in which they heard shorter extracts (~ 4-s duration) of the acoustic stimuli used in the main experiment. For these stimuli, participants were instructed to attend to either the highest- or the lowest-frequency tone stream in separate blocks. On each trial, either 0 or 1 deviant stimulus (shorter in duration than the standard) was present in the high- and low-frequency streams. During the first half of practice trials for each attended frequency, deviant tones were 30 % shorter than the standard tones in the stream. During the second half of practice trials at each attended frequency, deviants were 10 % shorter. Participants performed a two-alternative forced-choice task, in which they had to detect whether or not the attended stream contained a shorter deviant tone. Visual feedback was provided during training.
After training, the durations of the shorter tones were altered in an adaptive procedure until the hit rate was between 70 and 85 %. The acoustic stimuli had the same structure as in the main part of the experiment. Participants were instructed to attend to the stream that began first and press a button as quickly as they could whenever they detected deviants in the attended stream, while ignoring deviants in the other streams. The durations of shorter deviants in the high- and low-frequency streams were adapted in separate, but interleaved, runs.
The main part of each experiment comprised three different blocks: Attend-Auditory, Attend-Visual, and Artefact Check. The order of the three blocks was counterbalanced across participants.
#### Attend-Auditory Condition
In the Attend-Auditory condition, participants had to detect shorter tones within either the high- or low-frequency tone stream—whichever began first (see Fig. 1a, b). Participants had to press a button as quickly as they could whenever they detected a shorter deviant in the target stream, while ignoring deviants in the other streams. There were 120 trials in the Attend-Auditory condition (60 Attend-High and 60 Attend-Low). Attend-High and Attend-Low trials were randomly interleaved within each block.
In Experiment 1, 12 participants saw a visual fixation cross throughout the Attend-Auditory condition, and the changing digit array only when performing the Attend-Visual task. Thus, for these participants, the acoustic stimuli were identical across attentional conditions (Auditory and Visual), but the visual stimuli differed. The other 12 participants in Experiment 1 and all the participants in Experiment 2 saw the changing digit array in the Attend-Auditory condition. Thus, for these participants, both the acoustic and visual stimuli were identical in the Attend-Auditory and Attend-Visual conditions. All participants were instructed to fixate on the centre of the screen but focus their attention on the acoustic stimuli. Given there was no difference in EFRs between the two groups in Experiment 1 who experienced different visual stimuli during the Attend-Auditory condition, we analysed the data from all participants in Experiment 1 together.
#### Attend-Visual Condition
Acoustic stimuli in the Attend-Visual condition were identical to those presented in the Attend-Auditory condition—there were two different types of acoustic stimuli, corresponding to the Attend-High and Attend-Low tasks, with either the high- or low-frequency tone stream starting first (see Fig. 1a, b). In the Attend-Visual condition, however, participants were instructed to ignore the acoustic stimuli and attend to the visual stimuli (Fig. 1c).
In the Attend-Visual condition, participants performed a two-back task on the central digit, while ignoring the four distracting digits. Participants had to press a button as quickly as they could whenever the central digit matched the central digit presented two arrays earlier. There were two to five visual targets per trial. There were 60 trials in the Attend-Visual condition (30 in which the high-frequency tone stream began first and 30 in which the low-frequency stream began first; the trial types were randomly interleaved).
#### Artefact Check Condition
In the Artefact Check condition, acoustic stimuli were identical to the Attend-Auditory and Attend-Visual conditions. However, the foam inserts were taken out of the participant’s ears and covered with tape, so that the acoustic stimuli were still delivered to the earphones, but were not audible to the participant. In the Artefact Check condition, participants passively watched a subtitled DVD. There were 60 trials in the Artefact check condition (30 in which the high-frequency stream began first and 30 in which the low-frequency stream began first; the trial types were randomly interleaved).
### Behavioural Analyses
We calculated d′ (Green and Swets 1966) for the Attend-Auditory and Attend-Visual conditions. False alarms were defined as responses to non-deviant tones in the target stream. We used two-tailed paired-sample t tests to compare d′ between the auditory and visual tasks and, within the Attend-Auditory condition, between conditions in which participants attended to the high- or low-frequency stream.
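A minimal sketch of that calculation (assuming hitRate and faRate are already computed and kept strictly between 0 and 1; norminv, the inverse normal CDF, is from the Statistics and Machine Learning Toolbox):

dprime = norminv(hitRate) - norminv(faRate);   % z(hits) - z(false alarms)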
### EEG Recording and Pre-processing
We recorded EEG using disposable Medi-Trace Ag/AgCl electrodes. The recording electrode was placed at the vertex (Cz), with a reference at the posterior midline of the neck (just below the hairline) and a ground (or common) on the left collarbone. Electrode impedances were below 5 kΩ at 10 Hz, and inter-electrode differences in impedance were less than 2 kΩ (measured using an F-EZM5 GRASS impedance meter). A GRASS LP511 EEG amplifier applied a gain of 50,000 with bandpass filtering at 0.3–3000 Hz. A National Instruments (Austin, TX) PCI-6289 M-series acquisition card captured the EEG data at a rate of 32,000 samples/s with 18-bit resolution. The PCI-6289 card applied a further gain of 2 for a total gain of 100,000. The recording program was custom developed using LabVIEW (Version 8.5; National Instruments, Austin, TX).
### EFR Analyses and Statistics
The EEG data were exported to MATLAB (version 2014b; The MathWorks, Inc., Natick, MA, USA) and were analysed using custom-written scripts. First, we isolated epochs corresponding to the times of tones in the high- and low-frequency streams. We ignored the first tone in each stream, then extracted epochs with 1-s duration at the beginning of the next 16 tones in each stream.
We used the Fourier transform (FT) to estimate the frequency spectrum of the response for each epoch, with the purpose of excluding noisy epochs. For each epoch, we averaged amplitudes at 80–200 Hz (excluding the frequencies of interest). We then calculated the mean and standard deviation across epochs for each participant within each condition and excluded epochs with amplitudes > 2 standard deviations from the mean. This led to the rejection of 2.7 % of epochs, on average, per participant in each condition. We computed the time-domain average of all remaining epochs. We averaged across epochs with opposite stimulus polarity so as to isolate the envelope response.
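Schematically, the epoch-rejection rule just described can be written as below (the original analyses used custom MATLAB scripts; this NumPy sketch is only an illustration, the variable names and band handling are assumptions, and the averaging across opposite stimulus polarities is omitted):

```python
import numpy as np

FS = 32000  # samples/s, matching the recording rate described above

def band_amplitude(epoch, lo=80, hi=200, exclude=(93, 109)):
    """Mean FFT amplitude between lo and hi Hz, excluding the EFR frequencies of interest."""
    amps = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1 / FS)
    keep = (freqs >= lo) & (freqs <= hi) & ~np.isin(np.round(freqs), exclude)
    return amps[keep].mean()

def reject_and_average(epochs):
    """epochs: (n_epochs, n_samples) array of 1-s epochs.
    Drop epochs whose band amplitude lies more than 2 SD from the across-epoch mean,
    then return the time-domain average of the remaining epochs."""
    noise = np.array([band_amplitude(e) for e in epochs])
    keep = np.abs(noise - noise.mean()) <= 2 * noise.std()
    return epochs[keep].mean(axis=0)
```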
We computed the FT of the time-domain average to estimate the amplitude of the EFR at the AM rates of the low- and high-frequency streams (Experiment 1: 93 and 109 Hz; Experiment 2: 217 and 233 Hz); we refer to the two EFR frequencies of interest as the low and high EFR components. We also estimated the noise floor at each EFR component by averaging the amplitudes at the 10 adjacent frequency bands (five on each side; resolution 1 Hz). We calculated signal-to-noise ratios (SNRs) for each EFR component by dividing the EFR amplitude at the stimulus AM rate by the estimate of the noise floor at the adjacent frequencies.
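In other words, the SNR is the spectral amplitude at the AM rate divided by the mean amplitude of the ten neighbouring 1-Hz bins; a minimal sketch (again illustrative NumPy, not the original MATLAB code):

```python
import numpy as np

def efr_snr(avg_epoch, am_rate, fs=32000, n_side=5):
    """SNR = amplitude at am_rate / mean amplitude of the 2*n_side adjacent bins.
    Assumes a 1-s averaged epoch, so the FFT resolution is 1 Hz."""
    amps = np.abs(np.fft.rfft(avg_epoch)) / len(avg_epoch)
    freqs = np.fft.rfftfreq(len(avg_epoch), d=1 / fs)
    k = int(np.argmin(np.abs(freqs - am_rate)))                      # bin at the AM rate
    side = list(range(k - n_side, k)) + list(range(k + 1, k + n_side + 1))
    return amps[k] / amps[side].mean()
```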
The Attend-Auditory condition contained twice as many trials as the Attend-Visual and Artefact Check conditions; thus, in order to ensure that effects across conditions were estimated from the same quantity of data, we resampled half the total number of epochs (i.e. n/2) in the Attend-Auditory condition. We drew 500 samples of n/2 trials with replacement, computed the time-domain average within each sample, and then calculated the average EFR SNR, amplitude, and noise estimate across samples.
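A schematic version of this resampling step is given below (500 draws of n/2 epochs with replacement; `efr_snr` refers to the sketch above, and the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def resampled_mean_snr(epochs, am_rate, n_draws=500):
    """epochs: (n_epochs, n_samples). Each draw takes n_epochs // 2 epochs with
    replacement, averages them in the time domain, and computes the EFR SNR."""
    n_half = len(epochs) // 2
    snrs = []
    for _ in range(n_draws):
        idx = rng.choice(len(epochs), size=n_half, replace=True)
        snrs.append(efr_snr(epochs[idx].mean(axis=0), am_rate))
    return float(np.mean(snrs))
```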
To analyse the effect of frequency-specific attention on EFRs, we focused on the Attend-Auditory and Attend-Visual conditions only. In the Attend-Auditory condition, participants were instructed to attend to the stream (high or low) that started first. Given the acoustic stimuli differed between these two types of trials (due to the earlier onset), we used the Attend-Visual condition as a baseline to control for possible stimulus-driven differences in EFRs. To that end, all conditions were split into trials in which the high- or low-frequency tone stream began first. We expected frequency-specific attention effects on the EFR in the Attend-Auditory condition but not in the Attend-Visual condition, in which the first tone stream had no implications for participants’ task. In contrast, stimulus-driven earlier onset effects (if present at all) would occur in both Attend-Visual and Attend-Auditory conditions. Importantly, the analyses compared trials (between auditory and visual attention) in which the acoustical stimuli were identical; this was done to extract the effect of frequency-specific attention from the physical stimulus differences. We used two-tailed within-subject ANOVAs to compare EFR SNRs across conditions (Attend-Auditory and Attend-Visual), stimulus types (high- or low-frequency tone stream beginning first), and EFR components. We used a combination of box plots, Q-Q plots, and the Kolmogorov-Smirnov test to check that the data did not deviate strongly from a normal distribution and we checked that the data met the assumption of sphericity.
To investigate whether the extent of EFR modulation by frequency-specific attention was related to task performance, we aimed to extract a measure of attentional modulation to correlate with performance on Attend-Low and Attend-High trials. For the low EFR component, we expected greater SNRs when participants attended to the low-frequency stream than when they attended to the high-frequency stream. Because these two attentional conditions also differed in the frequency of the first tone, we divided the ratio of the Attend-Low and Attend-High SNRs by the ratio of the SNRs in the corresponding Attend-Visual conditions (low-stream first vs. high stream first); the final measure was (Attend-Low / Attend-High) / (Attend-Visual (low stream first) / Attend-Visual (high stream first)). We expected the opposite pattern at the high EFR component—greater SNRs for Attend-High than Attend-Low trials. Thus, we inverted the ratios, consistent with the expected direction of modulation [i.e. (Attend-High / Attend-Low) / (Attend-Visual (high stream first) / Attend-Visual (low stream first))]. We also calculated Pearson’s product-moment correlations between Attend-Low d′ and the extent of EFR modulation at the low EFR component and also between Attend-High d′ and the extent of EFR modulation at the high EFR component.
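Written out for a single participant, the modulation index is simply a ratio of ratios of SNRs (function and argument names here are illustrative):

```python
def modulation_low(attend_low, attend_high, visual_low_first, visual_high_first):
    """Attentional modulation at the low EFR component (inputs are SNRs)."""
    return (attend_low / attend_high) / (visual_low_first / visual_high_first)

def modulation_high(attend_low, attend_high, visual_low_first, visual_high_first):
    """Attentional modulation at the high EFR component (ratios inverted)."""
    return (attend_high / attend_low) / (visual_high_first / visual_low_first)
```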
In addition, we calculated phase coherence (Jerger et al. 1986; Stapells et al. 1987) at each EFR component, separately for each condition. Phase angles at each EFR component were analysed with the FT, then phase coherence was calculated as the root mean square of the sums of the cosines and sines of the individual phase angles. Similar to the other EFR measures, we used within-subject ANOVAs to compare phase coherence across conditions, stimulus types, and EFR components. We also calculated the extent of attentional modulation of EFR phase coherence using the same ratios as those described for SNR. We calculated Pearson’s product-moment correlations between Attend-Low d′ and the extent of EFR modulation at the low EFR component and between Attend-High d′ and the extent of EFR modulation at the high EFR component.
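In essence, this phase-coherence measure is the resultant length of the per-epoch phase angles at the frequency of interest; the sketch below assumes a 1/N normalisation, which is consistent with coherence values between 0 and 1 but is not spelled out in the text:

```python
import numpy as np

def phase_coherence(phases):
    """phases: per-epoch phase angles (radians) at the EFR frequency,
    e.g. np.angle(np.fft.rfft(epoch))[k] for the bin k at the AM rate."""
    c = np.cos(phases).sum()
    s = np.sin(phases).sum()
    return np.sqrt(c ** 2 + s ** 2) / len(phases)
```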
## Results
### Experiment 1: Low AM Rates
Sensitivity (d′) varied substantially among participants in both the auditory (range 0.1–3.2; based on all trials, irrespective of attention condition) and visual (range 0.8–2.8) tasks. Participants performed significantly better on the visual task (x̅ = 2.1, s = 0.5) than the auditory task (x̅ = 1.5, s = 0.9) [t(23) = 3.69, p = 0.001]. Figure 2a illustrates d′ for the auditory (separated into Attend-Low and Attend-High trials) and visual tasks. Within the auditory task, performance did not differ significantly between Attend-High (x̅ = 1.6, s = 0.9) and Attend-Low (x̅ = 1.4, s = 1.0) trials [t(23) = 1.31, p = 0.20]. Participants did not frequently respond to deviants in the non-target stream (Attend-Low: x̅ = 3.0 % of non-target deviants, s = 2.9; Attend-High: x̅ = 1.3 %, s = 1.6).
#### Comparison of EFRs Between Conditions and Spectral Components
We used two-tailed within-subject ANOVAs as a first step to compare EFR SNRs and phase coherence across conditions (Attend-Auditory, Attend-Visual, and Artefact Check) and EFR components (93 and 109 Hz). SNRs differed among the Attend-Auditory, Attend-Visual, and Artefact Check conditions [F(1.5, 35.3) = 71.0, p < 0.001, ωp² = 0.74]. EFR SNRs were significantly greater in the Attend-Auditory and Attend-Visual conditions than in the Artefact Check condition [F(1, 23) = 139.5, p < 0.001, ωp² = 0.85 and F(1, 23) = 83.6, p < 0.001, ωp² = 0.77, respectively; see Fig. 3a], meaning that EFRs did not arise due to stimulus artefacts. There was a trend for greater SNRs in the Attend-Visual than Attend-Auditory condition, although the post hoc comparison was not significant after Bonferroni correction (p = 0.079). EFR SNRs did not significantly differ between the two EFR components [F(1, 23) = 3.49, p = 0.08, ωp² = 0.09]. There was no significant interaction between Condition and EFR component [F(2, 46) = 0.60, p = 0.56, ωp² = −0.02].
Similarly, phase coherence differed among the Attend-Auditory, Attend-Visual, and Artefact Check conditions [F(2, 46) = 94.13, p < 0.001, ωp² = 0.79]. Phase coherence in the Attend-Auditory and Attend-Visual conditions was significantly greater than in the Artefact Check condition [F(1, 23) = 123.09, p < 0.001, ωp² = 0.83 and F(1, 23) = 112.46, p < 0.001, ωp² = 0.82, respectively; see Fig. 3b]. Bonferroni-corrected post hoc tests showed no significant difference in phase coherence between the Attend-Auditory and Attend-Visual conditions (p ≈ 1.00). Phase coherence did not differ significantly between 93 and 109 Hz [F(1, 23) = 1.71, p = 0.20, ωp² = 0.03], and there was no significant interaction between Condition and EFR component [F(2, 46) = 0.14, p = 0.87, ωp² = −0.04].
#### Frequency-Specific Attention Affects EFR Signal-to-Noise Ratio
Table 1 displays mean EFR SNRs separately for the Attend-High, Attend-Low, and Attend-Visual (low or high stream first) conditions. Paired-sample t tests showed that EFR SNRs in the Attend-Visual condition differed significantly between trials in which the low or high stream began first, even though participants’ task was identical in those trials [93 Hz: t(23) = 4.22, p < 0.001; 109 Hz: t(23) = 2.27, p = 0.033]. These results suggest that minor differences in the acoustic stimuli between these trials could potentially contribute to differences in EFRs.
Table 1
EFR signal-to-noise ratios (calculated as the EFR amplitude at the frequency of interest divided by the average amplitude in the noise bands at adjacent frequencies) in the Attend-Auditory (Attend-High and Attend-Low) and Attend-Visual (high or low stream beginning first) conditions at the high and low EFR components
| Condition | Experiment 1: 93 Hz | Experiment 1: 109 Hz | Experiment 2: 217 Hz | Experiment 2: 233 Hz |
| --- | --- | --- | --- | --- |
| Attend-Low | 4.7 ± 1.6 | 4.8 ± 1.8 | 6.2 ± 1.8 | 6.4 ± 2.1 |
| Attend-High | 4.4 ± 2.2 | 5.2 ± 1.5 | 6.0 ± 2.0 | 6.5 ± 2.5 |
| Attend-Visual (low stream first) | 4.1 ± 2.4 | 5.6 ± 2.2 | 6.1 ± 2.1 | 7.1 ± 2.8 |
| Attend-Visual (high stream first) | 5.5 ± 3.1 | 4.9 ± 1.8 | 5.7 ± 3.3 | 7.1 ± 3.2 |
The Attend-High and Attend-Low conditions presented identical acoustic stimuli as the Attend-Visual (high stream first) and Attend-Visual (low stream first) conditions. Thus, we normalised EFRs in the Attend-High and Attend-Low conditions by the evoked EFRs in the corresponding Attend-Visual condition that contained the identical acoustic stimulus (i.e. high or low stream first, respectively). In the Attend-Auditory conditions, the first tone cued the frequency to be attended, whereas in the Attend-Visual conditions the stream that started first was not relevant for the task.
Figure 4a shows the difference in SNRs between the Attend-Auditory and Attend-Visual conditions, for trials in which the acoustic stimuli were identical. A within-subject two-way ANOVA examining the effect of EFR component (low or high) and Attended frequency (low or high) on these SNR difference values revealed no main effect of EFR component [F(1, 23) = 0.18, p = 0.68, ωp² = −0.03] or of Attended frequency [F(1, 23) = 1.45, p = 0.24, ωp² = 0.02], but a significant interaction [F(1, 23) = 18.81, p < 0.001, ωp² = 0.42].
At the AM rate that tagged the low-frequency carrier (93 Hz), the SNR difference was significantly greater when attention was directed to the low tone stream than the high tone stream [t(23) = 3.77, p = 0.001, dz = 0.77]. The opposite pattern was obtained for the AM rate that tagged the high-frequency carrier (109 Hz): the SNR difference was significantly greater when attention was directed to the high tone stream than the low tone stream [t(23) = 2.98, p = 0.007, dz = 0.61]. These results indicate that frequency-specific attention significantly modulated EFR SNRs at the lower AM rates (93 and 109 Hz) that are typically used in EFR studies.
#### Frequency-Specific Attention Affects EFR Phase Coherence
Next, we analysed phase coherence. Table 2 displays mean EFR phase coherence separately for the Attend-High, Attend-Low, and Attend-Visual (high or low stream first) conditions. Figure 4b illustrates the difference in phase coherence between the Attend-Auditory and Attend-Visual conditions, for trials in which the acoustic stimuli were identical. A within-subject two-way ANOVA examining the effect of EFR component (low or high) and Attended frequency (low or high) on these phase coherence difference values showed no main effect of EFR component [F(1, 23) = 0.78, p = 0.39, ωp² = −0.01] or Attended frequency [F(1, 23) = 0.36, p = 0.56, ωp² = −0.03]. However, the two-way interaction between EFR component and Attended frequency was significant [F(1, 23) = 12.31, p = 0.002, ωp² = 0.31].
Table 2
EFR phase coherence values in the Attend-Auditory (Attend-High and Attend-Low) and Attend-Visual (high or low stream beginning first) conditions at the high and low EFR components
| Condition | Experiment 1: 93 Hz | Experiment 1: 109 Hz | Experiment 2: 217 Hz | Experiment 2: 233 Hz |
| --- | --- | --- | --- | --- |
| Attend-Low | 0.20 ± 0.09 | 0.19 ± 0.10 | 0.23 ± 0.08 | 0.22 ± 0.08 |
| Attend-High | 0.21 ± 0.07 | 0.22 ± 0.07 | 0.28 ± 0.12 | 0.26 ± 0.12 |
| Attend-Visual (low stream first) | 0.18 ± 0.10 | 0.21 ± 0.10 | 0.24 ± 0.08 | 0.22 ± 0.09 |
| Attend-Visual (high stream first) | 0.21 ± 0.07 | 0.20 ± 0.07 | 0.27 ± 0.11 | 0.28 ± 0.08 |
At the AM rate that tagged the low-frequency carrier (93 Hz), the phase coherence difference was significantly greater when attention was directed to the low tone stream than the high tone stream [t(23) = 3.06, p = 0.005, dz = 0.62], demonstrating an effect of frequency-specific attention at the low EFR component. There was a trend towards greater phase coherence at the high EFR component (109 Hz) when attention was directed to the high tone stream than the low tone stream, although the difference was not significant [t(23) = 1.98, p = 0.060, dz = 0.40].
#### No Relationship Between Task Performance and Attentional Modulation of EFRs
There were large individual differences in behavioural performance in the Attend-Auditory task (with some participants responding with low sensitivity). As poor performance could indicate that participants were not deploying frequency-specific attention, we investigated whether only those participants who responded with high sensitivity showed attentional modulation of EFRs. Figure 5a, b displays auditory d′ (in the Attend-Low and Attend-High tasks) and the extent of attentional modulation of EFR SNRs for each participant. Bonferroni-corrected Pearson’s product-moment correlations revealed no relationship between task performance and attentional modulation of SNRs at 93 Hz (r = − 0.06, p ~ 1.00). At 109 Hz, there was a trend towards a negative correlation (r = −0.45) that just missed significance (p = 0.054).
Bonferroni-corrected Pearson’s product-moment correlations between auditory d′ and the extent of attentional modulation of phase coherence values revealed no significant relationship at 93 Hz (r = −0.24, p ~ 1.00), although, similar to the SNR results, there was a trend towards a negative correlation at 109 Hz (r = −0.48, p = 0.07).
### Experiment 2: High AM Rates
Sensitivity (d′) varied among participants in both the auditory (range 1.1–2.7; based on all trials, irrespective of attention condition) and visual (range 1.6–2.6) tasks. There was no significant difference in task performance between the visual (x̅ = 2.1, s = 0.3) and auditory (x̅ = 2.0, s = 0.6) tasks [t(11) = 1.00, p = 0.34], or between Attend-High (x̅ = 1.9, s = 0.7) and Attend-Low (x̅ = 2.2, s = 0.6) trials [t(11) = 1.93, p = 0.08] (Fig. 2b). Participants did not frequently respond to deviants in the non-target stream (Attend-Low: x̅ = 1.8 % of non-target deviants, s = 1.5; Attend-High: x̅ = 2.2 %, s = 3.0).
#### Comparison of EFRs Between Conditions and Spectral Components
We confirmed that the EFRs in the Attend-Auditory and Attend-Visual conditions could not be explained by stimulus artefact (see Fig. 3c, d). A within-subject two-way ANOVA (Condition × EFR component) showed a significant difference in SNRs between the Attend-Auditory, Attend-Visual, and Artefact Check conditions [F(2, 22) = 86.6, p < 0.001, ωp² = 0.87]. SNRs in the Attend-Auditory and Attend-Visual conditions were significantly greater than in the Artefact Check condition [F(1, 11) = 127.9, p < 0.001, ωp² = 0.91 and F(1, 11) = 96.2, p < 0.001, ωp² = 0.88, respectively; see Fig. 3c]. EFR SNRs were greater in the Attend-Visual than in the Attend-Auditory condition (p = 0.013), due to similar amplitudes (p ≈ 1.00) but greater noise in the Attend-Auditory condition (p = 0.007). SNRs did not differ significantly between 217 and 233 Hz [F(1, 11) = 0.86, p = 0.38]. There was also no significant interaction between Condition and EFR component [F(2, 22) = 0.30, p = 0.75, ωp² = −0.01].
Phase coherence also differed among the Attend-Auditory, Attend-Visual, and Artefact Check conditions [F(2, 22) = 89.19, p < 0.001, ωp² = 0.88]. Phase coherence values in the Attend-Auditory and Attend-Visual conditions were significantly greater than in the Artefact Check condition [F(1, 11) = 91.32, p < 0.001, ωp² = 0.87 and F(1, 11) = 114.46, p < 0.001, ωp² = 0.90, respectively; see Fig. 3d]. Bonferroni-corrected post hoc tests showed no significant difference in phase coherence between the Attend-Auditory and Attend-Visual conditions (p ≈ 1.00). Phase coherence did not differ significantly between 217 and 233 Hz [F(1, 11) = 2.21, p = 0.17, ωp² = 0.09], and there was no significant interaction between Condition and EFR component [F(2, 22) = 0.83, p = 0.45, ωp² = 0.01].
#### Similar Magnitude EFRs as Experiment 1
We checked whether we were measuring comparable EFRs in the Attend-Auditory and Attend-Visual conditions of Experiment 2 as in Experiment 1. There was no significant difference in phase coherence between the experiments (Experiment 1: x̅ = 0.20, s = 0.07; Experiment 2: x̅ = 0.25, s = 0.07) [t(34) = 1.89, p = 0.07, gs = 0.65]. However, overall, EFR SNRs were significantly greater in Experiment 2 (x̅ = 6.40, s = 1.74) than in Experiment 1 (x̅ = 4.90, s = 1.74) [t(34) = 2.43, p = 0.010, gs = 0.84]. Thus, we recorded sufficiently robust EFRs to detect attentional modulations of EFRs in Experiment 2 with at least as high power as in Experiment 1.
#### No Effect of Frequency-Specific Attention on EFR Signal-to-Noise Ratio
Table 1 displays mean EFR SNRs separately for the Attend-High, Attend-Low, and Attend-Visual (low or high stream first) conditions. Figure 4c illustrates the difference in SNRs between the Attend-Auditory and Attend-Visual conditions at the two EFR components for trials in which the acoustic stimuli were identical. A within-subject two-way ANOVA examining the effect of EFR component (low or high) and Attended frequency (low or high) on these SNR difference values revealed no main effect of EFR component [F(1, 11) = 1.65, p = 0.22, ωp² = 0.05] or Attended frequency [F(1, 11) = 0.04, p = 0.84, ωp² = −0.08]. The two-way interaction between EFR component and Attended frequency was not significant either [F(1, 11) < 0.01, p = 0.95, ωp² = −0.08]. Thus, frequency-specific attention had no influence on EFRs at the AM rates used in Experiment 2.
#### No Effect of Frequency-Specific Attention on EFR Phase Coherence
Next, we analysed phase coherence values. Table 2 displays mean EFR phase coherence separately for the Attend-High, Attend-Low, and Attend-Visual (low or high stream first) conditions. Figure 4d illustrates the difference in phase coherence between the Attend-Auditory and Attend-Visual conditions at the two EFR components, for trials in which the acoustic stimuli were identical. A within-subject two-way ANOVA examining the effect of EFR component (low or high) and Attended frequency (low or high) on these phase coherence difference values revealed no main effect of EFR component [F(1, 11) = 0.46, p = 0.51, ωp² = −0.04] or Attended frequency [F(1, 11) < 0.01, p = 0.96, ωp² = −0.08]. The two-way interaction between EFR component and Attended frequency was not significant either [F(1, 11) = 1.68, p = 0.22, ωp² = 0.05].
#### No Relationship Between Task Performance and Attentional Modulation of EFRs
Figure 5c, d displays auditory d′ and the extent of attentional EFR SNR modulation for each participant at the two EFR components. Bonferroni-corrected Pearson’s product-moment correlations revealed no relationship between behavioural performance on Attend-Low trials and attentional modulation at the low EFR component (r = −0.45, p = 0.29) or between behavioural performance on Attend-High trials and attentional modulation at the high EFR component (r = 0.46, p = 0.27). Bonferroni-corrected Pearson’s product-moment correlations between auditory d′ and the extent of attentional modulation of phase coherence values revealed no significant relationship at the low EFR component (r = − 0.001, p ~ 1.00) or the high EFR component (r = − 0.20, p ~ 1.00).
### Comparison of Experiments 1 and 2
We found frequency-specific attentional modulation of EFR SNRs and phase coherence in Experiment 1, but not in Experiment 2. To identify whether the differences between experiments were robust, we conducted two mixed three-way ANOVAs—separately for SNRs and phase coherence—with within-subject factors of EFR component and Attended frequency and a between-subject factor of Experiment.
There was a significant three-way interaction of Experiment, EFR component, and Attended frequency for SNRs [F(1, 34) = 6.95, p = 0.013, ωp² = 0.14]. There was also a significant three-way interaction for phase coherence [F(1, 34) = 9.51, p = 0.004, ωp² = 0.19]. These results indicate that the patterns of results indeed differed significantly between the two experiments.
Next, we tested whether differences in behavioural performance could be responsible for different results between the experiments. Performance (d′) on the auditory task did not differ significantly between experiments [t(34) = 2.00, p = 0.054, gs = 0.69], nor did performance on the visual task [t(34) = 0.55, p = 0.59, gs = 0.19].
## Discussion
We found frequency-specific attentional modulation of EFRs at lower (93 and 109 Hz) but not at higher (217 and 233 Hz) stimulus AM rates. At lower rates (Experiment 1), EFRs were larger and showed stronger phase coherence when listeners were attending to the tone stream (low- or high-frequency carrier) that was tagged with that AM rate, compared to when they were attending to the other tone stream. However, at higher AM rates (Experiment 2), we found no effect of frequency-specific attention on EFRs, even though other procedures were identical and behavioural performance, EFR SNRs, and EFR phase coherence values were as good as or better than in Experiment 1. If frequency-specific attention modulated brainstem components of EFRs (in contrast to cortical components), attentional modulation of EFRs should have been present for both the lower and higher ranges of AM rates.
The current experiments are the first to examine attentional modulation of EFRs at two distinct sets of frequencies and using two complementary measures of EFR magnitude. Within each of our two experiments, we incorporated a replication, demonstrating the same pattern of results at two different EFR components (i.e. corresponding to the higher and lower AM rates) and with two different EFR measures (i.e. SNRs and phase coherence). In Experiment 1, EFR SNRs and phase coherence were modulated by attention at both 93 and 109 Hz (although for phase coherence at 109 Hz the trend was not significant). In Experiment 2, we found no evidence of attentional modulation of SNRs or phase coherence at either 217 or 233 Hz. Importantly, we provide strong evidence for a dissociation between the two ranges of AM rates—the patterns of results differed statistically between the two experiments.
The fact that we observed attentional modulation for frequencies with suspected cortical contributions, but not at frequencies higher than the cortex is thought to be capable of tracking, suggests that attentional modulation of EFRs at lower AM rates could result from attentional modulation of a cortical component contributing to the measured EFRs. Cortical contributions to frequency-specific attention could not be measured directly in the current experiments. This is because we designed the stimuli to measure phase-locked responses at frequencies with putative brainstem generators and, thus, the stimuli were amplitude modulated at those frequencies. In addition, we presented sequences of repeated, long-duration tones, meaning that components in filtered time-domain averages were not readily identifiable due to neural adaptation (Sams et al. 1993; Herrmann et al. 2014). However, the results provide strong evidence that different neural processes underlie activity at the higher and lower frequencies tested. Based on evidence from ECoG showing cortical frequency-following in Heschl’s gyrus up to but not beyond 200 Hz (Brugge et al. 2009; Nourski et al. 2013; Behroozmand et al. 2016) and recent MEG (Coffey et al. 2016) and EEG (Coffey et al. 2017) studies showing that the generators of the frequency-following response at 98 Hz most likely include cortex, we suspect that the observed dissociation might arise from cortical contributions to EFRs at the lower frequencies we tested in Experiment 1 and not at the higher frequencies we tested in Experiment 2. Although less likely, another possibility is that different findings at higher- and lower-modulation frequencies reflect the contribution of different combinations of brainstem generators to EFRs (see Marsh et al. 1974; Dykstra et al. 2016). The current results add to the growing literature by demonstrating that the most popular method for recording EFRs—EEG—is sensitive to different neural processes at frequencies above and below 200 Hz, within the range of frequencies at which EFRs are typically assumed to reflect brainstem processes. Furthermore, we show that this difference has the potential to dramatically alter the conclusions of EFR studies.
The results of Experiment 1 show that EFRs elicited by an AM tone have greater SNRs when that tone is attended or when visual stimuli are attended than when attention is directed to a different-frequency tone (Fig. 4a). This result suggests that attention suppresses the amplitude of EFRs to tones at frequencies that are not attended. The results also show that EFRs elicited by an AM tone have greater phase coherence when that tone is attended than when attention is directed to a different-frequency tone or to visual stimuli, suggesting that attention enhances EFR phase coherence for tones at attended frequencies (Fig. 4b). Suppression of EFR amplitudes to an unattended tone was also reported by Hairston et al. (2013). They measured following responses to the temporal fine structure of a ‘background’ 220-Hz pure tone. Participants performed either a temporal discrimination task on pure tones with a frequency of 587 Hz, a visual temporal discrimination task, or no task. The amplitude of the response was lower during the auditory than the visual and no-task conditions. The current results are consistent with those reported by Hairston et al. (2013).
Unlike Experiment 1, two previous studies found no consistent modulation of EFRs by auditory attention (Lehmann and Schönwiesner 2014; Varghese et al. 2015)—although those experiments cued attention to spoken words at different spatial locations (which also differed in fundamental frequency), rather than explicitly to sounds of different frequencies. Varghese et al. (2015) analysed EFRs at similar frequencies (97 and 113 Hz) as the modulation frequencies employed in Experiment 1, but they obtained much poorer SNRs—perhaps attributable to a shorter analysis window and fewer epochs, which likely reduced their ability to detect significant attentional modulation. Lehmann and Schönwiesner (2014) report high SNRs, but used stimuli with relatively high fundamental frequencies of 170 and 225 Hz. They observed attentional modulation in the expected direction at 170 Hz (with dichotic presentation) but not at 225 Hz, which is similar to the higher AM rates used in Experiment 2. The results of the current experiments add, crucially, to the ongoing debate of whether attention affects EFRs by showing that choice of modulation rate can affect the outcomes of EFR studies, which could potentially reconcile seemingly disparate results found in previous studies. The findings of Lehmann and Schönwiesner (2014) are consistent with the results reported here, which reveal attentional modulation at frequencies below 200 Hz (Experiment 1), but not at those above 200 Hz (Experiment 2).
Galbraith and Doan (1995) did find attentional modulation at 400 Hz, which is of higher frequency than the cortex is assumed capable of tracking. However, they cued spatial attention to the left or right ear and recorded following responses to the temporal fine structure, instead of the envelope. Thus, it is possible that frequency-specific attentional modulation of brainstem responses in EEG is more difficult to detect than attention shifts between ears and/or that temporal fine structure following responses reflect different neural processes than envelope responses.
We found no difference in EFR amplitudes and phase coherence between the Attend-Auditory and Attend-Visual conditions overall. Although some previous studies reported modulation of EFRs by visual or auditory attention, the findings are inconsistent: some studies found a difference in amplitudes, but not latencies (e.g. Galbraith and Arroyo 1993; Galbraith et al. 2003), others found a difference in latencies but not amplitudes (e.g. Hoormann et al. 2000), and some reported no differences (e.g. Galbraith and Kane 1993; Varghese et al. 2015). The current finding is not surprising in the context of these previous results. Given that d′ in the current experiment was approximately 2, participants were performing the visual task accurately, making it unlikely that the visual task used in the current experiments did not effectively engage attention.
In Experiment 1, there was a difference in EFR SNRs and phase coherence between the two stimulus types (low or high stream first) in the Attend-Visual condition. We expected to observe no difference in EFRs between these trials relating to attention because the stimulus that began first was irrelevant to the visual task. There are several possible explanations for this finding, which cannot be distinguished here. First, differences in EFRs may reflect acoustic differences between the two stimulus types (i.e. low or high stream first). Second, it is possible that attention did in fact differ between the two stimulus types in the Attend-Visual condition: the onset of the tone streams could have captured attention exogenously. One attention-driven explanation is that each stream captured attention sequentially; thus, the stream that began last in the Attend-Visual condition would capture attention throughout the analysis window, meaning that the stream that began first would be unattended—potentially causing lower EFR SNRs and phase coherence at the AM rate of the first tone stream. A different attention-driven explanation is that the tone stream that began first may have been most salient; if listeners actively suppressed the percept of the stream that began first to help them focus on the visual task, then EFR SNRs and phase coherence would again be lower at the AM rate of the first tone stream. The stimulus-driven and attention-driven explanations could be distinguished in future studies by presenting acoustically identical stimuli in Attend-Low and Attend-High trials and by using a visual, rather than acoustic, stimulus to cue attention. The two attention-driven explanations could be distinguished by analysing EFRs based on the order of streams; if participants suppressed the tone stream that began first, the order of the two later streams should not affect EFRs.
Our results demonstrate that measuring EFRs at different frequencies within the range of frequencies that are typically assumed to reflect brainstem processing has the potential to dramatically alter the conclusions of EFR studies. If we had only measured EFRs at the lower frequencies used in Experiment 1, we may have concluded that attention influences brainstem encoding, whereas, if we had only used the higher frequencies of Experiment 2, we may have concluded that there is no influence of attention on brainstem encoding. Thus, our findings have important implications for experiments comparing EFRs across different populations. Previous studies have found that EFRs elicited by musical notes differ between musicians and non-musicians (Musacchia et al. 2007), EFRs elicited by Mandarin sounds differ between speakers of Mandarin and English (Krishnan et al. 2009, 2010), and EFRs elicited by spoken syllables differ between children with different speech-in-noise abilities (Anderson et al. 2010). Those results have been attributed to differences in brainstem encoding. However, given that EFRs are typically recorded at lower frequencies (70–200 Hz), the reported differences in EFRs could potentially arise from differences in cortical attentional processes rather than brainstem processes. We suggest that the findings of these studies should be re-evaluated and recommend further work aimed at disambiguating cortical and brainstem responses. For example, future studies could present stimuli with fundamental frequencies above 200 Hz to confidently attribute EFRs to brainstem generators. Also, different methods could be used to more clearly separate brainstem responses and cortical activity (e.g. MEG or functional magnetic resonance imaging [fMRI], albeit at the cost of losing information about phase locking). In particular, fMRI, with its very high spatial resolution, might be a promising method to evaluate attentional and cognitive modulation of auditory brainstem (inferior colliculus) and thalamus (medial geniculate body) activity as fMRI has previously been used to show modulation of brainstem responses by spatial attention (Rinne et al. 2008).
## Conclusions
Using EEG—currently the most common method for recording EFRs—we found that frequency-specific attention affected the amplitude of EFRs elicited by stimuli with amplitude modulation rates of 93 and 109 Hz, but not by stimuli with amplitude modulation rates of 217 and 233 Hz. The effect of attention was significantly stronger at the lower two modulation rates than at the higher two rates. We conclude that EFRs at lower amplitude modulation rates reflect different processes (e.g. a cortical contribution, which is modulated by attention) than EFRs above 200 Hz. The significant difference in results between the two sets of AM rates demonstrates that EEG-recorded EFRs reflect different processes for AM rates below 200 Hz (which are commonly used in EFR research) compared to higher rates. Critically, this finding should lead to re-evaluation of previous studies claiming that differences in EFRs reflect differences in brainstem encoding.
## Notes
### Acknowledgements
This work was supported by funding from the Canadian Institutes of Health Research (CIHR; Operating Grant: MOP 133450) and the Natural Sciences and Engineering Research Council of Canada (NSERC; Discovery Grant: 327429-2012). Authors R.P. Carlyon and H.E. Gockel were supported by intramural funding from the Medical Research Council [SUAG/007 RG91365]. We thank Lenny Varghese and Barbara Shinn-Cunningham for helpful comments on phase coherence calculations.
### Compliance with Ethical Standards
Both experiments were cleared by Western University’s Health Sciences Research Ethics Board. Informed consent was obtained from all participants.
## References
1. Anderson S, Skoe E, Chandrasekaran B, Kraus N (2010) Neural timing is linked to speech perception in noise. J Neurosci 30:4922–4926.
2. Anourova I, Nikouline VV, Ilmoniemi RJ et al (2001) Evidence for dissociation of spatial and nonspatial auditory information processing. NeuroImage 14:1268–1277.
3. Behroozmand R, Oya H, Nourski KV et al (2016) Neural correlates of vocal production and motor control in human Heschl’s gyrus. J Neurosci 36:2302–2315.
4. Bharadwaj HM, Lee AKC, Shinn-Cunningham BG (2014) Measuring auditory selective attention using frequency tagging. Front Integr Neurosci 8:6.
5. Billig AJ, Davis MH, Deeks JM et al (2013) Lexical influences on auditory streaming. Curr Biol 23:1585–1589.
6. Brugge JF, Nourski KV, Oya H et al (2009) Coding of repetitive transients by auditory cortex on Heschl’s gyrus. J Neurophysiol 102:2358–2374.
7. Buzsáki G, Anastassiou CA, Koch C (2012) The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nat Rev Neurosci 13:407–420.
8. Carlyon RP, Cusack R, Foxton JM, Robertson IH (2001) Effects of attention and unilateral neglect on auditory stream segregation. J Exp Psychol Hum Percept Perform 27:115–127
9. Coffey EBJ, Herholz SC, Chepesiuk AMP et al (2016) Cortical contributions to the auditory frequency-following response revealed by MEG. Nat Commun 7:1–11.
10. Coffey EBJ, Musacchia G, Zatorre RJ (2017) Cortical correlates of the auditory frequency-following and onset responses: EEG and fMRI evidence. J Neurosci 37:830–838.
11. Da Costa S, van der Zwaag W, Miller LM et al (2013) Tuning in to sound: frequency-selective attentional filter in human primary auditory cortex. J Neurosci 33:1858–1863.
12. Davis MH, Johnsrude IS (2007) Hearing speech sounds: top-down influences on the interface between audition and speech perception. Hear Res 229:132–147.
13. Ding N, Simon JZ (2012) Emergence of neural encoding of auditory objects while listening to competing speakers. Proc Natl Acad Sci U S A 2012:5–10.
14. Dykstra AR, Burchard D, Starzynski C et al (2016) Lateralization and binaural interaction of middle-latency and late-brainstem components of the auditory evoked response. JARO - J Assoc Res Otolaryngol 17:357–370.
15. Formisano E, Martino F, De Bonte M, Goebel R (2008) “Who” is saying “what”? Brain-based decoding of human voice and speech. Science 322:970–973
16. Galbraith GC, Arroyo C (1993) Selective attention and brainstem frequency-following responses. Biol Psychol 37:3–22
17. Galbraith GC, Doan BQ (1995) Brainstem frequency-following and behavioral responses during selective attention to pure tone and missing fundamental stimuli. Int J Psychophysiol 19:203–214.
18. Galbraith GC, Kane JM (1993) Brainstem frequency-following responses and cortical event-related potentials during attention. Percept Mot Skills 76:1231–1241.
19. Galbraith GC, Olfman DM, Huffman TM (2003) Selective attention affects human brain stem frequency-following response. Neuroreport 14:735–738.
20. Green DM, Swets JA (1966) Signal detection theory and psychophysics. Wiley, New York
21. Hairston WD, Letowski TR, McDowell K (2013) Task-related suppression of the brainstem frequency following response. PLoS One 8:31–34.
22. Herdman AT, Lins O, Van Roon P et al (2002) Intracerebral sources of human auditory steady-state responses. Brain Topogr 15:69–86.
23. Herrmann B, Schlichting N, Obleser J (2014) Dynamic range adaptation to spectral stimulus statistics in human auditory cortex. J Neurosci 34:327–331.
24. Hillyard SA, Hink RF, Schwent VL, Picton TW (1973) Electrical signs of selective attention in the human brain. Science 182:177–180.
25. Hoormann J, Falkenstein M, Hohnsbein J (2000) Early attention effects in human auditory-evoked potentials. Psychophysiology 37:29–42.
26. ISO-226 (2003) Acoustics—normal equal-loudness contours. International Organization for Standardization, Geneva
27. Jerger J, Chmiel R, Frost JD, Coker N (1986) Effect of sleep on the auditory steady state evoked potential. Ear Hear 7:240–245.
28. King A, Hopkins K, Plack CJ (2016) Differential group delay of the frequency following response measured vertically and horizontally. J Assoc Res Otolaryngol 17:133–143.
29. Kiren T, Aoyagi M, Furuse H, Koike Y (1994) An experimental study on the generator of amplitude-modulation following response. Acta Otolaryngol Suppl 511:28–33
30. Krishnan A, Gandour JT, Bidelman GM, Swaminathan J (2009) Experience dependent neural representation of dynamic pitch in the brainstem. Neuroreport 4:408–413.
31. Krishnan A, Gandour JT, Bidelman GM (2010) The effects of tone language experience on pitch processing in the brainstem. J Neurolinguistics 23:81–95.
32. Lavie N (1995) Perceptual load as a necessary condition for selective attention. J Exp Psychol Hum Percept Perform 21:451–468
33. Lavie N, Tsal Y (1994) Perceptual load as a major determinant of the locus of selection in visual attention. Percept Psychophys 56:183–197
34. Lehmann A, Schönwiesner M (2014) Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues. PLoS One 9:e85442.
35. Marsh JT, Brown WS, Smith JC (1974) Differential brainstem pathways for the conduction of auditory frequency-following responses. Electroencephalogr Clin Neurophysiol 36:415–424.
36. Musacchia G, Sams M, Skoe E, Kraus N (2007) Musicians have enhanced subcortical auditory and audiovisual processing of speech and music. Proc Natl Acad Sci U S A 104:15894–15898.
37. Nourski KV, Brugge JF, Reale RA et al (2013) Coding of repetitive transients by auditory cortex on posterolateral superior temporal gyrus in humans: an intracranial electrophysiology study. J Neurophysiol 109:1283–1295.
38. Paltoglou AE, Sumner CJ, Hall DA (2009) Examining the role of frequency specificity in the enhancement and suppression of human cortical activity by auditory selective attention. Hear Res 257:106–118.
39. Parasuraman R, Richer F, Beatty J (1982) Detection and recognition: concurrent processes in perception. Percept Psychophys 31:1–12
40. Petkov CI, Kang X, Alho K et al (2004) Attentional modulation of human auditory cortex. Nat Neurosci 7:658–663.
41. Picton TW, John MS (2004) Avoiding electromagnetic artifacts when recording auditory steady-state responses. J Am Acad Audiol 15:541–554
42. Riecke L, Peters JC, Valente G et al (2016) Frequency-selective attention in auditory scenes recruits frequency representations throughout human superior temporal cortex. Cereb Cortex 27:3002–3014.
43. Rinne T, Balk MH, Koistinen S et al (2008) Auditory selective attention modulates activation of human inferior colliculus. J Neurophysiol 100:3323–3327.
44. Sams M, Hari R, Rif J, Knuutila J (1993) The human auditory sensory memory trace persists about 10 sec: neuromagnetic evidence. J Cogn Neurosci 5:363–370.
45. Small SA, Stapells DR (2004) Artifactual responses when recording auditory steady-state responses. Ear Hear 25:611–623
46. Smith JC, Marsh JT, Brown WS (1975) Far-field recorded frequency-following responses: evidence for the locus of brainstem sources. Electroencephalogr Clin Neurophysiol 39:465–472.
47. Stapells DR, Makeig S, Galambos R (1987) Auditory steady-state responses: threshold prediction using phase coherence. Electroencephalogr Clin Neurophysiol 67:260–270.
48. Varghese L, Bharadwaj HM, Shinn-Cunningham BG (2015) Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses. Brain Res 1626:146–164.
49. Voisin J, Bidet-Caulet A, Bertrand O, Fonlupt P (2006) Listening in silence activates auditory areas: a functional magnetic resonance imaging study. J Neurosci 26:273–278.
50. Winer JA, Schreiner CE (2005) The inferior colliculus. New York: Springer.
51. Woldorff MG, Hansen JC, Hillyard SA (1987) Evidence for effects of selective attention in the mid-latency range of the human auditory event-related potential. Curr Trends Event-Related Potential Res 40:146–154
52. Woldorff MG, Gallen CC, Hampson SA et al (1993) Modulation of early sensory processing in human auditory cortex during auditory selective attention. Proc Natl Acad Sci U S A 90:8722–8726
53. Worden FG, Marsh JT (1968) Frequency-following (microphonic-like) neural responses evoked by sound. Electroencephalogr Clin Neurophysiol 25:42–52.
54. Xiang J, Simon JZ, Elhilali M (2010) Competing streams at the cocktail party: exploring the mechanisms of attention and temporal integration. J Neurosci 30:12084–12093.
## Authors and Affiliations
- Emma Holmes (1)
- David W. Purcell (2)
- Robert P. Carlyon (3)
- Hedwig E. Gockel (3)
- Ingrid S. Johnsrude (1, 2)

1. Brain and Mind Institute, University of Western Ontario, London, Canada
2. School of Communication Sciences and Disorders, University of Western Ontario, London, Canada
3. 3.MRC-Cognition and Brain Sciences UnitUniversity of CambridgeCambridgeUK | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249058127403259, "perplexity": 5533.032160239623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832330.93/warc/CC-MAIN-20181219130756-20181219152756-00071.warc.gz"} |
https://dimag.ibs.re.kr/2022/kyeongsik-nam-seminar/ | # Kyeongsik Nam (남경식) gave a talk on the number of subgraphs isomorphic to a fixed graph in a random graph and the exponential random graph model at the Discrete Math Seminar
On May 9, 2022, Kyeongsik Nam (남경식) from KAIST gave a talk at the Discrete Math Seminar on the number of subgraphs isomorphic to a fixed graph in a random graph and the exponential random graph model. The title of his talk was “Large deviations for subgraph counts in random graphs“. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8234857320785522, "perplexity": 731.1074096382514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00394.warc.gz"} |
https://mersenneforum.org/showthread.php?s=31d90d0baf296882f2d7083a1672bf00&p=353900 | mersenneforum.org cubing/15-puzzle general
2013-09-20, 16:02   #1
3.14159
May 2010
Prime hunting commission.
2⁴·3·5·7 Posts

cubing/15-puzzle general

Since I've been away, I've adopted cubing/solving rubik's cubes as a pastime, as well as solving 15-puzzles. My times for the 3x3x3 cube are roughly ~30 ~ 40 seconds, best time of 25.1 seconds. Not very good since I mostly focus on larger rubik's cubes (5x5x5+ preferred), and I mostly use cube simulators as opposed to physical cubes. Also do 15-puzzles on a regular basis, again using sims because I don't have any physical 15puzzles. For the ordinary 15-puzzle, my times range from ~10 ~ 20 seconds normally, best time was a lucky 7.704 seconds.

Code:
cubes
2x2x2: 3.3 seconds
3x3x3: 25.1 seconds
4x4x4: 95.6 seconds
5x5x5: 2.465 mins
6x6x6: 3.81 mins
7x7x7: 5.68 mins
8x8x8: 8.49 mins
9x9x9: 11.9 mins
10x10x10: 16.681 mins
11x11x11: 20.31 mins
12x12x12: 27.15 mins
13x13x13: 32.8 mins
14x14x14: 42.2 mins
15x15x15: 48.5 mins
16x16x16: 57.35 mins
17x17x17: ~73 mins
18x18x18: 83 mins
19x19x19: 101.845 mins
20x20x20: 95.556 mins
26x26x26: 230.007 mins
30x30x30: 818.245 mins
32x32x32: 369.521 mins
33x33x33: 399.297 mins
35x35x35: 574.211 mins
40x40x40: ~960 mins

15puzzles
2x2puzzle: 0
3x3puzzle: 0.07 seconds
4x4puzzle: 7.704 seconds
5x5puzzle: 25.891 seconds
6x6puzzle: 56.307 seconds
7x7puzzle: 107.316 seconds
8x8puzzle: 3.131 mins
9x9puzzle: 5.218 mins
10x10puzzle: 6.833 mins
13x13puzzle: 15.913 mins
15x15puzzle: 26.634 mins

There's all my personal best times for the puzzle sizes I've solved so far, for both rubik's cube puzzles and 15-puzzles. Anyone else out there for either cubes or 15-puzzles? If so, what times/methods do you use?

Some vids of my solves:
http://www.youtube.com/watch?v=XKta4LGlXeE
http://www.youtube.com/watch?v=9c4tfUZDX_o
http://www.youtube.com/watch?v=cGTr7s6g7us

Last fiddled with by 3.14159 on 2013-09-20 at 16:09 Reason: n/a
2013-09-22, 23:50 #2 henryzz Just call me Henry "David" Sep 2007 Cambridge (GMT/BST) 133418 Posts http://www.mersenneforum.org/showthread.php?t=11186 These are some quite good times. If you are into solving things like a 40x40x40 it might be fun if you try and break the record for the largest solve. From memory I think it is at around 101 currently. There is a guy on the speedcubing forum trying to break his own record currently.
2013-09-23, 00:59 #3 TheMawn May 2013 East. Always East. 11×157 Posts What cube software do you use? I own a V-Cube 7, V-Cube 5 and a Rubik's brand 3x3x3. I had a Rubik's 4 and a V-Cube 6 but something inside the 4 snapped and I dropped the V-Cube 6 and holy f------ s--- is the V-Cube one complicated-ass piece of hardware! I've given up on Verdes to ever make anything bigger than the 7 and I can't for the life of me find any of the store that sell the larger physical cubes. Sadly, I haven't been able to confirm my hypothesis that I can solve any cube without help having learned about having no centers in 4, having learned about the more complex center building in 5, and then having learned about building the even bigger centers in layers in 6 and 7.
2013-09-23, 12:24 #4
3.14159
May 2010
Prime hunting commission.
690₁₆ Posts
Quote:
What cube software do you use?
Quote:
I've given up on Verdes to ever make anything bigger than the 7 and I can't for the life of me find any of the store that sell the larger physical cubes. Sadly, I haven't been able to confirm my hypothesis that I can solve any cube without help having learned about having no centers in 4, having learned about the more complex center building in 5, and then having learned about building the even bigger centers in layers in 6 and 7.
Yeah, Verdes isn't planning on making any larger cubes than 7. However, if you really need some larger physical cubes than 7x7x7, you should check out http://www.championscubestore.com/, which sells Shengshou cubes. They're currently selling cubes 2x2 ~ 9x9, and will soon release their 10x10 cube.
Last fiddled with by 3.14159 on 2013-09-23 at 12:31
2013-09-23, 18:07 #5
Brian-E
"Brian"
Jul 2007
The Netherlands
7·467 Posts
Quote:
Originally Posted by henryzz If you are into solving things like a 40x40x40 it might be fun if you try and break the record for the largest solve. From memory I think it is at around 101 currently. There is a guy on the speedcubing forum trying to break his own record currently.
Just out of interest, what does the largest solve of 101 mean exactly?
I guess it's 101 moves to solve the cube from a particular starting position, but does that mean a particular position found which takes the record most number of moves to solve (surely that's out of computational reach to verify?), or a record best solve of only 101 moves from a random position (in which case how do we know the starting position was not fortuitously chosen?), or something else?
2013-09-23, 18:42 #6
3.14159
May 2010
Prime hunting commission.
2⁴·3·5·7 Posts
Quote:
Originally Posted by Brian-E Just out of interest, what does the largest solve of 101 mean exactly? I guess it's 101 moves to solve the cube from a particular starting position, but does that mean a particular position found which takes the record most number of moves to solve (surely that's out of computational reach to verify?), or a record best solve of only 101 moves from a random position (in which case how do we know the starting position was not fortuitously chosen?), or something else?
He means the largest cube solved by hand. The world record for that was set a few months ago, which stands at 111x111x111. The solve took slightly less than 30 hours' worth of solving time spaced out over a few days, done by one person, unlike the previous record. The previous record was set by two people over the course of 34 days if I remember right, which was 100x100x100.
Last fiddled with by 3.14159 on 2013-09-23 at 18:51
2013-09-23, 21:15 #7
Brian-E
"Brian"
Jul 2007
The Netherlands
7×467 Posts
Quote:
Originally Posted by 3.14159 He means the largest cube solved by hand. The world record for that was set a few months ago, which stands at 111x111x111. The solve took slightly less than 30 hours' worth of solving time spaced out over a few days, done by one person, unlike the previous record. The previous record was set by two people over the course of 34 days if I remember right, which was 100x100x100.
Thanks. Sorry for my slowness.
2013-09-24, 02:20 #8
LaurV
Romulan Interpreter
Jun 2011
Thailand
33·347 Posts
Quote:
Originally Posted by Brian-E I guess it's 101 moves ... or something else?
Related to that: God's Number is 20.
A copy of the license is included in the FAQ. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1761738806962967, "perplexity": 4641.406039470522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00074.warc.gz"} |
https://kluedo.ub.uni-kl.de/frontdoor/index/index/searchtype/collection/id/15994/start/3/rows/10/languagefq/eng/sortfield/title/sortorder/asc/yearfq/2017/docId/4640 |
## Coverage of Compositional Property Sets for Hardware and Hardware-dependent Software in Formal System-on-Chip Verification
• Divide-and-Conquer is a common strategy to manage the complexity of system design and verification. In the context of System-on-Chip (SoC) design verification, an SoC system is decomposed into several modules and every module is separately verified. Usually an SoC module is reactive: it interacts with its environmental modules. This interaction is normally modeled by environment constraints, which are applied to verify the SoC module. Environment constraints are assumed to be always true when verifying the individual modules of a system. Therefore the correctness of environment constraints is very important for module verification. Environment constraints are also very important for coverage analysis. Coverage analysis in formal verification measures whether or not the property set fully describes the functional behavior of the design under verification (DuV). If a set of properties describes every functional behavior of a DuV, the set of properties is called complete.

To verify the correctness of environment constraints, Assume-Guarantee Reasoning rules can be employed. However, the state-of-the-art assume-guarantee reasoning rules cannot be applied to environment constraints specified by using an industrial standard property language such as SystemVerilog Assertions (SVA). This thesis proposes a new assume-guarantee reasoning rule that can be applied to environment constraints specified by using a property language such as SVA. In addition, this thesis proposes two efficient plausibility checks for constraints that can be conducted without a concrete implementation of the considered environment. Furthermore, this thesis provides a compositional reasoning framework determining that a system is completely verified if all modules are verified with Complete Interval Property Checking (C-IPC) under environment constraints.

At present, there is a trend that more of the functionality in SoCs is shifted from the hardware to the hardware-dependent software (HWDS), which is a crucial component in an SoC, since other software layers, such as the operating systems, are built on it. Therefore there is an increasing need to apply formal verification to HWDS, especially for safety-critical systems. The interactions between HW and HWDS are often reactive, and happen in a temporal order. This requires new property languages to specify the reactive behavior at the HW and SW interfaces. This thesis introduces a new property language, called Reactive Software Property Language (RSPL), to specify the reactive interactions between the HW and the HWDS. Furthermore, a method for checking the completeness of software properties, which are specified by using RSPL, is presented in this thesis. This method is motivated by the approach of checking the completeness of hardware properties.
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015042543411255, "perplexity": 1269.9232824441442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00271.warc.gz"} |
https://www.physicsforums.com/threads/motor-torque.474773/ | # Motor Torque
Looking for a motor to turn a large cylinder.
Say if I have a cylinder of radius 1007.54mm and mass of 2 tons, I would have a moment of inertia of 1.027x10^3 kg-m^2
This is from the general cylinder inertial equation,
I = .5*m*r^2
The torque necessary to bring this to 30 deg/ 5 min is
W = ω0 rad/sec + .5*α*(5 min/60 sec)^2
W = 30 deg*(3.14 rad/180 deg)*2/(.00694 sec^2)
Torque is then,
T = .06224 N-m
Correct?
and if this is so if I had a motor with a minimum torque of .06224 N-m?
I think this is correct but thought I'd have someone run over calculations and see if I am looking for a motor the right way. Thank you!
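For comparison, a small Python sketch of the same numbers, assuming the goal is a 30° rotation from rest over 5 minutes at constant angular acceleration (this interpretation is assumed; friction and any load torque are ignored):

```python
import math

m = 2000.0            # mass in kg (2 metric tons, assumed)
r = 1.00754           # radius in m
I = 0.5 * m * r**2    # moment of inertia of a solid cylinder, kg·m^2

theta = math.radians(30.0)   # target angular displacement, rad
t = 5.0 * 60.0               # 5 minutes expressed in seconds

# theta = 0.5 * alpha * t**2 (starting from rest)  ->  alpha = 2*theta / t**2
alpha = 2.0 * theta / t**2   # rad/s^2
T = I * alpha                # required torque, N·m

print(I, alpha, T)   # I ~ 1.0e3 kg·m^2, alpha ~ 1.16e-5 rad/s^2, T ~ 0.012 N·m
```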
2K | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8059592247009277, "perplexity": 8549.617627478692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362992.98/warc/CC-MAIN-20211204124328-20211204154328-00017.warc.gz"} |
https://link.springer.com/article/10.1007%2Fs11634-016-0237-y | Advances in Data Analysis and Classification
, Volume 11, Issue 1, pp 139–158
# A generalized maximum entropy estimator to simple linear measurement error model with a composite indicator
• Maurizio Carpita
• Enrico Ciavolino
Regular Article
## Abstract
We extend the simple linear measurement error model through the inclusion of a composite indicator by using the generalized maximum entropy estimator. A Monte Carlo simulation study is proposed for comparing the performance of the proposed estimator to its counterpart, the ordinary least squares estimator “adjusted for attenuation”. The two estimators are compared in terms of correlation with the true latent variable, standard error and root mean squared error. Two illustrative case studies are reported in order to discuss the results obtained on real data sets, and to relate them to the conclusions drawn via the simulation study.
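For orientation, a hedged sketch of the model the abstract refers to, in standard textbook notation (not necessarily the authors' exact formulation): the simple linear measurement error model observes the latent regressor only with noise,

$$y_i = \beta_0 + \beta_1 \xi_i + \varepsilon_i, \qquad x_i = \xi_i + \delta_i ,$$

so the ordinary least squares slope of $y$ on $x$ estimates $\lambda\beta_1$ with reliability ratio $\lambda = \sigma_\xi^2/(\sigma_\xi^2 + \sigma_\delta^2) < 1$ (attenuation); the “adjusted for attenuation” estimator divides the OLS slope by an estimate of $\lambda$.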
## Keywords
Simple linear measurement error model · Generalized maximum entropy · Composite indicator · Global innovation index · Manager performance
## Mathematics Subject Classification
97K70 97K80 47N30 94A17 91B82
## Supplementary material
11634_2016_237_MOESM1_ESM.pdf (23 kb)
Supplementary material 1 (pdf 23 KB)
## References
1. Al-Nasser AD (2005) Entropy type estimator to simple linear measurement error models. Aust J Stat 34(3):283–294
2. Bollen K (1989) Structural equations with latent variables. Wiley, New York
3. Brentari E, Zuccolotto P (2011) The impact of chemical and sensorial characteristics on the market price of Italian red wines. Electron J Appl Stat Anal 4(2):265–276
4. Buonaccorsi JP (2010) Measurement error models, methods and applications. Boca Raton: Chapman & Hall, CRC Press
5. Carpita M, Manisera M (2012) Constructing indicators of unobservable variables from parallel measurements. Electron J Appl Stat Anal 5(3):320–326
6. Carpita M, Ciavolino E (2014) MEM and SEM in the GME framework: statistical modelling of perception and satisfaction. Procedia Econ Financ 17:20–29
7. Carroll RJ, Ruppert D, Stefanski LA (1995) Measurement error in nonlinear models. Chapman & Hall, London
8. Cheng C-L, Van Ness JW (2010) Statistical regression with measurement error. Wiley, New York
9. Ciavolino E, Al-Nasser AD (2009) Comparing generalized maximum entropy and partial least squares methods for structural equation models. J Nonparametr Stat 21(8):1017–1036
10. Ciavolino E, Carpita M (2015) The GME estimator for the regression model with a composite indicator as explanatory variable. Qual Quant 49(3):955–965
11. Ciavolino E, Carpita M, Al-Nasser AD (2015) Modeling the quality of work in the Italian social co-operatives combining NPCA-RSM and SEM-GME approaches. J Appl Stat 42(1):161–179
12. Ciavolino E, Dahlgaard JJ (2009) Simultaneous equation model based on generalized maximum entropy for studying the effect of the management’s factors on the enterprise performances. J Appl Stat 36(7):801–815
13. Decancq K, Lugo MA (2013) Weights in multidimensional indices of wellbeing: an overview. Econ Rev 32(1):7–34
14. Dutta S (2012) The global innovation index 2012: stronger innovation linkages for global growth. INSEAD, France
15. Foster JE, McGillivray M, Suman S (2013) Composite indices: rank robustness, statistical association, and redundancy. Econ Rev 32(1):35–56
16. Fuller WA (1987) Measurement errors models. Wiley, New York
17. Golan A (2006) Information and entropy econometrics. A review and synthesis. Foundations and Trends® in Econometrics 2(1–2):1–145
18. Golan A, Judge G, Miller D (1996) A maximum entropy econometrics: robust estimation with limited data. Wiley, New York
19. Madansky A (1959) The fitting of straight lines when both variables are subject to error. J Am Stat Assoc 55:173–205
20. Nunnally JC, Bernstein IH (1994) Psychometric theory, 3rd edn. McGraw-Hill, New York
21. Oberski DL, Satorra A (2013) Measurement error models with uncertainty about the error variance. Struct Equ Model 20:409–428
22. Organisation for Economic Co-operation and Development (2008) Handbook on constructing composite indicators: methodology and user guide. Organisation for Economic Co-operation and Development, Paris
23. Pagani L, Zanarotti M (2015) Some considerations to carry out a composite indicator for ordinal data. Electron J Appl Stat Anal 8(3):384–397
24. Paruolo P, Saisana M, Saltelli A (2013) Ratings and rankings: voodoo or science? J R Stat Soc Ser A 176(3):609–634
25. Pukelsheim F (1994) The three sigma rule. Am Stat 48(2):88–91
26. Roeder K, Carroll RJ, Lindsay BG (1996) A semiparametric mixture approach to case–control studies with errors in covariables. J Am Stat Assoc 91:722–732
27. Saltelli A (2007) Composite indicators between analysis and advocacy. Soc Indic Res 81:65–77
28. Schumacker R, Lomax R (2004) A beginner’s guide to structural equation modeling. Lawrence Erlbaum, Mahwah
29. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423
30. Vezzoli M, Manisera M (2012) Assessing item contribution on unobservable variables’ measures with hierarchical data. Electron J Appl Stat Anal 5(3):314–319
31. Wansbeek T, Maijer E (2000) Measurement error and latent variables in econometrics. Elsevier, Amsterdam | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089635014533997, "perplexity": 18869.68152688239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203462.50/warc/CC-MAIN-20190324145706-20190324171706-00168.warc.gz"} |
https://bruceleeeowe.wordpress.com/2009/11/10/understanding-newtonian-mechanics/ | # Understanding Newtonian Mechanics
Newtonian mechanics marked the beginning of a new era for physics. Indeed, the Newtonian formulation of the gravitational force made it possible to establish the heliocentric theory developed by Copernicus and defended by Galileo. It is a very interesting story that deserves a full post (maybe one day, if I have enough time…).
I’m writing this post because I had to teach freshmen the foundations of Newtonian mechanics. The point is, I never liked how formulas were dropped from nowhere when I was a freshman myself. First there was a speech about the inertia principle (or Newton's first law of motion), then the teacher would introduce the famous vectorial relation (the second law of motion). But the second one was not derived directly from the first one, and yet the derivation is almost immediate.
What is inertia : “The vis insita, or innate force of matter is a power of resisting, by which every body, as much as in it lies, endeavors to preserve in its present state, whether it be of
rest, or of moving uniformly forward in a straight line.”
That’s how Newton defined it in his Principia. It simply states that an object doesn’t change its current motion unless it has a good reason, or, put otherwise, if there’s no cause, there are no consequences… Written formally for an isolated object: $\frac{d\vec{v}}{dt} = \vec{0}$ ($\vec{v}$ being the velocity of the object considered).
Once Newton introduced infinitesimal calculus, it was easy to formally link the velocity to the acceleration, $\vec{a} = \frac{d\vec{v}}{dt}$, particularly interesting for non-constant $\vec{v}$.
Now, we can notice that any object that is dropped with no speed ($\vec{v}(0) = \vec{0}$) will start moving under the action of gravity. Before it touches the ground, we have a non-null $\vec{a}$. Something acts on it and accelerates it.
For the proportionality coefficient, one can simply notice that pushing a mass $2m$ demands twice the force needed to push a mass $m$, thus introducing an extensive quantity called inertial mass. Why inertial? Simply because it quantifies how much a body resists a change in its current state. It is easy to illustrate this with a spring, two bodies with the same mass, and a frictionless support. Try to pull one of the bodies with the spring, and then both at the same time, and you’ll notice that the spring extension doubles (the extension of the spring is proportional to the force applied).
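A tiny numerical illustration of the proportionality just described (my own example, with assumed SI values):

```python
def acceleration(force_newtons, mass_kg):
    # Newton's second law rearranged: a = F / m
    return force_newtons / mass_kg

F = 10.0  # an assumed constant applied force, in newtons
print(acceleration(F, 1.0))  # 10.0 m/s^2 for a 1 kg body
print(acceleration(F, 2.0))  # 5.0 m/s^2 -> doubling the mass halves the acceleration,
                             # i.e. twice the force is needed for the same motion
```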
Finally we end with: . | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730494976043701, "perplexity": 714.4808903969736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821088.8/warc/CC-MAIN-20171017110249-20171017130249-00260.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/67456-plzzz-help-skew.html | # Math Help - PLzzz Help!!!!!!!!!!!!!!-Skew?
1. ## PLzzz Help!!!!!!!!!!!!!!-Skew?
A random variable is uniformly distributed over[0,1].What is its skew?
Hi,
I've been moving in circles around this question for quite sometime now.
I would appreciate it someone would be kind enough to provide me with the full answer and some explanation please.
As you can see I've been provided with several leads and unfortunately I came up with some funny answer you can see in the post below by me with a similar heading.
I got sigma to be approximately 0.3 and cubed it to get 0.027
for E((x-1/2)^3=-0.5(using the equation above i.e integral)
final answer was -18.5 ?? :?: I 'm sure it is not correct
2. Originally Posted by KayPee
A random variable is uniformly distributed over[0,1].What is its skew?
Hi,
I've been moving in circles around this question for quite sometime now.
I would appreciate it someone would be kind enough to provide me with the full answer and some explanation please.
As you can see I've been provided with several leads and unfortunately I came up with some funny answer you can see in the post below by me with a similar heading.
I got sigma to be approximately 0.3 and cubed it to get 0.027
for E((x-1/2)^3=-0.5(using the equation above i.e integral)
final answer was -18.5 ?? :?: I 'm sure it is not correct
$E\left(\left(\frac{x-u}{o}\right)^3\right) = \frac{E\left((x-u)^3\right)}{o^3}$
$u=\frac{1}{2}$
$o=\sqrt{\int_0^1 (x-1/2)^2\, dx}$
$E\left((x-1/2)^3\right)=\int_0^1 (x-1/2)^3\, dx$
Please excuse my lack of knowing the correct LaTex for the required symbols I hope you can interpret my work.
3. ## Thanks so much
Originally Posted by TheMasterMind
$E\left(\left(\frac{x-u}{o}\right)^3\right) = \frac{E\left((x-u)^3\right)}{o^3}$
$u=\frac{1}{2}$
$o=\sqrt{\int_0^1 (x-1/2)^2\, dx}$
$E\left((x-1/2)^3\right)=\int_0^1 (x-1/2)^3\, dx$
Please excuse my lack of knowing the correct LaTex for the required symbols I hope you can interpret my work.
Hello
never mind I'm also not good at using the latex .
I've got these steps above already.It is the final answer that I've struggled to reach.
4. Originally Posted by KayPee
Hello
never mind I'm also not good at using the latex .
I've got these steps above already.It is the final answer that I've struggled to reach.
5. To finish this off, the integral:
$\int_0^1 \left(x - \frac{1}{2}\right)^3 ~dx = \frac{1}{4}\left[\left(x - \frac{1}{2}\right)^4\right]^1_0$
$= \frac{1}{4}\left[\left(1 - \frac{1}{2}\right)^4 - \left(0 - \frac{1}{2}\right)^4\right]$
$= \frac{1}{4}\left[\left(\frac{1}{2}\right)^4 - \left(-\frac{1}{2}\right)^4\right]$
We know that for any even exponent b:
$(a)^b = (-a)^b$
So the inside of the brackets is zero, which makes the whole thing 0. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604454040527344, "perplexity": 1310.9212370135995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274289.5/warc/CC-MAIN-20140728011754-00369-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=607012 | # Switching time of a mechanical switch - capacitive model
by msandeep92
Tags: switching time
P: 7 Hi, I am trying to solve a problem where I need to find the switching time of a mechanical switch. A voltage of V is applied to an actuation pad, and the movable beam is assumed to have a spring constant of K. I have attached the photo for better clarity. Please help me out. Consider the damping of the switch due to air also. Thanks, Sandeep.
Mentor
P: 39,575
Quote by msandeep92 Hi, I am trying to solve a problem, where i need to find the switching time of a mechanical switch. A voltage of V is applied to an acutation pad, and the movable beam is assumed to have a spring constant of K. I have attached the photo for better clarity. Please help me out. Consider the damping of the switch due to air also. Thanks, Sandeep.
Welcome to the PF.
What is the context of your question? Is this for a school research project? Why are you using a capacitive switch instead of inductive? Is this for a nano-scale structure? Why would you still have air in the assembly? What have you done so far on this problem?
P: 7 Yes. This is a part of my research project. This is a nano-scale structure. This is a capacitive switch being used in RF MEMS - one of the latest emerging fields which is hoped to replace semiconductor switches for RF applications. Semiconductor switches have very high capacitances turning up at high frequencies. So, we use these switches as a replacement, which provide lower capacitance and hence higher isolation. We are devising a new model of the switch for higher switching speed. So, in this regard I need this calculation. What I have done so far on the beam is: Electrostatic force Fe = εA(V^2)/(2*(d-x)^2); Force due to spring Fk = -K*x; Force due to damping Fd = -b*(dx/dt). Using conservation of energy: .5*m*(v^2) = ∫F dx, with F = Fe + Fk + Fd. Neglect the damping as of now. If I go on integrating Fe and Fk, I get: dx/dt = √[(εA(V^2)x/m(d-x)d) - k(x^2)/2] = p. From here, I get time by t = ∫(dx/p). I am stuck in this integration, please help me. I am also not able to understand how to integrate the damping term. Thanks, Sandeep.
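A hedged numerical sketch of the switching-time estimate described above (all parameter values below are illustrative assumptions, not taken from the thread or the attached paper): instead of evaluating the integral in closed form, one can integrate the beam equation of motion m·x'' = εAV²/(2(d−x)²) − Kx − b·x' directly and stop at contact.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 8.854e-12          # permittivity of free space, F/m
A   = 100e-6 * 100e-6    # actuation pad area, m^2 (assumed)
V   = 30.0               # actuation voltage, V (assumed)
d   = 2e-6               # initial gap, m (assumed)
K   = 10.0               # spring constant, N/m (assumed)
b   = 1e-6               # damping coefficient, N*s/m (assumed)
m   = 1e-10              # effective beam mass, kg (assumed)

def rhs(t, y):
    x, v = y
    F_e = eps * A * V**2 / (2.0 * (d - x)**2)   # electrostatic force
    return [v, (F_e - K * x - b * v) / m]

# terminate when the beam has crossed essentially the whole gap (simple contact criterion)
def contact(t, y):
    return (d - 1e-9) - y[0]
contact.terminal = True
contact.direction = -1

sol = solve_ivp(rhs, [0.0, 1e-3], [0.0, 0.0], events=contact, max_step=1e-7)
print("switching time ~", sol.t_events[0][0] if sol.t_events[0].size else "no pull-in")
```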
P: 7
## Switching time of a mechanical switch - capacitive model
You can see this paper(attached) for better understanding.
Attached Files
Closed form expressions for RF MEMS.pdf (105.3 KB, 2 views)
Related Discussions Academic Guidance 3 Introductory Physics Homework 0 Electrical Engineering 4 Electrical Engineering 5 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8541332483291626, "perplexity": 1107.2002742590707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.studypug.com/us/en/math/college-algebra/solve-problems-with-rational-numbers-in-fraction-form | # Solving problems with rational numbers in fraction form
What is a rational number? A rational number is a real number that can be expressed as a fraction of two integers, $\frac{a}{b}$, where the denominator (b) does not equal zero.
A rational number can also be represented in decimal form. The decimal form of a rational number either ends after a finite number of digits or repeats the same sequence of digits over and over. In other words, a terminating decimal and a repeating sequence of decimals are both indicators of rational numbers.
The following table shows a few examples of rational numbers.
| Number | Fraction form | Rational or Irrational (R/IR) |
|---|---|---|
| 3 | $\frac{3}{1}$ | R |
| 0.2 | $\frac{1}{5}$ | R |
| 7.5 | $\frac{15}{2}$ | R |
| −1.4 | $-\frac{7}{5}$ | R |
| $\pi$ (3.141592653…) | N/A | IR |
A number that is not rational is called an irrational number. The difference between rational numbers and irrational numbers is that irrational numbers cannot be expressed as a ratio of two integers, nor do they have a finite number of decimals or a repeating sequence of decimals.
To begin this chapter, we will first understand the nature of rational numbers and learn how to compare and order rational numbers in both decimal and fraction forms. One way to compare rational numbers is using equivalent fractions. When two rational numbers are in fraction form, we can compare them with their numerators if they are expressed as equivalent fractions with a common denominator. Another way to compare rational numbers is to convert the fractions into decimal numbers.
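A small sketch of both comparison methods using Python's exact rational arithmetic (the two fractions below are illustrative values of my own choosing):

```python
from fractions import Fraction

# Two rational numbers in fraction form (illustrative values)
a = Fraction(3, 4)
b = Fraction(2, 3)

# Method 1: rewrite over a common denominator (12) and compare numerators: 9/12 vs 8/12
common = a.denominator * b.denominator   # 12
print(a.numerator * (common // a.denominator), "vs", b.numerator * (common // b.denominator))

# Method 2: convert to decimal form and compare
print(float(a), "vs", float(b))   # 0.75 vs 0.666...

print(a > b)   # True -- Fraction compares exactly either way
```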
In the second part of the chapter, we will explore the operations of rational numbers. Adding, subtracting, multiplying and dividing rational numbers in decimal form will be covered. Then, we will also talk about how to solve problems by adding, subtracting, multiplying and dividing rational numbers in fraction form.
### Solving problems with rational numbers in fraction form
Similar to the previous section, we will practice adding, subtracting, multiplying, and dividing rational numbers. Rational numbers can be expressed in two forms: fraction form and decimal form. This time, we will deal with rational numbers in fraction form. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 34, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242598414421082, "perplexity": 237.12869791839643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00168-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://blender.stackexchange.com/questions/35446/cannot-scale-or-rotate-objects | # Cannot scale or rotate objects
I choose the scale or rotate option that's on top of the render timeline, but every time I left click, the red circle with the 4 black lines sticking out (3D cursor) is selected, and not the scaling or rotating options. Is there any way I can fix this?
EDIT: I have posted this on blender as a bug. Can someone who has enough reputation close this question as not constructive (ie this question is something that needs to be discussed not answered)? Thank you for all your help.
EDIT 2: This is my computer specific, please check my solution below.
• could you upload a screenshot? – gladys Aug 10 '15 at 17:04
• Video uploaded. – Yubin Lee Aug 10 '15 at 17:36
• You should not only click by LMB, but click and drag if using manipulator (either scale or rotate) (it's not clear good enough from video how exactly do you do that). – Mr Zak Aug 10 '15 at 17:42
• @MrZak what is the manipulator? And I did click and drag. – Yubin Lee Aug 10 '15 at 17:50
• you have to click the LMB and drag at the same time (the LMB must be pushed while dragging) – gladys Aug 10 '15 at 18:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3551705777645111, "perplexity": 2017.7232368257153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00333.warc.gz"} |
http://mathoverflow.net/questions/129368/asymptotic-series | # Asymptotic series
I have found many references to Poincaré and Borel in relation to their work on asymptotic series, but so far, every source I can get my hands on is very old, hence hard to read (this is not true in general, but in this case, texts that predate Oh notation tend not to be clear).
Can you explain the idea behind asymptotic series, give an illuminating example, and/or suggest a good modern exposition of the theory?
-
Web pages of
Michael Berry: http://www.phy.bris.ac.uk/people/berry_mv/dingle.html
and
John Boyd: http://www-personal.umich.edu/~jpboyd/
both include helpful published and publicly available sources.
-
Basically what I was hoping for... Thanks! – Rodrigo A. Pérez May 2 at 12:10
There are many modern books, for example,
MR1317343 Balser, Werner From divergent power series to analytic functions. Theory and application of multisummable power series. Lecture Notes in Mathematics, 1582. Springer-Verlag, Berlin, 1994.
MR1250603 Candelpergher, B.; Nosmas, J.-C.; Pham, F. Approche de la résurgence. Actualités Mathématiques. Hermann, Paris, 1993.
The very basic idea is the following: You frequently obtain divergent series, a) as formal solutions of differential (or functional) equations, b) as perturbation series when you vary a linear operator.
The question is whether these series have any relation to actual solutions of the problem. It often turns out that they are asymptotic series, and moreover, that they are "Borel summable". Borel summation is a procedure using a form of Laplace transform that under certain conditions recovers the function from its formal asymptotic series.
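A classical illustrative example (added here for concreteness; it is the standard textbook case, not something specific to the sources above): Euler's series
$$f(x)=\int_0^\infty \frac{e^{-t}}{1+xt}\,dt \;\sim\; \sum_{n\ge 0}(-1)^n\, n!\,x^n \qquad (x\to 0^+).$$
The series on the right diverges for every $x\neq 0$, yet it is asymptotic to $f$: truncating after $N$ terms gives an error bounded by the first omitted term. Its Borel transform $\sum_{n\ge 0}(-1)^n t^n = \frac{1}{1+t}$ does converge, and the Borel sum $\frac{1}{x}\int_0^\infty e^{-t/x}\,\frac{dt}{1+t}$ reproduces $f(x)$ exactly, which is the phenomenon described in the answer above.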
- | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391522407531738, "perplexity": 1530.0571109314108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345760669/warc/CC-MAIN-20131218054920-00028-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.arxiv-vanity.com/papers/0706.2304/ | NYU-TH-07/03/01
Charged Condensation
Center for Cosmology and Particle Physics
Department of Physics, New York University, New York, NY, 10003, USA
We consider Bose-Einstein condensation of massive electrically charged scalars in a uniform background of charged fermions. We focus on the case when the scalar condensate screens the background charge, while the net charge of the system resides on its boundary surface. A distinctive signature of this substance is that the photon acquires a Lorentz-violating mass in the bulk of the condensate. Due to this mass, the transverse and longitudinal gauge modes propagate with different group velocities. We give qualitative arguments that at high enough densities and low temperatures a charged system of electrons and helium-4 nuclei, if held together by laboratory devices or by force of gravity, can form such a substance. We briefly discuss possible manifestations of the charged condensate in compact astrophysical objects.
1. Introduction and summary. Consider a sphere enclosing massive stable charged spin-1/2 particles with number density , and stable massive spin-0 particles of an equal but opposite charge. At some high temperature the substance in the sphere could form hot plasma. With the decreasing temperature the opposite charges would ordinarily form neutral atoms of half-integer spins. These atoms would not be able to Bose-Einstein condense because of their spin-statistics.
We will discuss in this work a different sequence of events that could take place in the above system. In particular, we will show that under certain conditions, instead of forming neutral atoms, the charged scalars could themselves condense, neutralizing by this condensate the background charge of the fermions.
Especially interesting we find the case when the system has a net overall charge to begin with. In this case, although the resulting substance is charge neutral in the interior of the sphere, the net charge will reside on its surface. The substance in the bulk has distinctive properties. We will show in Section 2 that propagation of a photon in this substance is rather special. Even at zero temperature, the photon acquires a Lorentz non-invariant mass term. The transverse and longitudinal components of the photon have equal masses; the mass squares are proportional to and inversely proportional to the charged scalar mass. However, the group velocities of the transverse and longitudinal modes are different. The longitudinal mode is similar to a plasmon excitation of cold plasma. The transverse modes of the photon propagate as massive states. We will refer to this phase as the charged condensate, emphasizing that the charged scalars have undergone Bose-Einstein condensation, while the background fermions merely play the role of charge neutralizers in the bulk of the substance, and the net charge of the system is residing on the boundary.
The above mechanism is universal: the gauge field could be a photon or any other field, while the charged scalar could be a fundamental field, or a composite state made of other particles, in the regime where its compositeness does not matter. This may have applications in particle physics and condensed matter systems.
As a concrete example we imagine a reservoir, or a trap, in which negatively charged electrons and positively charged helium-4 nuclei, with a nonzero net charge, could be put together at densities high enough for an average inter-particle separation to be smaller than the size of a helium atom. In this case, the helium atoms would not form. The results of Section 2 cannot immediately be applied to this case, since electrons are lighter than the helium nuclei. However, we will argue in Section 3 that if temperature of the system is low enough for the helium de Broglie wavelength to be greater than both the average inter-particle separation and the Compton wavelength of the massive photon, then the charged helium-4 nuclei would fall into the condensate. Photons, in the bulk of this substance, would propagate with a delay caused by the acquired mass. Such a system would also have a net surface charge. Quantitative features of this example are discussed in Section 3. Our estimate for the temperature is within the range of the low temperatures that have already been achieved in experiments on Bose-Einstein condensation of atoms, see, e.g., [1].
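A back-of-the-envelope check of the densities involved (my own rough numbers, using the Bohr radius as the atomic-size scale, as Section 3 does):

```python
# Rough numbers for the electron / helium-4 example above (illustrative only)
a_B = 0.529e-10          # Bohr radius, m
m_He = 6.64e-27          # helium-4 nucleus mass, kg

n = a_B ** -3            # number density at which the mean separation equals a_B, 1/m^3
rho = n * m_He           # corresponding mass density carried by the nuclei, kg/m^3
print(f"n ~ {n:.2e} m^-3, rho ~ {rho:.2e} kg/m^3")   # ~7e30 m^-3, ~4e4 kg/m^3
```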
In the above example the charged condensate containing droplet was assumed to be held together by a rigid boundary or external fields in a laboratory. In Section 4 we point out that gravity could play the role of the stabilizing force, and briefly discuss possible manifestations of the charged condensation in compact astrophysical objects.
A few comments on the literature. The pion condensation due to strong interactions is well known [2]. In this work we discuss condensations due to electromagnetic interactions instead (or in more general case, due to some Abelian interactions). It was shown in Ref. [3] that the constant charge density strengthens spontaneous symmetry breaking when the symmetry is already broken by the usual Higgs-like nonlinear potential for the scalar. In our work the scalar has a conventional positive-sign mass term. The fact that the conventional-mass scalar could condense in the charged background was first shown in [4]. However, the system considered in [4] is neutral, and thus, is physically different from the one studied in this work (see, brief comments after eq. (4.6) in [4]). An expanded discussions of the topics covered in the present work, with other possible applications will be presented elsewhere [5].
2. Basic mechanism. We consider a simplest model that exhibits the main phenomenon. Let us start with a system in an infinite volume and at zero-temperature. The classical Lagrangian contains a gauge field , a charged scalar field with a right-sign mass term , and fermions with mass
$\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}^2 + |D_\mu\phi|^2 - m_H^2\,\phi^*\phi + \bar{\Psi}i\gamma^\mu D_\mu\Psi - m_J\bar{\Psi}\Psi + \mu\,\Psi^{+}\Psi. \qquad (1)$
The chemical potential is introduced for the global fermion number carried by ’s (e.g., lepton, baryon or other number). The covariant derivatives in (1) are defined as for the scalars, and for the fermions. Their respective charges, and , are different in general. For simplicity we assume that .
To study the ground state it is convenient to introduce the following notations for the scalar, gauge field and fermions: , , and . In terms of the gauge invariant variables , and the Lagrangian, takes the form
$\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}^2 + \tfrac{1}{2}(\partial_\mu\sigma)^2 + \tfrac{1}{2}g^2 B_\mu^2\sigma^2 - \tfrac{1}{2}m_H^2\sigma^2 + \bar{\psi}i\gamma^\mu D_\mu\psi - m_J\bar{\psi}\psi + \mu\,\psi^{+}\psi, \qquad (2)$
where now and are a field-strength and covariant derivative for , respectively.
Fermions in (2) obey the conventional Dirac equation with a nonzero chemical potential. This implies a net fermion number in the system, . Since the fermions are also electrically charged, they set a background electric charge density. Such charged fermions would repel each other. In our case, however, the charge will be screened by the charged scalar condensate. One way to see this is to assume that such a self-consistent solution exists, and then check explicitly that it satisfied equations of motion, as we will do it below. We consider distance scales that are greater than an average separation between the fermions, so that their spatial distribution could be assumed to be uniform. Then, the background charge density due to the fermions could be approximated as , where is a constant. The magnitude of the latter is related to the value of the chemical potential . In particular, a self-consistent solution of the equations of motion implies that , where denotes the Fermi energy of the background fermion sea, and is related to as follows, .
The rest of the equations of motion derived from (2) are:
$\partial_\mu F^{\mu\nu} + g^2 B^\nu\sigma^2 = g\bar{J}^\nu, \qquad \Box\,\sigma = g^2 B_\nu^2\,\sigma - m_H^2\,\sigma. \qquad (3)$
The Bianchi identity for the first equation in (3), , can also be obtained by varying the action w.r.t. . For a constant charge density, , the theory with the scalar field (1) admits a static solution with constant and :
$\langle B_0\rangle = B_{0c} \equiv \frac{m_H}{g}, \qquad \langle\sigma\rangle = \sigma_c \equiv \sqrt{\frac{\bar{J}_0}{m_H}}. \qquad (4)$
The charge density stored in the condensate, , equals to , by virtue of (4). Hence, the total charge density , vanishes. The ground state is charge-neutral in its bulk. On the other hand, a nonzero in (4) suggests that there must be an uncompensated charge on a surface at infinity, as it will be the case (see below).
Before we continue with studies of small perturbations about the solution (4), we would like to make four essential comments:
(i) The expression for the gauge field in (4) scales as , and is non-perturbative in its nature. Moreover, it diverges in the limit . This seeming non-decoupling of the charged scalar field results from the fact that we’re dealing with a constant background charge density in an infinite volume, i.e., with an infinite background charge. It is not surprising then, that such a background is capable of affecting a charged state of an arbitrary mass. Moreover, when exceeds the fermion mass, our averaging procedure over the background charges should not be applicable in general.
(ii) In regard with the above discussions, it is instructive to regularize the problem by considering a finite volume ball of a radius . A nonzero in (4) suggests that there must be an uncompensated charge on the surface of the ball, which tends to the value, , as . Indeed, such a charge could give rise to a constant in the interior of the ball, where , in analogy with a static potential inside a conducting ball with surface charge . This is indeed what happens in the present case. These and other finite volume effects are discussed in detail in Section 3.
(iii) Unlike for the fermions, we have not introduced chemical potential for the scalars. However, nonzero acts as dynamically induced chemical potential for the perturbations of the scalar. Its value in the ground state, , is consistent with the expectation that the chemical potential be equal to the mass of the scalar in Bose-Einstein condensate.
In general, we could have introduced chemical potential for the charged scalar, . The above described condensation mechanism would still take place with the result, , and , instead of (4). The charge density in the condensate in this case would read, , ensuring charge neutrality of the substance in its bulk, but in general there would be a nonzero surface charge, unless and .
(iv) So far our discussions have been classical. Upon quantization the charged condensate can be thought of a zero-momentum state with a non-zero occupation number of the charged scalar field quanta. It is useful to consider small temperature in the system, in which case the de Broglie wavelength of the condensed scalars, , will exceeds the average inter-particle separation . Thus, it makes sense to think of the charged condensate, as of any other Bose-Einstein condensate, to be a macroscopically occupied mode. The specifics of our case is that this macroscopic state of electrically charged scalars can exist even when the Compton wavelength of the corresponding massive photon is greater than the average interparticle separation between the scalars. In the bulk of the condensate the charge is balanced by the background charge density of fermions.
The uniform fermion background sets a preferred Lorentz frame. We study the spectrum and propagation of perturbations in this background frame. For this we introduce small perturbations of gauge and scalar fields, and , as follows:
$B_\mu = B_{0c}\,\delta_{\mu 0} + b_\mu(x), \qquad \sigma = \sigma_c + \tau(x). \qquad (5)$
The Lagrangian density for the perturbations reads
$\mathcal{L}_2 = -\tfrac{1}{4}f_{\mu\nu}^2 + \tfrac{1}{2}(\partial_\mu\tau)^2 + \tfrac{1}{2}g^2\sigma_c^2\, b_\mu^2 + 2g\,m_H\sigma_c\, b_0\tau + \ldots \qquad (6)$
Here denotes the field strength for , and we dropped all the fermionic terms as well as the cubic and quartic interaction terms of ’s and . The last term in (6) is Lorentz violating. Calculations of the spectrum of the theory is non-trivial but straightforward. We briefly summarize the results. First, is not a dynamical field, as it has no time derivatives in (6). Therefore, it can be integrated out through its equation of motion, leaving us with the equations for three polarizations of a massive vector , and one scalar . These constitute four physical degrees of freedom of the theory. The transverse part of the vector obeys the free equation
$(\Box + g^2\sigma_c^2)\,b^T_j = 0, \qquad \text{where } b^T_j \equiv b_j - \frac{\partial_j}{\Delta}(\partial_k b_k). \qquad (7)$
Therefore, the two states of the gauge field carried by have the following mass
$m_g^2 = g^2\sigma_c^2 = \frac{g^2\,\bar{J}_0}{m_H}. \qquad (8)$
Moreover, the frequency and the three-momentum vector of these two states obey the conventional dispersion relation, .
The longitudinal mode of the gauge field , and the scalar , on the other hand, give rise to the following Lorentz-violating dispersion relations (valid for )
$\omega_\pm^2 = p^2 + 2m_H^2 + \tfrac{1}{2}m_g^2 \pm \sqrt{4p^2 m_H^2 + \left(2m_H^2 - \tfrac{1}{2}m_g^2\right)^2}. \qquad (9)$
The r.h.s. of (9) is positive. Both of these modes have masses which can be obtained by putting $p = 0$ in (9). One of them coincides with (8), and the other one has the mass squared equal to $4m_H^2$. Interestingly, the group velocities of the transverse and longitudinal modes of the massive vector boson are different. For , and for an arbitrary , the fastest ones are the transverse modes, they’re followed by the scalar, and the longitudinal mode is the slowest.
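A quick numerical sketch of the reconstructed relation (9) (my own illustrative check, with arbitrary values for $m_H$ and $m_g$; it is not taken from the paper) can be used to compare the propagation of the transverse mode of eqs. (7)-(8) with the two mixed longitudinal/scalar branches:

```python
import numpy as np

m_H, m_g = 1.0, 0.3    # assumed illustrative values, arbitrary units

def omega_pm(p, sign):
    # omega_±^2 = p^2 + 2 m_H^2 + m_g^2/2 ± sqrt(4 p^2 m_H^2 + (2 m_H^2 - m_g^2/2)^2), eq. (9)
    rad = np.sqrt(4.0 * p**2 * m_H**2 + (2.0 * m_H**2 - 0.5 * m_g**2)**2)
    return np.sqrt(p**2 + 2.0 * m_H**2 + 0.5 * m_g**2 + sign * rad)

p = np.linspace(0.01, 5.0, 500)
w_plus, w_minus = omega_pm(p, +1.0), omega_pm(p, -1.0)
w_T = np.sqrt(p**2 + m_g**2)          # transverse modes, eqs. (7)-(8)

# group velocities d(omega)/dp by finite differences
v_T, v_p, v_m = np.gradient(w_T, p), np.gradient(w_plus, p), np.gradient(w_minus, p)

print("masses at p->0:", omega_pm(1e-6, +1.0), omega_pm(1e-6, -1.0))   # ~2*m_H and ~m_g
print("group velocities near p ~ 1:", v_T[100], v_p[100], v_m[100])
```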
In the limit , (9) describes a massive longitudinal component of a vector bosons of mass , and a massless scalar, in agreement with (6). The limit , however, is discontinuous, since for any nonzero in (6) one has to satisfy the Bianchi identity which would not appear as a constraint if had been set to zero in (6) from the very beginning.
It is important to specify the limits of applicability of the above condensation mechanism. (I) The Lagrangian (1) could contain a quartic interaction term for the scalar . It is straightforward to check that our results will hold as long as . (II) The scalar could have an additional Yukawa term, , where is a coupling, denotes either the or matrix depending on the spatial parity of , and denote fermions with different charges that render the Yukawa term gauge invariant. One, or both of these fermions could be setting the background charge density . The fermion condensate, , if non-zero, could act as a source for the scalar. In order for this not to change significantly our results, the condition should be met [footnote 1: The Yukawa coupling would also lead to new terms in the fermion mass matrix. Depending on a concrete context, this may or may not impose additional constraints.]. (III) Due to the above Yukawa couplings the scalar can decay. In order for the condensate phase to form in the first place, the “condensation time” has to be shorter than the lifetime of the . Throughout the work we will be checking the conditions (I-III) when appropriate.
If the number density of the background fermions is such that it allows for the average inter-particle separation between them to be greater than the Bohr radius of a fermion-scalar bound state, then, the fermions would likely form a crystalline structure at low temperatures. If the resulting crystal is due to the metallic bonding, that is it supports quantum gas of almost free scalars, then the condensation of the scalars described above would be similar to the condensation of Cooper pairs in superconductors. This case could be realized if .
On the other hand, if the average inter-particle separation between the background fermions is much smaller than the would-be Bohr radius of the fermion-scalar bound state, then the conventional quantum-mechanical considerations of the van der Waals, ionic, covalent or metallic bonding would not be applicable. This would corresponds to the choice . In this case, the background fermions do not have to form an ordered structure, and yet, we’d expect the condensation of scalars. Moreover, the argument that the crystalline structure should be lost at some high density is supported by the discussions in a paragraph below.
A special sub-case of the discussion in the above paragraph is when : It is straightforward to deduce from the results obtained above that the average inter-particle separation in the system, although is smaller than the would-be Bohr radius, is greater than the Compton wavelength of the massive photon. If so, then, the electric charges of the fermions and bosons are screened for all our purposes. The above described condensation mechanism, with a good approximation, would reduce to the standard Bose-Einstein condensation of (almost) free scalars. This system would behave as a two-component substance of free fermions and condensed scalars.
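For the free-scalar limit just described, the textbook ideal Bose gas condensation temperature (quoted here as a standard reference formula, not derived in this paper) is
$$k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},$$
with $n$ the number density and $m$ the mass of the condensing scalars; below $T_c$ a macroscopic fraction of the scalars occupies the zero-momentum mode.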
3. Finite-volume regularization. Here we would like to regularize the infinite-volume theory of the previous section. Consider a material ball of a fixed radius which has a built in constant charge density uniformly distributed over its volume. We will assume that such a ball is prepared “by hands” with appropriate charges, and address the question: How does the electric potential of this ball look like when the charged condensate described in the previous section compensates the fermion charge in its interior? This question is similar in spirit to the one we ordinarily study for, e.g., a uniformly charged insulating ball in electrodynamics.
We’ll be looking for static solutions of eqs. (3), which we parametrize as follows:
$B_0(r) = B_{0c} + \delta B_0(r), \qquad \sigma(r) = \sigma_c + \delta\sigma(r). \qquad (10)$
We focus on the solutions that in the interior of the ball satisfy and . Then the equations for and become:
$-\nabla^2\delta B_0 + m_g^2\,\delta B_0 = -2m_g m_H\,\delta\sigma, \qquad (11)$
$-\nabla^2\delta\sigma = 2m_g m_H\,\delta B_0, \qquad (12)$
where, as before, . Explicit solutions of the above equations can be readily found. For simplicity, we will present them for , i.e., when the term in the first equation can be neglected.
The solutions in the interior of the ball are
$\delta B_0(r) = \frac{1}{r}\left[c_1\sinh(Mr)\cos(Mr) + c_2\cosh(Mr)\sin(Mr)\right], \qquad (13)$
$\delta\sigma(r) = \frac{1}{r}\left[-c_1\cosh(Mr)\sin(Mr) + c_2\sinh(Mr)\cos(Mr)\right], \qquad (14)$
where , and $c_1$ and $c_2$ are constants to be determined from matching these solutions to the exterior ones.
Outside of the ball we approximate the solutions to be
$B_0 = \frac{Q}{r}, \qquad \sigma = k\,\frac{e^{-m_H(r-R)}}{r}, \qquad (15)$
where $Q$ is a yet-unknown effective charge of the ball, which should be determined from the matching conditions, and which we expect to be mostly concentrated near the surface. By matching the solutions and their first derivatives at $r = R$, we find
$c_1 = \frac{2}{gD}\Big[\,m_g(m_H R+1)\big(\sinh(MR)\sin(MR)+\cosh(MR)\cos(MR)\big) + m_H\big(\sinh(MR)\sin(MR)-\cosh(MR)\cos(MR)-\tfrac{m_H}{M}\sinh(MR)\cos(MR)\big)\Big],$
$c_2 = \frac{2}{gD}\Big[\,m_g(m_H R+1)\big(\sinh(MR)\sin(MR)-\cosh(MR)\cos(MR)\big) - m_H\big(\sinh(MR)\sin(MR)+\cosh(MR)\cos(MR)+\tfrac{m_H}{M}\cosh(MR)\sin(MR)\big)\Big].$
While, for the charge we obtain the following expressions:
$Q = \frac{1}{gD}\Big[\big(m_g(m_H R+1)+m_H(m_H R-1)\big)\sinh(2MR) - \big(m_g(m_H R+1)-m_H(m_H R-1)\big)\sin(2MR) + \big(2m_H M R - m_H^2/M\big)\cosh(2MR) + \big(2m_H M R + m_H^2/M\big)\cos(2MR)\Big],$
where $D$ is the common denominator appearing in these expressions. Finally, the constant $k$ is determined as
$k = \frac{1}{gD}\Big[-(m_g+m_H)\sinh(2MR) - (m_g-m_H)\sin(2MR) + 2m_g M R\cosh(2MR) + 2m_g M R\cos(2MR)\Big].$
In the case of physical interest, , the above solutions have a number of interesting properties. The net charge density in the ball, , is exponentially small in the interior, except in a narrow spherical shell near the surface of width . Thus, the charge is screened in the bulk of the ball, but there remains an unscreened surface charge. In this limit the effective charge of the ball is . This system is characterized by the conserved electric charge , and conserved fermion number .
If we increase , with all the other parameters held fixed, the effective charge should also grow linearly with in order for the condensate phase to be possible inside the ball. Put in other words, in order to prepare a ball of a given radius with the charged condensate phase inside, one has to retain a specific amount of charge defined in (S0.Ex4), on its surface. Hence, in the infinite volume limit considered in the previous section, there is “a surface at infinity” that carries charge. This charge is responsible for the constant in (4).
In the bulk of the ball the electric field and the electromagnetic energy are negligible. Closer to the boundary, however, the surface energy becomes non-zero due to the varying electric field. The resulting expression scales as
$\mathrm{Energy}\;E \;\propto\; \frac{Q^2}{R} \;\propto\; \frac{m_H^2 R}{g^2}. \qquad (20)$
From our solutions it is also straightforward to get the scaling of the volume energy well within the ball; it reads as .
Let us consider an example of a physical system in which the charged condensate could potentially be obtained. Suppose in a laboratory one could prepare a reservoir, or a trap, in which negatively charged electrons and positively charged helium-4 nuclei, with a net negative charge, could be put together. Consider densities of these particles high enough so that the average separation between the particles, , is smaller than the size of a helium atom, which we estimate for simplicity to be the Bohr radius ( denotes the fine-structure constant, and is the electron mass; we still stay somewhat lower than nuclear densities). As long as the helium atoms in the substance would not form. According to the discussion at the end of Section 2, at high-enough densities (but still somewhat below the nuclear ones) we would not expect the crystalline structure to form either. Can the charged condensate be formed in this system? Strictly speaking, the calculations of the previous section are not directly applicable to this case, because electrons are lighter than the helium-4 nuclei and averaging over the electron positions to calculate the photon mass may not be a good approximation. In this case we would expect the photon mass squared to be determined by , instead of , which should be applicable when the fermions are heavier than the scalars. We can introduce small temperature in the above system to see under what conditions the condensation would take place. Once the thermal de Broglie wavelengths of the helium-4 nuclei have overlaps with each other, and as long at the photon Compton wavelength is shorter than the thermal de Broglie wavelength, the system can be treated as a macroscopic mode, or the condensate. The former condition, , would suggest that , while the latter, , would give a stronger bound (we use as the photon mass squared). Temperatures reached in experiments on Bose-Einstein condensation of atoms are within this range, see, e.g., [1].
Let us look at other characteristics of this system in the condensate phase. Suppose the size of the sphere, or the trap we are dealing with, was . Then, the number of electrons and helium-4 nuclei would have to be for helium atoms not to form. The total mass of these particles would be . Moreover, the photon in this substance would acquire the mass , while the unbalanced charge of units would be residing near the surface, in a narrow spherical shell of size . (The electric field strength near the surface of such a sphere would be enough to ionize the air, so we assume that it’s placed in a vacuum chamber).
Propagation of light in the bulk of this substance would proceed with a delay caused by the induced photon mass . For simplicity, we have considered above the system of a macroscopic size, but nothing prevents one to look at much smaller systems, e.g., for a mm size system the required number of electrons and helium-4 nuclei would have to be , and the mass of the system .
Suppose a ball of a fixed radius and charge determined by the expression for $Q$ above with the charged condensate had been prepared. What happens if we gradually bring to the ball’s surface additional charges that would decrease or increase $Q$? In terms of the theory considered above, this would imply that we’re adding a nonzero scalar chemical potential term , as discussed in the comment (iii) on pages 3 and 4. In this case, the value of inside the ball would change to maintain the value of the effective chemical potential, , to be equal to . In this case, one should expect the relation for $Q$ above to be modified.
Before turning to the next section, let us comment on certain limiting cases. If , for fixed and finite , we would expect the scalar field to decouple and the solution to turn into the one for the potential of an insulating ball populated by a constant charge density, for which the potential equals to inside, and to outside. On the other hand, this would imply that . However, our expansion breaks down in this regime, and the solutions (13) and (14) are no longer applicable. In the full perturbative expansion, the l.h.s. of equations (11) and (12) include the non-linear terms
$+\,g\,m_H\,\delta\sigma^2 + 2g\,m_g\,\delta\sigma\,\delta B_0 + g^2\,\delta\sigma^2\,\delta B_0, \qquad (21)$
$-\,g\,m_g\,\delta B_0^2 - 2g\,m_H\,\delta\sigma\,\delta B_0 - g^2\,\delta\sigma\,\delta B_0^2, \qquad (22)$
respectively. When these terms become relevant, and in fact recover the standard electrodynamics result: . Moreover, at some point when exceeds the background fermion mass, mobility of the fermions will play a role and, in general, our results should not be immediately applicable.
Alternatively, we could look at the limit in which for a fixed , i.e., . In this case we have a massless photon and a massive scalar, with scaling as . Since this implies that , the same argument as above applies and the solutions (13) and (14) are not applicable.
Finally, in the limit we return back to equations (11) and (12) and now take so that we neglect the r.h.s. of the first equations. Then, it would seem that as the solutions approach the trivial ones, and . To see how we arrived at this erroneous result we again return to the non-linear terms (21) and (22) which become significant in this limit. Retaining these terms in our equation for , we set and recover the expected electrodynamics result.
In the present work we have left out the question of the existence of a soliton with the charged condensate phase inside, that would be stable due to surface effects. Such an object would be somewhat similar to a droplet in a liquid drop model of the nucleus (see, e.g., [6]). The related issues will be discussed in [5].
4. Comments on compact objects. In this section we will use the power of gravity as a stabilizer to suggest a possible manifestation of the charged condensation in astrophysics. We consider compact objects. In a general setup, due to energy considerations, the condensing scalar would be the lightest charged scalar available in the spectrum [5], that could condense before decaying. If no new light charged scalars exist, then a first candidate would be a charged pion. However, in order for pions not to decay, one should consider high densities, e.g., conditions similar to the ones for pion condensation in neutron stars [2].
Charged condensate in compact objects with electrons and helium-4 nuclei could also exist. These objects could be held together by gravity, which is competing against the degeneracy pressure of the fermions [footnote 2: This is similar to the stabilization mechanism in white dwarfs and neutron stars.]. Since this mechanism is generic, and since we would expect any such object to contain a mixture of various species, we will discuss it in general terms of background fermions and charged scalars.
Consider a distribution of charged fermions and charged scalars with the net electric charge . Such a distribution could collapse under the influence of gravity into a compact object, a droplet. Below we consider a regime in which gravitational force is dominating over the electrostatic forces at the surface of the droplet. Moreover, we will assume that the temperature in the interior is low enough for all particles to be treated non-relativistically. Then, at a certain temperature, there should be a phase transition in the interior into the charged condensate state. At that point the relation will be satisfied.
To get qualitative estimates of the size of such a droplet we will ignore the difference between the values of and , and minimize the energy as a function of the radius at a fixed value of the charged particle number $N$. Since these discussions are qualitative, we’ll be omitting factors of order 10 or less. The total energy of a droplet reads:
$E(R) = m_H N + N\sqrt{p_J^2 + m_J^2} - \frac{G M^2}{R}, \qquad (23)$
where the first term is the energy of the condensate; the second term is the energy of a non-interacting gas of charged particles that give rise to the background density (hence the subscripts in that term); and the last term is due to gravity, where G denotes Newton's constant (we'll be using the Planck mass $M_{\rm Pl}$), and M is the total mass of the droplet, which depends on N. We have ignored in (23) the surface terms, which are negligible in the regime where gravity is dominant.
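To see schematically where a critical radius comes from, here is a rough minimization sketch in the non-relativistic, degenerate regime; the Fermi-momentum scaling $p_J \sim N^{1/3}/R$, the identification $M \simeq (m_H+m_J)N$, and the use of $G = 1/M_{\rm Pl}^2$ are our assumptions, with all order-one factors dropped in the spirit of the text:
$$E(R) \simeq (m_H+m_J)N + \frac{N^{5/3}}{m_J R^2} - \frac{(m_H+m_J)^2 N^2}{M_{\rm Pl}^2 R}\,, \qquad \left.\frac{dE}{dR}\right|_{R_c}=0 \;\;\Rightarrow\;\; R_c \sim \frac{M_{\rm Pl}^2}{m_J\,(m_H+m_J)^2\,N^{1/3}}\,.$$
The critical radius therefore shrinks as $N^{-1/3}$ with growing particle number, and substituting $R_c$ back gives a binding correction to the rest energy that scales as $m_J N^{4/3}$, matching the structure of (24) below; presumably the dimensionless combination $B$ in (24) packages the ratio $M_{\rm Pl}/m_H$ up to numerical factors, though that identification is our inference rather than a definition taken from the text.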
$$E_c = (m_H + m_J)\,N\left[\,1 - \left(\frac{m_J}{m_H}\right)\left(\frac{N^{1/3}}{B}\right)^{4}\right]. \qquad (24)$$
The critical radius decreases with increasing , the bounds on which are:
$$\frac{1}{e^{1/2}}\left(\frac{m_H}{m_J}\right)^{3/4} B^{9/4} \quad\text{to}\quad \ldots \qquad (25)$$
Here the lower bound is due to the requirement that gravity be dominant in stabilizing this object, and the upper bound is for the relativistic gravitational and fermionic effects to be negligible. These objects are stable as long as the gravitational binding energy in (24) exceeds the electrostatic energy of uncompensated charges on its surface. This constraint is taken into account by the bounds (25).
In a simple case when the droplet is assumed to be made of electrons and the charged condensate of helium-4 nuclei, has to be close to the upper bound in (25), . The mass of this object is within an order of magnitude of the mass of the Sun, and its size is . This object has characteristics that are similar to those of neutron stars (except that it will have some surface charge, that was negligible in our considerations). However, propagation of light through such a cold and dense object will have specific characteristics described in Sections 2 and 3.
Acknowledgments. We’d like to thank R. Barbieri, V. Berezhiani, Z. Chacko, G. Dvali, M. Kleban, I. Klebanov, M. Laine, S. Mukhanov and N. Weiner for useful discussions. The work of GG is supported by NASA grant NNGG05GH34G and NSF grant 0403005. RAR is supported by James Arthur graduate fellowship.
## References
• [1] M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman and E. A. Cornell, Science 269 (1995) 198; K. B. Davis, M. O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn and W. Ketterle, Phys. Rev. Lett. 75 (1995) 3969.
• [2] A. B. Migdal, Zh. Eksp. Teor. Fiz. 61 (1971) 2210 [Sov. Phys. JETP 34 (1972) 1184]; R. F. Sawyer and D. J. Scalapino, Phys. Rev. D 7, 953 (1973).
• [3] A. D. Linde, Phys. Rev. D 14, 3345 (1976).
• [4] J. I. Kapusta, Phys. Rev. D 24 (1981) 426. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711045622825623, "perplexity": 357.9394294306847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00480.warc.gz"} |
https://www.physicsforums.com/threads/no-interaction-theorem.225026/ | # No-Interaction Theorem
1. Mar 29, 2008
### J.F.
"No-Interaction" Theorem
THEOREM. "No-Interaction" Theorem.
Suppose we seek a many-particle direct-interaction theory with the
following properties:
1. the theory is Lorentz invariant,
2. the theory is based on a Hamiltonian dynamics, and
3. the theory is based on independent (canonical) particle variables.
Then such a theory is only compatible with noninteracting particles.
"No-Interaction" Theorem in Classical Relativistic Mechanics
http://prola.aps.org/abstract/PR/v182/i5/p1397_1
Relativistic particle dynamics—Lagrangian proof of the no-interaction theorem
http://prola.aps.org/abstract/PRD/v30/i10/p2110_1
G. Marmo and N. Mukunda
Instituto di Fisica Teorica, Universita di Napoli, Napoli, Italy and Istituto Nazionale di Física Nucleare, Gruppo Teorico, Sezione di Napoli, Napoli, Italy
E. C. G. Sudarshan
Center for Particle Theory, Department of Physics, The University of Texas at Austin, Austin, Texas 78712
An economical proof is given, in the Lagrangian framework, of the no-interaction theorem of relativistic particle mechanics. It is based on the assumption that there is a Lagrangian, which if singular is allowed to lead at most to primary first-class constraints. The proof works with Lagrange rather than Poisson brackets, leading to considerable simplifications compared to other proofs.
2. Mar 29, 2008
### meopemuk
This theorem was first formulated and proven in a nicely written paper
D. G. Currie, T. F. Jordan and E. C. G. Sudarshan, "Relativistic invariance and Hamiltonian theories of interacting particles", Rev. Mod. Phys., 35 (1963), 350
Eugene.
3. Mar 6, 2010
### aspidistra
Re: "No-Interaction" Theorem
After reading a few pages of this paper, I got stuck.
Before posting my question, let me summarize the common abstract mathematical structure described in section 2 of this paper for both classical and quantum mechanics.
1)
Let R be a real linear space with a Lie bracket defined on it, i.e. R is a Lie algebra. Any $$A \in R$$ represents a quantity descriptive of some physical system. (I think what the authors mean is that A represents a measurable quantity or observable, am I right?)
2)
For $$F \in S \subset R$$, there exists a real bilinear functional ( , ) defined for all pairs A, F such that (A,F) is a real number. The real linear functional ( , F) on R represents a particular state of the system: it maps any $$A \in R$$ to an expectation value (measurement value) $$\langle A \rangle\equiv (A,F)$$
Combining 1) and 2): the real Lie algebra R and a linear functional on R specified by an element of S provide a complete description of one possible instantaneous state of the system.
3) Transformations of reference frame are represented by linear automorphisms of R generated by elements of $$L \subset R$$. To be specific, the automorphisms are defined by $$e^{[H]t}(A):=A+[A,H]t+\frac{1}{2!}[[A,H],H]t^2+...$$
, where $$A \in R, H \in L$$ and t is real number for which the series is meaningful.
4) Two further postulates need to be stated.
i)$$e^{[H]t}(F) \in S$$ if $$F\in S$$
ii)$$(e^{[H]t}(A),e^{[H]t}(F))=(A,F)$$ for any A in R and F in S
With the above statement, the authors have the following assertion in pp.354:
"we can generate a description of the system at time zero with respect to a second reference frame translated an amount t in time by letting F represent the state of the system in the second description as well as in the first, and by letting $$e^{[H]t}(A)$$ represent, in the second description, the quantity that was represented by A in the first description."
This statement contradicts what I have learned about the notion of time translation. See
pp.17-18 of the following lecture note:
ed.mvps.org/physics/files/Classical%20Mechanics.pdf
Taking the passive point of view (the reference frame itself transforms), the system should not be at time zero with respect to the second reference frame.
Maybe I misunderstand the notion of time translation; I hope someone can explain it to me or recommend a good reference. The two seemingly interchangeable terms "time translation" and "time evolution" especially trouble me.
Any help will be appreciated, thanks.
4. Mar 6, 2010
### meopemuk
Re: "No-Interaction" Theorem
It seems that some words are missing in this sentence. Could you please clarify what is your question?
Eugene.
5. Mar 7, 2010
### aspidistra
Re: "No-Interaction" Theorem
Un...I am sorry Eugene, my english is not good. I try my best to clarify my question.
The authors said:
"we can generate a description of the system at time zero with respect to a second reference frame translated an amount t in time by letting F represent the state of the system in the second description as well as in the first, and by letting e^{[H]t}(A) represent, in the second description, the quantity that was represented by A in the first description."
My understanding of this paragraph is that we transform our reference frame, which means we reset the origin of our time coordinate. If I understand it correctly, the situation of our physical system and the transformed reference frame should be like fig.5(a) in pp.18 of this link:
http://ed.mvps.org/physics/files/Classical Mechanics.pdf
Since the authors said (in pp.354 of the paper): "we may think of A and F as being part of the description of the system at time zero with respect to a given reference frame", I think the transformed description of the system should not still be at time zero with respect to the transformed reference frame. Instead of labeling the system at t'=0, the transformed reference frame should label the system at t'=b.
6. Mar 7, 2010
### Fredrik
Staff Emeritus
Re: "No-Interaction" Theorem
I don't understand the theorem and I don't understand its proof, but I noticed that one of the authors of the second article referenced in the OP is Sudarshan, one of the people who authored the original article, and that the new article claims to have a much simpler proof. So maybe it's better to try to read that one instead of the original.
I think I'm going to have to read one of these articles one of these days, but it won't be today.
7. Mar 7, 2010
### meopemuk
Re: "No-Interaction" Theorem
aspidistra,
I think I understand the origin of your confusion. In order to clear things up I would suggest you consider "instantaneous" observers, which exist and make their measurements only during very short time intervals.
To make it more clear, let me consider two instantaneous observers. One of them is "John now". Another one is "John 10 minutes later". These two instantaneous observers are related to each other by a time translation transformation (t=10). Suppose that these observers look at the same physical system - for example a kettle on the stove. From the point of view of the observer "John now" the water in the kettle is cold. From the point of view of "John 10 minutes later" the water is boiling. To describe this situation mathematically we can consider water temperature, which is an observable pertinent to the observed system - the kettle. I will denote this observable T. The temperature measured by "John now" will be denoted T(0). The temperature measured by "John 10 minutes later" is T(10). The time evolution of this observable (i.e., the transition from T(0) to T(10)) can be described in the Hamiltonian formalism. The physical system (the kettle on the stove) is described by the Hamiltonian H, which is a generator of time translations. So, in order to connect observations of the two instantaneous observers we need to apply an exponent of the Hamiltonian to the observable. Then in your notation we can write (t=10)
T(10) = e^{[H]t}T(0) = e^{[H] 10}T(0).....................(1)
The chosen physical system (stove + kettle + water) is very complex. So, its Hamiltonian H cannot be written in a simple form, and it is very difficult to perform calculation (1) in any reasonable approximation. However, the same Hamiltonian rules of time evolution apply to all isolated physical systems both simple and complex.
The good thing about using "instantaneous" observers is that the same approach that we've used above for time translations works for all other 9 types of transformations of the Poincare group. For example, consider a third observer "Bob now in another room". So, we can ask what are the results of measurements performed by this observer? This observer is displaced in space with respect to "John now". So, in order to answer the question we need to apply a space translation transformation to our observable, the temperature. In order to do that we need to know the space translation operator (total momentum) P that is pertinent to our physical system. Then, if the distance between John and Bob is 5 meters (x=5), the temperature measured by Bob can be found from the formula
T(x=5) = e^{[P]x}T(0) = e^{[P] 5}T(0).....................(5)
If operator P is chosen correctly, then we should obtain the obvious result T(x=5) = T(0).
Similarly, we can introduce the generator of rotations J and the generator of boosts K pertinent to the observed system (the kettle on the stove). The ten generators (H, P, J, K) must form a representation of the Lie algebra of the Poincare group. Their exponents e^{[H]t}, e^{[P]x}, e^{[J]a}, e^{[K]v} must form a representation of the Poincare group itself.
Eugene.
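As a concrete, deliberately trivial illustration of the $e^{[H]t}$ rule (our own example, not one from the paper or the thread): take a single free particle with Hamiltonian $H = p^2/2m$ and the canonical bracket $\{q,p\}=1$. Then
$$[q,H]=\{q,H\}=\frac{p}{m}\,,\qquad [[q,H],H]=\Big\{\frac{p}{m},H\Big\}=0\,,$$
so the series terminates after two terms and
$$e^{[H]t}(q)=q+\frac{p}{m}\,t\,,\qquad e^{[H]t}(p)=p\,,$$
which is just the familiar free motion $q(t)=q+vt$; for an interacting Hamiltonian the higher nested brackets no longer vanish and the full series (as in formula (1) above) is needed.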
8. Oct 20, 2010
### Federation 2005
Re: "No-Interaction" Theorem
The simplest statement of the theorem is that in any system whose components are "particles" (i.e. irreducible representations of the Poincare group), if the angular momentum and linear momentum are additive, then so must be the energy (and mass moment). Ergo, the potential energy must be 0 and there is no interaction.
I think the additivity requirements correctly state the technical assumptions.
The reason for the problem rests with Newton's Third Law. The explicit statement of the law makes it immediately clear what the problem is: "to each action, there exists an equal and opposite reaction AT THE SAME TIME." If the two interactions are separated in space, this is no problem in non-relativistic theory, since "at the same time" is absolute. But in Relativity -- huge problem. There is no longer an absolute "at the same time". The rug is pulled out from under the Third Law.
That law is the underpinning to all interactions in non-relativistic theory. Is there a way out, you ask? The folklore in the Physics community goes on to assert that this "proves fields are necessary".
It proves no such thing, however! What it does, instead, is prove that the very concept of interaction for many-body systems is impossible -- and this includes what we normally refer to as quantum field theory!
Indeed, this problem is not isolated to classical theory. It ramifies to quantum theory, and the Leutwyler theorem (which is the name of the "no-interaction" theorem) morphs into its quantum version -- the Haag Theorem. The Haag Theorem states that in quantum field theory there is no non-trivial interacting many body system (i.e. no Fock space with a non-trivial interaction); the same as the Leutwyler Theorem states. For quantum fields, this specifically excludes the very setup that perturbation theory assumes!
The reason fields do not solve the problem should be clear. Think of a system composed of two isolated regions surrounded by space. We know from experience (e.g. a star with a large body orbiting it) that there is, indeed, a third-law action-reaction going on. The action of the "planet" is offset simultaneously by the recoil of the "star".
So, how would this be communicated by a field, you ask? The action-reaction pair happen at events that are at a spacelike separation. Any communication of one to the other would be going faster than light speed.
So, think of the situation where you transform to a frame of reference where have the action and reaction not being simultaneous. When one body acts, before the other reacts, what is the condition of the system and where did the extra impulse go? It's as if the impulse shot across space faster than light, leaving one body and coming into the other.
If you want to employ a field to get the impulse to communicate, it's not enough to have wave propagation. Waves do not communicate this aspect of the interaction. What they DO communicate is the kind of shock-wave or wiggling that would occur if you abruptly alter the position of one body. They communicate the 1/r "radiative" part of the field. However, they do NOT communicate the 1/r^2 "Coulomb" part of the field.
If you go into quantum field theory, and carefully analyze who's contributing to what, what you'll end up finding is that for a field such as electromagnetism, there are 4 modes. Two modes are "transverse" and are associated with the two helicities of a light-speed carrier (the photon). One mode is "longitudinal", while the last is "scalar". In effect, the longitudinal photon is a tachyon, while the scalar photon is a bradyon (slower than light). Their observable effects cancel one another out (when you take expectation values), so you don't see anything actually being communicated by them. More precisely, they contribute nothing to the expectation value of the energy, so correspond to no propagation of energy at all. Instead, what they conspire to do is give you the Coulomb part of the field. For a source-free region of space, the Coulomb part of the field is 0 and the two extra modes contribute nothing at all to the force. (That is: the free field has 0 Coulomb part and is purely radiative).
In particular, when we say the longitudinal mode is "off shell" and "virtual", the upshot of what this is really saying is that it is not a luxon at all. The sense in which this mode is off shell is that the total value of P^2 - 2MT + (1/c)^2 T^2 is non-zero, where T = E is its kinetic energy, M = E/c^2 its "relativistic mass" and P its momentum.
For ordinary slower-than-light systems, one has M = m + (1/c)^2 T and E = M c^2. So the quadratic invariant reduces to P^2 - (E/c)^2 + (mc)^2 = 0. For luxons, like the photon, E = T = M c^2, M = (1/c)^2 T, and (in effect) m = 0. So, once again, the invariant reduces to 0. For the longitudinal modes, the invariant is NOT zero. Hence, it is termed "off shell".
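Spelling out that reduction for the slower-than-light case (a short check added here for the reader, using only the substitutions just stated, i.e. $T = E - mc^2$ and $M = E/c^2$):
$$P^2 - 2MT + \frac{T^2}{c^2} = P^2 - \frac{2E}{c^2}(E-mc^2) + \frac{(E-mc^2)^2}{c^2} = P^2 - \Big(\frac{E}{c}\Big)^2 + (mc)^2\,,$$
which indeed vanishes on the usual mass shell, as claimed.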
What this means becomes perfectly clear once you fall back to non-relativistic theory.
In non-relativistic theory, (1/c)^2 is replaced by 0. The analogue to "off-shell longitudinal mode" then corresponds to a mass M = 0 representation, with non-zero momentum P (i.e. an infinite speed "action at a distance" mode). This type of representation has never been given a name cut from the same cloth as "tachyon", "tardion" (or "bradyon") and "luxon". So, you can coin the term "synchron" for it.
For a synchron, one can always find a (non-unique) frame of reference in which its kinetic energy T = 0. Then the quadratic invariant P^2 - 2MT + (1/c)^2 T^2 reduces to P^2. This invariant is just the impulse transferred by the synchron. A synchron is an action-at-a-distance transfer of impulse across space. Denoting the impulse by Pi, the invariant says P^2 - 2MT + (1/c)^2 T^2 = Pi^2, which is non-zero. This is the non-relativistic version of being "off-shell".
So, when you pass over to relativistic theory, the analogous situation is a system whose momentum P, kinetic energy T and "relativistic mass" M give you a non-zero total for the quadratic invariant. That "off-shell" total is just Pi^2, the square of the impulse transferred.
Thus, the virtual mode is just the non-relativistic version of the synchron. Its non-zero value for the quadratic invariant (which is what makes it off-shell) is the relativistic analogue of impulse squared. It's the instantaneous transfer of impulse across space.
Thus, we find that the community folklore that "fields got rid of action at a distance" and "fields resolved the problem with the Third Law" is just a myth. Field theory didn't get rid of the Coulomb part of the field nor its "simultaneous" action, but simply called it by a different name and buried it under disguise.
To date, there is no clear cut integration of the notion of tachyon and virtual mode, and no clear cut integration of the tachyon + luxon with the non-relativistic analogue, the synchron, in the literature. So, part of the problem with the no interaction theorem may simply be that we have not fully worked out the basic concepts in the right way and have not correctly accounted for how relativistic concepts are to be linked up with their non-relativistic analogues.
So, there MAY be a way out of the no-interaction theorem that somehow "reverse engineers" Newton's Third Law and the related concept of "action at a distance synchrons" into relativistic form. And it may give more cohesive interpretation of the Coulomb part of the field and how it's propagated by the virtual modes. But this framework would represent a slight upgrade beyond what we presently call relativistic theory.
So, my point of view is that the No Interaction Theorems (both Haag and Leutwyler) signal the presence of a gap or oversight in what we presently call Relativity and point the way toward the upgrades required to fill in this incompleteness and to salvage or recover a Relativistic form of Newton's Third Law.
9. Oct 20, 2010
### bcrowell
Staff Emeritus
Re: "No-Interaction" Theorem
Here's a recent talk on this topic that I found interesting: http://streamer.perimeterinstitute.ca/Flash/1a7787fa-5478-49ca-82c2-4b7a342117c8/index.html [Broken]
Hmmm...this was the first place where you lost me. How does it follow from this that the PE is zero?
Are you asserting that even classically, it's impossible to have an interacting many-body system, and that it doesn't matter whether the theory describes particles or fields? I don't see how that can be right. For example, take a system of point charges interacting electromagnetically in special relativity. You have conservation of energy, momentum, and angular momentum. What is wrong with this theory?
[EDIT] Maybe I'm misunderstanding your references to classical theories...? Haag's theorem http://en.wikipedia.org/wiki/Haag's_theorem is a theorem about quantum field theories.
My understanding is that results like the CJS no-interaction theorem are purely about quantum field theories, and that they basically just say that you can't use the theory to predict the time-evolution of observables -- but you can still find quantities like S-matrices and energies of bound states.
Last edited by a moderator: May 5, 2017
10. Oct 21, 2010
### Fredrik
Staff Emeritus
Re: "No-Interaction" Theorem
I understand almost nothing about that theorem, but one thing I've been told is that it applies both to classical and quantum theories. They used some kind of clever notation to make sure that a single proof would cover both cases.
I still haven't read Federation 2005's post, but I will.
11. Oct 21, 2010
### Fredrik
Staff Emeritus
Re: "No-Interaction" Theorem
@Federation 2005: That's an interesting post, but unfortunately there's a lot I don't understand in it. Can you suggest some articles or books that cover the stuff you're talking about in more detail? I'm going to have to study this stuff sooner or later, and I think I will find your summary of the key points useful when I do, but I'm still hoping there's something I can read that's easier than the original articles.
By the way, a Google search for "Leutwyler theorem" returns your post as #1, so it doesn't seem to be a well-known term. What do you mean by it? Is it what many others call the Currie-Jordan-Sudarshan theorem, or is it a different theorem?
12. Oct 21, 2010
### bcrowell
Staff Emeritus
Re: "No-Interaction" Theorem
I'm sure Fredrik is right about it applying to classical theories as well as quantum-mechanical ones.
On rereading Federation 2005's #8, what strikes me is that the word "Hamiltonian" never appears.
If the CJS no-interaction theorem tells us that a certain classical theory can't be expressed in Hamiltonian form, then our reaction should probably be "so what?" The theory still works if we have to cast it in some other form, like a Lagrangian theory.
It's only a big deal if the theorem tells us that a certain quantum theory can't be expressed in Hamiltonian form, because quantum mechanics depends on Hamiltonians.
13. May 7, 2013
### Federation 2005
No Interaction Theorem: Root Cause and Resolution
Actually, to follow up on my earlier reply (a few years back), a closer examination of the theorem (both the 1963 Currie et al. version and the later Leutwyler generalization of it) shows that they do not assume additivity for angular momentum and momentum; they infer it from the transformation properties posed for the worldline coordinates. Then follows the slow descent into triviality.
The following also applies to a later formulation of the 2-body no interaction proof published by Jordan (1968, if I recall).
To be more precise, additivity for momentum and angular momentum holds only up to an adjustment of the momentum (i.e. a canonical transformation). The adjustment made by this transformation also yields, as a result, certain transformation properties for the momentum which, if assumed at the outset, would equivalently characterize the transformation made by the adjustment.
Denoting the infinitesimal forms of rotation, boost and translation respectively by ω, $\upsilon$ and ε and infinitesimal time translation by $\tau$, the transformation assumed for a worldline coordinate q is
$\delta$q = ω$\times$q + (α$\upsilon$ - $\tau$)v + ε
where v = {q, E} is the velocity, with E being the time translation generator.
If you apply this to the bracket relation {q, q'} for two different particle coordinates (noting that the bracket is the 0 dyad), then the transform should be 0. Plugging in the time translation and boost you immediately get that the inverse of the "coefficients of inertia" matrix (i.e. the matrix of all the {q, v'} brackets) has zero cross terms -- or more precisely, cross terms that admit only contact terms (ones with δ(q - q') factors in them). The contact terms are not included in the analyses posed by the no interaction theorems and probably won't change the situation much at all.
This entails that E separates into a sum of kinetic terms, i.e. terms dependent on only one of the particle momenta; plus a potential term (i.e. a term independent of all the momenta).
None of this holds in the non-relativistic case. That extra factor α is 1/c2 for relativity and 0 for non-relativistic theory. That factor controls the separability.
The root of the problem is clearly seen here: the correspondence limit employed by this theory has a discontinuity at α = 0. Empirically, this is a big no no. Qualitative differences should never result from a continuous parameter adopting a specific value, because that would in principle provide a way of measuring a value with infinite precision.
In other words, we're using the wrong correspondence limit.
If you go back and take a closer -- more modern -- view of non-relativistic theory, you will see that its symmetry group is now understood to not be the 10-dimensional Galilei group, but the 11-dimensional Bargmann group. In order to have a more cohesive correspondence limit, this then requires that whatever symmetry group we adopt for relativity should have the Bargmann group as its limit. This leaves no alternative but for this group to have 11 dimensions, not 10.
The modification takes place in the brackets between the boost and translation generators and entails a slightly different notion of time translation as a result. Let H be the new time translation generator. It will be defined so that it has the kinetic energy as its non-relativistic limit. The counterpart of the central charge m of the Bargmann group will be a trivial central charge, which we'll call $\mu$. It is understood that it will have m as its non-relativistic limit, and that the original time translation generator E will continue to be regarded as the "total" energy, with the decomposition E = H + $\mu$/$\alpha$. But to hold true to the requirement for a consistent correspondence limit, it will be necessary to replace E by the "relativistic mass" M = $\alpha$E. Then the decomposition relation will take the form M = $\mu$ + $\alpha$H. In the non-relativistic limit M also goes to m. On the other hand, E has no limit. It's divergent (as seen by the fact that the parameter $\alpha$ was sitting in the denominator). So, it drops out.
Then the Lie brackets continue to have the same form as before, but with E rewritten as M. In particular [K, P] = M I, where I denotes the identity dyad. For $\mu$, the brackets are all 0.
The original time translation was {_, E}. Now it becomes {_, H}. The Leutwyler theorem still applies to the rotation generator J, boost generator K, translation generator P and E (which is now M). But H is exempt. Thus, {_, E} now represents "the time translation that would occur if the bodies involved were free and non-interacting", while the difference {_, H-E} represents the interacting part of the time translation generator.
For Leutwyler, the boost transform for q is now ambiguous. There are now two forms one can state this property in:
(a) { q, $\upsilon$$\cdot$K} = $\upsilon$$\cdot$q {q, αH}
or
(b) { q, $\upsilon$$\cdot$K} = $\upsilon$$\cdot$q {q, M}
In the absence of the 11th parameter, both forms are equivalent with H = E = M/$\alpha$. For the extended Poincare' group, they are no longer equivalent.
Version (a) yields a Leutwyler result for the full 11-dimensional symmetry group. Version (b) breaks the no interaction theorem and allows in non-trivial interactions.
The energy difference H - E = -$\mu$/α plays the role of the potential U. The only requirement imposed on it is that all the Poisson brackets formed by the 10 other functions -- the 3 vectors J, K, P and the scalar M -- should be 0. This can be done with non-trivial results.
As a consequence, H decomposes into a sum of purely kinetic contributions from each body -- as in the non-relativistic case -- plus the potential U.
The only drawback to this revision is that this adjustment to the correspondence limit is still not quite enough! This is best seen by considering the general solution for interacting 2-body systems. When setting α = 0, one gets for U a function of the 3 scalar 2-body invariants formed from their relative speed and relative displacement, as expected. As soon as you turn on α, allowing it to be non-zero, it drops down to only 2 invariants: namely the ones whose non-relativistic limits are the relative speed and the "areal speed" of the radius vector. The component of velocity collinear to the displacement is lost in the translation. So, no Kepler laws.
So, there's even more subtlety that still lies hidden with the correspondence limit than what I've brought up here. That extra term went somewhere, after turning on α to allow it to be non-zero, and I'm hunting it down to see where it got lost. But it's there somewhere.
What makes the correspondence limit so non-trivial is (at least in part) that you have a complete topological change that takes place at α = 0. You can best see this by combining all the groups for all α (both positive and negative -- i.e. the 4-D Euclidean group -- as well as 0) into a single Poisson manifold. This manifold has the 11 coordinates of the duals of the Lie algebras above (which may simply be denoted J, K, P, H and $\mu$ as before), plus $\alpha$ as the 12th coordinate; and it has the Poisson bracket formed in the natural way from the Lie brackets, with it understood that {$\alpha$, _} = 0.
The Galilei group is sitting in there as the Lie group that has the submanifold for ($\alpha$, $\mu$) = (0, 0). The original Poincare' group produces each of the submanifolds ($\alpha$, $\mu$) = (1/V^2, 0), for all different values of V. They are immediately adjacent to one another, but the Galilei group is not simply connected, while the Poincare' group is.
This means that contraction can expose not just one order of $\alpha$, but an infinite tower of $\alpha$'s -- much like the Einstein-Infeld analysis of the gravity field. The analysis above only pushes the correction to the first order. The 3rd scalar 2-body invariant is hidden somewhere in whatever extra infrastructure is required to recover the 2nd order or higher.
A similar problem occurs when attempting to unify Einstein and Newtonian gravity as a one-parameter family of infrastructures, geometries and Lagrangian theories. Since the Bargmann and extended Poincare' groups are all little groups of the 4+1 de Sitter group, then a unified model for gravity can be formulated in 4+1 dimensions by requiring that there be an invariant covariantly constant field. The norm of that field, with respect to the 4+1 metric, is just $\alpha$. The resulting field equations include Newton's equation at $\alpha$ = 0, and Einstein's equations (for the most part) for $\alpha$ > 0. But there are important terms that get lost as soon as you get to $\alpha$ = 0 and the resulting field equations for the non-relativistic case become somewhat handicapped. The root of the problem is that the coupling coefficient in general relativity (when you do the dimensional analysis right) is actually 2nd order, not 1st -- it's proportional to 1/$\alpha^2$. The conversion to a constrained 5-D geometry allows one to reduce this only by one order. But this isn't quite enough to remove the discontinuity in the correspondence limit at $\alpha$ = 0.
Although: it is enough to write non-relativistic gravity in Lagrangian form in a way that's sufficient to include Newton's equation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8847718834877014, "perplexity": 677.5806654343548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948617.86/warc/CC-MAIN-20180426222608-20180427002608-00135.warc.gz"} |
https://socratic.org/questions/the-shorter-leg-of-a-30-60-90-right-triangle-is-7-5-inches-how-long-is-the-longe | Trigonometry
# The shorter leg of a 30°- 60°- 90° right triangle is 7.5 inches. How long is the longer leg and the hypotenuse?
Jun 8, 2015
The longer leg is $7.5 \sqrt{3}$ inches.
The hypotenuse is $15.0$ inches.
#### Explanation:
A right triangle with angles ${30}^{\circ} - {60}^{\circ} - {90}^{\circ}$ is one of the standard trigonometric triangles with sides in the ratio:
$1 : \sqrt{3} : 2$
If the shortest side is 7.5 inches,
the other two sides are
$7.5 \sqrt{3}$ and $15.0$ inches.
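As a quick numerical check (the decimal values are ours, rounded):
$7.5\sqrt{3} \approx 7.5 \times 1.732 \approx 12.99$ inches, and $2 \times 7.5 = 15.0$ inches.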
##### Impact of this question
1313 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6306488513946533, "perplexity": 3740.285253487384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00649.warc.gz"} |
https://docs.starburst.io/latest/installation/license-requirements.html | As mentioned in the overview, a license for Starburst Enterprise platform (SEP) includes support and enables numerous features.
After receiving a signed license file from Starburst Enterprise, it needs to be stored on all SEP nodes in your clusters.
The license file needs to be named starburstdata.license and located within the SEP installation directory as etc/starburstdata.license, or in whatever directory is configured as the etc directory with the launcher script. In the case of the RPM archive, this path is /etc/starburst/starburstdata.license.
Users of the CFT for SEP can provide the license information as part of the configuration.
Kubernetes users need to follow the available instructions, which are included in the complete documentation for Kubernetes usage with Helm. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38499173521995544, "perplexity": 4042.7262256058166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00026.warc.gz"} |
http://tex.stackexchange.com/questions/101857/producing-tikz-made-pdfs-and-using-includegraphics-on-them | # Producing TikZ-made .pdfs and using \includegraphics on them?
I'm helping TeX a professor's handwritten notes for publishing/pre-print purposes, and for producing certain graphics, I use PGF/TikZ. However, it seems he is having trouble compiling the code whenever TikZ is involved, so he wants me to compile the TikZ images separately, and then \includegraphics on all of them instead. (Seems a bit of a crime, but boss' orders I guess.)
However, trying to make the TikZ images standalone (i.e. including the TikZ code, some article document class, then compiling to .pdf), I am left with a huge amount of whitespace coming from the rest of the single page document. This leads to a lot of problems trying to \includegraphics with it, as the image itself is miniscule compared to all the blank space.
Is there a way to shorten the produced pages to fit exactly the dimensions of the TikZ-images? Or aside from dropping TikZ and going with a third party software, is there perhaps a better solution overall?
-
Use \documentclass{standalone} in your figures document. Also be sure to include the same packages as your professor, in order to guarantee that the same typefaces, font sizes, mathematical notation, etc. are used. Also try to create the figures in their final size, so that \includegraphics won't require any scale, width or height option. This way the size of the fonts and line widths will be consistent among all figures. – JLDiaz Mar 10 at 21:33
you could also look into the external library which also has an extra library to include the created PDF's without using TikZ. See the section Using External Graphics Without pgf Installed in the manual. – zeroth Mar 10 at 21:46
Both of these are fantastic answers, though I'm going with zeroth's. :) I tried using the standalone document class in and of itself, but it still required some fussing with the scaling to get it to work as if I inputted the TikZ code normally. On the other hand, the external library as described by zeroth is an excellent way to export all my TikZ code to separate .pdfs without having to manually compile separate .tex documents. And it didn't run into the above problem I had with the standalone document class either (which is kind of weird to be honest, but whatever). – Riem Mar 11 at 0:51
@zeroth Thanks. – percusse Mar 23 at 0:19
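To make the standalone suggestion from the first comment concrete, a minimal wrapper along these lines should produce a PDF cropped tightly to the picture (the border value and the packages to load are placeholders; mirror whatever the main document uses):
\documentclass[tikz,border=2pt]{standalone}
% load here the same packages, fonts and macros as the main document
\begin{document}
\begin{tikzpicture}
\draw [->] (0,0) -- (2,3);
\end{tikzpicture}
\end{document}
Each such file compiles on its own to a tightly cropped .pdf that can then be pulled in with \includegraphics without any scaling.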
As mentioned, you can use a package which mimics the tikzpicture interface but only requires that the images previously created by the external library be included.
This is very well explained in the manual of pgf, but I will explain how to use it.
First of, your regular local document will be having regular tikzset commands etc.
So:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{external}
\tikzexternalize
\begin{document}
Text ...
\tikzsetnextfilename{arrow}
\begin{tikzpicture}
\draw [->] (0,0) -- (2,3);
\end{tikzpicture}
\end{document}
This will compile the document with the picture externalized, creating a file named arrow.pdf.
Now let's imagine you are done and wish to collaborate with a non-TikZian. This will probably also mean that he does not have any pgf/tikz packages installed.
1. The first thing is to find the file tikzexternal.sty; it should be located in your TeX tree under:
latex/pgf/utilities/tikzexternal.sty
2. Copy this file to your main-TeX file folder.
3. Comment out your \usepackage{tikz,pgf,pgfplots}.
4. Comment out any \usetikzlibrary, \usepgfplotslibrary, etc.
5. Add the usage of the package \usepackage{tikzexternal}.
6. You do not need to comment out any \tikzset commands as that will be emulated away
The commenting out described above can also be done with local \iffalse ... \fi statements, but explicitly commenting things out is more rigorous when distributing the document to others.
This should give you a document like so:
\documentclass{article}
\usepackage{tikzexternal}
\tikzexternalize
\begin{document}
Text ...
\tikzsetnextfilename{arrow}
\begin{tikzpicture}
\draw [->] (0,0) -- (2,3);
\end{tikzpicture}
\end{document}
And compile to find that TikZ is not used, but loading of the image is correct.
However, there are some caveats in using this method. First off, the entire key-value system is not present. Hence, all references to external settings should be performed using the designated commands:
1. Use \tikzsetexternalprefix instead of the key prefix
2. Use \tikzsetfigurename instead of the key figure name
3. Not even the content in \tikzexternalize[<content>] will be recognized.
Also, the package tikzexternal gobbles the tikzpicture environment, hence the full environment should be visible (don't do any fancy things here :) ).
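For instance, the plain-command equivalents of the usual keys would look something like this (the directory and figure names are just examples, and the prefix directory must already exist):
\tikzsetexternalprefix{figures/} % instead of the key: prefix=figures/
\tikzsetfigurename{notes-fig} % instead of the key: figure name=notes-fig
\tikzsetnextfilename{arrow} % per-picture file name, as in the example above
These commands work both with the full external library and with the stripped-down tikzexternal package.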
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9250850081443787, "perplexity": 1914.4962582412284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345764241/warc/CC-MAIN-20131218054924-00072-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/39180-solve-equation.html | # Math Help - solve equation
1. ## solve equation
I need help/answers for these questions, it will be a huge favour../
1-Simplify:
a: 3x(5x-2) -x (x(squared)-7) -x
b: 5w-3(squared)-2+2w
c: 8(x+3)-2(x-1)
2-Solve:
a: 8x-7=5(x+4)
b: 8-2x=5
c: 3x-5=7
d: 7(3-p)=9-4p
3-First three numbers in a sequence are 1, 2 and 4 explain why the fourth term would be 7 or 8..
4-Factorise the expression
3x(squared)-15x
5-Work out and leave answers as fraction
2/3+3/5+7/4
2 6/7 + 1 3/4
2 1/2 + 1 3/4
6-Square root of 1.44
7- 1.3 squared=
2. Originally Posted by nasb
I need help/answers for these questions, it will be a huge favour../
1-Simplify:
a: 3x(5x-2) -x (x(squared)-7) -x
b: 5w-3(squared)-2+2w
c: 8(x+3)-2(x-1)
2-Solve:
a: 8x-7=5(x+4)
b: 8-2x=5
c: 3x-5=7
d: 7(3-p)=9-4p
3-First three numbers in a sequence are 1, 2 and 4 explain why the fourth term would be 7 or 8..
4-Factorise the expression
3x(squared)-15x
5-Work out and leave answers as fraction
2/3+3/5+7/4
2 6/7 + 1 3/4
2 1/2 + 1 3/4
6-Square root of 1.44
7- 1.3 squared=
Problem 1
(a)
$3x(5x -2) - x(x^2 - 7) - x =$
$15x^2 - 6x - x^3 + 7x - x =$
$15x^2 - x^3 = x^2(15 - x)$
(b)
$(5w - 3)^2 - 2 + 2w =$
$25w^2 - 30w + 9 - 2 + 2w =$
$25w^2 - 28w + 7$
(c)
$8(x + 3) - 2(x - 1) =$
$8x + 24 - 2x + 2 =$
$6x + 26$
3. Originally Posted by nasb
2-Solve:
a: 8x-7=5(x+4)
b: 8-2x=5
c: 3x-5=7
d: 7(3-p)=9-4p
These are all of the same type.
$8x - 7 =5(x + 4)$
$8x - 7 = 5 \cdot x + 5 \cdot 4$
$8x - 7 = 5x + 20$
$8x - 7 - 5x = 5x + 20 - 5x$
$3x - 7 = 20$
$3x - 7 + 7 = 20 + 7$
$3x = 27$
$\frac{3x}{3} = \frac{27}{3}$
$x = 9$
The other three use the same basic pattern of solution. Try them again, step by step, using the above solution as a model. If you are still stuck on any of them, just let us know.
-Dan
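For instance, applying the same steps to the second equation (a worked line added here purely for illustration):
$8 - 2x = 5$
$8 - 2x - 8 = 5 - 8$
$-2x = -3$
$x = \frac{3}{2}$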
4. Originally Posted by nasb
4-Factorise the expiression
3x(squared)-15x
Look for what is common to both terms.
3 divides 15 evenly
x divides x^2 evenly
So
$3x^2 - 15x$
$= 3x \cdot x - 3x \cdot 5$
$= 3x(x - 5)$
-Dan
5. Originally Posted by topsquark
These are all of the same type.
$8x - 7 =5(x + 4)$
$8x - 7 = 5 \cdot x + 5 \cdot 4$
$8x - 7 = 5x + 20$
$8x - 7 - 5x = 5x + 20 - 5x$
$3x - 7 = 20$
$3x - 7 + 7 = 20 + 7$
$3x = 27$
$\frac{3x}{3} = \frac{27}{3}$
$x = 9$
The other three use the same basic pattern of solution. Try them again, step by step, using the above solution as a model. If you are still stuck on any of them, just let us know.
-Dan | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6302857995033264, "perplexity": 3901.732835293857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657127285.44/warc/CC-MAIN-20140914011207-00089-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://latex.org/forum/viewtopic.php?f=50&t=23494&p=79869 | ## LaTeX forum ⇒ BibTeX, biblatex and biber ⇒ biblatex missing year
Information and discussion about BiBTeX - the bibliography tool for LaTeX documents.
stegzzz
Posts: 2
Joined: Wed Jul 03, 2013 5:15 pm
### biblatex missing year
Hi, I have the LaTeX code below. When I compile using TeXstudio, the year information is missing from the in-text citation and reference list, as shown in the attached pdf. Any suggestions for what I can do to fix this?
Thanks!
\documentclass[man,11pt, a4paper]{apa6}
\usepackage{filecontents}
\begin{filecontents}{test.bib}
@article{Smith1978,
Author = {Smith, J.E.},
Title = {A re-examination},
Journal = {Objective Bulletin},
Year = {1978},
Volume = {85},
Number = {3},
Pages = {12-17}
}
\end{filecontents}
\usepackage[style=apa,sortcites=true, sorting=nyt, backend=biber]{biblatex}
\usepackage{csquotes}
\usepackage[american]{babel}
\DeclareLanguageMapping{american}{american-apa}
\addbibresource{test.bib}
\author{JJ}
\title{Test}
\shorttitle{test}
\begin{document}
\maketitle
Blah, blah \parencite{Smith1978}. Blah, blah, blah.
\printbibliography
\end{document}
Attachments
reftest.pdf
Johannes_B
Site Moderator
Posts: 4185
Joined: Thu Nov 01, 2012 4:08 pm
I just tested the code you are providing; I cannot reproduce the behaviour. The year is shown.
The smart way: Calm down and take a deep breath, read posts and provided links attentively, try to understand and ask if necessary.
stegzzz
Posts: 2
Joined: Wed Jul 03, 2013 5:15 pm
Johannes_B wrote:I just tested the code you are providing, i cannot reproduce the behaviour. The year is shown.
hmm, I suppose that means there is something about my setup. There are no errors during the build but one clue is the
\DeclareLanguageMapping{american}{american-apa}
statement that is highlighted in red in TeXstudio editor and shows 'unrecognized command' when I hover
Last edited by cgnieder on Wed Jul 03, 2013 9:59 pm, edited 1 time in total. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9506418108940125, "perplexity": 7506.443940186273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00535.warc.gz"} |
https://emojiguide.com/people-body/woman-tipping-hand/ | # woman tipping hand 💁♀️
Let’s talk facts with the Woman Tipping Hand emoji! This emoji is one of the most widely used emojis. It is the female variation of the Person Tipping Hand emoji.
The woman tipping hand is also known for being called the sassy emoji by social media users. This is because every statement that can be considered strong, mean, or a painful truth can be followed by this sassy hand itself. The word “facts” refers to a truthful statement. This is why it has been common for people to simply say facts and accompany it with this emoji.
It can also present itself as a “no” emoji, since the emoji itself seems to have an almost sarcastic smile accompanied by a flip of a hand. The tipping of the hand, with the right sentence, can almost look like the emoji is trying to explain something simple (such as a no) or drawing a boundary. It can also reference to a woman flipping her hair.
💁♀️ Woman Tipping Hand is a fully-qualified emoji as part of Unicode 6.0 which was introduced in 2010, and was added to Emoji 4.0.
• ## Copy and Paste This Emoji:
• 💁♀️
## Woman Tipping Hand Emoji Combination
This emoji can be derived by combining 💁 ♀
## Woman Tipping Hand Emoji History
Woman Tipping Hand Emoji was created in the year 2010.
## Woman Tipping Hand Emoji Unicode Data
• Unicode codepoint
1F481 200D 2640 FE0F
• Version
Version 6.0
• Year
2010
• https://emojiguide.com/people-body/woman-tipping-hand/ | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8227011561393738, "perplexity": 5300.743242830601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00160.warc.gz"} |
https://www.earthdoc.org/content/papers/10.3997/2214-4609.201401643 | 1887
### Abstract
F020 Velocity-Saturation Relation for Rocks with Fractal Distribution of the Pore Fluids
T.M. Mueller* (University of Karlsruhe), J. Toms (Curtin University) & G. Quiroga Goode (Universidad Autonoma de Tamaulipas)
SUMMARY: Seismic attributes like attenuation and velocity dispersion are sensitive to the pore fluid distribution in rocks. That is because seismic waves induce local pressure gradients between fluid patches of different elastic properties and consequently induce local fluid flows that are accompanied by internal friction. This effect is known as wave-induced-flow and it has the potential to characterize the heterogeneous pore fluid distributions. In particular velocity-saturation relationships are of practical importance
2007-06-11
2021-01-19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034487009048462, "perplexity": 5569.730728729842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00656.warc.gz"} |
https://aps.arxiv.org/list/math.RA/2006?skip=0&show=25 | # Rings and Algebras
## Authors and titles for math.RA in Jun 2020
[ total of 124 entries: 1-25 | 26-50 | 51-75 | 76-100 | 101-124 ]
[1]
Title: Sylvester-type quaternion matrix equations with arbitrary equations and arbitrary unknowns
Authors: Zhuo-Heng He
Subjects: Rings and Algebras (math.RA); Numerical Analysis (math.NA)
[2]
Title: on the lower Lie nilpotency index of a group algebra
Subjects: Rings and Algebras (math.RA)
[3]
Title: Clones containing the Mal'cev operation of $\mathbb{Z}_{pq}$
Subjects: Rings and Algebras (math.RA)
[4]
Title: A new invariant for finite dimensional Leibniz/Lie algebras
Comments: Final version, to appear in Journal of Algebra
Journal-ref: Journal of Algebra 562 (2020), 390-409
Subjects: Rings and Algebras (math.RA); Category Theory (math.CT); Quantum Algebra (math.QA)
[5]
Title: The algebraic classification of nilpotent $\mathfrak{CD}$-algebras
Subjects: Rings and Algebras (math.RA)
[6]
Title: Mal'cev conditions corresponding to identities for compatible reflexive relations
Subjects: Rings and Algebras (math.RA)
[7]
Title: Lie nilpotency Index of a modular group algebra
Subjects: Rings and Algebras (math.RA)
[8]
Title: Involutive and oriented dendriform algebras
Journal-ref: Journal of Algebra, Volume 581, 1 September 2021, Pages 63-91
Subjects: Rings and Algebras (math.RA)
[9]
Title: Pseudo-Primary, Classical Prime and Pseudo-Classical Primary Elements in Lattice Modules
Subjects: Rings and Algebras (math.RA)
[10]
Title: An explicit self-dual construction of complete cotorsion pairs in the relative context
Comments: LaTeX 2e with xy-pic; 54 pages, 2 commutative diagrams; v.4: Section 4 added, Introduction expanded; v.5: title changed, Remarks 2.17 and 3.16 inserted, references added; v.6: small expositional improvements and corrections, Proposition 3.28 inserted, references updated
Subjects: Rings and Algebras (math.RA); Representation Theory (math.RT)
[11]
Title: The spectrum of a localic semiring
Authors: Graham Manuell
Subjects: Rings and Algebras (math.RA); Category Theory (math.CT); General Topology (math.GN)
[12]
Title: Biordered sets of lattices and homogeneous basis
Subjects: Rings and Algebras (math.RA); Group Theory (math.GR)
[13]
Title: Parafree augmented algebras and Gröbner-Shirshov bases for complete augmented algebras
Subjects: Rings and Algebras (math.RA); K-Theory and Homology (math.KT)
[14]
Title: Semilattice ordered algebras with constants
Subjects: Rings and Algebras (math.RA)
[15]
Title: Spectrum of Rota-Baxter operators
Authors: Vsevolod Gubarev
Comments: 23 p.; v2: Introduction is extended
Subjects: Rings and Algebras (math.RA)
[16]
Title: The solution of an open problem on semigroup inclusion classes
Subjects: Rings and Algebras (math.RA); Group Theory (math.GR)
[17]
Title: Solvability of Poisson algebras
Subjects: Rings and Algebras (math.RA)
[18]
Title: A note on the regular ideals of Leavitt path algebras
Comments: First version has been significantly improved. We thank the journal referee and Dr. Pere Ara for their insightfull suggestions
Subjects: Rings and Algebras (math.RA)
[19]
Title: Kleene posets and pseudo-Kleene posets
Subjects: Rings and Algebras (math.RA)
[20]
Title: The images of multilinear non-associative polynomials evaluated on a rock-paper-scissors algebra with unit over an arbitrary field
Subjects: Rings and Algebras (math.RA)
[21]
Title: Multiplicative (generalized)-derivations of prime rings that act as $n-$(anti)homomorphisms
Journal-ref: Matematychni Studii, 53 (2020)
Subjects: Rings and Algebras (math.RA)
[22]
Title: Equivariant one-parameter deformations of Lie triple systems
Subjects: Rings and Algebras (math.RA)
[23]
Title: Weak c-ideals of Lie algebras
Subjects: Rings and Algebras (math.RA); Group Theory (math.GR)
[24]
Title: Simple Lie algebras arising from Steinberg algebras of Hausdorff ample groupoids
Authors: Tran Giang Nam | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34988853335380554, "perplexity": 10920.831445480744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00478.warc.gz"} |