3,480,123
<p>In my master's thesis, I'm trying to prove the following limit: <span class="math-container">$$\lim_{\epsilon \to 0^+}\int_0^1 \frac{\left(\ln\left(\frac{\epsilon}{1-x}+1\right)\right)^\alpha}{x^\beta(1-x)^\gamma}\,\mathrm{d}x=0,$$</span> where <span class="math-container">$\alpha, \beta, \gamma \in (0,1)$</span>.</p> <p>Assigning some numerical values to <span class="math-container">$\alpha, \beta, \gamma$</span> and <span class="math-container">$\epsilon$</span> in Wolfram, I could see that this convergence does indeed seem to occur.</p> <p>I would appreciate any help.</p>
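The claimed limit can be spot-checked numerically (my own sketch, for the single sample choice of parameters α = β = γ = 1/2; a plain midpoint rule, so this is a rough check rather than evidence about the endpoint singularities). Since the integrand is pointwise increasing in ε, the midpoint sums should decrease monotonically as ε shrinks:

```python
from math import log

def integrand(x, eps, a=0.5, b=0.5, c=0.5):
    # (ln(eps/(1-x) + 1))^a / (x^b * (1-x)^c)
    return log(eps / (1 - x) + 1) ** a / (x ** b * (1 - x) ** c)

def midpoint_integral(eps, n=20000):
    # midpoint rule on (0, 1); midpoints avoid the endpoint singularities
    h = 1.0 / n
    return sum(integrand((i + 0.5) * h, eps) for i in range(n)) * h

vals = [midpoint_integral(eps) for eps in (1.0, 1e-2, 1e-4)]
```

The values shrink toward 0 as ε does, consistent with the conjectured limit.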
Ethan Bolker
72,858
<p>Think of the points as vectors in <span class="math-container">$\mathbb{R}^3$</span>.</p> <p>If they don't lie on a plane through the origin then they will form a basis, and any point can be written uniquely as a linear combination of the three.</p>
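A quick numerical illustration of the unique-combination claim (a sketch with made-up example points that do not lie on a plane through the origin; pure Python, solving the 3×3 system by Cramer's rule):

```python
def det3(m):
    # determinant of a 3x3 matrix given as a list of rows
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def coordinates(v1, v2, v3, p):
    # solve c1*v1 + c2*v2 + c3*v3 = p by Cramer's rule
    cols = [list(row) for row in zip(v1, v2, v3)]   # v1, v2, v3 as columns
    d = det3(cols)
    if d == 0:
        raise ValueError("the points lie on a plane through the origin")
    coeffs = []
    for j in range(3):
        m = [row[:] for row in cols]
        for i in range(3):
            m[i][j] = p[i]                          # replace column j by p
        coeffs.append(det3(m) / d)
    return coeffs

# hypothetical example points forming a basis of R^3
v1, v2, v3 = (1, 0, 1), (0, 2, 0), (1, 1, 3)
p = (3, 5, 7)
c1, c2, c3 = coordinates(v1, v2, v3, p)
recombined = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
```

A zero determinant is exactly the "lie on a plane through the origin" case, where no such unique combination exists.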
65,166
<p>For a graph $G$, let its Laplacian be $\Delta = I - D^{-1/2}AD^{-1/2}$, where $A$ is the adjacency matrix, $I$ is the identity matrix and $D$ is the diagonal matrix of vertex degrees. I'm interested in the spectral gap of $G$, i.e. the first nonzero eigenvalue of $\Delta$, denoted by $\lambda_{1}(G)$.</p> <p>Is it true that a randomly chosen (with uniform distribution) $d$-regular bipartite graph on $(n, n)$ vertices (with multiple edges allowed) has, with probability approaching $1$ as $n \to \infty$, $\lambda_1$ arbitrarily close to $1$ (i.e. we can make $\lambda_1$ arbitrarily close to $1$ by taking $d$ large enough)?</p> <p>If yes, is there a reference for this fact?</p> <p>Proofs of expansion properties for random regular graphs which I have found in the literature usually give a probability only bounded from below by a constant, e.g. $1/2$, although I imagine that almost all random graphs actually have a good spectral gap.</p> <p>Note: by $d$-regular bipartite graph I mean a graph in which each vertex (on the left and on the right) has degree $d$.</p>
vanvu
15,028
<p>For $d$ fixed and $n$ going to infinity, the neighborhood of each vertex looks like a tree with high probability, and I guess you can use the traditional trace method.</p>
317,753
<p>I am taking real analysis in university. I find it difficult to prove certain statements. What I want to ask is:</p> <ul> <li>How do we come up with a proof? Do we use some intuitive idea first and then write it down formally?</li> <li>What books do you recommend for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems?</li> </ul>
Community
-1
<p>I started studying Rudin's <strong>Principles of Mathematical Analysis</strong> book for analysis. I just couldn't understand it, because I was thinking that definitions are some decorative blah blahs and proofs are some unintelligible mathematician sorceries.</p> <p>Then I decided to learn something about proofs from the book <a href="http://rads.stackoverflow.com/amzn/click/0521597188" rel="nofollow">An Introduction to Mathematical Reasoning</a>. After just three chapters, it all began to make sense. It is a perfect book for people like me.</p>
1,638,757
<p>Given $2$ points in 2-dimensional space $(x_s,y_s)$ and $(x_d,y_d)$, our task is to find whether $(x_d,y_d)$ can be reached from $(x_s,y_s)$ by making a sequence of zero or more operations. From a given point $(x, y)$, the operations possible are:</p> <pre><code>a) Move to point (y, x) b) Move to point (x, -y) c) Move to point (x+y, y) d) Move to point (2*x, y) </code></pre> <p>Now I am pretty sure this has something to do with gcds, but I'm not able to formalize my approach. Can someone intuitively explain how we can figure out whether a particular state is reachable from the current state?</p>
Hagen von Eitzen
39,174
<p>First note that the actions $a, b, c$ can be inverted: $$ a^{-1}=a,\quad b^{-1}=b,\quad c^{-1}=b\circ c\circ b.$$ Therefore "reachable by zero or more applications of $a, b, c$" is an equivalence relation, which we shall denote by $\sim$. By applying $b$ if necessary, we see that each $(x,y)\in\Bbb Z^2$ is equivalent to some $(x',y')$ with $y'\ge 0$. Let $m\in\Bbb N_0$ be minimal such that $(x,y)\sim(x',m)$ for some $x'\in\Bbb Z$. As $bab(x',m)=(-x',m)$, there exists $x'$ with $(x',m)\sim (x,y)$ and $x'\ge 0$. Let $n\in\Bbb N_0$ be minimal with $(n,m)\sim(x,y)$. As $(n,m)\sim (m,n)$ we conclude $n\ge m$. If $m&gt;0$ we arrive at $(n,m)\sim (n,-m)\sim (n-m,-m)\sim (n-m,m)$, contradiction. Hence $m=0$. As $\gcd(y,x)=\gcd(x,-y)=\gcd(x+y,y)$, we conclude that $(x,y)\sim(\gcd(x,y),0)$ and $(x,y)\sim(u,v)\iff \gcd(x,y)=\gcd(u,v)$. (Another way to put this: the steps $a,b,c$ allow us to implement Euclid's algorithm.)</p> <p>If we now add operation $d$, the situation changes: we no longer have an equivalence relation. Let us write $(x,y)\to(u,v)$ if $(u,v)$ can be obtained from $(x,y)$ by zero or more applications of $a,b,c,d$. By repeated application of $d$ we find $(x,0)\to (2^kx,0)$ for any $k\in\Bbb N_0$, hence $(x,y)\to (u,v)$ whenever $$\gcd(u,v)=2^k\gcd(x,y)$$ with $k\in\Bbb N_0$. On the other hand $\gcd(2x,y)\in\{\gcd(x,y),2\gcd(x,y)\}$ and hence this sufficient condition is also necessary.</p>
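This gcd characterization can be sanity-checked by brute force on small coordinates (my own sketch; the breadth-first search is artificially bounded, which is safe for the small cases below because the Euclid-style reduction never needs coordinates larger than the inputs):

```python
from math import gcd
from collections import deque

def reachable(start, target, bound=64):
    """Bounded BFS over the four moves; `bound` caps |x| and |y|."""
    moves = [lambda x, y: (y, x),
             lambda x, y: (x, -y),
             lambda x, y: (x + y, y),
             lambda x, y: (2 * x, y)]
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for m in moves:
            u, v = m(*state)
            if abs(u) <= bound and abs(v) <= bound and (u, v) not in seen:
                seen.add((u, v))
                queue.append((u, v))
    return False

def predicted(start, target):
    """gcd(target) must equal 2^k * gcd(start) for some k >= 0."""
    g = gcd(abs(start[0]), abs(start[1]))
    h = gcd(abs(target[0]), abs(target[1]))
    if g == 0:
        return h == 0
    if h == 0 or h % g:
        return False
    q = h // g
    return q & (q - 1) == 0          # q is a power of two
```

On sample pairs such as `(4, 6) -> (4, 0)` (gcd doubles: reachable) and `(2, 0) -> (1, 0)` (gcd would have to halve: not reachable), the search agrees with the prediction.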
526,837
<p>Let $(\Omega, {\cal B}, P )$ be a probability space, $( \mathbb{R}, {\cal R} )$ the usual measurable space of the reals and its Borel $\sigma$-algebra, and $X : \Omega \rightarrow \mathbb{R}$ a random variable.</p> <p>The meaning of $ P( X = a) $ is intuitive when $X$ is a discrete random variable, because it is just the probability mass function evaluated at $a$. I am not sure if my question makes sense, but how should I think of $ P( X = a) $ when $X$ is a continuous random variable? </p>
Carlos Eugenio Thompson Pinzón
99,344
<p>Let $u=1+2x$; then $du=2dx$ and $2-2u=-4x$. Making the substitution \begin{align}\int\frac{-4x}{1+2x}dx&amp;=\int\frac{2-2u}{u}\frac{du}2\\&amp;=\int\left(\frac1u-1\right)du\\&amp;=\ln|u|-u+C\end{align} Substituting back we have: \begin{align}\int\frac{-4x}{1+2x}dx&amp;=\ln|1+2x|-(1+2x)+C\\&amp;=-2x+\ln|1+2x|+C-1\end{align} Now, $C$ is a constant, any constant, so $C-1$ is also a constant. Any constant. So you can just drop it and have: $$\int\frac{-4x}{1+2x}dx=-2x+\ln|1+2x|+K$$ The $-1$ is therefore unimportant.</p>
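The antiderivative can be verified numerically by checking that its central-difference derivative matches the original integrand at a few sample points (a quick sketch; the sample points are arbitrary, chosen away from the singularity at $x=-1/2$):

```python
from math import log

def F(x):
    # antiderivative found above, with the constant dropped
    return -2 * x + log(abs(1 + 2 * x))

def f(x):
    # original integrand
    return -4 * x / (1 + 2 * x)

# central-difference approximation of F'(x) at a few sample points
h = 1e-6
errors = [abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
          for x in (0.3, 1.0, 2.5, -2.0)]
```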
4,218,943
<p>Let <span class="math-container">$A_n$</span> be a sequence of <span class="math-container">$d\times d$</span> symmetric matrices, let <span class="math-container">$A$</span> be a <span class="math-container">$d\times d$</span> symmetric positive definite matrix (matrix entries are assumed to be real numbers). Assume that each element of <span class="math-container">$A_n$</span> converges to the corresponding element of <span class="math-container">$A$</span> as <span class="math-container">$n\to \infty$</span>. Can we conclude that for some <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$N_\epsilon$</span> such that the smallest eigenvalue of <span class="math-container">$A_n$</span> is larger than <span class="math-container">$\epsilon$</span>, for all <span class="math-container">$n \geq N_\epsilon$</span>?</p>
Hans Engler
9,787
<p>Yes, since the coefficients of the characteristic polynomials of the <span class="math-container">$A_n$</span> converge to those of the char. polynomial of <span class="math-container">$A$</span>, and the roots of a polynomial depend continuously on the coefficients.</p>
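A small numerical illustration of this continuity (my own sketch in the $2\times 2$ case, where the smallest eigenvalue has a closed form; the example matrices are made up, with $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$, whose eigenvalues are $1$ and $3$):

```python
from math import sqrt

def eigmin_sym2(a, b, c):
    """Smaller eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    return (a + c) / 2 - sqrt(((a - c) / 2) ** 2 + b ** 2)

# A_n -> A entrywise, so eigmin(A_n) -> eigmin(A) = 1 and is
# eventually bounded away from 0.
lams = [eigmin_sym2(2 + 1 / n, 1 - 1 / n, 2) for n in range(1, 2001)]
```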
927,261
<p>I was doing a presentation on Limits and I was using this $$f(x)=\frac{x^2+2x-8}{x^2-4}$$ to explain different types of limits. </p> <p>I know that the function is not defined at $x=-2$ or $x=2$. I showed the graph and everyone was ok with the graph at $x=-2$ but one member of the audience didn't like how the graph looked at $x=2$. </p> <p>I think they didn't understand that a function doesn't need to be defined at the point to have a limit. I said there was a hole at $x=2$, not sure now because when I restricted the domain to be close to $x=2$ This was displayed. </p> <p><img src="https://i.stack.imgur.com/8Iayy.jpg" alt="graph of f(x) near x=2"></p> <p>I used "discont=true" as an option of the plot command. </p> <p>I computed both the left and right limits of $f(x), \; x\to 2$, both limits equal 3/2. I don't think there is any up and down behavior like $\sin(1/x)$</p> <p>Is this a problem with maple or have I missed something about limits? </p>
ShakesBeer
168,631
<p>Nope, that definitely shouldn't be happening. I think it must be an error from arithmetic with very small numbers, as computers only have so much precision. (You've used $|x-2|&lt;10^{-7}$, so $|x^2-4|&lt;10^{-14}$, which is the range where you get problems.)</p> <p>EDIT: the above is actually slightly wrong. $|x-2|&lt;10^{-7}$ implies $|x^2-4|&lt;4\cdot 10^{-7}$. Nonetheless, it's a problem with finite-precision arithmetic.</p>
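The cancellation can be demonstrated directly (a sketch of my own; `fractions.Fraction` gives the exact rational value at the exact binary value of the float, so we can measure the floating-point error, and the algebraically simplified form $(x+4)/(x+2)$ stays accurate right up to $x=2$):

```python
from fractions import Fraction

def f_naive(x):
    # evaluate the formula as written: catastrophic cancellation near x = 2
    return (x * x + 2 * x - 8) / (x * x - 4)

def f_factored(x):
    # algebraically simplified: (x^2+2x-8)/(x^2-4) = (x+4)/(x+2) for x != 2
    return (x + 4) / (x + 2)

def f_exact(x):
    # exact rational evaluation at the exact binary value of the float x
    q = Fraction(x)
    return (q * q + 2 * q - 8) / (q * q - 4)

x = 2 + 1e-12                      # well inside the problematic range
err_naive = abs(Fraction(f_naive(x)) - f_exact(x))
err_factored = abs(Fraction(f_factored(x)) - f_exact(x))
```

The factored form agrees with the exact value (which is essentially $3/2$) to machine precision, while the naive form loses many digits to the cancellation in `x*x - 4`.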
3,705,580
<p><span class="math-container">$\mathbf {The \ Problem \ is}:$</span> Let, <span class="math-container">$f,g,h$</span> be three functions defined from <span class="math-container">$(0,\infty)$</span> to <span class="math-container">$(0,\infty)$</span> satisfying the given relation <span class="math-container">$f(x)g(y) = h\big(\sqrt{x^2+y^2}\big)$</span> for all <span class="math-container">$x,y \in (0,\infty)$</span>, then show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> and <span class="math-container">$\frac{g(x)}{h(x)}$</span> are constant.</p> <p><span class="math-container">$\mathbf {My \ approach} :$</span> Actually, by putting <span class="math-container">$x$</span> in place of <span class="math-container">$y$</span> and vice-versa, we can show that <span class="math-container">$\frac{f(x)}{g(x)}$</span> is a constant, let it be <span class="math-container">$c .$</span> Then, I tried that <span class="math-container">$g(x_i)g(y_i)=g(x_j)g(y_j)$</span> whenever <span class="math-container">$(x_i,y_i)$</span>, <span class="math-container">$(x_j,y_j)$</span> satisfies <span class="math-container">$x^2+y^2 =k^2$</span> for every <span class="math-container">$k \in (0,\infty)$</span> . But, I can't approach further.</p> <p>Any help would be greatly appreciated .</p>
max
378,372
<p>The numerator and denominator of your random variable <span class="math-container">$T$</span> are, in fact, independent, since their parameters do not depend on each other. One of the ways to find the distribution of <span class="math-container">$T$</span> is by conditioning on <span class="math-container">$X_3, X_4$</span> as follows <span class="math-container">$$X_1X_3 + X_2X_4|X_3, X_4\sim \mathcal{N}(0, X_3^2 + X_4^2)$$</span> a.s. and hence <span class="math-container">$$\frac{X_1X_3 + X_2X_4}{\sqrt{X_3^2+ X_4^2}}|X_3, X_4\sim \mathcal{N}(0, 1).$$</span> Since the value of the term does not depend on the values of <span class="math-container">$X_3, X_4$</span>, we can conclude that it is independent of <span class="math-container">$X_3$</span> and <span class="math-container">$X_4$</span>. Moreover, we can conclude that the distribution of <span class="math-container">$T$</span> is a product of independent standard normal and a random variable <span class="math-container">$\frac{1}{\sqrt{W}}$</span> with <span class="math-container">$W$</span> being chi-squared with <span class="math-container">$df=2$</span>, which is, in fact, a t-distribution with <span class="math-container">$df = 2$</span> multiplied by a constant <span class="math-container">$\frac{1}{\sqrt{2}}$</span>.</p>
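The key step, that $(X_1X_3+X_2X_4)/\sqrt{X_3^2+X_4^2}$ is standard normal, can be checked by simulation (a sketch with an arbitrary fixed seed; sample mean and variance should be close to $0$ and $1$):

```python
import random
from math import sqrt

random.seed(12345)

def unit_normal_combo():
    """One draw of (X1*X3 + X2*X4) / sqrt(X3^2 + X4^2), X_i iid N(0, 1)."""
    x1, x2, x3, x4 = (random.gauss(0, 1) for _ in range(4))
    return (x1 * x3 + x2 * x4) / sqrt(x3 ** 2 + x4 ** 2)

n = 20000
draws = [unit_normal_combo() for _ in range(n)]
mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / (n - 1)
```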
3,381,939
<p>Let <span class="math-container">$T$</span> be a tree of order <span class="math-container">$n$</span> such that:</p> <p><span class="math-container">$(a)$</span> <span class="math-container">$ 95 &lt; n &lt; 100$</span>;</p> <p><span class="math-container">$(b)$</span> every vertex has degree <span class="math-container">$1, 3$</span> or <span class="math-container">$5$</span>;</p> <p><span class="math-container">$(c)$</span> <span class="math-container">$T$</span> has twice as many vertices of degree <span class="math-container">$3$</span> as of degree <span class="math-container">$5$</span>.</p> <p>What is <span class="math-container">$n$</span>?</p> <p>I think I can use the handshake lemma to sum the degrees, obtaining <span class="math-container">$2e$</span>, and then solve algebraically, but I don't see how to set up the equation. I hope someone can help me with it.</p>
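The handshake-lemma setup can be brute-forced (a sketch of my own; it only solves the counting equations, writing $n_1, n_3, n_5$ for the number of vertices of each degree and using that a tree of order $n$ has $n-1$ edges, so the degree sum is $2(n-1)$):

```python
# n1 + n3 + n5 = n,  n3 = 2*n5,  n1 + 3*n3 + 5*n5 = 2*(n - 1)
solutions = []
for n in range(96, 100):            # 95 < n < 100
    for n5 in range(n + 1):
        n3 = 2 * n5
        n1 = n - n3 - n5
        if n1 >= 0 and n1 + 3 * n3 + 5 * n5 == 2 * (n - 1):
            solutions.append((n, n1, n3, n5))
```

Only one choice of $n$ in the range admits a solution.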
Noah Schweber
28,111
<p>There is no such sentence, and this remains true if we allow <span class="math-container">$\wedge,\vee,\leftrightarrow$</span> as well (thus subsuming the previous result you mention). </p> <p>In fact, fixing a first-order language <span class="math-container">$\Sigma$</span>, the unique (up to isomorphism) one-element <span class="math-container">$\Sigma$</span>-structure <span class="math-container">$\mathcal{A}_\Sigma$</span> where every <span class="math-container">$n$</span>-ary relation holds on the unique <span class="math-container">$n$</span>-tuple satisfies <em>every</em> negation-free <span class="math-container">$\Sigma$</span>-sentence. </p> <p>This is a straightforward induction on complexity (and also a neat example of a situation where strengthening the induction hypothesis simplifies the argument): </p> <ul> <li><p>For the base case, note that every atomic <span class="math-container">$\Sigma$</span>-formula is true in this structure under every (i.e., the unique) variable assignment.</p></li> <li><p>For the inductive case, quantifiers are essentially trivial, and all connectives except <span class="math-container">$\neg$</span> are "truth preserving" in the sense that whenever all inputs are true the result is true.</p></li> </ul> <p><em>Note that the nature of the structure <span class="math-container">$\mathcal{A}_\Sigma$</span> is used in two places: the analysis of atomic formulas, and the triviality of the quantifiers.</em></p> <p>Incidentally, the point about propositional connectives above is also <a href="https://math.stackexchange.com/questions/2046902/functional-completeness-proving-%e2%87%92-%e2%88%a7-%e2%88%a8-is-functionally-complete">the standard way to prove</a> that <span class="math-container">$\{\wedge,\vee,\rightarrow,\leftrightarrow\}$</span> is not <a href="https://en.wikipedia.org/wiki/Functional_completeness" rel="nofollow noreferrer">functionally complete</a>. 
Meanwhile, note that the notion of "truth preserving" used here is incredibly weak; in particular, since "False implies False" is true but "True implies False" is false, we see that it doesn't imply truth <em>monotonicity</em> (= "making the inputs more true makes the output more true"). As such I don't think it actually has an accepted name; it's too weak a condition in almost every context.</p>
4,272,964
<p>I want to solve the following equation over the complex numbers:</p> <p><span class="math-container">$$z^2 + \bar z = \frac 1 2$$</span></p> <p><strong>My work so far</strong></p> <p>Apparently I have a problem with transforming the equation above into a form that is easy to solve. I tried multiplying both sides by <span class="math-container">$z$</span> and using the fact that <span class="math-container">$z\bar z = |z|^2$</span>, but it doesn't seem like a great idea. After that I tried the following:</p> <p><span class="math-container">$$\bar z = \frac 1 2 - z^2 \Leftrightarrow |z| = \left| \frac 1 2 - z^2\right|$$</span></p> <p>and then rewriting <span class="math-container">$z = \operatorname{Re}(z) + i\operatorname{Im}(z)$</span>, but the result was also not satisfying. Could you please give me a hand with solving this equation?</p>
user
505,767
<p>We have that</p> <p><span class="math-container">$$z^2 + \bar{z} = \frac 1 2 \iff \bar z^2 + z = \frac 1 2$$</span></p> <p>and then subtracting we obtain</p> <p><span class="math-container">$$z^2-\bar z^2 +\bar z-z=0 \iff (z-\bar z)(z+\bar z-1)=0$$</span></p> <p>that is</p> <p><span class="math-container">$$z=\bar z \quad \lor \quad z+\bar z=1$$</span></p> <p>And it is easy to conclude from here by substituting into the original equation and solving the quadratics</p> <ul> <li><p><span class="math-container">$z^2+z=\frac12$</span></p> </li> <li><p><span class="math-container">$z^2+1-z=\frac12$</span></p> </li> </ul>
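Solving the two quadratics gives four candidate roots, which can be plugged back into the original equation (the explicit values below are my own working of the quadratics, not part of the answer above):

```python
sqrt3 = 3 ** 0.5
roots = [(-1 + sqrt3) / 2 + 0j,     # from z^2 + z = 1/2  (z real)
         (-1 - sqrt3) / 2 + 0j,
         0.5 + 0.5j,                # from z^2 - z + 1/2 = 0  (z + conj(z) = 1)
         0.5 - 0.5j]

# residual of z^2 + conj(z) - 1/2 for each candidate
residuals = [abs(z * z + z.conjugate() - 0.5) for z in roots]
```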
3,460,595
<p>I am given the following sequence:</p> <p><span class="math-container">$$a_n = 1^9 + 2^9 + ... + n^9 - an^{10}$$</span></p> <p>Where <span class="math-container">$a \in \mathbb{R}$</span>. I have to find the value of <span class="math-container">$a$</span> for which the sequence <span class="math-container">$a_n$</span> is convergent (Or conclude that there is no such value of <span class="math-container">$a$</span>).</p> <p>How can I find this value (or that there is no such value)? I don't know how to approach something like this at all.</p>
Z Ahmed
671,540
<p><span class="math-container">$$ \sum_{1}^{n} k^p \sim \frac{n^{p+1}}{p+1}~~~~~~(1)$$</span></p> <p>So <span class="math-container">$a$</span> needs to be <span class="math-container">$\frac{1}{10}$</span> for <span class="math-container">$$\frac{1^9+2^9+3^9+4^9+...n^9}{n^{10}}$$</span> to be convergent.</p> <p><strong>Proof of (1)</strong></p> <p><span class="math-container">$$\lim_{n\rightarrow \infty} \sum_{k=1}^{n} \frac{k^p}{n^{p+1}}= \int_{0}^{1} x^p dx=\frac{1}{p+1}$$</span></p> <p>Further to @Ross Millikan, the correct question is to what value <span class="math-container">$$\frac{1^9+2^9+3^9+4^9+...n^9}{n^{10}}$$</span> converges. The answer is <span class="math-container">$\frac{1}{10}.$</span></p>
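The claimed limit of the ratio can be checked numerically (a quick sketch; the error should shrink like $1/(2n)$, since $\sum k^9 = n^{10}/10 + n^9/2 + \dots$):

```python
def ratio(n):
    # (1^9 + 2^9 + ... + n^9) / n^10
    return sum(k ** 9 for k in range(1, n + 1)) / n ** 10

vals = [ratio(n) for n in (10, 100, 1000, 10000)]
errs = [abs(v - 0.1) for v in vals]
```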
2,943,461
<p>I'm stumped on a math puzzle and I can't find an answer to it anywhere! A man is filling a pool from 3 hoses. Hose A could fill it in 2 hours, hose B could fill it in 3 hours and hose C can fill it in 6 hours. However, there is a blockage in hose A, so the guy starts by using hoses B and C. When the blockage in hose A has been cleared, hoses B and C are turned off and hose A starts being used. How long does the pool take to fill? Any help would be strongly appreciated :)</p>
TonyK
1,508
<p>Hose <span class="math-container">$A$</span> can fill half a pool per hour.<br> Hose <span class="math-container">$B$</span> can fill one third of a pool per hour.<br> Hose <span class="math-container">$C$</span> can fill one sixth of a pool per hour.</p> <p>So how many pools per hour can hoses <span class="math-container">$B$</span> and <span class="math-container">$C$</span> fill, working together?</p>
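Carrying the hint through with exact fractions (a sketch of my own; the punchline is that $B$ and $C$ together fill at the same rate as $A$ alone, so the total time does not depend on when the switch happens, as long as it happens before the pool is full):

```python
from fractions import Fraction

rate_A = Fraction(1, 2)            # pools per hour
rate_B = Fraction(1, 3)
rate_C = Fraction(1, 6)
rate_BC = rate_B + rate_C          # hoses B and C together

def total_time(t_switch):
    """Hours to fill the pool if the switch to hose A happens at t_switch
    (assumed to happen before the pool is full)."""
    filled = rate_BC * t_switch
    return t_switch + (1 - filled) / rate_A
```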
3,433,277
<blockquote> <p>It is given <span class="math-container">$f:\mathbb R \rightarrow \mathbb R$</span> <span class="math-container">$$f(x):=\tan^{-1}(x+1)+ \cot^{-1}(x)$$</span> <span class="math-container">$\mathcal R_f=?$</span></p> </blockquote> <p>So far, I've learned <span class="math-container">$\tan$</span> and <span class="math-container">$\cot$</span> are complementary functions, therefore <span class="math-container">$$\tan^{-1}(x) + \cot^{-1}(x)=\frac{\pi}{2}.$$</span></p> <p>I entered a loop using <span class="math-container">$$\tan(x) =\frac{1}{\cot(x)}\;.$$</span></p> <p>Can I use <span class="math-container">$\tan$</span> for the whole expression and from there on use the <span class="math-container">$\tan$</span> addition formula?</p> <blockquote> <p>Is there a way of finding the image without using derivatives and limits?</p> </blockquote>
Travis Willse
155,629
<p><strong>Hint</strong> Combine the complementarity identity in the question with the arctangent addition identity <span class="math-container">$$\arctan u \pm \arctan v = \arctan\frac{u \pm v}{1 \mp uv} \pmod \pi .$$</span> (I'm not sure that this approach avoids limits in the strict sense, since finding the range of the resulting function requires knowing about the asymptotic behavior of <span class="math-container">$\arctan$</span>.)</p>
686,981
<p>So I have to solve the equation $$y^2=4\tag{1.9.88 unit 3*}$$</p> <p>I did this: $$y^2=4 \text{ means } \sqrt{y^2}=\sqrt{4} \implies y=2$$</p> <p>But I have a problem: $y$ can be either negative or positive, so I need to do: $$\sqrt{y^2}=|y|=2 \implies y=2 \text{ or } y=-2$$</p> <p>Is this right?</p>
Anonymous Computer
128,641
<p>For this type of problem, I always think, "How would it appear on a graph? What are the x-intercepts?" Of course, I do not have time to actually draw it out. But I can think, "Let $y^2$ be $x^2$. How can $x^2=4$ be turned into a function? I can just move $4$ to the left by subtracting $4$ on both sides, and replacing the $0$ with $f(x)$."<br /><br /> So, what are the x-intercepts of $f(x)=x^2-4$? Hopefully you can immediately identify the x-intercepts as $\pm \ 2$. So that means $x=\pm \ 2$ if $x^2=4$. Just replace $x^2$ with $y^2$. If $y^2=4$, then $y=\pm \ 2$.<br /><br /> Another much quicker way to solve the problem is factoring. Subtract $4$ from both sides to get $y^2-4 = 0$. Then factor the equation using difference of squares. $(y+2)(y-2)=0$. The equation now splits into two different cases. You know that if $ab=0$, then either $a=0$, $b=0$, or $a$ and $b$ both equal $0$.<br /><br /> <b>Case 1: $y+2=0$</b><br /><br /> $y+2=0$<br /> $y=-2$<br /><br /> <b>Case 2: $y-2=0$</b><br /><br /> $y-2=0$<br /> $y=2$<br /><br /> So, the answer is $y = \pm \ 2$.<br /><br /><hr /> <b>Extra Information Below:</b><br /><br /> You may be wondering, "Why can you just replace the $0$ in $x^2-4=0$ with $f(x)$ in the first method of solving the equation?" You must know the difference between $f(x)=x^2-4$ and $0=x^2-4$. $f(x)=x^2-4$ is the graph of all the points that satisfy the condition that each point on the curve has coordinates $(x, f(x))$. $x^2-4=0$ is a <i>special case</i> of the function $f(x)=x^2-4$. Here, $0=f(x)$. So, what points on the curve are in the form $(x, 0)$? Obviously, the points are $(2, 0)$ and $(-2, 0)$.<br /><br /> You can also think of the equation $x^2-4=0$ as a system of equations like: $$\begin{cases} f(x)=x^2-4 \\ f(x)=0 \\ \end{cases}$$ The solution to the system will be the intersections of the function $f(x)=x^2-4$ and the line $f(x)=0$. 
The line $f(x)=0$ is the x-axis, so the question becomes, "Where does $f(x)=x^2-4$ intersect with the x-axis?" Obviously, they intersect at the points $(2, 0)$ and $(-2, 0)$.</p>
319,725
<p>I am trying to prove the following inequality concerning the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="noreferrer">Beta Function</a>: <span class="math-container">$$ \alpha x^\alpha B(\alpha, x\alpha) \geq 1 \quad \forall 0 &lt; \alpha \leq 1, \ x &gt; 0, $$</span> where as usual <span class="math-container">$B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}dt$</span>.</p> <p>In fact, I only need this inequality when <span class="math-container">$x$</span> is large enough, but it empirically seems to be true for all <span class="math-container">$x$</span>.</p> <p>The main reason why I'm confident that the result is true is that it is very easy to plot, and I've experimentally checked it for reasonable values of <span class="math-container">$x$</span> (say between 0 and <span class="math-container">$10^{10}$</span>). For example, for <span class="math-container">$x=100$</span>, the plot is:</p> <p><a href="https://i.stack.imgur.com/UiRCf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UiRCf.png" alt="Plot of the function to be proven greater than 1"></a></p> <p>Varying <span class="math-container">$x$</span>, it seems that the inequality is rather sharp, namely I was not able to find a point where that product is larger than around <span class="math-container">$1.5$</span> (but I do not need any such reverse inequality).</p> <p>I know very little about Beta functions, therefore I apologize in advance if such a result is already known in the literature. 
I've tried looking around, but I always ended on inequalities trying to link <span class="math-container">$B(a,b)$</span> with <span class="math-container">$\frac{1}{ab}$</span>, which is quite different from what I am looking for, and also only holds true when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are smaller than 1, which is not my setting.</p> <p>I have tried the following to prove it, but without success: the inequality is well-known to be an equality when <span class="math-container">$\alpha = 1$</span>, and the limit for <span class="math-container">$\alpha \to 0$</span> should be equal to 1, too. Therefore, it would be enough to prove that there exists at most one <span class="math-container">$0 &lt; \alpha &lt; 1$</span> where the derivative of the expression to be bounded vanishes. This derivative can be written explicitly in terms of the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="noreferrer">digamma function</a> <span class="math-container">$\psi$</span> as: <span class="math-container">$$ x^\alpha B(\alpha, x\alpha) \Big(\alpha \psi(\alpha) - (x+1)\alpha\psi((x+1)\alpha) + x\alpha \psi(x\alpha) + 1 + \alpha \log x \Big). $$</span> Dividing by <span class="math-container">$x^\alpha B(\alpha, x\alpha) \alpha$</span>, this becomes <span class="math-container">$$ -f(\alpha) + \frac{1}{\alpha} + \log x, $$</span> where <span class="math-container">$f(\alpha) = -\psi(\alpha) + (x+1)\psi((x+1)\alpha) - x \psi(x\alpha)$</span> is, as proven <a href="http://web.math.ku.dk/~berg/manus/alzberg2.pdf" rel="noreferrer">by Alzer and Berg</a>, Theorem 4.1, a completely monotonic function. 
Unfortunately, the difference of two completely monotonic functions (such as <span class="math-container">$f(\alpha)$</span> and <span class="math-container">$\frac{1}{\alpha} + C$</span>) can vanish in arbitrarily many points, therefore this does not allow to conclude.</p> <p>Many thanks in advance for any hint on how to get such a bound!</p> <p>[EDIT]: As pointed out in the comments, the link to the paper of Alzer and Berg pointed to the wrong version, I have corrected the link.</p>
T. Amdeberhan
66,131
<p>This is an attempt to strengthen your claim.</p> <p>If <span class="math-container">$x$</span> is large then <span class="math-container">$B(x,y)\sim \Gamma(y)x^{-y}$</span> and hence <span class="math-container">$$B(\alpha x,\alpha)\sim \Gamma(\alpha)(\alpha x)^{-\alpha};$$</span> where <span class="math-container">$\Gamma(z)$</span> is the <em>Euler Gamma function</em>.</p> <p>On the other hand, for small <span class="math-container">$\alpha$</span>, we have the expansion <span class="math-container">$$\Gamma(1+\alpha)=1+\alpha\Gamma'(1)+\mathcal{O}(\alpha^2).$$</span> Since <span class="math-container">$\alpha\Gamma(\alpha)=\Gamma(1+\alpha)$</span>, it follows that <span class="math-container">$$\Gamma(\alpha)\sim \frac1{\alpha}-\gamma+\mathcal{O}(\alpha)$$</span> where <span class="math-container">$\gamma$</span> is the <em>Euler constant</em>.</p> <p>We may now combine the above two estimates to obtain <span class="math-container">$$\alpha x^{\alpha}B(\alpha x,\alpha)\sim \alpha x^{\alpha}\left(\frac1{\alpha}-\gamma\right)(\alpha x)^{-\alpha}=\left(\frac1{\alpha}-\gamma\right)\alpha^{1-\alpha}\geq1$$</span> provided <span class="math-container">$\alpha$</span> is small enough. For example, <span class="math-container">$0&lt;\alpha&lt;\frac12$</span> works.</p>
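The original inequality can also be spot-checked numerically on a coarse grid (my own sketch, not a proof; computing the Beta function through `math.lgamma` avoids overflow for large arguments):

```python
from math import exp, lgamma

def beta(a, b):
    # B(a, b) = exp(log Gamma(a) + log Gamma(b) - log Gamma(a + b))
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def g(alpha, x):
    # the quantity claimed to be >= 1
    return alpha * x ** alpha * beta(alpha, x * alpha)

# coarse grid: alpha in {0.1, ..., 0.9}, x from moderate to large
checks = [g(a / 10, x) for a in range(1, 10)
                       for x in (2.0, 10.0, 100.0, 1e4, 1e6)]
```

At $\alpha = 1$ the quantity is exactly $1$, since $B(1, x) = 1/x$; on the interior grid the values stay above $1$, consistent with the claimed sharpness.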
332,927
<p>There are two bowls, one with black olives and one with green olives. A boy takes 20 green olives and puts them in the black olive bowl, mixes the black olive bowl, then takes 20 olives from it and puts them in the green olive bowl. The question is:</p> <p>Are there more green olives in the black olive bowl, or black olives in the green olive bowl? Answer with reason.</p>
André Nicolas
6,312
<p>After the two transfers, the number of olives in the left-hand bowl is <strong>unchanged</strong>, and the number of olives in the right-hand bowl is <strong>unchanged</strong>.</p> <p>So if we look at the left-hand bowl (the "green olive bowl"), any black olives that end up there must displace exactly as many green olives. Since, implausibly, none are eaten by the person doing the transferring, the displaced green olives must end up in the right-hand bowl.</p>
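The displacement argument can be confirmed by enumerating every possible composition of the 20 olives scooped back (a small sketch; `k` is the number of black olives among them):

```python
def transfer_counts(k):
    """k = number of black olives among the 20 olives scooped back."""
    greens_returned = 20 - k
    green_in_black_bowl = 20 - greens_returned   # greens left behind
    black_in_green_bowl = k
    return green_in_black_bowl, black_in_green_bowl

# every possible mix gives equal counts
pairs = [transfer_counts(k) for k in range(21)]
```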
2,657,053
<blockquote> <p>Suppose I know that $$\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}\,\,\,\, \tag{1} $$ How can I prove the the following? $$ \sum_{i=0}^{n-1} i^2=\frac{n(n-1)(2n-1)}{6} $$</p> </blockquote> <hr> <p>I have looked up the solution to the other problem but it seems to be a bit confusing to me. Is it possible to find a solution derived from equation 1 if you did <strong>NOT</strong> know this part: $$ \frac{n(n-1)(2n-1)}{6} $$</p>
Hypergeometricx
168,053
<p>Put $n=N-1$ in $(1)$:</p> <p>$$\sum_{i=1}^{N-1}i^2=\frac {\overbrace{(N-1)}^n\ \overbrace{N}^{n+1}\ [\overbrace{2(N-1)+1}^{2n+1}]}6=\frac {N(N-1)(2N-1)}6\tag{2}$$ Now write $n$ instead of $N$ in $(2)$ above: $$\sum_{i=1}^{n-1}i^2=\frac {n(n-1)(2n-1)}6$$ Adding $0^2$ gives $$\sum_{i=0}^{n-1}i^2=\frac {n(n-1)(2n-1)}6$$</p>
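The reindexed formula is easy to verify directly for many values of $n$ (a quick sketch; `range(n)` runs over $i = 0, \dots, n-1$, matching the sum's limits):

```python
def closed_form(n):
    # n(n-1)(2n-1)/6, always an integer
    return n * (n - 1) * (2 * n - 1) // 6

checks = [sum(i * i for i in range(n)) == closed_form(n)
          for n in range(1, 200)]
```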
3,196,238
<p>Let <span class="math-container">$ A = \left\{ (x,y) \in \mathbb{R^2} \mid y= \sin ( \frac{1}{x}) , \ 0 &lt; x \leq 1 \right\}$</span> . Find <span class="math-container">$\operatorname{Cl} A$</span> in topological space <span class="math-container">$\mathbb{R^2}$</span> with dictionary order topology.</p> <p>I guess <span class="math-container">$ \operatorname{Cl} A = A $</span>? </p>
Henno Brandsma
4,280
<p>The plane in the dictionary order is basically <span class="math-container">$\mathbb{R}$</span> many disjoint topological copies of vertical stalks that are homeomorphic to <span class="math-container">$\mathbb{R}$</span>. So the closure can be taken in each “stalk”. Your <span class="math-container">$A$</span> intersects each stalk in a singleton or the empty set, so the closure is just <span class="math-container">$A$</span>, as you claim.</p>
9,345
<p>On meta.tex.sx, I've asked a question about a class of questions that might get asked over there (and have been) that are (i) ostensibly about maths usage, but (ii) might best be served by an answer that is primarily about how to handle the notation in Latex (See <a href="https://tex.meta.stackexchange.com/questions/3523/where-usage-meets-tex">https://tex.meta.stackexchange.com/questions/3523/where-usage-meets-tex</a> ). </p> <p>One of the moderators, Martin Scharrer, suggests that the best course of action for these is to migrate them here: this is the right place for the usage part, and there are many knowledgeable Texnicians over here who might be able to spot and handle the need for Latex-specific content in the answers.</p> <p>Would this policy be acceptable here?</p>
bubba
31,744
<p>It seems to me that this site is about mathematics, not about the tools that people use to write mathematics. If we're going to start answering LaTeX questions, then I hope that we will also answer questions about Microsoft Word, and Powerpoint, and give advice about the best pencils, paper, and chalk to use, too.</p> <p>People who use LaTeX often need copious help, to be sure, but there are plenty of places to get it, and I hope this site doesn't become another one of them.</p> <p>For similar reasons, I'm not keen on discussing Mathematica or Matlab usage here, either.</p>
2,054,949
<p>From what I understand, these three concepts all describe points where a function is not continuous. How can I tell them apart? Thanks!</p>
reuns
276,986
<ul> <li><p>If $f(z)$ is holomorphic/analytic on $0 &lt; |z-z_0| &lt; r$ then $z_0$ is an isolated singularity. From the <a href="https://math.stackexchange.com/questions/2038892/residue-of-complex-function/2038898#2038898">Cauchy integral formula in an annulus</a> you have the Laurent series $f(z) = \sum_{n=-\infty}^\infty a_n (z-z_0)^n$ converging on $0 &lt; |z-z_0| &lt; r$, and two cases are possible:</p> <ul> <li><p>$a_n = 0$ for $n &lt; -k$, so that $(z-z_0)^{k}f(z)$ is analytic on $|z-z_0| &lt; r$ and $z=z_0$ is a pole of order $k$. If $k\le 0$ then $z_0$ was in fact a removable singularity.</p></li> <li><p>Otherwise $z= z_0$ is an essential singularity of $f(z)$.</p></li> </ul></li> <li><p>Other types of singularities are non-isolated and include:</p> <ul> <li><p>branch points: points around which you can continue $f(z)$ analytically, but $f(z_0+e^{2i \pi}(z-z_0)) \ne f(z)$</p></li> <li><p>and frontiers (natural boundaries): $f(z) = \sum_{n=1}^\infty z^{2^n}$ is analytic on $|z| &lt; 1$ but $\lim_{r \to 1^-} f(r e^{2 \pi i m/2^k}) = \infty$ whenever $m,k \in \mathbb{N}$, so you can't continue $f(z)$ analytically beyond $|z| &lt; 1$</p></li> </ul></li> </ul>
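The three isolated cases can be told apart numerically by how $|f(z)|$ behaves as $z \to z_0$ (a sketch with standard textbook examples of my own choosing: $\sin(z)/z$ is bounded, $1/z^2$ blows up the same way in every direction, and $e^{1/z}$ behaves wildly depending on the direction of approach):

```python
import cmath

# Removable singularity: sin(z)/z stays bounded near 0 (its limit is 1).
removable = [abs(cmath.sin(z) / z - 1) for z in (1e-3, 1e-3j, 1e-3 + 1e-3j)]

# Pole of order 2: |1/z^2| blows up at the same rate in every direction.
pole = [abs(1 / z ** 2) for z in (1e-3, 1e-3j)]

# Essential singularity: exp(1/z) is huge approaching 0 along the positive
# real axis and tiny along the negative real axis.
ess_pos = abs(cmath.exp(1 / 0.01))
ess_neg = abs(cmath.exp(1 / -0.01))
```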
194
<p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p> <p><strong>How do we help students overcome their math anxieties?</strong></p>
Confutus
40
<p>It seems that the first step is to diagnose the problem. What, specifically, causes the fear or anxiety? Getting a history of the student's mathematical experience is a reasonable start. Helpful questions include courses the student has taken, what the student liked best or disliked most, what parts were easiest or hardest, and so forth. This often identifies specific concerns that can be remedied, which usually works better than vague, general reassurances. An organized, systematic approach to assessment usually works better than an intuitive, haphazard one. </p>
1,447,852
<p>Compute this sum:</p> <p><span class="math-container">$$\sum_{k=0}^{n} k \binom{n}{k}.$$</span></p> <p>I tried but I got stuck.</p>
Servaes
30,382
<p>Here's an alternative way to get the result. The first thing to note is that $$\sum_{k=0}^nk\binom{n}{k}=\sum_{k=0}^nk\cdot\frac{n!}{(n-k)!k!}=\sum_{k=1}^nk\cdot\frac{n!}{(n-k)!k!},$$ because the term with $k=0$ is equal to $0$. Next, cancelling the factor $k$ we find that $$\sum_{k=1}^nk\cdot\frac{n!}{(n-k)!k!}=\sum_{k=1}^n\frac{n!}{(n-k)!(k-1)!}=\sum_{k=1}^nn\cdot\frac{(n-1)!}{(n-k)!(k-1)!}.$$ This can be further simplified by taking the factor $n$ out, and setting $j:=k-1$ to get $$\sum_{k=1}^nn\cdot\frac{(n-1)!}{(n-k)!(k-1)!}=n\cdot\sum_{k=1}^n\frac{(n-1)!}{(n-k)!(k-1)!}=n\cdot\sum_{j=0}^{n-1}\frac{(n-1)!}{(n-1-j)!j!}.$$ We can now finish by noting that the terms of this last sum are again binomial coefficients: $$n\cdot\sum_{j=0}^{n-1}\frac{(n-1)!}{(n-1-j)!j!}=n\cdot\sum_{j=0}^{n-1}\binom{n-1}{j}=n\cdot2^{n-1}.$$</p>
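<p>As an illustrative sanity check (not part of the original answer), the closed form can be compared numerically against the sum for small <span class="math-container">$n$</span>; the range <span class="math-container">$1 \le n \le 10$</span> is arbitrary:</p>

```python
from math import comb

# Compare sum_{k=0}^{n} k*C(n,k) with the closed form n*2^(n-1)
for n in range(1, 11):
    lhs = sum(k * comb(n, k) for k in range(n + 1))
    rhs = n * 2 ** (n - 1)
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 1..10")
```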
1,447,852
<p>Compute this sum:</p> <p><span class="math-container">$$\sum_{k=0}^{n} k \binom{n}{k}.$$</span></p> <p>I tried but I got stuck.</p>
vidhan
240,536
<p>$$\large S=\sum_{k=0}^{n} k \binom{n}{k}$$ $$\large S=0\binom{n}{0}+1\binom{n}{1}+2\binom{n}{2}+..+(n-1)\binom{n}{n-1}+n\binom{n}{n}$$</p> <p>$$\large S=n\binom{n}{n}+(n-1)\binom{n}{n-1}+(n-2)\binom{n}{n-2}+..+1\binom{n}{1}+0\binom{n}{0}$$ Adding the above equations, $$\large 2S=n(\binom{n}{0}+\binom{n}{1}+\binom{n}{2}+..+\binom{n}{n-1}+\binom{n}{n})$$ $$\large 2S=n2^n$$ $$\large S=n2^{n-1}$$</p>
4,036,761
<blockquote> <p>Let G be the additive group of all polynomials in <span class="math-container">$x$</span> with integer coefficients. Show that G is isomorphic to the group <span class="math-container">$\mathbb{Q}$</span>* of all positive rationals (under multiplication).</p> </blockquote> <p>This question is from my abstract algebra assignment and I am unable to prove it. I am not able to deduce what should I map <span class="math-container">$a_0 +a_1 x + a_2 x^2 +...+ a_n x^n $</span> to so not able to proceed.</p> <p>Can you please give a hint?</p>
mathcounterexamples.net
187,663
<p><strong>Hint</strong></p> <p>Consider the map</p> <p><span class="math-container">$$\begin{array}{l|rcl} \phi : &amp; \mathbb Z[x] &amp; \longrightarrow &amp; \mathbb Q^* \\ &amp; q(x)=q_0 + q_1 x + \dots + q_n x^n&amp; \longmapsto &amp; p_0^{q_0}p_1^{q_1} \cdots p_n^{q_n}\end{array}$$</span></p> <p>where <span class="math-container">$p_i$</span> stands for the <span class="math-container">$i$</span>-th prime number.</p>
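<p>A small computational sketch of the hinted map (the helper names <code>first_primes</code> and <code>phi</code> are mine, not from the hint); it checks on one example that <span class="math-container">$\phi$</span> turns polynomial addition into multiplication of positive rationals:</p>

```python
from fractions import Fraction

def first_primes(n):
    """First n primes by trial division (fine for small n)."""
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

def phi(coeffs):
    """Send q_0 + q_1 x + ... + q_n x^n to p_0^q_0 * p_1^q_1 * ... * p_n^q_n."""
    out = Fraction(1)
    for p, q in zip(first_primes(len(coeffs)), coeffs):
        out *= Fraction(p) ** q
    return out

# phi is a homomorphism: phi(f + g) == phi(f) * phi(g)
f = [1, -2, 0, 3]   # 1 - 2x + 3x^3
g = [0, 1, 1, 0]    # x + x^2
fg = [a + b for a, b in zip(f, g)]
print(phi(f) * phi(g) == phi(fg))  # True
```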
1,413,363
<p>The question:</p> <p>Find values of $a,b,c$ if $\displaystyle \frac{x^2+1}{x^2+3x+2} = \frac{a}{x+2}+\frac{bx+c}{x+1}$</p> <p>My working so far:</p> <p><a href="https://i.imgur.com/VegifVa.jpg" rel="nofollow noreferrer">http://i.imgur.com/VegifVa.jpg</a></p> <p>How do I isolate $a$, $b$ and $c$?</p>
Mark Viola
218,419
<p>For the right-hand side, let's obtain a common denominator. To that end, we obtain</p> <p>$$\frac{a}{x+2}+\frac{bx+c}{x+1}=\frac{a(x+1)+(bx+c)(x+2)}{x^2+3x+2}=\frac{bx^2+(a+2b+c)x+(a+2c)}{x^2+3x+2}$$</p> <p>Equating this last expression to $\frac{x^2+1}{x^2+3x+2}$ we see that</p> <p>$$b=1$$</p> <p>$$a+2b+c=0$$</p> <p>and</p> <p>$$a+2c=1$$</p> <p>Can you finish from here?</p> <p>SPOILER ALERT: SCROLL OVER THE SHADED AREA TO REVEAL SOLUTION</p> <blockquote class="spoiler"> <p>We see that since $b=1$, the problem is reduced to solving a linear system of two equations for the two remaining unknowns, $a$ and $c$. We have $$a+c=-2$$and$$a+2c=1$$Upon solving, we find $a=-5$ and $c=3$.</p> </blockquote>
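<p>The solution can be double-checked with exact rational arithmetic at sample points away from the poles (an illustrative sketch, not part of the original answer):</p>

```python
from fractions import Fraction

a, b, c = -5, 1, 3

def lhs(x):
    # the original fraction (x^2 + 1) / (x^2 + 3x + 2)
    return Fraction(x * x + 1, x * x + 3 * x + 2)

def rhs(x):
    # the proposed partial-fraction decomposition
    return Fraction(a, x + 2) + Fraction(b * x + c, x + 1)

for x in [0, 1, 2, 3, 5, 10]:
    assert lhs(x) == rhs(x)
print("decomposition verified")
```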
887,327
<p>I am confused as to how to evaluate the infinite series $$\sum_{n=1}^\infty \frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n^2+n}}.$$<br> I tried splitting the fraction into two parts, i.e. $\frac{\sqrt{n+1}}{\sqrt{n^2+n}}$ and $\frac{\sqrt{n}}{\sqrt{n^2+n}}$, but we know the two individual infinite series diverge. Now how do I proceed?</p>
Hagen von Eitzen
39,174
<p>Note that $\frac{\sqrt{n+1}}{\sqrt{n(n+1)}}=\frac1{\sqrt n}$ and $\frac{\sqrt{n}}{\sqrt{n(n+1)}}=\frac1{\sqrt {n+1}}$. Keyword: telescoping.</p>
887,327
<p>I am confused as to how to evaluate the infinite series $$\sum_{n=1}^\infty \frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n^2+n}}.$$<br> I tried splitting the fraction into two parts, i.e. $\frac{\sqrt{n+1}}{\sqrt{n^2+n}}$ and $\frac{\sqrt{n}}{\sqrt{n^2+n}}$, but we know the two individual infinite series diverge. Now how do I proceed?</p>
Darth Geek
163,930
<p><strong>Hint:</strong></p> <p>$$\frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n^2+n}} = \frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n+1}\sqrt{n}} = \frac{1}{\sqrt{n}} - \frac{1}{\sqrt{n+1}}$$</p> <p>This kind of series is called <a href="http://en.wikipedia.org/wiki/Telescoping_series">Telescoping Series</a></p>
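<p>With the hint, the <span class="math-container">$N$</span>-th partial sum telescopes to <span class="math-container">$1 - 1/\sqrt{N+1}$</span>, which tends to <span class="math-container">$1$</span>; a quick numerical sketch (sample values of <span class="math-container">$N$</span> are arbitrary):</p>

```python
from math import sqrt

def partial_sum(N):
    return sum((sqrt(n + 1) - sqrt(n)) / sqrt(n * n + n) for n in range(1, N + 1))

for N in [10, 100, 10000]:
    # telescoping closed form: 1 - 1/sqrt(N+1)
    assert abs(partial_sum(N) - (1 - 1 / sqrt(N + 1))) < 1e-8
print(partial_sum(10000))  # approaches 1
```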
2,426,897
<p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p> <p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p> <p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
eyeballfrog
395,748
<p>You could use the root extraction algorithm to find it directly. It's sort of like long division.</p> <ol> <li>Starting from the decimal place, divide the number into pairs of digits. So $20\,17$.</li> <li>Find the largest integer whose square does not exceed the first pair. $4^2 &lt; 20 &lt; 5^2$. This is the first digit of the square root.</li> <li>Subtract off the square and bring down the next two digits. $20 - 4^2 = 4 \Longrightarrow 417$. This is our remainder.</li> <li>Find the largest $d$ such that the product $(20\cdot r + d)\cdot d$ is at most the current remainder, where $r$ is the part of the root already found. $(20\cdot 4 + 4)\cdot 4 &lt; 417 &lt; (20 \cdot 4 + 5)\cdot 5$, so $d= 4$.</li> <li>Add $d$ to the end of the digits already found, subtract the product from the current remainder, and bring down the next two digits. So we have 44 for our root and $417 - 84\cdot4 = 81 \Longrightarrow 8100$ for our new remainder.</li> <li>Repeat 4 and 5 until you have enough digits or the remainder and all remaining digits of the number are zero. Since we now have enough digits for the integer part, we can stop here.</li> </ol> <p>So $44 &lt; \sqrt{2017} &lt; 45$.</p> <p>I do want to comment on one suggestion you had, though.</p> <blockquote> <p>However, 2016 (the number before it) and 2018 (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares</p> </blockquote> <p>$2016 = 2^5\cdot3^2\cdot7$, which is divisible by a lot of perfect squares. Trial division by some of those perfect squares may result in a quotient that is itself close to a perfect square, which would give a guess as to what $\sqrt{2016}$ is. $2016 = 2^2\cdot504 = 3^2\cdot224 = 4^2\cdot126 = 6^2\cdot56 = 12^2\cdot14$. We should then recognize $224 \approx 225 = 15^2$ and $126\approx 121 = 11^2$. This gives $44^2 &lt; 2016 &lt; 45^2$, so clearly $44^2 &lt; 2017 &lt; 45^2$ as well.</p>
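<p>The numbered steps above can be sketched in code (the function name is mine); it reproduces the digit-by-digit computation for <span class="math-container">$\sqrt{2017}$</span>:</p>

```python
def isqrt_digits(n):
    """Integer square root via the digit-by-digit (long-division-like) method."""
    s = str(n)
    if len(s) % 2:           # pad so digits split cleanly into pairs
        s = "0" + s
    root, rem = 0, 0
    for i in range(0, len(s), 2):
        rem = rem * 100 + int(s[i:i + 2])   # bring down the next pair
        d = 0
        # largest d with (20*root + d)*d <= rem  (step 4 above)
        while (20 * root + d + 1) * (d + 1) <= rem:
            d += 1
        rem -= (20 * root + d) * d
        root = root * 10 + d
    return root

print(isqrt_digits(2017))  # 44
```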
883,972
<p>Let:</p> <p>$$f(n) = n(n+1)(n+2)/(n+3)$$</p> <p>Therefore :</p> <p>$$f∈O(n^2)$$</p> <p>However, I don't understand how it could be $n^2$, shouldn't it be $n^3$? If I expand the top we get $$n^3 + 3n^2 + 2n$$ and the biggest is $n^3$ not $n^2$.</p>
Community
-1
<p>$$ \frac{n(n+1)(n+2)}{n+3} = \frac{\Theta(n^3)}{\Theta(n)} = \Theta(n^2)$$</p>
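<p>Numerically, the ratio <span class="math-container">$f(n)/n^2$</span> tends to <span class="math-container">$1$</span>, consistent with <span class="math-container">$\Theta(n^2)$</span> (an illustrative sketch; sample values of <span class="math-container">$n$</span> are arbitrary):</p>

```python
def f(n):
    return n * (n + 1) * (n + 2) / (n + 3)

for n in [10, 1000, 10**6]:
    print(n, f(n) / n ** 2)  # ratio tends to 1 as n grows
```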
4,500,163
<blockquote> <p>Take two positive integers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> that are not multiples of <span class="math-container">$5.$</span> Then, construct a list in the following fashion: let the first term be <span class="math-container">$5,$</span> and starting with the second number, each number is obtained by multiplying the previous number on the list by <span class="math-container">$a$</span> and adding <span class="math-container">$b.$</span> What is the maximum number of primes that the list can contain before obtaining the first composite number?</p> </blockquote> <hr /> <p>I first started the problem by writing out a couple terms in the sequence: <span class="math-container">$$5, 5a + b, 5a^2 + ab + b, 5a^3 + a^2b + ab + b, 5a^4 + a^3b + a^2b + ab + b, \cdots$$</span> I then tried interpreting each term in modulo <span class="math-container">$5$</span> but considering each <span class="math-container">$a, b \pmod{5}$</span> and expanding was far too long. Considering the terms in modulo <span class="math-container">$2$</span> also didn't work since there always exists a way to make the term <span class="math-container">$\not\equiv 0 \pmod{2}.$</span></p> <p>Is there a better way to try and solve this problem besides grueling casework on <span class="math-container">$a,b, \pmod{5}$</span>?</p>
John Omielan
602,049
<p>Note your sequence is a <a href="https://en.wikipedia.org/wiki/Linear_recurrence_with_constant_coefficients" rel="nofollow noreferrer">linear recurrence with constant coefficients</a>, i.e.,</p> <p><span class="math-container">$$n_0 = 5, \; \; n_k = an_{k-1} + b \; \forall \; k \ge 1 \tag{1}\label{eq1A}$$</span></p> <p>We could determine <span class="math-container">$n_k$</span> explicitly using the method specified in the link, but note instead that, for <span class="math-container">$a \not\equiv 1 \pmod{5}$</span>, as can be seen from your list of initial values (&amp; can be proven using induction), we have</p> <p><span class="math-container">$$n_k \equiv 5a^k + b\left(\frac{a^k-1}{a-1}\right) \pmod{5} \tag{2}\label{eq2A}$$</span></p> <p>By <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow noreferrer">Fermat's little theorem</a>, <span class="math-container">$a^4 - 1 \equiv 0 \pmod{5}$</span>, so <span class="math-container">$n_4 \equiv 0 \pmod{5}$</span>. Since <span class="math-container">$n_4 \gt 5$</span>, it's not a prime and, thus, there's a maximum possible <span class="math-container">$4$</span> primes in a row.</p> <p>For <span class="math-container">$a \equiv 1 \pmod{5}$</span>, we instead have</p> <p><span class="math-container">$$n_k \equiv 5 + kb \pmod{5} \tag{3}\label{eq3A}$$</span></p> <p>Note that <span class="math-container">$n_k \not\equiv 0 \pmod{5}$</span> for <span class="math-container">$1 \le k \le 4$</span>, but that <span class="math-container">$n_5 \equiv 5 + 5b \equiv 0 \pmod{5}$</span>, so it's not a prime. This shows there's a maximum possible <span class="math-container">$5$</span> initial primes in a row. In particular, <span class="math-container">$a = 1$</span> and <span class="math-container">$b = 6$</span> give an initial list of <span class="math-container">$5$</span> primes of <span class="math-container">$\{5, 11, 17, 23, 29\}$</span>. 
This proves that <span class="math-container">$5$</span> is the maximum initial number of primes in a row possible in your sequence.</p>
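<p>The extremal case can be confirmed by brute force (an illustrative sketch; <code>is_prime</code> is naive trial division and the helper names are mine):</p>

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def run_length(a, b):
    """Count consecutive primes at the start of 5, 5a+b, a(5a+b)+b, ..."""
    n, count = 5, 0
    while is_prime(n):
        count += 1
        n = a * n + b
    return count, n

print(run_length(1, 6))  # (5, 35): five primes 5, 11, 17, 23, 29, then 35 = 5*7
```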
1,074,534
<p>How can I get started on this proof? I was thinking originally:</p> <p>Let $ n $ be odd. (Proving by contradiction) then I dont know.</p>
Mathmo123
154,802
<p><strong>Hint:</strong> Try to construct the smallest number with $k&gt;1$ divisors. If it does not have $2$ as a divisor, can it be the smallest?</p>
201,060
<p>I posted this question on Math, but there has been silence there since. So, I wonder if anyone here can get any closer to the answer to my question using Mathematica. Here is the question:</p> <p>Suppose I draw <span class="math-container">$N$</span> random variables from independent but identical uniform distributions, where <span class="math-container">$N$</span> is an even integer. I now sort the drawn values and find the two middlemost of these. Finally, I calculate a simple average of these two middlemost values.</p> <p>Is there a closed-form description of the progression of distributions that arise as <span class="math-container">$N$</span> increases from <span class="math-container">$N=2$</span> to <span class="math-container">$N=∞$</span> ? The first distribution is easily found to be Triangular, but what about the rest? Plots from simulations in MATLAB, with a uniform distribution on the range 0 to 1, provide the following illustrations:</p> <p><a href="https://i.stack.imgur.com/2wwIJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2wwIJ.png" alt="enter image description here"></a></p>
MikeY
47,314
<p>Here's an interesting counter-example: a <strong><em>discrete uniform</em></strong> distribution does not tend to your shape as <span class="math-container">$N$</span> grows.</p> <p>Let your r.v. <span class="math-container">$x$</span> be distributed as per a coin toss, taking values <span class="math-container">$\{0,1\}$</span> with equal probability. Then you have three possible outcomes after <span class="math-container">$N$</span> coin tosses: <span class="math-container">$0$</span>, <span class="math-container">$1/2$</span>, or <span class="math-container">$1$</span>.</p> <p>The middle outcome's probability is the probability that with <span class="math-container">$N$</span> coin tosses you get exactly <span class="math-container">$N/2$</span> zeros or ones. This probability is</p> <p><span class="math-container">$$ 2^{-n} \binom{n}{\frac{n}{2}} $$</span></p> <p>which decreases in <span class="math-container">$N$</span>, resulting in the plot below showing the distribution of probabilities over the three values as <span class="math-container">$N$</span> grows.</p> <p><a href="https://i.stack.imgur.com/P5mc9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P5mc9.jpg" alt="enter image description here"></a></p> <p><strong>EDIT</strong></p> <p>You can see the same effect with the discrete distribution for <span class="math-container">$x$</span> expanded to take values <span class="math-container">$x=\{1, 2, ... , 50\}$</span> with equal probability. The non-integer values are much less likely, since the odds that the two middle values differ by an odd amount are low. The integer values do tend to a Gaussian.</p> <pre><code>middleMean[n_, range_] := Module[{res}, res = Sort@Table[RandomChoice[Range[range]], {n}]; Mean[{res[[n/2]], res[[n/2 + 1]]}] ] Histogram[Table[middleMean[500, 50], {1000}], 50] </code></pre> <p><a href="https://i.stack.imgur.com/QeibX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QeibX.jpg" alt="enter image description here"></a></p>
201,060
<p>I posted this question on Math, but there has been silence there since. So, I wonder if anyone here can get any closer to the answer to my question using Mathematica. Here is the question:</p> <p>Suppose I draw <span class="math-container">$N$</span> random variables from independent but identical uniform distributions, where <span class="math-container">$N$</span> is an even integer. I now sort the drawn values and find the two middlemost of these. Finally, I calculate a simple average of these two middlemost values.</p> <p>Is there a closed-form description of the progression of distributions that arise as <span class="math-container">$N$</span> increases from <span class="math-container">$N=2$</span> to <span class="math-container">$N=∞$</span> ? The first distribution is easily found to be Triangular, but what about the rest? Plots from simulations in MATLAB, with a uniform distribution on the range 0 to 1, provide the following illustrations:</p> <p><a href="https://i.stack.imgur.com/2wwIJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2wwIJ.png" alt="enter image description here"></a></p>
CElliott
40,812
<p>According to "Median." Wikipedia, The Free Encyclopedia. 11 Jun. 2019. Web. 3 Jul. 2019, the median of a variable uniformly distributed on $[a,b]$ is $(a+b)/2$. In general, for the Uniform, Normal, and Exponential distributions, Mathematica reports the same medians as the Wikipedia article. I did not try the other distributions for which the Wikipedia article gave medians.</p>
2,871,729
<p>Given any $\alpha &gt; 0$, I need to show that for $ x \in [0,\infty)$ \begin{equation} \lim_{x\to 0} x^{\alpha}e^{|\log x|^{1/2}}=0 \end{equation}</p> <p>I have tried using L'Hospital's rule. But I am not able to arrive at answer. </p> <p>Thank you in advance.</p>
marty cohen
13,079
<p>Let $f(x) =x^{a}e^{|\log x|^{1/2}} $. $\ln f(x) =a\ln x+|\log x|^{1/2} $.</p> <p>Let $x = 1/y$, so $y \to \infty$ as $x \to 0$.</p> <p>$\ln f(1/y) =a\ln (1/y)+|\log (1/y)|^{1/2} =-a\ln (y)+|\log y|^{1/2} $.</p> <p>The key is that $\dfrac{|\log y|^{1/2}}{\ln(y)} \to 0$ as $y \to \infty$.</p> <p>Therefore $\ln f(1/y) =\ln (y)(-a +\dfrac{|\log y|^{1/2}}{\ln(y)}) $. Since $\dfrac{|\log y|^{1/2}}{\ln(y)} \to 0$ as $y \to \infty$ and $a &gt; 0$, $-a +\dfrac{|\log y|^{1/2}}{\ln(y)} \lt -a/2$ for large enough $y$ so that $\ln f(1/y) \to -\infty$ so $f(1/y) \to 0$.</p> <p>Note that this works for any exponent less than $1$, not just $\frac12$.</p>
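<p>A numerical illustration of the limit (an added sketch; the sample points and the choice <span class="math-container">$\alpha = 0.5$</span> are arbitrary):</p>

```python
from math import exp, log

def g(x, alpha):
    # x^alpha * e^{|log x|^{1/2}}
    return x ** alpha * exp(abs(log(x)) ** 0.5)

for x in [1e-2, 1e-12, 1e-50]:
    print(x, g(x, 0.5))  # decreases toward 0 as x -> 0+
```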
2,871,729
<p>Given any $\alpha &gt; 0$, I need to show that for $ x \in [0,\infty)$ \begin{equation} \lim_{x\to 0} x^{\alpha}e^{|\log x|^{1/2}}=0 \end{equation}</p> <p>I have tried using L'Hospital's rule. But I am not able to arrive at answer. </p> <p>Thank you in advance.</p>
user
505,767
<p>Let $x=e^{-y}\to 0$ as $y\to \infty$ then</p> <p>$$\large{x^{\alpha}e^{|\log x|^{1/2}}=e^{-\alpha y}\,e^{\sqrt y}=e^{\sqrt y\,-\,\alpha y}\to 0}$$</p> <p>indeed</p> <p>$$\sqrt y-\alpha y=y\left(\frac {\sqrt y}y -\alpha\right)=y\left(\frac 1 {\sqrt y}-\alpha\right) \to -\infty$$</p>
3,068,782
<p>The canonical basis is not a Schauder basis of the space of bounded sequences, but in some way, it uniquely determines every element in the space. Is it a basis in a weaker sense? How is it called?</p> <p>Thanks a lot.</p>
Sergio
633,486
<p>Any sequence in <span class="math-container">$\ell^\infty$</span> can be understood as a bounded linear functional on <span class="math-container">$\ell^1$</span> sequences. Then we can take a Schauder basis of <span class="math-container">$\ell^1$</span> and describe the behavior of the <span class="math-container">$\ell^\infty$</span> functional through the images of the basis elements. </p>
3,224,475
<p>Let <span class="math-container">$\mathbb{Z}_8$</span> be the ring of integers modulo 8, with operations <span class="math-container">$+$</span> and <span class="math-container">$.$</span> being addition and multiplication modulo 8 respectively. I want to find <span class="math-container">$a$</span> for every <span class="math-container">$0\neq b \in \mathbb{Z}_8$</span> such that <span class="math-container">$$a.b+b=0.$$</span> </p> <p>PS: I can write <span class="math-container">$a.b+b=(a+1).b=0$</span>, which gives <span class="math-container">$a+1=0$</span>, implying <span class="math-container">$a=-1=8-1=7$</span> over <span class="math-container">$\mathbb{Z}_8$</span>. But when I check element-wise, for <span class="math-container">$b=2,4,6$</span> we can also have <span class="math-container">$a=3$</span> as a solution. I didn’t expect 2 solutions when solving the equation <span class="math-container">$a.b+b=(a+1).b=0$</span>. I see this is happening with non-unit elements, but why is it so?</p>
dgould
451,573
<blockquote> <p>How come he came up the time coomplexity is log in just by breaking off binary tree and knowing height is log n</p> </blockquote> <p>I'm guessing this is a key part of the question: you're wondering not just "why is the complexity <em>log(n)</em>?", but "why does knowing that the <em>height of the tree</em> is <em>log2(n)</em> equate to the complexity being <em>O(log(n))</em>?"</p> <p>The answer is that <em>steps down the tree</em> are the unit "operations" you're trying to count. That is, as you walk down the tree, what you're doing at each step is: "<strong><em>[check whether the value at your current position is equal to, greater than, or less than the value you're searching for; and accordingly, return, go left, or go right]</em></strong>". That whole chunk of logic is a constant-time-bounded amount of processing, and the question you're trying to answer is, "How many times (at most) am I going to have to do <strong><em>[that]</em></strong>?"</p> <p>For every possible search, you'll be starting at the root and walking down the tree, all the way (at most) to a leaf on the bottom level. So, the height of the tree is equal to the maximum number of steps you'll need to take.</p> <p>One other possible source of confusion is that seeing him draw the whole tree might give the impression that the search process would involve explicitly constructing the entire Binary Search Tree data structure (which would itself be a <em>O(n)</em> task). But no -- the idea here is that the tree exists abstractly and implicitly, as a graph of the relationships among the elements in the array; and drawing it and tracing paths down it is just a way of visualizing what you're doing as you jump around the array.</p>
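<p>The step count can be observed directly; this is an illustrative sketch (names mine) of binary search over a sorted array, counting the constant-time comparison steps described above:</p>

```python
import math

def search_steps(arr, target):
    """Binary search; returns (found, number of steps taken down the tree)."""
    lo, hi, steps = 0, len(arr) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return True, steps
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

n = 10 ** 6
found, steps = search_steps(list(range(n)), n - 1)
print(found, steps)  # found, with steps bounded by roughly log2(n) ~ 20
```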
3,385,420
<p>The question is from <em>Cambridge Admission Test 1983</em></p> <blockquote> <p>A room contains m men and w women. They leave one by one at random until only people of the same sex remain. show by a carefully explained inductive argument, or otherwise, that the expected number of people remaining is <span class="math-container">$ \frac{\text{m}}{\text{w}+1}+\frac{\text{w}}{\text{m}+1} $</span></p> </blockquote> <p>I can not think of a way by what it says, "inductive argument". Also, I can not fully understand my process. My thought is:</p> <p>Consider w women have arranged, then interpolate m men. Each man has the probability <span class="math-container">$ \frac{1}{\text{w}+1} $</span> to be the first (which corresponds to being the remaining). Then we have the expected number of men to be the remaining is <span class="math-container">$ \frac{m}{\text{w}+1} $</span>.</p> <p>However, similarly, we have the expected number of women remaining is <span class="math-container">$ \frac{w}{\text{m}+1} $</span>. But why can we directly add them together? My way of thinking suggested two mutually expectation, but it seems that we do not know whether the remaining is men or women.</p>
Frostic
402,923
<p>Let <span class="math-container">$X(m,w)$</span> be the number of people remaining once only men or only women remain. Let <span class="math-container">$S_1=1$</span> if the first leaver is a man. </p> <p><span class="math-container">$\mathbb E[X(m,w)] = \mathbb E[X(m-1,w)|S_1=1]P(S_1 = 1) + \mathbb E[X(m,w-1)|S_1=0]P(S_1=0)$</span> <span class="math-container">$ =\mathbb E[X(m-1,w)]\frac{m}{m+w} + \mathbb E[X(m,w-1)]\frac{w}{m+w} $</span></p> <p><strong><em>Induction</em></strong></p> <p>For the induction let us assume these two properties:</p> <ul> <li><span class="math-container">$\mathbb E[X(m-1,w)] = \frac {m-1}{w+1} + \frac w{m}$</span> </li> <li><span class="math-container">$\mathbb E[X(m,w-1)] = \frac {m}{w} + \frac {w-1}{m+1}$</span> </li> </ul> <p>Then,</p> <p><span class="math-container">$\begin{align} \mathbb E[X(m,w)] &amp;= (\frac {m-1}{w+1} + \frac w{m})\frac{m}{m+w} + (\frac {m}{w} + \frac {w-1}{m+1})\frac{w}{m+w} \\ &amp; = (\frac {m-1}{w+1} +1)\frac{m}{m+w} + (1+\frac {w-1}{m+1})\frac{w}{m+w} \\ &amp; = \frac {m}{w+1} + \frac {w}{m+1} \end{align}$</span></p> <p><strong><em>Initialization:</em></strong></p> <ul> <li><span class="math-container">$\mathbb E[X(m,0)] = m$</span> and <span class="math-container">$\mathbb E[X(0,w)] = w$</span>, which agree with the formula.</li> </ul>
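<p>The closed form can also be checked by simulation (an added sketch; the sample sizes, trial count, and seed are arbitrary choices of mine):</p>

```python
import random

def simulate_remaining(m, w, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        people = ["M"] * m + ["W"] * w
        rng.shuffle(people)
        # people leave from the front; stop when only one sex remains
        while "M" in people and "W" in people:
            people.pop(0)
        total += len(people)
    return total / trials

m, w = 3, 4
expected = m / (w + 1) + w / (m + 1)
print(simulate_remaining(m, w), expected)  # both near 1.6
```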
1,968,978
<p>Let $f=(f_0,f_1,f_2...)$ and $g=(g_0,g_1,g_2,...)$ be sequences in $F^{\infty}$. We define multiplication $fg$ by expressing the $n$-th component $(fg)_n=\sum_{i=0}^ng_if_{n-i}$. If $h=(h_0,h_1,h_2,...)$ is also in $F^{\infty}$, we want to show multiplication is associative. Hoffman and Kunze give the following calculation:</p> <p>\begin{align} [(fg)h]_n&amp;=\sum_{i=0}^n(fg)_ih_{n-i}\\ &amp;=\sum_{i=0}^n(\sum_{j=0}^if_jg_{i-j})h_{n-i}\\ &amp;=\sum_{i=0}^n\sum_{j=0}^if_jg_{i-j}h_{n-i}\\ &amp;=\sum_{j=0}^nf_j\sum_{i=0}^{n-j}g_ih_{n-i-j}\\ &amp;=\sum_{j=0}^nf_j(gh)_{n-j}=[f(gh)]_n. \end{align} My question is regarding the second to last equality. I'm getting $\sum_{j=0}^n\sum_{i=0}^{n-j}f_{i+j}g_ih_{n-i-j}$. Is my calculation wrong, or is the one in the book wrong? If the book is wrong, it doesn't look like we have associativity, so I'm a bit confused.</p>
Simply Beautiful Art
272,831
<p>Assuming you meant to solve $\frac{y!}{(y-2)!2!}+10y=108$,</p> <p>$$\frac{y!}{(y-2)!}=y(y-1)=y^2-y$$</p> <p>$$\frac{y!}{(y-2)!2!}+10y=0.5y^2+9.5y=108$$</p> <p>$$y^2+19y=216$$</p> <p>Quadratic, with solution given as</p> <p>$$y=8,-27$$</p> <p>Usually $y&gt;0$, so then $y=8$.</p>
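<p>A one-line check of the positive root (an added sketch, not part of the original answer):</p>

```python
from math import comb

def lhs(y):
    # y!/((y-2)! 2!) + 10y, i.e. C(y, 2) + 10y
    return comb(y, 2) + 10 * y

print(lhs(8))  # 108
```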
612,681
<p>I am a little confused about the decidability of validity of first order logic formulas. I have a textbook that seems to have 2 contradictory statements. </p> <p>1) Church's theorem: The validity of a first order logic statement is undecidable. (Which I presume means one cannot prove whether a formula is valid or not.) 2) If A is valid then the semantic tableau of not(A) closes. (A is a first order logic statement.)</p> <p>It seems to me that 2 contradicts 1, since you would then be able to just apply the semantic tableau algorithm on A and prove its validity.</p> <p>I am sure I am missing something, but what?</p>
dezakin
89,487
<p>First order logic isn't <i>undecidable</i> exactly, but rather often referred to as <i>semidecidable.</i> A valid first order statement is always provably valid. This is a result of the completeness theorems. For all valid statements, there is a decidable, sound and complete proof calculus.</p> <p>However, satisfiability <i>is</i> undecidable as a consequence of Church's theorem. If you can solve the Entscheidungsproblem for a first order statement and its <i>negation</i> you have an algorithm for first order satisfiability, but Church's theorem demonstrates that this isn't possible.</p> <p>Now you can approach this with several sample statements: <span class="math-container">$\phi$</span>, a valid statement, <span class="math-container">$\psi$</span>, a statement whose negation is valid, and <span class="math-container">$\chi$</span>, a statement where neither it nor its negation are valid. Statements <span class="math-container">$\phi$</span> and <span class="math-container">$\psi$</span> are decidable and provable, with tableau or resolution; we use our proof calculus on <span class="math-container">$\phi$</span>, <span class="math-container">$\lnot$$\phi$</span>, <span class="math-container">$\psi$</span>, and <span class="math-container">$\lnot$$\psi$</span> to find out which statements are valid. For invalid statements this doesn't terminate, but we can abandon our search after we find the valid ones.</p> <p>For statement <span class="math-container">$\chi$</span> however there is no algorithm to determine that it isn't valid that is guaranteed to terminate. This is because you can use your proof calculus (pick whatever complete one you want) to determine validity of a statement, or its negation. If neither <span class="math-container">$\chi$</span> nor <span class="math-container">$\lnot$$\chi$</span> are valid however, your proof calculus for determining validity won't give you any information about validity because it won't terminate.</p>
440,583
<p><strong>Question.</strong> Is there an entire function <span class="math-container">$F$</span> satisfying first two or all three of the following assertions:</p> <ul> <li><span class="math-container">$F(z)\neq 0$</span> for all <span class="math-container">$z\in \mathbb{C}$</span>;</li> <li><span class="math-container">$1/F - 1\in H^2(\mathbb{C}_+)$</span> -- the classical Hardy space in the upper half-plane;</li> <li><span class="math-container">$F$</span> is bounded in every horizontal half-plane <span class="math-container">$\{z\colon \text{Im}(z) &gt; \delta\}$</span>?</li> </ul> <p><strong>Thoughts</strong>. Let <span class="math-container">$G= 1/F$</span>. Then we have <span class="math-container">$G(z) = 1 + \int_0^{\infty} h(x)e^{izx}\, dx$</span> for some <span class="math-container">$h\in L^2[0,\infty)$</span> and all <span class="math-container">$z\in \mathbb{C}_+$</span>. For nice functions <span class="math-container">$h$</span> (e.g., for super-exponentially decreasing) this integral representation can be extended to the whole complex plane and probably the example can be constructed in terms of <span class="math-container">$h$</span>. However, I don't know if it is possible to find <span class="math-container">$h$</span> such that <span class="math-container">$G$</span> is non-zero for every <span class="math-container">$z$</span>.</p>
Alexandre Eremenko
25,510
<p>There is a zero-free entire function bounded in every left half-plane, and such that <span class="math-container">$f-1$</span> is in <span class="math-container">$H^2$</span> in every left half-plane.</p> <p>Let <span class="math-container">$\gamma$</span> be the boundary of the region <span class="math-container">$$D=\left\{ x+iy: |y|&lt;2\pi/3, x&gt;0\right\} .$$</span> Consider the function <span class="math-container">$$g(z)=\int_\gamma \frac{\exp e^\zeta}{\zeta-z}d\zeta,\quad z\in {\mathbf{C}}\backslash D.$$</span> The integral evidently converges and <span class="math-container">$g(z)=O(1/z)$</span> in <span class="math-container">${\mathbf{C}}\backslash D.$</span> Now, <span class="math-container">$g$</span> has an analytic continuation to an entire function: deforming the contour to <span class="math-container">$\partial D_t$</span>, where <span class="math-container">$D_t=\{ x+iy:|y|&lt;2\pi/3, x&gt;t\},\; t&gt;0$</span> does not change <span class="math-container">$g$</span> in <span class="math-container">$D$</span>, and shows that <span class="math-container">$g$</span> has an analytic continuation to <span class="math-container">${\mathbf{C}}\backslash D_t$</span>, and this is for every <span class="math-container">$t&gt;0$</span>, so <span class="math-container">$g$</span> is entire. Now <span class="math-container">$f(z)=e^{g(z)}$</span> is the desired function. If you want upper half-planes take <span class="math-container">$f(iz)$</span>.</p> <p>Remark. You can improve the estimate <span class="math-container">$g(z)=O(1/z)$</span>. Evidently, <span class="math-container">$g$</span> has infinitely many zeros <span class="math-container">$z_1,z_2,\ldots$</span>. Then <span class="math-container">$g_k(z)=g(z)/((z-z_1)\ldots(z-z_k))$</span> satisfies <span class="math-container">$g_k(z)=O(z^{-k-1})$</span> as <span class="math-container">$z\to\infty$</span> outside <span class="math-container">$D_t$</span>.</p> <p>Remark 2. 
This construction is standard in the theory of entire functions, see, for example, <a href="https://mathoverflow.net/questions/190837/entire-function-bounded-at-every-line/190874#190874">Entire function bounded at every line</a> Sometimes this <span class="math-container">$g$</span> is called the Mittag-Leffler function.</p>
2,472,746
<p>I have that $f_n$ is measurable on a finite measure space.</p> <p>Define $F_k=\{\omega:|f_n|&gt;k \}$</p> <p>$F_k$ are measurable and have the property $F_1 \supseteq F_2\supseteq\cdots$</p> <p>Can I then claim that $m\left(\bigcap_{n=1}^\infty F_n\right) = 0$?</p>
operatorerror
210,391
<p>If $f$ maps to $\mathbb{R}\cup \{ \pm\infty\}$, the set is still measurable as this intersection is the preimage of the point $$ +\infty $$ which is measurable, since it is closed and thus Borel. </p> <p>In order for the statement that this set be measure zero, you need to restrict your attention to functions which attain only real (and not extended real) values.</p>
2,472,746
<p>I have that $f_n$ is measurable on a finite measure space.</p> <p>Define $F_k=\{\omega:|f_n|&gt;k \}$</p> <p>$F_k$ are measurable and have the property $F_1 \supseteq F_2\supseteq\cdots$</p> <p>Can I then claim that $m\left(\bigcap_{n=1}^\infty F_n\right) = 0$?</p>
Caleb Stanford
68,107
<p>Assuming you meant $$F_k = \{\omega \;:\; |f_{k}(\omega)| &gt; k\},$$ then this is true, even if $f$ is not measurable (if $f: A \to \mathbb{R}$). We need only note that for any fixed point $a$, $f(a)$ will be less than $k$ once $k$ gets large enough, so it will be excluded from $F_k$ for large enough $k$. So $$ \bigcap_{k=1}^\infty F_k = \varnothing. $$</p>
2,005,798
<p>I have the following equality: $$ \lim_{k \to \infty}\int_{0}^{k} x^{n}\left(1 - {x \over k}\right)^{k}\,\mathrm{d}x = n! $$</p> <p>What I think is that after taking the limit inside the integral (maybe with the help of Fatou's Lemma; I don't know how I should do that yet), I would get</p> <p>$$ \int_{0}^{\infty}\lim_{k \to \infty}\left[\, x^{n}\left(1 - {x \over k}\right)^{k}\,\right]\mathrm{d}x = \int_{0}^{\infty}x^{n}\,\mathrm{e}^{-x}\,\mathrm{d}x = \Gamma\left(n + 1\right) = n! $$</p> <p>How can I give a clear proof?</p>
Arthur
15,500
<p>Hint: For the sum $\sum_{k=0}^{n/2} \binom{n}{2k}$, use the binomial theorem to compare $(1+1)^n$ to $(1+(-1))^n$.</p>
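As an illustrative aside (not part of the original hint): comparing $(1+1)^n$ with $(1+(-1))^n$ shows the even-index terms account for exactly half of $2^n$, i.e. $2^{n-1}$, and this is easy to check numerically.

```python
from math import comb

def even_binomial_sum(n):
    """Sum of C(n, 2k) over all 2k <= n."""
    return sum(comb(n, k) for k in range(0, n + 1, 2))

# (1+1)^n is the sum of all C(n, k), while (1+(-1))^n = 0 for n >= 1;
# averaging the two shows the even-index terms sum to 2^(n-1).
for n in range(1, 12):
    assert even_binomial_sum(n) == 2 ** (n - 1)
```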
2,835,767
<p>Let $V \subset L^2(\Omega)$ be a Hilbert space and $\{V_n\}$ a sequence of subspaces such that \begin{align*} V_1 \subset V_2 \subset \dots \quad \text{and} \quad \overline{\bigcup_{n \in \mathbb{N}} V_n} = V \, (\text{w.r.t. } V\text{-norm} ). \end{align*} For some $f\in L^2(\Omega)$ we define $\phi_n = \sup_{\| v_n\| = 1, v_n \in V_n} \int_\Omega f(x) v_n(x)\, dx$. How can I prove that \begin{align*} \lim_{n\to\infty} \phi_n = \sup_{\| v\| = 1, v \in V} \int_\Omega f(x) v(x)\, dx \end{align*} holds? Is this convergence uniform?</p>
Sarvesh Ravichandran Iyer
316,409
<p>Recall what a subspace must satisfy : it must be closed under addition and scalar multiplication, and contain the zero element (and must be a subset of the vector space you are checking it is a subspace of, but this is clear here). </p> <p>In none of the conditions does $p(2x)$ come into play. You should explain from where the idea that $p(2x) = 2p(x)$ must be true, came to you. The check performed is incorrect.</p> <p>Note that $p(2x)$ is <em>not</em> the scalar multiplication of $2$ with $p(x)$ : it is the polynomial $p$, applied to the input $2x$. This is something that could differ wildly from $p$ : for example, if $p(x) = 3x^2 + 5x + 6$ then $p(2x) = 12x^2 + 10x + 6$, which is very different from $p(x)$, and will not be $2p(x)$ in general.</p> <p>For the first example, remember the three points : closed under addition, closed under scalar multiplication, and zero is a member.</p> <p>Is it closed under addition? $ax^2+bx^2 = (a+b)x^2$.</p> <p>Is it closed under scalar multiplication? $c(ax^2) = (ca)x^2$.</p> <p>Does it contain zero? $0 = 0(ax^2)$.</p> <p>Hence, it is a subspace.</p> <p>The second set does not contain zero, since $a+x^2$ has degree two always, while $0$ has no degree. So, $0 = a+x^2$ cannot happen, hence the second is not a vector space.</p>
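As an illustrative aside (not part of the original answer), the $p(2x)\neq 2p(x)$ point can be checked with SymPy, using the same polynomial as above:

```python
from sympy import symbols, expand

x = symbols('x')
p = 3*x**2 + 5*x + 6

p_of_2x = expand(p.subs(x, 2*x))  # p applied to the input 2x
twice_p = expand(2*p)             # the scalar multiple 2*p(x)

assert p_of_2x == 12*x**2 + 10*x + 6
assert twice_p == 6*x**2 + 10*x + 12
assert p_of_2x != twice_p
```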
933,487
<p>How do I find it?</p> <p>I know that $\mathcal{L}(e^t \cos t) =\frac{s-1}{(s-1)^2+1^2}$ but what is it when multiplied by $t$, as written in the title?</p>
Matt L.
70,664
<p>You need the relation</p> <p>$$\mathcal{L}\{tf(t)\}\Longleftrightarrow -F'(s)$$</p> <p>i.e. multiplication in the time domain corresponds to differentiation in the $s$-domain (and a negative sign). Since you know $F(s)$, you can easily derive the result.</p>
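As an illustrative aside (the closed form below is derived, not quoted from the original answer), the $s$-domain differentiation can be carried out with SymPy:

```python
from sympy import symbols, diff, simplify

s = symbols('s')

F = (s - 1) / ((s - 1)**2 + 1)   # F(s) = L{e^t cos t}
G = simplify(-diff(F, s))        # L{t e^t cos t} = -F'(s)

expected = ((s - 1)**2 - 1) / ((s - 1)**2 + 1)**2
assert simplify(G - expected) == 0
```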
1,157,007
<p>I know that $f$ and $g$ have a pole of order $k$ at $z=0$. $f-g$ is holomorphic at $\infty$.</p> <p>I need to prove that:</p> <p>$$\oint_{|z|=R} (f-g)' dz = 0$$</p> <p>Any help?</p> <p>Note: $f$ and $g$ only have a singularity at $z=0$.</p>
nullUser
17,459
<p>If $h$ is holomorphic on a region, then $h'$ exists and is holomorphic there. Moreover, $(f-g)'$ has the single-valued function $f-g$ as a primitive on $\mathbb{C}\setminus\{0\}$, and the integral of any function admitting a primitive over a closed curve vanishes. So $\oint_{|z|=R} (f-g)'\, dz = 0$.</p>
2,655,075
<p>How many subsets of the set $\{1, 2, \ldots, 11\}$ have median 6?</p> <p>So I have split this problem into cases. The first case is if 6 is in the subset and the second is where 6 is not. </p> <p>In case 1, I did 6 with 0, 1, 2, 3, 4, and 5 numbers surrounding it which yielded 1+16+36+16+1 = 70</p> <p>My struggle is with case 2, how do I find all the unique subsets that don't use 6?</p>
WaveX
323,744
<p>As far as the sets that DO contain $6$ go, we have the subset $\{6\}$, and this is the only way with one element in the subset.</p> <p>Next we have $\{a,6,b\}$, and there are $5$ choices for $a$ and $5$ choices for $b$, giving a total of $25$ ways with three elements.</p> <p>Following suit, we have $\{a,b,6,c,d\}$. A little more complicated, but we have $10$ ways for the left side of six and $10$ ways for the right, giving $100$ more subsets.</p> <p>$\{a,b,c,6,d,e,f\}$ has $10$ ways on each side, giving us another $100$ subsets.</p> <p>$\{a,b,c,d,6,e,f,g,h\}$ has $5$ ways to arrange each side, so there are $25$ more subsets.</p> <p>The next size up gives us the original set, so that will be $1$ more.</p> <p>So far we have a total of $\color{red}{252}$ subsets.</p> <p>Use the tip that Misha provided in their answer to now include the rest of the subsets that do not include $6$ to arrive at the final answer.</p>
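As an illustrative aside, the total of $252$ for the subsets containing $6$ can be confirmed by brute force over all subsets of $\{1,\dots,11\}$:

```python
from itertools import combinations

count = 0
for r in range(1, 12):
    for sub in combinations(range(1, 12), r):
        # odd-sized subsets whose middle (median) element is 6;
        # combinations() yields sorted tuples, so the median is sub[r // 2]
        if r % 2 == 1 and sub[r // 2] == 6:
            count += 1

assert count == 252
```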
572,316
<p>I am reading a bit about calculus of variations, and I've encountered the following.</p> <blockquote> <p>Suppose the given function <span class="math-container">$F(\cdot,\cdot,\cdot)$</span> is twice continuously differentiable with respect to all of its arguments. Among all functions/paths <span class="math-container">$y=y(x)$</span>, which are twice continuously differentiable on the interval <span class="math-container">$[a,b]$</span> with <span class="math-container">$y(a)$</span> and <span class="math-container">$y(b)$</span> specified, find the one which extremizes the functional defined by <span class="math-container">$$J(y) := \int_a^b F(x, y, y_x) \, {\rm d} x$$</span></p> </blockquote> <hr /> <p>I have a bit of trouble understanding what this means exactly, as a lot of the lingo is quite new. As I currently understand it, <span class="math-container">$F$</span> is a function whose second derivative is constant (like, for example, <span class="math-container">$y=x^2$</span>), and it looks as if <span class="math-container">$F$</span> is a function of 3 variables? I'm not too sure what <span class="math-container">$F(x,y, y_x)$</span> means, since <span class="math-container">$y_x$</span> was earlier specified to mean <span class="math-container">$y'(x)$</span>.</p> <p>Also, what is this 'functional' intuitively? I have some trouble understanding it, mainly because of the aforementioned confusion.</p>
Avitus
80,800
<p>Please, have a look at <a href="https://math.stackexchange.com/questions/445551/assumption-that-delta-q-is-small-in-the-derivation-of-euler-lagrange-equatio/445636#445636">this answer</a> for all details. </p> <p>In summary, a functional $J$ is a map that takes functions from an appropriate functional space and returns numbers. If the functional $J$ is represented through an integral like in the OP, then the Lagrangian $F=F(x,y(x),y'(x))$ is seen as a "function" of the variable $x$ <em>and</em> the functions $y$, $y'$. In any case, $F$ cannot be a function by itself (like the one you proposed, i.e. $F=F(x)$ with $F(x)=x^2$) because $J$ would no longer be a functional. $F$ can be dependent on $x$ and $y$, i.e. $F=F(x,y(x))$, or dependent on higher derivatives, like in the $F=F(x,y(x),y'(x),y^{''}(x))$ case.</p> <p>The extremals of $J$, if they exist, are solutions of the equation</p> <p>$$\delta J=0,$$ with $$\delta J=\frac{dJ}{d\epsilon}|_{\epsilon=0}:=\lim_{\epsilon\rightarrow 0} \frac{J[y+\epsilon\varphi]-J[y]}{\epsilon}, $$</p> <p>for all functions $\varphi$ "near to" $y$ (called variations), such that</p> <p>$$y(a)\stackrel{!}{=}(y+\epsilon\varphi)(a), $$ $$y(b)\stackrel{!}{=}(y+\epsilon\varphi)(b). $$</p> <p>In other words, we are searching for those functions perturbing $y$ w.r.t. which the rate of change of $J$ is zero. Expressing $\delta J=0$ w.r.t. the Lagrangian $F$ leads to the well-known Euler-Lagrange equations.</p> <p>To finish this short motivation, I would suggest focusing on an easy example and trying to understand the formalism (I recommend the geodesic problem). I hope it helps.</p>
1,046,687
<p>I have this math problem: "determine whether the series converges absolutely, converges conditionally, or diverges."</p> <p>I can use any method I'd like. This is the series: $$\sum_{n=1}^{\infty}(-1)^n\frac{1}{n\sqrt{n+10}}$$</p> <p>I thought about using a comparison test. But I'm not sure what series I can compare $\sum_{n=1}^{\infty}\frac{1}{n\sqrt{n+10}}$ to.</p>
wpm
514,160
<p>Note that the summand is alternating, and converges monotonically to zero in absolute value. This implies that the series is convergent. The series, $S = \sum_{n=1}^{\infty}\frac{(-1)^{n}}{n\sqrt{n+10}}$, is absolutely convergent: $$|S| =|\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n\sqrt{n+10}}| \leq \sum_{n=1}^{\infty} |\frac{1}{n\sqrt{n+10}}| \leq \sum_{n=1}^{\infty}\frac{1}{n\sqrt{n}} = \sum_{n=1}^{\infty}n^{-\frac{3}{2}}$$ Since $f(t) := t^{-\frac{3}{2}}$ is positive and decreasing on the set $[1,\infty) \subset \mathbb{R}$, it follows that $\sum_{n=1}^{\infty}f(n)$ converges if and only if $\int_{1}^{\infty}f(t)dt$ converges. Since $\int_{1}^{\infty}f(t)dt = \int_{1}^{\infty}t^{-\frac{3}{2}}dt = 2$, it follows that the series $\sum_{n=1}^{\infty}n^{-\frac{3}{2}}$ is convergent. Therefore $S$ is absolutely convergent. $\square$ </p>
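As an illustrative aside, a numerical check that the comparison $\frac{1}{n\sqrt{n+10}} \leq n^{-3/2}$ holds term by term, and that the partial sums of the absolute series stay below the bound $\sum n^{-3/2} \leq 1 + \int_1^\infty t^{-3/2}\,dt = 3$:

```python
partial = 0.0
for n in range(1, 100001):
    term = 1.0 / (n * (n + 10) ** 0.5)
    assert term <= n ** -1.5          # the comparison used in the proof
    partial += term

# the increasing partial sums are bounded above by 3
assert partial < 3.0
```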
4,106,933
<p>I tried <span class="math-container">$\left \vert \frac{\sin x}{x^2} - \frac{\sin c}{c^2}\right \vert \leq \frac{1}{x^2} + \frac{1}{c^2} &lt; \epsilon$</span>, but it doesn't help me much with <span class="math-container">$\vert x - c \vert &lt; \delta$</span>. How can I prove this?</p>
RRL
148,510
<p>Note that for <span class="math-container">$x,y \neq 0$</span>,</p> <p><span class="math-container">$$\left|\frac{\sin x}{x^2} - \frac{\sin y}{y^2} \right| = \left|\frac{\sin x}{x^2} - \frac{\sin x}{xy} + \frac{\sin x}{xy}- \frac{\sin y}{y^2}\right| \\\leqslant \left|\frac{\sin x }{x}\right|\left|\frac{1}{x} - \frac{1}{y}\right| + \frac{1}{|y|}\left| \frac{\sin x}{x} - \frac{\sin y}{y}\right|\\ = \left|\frac{\sin x }{x}\right|\left|\frac{1}{x} - \frac{1}{y}\right| + \frac{1}{|y|}\left| \frac{\sin x}{x} - \frac{\sin x }{y} + \frac{\sin x}{y}-\frac{\sin y}{y}\right|\\ \leqslant \left|\frac{\sin x }{x}\right|\left|\frac{1}{x} - \frac{1}{y}\right| + \frac{|\sin x|}{|y|}\left| \frac{1}{x} - \frac{1 }{y}\right| + \frac{1}{|y|^2}\left|\sin x-\sin y\right|$$</span></p> <p>Since <span class="math-container">$\left| \frac{\sin x}{x}\right|= \frac{|\sin x|}{|x|} \leqslant 1$</span> and <span class="math-container">$|\sin x | \leqslant 1$</span>, it follows that</p> <p><span class="math-container">$$\left|\frac{\sin x}{x^2} - \frac{\sin y}{y^2} \right| \leqslant \left(1 + \frac{1}{|y|} \right)\left|\frac{1}{x} - \frac{1}{y}\right| + \frac{1}{|y|^2}\left|\sin x-\sin y\right| \\ =\left(1 + \frac{1}{|y|} \right)\frac{|x-y|}{|x||y|} + \frac{1}{|y|^2}\left|\sin x-\sin y\right|,$$</span></p> <p>and for all <span class="math-container">$x,y \in [r,\infty)\cup (-\infty,-r],$</span></p> <p><span class="math-container">$$\left|\frac{\sin x}{x^2} - \frac{\sin y}{y^2} \right| \leqslant \frac{1+r}{r^3}|x-y| + \frac{1}{r^2}|\sin x - \sin y|$$</span></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Prosthaphaeresis" rel="nofollow noreferrer">prosthaphaeresis formula</a> , we get <span class="math-container">$|\sin x - \sin y | \leqslant |x - y|$</span> and it follows that</p> <p><span class="math-container">$$\left|\frac{\sin x}{x^2} - \frac{\sin y}{y^2} \right| \leqslant \frac{1+r}{r^3}|x-y| + \frac{1}{r^2}| x - y| = \frac{1+2r}{r^3}|x-y|,$$</span></p> <p>directly proving uniform 
continuity on <span class="math-container">$[r,\infty)\cup (-\infty,-r]$</span> where <span class="math-container">$r &gt; 0$</span>.</p>
3,281,828
<p>I am new to permutations and combinations and am looking for guidance in the following example:</p> <p>We have 3 people - A, B, C</p> <p>How many ways are there to arrange them into Rank 1, 2, 3?</p> <p>Looking at the example, it is clear that no repetitions are allowed and that ordering is not important (in the sense - Rank 1 - A, Rank 2 - B, Rank 3 - C is the same as Rank 2 - B, Rank 3 - C, Rank 1 - A).</p> <p>So as a permutation problem we have the answer 3! = 6</p> <p>Whereas as a variation problem we have the answer 3!/0! = 3! = 6 (but if ordering is not important then it would be 3^3 = 27)</p> <p>Please can you help me understand how to decide between permutation and variation, and whether ordering is important or not?</p>
JMoravitz
179,297
<p>To arrange <span class="math-container">$n$</span> items in a row (which can be accomplished in <span class="math-container">$n!$</span> ways) is equivalent to picking <span class="math-container">$k$</span> of <span class="math-container">$n$</span> items to arrange in a row (which can be accomplished in <span class="math-container">$\frac{n!}{(n-k)!}$</span> ways) in the case that <span class="math-container">$k=n$</span>.</p> <p>The only difference between them is semantics and that the formula for "variations" that you refer to is simply for the more generalized case where we might choose to arrange only some of but not all of our items into a row.</p> <hr> <p>It helps to keep an example in the back of your mind on what exactly each of these formulas counts. For example, given the set <span class="math-container">$\{a,b,c,d\}$</span>:</p> <ul> <li><p>the calculation <span class="math-container">$4!$</span> here can be in reference to the number of arrangements of <em>all</em> letters where order of letters matters and letters may not be repeated, every letter appearing exactly once each. Such an arrangement can be thought of as <em>a</em> <a href="https://en.wikipedia.org/wiki/Permutation" rel="nofollow noreferrer">permutation</a>. Examples of things being counted here would be <code>abcd</code>, <code>abdc</code>, <code>acbd</code>, <code>acdb</code>, ...</p></li> <li><p>the calculation <span class="math-container">$\frac{4!}{(4-2)!}$</span> here can be in reference to the number of arrangements of <em>only two of</em> the letters where order of letters matters and letters may not be repeated, each letter appearing at most once each. In your terminology, this would be a "variation." Examples of things being counted here would be <code>ab</code>, <code>ac</code>, <code>ad</code>, <code>ba</code>, <code>bc</code>, ... 
Note how <code>ab</code> and <code>ba</code> are treated as being different.</p></li> <li><p>the calculation <span class="math-container">$\binom{4}{2}=\frac{4!}{2!(4-2)!}$</span> here can be in reference to the number of combinations of two of the letters where here order of letters <em>does not matter</em>, letters may not be repeated, each letter appearing at most once each. Perhaps more correctly described, it counts the number of subsets of size two of the set <span class="math-container">$\{a,b,c,d\}$</span>. Examples of things being counted here would be <span class="math-container">$\{a,b\}, \{a,c\}, \{b,c\},\dots$</span>. Note here that <span class="math-container">$\{a,b\}$</span> is treated as being the <em>same</em> as <span class="math-container">$\{b,a\}$</span> since these are equal sets.</p></li> </ul>
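As an illustrative aside, all three counts are easy to confirm with Python's `itertools`:

```python
from itertools import permutations, combinations

letters = 'abcd'

assert len(list(permutations(letters))) == 24      # 4! full arrangements
assert len(list(permutations(letters, 2))) == 12   # 4!/(4-2)! "variations"
assert len(list(combinations(letters, 2))) == 6    # C(4,2) unordered pairs

# ('a','b') and ('b','a') are counted separately as variations...
assert ('a', 'b') in set(permutations(letters, 2))
assert ('b', 'a') in set(permutations(letters, 2))
# ...but only one of them appears among the combinations
assert ('a', 'b') in set(combinations(letters, 2))
assert ('b', 'a') not in set(combinations(letters, 2))
```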
4,454,630
<p>Is it true that for integers <span class="math-container">$i+j+k= 3m = n$</span> where <span class="math-container">$i , j, k , m , n\ge 0$</span> the inequality holds ? <span class="math-container">$$ \binom{n}{i\;j\;k} \le \binom{n}{m\;m\;m} $$</span> I tried to show <span class="math-container">$$ \frac{n!}{m!m!m!} \Big/ \frac{n!}{i!j!k!} = \frac{i!j!k!}{m!m!m!} \ge 1 $$</span> but I have no idea how to proceed , are there any related theorems ?</p>
Atticus Stonestrom
663,661
<p>Let <span class="math-container">$P$</span> be a minimal prime of <span class="math-container">$R$</span>, let <span class="math-container">$K=I+P$</span>, and suppose that <span class="math-container">$K$</span> is a proper ideal of <span class="math-container">$R$</span>. Then, since <span class="math-container">$R/P$</span> is a Noetherian domain, we know <span class="math-container">$O:=\bigcap_{n\in\mathbb{N}}(K/P)^n$</span> is trivial in <span class="math-container">$R/P$</span>. But for any <span class="math-container">$a\in J$</span>, we have <span class="math-container">$a+P\in O$</span>, whence this would imply <span class="math-container">$J\leqslant P$</span>, as needed.</p> <p>So, enumerating the minimal primes of <span class="math-container">$R$</span> as <span class="math-container">$P_1,\dots,P_n$</span>, we have that, if <span class="math-container">$J$</span> is not contained in any minimal prime, then <span class="math-container">$I+P_i=R$</span> for all <span class="math-container">$i\in[n]$</span>. For each <span class="math-container">$i$</span>, fix <span class="math-container">$a_i\in I$</span> and <span class="math-container">$p_i\in P_i$</span> with <span class="math-container">$a_i+p_i=1$</span>; then <span class="math-container">$1-p_1\dots p_n=(a_1+p_1)\dots(a_n+p_n)-p_1\dots p_n\in I$</span>. But <span class="math-container">$p:=p_1\dots p_n$</span> is an element of <span class="math-container">$\bigcap_{i\in[n]}P_i$</span>, so that <span class="math-container">$1-p$</span> is a unit, contradicting that <span class="math-container">$I$</span> is proper.</p> <p>So indeed <span class="math-container">$J$</span> must be contained in a minimal prime.</p>
2,941,854
<p>I want to determine all the <span class="math-container">$x$</span> vectors that belong to <span class="math-container">$\mathbb R^3$</span> which have a projection on the <span class="math-container">$xy$</span> plane of <span class="math-container">$w=(1,1,0)$</span> and such that <span class="math-container">$||x||=3$</span>.</p> <p>I know the formula to find a projection of two vectors:</p> <p><span class="math-container">$$p_v(x)=\frac{\langle x, v\rangle}{\langle v, v\rangle}\cdot v$$</span></p> <p>So I have the projection so I should be able to fill that in:</p> <p><span class="math-container">$$(1, 1, 0)=\frac{\langle x, v\rangle}{\langle v, v\rangle}\cdot v$$</span></p> <p>Now I consider a generic vector <span class="math-container">$x = (x_1, x_2, x_3)$</span> and I calculate the dot products, though I don't exactly understand what <span class="math-container">$v$</span> is. I know it's a vector that should be parallel to the projection of the vector <span class="math-container">$x$</span>, but not necessarily of the same length.</p> <p>Any hints on how to proceed from here or if I'm doing the right thing? Should I use any formulas?</p>
amd
265,466
<p>What you’ve given in your question is a formula for computing an orthogonal projection onto the vector <span class="math-container">$v$</span>, but in this problem you’re projecting onto a <em>plane</em>, so you can’t use that formula, at least not directly. </p> <p>Think about what it means geometrically to orthogonally project a vector <span class="math-container">$v$</span> onto the <span class="math-container">$x$</span>-<span class="math-container">$y$</span> plane: You draw a line through <span class="math-container">$v$</span> that’s perpendicular to the plane and then see where this line intersects the plane. Reversing this process, it should be clear that all of the vectors that have <span class="math-container">$w$</span> as their projection lie on a line through <span class="math-container">$w$</span> that’s perpendicular to the plane. You know that the <span class="math-container">$z$</span>-axis is perpendicular to the <span class="math-container">$x$</span>-<span class="math-container">$y$</span> plane, therefore this line can be given parametrically as <span class="math-container">$v = w+t(0,0,1)$</span>. Set the length of this vector to <span class="math-container">$3$</span> and solve for <span class="math-container">$t$</span>.</p>
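As an illustrative aside finishing the computation: with $v=w+t(0,0,1)=(1,1,t)$, the condition $\|v\|=3$ gives $1+1+t^2=9$, so $t=\pm\sqrt7$. A quick numerical check:

```python
from math import sqrt

w = (1.0, 1.0, 0.0)

for t in (sqrt(7), -sqrt(7)):      # from 1 + 1 + t^2 = 9
    v = (w[0], w[1], t)
    norm = sqrt(sum(c * c for c in v))
    assert abs(norm - 3.0) < 1e-12
    # dropping the z-coordinate recovers the projection w
    assert (v[0], v[1], 0.0) == w
```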
916,794
<p>I have to find the value of $m$ such that:</p> <p>$\displaystyle\int_0^m \dfrac{dx}{3x+1}=1.$</p> <p>I'm not sure how to integrate when dx is in the numerator. What do I do?</p> <p>edit: I believe there was a typo in the question. Solved now, thank you!</p>
GuiguiDt
171,756
<p>It's like integrating $\int f(x)dx$ with $f(x) = \frac{1}{3x+1}$</p>
916,794
<p>I have to find the value of $m$ such that:</p> <p>$\displaystyle\int_0^m \dfrac{dx}{3x+1}=1.$</p> <p>I'm not sure how to integrate when dx is in the numerator. What do I do?</p> <p>edit: I believe there was a typo in the question. Solved now, thank you!</p>
Dmoreno
121,008
<p>That was probably a typo (meaning the double $\mathrm{d}x$) and your integral is the same as the following:</p> <p>$$ I = \int^m_0 \frac{1}{3x+1} \, \mathrm{d}x = \int^m_0 \frac{\mathrm{d}x}{3x+1} = \int^m_0 \mathrm{d}x \, \frac{1}{3x+1} $$</p> <p>You can place $\mathrm{d}x$ in the numerator, or even before the fraction while there's no confusion (the latter is less common). It's just a matter of notation and convention.</p> <p>Can you take it from here?</p>
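As an illustrative aside (the original hint deliberately stops short): since $\int_0^m \frac{\mathrm{d}x}{3x+1}=\frac13\ln(3m+1)$, setting this equal to $1$ gives $m=(e^3-1)/3$. A numerical sanity check with a midpoint rule:

```python
from math import exp, log

m = (exp(3) - 1) / 3          # solves (1/3) * log(3m + 1) = 1
assert abs(log(3 * m + 1) / 3 - 1.0) < 1e-12

# midpoint-rule check that the integral over [0, m] really is 1
n = 200000
h = m / n
integral = sum(h / (3 * (i + 0.5) * h + 1) for i in range(n))
assert abs(integral - 1.0) < 1e-6
```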
2,419,116
<p>The problem is:</p> <p>Prove the convergence of the sequence </p> <p>$\sqrt7,\; \sqrt{7-\sqrt7}, \; \sqrt{7-\sqrt{7+\sqrt7}},\; \sqrt{7-\sqrt{7+\sqrt{7-\sqrt7}}}$, ....</p> <p>AND evaluate its limit.</p> <p>If the convergence is proved, I can evaluate the limit by the recurrence relation</p> <p>$a_{n+2} = \sqrt{7-\sqrt{7+a_n}}$.</p> <p>A quickly found solution to this quartic equation is 2; the other roots (if I find them all) can be discarded (since they are either too large or negative).</p> <p>But this method presupposes that I can find all roots of a quartic equation.</p> <p>Is there another method that bypasses this?</p> <p>For example, can I find another recurrence relation such that I don't have to solve a quartic (or cubic) equation, or at least a quintic equation that involves only quadratic terms (and thus can be reduced to a quadratic equation)?</p> <p>If these attempts are futile, I shall happily take my above method as an answer.</p>
Hagen von Eitzen
39,174
<p>We have $a_1=\sqrt 7$, $a_2=\sqrt{7-\sqrt 7}$, and then the recursion $a_{n+1}=f(a_n):=\sqrt{7-\sqrt{7+a_n}}$.</p> <p>By induction, one quickly shows $0&lt;a_n\le \sqrt 7$. For $0\le x&lt;y\le\sqrt 7$, we have $$0&lt;\sqrt{7+y}-\sqrt {7+x}=\frac{y-x}{\sqrt{7+y}+\sqrt{7+x}}&lt;\frac{y-x}{2\sqrt 7} $$ and for $0\le x&lt;y&lt;6$, $$0&lt;\sqrt{7-x}-\sqrt{7-y}=\frac{y-x}{\sqrt{7-x}+\sqrt{7-y}} &lt;\frac{y-x}2.$$ We conclude that for $x,y\in[0,\sqrt 7]$, also $f(x),f(y)\in[0,\sqrt 7]$ and $|f(x)-f(y)|\le \frac{|x-y|}{4\sqrt 7}$, i.e., $f$ is a contraction map. Therefore the even and the odd subsequences both converge to a fixpoint of $f$ in $[0,\sqrt 7]$. It remains to show that $f$ has exactly one fixpoint $a$ in that interval. From $f(a)=a$, we get $$ (7-a^2)^2-7=a,$$ or $$\tag1 a^4-14a^2-a+42=0.$$ The derivative of this, $4a^3-28a-1=4a(a^2-7)-1$, is $\le -1$ for $a\in[0,\sqrt 7]$, hence at most one solution to $(1)$ can exist there. With the rational root theorem in mind or by pure luck, we find that $a=2$ is a root and hence the solution.</p>
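As an illustrative aside, a short numerical run of the recursion confirms the convergence to the fixed point $a=2$:

```python
from math import sqrt

a = sqrt(7)                       # a_1
for _ in range(60):
    a = sqrt(7 - sqrt(7 + a))     # contraction with factor < 1/(4*sqrt(7))

assert abs(a - 2.0) < 1e-12
assert sqrt(7 - sqrt(7 + 2.0)) == 2.0   # a = 2 is indeed a fixed point
```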
2,736,323
<blockquote> <p>Given that $Y \sim U(2, 5)$ and $Z = 3Y - 4$, what is the distribution for $Z$?</p> </blockquote> <p>I've worked out that for $Y \sim N(2, 5)$, $Z \sim N(2, 45)$ since </p> <p>$$\mu=3\cdot2 - 4 = 2$$</p> <p>and </p> <p>$$\sigma^2=3^2 \cdot 5 = 45$$</p> <p>I'm wondering how the working differs when we have a uniform distribution, rather than a normal distribution? </p> <p><em>Sorry if a similar question has been asked before - I could not find anything on my search!</em></p> <p>Thanks!</p>
Hendrata
423,285
<p>A general technique for this type of problem is to find the CDF for $Y$, then find the CDF for $Z$ accordingly, and recognize what that CDF is.</p> <p>However, an affine transformation of a uniform distribution is still uniform. So the lowest value of $Z$ is 2, and the highest value is 11, so $Z$ is uniform from 2 to 11. </p> <p>If you want to know why that claim is true (that an affine transformation of a uniform distribution is also uniform), consider the following scenarios:</p> <ol> <li><p>Transforming $Y$ to $3Y$. If $Y$ is uniform then the CDF of $Y$ is a straight line (a ramp function). So it's zero until 2, then ramps up to 1 at 5, then stays 1 after 5. The transformation from $Y$ to $3Y$ merely scales the x-axis, so it's still a ramp function, and that's the CDF of a uniform distribution.</p></li> <li><p>Transforming from $3Y$ to $3Y-4$, this is just moving the "location" of the variables to the left by 4, so it does not affect the "shape" of the distribution.</p></li> </ol>
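As an illustrative aside, a quick simulation is consistent with $Z \sim U(2, 11)$ (the seed and sample size below are arbitrary choices):

```python
import random

random.seed(0)
zs = [3 * random.uniform(2, 5) - 4 for _ in range(100000)]

mean = sum(zs) / len(zs)
assert 2 <= min(zs) and max(zs) <= 11   # support of U(2, 11)
assert abs(mean - 6.5) < 0.1            # E[Z] = (2 + 11) / 2
```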
1,242,075
<p>I have some function $g$. I know that $g \in C^1[a,b]$, so $g'(x)$ exists. I want to know, if $g: [a,b] \to [a,b]$ is onto. How can I find out if this is true or not?</p> <p>P.S. I am not saying all $g$ have the said property, I want to have some kind of test to distinguish functions with this property from functions without it.</p>
Cameron Buie
28,900
<p>Well, given any $g\in C[a,b],$ we have that $g$ maps $[a,b]$ onto itself if and only if $g(x_1)=a$ for some $x_1\in[a,b],$ $g(x_2)=b$ for some $x_2\in[a,b],$ and $g(x)\in[a,b]$ for all $x\in[a,b].$ Put another way, we have $$\{a,b\}\subseteq g\bigl([a,b]\bigr)\subseteq[a,b],$$ where $g\bigl([a,b]\bigr)$ denotes the image of $[a,b]$ under $g.$</p> <p>Obviously, if $a$ or $b$ fails to be mapped to by $g$, then $g$ doesn't map $[a,b]$ onto itself, and if there is some $x\in[a,b]$ such that $g(x)\notin[a,b],$ then we don't even have $g:[a,b]\to[a,b].$ However, if all three conditions hold, the Intermediate Value Theorem tells us that $(a,b)\subseteq g\bigl([a,b]\bigr),$ whence we have $$[a,b]\subseteq g\bigl([a,b]\bigr)\subseteq[a,b],$$ and so $g\bigl([a,b]\bigr)=[a,b],$ as desired.</p> <p>The same arguments work in $C^1[a,b],$ as a subset of $C[a,b].$</p>
3,848,179
<blockquote> <p>The velocity <span class="math-container">$v$</span> of a freefalling skydiver is modeled by the differential equation</p> <p><span class="math-container">$$ m\frac{dv}{dt} = mg - kv^2,$$</span></p> <p>where <span class="math-container">$m$</span> is the mass of the skydiver, <span class="math-container">$g$</span> is the gravitational constant, and <span class="math-container">$k$</span> is the drag coefficient determined by the position of the diver during the dive. Find the general solution of the differential equation.</p> </blockquote> <p>So is it my job to solve for the velocity <span class="math-container">$v$</span> here, or am I missing something?</p>
Community
-1
<p>Using the hint of @E.H.E: let <span class="math-container">$x=2\sin(\theta)$</span>; then <span class="math-container">\begin{eqnarray} \int \sqrt{4-x^{2}}dx&amp;=&amp;4\int \cos^{2}(\theta)d\theta\\ &amp;=&amp;2\theta+2\sin(\theta)\cos(\theta)+C, \quad \theta=\arcsin(x/2) \\ &amp;=&amp;\frac{x}{2}\sqrt{4-x^{2}}+2\arcsin\left( \frac{x}{2}\right)+C. \end{eqnarray}</span></p>
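As an illustrative aside, the antiderivative can be checked by differentiating it with SymPy and comparing against the integrand at sample points:

```python
from sympy import symbols, sqrt, asin, diff, lambdify

x = symbols('x')
F = (x / 2) * sqrt(4 - x**2) + 2 * asin(x / 2)
dF = lambdify(x, diff(F, x))

# F'(x) should recover the integrand sqrt(4 - x^2) on (-2, 2)
for v in (-1.5, -0.5, 0.0, 0.7, 1.9):
    assert abs(dF(v) - (4 - v * v) ** 0.5) < 1e-9
```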
118,070
<p>I have the following code:</p> <pre><code>a = 6.08717*10^6; b = a/3; c = a*1.5; d = a^2; matrix={{a, b, c, d},{b, c, d, a},{c, d, a, b},{d, a, b, c}}; matrix // EngineeringForm </code></pre> <p>Normally, I use this result by copying (<code>Copy As ► MathML</code>) and pasting into Microsoft Word.</p> <p>However, I do not get the desired formatting.</p> <p>The image below shows how the result is presented in Microsoft Word:</p> <p><a href="https://i.stack.imgur.com/ZZRu6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZZRu6.png" alt="enter image description here"></a></p> <p>The image below shows the result that I want:</p> <p><a href="https://i.stack.imgur.com/tLYEZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tLYEZ.png" alt="enter image description here"></a></p> <p>1 - Comma instead of point</p> <p>2 - Without quotation marks</p> <p>3 - Four decimal places</p> <p>4 - Point instead of the Cross</p>
Carl Woll
45,431
<p>MathML export works by creating <a href="http://reference.wolfram.com/language/ref/TraditionalForm" rel="nofollow noreferrer"><code>TraditionalForm</code></a> boxes and then converting those boxes into MathML. The "Copy As MathML" menu item, on the other hand, takes the boxes in the cell to be copied, and converts those boxes to MathML.</p> <p>Alexey's answer basically worked by modifying the expression so that the boxes corresponding to the expression avoided issues related to extra quotes. This is a bit inconvenient because you always have to remember to do this modification before using <a href="http://reference.wolfram.com/language/ref/Export" rel="nofollow noreferrer"><code>Export</code></a> or "Copy as MathML".</p> <p>It would be convenient to instead do this transformation automatically.</p> <p>Before proceeding with an answer, let's see what goes wrong:</p> <pre><code>ExportString[EngineeringForm[3.15*^19], "MathML"] </code></pre> <blockquote> <pre><code>&lt;math xmlns='http://www.w3.org/1998/Math/MathML'&gt; &lt;semantics&gt; &lt;mrow&gt; &lt;ms&gt;31.5&lt;/ms&gt; &lt;mo&gt;&amp;#215;&lt;/mo&gt; &lt;msup&gt; &lt;mn&gt;10&lt;/mn&gt; &lt;ms&gt;18&lt;/ms&gt; &lt;/msup&gt; &lt;/mrow&gt; &lt;annotation \ encoding='Mathematica'&gt;TagBox[InterpretationBox[RowBox[List[&amp;quot;\ \\&amp;quot;31.5\\&amp;quot;&amp;quot;, &amp;quot;\\[Times]&amp;quot;, \ SuperscriptBox[&amp;quot;10&amp;quot;, &amp;quot;\\&amp;quot;18\\&amp;quot;&amp;quot;]]], \ 3.15`*^19, Rule[AutoDelete, True]], EngineeringForm]&lt;/annotation&gt; &lt;/semantics&gt; &lt;/math&gt; </code></pre> </blockquote> <p>Notice the string tagging: <code>&lt;ms&gt;31.5&lt;/ms&gt;</code> and <code>&lt;ms&gt;18&lt;/ms&gt;</code>. These need to be <code>&lt;mn&gt;31.5&lt;/mn&gt;</code> and <code>&lt;mn&gt;18&lt;/mn&gt;</code> instead.</p> <p>The difference in tagging is related to extra quote characters in the boxes. We can eliminate the extra quote characters by using Alexey's rule. 
Here is a function that does this:</p> <pre><code>fix[i:InterpretationBox[_, n_, ___]] := If[MatchQ[n, _Real | _Integer], ReplaceAll[ i, s_String :&gt; RuleCondition[StringTake[s, {2, -2}], StringMatchQ[s,"\""~~__~~"\""]] ], i ] </code></pre> <p>Next, the internal function that converts boxes to MathML is <code>XML`MathML`BoxesToSymbolicMathML</code>. We can modify it to post-process the boxes using the above function:</p> <pre><code>Unprotect[XML`MathML`BoxesToSymbolicMathML]; XML`MathML`BoxesToSymbolicMathML[ boxes_?System`Convert`MathMLDump`boxStructureQ, o___?(System`Convert`MathMLDump`optionOfQ[XML`MathML`BoxesToSymbolicMathML]) ] /; !TrueQ @ $MathML := Block[{$MathML=True}, XML`MathML`BoxesToSymbolicMathML[ boxes /. i_InterpretationBox :&gt; fix[i], o ] ] Protect[XML`MathML`BoxesToSymbolicMathML]; </code></pre> <p>Now, MathML output should be as you desired. In each of the following examples, note that the tags used are <code>&lt;mn&gt;</code> and <code>&lt;/mn&gt;</code>.</p> <p>Using MathMLForm:</p> <pre><code>ToString[EngineeringForm[3.14*^19], MathMLForm] </code></pre> <blockquote> <pre><code>&lt;math&gt; &lt;semantics&gt; &lt;mrow&gt; &lt;mn&gt;31.4&lt;/mn&gt; &lt;mo&gt;&amp;#215;&lt;/mo&gt; &lt;msup&gt; &lt;mn&gt;10&lt;/mn&gt; &lt;mn&gt;18&lt;/mn&gt; &lt;/msup&gt; &lt;/mrow&gt; &lt;annotation \ encoding='Mathematica'&gt;TagBox[InterpretationBox[RowBox[List[&amp;quot;31.4&amp;quot;, \ &amp;quot;\\[Times]&amp;quot;, SuperscriptBox[&amp;quot;10&amp;quot;, &amp;quot;18&amp;quot;]]], \ 3.14`*^19, Rule[AutoDelete, True]], EngineeringForm]&lt;/annotation&gt; &lt;/semantics&gt; &lt;/math&gt; </code></pre> </blockquote> <p>Using Export:</p> <pre><code>ExportString[EngineeringForm[3.14*^19], "MathML"] </code></pre> <blockquote> <pre><code>&lt;math xmlns='http://www.w3.org/1998/Math/MathML'&gt; &lt;semantics&gt; &lt;mrow&gt; &lt;mn&gt;31.4&lt;/mn&gt; &lt;mo&gt;&amp;#215;&lt;/mo&gt; &lt;msup&gt; &lt;mn&gt;10&lt;/mn&gt;
&lt;mn&gt;18&lt;/mn&gt; &lt;/msup&gt; &lt;/mrow&gt; &lt;annotation \ encoding='Mathematica'&gt;TagBox[InterpretationBox[RowBox[List[&amp;quot;31.4&amp;quot;, \ &amp;quot;\\[Times]&amp;quot;, SuperscriptBox[&amp;quot;10&amp;quot;, &amp;quot;18&amp;quot;]]], \ 3.14`*^19, Rule[AutoDelete, True]], EngineeringForm]&lt;/annotation&gt; &lt;/semantics&gt; &lt;/math&gt; </code></pre> </blockquote> <p>And using the "Copy As MathML" menu item:</p> <pre><code>EngineeringForm[3.24*^20] </code></pre> <p><a href="https://i.stack.imgur.com/JNBtd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JNBtd.png" alt="enter image description here"></a></p> <pre><code>SelectionMove[PreviousCell[],All,Cell]; FrontEnd`CopyAsMathML[]; First @ NotebookImport[ NotebookGet[ClipboardNotebook[]], "Input" -&gt; "Text" ] </code></pre> <blockquote> <pre><code>&lt;math xmlns='http://www.w3.org/1998/Math/MathML' mathematica:form='StandardForm' xmlns:mathematica='http://www.wolfram.com/XML/'&gt; &lt;semantics&gt; &lt;mrow&gt; &lt;mn&gt;324.&lt;/mn&gt; &lt;mo&gt;&amp;#215;&lt;/mo&gt; &lt;msup&gt; &lt;mn&gt;10&lt;/mn&gt; &lt;mn&gt;18&lt;/mn&gt; &lt;/msup&gt; &lt;/mrow&gt; &lt;annotation \ encoding='Mathematica'&gt;TagBox[InterpretationBox[RowBox[List[&amp;quot;324.&amp;quot;, \ &amp;quot;*&amp;quot;, SuperscriptBox[&amp;quot;10&amp;quot;, &amp;quot;18&amp;quot;]]], 3.24`*^20, \ Rule[AutoDelete, True]], EngineeringForm]&lt;/annotation&gt; &lt;/semantics&gt; &lt;/math&gt; </code></pre> </blockquote>
186,726
<p>Just a soft-question that has been bugging me for a long time:</p> <p>How does one deal with mental fatigue when studying math?</p> <p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p> <p>I would wish to continue studying, but these circumstances force me to take a break. It is truly a case of "the spirit is willing but the brain is weak"?</p> <p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p> <p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
GeoffDS
8,671
<p>I recently read an <a href="http://www.salon.com/2012/03/14/bring_back_the_40_hour_work_week/">article</a> on the 40 hour work week and I think it is somewhat related. The basic idea of it was that in the mid 20th century, they had a 40 hour work week and they had lots of research on it showing that it was optimal in many ways. That is, if you increased your work week from 40 hours to 60 hours, you wouldn't gain 50% extra productivity. You would gain 20-30% extra productivity. But, this is only over the short run.</p> <p>Once you work 8 weeks of 60 hour work weeks, you end up breaking even. That is, over that period, you would have gotten the same amount of work done if you had just worked 40 hours every week. If you do 80 hour weeks, it only takes about 2 or 3 weeks for you to break even and start doing less than if you had just worked 40 hour weeks the whole time.</p> <p>And, the article mentioned that with jobs that take a lot of mental work, e.g., doing complicated mathematics, in fact you had even less than 40 hours of productive work per week.</p> <p>So, do some mathematics. When you get tired and fatigued mentally, go do something else for a while. Then, come back. Getting enough exercise and sleep, eating healthy, and having fun activities you do is important. That is part of the reason the 40 hour work week is good. Once you start doing too much work, you lose out on all those other important things that help you function normally.</p>
186,726
<p>Just a soft-question that has been bugging me for a long time:</p> <p>How does one deal with mental fatigue when studying math?</p> <p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p> <p>I would wish to continue studying, but these circumstances force me to take a break. It is truly a case of "the spirit is willing but the brain is weak"?</p> <p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p> <p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
Community
-1
<p>I lighted upon this sterling answer by virtue of user Hepth at <a href="http://www.physicsforums.com/showthread.php?t=677222">http://www.physicsforums.com/showthread.php?t=677222</a>:</p> <hr> <p>I too get mental fatigue if I'm working too hard. Usually my problem is if I work on research (read papers/do math/program mathematica) for 8+ hours it takes another 4+ hours for my brain to slow down and I can relax. This leads to problems as if I work late, maybe to 9pm before going home, and then get in bed at 11pm, there is 0% chance of sleeping for another 3 hours due to my mind just running and running. It's horrible, and then the next day I'm even more fatigued and tired and subsequently get less done.</p> <p>The key to solving this for myself was to: <strong>1) Go to bed early</strong><br> If you're a student, this is difficult. But stop studying late into the night. Set a curfew where you don't do homework or study after say 8-9pm. Give yourself time to relax. If you're of working age, spend time with your kids after work (most important), but don't check your emails often/fret about tomorrow's workload/etc after a certain time. Relax, and do something you can enjoy without a ton of mental stimulation (take the wife/SO to a movie, the mall, the modern art museum, etc.)</p> <p><strong>2) Get up early</strong><br> If you went to bed early, you should be able to get 8+ hours of sleep and get up at a decent time. The discipline needed to get out of bed quickly and get ready for the day is tough to acquire, but if you nail this down in college, you'll be ready for those 1am, 2am and 3-5 am wakeup calls from your newborn without feeling like your heart is going to explode every time the "alarm" wails.</p> <p><strong>3) You MUST take breaks during your workday.</strong><br> Sometimes I feel like I shouldn't. I'll be on a roll, working hard, making progress, skip lunch, then all of a sudden it's 6pm. 
I FEEL like I was getting a lot done, but I really just wound myself up and zoned-in. While I was working for the full 8-10 hours, I didn't get 10 hours of work done.</p> <p>Instead, if I take a break every hour (or a little less), and go walk and get some water, get some fresh air, I find that during this break my mind will reorganize the priorities of what I'm actually trying to accomplish, and I'll nail down a single task as soon as I sit back in my chair. This also helps avoid the brain-burning overload of studying/working for long periods of time.</p> <p><strong>4) Stop browsing the internet when trying to study/work.</strong><br> Remove facebook/etc from your bookmarks, don't save it so it stays logged in/etc. Make it difficult for yourself to access those sites. While you might think browsing the internet is harmless and basically a "break" from working, it's not. You're still thinking about what you're reading, and it's a non-stop flow of new, but worthless, input.</p> <p>As for the whole "sick of learning new ideas" problem, it sounds like you're in the part of your studies where you're working on a bunch of difficult material that you have no interest in. If you were interested in it (like I was in physics) you'll have no problem studying it and learning it quickly. But if you think it's worthless for you to learn and just hate it (Organic Chem for me, do I really NEED to know how to properly identify and name 1 cis-3 methylcyclopentane?) then you just need to "soldier on" and try your hardest to be interested in it.</p> <p>It's the interest in a subject that makes learning easy; nothing is actually too difficult to learn.</p> <p>As an aside: Organic Chemistry was my lowest grade in undergrad (I think like 77%). I was "placed" into it due to my entrance exam in my first semester of college. I was taking 23 credit hours of courses and way overloaded. 
I struggled to get that 77%, hate the class, and it took up about 80% of my study/homework time as we had to complete homework assignments in the chem computer lab every other night. The last semester before I graduate I'm told that Organic Chem was NOT required for my degree, that the school should not have made me take it, and the pre-requisite (Principles of Chemistry) IS required, and though I tested ahead of it I didn't get credit, and I'd have to go back and take it. I ended up doing well, but I realized it's errors like this that are the reason so many people drop out in their last semester.</p>
1,928,259
<p>I have the following problem: </p> <blockquote> <p>The function $f(x)$ is odd, its period is $5$ and $f(-8) = 1$. What is $f(18)$?</p> </blockquote> <p>So, $f(-8) = f(-8 + 5) = 1$. I also know that you could replace $(-8)$ with $(-3)$ and still get the same result of $1$.</p> <p>I'm just learning about periods. My grasp on it still isn't very impressive. I understand that even functions are symmetric about the y axis and odd functions are symmetric about the origin, but my brain just isn't making the connection on this one.</p> <p>Please help!</p> <p>-Jon</p>
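For what it's worth, the two given properties pin the value down numerically as well. The particular function below is just one illustrative choice (an assumption, not part of the problem) that is odd, has period $5$, and satisfies $f(-8)=1$; any function meeting all three constraints gives the same value at $18$:

```python
import math

# One concrete function that is odd, has period 5, and satisfies f(-8) = 1:
# since f(-8) = f(-8 + 10) = f(2), we need f(2) = 1, which this choice gives.
def f(x):
    return math.sin(2 * math.pi * x / 5) / math.sin(4 * math.pi / 5)

print(abs(f(-8) - 1) < 1e-9)  # True: the given condition holds
print(f(18))                  # f(18) = f(18 - 20) = f(-2) = -f(2) = -1
```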
Alex Ortiz
305,215
<p>The notation $$ \left(\frac{\partial f}{\partial y}\right)_x \stackrel{\text{def}}{=} \frac{\partial f}{\partial y} $$ means that $x$ is held constant while this derivative is taken, but $x$ is <em>not to be considered constant thereafter</em>. The use of this notation is to make it explicit which variables are being held constant. I think it is somewhat contrived, but it is a notation that is used commonly in statistical mechanics, for instance.</p> <p>Note that this notation also clashes with the more common $f_x$ notation, which is to represent the partial derivative of $f$ with respect to $x$. These two notations are <em>saying different things</em>. One is saying, "these are all the variables that we are treating as constant when we evaluate the partial derivative" while the $f_x$ notation says "this is the partial with respect to $x$".</p> <p>So, $$ \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right)_x \stackrel{\text{def}}{=} \frac{\partial^2 f}{\partial x\partial y} $$ is not necessarily zero.</p>
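A small numerical sketch of that last point, using $f(x,y)=xy$ (my own illustrative choice): the inner derivative holds $x$ fixed only while it is taken, so the outer $x$-derivative need not vanish.

```python
# Central-difference derivatives for f(x, y) = x*y: here df/dy = x, and
# d/dx (df/dy) = 1, which is not zero even though x was "held constant"
# while the inner derivative was taken.
def f(x, y):
    return x * y

h = 1e-5

def df_dy(x, y):
    # (df/dy)_x : x is fixed only for the duration of this derivative
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def d2f_dx_dy(x, y):
    return (df_dy(x + h, y) - df_dy(x - h, y)) / (2 * h)

print(df_dy(1.3, 0.7))      # ~ 1.3  (equals x)
print(d2f_dx_dy(1.3, 0.7))  # ~ 1.0  (not 0)
```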
2,210,871
<p>I'm doing some calculus homework and I got stuck on a question, but eventually figured it out on my own. My textbook doesn't have all the answers included (it only gives answers to even numbered questions for some reason). Anyways I got stuck when I needed to solve for x for this function.</p> <p>$${\ -3x^3+8x-4{\sqrt{x}}-1=0}$$</p> <p>I tried to factor it but I couldn't see a way to remove the radical. However, intuitively I could see it the answer to this question was just one, after a long time of confusion. Is there a possible way to factor this? Is there any way to solve this other than just looking at it and seeing the correct answer?</p> <p>If you are curious here is the question in my textbook:</p> <p>"Find the equation of the tangent line to the curve at the point (1,5)" $${y=(2-{\sqrt{x})}}{(1+{\sqrt{x}}+3x)}$$</p> <p>Thank you for your time.</p>
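One way to remove the radical (a sketch of the idea, not necessarily the textbook's intended route): substitute $t=\sqrt{x}$, which turns the equation into the polynomial $-3t^6+8t^2-4t-1=0$. The rational root theorem then narrows rational candidates to $t=\pm1,\pm\tfrac13$, and $t=1$ (i.e. $x=1$) works:

```python
import math

# After t = sqrt(x):  -3x^3 + 8x - 4*sqrt(x) - 1  ->  -3t^6 + 8t^2 - 4t - 1
def p(t):
    return -3 * t**6 + 8 * t**2 - 4 * t - 1

print(p(1))  # 0, so t = 1 (hence x = 1) is a root

# Cross-check against the original form at x = 1:
x = 1.0
print(-3 * x**3 + 8 * x - 4 * math.sqrt(x) - 1)  # 0.0
```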
Andre.J
361,886
<p>Since you are pursuing a computer science degree, I think you can benefit from learning some Discrete Mathematics. The book by <a href="http://rads.stackoverflow.com/amzn/click/0201199122" rel="nofollow noreferrer">Grimaldi</a> is nice for an introduction, you could also consult the book by <a href="http://rads.stackoverflow.com/amzn/click/0073229725" rel="nofollow noreferrer">Kenneth Rosen</a>.</p> <p>Some single variable calculus never harmed anyone. For that I suggest the book by <a href="http://rads.stackoverflow.com/amzn/click/0914098918" rel="nofollow noreferrer">Spivak</a>.</p> <p>Someday you might also wanna learn some Linear Algebra. The book by <a href="http://rads.stackoverflow.com/amzn/click/0980232775" rel="nofollow noreferrer">Gilbert Strang</a> can be a nice introduction.</p> <p><a href="https://www.khanacademy.org/" rel="nofollow noreferrer">Khan Academy</a> has lots of videos on lots of mathematical topics and could be a nice starting point. You can also search <a href="https://ocw.mit.edu/index.htm" rel="nofollow noreferrer">MIT Open Courseware</a> for lectures on relevant topics. And of course, while learning, if you get stuck somewhere, asking here can help too. </p> <p>As for methods, the only way I know is to sit down with some paper and pencil, and work your way through stuff. You'd need a lot of patience, there's no shortcut or trick. Of course, as always, you'd need to practice solving problems, a lot. </p>
1,903,416
<p>Is there a function that can be bijective, with the set of natural numbers as domain and range, other than $f(n) = n$?</p>
Alex Ortiz
305,215
<p>Sure. Consider the function $f: \Bbb N \to \Bbb N$ defined by</p> <p>$$ f(n) = \cases{ n - 1 &amp; if $n$ is even \\ n + 1 &amp; if $n$ is odd.} $$</p> <p>This function is a non-identity bijection. As <em>Hagen von Eitzen</em> notes, depending on your definition of $\Bbb N$, swap $+$ and $-$ in the definition of $f$ if $\Bbb N = \{0, 1, 2, 3, \ldots\}$.</p>
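A quick mechanical check of this (taking $\Bbb N = \{1, 2, 3, \ldots\}$) on an initial segment:

```python
def f(n):
    # swap each odd number with its even successor: 1 <-> 2, 3 <-> 4, ...
    return n - 1 if n % 2 == 0 else n + 1

N = 1000
image = [f(n) for n in range(1, N + 1)]
print(sorted(image) == list(range(1, N + 1)))      # True: f permutes {1, ..., N}
print(all(f(f(n)) == n for n in range(1, N + 1)))  # True: f is its own inverse
```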
25,172
<p>What would be a good book for learning differential equations for a student who likes to learn things rigorously and has a good background in analysis and topology?</p>
Dan Fox
672
<p>There are many, many books on ODEs, many of them good.</p> <p>For the basic theory no one seems to have improved much on the book of Coddington and Levinson cited by Gerald Edgar. That book is old-fashioned, has essentially no examples, and could be seen as quite dry. It is clearly written and I was able to learn from it.</p> <p>V. I. Arnold. <em>Ordinary Differential Equations</em> is an excellent book because the geometric ideas behind ODEs are explained. Arnold does not write out all the details (for me this is a virtue of his books), but the writing is always careful and one learns a lot by filling in the details. His more advanced book <em>Geometrical methods in the theory of ordinary differential equations</em> is wonderful but requires some background.</p> <p>Another book in which topological/geometrical ideas operate is that of M. Hirsch and S. Smale, <em>Differential equations, dynamical systems, and linear algebra</em>.</p> <p>A student who has a good background in geometry and topology should also learn about numerics and practical solvability issues.</p> <p>For learning the theory behind numerical methods for solving ODEs I like John Butcher's <em>Numerical methods for ordinary differential equations</em>.</p> <p>A good introduction to how ODEs are solved in practice is the self-descriptively titled Hairer, Nørsett, Wanner, <em>Solving ordinary differential equations. I</em>.</p> <p>For a student with geometric interests it might be interesting to read <em>Numerical Hamiltonian problems</em> by Sanz-Serna and Calvo and <em>Geometric numerical integration</em> by Hairer, Lubich, and Wanner. These are about numeric methods adapted to the underlying geometry related to the flow being integrated.</p> <p>I'm not an expert in ODEs and as I observed there are many, many books available. I just listed some I like written by people with good mathematical judgment.</p>
59,888
<p>Very similar to </p> <p><a href="https://mathematica.stackexchange.com/questions/11638/parameters-in-plot-titles">Parameters in plot titles</a></p> <p>in which I want to call a parameter from an array using <code>PlotLabel</code> in my plot using <code>Manipulate</code>. I've tried all of the suggestions in the above post and keep getting the same error. Simple code:</p> <pre><code>f = {1, 2, 3, 4, 5, 6}; Manipulate[ Plot[Sin[f[[g]] x], {x, -2, 2}, PlotLabel -&gt; Text["f =" f[[g]]]], {g, 1, 3, 1}] </code></pre> <p><img src="https://i.stack.imgur.com/Ebrf4.gif" alt="enter image description here"></p> <p>Curiously, the first <code>f</code> value does not show up incorrectly to the left of the <code>"f="</code> string. Instead of <code>Text[]</code> I've also tried <code>HoldForm[]</code>, <code>TraditionalForm[]</code>, <code>Defer[]</code>. I'm not too picky on where <code>"f ="</code> shows up, but this is pretty confusing to me as to why this doesn't work as it should. Thanks</p>
halirutan
187
<p>What you observe is simply because you use a multiplication without noticing it. Basic example:</p> <pre><code>"f=" 2 (* 2 "f=" *) </code></pre> <p>If you look at the full form of this output with <code>FullForm[%]</code> you see that it is indeed <code>Times[2, "f="]</code>. Because the terms in a multiplication are re-ordered by Mathematica, you get the wrong result. </p> <p>The solution is to either use <code>Row</code> like shown by Nasser in the comment</p> <pre><code>f = {1, 2, 3, 4, 5, 6}; Manipulate[ Plot[Sin[f[[g]] x], {x, -2, 2}, PlotLabel -&gt; Row[{"f = ", f[[g]]}]], {{g, 1, "index"}, 1, 3, 1, Appearance -&gt; "Labeled"} ] </code></pre> <p>or you create a full string from your content with</p> <pre><code>Text["f = " &lt;&gt; ToString[f[[g]]]]] </code></pre>
30,081
<pre><code>For[n = 2500, n &lt; 10000, n += 500, sum = 0; pp = N[1/HarmonicNumber[n, 2]]; A = Reverse[IdentityMatrix[n]]; H = IdentityMatrix[n]; For[i = 0, i &lt; n, i++, For[j = 1, j &lt; n + 1 - i, j++, H[[j, j + i]] = Sqrt[pp]/(N[Sqrt[n - i]*(i + 1)])]]; T := A.Orthogonalize[N[H].A, Dot]; Z = T.N[H].A.T\[Transpose].A; sum = Total[(Flatten[UpperTriangularize[Z,1]])^2]; Print[sum] ] </code></pre> <p>I have run this program for a long time to get the result for <code>n = 2500</code> (about 12 hours) and no more result till now (more than 3 days). I want to know how to make it more efficient. Any help or suggestion will be appreciated!</p>
rcollyer
52
<p>There is a lot going on with your code, as already pointed out by others. But, I would like to point out a few more things, and this was <em>much</em>, <em>much</em> too long for a comment. </p> <p>First, I would use <code>SparseArray</code> to generate <code>H</code>, instead of the pre-allocate, fill-in method you are using, as follows:</p> <pre><code>H = SparseArray[ {l_, m_} /; m &gt;= l :&gt; Sqrt[pp]/(Sqrt[n + l - m] (m - l + 1)), {n, n} ]; </code></pre> <p><strong>Edit</strong>: looking at the timing for this, and as pointed out in the comments by ssch, the calculation of <code>H</code> can be sped up significantly by using <code>Array</code>, instead:</p> <pre><code>diags = Sqrt[1./HarmonicNumber[n, 2]]/(Sqrt[n - #] (# + 1)&amp; @ Range[0., n - 1]); H = Array[If[#2 &lt; #1, 0., diags[[#2 - #1 + 1]]] &amp;, {n, n}]; </code></pre> <p>For large matrices, this scales significantly better for large <code>n</code>. For instance, at <code>n = 500</code> the <code>SparseArray</code> method takes 3.3 secs v. 0.028 secs for the <code>Array</code> method.</p> <p><strong>End Edit</strong> </p> <p>This generates an upper-triangular matrix, with <code>m - l</code> taking the place of <code>i</code> in your loops. Also, I got rid of the extra <code>N</code> as it is not necessary because <code>pp</code> is already a machine number from your use of <code>N</code> in its definition. Similarly, all <code>N[H]</code> can be replaced with <code>H</code>, further down in the code. While it may not provide much of a speed up, do not overuse a function if you don't have to.</p> <p>Second, in terms of overuse, the way you have defined <code>T</code> means it will be regenerated every time it is used. <code>Orthogonalize</code> is not a fast operation, so don't make more work for yourself. 
I would define <code>T</code> as</p> <pre><code>T = A.Orthogonalize[H.A]; </code></pre> <p>where I am using <code>Set</code> (<code>=</code>), instead of <code>SetDelayed</code> (<code>:=</code>), and I have removed the second argument to <code>Orthogonalize</code>. By default, <code>Orthogonalize</code> effectively uses <code>Dot</code>, but by specifying the argument, you may be forgoing potential speed ups. If this is true in this case, I do not know, I have not tested it. But, it holds true in a lot of other cases, so be cautious about overriding defaults.</p> <p>Third, <code>A.M.A == M</code>. This simplifies the definition of <code>Z</code> to</p> <pre><code>Z = T.H.Transpose[T] </code></pre> <p><strong>Edit</strong>: That is not true, in general. But, $A^T = A$ and $AA = I$, so </p> <p>$$A(ABA)^TA = AAB^TAA = B^T,$$</p> <p>and combined with </p> <pre><code>Orthogonalize[H.A] == Orthogonalize[H].A </code></pre> <p>as discussed by <a href="https://mathematica.stackexchange.com/a/30108/52">Daniel</a>, both <code>T</code> and <code>Z</code> can be rewritten:</p> <pre><code>T = Orthogonalize[H]; Z = A.T.A.H.Transpose[T]; </code></pre> <p>But, as <code>A</code> is just a permutation, it can be best effected by using <code>Part</code>, and using Daniel's notation:</p> <pre><code>negrange = -Range[n]; Z = T[[negrange, negrange]].H.Transpose[T]; </code></pre> <p>Fourth, timing shows that the longest running part of the code is <code>Orthogonalize</code>. On my machine, at <code>n = 2500</code> this step alone can take over a minute. An alternative is to use <a href="https://en.wikipedia.org/wiki/QR_decomposition" rel="nofollow noreferrer">QR decomposition</a> which generates an upper triangular matrix, $R$, and an orthogonal matrix, $Q$, that forms a basis for the column space of the matrix being decomposed. So, to get an orthogonal basis out of the row space, you take the transpose. 
The definition of <code>T</code> becomes</p> <pre><code>T = QRDecomposition[Transpose[H]][[1]]; </code></pre> <p>The function below has been edited with these changes.</p> <p><strong>End Edit</strong></p> <p>Lastly, I would put the entire contents of the outer <code>For</code> into a separate function, sans the <code>Print</code> statement. <code>Print</code> does not provide results in a form usable for further processing, but as you were using <code>For</code>, its use is understandable. Additionally, putting it into a separate function allows you to use any number of methods for generating your data. With the changes I have suggested above, this looks like</p> <pre><code>myFunction[n_Integer?Positive] := Block[{negrange, diags, H, T, Z}, diags = Sqrt[1./HarmonicNumber[n, 2]]/(Sqrt[n - #] (# + 1)&amp; @ Range[0., n - 1]); H = Array[If[#2 &lt; #1, 0., diags[[#2 - #1 + 1]]] &amp;, {n, n}]; T = QRDecomposition[Transpose[H]][[1]]; negrange = -Range[n]; Z = T[[negrange, negrange]].H.Transpose[T]; Total[(Flatten[UpperTriangularize[Z,1]])^2] ] </code></pre> <p><strong>Edit</strong>: Timings on my computer:</p> <pre><code>n Timing (s) 2500 15.05 3000 24.31 3500 37.61 4000 54.56 </code></pre> <p>So, while not fast, it scales significantly better than the previous versions.</p> <p><strong>End Edit</strong></p> <p>As this is likely to be a long running process, I would run it as follows:</p> <pre><code>Internal`WithLocalSettings[ strm = OpenWrite["output.log"]; AppendTo[ $Output, strm ], Scan[Print[#, "\t", myFunction[#]]&amp;, Range[2500, 10000, 500]], $Output = Drop[$Output, -1]; Close[strm] ] </code></pre> <p>where a <code>Print</code> will output to both a log-file and the screen. This allows you to save the output as you go. If you don't need this, then just use <code>Table</code>.</p> <p><strong>Edit - Last words</strong>: </p> <p>The key here is simply to <strong><em>profile</em></strong> your code. Examine each step and find out which is slow. 
As shown above, while I like the simplicity of using <code>SparseArray</code>, it scales badly compared to using <code>Array</code>, so for efficiency sake <code>SparseArray</code> is out. The same thing goes for <code>Orthogonalize</code> v. <code>QRDecomposition</code>. As you might not be familiar with the properties of <code>QRDecomposition</code>, that is when you ask: <code>Orthogonalize</code> scales badly, are there alternatives?</p> <p>The other advice is to reduce redundancy. Only do something once, if you can get away with it. There are times where you won't be able to, like you run out of memory, but if you can, it often will save time.</p>
30,081
<pre><code>For[n = 2500, n &lt; 10000, n += 500, sum = 0; pp = N[1/HarmonicNumber[n, 2]]; A = Reverse[IdentityMatrix[n]]; H = IdentityMatrix[n]; For[i = 0, i &lt; n, i++, For[j = 1, j &lt; n + 1 - i, j++, H[[j, j + i]] = Sqrt[pp]/(N[Sqrt[n - i]*(i + 1)])]]; T := A.Orthogonalize[N[H].A, Dot]; Z = T.N[H].A.T\[Transpose].A; sum = Total[(Flatten[UpperTriangularize[Z,1]])^2]; Print[sum] ] </code></pre> <p>I have run this program for a long time to get the result for <code>n = 2500</code> (about 12 hours) and no more result till now (more than 3 days). I want to know how to make it more efficient. Any help or suggestion will be appreciated!</p>
Daniel Lichtblau
51
<p>I'm not sure one can get the larger cases due to memory needs. The code below, which uses the array generation of @rcollyer, seems reasonably effective at least for the smaller end.</p> <pre><code>n = 2500; Timing[sum = 0; diags = Sqrt[ 1./HarmonicNumber[n, 2]]/(Sqrt[n - #] (# + 1) &amp;@Range[0., n - 1]); hh = Array[If[#2 &lt; #1, 0., diags[[#2 - #1 + 1]]] &amp;, {n, n}]; orth = Orthogonalize[hh]; negrange = Range[Length[orth], 1, -1]; z3 = orth[[negrange, negrange]].hh.Transpose[orth]; sum = Total[(Flatten[UpperTriangularize[z3, 1]])^2]] (* {16.630000, 0.415096142869} *) </code></pre> <p>--- edit ---</p> <p>This shows more timings at the lower end of the desired range.</p> <pre><code>Do[ time = Timing[sum = 0; diags = Sqrt[1./HarmonicNumber[n, 2]]/(Sqrt[n - #] (# + 1) &amp;@ Range[0., n - 1]); hh = Array[If[#2 &lt; #1, 0., diags[[#2 - #1 + 1]]] &amp;, {n, n}]; orth = Orthogonalize[hh]; negrange = Range[Length[orth], 1, -1]; z3 = orth[[negrange, negrange]].hh.Transpose[orth]; sum = Total[(Flatten[UpperTriangularize[z3, 1]])^2] ]; Print[{n, time, sum}]; {time, sum}, {n, 2500, 5000, 500}] {2500,{17.230000,0.415096142869},0.415096142869} {3000,{29.170000,0.415356679612},0.415356679612} {3500,{45.930000,0.41554482016},0.41554482016} {4000,{68.400000,0.415687180722},0.415687180722} {4500,{96.070000,0.415798728445},0.415798728445} {5000,{130.300000,0.415888533453},0.415888533453} </code></pre> <p>--- end edit ---</p>
174,149
<p>How many seven-digit even numbers greater than $4,000,000$ can be formed using the digits $0,2,3,3,4,4,5$?</p> <p>I have solved the question using different cases: when $4$ is at the first place and when $5$ is at the first place, then using constraints on the last digit.</p> <p>But is there a smarter way?</p>
Sumit Bhowmick
34,963
<p>The possible arrangements fall into six cases:</p> <p>Case 1: $5 \{4 3 3 2 4\} 0$, total arrangements: $\frac{5!}{2!2!} = 30$</p> <p>Case 2: $4 \{5 3 3 2 4\} 0$, total arrangements: $\frac{5!}{2!} = 60$</p> <p>Case 3: $5 \{4 3 3 0 4\} 2$, total arrangements: $\frac{5!}{2!2!} = 30$</p> <p>Case 4: $4 \{5 3 3 0 4\} 2$, total arrangements: $\frac{5!}{2!} = 60$</p> <p>Case 5: $5 \{4 3 3 2 0\} 4$, total arrangements: $\frac{5!}{2!} = 60$</p> <p>Case 6: $4 \{5 3 3 2 0\} 4$, total arrangements: $\frac{5!}{2!} = 60$</p> <p>So, the total number of arrangements is $300$.</p>
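The total can also be confirmed by brute force; the snippet below (an illustrative check, not part of the original argument) enumerates all distinct arrangements of the digits. Note that any arrangement starting with $4$ or $5$ automatically exceeds $4{,}000{,}000$:

```python
from itertools import permutations

digits = (0, 2, 3, 3, 4, 4, 5)
# the set collapses duplicates coming from the repeated 3s and 4s
arrangements = set(permutations(digits))
count = sum(
    1
    for p in arrangements
    if p[-1] % 2 == 0                          # even
    and int("".join(map(str, p))) > 4_000_000  # greater than 4,000,000
)
print(count)  # 300
```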
2,061,547
<p>I am solving for the zeroes of the function:</p> <blockquote> <p>$$\frac{\cos(x)(3\cos^2(x)-1)}{(1+\cos^2(x))^2}$$</p> </blockquote> <p>I found the zeroes of the function by setting $\cos(x)=0$ and $3\cos^2(x)-1=0$.</p> <p>For $3\cos^2(x)-1=0$, I solved it and got $x=\cos^{-1}(\frac{\sqrt3}{3})$, but my calculator only gives one solution, $x=.955$, whereas when I graphed it I got another solution at $x=2.186$. How would I get the one solution I didn't get with the calculator?</p>
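Numerically, the second solution comes from the other sign: $3\cos^2 x - 1 = 0$ gives $\cos x = \pm\frac{1}{\sqrt3}$, and a calculator's $\cos^{-1}$ returns only the value for the $+$ branch. A short check (my own illustration):

```python
import math

c = 1 / math.sqrt(3)               # cos(x) = +1/sqrt(3)
x1 = math.acos(c)                  # the calculator's value, ~0.955
x2 = math.acos(-c)                 # the missed branch,      ~2.186
print(round(x1, 3), round(x2, 3))  # 0.955 2.186

# both satisfy 3cos^2(x) - 1 = 0, and x2 = pi - x1:
print(all(abs(3 * math.cos(x)**2 - 1) < 1e-12 for x in (x1, x2)))  # True
```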
Asinomás
33,907
<p>One way to do it is to prove that if $x\neq 0$ then $x$ is not a limit point.</p> <p>Try considering each of the following three cases:</p> <ul> <li>$x&lt;0$</li> <li>$x&gt;1$</li> <li>$0&lt;x\leq1$</li> </ul> <p>The last case is perhaps the trickiest.</p> <p>We can divide it into two cases:</p> <ul> <li>$x$ is of the form $\frac{1}{n}$</li> <li>$x$ is between $\frac{1}{n}$ and $\frac{1}{n-1}$</li> </ul>
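To see the dichotomy concretely (a numerical sketch, with illustrative choices of the point and the radius): every neighborhood of $0$ contains many of the points $\frac1n$, while a small enough neighborhood of any other point contains at most one.

```python
def count_near(x, eps, N=10_000):
    # how many of the points 1, 1/2, ..., 1/N lie strictly within eps of x
    return sum(1 for n in range(1, N + 1) if abs(1 / n - x) < eps)

for eps in (0.1, 0.01, 0.001):
    print(count_near(0.0, eps))  # 9990, 9900, 9000: many points near 0 at every scale

print(count_near(0.5, 0.1))      # 1: only 1/2 itself
print(count_near(0.4, 0.05))     # 0: 0.4 sits strictly between 1/3 and 1/2
```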
145,306
<p>I had a problem on a program of mine that I could avoid by developing the code in other ways. On the other hand, I still do not know how to solve the simple problem below:</p> <p>Consider these two definitions:</p> <p>f = p; p = 2;</p> <p>One can use Clear[p] to clear the value of p, which will lead the output of f to be p, instead of 2.</p> <p>Is there a way to clear the value of p through the definition of the variable f? That is, without explicitly writing the variable p. </p> <p>By using Definition[f], the output is "f=p", but if I try to pick the variable "p" from the definition, through Definition[f][[1]], I cannot apply Clear on it, since p is automatically evaluated to 2, thus the application of Clear in the last case leads to "Clear[2]", instead of "Clear[p]".</p> <p>Context on where this problem may appear: Consider a larger program in which the variable name p was automatically built and depends on the input of the user (in this case the name of the variable will typically be larger, but let us stick with the name p). If the name of the variable was generated automatically, and depends on the user input, I cannot explicitly type in the code "Clear[p]". One can use Clear[f], but this will not clear the value of p. To solve the issue, I simply avoided using variables whose names depend on the user input; nonetheless, I would like to know a solution for the problem above.</p> <p>Thanks.</p>
Coolwater
9,754
<pre><code>f = p; p = 2; Last[MapAt[Clear, First[OwnValues[f]], 2]] p </code></pre> <blockquote> <p>p</p> </blockquote> <p>Maybe instead of <code>f = p</code> you should use <code>f = Hold[p]</code> or <code>f = Hold[#]&amp;[p]</code> so the above becomes</p> <pre><code>ReleaseHold[MapAt[Clear, f, 1]] </code></pre> <p>which maybe looks a bit better</p>
754,301
<p>Say I have the following maximization.</p> <p><span class="math-container">$$ \max_{R: R^T R=I_n} \operatorname{Tr}(RZ),$$</span> where <span class="math-container">$R$</span> is an <span class="math-container">$n\times n$</span> orthogonal transformation, and the SVD of <span class="math-container">$Z$</span> is written as <span class="math-container">$Z = USV^T$</span>. </p> <p>I'm trying to find the optimal <span class="math-container">$R^*$</span> which intuitively I know is equal to <span class="math-container">$VU^T$</span> where <span class="math-container">$$\operatorname{Tr}(RZ)=\operatorname{Tr}(VU^T USV^T)=\operatorname{Tr}(S).$$</span> I know this is the max since it is the sum of all the singular values of <span class="math-container">$Z$</span>. However, I'm having trouble coming up with a mathematical proof justifying my intuition.</p> <p>Any thoughts? </p>
glS
173,147
<p>I'll show how to prove the more general case with complex matrices: find the maximum of <span class="math-container">$\operatorname{Tr}(UZ)$</span> over all unitaries <span class="math-container">$U$</span>: <span class="math-container">$$\max_{U: U^\dagger U=I}|\operatorname{Tr}(UZ)|.$$</span></p> <p>Leveraging the <a href="https://en.wikipedia.org/wiki/Polar_decomposition" rel="nofollow noreferrer">polar decomposition</a>, we know that <span class="math-container">$Z$</span> can always be written as <span class="math-container">$Z=VP$</span> for some unitary <span class="math-container">$V$</span> and positive semi-definite <span class="math-container">$P\ge0$</span>.</p> <p>Because the product of unitaries is another unitary, this observation reduces the problem to the following: <span class="math-container">$$\max_{U: U^\dagger U=I}\operatorname{Tr}(UP).$$</span> Moreover, observe that the <span class="math-container">$P$</span> in the polar decomposition equals <span class="math-container">$\sqrt{Z^\dagger Z}$</span>. Denoting with <span class="math-container">$\{u_k\}$</span> the eigenvectors of <span class="math-container">$P$</span>, and <span class="math-container">$s_k$</span> the eigenvalues of <span class="math-container">$P$</span> (i.e. 
the singular values of <span class="math-container">$Z$</span>), we have <span class="math-container">$P=\sum_k s_k u_k u_k^*,$</span> and therefore for every unitary <span class="math-container">$U$</span> there is an orthonormal basis <span class="math-container">$\{v_k\}$</span> such that <span class="math-container">$$UP=\sum_k s_k v_k u_k^*.$$</span> It follows that the trace reads <span class="math-container">$\operatorname{Tr}(UP)=\sum_k s_k \langle u_k,v_k\rangle$</span>, and taking the absolute value,</p> <p><span class="math-container">$$ \lvert\operatorname{Tr}(UP)\rvert=\left\lvert\sum_k s_k \langle u_k,v_k\rangle\right\rvert \le \sum_k s_k \lvert\langle u_k,v_k\rangle\rvert \le\sum_k s_k = \operatorname{Tr}P. \tag X $$</span> Therefore, if there is a <span class="math-container">$U$</span> such that <span class="math-container">$\lvert\operatorname{Tr}(UP)\rvert=\sum_k s_k$</span>, that must be the maximum we are looking for. But finding this <span class="math-container">$U$</span> is trivial at this point: just use <span class="math-container">$U=V^\dagger$</span>.</p>
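As a quick numerical sanity check of this answer (an illustrative NumPy sketch; the random test matrix and variable names are mine): for the real orthogonal version of the problem, the matrix $VU^T$ built from the SVD attains $\operatorname{Tr}(S)$, and random orthogonal matrices do not exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Z = rng.standard_normal((n, n))

# NumPy's SVD convention: Z = U @ np.diag(s) @ Vt, so V = Vt.T
U, s, Vt = np.linalg.svd(Z)
R_star = Vt.T @ U.T          # candidate maximizer R* = V U^T

best = np.trace(R_star @ Z)  # should equal the sum of singular values
assert np.isclose(best, s.sum())

# no randomly drawn orthogonal matrix should beat it
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    assert np.trace(Q @ Z) <= best + 1e-9
```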
3,264,693
<p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p> <p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p> <p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p> <p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p> <p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p> <p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. 
As previously noted by Brilliant.org, exponentiation by real numbers can be thought of as stretching.</p> <p><strong>Here is the crux of my issue:</strong></p> <blockquote> <p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span> ) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>? i.e. <strong>Why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation but instead treated like a magnitude of stretching?</strong></p> </blockquote> <p>My math skill is not particularly high level so if anyone can offer as much of an intuitive answer as possible, it would be greatly appreciated!</p> <p>Edit 1: I guess another way of expressing this question is as follows: </p> <p>Why does a duality exist between real number exponentiation and real number multiplication but a duality does not exist between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p> <p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p> <p>Edit 3: Here is a picture that helps clarify my point of confusion.
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p> <p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
Community
-1
<p>Write <span class="math-container">$z=r_1e^{i\theta_1}$</span>. When multiplying <span class="math-container">$w=r_2e^{i\theta_2}$</span> by <span class="math-container">$z$</span>, we stretch by a factor of <span class="math-container">$r_1$</span>, and rotate by an angle of <span class="math-container">$\theta_1$</span>. This is easy to see, since <span class="math-container">$zw=r_1r_2e^{i(\theta_1 +\theta_2)}$</span>.</p> <p>If <span class="math-container">$z$</span> is purely imaginary, so that we can take <span class="math-container">$\theta _1=\dfrac{\pi}2$</span>, then we get rotation by <span class="math-container">$\dfrac{\pi}2$</span> and stretching by <span class="math-container">$r_1$</span>.</p> <p>Now for multiplication by <span class="math-container">$z^{ri}$</span>, we have <span class="math-container">$z^{ri}=e^{ri\ln z}$</span>. Depending on the value of <span class="math-container">$\ln z$</span>, we have various possibilities. But we do know there will be no stretching. For, the complex number <span class="math-container">$e^{ri\ln z}$</span> has magnitude <span class="math-container">$1$</span>.</p>
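The magnitude claims here are easy to verify numerically (a minimal Python check, purely illustrative): $2^{2i}$ lies on the unit circle, so multiplying by it only rotates, while multiplying by $2i$ also rescales by $|2i|=2$.

```python
import cmath

z = 3 + 4j                     # arbitrary test point, |z| = 5

w = 2 ** 2j                    # purely imaginary exponent
assert abs(abs(w) - 1) < 1e-9  # |2^(2i)| = 1: rotation only

assert abs(abs(w * z) - abs(z)) < 1e-9        # rotating z keeps |z|
assert abs(abs(2j * z) - 2 * abs(z)) < 1e-9   # multiplying by 2i doubles |z|

# the rotation angle of 2^(2i) is 2*ln(2) radians
assert abs(cmath.phase(w) - 2 * cmath.log(2).real) < 1e-9
```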
3,264,693
<p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p> <p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p> <p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p> <p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p> <p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p> <p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. 
As previously noted by Brilliant.org, exponentiation by real numbers can be thought of as stretching.</p> <p><strong>Here is the crux of my issue:</strong></p> <blockquote> <p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span> ) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>? i.e. <strong>Why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation but instead treated like a magnitude of stretching?</strong></p> </blockquote> <p>My math skill is not particularly high level so if anyone can offer as much of an intuitive answer as possible, it would be greatly appreciated!</p> <p>Edit 1: I guess another way of expressing this question is as follows: </p> <p>Why does a duality exist between real number exponentiation and real number multiplication but a duality does not exist between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p> <p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p> <p>Edit 3: Here is a picture that helps clarify my point of confusion.
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p> <p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
Laurel Turner
677,923
<p>To briefly address your specific question about "duality" first, there is no "duality" (not sure what is precisely meant by that term here) between complex exponentiation and complex multiplication, unless the arguments of both are real. This is because complex multiplication is commutative; complex exponentiation is not - neither is real exponentiation (<span class="math-container">$e^{ix}$</span> is not the same as <span class="math-container">$(ix)^{e}$</span>). Take the second line:</p> <p><span class="math-container">$$c^{a+bi}\cdot z \leftrightarrow (a+bi)\cdot z $$</span> <span class="math-container">$$ \text{stretching and rotating} \leftrightarrow \text{stretching and rotating} $$</span></p> <p>Yes, <span class="math-container">$(a+bi)\cdot z$</span> corresponds to a transformation consisting of some stretching and some rotation, but the stretching is due to <em>both</em> multiplying by <span class="math-container">$a$</span> and multiplying by <span class="math-container">$bi$</span>. </p> <p>Now, let's break down intuitively what different operations on the complex plane are (I know you said you already went over some of this in your question, but I think it will lead into the explanation for complex exponentiation nicely).</p> <p>Complex addition is the same as vector addition: we add the components. Think of this intuitively by imagining each complex number as an arrow: adding two complex numbers is like sticking one of their arrows on the end of the other. Another way: think of adding a complex number not as a static operation, but as a transformation. Adding the number <span class="math-container">$(a+bi)$</span> is the same as shifting the origin of the complex plane onto the point <span class="math-container">$(-a-bi)$</span>. Take a second to imagine why that's true.
Thus <span class="math-container">$(a+bi)$</span> is a <em>function</em> with respect to addition, which maps every point in the complex plane to another point in the complex plane, <span class="math-container">$(a+bi)$</span> away.</p> <p>Complex multiplication corresponds to both a stretch and a rotation (usually). <span class="math-container">$$(a+bi)\cdot(c+di)=(ac-bd)+(ad+bc)i$$</span> </p> <p>A better way to think about this is in terms of Euler's formula: represent your two complex numbers in polar coordinates, and multiplication becomes much clearer: <span class="math-container">$$r_1 e^{i\theta_1}\cdot r_2 e^{i\theta_2}=r_1r_2e^{i(\theta_1+\theta_2)}$$</span> </p> <p>So we can imagine complex multiplication of two numbers as taking the angles they make with the x axis, adding those two angles to get the angle of your new number, then multiplying the magnitudes of the two original numbers to get the magnitude of your new number. Think of complex multiplication by <span class="math-container">$(a+bi)$</span> as two very dynamic transformations composed together: first stretching the entire complex plane by a factor of <span class="math-container">$\sqrt{a^2+b^2}$</span>, then rotating the entire complex plane by an angle of <span class="math-container">$\tan^{-1}(\frac{b}{a})$</span>. Thus <span class="math-container">$(a+bi)$</span> can also be thought of as a <em>function</em> with respect to multiplication: it maps every point in the complex plane to another point in the complex plane by a combination of a rotation and a stretch.</p> <p>What kind of function is "complex exponentiation"? We define it as follows: <span class="math-container">$a^{b+c\cdot i}$</span>=<span class="math-container">$a^{b}\cdot a^{c\cdot i}$</span> where <span class="math-container">$a^{c \cdot i} = e^{c\cdot i \log a}$</span>, etc. based on Euler's formula. <span class="math-container">$e^{c\cdot i \log a}$</span> is <span class="math-container">$e$</span> raised to a purely imaginary number and is thus a rotation.
Note that we can imagine complex exponentiation again as a sort of dynamic transformation, squishing and mapping the complex plane to a new location.</p> <p>Let's break this down into two cases. The first is the only one your question asks about. </p> <p>1) The base of the exponent is real. As can be seen from above, every "complex exponentiation" is a transformation of the complex plane consisting of two transformations: first, a stretch by some factor, and then a rotation. This is very similar to complex multiplication, which begs the question, when do these two things behave in the same way? You called it "duality," I'm not going to call it that because that word means something specific in linear algebra, I'll just call this similarity "similarity." </p> <p>Multiplying <span class="math-container">$x$</span> by <span class="math-container">$(a+bi)\rightarrow$</span> Stretch by <span class="math-container">$\sqrt{a^2+b^2}$</span>, rotate by <span class="math-container">$\tan^{-1}(\frac{b}{a})$</span> radians. </p> <p>Raising <span class="math-container">$x$</span> to the <span class="math-container">$(a+bi)$</span> power <span class="math-container">$\rightarrow$</span> Stretch by <span class="math-container">$x^a$</span>, rotate by <span class="math-container">$b \log x$</span> radians. </p> <p>Note that the two are very dissimilar - exponentiation is dependent on the base whereas multiplication is not. This is a result of the fact that multiplication by a complex number is something called a <em>linear transformation</em> (<a href="https://en.wikipedia.org/wiki/Linear_map" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Linear_map</a>); raising something to a complex power is most certainly <em>not</em>.</p> <p>2) The base of the exponent is complex.
This gets a little more complicated, because raising a complex number to a real power corresponds in part to a rotation, so separating which parts are the rotation and which parts are the stretching is a little annoying and won't give much insight here. Complex exponentiation of complex numbers is <strong>really</strong> funky, giving rise to all sorts of weird fractal shapes when we consider which complex numbers get really large when we raise them to a complex power, and which ones don't - this is related to how the Mandelbrot set is formed (<a href="https://en.wikipedia.org/wiki/Mandelbrot_set" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mandelbrot_set</a>). The point is that, again, the magnitude of the stretch and rotation is dependent on x, meaning that this is <em>not</em> a linear transformation.</p> <p>If you want to gain an intuition for how complex exponentiation of complex numbers works, I recommend you play around with some functions with this grapher: <a href="http://davidbau.com/conformal/#z" rel="nofollow noreferrer">http://davidbau.com/conformal/#z</a></p> <p>So, to return to what I mentioned at the beginning, the reason why you're not seeing a "duality" in case 3 is that there's no "duality" in case 2 to begin with - yes, both complex exponentiation and complex multiplication correspond to both a rotation and a stretch, but the rotation and stretch for each behave in very different ways. Complex multiplication is a linear transform, complex exponentiation is not.</p> <p>I also enjoy brilliant.org's courses; if you're interested, I would recommend you check out their course on linear algebra next (<a href="https://brilliant.org/courses/linear-algebra/" rel="nofollow noreferrer">https://brilliant.org/courses/linear-algebra/</a>). This is the first answer I've actually posted. I would love feedback from anyone if they have it.</p>
1,755,029
<p>Imagine a cubic array made up of an $n\times n\times n$ arrangement of unit cubes: the cubic array is $n$ cubes wide, $n$ cubes high and $n$ cubes deep. A special case is a $3\times3\times3$ Rubik’s cube, which you may be familiar with. How many unit cubes are there on the surface of the $n\times n\times n$ cubic array?</p> <p>As far as I can see there are 27 unit cubes in a $3\times3\times3$ Rubik's cube. But the answer says something different. There are $6n^2$ squares in total on the surface of an $n\times n\times n$ cube. But after that I can't proceed.</p> <p>Please help :)</p>
John Douma
69,810
<p>There are $6$ faces with $n^2$ cubes on each face, for a total of $6n^2$ cubes. The eight cubes on the vertices are each counted $3$ times (once for each of the three faces meeting there), so we must subtract $2\cdot 8=16$ to get $6n^2-16$ cubes. Likewise, there are $12$ edges, each with $n-2$ cubes that have been double counted, so we must subtract $12(n-2)$ to get $6n^2-16-12n+24=6n^2-12n+8$ cubes.</p>
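The closed form agrees with the more direct count "all cubes minus the hidden interior block", $n^3-(n-2)^3$; a brief brute-force cross-check:

```python
def surface_cubes(n):
    """Closed form from the inclusion-exclusion count above."""
    return 6 * n**2 - 12 * n + 8

for n in range(2, 50):
    # total cubes minus the hidden (n-2)^3 interior block
    assert surface_cubes(n) == n**3 - (n - 2)**3

assert surface_cubes(3) == 26  # 3x3x3 Rubik's cube: 27 cubes, 1 hidden center
```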
3,862,182
<p>I encountered this question, and I am unsure how to answer it.</p> <p>When <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x - 4$</span>, the remainder is <span class="math-container">$13$</span>, and when <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x + 3$</span>, the remainder is <span class="math-container">$-1$</span>. Find the remainder when <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x^2 - x - 12$</span>.</p> <p>How would I proceed? Thank you in advance!</p>
Bernard
202,857
<p>Use the inverse isomorphism of the isomorphism in the <em>Chinese remainder theorem</em>: as <span class="math-container">$x^2-x-12=(x+3)(x-4)$</span>, we have an isomorphism <span class="math-container">\begin{align} K[X]/(X^2-X-12)&amp;\xrightarrow[\quad]\sim K[X]/(X+3)\times K[X]/(X-4) \\ P\bmod(X^2-X-12)&amp;\longmapsto(P\bmod (X+3), P\bmod (X-4))&amp;&amp;(K\text{ is the base field}) \end{align}</span> and given a Bézout's relation <span class="math-container">$\;U(X)(X+3)+V(X)(X-4)=1$</span>, the inverse isomorphism is given by <span class="math-container">$$(S\bmod (X+3), T\bmod(X-4))\longmapsto TU(X+3)+SV(X-4)\bmod(X^2\!-X-12) .$$</span></p> <p>Now a Bézout's relation can be found with the <em>extended Euclidean algorithm</em>, but in the present case it is even shorter: <span class="math-container">$(X+3)-(X-4)=7$</span>, so we simply have <span class="math-container">$$\frac17(X+3)-\frac17(X-4)=1$$</span> and given that <span class="math-container">$\:P\bmod(X+3)=-1$</span>, <span class="math-container">$P\bmod(X-4)=13$</span>, we obtain readily <span class="math-container">$$P\bmod(X^2-X-12)=\frac{13}7(X+3)+\frac17(X-4)=2X+5.$$</span></p>
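A quick numeric check of the result (illustrative Python; the particular quotient polynomial $Q$ below is an arbitrary choice of mine): any $P$ with $P(4)=13$ and $P(-3)=-1$ is consistent with the remainder $2X+5$, since the remainder must reproduce those two values.

```python
def R(x):
    return 2 * x + 5  # the claimed remainder

# the remainder must agree with P at the roots of (x+3)(x-4)
assert R(4) == 13 and R(-3) == -1

# build an explicit P = (x^2 - x - 12) * Q + (2x + 5) and re-check:
# the multiple of (x^2 - x - 12) vanishes at x = 4 and x = -3
def P(x, Q=lambda x: x**3 + 7):
    return (x**2 - x - 12) * Q(x) + R(x)

assert P(4) == 13 and P(-3) == -1
```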
3,074,901
<p>Find the rank of the following matrix</p> <p><span class="math-container">$$\begin{bmatrix}1&amp;-1&amp;2\\2&amp;1&amp;3\end{bmatrix}$$</span></p> <p>My approach: </p> <p>The row space exists in <span class="math-container">$R^3$</span> and is spanned by two vectors. Since the vectors are independent of each other (because they are not scalar multiples of each other), the row rank of the matrix, which is the rank, is two, which is the correct answer.</p> <p>However, I'm still confused as to why the answer is the answer. If the row space exists in <span class="math-container">$R^3$</span>, doesn't it have to be spanned by at least three vectors? For example, the unit vectors <span class="math-container">$u_1, u_2, u_3$</span> span <span class="math-container">$R^3$</span> and are independent of each other, so the rank of the space should be 3. </p> <p>Can someone please tell me the flaw in my logic/understanding?</p>
eason 曲
505,625
<p>The <em>rank</em> of a matrix is the number of linearly independent row vectors. A matrix with only <span class="math-container">$2$</span> rows can therefore have rank at most <span class="math-container">$2$</span>, regardless of how many columns it has.</p> <p>To <em>span</em> a space means that every vector of the space is a linear combination of the given vectors; spanning all of <span class="math-container">$\mathbb{R}^3$</span> would indeed require at least <span class="math-container">$3$</span> vectors. But the row space here is not all of <span class="math-container">$\mathbb{R}^3$</span>: it is a <span class="math-container">$2$</span>-dimensional subspace (a plane through the origin) sitting inside <span class="math-container">$\mathbb{R}^3$</span>, and a plane is spanned by <span class="math-container">$2$</span> vectors.</p>
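For the concrete matrix in the question, the rank can be checked directly (a small sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, -1, 2],
              [2,  1, 3]])

# a 2x3 matrix can have rank at most min(2, 3) = 2
rank = np.linalg.matrix_rank(A)
assert rank == 2

# the two rows are not scalar multiples of each other, so they are
# linearly independent and span a 2-dimensional subspace (a plane)
# inside R^3 -- not all of R^3
```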
3,046,205
<p>I am trying to figure out the steps between these equal expressions in order to get a more general understanding of product sequences: <span class="math-container">$$\prod_{k=0}^{n}\left(3n-k\right) + \prod_{k=n}^{2n-3}\left(2n-k\right) = \prod_{j=2n}^{3n}j + \prod_{j=3}^{n}j =\frac{(3n)!}{(2n-1)!}+\frac{n!}{2}$$</span></p> <p>I know that <span class="math-container">$ n! :=\prod_{k=1}^{n}k$</span> but I can't figure out how that helps me understand the above equation.</p> <p>edit: Thank you for the great help! Another thing I don't understand, is how I get from <span class="math-container">$\prod_{k=0}^{n}\left(3n-k\right) + \prod_{k=n}^{2n-3}\left(2n-k\right)$</span> to <span class="math-container">$\prod_{j=2n}^{3n}j + \prod_{j=3}^{n}j$</span>. Any help with understanding this is much appreciated, I will try to figure it out myself while I wait for answers.</p>
Jacky Chong
369,395
<p>Using the fact that <span class="math-container">\begin{align} a^{\varphi(b)}\equiv 1 \mod b \end{align}</span> (valid whenever <span class="math-container">$\gcd(a,b)=1$</span>) and the fact that for any <span class="math-container">$n$</span> <span class="math-container">\begin{align} n = \varphi(b)k+r \end{align}</span> for some <span class="math-container">$0\leq r&lt;\varphi(b)$</span>, we have the desired result. Here <span class="math-container">$\varphi(b)$</span> is the Euler Totient function.</p>
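A small numerical illustration of the periodicity being used (hedged: as noted, Euler's theorem requires $\gcd(a,b)=1$; the values $a=7$, $b=20$ are my arbitrary choices):

```python
from math import gcd

def phi(b):
    """Euler's totient, by direct count (fine for small b)."""
    return sum(1 for k in range(1, b + 1) if gcd(k, b) == 1)

a, b = 7, 20
assert gcd(a, b) == 1
assert pow(a, phi(b), b) == 1          # a^phi(b) = 1 (mod b)

# hence a^n mod b depends only on n mod phi(b)
for n in range(1, 60):
    assert pow(a, n, b) == pow(a, n % phi(b) + phi(b), b)
```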
3,604,388
<p>Let <span class="math-container">$P_n$</span> be the statement that <span class="math-container">$\dfrac{d^{2n}}{dx^{2n}}(x^2-1)^n = (2n)!$</span> </p> <p>Base case: n = 0, <span class="math-container">$\dfrac{d^0}{dx^0}(x^2-1)^0 = 1 = 0!$</span></p> <p>Assume <span class="math-container">$P_m$</span>, i.e. <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m = (2m)!$</span>, is true. </p> <p>Prove <span class="math-container">$P_{m+1}$</span>, i.e. <span class="math-container">$\dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1} = [2(m+1)]!$</span> </p> <p><span class="math-container">$\dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1}$</span></p> <p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(\dfrac{d^2}{dx^2}(x^2-1)^{m+1}\right)$</span> </p> <p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(2x(m)(m+1)(x^2-1)^{m-1}\right)$</span></p> <p>= <span class="math-container">$[\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m][2x(m)(m+1)(x^2-1)^{-1}]$</span></p> <p>From the inductive hypothesis, </p> <p>= <span class="math-container">$(2m)! [2x(m)(m+1)(x^2-1)^{-1}]$</span> </p> <p>I got stuck here, and I am not sure if what I have done thus far is correct. I did not know how to get to <span class="math-container">$[2(m+1)]!$</span>. Please advise. Thank you. </p>
Rezha Adrian Tanuharja
751,970
<p>Try the binomial identity (the general Leibniz rule)</p> <p><span class="math-container">$$ \frac{d^{2n}}{dx^{2n}}(x^{2}-1)(x^{2}-1)^{n-1}=\sum_{i=0}^{2n}{\binom{2n}{i}\frac{d^{i}}{dx^{i}}(x^{2}-1)\frac{d^{2n-i}}{dx^{2n-i}}(x^{2}-1)^{n-1}} $$</span></p>
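The target identity $\frac{d^{2n}}{dx^{2n}}(x^2-1)^n=(2n)!$ can also be confirmed by exact polynomial arithmetic (a self-contained Python check with plain coefficient lists; the helper names are mine, and no symbolic library is assumed):

```python
from math import factorial

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_diff(p):
    """Differentiate once: d/dx sum a_k x^k = sum k a_k x^(k-1)."""
    return [k * a for k, a in enumerate(p)][1:]

for n in range(1, 8):
    p = [1]                            # the constant polynomial 1
    for _ in range(n):
        p = poly_mul(p, [-1, 0, 1])    # multiply by (x^2 - 1)
    for _ in range(2 * n):
        p = poly_diff(p)               # differentiate 2n times
    assert p == [factorial(2 * n)]     # constant polynomial (2n)!
```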
2,483,188
<p>I am facing this problem: </p> <p><strong>Turn into cartesian form:</strong></p> <p>$$\dfrac{1-e^{i\pi/2}}{1 + e^{i\pi/2}}$$</p> <p>I've tried to operate and I've come up with this:</p> <p>$$\dfrac{1-2e^{i\pi/2} + e^{i\pi}}{1 - e^{i\pi}}$$</p> <p>I do not know how to go on, and I've tried to operate with the cartesian form of the initial quotient, but I come up with a similar expression. I'm stuck.</p>
copper.hat
27,978
<p>Try $F(f) = \int_0^{1 \over 2} f - \int_{1 \over 2}^1 f$. The operator $F$ is linear &amp; continuous.</p> <p>The (an) obvious candidate is $g = 1_{[0,{1 \over 2})} - 1_{({1 \over 2},1]}$, but it is not continuous.</p>
2,705,980
<p>I have the following problem: \begin{cases} y(x) =\left(\dfrac14\right)\left(\dfrac{\mathrm dy}{\mathrm dx}\right)^2 \\ y(0)=0 \end{cases} Which can be written as:</p> <p>$$ \pm 2\sqrt{y} = \frac{dy}{dx} $$</p> <p>I then take the positive case and treat it as an autonomous, separable ODE. I get $f(x)=x^2$ as my solution.</p> <p>In order to solve this problem, I have to divide each side of the equation by $\sqrt{y}$. But since the solution to this IVP is $y(x)=x^2$, zero is in the image of $f(x)$. So at a particular point $1/\sqrt{y}$ is not defined. But the <strong>solution</strong> is defined at $y =0$.</p> <p>In fact, $y(x)= 0$ for all x is another solution. But aside from this solution the non-trivial solution is defined at zero also.</p> <p>So is it wrong to divide across by $\sqrt{y}$? And if so how else do I approach this question?</p>
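As a sanity check (a small finite-difference sketch in Python; the step size and sample points are arbitrary choices of mine), both candidate solutions do satisfy the ODE and the initial condition:

```python
# Verify solutions of y = (1/4)(y')^2 with y(0) = 0 by central differences.
h = 1e-6

def check(y, xs, tol=1e-4):
    for x in xs:
        dy = (y(x + h) - y(x - h)) / (2 * h)    # approximate y'(x)
        assert abs(y(x) - 0.25 * dy**2) < tol   # y = (1/4)(y')^2
    assert abs(y(0.0)) < tol                    # initial condition

check(lambda x: x * x, [0.5, 1.0, 2.0])   # the non-trivial solution y = x^2
check(lambda x: 0.0,   [0.5, 1.0, 2.0])   # the trivial solution y = 0
```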
spaceman
522,096
<p>I assume that the $ \mathbf{v}_i $'s are the basis vectors of $ \mathbb{R}^n $. If so, let $ \mathbf{x} \in \mathbb{R^n} $ such that $ A\mathbf{x} = 0 $. Then since the $ \mathbf{v}_i $'s form a basis, there exist $ \lambda_i \in \mathbb{R} $ such that $ \mathbf{x} = \sum_{i=1}^{n}\lambda_i \mathbf{v}_i $. Then by linearity: $$ 0 = A\mathbf{x} = \lambda_1A\mathbf{v}_1 + \ldots + \lambda_nA\mathbf{v}_n. $$ Then use the linear independence of the $ A\mathbf{v}_i $'s to conclude that $ \mathbf{x} = 0 $.</p> <p>Hope this helps.</p> <p>Edit: As pointed out, you can in fact show that the $ \mathbf{v}_i $'s form a basis of $ \mathbb{R}^n $, by first noticing that since the $ A\mathbf{v}_i $'s are a list of $ n $ linearly independent vectors, the $ \mathbf{v}_i $'s are also a list of $ n $ linearly independent vectors, and hence form a basis of $ \mathbb{R}^n $ (since $ \dim\mathbb{R}^n = n $).</p>
1,488,737
<blockquote> <p>Let $A$ be a square matrix of order $n$. Prove that if $A^2=A$ then $\mathrm{rank}(A)+\mathrm{rank}(I-A)=n$.</p> </blockquote> <p>I tried to bring the $A$ over to the left hand side and factorise it out, but do not know how to proceed. please help. </p>
Hamid
420,694
<p>Let rank <span class="math-container">$(A) = r$</span> and let <span class="math-container">$\lambda$</span> be an eigenvalue of <span class="math-container">$A$</span> with eigenvector <span class="math-container">$x\neq 0$</span>. Since <span class="math-container">$A$</span> is idempotent, <span class="math-container">\begin{align*} \lambda x = A x = A^2 x = A\cdot Ax = A\lambda x = \lambda Ax = \lambda^2 x, \end{align*}</span> meaning that <span class="math-container">$\lambda =0$</span> or <span class="math-container">$1$</span>. So the eigenvalues of <span class="math-container">$A$</span> are either <span class="math-container">$0$</span> or <span class="math-container">$1$</span>. Moreover, since <span class="math-container">$A^2=A$</span>, the minimal polynomial of <span class="math-container">$A$</span> divides <span class="math-container">$x(x-1)$</span>, which has distinct roots, so <span class="math-container">$A$</span> is diagonalizable. Writing the eigenvalue decomposition of <span class="math-container">$A$</span> as <span class="math-container">\begin{align*} A = S \Lambda S^{-1} \end{align*}</span> where <span class="math-container">$S$</span> is an invertible matrix whose columns are eigenvectors of <span class="math-container">$A$</span> and <span class="math-container">$\Lambda$</span> is the diagonal matrix of eigenvalues of <span class="math-container">$A$</span>, we get <span class="math-container">\begin{align*} I- A &amp;= I - S \Lambda S^{-1}\\ &amp;= SS^{-1} - S \Lambda S^{-1}\\ &amp;= S(I-\Lambda)S^{-1}. \end{align*}</span> Note that since rank <span class="math-container">$(A) = r$</span>, exactly <span class="math-container">$r$</span> of the elements of the diagonal of <span class="math-container">$\Lambda$</span> are <span class="math-container">$1$</span> and the rest are zero. This implies that <span class="math-container">$n-r$</span> elements of the diagonal of <span class="math-container">$I-\Lambda$</span> are <span class="math-container">$1$</span> and the rest are zero. So, rank <span class="math-container">$(I-A) = n-r$</span> and you have the desired result.</p>
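A numeric spot check of the identity (an illustrative NumPy sketch; generating the idempotent matrix as an orthogonal projection is just one convenient choice of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2
X = rng.standard_normal((n, r))

# orthogonal projection onto the column space of X: A^2 = A, rank(A) = r
A = X @ np.linalg.inv(X.T @ X) @ X.T

assert np.allclose(A @ A, A)                        # idempotent
assert np.linalg.matrix_rank(A) == r
assert np.linalg.matrix_rank(np.eye(n) - A) == n - r
assert (np.linalg.matrix_rank(A)
        + np.linalg.matrix_rank(np.eye(n) - A)) == n
```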
313,025
<p>I got two problems asking for proofs of the following limits: </p> <blockquote> <p>Prove the following limit: <br/>$$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$</p> </blockquote> <p>and, </p> <blockquote> <p>Prove the following limit: <br/>$$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$</p> </blockquote> <p>I feel that these two problems are of the same kind. Would anyone please help me with one of them, and I may figure out the other one? Many thanks! </p>
AJMansfield
50,951
<p>The main thing I'd add to the other answers is to explicitly apply the definition of a limit, rather than leave it all to higher-level theorems, since the OP said the problem was proving the <em>limits</em>.</p> <p>$$\lim_{x \rightarrow \infty} f(x) = L \Leftrightarrow \forall \; \varepsilon &gt; 0 \; \exists \; N &gt; 0 \ni x &gt; N \Rightarrow \left| f(x) - L \right| &lt; \varepsilon$$</p> <p>$$x e^{x^2}\int_x^\infty e^{-t^2} \, dt = \lim_{b \rightarrow \infty} x e^{x^2}\int_x^b e^{-t^2} \, dt = L \Leftrightarrow \\ \forall \; \varepsilon &gt; 0 \; \exists \; N &gt; 0 \ni b &gt; N \Rightarrow \left| x e^{x^2}\int_x^b e^{-t^2} \, dt - L \right| &lt; \varepsilon$$</p> <p>$$\text{Let } L = {1\over2}$$</p> <p>$$\text{Show:} \; \forall \; \varepsilon &gt; 0 \; \exists \; N &gt; 0 \ni b &gt; N \Rightarrow \left| x e^{x^2}\int_x^b e^{-t^2} \, dt - {1\over2} \right| &lt; \varepsilon$$</p> <p>And the rest of the whole rigmarole of finding some expression for $N$ containing $\varepsilon$ and $x$ that you can prove is always big enough (and then doing so).</p>
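For the first limit, the convergence can be probed numerically via the identity $\int_x^\infty e^{-t^2}\,dt=\frac{\sqrt{\pi}}{2}\operatorname{erfc}(x)$ (a rough standard-library Python check; the sample points and the error bound $\frac{1}{2x^2}$ are my choices, informed by the asymptotic expansion):

```python
import math

def f(x):
    # x * e^(x^2) * integral from x to infinity of e^(-t^2) dt
    return x * math.exp(x * x) * (math.sqrt(math.pi) / 2) * math.erfc(x)

# asymptotically f(x) = 1/2 - 1/(4x^2) + O(x^-4), so it nears 1/2 from below
for x in (5.0, 10.0, 15.0):
    assert abs(f(x) - 0.5) < 1 / (2 * x * x)

assert f(10.0) < 0.5
```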
2,650,628
<p>The equation $\log_e(x) + \log_e(1+x) =0$ can be written as:</p> <p>a) $x^2+x-e=0$</p> <p>b) $x^2+x-1=0$</p> <p>c) $x^2+x+1=0$</p> <p>d) $x^2+xe-e=0$</p> <p>I tried differentiating both sides, then it becomes $\frac{1}{x}+\frac{1}{1+x}=0$, but I don't get any of the answers.</p>
hamam_Abdallah
369,188
<p>For $x&gt;0,$</p> <p>$$\ln (ex)=\ln (e)+\ln (x)=1+\ln (x) $$</p> <p>and $$-\ln (x)=\ln \Bigl(\frac {1}{x}\Bigr) $$</p> <p>the equation will be</p> <p>$$\ln (x+1)=\ln \Bigl(\frac 1x\Bigr) $$</p> <p>or</p> <p>$$x+1=\frac 1x $$ and the answer is $ x^2+x-1=0$</p>
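Numerically, the positive root of $x^2+x-1=0$, namely $x=\frac{\sqrt5-1}{2}$, indeed satisfies the original equation (a quick check using only the Python standard library):

```python
import math

x = (math.sqrt(5) - 1) / 2      # positive root of x^2 + x - 1 = 0

assert abs(x**2 + x - 1) < 1e-12
assert abs(math.log(x) + math.log(1 + x)) < 1e-12   # ln x + ln(1+x) = 0
```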
2,650,628
<p>The equation $\log_e(x) + \log_e(1+x) =0$ can be written as:</p> <p>a) $x^2+x-e=0$</p> <p>b) $x^2+x-1=0$</p> <p>c) $x^2+x+1=0$</p> <p>d) $x^2+xe-e=0$</p> <p>I tried differentiating both sides, then it becomes $\frac{1}{x}+\frac{1}{1+x}=0$, but I don't get any of the answers.</p>
Michael Hardy
11,667
<p>Recall that $$\log_e x + \log_e(x+1) = \log_e\Big( x(x+1)\Big) = \log_e(x^2+x)$$ and that $$ \text{if } \log_e(x^2+x) = 0 \text{ then } x^2+x = e^0. $$ </p>
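<p>A quick numeric check of the resulting root (only the positive root of $x^2+x-1=0$ is admissible, since we need $x&gt;0$ for $\log_e x$):</p>

```python
import math

# positive root of x^2 + x - 1 = 0
x = (math.sqrt(5) - 1) / 2

# log(x) + log(1 + x) = log(x * (1 + x)) = log(1) = 0 up to rounding
residual = math.log(x) + math.log(1 + x)
```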
2,247,498
<p>Imagine a circle of radius R in 3D space with a line l running through its center C in a direction perpendicular to the plane of the circle. Basically, like the axle of a wheel. </p> <p>From a given point P that is not on the circle or on l, a ray extends to intersect both l and the circle. What equations would be used to find the intersection points the ray makes with the circle and with l? You are given the coordinates of C and P, the radius R, and the orientation of l.</p> <p>I am trying to model looking from a point P onto a wheel-and-axle shape, and to find, from the point of view P, the point on the edge of the circle that would appear to intersect its axis. Of course it doesn't actually intersect, but that is how this 3D structure would appear in a 2D image if a camera were situated at point P.</p>
Futurologist
357,211
<p>You should have simply stated your question in terms of orthographic projection from the get go. Try the following algorithm, I hope it works (if, of course, I understand correctly what your goal is).</p> <p>This time you assume you are given the vector $\vec{p}$ instead of the point $P$ that determines the direction of all rays illuminating the circle. Last time, all rays were emanating from the point $P$. This time, all rays are parallel to $\vec{p}$. Everything else, including the notations, stays the same. The theoretical arguments are almost word for word the same, except this time the ray $s$ does not pass through point $P$ but is parallel to $\vec{p}$ instead.</p> <p>The algorithm is a bit simpler this time, depending what points you really need to find. I will find all of them: $Q_1, Q_2, R_1, R_2$ but you can choose which ones you really need.</p> <ol> <li><p>Calculate $$\cos(\theta) = \frac{\big( \vec{v} \cdot \vec{p}\big)}{|\vec{v}||\vec{p}|}$$ where $\theta = \angle\,(\vec{v},\vec{p})$ is angle between first vector $\vec{v}$ and second vector $\vec{p}$, following that order. </p></li> <li><p>If $\cos(\theta) &lt; 0 $ then set $$\vec{v} := -\vec{v}$$ $$\cos(\theta) := - \cos(\theta)$$ Else, keep $\vec{v}$ and $\cos(\theta)$ the same. 
This way $\vec{v}$ and $\vec{p}$ are in the same half-space with respect to the plane $\beta$ determined by the circle (and so $\beta$ is orthogonal to $\vec{v}$).</p></li> </ol> <p>Observe that since $Q_1R_1$ is parallel to $\vec{p}$ $$\angle \, Q_1R_1C = \angle (\vec{v}, \vec{p}) = \theta$$</p> <ol start="3"> <li><p>Calculate $$\vec{u} = \frac{ \vec{v} \times \big(\vec{v}\times\vec{p} \big)}{|\vec{v} \times \big(\vec{v}\times\vec{p}\big)|}$$ This vector is calculated so that it points from $C$ to $Q_1$ which is the point behind $l$.</p></li> <li><p>Calculate $$\vec{CQ_1} = r \vec{u} \,\,\, \text{ and } \,\,\, \vec{CQ_2} = - r \vec{u}$$ which leads to $$\vec{OQ_1} =\vec{OC} + r \vec{u} \,\,\, \text{ and } \,\,\, \vec{OQ_2} = \vec{OC} - r \vec{u}$$</p></li> </ol> <p>If you look at triangle $CQ_1R_1$, you will see that it is a right-angled triangle with angle $\angle \, Q_1R_1C = \theta$ and $|CQ_1| = r$ so $$|CR_1| = r \, \cot(\theta) = \frac{r \,\cos(\theta)}{\sqrt{1 - \cos^2(\theta)}}$$ Thus</p> <ol start="5"> <li>Calculate $$\vec{CR_1} = \frac{r \,\cos(\theta)}{\sqrt{1 - \cos^2(\theta)}} \, \frac{\vec{v}}{|\vec{v}|}\,\,\,\,\, \text{ and } \,\,\,\,\, \vec{CR_2} = - \, \frac{r \,\cos(\theta)}{\sqrt{1 - \cos^2(\theta)}} \, \frac{\vec{v}}{|\vec{v}|}$$ which lead to $$\vec{OR_1} = \vec{OC} + \frac{r \,\cos(\theta)}{\sqrt{1 - \cos^2(\theta)}} \, \frac{\vec{v}}{|\vec{v}|}\,\,\,\,\, \text{ and } \,\,\,\,\, \vec{OR_2} = \vec{OC} - \, \frac{r \,\cos(\theta)}{\sqrt{1 - \cos^2(\theta)}} \, \frac{\vec{v}}{|\vec{v}|}$$ </li> </ol>
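<p>For completeness, here are the five steps transcribed into a small Python sketch, with $C$ taken as the origin of the position vectors (plain tuples, no libraries; the function and helper names are mine, and $\vec{p}$ must not be parallel to $\vec{v}$, or the cross products and the cotangent degenerate):</p>

```python
import math

def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def scale(s, a): return (s*a[0], s*a[1], s*a[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))

def project_circle(C, v, r, p):
    """Steps 1-5 above: Q1, Q2 on the circle and R1, R2 on the axis l."""
    # step 1: angle between v and p
    cos_t = dot(v, p) / (norm(v) * norm(p))
    # step 2: flip v into the same half-space as p if needed
    if cos_t < 0:
        v, cos_t = scale(-1, v), -cos_t
    # step 3: unit vector from C toward Q1
    w = cross(v, cross(v, p))
    u = scale(1 / norm(w), w)
    # step 4: the two circle points
    Q1, Q2 = add(C, scale(r, u)), add(C, scale(-r, u))
    # step 5: |CR1| = r * cot(theta), measured along v
    d = r * cos_t / math.sqrt(1 - cos_t ** 2)
    vh = scale(1 / norm(v), v)
    R1, R2 = add(C, scale(d, vh)), add(C, scale(-d, vh))
    return Q1, Q2, R1, R2
```

<p>With $C=(0,0,0)$, $\vec{v}=(0,0,1)$, $r=1$ and $\vec{p}=(1,0,1)$ this returns $Q_1=(-1,0,0)$ and $R_1=(0,0,1)$, and indeed $R_1-Q_1=(1,0,1)$ is parallel to $\vec{p}$.</p>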
3,959,263
<p>Let <span class="math-container">$G$</span> be a tree whose maximum vertex degree is <span class="math-container">$k$</span>. <strong>At least</strong> how many vertices of degree <span class="math-container">$1$</span> must <span class="math-container">$G$</span> contain, and why?</p> <p>I think the answer must be <span class="math-container">$k$</span> but I don't know how to prove it.</p>
FFjet
597,771
<p><strong>HINT.</strong> Let <span class="math-container">$G$</span> be a smallest tree that meets the requirements. Could adding nodes to <span class="math-container">$G$</span> decrease the answer?</p>
159,585
<p>This is a kind of a plain question, but I just can't get something.</p> <p>Consider the congruence, for a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.</p> <p>How come that, in addition to the solutions $$\begin{align*} p &amp;\equiv 11\pmod{16}\\ p &amp;\equiv 1\pmod {16} \end{align*}$$ we also have $$\begin{align*} p &amp;\equiv 9\pmod {16}\\ p &amp;\equiv 3\pmod {16}\ ? \end{align*}$$</p> <p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p> <p>Thanks</p>
M Turgeon
19,379
<p>(If you know a little abstract algebra.) </p> <p>The equation $$(p+5)(p−1)\equiv 0\pmod{16}$$ implies that $p+5$ and $p-1$ are each zero or a zero divisor in $\mathbb{Z}/16\mathbb{Z}$, i.e. they lie in $$\{0,2,4,6,8,10,12,14\}.$$ However, there are restrictions: e.g. if $p+5\equiv 2$, then for the product to vanish we would need $p-1\equiv 0$ or $8 \pmod{16}$, whereas in fact $p-1=(p+5)-6\equiv 12$. Hence, this strategy is more arduous, but it gives the big picture.</p>
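<p>You can also simply test all sixteen residues by brute force, which confirms that there are exactly the four solutions listed in the question:</p>

```python
# all residues p mod 16 with (p + 5)(p - 1) congruent to 0 mod 16
sols = [p for p in range(16) if (p + 5) * (p - 1) % 16 == 0]
print(sols)  # [1, 3, 9, 11]
```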
259,808
<p>For example, suppose I wanted to determine which of the following has the fastest asymptotic growth:</p> <ol> <li><p>$n^2\log(n)+(\log(n))^2$</p></li> <li><p>$n^2+\log(2^n)+1$</p></li> <li><p>$(n+1)^3+(n-1)^3$</p></li> <li><p>$(n+\log(n))^22^{100}$</p></li> </ol> <p>Are there any straightforward methods to tell which is fastest without actually graphing the functions?</p>
Jair Taylor
28,545
<p>Some tips:</p> <ul> <li><p>Keep the following asymptotic inequalities in your head, where $f(n) \ll g(n)$ means that $\lim_{n\rightarrow \infty} \frac{f(n)}{g(n)} = 0$:$$1 \ll \log n \ll \sqrt{n} \ll n\ll n^2 \ll n^3 \ll \ldots \ll e^n \ll n!$$ and more generally, $n^a \ll n^b$ whenever $0 \leq a &lt; b$ and $a^n \ll b^n$ whenever $1 \leq a &lt; b$. (I don't believe the definition I've given for $f\ll g$ is standard, so be careful using this symbol without explaining it.)</p></li> <li><p>If you are only interested in asymptotic growth, find the term in the expression that grows the fastest - then you can neglect the others. Asymptotically, they will not matter.</p></li> <li><p>Constant multipliers will not matter if one of the two functions is much larger than the other: if $f(x) \ll g(x)$ then $Cf(x) \ll g(x)$ for any $C$, no matter how large. For example, $10^{10^{10}} n^2 \ll n^3.$</p></li> </ul> <p>To prove a function is asymptotically faster than another, divide them. Choose a dominant term of the two expressions and divide the top and bottom by it. Then take the limit as $n$ approaches infinity. If this is zero, the denominator is faster; if it is infinite, the numerator is faster. If the limit is a real number $c &gt; 0$, the two functions grow at the same rate (the numerator is eventually about $c$ times the denominator); in particular, if $c = 1$ the functions are asymptotically equal. For example, to compare (1) and (2):</p> <p>$$\frac{ n^2\log(n)+(\log(n))^2}{n^2+\log(2^n)+1} = \frac{ \log(n)+\frac{(\log(n))^2}{n^2}}{1+\frac{\log(2^n)}{n^2}+\frac{1}{n^2}} $$</p> <p>All of the terms in the numerator and denominator tend to $0$ except for $\log n$ and $1$ (note that $\log 2^n = n \log 2$) and so taking the limit gives $$\lim_{n \rightarrow \infty}\frac{ n^2\log(n)+(\log(n))^2}{n^2+\log(2^n)+1} = \lim_{n \rightarrow \infty} \frac{\log n}{1} = \infty$$ so the numerator is asymptotically larger.</p>
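<p>The division trick is also easy to illustrate numerically (a sketch; <code>f1</code> and <code>f2</code> are just expressions (1) and (2) above). The ratio keeps growing, roughly like $\log n$, which is the numerical shadow of the limit being infinite:</p>

```python
import math

def f1(n):  # expression (1)
    return n**2 * math.log(n) + math.log(n)**2

def f2(n):  # expression (2); note log(2^n) = n * log(2)
    return n**2 + n * math.log(2) + 1

# the ratio f1/f2 increases without bound, so (1) dominates (2)
ratios = [f1(10**k) / f2(10**k) for k in (1, 3, 6)]
```

<p>Beware, though, that raw numbers can mislead for comparisons like (1) versus (4): the constant $2^{100}$ makes (4) numerically larger at every feasible $n$, even though $n^2\log n$ wins asymptotically. That is exactly why the constant-multiplier tip matters.</p>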
2,547,508
<p>I am trying to prove that various expressions are real valued functions. Is it possible to state that, because no square roots (or variants such as quartic roots etc) are in that function, it is a real valued function?</p>
Community
-1
<p>"Function" is such an incredibly broad notion that it is wholly implausible to give a general answer to this sort of question that isn't something trivial like "Applying a real-valued function always gives a real number".</p> <p>More meaningful questions need to restrict the scope; for example, one might ask what sort of numbers you can produce by plugging real numbers into the operations of addition, subtraction, multiplication, and division. (where by those words I refer to the usual arithmetic operations) (and, of course, I mean that you only plug in numbers that are in the domain of the operation)</p> <p>And, in fact, you can only get real numbers by doing so.</p> <p>Incidentally, assuming they satisfy the "usual laws of arithmetic" (i.e. the field axioms), the sorts of functions you can produce by using these four arithmetic operations, are called <em>rational functions</em>.</p> <p>That said, depending on the fine details of exactly what you mean by your question, this particular fact might be just another trivial one, of the form "applying a real-valued function always gives a real number".</p> <p>This question is more interesting when you ask questions like "what values can I get from rational functions if I only start with rational numbers and $\sqrt{2}$?" (answer: the numbers that can be written in the form $a + b \sqrt{2}$, where $a,b$ are rational numbers)</p>
112,320
<p>What is the number of strings of length $235$ which can be made from the letters A, B, and C, such that the number of A's is always odd, the number of B's is greater than $10$ and less than $45$, and the number of C's is always even?</p> <p>What I can think of is </p> <p>$$\left(\binom{235}{235} - \left\lfloor235 - \frac{235}2\right\rfloor\right) \binom{235}{35} \binom {235}{ \lfloor 235/2\rfloor}\;.$$</p> <p>Thanks</p>
Daniel Pietrobon
17,824
<p>If there is an odd number of A's and an even number of C's, it follows that there must be an even number of B's.</p> <p>Therefore, to construct a string of length 235 satisfying the properties above, it is sufficient to specify the positions of the A's and the B's, because then the C's will be forced.</p> <p>There are ${235 \choose n}$ ways to specify the positions of the B's, where $n\in\{12,14,\dots,44\}$.</p> <p>After we have placed the B's, there are $235-n\choose m$ ways to place the A's, where $m \in \{1,3,5,\dots,235-n\}$ (note that $235-n$ is odd since $n$ is even, and an odd count of A's then automatically leaves an even count of C's).</p> <p>Hence, the number of possible strings is:</p> <p>$$\sum_{\substack{n\in\{12,14,\dots,44\}\\ m\in\{1,3,\dots,235-n\}}} {235 \choose n} {235-n\choose m}$$</p>
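<p>This kind of formula is easy to sanity-check by brute force on a smaller instance of the same problem (a sketch; the function names and the small parameters, length 11 with between 2 and 7 B's, are mine):</p>

```python
from itertools import product
from math import comb

def brute(L, bmin, bmax):
    # enumerate every string over {A,B,C}: count those with
    # #A odd, bmin < #B < bmax, and #C even
    count = 0
    for s in product("ABC", repeat=L):
        a, b = s.count("A"), s.count("B")
        if a % 2 == 1 and bmin < b < bmax and (L - a - b) % 2 == 0:
            count += 1
    return count

def formula(L, bmin, bmax):
    # place the B's (n of them), then the A's (m of them, odd),
    # keeping the leftover C count even
    return sum(comb(L, n) * comb(L - n, m)
               for n in range(bmin + 1, bmax)
               for m in range(1, L - n + 1, 2)
               if (L - n - m) % 2 == 0)
```

<p>For odd $L$ the parity filter keeps exactly the even values of $n$, which is the same bookkeeping as in the sum above.</p>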
761,726
<p>We know that the Banach space $X$ is infinite-dimensional.</p> <p>The conclusion we want to show is: then $X'$ is also infinite-dimensional.</p> <p>Here $X'$ denotes the dual space of bounded linear functionals on $X$.</p>
Vincent Boelens
94,696
<p>Use contraposition: if $X'$ is finite dimensional, then $X''$ is as well. Since $X$ can be embedded isometrically in $X''$, it must be finite dimensional.</p>
1,432,040
<p>My approach is to unite the group consisting of Alice ($A$), Bob ($B$) and the $k$ persons between them into one new "person". So we can permute the $n-(k+2)$ remaining persons and the $k$ persons between $A$ and $B$ separately.</p> <p>It seems like the answer should be $(n-(k+2))!\times k!$, but this is suspicious because it does not take into account the two different orders: $A$, $k$ persons, $B$ and $B$, $k$ persons, $A$. Overall I have no confidence in my approach.</p> <p>Please help.</p>
Hagen von Eitzen
39,174
<p>Your idea is fine. But so is your suspicion. You can remedy that by multiplying with $2$ in the end. Apart from that, two little things are missing: when replacing the $k+2$ persons with one new "person", you end up with $n-(k+2)+1$ persons to permute, and you still have to choose <em>which</em> $k$ of the other $n-2$ persons stand between Alice and Bob, which contributes a factor of $\binom{n-2}{k}$.</p>
1,432,040
<p>My approach is to unite the group consisting of Alice ($A$), Bob ($B$) and the $k$ persons between them into one new "person". So we can permute the $n-(k+2)$ remaining persons and the $k$ persons between $A$ and $B$ separately.</p> <p>It seems like the answer should be $(n-(k+2))!\times k!$, but this is suspicious because it does not take into account the two different orders: $A$, $k$ persons, $B$ and $B$, $k$ persons, $A$. Overall I have no confidence in my approach.</p> <p>Please help.</p>
Javier
241,291
<p>You have to consider that people between A and B and people outside them are also interchangeable, so the answer can be simplified:</p> <p>Consider A in the position $1$ of $n$. Now B is in the position $k+2$. There are $n-2$ anonymous people who can change positions, so in this configuration there are $(n-2)!$ possibilities. This reasoning can be done for each possible position of A, i.e. from position $1$ to position $n-k-1$ (A is before B for the moment). The only thing left is change the roles of A and B, so we should multiply by $2$.</p> <p>So the final answer is $(n-2)! \cdot (n-k-1) \cdot 2 $</p> <p>Edit: I'm considering there has to be exactly $k$ people between A and B, but they can be any $k$ people of all of them.</p>
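<p>For small $n$ and $k$ the formula agrees with a brute-force count over all line-ups (a sketch; the function name is mine):</p>

```python
from itertools import permutations
from math import factorial

def count_lineups(n, k):
    # person 0 is Alice, person 1 is Bob; count permutations of
    # n people with exactly k people strictly between them
    return sum(1 for p in permutations(range(n))
               if abs(p.index(0) - p.index(1)) == k + 1)

# e.g. n = 6, k = 2 gives 144 = 4! * (6 - 2 - 1) * 2
```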
714,000
<p>Let's say:</p> <p>$X = \{x_1, x_2, x_3, \dots \} $ is a set of real numbers in the range $(R_1, R_2)$, and let $m$ be the mean of $X$.</p> <p>If I have to increase the mean of $X$ by $3$, I can increase each number in the set by $3$. But how can I increase the mean of $X$ by $3$ by changing only a subset of $X$? Is there a mathematical relation for this?</p>
AnonSubmitter85
33,383
<p>Let $x_1, \dots, x_k$ be the subset that does not change and let $x_{k+1},\dots, x_N$ be the subset that does. We have by hypothesis that</p> <p>$$ {1 \over N} (x_1 + \cdots + x_k) + {1 \over N} (x_{k+1} + \cdots + x_N) = m. $$</p> <p>And you want to find some function $f$ such that</p> <p>$$ {1 \over N} (x_1 + \cdots + x_k) + {1 \over N} f(x_{k+1},\dots,x_N) = m + \sigma. $$</p> <p>Using the expression for $m$ above, this becomes</p> <p>$$ {1 \over N} (x_1 + \cdots + x_k) + {1 \over N} f(x_{k+1},\dots,x_N) = {1 \over N} (x_1 + \cdots + x_k) + {1 \over N} (x_{k+1} + \cdots + x_N) + \sigma. $$</p> <p>$$ \Rightarrow f(x_{k+1},\dots,x_N) = (x_{k+1} + \cdots + x_N) + N \sigma. $$</p> <p>As long that is satisfied, you will change the mean by a value of $\sigma$.</p>
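<p>One concrete choice of $f$ is to spread the required surplus $N\sigma$ evenly over the changeable entries (a sketch; the function name is mine, and whether the shifted values stay inside the range $(R_1,R_2)$ still has to be checked separately):</p>

```python
def shift_mean(values, changeable, sigma):
    # raise the mean of `values` by `sigma`, touching only the
    # positions listed in `changeable`: add N*sigma/k to each of them,
    # so the changed subset's sum grows by exactly N*sigma
    n, k = len(values), len(changeable)
    out = list(values)
    for i in changeable:
        out[i] += n * sigma / k
    return out
```

<p>Any other redistribution works too, as long as the changed entries' sum increases by $N\sigma$ in total.</p>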
273,127
<p>Let $X$ be the set of all harmonic functions external to the unit sphere on $\mathbb R^3$ which vanish at infinity, so if $V \in X$, then $\nabla^2 V(\mathbf{r}) = 0$ on $\mathbb R^3 - S(2)$ and $\lim_{r \rightarrow \infty} V(r) = 0$. Now consider a function $f: X \rightarrow \mathbb R$, defined by $$ f(V)(\mathbf{r}) = || \nabla V(\mathbf{r}) ||^2 $$ For some given $V \in X$, I am looking for all functions $W \in X$ which satisfy $$ f(V) = f(W) $$ Certainly $W = \pm V$ will satisfy the condition. Can anyone find nontrivial solutions for $W$?</p> <p><strong>My approach so far:</strong></p> <p>The condition on $V$ and $W$ is $$ \nabla V \cdot \nabla V = \nabla W \cdot \nabla W $$ By defining $\phi = V + W$ and $\psi = V - W$, this is equivalent to $$ \nabla \phi \cdot \nabla \psi = 0 $$ I then tried expanding $\nabla \phi$ and $\nabla \psi$ in a basis of vector spherical harmonics and plugging into the above formula. This step makes use of the fact $\nabla^2 \phi = \nabla^2 \psi = 0$ and leads to the following condition on the expansion coefficients: $$ \nabla \phi \cdot \nabla \psi = \sum_{nm,n'm'} \phi_{nm} \psi_{n'm'} \left( \frac{1}{r} \right)^{n+n'+4} \left( (n+1)(n'+1) Y_{nm} Y_{n'm'} + \partial_{\theta} Y_{nm} \partial_{\theta} Y_{n'm'} + \frac{1}{\sin^2{\theta}} \partial_{\phi} Y_{nm} \partial_{\phi} Y_{n'm'} \right) $$ Its not clear to me how to proceed from here, or whether this is even the correct approach to take. I could get rid of the sum over $n',m'$ by integrating both sides over a unit sphere and using the orthogonality relations for the spherical harmonics. Doing this gives: $$ \sum_{nm} (n+1)(2n+1) \phi_{nm} \psi_{nm} = 0 $$ though I'm not sure that yields any additional insight. I would appreciate any ideas.</p>
Karl Fabian
20,804
<p>This problem has an important background in geomagnetism. When planning the MAGSAT satellite mission (1979/80) to determine the spherical harmonic coefficients of the Earth's magnetic field from space, Backus (JGR, 1970) showed that a measurement of the total field intensity <span class="math-container">$||\nabla V||$</span> on a spherical shell is in general not sufficient to uniquely determine <span class="math-container">$V$</span> (disregarding trivial non-uniqueness due to gauge and sign). He did this by explicitly constructing a counterexample by arguments similar to those used in the question and comments here. As a consequence, MAGSAT became the first mission carrying a vector magnetometer instead of the much simpler absolute field sensors. In relation to the problem as posed here, Backus (Quart. Journ. Mech. and Applied Math., 21, 195-221, 1968) proved the following theorem:</p> <p>THEOREM 5: Suppose <span class="math-container">$\phi$</span> and <span class="math-container">$\phi'$</span> are harmonic outside some open bounded set <span class="math-container">$W$</span> in <span class="math-container">$\mathbf{R}^n$</span> and vanish at infinity. Suppose that <span class="math-container">$| \nabla \phi| = | \nabla \phi'|$</span> outside some sphere which contains <span class="math-container">$W$</span>. If <span class="math-container">$n \geq 3$</span> then one of the two functions <span class="math-container">$\phi -\phi'$</span> and <span class="math-container">$\phi +\phi'$</span> vanishes identically outside <span class="math-container">$W$</span>.</p>
331,859
<p>I need to find the antiderivative $$\int\sin^6x\cos^2x \,\mathrm{d}x.$$ I tried substituting $u$ for $\sin^2 x$ or $\cos^2 x$, but that doesn't work. I also tried using the identity $1-\cos^2 x = \sin^2 x$, but again, if I set $t = \sin^2 x$ I'm stuck with its derivative in the $\mathrm{d}t$.</p> <p>Can I be given a hint?</p>
IcyFlame
63,288
<p>If you only want a hint, that will be simple:</p> <p>Whenever we have even powers of $\sin$ and $\cos$ multiplied together, convert the integrand to multiple angles using the power-reduction identities $\sin^2 x = \frac{1-\cos 2x}{2}$ and $\cos^2 x = \frac{1+\cos 2x}{2}$; the result is a sum of cosines of the angles $2x, 4x, \dots$, which integrates term by term.</p>
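<p>Carried through (a sketch worth re-deriving yourself), power reduction gives:</p>

```latex
\begin{align*}
\sin^6 x\,\cos^2 x
  &= \Bigl(\frac{1-\cos 2x}{2}\Bigr)^{3}\cdot\frac{1+\cos 2x}{2}
   = \frac{1}{16}\,(1-\cos 2x)^2\,\bigl(1-\cos^2 2x\bigr) \\
  &= \frac{5}{128}-\frac{\cos 2x}{32}-\frac{\cos 4x}{32}
     +\frac{\cos 6x}{32}-\frac{\cos 8x}{128},
\end{align*}

so, integrating term by term,

\begin{equation*}
\int\sin^6 x\,\cos^2 x\,dx
  = \frac{5x}{128}-\frac{\sin 2x}{64}-\frac{\sin 4x}{128}
    +\frac{\sin 6x}{192}-\frac{\sin 8x}{1024}+C.
\end{equation*}
```

<p>You can check the expansion by differentiating, or by evaluating both sides at, say, $x=\pi/4$.</p>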
2,343,993
<blockquote> <p>Find the limit -$$\left(\frac{n}{n+5}\right)^n$$</p> </blockquote> <p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
Jaideep Khare
421,580
<p>$$\mathrm L=\lim_{n \to \infty}\left(\frac{n}{n+5}\right)^n \implies \ln {\mathrm L}=\lim_{n \to \infty} n \underbrace{\left(\frac{\ln \left( 1-\frac{5}{n+5} \right)}{\frac{-5}{n+5}} \right)}_{\text{This limit is 1}} \times \frac{-5}{n+5}$$$$ \implies \ln{\mathrm L}=\lim_{n \to \infty}n \cdot \left(\frac{-5}{n+5}\right)=-5$$</p> <p>$$\implies \mathrm L=e^{-5}$$</p>
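<p>Numerically the convergence to $e^{-5}$ is easy to see:</p>

```python
import math

def a(n):
    # the sequence (n / (n + 5))^n
    return (n / (n + 5)) ** n

# a(10) = (2/3)^10 ~ 0.0173, and the values decrease toward
# the limit e^-5 ~ 0.0067379
```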