{"document_id":"230","document_content":"\\begin{definition}[Definition:Angular Momentum]\nThe '''angular momentum''' of a body about a point $P$ is its moment of inertia about $P$ multiplied by its angular velocity about $P$.\nAngular momentum is a vector quantity.\n{{expand|Separate out into orbital angular momentum and spin angular momentum.}}\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"2380","document_content":"\\begin{definition}[Definition:Differential Equation\/Ordinary]\nAn '''ordinary differential equation''' (abbreviated '''O.D.E.''' or '''ODE''') is a '''differential equation''' which has exactly one independent variable.\nAll the derivatives occurring in it are therefore ordinary.\nThe general '''ODE''' of order $n$ is:\n:$\\map f {x, y, \\dfrac {\\d y} {\\d x}, \\dfrac {\\d^2 y} {\\d x^2}, \\ldots, \\dfrac {\\d^n y} {\\d x^n} } = 0$\nor, using the prime notation:\n:$\\map f {x, y, y', y'', \\ldots, y^{\\paren n} } = 0$\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"2723","document_content":"\\begin{definition}[Definition:Eigenvector\/Linear Operator]\nLet $K$ be a field.\nLet $V$ be a vector space over $K$. \nLet $A : V \\to V$ be a linear operator.\nLet $\\lambda \\in K$ be an eigenvalue of $A$.\nA non-zero vector $v \\in V$ is an '''eigenvector corresponding to $\\lambda$''' {{iff}}:\n:$v \\in \\map \\ker {A - \\lambda I}$\nwhere: \n:$I : V \\to V$ is the identity mapping on $V$\n:$\\map \\ker {A - \\lambda I}$ denotes the kernel of $A - \\lambda I$.\nThat is, {{iff}}: \n:$A v = \\lambda v$\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"2724","document_content":"\\begin{definition}[Definition:Eigenvector\/Real Square Matrix]\nLet $\\mathbf A$ be a square matrix of order $n$ over $\\R$. \nLet $\\lambda \\in \\R$ be an eigenvalue of $\\mathbf A$. 
\nA non-zero vector $\\mathbf v \\in \\R^n$ is an '''eigenvector corresponding to $\\lambda$''' {{iff}}: \n:$\\mathbf A \\mathbf v = \\lambda \\mathbf v$\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"3778","document_content":"\\begin{definition}[Definition:Girth]\nLet $G$ be a graph.\nThe '''girth''' of $G$ is the smallest length of any cycle in $G$.\nAn acyclic graph is defined as having a girth of infinity.\nCategory:Definitions\/Graph Theory\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"3830","document_content":"\\begin{definition}[Definition:Gravity\/Gravitational Force]\nThe '''gravitational force''' on a body $B$ is the force which is exerted on $B$ as a result of the gravitational field whose influence it is under.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"4138","document_content":"\\begin{definition}[Definition:Hypothesis Test]\nA '''hypothesis test''' is a rule that specifies, for a null hypothesis $H_0$ and alternative hypothesis $H_1$: \n* For which sample values the decision is made to accept $H_0$.\n* For which sample values $H_0$ is rejected and $H_1$ is accepted.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"5249","document_content":"\\begin{definition}[Definition:Local Gravitational Constant]\nThe '''local gravitational constant''' is the value of the acceleration $g$ caused by the gravitational field given rise to by whatever body or bodies are in a position to exert that gravitational force.\nIn the everyday context, $g$ is the acceleration due to the gravitational field of Earth at whatever point on or near its surface the observer happens to be.\nThus in this context it is approximately equal to $9 \\cdotp 8 \\ \\mathrm m \\ \\mathrm 
s^{-2}$.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"5461","document_content":"\\begin{definition}[Definition:Markov Chain]\nLet $\\sequence {X_n}_{n \\mathop \\ge 0}$ be a stochastic process over a countable set $S$.\nLet $\\map \\Pr X$ denote the probability of the random variable $X$.\nLet $\\sequence {X_n}_{n \\mathop \\ge 0}$ satisfy the Markov property:\n:$\\condprob {X_{n + 1} = i_{n + 1} } {X_0 = i_0, X_1 = i_1, \\ldots, X_n = i_n} = \\condprob {X_{n + 1} = i_{n + 1} } {X_n = i_n}$\nfor all $n \\ge 0$ and all $i_0, i_1, \\ldots, i_{n + 1} \\in S$.\nThat is, such that the conditional probability of $X_{n + 1}$ is dependent only upon $X_n$ and upon no earlier values of $\\sequence {X_n}$.\nThat is, the state of $\\sequence {X_n}$ in the future is unaffected by its history.\nThen $\\sequence {X_n}_{n \\mathop \\ge 0}$ is a '''Markov chain'''.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"5725","document_content":"\\begin{definition}[Definition:Minor of Determinant]\nLet $\\mathbf A = \\sqbrk a_n$ be a square matrix of order $n$.\nConsider the order $k$ square submatrix $\\mathbf B$ obtained by deleting $n - k$ rows and $n - k$ columns from $\\mathbf A$.\nLet $\\map \\det {\\mathbf B}$ denote the determinant of $\\mathbf B$.\nThen $\\map \\det {\\mathbf B}$ is an '''order-$k$ minor''' of $\\map \\det {\\mathbf A}$.\nThus a '''minor''' is a determinant formed from the elements (in the same relative order) of $k$ specified rows and columns.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"7627","document_content":"\\begin{definition}[Definition:Ramsey Number]\nRamsey's Theorem states that in any coloring of the edges of a sufficiently large complete graph, one will find monochromatic complete subgraphs.\nMore precisely, for any given number of colors $c$, and any given integers $n_1, \\ldots, 
n_c$, there is a number $\\map R {n_1, \\ldots, n_c}$ such that:\n:if the edges of a complete graph of order $\\map R {n_1, \\ldots, n_c}$ are colored with $c$ different colours, then for some $i$ between $1$ and $c$, it must contain a complete subgraph of order $n_i$ whose edges are all color $i$.\nThis number $\\map R {n_1, \\ldots, n_c}$ is called the '''Ramsey number''' for $n_1, \\ldots, n_c$.\n{{NamedforDef|Frank Plumpton Ramsey|cat = Ramsey}}\nCategory:Definitions\/Ramsey Theory\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"8388","document_content":"\\begin{definition}[Definition:Series\/Complex]\nLet $\\sequence {a_n}$ be a sequence in $\\C$.\nA '''complex series''' $S_n$ is the limit to infinity of the sequence of partial sums of a complex sequence $\\sequence {a_n}$:\n{{begin-eqn}}\n{{eqn | l = S_n\n | r = \\lim_{N \\mathop \\to \\infty} \\sum_{n \\mathop = 1}^N a_n\n | c = \n}}\n{{eqn | r = \\sum_{n \\mathop = 1}^\\infty a_n\n | c = \n}}\n{{eqn | r = a_1 + a_2 + a_3 + \\cdots\n | c = \n}}\n{{end-eqn}}\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"8392","document_content":"\\begin{definition}[Definition:Series\/Real]\nA '''real series''' $S_n$ is the limit to infinity of the sequence of partial sums of a real sequence $\\sequence {a_n}$:\n{{begin-eqn}}\n{{eqn | l = S_n\n | r = \\lim_{N \\mathop \\to \\infty} \\sum_{n \\mathop = 1}^N a_n\n | c = \n}}\n{{eqn | r = \\sum_{n \\mathop = 1}^\\infty a_n\n | c = \n}}\n{{eqn | r = a_1 + a_2 + a_3 + \\cdots\n | c = \n}}\n{{end-eqn}}\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"8547","document_content":"\\begin{definition}[Definition:Simple Harmonic Motion\/Frequency]\nConsider a physical system $S$ in a state of simple harmonic motion:\n:$x = A \\map \\sin {\\omega t + \\phi}$\nThe '''frequency''' $\\nu$ of the motion of $S$ is the number of 
complete cycles per unit time:\n:$\\nu = \\dfrac 1 T = \\dfrac \\omega {2 \\pi}$\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"8569","document_content":"\\begin{definition}[Definition:Simultaneous Equations\/Solution Set]\nConsider the system of $m$ simultaneous equations in $n$ variables:\n:$\\mathbb S := \\forall i \\in \\set {1, 2, \\ldots, m} : \\map {f_i} {x_1, x_2, \\ldots x_n} = \\beta_i$\nLet $\\mathbb X$ be the set of ordered $n$-tuples:\n:$\\set {\\sequence {x_j}_{j \\mathop \\in \\set {1, 2, \\ldots, n} }: \\forall i \\in \\set {1, 2, \\ldots, m}: \\map {f_i} {\\sequence {x_j} } = \\beta_i}$\nwhich satisfies each of the equations in $\\mathbb S$.\nThen $\\mathbb X$ is called the '''solution set''' of $\\mathbb S$.\nThus to '''solve''' a system of simultaneous equations is to find all the elements of $\\mathbb X$\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"8903","document_content":"\\begin{definition}[Definition:Stirling Numbers of the Second Kind\/Definition 1]\n'''Stirling numbers of the second kind''' are defined recursively by:\n:$\\ds {n \\brace k} := \\begin{cases}\n\\delta_{n k} & : k = 0 \\text{ or } n = 0 \\\\\n& \\\\\n\\ds {n - 1 \\brace k - 1} + k {n - 1 \\brace k} & : \\text{otherwise} \\\\\n\\end{cases}$\nwhere:\n: $\\delta_{n k}$ is the Kronecker delta\n: $n$ and $k$ are non-negative integers.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"9320","document_content":"\\begin{definition}[Definition:Test Statistic]\nLet $\\theta$ be a population parameter of some population $P$. \nLet $\\Omega$ be the parameter space of $\\theta$. \nLet $\\mathbf X$ be a random sample from $P$. 
\nLet $T = \\map f {\\mathbf X}$ be a sample statistic.\nLet $\\delta$ be a test procedure of the form: \n:reject $H_0$ if $T \\in C$\nfor some null hypothesis $H_0$ and some $C \\subset \\Omega$.\nWe refer to $T$ as the '''test statistic''' of $\\delta$.\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"10462","document_content":"\\section{Multinomial Coefficient expressed as Product of Binomial Coefficients}\nTags: Multinomial Coefficients, Binomial Coefficients\n\n\\begin{theorem}\n:$\\dbinom {k_1 + k_2 + \\cdots + k_m} {k_1, k_2, \\ldots, k_m} = \\dbinom {k_1 + k_2} {k_1} \\dbinom {k_1 + k_2 + k_3} {k_1 + k_2} \\cdots \\dbinom {k_1 + k_2 + \\cdots + k_m} {k_1 + k_2 + \\cdots + k_{m - 1} }$\nwhere:\n:$\\dbinom {k_1 + k_2 + \\cdots + k_m} {k_1, k_2, \\ldots, k_m}$ denotes a multinomial coefficient\n:$\\dbinom {k_1 + k_2} {k_1}$ etc. denotes binomial coefficients.\n\\end{theorem}\n\n\\begin{proof}\nThe proof proceeds by induction.\nFor all $m \\in \\Z_{> 1}$, let $\\map P m$ be the proposition:\n:$\\dbinom {k_1 + k_2 + \\cdots + k_m} {k_1, k_2, \\ldots, k_m} = \\dbinom {k_1 + k_2} {k_1} \\dbinom {k_1 + k_2 + k_3} {k_1 + k_2} \\cdots \\dbinom {k_1 + k_2 + \\cdots + k_m} {k_1 + k_2 + \\cdots + k_{m - 1} }$\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"10463","document_content":"\\section{Multinomial Theorem}\nTags: Multinomial Coefficients, Binomial Coefficients, Discrete Mathematics, Proofs by Induction, Algebra\n\n\\begin{theorem}\nLet $x_1, x_2, \\ldots, x_k \\in F$, where $F$ is a field.\nThen:\n:$\\ds \\paren {x_1 + x_2 + \\cdots + x_m}^n = \\sum_{k_1 \\mathop + k_2 \\mathop + \\mathop \\cdots \\mathop + k_m \\mathop = n} \\binom n {k_1, k_2, \\ldots, k_m} {x_1}^{k_1} {x_2}^{k_2} \\cdots {x_m}^{k_m}$\nwhere:\n:$m \\in \\Z_{> 0}$ is a positive integer\n:$n \\in \\Z_{\\ge 0}$ is a non-negative integer\n:$\\dbinom n {k_1, k_2, \\ldots, k_m} = \\dfrac {n!} 
{k_1! \\, k_2! \\, \\cdots k_m!}$ denotes a multinomial coefficient.\nThe sum is taken for all non-negative integers $k_1, k_2, \\ldots, k_m$ such that $k_1 + k_2 + \\cdots + k_m = n$, and with the understanding that wherever $0^0$ may appear it shall be considered to have a value of $1$.\nThe '''multinomial theorem''' is a generalization of the Binomial Theorem.\n\\end{theorem}\n\n\\begin{proof}\nThe proof proceeds by induction on $m$.\nFor each $m \\in \\N_{\\ge 1}$, let $\\map P m$ be the proposition:\n:$\\ds \\forall n \\in \\N: \\paren {x_1 + x_2 + \\cdots + x_m}^n = \\sum_{k_1 \\mathop + k_2 \\mathop + \\mathop \\cdots \\mathop + k_m \\mathop = n} \\binom n {k_1, k_2, \\ldots, k_m} {x_1}^{k_1} {x_2}^{k_2} \\cdots {x_m}^{k_m}$\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"10521","document_content":"\\section{Modulus of Limit}\nTags: Modulus of Limit, Limits of Sequences\n\n\\begin{theorem}\nLet $X$ be one of the standard number fields $\\Q, \\R, \\C$.\nLet $\\sequence {x_n}$ be a sequence in $X$.\nLet $\\sequence {x_n}$ be convergent to the limit $l$.\nThat is, let $\\ds \\lim_{n \\mathop \\to \\infty} x_n = l$.\nThen\n:$\\ds \\lim_{n \\mathop \\to \\infty} \\cmod {x_n} = \\cmod l$\nwhere $\\cmod {x_n}$ is the modulus of $x_n$.\n\\end{theorem}\n\n\\begin{proof}\nBy the Triangle Inequality, we have:\n:$\\cmod {\\cmod {x_n} - \\cmod l} \\le \\cmod {x_n - l}$\nHence by the Squeeze Theorem and Convergent Sequence Minus Limit, $\\cmod {x_n} \\to \\cmod l$ as $n \\to \\infty$.\n{{Qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"10794","document_content":"\\section{Mean Value Theorem}\nTags: Differential Calculus, Named Theorems, Mean Value Theorem\n\n\\begin{theorem}\nLet $f$ be a real function which is continuous on the closed interval $\\closedint a b$ and differentiable on the open interval $\\openint a b$.\nThen:\n:$\\exists \\xi \\in \\openint a b: 
\\map {f'} \\xi = \\dfrac {\\map f b - \\map f a} {b - a}$\n\\end{theorem}\n\n\\begin{proof}\nFor any constant $h \\in \\R$ we may construct the real function defined on $\\closedint a b$ by:\n:$\\map F x = \\map f x + h x$\nWe have that $h x$ is continuous on $\\closedint a b$ from Linear Function is Continuous.\nFrom the Sum Rule for Continuous Functions, $F$ is continuous on $\\closedint a b$ and differentiable on $\\openint a b$.\nLet us calculate what the constant $h$ has to be such that $\\map F a = \\map F b$:\n{{begin-eqn}}\n{{eqn | l = \\map F a\n | r = \\map F b\n | c = \n}}\n{{eqn | ll= \\leadsto\n | l = \\map f a + h a\n | r = \\map f b + h b\n | c = \n}}\n{{eqn | ll= \\leadsto\n | l = \\map f a - \\map f b\n | r = h b - h a\n | c = rearranging\n}}\n{{eqn | ll= \\leadsto\n | l = \\map f a - \\map f b\n | r = h \\paren {b - a}\n | c = Real Multiplication Distributes over Real Addition\n}}\n{{eqn | ll= \\leadsto\n | l = h\n | r = -\\dfrac {\\map f b - \\map f a} {b - a}\n | c = rearranging\n}}\n{{end-eqn}}\nSince $F$ satisfies the conditions for the application of Rolle's Theorem:\n:$\\exists \\xi \\in \\openint a b: \\map {F'} \\xi = 0$\nBut then:\n:$\\map {F'} \\xi = \\map {f'} \\xi + h = 0$\nThe result follows.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11018","document_content":"\\section{Liouville's Theorem (Complex Analysis)}\nTags: Complex Analysis, Named Theorems\n\n\\begin{theorem}\nLet $f: \\C \\to \\C$ be a bounded entire function.\nThen $f$ is constant.\n\\end{theorem}\n\n\\begin{proof}\nBy assumption, there is $M \\ge 0$ such that $\\cmod {\\map f z} \\le M$ for all $z \\in \\C$. 
\nFor any $R \\in \\R: R > 0$, consider the function:\n:$\\map {f_R} z := \\map f {R z}$\nUsing the Cauchy Integral Formula, we see that:\n:$\\ds \\cmod {\\map {f_R'} z} = \\frac 1 {2 \\pi} \\cmod {\\int_{\\map {C_1} z} \\frac {\\map f w} {\\paren {w - z}^2} \\rd w} \\le \\frac 1 {2 \\pi} \\int_{\\map {C_1} z} M \\rd w = M$\nwhere $\\map {C_1} z$ denotes the circle of radius $1$ around $z$.\nHence:\n:$\\ds \\cmod {\\map {f'} {R z} } = \\cmod {\\map {f_R'} z} \/ R \\le M \/ R$\nSince $z$ and $R$ were arbitrary, it follows that $\\cmod {\\map {f'} z} = 0$ for all $z \\in \\C$.\nThus $f$ is constant.\n{{qed}}\n{{MissingLinks}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11239","document_content":"\\section{Linear Bound Lemma}\nTags: Named Theorems, Graph Theory\n\n\\begin{theorem}\nFor a simple connected planar graph $G_n$, where $n \\ge 3$ is the number of vertices:\n:$m \\le 3 n - 6$, where $m$ is the number of edges.\n\\end{theorem}\n\n\\begin{proof}\nLet $f$ denote the number of faces of $G_n$. \nLet the sequence $\\sequence {s_i}_{i \\mathop = 1}^f$ be the regions of a planar embedding of $G_n$. \nConsider the sequence $\\sequence {r_i}_{i \\mathop = 1}^f$ where $r_i$ denotes the number of boundary edges for $s_i$. \nSince $G_n$ is simple, by the definition of planar embedding: \n* every region has at least $3$ boundary edges\n* every edge is a boundary edge of at most two regions in the planar embedding.\nUsing these two facts, we can bound $\\ds \\sum_{i \\mathop = 1}^f r_i$ as:\n:$3 f \\le \\ds \\sum_{i \\mathop = 1}^f r_i \\le 2m$\nNow, substituting $f \\le 2 m \/ 3$ into the Euler Polyhedron Formula, we arrive at $m \\le 3 n - 6$.\n{{qed}}\nCategory:Graph Theory\nCategory:Named Theorems\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11276","document_content":"\\section{Limit Comparison Test}\nTags: Convergence Tests, Series\n\n\\begin{theorem}\nLet $\\sequence {a_n}$ and $\\sequence {b_n}$ be sequences in $\\R$.\nLet $\\ds \\frac {a_n} {b_n} \\to l$ as $n \\to \\infty$ where $l \\in \\R_{>0}$.\nThen the series $\\ds \\sum_{n \\mathop = 1}^\\infty a_n$ and $\\ds \\sum_{n \\mathop = 1}^\\infty b_n$ are either both convergent or both divergent.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\ds \\sum_{n \\mathop = 1}^\\infty b_n$ be convergent.\nThe sequence $\\sequence {a_n \/ b_n}$ converges, and a Convergent Sequence is Bounded.\nSo it follows that:\n:$\\exists H: \\forall n \\in \\N_{>0}: a_n \\le H b_n$\nThus, by the corollary to the Comparison Test, $\\ds \\sum_{n \\mathop = 1}^\\infty a_n$ is convergent.\nSince $l > 0$, from Sequence Converges to Within Half Limit:\n:$\\exists N: \\forall n > N: a_n > \\dfrac 1 2 l b_n$\nHence the convergence of $\\ds \\sum_{n \\mathop = 1}^\\infty a_n$ implies the convergence of $\\ds \\sum_{n \\mathop = 1}^\\infty b_n$.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11426","document_content":"\\section{Lebesgue Measure is Diffuse}\nTags: Measure Theory, Lebesgue Measure, Diffuse Measures\n\n\\begin{theorem}\nLet $\\lambda^n$ be Lebesgue measure on 
$\\R^n$.\nThen $\\lambda^n$ is a diffuse measure.\n\\end{theorem}\n\n\\begin{proof}\nA singleton $\\set {\\mathbf x} \\subseteq \\R^n$ is seen to be closed by combining:\n:Euclidean Space is Complete Metric Space\n:Metric Space is Hausdorff\n:Corollary to Compact Subspace of Hausdorff Space is Closed\nHence by Closed Set Measurable in Borel Sigma-Algebra:\n:$\\set {\\mathbf x} \\in \\map \\BB {\\R^n}$\nwhere $\\map \\BB {\\R^n}$ is the Borel $\\sigma$-algebra on $\\R^n$.\nWrite $\\mathbf x + \\epsilon = \\tuple {x_1 + \\epsilon, \\ldots, x_n + \\epsilon}$ for $\\epsilon > 0$.\nThen:\n:$\\ds \\set {\\mathbf x} = \\bigcap_{m \\mathop \\in \\N} \\horectr {\\mathbf x} {\\mathbf x + \\frac 1 m}$\nwhere $\\horectr {\\mathbf x} {\\mathbf x + \\dfrac 1 m}$ is a half-open $n$-rectangle.\n{{handwaving|justify equality}}\nBy definition of Lebesgue measure, we have (for all $m \\in \\N$):\n:$\\ds \\map {\\lambda^n} {\\horectr {\\mathbf x} {\\mathbf x + \\frac 1 m} } = \\prod_{i \\mathop = 1}^n \\frac 1 m = m^{-n}$\nFrom Characterization of Measures, it follows that:\n:$\\ds \\map {\\lambda^n} {\\set {\\mathbf x} } = \\lim_{m \\mathop \\to \\infty} m^{-n}$\nwhich equals $0$ from Sequence of Powers of Reciprocals is Null Sequence.\nTherefore, for each $\\mathbf x \\in \\R^n$:\n:$\\map {\\lambda^n} {\\set {\\mathbf x} } = 0$\nthat is, $\\lambda^n$ is a diffuse measure.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11592","document_content":"\\section{L'H\u00f4pital's Rule}\nTags: Calculus, limits, derivatives, infinity, zero, Calculus, Limits, L'H\u00f4pital's Rule, Limits of Real Functions, Limits of Functions, Differential Calculus, Named Theorems, Calculus, Limits\n\n\\begin{theorem}\nLet $f$ and $g$ be real functions which are continuous on the closed interval $\\closedint a b$ and differentiable on the open interval $\\openint a b$.\nLet:\n:$\\forall x \\in \\openint a b: \\map {g'} x \\ne 0$\nwhere $g'$ 
denotes the derivative of $g$ {{WRT|Differentiation}} $x$.\nLet:\n:$\\map f a = \\map g a = 0$\nThen:\n:$\\ds \\lim_{x \\mathop \\to a^+} \\frac {\\map f x} {\\map g x} = \\lim_{x \\mathop \\to a^+} \\frac {\\map {f'} x} {\\map {g'} x}$\nprovided that the second limit exists.\n\\end{theorem}\n\n\\begin{proof}\nLet $l = \\displaystyle \\lim_{x \\to a^+} \\frac {f' \\left({x}\\right)} {g' \\left({x}\\right)}$.\nLet $\\epsilon > 0$.\nBy the definition of limit, we ought to find a $\\delta > 0$ such that:\n:$\\forall x: \\left\\vert{x - a}\\right\\vert < \\delta \\implies \\left\\vert{ \\dfrac {f \\left({x}\\right)} {g \\left({x}\\right)} - l }\\right\\vert < \\epsilon$\nFix $\\delta$ such that:\n:$\\forall x: \\left\\vert{x - a}\\right\\vert < \\delta \\implies \\left\\vert{ \\dfrac {f' \\left({x}\\right)} {g' \\left({x}\\right)} - l }\\right\\vert < \\epsilon$\nwhich is possible by the definition of limit.\nLet $x$ be such that $\\left\\vert{x - a}\\right\\vert < \\delta$.\nBy the Cauchy Mean Value Theorem with $b = x$:\n: $\\exists \\xi \\in \\left({a \\,.\\,.\\, x}\\right): \\dfrac {f' \\left({\\xi}\\right)} {g' \\left({\\xi}\\right)} = \\dfrac {f \\left({x}\\right) - f \\left({a}\\right)} {g \\left({x}\\right) - g \\left({a}\\right)}$\nSince $f \\left({a}\\right) = g \\left({a}\\right) = 0$, we have:\n: $\\exists \\xi \\in \\left({a \\,.\\,.\\, x}\\right): \\dfrac {f' \\left({\\xi}\\right)} {g' \\left({\\xi}\\right)} = \\dfrac {f \\left({x}\\right)} {g \\left({x}\\right)}$\nNow, as $a < \\xi < x$, it follows that $\\left\\vert{\\xi - a}\\right\\vert < \\delta$ as well.\nTherefore:\n:$\\left\\vert{ \\dfrac {f \\left({x}\\right)} {g \\left({x}\\right)} - l }\\right\\vert = \\left\\vert{ \\dfrac {f' \\left({\\xi}\\right)} {g' \\left({\\xi}\\right)} - l }\\right\\vert < \\epsilon$\nwhich leads us to the desired conclusion that:\n:$\\displaystyle \\lim_{x \\to a^+} \\frac {f \\left({x}\\right)} {g \\left({x}\\right)} = \\lim_{x \\to a^+} \\frac {f^{\\prime} 
\\left({x}\\right)} {g^{\\prime} \\left({x}\\right)}$\n{{qed}}\n{{namedfor|Guillaume de l'H\u00f4pital|cat=L'H\u00f4pital}}\nHowever, this result was in fact discovered by Johann Bernoulli.\nBecause of variants in the rendition of his name, this rule is often seen written as '''L'Hospital's Rule'''.\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11617","document_content":"\\section{Jensen's Inequality (Measure Theory)\/Concave Functions}\nTags: Measure Theory\n\n\\begin{theorem}\nLet $\\struct {X, \\Sigma, \\mu}$ be a measure space.\nLet $f: X \\to \\R$ be a $\\mu$-integrable function such that $f \\ge 0$ pointwise.\nLet $\\Lambda: \\hointr 0 \\infty \\to \\hointr 0 \\infty$ be a concave function.\nThen for all positive measurable functions $g: X \\to \\R$, $g \\in \\map {\\MM^+} \\Sigma$:\n:$\\dfrac {\\int \\paren {\\Lambda \\circ g} \\cdot f \\rd \\mu} {\\int f \\rd \\mu} \\le \\map \\Lambda {\\dfrac {\\int g \\cdot f \\rd \\mu} {\\int f \\rd \\mu} }$\nwhere $\\circ$ denotes composition, and $\\cdot$ denotes pointwise multiplication.\n\\end{theorem}\n\n\\begin{proof}\n{{proof wanted}}\nCategory:Measure Theory\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11618","document_content":"\\section{Jensen's Inequality (Measure Theory)\/Convex Functions}\nTags: Measure Theory\n\n\\begin{theorem}\nLet $\\struct {X, \\Sigma, \\mu}$ be a measure space.\nLet $f: X \\to \\R$ be a $\\mu$-integrable function such that $f \\ge 0$ pointwise.\nLet $V: \\hointr 0 \\infty \\to \\hointr 0 \\infty$ be a convex function.\nThen for all positive measurable functions $g: X \\to \\R$, $g \\in \\map {\\MM^+} \\Sigma$:\n:$\\map V {\\dfrac {\\int g \\cdot f \\rd \\mu} {\\int f \\rd \\mu} } \\le \\dfrac {\\int \\paren {V \\circ g} \\cdot f \\rd \\mu} {\\int f \\rd \\mu}$\nwhere $\\circ$ denotes composition, and $\\cdot$ denotes pointwise 
multiplication.\n\\end{theorem}\n\n\\begin{proof}\n{{MissingLinks}}\nLet $\\d \\map \\nu x := \\dfrac {\\map f x} {\\int \\map f s \\rd \\map \\mu s} \\rd \\map \\mu x$ be a probability measure.\n{{explain|This proof invokes a probability measure. Needs to be for a measure space. Does the proof work for both?}}\nLet $\\ds x_0 := \\int \\map g s \\rd \\map \\nu s$.\nThen by convexity there exist constants $a, b$ such that:\n{{begin-eqn}}\n{{eqn | l = \\map V {x_0}\n | r = a x_0 + b\n}}\n{{eqn | q = \\forall x \\in \\R_{\\ge 0}\n | l = \\map V x\n | o = \\ge\n | r = a x + b\n}}\n{{end-eqn}}\nIn other words, there is a tangent line at $\\tuple {x_0, \\map V {x_0} }$ that falls below the graph of $V$.\nTherefore:\n{{begin-eqn}}\n{{eqn | l = \\map V {\\map g s}\n | o = \\ge\n | r = a \\map g s + b\n | c = \n}}\n{{eqn | ll= \\leadsto\n | l = \\int \\map V {\\map g s} \\rd \\map \\nu s\n | o = \\ge\n | r = a \\int \\map g s \\rd \\map \\nu s + b\n | c = Integration {{WRT|Integration}} $\\map \\nu s$\n}}\n{{eqn | r = \\map V {x_0}\n | c = \n}}\n{{eqn | r = \\map V {\\int \\map g s \\rd \\map \\nu s}\n | c = \n}}\n{{end-eqn}}\n{{explain|why this does what it purports to}}\nCategory:Measure Theory\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11619","document_content":"\\section{Jensen's Inequality (Real Analysis)}\nTags: Jensen's Inequality (Real Analysis), Inequalities, Real Analysis, Analysis\n\n\\begin{theorem}\nLet $I$ be a real interval.\nLet $\\phi: I \\to \\R$ be a convex function.\nLet $x_1, x_2, \\ldots, x_n \\in I$.\nLet $\\lambda_1, \\lambda_2, \\ldots, \\lambda_n \\ge 0$ be real numbers, at least one of which is non-zero.\nThen:\n:$\\ds \\map \\phi {\\frac {\\sum_{k \\mathop = 1}^n \\lambda_k x_k} {\\sum_{k \\mathop = 1}^n \\lambda_k} } \\le \\frac {\\sum_{k \\mathop = 1}^n \\lambda_k \\map \\phi {x_k} } {\\sum_{k \\mathop = 1}^n \\lambda_k}$\nFor $\\phi$ strictly convex, equality holds {{iff}} $x_1 = x_2 = \\cdots = 
x_n$.\n\\end{theorem}\n\n\\begin{proof}\nThe proof proceeds by mathematical induction on $n$.\nFor all $n \\in \\N_{> 0}$, let $\\map P n$ be the proposition:\n:$\\ds \\map \\phi {\\frac {\\sum_{k \\mathop = 1}^n \\lambda_k x_k} {\\sum_{k \\mathop = 1}^n \\lambda_k} } \\le \\frac {\\sum_{k \\mathop = 1}^n \\lambda_k \\map \\phi {x_k} } {\\sum_{k \\mathop = 1}^n \\lambda_k}$\n$\\map P 1$ is true, as this just says:\n:$\\ds \\map \\phi {\\frac {\\lambda_1 x_1} {\\lambda_1} } \\le \\frac {\\lambda_1 \\map \\phi {x_1} } {\\lambda_1}$\n{{begin-eqn}}\n{{eqn | l = \\map \\phi {\\frac {\\lambda_1 x_1} {\\lambda_1} }\n | r = \\map \\phi {x_1}\n | c = \n}}\n{{eqn | r = \\frac {\\lambda_1 \\map \\phi {x_1} } {\\lambda_1}\n | c = \n}}\n{{end-eqn}}\ntrivially.\nThis is our basis for the induction.\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11654","document_content":"\\section{Kepler's Laws of Planetary Motion\/Third Law\/Examples}\nTags: Kepler's Laws of Planetary Motion\n\n\\begin{theorem}\nLet $P$ be a planet orbiting the sun $S$\nLet $P$ be:\n:$\\text{(a)}: \\quad$ Twice as far away from $S$ as the Earth;\n:$\\text{(b)}: \\quad$ $3$ times as far away from $S$ as the Earth;\n:$\\text{(c)}: \\quad$ $25$ times as far away from $S$ as the Earth.\nThen the orbital period of $P$ is:\n:$\\text{(a)}: \\quad$ approximately $2.8$ years;\n:$\\text{(b)}: \\quad$ approximately $5.2$ years;\n:$\\text{(c)}: \\quad$ $125$ years.\n\\end{theorem}\n\n\\begin{proof}\nLet the orbital period of Earth be $T'$ years.\nLet the mean distance of Earth from $S$ be $A$.\nLet the orbital period of $P$ be $T$ years.\nLet the mean distance of $P$ from $S$ be $a$.\nBy Kepler's Third Law of Planetary Motion:\n{{begin-eqn}}\n{{eqn | l = \\dfrac {T'^2} {A^3}\n | r = \\dfrac {T^2} {a^3}\n | c = \n}}\n{{eqn | ll= \\leadsto\n | l = T^2\n | r = \\dfrac {a^3} {A^3}\n | c = as $T'$ is $1$ year\n}}\n{{eqn | ll= \\leadsto\n | l = T\n | r = \\left({\\dfrac a 
A}\\right)^{3\/2}\n | c = \n}}\n{{end-eqn}}\nThus the required orbital periods are:\n:$\\text{(a)}: \\quad 2^{3\/2} = 2 \\sqrt 2 \\approx 2.8$ years\n:$\\text{(b)}: \\quad 3^{3\/2} = 3 \\sqrt 3 \\approx 5.2$ years\n:$\\text{(c)}: \\quad 25^{3\/2} = 125$ years.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"11986","document_content":"\\section{Integration by Substitution\/Definite Integral}\nTags: Integration by Substitution\n\n\\begin{theorem}\nLet $\\phi$ be a real function which has a derivative on the closed interval $\\closedint a b$.\nLet $I$ be an open interval which contains the image of $\\closedint a b$ under $\\phi$.\nLet $f$ be a real function which is continuous on $I$.\nThe definite integral of $f$ from $a$ to $b$ can be evaluated by:\n:$\\ds \\int_{\\map \\phi a}^{\\map \\phi b} \\map f t \\rd t = \\int_a^b \\map f {\\map \\phi u} \\dfrac \\d {\\d u} \\map \\phi u \\rd u$\nwhere $t = \\map \\phi u$.\nThe technique of solving an integral in this manner is called '''integration by substitution'''.\n\\end{theorem}\n\n\\begin{proof}\nLet $F$ be an antiderivative of $f$.\nWe have:\n{{begin-eqn}}\n{{eqn | l = \\map {\\frac \\d {\\d u} } {\\map F t}\n | r = \\map {\\frac \\d {\\d u} } {\\map F {\\map \\phi u} }\n | c = Definition of $\\map \\phi u$\n}}\n{{eqn | r = \\dfrac \\d {\\d t} \\map F {\\map \\phi u} \\dfrac \\d {\\d u} \\map \\phi u\n | c = Chain Rule for Derivatives\n}}\n{{eqn | r = \\map f {\\map \\phi u} \\dfrac \\d {\\d u} \\map \\phi u\n | c = as $\\map F t = \\ds \\int \\map f t \\rd t$\n}}\n{{end-eqn}}\nHence $\\map F {\\map \\phi u}$ is an antiderivative of $\\map f {\\map \\phi u} \\dfrac \\d {\\d u} \\map \\phi u$.\nThus:\n{{begin-eqn}}\n{{eqn | l = \\int_a^b \\map f {\\map \\phi u} \\map {\\phi'} u \\rd u\n | r = \\bigintlimits {\\map F {\\map \\phi u} } a b\n | c = Fundamental Theorem of Calculus: Second Part\n}}\n{{eqn | n = 1\n | r = \\map F {\\map \\phi b} - \\map F 
{\\map \\phi a}\n | c = \n}}\n{{end-eqn}}\nHowever, also:\n{{begin-eqn}}\n{{eqn | l = \\int_{\\map \\phi a}^{\\map \\phi b} \\map f t \\rd t\n | r = \\bigintlimits {\\map F t} {\\map \\phi a} {\\map \\phi b}\n | c = \n}}\n{{eqn | r = \\map F {\\map \\phi b} - \\map F {\\map \\phi a}\n | c = \n}}\n{{eqn | r = \\int_a^b \\map f {\\map \\phi u} \\map {\\phi'} u \\rd u\n | c = from $(1)$\n}}\n{{end-eqn}}\nwhich was to be proved. \n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"12010","document_content":"\\section{Intermediate Value Theorem}\nTags: Proofs, Named Theorems, Analysis\n\n\\begin{theorem}\nLet $f: S \\to \\R$ be a real function on some subset $S$ of $\\R$.\nLet $I \\subseteq S$ be a real interval.\nLet $f: I \\to \\R$ be continuous on $I$.\nThen $f$ is a Darboux function.\nThat is:\nLet $a, b \\in I$.\nLet $k \\in \\R$ lie between $\\map f a$ and $\\map f b$.\nThat is, either:\n:$\\map f a < k < \\map f b$\nor:\n:$\\map f b < k < \\map f a$\nThen $\\exists c \\in \\openint a b$ such that $\\map f c = k$.\n\\end{theorem}\n\n\\begin{proof}\nThis theorem is a restatement of Image of Interval by Continuous Function is Interval.\nFrom Image of Interval by Continuous Function is Interval, the image of $\\openint a b$ under $f$ is also a real interval (but not necessarily open).\nThus if $k$ lies between $\\map f a$ and $\\map f b$, it must be the case that:\n:$k \\in \\Img {\\openint a b}$\nThe result follows.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"12011","document_content":"\\section{Intermediate Value Theorem\/Corollary}\nTags: Named Theorems, Analysis\n\n\\begin{theorem}\nLet $I$ be a real interval.\nLet $a, b \\in I$ such that $\\openint a b$ is an open interval.\nLet $f: I \\to \\R$ be a real function which is continuous on $\\openint a b$.\nLet $0 \\in \\R$ lie between $\\map f a$ and $\\map f b$.\nThat is, either:\n:$\\map f 
a < 0 < \\map f b$\nor:\n:$\\map f b < 0 < \\map f a$\nThen $f$ has a root in $\\openint a b$.\n\\end{theorem}\n\n\\begin{proof}\nFollows directly from the Intermediate Value Theorem and from the definition of root.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"12012","document_content":"\\section{Intermediate Value Theorem (Topology)}\nTags: Connected Spaces, Order Topology, Connectedness, Continuous Mappings\n\n\\begin{theorem}\nLet $X$ be a connected topological space.\nLet $\\struct {Y, \\preceq, \\tau}$ be a totally ordered set equipped with the order topology.\nLet $f: X \\to Y$ be a continuous mapping.\nLet $a, b \\in X$ be two points such that:\n:$\\map f a \\prec \\map f b$\nLet:\n:$r \\in Y: \\map f a \\prec r \\prec \\map f b$\nThen there exists a point $c$ of $X$ such that:\n:$\\map f c = r$\n\\end{theorem}\n\n\\begin{proof}\nLet $a, b \\in X$, and let $r \\in Y$ lie between $\\map f a$ and $\\map f b$.\nDefine the sets:\n:$A = f \\sqbrk X \\cap r^\\prec$ and $B = f \\sqbrk X \\cap r^\\succ$\nwhere $r^\\prec$ and $r^\\succ$ denote the strict lower closure and strict upper closure respectively of $r$ in $Y$.\n$A$ and $B$ are disjoint by construction.\n$A$ and $B$ are also non-empty since one contains $\\map f a$ and the other contains $\\map f b$.\n$A$ and $B$ are also both open by definition as the intersection of open sets.\nSuppose there is no point $c$ such that $\\map f c = r$.\nThen:\n:$f \\sqbrk X = A \\cup B$\nso $A$ and $B$ constitute a separation of $X$.\nBut this contradicts the fact that Continuous Image of Connected Space is Connected.\nHence by Proof by Contradiction:\n:$\\exists c \\in X: \\map f c = r$\nwhich is what was to be proved.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"12693","document_content":"\\section{Homogeneous Linear Equations with More Unknowns than Equations}\nTags: Algebra, 
Linear Algebra, Definitions: Linear Algebra, Definitions: Algebra\n\n\\begin{theorem}\nLet $\\alpha_{ij}$ be elements of a field $F$, where $1 \\le i \\le m, 1 \\le j \\le n$.\nLet $n > m$.\nThen there exist $x_1, x_2, \\ldots, x_n \\in F$ not all zero, such that:\n:$\\ds \\forall i: 1 \\le i \\le m: \\sum_{j \\mathop = 1}^n \\alpha_{ij} x_j = 0$\nAlternatively, this can be expressed as:\nIf $n > m$, the following system of homogeneous linear equations:\n{{begin-eqn}}\n{{eqn | l = 0\n | r = \\alpha_{11} x_1 + \\alpha_{12} x_2 + \\cdots + \\alpha_{1n} x_n\n}}\n{{eqn | l = 0\n | r = \\alpha_{21} x_1 + \\alpha_{22} x_2 + \\cdots + \\alpha_{2n} x_n\n}}\n{{eqn | o = \\cdots\n}}\n{{eqn | l = 0\n | r = \\alpha_{m1} x_1 + \\alpha_{m2} x_2 + \\cdots + \\alpha_{mn} x_n\n}}\n{{end-eqn}}\nhas at least one solution such that not all of $x_1, \\ldots, x_n$ is zero.\n\\end{theorem}\n\n\\begin{proof}\nConsider these vectors for $1 \\le k \\le n$:\n:$\\mathbf a_k = \\tuple {\\alpha_{1k}, \\alpha_{2k}, \\dots, \\alpha_{mk}} \\in F^m$\nSince $n > m$, by Cardinality of Linearly Independent Set is No Greater than Dimension, $\\set {\\mathbf a_1, \\mathbf a_2, \\dots, \\mathbf a_n}$ is linearly dependent.\nBy definition of linearly dependent:\n:$\\ds \\exists \\set {\\lambda_k: 1 \\le k \\le n} \\subseteq F: \\sum_{k \\mathop = 1}^n \\lambda_k \\mathbf a_k = \\mathbf 0$\nwhere at least one of $\\lambda_k$ is not equal to $0$.\nThe system of homogeneous linear equations above can be written as:\n:$\\ds \\sum_{k \\mathop = 1}^n x_k \\mathbf a_k = \\mathbf 0$\nThe result follows from taking $x_k = \\lambda_k$.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"13011","document_content":"\\section{Generating Function for Binomial Coefficients}\nTags: Generating Functions\n\n\\begin{theorem}\nLet $\\sequence {a_n}$ be the sequence defined as:\n:$\\forall n \\in \\N: a_n = \\begin{cases}\n\\dbinom m n & : n = 0, 1, 2, \\ldots, m 
\\\\\n0 & : \\text{otherwise}\\end{cases}$\nwhere $\\dbinom m n$ denotes a binomial coefficient.\nThen the generating function for $\\sequence {a_n}$ is given as:\n:$\\ds \\map G z = \\sum_{n \\mathop = 0}^m \\dbinom m n z^n = \\paren {1 + z}^m$\n\\end{theorem}\n\n\\begin{proof}\n{{begin-eqn}}\n{{eqn | l = \\paren {1 + z}^m\n | r = \\sum_{n \\mathop = 0}^m \\binom m n z^n\n | c = Binomial Theorem\n}}\n{{end-eqn}}\nThe result follows from the definition of a generating function.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"13467","document_content":"\\section{Finite Semigroup Equal Elements for Different Powers}\nTags: Semigroups\n\n\\begin{theorem}\nLet $\\left({S, \\circ}\\right)$ be a finite semigroup.\nThen:\n: $\\forall x \\in S: \\exists m, n \\in \\N: m \\ne n: x^m = x^n$\n\\end{theorem}\n\n\\begin{proof}\nList the positive powers $x, x^2, x^3, \\ldots$ of any element $x$ of a finite semigroup $\\left({S, \\circ}\\right)$.\nSince all are elements of $S$, and the semigroup has a finite number of elements, it follows from the Pigeonhole Principle this list must contain repetitions.\nSo there must be at least one instance where $x^m = x^n$ for some $m, n \\in \\N$.\n{{Qed}}\nCategory:Semigroups\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"13468","document_content":"\\section{Finite Sequences in Set Form Acyclic Graph}\nTags: Graph Theory\n\n\\begin{theorem}\nLet $S$ be a set.\nLet $V$ be the set of finite sequences in $S$.\nLet $E$ be the set of unordered pairs $\\set {p, q}$ of elements of $V$ such that either:\n:$q$ is formed by extending $p$ by one element or\n:$p$ is formed by extending $q$ by one element.\nThat is:\n:$\\card {\\Dom p \\symdif \\Dom q} = 1$, where $\\symdif$ is symmetric difference\n:$p \\restriction D = q \\restriction D$, where $D = \\Dom p \\cap \\Dom q$\nThen $T = \\struct{V, E}$ is an acyclic 
graph.\n\\end{theorem}\n\n\\begin{proof}\n{{proof wanted}}\nCategory:Graph Theory\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"13939","document_content":"\\section{Existence of Positive Root of Positive Real Number\/Positive Exponent}\nTags: Real Numbers, Existence of Positive Root of Positive Real Number\n\n\\begin{theorem}\nLet $x \\in \\R$ be a real number such that $x > 0$.\nLet $n \\in \\Z$ be an integer such that $n > 0$.\nThen there exists a $y \\in \\R: y \\ge 0$ such that $y^n = x$.\n\\end{theorem}\n\n\\begin{proof}\nLet $f$ be the real function defined on the unbounded closed interval $\\hointr 0 \\to$ defined by $\\map f y = y^n$.\nConsider first the case of $n > 0$.\nBy Strictly Positive Integer Power Function is Unbounded Above:\n:$\\exists q \\in \\R_{>0}: \\map f q \\ge x$\nSince $x \\ge 0$:\n:$\\map f 0 \\le x$\nBy the Intermediate Value Theorem:\n:$\\exists y \\in \\R: 0 \\le y \\le q, \\map f y = x$\nHence the result has been shown to hold for $n > 0$.\n{{qed|lemma}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"14101","document_content":"\\section{Error Correction Capability of Linear Code}\nTags: Linear Codes\n\n\\begin{theorem}\nLet $C$ be a linear code.\nLet $C$ have a minimum distance $d$.\nThen $C$ corrects $e$ transmission errors for all $e$ such that $2 e + 1 \\le d$.\n\\end{theorem}\n\n\\begin{proof}\nLet $C$ be a linear code whose master code is $V$.\nLet $c \\in C$ be a transmitted codeword.\nLet $v$ be the received word from $c$.\nBy definition, $v$ is an element of $V$.\nLet $v$ have a distance $e$ from $c$, where $2 e + 1 \\le d$.\nThus there have been $e$ transmission errors.\n{{AimForCont}} $c_1$ is a codeword of $C$, distinct from $c$, such that $\\map d {v, c_1} \\le e$.\nThen:\n{{begin-eqn}}\n{{eqn | l = \\map d {c, c_1}\n | o = \\le\n | r = \\map d {c, v} + \\map d {v, c_1}\n | c = \n}}\n{{eqn | o = \\le\n | r = e + 
e\n | c = \n}}\n{{eqn | o = <\n | r = d\n | c = \n}}\n{{end-eqn}}\nSo $c_1$ has a distance from $c$ less than $d$.\nBut $C$ has a minimum distance $d$.\nThus $c_1$ cannot be a codeword of $C$.\nFrom this contradiction it follows that there is no codeword of $C$ closer to $v$ than $c$.\nHence there is a unique codeword of $C$ which has the smallest distance from $v$.\nHence it can be understood that $C$ has corrected the transmission errors of $v$.\n{{Qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"14139","document_content":"\\section{Euler's Theorem for Planar Graphs}\nTags: Graph Theory\n\n\\begin{theorem}\nLet $G = \\struct {V, E}$ be a connected planar graph with $V$ vertices and $E$ edges.\nLet $F$ be the number of faces of $G$.\nThen:\n:$V - E + F = 2$\n\\end{theorem}\n\n\\begin{proof}\nThe proof proceeds by complete induction.\nLet $G$ be a planar graph with $V$ vertices and $E$ edges.\nFor all $n \\in \\Z_{\\ge 0}$, let $\\map P n$ be the proposition:\n:For all planar graphs $G = \\struct {V, E}$ such that $V + E = n$, the equation $V - E + F = 2$ holds.\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"14371","document_content":"\\section{Equiangular Triangles are Similar}\nTags: Triangles\n\n\\begin{theorem}\nLet two triangles have the same corresponding angles.\nThen their corresponding sides are proportional.\nThus, by definition, such triangles are similar.\n{{:Euclid:Proposition\/VI\/4}}\n\\end{theorem}\n\n\\begin{proof}\nLet $\\triangle ABC, \\triangle DCE$ be equiangular triangles such that:\n:$\\angle ABC = \\angle DCE$\n:$\\angle BAC = \\angle CDE$\n:$\\angle ACB = \\angle CED$\n:300px\nLet $BC$ be placed in a straight line with $CE$.\nFrom Two Angles of Triangle Less than Two Right Angles $\\angle ABC + \\angle ACB$ is less than two right angles.\nAs $\\angle ACB = \\angle DEC$, it follows that $\\angle ABC + \\angle DEC$ is also less 
than two right angles.\nSo from the Parallel Postulate, $BA$ and $ED$, when produced, will meet.\nLet this happen at $F$.\nWe have that $\\angle ABC = \\angle DCE$.\nSo from Equal Corresponding Angles implies Parallel Lines:\n:$BF \\parallel CD$\nAgain, we have that $\\angle ACB = \\angle CED$.\nAgain from Equal Corresponding Angles implies Parallel Lines:\n:$AC \\parallel FE$\nTherefore by definition $\\Box FACD$ is a parallelogram.\nTherefore from Opposite Sides and Angles of Parallelogram are Equal $FA = DC$ and $AC = FD$.\nSince $AC \\parallel FE$, it follows from Parallel Transversal Theorem that:\n:$BA : AF = BC : CE$\nBut $AF = CD$ so:\n:$BA : AF = BC : CE$\nFrom Proportional Magnitudes are Proportional Alternately:\n:$AB : BC = DC : CE$\nSince $CD \\parallel BF$, from Parallel Transversal Theorem:\n:$BC : CE = FD : DE$\nBut $FD = AC$ so $BC : CE = AC : DE$.\nSo from Proportional Magnitudes are Proportional Alternately, $BC : CA = CE : ED$.\nIt then follows from Equality of Ratios Ex Aequali that $BA : AC = CD : DE$.\n{{qed}}\n{{Euclid Note|4|VI}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"14493","document_content":"\\section{Entire Function with Bounded Real Part is Constant}\nTags: Complex Analysis\n\n\\begin{theorem}\nLet $f : \\C \\to \\C$ be an entire function. 
\nLet the real part of $f$ be bounded.\nThat is, there exists a positive real number $M$ such that: \n:$\\cmod {\\map \\Re {\\map f z} } < M$\nfor all $z \\in \\C$, where $\\map \\Re {\\map f z}$ denotes the real part of $\\map f z$.\nThen $f$ is constant.\n\\end{theorem}\n\n\\begin{proof}\nLet $g : \\C \\to \\C$ be a complex function with:\n:$\\ds \\map g z = e^{\\map f z}$\nBy Derivative of Complex Composite Function, $g$ is entire with derivative:\n:$\\ds \\map {g'} z = \\map {f'} z e^{\\map f z}$\nWe have:\n{{begin-eqn}}\n{{eqn\t| l = \\cmod {\\map g z}\n\t| r = e^{\\map \\Re {\\map f z} }\n\t| c = Modulus of Positive Real Number to Complex Power is Positive Real Number to Power of Real Part\n}}\n{{eqn\t| o = \\le\n\t| r = e^{\\cmod {\\map \\Re {\\map f z} } }\n\t| c = Exponential is Strictly Increasing\n}}\n{{eqn\t| o = <\n\t| r = e^M\n\t| c = Exponential is Strictly Increasing\n}}\n{{end-eqn}}\nSo $g$ is a bounded entire function.\nBy Liouville's Theorem, $g$ is therefore a constant function.\nWe therefore have, by Derivative of Constant: Complex:\n:$\\map {g'} z = 0$\nfor all $z \\in \\C$.\nThat is:\n:$\\map {f'} z e^{\\map f z} = 0$\nSince the exponential function is non-zero, we must have: \n:$\\map {f'} z = 0$\nfor all $z \\in \\C$.\nFrom Zero Derivative implies Constant Complex Function, we then have that $f$ is constant on $\\C$. 
\n{{qed}}\nCategory:Complex Analysis\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"15210","document_content":"\\section{Derivative of Composite Function}\nTags: Differential Calculus\n\n\\begin{theorem}\nLet $f, g, h$ be continuous real functions such that:\n:$\\forall x \\in \\R: \\map h x = \\map {f \\circ g} x = \\map f {\\map g x}$\nThen:\n:$\\map {h'} x = \\map {f'} {\\map g x} \\map {g'} x$\nwhere $h'$ denotes the derivative of $h$.\nUsing the $D_x$ notation:\n:$\\map {D_x} {\\map f {\\map g x} } = \\map {D_{\\map g x} } {\\map f {\\map g x} } \\map {D_x} {\\map g x}$\nThis is often informally referred to as the '''chain rule (for differentiation)'''.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\map g x = y$, and let:\n{{begin-eqn}}\n{{eqn | l = \\map g {x + \\delta x}\n | r = y + \\delta y\n | c = \n}}\n{{eqn | ll= \\leadsto\n | l = \\delta y\n | r = \\map g {x + \\delta x} - \\map g x\n | c = \n}}\n{{end-eqn}}\nThus:\n:$\\delta y \\to 0$ as $\\delta x \\to 0$\nand:\n:$(1): \\quad \\dfrac {\\delta y} {\\delta x} \\to \\map {g'} x$\nThere are two cases to consider:\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"15648","document_content":"\\section{Convergence of P-Series\/Real}\nTags: Convergence Tests, P-Series, Convergence of P-Series\n\n\\begin{theorem}\nLet $p \\in \\R$ be a real number.\nThen the $p$-series:\n:$\\ds \\sum_{n \\mathop = 1}^\\infty n^{-p}$\nis convergent {{iff}} $p > 1$.\n\\end{theorem}\n\n\\begin{proof}\nBy the Integral Test:\n:$\\displaystyle \\sum_{n \\mathop = 1}^\\infty \\frac 1 {n^x}$ converges {{iff}} the improper integral $\\displaystyle \\int_1^\\infty \\frac {\\d t} {t^x}$ exists.\nThe result follows from Integral to Infinity of Reciprocal of Power of x.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"15865","document_content":"\\section{Conservation of Angular 
Momentum}\nTags: Classical Mechanics\n\n\\begin{theorem}\nNewton's Laws of Motion imply the conservation of angular momentum in systems of masses in which no external force is acting.\n\\end{theorem}\n\n\\begin{proof}\nWe start by stating Newton's Third Law of Motion in all its detail.\nWe consider a collection of massive bodies denoted by the subscripts $1$ to $N$.\nThese bodies interact with each other and exert forces on each other and these forces occur in equal and opposite pairs.\nThe force $F_{i j}$ exerted by body $i$ on body $j$ is related to the force exerted by body $j$ on body $i$ by:\n:$(1): \\quad \\vec {F_{i j}} = -\\vec {F_{j i}}$\nThe final part of Newton's Third Law of Motion is that these equal and opposite forces act through the line that connects the two bodies in question.\nThis can be stated thus:\n:$(2): \\quad \\vec {F_{i j}} = a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} }$\nwhere:\n:$\\vec{r_i}$ is the position of body $i$\n:$a_{i j} $ is the magnitude of the force.\nAs a consequence of $(1)$:\n:$a_{j i} = a_{i j}$\nLet the total torque $\\vec {\\tau_{\\operatorname {total} } }$ on the system be measured about an origin located at $\\vec {r_0}$.\nThus:\n{{begin-eqn}}\n{{eqn | l = \\vec {\\tau_{\\operatorname {total} } }\n | r = \\sum_i \\vec{\\tau_i}\n | c = \n}}\n{{eqn | r = \\sum_i \\paren {\\paren {\\vec {r_i} - \\vec {r_0} } \\times \\sum_j \\vec {F_{j i} } }\n | c = \n}}\n{{eqn | r = \\sum_{i, j} \\paren {\\paren {\\paren {\\vec {r_j} - \\vec {r_0} } - \\paren {\\vec {r_j} - \\vec {r_i} } } \\times \\vec {F_{j i} } }\n | c = \n}}\n{{eqn | r = \\sum_{i, j} \\paren {\\paren {\\paren {\\vec {r_j} - \\vec {r_0} } - \\paren {\\vec {r_j} - \\vec {r_i} } } \\times a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} } }\n | c = \n}}\n{{eqn | r = \\sum_{i, j} \\paren {\\vec {r_j} - \\vec {r_0} } \\times a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} }\n | c = \n}}\n{{eqn | r = \\sum_{i, j} \\vec {r_j} \\times a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} } - 
\\paren {\\vec {r_0} \\times \\sum_{i, j} \\vec {F_{i j} } }\n | c = \n}}\n{{end-eqn}}\nBy hypothesis there is no external force.\nThus the second term disappears, and:\n{{begin-eqn}}\n{{eqn | l = \\vec {\\tau_{\\operatorname {total} } }\n | r = \\sum_{i, j} \\vec {r_j} \\times a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} }\n | c = \n}}\n{{eqn | r = \\sum_{i \\mathop > j} \\vec {r_j} \\times a_{i j} \\paren {\\vec {r_j} - \\vec {r_i} } + \\vec {r_i} \\times a_{j i} \\paren {\\vec {r_j} - \\vec {r_i} }\n | c = \n}}\n{{eqn | r = \\sum_{i \\mathop > j} \\vec {r_j} \\times \\vec {r_i} \\paren {a_{j i} - a_{i j} }\n | c = \n}}\n{{eqn | r = \\vec 0\n | c = \n}}\n{{end-eqn}}\nIn summary, in a system of masses in which there is no external force, the total torque on the system is equal to $0$.\nThis is because the pair of torque between two bodies must cancel out.\nSince the rate of change of angular momentum is proportional to the torque, the angular momentum of a system is conserved when no external force is applied.\n{{qed}}\nCategory:Classical Mechanics\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"16790","document_content":"\\section{Cayley's Formula}\nTags: Named Theorems, Graph Theory, Combinatorics\n\n\\begin{theorem}\nThe number of distinct labeled trees with $n$ nodes is $n^{n - 2}$.\n\\end{theorem}\n\n\\begin{proof}\nFollows directly from Bijection between Pr\u00fcfer Sequences and Labeled Trees.\nThis shows that there is a bijection between the set of labeled trees with $n$ nodes and the set of all Pr\u00fcfer sequences of the form:\n:$\\tuple {\\mathbf a_1, \\mathbf a_2, \\ldots, \\mathbf a_{n - 2} }$\nwhere each of the $\\mathbf a_i$'s is one of the integers $1, 2, \\ldots, n$, allowing for repetition.\nSince there are exactly $n$ possible values for each integer $\\mathbf a_i$, the total number of such sequences is $\\ds \\prod_{i \\mathop = 1}^{n - 2} n$.\nThe result follows from Equivalence of Mappings 
between Sets of Same Cardinality.\n{{qed}}\n{{Namedfor|Arthur Cayley|cat = Cayley}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"16899","document_content":"\\section{Cauchy-Goursat Theorem}\nTags: Complex Analysis\n\n\\begin{theorem}\nLet $D$ be a simply connected open subset of the complex plane $\\C$.\nLet $\\partial D$ denote the closed contour bounding $D$.\nLet $f: D \\to \\C$ be holomorphic everywhere in $D$.\nThen:\n:$\\ds \\oint_{\\partial D} \\map f z \\rd z = 0$\n\\end{theorem}\n\n\\begin{proof}\nBegin by rewriting the function $f$ and differential $\\rd z$ in terms of their real and complex parts:\n:$f = u + iv$\n:$\\d z = \\d x + i \\rd y$\nThen we have:\n:$\\ds \\oint_{\\partial D} \\map f z \\rd z = \\oint_{\\partial D} \\paren {u + iv} \\paren {\\d x + i \\rd y}$\nExpanding the result and again separating into real and complex parts yields two integrals of real variables:\n:$\\ds \\oint_{\\partial D} \\paren {u \\rd x - v \\rd y} + i \\oint_{\\partial D} \\paren {v \\rd x + u \\rd y}$\nWe next apply Green's Theorem to each integral term to convert the contour integrals to surface integrals over $D$:\n:$\\ds \\iint_D \\paren {-\\dfrac {\\partial v} {\\partial x} - \\dfrac {\\partial u} {\\partial y} } \\rd x \\rd y + \\iint_D \\paren {\\dfrac {\\partial u} {\\partial x} - \\dfrac {\\partial v} {\\partial y} } \\rd x \\rd y$\nBy the assumption that $f$ is holomorphic, it satisfies the Cauchy-Riemann Equations\n:$\\dfrac {\\partial v} {\\partial x} + \\dfrac {\\partial u} {\\partial y} = 0$\n:$\\dfrac {\\partial u} {\\partial x} - \\dfrac {\\partial v} {\\partial y} = 0$\nThe integrands are therefore zero and hence the integral is zero.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"16984","document_content":"\\section{Cantor Set has Zero Lebesgue Measure}\nTags: Cantor Set, Measure Theory, Lebesgue Measure, Cantor 
Space\n\n\\begin{theorem}\nLet $\\CC$ be the Cantor set.\nLet $\\lambda$ denote the Lebesgue measure on the Borel $\\sigma$-algebra $\\map \\BB \\R$ on $\\R$.\nThen $\\CC$ is $\\map \\BB \\R$-measurable, and $\\map \\lambda \\CC = 0$.\nThat is, $\\CC$ is a $\\lambda$-null set.\n\\end{theorem}\n\n\\begin{proof}\nConsider the definition of $\\CC$ as a limit of a decreasing sequence.\nIn the notation as introduced there, we see that each $S_n$ is a collection of disjoint closed intervals.\nFrom Closed Set Measurable in Borel Sigma-Algebra, these are measurable sets.\nFurthermore, each $S_n$ is finite.\nHence by Sigma-Algebra Closed under Union, it follows that $C_n := \\ds \\bigcup S_n$ is measurable as well.\nThen, as we have:\n:$\\CC = \\ds \\bigcap_{n \\mathop \\in \\N} C_n$\nit follows from Sigma-Algebra Closed under Countable Intersection that $\\CC$ is measurable.\nThe $C_n$ also form a decreasing sequence of sets with limit $\\CC$.\nThus, from Characterization of Measures: $(3')$, it follows that:\n:$\\map \\lambda \\CC = \\ds \\lim_{n \\mathop \\to \\infty} \\map \\lambda {C_n}$\nIt is not too hard to show that, for all $n \\in \\N$:\n:$\\map \\lambda {C_n} = \\paren {\\dfrac 2 3}^n$\n{{finish|yes, I know}}\nNow we have by Sequence of Powers of Number less than One that:\n:$\\ds \\lim_{n \\mathop \\to \\infty} \\paren {\\frac 2 3}^n = 0$\nand the result follows.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17066","document_content":"\\section{Brouwer's Fixed Point Theorem\/One-Dimensional Version}\nTags: Continuity, Brouwer's Fixed Point Theorem, Analysis, Fixed Point Theorems, Continuous Real Functions, Named Theorems, Continuous Functions, Topology\n\n\\begin{theorem}\nLet $f: \\closedint a b \\to \\closedint a b$ be a real function which is continuous on the closed interval $\\closedint a b$.\nThen:\n:$\\exists \\xi \\in \\closedint a b: \\map f \\xi = \\xi$\nThat is, a continuous real 
function from a closed real interval to itself fixes some point of that interval.\n\\end{theorem}\n\n\\begin{proof}\nAs the codomain of $f$ is $\\closedint a b$, it follows that the image of $f$ is a subset of $\\closedint a b$.\nThus $\\map f a \\ge a$ and $\\map f b \\le b$.\nLet us define the real function $g: \\closedint a b \\to \\R$ by $g \\left({x}\\right) = \\map f x - x$.\nThen by the Combined Sum Rule for Continuous Functions, $\\map g x$ is continuous on $\\closedint a b$.\nBut $\\map g a \\ge 0$ and $\\map g b \\le 0$.\nBy the Intermediate Value Theorem, $\\exists \\xi: \\map g \\xi = 0$.\nThus $\\map f \\xi = \\xi$.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17218","document_content":"\\section{Binomial Theorem\/Examples\/4th Power of Sum}\nTags: Fourth Powers, Algebra, Examples of Use of Binomial Theorem\n\n\\begin{theorem}\n:$\\paren {x + y}^4 = x^4 + 4 x^3 y + 6 x^2 y^2 + 4 x y^3 + y^4$\n\\end{theorem}\n\n\\begin{proof}\nFollows directly from the Binomial Theorem:\n:$\\ds \\forall n \\in \\Z_{\\ge 0}: \\paren {x + y}^n = \\sum_{k \\mathop = 0}^n \\binom n k x^{n - k} y^k$\nputting $n = 4$.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17301","document_content":"\\section{Bijection between Pr\u00fcfer Sequences and Labeled Trees}\nTags: Tree Theory, Trees, Graph Theory, Combinatorics\n\n\\begin{theorem}\nThere is a one-to-one correspondence between Pr\u00fcfer sequences and labeled trees.\nThat is, every labeled tree has a unique Pr\u00fcfer sequence that defines it, and every Pr\u00fcfer sequence defines just one labeled tree.\n\\end{theorem}\n\n\\begin{proof}\nLet $T$ be the set of all labeled trees of order $n$.\nLet $P$ be the set of all Pr\u00fcfer sequence of length $n-2$.\nLet $\\phi: T \\to P$ be the mapping that maps each tree to its Pr\u00fcfer sequence.\n* From Pr\u00fcfer Sequence from Labeled Tree, $\\phi$ 
is clearly well-defined, as every element of $T$ maps uniquely to an element of $P$.\n* However, from Labeled Tree from Pr\u00fcfer Sequence, $\\phi^{-1}: P \\to T$ is also clearly well-defined, as every element of $P$ maps to a unique element of $T$.\nHence the result.\n{{questionable|How is it immediate that the two constructions are mutually inverse?}}\n{{qed}}\nCategory:Tree Theory\nCategory:Combinatorics\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17471","document_content":"\\section{Area of Trapezoid}\nTags: Trapezoids, Areas of Quadrilaterals, Area Formulas, Quadrilaterals\n\n\\begin{theorem}\n:410px\nLet $ABCD$ be a trapezoid:\n:whose parallel sides are of lengths $a$ and $b$\nand\n:whose height is $h$.\nThen the area of $ABCD$ is given by:\n:$\\Box ABCD = \\dfrac {h \\paren {a + b} } 2$\n\\end{theorem}\n\n\\begin{proof}\n:600px\nExtend line $AB$ to $E$ by length $a$.\nExtend line $DC$ to $F$ by length $b$.\nThen $BECF$ is another trapezoid whose parallel sides are of lengths $a$ and $b$ and whose height is $h$.\nAlso, $AEFD$ is a parallelogram which comprises the two trapezoids $ABCD$ and $BECF$.\nSo $\\Box ABCD + \\Box BECF = \\Box AEFD$ and $\\Box ABCD = \\Box BECF$.\n$AEFD$ is of altitude $h$ with sides of length $a + b$.\nThus from Area of Parallelogram the area of $AEFD$ is given by:\n: $\\Box AEFD = h \\paren {a + b}$\nIt follows that $\\Box ABCD = \\dfrac {h \\paren {a + b} } 2$\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17901","document_content":"\\section{Number of Compositions}\nTags: Combinatorics\n\n\\begin{theorem}\nA $k$-composition of a positive integer $n$ is an ordered $k$-tuple: $c = \\tuple {c_1, c_2, \\ldots, c_k}$ such that $c_1 + c_2 + \\cdots + c_k = n$ and $c_i $ are strictly positive integers.\nThe number of $k$-composition of $n$ is $\\dbinom {n - 1} {k - 1}$ and the total number of compositions of $n$ is 
$2^{n - 1}$ (that is for $k = 1, 2, 3, \\ldots, n$).\n\\end{theorem}\n\n\\begin{proof}\nConsider the following array consisting of $n$ ones and $n - 1$ blanks:\n:$\\begin{bmatrix} 1 \\ \\_ \\ 1 \\ \\_ \\ \\cdots \\ \\_ \\ 1 \\ \\_ \\ 1 \\end{bmatrix}$\nIn each blank we can either put a comma or a plus sign.\nEach way of choosing $,$ or $+$ will give a composition of $n$ with the commas separating the individual $c_i$'s.\nIt follows easily that there are $2^{n-1}$ ways of doing this, since there are two choices for each of $n-1$ blanks.\nThe result follows from the Product Rule for Counting.\nSimilarly if we want specifically $k$ different $c_i$'s then we are left with choosing $k - 1$ out of $n - 1$ blanks to place the $k - 1$ commas.\nThe number of ways of doing so is $\\dbinom {n - 1} {k - 1}$ from the Binomial Theorem.\n{{qed}}\nCategory:Combinatorics\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"17913","document_content":"\\section{Number of Distinct Parenthesizations on Word}\nTags: Parenthesization, Catalan Numbers\n\n\\begin{theorem}\nLet $w_n$ denote an arbitrary word of $n$ elements.\nThe number of distinct parenthesizations of $w_n$ is the Catalan number $C_{n - 1}$:\n:$C_{n - 1} = \\dfrac 1 n \\dbinom {2 \\paren {n - 1} } {n - 1}$\n\\end{theorem}\n\n\\begin{proof}\nLet $w_n$ denote an arbitrary word of $n$ elements.\nLet $a_n$ denote the number of ways $W_n$ elements may be parenthesized.\nFirst note that we have:\n{{begin-eqn}}\n{{eqn | l = a_1\n | r = 1\n | c = \n}}\n{{eqn | l = a_2\n | r = 1\n | c = \n}}\n{{eqn | l = a_3\n | r = 2\n | c = that is, $b_1 \\paren {b_2 b_3}$ and $\\paren {b_1 b_2} b_3$\n}}\n{{end-eqn}}\nand from Parenthesization of Word of $4$ Elements:\n:$a_4 = 5$\nConsider a word $w_{n + 1}$ of $n + 1$ elements.\nThen $w_{n + 1}$ can be formed as any one of:\n:$w_1$ concatenated with $w_n$\n:$w_2$ concatenated with $w_{n - 1}$\n:$\\dotsc$ and so on until:\n:$w_n$ concatenated with 
$w_1$\nThus the $i$th row in the above sequence is the number of parenthesizations of $w_{n + 1}$ in which the two outermost parenthesizations contain $i$ and $n - i + 1$ terms respectively.\nWe have that:\n:there are $a_i$ parenthesizations of $w_i$\n:there are $a_{n - i + 1}$ parenthesizations of $w_{n - i + 1}$\nHence the total number of parenthesizations of $w_{n + 1}$ is the sum of all these parenthesizations for $1 \\le i \\le n$.\nThat is:\n:$(1): \\quad a_{n + 1} = a_1 a_n + a_2 a_{n - 1} + \\dotsb + a_n a_1$\nLet us start with the generating function:\n:$\\ds \\map {G_A} z = \\sum_{n \\mathop = 1}^\\infty a_n z^n$\nThen:\n{{begin-eqn}}\n{{eqn | l = \\map {G_A} z\n | r = z + \\sum_{n \\mathop = 2}^\\infty \\paren {a_1 a_n + a_2 a_{n - 1} + \\dotsb + a_n a_1} z^n\n | c = from $(1)$\n}}\n{{eqn | r = z + \\sum_{n \\mathop = 1}^\\infty a_n z^n \\sum_{n \\mathop = 1}^\\infty a_n z^n\n | c = \n}}\n{{eqn | r = z + \\paren {\\map {G_A} z}^2\n | c = \n}}\n{{end-eqn}}\nThus $\\map {G_A} z$ satisfies the quadratic equation:\n:$\\paren {\\map {G_A} z}^2 - \\map {G_A} z + z = 0$\nBy the Quadratic Formula, this gives:\n:$\\map {G_A} z = \\dfrac {1 \\pm \\sqrt {1 - 4 z} } 2$\nSince $\\map {G_A} 0 = 0$, we can eliminate the positive square root and arrive at:\n:$(2): \\quad \\map {G_A} z = \\dfrac 1 2 - \\dfrac {\\sqrt {1 - 4 z} } 2$\nExpanding $\\sqrt {1 - 4 z}$ using the Binomial Theorem:\n:$\\ds \\map {G_A} z = \\dfrac 1 2 - \\dfrac 1 2 \\sum_{n \\mathop = 0}^\\infty \\paren {-1}^n \\dbinom {\\frac 1 2} n 4^n z^n$\nwhere:\n:$\\dbinom {\\frac 1 2} 0 = 1$\nand:\n:$\\dbinom {\\frac 1 2} n = \\dfrac {\\frac 1 2 \\paren {\\frac 1 2 - 1} \\dotsm \\paren {\\frac 1 2 - n + 1} } {n!}$\nAs a result:\n:$\\ds (3): \\quad \\map {G_A} z = -\\dfrac 1 2 \\sum_{n \\mathop = 1}^\\infty \\paren {-1}^n \\dbinom {\\frac 1 2} n 4^n z^n$\nWe can expand $(3)$ as a Taylor series about $0$.\nAs such a series, when it exists, is unique, the coefficients must be 
$a_n$.\nHence:\n{{begin-eqn}}\n{{eqn | l = a_n\n | r = -\\dfrac 1 2 \\paren {-1}^n \\dbinom {\\frac 1 2} n 4^n\n | c = \n}}\n{{eqn | r = \\paren {-1}^{n - 1} \\dfrac 1 2 \\dfrac {\\frac 1 2 \\paren {\\frac 1 2 - 1} \\dotsm \\paren {\\frac 1 2 - n + 1} } {n!} 4^n\n | c = \n}}\n{{eqn | r = \\paren {-1}^n \\dfrac 1 2 \\dfrac {\\paren {-1} \\paren {1 - 2} \\dotsm \\paren {1 - 2 \\paren {n - 1 } } 2^n} {n!}\n | c = \n}}\n{{eqn | r = \\dfrac 1 2 \\dfrac {1 \\times 3 \\times \\dotsb \\times \\paren {2 n - 3} 2^n} {n!}\n | c = \n}}\n{{eqn | r = \\dfrac 1 2 \\dfrac {1 \\times 3 \\times \\dotsb \\times \\paren {2 n - 3} n! 2^n} {n! n!}\n | c = multiplying top and bottom by $n!$\n}}\n{{eqn | r = \\dfrac 1 2 \\dfrac {1 \\times 2 \\times 3 \\times 4 \\times \\dotsb \\times \\paren {2 n - 4} \\paren {2 n - 3} \\paren {2 n - 2} \\paren {2 n} } {\\paren {n!}^2}\n | c = \n}}\n{{eqn | r = \\dfrac 1 n \\dfrac {1 \\times 2 \\times 3 \\times 4 \\times \\dotsb \\times \\paren {2 n - 2} } {\\paren {\\paren {n - 1}!}^2}\n | c = \n}}\n{{eqn | r = \\frac 1 n \\binom {2 n - 2} {n - 1}\n | c = \n}}\n{{end-eqn}}\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"18400","document_content":"\\section{Orthogonal Projection onto Closed Linear Span}\nTags: Linear Transformations on Hilbert Spaces, Hilbert Spaces\n\n\\begin{theorem}\nLet $H$ be a Hilbert space with inner product $\\innerprod \\cdot \\cdot$ and inner product norm $\\norm \\cdot$. \nLet $E = \\set {e_1, \\ldots, e_n}$ be an orthonormal subset of $H$.\nLet $M = \\vee E$, where $\\vee E$ is the closed linear span of $E$. \nLet $P$ be the orthogonal projection onto $M$.\nThen:\n:$\\forall h \\in H: P h = \\ds \\sum_{k \\mathop = 1}^n \\innerprod h {e_k} e_k$\n\\end{theorem}\n\n\\begin{proof}\nLet $h \\in H$. 
\nLet: \n:$\\ds u = \\sum_{k \\mathop = 1}^n \\innerprod h {e_k} e_k$ \nWe have that:\n:$u \\in \\map \\span E$\nand from the definition of closed linear span:\n:$M = \\paren {\\map \\span E}^-$\nWe therefore have, by the definition of closure: \n:$u \\in M$ \nLet $v = h - u$ \nWe want to show that $v \\in M^\\bot$. \nFrom Intersection of Orthocomplements is Orthocomplement of Closed Linear Span, it suffices to show that: \n:$v \\in E^\\bot$\nNote that for each $l$ we have: \n:$\\innerprod v {e_l} = \\innerprod h {e_l} - \\innerprod u {e_l}$\nsince the inner product is linear in its first argument. \nWe have: \n{{begin-eqn}}\n{{eqn\t| l = \\innerprod u {e_l} \n\t| r = \\innerprod {\\sum_{k \\mathop = 1}^n \\innerprod h {e_k} e_k} {e_l}\n}}\n{{eqn\t| r = \\sum_{k \\mathop = 1}^n \\innerprod {\\innerprod h {e_k} e_k} {e_l}\n\t| c = linearity of inner product in first argument\n}}\n{{eqn\t| r = \\sum_{k \\mathop = 1}^n \\innerprod h {e_k} \\innerprod {e_k} {e_l}\n\t| c = linearity of inner product in first argument\n}}\n{{eqn\t| r = \\innerprod h {e_l} \\innerprod {e_l} {e_l}\n\t| c = {{Defof|Orthonormal Subset}}\n}}\n{{eqn\t| r = \\innerprod h {e_l} \\norm {e_l}^2\n\t| c = {{Defof|Inner Product Norm}}\n}}\n{{eqn\t| r = \\innerprod h {e_l}\n\t| c = since $\\norm {e_l} = 1$\n}}\n{{end-eqn}}\nso:\n:$\\innerprod v {e_l} = 0$ \nThat is: \n:$v \\in E^\\bot$\nso, by Intersection of Orthocomplements is Orthocomplement of Closed Linear Span, we have: \n:$v \\in M^\\bot$\nWe can therefore decompose each $h \\in H$ as: \n:$h = u + v$\nwith $u \\in M$ and $v \\in M^\\bot$. 
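The decomposition $h = u + v$ just established can be illustrated numerically. In the sketch below (an illustration only; the orthonormal pair and the vector $h$ in $\R^3$ are made up for the example), $u = \sum_k \innerprod h {e_k} e_k$ is formed and $v = h - u$ is checked to be orthogonal to each $e_k$:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# an orthonormal pair in R^3, chosen for illustration
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 3 / 5, 4 / 5)
assert abs(dot(e1, e1) - 1) < 1e-12 and abs(dot(e2, e2) - 1) < 1e-12
assert abs(dot(e1, e2)) < 1e-12

h = (2.0, 1.0, -1.0)

# u = sum_k <h, e_k> e_k: the claimed orthogonal projection onto span{e1, e2}
coeffs = [dot(h, e) for e in (e1, e2)]
u = tuple(sum(c * e[i] for c, e in zip(coeffs, (e1, e2))) for i in range(3))

# v = h - u lies in the orthocomplement: <v, e_k> = 0 for each k
v = tuple(hi - ui for hi, ui in zip(h, u))
assert abs(dot(v, e1)) < 1e-12 and abs(dot(v, e2)) < 1e-12
```

This mirrors the proof exactly: $u$ lands in the span, and orthogonality of $v$ to every $e_k$ is what places it in $M^\bot$.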
\nSo we have: \n{{begin-eqn}}\n{{eqn\t| l = P h \n\t| r = \\map P {u + v}\n}}\n{{eqn\t| r = \\map P u + \\map P v\n\t| c = Orthogonal Projection on Closed Linear Subspace of Hilbert Space is Linear Transformation\n}}\n{{eqn\t| r = u\n\t| c = Kernel of Orthogonal Projection on Closed Linear Subspace of Hilbert Space, Fixed Points of Orthogonal Projection on Closed Linear Subspace of Hilbert Space\n}}\n{{eqn\t| r = \\sum_{k \\mathop = 1}^n \\innerprod h {e_k} e_k\n}}\n{{end-eqn}}\nfor each $h \\in H$. \n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"18695","document_content":"\\section{Pigeonhole Principle}\nTags: Pigeonhole Principle, Named Theorems, Combinatorics\n\n\\begin{theorem}\nLet $S$ be a finite set whose cardinality is $n$.\nLet $S_1, S_2, \\ldots, S_k$ be a partition of $S$ into $k$ subsets.\nThen:\n:at least one subset $S_i$ of $S$ contains at least $\\ceiling {\\dfrac n k}$ elements\nwhere $\\ceiling {\\, \\cdot \\,}$ denotes the ceiling function.\n\\end{theorem}\n\n\\begin{proof}\n{{AimForCont}} no subset $S_i$ of $S$ has as many as $\\ceiling {\\dfrac n k}$ elements.\nThen the maximum number of elements of any $S_i$ would be $\\ceiling {\\dfrac n k} - 1$.\nSo the total number of elements of $S$ would be no more than $k \\paren {\\ceiling {\\dfrac n k} - 1} = k \\ceiling {\\dfrac n k} - k$.\nThere are two cases:\n:$n$ is divisible by $k$\n:$n$ is not divisible by $k$.\nSuppose $k \\divides n$.\nThen $\\ceiling {\\dfrac n k} = \\dfrac n k$ is an integer and:\n:$k \\ceiling {\\dfrac n k} - k = n - k$\nThus:\n:$\\ds \\card S = \\sum_{i \\mathop = 1}^k \\card {S_i} \\le n - k < n$\nThis contradicts the fact that $\\card S = n$.\nHence our assumption that no subset $S_i$ of $S$ has as many as $\\ceiling {\\dfrac n k}$ elements was false.\nNext, suppose that $k \\nmid n$.\nThen:\n:$\\card S \\le k \\ceiling {\\dfrac n k} - k < \\dfrac {k \\paren {n + k} } k - k = n$\nand again this contradicts the 
fact that $\\card S = n$.\nIn the same way, our assumption that no subset $S_i$ of $S$ has as many as $\\ceiling {\\dfrac n k}$ elements was false.\nHence, by Proof by Contradiction, there has to be at least $\\ceiling {\\dfrac n k}$ elements in at least one $S_i \\subseteq S$.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"18841","document_content":"\\section{Positive Real has Real Square Root}\nTags: Real Numbers\n\n\\begin{theorem}\nLet $x \\in \\R_{>0}$ be a (strictly) positive real number.\nThen:\n:$\\exists y \\in \\R: x = y^2$\n\\end{theorem}\n\n\\begin{proof}\nLet $f: \\R \\to \\R$ be defined as:\n:$\\forall x \\in \\R: \\map f x = x^2$\nWe have that $f$ is the pointwise product of the identity mapping with itself.\nBy Product Rule for Continuous Real Functions and Identity Mapping is Continuous, $f$ is continuous.\nBy Power Function is Unbounded Above:\n:$\\exists q \\in \\R: \\map f q > x$\nThen:\n:$0^2 = 0 \\le x$\nBy the Intermediate Value Theorem:\n:$\\exists y \\in \\R: 0 < y < q: y^2 = x$\n{{qed}}\nCategory:Real Numbers\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"19708","document_content":"\\section{Product Rule for Counting}\nTags: Product Rule for Counting, Counting Arguments, Combinatorics, combinatorics\n\n\\begin{theorem}\nLet it be possible to choose an element $\\alpha$ from a given set $S$ in $m$ different ways.\nLet it be possible to choose an element $\\beta$ from a given set $T$ in $n$ different ways.\nThen the ordered pair $\\tuple {\\alpha, \\beta}$ can be chosen from the cartesian product $S \\times T$ in $m n$ different ways.\n\\end{theorem}\n\n\\begin{proof}\n{{handwaving}}\nThe validity of this rule follows directly from the definition of multiplication of integers.\nThe product $a b$ (for $a, b \\in \\N_{>0}$) is the number of sequences $\\sequence {A, B}$, where $A$ can be any one of $a$ items and $B$ can be any 
one of $b$ items.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20117","document_content":"\\section{Ramsey's Theorem}\nTags: Ramsey Theory, Named Theorems, Combinatorics\n\n\\begin{theorem}\nIn any coloring of the edges of a sufficiently large complete graph, one will find monochromatic complete subgraphs.\nFor 2 colors, Ramsey's theorem states that for any pair of positive integers $\\tuple {r, s}$, there exists a least positive integer $\\map R {r, s}$ such that for any complete graph on $\\map R {r, s}$ vertices, whose edges are colored red or blue, there exists either a complete subgraph on $r$ vertices which is entirely red, or a complete subgraph on $s$ vertices which is entirely blue.\nMore generally, for any given number of colors $c$, and any given integers $n_1, \\ldots, n_c$, there is a number $\\map R {n_1, \\ldots, n_c}$ such that:\n:if the edges of a complete graph of order $\\map R {n_1, \\ldots, n_c}$ are colored with $c$ different colours, then for some $i$ between $1$ and $c$, it must contain a complete subgraph of order $n_i$ whose edges are all color $i$.\nThis number $\\map R {n_1, \\ldots, n_c}$ is called the Ramsey number for $n_1, \\ldots, n_c$.\nThe special case above has $c = 2$ (and $n_1 = r$ and $n_2 = s$).\nHere $\\map R {r, s}$ signifies an integer that depends on both $r$ and $s$. 
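For the 2-color case of Ramsey's theorem, the smallest nontrivial value, $\map R {3, 3} = 6$, is small enough to confirm by exhaustion. The Python sketch below is illustrative only and not drawn from the source; it verifies that some red/blue coloring of $K_5$ has no monochromatic triangle, while every coloring of $K_6$ has one:

```python
from itertools import combinations, product

def has_mono_triangle(n, colour):
    # colour maps each edge (i, j) with i < j to 0 or 1
    return any(
        colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product((0, 1), repeat=len(edges))
    )

assert not every_colouring_has_mono_triangle(5)  # K_5: a triangle-free 2-colouring exists
assert every_colouring_has_mono_triangle(6)      # K_6: every 2-colouring forced, so R(3,3) <= 6
```

The $K_5$ witness establishes $\map R {3, 3} > 5$ and the $K_6$ exhaustion establishes $\map R {3, 3} \le 6$; brute force is of course hopeless for larger Ramsey numbers.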
It is understood to represent the smallest integer for which the theorem holds.\n\\end{theorem}\n\n\\begin{proof}\nFirst we prove the theorem for the 2-color case, by induction on $r + s$.\nIt is clear from the definition that\n:$\\forall n \\in \\N: \\map R {n, 1} = \\map R {1, n} = 1$\nbecause the complete graph on one node has no edges.\nThis is the base case.\nWe prove that $R \\left({r, s}\\right)$ exists by finding an explicit bound for it.\nBy the inductive hypothesis, $\\map R {r - 1, s}$ and $\\map R {r, s - 1}$ exist.\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20341","document_content":"\\section{Real Symmetric Positive Definite Matrix has Positive Eigenvalues}\nTags: Symmetric Matrices, Positive Definite Matrices\n\n\\begin{theorem}\nLet $A$ be a symmetric positive definite matrix over $\\mathbb R$.\nLet $\\lambda$ be an eigenvalue of $A$. \nThen $\\lambda$ is real with $\\lambda > 0$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\lambda$ be an eigenvalue of $A$ and let $\\mathbf v$ be a corresponding eigenvector.\nFrom Real Symmetric Matrix has Real Eigenvalues, $\\lambda$ is real.\nFrom the definition of a positive definite matrix, we have: \n:$\\mathbf v^\\intercal A \\mathbf v > 0$\nThat is: \n{{begin-eqn}}\n{{eqn\t| l = 0\n\t| o = <\n\t| r = \\mathbf v^\\intercal A \\mathbf v\n}}\n{{eqn\t| r = \\mathbf v^\\intercal \\paren {\\lambda \\mathbf v}\n\t| c = {{Defof|Eigenvector of Real Square Matrix}}\n}}\n{{eqn\t| r = \\lambda \\paren {\\mathbf v^\\intercal \\mathbf v}\n}}\n{{eqn\t| r = \\lambda \\paren {\\mathbf v \\cdot \\mathbf v}\n\t| c = {{Defof|Dot Product}}\n}}\n{{eqn\t| r = \\lambda \\norm {\\mathbf v}^2\n\t| c = Dot Product of Vector with Itself\n}}\n{{end-eqn}}\nFrom Euclidean Space is Normed Space, we have: \n:$\\norm {\\mathbf v}^2 > 0$\nso:\n:$\\lambda > 0$\n{{qed}}\nCategory:Symmetric Matrices\nCategory:Positive Definite 
Matrices\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20418","document_content":"\\section{Reduction of Explicit ODE to First Order System}\nTags: Ordinary Differential Equations\n\n\\begin{theorem}\nLet $\\map {x^{\\paren n} } t = \\map F {t, x, x', \\ldots, x^{\\paren {n - 1} } }$, $\\map x {t_0} = x_0$ be an explicit ODE with $x \\in \\R^m$.\nLet there exist $I \\subseteq \\R$ such that there exists a unique particular solution:\n:$x: I \\to \\R^m$\nto this ODE.\nThen there exists a system of first order ODEs:\n:$y' = \\map {\\tilde F} {t, y}$\nwith $y = \\tuple {y_1, \\ldots, y_{m n} }^T \\in \\R^{m n}$ such that:\n:$\\tuple {\\map {y_1} t, \\ldots, \\map {y_m} t} = \\map x t$\nfor all $t \\in I$ and $\\map y {t_0} = x_0$.\n\\end{theorem}\n\n\\begin{proof}\nDefine the mappings:\n:$z_1, \\ldots, z_n: I \\to \\R^m$\nby:\n:$z_j = x^{\\paren {j - 1} }$, $j = 1, \\ldots, n$\nThen:\n{{begin-eqn}}\n{{eqn | l = z_1'\n | r = z_2\n}}\n{{eqn | o = \\vdots\n}}\n{{eqn | l = z_{n - 1}'\n | r = z_n\n}}\n{{eqn | l = z_n'\n | r = \\map F {t, z_1, \\ldots, z_n}\n}}\n{{end-eqn}}\nThis is a system of $m n$ first order ODEs.\nBy construction:\n:$\\map {z_1} t = \\map x t$\nfor all $t \\in I$ and $\\map {z_1} {t_0} = x_0$.\nTherefore we can take:\n:$y = \\begin {pmatrix} z_1 \\\\ \\vdots \\\\ z_{n - 1} \\\\ z_n \\end {pmatrix}, \\quad \\tilde F: \\begin {pmatrix} z_1 \\\\ \\vdots \\\\ z_n \\end{pmatrix} \\mapsto \\begin {pmatrix} z_2 \\\\ \\vdots \\\\ z_n \\\\ \\map F {t, z_1, \\ldots, z_n} \\end {pmatrix}$\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20530","document_content":"\\section{Removable Singularity at Infinity implies Constant Function}\nTags: Complex Analysis\n\n\\begin{theorem}\nLet $f : \\C \\to \\C$ be an entire function.\nLet $f$ have a removable singularity at $\\infty$. 
\nThen $f$ is constant.\n\\end{theorem}\n\n\\begin{proof}\nWe are given that $f$ has a removable singularity at $\\infty$.\nBy Riemann Removable Singularities Theorem, $f$ must be bounded in a neighborhood of $\\infty$.\nThat is, there exists a real number $M > 0$ such that:\n:$\\forall z \\in \\set {z : \\cmod z > r}: \\cmod {\\map f z} \\le M$\nfor some real $r \\ge 0$.\nHowever, by Continuous Function on Compact Space is Bounded, $f$ is also bounded on $\\set {z: \\cmod z \\le r}$. \nAs $\\set {z: \\cmod z > r} \\cup \\set {z: \\cmod z \\le r} = \\C$, $f$ is therefore bounded on $\\C$.\nTherefore by Liouville's Theorem, $f$ is constant.\n{{qed}}\nCategory:Complex Analysis\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20759","document_content":"\\section{Rolle's Theorem}\nTags: Continuity, Differential Calculus, Continuous Real Functions, Rolle's Theorem, Named Theorems, Continuous Functions, Differentiable Real Functions, Differential Real Functions\n\n\\begin{theorem}\nLet $f$ be a real function which is:\n:continuous on the closed interval $\\closedint a b$\nand:\n:differentiable on the open interval $\\openint a b$.\nLet $\\map f a = \\map f b$.\nThen:\n:$\\exists \\xi \\in \\openint a b: \\map {f'} \\xi = 0$\n\\end{theorem}\n\n\\begin{proof}\nWe have that $f$ is continuous on $\\closedint a b$.\nIt follows from Continuous Image of Closed Interval is Closed Interval that $f$ attains:\n:a maximum $M$ at some $\\xi_1 \\in \\closedint a b$\nand:\n:a minimum $m$ at some $\\xi_2 \\in \\closedint a b$.\nSuppose $\\xi_1$ and $\\xi_2$ are both end points of $\\closedint a b$.\nBecause $\\map f a = \\map f b$ it follows that $m = M$ and so $f$ is constant on $\\closedint a b$.\nThen, by Derivative of Constant, $\\map {f'} \\xi = 0$ for all $\\xi \\in \\openint a b$.\nSuppose $\\xi_1$ is not an end point of $\\closedint a b$.\nThen $\\xi_1 \\in \\openint a b$ and $f$ has a local maximum at $\\xi_1$.\nHence the 
result follows from Derivative at Maximum or Minimum.\nSimilarly, suppose $\\xi_2$ is not an end point of $\\closedint a b$.\nThen $\\xi_2 \\in \\openint a b$ and $f$ has a local minimum at $\\xi_2$.\nHence the result follows from Derivative at Maximum or Minimum.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20761","document_content":"\\section{Root of Equation e^x (x - 1) = e^-x (x + 1)}\nTags: Analysis\n\n\\begin{theorem}\nThe equation:\n:$e^x \\paren {x - 1} = e^{-x} \\paren {x + 1}$\nhas a root:\n:$x = 1 \\cdotp 19966 \\, 78640 \\, 25773 \\, 4 \\ldots$\n\\end{theorem}\n\n\\begin{proof}\nLet $\\map f x = e^x \\paren {x - 1} - e^{-x} \\paren {x + 1}$.\nThen if $\\map f c = 0$, $c$ is a root of $e^x \\paren {x - 1} = e^{-x} \\paren {x + 1}$.\nNotice that:\n:$\\map f 1 = e^1 \\paren {1 - 1} - e^{-1} \\paren {1 + 1} = -\\dfrac 2 e < 0$\n:$\\map f 2 = e^2 \\paren {2 - 1} - e^{-2} \\paren {2 + 1} = e^2 - \\dfrac 3 {e^2} > 0$\nBy Intermediate Value Theorem:\n:$\\exists c \\in \\openint 1 2: \\map f c = 0$.\nThis shows that our equation has a root between $1$ and $2$.\nThe exact value of this root can be found using any numerical method, e.g. 
Newton's Method.\n{{ProofWanted|Analysis needed of the Kepler Equation}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20819","document_content":"\\section{Sample Matrix Independence Test}\nTags: Linear Second Order ODEs, Linear Algebra\n\n\\begin{theorem}\nLet $V$ be a vector space of real or complex-valued functions on a set $J$.\nLet $f_1, \\ldots, f_n$ be functions in $V$.\nLet '''samples''' $x_1, \\ldots, x_n$ from $J$ be given.\nDefine the '''sample matrix''' :\n:$S = \\begin{bmatrix}\n\\map {f_1} {x_1} & \\cdots & \\map {f_n} {x_1} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\map {f_1} {x_n} & \\cdots & \\map {f_n} {x_n} \\\\\n\\end{bmatrix}$\nLet $S$ be invertible.\nThen $f_1, \\ldots, f_n$ are linearly independent in $V$.\n\\end{theorem}\n\n\\begin{proof}\nThe definition of linear independence is applied.\nAssume a linear combination of the functions $f_1, \\ldots, f_n$ is the zero function:\n{{begin-eqn}}\n{{eqn | n = 1\n | l = \\sum_{i \\mathop = 1}^n c_i \\map {f_i} x \n | r = 0\n | c = for all $x$\n}}\n{{end-eqn}}\nLet $\\vec c$ have components $c_1, \\ldots, c_n$.\nFor $i = 1, \\ldots, n$ replace $x = x_i$ in $(1)$.\nThere are $n$ linear homogeneous algebraic equations, written as:\n:$S \\vec c = \\vec 0$\nBecause $S$ is invertible:\n:$\\vec c = \\vec 0$\nThe functions are linearly independent.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"20859","document_content":"\\section{Schauder Basis is Linearly Independent}\nTags: Linear Independence, Schauder Bases\n\n\\begin{theorem}\nLet $\\Bbb F \\in \\set {\\R, \\C}$. \nLet $\\struct {X, \\norm \\cdot}$ be a normed vector space over $\\Bbb F$.\nLet $\\set {e_n : n \\in \\N}$ be a Schauder basis for $X$. 
\nThen $\\set {e_n : n \\in \\N}$ is linearly independent.\n\\end{theorem}\n\n\\begin{proof}\nSuppose that: \n:$\\ds \\sum_{k \\mathop = 1}^n \\alpha_{i_k} e_{i_k} = 0$\nfor some $n \\in \\N$, $i_1, \\ldots, i_n \\in \\N$ and $\\alpha_{i_1}, \\ldots, \\alpha_{i_n} \\in \\Bbb F$.\nDefine a sequence $\\sequence {\\alpha_j}_{j \\mathop \\in \\N}$ in $\\Bbb F$ by: \n:$\\ds \\alpha_j = \\begin{cases}\\alpha_{i_k} & \\text { if there exists } k \\text { such that } j = i_k \\\\ 0 & \\text { otherwise}\\end{cases}$\nThen, we have: \n:$\\ds \\sum_{j \\mathop = 1}^\\infty \\alpha_j e_j = \\sum_{k \\mathop = 1}^n \\alpha_{i_k} e_{i_k} = 0$\nFrom the definition of Schauder basis, we then have: \n:$\\alpha_j = 0$ for each $j \\in \\N$\nand in particular: \n:$\\alpha_{i_k} = 0$ for each $k$. \nSince the coefficients $\\alpha_{i_1}, \\ldots, \\alpha_{i_n}$ and $n \\in \\N$ were arbitrary, we have that:\n:$\\set {e_n : n \\in \\N}$ is linearly independent.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"21599","document_content":"\\section{Solution of Second Order Differential Equation with Missing Dependent Variable}\nTags: Second Order ODEs\n\n\\begin{theorem}\nLet $\\map f {x, y', y''} = 0$ be a second order ordinary differential equation in which the dependent variable $y$ is not explicitly present.\nThen $f$ can be reduced to a first order ordinary differential equation, whose solution can be determined.\n\\end{theorem}\n\n\\begin{proof}\nConsider the second order ordinary differential equation:\n:$(1): \\quad \\map f {x, y', y''} = 0$\nLet a new dependent variable $p$ be introduced:\n:$y' = p$\n:$y'' = \\dfrac {\\d p} {\\d x}$\nThen $(1)$ can be transformed into:\n:$(2): \\quad \\map f {x, p, \\dfrac {\\d p} {\\d x} } = 0$\nwhich is a first order ODE.\nIf $(2)$ has a solution which can readily be found, it will be expressible in the form:\n:$(3): \\quad \\map g {x, p} = 0$\nwhich can then be expressed in the 
form:\n:$\\map g {x, \\dfrac {\\d y} {\\d x} } = 0$\nwhich is likewise subject to the techniques of solution of a first order ODE.\nHence such a second order ODE is reduced to the problem of solving two first order ODEs in succession.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"21600","document_content":"\\section{Solution of Second Order Differential Equation with Missing Independent Variable}\nTags: Second Order ODEs\n\n\\begin{theorem}\nLet $\\map g {y, \\dfrac {\\d y} {\\d x}, \\dfrac {\\d^2 y} {\\d x^2} } = 0$ be a second order ordinary differential equation in which the independent variable $x$ is not explicitly present.\nThen $g$ can be reduced to a first order ordinary differential equation, whose solution can be determined.\n\\end{theorem}\n\n\\begin{proof}\nConsider the second order ordinary differential equation:\n:$(1): \\quad \\map g {y, \\dfrac {\\d y} {\\d x}, \\dfrac {\\d^2 y} {\\d x^2} } = 0$\nLet a new dependent variable $p$ be introduced:\n:$y' = p$\nHence:\n:$y'' = \\dfrac {\\d p} {\\d x} = \\dfrac {\\d p} {\\d y} \\dfrac {\\d y} {\\d x} = p \\dfrac {\\d p} {\\d y}$\nThen $(1)$ can be transformed into:\n:$(2): \\quad \\map g {y, p, p \\dfrac {\\d p} {\\d y} } = 0$\nwhich is a first order ODE.\nIf $(2)$ has a solution which can readily be found, it will be expressible in the form:\n:$(3): \\quad \\map g {y, p} = 0$\nwhich can then be expressed in the form:\n:$\\map g {y, \\dfrac {\\d y} {\\d x} } = 0$\nwhich is likewise subject to the techniques of solution of a first order ODE.\nHence such a second order ODE is reduced to the problem of solving two first order ODEs in succession.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"21758","document_content":"\\section{Squeeze Theorem\/Functions}\nTags: Named Theorems, Limits of Real Functions, Limits of Functions, Squeeze Theorem\n\n\\begin{theorem}\nLet $a$ be a point on an open 
real interval $I$.\nLet $f$, $g$ and $h$ be real functions defined at all points of $I$ except for possibly at point $a$.\nSuppose that:\n:$\\forall x \\ne a \\in I: \\map g x \\le \\map f x \\le \\map h x$\n:$\\ds \\lim_{x \\mathop \\to a} \\map g x = \\lim_{x \\mathop \\to a} \\map h x = L$\nThen:\n:$\\ds \\lim_{x \\mathop \\to a} \\ \\map f x = L$\n\\end{theorem}\n\n\\begin{proof}\nWe start by proving the special case where $\\forall x: g \\left({x}\\right) = 0$ and $L=0$, in which case $\\displaystyle \\lim_{x \\to a} \\ h \\left({x}\\right) = 0$.\nLet $\\epsilon > 0$ be a positive real number.\nThen by the definition of the limit of a function:\n: $\\exists \\delta > 0: 0 < \\left|{x - a}\\right| < \\delta \\implies \\left|{h \\left({x}\\right)}\\right| < \\epsilon$\nNow:\n: $\\forall x \\ne a: 0 = g \\left({x}\\right) \\le f \\left({x}\\right) \\le h \\left({x}\\right)$\nso that:\n:$\\left|{f \\left({x}\\right)}\\right| \\le \\left|{h \\left({x}\\right)}\\right|$\nThus:\n: $0 < |x-a| < \\delta \\implies \\left|{f \\left({x}\\right)}\\right| \\le \\left|{h \\left({x}\\right)}\\right| < \\epsilon$\nBy the transitive property of $\\le$, this proves that:\n: $\\displaystyle \\lim_{x \\to a} \\ f \\left({x}\\right) = 0 = L$\nWe now move on to the general case, with $g \\left({x}\\right)$ and $L$ arbitrary.\nFor $x \\ne a$, we have:\n: $g \\left({x}\\right) \\le f \\left({x}\\right) \\le h \\left({x}\\right)$\nBy subtracting $g \\left({x}\\right)$ from all expressions, we have:\n: $0 \\le f \\left({x}\\right) - g \\left({x}\\right) \\le h \\left({x}\\right) - g \\left({x}\\right)$\nSince as $x \\to a, h \\left({x}\\right) \\to L$ and $g \\left({x}\\right) \\to L$, we have:\n: $h \\left({x}\\right) - g \\left({x}\\right) \\to L - L = 0$\nFrom the special case, we now have:\n: $f \\left({x}\\right) - g \\left({x}\\right) \\to 0$\nWe conclude that:\n: $f \\left({x}\\right) = \\left({f \\left({x}\\right) - g \\left({x}\\right)}\\right) + g \\left({x}\\right) \\to 0 + L 
= L$\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"21762","document_content":"\\section{Squeeze Theorem\/Sequences\/Real Numbers}\nTags: Named Theorems, Limits of Sequences, Squeeze Theorem, Real Analysis\n\n\\begin{theorem}\nLet $\\sequence {x_n}$, $\\sequence {y_n}$ and $\\sequence {z_n}$ be sequences in $\\R$.\nLet $\\sequence {y_n}$ and $\\sequence {z_n}$ both be convergent to the following limit:\n:$\\ds \\lim_{n \\mathop \\to \\infty} y_n = l, \\lim_{n \\mathop \\to \\infty} z_n = l$\nSuppose that:\n:$\\forall n \\in \\N: y_n \\le x_n \\le z_n$\nThen:\n:$x_n \\to l$ as $n \\to \\infty$\nthat is:\n:$\\ds \\lim_{n \\mathop \\to \\infty} x_n = l$\nThus, if $\\sequence {x_n}$ is always between two other sequences that both converge to the same limit, $\\sequence {x_n} $ is said to be '''sandwiched''' or '''squeezed''' between those two sequences and itself must therefore converge to that same limit.\n\\end{theorem}\n\n\\begin{proof}\nFrom Negative of Absolute Value: Corollary 1:\n:$\\size {x - l} < \\epsilon \\iff l - \\epsilon < x < l + \\epsilon$\nLet $\\epsilon > 0$.\nWe need to prove that:\n:$\\exists N: \\forall n > N: \\size {x_n - l} < \\epsilon$\nAs $\\ds \\lim_{n \\mathop \\to \\infty} y_n = l$ we know that:\n:$\\exists N_1: \\forall n > N_1: \\size {y_n - l} < \\epsilon$\nAs $\\ds \\lim_{n \\mathop \\to \\infty} z_n = l$ we know that:\n:$\\exists N_2: \\forall n > N_2: \\size {z_n - l} < \\epsilon$\nLet $N = \\max \\set {N_1, N_2}$.\nThen if $n > N$, it follows that $n > N_1$ and $n > N_2$.\nSo:\n:$\\forall n > N: l - \\epsilon < y_n < l + \\epsilon$\n:$\\forall n > N: l - \\epsilon < z_n < l + \\epsilon$\nBut:\n:$\\forall n \\in \\N: y_n \\le x_n \\le z_n$\nSo:\n:$\\forall n > N: l - \\epsilon < y_n \\le x_n \\le z_n < l + \\epsilon$\nand so:\n:$\\forall n > N: l - \\epsilon < x_n < l + \\epsilon$\nSo:\n:$\\forall n > N: \\size {x_n - l} < \\epsilon$\nHence the 
result.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"22806","document_content":"\\section{Triangles with Proportional Sides are Similar}\nTags: Triangles\n\n\\begin{theorem}\nLet two triangles have corresponding sides which are proportional.\nThen their corresponding angles are equal.\nThus, by definition, such triangles are similar.\n{{:Euclid:Proposition\/VI\/5}}\n\\end{theorem}\n\n\\begin{proof}\nLet $\\triangle ABC, \\triangle DEF$ be triangles whose sides are proportional, so that:\n:$ AB : BC = DE : EF$\n:$ BC : CA = EF : FD$\n:$ BA : AC = ED : DF$\nWe need to show that:\n: $\\angle ABC = \\angle DEF$\n: $\\angle BCA = \\angle EFD$\n: $\\angle BAC = \\angle EDF$\nOn the straight line $EF$, and at the points $E, F$ on it, construct $\\angle FEG = \\angle ABC$ and $\\angle EFG = \\angle ACB$.\nFrom Sum of Angles of Triangle Equals Two Right Angles, the remaining angle at $A$ equals the remaining angle at $G$.\nTherefore $\\triangle ABC$ is equiangular with $\\triangle GEF$.\nFrom Equiangular Triangles are Similar, the sides about the equal angles are proportional, and those are corresponding sides which subtend the equal angles.\nSo:\n: $AB : BC = GE : EF$\nBut by hypothesis:\n: $AB : BC = DE : EF$\nSo from Equality of Ratios is Transitive:\n: $DE : EF = GE : EF$\nSo each of $DE, GE$ has the same ratio to $EF$.\nSo from Magnitudes with Same Ratios are Equal:\n: $DE = GE$\nFor the same reason:\n: $DF = GF$\nSo we have that $DE = EG$, $EF$ is common and $DF = FG$.\nSo from Triangle Side-Side-Side Equality:\n: $\\triangle DEF = \\triangle GEF$\nThat is:\n: $\\angle DEF = \\angle GEF, \\angle DFE = \\angle GFE, \\angle EDF = \\angle EGF$\nAs $\\angle GEF = \\angle ABC$ it follows that:\n: $\\angle ABC = \\angle DEF$\nFor the same reason $\\angle ACB = \\angle DFE$ and $\\angle BAC = \\angle EDF$.\nHence the result.\n{{Qed}}\n{{Euclid 
Note|5|VI}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"23033","document_content":"\\section{Union of Non-Disjoint Convex Sets is Convex Set}\nTags: Set Union, Convex Sets (Order Theory), Convex Sets, Union\n\n\\begin{theorem}\nLet $\\struct {S, \\preccurlyeq}$ be an ordered set.\nLet $\\CC$ be a set of convex sets of $S$ such that their intersection is non-empty:\n:$\\ds \\bigcap \\CC \\ne \\O$\nThen the union $\\ds \\bigcup \\CC$ is also convex.\n\\end{theorem}\n\n\\begin{proof}\nLet $x, y, z \\in S$ be arbitrary elements of $S$ such that $x \\prec y \\prec z$.\nLet $x, z \\in \\ds \\bigcup \\CC$.\nFirst let $x, z \\in C$ where $C \\in \\CC$.\nThen as $C$ is convex, $y \\in C$.\nHence, by definition of union, $y \\in \\ds \\bigcup \\CC$.\nNow let $x \\in C_1, z \\in C_2$ where $C_1, C_2 \\in \\CC$.\nWe have that $\\ds \\bigcap \\CC \\ne \\O$.\nThus $C_1 \\cap C_2 \\ne \\O$.\nThen $\\exists a \\in C_1 \\cap C_2: x < a < z$.\nHence one of the following cases holds:\n:$(1): \\quad x < y < a < z$, whence $y \\in C_1$, by convexity of $C_1$\n:$(2): \\quad x < a < y < z$, whence $y \\in C_2$, by convexity of $C_2$\n:$(3): \\quad y = a$, whence $y \\in C_1$ and $y \\in C_2$, by definition of $a$.\nThus in all cases $y \\in \\ds \\bigcup \\CC$.\nThus $\\ds \\bigcup \\CC$ is convex by definition.\n{{qed}}\n\\end{proof}\n\n","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"} {"document_id":"23699","document_content":"\\begin{definition}[Definition:Acyclic Graph]\nAn '''acyclic graph''' is a graph or digraph with no cycles.\nAn acyclic connected undirected graph is a tree.\nAn acyclic disconnected undirected graph is a forest.\nCategory:Definitions\/Graph Theory\n\\end{definition}","parent_id":null,"metadata":null,"task_split":"theorem_retrieval"}
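The reduction of an explicit ODE to a first order system, described in one of the results above, is exactly the form numerical solvers consume. A minimal sketch (illustrative only; fixed-step Euler on the assumed example $x'' = -x$, $x \paren 0 = 1$, $x' \paren 0 = 0$, whose exact solution is $\cos t$):

```python
from math import cos

# x'' = F(t, x, x'); substituting z1 = x, z2 = x' gives the first order system
#   z1' = z2
#   z2' = F(t, z1, z2)
def F(t, z1, z2):
    return -z1

def euler(t_end, steps):
    z1, z2 = 1.0, 0.0  # x(0) = 1, x'(0) = 0
    h = t_end / steps
    t = 0.0
    for _ in range(steps):
        # simultaneous update of the whole state vector (z1, z2)
        z1, z2 = z1 + h * z2, z2 + h * F(t, z1, z2)
        t += h
    return z1

# Euler is only first-order accurate, so the tolerance here is deliberately loose
assert abs(euler(1.0, 100_000) - cos(1.0)) < 1e-3
```

The substitution $z_j = x^{\paren {j - 1} }$ in the theorem is precisely this trick generalized to order $n$ and to vector-valued $x$.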