In`de*ter"mi*nate (?), a. [L. indeterminatus.]
Not determinate; not certain or fixed; indefinite; not precise; as, an indeterminate number of years.
Indeterminate analysis Math., that branch of analysis which has for its object the solution of indeterminate problems. -- Indeterminate coefficients Math., coefficients arbitrarily assumed for
convenience of calculation, or to facilitate some artifice of analysis. Their values are subsequently determined. -- Indeterminate equation Math., an equation in which the unknown quantities admit of
an infinite number of values, or sets of values. A group of equations is indeterminate when it contains more unknown quantities than there are equations. -- Indeterminate inflorescence Bot., a mode
of inflorescence in which the flowers all arise from axillary buds, the terminal bud going on to grow and sometimes continuing the stem indefinitely; -- called also acropetal, botryose, centripetal,
and indefinite inflorescence. Gray. -- Indeterminate problem Math., a problem which admits of an infinite number of solutions, or one in which there are fewer imposed conditions than there are unknown
or required results. -- Indeterminate quantity Math., a quantity which has no fixed value, but which may be varied in accordance with any proposed condition. -- Indeterminate series Math., a series
whose terms proceed by the powers of an indeterminate quantity, sometimes also with indeterminate exponents, or indeterminate coefficients.
-- In`de*ter"mi*nate*ly adv. -- In`de*ter"mi*nate*ness, n.
© Webster 1913.
Sequence Cauchy in L2?
July 15th 2009, 12:03 PM
Sequence Cauchy in L2?
Is the sequence {sin(nx)} Cauchy in L2 over [0,pi]? Why or why not?
Under the L2 norm, this would amount to the integral of |sin(nx)-sin(mx)|^2 from 0 to pi being bounded by small epsilon for all n,m > N, right?
I can't figure this guy out. Thanks for any help in advance.
July 15th 2009, 12:21 PM
$\{ \sin (nx) \}$ is an orthogonal set in $L^2([0, \pi ])$ (prove it), and so remember that if $u$ and $v$ are orthogonal vectors, $\Vert u + v \Vert ^2 = \Vert u-v \Vert ^2 = \Vert u \Vert ^2 + \Vert v \Vert ^2$ (because $L^2([0, \pi ])$ is a Hilbert space).
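Following that hint (a quick sketch, worth checking): for integers $n \neq m$, orthogonality gives $\int_0^{\pi} \sin(nx)\sin(mx)\,dx = 0$, while $\int_0^{\pi} \sin^2(nx)\,dx = \frac{\pi}{2}$. Hence
$\Vert \sin(nx) - \sin(mx) \Vert^2 = \Vert \sin(nx) \Vert^2 + \Vert \sin(mx) \Vert^2 = \frac{\pi}{2} + \frac{\pi}{2} = \pi$
for every pair $n \neq m$, so the distance between any two distinct terms stays at $\sqrt{\pi}$ and the sequence cannot be Cauchy in $L^2([0,\pi])$.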
White Plains, NY Algebra 1 Tutor
Find a White Plains, NY Algebra 1 Tutor
...I do research on machine learning in music & audio processing applications. In my spare time, I enjoy hiking, traveling, learning languages, producing/recording music, and cooking. I speak
English and Russian fluently, and have basic/intermediate Spanish.
10 Subjects: including algebra 1, physics, calculus, geometry
...With my resources and past experiences, I guarantee a significant increase in your scores. Having received tutoring myself for an SAT II Subject Test, I can personally attest to the potential
gains to be made by working with a tutor, since my score improved. SAS is a great software suite used for analyzing large amounts of data.
26 Subjects: including algebra 1, calculus, geometry, writing
...Through my years of teaching, I've developed the art of teaching to students with various learning styles, by using a variety of techniques relevant to their needs leading to long term mastery
and higher achievement. I am a firm believer that all students can learn and achieve as long as the righ...
22 Subjects: including algebra 1, calculus, geometry, GED
...My students over the past two years have a median grade of 4 on the AP exam, with more than 50% earning a 5 (highest possible score), more than 75% earning a 4 or above, and more than 90%
earning a 3 or above. I have taught my school's Precalculus course for seven years. Each school has a diffe...
9 Subjects: including algebra 1, calculus, geometry, algebra 2
...I've also worked as a translator and interpreter for different companies throughout Connecticut, with a proven ability to translate written documents from Portuguese and Spanish to English.
I've worked several years as a Portuguese Interpreter for the State of Connecticut Judicial Branch, curren...
6 Subjects: including algebra 1, Spanish, ESL/ESOL, cooking
Most Cited Linear Algebra and its Applications Articles
The most cited articles published since 2009, extracted from
Volume 432, Issue 9, April 2010, Pages 2257-2272
Cvetković, D. | Simić, S.K.
A spectral graph theory is a theory in which graphs are studied by means of eigenvalues of a matrix M which is in a prescribed way defined for any graph. This theory is called M-theory. We outline a
spectral theory of graphs based on the signless Laplacians Q and compare it with other spectral theories, in particular to those based on the adjacency matrix A and the Laplacian L. As demonstrated
in the first part, the Q-theory can be constructed in part using various connections to other theories: equivalency with A-theory and L-theory for regular graphs, common features with L-theory for
bipartite graphs, general analogies with A-theory and analogies with A-theory via line graphs and subdivision graphs. In this part, we introduce notions of enriched and restricted spectral theories
and present results on integral graphs, enumeration of spanning trees, characterizations by eigenvalues, cospectral graphs and graph angles. © 2009 Elsevier Inc. All rights reserved.
Volume 431, Issues 5-7, August 2009, Pages 701-715
Zhu, J. | Tian, Y.-P. | Kuang, J.
In this paper, a general consensus protocol is considered for multi-agent systems with double-integrator dynamics. The advantage of this protocol is that different consensus dynamics including
linear, periodic and positive exponential dynamics can be realized by choosing different gains. Necessary and sufficient conditions for solving the consensus problem with the considered general
protocol are obtained, namely, all the gains realizing the consensus can be described. The design method of the consensus protocol is constructively given. Moreover, a periodic consensus protocol is
obtained as a special case and it is revealed that the maximum convergence speed can be achieved by choosing suitable gains. © 2009 Elsevier Inc. All rights reserved.
Volume 432, Issue 6, March 2010, Pages 1531-1552
Dehghan, M. | Hajarian, M.
In the present paper, by extending the idea of conjugate gradient (CG) method, we construct an iterative method to solve the general coupled matrix equations ∑_{j=1}^{p} Aij Xj Bij = Mi, i = 1, 2, ..., p (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over generalized bisymmetric matrix group (X1, X2, ..., Xp). By using the iterative
method, the solvability of the general coupled matrix equations over generalized bisymmetric matrix group can be determined in the absence of roundoff errors. When the general coupled matrix
equations are consistent over generalized bisymmetric matrices, a generalized bisymmetric solution group can be obtained within finite iteration steps in the absence of roundoff errors. The least
Frobenius norm generalized bisymmetric solution group of the general coupled matrix equations can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal
approximation generalized bisymmetric solution group to a given matrix group (X̂1, X̂2, ..., X̂p) in Frobenius norm can be obtained by finding the least Frobenius norm
generalized bisymmetric solution group of new general coupled matrix equations. The numerical results indicate that the iterative method works quite well in practice. © 2009 Elsevier Inc. All rights
Volume 430, Issues 5-6, March 2009, Pages 1626-1640
Wang, Q.-W. | Li, C.-K.
We in this paper first establish a new expression of the general solution to the consistent system of linear quaternion matrix equations A1 X1 = C1, A2 X2 = C2, A3 X1 B1 + A4 X2 B2 = C3, which was
investigated recently [Q.W. Wang, H.X. Chang, C.Y. Lin, P-(skew)symmetric common solutions to a pair of quaternion matrix equations, Appl. Math. Comput. 195 (2008) 721-732], then derive the maximal
and minimal ranks and the least-norm of the general solution to the system mentioned above. Some previous known results can be viewed as special cases of the results of this paper. © 2008 Elsevier
Inc. All rights reserved.
Volume 431, Issue 12, December 2009, Pages 2291-2303
Wang, Q.-W. | van der Woude, J.W. | Chang, H.-X.
Let H be the real quaternion algebra and Hn × m denote the set of all n × m matrices over H. Let P ∈ Hn × n and Q ∈ Hm × m be involutions, i.e., P2 = I, Q2 = I. A matrix A ∈ Hn × m is said to be (P,
Q)-symmetric if A = PAQ. This paper studies the system of linear real quaternion matrix equations(A1 X1 = C1; X1 B1 = C2) (A2 X2 = C3; X2 B2 = C4) A3 X1 B3 + A4 X2 B4 = Cc .We present some necessary
and sufficient conditions for the existence of a solution to this system and give an expression of the general solution to the system when the solvability conditions are satisfied. As applications,
we discuss the necessary and sufficient conditions for the systemAa X = Ca, XBb = Cb, Ac XBc = Ccto have a (P, Q)-symmetric solution. We also show an expression of the (P, Q)-symmetric solution to
the system when the solvability conditions are met. Moreover, we provide an algorithm and a numerical example to illustrate our results. The findings of this paper extend some known results in the
literature. © 2009 Elsevier Inc. All rights reserved.
Volume 430, Issues 8-9, April 2009, Pages 2290-2300
Stevanović, D. | Ilić, A.
Let G be a graph of order n and let P(G, λ) = ∑_{k=0}^{n} (-1)^k ck λ^{n-k} be the characteristic polynomial of its Laplacian matrix. Generalizing an approach of Mohar on graph transformations, we show that among all connected unicyclic graphs of order n, the kth coefficient ck is largest when the graph is a cycle Cn and smallest when the graph is the star Sn with an additional edge between two of its
pendent vertices. A relation to the recently established Laplacian-like energy of a graph is discussed. © 2008 Elsevier Inc. All rights reserved.
Volume 433, Issue 1, July 2010, Pages 263-296
Tian, Y.
The inertia of a Hermitian matrix is defined to be a triplet composed of the numbers of the positive, negative and zero eigenvalues of the matrix counted with multiplicities, respectively. In this
paper, we show some basic formulas for inertias of 2 × 2 block Hermitian matrices. From these formulas, we derive various equalities and inequalities for inertias of sums, parallel sums, products of
Hermitian matrices, submatrices in block Hermitian matrices, differences of outer inverses of Hermitian matrices. As applications, we derive the extremal inertias of the linear matrix expression A -
BXB* with respect to a variable Hermitian matrix X. In addition, we give some results on the extremal inertias of Hermitian solutions to the matrix equation AX = B, as well as the extremal inertias
of a partial block Hermitian matrix. © 2010 Elsevier Inc. All rights reserved.
Volume 431, Issue 8, September 2009, Pages 1223-1233
Gutman, I. | Kiani, D. | Mirzakhah, M. | Zhou, B.
The Laplacian-energy like invariant LEL (G) and the incidence energy IE (G) of a graph are recently proposed quantities, equal, respectively, to the sum of the square roots of the Laplacian
eigenvalues, and the sum of the singular values of the incidence matrix of the graph G. However, IE (G) is closely related with the eigenvalues of the Laplacian and signless Laplacian matrices of G.
For bipartite graphs, IE = LEL. We now point out some further relations for IE and LEL: IE can be expressed in terms of eigenvalues of the line graph, whereas LEL in terms of singular values of the
incidence matrix of a directed graph. Several lower and upper bounds for IE are obtained, including those that pertain to the line graph of G. In addition, Nordhaus-Gaddum-type results for IE are
established. © 2009 Elsevier Inc. All rights reserved.
Volume 430, Issue 1, January 2009, Pages 106-113
Indulal, G.
The D-eigenvalues {μ1, μ2, ..., μp} of a graph G are the eigenvalues of its distance matrix D and form the D-spectrum of G denoted by specD (G). The greatest D-eigenvalue is called the
D-spectral radius of G denoted by μ1. The D-energy ED (G) of the graph G is the sum of the absolute values of its D-eigenvalues. In this paper we obtain some lower bounds for μ1 and characterize
those graphs for which these bounds are best possible. We also obtain an upperbound for ED (G) and determine those maximal D-energy graphs. © 2008 Elsevier Inc. All rights reserved.
Volume 432, Issue 1, January 2010, Pages 70-88
Oseledets, I. | Tyrtyshnikov, E.
As is well known, a rank-r matrix can be recovered from a cross of r linearly independent columns and rows, and an arbitrary matrix can be interpolated on the cross entries. Other entries by this
cross or pseudo-skeleton approximation are given with errors depending on the closeness of the matrix to a rank-r matrix and as well on the choice of cross. In this paper we extend this construction
to d-dimensional arrays (tensors) and suggest a new interpolation formula in which a d-dimensional array is interpolated on the entries of some TT-cross (tensor train-cross). The total number of
entries and the complexity of our interpolation algorithm depend on d linearly, so the approach does not suffer from the curse of dimensionality. We also propose a TT-cross method for computation of
d-dimensional integrals and apply it to some examples with dimensionality in the range from d = 100 up to d = 4000 and the relative accuracy of order 10⁻¹⁰. In all constructions we capitalize on the
new tensor decomposition in the form of tensor trains (TT-decomposition). © 2009 Elsevier Inc. All rights reserved.
Volume 432, Issue 11, June 2010, Pages 3018-3029
Das, K.Ch.
Let G = (V, E) be a simple graph. Denote by D (G) the diagonal matrix of its vertex degrees and by A (G) its adjacency matrix. Then the Laplacian matrix of G is L (G) = D (G) - A (G) and the signless
Laplacian matrix of G is Q (G) = D (G) + A (G). In this paper we obtain a lower bound on the second largest signless Laplacian eigenvalue and an upper bound on the smallest signless Laplacian
eigenvalue of G. In [5], Cvetković et al. have given a series of 30 conjectures on Laplacian eigenvalues and signless Laplacian eigenvalues of G (see also [1]). Here we prove five conjectures. © 2010
Elsevier Inc. All rights reserved.
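As a quick numerical illustration of the matrices defined above (a sketch, not from the paper; it assumes NumPy and uses the 4-cycle as an arbitrary test graph):

import numpy as np

# adjacency matrix A(G) of the 4-cycle C4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))      # diagonal matrix of vertex degrees D(G)

L = D - A                       # Laplacian matrix L(G)
Q = D + A                       # signless Laplacian matrix Q(G)

print(np.linalg.eigvalsh(L))    # Laplacian eigenvalues
print(np.linalg.eigvalsh(Q))    # signless Laplacian eigenvalues
                                # (the two spectra coincide here because C4 is bipartite)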
Volume 430, Issue 4, February 2009, Pages 1282-1291
Deng, C.Y.
In this note, the Drazin inverses of sum and difference of idempotents on a Hilbert space are established under some conditions. © 2008 Elsevier Inc. All rights reserved.
Volume 431, Issues 1-2, July 2009, Pages 211-227
Stegeman, A.
In the Candecomp/Parafac (CP) model, a three-way array X̲ is written as the sum of R outer vector product arrays and a residual array. The former comprise the columns of
the component matrices A, B and C. For fixed residuals, (A, B, C) is unique up to trivial ambiguities, if 2 R + 2 is less than or equal to the sum of the k-ranks of A, B and C. This classical result
was shown by Kruskal in 1977. In this paper, we consider the case where one of A, B, C has full column rank, and show that in this case Kruskal's uniqueness condition implies a recently obtained
uniqueness condition. Moreover, we obtain Kruskal-type uniqueness conditions that are weaker than Kruskal's condition itself. Also, for (A, B, C) with rank(A) = R - 1 and C full column rank, we
obtain easy-to-check necessary and sufficient uniqueness conditions. We extend our results to the Indscal decomposition in which the array X̲ has symmetric slices and A =
B is imposed. We consider the real-valued CP and Indscal decompositions, but our results are also valid for their complex-valued counterparts. © 2009 Elsevier Inc. All rights reserved.
Volume 431, Issues 5-7, August 2009, Pages 808-817
Zheng, B. | Bai, Z.-Z. | Yang, X.
Bai et al. recently proposed an efficient parameterized Uzawa method for solving the nonsingular saddle point problems [Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation
methods for augmented linear systems, Numer. Math. 102 (2005) 1-38]. In this paper, we further prove the semi-convergence of this method when it is applied to solve the singular saddle point problems
under suitable restrictions on the involved iteration parameters. The optimal iteration parameters and the corresponding optimal semi-convergence factor for the parameterized Uzawa method are
determined. In addition, numerical experiments are used to show the feasibility and effectiveness of the parameterized Uzawa method for solving singular saddle point problems. © 2009 Elsevier Inc.
All rights reserved.
Volume 434, Issue 10, May 2011, Pages 2109-2139
Tian, Y.
We give in this paper a group of closed-form formulas for the maximal and minimal ranks and inertias of the linear Hermitian matrix function A-BX-(BX)* with respect to a variable matrix X. As
applications, we derive the extremal ranks and inertias of the matrices X±X*, where X is a solution to the matrix equation AXB=C, and then give necessary and sufficient conditions for the matrix
equation AXB=C to have Hermitian, definite and Re-definite solutions. In addition, we give closed-form formulas for the extremal ranks and inertias of the difference X1-X2, where X1 and X2 are
Hermitian solutions of two matrix equations A1X1A1=C1 and A2X2A2=C2, and then use the formulas to characterize relations between Hermitian solutions of the two equations. © 2010 Elsevier Inc. All
rights reserved.
Volume 430, Issues 11-12, June 2009, Pages 2997-3007
Comon, P. | ten Berge, J.M.F. | De Lathauwer, L. | Castaing, J.
The concept of tensor rank was introduced in the 20s. In the 70s, when methods of Component Analysis on arrays with more than two indices became popular, tensor rank became a much studied topic. The
generic rank may be seen as an upper bound to the number of factors that are needed to construct a random tensor. We explain in this paper how to obtain numerically in the complex field the generic
rank of tensors of arbitrary dimensions, based on Terracini's lemma, and compare it with the algebraic results already known in the real or complex fields. In particular, we examine the cases of
symmetric tensors, tensors with symmetric matrix slices, complex tensors enjoying Hermitian symmetries, or merely tensors with free entries. © 2009 Elsevier Inc. All rights reserved.
Volume 432, Issue 6, March 2010, Pages 1412-1459
Böttcher, A. | Spitkovsky, I.M.
This paper is a survey of the basics of the theory of two projections. It contains in particular the theorem by Halmos on two orthogonal projections and Roch, Silbermann, Gohberg, and Krupnik's
theorem on two idempotents in Banach algebras. These two theorems, which deliver the desired results usually very quickly and comfortably, are missing or wrongly cited in many recent publications on
the topic. The paper is intended as a gentle guide to the field. The basic theorems are precisely stated, some of them are accompanied by full proofs, others not, but precise references are given in
each case, and many examples illustrate how to work with the theorems. © 2009 Elsevier Inc. All rights reserved.
Volume 432, Issue 7, March 2010, Pages 1825-1835
Adiga, C. | Balakrishnan, R. | So, W.
We are interested in the energy of the skew-adjacency matrix of a directed graph D, which is simply called the skew energy of D in this paper. Properties of the skew energy of D are studied. In
particular, a sharp upper bound for the skew energy of D is derived in terms of the order of D and the maximum degree of its underlying undirected graph. An infinite family of digraphs attaining the
maximum skew energy is constructed. Moreover, the skew energy of a directed tree is independent of its orientation, and interestingly it is equal to the energy of the underlying undirected tree. Skew
energies of directed cycles under different orientations are also computed. Some open problems are presented. © 2009 Elsevier Inc. All rights reserved.
Volume 433, Issue 4, October 2010, Pages 699-717
Simson, D.
Linear algebra technique in the study of linear representations of finite posets is developed in the paper. A concept of a quadratic wandering on a class of posets I is introduced and finite posets I
are studied by means of the four integral bilinear forms b̂_I, b_I, b̄_I, b_I^• : Z^I × Z^I → Z (1.1), the associated Coxeter transformations, and the Coxeter polynomials (in connection
with bilinear forms of Dynkin diagrams, extended Dynkin diagrams and irreducible root systems are also studied). Bilinear equivalences between some of the forms are established and equivalences with
the bilinear forms of Dynkin diagrams and extended Dynkin diagrams are discussed. A homological interpretation of the bilinear forms (1.1) is given and Z-bilinear equivalences between them are
discussed. By applying well-known results of Bongartz, Loupias, and Zavadskij-Shkabara, we give several characterisations of posets I, with the Euler form q̄_I(x) = b̄_I(x, x) weakly positive (resp. with the reduced Euler form q_I^•(x) = b_I^•(x, x) weakly positive), and posets I, with the Tits form q̂_I(x) = b̄_I(x, x) weakly positive. © 2010 Elsevier Inc. All
rights reserved.
Volume 432, Issue 9, April 2010, Pages 2181-2213
Brualdi, R.A.
In this primarily expository paper we survey classical and some more recent results on the spectra of digraphs, equivalently, the spectra of (0, 1)-matrices, with emphasis on the spectral radius. ©
2009 Elsevier Inc. All rights reserved.
Volume 430, Issue 1, January 2009, Pages 547-552
Ahmadi, O. | Alon, N. | Blake, I.F. | Shparlinski, I.E.
It is shown that only a fraction of 2- Ω (n) of the graphs on n vertices have an integral spectrum. Although there are several explicit constructions of such graphs, no upper bound for their number
has been known. Graphs of this type play an important role in quantum networks supporting the so-called perfect state transfer. © 2008 Elsevier Inc. All rights reserved.
Volume 431, Issue 10, October 2009, Pages 1910-1922
Deng, C. | Wei, Y.
In this paper we give formulae for the generalized Drazin inverse Md of an anti-triangular matrix M under some conditions. Moreover, some particular cases of these results are also considered. © 2009
Elsevier Inc. All rights reserved.
Volume 431, Issue 12, December 2009, Pages 2359-2372
Liu, Y. | Tian, Y. | Takane, Y.
Given a complex matrix equation AXA* = B, where B* = ± B, we present explicit formulas for the maximal and minimal ranks of Hermitian (skew-Hermitian) solutions X to the equation as well as the
maximal and minimal ranks of the real matrices X0 and X1 in a Hermitian (skew-Hermitian) solution X = X0 + iX1. As applications, we give the maximal and minimal ranks of the real matrices C and D in
a Hermitian (skew-Hermitian) g-inverse (A + iB)- = C + iD of a Hermitian (skew-Hermitian) matrix A + iB. © 2009 Elsevier Inc. All rights reserved.
Volume 431, Issues 5-7, August 2009, Pages 843-854
Qi, X. | Hou, J.
For a scalar ξ, a notion of (generalized) ξ-Lie derivations is introduced which coincides with the notion of (generalized) Lie derivations if ξ = 1. Some characterizations of additive (generalized)
ξ-Lie derivations on the triangular algebras and the standard operator subalgebras of Banach space nest algebras are given. It is shown, under some suitable assumption, that an additive map L is an
additive (generalized) Lie derivation if and only if it is the sum of an additive (generalized) derivation and an additive map from the algebra into its center vanishing all commutators; is an
additive (generalized) ξ-Lie derivation with ξ ≠ 1 if and only if it is an additive (generalized) derivation satisfying L (ξ A) = ξ L (A) for all A. © 2009 Elsevier Inc. All rights reserved.
Volume 431, Issues 1-2, July 2009, Pages 162-178
Wang, J. | Huang, Q. | Belardo, F. | Li Marzi, E.M.
Let AG and DG be respectively the adjacency matrix and the degree matrix of a graph G. The signless Laplacian matrix of G is defined as QG = DG + AG. The Q-spectrum of G is the set of the eigenvalues
together with their multiplicities of QG. The Q-index of G is the maximum eigenvalue of QG. The possibilities for developing a spectral theory of graphs based on the signless Laplacian matrices were
discussed by Cvetković et al. [D. Cvetković, P. Rowlinson, S.K. Simić, Signless Laplacians of finite graphs, Linear Algebra Appl. 423 (2007) 155-171]. In the latter paper the authors determine the
graphs whose Q-index is in the interval [0, 4]. In this paper, we investigate some properties of Q-spectra of graphs, especially for the limit points of the Q-index. By using these results, we
characterize respectively the structures of graphs whose Q-index lies in the intervals (4, 2 + sqrt(5)], (2 + sqrt(5), ε + 2] and (ε + 2, 4.5], where ε = (1/3)((54 - 6 sqrt(33))^(1/3) + (54 + 6 sqrt(33))^(1/3)) ≈ 2.382975767. © 2009 Elsevier Inc. All rights reserved.
Methods - Operations Research Models and Methods
Dynamic programming (DP) has quite a different model form than the other types of mathematical programming. Instead of an objective function and constraints, dynamic programming models consist of a
collection of equations that describe a sequential decision process.
The process begins in some initial state, the first decision moves it to a second state, and then continues through alternating decisions and states until a final state is reached. For a given
problem, the model must provide:
• mathematical vectors that describe states and decisions
• initial and final state definitions
• transition functions that map the current state and decision to the next state in the sequence
• the objective function that evaluates the sequence of decisions
Many problems are naturally described as a sequential decision process and are ready candidates for a DP solution. Others that seem more naturally stated as Integer Programming models can be adapted
to the DP format. DP has an advantage over other formulations in that it does not require linearity.
One difficulty with DP is the curse of dimensionality. DP solves for the optimal solution from every feasible state. If the number of feasible states is too large, a very long time might be required
to solve a problem.
This DP add-in provides a general structure with which most problems appropriate for DP can be modeled and solved.
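To make those four model elements concrete, here is a minimal backward-recursion sketch in Python (the three-stage network, the node names, and the costs below are invented for illustration, and this is not the add-in's own interface):

# cost[stage][state][decision] = (cost of taking that decision, resulting next state)
cost = {
    0: {'A': {'to_B': (2, 'B'), 'to_C': (4, 'C')}},
    1: {'B': {'to_D': (7, 'D'), 'to_E': (3, 'E')},
        'C': {'to_D': (1, 'D'), 'to_E': (6, 'E')}},
    2: {'D': {'to_F': (5, 'F')},
        'E': {'to_F': (4, 'F')}},
}

def solve(n_stages=3, final_state='F'):
    # value[state] = minimal cost to reach the final state from this state
    value = {final_state: 0.0}
    policy = {}
    for stage in reversed(range(n_stages)):
        for state, decisions in cost[stage].items():
            # evaluate each feasible decision through the transition function
            best_decision, (c, nxt) = min(
                decisions.items(), key=lambda kv: kv[1][0] + value[kv[1][1]])
            value[state] = c + value[nxt]
            policy[(stage, state)] = best_decision
    return value, policy

value, policy = solve()
print(value['A'])   # objective value of the optimal decision sequence from the initial state
print(policy)       # best decision for every feasible (stage, state) pair

Note that the value table is built for every feasible state, which is exactly the curse of dimensionality mentioned above: each state is cheap to process, but the number of states can explode.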
Time of death problem using differential equation
September 26th 2012, 10:29 AM
Time of death problem using differential equation
The problem states: Give an interval that estimates for the person's time of death. Ts= 21.1 degrees C room temperature, Normal temperature of the body Ta= is an interval of 36.6 to 37.2 degrees
C. Temperature of the body when found at midnight is Ti= 34.8 degrees C. The final temperature half hour later of the body Tf= 34.3 degrees C.
So using Newton's cooling equation:
where the temperature of the corpse is:
K=-(1/2)ln((Tf - Ts)/(Ti - Ts))
Time of death is:
D=-(1/k)ln((Ta - Ts)/(Ti - Ts))
So I'm confused on where to integrate. I wanted to integrate for the time of death To with respect to time, but there is not a time variable in that equation, so I'm stumped. Thanks to all
who can help.
September 26th 2012, 12:15 PM
Re: Time of death problem using differential equation
So the first order ode is given by the model
$\frac{dT}{dt} \propto (T_s-T)$ where $T_s$ is the room temperature.
This gives that $\frac{dT}{dt}=-k(T-21.1)$.
If we separate this equation and integrate, we get that
$\frac{dT}{T-21.1}=-kdt \implies \ln|T-21.1|=-kt+C \iff T(t)=Ae^{-kt}+21.1$
We are told the temperature at midnight $t_m$ and 30 minutes later $t_m+30$
Using these two data point we get
$34.8=Ae^{-kt_m}+21.1 \iff Ae^{-kt_m}=13.7$
$34.3=Ae^{-kt_m-30k}+21.1 \iff e^{-30k}=\frac{13.2}{Ae^{-kt_m}}$
Putting the first equation into the 2nd gives
$e^{-30k}=\frac{132}{137} \iff k=-\frac{1}{30}\ln \left( \frac{132}{137}\right)$
Putting this back into the original equation gives
$T(t)=Ae^{\frac{t}{30}\ln \left( \frac{132}{137}\right)}+21.1 =A \left( \frac{132}{137}\right)^{\frac{t}{30}}+21.1$
Solving this for time gives
$t=\frac{30\ln\left( \frac{T-21.1}{A}\right)}{\ln \left( \frac{132}{137}\right)}$
Since we know the temperature of the body at midnight we can plug that into this equation to get
$t=\frac{30\ln\left( \frac{13.7}{A}\right)}{\ln \left( \frac{132}{137}\right)}$
Now if you plug in the two initial temperatures $A$ it will give you the number of minutes before midnight. Just use the two different values of $A$ given.
September 27th 2012, 05:52 AM
Re: Time of death problem using differential equation
Thank you TheEmptySet for your help! You made understanding this problem a lot easier! I have one question though. What do you mean by the initial temperatures A? Is it the initial temperature
(Ti) the body was found and final temperature (Tf) or is it the average body temperature (Ta) and the final (Tf)? Sorry I'm sure I'm way over thinking this! If anyone else has the answer please
September 27th 2012, 07:47 AM
Re: Time of death problem using differential equation
Notice that $T(0)=Ae^{0}+21.1=A+21.1$.
This should be the temperature at the beginning. You are given a range of values for $T(0)$.
$36.6 \le T(0) \le 37.2 \implies 36.6 \le A+21.1 \le 37.2 \iff 15.5 \le A \le 16.1$
If you plug these values into the equation above it will give you the set of $t$ values.
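Plugging in the two endpoints (quick arithmetic, worth double-checking): for $A=15.5$, $t=\frac{30\ln(13.7/15.5)}{\ln(132/137)}\approx 100$ minutes, and for $A=16.1$, $t\approx 130$ minutes. So the estimated time of death is roughly 100 to 130 minutes before midnight, i.e. somewhere between about 9:50 PM and 10:20 PM.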
September 27th 2012, 08:00 AM
Re: Time of death problem using differential equation
Ok that makes sense. I confused myself there for a minute. Thank you again for your help!
Try this Logic Games Question from Manhattan LSAT
Go ahead. Give this Logic Games Practice Question from Manhattan LSAT a try: Pitch Meetings.
A screenwriter has pitch meetings with six producers – F, G, H, I, J, and K – over the course of a day. He will meet with each producer once, and one at a time. The following conditions apply:
• The screenwriter will meet with K before G if he meets with F before J.
• The screenwriter will meet with G before H only if he meets with I before J.
• The screenwriter will meet with F before I if, and only if, he meets with F after J.
• The screenwriter cannot meet with G last.
1. Which of the following could be the order of meetings?
(A) H, J, G, F, I, K
(B) I, H, F, J, K, G
(C) G, H, J, F, I, K
(D) H, G, J, I, K, F
(E) I, G, F, K, J, H
2. Each of the following could be true except:
(A) J is first.
(B) J is last.
(C) I is first.
(D) I is last.
(E) G is first.
3. If the screenwriter meets with I second, it must be true that…
(A) K is first.
(B) G is third.
(C) He meets with either F or G third.
(D) He meets with either J or H last.
(E) He meets with G before he meets with H.
4. If the screenwriter meets with K first and H last, how many different ways can the meetings be arranged?
(A) 1
(B) 2
(C) 4
(D) 6
(E) 8
5. If the screenwriter meets with K fifth, it must be false that…
(A) G is before F.
(B) G is before H.
(C) G is before J.
(D) H is before J.
(E) H is before F.
6. Which of the following pairs of assignments would completely determine the order of meetings?
(A) J is first, and K is last.
(B) J is first, and H is second.
(C) F is third, and H is fourth.
(D) G is second, H is third.
(E) H is fifth, and J is last.
Click here for some excellent explanations/solutions to this game by Manhattan LSAT.
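If you want to check your answers by brute force, here is a short Python sketch that enumerates all 720 orders and keeps the ones satisfying the four rules as I read them (the encoding of the rules is my own interpretation, not Manhattan LSAT's):

from itertools import permutations

def valid(order):
    pos = {producer: i for i, producer in enumerate(order)}
    if pos['F'] < pos['J'] and not pos['K'] < pos['G']:
        return False                        # K before G if F before J
    if pos['G'] < pos['H'] and not pos['I'] < pos['J']:
        return False                        # G before H only if I before J
    if (pos['F'] < pos['I']) != (pos['F'] > pos['J']):
        return False                        # F before I if, and only if, F after J
    if order[-1] == 'G':
        return False                        # G cannot be last
    return True

schedules = [o for o in permutations('FGHIJK') if valid(o)]
print(len(schedules))          # how many orders survive the rules
for s in schedules[:5]:        # print a few, to test answer choices against
    print(''.join(s))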
Benthem, Modal correspondence theory
, 1997
"... We identify the computational complexity of the satisfiability problem for FO², the fragment of first-order logic consisting of all relational first-order sentences with at most two distinct
variables. Although this fragment was shown to be decidable a long time ago, the computational complexity ..."
Cited by 48 (1 self)
Add to MetaCart
We identify the computational complexity of the satisfiability problem for FO², the fragment of first-order logic consisting of all relational first-order sentences with at most two distinct
variables. Although this fragment was shown to be decidable a long time ago, the computational complexity of its decision problem has not been pinpointed so far. In 1975 Mortimer proved that FO² has
the finite-model property, which means that if an FO²-sentence is satisfiable, then it has a finite model. Moreover, Mortimer showed that every satisfiable FO²-sentence has a model whose size is at
most doubly exponential in the size of the sentence. In this paper, we improve Mortimer's bound by one exponential and show that every satisfiable FO²-sentence has a model whose size is at most
exponential in the size of the sentence. As a consequence, we establish that the satisfiability problem for FO² is NEXPTIME-complete.
- THEORETICAL COMPUTER SCIENCE , 1998
"... Consider the class of all those properties of worlds in finite Kripke structures (or of states in finite transition systems), that are ffl recognizable in polynomial time, and ffl closed under
bisimulation equivalence. It is shown that the class of these bisimulation-invariant Ptime queries has a ..."
Cited by 16 (1 self)
Add to MetaCart
Consider the class of all those properties of worlds in finite Kripke structures (or of states in finite transition systems), that are (i) recognizable in polynomial time, and (ii) closed under
bisimulation equivalence. It is shown that the class of these bisimulation-invariant Ptime queries has a natural logical characterization. It is captured by the straightforward extension of
propositional µ-calculus to arbitrary finite dimension. Bisimulation-invariant Ptime, or the modal fragment of Ptime, thus proves to be one of the very rare cases in which a logical characterization
is known in a setting of unordered structures. It is also shown that higher-dimensional µ-calculus is undecidable for satisfiability in finite structures, and even Σ^1_1-hard over general
, 2000
"... We show that the loosely guarded and packed fragments of first-order logic have the finite model property. We use a construction of Herwig. We point out some consequences in temporal predicate
logic and algebraic logic. AMS classification: Primary 03B20; Secondary 03B45, 03C07, 03C13, 03C30, 03G1 ..."
Cited by 15 (3 self)
Add to MetaCart
We show that the loosely guarded and packed fragments of first-order logic have the finite model property. We use a construction of Herwig. We point out some consequences in temporal predicate logic and algebraic logic. AMS classification: Primary 03B20; Secondary 03B45, 03C07, 03C13, 03C30, 03G15. Keywords: finite structures, modal logic, modal fragment, packed fragment. 1 Introduction. Perhaps
because beginning students of modal logic are often told that modal logic is more expressive than first-order logic and indeed has some second-order expressive power, or perhaps because they are
hoping for something new, it can come as a surprise to them that every modal formula has a `standard translation' into first-order logic. For example, ◇(p → ◻q) is translated to ∃y(R(x,y) ∧ (P(y) → ∀z(R(y,z) → Q(z)))). (1) The translation mimics the Kripke semantics for modal logic. Not every first-order formula (with one free variable in the appropriate signature) is the translation of a modal
- IN LOGIC COLLOQUIUM ’02 , 2006
"... We study bisimulation invariance over finite structures. This investigation leads to a new, quite elementary proof of the van Benthem-Rosen characterisation of basic modal logic as the
bisimulation invariant fragment of first-order logic. The ramification of this characterisation for the finer no ..."
Cited by 6 (0 self)
Add to MetaCart
We study bisimulation invariance over finite structures. This investigation leads to a new, quite elementary proof of the van Benthem-Rosen characterisation of basic modal logic as the bisimulation
invariant fragment of first-order logic. The ramification of this characterisation for the finer notion of global twoway bisimulation equivalence is based on bisimulation respecting constructions of
models that recover in finite models some of the desirable properties of the usually in finite bisimilar unravellings.
, 2001
"... Different variants of guarded logics (a powerful generalization of modal logics) are surveyed and an elementary proof for the decidability of guarded fixed point logics is presented. In a joint
paper with Igor Walukiewicz, we proved that the satisfiability problems for guarded fixed point logics ar ..."
Cited by 3 (0 self)
Add to MetaCart
Different variants of guarded logics (a powerful generalization of modal logics) are surveyed and an elementary proof for the decidability of guarded fixed point logics is presented. In a joint paper
with Igor Walukiewicz, we proved that the satisfiability problems for guarded fixed point logics are decidable and complete for deterministic double exponential time (E. Grädel and I. Walukiewicz,
Guarded fixed point logic, Proc. 14th IEEE Symp. on Logic in Computer Science, 1999, pp. 45-54). That proof relies on alternating automata on trees and on a forgetful determinacy theorem for games on
graphs with unbounded branching. The exposition given here emphasizes the tree model property of guarded logics: every satisfiable sentence has a model of bounded tree width. Based on the tree model
property, we show that the satisfiability problem for guarded fixed point formulae can be reduced to the monadic theory of countable trees (SωS), or to the µ-calculus with backwards modalities.
, 2006
"... Abstract. We investigate the asymptotic properties of the logical system for information update developped by Baltag, Moss and Solecki [2]. We build on the idea of looking at update logics as
dynamical systems. We show that every epistemic formula either always holds or is always refuted from certai ..."
Cited by 2 (0 self)
Add to MetaCart
Abstract. We investigate the asymptotic properties of the logical system for information update developed by Baltag, Moss and Solecki [2]. We build on the idea of looking at update logics as
dynamical systems. We show that every epistemic formula either always holds or is always refuted from certain moment on, in the course of update with factual epistemic events, i.e. events with only
propositional prerequisite formulas, or signals. We characterize in terms of a pebble game the class of frames such that iterated update with factual epistemic events built over them gives rise only
to finite sets of reachable states. The characterization is nontrivial, and so the ’Finite Evolution Conjecture ’ (see van Benthem [4]) is refuted. Finally, after giving some basic insights into the
dissipative nature of update with general, nonfactual epistemic events, we show the distinctive stabilizing nature of epistemically ordered multi-S5 events- events in which agents can be ordered in
terms of how much they know. 1.
"... (This is a sample cover image for this issue. The actual cover is not yet available at this time.) This article appeared in a journal published by Elsevier. The attached copy is furnished to the
author for internal non-commercial research and education use, including for instruction at the authors i ..."
Add to MetaCart
(This is a sample cover image for this issue. The actual cover is not yet available at this time.) This article appeared in a journal published by Elsevier. The attached copy is furnished to the
author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution,
or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or
Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier’s archiving and manuscript policies are encouraged to visit: | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2319710","timestamp":"2014-04-20T17:09:07Z","content_type":null,"content_length":"28811","record_id":"<urn:uuid:d58b559c-3937-457c-812d-2cef1bdfa80b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATHEMATICA BOHEMICA, Vol. 126, No. 1, pp. 161-170 (2001)
On subalgebra lattices of a finite unary algebra I
Konrad Pioro
Konrad Pioro, Institute of Mathematics, Warsaw University, ul. Banacha 2, PL-02-097 Warsaw, Poland, e-mail: kpioro@mimuw.edu.pl
Abstract: One of the main aims of the present and the next part [15] is to show that the theory of graphs (its language and results) can be very useful in algebraic investigations. We characterize,
in terms of isomorphisms of some digraphs, all pairs $\langle \mathbf {A},\mathbf {L}\rangle$, where $\mathbf {A}$ is a finite unary algebra and $\Cal L$ a finite lattice such that the subalgebra
lattice of $\mathbf {A}$ is isomorphic to $\mathbf {L}$. Moreover, we find necessary and sufficient conditions for two arbitrary finite unary algebras to have isomorphic subalgebra lattices. We solve
these two problems in the more general case of partial unary algebras. In the next part [15] we will use these results to describe connections between various kinds of lattices of (partial)
subalgebras of a finite unary algebra.
Keywords: unary algebra, partial algebra, subalgebra lattice, directed graph
Classification (MSC2000): 05C20, 05C40, 05C99, 08A30, 08A60, 05C90, 06B15, 06D05, 08A55
Pressure is simply the force per unit area that a fluid exerts on its surroundings. If it is a gas, then the pressure of the gas is the force per unit area that the gas exerts on the walls of the
container that holds it. If the fluid is a liquid, then the pressure is the force per unit area that the liquid exerts on the container in which it is contained. Obviously, the pressure of a gas will
be uniform on all the walls that must enclose the gas completely. In a liquid, the pressure will vary, being greatest on the bottom of the vessel and zero on the top surface, which need not be enclosed.
Static Pressure
The statements made in the previous paragraph are explicitly true for a fluid that is not moving in space, that is not being pumped through pipes or flowing through a channel. The pressure in cases where no motion is occurring is referred to as static pressure.
Dynamic Pressure
If a fluid is in motion, the pressure that it exerts on its surroundings depends on the motion. Thus, if we measure the pressure of water in a hose with the nozzle closed, we may find a pressure of, say, 40 pounds per square inch (note: force per unit area). If the nozzle is
opened, the pressure in the hose will drop to a different value, say, 30 pounds per square inch. For this reason, a thorough description of pressure must note the circumstances under which it is
measured. Pressure can depend on flow, compressibility of the fluid, external forces, and numerous other factors.
Since pressure is force per unit area, we describe it in the SI system of units by newtons per square meter. This unit has been named the pascal (Pa), so that 1 Pa = 1 N/m^2. As will be seen later, this is not a very convenient unit, and it is often used in conjunction with the SI standard prefixes, as kPa, or MPa. You will see the combination N/cm^2 used, but use of this combination should be avoided in favor of Pa with the appropriate prefix. In the English system of units, the most common designation is the pound per square inch, lb/in^2. This is usually written psi. The conversion is that 1 psi is approximately 6.895 kPa. For very low pressures, such as may be found in vacuum systems, the unit torr is often used. One Torr is approximately 133.3 Pa. Again, use of the pascal with appropriate prefix is preferred. Other units that you may encounter in the pressure description are the atmosphere (at), which is 101.325 kPa or ≈14.7 psi, and the bar, which is 100 kPa. The use of inches or feet of water and millimeters of mercury will be discussed later.
Gauge Pressure
In many cases, the absolute pressure is not the quantity of major interest in describing the pressure. The atmosphere of gas that surrounds the earth exerts a pressure, because of its weight, at the surface of the earth of approximately 14.7 psi, which defines the "atmosphere" unit. If a closed vessel at the earth's surface contained a gas at an absolute pressure of 14.7 psi, then it would exert no effective pressure on the walls of the container because the atmospheric gas exerts the same pressure from the outside. In cases like this, it is more appropriate to describe pressure in a relative sense, that is, compared to atmospheric pressure. This is called gauge pressure, and is given by
p_g = p_abs - p_at   (5.30)
where p_g = gauge pressure
p_abs = absolute pressure
p_at = atmospheric pressure
In the English system of units, the abbreviation psig is used to represent the gauge pressure.
Head Pressure
For liquids, the expression head pressure or pressure head is often used to describe the pressure of the liquid in a tank or pipe. This refers to the static pressure produced by the weight of the liquid above the point at which the pressure is being described. This pressure depends on the height of the liquid above that point and the liquid density (mass per unit volume). In terms of an equation, if a liquid is contained in a tank, then the pressure at the bottom of the tank is given by
p = ρgh   (5.31)
where p = pressure in Pa
ρ = density in kg/m^3
g = acceleration due to gravity (9.8 m/s^2)
h = depth of liquid in m
This same equation could be used to find the pressure in the English system, but it is common practice to express the density in this system as the weight density ρ_w in lb/ft^3, which includes the gravity term of Equation (5.31). In this case, the relation between pressure and depth becomes
p = ρ_w h   (5.32)
where p = pressure in lb/ft^2
ρ_w = weight density in lb/ft^3
h = depth in ft
If the pressure is desired in psi, then the ft^2 would be expressed as 144 in^2. Because of the common occurrence of liquid tanks and the necessity to express the pressure of such systems, it has become common practice to describe the pressure directly in terms of the depth of a particular liquid. Thus, the term mm of mercury means that the pressure is equivalent to that produced by so many millimeters of mercury depth, which could be calculated from Equation (5.31) using the density of mercury. In the same sense, the expression "inches of water" or "feet of water" means the pressure that is equivalent to some particular depth of water using its weight density.
Now you can see the basis for level measurement on pressure mentioned in Section 5.2.4. Equation (5.31) shows that the level of liquid of density ρ is directly related to the pressure. From level measurement we pass to pressure measurement, which is usually done by some type of displacement measurement.
EXAMPLE 5.16
A tank holds water with a depth of 7.0 feet. What is the pressure at the tank bottom in psi and Pa? (density = 10^3 kg/m^3)
We can find the pressure in Pa directly by converting the 7.0 ft into meters; thus, (7.0 ft)(0.3048 m/ft) = 2.1 m. From Equation (5.31),
p = (10^3 kg/m^3)(9.8 m/s^2)(2.1 m)
p = 21 kPa (note significant figures)
To find the pressure in psi, we can convert the pressure in Pa to psi or use Equation (5.32). Let's use the latter. The weight density is found from
ρ_w = (10^3 kg/m^3)(9.8 m/s^2) = 9.8 X 10^3 N/m^3
= (9.8 X 10^3 N/m^3)(0.3048 m/ft)^3(0.2248 lb/N)
= 62.4 lb/ft^3
The pressure is then
p = ρ_w h = (62.4 lb/ft^3)(7.0 ft) ≈ 437 lb/ft^2 ≈ 3.0 psi
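The example is easy to check numerically. Here is a small Python sketch (not from the text; the function names are mine, and it assumes the same constants the example uses: water density 10^3 kg/m^3, g = 9.8 m/s^2, 1 psi ≈ 6.895 kPa):

RHO_WATER = 1.0e3            # kg/m^3
G = 9.8                      # m/s^2

def head_pressure_pa(depth_m, density=RHO_WATER):
    # Static head pressure p = rho * g * h at the given depth, Equation (5.31)
    return density * G * depth_m

def pa_to_psi(pa):
    return pa / 6.895e3      # 1 psi is approximately 6.895 kPa

depth_m = 7.0 * 0.3048       # 7.0 ft converted to meters
p = head_pressure_pa(depth_m)
print(round(p / 1e3), "kPa")           # about 21 kPa
print(round(pa_to_psi(p), 1), "psi")   # about 3.0 psi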
Excerpt from the book published by Prentice Hall Professional (http://www.phptr.com).
Copyright Prentice Hall Inc., A Pearson Education Company, Upper Saddle River, New Jersey 07458.
Proof, bijective application and conformal mapping
November 19th 2011, 11:35 AM #1
Proof, bijective application and conformal mapping
Hi there. I have to prove that if $f:A \rightarrow B$ is a bijective and analytic function with analytic inverse, then f is conformal.
I think I should prove the angle-preserving property using analyticity, but I'm not sure how.
Re: Proof, bijective application and conformal mapping
Since it's bijective there is no critical point. Otherwise, if p is a critical point, f can be expanded near p as $f(z)=az^k + \dots$, thus f is a k-fold covering on a small disk of $D\setminus\{p\}$. To show that f preserves angles, note that its differential df (the induced linear map on the tangent space) is the linear map $df(v) = f' \cdot v$, where the right side is the complex product. And since p is not critical, $f'$ is not zero; $f'=r e^{i \theta}$ is a linear map composed of a rotation by $\theta$ followed by a dilation of ratio r, and both the rotation and the dilation preserve angles.
Translate the above idea into math expressions and you're done.
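To spell the angle computation out a little (a sketch, here using the analytic inverse from the hypothesis rather than the covering argument): differentiating $f^{-1}(f(z))=z$ gives $(f^{-1})'(f(z))\,f'(z)=1$, so $f'(z)\neq 0$ at every point. Fix $p$ and write $f'(p)=re^{i\theta}$ with $r>0$. If $\gamma_1,\gamma_2$ are smooth curves through $p$ with tangent vectors $v_1,v_2$, the chain rule gives $(f\circ\gamma_j)'(0)=f'(p)\,v_j=re^{i\theta}v_j$, hence
$\arg\frac{(f\circ\gamma_1)'(0)}{(f\circ\gamma_2)'(0)}=\arg\frac{v_1}{v_2},$
so the angle between the image curves at $f(p)$ equals the angle between the original curves at $p$, which is what conformality asks for.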
conversion of 10-digital id to 8-digital id
I am writing a conversion formula to convert from a 10-digital unique id to an 8-digital id and vice versa.
Can somebody advise me where to find the solution.
What do you mean by n-digital ?
Assuming that by this you mean converting a number with n digits to a number with m digits, given n > m, while maintaining uniqueness, I’m afraid this is not possible, since there are more unique
numbers that can be formed with n digits than can be formed with m digits.
If you may lose uniqueness in the conversion process, have a look at the various hashing methods. The easiest solution probably would be modulo operations, which essentially cut off the
highest-significant digits.
If you mean something else, please provide some more details on what you would like to do.
- Wernaeh
Or do you maybe mean base 10 (decimal) to base 8 (octal)? (just a guess)
Thanks for roel's and Wernaeh's replies.
There is a database using a 10-digital length as a unique identifier for records. However, it needs to link (as a key) with an older object which only allows an 8-digital length identifier. I need to maintain the uniqueness in both the 10-digital and 8-digital data.
I think roel’s suggestion base 10 (decimal) to base 8 (octal) will work. Please advise where to find the conversion formula.
Do you mean 10 digits, 10 bits, base-ten or something else entirely? “10-digital length” is nonsensical. Try reading these articles and rephrasing your question.
EDIT: This one may be handy if Wernaeh’s interpretation of your question is correct: http://en.wikipedia.org/wiki/Hash_function
No, I don’t think this will work. Essentially, regardless of the actual representation of your data, and regardless of the base used, the number of unique entries is determined by the length - in
digits - of the unique identifier.
I think the easiest solution to your problem would be to reserve a range for old objects, and add all new objects to other ranges.
Say your ids run from 0000000000 up to 1111111111 (assuming that by digital you also imply a binary system, but the same scheme easily extends to other bases). Then reserve a certain prefix for old objects, for example the prefix 00. All old objects (i.e. old IDs 00000000 to 11111111) are then stored as 0000000000 up to 0011111111. All new objects then use the remaining range from 0100000000 up to 1111111111, i.e. have a prefix that is not 00.
Hope this helps.
- Wernaeh
Thanks for giving more information as reference.
If the conditions allow us to treat the 8-digital id as an 8-character alphanumeric id, and the 10-digital id can be treated as a 10-digit numeric id, is it possible to work out the conversion?
I recall that in the old days we used the DOS operating system, which only allows 8.3-character filenames. Whereas a file created on the NTFS file system with a filename longer than 8.3 characters can still be viewed under DOS in 8.3 format. Can this method be used in the above situation?
If you take e.g. a hex system (base 16) then the biggest number you can represent with 8 digits is FFFFFFFF, which is 4294967295 in the decimal system.
So if you define a base using the decimal digits plus all characters of the alphabet, you get base 36 (0-Z). Then the biggest number you can represent with 8 digits becomes ZZZZZZZZ, i.e. 36^8 - 1 = 2821109907455, which is bigger than what you can represent in a decimal system with 10 digits (9999999999).
If I didn’t misunderstand your problem, I would say this works. You should be able to do it.
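For concreteness, a minimal sketch of such a reversible mapping (the 0-Z alphabet ordering, the function names, and the fixed width of 8 are illustrative assumptions, not an existing API):

    # Sketch: map a 10-digit decimal id to an 8-character base-36 code and back.
    # Works because 36**8 - 1 = 2821109907455 >= 9999999999, the largest 10-digit id.
    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # assumed 0-Z digit ordering

    def encode_base36(decimal_id, width=8):
        if not 0 <= decimal_id < 36 ** width:
            raise ValueError("id out of range for an 8-character base-36 code")
        chars = []
        for _ in range(width):
            decimal_id, remainder = divmod(decimal_id, 36)
            chars.append(ALPHABET[remainder])
        return "".join(reversed(chars))   # fixed width, left-padded with '0'

    def decode_base36(code):
        value = 0
        for ch in code:
            value = value * 36 + ALPHABET.index(ch)
        return value

    assert decode_base36(encode_base36(9999999999)) == 9999999999

Python's built-in int(code, 36) could serve as the decoder as well; the only requirement for the mapping to stay one-to-one in both directions is that every id remains below 36^8.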
Thanks for moe.
You have solved the problem.
:lol: I’m completely confused. What just happened in this thread? :wallbash:
My guess is the poster had a database with 10-digit numeric IDs and another with 8-character alphanumeric IDs…it seems that representing 10-(decimal)digit numbers using 8-digit base 36 values lets
him keep the two in sync.
Or alternatively someone is confused, makes a solution that seems to work, but it breaks totally a couple of years down the line. =)
Earlier in this thread it has been established that you cannot put a bigger number into a smaller one if the bigger number is more than the smaller system can handle. I think the problem was to convert the 10-digit number into an 8-digit one. Since he uses alphanumeric values for the 8-digit number, he can represent a bigger value in those 8 digits than he can in the 10-digit numeric system.
As long as nelsonyam1935 is aware that the biggest number he can use is the maximum of the smaller of the two systems (which the larger system can still hold), he should be fine. (That was a confusing sentence :) ).
He mentioned that the newer database is using the 10-digit numeric value. Therefore he should be pretty safe, as you can assume new values come from the new database and get converted into the old one. Hence you convert the 10 decimal digits into 8 digits of base 36 (smaller into bigger). However, you are right: without a check against invalid numbers, he might run into a problem down the road.
Also, I mentioned I was not sure I got the problem right ;) It was just a suggestion…
I’m going by the thread title here: conversion of 10-digital id to 8-digital id
Only a one-directional conversion, not backwards :)
I felt like I needed to be more specific.
If you assume the new database delivers the new values, you never get one bigger than 9999999999. So the backwards conversion from the old db into the new one will only be for sync reasons, and you can assume that such a number already fits into the decimal system - even though the 8-character alphanumeric value could theoretically hold a bigger number.
One-directional conversion for the new values, and two-directional conversion for syncing the two db's.
Why Minimize Negative Log Likelihood?
May 23, 2011
One of the wonders of machine learning is the diversity of divergent traditions from which it originates, from classical statistics (both frequentist and Bayesian) to information and control
theories, plus a significant dose of pragmatism from computer science. For those interested in the historical relationship between statistics and machine learning, see Breiman’s Two Cultures.
This diversity is reflected in the surprising complexity in answering simple-sounding questions, which often speaks to the heart of trading using computational machine learning models—ranging from
estimating HMM models via MLE (e.g. vol / correlation regime models) to non-convex optimization via non-standard likelihood or loss functions (e.g. portfolio optimization via omega):
Why is minimizing the negative log likelihood equivalent to maximum likelihood estimation (MLE)?
Or, equivalently, in Bayesian-speak:
Why is minimizing the negative log likelihood equivalent to maximum a posteriori probability (MAP), given a uniform prior?
Answering this question provides insight into the foundations of machine learning, as well as connection with several branches of mathematics.
Classic statistics opens the answer, beginning with the definition of a likelihood function:
$\mathcal{L}(\theta\,|\,x_1,\ldots,x_n) = f(x_1,x_2,\ldots,x_n|\theta) = \prod\limits_{i=1}^n f(x_i|\theta)$
Applying the natural log function in this context is handy, for several reasons. First, numerical analysis reminds us that logs reduce potential for underflow, due to very small likelihoods. Second,
calculus reminds us logs permit the addition trick: converting a product of factors into a summation of factors (as seen before in Why Log Returns?). Finally, calculus again reminds us that the
natural log function is a monotone transformation.
Thus, the extrema of $\mathcal{L}$ are equivalent to the extrema of $\log \mathcal{L}$:
$\log \mathcal{L}(\theta\,|\,x_1,\ldots,x_n) = \sum\limits_{i=1}^n \log f(x_i|\theta)$
From which the maximum likelihood estimator $\hat{\theta}_{\textnormal{MLE}}$ is defined as:
$\hat{\theta}_{\textnormal{MLE}} = \underset{\theta}{\arg\max} \sum\limits_{i=1}^n \log f(x_i|\theta)$
As an aside, Bayesians will remind us we can generalized into a MAP estimator, given uniform prior $g(\theta)$:
$\underset{\theta}{\arg\max} \sum\limits_{i=1}^n \log f(x_i|\theta) = \underset{\theta}{\arg\max} \log f(x|\theta) = \underset{\theta}{\arg\max} \log f(x|\theta)\, g(\theta) = \hat{\theta}_{\textnormal{MAP}}$
From which optimization and real analysis reminds us of the following equivalence, for all $x$:
$\underset{x}{\arg\max} (x) = \underset{x}{\arg\min} (-x)$
Thus, the following are equivalent:
$\underset{\theta}{\arg\max} \sum\limits_{i=1}^n \log f(x_i|\theta) = \underset{\theta}{\arg\min} - \sum\limits_{i=1}^n \log f(x_i|\theta) = \hat{\theta}_{\textnormal{MLE}}$
From this, we technically have an answer to the above two questions on equivalence. Yet, from here lies the opportunity to continue and uncover the relationship between MLE/MAP and both entropy and
loss via Kullback-Leibler divergence (KL). To get there, consider the statistical average of the above:
$\underset{\theta}{\arg\min} (\frac{1}{n} \sum\limits_{i=1}^n - \log f(x_i|\theta) )$
Which converges, by the strong law of large numbers, to the expectation:
$E[- \log f(x|\theta)]$
Which is interesting when considering the difference in distribution between $\theta$ and its corresponding true actual parameter $\theta^*$:
$E[\log f(x|\theta^*) - \log f(x|\theta)] = E[\log\frac{f(x|\theta^*)}{f(x|\theta)}] = \int \log \frac{f(x|\theta^*)}{f(x|\theta)} f(x|\theta^*) dx$
Which is indeed equal to none other than the KL divergence, $K(f(x|\theta),f(x|\theta^*))$, between $\theta$ and $\theta^*$:
$\int \log \frac{f(x|\theta^*)}{f(x|\theta)} f(x|\theta^*) dx = K(f(x|\theta),f(x|\theta^*))$
Which information theory reminds us is relative entropy, and thus is also equal to the excess risk for the loss function defined by the negative log-likelihood. Finally, connecting Bayesian
statistics to the foundation of information theory: gain in Shannon entropy going from prior to posterior is indeed the KL divergence.
Thus, maximum likelihood and maximum a posteriori probability are special case loss functions (see Loss Function Semantics for more on loss semantics in ML).
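As a quick numerical sanity check of this equivalence, here is a minimal sketch (illustrative only; parameterizing via $\log \sigma$ simply keeps $\sigma$ positive): minimizing the negative log-likelihood of i.i.d. Gaussian samples recovers the closed-form MLE, namely the sample mean and the biased sample standard deviation.

    # Sketch: numerical argmin of the negative log-likelihood matches the Gaussian MLE.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.5, scale=2.0, size=5000)

    def negative_log_likelihood(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        return 0.5 * np.sum(np.log(2.0 * np.pi * sigma ** 2) + ((x - mu) / sigma) ** 2)

    result = minimize(negative_log_likelihood, x0=np.array([0.0, 0.0]))
    mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])

    print(mu_hat, x.mean())          # essentially identical
    print(sigma_hat, x.std(ddof=0))  # essentially identical (biased MLE of sigma)

Any density that is differentiable in its parameters works the same way; the Gaussian is used only because its MLE has a closed form to compare against.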
1. May 23, 2011 3:47 am
Nice writeup!
I wonder why you define the (log-) likelihood function in terms of a full factorization of x. To me that seems to be the mean-field approximation of f(x|theta) as in variational Bayes. Shouldn't
the general case be \prod f(x_i | \theta, x_{i+1},…,x_n) to keep the dependencies between the x_i?
2. May 23, 2011 5:07 am
Nice post. Two more interesting questions though are the following: 1) why MLE “works”? In what sense does it work? 2) Why Bayes MAP works, and in what regime is it close to MLE.
□ May 23, 2011 8:33 am
@gappy: good to hear from you; thanks for complement. Agree those are very interesting questions, especially in the pragmatic ML sense of “work”, meaning the estimated parameters generate
effective out-of-sample prediction (which, arguably, is what really matters for trading).
3. May 24, 2011 6:42 am
Nice writeup.
Though I use MLE a lot, explicitly or implicitly (where for example LSQ is an MLE estimator on series with normal errors), I find MLE to be problematic in the financial space because more often
than not we do not know the distribution, OR one can take a snapshot of the empirical distribution for some lookback period but not know how it evolves.
Of course there are approaches that attempt to determine the distribution and its evolution, such as particle filters or other forms of sampling. These encounter problems with outliers and sparse
data though.
ML techniques become more valuable, particularly in situations where the distribution is not known and/or dimensionality is high.
□ May 24, 2011 9:47 am
@tr8dr: thanks; good to hear from you, given your blog has been quiet for a while. Agree with your comments. To gappy’s question above and your comment about ML value, curious what you think
of Bayesian methods vis-a-vis distribution uncertainty: given strong uncertainty on distribution, do you prefer MLE methods or applying Bayesian methods and hoping robustness guides
increasingly accurate posterior iteration? | {"url":"http://quantivity.wordpress.com/2011/05/23/why-minimize-negative-log-likelihood/","timestamp":"2014-04-17T06:52:53Z","content_type":null,"content_length":"75252","record_id":"<urn:uuid:56e8cd0b-7d66-444c-9695-ce1794e25cf1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
United States Patent Application 20080075377
Kind Code A1
Topiwala; Pankaj N. ;   et al. March 27, 2008
Fast lapped image transforms using lifting steps
This invention introduces a class of multi-band linear phase lapped biorthogonal transforms with fast, VLSI-friendly implementations via lifting steps called the LiftLT. The transform is based on a
lattice structure which robustly enforces both linear phase and perfect reconstruction properties. The lattice coefficients are parameterized as a series of lifting steps, providing fast, efficient
in-place computation of the transform coefficients as well as the ability to map integers to integers. Our main motivation for the new transform is its application in image and video coding. Compared
to the popular 8×8 DCT, the 8×16 LiftLT only requires 1 more multiplication, 22 more additions, and 6 more shifting operations. However, image coding examples show that the LiftLT is far
superior to the DCT in both objective and subjective coding performance. Thanks to properly designed overlapping basis functions, the LiftLT can completely eliminate annoying blocking artifacts. In
fact, the novel LiftLT's coding performance consistently surpasses that of the much more complex 9/7-tap biorthogonal wavelet with floating-point coefficients. More importantly, our transform's
block-based nature facilitates one-pass sequential block coding, region-of-interest coding/decoding as well as parallel processing.
Inventors: Topiwala; Pankaj N.; (Clarksville, MD) ; Tran; Trac D.; (Columbia, MD)
Correspondence Address: BAKER & HOSTETLER LLP
WASHINGTON SQUARE, SUITE 1100
1050 CONNECTICUT AVE. N.W.
Serial No.: 896522
Series Code: 11
Filed: September 4, 2007
Current U.S. Class: 382/248; 375/E7.103; 375/E7.226
Class at Publication: 382/248
International Class: G06K 9/36 20060101 G06K009/36
1. An apparatus for coding, storing or transmitting, and decoding M.times.M sized blocks of digitally represented images, where M is an even number, comprising a. a forward transform comprising i. a
base transform having M channels numbered 0 through M-1, half of said channel numbers being odd and half being even; ii. an equal normalization factor in each of the M channels selected to be
dyadic-rational; iii. a full-scale butterfly implemented as a series of lifting steps with a first set of dyadic rational coefficients; iv. M/2 delay lines in the odd numbered channels; v. a
full-scale butterfly implemented as a series of lifting steps with said first set of dyadic rational coefficients; and vi. a series of lifting steps in the odd numbered channels with a second
specifically selected set of dyadic-rational coefficients; b. means for transmission or storage of the transform output coefficients; and c. an inverse transform comprising i. M channels numbered 0
through M-1, half of said channel numbers being odd and half being even; ii. a series of inverse lifting steps in the odd numbered channels with said second set of specifically selected
dyadic-rational coefficients; iii. a full-scale butterfly implemented as a series of lifting steps with said first set of specifically selected dyadic-rational coefficients; iv. M/2 delay lines in
the even numbered channels; v. a full-scale butterfly implemented as a series of lifting steps with said first set of specifically selected dyadic-rational coefficients; vi. an equal denormalization
factor in each of the M channels specifically selected to be dyadic-rational; and vii. a base inverse transform having M channels numbered 0 through M-1.
2. The apparatus of claim 1 in which the normalizing factor takes the value 25/16 and simultaneously the denormalizing factor takes the value 16/25.
3. The apparatus of claim 1 in which the normalizing factor takes the value 5/4 and simultaneously the denormalizing factor takes the value 4/5.
4. The apparatus of claim 1 in which the first set of dyadic rational coefficients are all equal to 1.
5. The apparatus of claim 1 in which the second set of dyadic rational coefficients are all equal to 1/2.
6. The apparatus of claim 1 in which the base transform is any M.times.M invertible matrix of the form of a linear phase filter and the inverse base transform is the inverse of said M.times.M
invertible matrix.
7. The apparatus of claim 1 in which the base transform is the forward M.times.M discrete cosine transform and the inverse base transform is the inverse M.times.M discrete cosine transform.
8. An apparatus for coding, compressing, storing or transmitting, and decoding a block of M.times.M intensities from a digital image selected by an M.times.M window moving recursively over the image,
comprising: a. an M.times.M block transform comprising: i. an initial stage ii. a normalizing factor in each channel b. a cascade comprising a plurality of dyadic rational lifting transforms, each of
said plurality of dyadic rational lifting transforms comprising i. a first bank of pairs of butterfly lifting steps with unitary coefficients between adjacent lines of said transform; ii. a bank of
delay lines in a first group of M/2 alternating lines; iii. a second bank of butterfly lifting steps with unitary coefficients, and iv. a bank of pairs of butterfly lifting steps with coefficients of
1/2 between M/2-1 pairs of said M/2 alternating lines; c. means for transmission or storage of the output coefficients of said M.times.M block transform; and d. an inverse transform comprising i. a
cascade comprising a plurality of dyadic rational lifting transforms, each of said plurality of dyadic rational lifting transforms comprising a) a bank of pairs of butterfly lifting steps with
coefficients of 1/2 between said M/2-1 pairs of said M/2 alternating lines; b) a first bank of pairs of butterfly lifting steps with unitary coefficients between adjacent lines of said transform; c)
a bank of delay lines in a second group of M/2 alternating lines; and d) a second bank of pairs of butterfly lifting steps with unitary coefficients between adjacent lines of said transform; ii. a
de-scaling bank; and iii. an inverse initial stage.
9. A method of coding, storing or transmitting, and decoding M.times.M sized blocks of digitally represented images, where M is a power of 2, comprising a. transmitting the original picture signals
to a coder, which effects the steps of i. converting the signals with a base transform having M channels numbered 0 through M-1, half of said channel numbers being odd and half being even; ii.
normalizing the output of the preceding step with a dyadic rational normalization factor in each of said M channels; iii. processing the output of the preceding step through two lifting steps with a
first set of identical dyadic rational coefficients connecting each pair of adjacent numbered channels in a butterfly configuration; iv. transmitting the resulting coefficients through M/2 delay
lines in the odd numbered channels; v. processing the output of the preceding step through two inverse lifting steps with the first set of dyadic rational coefficients connecting each pair of
adjacent numbered channels in a butterfly configuration; and vi. applying two lifting steps with a second set of identical dyadic rational coefficients connecting each pair of adjacent odd numbered
channels to the output of the preceding step; b. transmitting or storing the transform output coefficients; c. receiving the transform output coefficients in a decoder; and d. processing the output
coefficients in a decoder, comprising the steps of i. receiving the coefficients in M channels numbered 0 through M-1, half of said channel numbers being odd and half being even; ii. applying two
inverse lifting steps with dyadic rational coefficients connecting each pair of adjacent odd numbered channels; iii. applying two lifting steps with dyadic rational coefficients connecting each pair
of adjacent numbered channels in a butterfly configuration; iv. transmitting the result of the preceding step through M/2 delay lines in the even numbered channels; v. applying two inverse lifting
steps with dyadic rational coefficients connecting each pair of adjacent numbered channels in a butterfly configuration; vi. denormalizing the result of the preceding step with a dyadic rational
inverse normalization factor in each of said M channels; and vii. processing the result of the preceding step through a base inverse transform having M channels numbered 0 through M-1.
10. A method of coding, compressing, storing or transmitting, and decoding a block of M.times.M intensities from a digital image selected by an M.times.M window moving recursively over the image,
comprising the steps of: a. Processing the intensities in an M.times.M block coder comprising the steps of: i. processing the intensities through an initial stage; ii. scaling the result of the
preceding step in each channel; b. processing the result of the preceding step through a cascade comprising a plurality of dyadic rational lifting transforms, each of said plurality of dyadic
rational lifting transforms comprising i. a first bank of pairs of butterfly lifting steps with unitary coefficients between adjacent lines of said transform; ii. a bank of delay lines in a first
group of M/2 alternating lines; iii. a second bank of butterfly lifting steps with unitary coefficients, and iv. a bank of pairs of butterfly lifting steps with coefficients of 1/2 between M/2-1
pairs of said M/2 alternating lines; c. transmitting or storing the output coefficients of said M.times.M block coder; d. receiving the output coefficients in a decoder; and e. processing the output
coefficients in the decoder, comprising the steps of i. processing the output coefficients through a cascade comprising a plurality of dyadic rational lifting transforms, each of said plurality of
dyadic rational lifting transforms comprising a) a bank of pairs of butterfly lifting steps with coefficients of 1/2 between said M/2-1 pairs of said M/2 alternating lines; b) a first bank of pairs
of butterfly lifting steps with unitary coefficients between adjacent lines of said transform; c) a bank of delay lines in a second group of M/2 alternating lines; d) a second bank of pairs of
butterfly lifting steps with unitary coefficients between adjacent lines of said transform; e) a de-scaling bank; and f. processing the results of the preceding step in an inverse initial stage.
11. The apparatus of claim 1 in which the [constants] coefficients are approximations chosen for rapid computing rather than exact [constants] coefficients.
12. A method of coding, storing or transmitting, and decoding a block of M.times.M intensities from a digital image selected by an M.times.M window moving recursively over the image, comprising: a.
processing the intensities in an M.times.M block coder comprising the steps of: i. processing the intensities through an initial stage; ii. scaling the result of the preceding step in each channel;
b. processing the result of the preceding step through a transform coder using a method of processing blocks of samples of digital signals of integer length M comprising processing the digital
samples of length M with an invertible linear transform of dimension M, said transform being representable as a cascade, using the steps, in arbitrary order, of: i) at least one +/-1 butterfly step,
ii) at least one lifting step with rational complex coefficients, and iii) at least one scaling factor; c. transmitting or storing the output coefficients of said M.times.M block coder; d. receiving
the output coefficients in a decoder; and e. processing the output coefficients in the decoder into a reconstructed image using the inverse of the coder of steps a. and b.
13. The method of claim 12 wherein the method of processing blocks of samples of digital signals of integer length M additionally comprises the step of at least one time delay.
14. The method of claim 12, wherein the rational complex coefficients in the at least one lifting step are dyadic.
15. The method of claim 12, wherein a) said invertible transform is an approximation of a biorthogonal transform; b) said biorthogonal transformation comprises a representation as a cascade of at
least one butterfly step, at least one orthogonal transform, and at least one scaling factor; c) said at least one orthogonal transform comprises a cascade of i) at least one +/-1 butterfly step, ii)
at least one planar rotation, and iii) at least one scaling factor; b) said at least one planar rotation being represented by equivalent lifting steps and scale factors; and, c) said approximation is
obtained by replacing floating point coefficients in the lifting steps with rational coefficients.
16. The method of claim 15, wherein the coefficients of the lifting steps are chosen to be dyadic rational.
17. The method of claim 12, wherein the invertible transform is a unitary transform.
18. The method of claim 12, wherein a) said invertible transform is an approximation of a unitary transform; b) said approximation of the unitary transform comprises a representation of the unitary
transform as a cascade of at least one butterfly step, at least one orthogonal transform, and at least one scale factor; c) said at least one orthogonal transform being represented as a cascade of
(1) at least one +/-1 butterfly steps, (2) at least one planar rotation, and (3) at least one scaling factor; d) said at least one planar rotation being represented by equivalent lifting steps and
scale factors; and, e) said approximation being derived by using approximate rational values for the coefficients in the lifting steps.
19. The method of claim 18, wherein the invertible transform is an approximation of a transform selected from the group of special unitary transforms: discrete cosine transform (DCT); discrete
Fourier transform (DFT); discrete sine transform (DST).
20. The method of claim 18, wherein the coefficients of the lifting steps are dyadic rational.
21. The method of claim 18, wherein at least one of the following lifting steps is used, whose matrix representations take on the form: $\begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}$, $\begin{bmatrix} 1 & 0 \\ b & 1 \end{bmatrix}$, where a, b are selected from the group: +/-{8, 5, 4, 2, 1, 1/2, 1/4, 3/4, 5/4, 1/8, 3/8, 5/8, 7/8, 1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, 15/16, 25/16}.
22. The method of claim 21, wherein the invertible transform is an approximation of a transform selected from the group: discrete cosine transform (DCT); discrete Fourier transform (DFT); discrete
sine transform (DST).
23. The method of claim 22, wherein the approximation of the 4 point DCT is selected from the group of matrices: $\left\{ \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 1 & -2 & 2 & -1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 5 & -2 & 2 & -5 \end{bmatrix} \right\}$
24. The method of claim 19 in which the invertible transform is an approximation of a transform selected from the group three point DCT, 4 point DCT, 8 point DCT, and 16 point DCT.
25. The method of claim 19 in which the invertible transform is an approximation of a transform selected from the group 512 point FFT, 1024 point FFT, 2048 point FFT, and 4096 point FFT.
26. A method of coding, storing or transmitting, and decoding sequences of intensities of integer length M recursively selected from a time ordered string of intensities arising from electrical
signals, the method comprising the steps of a) recursively processing the sequences of intensities of integer length M with an invertible forward linear transform of dimension M, said transform being
representable as a cascade using the steps, in a preselected arbitrary order, of: ii) at least one +/-1 butterfly step, iii) at least one lifting step with rational complex coefficients, and iv)
applying at least one scaling factor; b) compressing the resulting transform coefficients; c) storing or transmitting the compressed transform coefficients; d) receiving or recovering from storage
the transmitted or stored compressed transform coefficients; e) decompressing the received or recovered compressed transform coefficients: and f) recursively processing the decompressed transform
coefficients with the inverse of the forward linear transform of dimension M, said inverse transform being representable as a cascade using the steps, in the exact reverse order of the preselected
arbitrary order, of: ii) a least one inverse butterfly corresponding to each of the at least one +/-1 butterfly step; iii) at least one inverse lifting step corresponding to each of the at least one
lifting step with rational complex coefficients; and, iv) applying at least on inverse scaling factor corresponding to the at least one scaling factor.
27. The method of claim 26 wherein the method of processing blocks of samples of digital signals of integer length M additionally comprises the step of at least one time delay.
28. The method of claim 26, wherein the rational complex coefficients in the at least one lifting step are dyadic.
29. The method of claim 26, wherein a) said invertible transform is an approximation of a biorthogonal transform; b) said biorthogonal transformation comprises a representation as a cascade of at
least one butterfly step, at least one orthogonal transform, and at least one scaling factor; c) said at least one orthogonal transform comprising a cascade of i) at least one +/-1 butterfly step,
ii) at least one planar rotation, and iii) at least one scaling factor; b) said at least one planar rotation being represented by equivalent lifting steps and scale factors; and, c) said
approximation being obtained by replacing floating point coefficients in the lifting steps with rational coefficients.
30. The method of claim 29, wherein the coefficients of the lifting steps are chosen to be dyadic rational.
31. The method of claim 26, wherein the invertible transform is a unitary transform.
32. The method of claim 26, wherein a) said invertible transform is an approximation of a unitary transform; b) said approximation of the unitary transform comprises a representation of the unitary
transform as a cascade of at least one butterfly step, at least one orthogonal transform, and at least one scale factor; c) said at least one orthogonal transform being represented as a cascade of
(1) at least one +/-1 butterfly steps, (2) at least one planar rotation, and (3) at least one scaling factor; d) said at least one planar rotation being represented by equivalent lifting steps and
scale factors; and, e) said approximation being derived by using approximate rational values for the coefficients in the lifting steps.
33. The method of claim 32, wherein the invertible transform is an approximation of a transform selected from the group of special unitary transforms: discrete cosine transform (DCT); discrete
Fourier transform (DFT); discrete sine transform (DST).
34. The method of claim 32, wherein the coefficients of the lifting steps are dyadic rational.
35. The method of claim 32, wherein at least one of the following lifting steps is used, whose matrix representations take on the form: $\begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}$, $\begin{bmatrix} 1 & 0 \\ b & 1 \end{bmatrix}$, where a, b are selected from the group: +/-{8, 5, 4, 2, 1, 1/2, 1/4, 3/4, 5/4, 1/8, 3/8, 5/8, 7/8, 1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, 15/16, 25/16}.
36. The method of claim 35, wherein the invertible transform is an approximation of a transform selected from the group: discrete cosine transform (DCT); discrete Fourier transform (DFT); discrete
sine transform (DST).
37. The method of claim 36, wherein the approximation of the 4 point DCT is selected from the group of matrices: $\left\{ \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 1 & -2 & 2 & -1 \end{bmatrix}, \begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 5 & -2 & 2 & -5 \end{bmatrix} \right\}$
38. The method of claim 33 in which the invertible transform is an approximation of a transform selected from the group three point DCT, 4 point DCT, 8 point DCT, and 16 point DCT.
39. The method of claim 33 in which the invertible transform is an approximation of a transform selected from the group 512 point FFT, 1024 point FFT, 2048 point FFT, and 4096 point FFT.
[0001] This application claims priority to and is a continuation of reissue U.S. patent application entitled, Fast Lapped Image Transforms Using Lifting Steps, filed Jul. 29, 2003, having a Ser. No.
10/629,303, now pending, the disclosure of which is hereby incorporated by reference in its entirety.
[0002] The current invention relates to the processing of images such as photographs, drawings, and other two dimensional displays. It further relates to the processing of such images which are captured in digital format or after they have been converted to or expressed in
digital format. This invention further relates to use of novel coding methods to increase the speed and compression ratio for digital image storage and transmission while avoiding introduction of
undesirable artifacts into the reconstructed images.
[0003] In general, image processing is the analysis and manipulation of two-dimensional representations, which can comprise photographs, drawings, paintings, blueprints, x-rays of medical patients, or indeed abstract art or artistic patterns. These images are all two-dimensional arrays of information. Until fairly recently, images have comprised almost exclusively analog displays of analog information, for example, conventional photographs and motion pictures. Even the signals encoding television pictures, notwithstanding that the vertical scan comprises a finite number of lines, are fundamentally analog in nature.
[0004] Beginning in the early 1960's, images began to be captured or converted and stored as two-dimensional digital data, and digital image processing followed. At first, images were recorded or
transmitted in analog form and then converted to digital representation for manipulation on a computer. Currently digital capture and transmission are on their way to dominance, in part because of
the advent of charge coupled device (CCD) image recording arrays and in part because of the availability of inexpensive high speed computers to store and manipulate images.
[0005] An important task of image processing is the correction or enhancement of a particular image. For example, digital enhancement of images of celestial objects taken by space probes has provided
substantial scientific information. However, the current invention relates primarily to compression for transmission or storage of digital images and not to enhancement.
[0006] One of the problems with digital images is that a complete single image frame can require up to several megabytes of storage space or transmission bandwidth. That is, one of today's 3 1/2 inch
floppy discs can hold at best a little more than one gray-scale frame and sometimes substantially less than one whole frame. A full-page color picture, for example, uncompressed, can occupy 30
megabytes of storage space. Storing or transmitting the vast amounts of data which would be required for real-time uncompressed high resolution digital video is technologically daunting and virtually
impossible for many important communication channels, such as the telephone line. The transmission of digital images from space probes can take many hours or even days if insufficiently compressed
images are involved. Accordingly, there has been a decades long effort to develop methods of extracting from images the information essential to an aesthetically pleasing or scientifically useful
picture without degrading the image quality too much and especially without introducing unsightly or confusing artifacts into the image.
[0007] The basic approach has usually involved some form of coding of picture intensities coupled with quantization. One approach is block coding; another approach, mathematically equivalent with
proper phasing, is multiphase filter banks. Frequency based multi-band transforms have long found application in image coding. For instance, the JPEG image compression standard, W. B. Pennebaker and
J. L. Mitchell, "JPEG: Still Image Compression Standard," Van Nostrand Reinhold, 1993, employs the 8×8 discrete cosine transform (DCT) at its transformation stage. At high bit rates, JPEG
offers almost lossless reconstructed image quality. However, when more compression is needed, annoying blocking artifacts appear since the DCT bases are short and do not overlap, creating
discontinuities at block boundaries.
[0008] The wavelet transform, on the other hand, with long, varying-length, and overlapping bases, has elegantly solved the blocking problem. However, the transform's computational complexity can be
significantly higher than that of the DCT. This complexity gap is partly in terms of the number of arithmetical operations involved, but more importantly, in terms of the memory buffer space
required. In particular, some implementations of the wavelet transform require many more operations per output coefficient as well as a large buffer.
[0009] An interesting alternative to wavelets is the lapped transform, e.g., H. S. Malvar, Signal Processing with Lapped Transforms, Artech House, 1992, where pixels from adjacent blocks are utilized
in the calculation of transform coefficients for the working block. The lapped transforms outperform the DCT on two counts: (i) from the analysis viewpoint, they take into account inter-block
correlation and hence provide better energy compaction; (ii) from the synthesis viewpoint, their overlapping basis functions decay asymptotically to zero at the ends, reducing blocking
discontinuities dramatically.
[0010] Nevertheless, lapped transforms have not yet been able to supplant the unadorned DCT in international standard coding routines. The principal reason is that the modest improvement in coding
performance available up to now has not been sufficient to justify the significant increase in computational complexity. In the prior art, therefore, lapped transforms remained too computationally
complex for the benefits they provided. In particular, the previous lapped transformed somewhat reduced but did not eliminate the annoying blocking artifacts.
[0011] It is therefore an object of the current invention to provide a new transform which is simple and fast enough to replace the bare DCT in international standards, in particular in JPEG and
MPEG-like coding standards. It is another object of this invention to provide an image transform which has overlapping basis functions so as to avoid blocking artifacts. It is a further object of
this invention to provide a lapped transform which is approximately as fast as, but more efficient for compression than, the bare DCT. It is yet another object of this invention to provide
dramatically improved speed and efficiency using a lapped transform with lifting steps in a butterfly structure with dyadic-rational coefficients. It is yet a further object of this invention to
provide a transform structure such that for a negligible complexity surplus over the bare DCT a dramatic coding performance gain can be obtained both from a subjective and objective point of view
while blocking artifacts are completely eliminated.
[0012] In the current invention, we use a family of lapped biorthogonal transforms implementing a small number of dyadic-rational lifting steps. The resulting transform, called the LiftLT, not only
has high computation speed but is well-suited to implementation via VLSI.
[0013] Moreover, it also consistently outperforms state-of-the-art wavelet based coding systems in coding performance when the same quantizer and entropy coder are used. The LiftLT is a lapped
biorthogonal transform using lifting steps in a modular lattice structure, the result of which is a fast, efficient, and robust encoding system. With only 1 more multiplication (which can also be
implemented with shift-and-add operations), 22 more additions, and 4 more delay elements compared to the bare DCT, the LiftLT offers a fast, low-cost approach capable of straightforward VLSI
implementation while providing reconstructed images which are high in quality, both objectively and subjectively. Despite its simplicity, the LiftLT provides a significant improvement in
reconstructed image quality over the traditional DCT in that blocking is completely eliminated while at medium and high compression ratios ringing artifacts are reasonably contained. The performance
of the LiftLT surpasses even that of the well-known 9/7-tap biorthogonal wavelet transform with irrational coefficients. The LiftLT's block-based structure also provides several other advantages:
supporting parallel processing mode, facilitating region-of-interest coding and decoding, and processing large images under severe memory constraints.
[0014] Most generally, the current invention is an apparatus for block coding of windows of digitally represented images comprising a chain of lattices of lapped transforms with dyadic rational
lifting steps. More particularly, this invention is a system of electronic devices which codes, stores or transmits, and decodes M.times.M sized blocks of digitally represented images, where M is an
even number. The main block transform structure comprises a transform having M channels numbered 0 through M-1, half of said channel numbers being odd and half being even; a normalizer with a dyadic
rational normalization factor in each of said M channels; two lifting steps with a first set of identical dyadic rational coefficients connecting each pair of adjacent numbered channels in a
butterfly configuration, M/2 delay lines in the odd numbered channels; two inverse lifting steps with the first set of dyadic rational coefficients connecting each pair of adjacent numbered channels
in a butterfly configuration; and two lifting steps with a second set of identical dyadic rational coefficients connecting each pair of adjacent odd numbered channels; means for transmission or
storage of the transform output coefficients; and an inverse transform comprising M channels numbered 0 through M-1, half of said channel numbers being odd and half being even; two inverse lifting
steps with dyadic rational coefficients connecting each pair of adjacent odd numbered channels; two lifting steps with dyadic rational coefficients connecting each pair of adjacent numbered channels
in a butterfly configuration; M/2 delay lines in the even numbered channels; two inverse lifting steps with dyadic rational coefficients connecting each pair of adjacent numbered channels in a
butterfly configuration; a denormalizer with a dyadic rational inverse normalization factor in each of said M channels; and a base inverse transform having M channels numbered 0 through M-1.
[0015] There has thus been outlined, rather broadly, certain embodiments of the invention in order that the detailed description thereof herein may be better understood, and in order that the present
contribution to the art may be better appreciated. There are, of course, additional embodiments of the invention that will be described below and which will form the subject matter of the claims
appended hereto.
[0016] In this respect, before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of
construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of embodiments in addition to those described
and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as the abstract, are for the purpose of description
and should not be regarded as limiting.
[0017] As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for the designing of other structures, methods and
systems for carrying out the several purposes of the present invention. It is important, therefore, that the claims be regarded as including such equivalent constructions insofar as they do not
depart from the spirit and scope of the present invention.
[0018] FIG. 1 is a polyphase representation of a linear phase perfect reconstruction filter bank.
[0019] FIG. 2 shows the most general lattice structure for linear phase lapped transforms with filter length L=KM.
[0020] FIG. 3 shows the parameterization of an invertible matrix via the singular value decomposition.
[0021] FIG. 4 portrays the basic butterfly lifting configuration.
[0022] FIG. 5 depicts the analysis LiftLT lattice drawn for M=8.
[0023] FIG. 6 depicts the synthesis LiftLT lattice drawn for M=8.
[0024] FIG. 7 depicts a VLSI implementation of the analysis filter bank operations.
[0025] FIG. 8 shows frequency and time responses of the 8×16 LiftLT: Left: analysis bank. Right: synthesis bank.
[0026] FIG. 9 portrays reconstructed "Barbara" images at 1:32 compression ratio.
[0027] Typically, a block transform for image processing is applied to a block (or window) of, for example, 8.times.8 group of pixels and the process is iterated over the entire image. A biorthogonal
transform in a block coder uses as a decomposition basis a complete set of basis vectors, similar to an orthogonal basis. However, the basis vectors are more general in that they may not be
orthogonal to all other basis vectors; the restriction is that there is a "dual" basis to the original biorthogonal basis such that every vector in the original basis has a "dual" vector in the dual
basis to which it is orthogonal. The basic idea of combining the concepts of biorthogonality and lapped transforms has already appeared in the prior art. The most general lattice for M-channel linear
phase lapped biorthogonal transforms is presented in T. D. Tran, R. de Queiroz, and T. Q. Nguyen, "The generalized lapped biorthogonal transform," ICASSP, pp. 1441-1444, Seattle, May 1998, and in T.
D. Tran, R. L. de Queiroz, and T. Q. Nguyen, "Linear phase perfect reconstruction filter bank: lattice structure, design, and application in image coding" (submitted to IEEE Trans. on Signal
Processing, April 1998). A signal processing flow diagram of this well-known generalized filter bank is shown in FIG. 2.
[0028] In the current invention, which we call the Fast LiftLT, we apply lapped transforms based on using fast lifting steps in an M-channel uniform linear-phase perfect reconstruction filter bank,
according to the generic polyphase representation of FIG. 1. In the lapped biorthogonal approach, the polyphase matrix E(z) can be factorized as
$E(z) = G_{K-1}(z)\, G_{K-2}(z) \cdots G_1(z)\, E_0(z),$   (1)
where
$G_i(z) = \frac{1}{2} \begin{bmatrix} U_i & 0 \\ 0 & V_i \end{bmatrix} \begin{bmatrix} I & I \\ I & -I \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & z^{-1} I \end{bmatrix} \begin{bmatrix} I & I \\ I & -I \end{bmatrix} \equiv \frac{1}{2}\, \Phi_i\, W\, \Lambda(z)\, W,$   (2)
and
$E_0(z) = \frac{1}{2} \begin{bmatrix} U_0 & U_0 J_{M/2} \\ V_0 J_{M/2} & -V_0 \end{bmatrix}.$   (3)
In these equations, I is the identity matrix.
[0029] The transform decomposition expressed by equations (1) through (3) is readily represented, as shown in FIG. 2, as a complete lattice replacing the "analysis" filter bank E(z) of FIG. 1. This
decomposition results in a lattice of filters having length L = KM. (K is often called the overlapping factor.) Each cascading structure G_i(z) increases the filter length by M. All U_i and V_i, i = 0, 1, . . . , K-1, are arbitrary M/2 × M/2 invertible matrices. According to a theorem well known in the art, invertible matrices can be completely represented by their singular value decomposition (SVD), given by U_i = U_{i0} Γ_i U_{i1}, V_i = V_{i0} Δ_i V_{i1}, where U_{i0}, U_{i1}, V_{i0}, V_{i1} are diagonalizing orthogonal matrices and Γ_i, Δ_i are diagonal matrices with positive elements.
[0030] It is well known that any M/2 × M/2 orthogonal matrix can be factorized into M(M-2)/8 plane rotations θ_i and that the diagonal matrices represent simply scaling factors α_i. Accordingly, the most general LT lattice consists of KM(M-2)/2 two dimensional rotations and 2M diagonal scaling factors α_i. The orthogonal matrix is factorized as a sequence of pairwise plane rotations θ_i as shown in FIG. 3.
[0031] It is also well known that a plane rotation can be performed by 3 "shears":
$\begin{bmatrix} \cos\theta_i & -\sin\theta_i \\ \sin\theta_i & \cos\theta_i \end{bmatrix} = \begin{bmatrix} 1 & \frac{\cos\theta_i - 1}{\sin\theta_i} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \sin\theta_i & 1 \end{bmatrix} \begin{bmatrix} 1 & \frac{\cos\theta_i - 1}{\sin\theta_i} \\ 0 & 1 \end{bmatrix}.$
This can be easily verified by computation.
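For example, the identity can be checked numerically in a few lines (an illustrative sketch only, not part of the claimed apparatus):

    # Sketch: numerical check that a plane rotation equals three lifting "shears".
    import numpy as np

    theta = 0.7                        # any angle with sin(theta) != 0
    shear = np.array([[1.0, (np.cos(theta) - 1.0) / np.sin(theta)],
                      [0.0, 1.0]])
    lift = np.array([[1.0, 0.0],
                     [np.sin(theta), 1.0]])
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    assert np.allclose(shear @ lift @ shear, rotation)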
[0032] In signal processing terminology, a "lifting" step is one which effects a linear transform of pairs of coefficients: $\begin{bmatrix} a \\ b \end{bmatrix} \rightarrow \begin{bmatrix} 1 + km & k \\ m & 1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix}.$ The
signal processing flow diagram of this operation is shown in FIG. 4. The crossing arrangement of these flow paths is also referred to as a butterfly configuration. Each of the above "shears" can be
written as a lifting step.
[0033] Combining the foregoing, the shears referred to can be expressed as computationally equivalent "lifting steps" in signal processing. In other words, we can replace each "rotation" by 3
closely-related lifting steps with butterfly structure. It is possible therefore to implement the complete LT lattice shown in FIG. 2 by 3KM(M-2)/2 lifting steps and 2M scaling multipliers.
[0034] In the simplest but currently preferred embodiment, to minimize the complexity of the transform we choose a small overlapping factor K=2 and set the initial stage E_0 to be the DCT itself.
Many other coding transforms can serve for the base stage instead of the DCT, and it should be recognized that many other embodiments are possible and can be implemented by one skilled in the art of
signal processing.
[0035] Following the observation in H. S. Malvar, "Lapped biorthogonal transforms for transform coding with reduced blocking and ringing artifacts," ICASSP97, Munich, April 1997, we apply a scaling factor to the first DCT's antisymmetric basis to generate synthesis LT basis functions whose end values decay smoothly to exact zero--a crucial advantage in blocking artifacts elimination. However, instead of scaling the analysis by √2 and the synthesis by 1/√2, we opt for 25/16 and its inverse 16/25 since they allow the implementation of both analysis and synthesis banks in integer arithmetic. Another value that works almost as well as 25/16 is 5/4. To summarize, the following choices are made in the first stage: the combination of U_{00} and V_{00} with the previous butterfly forms the DCT; Δ_0 = diag[25/16, 1, . . . , 1], and Γ_0 = U_{00} = V_{00} = I_{M/2}. See FIG. 2.
[0036] After 2 series of ±1 butterflies W and the delay chain Λ(z), the LT symmetric basis functions already have good attenuation, especially at DC (ω = 0). Hence, we can comfortably set U_1 = I_{M/2}.
[0037] As noted, V_1 is factorizable into a series of lifting steps and diagonal scalings. However, there are several problems: (i) the large number of lifting steps is costly in both speed and physical real-estate in VLSI implementation; (ii) the lifting steps are related; and (iii) it is not immediately obvious what choices of rotation angles will result in dyadic rational lifting multipliers. In the current invention, we approximate V_1 by (M/2)-1 combinations of block-diagonal predict-and-update lifting steps, i.e., $\begin{bmatrix} 1 & u_i \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ -p_i & 1 \end{bmatrix}.$
[0038] Here, the free parameters u_i and p_i can be chosen arbitrarily and independently without affecting perfect reconstruction. The inverses are trivially obtained by switching the order and the sign of the lifting steps. Unlike popular lifting implementations of various wavelets, all of our lifting steps are of zero-order, namely operating in the same time epoch. In other words, we simply use a series of 2 × 2 upper or lower triangular matrices to parameterize the invertible matrix V_1.
[0039] Most importantly, fast-computable VLSI-friendly transforms are readily available when u_i and p_i are restricted to dyadic rational values, that is, rational fractions having (preferably small) powers of 2 as denominators. With such coefficients, transform operations can for the most part be reduced to a small number of shifts and adds. In particular, setting all of the approximating lifting step coefficients to -1/2 yields a very fast and elegant lapped transform. With this choice, each lifting step can be implemented using only one simple bit shift and one addition.
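For illustration only (a sketch with hypothetical helper names, not part of the claimed apparatus), such a lifting step on integer samples and its exact inverse can be written as:

    # Sketch: a dyadic lifting step with coefficient -1/2, one shift and one add.
    def lift_forward(a, b):
        return a, b - (a >> 1)         # b' = b - floor(a / 2)

    def lift_inverse(a, b_lifted):
        return a, b_lifted + (a >> 1)  # recovers b exactly (integer to integer)

    for a, b in [(7, 3), (-5, 12), (1024, -1)]:
        assert lift_inverse(*lift_forward(a, b)) == (a, b)

This shift-and-add form is what gives the transform its integer-to-integer mapping property; the inverse simply adds back the same shifted value.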
[0040] The resulting LiftLT lattice structures are presented in FIGS. 5 and 7. The analysis filter shown in FIG. 5 comprises a DCT block 1, 25/16 normalization 2, a delay line 3 on four of the eight
channels, a butterfly structured set of lifting steps 5, and a set of four fast dyadic lifting steps 6. The frequency and impulse responses of the 8×16 LiftLT's basis functions are depicted in
FIG. 7.
[0041] The inverse or synthesis lattice is shown in FIG. 6. This system comprises a set of four fast dyadic lifting steps 11, a butterfly-structured set of lifting steps 12, a delay line 13 on four
of the eight channels, 16/25 inverse normalization 14, and an inverse DCT block 15. FIG. 7 also shows the frequency and impulse responses of the synthesis lattice.
[0042] The LiftLT is sufficiently fast for many applications, especially in hardware, since most of the incrementally added computation comes from the 2 butterflies and the 6 shift-and-add lifting
steps. It is faster than the type-I fast LOT described in H. S. Malvar, Signal Processing with Lapped Transforms, Artech House, 1992. Besides its low complexity, the LiftLT possesses many
characteristics of a high-performance transform in image compression: (i) it has high energy compaction due to a high coding gain and a low attenuation near DC where most of the image energy is
concentrated; (ii) its synthesis basis functions also decay smoothly to zero, resulting in blocking-free reconstructed images.
[0043] Comparisons of complexity and performance between the LiftLT and other popular transforms are tabulated in Table 1 and Table 2. The LiftLT's performance is already very close to that of the
optimal generalized lapped biorthogonal transform, while its complexity is the lowest amongst the transforms except for the DCT.
[0044] To assess the new method in image coding, we compared images coded and decoded with four different transforms:
[0045] DCT: 8-channel, 8-tap filters
[0046] Type-I Fast LOT: 8-channel, 16-tap filters
[0047] LiftLT: 8-channel, 16-tap filters
[0048] Wavelet: 9/7-tap biorthogonal.
[0049] In this comparison, we use the same SPIHT's quantizer and entropy coder, A. Said and W. A. Pearlman, "A new fast and efficient image coder based on set partitioning in hierarchical trees,"
IEEE Trans on Circuits Syst. Video Tech., vol. 6, pp. 243-250, June 1996, for every transform. In the block-transform cases, we use the modified zero-tree structure in T. D. Tran and T. Q. Nguyen, "A
lapped transform embedded image coder," ISCAS, Monterey, May 1998, where each block of transform coefficients is treated analogously to a full wavelet tree and three more levels of decomposition are
employed to decorrelate the DC subband further. Table 1 contains a comparison of the complexity of these four coding systems, comparing numbers of operations needed per 8 transform coefficients:
Table 1:
  Transform                  Multiplications   Additions   Shifts
  8×8 DCT                           13             29         0
  8×16 Type-I Fast LOT              22             54         0
  9/7 Wavelet, 1-level              36             56         0
  8×16 Fast LiftLT                  14             51         6
[0050] In such a comparison, the number of multiplication operations dominates the "cost" of the transform in terms of computing resources and time, and number of additions and number of shifts have
negligible effect. In this table, it is clear that the fast LiftLT is almost as low as the DCT in complexity and more than twice as efficient as the wavelet transform. Table 2 sets forth a number of
different performance measures for each of the four coding methods:
Table 2:
  Transform                  Coding Gain (dB)   DC Atten. (-dB)   Stopband Atten. (-dB)   Mir. Freq. Atten. (-dB)
  8×8 DCT                          8.83              310.62              9.96                    322.1
  8×16 Type-I Fast LOT             9.2               309.04             17.32                    314.7
  9/7 Wavelet, 1-level             9.62              327.4              13.5                      55.54
  8×16 Fast LiftLT                 9.54              312.56             13.21                    304.85
The fast LiftLT is comparable to the wavelet transform in coding gain and stopband attenuation and significantly better than the DCT in mirror frequency attenuation (a figure of merit related to
[0051] Reconstructed images for a standard 512×512 "Barbara" test image at 1:32 compression ratio are shown in FIG. 8 for aesthetic and heuristic evaluation. Top left 21 is the reconstructed image for the 8×8 DCT (27.28 dB PSNR); top right shows the result for the 8×16 LOT (28.71 dB PSNR); bottom left is the 9/7-tap wavelet reconstruction (27.58 dB PSNR); and bottom right, 8×16 LiftLT (28.93 dB PSNR). The objective coding results for the standard 512×512 "Lena," "Goldhill," and "Barbara" test images (PSNR in dB) are tabulated in Table 3:
Table 3:
                        Lena                             Goldhill                          Barbara
  Comp.   9/7 WL   8×8     8×16    8×16     9/7 WL   8×8     8×16    8×16     9/7 WL   8×8     8×16    8×16
  Ratio   SPIHT    DCT     LOT     LiftLT   SPIHT    DCT     LOT     LiftLT   SPIHT    DCT     LOT     LiftLT
    8     40.41    39.91   40.02   40.21    36.55    36.25   36.56   36.56    36.41    36.31   37.22   37.57
   16     37.21    36.38   36.69   37.11    33.13    32.76   33.12   33.22    31.4     31.11   32.52   32.82
   32     34.11    32.9    33.49   34       30.56    30.07   30.52   30.63    27.58    27.28   28.71   28.93
   64     31.1     29.67   30.43   30.9     28.48    27.93   28.34   28.54    24.86    24.58   25.66   25.93
  100     29.35    27.8    28.59   29.03    27.38    26.65   27.08   27.28    23.76    23.42   24.32   24.5
  128     28.38    26.91   27.6    28.12    26.73    26.01   26.46   26.7     23.35    22.68   23.36   23.47
PSNR is an acronym for peak signal-to-noise ratio: it is ten times the base-10 logarithm of the ratio of the maximum possible amplitude squared to the mean square error of the reconstructed signal, expressed in decibels.
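As an illustrative aside (not part of the patent disclosure), the PSNR figures reported in these tables can be computed from the mean square error as in the following sketch, which assumes 8-bit image samples with a peak value of 255; the function name is a placeholder, not taken from the patent.

#include <math.h>

/* Illustrative helper (assumed name and 8-bit peak value): PSNR in dB given
   the mean square error between the original and the reconstructed image. */
double psnr_db(double mse)
{
    const double peak = 255.0;                   /* maximum amplitude of an 8-bit sample */
    return 10.0 * log10((peak * peak) / mse);    /* PSNR = 10*log10(peak^2 / MSE) */
}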
[0052] The LiftLT outperforms its block transform relatives for all test images at all bit rates. Comparing to the wavelet transform, the LiftLT is quite competitive on smooth images--about 0.2 dB
below on Lena. However, for more complex images such as Goldhill or Barbara, the LiftLT consistently surpasses the 9/7 -tap wavelet. The PSNR improvement can reach as high as 1.5 dB.
[0053] FIG. 8 also shows the reconstruction performance in Barbara images at 1:32 compression ratio for heuristic comparison. The visual quality of the LiftLT reconstructed image is noticeably
superior. Blocking is completely avoided whereas ringing is reasonably contained. Top left: 8.times.8 DCT, 27.28 dB. Top right: 8.times.16 LOT, 28.71 dB. Bottom left: 9/7 -tap wavelet, 27.58 dB.
Bottom right: 8.times.16 LiftLT, 28.93 dB. Visual inspection indicates that the LiftLT coder gives at least as good performance as the wavelet coder. The appearance of blocking artifacts in the DCT
reconstruction (upper left) is readily apparent. The LOT transform result (upper right) suffers visibly from the same artifacts even though it is lapped. In addition, it is substantially more complex
and therefore slower than the DCT transform. The wavelet transform reconstruction (lower left) shows no blocking and is of generally high quality for this level of compression. It is faster than the
LOT but significantly slower than the DCT. Finally, the results of the LiftLT transform are shown at lower right. Again, it shows no blocking artifacts, and the picture quality is in general
comparable to that of the wavelet transform reconstruction, while its speed is very close to that of the bare DCT.
* * * * * | {"url":"http://patents.com/us-20080075377.html","timestamp":"2014-04-21T09:49:13Z","content_type":null,"content_length":"65756","record_id":"<urn:uuid:d06b2b96-f320-4db6-b1db-7db592f7f85a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00207-ip-10-147-4-33.ec2.internal.warc.gz"} |
Name the intersection of the two planes shown.
Discussion on Time as a Concept/Theory
Basically trying to capture the discussion that's been going on in chat regarding Time. I'm totally behind and just catching up, but I love hearing about this sort of stuff. (BTW I *know* it's
not a question, but hey, I get to bend the rules *sometimes* right?)
We're kind of jumping in halfway through the discussion, so converting to this format might be a bit messy, but here is my assertion summarized. Time is a dimension of reality, along with the
three spatial dimensions. We can define forwards in time as the direction in which total entropy increases. Correspondingly, then, backwards in time is the direction in which total entropy
decreases. This produces a coherent system and axiomatization of the notion of time.
The real question that there actually is concerns consciousness moreso than time itself. We recognize that we are consciousness put into a particular narrative, that combination of space and time
that we view as our reality. It is arguable that consciousness could exist independent of that narrative, but then you're talking more about philosophy than you are about physics.
The debate began here: nbouscal: color is just a human construct agentx5: I agree, nbouscal nbouscal: because we just happen to be able to visually interpret a particular slice of the
electromagnetic spectrum agentx5: As is time itself Alternate point of view: Time is an abstract concept, non-real. It is a measured value, it's an extrinsic property, sure I can agree with that.
But to say time is concrete is something I strongly disagree with. Why? Goes back to the definition of the word first: 1: naming a real thing or class of things 2: formed by coalition of particles
into one solid mass 3: or... a): characterized by or belonging to immediate experience of actual things or events b): specific, particular <i.e.: a concrete proposal> c): real, tangible <i.e.:
concrete evidence> What is not concrete, is abstract is it not? "Backwards in time := Direction in which entropy decreases." -- my distinguished colleague, @nbouscal's , in describing his point
of view This is a concept I disagree with. Entropy is defined by the Gibbs Free Energy Equation as: \(\Delta G = \Delta H - T \Delta S \) where \(\Delta G < 0 \), "Spontaneous" \(\Delta G = 0 \), "Equilibrium" \(\Delta G > 0 \), "Nonspontaneous" \(\Delta H =\) change in heat energy (in Joules, BTU, etc.) In layman's terms I could say -\(\Delta\)H means a release of heat from
the system/reaction/etc out into the environment (heat release by a fire, steam condensing into water, etc.), +\(\Delta\)H is heat energy taken out of the environment by the system (energy put
into boiling water to steam, chemical coldpacks that you administer as First Aid to reduce swelling). T is temperature (average thermal kinetic energy) and \(\Delta\)S is what is known as
entropy. Other ways of writing the metric standard for energy (the Joule), in US units it's ft-lbs or BTU: \(\rm 1 \ J = \rm 1\ \frac{kg \cdot m^2}{s^2} = 1\ N \cdot m = \rm 1\ Pa \cdot m^3={}\rm
1\ W \cdot s \) How is time measured? The base unit of time (in all systems we use) is the second. The official defintion since 1967, the second has been defined to be: "The duration of
9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 (\(\large {}^{133}_{55}Cs\)) atom." It's measured as
an abstract unit of change. Change itself is what's being measured, not time. Finally I'd point to why "time travel" is bogus: * Special Relativity shows us that time should slow for the traveler
as they travel at a faster speed (magnitude of velocity), and indeed this had held true in experiments so far. * Distance = | Displacement | or Distance = \(\large \sum_{1}^{n}\) | Displacement\
(_n\) | In the same way that distances can't be negative, neither can time. You cannot therefore say that just because relative entropy is negative that this represents a change back in time! In
fact I can show that with a simple equation: 2n CO\(_2\) + 4n H\(_2\)O + photons → 2(CH\(_2\)O)n + 2n O\(_2\) + 2n H\(_2\)O This is better known as the general reaction equation for
photosynthesis, which is endothermic. There is no space time distortion or reversal going on where leaves are. Metaphor: "Burn a forest to ash and it can be regrown, but it will never be like it
was before" Even if you CAN reverse everything exactly, it still doesn't change the fact that change occurred. Change occurred and TIME is said to have passed. There are in fact many instances in
nature that would make the case that time moves in spiraling cycles, a combination of both linear and cyclical behaviors. This would agree with the notion of time having an "arrow", a direction
if you will, but it would not PROVE time as concrete as my colleague claims.
Thank you for making this topic/question @cshalvey :-) It should prove to be an interesting discussion, as long as it's taken strictly as a matter of logic and philosophy. I would expect that
some tempers might flare in this, as this is one of the harder things to explain about the world we perceive as human beings.
@agentx5 Completely agree. Also, in situations where there isn't a universally agreed upon answer, opinions, emotions, and personal philosophy are bound to get involved. For example, I don't see
people having these arguments about the conversion equation between Fahrenheit and Celsius ;)
Your definition of second is not an abstract measurement of change, it's a duration. Duration only makes sense within the context of time as a dimension. Duration is a norm defined on that
dimension, and the direction is provided by increase in entropy. You can't say that time is an abstract concept any more than you can say that space is an abstract concept. Throughout the
discussion you've been talking about reversals, about negative distances, and about a bunch of other stuff that is completely unrelated to anything I've said. I'm not talking about traveling
backwards in time. I'm not talking about reversing time. I'm talking about time as a dimensional basis for reality, and defining that basis in terms of entropy. To draw a direct analogy: we can
define the x-axis of a coordinate graph in terms of a unit vector. We can then talk about the negative unit vector, which goes in the opposite direction. Despite that definition, we still have
the case where all norms are positive. The same thing applies to time. We can talk about what it means to go in the opposite direction of the basis vector for time without needing negative norms.
"Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random
element decreases the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of
randomness is the only thing which cannot be undone. I shall use the phrase ‘time’s arrow’ to express this one-way property of time which has no analogue in space." - Arthur Eddington
I attest time cannot be drawn as a vector. The "arrow of time" Eddington says here is an abstraction. First let's go back to how we define vectors in space in the first place with a little
airplane :-) |dw:1342641518710:dw| Now if the airplane flies to one airport an back it's displacement is zero. Using what I said previously: 100 km heading forward to + 100 km heading back from
if we use sign conventions letting +x be going to, and -x being heading the other way. 100 km - 100km for a net change (displacement) 0 km But distance? | 100 km | + | -100 km | = 100 km + 100 km
= 0 km Question, if this was in distance per time how would it change? See what I mean? Here's another example, a pendulum's period varies only base on the force of gravity and the length of the
rotating arm. |dw:1342642271667:dw| It is said to sweep out equal arc lengths in the same time. Think of a metronome if you need a more concrete example. \(\large T \approx 2\pi \sqrt{\frac{L}
{g}}\) T = period in units of time L = length of the rod/arm g = local acceleration due to gravity Now, question. According to my interpretation I proposed, are we not REALLY just measuring the
effect for the change to repeat itself? This is very hard to put in words, I don't have anything in the English language I can do to properly describe this. But when we said period we're really
measuring the effect of gravity to make a sequence of event repeat in a cycle, over and over.
Dimensions should have the ability to draw vectors, magnitudes that have direction. Time only has one direction, as you stated. Therefore I feel it can't truly be a "4th dimension".
Ack I have a typo: | 100 km | + | -100 km | = 100 km + 100 km = 200 km = distance. Forgot to fix when using copy & paste, and I can't edit without deleting the drawings...
First off, a vector is not defined as a magnitude with direction. A vector is defined as an element of a vector space. I know we're on the Physics board, not the Math one, but that distinction is
important and it really bothers me when people miss it. Second, every basis vector is unidirectional, time is no different from the standard here. You define the basis vector, and then the
opposite direction is derived from the inverse of that vector. It is the same with time. There is no reason to take any issue with the inverse of a vector in the time dimension, we do it all the
time in regular speech. "A year ago" refers to a point backwards in time (direction) a distance of one year (magnitude). So even using the inaccurate physics definition of a vector, we're still
fine. You're trying to use spatial intuition to argue against time being a vector, which misses the fact that time is a non-spatial dimension. It is distinct and separate from the spatial
dimensions. The three spatial dimensions, together with the fourth dimension of time, form the basis of the 'vector space' that we call reality.
Give me a explanation of what you feel defines "vector space"?
A vector space over a field is a set with two binary operations that satisfy certain axioms. Specifically, closure, associative, commutative, inverses, identities, and distributive laws.
If you went to the wiki page for Vector instead of the wiki page for Euclidean vector, you'd find the proper definition. Physicists have a habit of defining vectors improperly. It's a systemic
problem. There are vectors that do not make sense under that definition.
Lots of options, but the definition is given in the first sentence: "Many special instances of the general definition of vector as \(\mathbf{\text{an element of a vector space}}\) are listed
below." (emphasis mine)
Explain to me how vectors in a function space can be accurately defined as objects with magnitude and direction.
Looking at the axioms here: http://en.wikipedia.org/wiki/Vector_space#Definition Show me how time can mathematically work those? Because I see an issue with the units right away (also a common
mistake in physics, forgetting to put your units in as you work with the numbers)
What issue? If you have two seconds and add five seconds you have seven seconds. If you have two seconds and multiply by the scalar four, you have eight seconds. There is no issue.
But... You can have 5 m + 5 m = 10 m or 5 m - 5 m = 0 m You can have 5 s + 5 s = 10 s , but you cannot have 5 s - 5 s = 0 s. The "inverse elements of addition" axiom is violated, agreed?
No, not agreed. You can talk about the inverse of a second, a negative second, as being one second backwards in time. We can set the origin as the present moment and talk about positive and
negative seconds as forwards and backwards in time from the present moment. So, a vector from the origin (now) to a year ago would be the inverse of a vector from the origin to a year from now.
You're still trying to use physical, spatial notions as the foundation of your argumentation. You have to leave those at the door and argue from axioms.
"being one second backwards in time" How can you say that? It doesn't happen in reality. You cannot go back in time... There's an issue with negative values for time, if you're treating it as a
vector. cos(\(\pi\)) = cos(\(180^o\)) = -1 I'm gathering not all vectors have magnitude & direction. Give an example?
To convert your formula into words: "Five seconds before five seconds from now is right now." That is fine, nobody would have a problem with that statement. You can talk about back in time. You
can say something happened a year ago. There is no problem here. An example is a vector in a function space. Not sure how you could ascribe direction and magnitude to a vector in a function
Then if the axioms for treating time like a vector is true, then the axioms listed should apply to something as complex as this: |dw:1342644420216:dw| But it's non-real, the parametric equations
for time in this 2D plane wouldn't even make sense from what I see.
It's easy to argue 5 seconds before 5 seconds have passed is right now, but a bit harder if you want to start treating time has having to obey those properties and still apply meaning to
observations witnessed.
You have to recognize the distinction between something being coherent with the axioms and something correlating to your intuitive senses of the world. You can make whatever vector space you
want, as long as it adheres to the axioms. Whether or not it will correlate to the real world is a different question. I'm simply asserting that a vector space whose basis is three spatial unit
vectors and one time unit vector correlates to the real world. I still don't see any issue, and I don't understand what issue you're having. It's perfectly easy to talk about time as a vector. A
year from now is a year from now. A year ago is a year ago. A year before a year from now is today. Two times a year from now is two years from now. All of the axioms hold easily, there is no
struggle at all.
But what you're still measuring is the effect of change itself... Any way you want to measure it.
Another example, by the way, that is fun to try to wrap your head around is a vector in \(\mathbb{R}^\omega\). I have been completely unable to figure out how to think of those vectors as having
direction, because I don't know how to visualize an infinite-dimensional space. No, what I'm measuring is duration. I'm measuring time. I'm defining a direction on time using entropy as a clock,
and I'm defining a magnitude on time using duration.
It's very simple. I'm just defining a basis vector. That basis vector points towards increase in entropy. Everything else just follows naturally.
Within the framework of Einstein's relativity, time is simply one of four local coordinates that can be used to measure the interval between events. For example, if I agree to meet Ilsa at the
Gare de Lyon to take the last train to Marseilles on the day the Germans march into Paris, then the time of this event (4.45 PM, 14 June 1940) is just one of the coordinates I need to specify, in
my reference frame, the interval between that event and her later walking into my gin joint, of all the gin joints in all the world, in Casablanca in December 1941. The interval has a part that
consists of subtracting spacial coordinates -- it's 1,172 miles from Paris to Casablanca -- and a part that consists of subtracting the time coordinate -- it's 18 months between June of 1940 and
December of 1941. The math of relativity specifies exactly how you do the subtraction, and add the results up to get an unambigious measure of interval on which all observers will agree. An
important fact, however, is that all observers need NOT agree on how much of a given interval is due to spacial separation, and how much to temporal. For Rick, the last time he saw Ilsa can not
only seem like yesterday, it can for all experimental purposes *be* yesterday, depending on his frame of reference, while for Ilsa, in another frame, it can be yesterday, last week, 18 months
ago, or 18,000 years ago. (The only thing on which both Rick and Ilsa must agree is that Paris came before Casablanca.) In short, time does not have any independent and special existence. It is
conceptually no different from a space coordinate. To be sure, we have the odd observational result that time *seems* to be profoundly different than space, to us. Exactly why that is so is still
a question that has not been fully answered.
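(For reference, the frame-independent quantity being described here is the invariant interval of special relativity, \( s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2 \): observers in different frames can disagree about \( \Delta t \) and about the spatial separation individually, but they all agree on \( s^2 \), and for timelike-separated events such as Paris and Casablanca they also agree on the order.)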
Are we really arguing that time and space are the same thing? x, y, and z can all be changed depending our perspective. We can determine a surface area as being the product of xy, yz, or zx. You
can never determine a surface area as being a product of x and t, as it is nonsensical. If time and space are equivalent, m/s is a radian and m/s^2 is a hertz and wavenumber. Time and space are
entirely different, and anything saying they're equivalent is nonsense. It's like equating mass and volume as being the same thing.
I'm not presently arguing that time and space are the same thing. I'm simply arguing that time and space are both dimensions and that they both exist, not that they are equivalent. I don't think
such an equivalence is quite so nonsensical as you think, though. Carl, your point that all they must agree on is the order is a very interesting one. Now I'm going to spend my whole shift this
evening thinking about the topology of time. Thanks :P
Math Help
January 10th 2009, 04:59 AM #1
Jan 2008
Q1: If a = i + 2j - k and b = j + k, find a unit vector perpendicular to both a and b.
Q2: The points P and Q have position vectors a + b and 3a - 2b respectively when referred from the origin O. Given that OPQR is a parallelogram, express the vectors PQ and PR in terms of a and b.
[I have found PQ = 2a - 3b; PR = a - 4b] By evaluating 2 scalar products, show that if OPQR is a square, then|a| $^2$= 2|b| $^2$.
Thank you for helping!
Q1: If a = i + 2j - k and b = j + k, find a unit vector perpendicular to both a and b.
Q2: The points P and Q have position vectors a + b and 3a - 2b respectively when referred from the origin O. Given that OPQR is a parallelogram, express the vectors PQ and PR in terms of a and b.
[I have found PQ = 2a - 3b; PR = a - 4b] By evaluating 2 scalar products, show that if OPQR is a square, then|a| $^2$= 2|b| $^2$.
Thank you for helping!
1. Take the cross product of $\mathbf{a}$ and $\mathbf{b}$ and then divide this vector by its length.
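For example, carrying that through with the given vectors: $\mathbf{a} \times \mathbf{b} = (1, 2, -1) \times (0, 1, 1) = (2\cdot 1 - (-1)\cdot 1,\ (-1)\cdot 0 - 1\cdot 1,\ 1\cdot 1 - 2\cdot 0) = (3, -1, 1)$, which has magnitude $\sqrt{11}$, so one unit vector perpendicular to both is $\frac{1}{\sqrt{11}}(3\mathbf{i} - \mathbf{j} + \mathbf{k})$ (its negative also works).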
Q1: If a = i + 2j - k and b = j + k, find a unit vector perpendicular to both a and b.
Q2: The points P and Q have position vectors a + b and 3a - 2b respectively when referred from the origin O. Given that OPQR is a parallelogram, express the vectors PQ and PR in terms of a and b.
[I have found PQ = 2a - 3b; PR = a - 4b] By evaluating 2 scalar products, show that if OPQR is a square, then|a| $^2$= 2|b| $^2$.
Thank you for helping!
2. Drawing a picture always helps.
Since it's a parallelogram, notice that OR is parallel and of equal magnitude to PQ and that QR is parallel and of equal magnitude to OP.
What do you have when vectors are parallel and of equal length? They are EQUAL.
^ Thank you for your suggestions! I got stuck at part that requires the evaluation of the scalar product...
What sides touch?
OP touches PQ
PQ touches QR
QR touches OR
OP touches OR.
If it's a square, then the angles should be right angles. Evaluating any of the dot products of touching sides should give 0 if this is the case.
Then show that the lengths are equal, and you've got a square.
See how you go from there.
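For reference, one way to finish it off from there: the right angle gives $\vec{OP}\cdot\vec{PQ} = (\mathbf{a}+\mathbf{b})\cdot(2\mathbf{a}-3\mathbf{b}) = 2|\mathbf{a}|^2 - \mathbf{a}\cdot\mathbf{b} - 3|\mathbf{b}|^2 = 0$, and the equal side lengths give $|\vec{OP}|^2 = |\vec{PQ}|^2$, i.e. $|\mathbf{a}|^2 + 2\mathbf{a}\cdot\mathbf{b} + |\mathbf{b}|^2 = 4|\mathbf{a}|^2 - 12\mathbf{a}\cdot\mathbf{b} + 9|\mathbf{b}|^2$. Substituting $\mathbf{a}\cdot\mathbf{b} = 2|\mathbf{a}|^2 - 3|\mathbf{b}|^2$ from the first equation into the second leads to $25|\mathbf{a}|^2 = 50|\mathbf{b}|^2$, i.e. $|\mathbf{a}|^2 = 2|\mathbf{b}|^2$ as required.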
Write an equation for the parabola that has a vertex at (-2,3) and with a vertical stretch of 5.
Please explain fully.
Start with the "standard" parabola `y=x^2.` First we can take care of the vertical scale factor we need. The parabola described by
accomplishes the vertical stretch. This still has vertex `(0,0)` though. To change that, replace `x` by `x+2` and `y` by `y-3.` This gives
`y-3=5(x+2)^2,` or equivalently `y=5(x+2)^2+3.` Since `5(x+2)^2>=0` for all `x,` the right hand side takes the minimum value of 3 when `x=-2,` proving that the vertex is `(-2,3).`
Thus the equation we want is `y=5(x+2)^2+3.` As a quick check, `x=-2` gives `y=3,` the required vertex.
Dowtown Carrier Annex, CA Statistics Tutor
Find a Dowtown Carrier Annex, CA Statistics Tutor
...I am confident that my skills as an editor, proofreader, and writer will also be a benefit to anyone who needs help with a desktop publishing task. I have two years of experience preparing
students of grade levels 6-8 for the ISEE. I was employed at an after-school test prep academy as a teacher of an SSAT/ISEE test prep class, and I am familiar with the format and material tested.
73 Subjects: including statistics, reading, English, chemistry
...Additionally, since I am certified in SAT prep, if their SAT score is not competitive enough for the desired school, I offer SAT tutoring to help them raise their score. I have taken Bio
statistics as part of my studies during my first two years of medical school at NYCOM. I am familiar with fin...
43 Subjects: including statistics, English, reading, writing
...I have provided guidance to classmates on the use of SAS. As a graduate student I gained a great deal of experience in public speaking. I have given talks in classes I have and also given
seminars on research I have done.
31 Subjects: including statistics, English, reading, literature
...I started tutoring when I was 15. My students have stayed in touch after more than 20 years, being most of them successful professionals in various technical fields such as Engineering. My
resume comprises my career accomplishments as International Marketing Director for a software company in t...
20 Subjects: including statistics, Spanish, physics, calculus
...I am a product of public K-12 education and understand where kids are coming from who attend those schools. I also know that, particularly in math and the hard sciences, much of my work will
be confidence building and challenging the negative self-talk about "not being a math person." I love what I do! Please feel free to ask if you have any questions.
52 Subjects: including statistics, chemistry, English, finance
RE: st: help with my simple integration command in mata
RE: st: help with my simple integration command in mata
From Glenn Goldsmith <glenn.goldsmith@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject RE: st: help with my simple integration command in mata
Date Tue, 16 Jun 2009 00:36:52 +0100
Hi Nathan,
1. The error is occurring because your test function -funky()- needs
to -return()- 10*x rather than just displaying it as it does now.
Replacing the -10*x- line with -return(10*x)- should do the trick.
2. It looks like there might be something else slightly odd going on
with your program though. Your loop calculates 15 values (0 through
14) but your return statement only uses 14 of these (2 through 15). I
assume it should use all 15, with the multipliers
(1,4,2,4,2,...,2,4,1) rather than the (1,2,4,2,...,2,4,1) you
currently have?
3. In terms of general debugging, mata can be a little difficult; but
inserting -mata set matalnum on- after your -mata clear- line will at
least give you line numbers for the errors. (You'll want to turn it
off after testing though, because when it's on it stops the compiler
from optimizing the code.)
4. There are a couple of other ways you could clean up your code to
achieve what you want slightly more efficiently. A modified version is
mata clear
mata set matalnum off
scalar NDintegration(                               // to perform integration via Simpson's rule
        pointer(real scalar function) scalar f,     //points to the function to be integrated
        real scalar lower,                          //lower bound of integration
        real scalar higher)                         //upper bound
{
        real scalar width                           //I like declarations, your mileage may vary
        real colvector Y
        real rowvector X
        if (lower > higher) _error("limits of integration reversed")   //make sure upper > lower; NB: better to use _error() than return()
        width = (higher-lower) / 14                 //calculates width of each "slice"
        Y = J(15,1,.)                               //creates an empty 15 x 1 column vector
        for (i=1; i<=15; i++) {
                x = lower + ((i-1)*width)
                Y[i,1] = (*f)(x)                    //places function evaluations directly into Y, avoiding all the copying you were doing previously
        }
        X = (1,4,2,4,2,4,2,4,2,4,2,4,2,4,1)         //amended 1 x 15 row vector of multipliers
        return( (width/3) * (X*Y) )                 //matrix multiplication makes this quicker and more readable
}

scalar funky(real scalar x) {                       //declares the output to be a scalar; otherwise you could run into problems in the main program
        return(10*x)
}

f = &funky()
NDintegration(f, 0, 10)
Hi all,
I have written the following bit of code that uses Simpson's method to
approximate definite integrals. It doesn't seem to work: it responds
that there is a conformability error, which I cannot seem to locate.
I would welcome any general tips on debugging programs in mata, and
especially welcome any specific tips on refining this code.
Thanks so much,
mata clear
function NDintegration(                              // to perform integration via simpson's rule
        pointer(real scalar function) scalar f,      //points to the function to be integrated
        real scalar lower,                           //lower bound of integration
        real scalar higher)                          //upper bound
{
        if (lower > higher) return("limits of integration reversed")   //make sure upper > lower
        width = (higher-lower) / 14                  //calculates width of each "slice"
Y = (.) //creates an empty matrix
for (i=0; i<=14; i++) {
x = lower + (i*width)
yi = (*f)(x)           //is this part correct? I'm not sure I've used the pointer correctly
d = yi
Y = (Y, d)
}
st_matrix("Y", Y)
return( (width/3) * (1*st_matrix("Y")[1,2] + 2*st_matrix("Y")[1,3] +
4*st_matrix("Y")[1,4] + 2*st_matrix("Y")[1,5] + 4*st_matrix("Y")[1,6]
+ 2*st_matrix("Y")[1,7] + 4*st_matrix("Y")[1,8] +
2*st_matrix("Y")[1,9] + 4*st_matrix("Y")[1,10] +
2*st_matrix("Y")[1,11] + 4*st_matrix("Y")[1,12] +
2*st_matrix("Y")[1,13] + 4*st_matrix("Y")[1,14] +
1*st_matrix("Y")[1,15]) )
//this last bit simply sums up the y values and multiplies them by the appropriate fudge factors
}

function funky(real scalar x) {        //I tried to use this simple function to test the integration routine...no luck
        10*x
}
f = &funky()
NDintegration(f, 0, 10)
Doubly Linked List Tutorial
By Pritam
Jul 30th, 2012
Continuing with our tutorials series on data structure, let’s discuss Doubly Linked List in this article. Earlier in this series of article we discussed Singly Linked List, it’s definition and
various operations which can be performed on it along with code snippets. Let’s now dig into Doubly Linked List and understand what all operations are supported by it.
A doubly linked list, in computer science, is a linked data structure that consists of a set of sequentially linked records called nodes. Each node contains two fields, called links, that are
references to the previous and to the next node in the sequence of nodes. The beginning and ending nodes’ previous and next links, respectively, point to some kind of terminator, typically a sentinel
node or null, to facilitate traversal of the list.
Noticeable differences between singly and doubly linked lists :
• Although doubly linked lists require more space per node than singly linked lists, and their elementary operations are more expensive, they are often easier to manipulate because they allow sequential access to the list in both directions.
• In a doubly linked list, insertion or deletion of a node whose address is given can be carried out in a constant number of operations. In a singly linked list the same operation also requires the address of the predecessor node, which is not a problem in a doubly linked list since we can move in both directions.
The below figure show how a doubly linked list looks like :
Let’s now directly jump over to various operations which can be performed over doubly linked list:
• Add a node in a list at beginning or at end or in between.
• Delete a node from list at specific location.
• Reverse a list.
• Count nodes present in the list.
• Print the list to see all the nodes present in the list.
First let’s have a look at the node definition :
struct node{
    int data;              /* value stored in the node */
    struct node *prev;     /* link to the previous node */
    struct node *next;     /* link to the next node */
};
It’s evident from the above node definition that a node in a doubly linked list have 2 links (next and prev) and one data value (data).
• Add a node in a list.
Insertion of a node in a linked list can be done at three places, viz. at start, in between at a specified location or at end.
□ Inserting a node at the start of list :
Algorithm :
1. Update the next pointer of the new node to the head node and make prev pointer of the new node as NULL
2. Now update head node’s prev pointer to point to new node and make new node as head node.
□ Inserting a node in between of list :
Algorithm :
1. Traverse the list to the position where the new node is to be inserted.Let’s call this node as Position Node (we have to insert new node just next to it).
2. Make the next pointer of new pointer to point to next node of position node. Also make the prev point of new node to point to position node.
3. Now point position node’s next pointer to new node and prev node of next node of position node to point to new node.
□ Inserting a node at the end of the list :
Algorithm :
1. Traverse the list to end. Let’s call the current last node of list as Last node.
2. Make next pointer of New node to point to NULL and prev pointer of new node to point to Last node.
3. Update next pointer of Last node to point to new Node.
Thus we see how easily we can add a node to a list. Let us now write code for all three cases. Please note here that we are passing a double pointer to the function, as we may need to change the head pointer.
void AddNode(struct node **head, int position, int value)
{
    int currentNodePosition = 1;
    struct node *temp, *newNode;
    newNode = (struct node *)malloc(sizeof(struct node));
    if(newNode == NULL) { printf("\nError while allocating memory to new Node"); return; }
    newNode->data = value;
    if(*head == NULL)                      /* list is empty: new node becomes the head */
    {
        printf("\nList Empty.");
        newNode->next = NULL;
        newNode->prev = NULL;
        *head = newNode;
    }
    else if(position == 1)                 /* insert at the beginning of the list */
    {
        printf("\nInserting node at the beginning of the list");
        newNode->next = *head;
        newNode->prev = NULL;
        (*head)->prev = newNode;           /* note the parentheses: *head->prev would parse as *(head->prev) */
        *head = newNode;
    }
    else
    {
        temp = *head;
        while((currentNodePosition < position-1) && (temp->next != NULL))
        {
            temp = temp->next;
            currentNodePosition++;
        }
        if(temp->next == NULL)             /* reached the end: insert at the last position */
        {
            printf("\nInserting node at the last.");
            newNode->next = NULL;
            newNode->prev = temp;
            temp->next = newNode;
        }
        else                               /* insert in between, just after temp */
        {
            printf("\nInserting node at position : %d", position);
            newNode->next = temp->next;
            newNode->prev = temp;
            temp->next->prev = newNode;
            temp->next = newNode;
        }
    }
}
• Delete a node from a list.
As with insertion, deletion of a node can also be done at three places, viz. from the start, from an intermediate position, or from the end.
□ Deletion of node from start of list :
Algorithm :
1. Create a temporary node which will point to the same node where Head pointer is pointing.
2. Now move the head pointer to point to the next node. Also change the heads prev to NULL. Then dispose off the node pointed by temporary node.
□ Deletion of node from end of list :
Algorithm :
1. Traverse the list till end. While traversing maintain the previous node address also. Thus when we reach at end then we have one pointer pointing to NULL and other pointing to penultimate
2. Update the next pointer of penultimate node to point to NULL.
3. Dispose off the Last Node.
□ Deletion of node from an intermediate position :
Algorithm :
1. As similar to previous case maintain two pointer, one pointing to the node to be deleted and other to the node previous to our target node (Node to be deleted).
2. Once we reach our target node, change previous node next pointer to point to next pointer of target node and make prev pointer of next node of target node to point to previous node of
target node.
3. Dispose off the target node.
I know there is a lot of confusion in the algorithm. To clear the confusion have a look at the code part below.
void DeleteNodeFromList(struct node **head, int position)
{
    struct node *temp, *temp2;
    int currentNodePosition = 1;
    temp = *head;
    if(*head == NULL)
    {
        printf("\nList is empty");
        return;
    }
    if(position == 1)                       /* delete the head node */
    {
        printf("\nDeleting node from starting");
        *head = temp->next;
        if(*head != NULL)
            (*head)->prev = NULL;
        free(temp);
        return;
    }
    while((currentNodePosition < position) && (temp->next != NULL))
    {
        temp = temp->next;
        currentNodePosition++;
    }
    if(temp->next == NULL)                  /* deleting the last node */
    {
        printf("\nDeleting node from last");
        temp2 = temp->prev;
        temp2->next = NULL;
    }
    else                                    /* deleting an intermediate node */
    {
        temp2 = temp->prev;
        temp2->next = temp->next;
        temp->next->prev = temp2;
    }
    free(temp);
}
• Reversing a list.
This is one of the favorite question which interviewer is bound to ask while interviewing a candidate on data structures. There are many ways to accomplish this but we will explain the way we
implemented in our example.
Algorithm :
□ First swap prev and next pointer of all nodes.
□ Point head to the tail node as tail node is now our new head node.
This method is pretty straight forward and fast also. Below is the code snippet of reversing a Doubly Linked List.
void ReverseList(struct node **head)
{
    struct node *temp, *temp2;
    temp = *head;
    temp2 = NULL;
    while(temp != NULL)
    {
        temp2 = temp->next;
        temp->next = temp->prev;       /* swap the two links of the current node */
        temp->prev = temp2;
        if(temp->prev == NULL)         /* old next was NULL: this was the tail, so it becomes the new head */
            *head = temp;
        temp = temp->prev;             /* move on to what used to be the next node */
    }
}
• Counting nodes in list/ Printing content of list.
This one is the easiest operation on a Doubly Linked List. All you have to do is traverse the list; while traversing, either keep incrementing a counter (when counting nodes) or print the data of each node (when printing the list). The code snippets are as below:
void PrintList(struct node **head)
{
    struct node *temp;
    temp = *head;
    if(temp == NULL)
    {
        printf("List is empty");
        return;
    }
    while(temp->next != NULL)
    {
        printf("%d <==> ", temp->data);
        temp = temp->next;
    }
    printf("%d", temp->data);          /* print the last node without a trailing arrow */
}
void CountNodesInList(struct node **head)
{
    struct node *temp;
    int numberOfNodes = 1;
    temp = *head;
    if(temp == NULL)
    {
        printf("\nList is empty");
        return;
    }
    while(temp->next != NULL)
    {
        temp = temp->next;
        numberOfNodes++;               /* one more node counted */
    }
    if(numberOfNodes == 1)
        printf("\nThere is %d node in the list.", numberOfNodes);
    else
        printf("\nThere are %d node/s in the list.", numberOfNodes);
}
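To see these functions working together, here is a minimal driver. Treat it as an illustrative sketch rather than the article's own test program (the downloadable snippet is linked below and is not reproduced here); it assumes the struct definition and the functions above live in the same source file.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct node *head = NULL;          /* start with an empty list */

    AddNode(&head, 1, 10);             /* 10 */
    AddNode(&head, 2, 30);             /* 10 <==> 30 */
    AddNode(&head, 2, 20);             /* 10 <==> 20 <==> 30 */
    PrintList(&head);
    CountNodesInList(&head);

    ReverseList(&head);                /* 30 <==> 20 <==> 10 */
    PrintList(&head);

    DeleteNodeFromList(&head, 2);      /* removes the node holding 20 */
    PrintList(&head);
    CountNodesInList(&head);

    return 0;
}

Compiling the file and running it should print the list contents and the node count after each operation.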
Those were the major operations which can be performed on a Doubly Linked List. Please find the complete code snippet here. Visit idlebrains for more tutorials on data structures. Visit singly linked
list to refresh your memories with your very own singly linked list.
I hope you had as much fun learning as I did while writing this article. Do post your suggestions, doubts, or comments. Keep an eye on idlebrains for much more, and stay tuned with us.
Related Posts
8 Responses to “Doubly Linked List Tutorial”
1. Can u plz tell me the program to merge two doubly linked list in c code . Plz reply
□ Can you please elaborate on what parameters you want to merge the 2 lists.
2. I will right away seize your rss feed as I can not to find your e-mail subscription link or e-newsletter service. Do you’ve any? Kindly allow me know in order that I may just subscribe. Thanks.
3. thanku for the detailed explanation
□ Thanks Radhu for dropping by
4. IT IS REALLY VERY USEFUL IN SHORT .
5. good mornign sir.
I have seen your interesting tutorial about the double linked list.
Functions are ok but i don’t know how to run this program ,pratically into the main i try to do something but it doesn’t works .Your link “Please find the complete code snippet here” doesn’t work
, no page opening when i click on it .
Could you please tell me what to do ?
Thanks a lot | {"url":"http://idlebrains.org/tutorials/data-structures-tutorials/doubly-linked-list-tutorial/","timestamp":"2014-04-20T21:28:55Z","content_type":null,"content_length":"83710","record_id":"<urn:uuid:5edbae10-718d-4f6d-9999-c53442eaf246>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lily Pads – A Skip-Counting Game
By mindfull on
Hard to believe the summer has flown by so fast. In the spirit of the season (new classes and freshly sharpened pencils and all that) I wanted to share a game that I put together last spring. It’s
appropriate for students in late grade 1 (skip counting from zero) through grade 5-6 (using multiples).
To play, students pair up and each one chooses a colour of counter to play with. Player A spins the spinner (use a paperclip and a downward pointed pencil as a spinner) to find out what number she
must count by. Player A puts a counter in her colour on any number in the lily pad grid that is a multiple of that number. So if Player A spins a 2, she can cover a 2, 4, 6, 8, 10, etc – but NOT a
5 or a 15… Then Player B has a turn.
Three in a row in one colour wins the game.
Oh – and if you spin a lily pad, you can put your counter anywhere at all!
Consider using this game as a beginning of the year start up task. Observe your students as they play and listen to their strategies. Chances are you’ll learn something new about your kids….
Have fun!
Posted in: Uncategorized | Tagged: intermediate math games, multiples, multiplication, primary math games, skip counting | {"url":"http://mindfull.wordpress.com/2012/09/04/lily-pads-a-skip-counting-game/","timestamp":"2014-04-19T14:35:16Z","content_type":null,"content_length":"65423","record_id":"<urn:uuid:35479016-ee51-4364-a3a0-a07966b5363f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
A(nother) fast automatic Metropolis algorithm
Mark Girolami sends along this article, “Riemann Manifold Langevin and Hamiltonian Monte Carlo,” by Ben Calderhead, Siu Chin, and himself:
This paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when
sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs
required to tune proposal densities for Metropolis-Hastings or indeed Hamiltonian Monte Carlo and Metropolis Adjusted Langevin Algorithms. This allows for highly efficient sampling even in very
high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The proposed methodology exploits the Riemannian geometry of the parameter
space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold providing highly efficient convergence and exploration of the target
density. The performance of these Riemannian Manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes,
stochastic volatility models, and Bayesian estimation of dynamical systems described by nonlinear differential equations. Substantial improvements in the time normalised Effective Sample Size are
reported when compared to alternative sampling approaches.
Cool! And they have Matlab code so you can go try it out yourself. If anybody out there knows more about this (I’m looking at you, Radford and Christian), please let us know. I care a lot about this
right now because we’re starting a big project on Bayesian computation for hierarchical regression models with deep interactions.
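For readers who want a quick reference point (this is background, not a summary of the paper's own algorithm): plain MALA, which this work generalizes, proposes theta* = theta + (epsilon^2/2) * grad log pi(theta) + epsilon * z with z drawn from a standard normal, and then accepts or rejects theta* using the Metropolis-Hastings ratio, with the Gaussian proposal density q(theta* | theta) = N(theta + (epsilon^2/2) grad log pi(theta), epsilon^2 I) evaluated in both directions. Roughly speaking, the Riemann-manifold versions replace that identity covariance with a position-dependent metric, so the proposal adapts to the local curvature of the target instead of relying on a single hand-tuned step size for every direction.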
P.S. The tables are ugly but I forgive the authors because their graphs are so pretty; for example:
4 Comments
1. Andrew, the link to the matlab code doesn't work.
2. The link to the matlab code has the url for this posting pre-appended (i.e., it doesn't work unless you copy the link address and delete the offending material).
3. The cleaned up version being:
4. This is excellent. Thanks for linking to things like this. I have been on the lookout for good automatic algorithms to implement for PyMC. | {"url":"http://andrewgelman.com/2010/04/19/another_fast_au/","timestamp":"2014-04-20T15:54:17Z","content_type":null,"content_length":"25303","record_id":"<urn:uuid:ba50fe17-d358-4e36-a33b-0e6ce3cadb62>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a Dummy Variable
A dummy variable is commonly used in statistics and econometrics and regression analysis. This indicator variable takes on the value of 1 or 0 to indicate the availability or lack of some effect that
would change the outcome of whatever is being tested. Commonly used uses for a dummy variable includes demonstrating the presence or lack of a war in any given period, a major weather event such as a
hurricane or even a strike.
What is a Dummy Variable
Since regression models are quantitative by nature, dummy variables play an important role in expressing some qualitative facts. Dependent variables in models are not only impacted by quantitative
variables, but also are impacted by qualitative variables such as religions, gender, color, and geography. The Dummy Variable accounts for such variables my marking the presence of the impacting
variable with a value of 1 and the lack of the tested variable with a value of 0.
A dummy variable with a value of 0 will lead to the variable’s coefficient to go away while a value of 1 will cause the coefficient to act as an intercept in the model. With such ease of setting up
and the obvious reasons for supporting the usage, dummy variables are now commonly used in economic forecasting and time series analysis.
Let’s say that Wages are being tested as the dependent variable and wage is a function of gender and education. Where:
Wage = α0 + δ0·female + α1·education + e
Female is coded 1, Male is coded 0
δ0 is the difference in wages between females and males, holding education constant.
When dealing with dummy variables, it is important not to fall into what is known as the dummy variable trap: if you include a dummy for every category as well as an intercept (for example, both a female dummy and a male dummy), the dummies sum to 1 for every observation and exactly reproduce the intercept column, which causes perfect multicollinearity. The standard fix is to omit one category, which then serves as the base group. Models can of course have more than one dummy variable. In a similar model, perhaps race is also a considered variable.
Y_i = β1 + β2·D2 + β3·D3 + α·X_i + U_i
y = wages
x = education years
D2 = gender ( 1 if female, 0 if male)
D3 = Race ( 1 if not white, 0 if white)
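One way to read the coefficients in this specification (assuming no interaction term between the two dummies): for a white male (D2 = 0, D3 = 0) the expected wage at a given level of education is β1 + αX; for a white female it is (β1 + β2) + αX; for a non-white male it is (β1 + β3) + αX; and for a non-white female it is (β1 + β2 + β3) + αX. Each dummy coefficient is therefore the shift in the intercept for its group relative to the omitted base group.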
Dummy Variables can also be used as the dependent variable. Commonly tested situations could include a political party affiliation, retirement, promotions at work, or what to study in college.
How to Create Dummy Variables in E-views
Creating dummy variables is fairly simple in E-Views and can be done the same way that other variables are created.
Upon creating a new variable, double click it to open up the series. Then you can click and select cells in the columns and click “Edit +/-”. This will allow you to input your own data where you can
input 1 or 0 to set up your variable as a dummy variable. | {"url":"http://www.economicswiki.com/economics-tutorials/dummy-variable/","timestamp":"2014-04-20T08:15:25Z","content_type":null,"content_length":"31043","record_id":"<urn:uuid:95b1e63f-c822-4377-b7d5-689d7b132ea9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turn Rate Maths (need help)
So I was bored and decided to try and find out how the game calculates turn rates for fun and wow. I am either missing something huge or their may be a bug of sorts.
The method STO uses for it's math is pretty basic and universal across many systems including shield totals, boff abilities, shield regen, weapon damage, and several others I'm sure that I'm
forgetting about. You have a base value, stuff adds to that base value typically calculated as a % of it, then that total gets multiplied by any true modifiers to give a final value.
For the test I used a Tac Oddy and Defiant. I learned that the vast majority of modifiers are additive in nature I will list below. To find this I simply went to tribble, cleared out any skills that
may interfere, and then watched how the numbers changed when I methodically added each item.
- Engine Power
- [Turn] mod on Engines
- Starship Impulse Thruster Skill
- RCS Consoles
- Turn Rate boosting Boff abilities
The way those all work is that they give a flat bonus that is added to the base and to one another. Meaning if you start at 10 turn rate and use an RCS console to gain another 2 turn rate you will
have 12. If you instead raise engine power by enough to gain 2 turn rate for a total of 12 and then add the same RCS console it will still only provide 2 turn rate for a total of 14.
I also realized some strange things that make absolutely no sense to me.
- Boff buffs do not stack. If you activate EM3, Aux2Damp, and APO you will only gain the boost from one, not all three.
- Engine type, rank, and quality have absolutely no effect at all.
Now here is where it gets really weird. As I stated before I used a base Defiant and a Tactical Oddessy. In the chart below I will compare the changes in turn rates.
Base Amount: Defiant (17) Oddy (6)
In Space 25 Power Mod: Defiant (20.7/+3.7) Oddy (6.8/+.8)
+30% RCS Mod: Defiant (24.9/+4.2) Oddy (7.7/+.9)
+Turn Engine Mod: Defiant (22.1/+1.4) Oddy (7.1/+.3)
+141% Aux 2 Damp Mod: Defiant (40.5/+19.8) Oddy (11.0/+4.2)
All above combined: Defiant (46.1) Oddy (12.2)
Now this makes absolutely no sense to me. So the next logical step is to find some %s and see if they make any sense.
% of listed base / Defiant / Oddy
Power Mod: D (22%) / O (13%)
RCS Mod: D (25%) / O (15%)
+Turn Mod: D (8%) / O (5%)
Aux 2 Damp: D (115%) / O (70%)
Combined: D (171%) / O (103%)
Ok that was interesting. If the defiant is getting 100% of the bonus so to speak that means the oddessy is only receiving about 60% of the bonus. The only other factor I can think of is the inertia
values. Defiant has 70 while the Oddy has 20. The only conclusion I can make is that the inertia value is somehow creating the difference in which case...
Why are slow turning ships penalized twice? ARGH. It is the same as the beam issue they have lower damage to begin with, which is fine, but then the energy drain mechanics give them a double penalty
whammy which messes it all up. /rant off.
But no seriously does anyone know the role inertia plays or am I missing something?
# 2
03-10-2013, 04:51 PM
I often wondered what inertia did,,and im still not sure.
# 3
03-10-2013, 05:23 PM
Inertia in game is the time your ship needs to change speed and direction.
The higher your inertia rating, the more direct and quicker your ship reacts to movement changes.
# 4
03-10-2013, 05:27 PM
Subtract 3 from your "base" turn rate. That's the ship's "actual" base turn rate, the one that's used for calculating your final turn rate (the one that's used for the multipliers and RCS and
whatnot). The 3 is the "free" turn rate that a ship has even when its impulse engines are removed.
What Inertia does is that it makes your ship accelerate more quickly. Ships with relatively high turn rates but low Inertia values (which corresponds to a high physical inertia... STO Devs kinda
messed up kinematics so that those unfamiliar with physics would see that high number = good) will "power slide" as the nose turns but the mass of the ship continues moving on the previous direction,
only slowly turning its velocity to match its direction. It's most noticeable with a ship like the Bortasqu' trying to pull a reverse-impulse maneuver (or something similar) as Evasive Maneuvers
wears off.
Please stop exponential holding/reputation costs!
# 5
03-10-2013, 08:00 PM
Originally Posted by
Subtract 3 from your "base" turn rate. That's the ship's "actual" base turn rate, the one that's used for calculating your final turn rate (the one that's used for the multipliers and RCS and
whatnot). The 3 is the "free" turn rate that a ship has even when its impulse engines are removed.
What Inertia does is that it makes your ship accelerate more quickly. Ships with relatively high turn rates but low Inertia values (which corresponds to a high physical inertia... STO Devs kinda
messed up kinematics so that those unfamiliar with physics would see that high number = good) will "power slide" as the nose turns but the mass of the ship continues moving on the previous direction,
only slowly turning its velocity to match its direction. It's most noticeable with a ship like the Bortasqu' trying to pull a reverse-impulse maneuver (or something similar) as Evasive Maneuvers
wears off.
Thank you! That works out perfectly mate.
Defiant base = 14 *.3 = 4.2 which is what the 30% RCS consoles established.
Oddy base = 3 * .3 = .9 which also matches up.
A turn Mod would then be a 10% boost which matches up with other gear mods. Skill I think is 15% then and so forth. I will do more testing later to list all the actual % mods at some point unless
there is a post already done about it that I missed.
Thanks again!
# 6
03-10-2013, 09:54 PM
The "fix" only helps expose the problem more... it's proportionately cutting out even more turn rate modification from low-turn ships (primarily Fed Cruisers, Carriers, and the Bortasqu'), while
having a comparatively minor effect on high-turn ships (that is, Escorts and BoPs).
I know it works this way with RCS, I'm not entirely sure with the others (I haven't bothered to run the math), but I have my suspicions.
Please stop exponential holding/reputation costs!
# 7
03-10-2013, 10:13 PM
Originally Posted by
The "fix" only helps expose the problem more... it's proportionately cutting out even more turn rate modification from low-turn ships (primarily Fed Cruisers, Carriers, and the Bortasqu'), while
having a comparatively minor effect on high-turn ships (that is, Escorts and BoPs).
I know it works this way with RCS, I'm not entirely sure with the others (I haven't bothered to run the math), but I have my suspicions.
It does from the testing I did. That -3 to find base works perfectly to model all the things I tested.
# 8
03-10-2013, 11:37 PM
"- Boff buffs do not stack. If you activate EM3, Aux2Damp, and APO you will only gain the boost from one, not all three.
- Engine type, rank, and quality have absolutely no effect at all." from OP
Engines matter for turnrate, idk why you didn't see this. I've stacked turnrate buffs before. What map are you testing on, is it in system space? What engine power levels were you using?
Also, fyi the Omega shield energy wake procs adjust the ship's inertia value. I'm not sure what if anything else does.
# 9
03-11-2013, 12:14 AM
It was something like that if I remember right. There was also some modifier, but that had something to do with the old skill system, so it probably changed. But the basic formula was something like this:
3 + (Turn rate -3) * (1 + (all turn bonuses in percentages/100) + Engine power level / 100) + turn bonus of your engine
However, I saved that file when I did the turn rate research in like season 2-3. The thread is probably hidden deep in the "archived" threads.
All of the turn bonuses stack. A 50% increase to turn rate is a 0.5 multiplier applied to your actual base turn rate. So a ship with 6 turn rate gets +3 from the bonus, while the ship with 22 turn rate
*spits on the ground* gets +11.
That means if you stacked turn bonuses for total of 200%, a Galaxy would get meager +12 turn while the bug *spits again* would get +44 just from the modifiers.
But I could be totally wrong about this, as I said, last time I did proper research on this, it was like season 3.
Last edited by dalnar83; 03-11-2013 at 12:22 AM.
# 10
03-11-2013, 11:04 PM
If you are moving at 1/4 impulse or greater or at full reverse, then your turn rate is approximately
turn_rate = base_turn_rate + (base_turn_rate - 3) * (0.38 * impulse_thrusters_skill / 99 + engine_power / 100 + bonus_from_consoles + bonus_from_engines).
I'm reasonably sure the above formula is accurate, though I haven't tested turn rates in a while. The factor of 0.38 modifying impulse thrusters skill might be a little off. The above formula doesn't
include boff abilities; I would also be interested in knowing how they fit into this.
Skills and bonuses also affect the turn rate at full stop, but because the effect is so small, I cannot infer what the exact formula is.
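A small Python sketch of the formula above (added for illustration; it just transcribes the approximate formula as quoted, so the 0.38 skill factor is the poster's estimate and boff abilities are not included):

```python
def turn_rate(listed_base, impulse_thrusters_skill=0, engine_power=0,
              bonus_from_consoles=0.0, bonus_from_engines=0.0):
    """Approximate turn rate per the formula quoted above."""
    modifier = (0.38 * impulse_thrusters_skill / 99
                + engine_power / 100
                + bonus_from_consoles
                + bonus_from_engines)
    return listed_base + (listed_base - 3) * modifier

# Defiant (listed base 17) with 25 engine power and a +30% RCS console:
print(turn_rate(17, engine_power=25, bonus_from_consoles=0.30))  # ~24.7, close to the 24.9 in the table above
```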
Last edited by frtoaster; 03-11-2013 at 11:27 PM.
All times are GMT -7. The time now is 06:11 AM. | {"url":"http://sto-forum.perfectworld.com/showthread.php?t=580541","timestamp":"2014-04-18T13:11:48Z","content_type":null,"content_length":"59759","record_id":"<urn:uuid:8d5437c7-9759-483c-a75a-0924caf28c55>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Curriculum Foundations Workshop in Engineering
By David Bigio and Susan L. Ganter
The MAA Committee on Curriculum Renewal Across the First Two Years (CRAFTY) is conducting the Curriculum Foundations (CF) Project, a major analysis of the first two years of the undergraduate
mathematics curriculum. The goal of the project is to develop a curriculum document that will assist college mathematics departments as they plan their programs for the next decade. Much of the
information for this curriculum document was gathered between Fall 1999 and Spring 2001 through a series of invitational disciplinary workshops, funded and hosted by a wide variety of institutions.
This article, focusing on engineering, is part of a series of reports from these disciplinary workshops.
Format of the Curriculum Foundations Workshops
Each CF workshop focused on a particular partner discipline or on a group of related disciplines, the objective being a clear, concise statement of what students in that area need to learn in their
first two years of college mathematics. The workshops were not intended to be dialogues between mathematics and the partner disciplines, but rather a dialogue among representatives of the discipline
under consideration, with mathematicians present only to listen to the discussions and to provide clarification on questions about the mathematics curriculum. For this reason, almost all of the
individuals invited to participate in each workshop were from the partner disciplines.
The major product of each workshop is a report or group of reports summarizing the recommendations and conclusions of the workshop. These were written by the representatives from the partner
disciplines and address a series of questions formulated by CRAFTY. The reports from each workshop have been widely circulated within the specific disciplines as well as the mathematics community in
order to solicit a broad range of comments. A curriculum conference that included invitees from all disciplines was convened in November 2001 to synthesize the workshop findings.
The MAA Engineering Workshop at Clemson University
One of the MAA Curriculum Foundations workshops was sponsored and hosted by Clemson University on May 4-7, 2000. This workshop focused on the needs of engineering from the first two years of college
mathematics instruction. The workshop had thirty-eight invited participants, with roughly equal representation from each of four areas in engineering (chemical, civil, electrical, mechanical) and
mathematics. The workshop resulted in four documents, one for each of the four engineering areas, addressing the MAA questions specified at the outset of the workshop.
This report focuses on the aggregate recommendations of the four groups at the engineering workshop. It is not intended to be a definitive document, but rather a working paper that generates
discussion among mathematicians and engineers in order to provide additional feedback for the mathematics community. Therefore, the authors welcome comments and additional ideas.
Desired Student Outcomes
Before identifying any guiding principles for the mathematics curriculum, the workshop participants developed a list of general learning outcomes that should be achieved by all engineering students.
Specifically, the workshop participants want students to be able to understand the physics of a wide variety of problems and how mathematical equations can be used to describe the physics. Then, they need to solve these problems using the
appropriate tools, understand the solution, and interpret the results.
Content and Timing
The issue of content is a complex one across the engineering disciplines. In general, the study of engineering can be divided into three areas:
1. Statics, including large scale stresses and integration principle directions;
2. Dynamics, or a phenomenon in time; and,
3. Non-deterministic problems; i.e., those utilizing statistics and probability.
Discussions at the engineering workshop made it clear that each of these areas requires different mathematical concepts at different times in the learning curve. And since mathematics is the language
of engineering, then as with any 'foreign' language, students that learn mathematics years before their current engineering class may no longer be facile with the language. So, the workshop participants believe it is critical to integrate mathematics with applications in specific engineering disciplines. This 'just-in-time' approach requires coordination between mathematics and engineering departments, perhaps resulting in a reconfiguration of the current collegiate mathematics curriculum for the first two years.
In engineering disciplines, problem solving requires the ability to understand a physical problem, place it in a mathematical context, solve the necessary equations, and interpret the results.
Particularly in the first two years, students are more comfortable and adept at using sample problems to understand concepts. However, a full understanding of the problem solving process requires
students to move through Bloom's Taxonomy from mechanics to conceptualization to integration. This learning process can be solidified by extending the required mathematics courses for engineers into
the third year, so that the material can be coordinated with major engineering courses.
Instructional Techniques
Instruction should be done in the context of physical concepts, not as isolated theoretical exercises. Students with mathematical training that focuses on symbolic techniques have great difficulty
moving to more complex engineering problems. These students generally are too dependent on methodology and are unable to conceptualize based on broad principles.
Active learning is an important method for helping students to learn about open-ended problems. It also prepares them for real problems they will encounter in their future engineering careers.
Team Work and Interdisciplinary Collaboration
Educational reform in engineering (being driven in large part by ABET2000, the new standards of the Accreditation Board for Engineering and Technology) supports the use of active learning,
problem-based experiences, and team work. This has led to the use of problems that are open ended in engineering courses. The paradigms for teaching in engineering are evolving to include more varied
methods, and to address a wide variety of student learning styles. If mathematics is taught in a way that does not support these new methodologies, it will not be effective in preparing students for
engineering courses. Interdisciplinary team teaching can be used to promote the integration of mathematics and engineering content. Student teams also can help instructional effectiveness.
To accomplish these objectives, the workshop participants propose that mathematics departments consider developing small projects within mathematics courses that require team work and active learning
so that the students have more opportunities to learn fundamental mathematics principles, beyond simple examples. One such project could utilize ordinary differential equations, moving from simple to
complex mathematical situations. Projects also could cut across multiple engineering areas to further develop connections between disciplines.
The engineering workshop of the Curriculum Foundations Project certainly raised more questions than answers. For example, is it better pedagogically to teach mathematics to engineers as a homogeneous group, or together with non-engineering students? There was general consensus among the workshop participants that the mathematics community should look closely at the heterogeneous approach to the first two years, as well as at other established traditions of the current course organization.
But one thing is clear: regardless of what new teaching methods are utilized, the needs of students and their learning processes are different than even just ten years ago. Students will work in a
complex technological world and interface with problems from many disciplines. They need to understand how to use fundamental concepts in a variety of settings, and to appropriately integrate
calculators, computers, and other technologies when solving physical problems.
Although this report is very general, the engineering workshop participants have provided more specificity and detail about specific mathematics topics in the full workshop reports (see http://
academic.bowdoin.edu/faculty/B/barker/dissemination/Curriculum_Fo... there are four reports from this workshop, covering Civil, Chemical, Mechanical, and Electrical Engineering). The basic
mathematical foundations presented in the reports are mostly the same as a decade ago, but the methods for helping students learn the ideas promoted by the reports implies the need for a pedagogical
transformation that is critical to the training of engineers for the future.
David Bigio is a member of the Mechanical Engineering Department at the University of Maryland. Susan L. Ganter is in the Department of Mathematical Sciences at Clemson University.
This issue includes two articles on Curriculum Foundations, a project of CRAFTY, the MAA Committee on Curriculum Renewal Across the First Two Years. Earlier articles have described the project as a
whole (November 2000), the workshop on the mathematics courses needed by physics students (March 2001), computer science students (May/June 2002), chemistry students (September 2002), students
engaged in interdisciplinary programs (September 2002), students in the Life and Health Sciences (November 2002), and students in technical programs in two-year colleges (November 2002). All of these
reports can also be found on MAA Online, at /features/currfound.html. Future articles will focus on other client disciplines. CRAFTY is a subcommittee of CUPM, the Committee on the Undergraduate
Program in Mathematics, which is undertaking a review of the whole undergraduate curriculum. | {"url":"http://www.maa.org/programs/faculty-and-departments/curriculum-department-guidelines-recommendations/crafty/summer-reports/workshop-engineering","timestamp":"2014-04-17T02:03:24Z","content_type":null,"content_length":"105418","record_id":"<urn:uuid:2f02624e-4d41-497c-8f4d-84e93f89b24a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 22
During Bill's three-hour meeting, the word global was used, on average, once every five minutes during the first two hours. If the word global was used 54 times throughout the meeting, then what was
the average number of minutes between uses in the third hour?
life orientation
its degrees,diploma and certificate
With the aid of chemical equation only describe how butane can be converted to but-2-ene
How would you solve by completing the square? 3x^2 + X - 1/2= 0
Using the difference quotient to find M-sec for h=0.5 at x=1. f(x)=2x+5 difference quotient-f(x+h)-f(x)/h
Using the difference quotient find M-sec for h=0.5 at x=1. f(x)=2x+5 difference quotient-f(x+h)-f(x)/h
business communication
is this a runon sentence? He quit smoking five years ago, he still craves a cigarette from time to time.
what was your answer? :)
So the quantity supplied moves down the supply curve?
Err, I still don't understand?
How does the quantity supplied of a good with a large elasticity of supply react to a price change?
So I'm supposed to divide 54 into 900?
Write the percent as a decimal. 43%=___
IM SUPPOSE TO IDENTIFY THE PERCENT, WHOLE, AND PART IN THE FOLLOWING. 13 is 2% of what number of injections?
im not understanding
Identify the percent, whole, and part in the following. 13 is 2% of what number of injections?
Find the percent notation. 10/25=___%
Q1)which of these energy resources does not originate from the suns energy?:natural gas,oil,geothermal energy,coal Q2)the burning of fossil fuels does not release? potential energy,light energy,sound
energy , heat energy?? Q3)at the moment renewable energy sources generate? mo... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=elliot","timestamp":"2014-04-16T11:44:58Z","content_type":null,"content_length":"8890","record_id":"<urn:uuid:b2225793-2e77-4227-88ae-f49ea0cfa595>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
[plt-scheme] order of magnitude
From: Jos Koot (jos.koot at telefonica.net)
Date: Thu Nov 5 09:29:10 EST 2009
Mathematically your derivation is correct, but computationally it may be incorrect. The problem is that in an actual computation, e.g. in Scheme,
(expt 10 (/ (log q) (log 10)))
may be
slightly less than q
or exactly equal to q
or slightly greater than q.
This holds even when q is exact, for log is not required to and often even cannot produce an exact result. In fact I do not even trust that the absolute error of (/ (log q) (log 10)) is always less than one, where absolute error is the difference between mathematical value and actually computed value. As another example:
(- 2 (sqr (sqrt 2))) ; --> -4.440892098500626e-016 and hence:
(zero? (- 2 (sqr (sqrt 2)))) --> #f
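The same pitfall, and one way around it, can be shown outside Scheme as well; here is a small Python sketch (an added illustration, not part of the original message) that falls back on exact integer comparisons to correct the floating-point guess:

```python
import math

def order_of_magnitude(q):
    """Greatest integer m with 10**m <= q, for positive q (int, Fraction, or float)."""
    m = math.floor(math.log(q) / math.log(10))   # floating-point guess; may be off by one
    while 10 ** m > q:                            # correct downward using exact powers
        m -= 1
    while 10 ** (m + 1) <= q:                     # correct upward using exact powers
        m += 1
    return m

print(order_of_magnitude(10 ** 23))   # 23, even if the log-based guess lands on 22
print(order_of_magnitude(999))        # 2
```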
----- Original Message -----
From: "Jens Axel Søgaard" <jensaxel at soegaard.net>
To: "Jos Koot" <jos.koot at telefonica.net>
Cc: <plt-scheme at list.cs.brown.edu>
Sent: Thursday, November 05, 2009 2:58 PM
Subject: Re: [plt-scheme] order of magnitude
2009/11/5 Jos Koot <jos.koot at telefonica.net>:
> Let q be a positive rational (or real) number.
> I am looking for the greatest integer number m such that:
> (<= (expt 10 m) q)
I get:
greatest integer m s.t.: 10^m <= q
greatest integer m s.t.: log(10^m) <= log(q)
greatest integer m s.t.: m*log(10) <= log(q)
greatest integer m s.t.: m <= log(q)/log(10)
m = floor(log(q)/log(10))
Jens Axel Søgaard
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2009-November/036490.html","timestamp":"2014-04-20T18:32:53Z","content_type":null,"content_length":"7111","record_id":"<urn:uuid:87b08e1e-fa2c-4605-b120-9432fb717841>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verification of executable pipelined machines with bit-level interfaces
Verification of executable pipelined machines with bit-level interfaces (2005)
by Panagiotis Manolios
Venue: In ICCAD-2005, International Conference on Computer-Aided Design
Citations: 6 - 4 self
author = {Panagiotis Manolios},
title = {Verification of executable pipelined machines with bit-level interfaces},
booktitle = {In ICCAD-2005, International Conference on Computer-Aided Design},
year = {2005},
pages = {855--862}
Abstract — We show how to verify pipelined machine models with bit-level interfaces by using a combination of deductive reasoning and decision procedures. While decision procedures such as those implemented in UCLID can be used to verify away the datapath, the models verified in this way require the use of numerous abstractions, implement a small subset of the instruction set, and are far from executable. In contrast, we
focus on verifying executable machines with bit-level interfaces. Such proofs have previously required substantial expert guidance and the use of deductive reasoning engines. We show that by
integrating UCLID with the ACL2 theorem proving system, we can use ACL2 to reduce the proof that an executable, bit-level machine refines its instruction set architecture to a proof that a term level
abstraction of the bit-level machine refines the instruction set architecture, which is then handled automatically by UCLID. In this way, we exploit the strengths of ACL2 and UCLID to prove theorems
that are not possible to even state using UCLID and that would require prohibitively more effort using just ACL2. I.
332 The mechanical evaluation of expressions - Landin - 1963
299 Definitional Interpreters for Higher-Order Programming Languages - Reynolds - 1972
260 Automated verification of pipelined microprocessor control - Burch, Dill - 1994
260 Computer-aided reasoning: An approach - Kaufmann, Manolios, et al. - 2000
196 Using a highperformance, programmable secure coprocessor - Smith, Palmer, et al. - 1998
142 Modeling and verifying systems using a logic of counter arithmetic with lambda expressions and uninterpreted functions - BRYANT, LAHIRI, et al. - 2002
107 Integrating decision procedures into heuristic theorem provers: A case study of linear arithmetic - Boyer, Moore - 1988
68 An approach to systems verification - Bevier, Jr, et al.
58 Microprocessor Design Verification - Hunt - 1989
54 M.N.: Exploiting Positive Equality in a Logic of Equality with Uninterpreted Functions - Bryant, German, et al. - 1999
46 A hybrid SAT-based decision procedure for separation logic with uninterpreted functions - Seshia, Lahiri, et al. - 2003
42 Modeling and Verification of Out-of-order Microprocessors using UCLID - Lahiri, Seshia, et al.
39 The UCLID Decision Procedure - Lahiri, Seshia - 2004
39 Mechanical Verification of Reactive Systems - Manolios - 2001
35 Proof of Correctness of a Processor with Reorder Buffer Using the Completion Functions Approach - Hosabettu, Srivas, et al. - 1999
26 High-speed, analyzable simulators - Greve, Wilding, et al. - 2000
26 Correctness of Pipelined Machines - Manolios - 2000
21 Automatic verification of safety and liveness for xscale-like processor models using web refinements - Manolios, Srinivasan - 2004
21 Formal Verification of an Advanced Pipelined Machine - Sawada - 1999
20 A summary of intrinsic partitioning verification - Greve, Richards, et al. - 2004
15 A mechanically checked proof of correctness of the AMD K5 floating point square root microcode - Russinoff - 1999
13 RTL verification: A floating-point multiplier - Russinoff, Flatau - 2000
11 Siege homepage. See URL http://www.cs.sfu.ca/ ∼loryan/personal - Ryan
10 FM8501: A Verified Microprocessor, volume 795 of LNAI - Hunt - 1994
8 An embedded 32-bit microprocessor core for low-power and high-performance applications - Clark, Hoffman, et al.
8 Formal verification of divide and square root algorithms using series calculation - Sawada - 2002
7 A complete compositional reasoning framework for the efficient verification of pipelined machines - Manolios, Srinivasan - 2005
4 A suite of hard ACL2 theorems arising in refinement-based processor verification - Manolios, Srinivasan - 2004
1 A user’s guide to uclid version 1.0, 2003. See URL http://www.cs.cmu.edu/ uclid/userguide.ps - Seshia, Lahiri, et al. | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.122.2210","timestamp":"2014-04-18T08:50:27Z","content_type":null,"content_length":"31374","record_id":"<urn:uuid:1b6c3f8b-ee83-4ca2-89d3-d538f7d4cb43>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
should this factorise?
February 13th 2010, 11:54 PM
should this factorise?
Start of problem at the top
I believe i have done it right but it doesnt seem to factorise
Do i need to run it from the quadratic equation?
February 14th 2010, 12:03 AM
Prove It
Start of problem at the top
I believe i have done it right but it doesnt seem to factorise
Do i need to run it from the quadratic equation?
$\sin{\theta} + \cos^2{\theta} = \frac{1}{5}$
$\sin{\theta} + 1 - \sin^2{\theta} = \frac{1}{5}$
$0 = \sin^2{\theta} - \sin{\theta} - \frac{4}{5}$
Now Complete the Square:
$0 = \sin^2{\theta} - \sin{\theta} + \left(-\frac{1}{2}\right)^2 - \left(-\frac{1}{2}\right)^2 - \frac{4}{5}$
$0 = \left(\sin{\theta} - \frac{1}{2}\right)^2 - \frac{1}{4} - \frac{4}{5}$
$0 = \left(\sin{\theta} - \frac{1}{2}\right)^2 - \frac{21}{20}$
$\frac{21}{20} = \left(\sin{\theta} - \frac{1}{2}\right)^2$
$\pm\sqrt{\frac{21}{20}} = \sin{\theta} - \frac{1}{2}$
$\pm \frac{\sqrt{21}}{2\sqrt{5}} = \sin{\theta} - \frac{1}{2}$
$\pm \frac{\sqrt{105}}{10} = \sin{\theta} - \frac{1}{2}$
$\frac{1}{2} \pm \frac{\sqrt{105}}{10} = \sin{\theta}$
$\frac{5 \pm \sqrt{105}}{10} = \sin{\theta}$
$\theta = \arcsin{\left(\frac{5 + \sqrt{105}}{10}\right)}$ or $\theta = \arcsin{\left(\frac{5 - \sqrt{105}}{10}\right)}$.
February 14th 2010, 12:44 AM
Yes, that's where I went with it, but this is from a book that isn't driven at any equations that won't factorise, and the level it's pitched at would not go into that depth.
I had the same answer yet thought I had gone wrong somewhere (Headbang)
February 14th 2010, 12:55 AM
Hello 200001
Start of problem at the top
I believe i have done it right but it doesnt seem to factorise
Do i need to run it from the quadratic equation?
You have a sign wrong in the third line of your solution. It should be:
$\Rightarrow 5\sin^2\phi-5\sin\phi-4=0$
$\Rightarrow \sin\phi = \frac{5\pm\sqrt{105}}{10}$
Now $-1\le\sin\phi\le 1$. So the only valid root is:
$\sin\phi = \frac{5-\sqrt{105}}{10}$
$\Rightarrow \phi = n\pi +(-1)^n \arcsin\left(\frac{5-\sqrt{105}}{10}\right)$
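A quick numerical check (added here, not part of the original replies) that this root satisfies the original equation:

```python
import math

s = (5 - math.sqrt(105)) / 10             # the only root with |sin(phi)| <= 1
phi = math.asin(s)
print(math.sin(phi) + math.cos(phi) ** 2)  # prints 0.2 (i.e. 1/5) up to rounding
```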
February 15th 2010, 10:14 PM
that makes sense | {"url":"http://mathhelpforum.com/trigonometry/128737-should-factorise-print.html","timestamp":"2014-04-16T17:23:06Z","content_type":null,"content_length":"11932","record_id":"<urn:uuid:ef6e2f18-4e32-475a-a4f6-97596ef39f30>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coquand, Thierry - Department of Computer Science and Engineering, Chalmers University of Technology
• Logic in Computer Science Thierry Coquand
• REMARK ON THE FORSTER-SWAN THEOREM 1. A variation on the stable range theorem
• HOW TO MEASURE BOREL SETS THIERRY COQUAND
• A Direct Proof of the Localic Hahn-Banach Theorem Thierry Coquand
• Inductive Definitions and Type Theory an Introduction
• On Weyl's theorem Thierry Coquand
• A SIMPLE PROOF OF STONE-WEIERSTRASS THIERRY COQUAND
• Formal Topology with Posets Thierry Coquand
• A Note on Formal Iterated Function Systems Thierry Coquand
• Regular Expressions [1] Equivalence relation and partitions
• On the definition of the ^ function We write q.a instead of (q, a) and q.x instead of ^(q, x).
• Proof Theory in Type Theory Thierry Coquand
• A Topos Theoretic Fix Point Theorem Thierry Coquand
• Case Analysis, Variables and Parameters Thierry Coquand
• A Topos Theoretic Fix Point Theorem Thierry Coquand
• Curriculum Vitae for Thierry Coquand Born 18/04/1961, Jallieu (Isère, France)
• Constructive Mathematics and Functional Programming
• Constructive Homological Algebra Thierry Coquand
• Constructive Algebra in Functional Programming and Type Theory
• Annales UMCS Informatica AI 3 (2005) 15-25 Annales UMCS
• Decidability Proof of LTL The goal of this note is to explain why LTL is decidable. Given an LTL formula we explain
• Finite Automata and Their Decision Proble'ms# Abstract: Finite automata are considered in this paper as instruments for classifying finite tapes. Each one-
• Infinite objects in constructive mathematics Thierry Coquand
• Auslander-Buchsbaum-Hochster May 18, 2010
• Grade and Linear Equations September 25, 2010
• A remark about the theory of local rings March 10, 2008
• On Dedekind-Kronecker-Kneser's Reciprocity Theorem August 15, 2006
• Infinite objects in constructive mathematics Thierry Coquand
• Infinite objects in constructive mathematics Thierry Coquand
• Equality and dependent type theory Oberwolfach, March 2 (with some later corrections)
• A Calculus of Definitions March 18, 2008
• A direct proof of Ramsey's Theorem September 27, 2010
• Constructive Logic Thierry Coquand
• Spaces as Distributive Lattices Thierry Coquand
• Logic in Computer Science Another presentation of natural deduction
• Course organization Textbook J.E. Hopcroft, R. Motwani, J.D. Ullman Introduction to
• Deterministic Finite Automata Definition: A deterministic finite automaton (DFA) consists of
• Finite Automata We present one application of finite automata: non trivial text
• Search algorithm Clever algorithm even for a single word
• Regular Expressions [1] Regular Expressions
• Regular Expressions [1] Warshall's algorithm
• Regular Expressions [1] Warshall's algorithm
• Constructive Mathematics in Theory and Programming Practice
• Dynamic construction of aglebraic closure and a coinductive proof of Hensel's lemma
• Comments and hints for 2009 example exams Harald Hammarstrom
• Geometric Hahn-Banach theorem Thierry Coquand
• Dynamical Methods in Algebra Dynamical Methods in Algebra [1] Dynamical Methods in Algebra
• Curves and coherent Prüfer rings Thierry Coquand
• A Simple Programming Language Type theory and functional programming
• Prüfer domain Thierry Coquand
• Regular expressions Consider the regular sets denotated by the following pairs of regular expres-
• An Algorithm for TypeChecking Dependent Types Thierry Coquand
• Modules as dependently typed records Thierry Coquand, Randy Pollack and Makoto Takeyama
• AXIOMATIC SET THEORY 1 Cantor-Bendixson Analysis
• Week 2: DFA and NFA 1. Exercise 2.2.1
• Application of ZMT March 4, 2006
• Une nouvelle caractérisation élémentaire de la dimension Thierry Coquand, Henri Lombardi, Marie-Françoise Roy,
• Global divisors on an algebraic curve January 13, 2008
• Places on algebraic curves March 10, 2008
• Completeness Theorems and λ-calculus Thierry Coquand
• Krull Dimension Thierry Coquand
• ABELIAN l-GROUPS, GENERALISED REALS AND OPEN LOCALES THIERRY COQUAND
• x^2 + 1 IS POSITIVE THIERRY COQUAND
• Proofs by induction, Alphabet, Strings [1] Proofs by Induction
• constructive mathematics
• Week 1: Finite Automata Proofs by induction
• Introduction In this paper, we present a theory of constructions for higher-order intuitionistic logic. The original
• Computational Content of Classical Logic
• Algebraic Closure Thierry Coquand
• Pattern Matching with Dependent Types Thierry Coquand
• AXIOMATIC SET THEORY 4 Complete Boolean Algebra
• Proposition. Given n+2 polynomials g_1, g_2, ..., g_(n+2) in n indeterminates with rational coefficients, construct n+1 polynomials f_1, f_2, ..., f_(n+1) in the same indeterminates with
• Krull Dimension of Distributive Thierry Coquand and Henri Lombardi
• Language of a Grammar If G is a grammar we write
• A proof of Higman's lemma by structural induction Thierry Coquand, Daniel Fridlender
• Hilbert's program in abstract algebra Thierry Coquand
• About Stone's general theory of Thierry Coquand
• Real Spectrum Thierry Coquand
• Algebraically Closed Fields Thierry Coquand
• Sur un théorème de Kronecker concernant les variétés algébriques
• A New Paradox in Type Theory Thierry Coquand
• The paradox of trees in Type Theory Thierry Coquand
• A Finitary Subsystem of the Polymorphic λ-calculus Thorsten Altenkirch and Thierry Coquand
• About Stone's notion of Spectrum Thierry Coquand
• Infinite objects in constructive mathematics Thierry Coquand
• A hybrid MAC protocol for a metro WDM network using multiple free spectral ranges of an
• HOW TO DEFINE MEASURE OF BOREL SETS THIERRY COQUAND
• A Boolean Model of Ultrafilters Thierry Coquand
• Exercises on the course on Constructive Logic August 10, 2008
• Kurs: MAN321/TMV026 Ändliga automater och formella språk Plats: M-huset
• HOW TO DEFINE MEASURE OF BOREL SETS THIERRY COQUAND
• Domains for polymorphism (Isle of Thorns, August 88) Introduction
• 1. We consider the following language: we have one binary relation symbol R, and one unary function symbol f. Define what is a model for this language (2p). Give an example of a
• Coherent Ring Thierry Coquand
• The Zariski Spectrum of a ring Thierry Coquand
• Tiling rectangles Dedicated to Jan von Plato, for his 50th birthday
• THE LOGIC IN COMPUTER SCIENCE COLUMN Yuri GUREVICH
• A NOTE ON MEASURES WITH VALUES IN A PARTIALLY ORDERED VECTOR SPACE
• Entailment Relations and Distributive Lattices Jan Cederquist 1 and Thierry Coquand 2
• Finite Automata: Homework 2 1. Let be {0, 1}. Give a NFA with four states equivalent to the regular expression (01+011+
• About Stone's notion of Spectrum Thierry Coquand
• ORDINALS IN TYPE THEORY Thierry Coquand, Peter Hancock and Anton Setzer
• CONSTRUCTIVE TOPOLOGY AND COMBINATORICS Thierry Coquand
• A Finitary Version of System F Thorsten Altenkirch and Thierry Coquand
• A Constructive Analysis of the StoneWeierstrass Theorem Thierry Coquand
• A constructive topological proof of van der Waerden's theorem Thierry Coquand
• Course Notes in Typed Lambda Calculus Thierry Coquand
• Infinite Objects in Type Theory Thierry Coquand
• Main Points of the Course What has been covered: chapters 1 to 5 + 7
• Constructive Algebra Thierry Coquand
• A logical approach to abstract algebra Thierry Coquand
• Exercices on Typed LambdaCalculus Thierry Coquand
• Sequents, Frames, and Completeness Thierry Coquand 1 and GuoQiang Zhang 2??
• A Note on the Open Induction Principle Thierry Coquand
• Inductive Definitions and Type Theory an Introduction
• Lewis Carroll, Gentzen and Entailment Relations Thierry Coquand
• Space of Valuations March 14, 2008
• Kurs: MAN321/TMV026 Ändliga automater Plats: M-huset
• Some Lemmas around Peskine's Proof of Zariski Main Theorem January 7, 2008
• Equality and dependent type theory Thierry Coquand
• Topology and Sequent Calculus Thierry Coquand
• We let D be the set of all untyped, maybe open, terms, with β-conversion as equality. We let c_n be the lambda term λx.λf.f^n x. We consider the
• On the measure problem Thierry Coquand
• Valuations and Dedekind's Prague Theorem Thierry Coquand and Henrik Persson
• On seminormality Thierry Coquand
• Infinite objects in constructive mathematics Thierry Coquand
• AXIOMATIC SET THEORY 2 Ordinal Arithmetic
• Hidden constructions in abstract algebra (3) Krull Dimension of distributive lattices and commutative
• A Constructive Analysis of StoneWeierstrass Theorem
• An Analysis of Girard's Paradox Thierry Coquand
• A new method for establishing conservativity of classical systems over their intuitionistic version
• Inductively generated formal topologies Thierry Coquand y , Giovanni Sambin z ,
• A direct proof of the Dedekind-Mertens Lemma April 5, 2006
• Some results about Measure Theory Chalmers University
• Invariant Measure on Compact Space Thierry Coquand
• A direct proof of Ramsey's Theorem September 26, 2011
• Constructive Kan fibrations February 27, 2012
• Types as Kan Simplicial Sets Th. Coquand (jww Simon Huber) | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/31/023.html","timestamp":"2014-04-18T14:24:05Z","content_type":null,"content_length":"29756","record_id":"<urn:uuid:26855cac-2b4d-421c-af10-7b45894cfc13>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prime Number Puzzle
Date: 09/27/2001 at 20:56:08
From: Pat
Subject: Prime number word problem
I am the product of four primes, and the sum of these numbers is
thirty. My three digits are prime and different. Who am I?
I couldn't come up with any combination without using one (1,5,11,13
for 715) How do I do this? Can I use a prime number twice in my set,
like 2,2,3,3,7,13 because I am using only four primes even though I am
using them more than once?
Date: 09/28/2001 at 14:12:39
From: Doctor Ian
Subject: Re: Prime number word problem
Hi Pat,
The main thing here is to proceed systematically. For example, you
know that you're looking for a 3-digit number, with distinct prime
digits. How many of those are there? Not very many.
The digits have to be 2, 3, 5, or 7, and they can't repeat, so you
need to find all the permutations of the 3-element subsets of this
2-- => 23- => 235
25- => 253
27- => 273
3-- => 32-
5-- => 52-
7-- => 72-
I'll leave it for you to fill in the rest of the table. The number of
possibilities is 4*3*2 = 24, which isn't too awful.
Once you have the 24 numbers, you can break them into prime factors.
Keep the ones that have four prime factors; and of those, keep the
ones whose factors add up to 30. Maybe there will be only one of
those, maybe not.
By the way, note that 1 is _not_ a prime number, so it won't appear as
a digit in the number, or as one of the prime factors.
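If you do want to check your work with a short program, here is one way (an added sketch) to carry out exactly the procedure described above; running it will also reveal the answer, so try filling in the table by hand first:

```python
from itertools import permutations

def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for digits in permutations("2357", 3):        # the 4*3*2 = 24 candidates
    number = int("".join(digits))
    factors = prime_factors(number)
    if len(factors) == 4 and sum(factors) == 30:
        print(number, factors)
```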
In case you don't feel like working out the prime factorization of 24
different 3-digit numbers, you can look them up here:
Prime Factorization - Cenius.net
I hope this helps. Write back if you'd like to talk about this some
more, or if you have any other questions.
- Doctor Ian, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/57996.html","timestamp":"2014-04-19T09:36:18Z","content_type":null,"content_length":"7236","record_id":"<urn:uuid:b963896f-6029-4968-ae89-ee4354b62e05>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/fox375/answered","timestamp":"2014-04-20T00:55:14Z","content_type":null,"content_length":"112387","record_id":"<urn:uuid:cb8bead7-d30a-44ca-967a-0abd1cba1c4d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Soquel Algebra 1 Tutor
Find a Soquel Algebra 1 Tutor
...As soon as you learn how to use it you will have the whole world on your lap. I completed my studies from elementary school to Master's Degree in Russia and know very well what makes the
Russian school of Mathematics one of the best in the world. I combine traditional and innovative methods to teach Maths by creating interest and fun.
8 Subjects: including algebra 1, reading, ESL/ESOL, Russian
I am an experienced, enthusiastic, and dedicated instructor who will help students understand physics and mathematics using various methods of instruction. I have an M.S. degree in Condensed Matter
Physics from Iowa State University with more than 5 years of teaching experience in physics. I was a r...
14 Subjects: including algebra 1, calculus, physics, statistics
...My education includes a B.A. in chemistry, a B.A. in mathematics plus 90 graduate units in pure and applied mathematics. I absolutely love science. I have years of experience with not only the
software and applications that run on computer systems but also the hardware.
19 Subjects: including algebra 1, chemistry, Spanish, calculus
Dear Student and Parent, As a teacher, I understand that a student can be distracted by many factors in the classroom. These distractions may include other students, other stimuli, whether audio or
visual, or any mental or physical unpreparedness on the student's part at the time. All these factors combined can result in an undesirable learning environment for some students.
5 Subjects: including algebra 1, geometry, algebra 2, prealgebra
...I love teaching students at this age, where their critical thinking and reasoning skills have matured to the point where problem-solving can be exciting rather than intimidating. I have a large
library of enrichment material for those students who need/want additional challenge, and I also have ...
10 Subjects: including algebra 1, calculus, geometry, algebra 2 | {"url":"http://www.purplemath.com/Soquel_algebra_1_tutors.php","timestamp":"2014-04-19T17:04:49Z","content_type":null,"content_length":"23982","record_id":"<urn:uuid:b2e05bef-feab-4571-85e7-a23844f1928c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US4791403 - Log encoder/decoder system
This is a continuation of application Ser. No. 805,157, filed Dec. 4, 1985.
In general, the present invention involves encoding and decoding data by using logarithms. In particular, the invention relates to encoding and decoding in an arithmetic coding system which
compresses data during coding and de-compresses data during decoding, wherein at least some of the computations are performed in a logarithmic domain.
One technique of compressing and de-compressing data is known as arithmetic coding. Arithmetic coding provides that, during encoding, the results of a set of events are made to correspond to a
respect point on a number line and, during decoding, the results from the set of events can be re-determined from a knowledge of the respective point. Specifically, during encoding the occurrence of
a first answer for an event (or decision) is related to a corresponding first interval along the number line. The occurrence of a second (subsequent) event is associated with a subinterval along the
first interval. With additional events, successive subintervals are determined. The final subinterval is represented by a selected point, which is defined as a compressed data stream. For a given set
of answers for a respective set of decisions, only one subinterval with one corresponding compressed data stream is defined. Moreover, only one set of answers can give rise to a given compressed data
Hence, given the compressed data stream, the original set of answers can be determined during decoding.
When the possible number of answers which can occur at any given event is two, the arithmetic coding is binary. One example of a binary application is embodied in the processing of white/black data
of a picture element (pel) of a facsimile processing system. In this facsimile application, there are two complementary probabilities referred to herein as "P" and "Q". Given that a pel can be either
black or white, one probability corresponds to a pel being black and the other corresponds to the pel being white. Such an environment is discussed in an article by Langdon, "An Introduction to
Arithmetic Coding", IBM J. Res. Develop. Vol 28, No 2, pages 135-149 (March, 1984).
As noted in the article, one of the two possible answers in binary coding may be more likely than the other at a given time. Moreover, from time to time, which of the two answers is more likely may
switch. In a black background, for example, the probability of the "black" should be significantly greater than for the "white" answer, whereas in a white background the "white" answer should be more
likely. In the Langdon article, an approach is proposed which may be applied to a binary arithmetic code or, more generally, to a multisymbol alphabet environment. In each case, the approach involves
processing numerical data in the real number system. Specifically, the encoding process is described as defining a variable C (representing a code point along the number line) and an expressing C+A
(representing an available space starting at the current code point C and extending a length A along the number line). The interval A is sometimes referred to as the range. In the binary context,
Langdon represents the variable A in floating point notation.
Other references of a related nature include two patents, U.S. Pat. Nos. 4,467,317 and 4,286,256. An article by Rissanen and Langdon, "Arithmetic Coding", IBM J. Res. Develop. Vol 23, No 2, pages
149-162 (March 1979), also discusses arithmetic coding. In the Langdon and Rissanen-Langdon articles and in the above-referenced patents, which are incorporated herein by reference, additional
patents and publications are discussed which may serve as further background to the present invention.
In reviewing the various cited references, it is observed that logarithms are used to represent the measure of entropy H(S) of a symbol and to represent a measure of the width of the code space. The
use of logarithms in converting computationally expensive multiplies and divides into additions and subtractions is well known. A review of prior technology also indicates that no references disclose
an encoder/decoder in an arithmetic coding system wherein range is re-computed for successive subintervals with finite precision in the logarithmic domain rather than in the real number domain.
The use of logarithms for re-computing range is not straightforward because of the precision requirements of arithmetic coding. Specifically, when transferring from the real number domain to a
logarithm domain in which there is finite precision, the resulting logarithm includes some truncation due to the finite precision requirement. A problem attending the truncation involves the antilogs
that are to be subsequently taken. Due to the truncation, it is possible for the sum of the antilogs of log P and log Q to exceed one--resulting in ambiguity which cannot be tolerated in the
arithmetic coding. Stated otherwise, the truncation operation must be performed to assure decodability. Until the present invention, this feature has not been achieved with a log coder.
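The difficulty can be illustrated numerically. The following sketch is an illustration added here, not part of the patent specification and not the table-generation procedure of the appendices; it shows how finite-precision logarithms, if rounded carelessly, can make the antilogs of log P and log Q sum to more than one, and how always truncating the antilog value avoids it:

```python
import math

BITS = 4                         # fractional bits kept for the stored -log2 value
SCALE = 1 << BITS

def via_log(p, round_fn):
    """Represent p through a finite-precision -log2 value, then take the antilog."""
    stored = round_fn(-math.log2(p) * SCALE) / SCALE
    return 2.0 ** (-stored)

P, Q = 0.55, 0.45
# Rounding to the nearest table step can make an antilog larger than the true
# probability, so the two intervals can overlap:
print(via_log(P, round) + via_log(Q, round))          # > 1.0 here: ambiguous code space
# Rounding the -log2 value up always truncates the antilog, keeping the sum <= P + Q = 1:
print(via_log(P, math.ceil) + via_log(Q, math.ceil))  # <= 1.0: decodable
```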
Hence, it is a major object of the present invention to provide an arithmetic coding encoder/decoder system in which probabilities are represented in the logarithmic domain with finite precision. In
achieving the above-noted object, prescribed log tables and an antilog table are employed. In accordance with the present invention, the antilog table is generated in accordance with specific
constraints applied in working with fractions employed in the encoding and decoding process.
In the case of a binary event (or decision), the more probable symbol has a probability P and the other (less probable) symbol has a probability Q. A range R is defined along a number line wherein R
defines the limits in which a point is to be located. As recognized in the previous technology, a "new" range R' has been defined as the current range R multiplied by a symbol probability such as P.
The new range R' would thus be R*P--where * indicates multiplication--and corresponds to a reduction in the range R.
In accordance with the present invention, overlap of intervals along the number line is avoided by imposing the constraint:
R*P+R*Q≦R
In the logarithmic domain, this condition becomes:
antilog (log R+log P)+antilog (log R+log Q)≦R
Based on the above relationships, the following constraints apply:
antilog (log R+log P)≦R*P
antilog (log R+log Q)≦R*Q
From the above-discussed relationships, a general condition is defined as:
antilog (α+β)≦antilog (α)*antilog (β)
where α and β represent any two mantissas which correspond to inputs to an antilog table.
An antilog table formed according to the general condition set forth immediately hereinabove applies whether α corresponds to log R and β corresponds to log P or whether, for a real number product
R*Q, α would be log R and β would be log Q.
The present invention is accordingly able to achieve the object of modifying the range R--formed from a product of real numbers such as R*P or R*Q--in the logarithmic domain where the antilog table
employed has the above-noted general condition applied thereto.
The present invention achieves the further object of reducing the size of the antilog table by applying the above-noted general condition in only certain cases while applying a different condition in
other cases. That is, the antilog table outputs are limited to achieve reduced size based on the following two relationships:
antilog (α+β)≦antilog (α)*antilog (β);
2*antilog (α+β-1)≦antilog (α)*antilog (β);
A further object of the present invention is the assuring of decodability. This object is achieved by imposing a constraint in addition to the above two relationships. According to this added
constraint, all output values of the antilog table must be unique.
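By way of illustration only (the following sketch is not the table-generation method of the appendices; it merely treats a table index k as the mantissa k/2^N and tests the stated relationships literally), the two size-reducing relationships and the uniqueness requirement can be checked by brute force over all pairs of inputs:

```python
def table_satisfies_constraints(antilog, precision_bits):
    """Check a candidate antilog table against the relationships stated above.

    antilog[k] is taken to approximate 2**(k / 2**precision_bits), so a mantissa
    sum that reaches or exceeds 1 corresponds to a doubling (renormalization).
    """
    n = 1 << precision_bits
    for a in range(n):
        for b in range(n):
            s = a + b
            if s < n:
                lhs = antilog[s]              # antilog(alpha + beta)
            else:
                lhs = 2 * antilog[s - n]      # 2 * antilog(alpha + beta - 1)
            if lhs > antilog[a] * antilog[b]:
                return False
    # decodability: all output values must be unique
    return len(set(antilog)) == len(antilog)
```

Under this reading, the ideal table antilog[k] = 2**(k/2**N) satisfies both relationships with equality, but any finite-precision rounding of it must be re-checked against them.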
With decodability assured, the logarithm domain is employed to convert multiplications and divisions in the real number domain into additions and subtractions in the logarithmic domain. This achieves
the object of reducing computational requirements.
The present invention also provides for controlled rounding in the log table used in converting to the log domain in order to guarantee correct identification of the least probable symbol in the
decoder while still in the log domain.
Also according to the invention, optimized matched pairs of log P and log Q are determined for finite precision (and decodable) operation.
Furthermore, the present invention enables the collapse of the log P's to an optimal set for the given precision.
Further reduction of the log P table is achieved with an upper limit placed on the maximum coding error at least over a range of actual probabilities.
The present invention also provides for the encoding of successive occurrences of the most probable symbol in a given state of constant probability by a simple counting procedure.
Moreover, the present invention provides for byte, rather than bit, oriented input/output of compressed data with minimal testing.
In addition to the above-mentioned features and objects achieved by the present invention, the present logarithmic encoder/decoder addresses difficulties recognized as generally affecting arithmetic
coding compression/de-compression systems.
Furthermore, it should be observed that the antilog table embodied with the aforementioned constraints may be used in applications other than in arithmetic coding. In general, where multiplications
and divisions of real numbers are performed, such calculations can be facilitated by performing additions and subtractions in the logarithmic domain. The aforementioned antilog table enables the
conversion to and from the logarithmic domain where the real numbers and logarithms have finite precision and where reversibility is required.
FIG. 1 is a diagram illustrating the basic structure of a compression/de-compression system including a log coder and log decoder.
FIG. 2 is an illustration showing a piece of number line divided into segments, one corresponding to the most (more) probable symbol and the other segment corresponding to the least (less) probable
FIG. 3 is a flowchart showing an encoder and decoder in an arithmetic coding compression/de-compression system.
FIG. 4 is a flowchart remapping the structure of FIG. 3 to a log coder implementation.
FIG. 5 is a graph depicting coding inefficiency relative to the entropy limit for all possible LP values for a sample probability interval. P[<LPS>] has the same meaning as Q and LP represents the
log2 (1/P[MPS]).
FIG. 6 is a graph depicting coding inefficiency relative to the entropy limit for an optimum set of LP values for a sample probability interval.
FIG. 7 is a graph depicting coding error relative to the entropy limit for a reduced set of LP values for a sample probability interval.
FIG. 8 is a graph depicting coding error relative to the entropy limit for all 48 LP values in the reduced set.
FIG. 9 is a flowchart illustrating the operations of a log coder according to the present invention.
FIG. 10 is a flowchart illustrating the operations of a log decoder according to the present invention.
FIGS. 11 through 31 are routines employed in the log coder or decoder or both to perform the arithmetic coding in accordance with the invention.
DESCRIPTION OF THE INVENTION I. Introduction
Section II of this disclosure discusses some basic principles of arithmetic coding that pertain to the implementation of a log coder. Section III develops the basic mathematical rules for arithmetic
coding in the log domain, and therefore the rules by which the various antilog and log tables must be generated. Section IV discusses more detailed aspects of an actual implementation, dealing with
such practical aspects as changing the state being coded, renormalization, carry propagation, and byte oriented input/output. It also describes the implementation of a probability adaptation concept
described in a co-pending patent application of W. B. Pennebaker and J. L. Mitchell, co-inventors of the present invention, entitled "Probability Adaptation for Arithmetic Coders" filed on even date
herewith. Section V describes tests of the functioning encoder and decoder.
Appendix 1 contains a software implementation of the log encoder/decoder in the Program Development System (PDS) language. PDS, it is noted, used forward polish notation (i.e., the operators are
followed by the operands). Appendices 2, 3 and 4 contain the PDS software for the calculation of the log and antilog tables. Appendix 5 contains a detailed operational test of the encoder and
II. Basic Principles of Arithmetic Coding as Applied to Log Coding
Referring to FIG. 1, the basic structure of a compression/de-compression system 100 is shown. The encoder 102 is divided into two basic parts, a state generator 104 and a log coder 106. The decoder
110 is the inverse of this, including a log decoder 112 and a state generator 114. The state generator 104 contains a model which classifies input data into a set of binary context states which are,
in general, history dependent.
The state generator 104 communicates to the log coder 106 the context state for the current binary decision. A similar context state output is provided by the state generator 114 for the decoder.
The context state, it is noted, identifies the most probable symbol (MPS); stores an index value I corresponding to a probability value in the lookup table; stores a count (i.e., the k count)
representing the number of occurrences of the LPS symbol; and stores a count (i.e., the n count) representing the number of occurrences of either symbol. The YN (yes/no) output from the encoder state
generator 104 informs the log coder 106 what the current binary symbol is.
In the example of a black/white facsimile system, the state generator 104 identifies a context state which is arrived at as the result of a particular pattern (or one of several patterns) of black/
white pels. The actual next pel in the pattern is provided as a YN output.
The YN output from the log decoder 112 informs the decoder state generator 114 what the current symbol is. Using the YN value the decoder state generator 114 is able to reconstruct the output data.
The output from the log coder 106 is a single merged compressed data stream. Each context state, however, is encoded by the log coder 106 as if it were an independent entity. Preferably, additional
information is passed via the state which enables the encoder 102 and decoder 110 to perform calculations required for probability adaptation (such as is set forth in the aforementioned patent
application filed on even date herewith). Combining the encoding/decoding method or apparatus of the present invention with the probability adaptation improves overall computational efficiency and
compression performance. However, the present invention may be implemented in a system wherein probabilities are predetermined or are fixed and do not adapt with entry of new data. Hereinbelow, the
present invention sets forth an embodiment employing probability adaptation.
In encoding a set of binary decision, the present invention represents a point on the probability number line with sufficient precision to uniquely identify the particular sequence of symbols from
all other possible sequences. The intervals on the number line are allotted according to the probabilities of the sequences. The more probable sequences are coded with less precision--i.e., with
fewer bits--and cover a larger interval.
A schematic of the mathematic operations for arithmetic coding based on infinite precision (rather than finite precision as in the present invention) is shown in FIG. 2. The symbol X(n-1) is an
infinite precision compressed data stream generated in coding the n-1 prior binary decisions. X(n-1) points to the bottom of the range, along the number line, that is available to the nth symbol. R
(n-1) is in the range available for that symbol. The log coder 106 uses the convention that the most probable symbol (MPS) and least probable symbol (LPS) are ordered such that the probability
interval allotted to the MPS is below that of the LPS. Then, if the nth YN is an MPS, the point represented by X(n) remains fixed at the base of the interval and the range R(n) shrinks to that
accorded the MPS. If the nth YN is an LPS, a the MPS range is added to X(n-1) to shift the bottom point of the range X(n) to the base of the interval for the LPS and the range R(n) shrinks to that
accorded the LPS. These two possibilities are shown graphically in FIG. 2.
Adopting the convention that the MPS occupies the lower part of the interval has computational advantages. In this regard, it is noted that the only information required for the coding process is a
current compressed data stream X(n-1), a current range R(n-1) along the probability number line and a current MPS probability P. (The probability of the LPS, Q, is by definition 1-P.) As the nth
symbol is started, the current compressed data stream and interval are known. The current probability is determined from the context state information, i.e., the I value.
The basic principles of arithmetic coding discussed hereinabove are now considered in constructing to encoder and decoder. The problems relating to finite precision and carry propagation will be
ignored for the moment.
A flow chart for an infinite precision arithmetic encoder and decoder is illustrated in FIG. 3. The encoder is identified at block 200 of the flow chart and the decoder is identified at block 210.
Note that the oval blocks label the function which is performed by the flow chart.
The encoder first reduces the range from R to R*P at block 202. If an MPS is being encoded (at decision block 204), that is all that is required for that symbol. If an LPS is being encoded, the
compressed data stream X is increased by R so that it points to the bottom of the new probability interval (see X(n) in FIG. 2 wherein YN≠MPS), and the range R is further reduced by multiplying by
the ratio Q/P (Q being less than P by definition). These operations are performed at block 206. The coded data is in compressed form and may be transferred by any of various known elements 208 to the
The decoder operates in a manner analogous to the encoder. For each symbol being decoded, the decoder first reduces the current range R to R*P. The symbol YN is assumed to be an MPS, and is so set at
block 212. If the compressed data stream X is determined at decision block 214 to be less than the new range R, the symbol must have been an MPS, and the decoder has successfully decoded the symbol.
If this test fails, however, an LPS must be decoded. The YN symbol is switched to an LPS via an `exclusive or` operation; the range allotted to the MPS subtracted from X; and--as in the encoder--the
new range is further reduced by the ratio Q/P in block 216.
The particular sequence of operations described in FIG. 3, it is noted, leads to a computationally efficient implementation relative to other valid operational sequences.
The remapping of FIG. 3 to a Log Coder and Log Decoder implementation is shown in FIG. 4. This is an overview of the function performed in the Log Coder 106 and Log Decoder 112 illustrated in FIG. 1.
The logarithm of the current range, LR, is the measure of the probability interval available. LP is the log of the current MPS probability. The product, R*P, is therefore replaced by the sum of
logarithms in block 302. Again, if YN is the MPS at decision block 304, the encoding operation is complete (conceptually). If YN is an LPS, the compressed data stream X must be increased by the
antilog of the LR determined at step 302, and the log of the ratio of Q/P (LQP) must be added to LR as shown at block 306.
The decoder follows a similar structure. The range is first reduced, which in the log domain is done by adding two logarithms. (Preferably, the logarithms discussed in this embodiment are in base-2.)
This is performed at block 308. The YN symbol is initially defined to be an MPS, and a test is made (in the log domain) to see if the compressed data stream X is less than the range determined in
block 310. Note that LX and LR are logrithm magnitudes of numbers less than one. LX greater than LR means than X is less than R. If so, the log decoder is finished for that symbol. If not, the YN is
converted to an LPS; the value of X is adjusted by subtracting the probability interval allotted to the MPS (ANTILOG(LR)); a new log X (denoted by LX) is calculated; and the log of the range LR is
adjusted to match to the LPS range. These operations are performed at block 312.
This section has described the basic conceptual structure of the log coder and decoder. In this description the quantities LP and LQP were assumed to be looked up in tables from the context state
information. In addition, the antilog calculation was assumed to be performed by a simple lookup table procedure. In making these assumptions, a number of very fundamental questions regarding finite
precision arithmetic and guarantees of decodability have been side-stepped. These questions are addressed in the next section.
III. Mathematical Principles and Generation of Tables for Arithmetic Coding in the Logarithmic Domain
If decodability is to be guaranteed, there must never be any overlap of intervals along the probability number line. A necessary condition for this is:
P+Q≦1 Eq. 1
In the long domain this becomes:
antilog(log P)+antilog(log Q)≦1 Eq. 2
where the antilog is done by lookup tables which are generated subject to constraints described below.
The precision of log P is restricted to a given number of bits. For 10 bit precision log P can be regarded as being renormalized to convert it to a 10 bit integer having values from 1 to 1024. Since
P is a fraction where 0.5≦P<1, log P is negative. For convenience, LP and LQ are defined as: ##EQU3## where the factor 1024 applies to 10 bit precision in the log tables.
Equation 2 is not a sufficient condition for decodability. In this regard, it is noted that in general, the range R is not 1, so equation 1 must be modified to:
R*P+R*Q≦R Eq. 4
which in the log domain becomes:
antilog(log R+log P)+antilog(log R+log Q)≦antilog(log R) Eq. 5
Dividing both sides of equation 5 by antilog(log R) gives: ##EQU4## In order for both equations 2 and 5 to always hold ##EQU5##
Therefore, if in general
antilog(α+β)≦antilog(α)*antilog(β) Eq. 6
holds for the antilog table, equation 5 will be satisfied. Moreover, to achieve decodability, all output values of the antilog table must be unique.
Equation 6 represents a fundamental constraint on the structure of the antilog table.
To limit the size of the antilog table, equation 6 is revised slightly as follows.
antilog(α+β)≦antilog(α)*antilog(β); where α+β<1 Eq. 7a
2*antilog(α+β-1)≦antilog(α)*antilog(β); where α+β≧1 Eq. 7b
These two relationships are used in the generations of the antilog table, together with the one further constraint; namely, all output values of the antilog table must be unique, to achieve
decodability and facilitated computation.
A program for generating and optimizing the antilog table is set forth in Appendix 2. The optimized 12 bit precision antilog table can be found in the function `maketbls` in Appendix 1. (The table
order has been inverted in the actual code, because the table address there is not generated from the true mantissa, mx, but rather from 1-mx.) Note that the antilog table is defined such that the
log index is a number between 0 and 1, giving outputs between 1 and 2 (with proper renormalization of inputs and outputs). Although 14 bits are actually used in the antilog table in `maketbls`, only
4096 entries are needed for the table. With the exception of the first entry in the table, the two most significant bits are always the same. Thus, it can be regarded as a 12 bit precision table. It
is also noted that the non-linearities of the log and antilog tables and the uniqueness requirement demand that the precision of the antilog table be somewhat higher than the precision of the log
Once the antilog table is known, the log tables can be constructed. Two tables are required. As suggested in FIG. 4, both the encoder and decoder require a knowledge of log P and log Q/P. Given all
possible values of log P (or LP), equation 2 can now be used to generate the values of log Q/P (LQP) which satisfy that constraint. While all values of log P are valid and decodable, the finite
precision and constraints placed on the antilog table make many of the log P values non-optimum (as defined below).
For 10-bit precision, there are 2^10 or 1024 possible estimated probabilities. This high number of probabilities is unwieldy. Hence, the number of probabilities is reduced according to appropriate
criteria. In the present invention, estimated probabilities are discarded based on coding inefficiency. In this regard, it is noted that coding inefficiency is defined as: ##EQU6## Entropy is equal
-P log 2(P)-Q log 2(Q) where P+Q=1 and is defined in terms of bits/symbol for ideal coding. Bit Rate is equal to -P log 2(P[est])-Q log 2(Q[est]) where (Q[est] +P[est])≦1 and is defined in terms of
bits/symbol for an estimated probability (Bit Rate=Entropy represents the ideal condition).
Referring to FIG. 5, each curve corresponds to a probability estimate. Some curves have at least one point which, for a ##EQU7## abscissa value corresponding thereto, has a lower inefficiency value
than any other curve. Some do not. Those estimated probabilities having curves that do not have a relative minimum inefficiency value are discarded as non-optimum. In FIG. 5, P[LPS] means Q and QL
and QA represent the log and antilog precisions, respectively. The relative error is the coding inefficiency defined above.
After applying the "relative minimum criteria," the 1024 possible probabilities are reduced to 131. These 131 estimated probabilities may serve as entries in a table for estimated Q's.
A plot of relative coding inefficiency with respect to the entropy limit, (P*LP+Q*(LP+LQP)-Entropy)/Entropy, is shown in FIG. 6 for a sample probability interval for the set of 131 remaining
If slightly larger coding inefficiencies are allowed than the table would produce, the table can be further reduced. For example, deleting entries which can be replaced by neighboring entries without
increasing the coding inefficiency to more than 0.2% further collapses the table to 48 entries. FIG. 7 shows this set for the same probability interval used for FIG. 6. FIG. 8 shows the coding
inefficiency curves for all 48 entries. Code for constructing and collapsing the LP, LQP tables is given in Appendix 3. A 10 bit precision LP, LQP table is found in the function `maketbls` in
Appendix 1. This table has been collapsed in 48 entries.
In addition to the LP and LQP tables, the decoder also requires a log table in order to generate LX, the log of the current piece of the compressed data. This table must be structured so as to
guarantee that the comparison of LX with LR will give exactly the same decision as a comparison of X with R. The following basic rules must be followed in constructing the LX table:
1. There must be an entry for every possible antilog value. Therefore, 4096 entries are required if the antilog table has 12 bit precision. Since only 10 bit precision is required in the log domain
for this case, the table can be guaranteed to have all possible (1024) output values.
2. The LX table must be reversible for all values which are outputs of the antilog table. That is:
LX=log (antilog(LX)) Eq. 8
3. Given an Xa and Xb as two adjacent output values from the antilog table, and an X' such that Xa>X'>Xb, the output for log X' must be rounded down to log Xb. The reason for this is as follows:
For each Xc generated by the antilog table, if the bound between LPS and MPS occurs at Xc, Xc must be distinguished from Xc-1. If X≧Xc, and LPS has occurred, and if X≦Xc-1, an MPS has occurred.
Therefore, for any given antilog table output, if X is slightly less than that output, log (X) should be rounded down to the log of the next antilog table output. If it were rounded up, it might be
decoded as an LPS. Since the log (R) must be changed by at least one (i.e., by the smallest log P increment allowed) before decoding the next symbol, there is no danger that the next symbol might be
erroneously decoded as an MPS.
IV. Detailed Description of the Log Coder Operation A. Definitions
LR is defined as 1024*(-log R), in keeping with the conventions used for LP and LQP. LR is a 16 bit integer, but can be considered a binary fraction with the binary point positioned to provide 10
fractional bits and 6 integer bits. LR serves a dual function function; it is a measure of the current range and also a measure of the symbol count for probability adaption. Probability adaptation is
discussed in the previously cited patent application.
X contains the least significant bits of the compressed data stream for the encoder. In the encoder, X is a 32 bit word consisting of 12 integer bits and 20 fractional bits plus any carry. In the
decoder only the 20 fractional bits are used and X is the current most significant bits after proper renormalization.
B. LOG CODER (FIG. 9):
FIG. 9 shows a flow chart for an actual software implementation of the log coder. Some of the basic structure from FIG. 4 should be apparent, but there are significant addition. Before encoding a new
symbol, a check is made to see if there is a new context state, NS, changed from the previous context state, S. The new context state is dictated by the model. By way of example, if the neighborhood
identified with a pel changes from being predominantly black to predominantly white, a corresponding model would declare a new state (NS). Accordingly, a new set of parameters--MPS I, k, and n (or
the like)--would be called.
CHANGESTATE (FIG. 11) is called to save and restore (a) a pointer to the probability table, (b) parameters required for probability adaption, and (c) the MPS symbol. Once the context state is set, LR
is increased by LP, as in FIG. 4. (Note the sign conventions for LR and LP.)
The adjusting of LR serves as a count indicator in the following way. If R is the current range and P is the MPS probability, the new range R' after n events which happen to be MPS's is:
R'=R*P*P* . . . , P=R*P^n
In logarithmic form, this becomes:
log R'=log R+n log P
which can also be represented as:
|log R'|=|log R|+n|log P|
The number n is then set forth as:
n|log P|=|log R'|-|log R|
Hence, the adjustment of R in logarithmic terms (i.e., LR) is in units of |log P|. When nmax log P=n log P, the probability adaption discussed herein and in the co-pending application is triggered.
The YN decision is compared to MPS, and if an MPS has occurred, the coding is essentially complete. However, range checking must now be done to see if LR is too big. In this regard it is noted that
event counting is performed in movements of log P rather than by 1. That is, instead of counting so that n⃡n+1, the count for the MPS increments by log P with each occurrence of an MPS symbol. LR is
compared to a quantity LRM, which is a measure of the total symbol count required before a meaningful check can be made on the current probability. If LR does reach LRM, a call is made to UPDATEMPS,
where checks are made to see if a new (smaller Q) probability is required. The UPDATEMPTS routine is described in the co-pending application which, it is noted, is directed to a single context state.
If YN is not equal to MPS, an LPS has occurred. In this case the LPS counter, K, is incremented and a measure of the number of symbols encountered in the current block, DLRM, is saved. RENORM is then
called to do any renormalization of LR and X required to guarantee that the integer part of LR is less than 8. After renormalization, ANTILOGX is called to calculate the amount DX which must be added
to X to shift the compressed data to the base of the LPS probability interval. Then XCARRY is called to handle any carry which might have occurred when DX was added to X, and finally, UPDATELPS is
called to take care of any probability adaptation (larger Q). LRM is then calculated from the saved DLRM and the new LR value.
C. LOG DECODER (FIG. 10)
The decoder also has some significant additions relative to the conceptual version of FIG. 4. As in the encoder, if the context state has been changed from the last decoder call, CHANGESTATE is
invoked. A comparison measure LRT is specified as the smaller of LRM and LX. LRT must be updated if the context state is changed. LP is then added to LR to decrease the range, and the YN answer is
defaulted to MPS. LR is then compared to LRT and, if smaller, the MPS symbol decode is complete. Since LRT is the smaller of LRM and LX, the comparison of LR against LRT serves a dual purpose. If LX
is larger than LRM, passing the test means that the probability adaptation data block size was reached and that checking for probability adaptation must be done. If LX is not bigger than LR, either
an LPS symbol has been decoded or renormalization is required. The count of symbols for the current data block is saved in DLRM and the range measure LR is then renormalized. If LX is still greater
than LR (LX and LR are the magnitude of log X and log R), only a renormalization is needed. However, LR must still be compared to LRM in the event that UPDATEMPS is required.
If LX is equal to or less than LR, an LPS has been decoded. The LPS count, K--which is employed in probability adaptation--is incremented; the YN answer is switched to LPS; and the antilog table is
used to calculate DX, which is then subtracted from X. A new LX is then calculated; the probability adaptation code is called; and finally LRM is updated. For all paths which require a new value of
LRM, UPDATELRT is called to calculate a new value of LRT.
D. CHANGESTATE (FIG. 11)
CHANGESTATE saves the total symbol count DLRST(S), required for the probability adaptation in that state. (The LPS count K and the pointer to the probability table--i.e., log P, log Q/P, etc.,--are
saved each time they are changed and do not need to be saved here.) The pointer S is then shifted to the new context state NS. The LPS count K, the probability table pointer I, and the current MPS
are restored. The value of the current log P is represented by LP (which is preferably saved in a register), and LRM is calculated from the current LR and the saved measure of the symbol count is
this context state. LRMBIG is then called to do any renormalization that might be needed.
E. UPDATEMPS (FIG. 12)
UPDATEMPS checks to see if the probability needs to be adjusted. If the probability confidence measure so indicates, QSMALLER is called to adjust the probability table pointer to a smaller Q value.
UPDATEMPS then resets the LPS count, K; stores the new value in KST(S); and adjusts the comparison value LRM to the end of the new block of symbols. LRMBIG is then called to see if any
renormalization is required before proceeding.
F. QSMALLER (FIG. 13)
QSMALLER performs the probability adaptation as described in the co-pending cited patent application. Basically, if the LPS count K is too small, the probability table pointer I is adjusted to a new
position (smaller Q) which restores confidence in the probability estimate. The last entry in the table of log P values is a zero. This is an illegal value which is used as a stop. If zero is
encountered, the index is backed up to the last valid entry in the table. It is also noted that whenever I is changed, the new index is saved in the context state information. The value of LP must
also be updated from LOGP(I).
G. RENORM--for the log encoder (FIG. 14)
RENORM does the renormalization of LR and X, thus preventing LR from overflowing the allowed 15 bit range. Each time the characteristic part of LR (a log magnitude quantity) reaches or exceeds 8, one
byte can be shifted from X to the compressed data stream buffer. The byte pointer BP is incremented to point to the next byte position; the high order byte in X is stored in B (byte pointed to by
BP); and LR is decremented by the integer value of 8 (i.e., hex 2000). At that point the bytes remaining in X can be shifted left by 8 bits. Each time a new byte is added to the buffer, CHECKFFFF is
called to see if a bit pattern of hex FFFF has been created in the buffer. If so, a byte must be stuffed in the buffer to block carry propagation. This is a form of bit stuffing as described in the
aforementioned Rissanen-Langdon article entitled "Arithmetic Coding".
Each time a byte is added to the buffer a check is made to see if the buffer is full. If so, BUFOUT is called to transmit the contents of the buffer. This loop is repeated until the integer part of
LR is less than 8.
H. ANTILOGX (FIG. 15)
ANTILOGX calculates the antilog of LR wherein DX--the amount which must be added to the code stream in the encoder and subtracted from the code stream in the decoder--is obtained. The mantissa of LR
(MR) is first obtained from the low 12 bits of LR. (Actually, 1-MR is the true mantissa, but the antilog table is inverted to avoid this subtraction). MR is used to index into the ANTILOG table (see
function `maketbls` in Appendix 1.) CT, the integer part of LR, is obtained by shifting LR right by 10 bits. The true characteristic would be calculated as 8-CT if one unit were not already imbedded
in the MR value. Consequently, the output of the ANTILOG table only needs to be shifted by 7-CT to provide DX.
J. CHECKFFFF (FIG. 16)
As described earlier, CHECKFFFF looks for hex FFFF patterns on byte boundaries in the code stream. If the pattern is found, a zero byte is stuffed in the code stream following the FFFF pattern.
K. UPDATELPS (FIG. 17)
UPDATELPS is called when an LPS occurs. It first adjusts the range measure, LR, to that of the LPS. It then checks to see if probability adaptation is required by comparing the count K with Kmax. If
K is greater or equal to Kmax, QBIGGER is called to shift the probability table pointer to a larger Q. The LPS count K is then zeroed and the block counter DLRM reset. The new probability table index
is stored in the context state information and LP is updated.
If the current LPS count is within the confidence limits (K<Kmax), the total count measure DLRM is checked to see if it is negative. If so, it is clamped to zero.
The new K value is stored in the context state information as the last step.
L. QBIGGER (FIG. 18)
QBIGGER moves the probability table index to a larger Q. If required, it also interchanges the definitions of LPS and MPS. Since the probability table does not extend beyond Q=0.5 (the symbols are
interchanged instead), there is a discontinuity in the probability adjustment procedure at Q=0.5. This discontinuity is approximately compensated for by saving the unused table index increment in
INCRSV and using it (in SWITCHMPS) to adjust the index to a smaller Q after the MPS-LPS interchange. The test sequence and table pointer adjustment algorithm are described in the previously cited
patent application. After adjustment of the index I a new LP value is set.
M. INCRINDEX (FIG. 19)
INCRINDEX shifts the probability table index to a larger Q if possible. If the pointer is already at the top of the table (I=0), the unused increment is saved for use in SWITCHMPS.
N. DBLINDEX (FIG. 20)
DBLINDEX attempts to adjust the probability table index so as to double the value of Q. If it can not do so, the index change is saved in INCRSV for use in SWITCHMPS.
P. SWITCHMPS (FIG. 21)
SWITCHMPS checks to see if the probability table index is at the top of the table. If so, it is time to interchange the MPS and LPS symbols. The revised definition of the MPS is stored in the context
state information, and any unused table adjustment is now added to I to shift the new interchanged Q to a smaller value.
Q. LOGX (FIG. 22)
LOGX calculates log X for use in the decoder. Noting that X is a 20 bit binary fraction, the characteristic is first obtained by shifting X until only the eight most significant bits of X are
present. A maximum of 8 bits can be used for the characteristic, as at least 12 bits must be retained in X for it to align properly with the output of the antilog table. If the eight bits (CX) that
define the characteristic are all zero, the value of LX is defaulted to hex 2000, a lower limit to the true value. For the conventions used, hex 2000 is equivalent to a characteristic of -8. If CX is
not zero, it is converted to the true characteristic by a lookup table (chartbl in the function `maketbls` in Appendix 1). After the shift by CT, the X value has 13 bits, but the leading bit is
always 1 and is truncated by the `AND` with hex FFF. This provides the address to logtbl (see `maketbls`) where the log of X is tabulated. The output of the log table is subtracted from the
characteristic of X, properly shifted to conform to the conventions used for the logs.
R. RENORM--for the log decoder (FIG. 23)
If the LR value is hex 2000 or larger, renormalization is required before an LPS symbol can be decoded. RENORM is also called when LRM gets too large to fit in 15 bits. RENORM's first task is to
check for the hex FFFF pattern in the code stream. If the previous two bytes read from the buffer (B and BO) were both hex FF, the next byte will contain only a possible carry which must be read and
added to the present value of X. After this special situation is taken care of, X is shifted by 8 bits and the next byte read and added in. Each time a non-carry byte is read from the code stream, LR
is decremented by hex 2000, the change in the characteristic required for an 8 bit shift in X and R. This process is repeated until the integer part of LR is less than 8. After renormalization, a new
value of LX is obtained in the call to LOGX.
S. BYTEIN (FIG. 24)
Each time a byte is read from the code stream buffer the pointer BP is first incremented and a check made to see if it is at the end of the buffer (BE). If so, the final byte in the buffer BO is
moved to the byte just preceding the start of the buffer BA in order to preserve any possible hex FFFF patterns. Then a new buffer is obtained before returning with the pointer at the byte to be
T. UPDATELRT (FIG. 25)
LRT is the parameter required in the comparison with LR in the decoder. The test has two functions: first to detect the occurrence of an LPS, and second to see if the current block count is at the
point where a probability update may be required. LX is the measure for the first test, LRM for the second. LRT is always set to the smaller of the two.
U. LRMBIG (FIG. 26)
This code makes sure that LR does not overflow a 15 bit range. If LRM (which is always greater or equal to LR) exceeds hex 7FFF, the 16th bit has been set and renormalization is required. Note that
DLRM, the symbol count in the current block, is saved before renormalization so that the probability adaptation is not disturbed.
V. XCARRY (FIG. 27)
XCARRY checks to see if the latest addition to X in the encoder has caused a carry. If so, the carry is propagated to the last byte written into the code stream buffer. That byte is checked for
overflow, and any hex FFFF pattern that might have been created is also handled. This technique has been described in a co-pending patent application to J. L. Mitchell and G. Goertzel entitled,
"Symmetrical Adaptive Data Compression/Decompression System" which is incorporated into this application by reference to the extent, if any, required to fully disclose the present invention.
W. BUFOUT (FIG. 28)
BUFOUT has the task of transmitting a complete buffer and moving the last three bytes written to the buffer back to the beginning of the buffer. Depending on the circumstances, the bytes moved may or
may not be part of the next buffer, but may be required for CHECKFFFF and XCARRY.
X. INITENC (FIG. 29)
INITENC does the initialization for the encoder. It first sets up the tables required, such as the state tables and the various log, antilog and probability tables. It points the state pointer at a
dummy state so that a call to CHANGESTATE is forced when the first symbol is encoded. It sets the length of the buffer for the code stream to 256 bytes (an arbitrary but convenient choice), and
initializes the pointer to that buffer to 3 bytes before the actual start of the buffer to be sent. The initialization conditions create two dummy bytes of data that are not required to be sent. The
pointer is updated before a byte is written; hence an offset of 3 at initialization is needed. Zeroing Z zeros both the 20 fractional bits and the 12 integer bits, creating 12 bits in the code stream
which are guaranteed to be zero. LR should, in principle, be initialized to 1, giving a range slightly smaller than 1. However, by initializing to hex 1001, an extra 4 bit shift of X is forced,
creating exactly two bytes of zeros. These become history for the carry propagation code, but are never sent. The pointer BE is set to point two bytes beyond the last byte in the actual buffer being
sent. Note that CHECKFFFF may move the buffer pointer beyond this point before the contents of the buffer are actually sent. Note that setting LRM to LR is not necessary. The CHANGESTATE call
overrides this and initializes LRM with LR+DLRST(I). Currently, the initialization of all context states is to IST(S)=0, MPSST(S)=0, KST(S)=0, and DLRST(S)=NMAXLP(0). Thus, all states start a new
block with P=0.5 and MPS=0.
Y. INITDEC (FIG. 30)
INITDEC does the initialization for the decoder. All states are initialized as in the encoder. Note that again a call to CHANGESTATE is forced. The initialization of X is from the buffer of
compressed data. It is noted, however, that LR is initialized to match to the encoder. This forces the extra 4 bit shift required to match the shift in the encoder. LX is calculated by LOGX from the
actual code stream data. If the initial two bytes happen to create the hex FFFF pattern, the next zero byte must be skipped.
Z. FLUSH (FIG. 31)
FLUSH is called after the last symbol has been encoded to `flush` out any residual compressed data from X. By calling RENORM all but the last 4 bytes are sent. Those 4 bytes are forced out by adding
hex 8000 to LR and calling RENORM again. Then any buffers or partial buffers remaining must be sent.
V. Tests of the Log Coder
Tests of the present encoder and decoder have included tests on large image data files of both grayscale and facsimile data. The results show that the encoder and decoder are running without error,
and that performance is slightly better (less than 1% better) than the arithmetic coder described in the Mitchell-Goertzel patent application referenced hereinabove. It should be noted, however, that
no attempt was made to initialize the log coder to the expected skew as was done in the coder described in the Mitchell-Goertzel patent application referenced hereinabove. According to the test
performed, a grayscale encoder according to the present invention produces context states with remarkably stable and well behaved statistical distributions.
The results of a detailed test are given in Appendix 5 for a 256 bit data file. This test contains a comprehesive breakdown of the operation of both the encoder and decoder.
The Log Encoder/Decoder system is designed for implementation on both 16 and 32 bit processors. As noted hereinabove, the operations are structured to make the computational effort scale with the
output of compressed data. When working with large probability skews in a single context state, the encoding process reduces essentially to a single 16 bit addition, a test for the type of symbol
encountered, and a test for overflow of the allowed range. At the same time, coding efficiency is not sacrificed. Unless the skews are very large (and very little coded data is being generated) the
coding efficiency is within 0.2% of the entropy limit, given a correctly specified probability.
VI. Alternative Embodiments
While the invention has been described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made without
departing from the scope of the invention.
For example, as suggested hereinabove, the described embodiment includes probability adaptation. That is, as data is processed, the probabilities are adjusted in response thereto. A number of the
functions detailed hereinabove relate to this feature. The present invention may however also be practiced without probability adaptation--wherein probabilities do not adapt based on prior inputs--in
which event an artisan of ordinary skill may readily omit or alter the operations as appropriate.
Moreover, although described in the context of a grayscale application and a general facsimile application, the teachings of the present invention extend also to other environments in which the
product of probabilities are required. Such environments include, but are not limited to, weather calculations, language applications (such as speech recognition), and other problems that may
characterized in a probabilistic manner. The result of reducing computational requirements by converting to and computing in the log domain has general use and is intended to have broad scope of
It is further noted that, although set forth with a binary arithmetic coding preferred embodiment, the present invention may also be applied to environments in which more than two outcomes (or
answers) may result from a decision. In such a case, the multisymbol outcome may be represented as a group of binary decisions or some alternative approach employing the teachings of the invention
may be implemented.
It is also observed that the logarithmic domain set forth hereinabove is preferably the base-2 logarithmic domain, although some of the logarithms may be taken in other bases. In this regard, base-2
logarithms are notated by log[2] or log2.
It is further noted that the number line may be ordered with the P-related values at the lower end or with the Q-related values at the lower end in accordance with the invention. ##SPC1## ##SPC2## ##
SPC3## ##SPC4## ##SPC5## ##SPC6## ##SPC7## ##SPC8## | {"url":"http://www.google.com.br/patents/US4791403","timestamp":"2014-04-16T13:38:55Z","content_type":null,"content_length":"128689","record_id":"<urn:uuid:ee133929-3aed-4b0c-916b-77914be5fa19>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
When using an FFT, should I convert the A-to-D samples into a voltage?
When using the FFT for DSP, should I convert the A-to-D samples back into voltages?
A-to-D sample = sample voltage * (bits / max voltage)
sample voltage = (A-to-D sample) * (max voltage / bits)
That is right, right?
Anyhow, should I keep the samples as digital samples or convert them back to voltage values before I run them into the FFT?
Why or why not?
I kind of think it should matter... but I do not know. I am new to this.
I do think that the A-to-D / mic input on a computer might remove all the negative numbers, but I do not know if you would get negative numbers from a microphone...
Do you?
A microphone puts out an AC signal. How are you handling that at your ADC? Are you offsetting the zero crossing voltage of the input signal to bias it up to half of the input voltage range of the
ADC? If so, you will want to subtract that offset out of your digital data before doing the FFT, or else just subtract the DC component out of the final FFT data.
If your ADC circuit outputs 2's complement data to represent the AC waveform, you will probably need to do something before the FFT, I would think... | {"url":"http://www.physicsforums.com/showthread.php?s=0f3b42447c0e9d41fbf51cda945131e7&p=4502364","timestamp":"2014-04-17T21:38:01Z","content_type":null,"content_length":"35155","record_id":"<urn:uuid:d31bc56a-895e-49ba-a423-5cb7ddbcfa55>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
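In case it helps, here is a minimal numpy sketch of that idea. The 10-bit resolution, 3.3 V reference, and 8 kHz sample rate are made-up example numbers, not anything from your setup. Scaling the codes to volts only multiplies every FFT bin by the same constant (the FFT is linear), so it does not change the shape of the spectrum; subtracting the DC offset is the part that matters.

```python
import numpy as np

N_BITS = 10     # example ADC resolution (assumption)
V_REF = 3.3     # example full-scale reference voltage (assumption)
FS = 8000.0     # example sample rate in Hz (assumption)

def spectrum_from_codes(codes):
    """FFT magnitude spectrum from raw ADC codes.

    Converting codes to volts is optional (it only rescales the result);
    removing the DC bias before the FFT is what keeps the offset from
    swamping bin 0."""
    volts = np.asarray(codes, dtype=float) * (V_REF / (2**N_BITS - 1))
    volts -= volts.mean()                      # remove the DC bias/offset
    mags = np.abs(np.fft.rfft(volts))
    freqs = np.fft.rfftfreq(len(volts), d=1.0 / FS)
    return freqs, mags
```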
Crust and Skeleton
A concept closely associated with the VD is the medial axis transform (MAT), or "skeleton". This is a subset of the VD, and is usually applied to polygons -- each point on the MAT is equidistant from at least two edges of the polygon. The resulting graph may be used to classify the polygon shape (Blum, 1967^[1]) -- for example in character recognition. This will be discussed further when we talk about using the line segment VD to form polygons. However, each boundary of a complex object may be approximated by multiple points, and the MAT can then be constructed by throwing away the unwanted edges, as before. Thus boundary points taken from an image may be used to generate the Euclidean VD and hence the MAT, giving a kind of shape description. The work of Ogniewicz shows several examples. Okabe et al. (2000)^[2] show a variety of examples of generating complex objects from point sets on the boundary.
One problem, however, is that it is not always easy to colour the black boundary pixels so that edges may be distinguished. In this case all input points to the VD will have the same colour, and no
MAT will be produced. (In the work of Gold (1998) just described, vertices were coloured by flood-filling the individual polygons.)
Computing the crust and the skeleton
A brilliant recent paper (Amenta et al., 1998) suggests how a solution may be achieved using "Voronoi Filtering". First generate the VD of the "black" boundary points, and extract the "red" Voronoi vertices. Then construct a DT of both the black and red points together, and keep only the triangle edges whose endpoints are both black points. If the original boundary was sufficiently well sampled, the original boundary points will be correctly connected. (Sufficiently well sampled was shown to be a function of the distance from the boundary to the MAT; it is usually easily achievable except at sharp corners.) Of interest to us here is that the MAT is also extracted. A useful terrestrial example is the extraction of approximate watershed boundaries from a scanned hydrological network -- the MAT gives the watershed between watercourses. Of greater marine interest would be the extraction of spatial clusters (of boats, fish, ...) using the same method. The boundaries of diffuse clusters are rarely distinct, but the MAT separating the clusters is often well defined.
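A rough sketch of the Voronoi-filtering step described above, using scipy (an assumption -- any Delaunay/Voronoi library would do). This simplified version returns only the crust edges; the skeleton would be read off the Voronoi edges dual to the discarded Delaunay edges.

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

def crust_edges(samples):
    """Approximate crust of a curve from boundary sample points.

    samples: (n, 2) array of 'black' boundary points.
    Returns the Delaunay edges whose endpoints are both sample points
    (Voronoi filtering in the style of Amenta et al.)."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)

    # 1. Voronoi diagram of the samples; its vertices are the 'red' points.
    vor = Voronoi(samples)
    red = vor.vertices

    # 2. Delaunay triangulation of the black and red points together.
    pts = np.vstack([samples, red])
    tri = Delaunay(pts)

    # 3. Keep only edges joining two original (black) sample points.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            if a < n and b < n:
                edges.add((min(a, b), max(a, b)))
    return sorted(edges)
```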
One-step crust and skeleton extraction algorithm
Application examples
• Scanned cadastral map processing -- preserve the topological relationships
• Terrain generation from contour maps - facilitate ridge and valley delineation
• Re-contouring of maps at different intervals
• Polygon generalization and text recognition
1. Blum, H. (1967). A transformation for extracting new descriptors of shape. In: Wathen-Dunn, W. (ed.), Models for the Perception of Speech and Visual Form. MIT Press, 153-171.
2. Okabe, A., Boots, B., Sugihara, K. & Chiu, S. N. (2000). Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. John Wiley and Sons.
Journal of Electrical and Computer Engineering
Volume 2013 (2013), Article ID 891429, 9 pages
Research Article
Energy-Aware Adaptive Cooperative FEC Protocol in MIMO Channel for Wireless Sensor Networks
^1School of Computer Science & Engineering, Changshu Institute of Technology, Changshu 215500, China
^2Department of Computer Science & Technology, Nanjing University of Technology, Nanjing 211816, China
Received 26 March 2013; Accepted 23 June 2013
Academic Editor: Dharma Agrawal
Copyright © 2013 Yong Jin and Guangwei Bai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
We propose an energy-efficiency-based adaptive cooperative forward error correction (ACFEC) scheme that combines a Reed-Solomon (RS) coder with multiple-input multiple-output (MIMO) channel technology and monitors the signal-to-noise ratio (SNR) in wireless sensor networks. First, we propose a new Markov chain model for FEC based on RS codes and derive expressions for QoS on the basis of this model, comprising four metrics: throughput, packet error rate, delay, and energy efficiency. Then, we apply RS codes together with MIMO channel technology in a cross-layer design. Numerical and simulation results show that the joint design of MIMO and adaptive cooperative FEC based on RS codes achieves considerable gains in spectral efficiency, real-time performance, reliability, and energy efficiency.
1. Introduction
Wireless sensor networks are undergoing an unprecedented revolution [1, 2], driven by the explosive growth and diversity of wireless application services such as data acquisition and interactive mobile multimedia applications. Various sophisticated applications in wireless sensor networks require different levels of quality of service (QoS). In order to prolong the lifetime of the sensor network and achieve better utilization of scarce radio resources, cooperative communication approaches and error control schemes have drawn significant research attention [3–8].
The use of forward error correction (FEC) [9] is a classical solution to improve the reliability of data transmissions in wireless sensor networks. In particular, packet-based Reed-Solomon coding [10] has already been used in many application services of wireless sensor networks, and many delay-sensitive multimedia services rely on the Reed-Solomon (RS) coder. An adaptive FEC technique was proposed in [3] that dynamically tunes the amount of FEC code based on the arrival of acknowledgement packets, without taking into account the signal-to-noise ratio (SNR) or bit error rate reported by receivers. Our previous study [4] discussed the energy efficiency of FEC as a function of distance and predicted the frame loss rate with a GM(1,1) model, on which basis the FEC parameters of the sensor nodes were adjusted. However, energy consumption is a chief concern in wireless sensor networks, so Gupta et al. [5] proposed an energy-efficient data gathering protocol that uses a prediction-based filtering mechanism to eliminate redundant data transmissions. Similarly, Singh et al. [6] proposed an energy-efficient transmission error recovery algorithm. Furthermore, multi-input multi-output (MIMO) schemes are commonly used in wireless sensor networks to reduce fading effects in the wireless channel [11]. A distributed threshold-based MAC protocol for cooperative MIMO transmissions was proposed in [7], which uses a threshold scheme based on the queue length and minimizes latency while ensuring the stability of the transmission queues. Del Coso et al. [8] considered clustered wireless sensor networks for minimum end-to-end outage probability and presented a per-link energy constraint for cooperative distributed MIMO channels.
However, these methods and research findings have some drawbacks. The influence of the RS coder on packet error rate and communication quality in cooperative transmission is not considered in [3, 4], which also ignore the combination of the RS coder with MIMO channel technology. Energy information based on the RS coder is not considered in [5, 6]. How to select optimal relay nodes adaptively and set the parameters of FEC schemes dynamically was not considered in [7, 8].
On the basis of the above research, we analyze an adaptive cooperative FEC mechanism that aims to deliver data over wireless sensor networks with high quality. First, we combine the RS coder with MIMO channel technology. Second, the relay node is chosen according to an SNR threshold derived from energy-efficiency characteristics. Our analytical and simulation results show that the proposed mechanism is capable of utilizing the available network resources and achieving good perceptual quality, in terms of system throughput, reliability, and real-time performance, as well as maximizing the wireless sensor network lifetime.
2. System Model
2.1. Network Model
The FEC system model based on RS codes in MIMO channels using STBCs for wireless sensor networks is shown in Figure 1. Assume that there are transmit antennas and receive antennas.
Multiple modulation and coding schemes are used at the physical layer. The symbols are coded and transmitted over the MIMO fading channels after space-time block coding. The SNR is monitored at the receiver node and then sent back through a feedback channel to the FEC controller, which chooses the appropriate parameters of the RS coder for the next transmission accordingly.
At the data link layer, the FEC protocol is adopted for error control of packet transmissions. The raw data packet is encoded by the with encoder. Here, is the length of the elements in the finite field, which is in bits and belongs to . Note that the number of data packets generated from the source packets is denoted by and the number of redundant data packets which are made from raw data packets is denoted by and is the total number of data packets. In particular and must differ by a positive even integer; belongs to the extent of . When an error is detected in a packet, data packets can be recovered from any or more data packets which are correctly received by the receiver node. Let denote the packet error rate of all the data packets, which is given by (1) where the average packet error rate of the system is denoted by . If the error is not corrected successfully in a packet, a retransmission request is generated and sent back to the transmitter through the MIMO feedback channel. Therefore, we describe the wireless network condition with five parameters, which are .
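Although equation (1) itself is not reproduced above, the usual closed form for the residual error probability of a packet-level (n, k) RS/erasure code under independent packet errors can be sketched as follows; whether this matches the paper's exact expression is an assumption.

```python
from math import comb

def residual_block_error(n, k, p):
    """Probability that a block of n packets cannot be recovered when any
    k correctly received packets suffice for recovery and each packet is
    lost independently with probability p (standard erasure-code formula;
    assumed here, not taken verbatim from the paper)."""
    # Recovery fails when more than n - k of the n packets are in error.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

# Example: RS(15, 11) packet block with a 5% per-packet error rate.
print(residual_block_error(15, 11, 0.05))
```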
2.2. Markov Chain Model
To study the transition process between different sensor nodes in wireless sensor networks exactly and conveniently, we propose a new Markov chain model for the FEC scheme, building on the Markov model for reliable packet delivery proposed in [12]. Let denote the length of the elements in the finite field and let denote the number of data packets generated from the source packets of the RS coder in a sensor node at a given time. Because channel outage, packet error, and timeout are mutually independent stochastic processes, the future outage of the wireless link does not depend on past damaged packets. Hence, the two-dimensional random process is the discrete-time Markov chain depicted in Figure 2. In this Markov chain, the one-step transition probabilities
Here, note that . As shown in Figure 2, when the values of and are larger than and , respectively, at the same time, the data packet is dropped by the sender node. So, let denote the packet dropping rate. Then, let be the stationary distribution of this Markov chain. Hence, the closed-form solution is given by (3), where is the stationary probability when is 2 and is 1, and is the stationary probability when is and is 1. According to (3), we obtain the following equation:
The data packet is dropped when is larger than and is larger than simultaneously, which is given by
When the physical characteristics of the sensor node and of the RS coder are known, these quantities can be calculated from (1) to (5) and are unique. The analytical models of saturation throughput, average delay, and energy efficiency are given and discussed in the next section.
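Since the transition probabilities of (2) and the closed form (3) are not reproduced above, the following is only a generic numerical sketch of how the stationary distribution of such a finite discrete-time Markov chain can be obtained; the transition matrix is a placeholder that would be filled in from (2).

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P with sum(pi) = 1 for a finite Markov chain.

    P: (S, S) row-stochastic transition matrix (a placeholder here; in
    the paper it would be built from the one-step probabilities of (2))."""
    S = P.shape[0]
    # Replace the balance equations' redundancy with the normalization constraint.
    A = np.vstack([P.T - np.eye(S), np.ones(S)])
    b = np.zeros(S + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```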
2.3. QoS Analytical Model
First, let denote the normalized saturation throughput, which is given by
Second, the saturation delay is the average delay under the saturation condition, which accounts for the modulation delay, the RS encoder and decoder delay, and the transmission delay; it is denoted by and is given by
Here, let denote the transmission delay of one packet at the data link layer.
Finally, based on our previous study [4], the encoding energy consumption of the RS coder is much lower than the decoding energy consumption, so we only consider the energy consumption of decoding. Besides, the energy consumption of starting a sensor and of sending and receiving a packet is also taken into account (i.e., , , and ).
If the data length to be transmitted is bits, the number of transmitted codewords is given by
The energy consumption of the RS decoder (i.e., ) is given by
Here, , , and , respectively, represent the energy consumption of addition, multiplication, and reciprocal operations in the finite field. The total energy consumption of FEC is then given by
Note that the parameter settings of (9) and (10) are based on the Mica2 sensor node and are given in Table 1.
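The bookkeeping behind (9) and (10) can be illustrated with a small helper. The per-operation energies below are placeholders, not the Mica2 values of Table 1, and the operation counts are simply passed in rather than derived from a particular RS decoder.

```python
# Placeholder per-operation energies in nJ (NOT the Mica2 values of Table 1).
E_ADD, E_MUL, E_INV = 0.5, 1.5, 3.0        # GF(2^m) add, multiply, reciprocal
E_START, E_TX, E_RX = 500.0, 720.0, 110.0  # start-up, send, receive one packet

def fec_energy(n_add, n_mul, n_inv, n_tx, n_rx, n_start=1):
    """Total FEC-related energy: RS decoding cost (finite-field operations)
    plus radio start-up, transmission, and reception costs.

    The operation counts would come from the RS(n, k) decoder in use;
    here they are supplied by the caller."""
    e_decode = n_add * E_ADD + n_mul * E_MUL + n_inv * E_INV
    e_radio = n_start * E_START + n_tx * E_TX + n_rx * E_RX
    return e_decode + e_radio
```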
3. Energy Aware Adaptive Cooperative FEC
3.1. Relay Selection Protocol
We first discover that the energy efficiency of the FEC scheme has a close relationship with the SNR. The variation of the energy efficiency of this scheme is then analyzed mathematically for different SNR values. The trend of energy efficiency as a function of SNR is shown in Figure 3; the energy efficiency maintains a steady upward trend with increasing SNR.
In particular, the energy efficiency increases rapidly to its maximum value when the SNR between the sender node and the next-hop receiver node is greater than 13 dB using RS with . Hence, the SNR threshold between the sender node and the next-hop receiver node is a constant, 13 dB, for this scheme. There is apparently an SNR threshold value based on the energy efficiency of the FEC scheme. Likewise, the SNR threshold values of different RS coders are shown in Figure 3(a) with and in Figure 3(b) with . Therefore, each RS coder algorithm always has one constant, energy-efficiency-aware SNR threshold value. The best relay node can be chosen as the next-hop receiving node according to this conclusion.
In summary, if the SNR of the MIMO channel can be obtained, the optimal relay node should be chosen according to the SNR threshold. The process of relay selection is as follows (a minimal sketch of this selection logic is given after the list).
(1) The SNR of the MIMO channel is acquired by real-time detection.
(2) The SNR threshold is obtained from the QoS analytical model.
(3) If the SNR of a sensor node is less than or equal to , it is selected as the relay node.
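The sketch below assumes the candidate link SNRs have already been measured. Both the eligibility test (whether a candidate must reach the threshold) and the preference order among eligible nodes are assumptions here, since only the use of the energy-efficiency SNR threshold is stated above.

```python
def select_relay(link_snrs, snr_threshold, prefer_high=True):
    """Energy-efficiency-aware relay selection sketch.

    link_snrs: dict mapping candidate node id -> measured link SNR in dB.
    snr_threshold: the energy-efficiency SNR threshold (e.g. 13 dB in the
        RS example above).
    prefer_high: illustrative tie-breaking rule; the exact rule used in
        the paper is not recoverable from this excerpt."""
    eligible = {nid: snr for nid, snr in link_snrs.items()
                if snr >= snr_threshold}      # assumed eligibility test
    if not eligible:                          # no candidate reaches the threshold
        return None                           # fall back to direct transmission
    pick = max if prefer_high else min
    return pick(eligible, key=eligible.get)
```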
3.2. Combination of RS Coder and MIMO Channel
In this subsection, we define five combination scenarios of the RS coder and the MIMO channel: MIMO(2, 1) alone, MIMO(2, 1) with RS(15, 11), MIMO(2, 2) with RS(15, 11), MIMO(2, 1) with RS(7, 3), and MIMO(2, 2) with RS(7, 3). On the basis of the Markov-chain QoS analytical model, the variation of the bit error rate with SNR is illustrated in Figure 4.
From Figure 4, we find the following ordering of the bit error rates of the five combination schemes: MIMO(2, 1) > MIMO(2, 1) with RS(15, 11) > MIMO(2, 2) with RS(15, 11) > MIMO(2, 1) with RS(7, 3) > MIMO(2, 2) with RS(7, 3).
We find that the reliability of combined MIMO and RS coding is superior to that of the MIMO channel alone. In particular, the larger the numbers of transmit and receive antennas, the better the reliability of the wireless channel. Therefore, the sender and receiver nodes can select the optimal combination scheme to satisfy diverse QoS requirements.
3.3. Algorithm of Adaptive Cooperative FEC
The energy-aware adaptive cooperative FEC mechanism (ACFEC) proposed in this section is based on the combination of the RS coder algorithm and the MIMO channel. Because the relay node is selected based on energy-efficiency characteristics and the SNR, the performance of the proposed mechanism can be evaluated by the following equation: where denotes the performance metrics of ACFEC, which include throughput ratio, packet error rate, average delay, and energy efficiency. Let denote the above four performance metrics when the SNR is greater than the threshold, and let record these metrics when the SNR is less than the threshold. We now present the basic principle of ACFEC and its implementation at the sender node, relay nodes, and receiver node in detail, as follows.
At the Sender Node.
Step 1: Carry out the combination scheme of RS coder and MIMO channel technology; the guaranteed priority of reliability is appointed according to the diversity requirement.
Step 2: Initialize the network parameters on the basis of the Step 1 results. The FEC scheme based on the RS coder is implemented, and the corresponding values are obtained from the QoS analytical model.
Step 3: The relay selection mechanism based on energy efficiency and SNR is implemented when channel state information is known.
Step 4: Start to send data packets over the MIMO channel.
Step 5: When the timer matures or a NACK packet is received, go to Step .
The pseudocode for the proposed scheme at sender node is summarized in Algorithm 1.
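Algorithm 1 is not reproduced in this excerpt; the toy loop below only mirrors the five steps above, with the relay choice, RS encoding, and MIMO transmission stubbed out as comments and delivery success modelled as a coin flip.

```python
import random

def acfec_sender(packets, p_success=0.9, max_tries=8):
    """Toy rendering of the sender-side steps (NOT the paper's Algorithm 1)."""
    delivered = []
    for pkt in packets:                       # Steps 1-2 (scheme/parameter setup) omitted
        for _ in range(max_tries):            # Step 5: retry on timeout or NACK
            # Step 3 (stub): a relay meeting the SNR threshold would be chosen here.
            # Step 4 (stub): RS-encode pkt and send it over the MIMO channel.
            if random.random() < p_success:   # ACK received
                delivered.append(pkt)
                break
    return delivered
```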
At the Relay Nodes.
Step 1: Select the optimal relay node from the candidate nodes according to the energy-efficiency-aware SNR threshold.
Step 2: The preceding steps are implemented repeatedly until the data packet is received successfully or actively discarded.
The pseudocode for the proposed scheme at relay nodes is summarized in Algorithm 2.
At the Receiver Node.
Step 1: Checksum testing and RS decoding are implemented.
Step 2: If the result is correct, the data packet is accepted and an ACK packet is sent; otherwise, it is rejected and a NACK packet is sent.
Step 3: Deliver the correct data packet to the upper layer.
The pseudocode for the proposed scheme at receiver node is summarized in Algorithm 3.
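Likewise, Algorithm 3 is not reproduced here; the fragment below only illustrates the checksum test and ACK/NACK decision of the receiver steps, with a CRC32 standing in for the checksum and RS decoding omitted.

```python
import zlib

def acfec_receiver(payload, crc):
    """Toy rendering of the receiver-side steps (NOT the paper's Algorithm 3).

    payload: received packet bytes (already RS-decoded in the real scheme).
    crc: checksum carried with the packet."""
    if zlib.crc32(payload) == crc:    # Step 1: checksum test
        return "ACK", payload         # Step 2: accept; Step 3: deliver to upper layer
    return "NACK", None               # Step 2: reject and trigger retransmission
```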
4. Performance Evaluation
In this work, on the basis of [13], we use NS-2 and VC++ 6.0 to simulate, analyze, and evaluate the performance of ordinary data transmission and of multimedia communication using ACFEC, compared with no RS coder, the RS coder alone, MIMO channel technology alone, and the static combination of RS coder and MIMO, through two groups of experiments. The experimental data are averages over 100 runs of simulation and mathematical analysis.
4.1. Parameter Settings of Simulation and Mathematics
In experiment 1, there are 50 sensor nodes moving in a 1000 m × 1000 m rectangular region. The mobility model is the random waypoint model. Each sensor node moves at a speed drawn uniformly at random from the interval [0 m/s, 20 m/s]. The MAC protocol is IEEE 802.15.4. More detailed parameter settings are listed in Table 2.
For the multimedia traffic of experiment 2, we use a medium-quality MPEG-4 video clip, forman_qcif.yuv [14], which consists of 400 video frames. The group-of-pictures structure is IBBPBBPBBPBB. The video frame rate is 25 frames per second.
4.2. Experiment Results and Discussion
Two case studies are designed and conducted, with the variation of SNR and simulation time, respectively. On one hand, Figure 5 shows three performance metrics of no RS coder, RS coder alone, MIMO
channel technology alone, combination scheme of RS coder and MIMO (RS (15, 11) & and MIMO(2, 1)), and ACFEC as a function of SNR in experiment 1.
The result indicates that channel state has a significant impact on quality of data transmission in wireless sensor networks. We can observe tremendous improvement of performance with ACFEC, compared
with the other four schemes. The throughput and energy efficiency of no RS coder, RS coder alone, and MIMO channel alone decrease and gradually approach zero. This means that the combination of RS
coder and MIMO channel is superior to either scheme alone. In particular, even at low SNR on the wireless channel, the quality of ACFEC remains good. The reasons are that the adaptive RS coder
algorithm minimizes the average delay while the adaptive MIMO channel technology improves the energy efficiency; as a result, the network throughput ratio is increased.
On the other hand, Figure 6 shows three performance metrics of the static combination scheme of RS coder and MIMO channel (RS(15, 11) and MIMO(2, 1)), as well as ACFEC in experiment 2. Figure 6(a)
provides the packet error rate for each video frame under the different mechanisms. It is obvious that ACFEC provides the most reliable transmission. When using ACFEC, the packet error rate fluctuates little,
and its average is the lowest. The adaptive schemes in ACFEC are not only able to achieve a higher decodable frame rate but also improve the stability of the video transmission.
Figure 6(b) shows the result of the decodable frame rate. As the transmission rate of multimedia data increases, the collision probability of data packets transmission increases significantly,
leading to an unstable, steadily decreasing decodable frame rate. The result demonstrates a tremendous improvement in decodable frame rate with ACFEC over the static combination strategy.
On this basis, we determine that ACFEC readily accommodates the poor and dynamic wireless sensor network environment. This evident improvement comes from its stable energy-efficiency-aware relay
selection scheme based on SNR and its adaptive MIMO channel strategy.
Figure 6(c) shows the energy utilization efficiency of different mechanisms. In wireless sensor networks, sensor nodes consume most energy for data transmission and reception, as well as error
control. Obviously, the enhancement of the data exchange gain and adaptive FEC can greatly reduce the energy consumed for data transmission and retransmission. Compared with the other schemes, ACFEC achieves
significant improvement in energy utilization efficiency.
5. Conclusions
Supporting diverse applications over wireless sensor networks is challenging due to the characteristics of wireless transmission. In this paper, an adaptive cooperative forward error correction
(ACFEC) mechanism was proposed to satisfy this requirement.
The main contributions are as follows. First, considering the characteristics of FEC based on the RS coder algorithm, we introduce a Markov chain model to study QoS in wireless sensor networks. Second,
we present an energy-efficiency-aware adaptive relay selection algorithm based on SNR to reduce the ratio of potentially damaged or lost transmission opportunities, with the purpose of achieving higher
system throughput and lower delay. Finally, we implement ACFEC and carry out an extensive evaluation. The mathematical and simulation results demonstrate that ACFEC greatly improves data transmission
quality and achieves significant gains in throughput and energy efficiency. As a result, we determine that the proposed mechanism is feasible for data communication in wireless sensor networks.
This work is supported in part by the National Natural Science Foundation of China under Grant no. 61073197 and Scientific & Technological Support Project of Jiangsu Province under Grant no.
1. J. Yick, B. Mukherjee, and D. Ghosal, “Wireless sensor network survey,” Computer Networks, vol. 52, no. 12, pp. 2292–2330, 2008.
2. I. F. Akyildiz, T. Melodia, and K. R. Chowdury, “Wireless multimedia sensor networks: a survey,” IEEE Wireless Communications, vol. 14, no. 6, pp. 32–39, 2007.
3. J. S. Ahn, S. W. Hong, and J. Heidemann, “An adaptive FEC code control algorithm for mobile wireless sensor networks,” Journal of Communications and Networks, vol. 7, no. 4, pp. 489–498, 2005.
4. Y. Jin and G. W. Bai, “A cooperative FEC based on the model of GM(1,1) and IPv6 for wireless multimedia sensor networks,” Journal of Convergence Information Technology, vol. 7, no. 18, pp. 230–239, 2012.
5. G. Gupta, M. Misra, and K. Garg, “Energy efficient data gathering using prediction-based filtering in wireless sensor networks,” International Journal of Information and Communication Technology, vol. 5, no. 1, pp. 75–94, 2013.
6. S. K. Singh, M. P. Singh, and D. K. Singh, “Energy efficient transmission error recovery for wireless sensor networks,” International Journal of Grid and Distributed Computing, vol. 3, no. 4, pp. 89–104, 2010.
7. J. Vidhya and P. Dananjayan, “Cooperative MIMO transmissions in WSN using threshold based MAC protocol,” International Journal of Wireless & Mobile Networks, vol. 2, no. 3, pp. 196–210, 2010.
8. A. Del Coso, U. Spagnolini, and C. Ibars, “Cooperative distributed MIMO channels in wireless sensor networks,” IEEE Journal on Selected Areas in Communications, vol. 25, no. 2, pp. 402–414, 2007.
9. M. Watson, M. Luby, and L. Vicisano, Forward Error Correction (FEC) Building Block, RFC 5052, IETF, August 2007.
10. J. Lacan, V. Roca, J. Peltotalo, and S. Peltotalo, Reed-Solomon Forward Error Correction (FEC) Schemes, RFC 5510, IETF, April 2009.
11. G. N. Bravos, G. Efthymoglou, and A. G. Kanatas, “MIMO-based and SISO multihop sensor networks: energy efficiency evaluation,” in Proceedings of the 3rd IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob '07), pp. 13–18, October 2007.
12. S. M. Rizwan, V. Khurana, and G. Taneja, “Reliability analysis of a hot standby industrial system,” International Journal of Modelling and Simulation, vol. 30, no. 3, pp. 315–322, 2010.
13. N. Zaeri and S. Habib, “Exploration of sensor technology under simulation and measurement approaches,” International Journal of Information and Communication Technology, vol. 3, no. 2, pp. 116–130, 2011.
14. “Video Trace Library,” http://trace.eas.asu.edu/yuv. | {"url":"http://www.hindawi.com/journals/jece/2013/891429/","timestamp":"2014-04-17T18:00:37Z","content_type":null,"content_length":"179952","record_id":"<urn:uuid:2bda11eb-5217-4d32-8a2d-bb5d193b6e25>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real Pi Benchmark for Android
RealPi provides some of the best and most interesting Pi calculation algorithms out there. This app is a benchmark which tests your Android device's CPU and memory performance. It calculates the
value of Pi to the number of decimal places you specify. You can view and search for patterns in the resulting digits to find your birthday in Pi or find famous digit sequences like the "Feynman
Point" (six 9's in a row at the 762nd digit position). There are no hard limits on the number of digits, if you experience a freeze please see "Warnings" below.
Leave comments with your Pi calculation time on the AGM+FFT formula for 1 million digits. Also the most digits you can calculate, which tests your phone's memory. The author's fairly fast Huawei
Ascend P1 takes 25 sec for 1 million and can do at most 33 million digits. Note that the AGM+FFT algorithm works in powers of 2, so calculating 10 million digits takes just as much time and memory as
16 million digits (the internal precision is shown in the output). On multi-core processors RealPi tests the performance of a single core. For accurate benchmark timing ensure that no other
applications are running and your phone is not hot enough to throttle the CPU.
You can also run RealPi Benchmark on a Windows or Linux PC using virtualization, see
Here's a summary of the available algorithms:
-AGM + FFT formula (Arithmetic Geometric Mean): This is one of the fastest available methods to calculate Pi, and is the default formula used by RealPi when you press "Start". It runs as native C++
code and is based on Takuya Ooura's pi_fftc6 program. For many millions of digits it can require a lot of memory, which often becomes the limiting factor in how many digits you can calculate.
-Machin's formula: This formula was discovered by John Machin in 1706. It's not nearly as fast as AGM + FFT, but shows you all the digits of Pi accumulating in real time as the calculation proceeds.
Choose this formula in the settings menu and then press "Start". It's written in Java using the BigDecimal class. You should probably not ask it to compute much more than 10000 digits. (A small sketch of the idea behind Machin's formula appears just after this list.)
-Nth digit of Pi formula by Gourdon: This formula shows that it's possible (surprisingly) to calculate decimal digits of Pi "in the middle" without calculating the preceding digits, and needs very
little memory. When you press the "Nth Digit" button RealPi determines 9 digits of Pi ending with the digit position you specify. It runs as native C++ code and is based on Xavier Gourdon's pidec
program. Although it's faster than Machin's formula it can't beat the AGM + FFT formula in speed.
-Nth digit of Pi formula by Bellard: Gourdon's algorithm for the Nth digit of Pi can't be used for the first 50 digits, so this formula by Fabrice Bellard is used instead for digits < 50.
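For readers curious how a Machin-style computation looks in code, here is a small illustrative sketch in Python (not the app's actual Java/C++ code) that evaluates pi = 16*arctan(1/5) - 4*arctan(1/239) with plain integer arithmetic:

# Illustrative only: Machin's 1706 formula with fixed-point integer arithmetic.
def arctan_inv(x, scale):
    # Integer approximation of scale * arctan(1/x) via the Taylor series.
    power = scale // x           # scale * (1/x)
    total = power
    n = 1
    sign = 1
    while power:
        power //= x * x          # scale * (1/x)**(n+2)
        n += 2
        sign = -sign
        total += sign * (power // n)
    return total

def machin_pi(digits):
    guard = 10                   # extra guard digits to absorb truncation error
    scale = 10 ** (digits + guard)
    pi_scaled = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi_scaled // 10 ** guard   # integer whose decimal digits are 3, 1, 4, 1, 5, ...

print(machin_pi(50))             # 31415926535897932384...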
Search function:
Use this to find patterns in Pi like your birthday. For best results calculate at least one million digits using the AGM + FFT formula, then select the "Search for Patterns" menu option.
If you enable the "Calculate when in sleep" option RealPi will keep calculating while your screen is off, useful when calculating many digits of Pi. While not calculating or after the calculation
finishes your device will go into deep sleep as usual.
This app can drain your battery quickly when doing a long calculation, especially if the "Calculate when in sleep" option is on.
Calculation speed depends on your device's CPU speed and memory. At very large numbers of digits RealPi may terminate unexpectedly or not produce an answer. It could also take a very long time to run
(years). This is due to the large amount of memory and CPU time needed. The upper limit on the number of digits you can calculate depends on your Android device.
Changes to the "Calculate when in sleep" option take effect for the next Pi calculation, not in the middle of a calculation.
Tags: real pi benchmark, realpi, realpi apk, pi benchmark java, realpi benchmark, realpiapk, realpi benchmark apk, gourdon algorithm for pi, nth digit of pi java, benchmark tool processor and memory
real time freeware.
Recently changed in this version
1.0.8 2014-01-22
-Set default number of digits to 1000000
1.0.7 2013-11-03
-Added high resolution icons
-Added GUI accessibility labels
-Minor tweaks and bug fixes
Pi music may be coming one day in a future version of this app! Pi fanatics check out this song composed by Michael Blake based on Tau (2Pi) www.youtube.com/watch?v=3174T-3-59Q
Comments and ratings for Real Pi Benchmark
• (74 stars)
Running 4.4.2
• (74 stars)
Total fun using this app
• (74 stars)
I love the sound at the end of every test.
• (74 stars)
1 million digits - 11.69 seconds Max - 134,000,000 digits
• (74 stars)
1 million digits in 16.24 secs
• (74 stars)
Samsung Nexus 10 One million digits = 9.77 second
• (74 stars)
17.30 sec android 4.4.2 art runtime, nexus 4 | {"url":"http://www.appszoom.com/android_applications/tools/real-pi-benchmark_bbrht.html","timestamp":"2014-04-19T18:25:17Z","content_type":null,"content_length":"59349","record_id":"<urn:uuid:b7f61ca1-c669-4d2f-b2b1-726d423a3b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
How did Voyager 2 communicate to earth?
Name: Brian Lintz
Status: N/a
Age: N/A
Location: N/A
Country: N/A
Date: Around 1993
Question: The message sending device that Voyager 2 used to send pictures of Uranus was using 2 watts of power. Why was the power delivered to the Earth less than a trillionth of this?
Was not the transmitter directed toward the Earth? Does not the signal dissipation follow the inverse square law? Why do so many science articles use watts as units of energy instead
of power? Why do not they use joules? For example, Uranus receives 1/400th the solar light and heat that the Earth gets. Uranus is 20 A.U. from the sun while Earth is 1 A.U. 1/400 = 1/
20(squared): So the above makes sense according to the inverse square law. Then why does not the same reasoning apply to energy coming from Uranus to Earth? Why is not the radiation
decreased by (1/19)squared? Why is it really decreased by a trillionth instead?
Replies: Your understanding of the physics is excellent. Most writers do not understand the difference between energy and power. Much of what is written is thus not accurate. The
inverse square law is accurate and correctly applied here. Your arguments are OK for comparing the amount of energy from the sun as a relative ratio. Here we want to know the power we
would see over a certain area at a distance R from the source. For an isotropic transmitter the power spreads over a sphere of area 4 * pi * R*R, so the intensity would be (2 watts)/(4 * pi * R*R) per
square meter, where R is the distance in meters. The distance by my rough estimate should be 2.8 x 10^12 m, so if the antenna had been isotropic the power per square meter would have been down by a
factor of roughly 10^(-26), not merely a trillionth. If the report is accurate, it means that the directionality of the antenna was pretty great. Your understanding of the principles is great.
I am now surprised that the signal is so large when it gets back.
Sam Bowen
The first thing to realize about physics is that it does not just consist of mathematical formulas, but every formula has (or should have) an understandable meaning or implication. You
were quite right to think of the inverse square law here, but because you apparently did not really understand what that law means, you applied it incorrectly. I hope the following
discussion will make things clearer! First off, what could an inverse square law mean? Let us try and think of something that grows as the second power of distance. The first thing
that should come to mind is area, since an area is always given by a product of two distances. So, does the inverse square law have anything to do with inverse areas? Imagine a point
source of light. After the light is turned on, it spreads out in all directions at the speed of light and we can imagine traveling with the light as it goes on its way. At greater and
greater distances from the light source, the same amount of light is spread out over a bigger and bigger area. Ah-Hah! So, how does that area change with distance? As the square of the
distance of course! (The area of a sphere of radius r is 4 pi r^2.) So, if we detect light with our eyes, or with our fixed antenna, or whatever, the area of detection is the same for
every distance from the source, and therefore we see a smaller and smaller portion of the light as we go further and further away. What is that small fraction? It is the ratio of the
area of our detector (some fixed number) to the total area that the light is spread over, i.e. the fraction is some constant divided by the area of the big sphere the light is now
spread over, so that it decreases as 1/r^2 as the distance r gets bigger and bigger. That the meaning of the inverse square law. Now actually, it does not matter that the source is
radiating equally in all directions, because from far enough away, light from any kind of source (even a laser) has to spread apart over some possibly tiny portion of the sphere of
radius r. A laser will have light spread over a very tiny portion of the sphere, but nevertheless, it will (from far enough away) also obey this inverse square law. So, to go back to
your specific question - with the light from the sun, the earth is at 1/20 the distance of Uranus from the source (the sun), and therefore the area of the sphere the light from the sun
is spread over is 400 times as big when you are out as far as Uranus, and therefore the light intensity is only 1/400 as great. Fine. Now let us turn to the Voyager space craft,
sending 2 Watts of radio waves to the earth. You asked, why does not the earth receive 1/400 of 2 Watts? Well, where would this 1/400 come from? Remember, the 1/400 for the light from
the sun came from the ratio of two areas - the sphere at the earth's distance from the sun, and the sphere at Uranus' distance from the sun. What are the two areas we are taking a
ratio of for Voyager? Well, we a not interested in the light intensity (power per unit area) this time, but in the total received power. So, let us say we have some specific radio
detector in mind on the earth - say the Arecibo one, which is about 1 mile across. The area of our detector is thus about 1 square mile. What is the other area we need to divide into
this 1 square mile? It is the total area that Voyager's 2 Watts must be spread over by the time it reaches the earth. The distance to Voyager is about 20 Astronomical units or about 2
Billion miles. (1 AU is 93 million miles, if I recall correctly.) So the area of a sphere at that distance is somewhere around 10 billion billion square miles. Now, Voyager's antenna does have
some directionality, so the radio waves spread out over only a small fraction of that total sphere. Even if that fraction was only 1 millionth, however, the area we are talking about
is still 10 thousand billion, or ten trillion square miles, so the power being received by our mile-wide receiver is going to be perhaps 2 tenths of a trillionth of a Watt.
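To put rough numbers on this (an illustrative check only; the beam fraction and receiving dish size below are assumptions, not figures from the answers above):

import math

P_tx = 2.0                          # transmitted power, watts
R = 20 * 1.496e11                   # roughly 20 AU, in meters
sphere_area = 4 * math.pi * R**2    # area the signal would cover if radiated isotropically

beam_fraction = 1e-6                # assumed: antenna focuses power into a millionth of the sphere
dish_area = math.pi * (70.0 / 2)**2 # assumed: a 70 m receiving dish

received = P_tx * dish_area / (sphere_area * beam_fraction)
print(received)                     # on the order of 1e-16 W, a minuscule fraction of the 2 W transmitted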
Arthur Smith
Click here to return to the Physics Archives
NEWTON is an electronic community for Science, Math, and Computer Science K-12 Educators, sponsored and operated by
Argonne National Laboratory
Educational Programs
, Andrew Skipor, Ph.D., Head of Educational Programs.
For assistance with NEWTON contact a
System Operator (help@newton.dep.anl.gov)
, or at Argonne's
Educational Programs
NEWTON AND ASK A SCIENTIST
Educational Programs
Building 360
9700 S. Cass Ave.
Argonne, Illinois
60439-4845, USA
Update: June 2012 | {"url":"http://www.newton.dep.anl.gov/newton/askasci/1993/physics/PHY76.HTM","timestamp":"2014-04-16T16:14:54Z","content_type":null,"content_length":"15486","record_id":"<urn:uuid:924be2e0-b8f2-4450-b243-7b20b11a1492>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
GMAT Advanced Probability Problems
In the following probability problems, problems #1-3 function as a set, problems #4-5 are another set, and problems #6-7 are yet another set. The scenarios are all similar in a set, and the answer
choices for those problems in the same set are the same. What is going on there? Do all questions in the same set have the same answer? Do all have different answers? What is happening?
1) In a certain game, you perform three tasks. You flip a quarter, and success would be heads. You roll a single die, and success would be a six. You pick a card from a full playing-card deck, and
success would be picking a spades card. If any of these task are successful, then you win the game. What is the probability of winning?
2) In a certain game, you perform three tasks sequentially. First, you flip quarter, and if you get heads you win the game. If you get tails, then you move to the second task. The second task is
rolling a single die. If you roll a six, you win the game. If you roll anything other than a six on the second task, you move to the third task: drawing a card from a full playing-card deck. If
you pick a spades card you win the game, and otherwise you lose the game. What is the probability of winning?
3) In a certain game, you perform three tasks. You flip a quarter, and success would be heads. You roll a single die, and success would be a six. You pick a card from a full playing-card deck, and
success would be picking a spades card. If exactly one of these three tasks is successful, then you win the game. What is the probability of winning?
The following information accompanies questions 4-5
Johnson has a corporate proposal. The probability that vice-president Adams will approve the proposal is 0.7. The probability that vice-president Baker will approve the proposal is 0.5. The
probability that vice-president Corfu will approve the proposal is 0.4. The approvals of the three VPs are entirely independent of one another.
4) Suppose that Johnson must get VP Adams's approval, as well as the approval of at least one of the other VPs, Baker or Corfu, to win funding. What is the probability that Johnson's proposal is funded?
(A) 0.14
(B) 0.26
(C) 0.49
(D) 0.55
(E) 0.86
5) Suppose Johnson must get the approval of at least two of the three VPs to win funding. What is the probability that Johnson’s proposal is funded?
(A) 0.14
(B) 0.26
(C) 0.49
(D) 0.55
(E) 0.86
The following information accompanies questions 6-7
Johnson has a corporate proposal. The probability that vice-president Adams will approve the proposal is 0.6. If VP Adams approves the proposal, then the probability that vice-president Baker will
approve the proposal is 0.8. If VP Adams doesn’t approve the proposal, then the probability that vice-president Baker will approve the proposal is 0.3.
6) What is the probability that one of the two VPs, but not the other, approves Johnson’s proposal?
(A) 0.12
(B) 0.24
(C) 0.28
(D) 0.48
(E) 0.72
7) What is the probability that at least one of the two VPs, approves Johnson’s proposal?
(A) 0.12
(B) 0.24
(C) 0.28
(D) 0.48
(E) 0.72
Solutions will come at the end of this blog.
Probability blogs
Here are some previous blogs on probability
2) The Probability “At Least” Question
3) Probability and Counting Techniques
5) Probability DS Practice Questions
Each of the first four have a few practice questions, and the fifth article has 8 DS questions, so combined with the seven here, that’s a great deal of probability practice!
A review of rules
One important idea in probability is mutually exclusive (a.k.a. “disjoint”). Two events are mutually exclusive if they both can’t happen at the same time, if the very fact that one happens
completely precludes the other from happening. For example, on a single coin toss, the results H and T are mutually exclusive. On a single die roll, the six numbers on the die are mutually
If events F and G are mutually exclusive, then
P(F and G) = 0
P(F or G) = P(F) + P(G)
The first equation expresses mathematically what we expressed in words: it’s impossible for outcomes F & G to occur at the same time. The second rule is the pure form of the rough probability idea
that “OR means add” — that rule is approximately true most of the time, but exactly true when the two events are mutually exclusive.
If events A and B are just general events, not mutually exclusive, then
P(A or B) = P(A) + P(B) – P(A and B)
That is the generalized OR rule, a very important rule.
Another important idea in probability is the idea of independent. Two events are independent if whether one happens has absolutely no bearing on whether the other happens; in other words, if we are
told the outcome of one event, the fact that this outcome occurred gives us absolutely no information that would help us predict whether the other event will occur. In tossing coins, the separate
coins are independent. In rolling dice, the separate dice are independent. In the real world, two absolutely unrelated things would be independent. Consider these two events:
Event A = the New York Mets win on a particular day
Event B = the Nikkei (the Japanese stock index) goes up on a particular day
Those two have absolutely no influence on one another, and even if we are given explicit information about the outcome of one, that would give us absolutely no insight into the outcome of the other.
If each of two variables is a numerical variable that could take a range of numerical values, then a synonym for “independent” would be “completely uncorrelated.” If two variables are correlated,
then having information about the value of one allows you to make an informed prediction about the value of the other. If two variables are independent, then knowing the value of one doesn’t give us
the foggiest idea of what the other could be.
If events X and Y are independent, then
P(X and Y) = P(X)*P(Y)
This rule is the pure form of the rough probability idea that “AND means multiply” — that rule is approximately true most of the time, but exactly true when the two events are independent. Notice,
for independent events, we can simplify the generalized OR rule:
P(X or Y) = P(X) + P(Y) – P(X)*P(Y)
Those rules will get you through a great deal of the probability on the GMAT. BUT, what if we need an AND rule for two events that are not independent? What would be the “generalized” version of the
AND rule for any two events, not just events that are independent. In order to talk about that, we need to introduce a new idea.
It’s worthwhile also mentioning — the conditions “mutually exclusive” and “independent” are special case scenarios. They are the opposite of common: they are relatively rare. Never make it your
default assumption that either is true unless the question makes clear that it must be the case.
Conditional probability
This is a term that, like many math terms, will not explicitly appear on the GMAT, and the notation I will show, standard in many probability textbooks, will not appear on the GMAT. Nevertheless,
the idea of conditional probability does appear on the GMAT.
The notation we use is P(A|B). Event A is the main focus: we are interested whether or not A occurs. Event B is some kind of condition we impose: the idea is, we will pretend that we live in a
world in which Event B is always true — under those conditions, what is the probability of A? P(A|B) is a “conditional probability”, a probability when we impose the condition of B. The notation P
(A|B) is read “the probability of A, given B.”
Here are a few examples. Suppose
A = on a given day in Berkeley, CA, it rains
B = on a given day in Berkeley, CA, there are no clouds in the sky.
Here P(A) would be the probability that here in Berkeley we get rain on a randomly selected day; that would be approximately 0.10 or 0.15. By contrast, if we impose the condition “no clouds”, then
the conditional probability, P(A|B), would have to be zero: how could it possibly rain when there are no clouds? This is an example of a condition lowering a probability; in the next example, the
condition will elevate the probability.
Here’s another example, more socially relevant. Suppose
A = a randomly selected felony defendant is convicted
B = the defendant is African-American
P(A) looks at all folks in the USA accused of and tried for felony, and regardless of any individual factors (race, age, evidence, crime, etc.) just looks at: what percent, overall, are convicted?
According to the BJS, this percent is P(A) = 0.68. In a world of perfect fairness and equality, P(A) and P(A|B) would equal exactly the same thing — in other words, a person’s race would play
absolutely no role in whether that person were convicted of a felony. Most regrettably, in America in 2013, 148 years after the end of the US Civil War, 45 years after the death of Dr. Martin
Luther King Jr., racism still has a large effect on American society and an overwhelming effect on the criminal justice system; in other words, P(A|B) > P(A). Conditional probability is not just a
mathematical idea: it has profound social and moral implications for all kinds of issues of justice and fairness in the real world.
On a more mathematical note, notice if events X and Y are independent, then whether Y occurs or not should have absolutely no bearing on whether X occurs. In other words, for independent events X &
Y, P(X|Y) = P(X).
The generalized AND rule
Now that we have discussed conditional probability, we can discuss the generalized AND rule.
If events A and B are two general events, not mutually exclusive, not independent, then, as a general rule:
P(A and B) = P(A)*P(B|A)
P(A and B) = P(B)*P(A|B)
Either one of these is the generalized AND rule. Notice that AND still means multiply, though what we multiply here is a little different from what was multiplied in the special case with independent
For example, suppose we are going to pick two cards from a full 52-card deck, without replacement, and we want to know the probability of picking two heart-cards. The phrase “without replacement”
means when we pick the first card, we put it aside and do not return it to the deck, so that the second card is picked from a deck of only 51 cards. That changes the probability. The words
“without replacement” always mean the choices are NOT independent, because the outcome of the first choice has a big influence on the outcome of subsequent choices. For this example, let
A = first choice is a heart-card
B = second choice is a heart-card
There are four suits in a full deck, and each suit is equally represented, so P(A) = 1/4. Let’s think about P(B|A). If the first card was a heart-card and it was not replaced, that means the
second choice is made from a deck of 51 cards that has 13 of the other three suits but only 12 heart-cards. Thus, P(B|A) = 12/51 = 4/17, and P(A and B) = (1/4)*(4/17) = 1/17
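A quick Monte Carlo check of this 1/17 figure (illustrative only):

# Simulate drawing two cards without replacement and count two-heart draws.
import random

deck = ['hearts'] * 13 + ['other'] * 39
trials = 200_000
hits = 0
for _ in range(trials):
    first, second = random.sample(deck, 2)   # two cards, without replacement
    if first == 'hearts' and second == 'hearts':
        hits += 1

print(hits / trials)   # hovers near 1/17, about 0.0588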
The generalized AND rule is often used in sequential tasks such as this, in which there are earlier choices or trials, and the outcomes of these have various effects on later choices or trials. The
generalized AND rule is most often not applicable in a more side-by-side choice, in which all the choices are available at the outset.
If you understand everything in this post, you are a GMAT Probability pro. If you had some insights while reading, you may want to give the seven problems at the top another look before reading the
solutions below.
Here’s another probability question:
8) http://gmat.magoosh.com/questions/1038
If you would like to add anything or ask a clarifying question about anything I have said, please let know in the comments sections.
Practice problem solutions
1) In this scenario, winning combinations would include success on any one task as well as any combination of two or three successes. In other words, there are several cases that constitute the
winning combinations. By contrast, the only way to lose the game would be unsuccessful at all three tasks. Let’s use the complement rule.
P(lose game) = P(quarter = T AND dice ≠ 6 AND card ≠ spades)
= (1/2)*(5/6)*(3/4) = 5/16
P(win game) = 1 – P(lose game) = 1 – (5/16) = 11/16
Answer = (D)
2) In this scenario, there are several routes that would lead to winning the game. The only route that leads to losing the game is the route in which all three tasks are unsuccessful. We can do
this precisely as we did the previous problem.
P(lose game) = P(quarter = T AND dice ≠ 6 AND card ≠ spades)
= (1/2)*(5/6)*(3/4) = 5/16
P(win game) = 1 – P(lose game) = 1 – (5/16) = 11/16
Answer = (D)
3) This is very trick. We have to think of three cases.
Case One: success with coin, no success with die or card
P(coin = H AND die ≠ 6 AND card ≠ spade) = (1/2)*(5/6)*(3/4) = 15/48
Case Two: success with die, no success with coin or card
P(coin = T AND die = 6 AND card ≠ spade) = (1/2)*(1/6)*(3/4) = 3/48
Case Three: success with card, no success with die or coin
P(coin = T AND die ≠ 6 AND card = spade) = (1/2)*(5/6)*(1/4) = 5/48
The winning scenario could be Case One OR Case Two OR Case Three. Since these are joined by OR statements and are mutually exclusive, we simply add the probabilities:
P(win) = 15/48 + 3/48 + 5/48 = 23/48
Answer = (E)
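A quick simulation cross-check of problems 1-3 (not part of the original solutions):

# Each trial flips the coin, rolls the die, and draws a card independently.
import random

trials = 200_000
any_success = one_success = 0
for _ in range(trials):
    coin = random.random() < 1/2      # heads
    die  = random.random() < 1/6      # rolling a six
    card = random.random() < 1/4      # drawing a spade
    successes = coin + die + card
    any_success += successes >= 1
    one_success += successes == 1

print(any_success / trials)   # about 11/16 = 0.6875  (problems 1 and 2)
print(one_success / trials)   # about 23/48 = 0.479   (problem 3)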
4) We will use the abbreviation A = VP Adams approves, B = VP Baker approves, and C = VP Corfu approves.
P(funding) = P(A and (B or C)) = P(A)*P(B or C)
We can multiply because everything is independent of everything else. First look at P(B or C). These are not mutually exclusive, so we need to use the generalized OR rule:
P(B or C) = P(B) + P(C) – P(B and C)
Because B & C are independent, we can multiply to find P(B and C)
P(B or C) = (0.5) + (0.4) – (0.5)*(0.4) = 0.9 – 0.2 = 0.7
Now, multiply by P(A)
P(funding) = P(A)*P(B or C) = (0.7)*(0.7) = 0.49
Answer = (C)
5) For this one, we have to consider four different cases
P(A and B and (not C)) = (0.7)*(0.5)*(0.6) = 0.21
P(A and (not B) and C) = (0.7)*(0.5)*(0.4) = 0.14
P((not A) and B and C) = (0.3)*(0.5)*(0.4) = 0.06
P(A and B and C) = (0.7)*(0.5)*(0.4) = 0.14
These four are mutually exclusive and are joined by OR, so we add them.
P(funding) = 0.21 + 0.14 + 0.06 + 0.14 = 0.55
Answer = (D)
6) We will use the abbreviation A = VP Adams approves and B = VP Baker approves. We will consider two cases:
Case #1: Adams approves and not Baker
P(A and not B) = P(A)*P(not B|A) = (0.6)*(0.2) = 0.12
Case #2: Baker approves and not Adams
P(not A and B) = P(not A)*P(B|not A) = (0.4)*(0.3) = 0.12
These two cases are mutually exclusive and joined by OR, we add them.
P(only one VP approves) = 0.12 + 0.12 = 0.24
Answer = (B)
7) Here, the combinations (A and not B), (not A and B), and (A and B) all lead to approval of the proposal. The only one that doesn’t is the complement (not A and not B).
P(not A and not B) = P(not A)*P(not B|not A) = (0.4)*(0.7) = 0.28
P(at least one) = 1 – P(not A and not B) = 1 – 0.28 = 0.72
Answer = (E)
4 Responses to GMAT Advanced Probability Problems
1. Mathguy April 5, 2014 at 6:23 am #
There seems to be some erros in Answer 4.
We can multiply because everything is independent of everything else. First look at P(B or C). These are not mutually exclusive, so we need to use the generalized OR rule:
P(B or C) = P(B) + P(C) – P(A and B)
Because B & C are independent, we can multiply to find P(A and B)
P(B or C) = (0.5) + (0.4) – (0.5)*(0.4) = 0.9 – 0.2 = 0.7
I think it should be P(B and C), not P(A and B).
□ Mike April 5, 2014 at 8:29 pm #
Dear Mathguy,
Yes, that was a genuine typo, and I just corrected it. Thanks for pointing this out.
2. Milovan January 13, 2014 at 7:55 am #
Dear Mike,
Excellent article. Probability is one of the trickiest (if I can say like that) parts of the GMAT. This article really explains it in a very simple and efficient way.
Could you just check the solution for the 3rd example. I think the correct answer is under letter E) and not D) as currently stated.
□ Mike January 13, 2014 at 9:21 am #
Dear Milovan,
Yes, thank you for catching that typo. Great eye for detail! The answer to the third question is indeed (E). Best of luck to you.
Leave a Reply Click here to cancel reply. | {"url":"http://magoosh.com/gmat/2014/gmat-advanced-probability-problems/","timestamp":"2014-04-21T14:40:13Z","content_type":null,"content_length":"78805","record_id":"<urn:uuid:4acc7b12-1984-4e7d-819e-c78623f16cb1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
paper helicopter model
April 24th 2012, 04:09 PM #1
Apr 2012
paper helicopter model
ok so i did the helicopter test several times, where you cut the rotor and drop it from a height then repeat over and over again
it says "develop a model" use your data to produce a model to predict the time it takes for the paper helicopter to drop to the floor (y) based on rotor length (x)
I think it means regression model? am i right ?
Anyways I did each regression model and got quartic which has r2=0.97 closer to r2 =1 than all the other regression models.
it says use the equation of your model to answer questions: I got a long formula in my calculator and im guessing i have to sub in numbers to get answers :0! am i going on the right track?
Re: paper helicopter model
plz anyone help
April 24th 2012, 04:19 PM #2
Apr 2012 | {"url":"http://mathhelpforum.com/algebra/197855-paper-helicopter-model.html","timestamp":"2014-04-19T05:08:45Z","content_type":null,"content_length":"31544","record_id":"<urn:uuid:8a9796f2-a296-48d5-b548-f4413ae22cc7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Square and rearrange
Incomplete: This article is incomplete and needs more work
Quick description
To control an integral $\int f(x)\,dx$ or a sum $\sum_n a_n$, take its magnitude squared, expand it into a double integral $\int\!\int f(x)\overline{f(y)}\,dx\,dy$ or a double sum
$\sum_n \sum_m a_n \overline{a_m}$, and then rearrange, for instance by making the change of variables $y = x + h$ or $m = n + k$.
This often has the effect of replacing a phase $e^{i\phi(x)}$ or $e^{i\phi(n)}$ in the original integrand by a "differentiated" phase such as $e^{i(\phi(x+h)-\phi(x))}$ or $e^{i(\phi(n+k)-\phi(n))}$.
Such differentiated phases are often more tractable to work with, especially if $\phi$ had a "polynomial" nature to it.
harmonic analysis, analytic number theory
Example 1
This is a classic example: to compute the integral $I = \int_{-\infty}^{\infty} e^{-x^2}\,dx$, square it to obtain
$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy,$$
then rearrange using polar coordinates to obtain
$$I^2 = \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2}\, r\,dr\,d\theta.$$
The right-hand side can easily be evaluated to be $\pi$, so the positive quantity $I$ must equal $\sqrt{\pi}$.
Example 4
(The method, say to obtain Hormander's oscillatory integral estimate)
Recent comments | {"url":"http://www.tricki.org/article/Square_and_rearrange","timestamp":"2014-04-16T21:52:36Z","content_type":null,"content_length":"22925","record_id":"<urn:uuid:123fd1ec-3b1f-4af6-bbb1-feeca837e68f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gauss Sums: Finding the Root of Unity
Bernhard Schmidt (Nanyang Technological University)
University of Calgary
A Gauss sum over a finite field GF(q) is a sum of q algebraic numbers.
It is often useful to evaluate Gauss sums explicitly, for instance, in
coding theoretic or cryptographic applications. For small q, the
evaluation of Gauss sums can be done naively by summing up all terms.
For large q, this is impossible, but in many cases the LLL algorithm
can be used to compute Gauss sums up to multiplication with a root of
unity. However, finding the exact root of unity is surprisingly
difficult. A special case of this problem is well known: the
determination of the sign of quadratic Gauss sums is a notorious,
difficult problem which was solved by Gauss in 1805. We will describe a
method to find the exact root of unity for all Gauss sums over finite
fields under the condition that the values of the Gauss sums are known
up to multiplication with a root of unity.
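As a small numerical illustration (not part of the abstract), the quadratic Gauss sum over GF(p) can be computed directly for a small prime; Gauss's 1805 result is that the sign of the square root is always the positive one:

# For a prime p = 1 (mod 4) the quadratic Gauss sum equals +sqrt(p).
import cmath

p = 13
g = sum(cmath.exp(2j * cmath.pi * (n * n % p) / p) for n in range(p))
print(g)      # approximately 3.6056 + 0j, i.e. +sqrt(13)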
Other Information:
10th Anniversary Speaker Series 2007 | {"url":"http://www.pims.math.ca/scientific/scientific-lecture/gauss-sums-finding-root-unity","timestamp":"2014-04-16T07:20:08Z","content_type":null,"content_length":"16970","record_id":"<urn:uuid:b2944074-11ca-48d3-b937-e1d7c17dd8bd>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Performance improvements in the universe checker of Coq
One of my parsing-related Coq files takes more than two hours to compile, which is pretty annoying when the file is less than one thousand lines long and does no complex computations. After some
profiling, it appeared that Coq was spending most of its time in the Univ module. This module is part of the Coq kernel; its role is to check that the user is using universes in a consistent way.
You can get an introduction to what universes are in this book chapter. In Coq, universes are implicit: it is the role of the Coq kernel to determine whether it is possible to give a number to each
occurrence of Type, in a manner that is proved not to introduce inconsistencies.
This computation is done in the Univ module: the rest of the Coq kernel generates some constraints, and this module is supposed to warn if they are not satisfiable. Roughly, these constraints are of
the form "universe A has an index strictly smaller than universe B" or "universe A has an index smaller than or equal to universe B". In order to check that they are satisfiable, it is enough to see all
these constraints as a graph whose edges are constraints and whose vertices are universes, and check that there is no cycle containing a strict constraint.
This graph can be simplified: if there is a cycle of large constraints, that means that all universes in this cycle are in fact necessarily equivalent, and we can replace this cycle with only one
universe. We have to keep track of equivalences, using a specialized data structure. We recognize here the classical union-find problem, which has very efficient solutions.
However, Coq uses a naive approach for this task: it builds equivalence trees, but nothing is done to prevent these trees from becoming very unbalanced. And, in fact, that was the problem
in my file: for some reason, my tactics generated hundreds of universe constraints, and all these universes were inferred to be equivalent. To keep track of this information, a long chain of
universes was created: for each of them, Coq knows that it is equivalent to the next one, but it does not know how to go directly to the head of the chain, which is the node actually present in the graph.
So it was actually spending most of its time following this long chain...
In the literature, there are two main heuristics to solve this (classical) problem: "union by rank" and "path compression". Using either one gives logarithmic amortized complexity, and they give
practically constant amortized complexity when implemented together. From experience, I know that each of these heuristics gives very good performance when used alone, because the worst case is very
rare. Moreover, it is not easy to implement path compression in a persistent way, because it involves side effects. So I decided to implement "union by rank" inside the Univ module of Coq.
It was not really difficult: I just added a rank field to the record representing the nodes of the graph, containing the length of the longest path of equivalences leading to that node. So, when
merging nodes, we can choose to keep the node with the longest chain as the head node. This keeps the graph balanced: if the other nodes were part of smaller chains, the longest chain's
length doesn't increase.
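For concreteness, a minimal sketch of union by rank in OCaml (illustrative only; the names and the representation differ from the actual Univ module):

(* Minimal union-by-rank sketch; not the actual Univ module code. *)
type node = { mutable parent : node option; mutable rank : int }

let make () = { parent = None; rank = 0 }

let rec find n =
  match n.parent with
  | None -> n
  | Some p -> find p   (* no path compression, to keep the sketch simple *)

let union a b =
  let ra = find a and rb = find b in
  if ra != rb then
    if ra.rank > rb.rank then rb.parent <- Some ra
    else if rb.rank > ra.rank then ra.parent <- Some rb
    else begin ra.parent <- Some rb; rb.rank <- rb.rank + 1 end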
I got a spectacular improvement in performance when Coq is run on my code: from more than two hours, I went down to about 6.5 minutes. I get a reasonable 1% improvement when compiling the Coq
standard library: that is actually expected, because the standard library does not contain a lot of complex polymorphic code.
You can see the current patch (which may still evolve a bit) in this pull request. My first implementation was against 8.3, this patch is on the trunk version of Coq, and we're planning to port it to
8.4 soon. Do not hesitate to test it if you have universe-heavy developments; any feedback is welcome. | {"url":"http://gallium.inria.fr/~scherer/gagallium/union-find-and-coq-universes/index.html","timestamp":"2014-04-20T05:48:10Z","content_type":null,"content_length":"9683","record_id":"<urn:uuid:3085f73f-4339-4895-8c45-814fd4e5b4da>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from Christopher Oliver
Date: -- (:)
From: Christopher Oliver <oliver@f...>
Subject: Let rec trouble
I'm having trouble with the syntax of let rec. Consider the following
program for computing Van der Waerden's bound:
open Num
open Nat
open Big_int
open Ratio
let rec n k l =
let rec m i =
if i =/ Int 0 then
Int 1 else
Int 2
*/ (m (pred_num i))
*/ (n (k **/ (m (pred_num i))) (pred_num l)) in
if l =/ Int 2 then succ_num k else m k;;
print_string (string_of_num (n (Int 3) (Int 3)));;
I would like to restrict the lexical scope of 'n' by replacing the first
double semicolon with 'in.' I nest m precisely to capture k and l in m's
lexical environment. Why is this use forbidden? I.e. Why shouldn't I be
able to write:
let rec n k l =
let rec m i =
if i =/ Int 0 then
Int 1 else
Int 2
*/ (m (pred_num i))
*/ (n (k **/ (m (pred_num i))) (pred_num l)) in
if l =/ Int 2 then succ_num k else m k
print_string (string_of_num (n (Int 3) (Int 3)));;
I would prefer not to define a top level symbol, and this seems an
inconsistancy. Am I missing something?
Christopher Oliver Traverse Internet
Systems Coordinator 223 Grandview Pkwy, Suite 108
oliver@traverse.net Traverse City, Michigan, 49684
let magic f = fun x -> x and more_magic n f = fun x -> f ((n f) x);; | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/1998/07/3a05ed3940f82df77640f28a6824fae6.en.html","timestamp":"2014-04-16T15:57:04Z","content_type":null,"content_length":"6237","record_id":"<urn:uuid:2585d405-2920-4448-9f06-ad9d7a50836b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: NLS per individual
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: NLS per individual
From Anton Goeree <antongoeree@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: NLS per individual
Date Sun, 21 Oct 2012 16:30:16 +0200
Dear Statalist users,
Currently, I'm doing the following NLS on my dataset:
nl (p = 1 / (1 + exp( - {sigma} * (x - y ^ {beta} )))), nolog
The dataset contains 500 observations: 50 id's and 10 p/x/y per id.
The variable p is either zero or one, and x,y are values between 50 and 200.
I'd like to get individual estimates (for each id) for "sigma" and
"beta". I could do - bysort id: nl (p = 1 / (1 + exp( - {sigma} * (x -
y ^ {beta} )))), nolog - of course, but that would give me a messy
list of 50 outputs. Does any of you know if it is possible to put the
regression estimates into two new variables?
Thanks in advance for any aid!
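One possible approach (an untested sketch, not from the original post) is to let -statsby- collect the per-id estimates into a new dataset, one observation per id:

* Untested sketch: one observation per id, holding that id's sigma and beta.
statsby sigma=_b[/sigma] beta=_b[/beta], by(id) clear: ///
    nl (p = 1 / (1 + exp( - {sigma} * (x - y ^ {beta} ))))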
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-10/msg00967.html","timestamp":"2014-04-16T16:02:36Z","content_type":null,"content_length":"7759","record_id":"<urn:uuid:615f2e82-d050-42c4-a579-f40dd2d6b9fa>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
1 Question 3 – DC/AC Inverters
Consider the 3 phase inverter shown below.
Figure 1: 3-Phase Inverter
This is a Matlab exercise using Fourier series for star and delta loads.
The inverter frequency is 50 Hz, Vs = 500V and each GTO uses 180 degree conduction.
1.1 Phase voltages
Plot the phase a voltage versus time for 2 cycles. Hint: use the Matlab function "square(theta)".
Overlay the plot with the Fourier components up to the 17^th harmonic.
Plot the sum of the 1^st 17 harmonics.
Your output should look something like this:
To help you get started, here is a Matlab listing to do the above:
Vs = 500; % supply voltage
f0 = 50; % output fundamental frequency
w0 = 2*pi*f0;
T = 2/f0; % time (secs) for two cycles
dt = T/1000; % time step for plotting
t = 0:dt:T; % array of time values
tm = 1000*t; % time in milliseconds
theta = w0*t; % angle in radians
Va = Vs/2*square(theta); % Phase a voltage
Vb = Vs/2*square(theta-2*pi/3);
Vc = Vs/2*square(theta-4*pi/3);
Max_Harmonic = 17;
Nt = size(t, 2);
Va_Fourier_Components = zeros(Max_Harmonic, Nt);
Vb_Fourier_Components = zeros(Max_Harmonic, Nt);
Vc_Fourier_Components = zeros(Max_Harmonic, Nt);
for n = 1:2:Max_Harmonic
V_Phase_FC = Vs/2*4/n/pi; % Phase voltage Fourier coefficients
Va_Fourier_Components(n,:) = V_Phase_FC*sin(n*theta);
Vb_Fourier_Components(n,:) = V_Phase_FC*sin(n*(theta - 2*pi/3));
Vc_Fourier_Components(n,:) = V_Phase_FC*sin(n*(theta - 4*pi/3));
Va_Fourier_Approximation = sum(Va_Fourier_Components, 1);
Vb_Fourier_Approximation = sum(Vb_Fourier_Components, 1);
Vc_Fourier_Approximation = sum(Vc_Fourier_Components, 1);
m = 3; n = 1;
subplot(m, n, 1);
plot(tm, Va, tm, Va_Fourier_Components, tm, Va_Fourier_Approximation);
title('Phase a Voltage (Va)');
xlabel('Time (ms)');
subplot(m, n, 2);
plot(tm, Vb, tm, Vb_Fourier_Components, tm, Vb_Fourier_Approximation);
title('Phase b Voltage (Vb)');
xlabel('Time (ms)');
subplot(m, n, 3);
plot(tm, Vc, tm, Vc_Fourier_Components, tm, Vc_Fourier_Approximation);
title('Phase c Voltage (Vc)');
xlabel('Time (ms)');
To insert the figure into a word document, save the figure as a png file and insert the file into the word document.
1.2 Phase current – Neutral Grounded
Assume that the load is star connected with the neutral grounded. Each phase has a series RL load with R = 5 ohms and L = 20 mH. Calculate and plot the Phase a current Ia using Fourier techniques.
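For part 1.2 the idea is to divide each voltage harmonic by the impedance of the RL branch at that harmonic and shift its phase; a sketch along the lines of the listing above (variable names such as Nt, w0, theta, tm, Vs, and Max_Harmonic are assumed to come from that listing) might look like:

% Sketch for 1.2: phase a current with the neutral grounded.
R = 5; L = 20e-3;
Ia = zeros(1, Nt);
for n = 1:2:Max_Harmonic
    Zn   = sqrt(R^2 + (n*w0*L)^2);      % impedance magnitude at harmonic n
    phin = atan2(n*w0*L, R);            % impedance angle at harmonic n
    In   = (Vs/2*4/n/pi) / Zn;          % current amplitude of harmonic n
    Ia   = Ia + In*sin(n*theta - phin); % each current harmonic lags its voltage harmonic
end
figure; plot(tm, Ia); title('Phase a Current (Ia)'); xlabel('Time (ms)');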
1.3 Open Neutral
Assume that the load is star connected with the neutral disconnected. Each phase has a series RL load with R = 5 ohms and L = 20 mH.
Plot the neutral voltage
Plot the phase to neutral voltage for phase a.
Calculate and plot the Phase a current Ia using Fourier techniques.
1.4 Delta load
Assume that the load is delta connected. Each phase has a series RL load with R = 5 ohms and L = 20 mH.
Plot the phase to phase voltage (Vab) versus time for one cycle. Hint: Vab = Va – Vb
Calculate and plot the Phase a current Ia using Fourier techniques. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/1-question-3-u2013-dc-ac-inverters-consider-3-phase-inverter-shown--figure-1-3-phase-inver-q4077067","timestamp":"2014-04-20T12:03:47Z","content_type":null,"content_length":"58832","record_id":"<urn:uuid:d0a35a05-c572-4c07-b60c-7b734e16fa7c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Manassas, VA Geometry Tutor
Find a Manassas, VA Geometry Tutor
...I am a math teacher with many years of teaching and tutoring experience at secondary level. Please, contact me if your are interested in having me as your tutor. I am a certified math and
science teacher with many years of teaching experience at secondary level.
10 Subjects: including geometry, physics, algebra 2, algebra 1
Thanks for looking at my profile, I look forward to the opportunity to meet you and work together! I am a recently retired Federal government employee, who worked in Information Technology for
over 35 years. I am currently working as a substitute teacher for Fairfax County Public Schools, for seco...
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...Students will also investigate exponential and logarithmic functions. Chemistry is the science of matter, particularly as it relates to its physical properties, structure, and chemical
reactions between various elements and compounds. Subjects that are typically covered include structural prope...
17 Subjects: including geometry, chemistry, ASVAB, calculus
...Math can be a challenging subject, but there is a true sense of satisfaction when you see the student actually understand the material and feel a confidence in working the problems. I believe
Math is best learned by doing and working problems. My goal is to reduce frustration and create confidence.
11 Subjects: including geometry, calculus, physics, statistics
...It can be intimidating to take that first step back into academics. If you need someone to help you on your path to your GED, I'm here. The GED Math test includes number computations, Algebra,
and Geometry.
10 Subjects: including geometry, algebra 1, algebra 2, GED
Related Manassas, VA Tutors
Manassas, VA Accounting Tutors
Manassas, VA ACT Tutors
Manassas, VA Algebra Tutors
Manassas, VA Algebra 2 Tutors
Manassas, VA Calculus Tutors
Manassas, VA Geometry Tutors
Manassas, VA Math Tutors
Manassas, VA Prealgebra Tutors
Manassas, VA Precalculus Tutors
Manassas, VA SAT Tutors
Manassas, VA SAT Math Tutors
Manassas, VA Science Tutors
Manassas, VA Statistics Tutors
Manassas, VA Trigonometry Tutors
Nearby Cities With geometry Tutor
Annandale, VA geometry Tutors
Burke, VA geometry Tutors
Centreville, VA geometry Tutors
Chantilly geometry Tutors
Fairfax, VA geometry Tutors
Falls Church geometry Tutors
Herndon, VA geometry Tutors
Manassas Park, VA geometry Tutors
Mc Lean, VA geometry Tutors
Reston geometry Tutors
Springfield, VA geometry Tutors
Stafford, VA geometry Tutors
Sterling, VA geometry Tutors
Vienna, VA geometry Tutors
Woodbridge, VA geometry Tutors | {"url":"http://www.purplemath.com/manassas_va_geometry_tutors.php","timestamp":"2014-04-19T15:13:44Z","content_type":null,"content_length":"23991","record_id":"<urn:uuid:d34d7962-0b17-4f2c-a33a-792016809b19>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
basic difference between summation and integration
June 20th 2012, 01:49 AM #1
Jun 2012
can you plz tell me what is the basic difference between summation and integration..? i was going through the Poisson distribution function and in one case it was discrete and we had to make
summation to get the result and other cases for continuous function we integrated it...now what is meant by discrete and continuous in probability and why do we integrate or sum up in different
Re: basic difference between summation and integration
Integration is a way to "sum over (usually uncountable) infinite sets", while sums just sum over finite sets. Series are sums over countable sets.
The integral is a more general, more powerful version of a sum. But if you can use a sum (because you can count the number of possible values), it is usually easier.
Re: basic difference between summation and integration
With summation, you're summing an expression for discrete values (such as 1,2,3...) while for integration, you're summing along the entire range of values.
For example, suppose you want to find the area under the curve $y = x^2$ from x = 0 to x = k. A common approach to approximate this is use rectangles of width 1 and use a summation:
$\sum_{i=0}^{k-1} i^2$
However, as the width of your rectangles decreases to an infinitely small length dx, your sum gets closer and closer to the desired area, which is given by the integral
$\int_{0}^{k} x^2 \, dx$
The integral sign functions exactly the same way as a sigma, but you are summing over a continuous range of numbers.
Integral - Wikipedia, the free encyclopedia
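A tiny numerical illustration of this (not part of the original reply): as the rectangles get thinner, the Riemann sums approach the integral $k^3/3$.

# Left-endpoint Riemann sums for y = x^2 on [0, k], approaching the integral k^3/3.
k = 4.0
for rectangles in (4, 40, 400, 4000):
    width = k / rectangles
    total = sum((i * width) ** 2 * width for i in range(rectangles))
    print(rectangles, total)
print("exact integral:", k ** 3 / 3)   # 21.333...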
Jun 2012 | {"url":"http://mathhelpforum.com/calculus/200217-basic-difference-between-summation-integration.html","timestamp":"2014-04-20T06:28:12Z","content_type":null,"content_length":"37326","record_id":"<urn:uuid:38c22716-c427-4b4f-bcd8-49d2b44c42b6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brookline, MA Prealgebra Tutor
Find a Brookline, MA Prealgebra Tutor
...I have many success stories to tell, and most certainly would like to include yours, given the opportunity to work with you, answer all your questions and help you achieve the success you
deserve. This is a new start, for both of us! There is hard work involved, but now you know, you will be in...
6 Subjects: including prealgebra, physics, algebra 1, trigonometry
...I am a Massachusetts licensed teacher in the elementary grades of 1 through 6. I have been teaching since 2005. I have taught language arts for grades 6-8 and currently teach technology to
students in K0 through grade 8.
14 Subjects: including prealgebra, reading, ESL/ESOL, grammar
...Conversational and pronunciation skills are critical for students so that they can converse with and understand others at department stores, at the doctor's office, at the auto repair shop,
etc. In addition, it is important that non-native speakers be able to speak or write down the appropriate ...
30 Subjects: including prealgebra, reading, writing, English
...I am a recent graduate of the University of Oregon with a BS in Biology and Psychology, with a minor in Chemistry. I'm dedicated to the sciences and studied pre-medicine at UO. I've also
applied my knowledge to research and gained practical experience in a neuroscience lab.
17 Subjects: including prealgebra, chemistry, biology, reading
...I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. I am also an engineering and business professional with BS and MS degrees. I tutor Algebra,
Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics.
15 Subjects: including prealgebra, calculus, physics, statistics | {"url":"http://www.purplemath.com/Brookline_MA_prealgebra_tutors.php","timestamp":"2014-04-18T06:16:15Z","content_type":null,"content_length":"24105","record_id":"<urn:uuid:defb490b-9760-4cd0-a77e-9abdbd6f0f61>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Iterative proportional fitting
Robert Kern robert.kern@gmail....
Thu Jan 8 21:32:37 CST 2009
On Thu, Jan 8, 2009 at 21:04, Dorian <wizzard028wise@gmail.com> wrote:
> Hi Kern , James
> I look at closely the "Maximum entropy method " and "NORTA method" , they correspond exactly
> to what I'm looking for to start thinking deeply about the problem of approaching likely the density
> function which will correspond to a given marginal densities functions.
I think NORTA may be adapted to your problem. NORTA is a method for
generating N-D random variates from a distribution characterized by N
marginal distributions and a correlation matrix. You sample from an
N-D normal distribution using a correlation matrix derived from the
target correlation matrix, then apply the inverse CDFs of the marginal
distributions. The magic is all in finding the right transformation of
the correlation matrix.
Instead of transforming randomly sampled points, you could instead
transform a grid. On that grid, you can find the values of the N-D CDF
of the corresponding NORTA normal distribution. Transforming the grid
locations back to your original space, the warped grid should now
correspond to the N-D CDF of the target joint distribution. Apply your
favorite interpolation scheme to evaluate the N-D CDF numerically on a
regular grid in the original space, and you should be able to evaluate
the PDF from that.
This will probably work okay for 2 dimensions, but it would be quite
challenging to do this for many more.
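As a rough illustration of the sampling step described above (not part of the original email), here is a minimal Python/SciPy sketch. The two marginals (exponential and gamma) and the normal correlation value are placeholder assumptions chosen just for the example; in a real NORTA application the normal correlation would have to be adjusted so that the induced correlation in the target space matches the one you want.

import numpy as np
from scipy import stats

# Correlation matrix for the underlying normals (assumed value, for illustration only).
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

z = stats.multivariate_normal(mean=[0, 0], cov=corr).rvs(size=10000)
u = stats.norm.cdf(z)                      # correlated uniforms on [0, 1]^2

x0 = stats.expon.ppf(u[:, 0])              # first marginal: exponential (assumed)
x1 = stats.gamma.ppf(u[:, 1], a=2.0)       # second marginal: gamma(2) (assumed)

print(np.corrcoef(x0, x1)[0, 1])           # induced correlation in the target space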
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
-- Umberto Eco
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2009-January/019314.html","timestamp":"2014-04-21T03:28:17Z","content_type":null,"content_length":"4335","record_id":"<urn:uuid:8acf75d2-25cb-4cc2-a6bd-a8e01739afe9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math -- 1
Course Description, Math 1 – Students will learn basic number concepts such as odd and even, more and less, patterns and ordinals. Students will write numbers to 100 and will count to 100 by fives
and tens. Students will also gain a basic understanding of fractions, graphing, telling time and counting money. Students will understand the concepts of addition and subtraction and will memorize
facts zero through five.
In the beginning of the year, they write number words. If writing is hard, use typing or handwriting tracing sheets or assign half of the writing that day.
Counting to 100, Odd and Even
Day 1
Day 2
1. Counting to 100 Click on level 1.
2. Ordering numbers Make sure you read the directions! It changes! Sometimes it says click on the cars smallest to largest. That means you will click on the lowest number first. If it says, click
on the cars largest to smallest, then you will click on the highest number first.
*Day 3
1. Watch the first odd and even video. Then listen to a couple of the song videos on the page.
2. *Color in the odd numbers Print out the 100s chart and fill in odd numbers. Keep your paper in your notebook.
Day 4
1. Color in the even numbers on your 100s chart.
2. Paint by number Use your 100s chart to help you.
Day 5
1. Odd and Evens Click on up to 100.
Writing numbers 1 – 100, Patterns
Day 6*
1. *Trace and write numbers to 20. Keep your paper!
2. Count by 2s out loud using your 100s chart. Say all the odd numbers. Use your finger to jump over the evens and to point to the odds.
Day 7
1. Trace and write numbers 21 – 40 using your sheet from day 6.
2. Count by 2s out loud using your 100s chart. Say all the even numbers.
Day 8
1. Trace and write numbers 41 – 60 using your sheet from day 6.
2. Count by 2s out loud. Say all the odd numbers. Try to not look at your paper.
Day 9
1. Trace and write numbers 61 – 80 using your sheet from day 6.
2. Count by 2s out loud. Say all the even numbers. Try to not look at your paper.
Day 10
1. Trace and write numbers 81-100 using your sheet from day 6.
2. Count backward out loud from 100 to 1. Try and not look at your paper.
3. Play the number sequence game. Choose Hard for your level.
Number Words, Ordinals
Day 11
1. Watch the First Circus Act.
2. Fill in numbers 1-10 on online worksheet.
3. Write these number words in your notebook: one, two, three, four, five, six, seven, eight, nine, ten.
Day 12
1. Watch ordinal number videos.
2. Write these ordinal number words in your notebook: first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth.
Day 13
1. Fill in online worksheet on higher ordinals or write these number words in your notebook: eleven, twelve, thirteen, fourteen, fifteen, sixteen, seventeen, eighteen, nineteen, twenty
2. Draw a picture of 7 objects in a line. Then circle the first. Draw a line under the third. Draw an X over the fifth. And write the ordinal number word for the last object in line. Have a parent
or older brother or sister check your work. Were you right?
Day 14
1. Fill in number words 1-100 on the online worksheet or write these number words in your notebook: twenty-one, twenty-two, twenty-three, twenty-four, twenty-five, twenty-six, twenty-seven, twenty-eight, twenty-nine, thirty
Day 15
1. Write in your notebook: forty, fifty, sixty, seventy, eighty, ninety, one hundred
Adding 0 and 1
Day 16
1. Put two coins in your hand (if you really can't find 2 coins, you can use something else). Now go and ask your mom (or someone else) to give you zero more coins. How many coins do you have in your hand now?
2. You just learned that 2 plus 0 more is still 2. We say 2 plus 0 equals 2. You can write it like this: 2 + 0 = 2.
3. Do activity 1 again but this time put 5 coins in your hand. How many coins do you have in your hand after you ask for 0 (zero) more?
4. You just learned that 5 plus 0 more is still 5. We say 5 plus 0 equals 5. You can write it like this 5 + 0 = 5.
5. Activity 3 Practicing adding 0 online. You are allowed to do 20 problems.
*Day 17
1. *Do the first 2 lines of this worksheet. (You can do more if you like.) Save your paper in your notebook.
2. Gather 10 legos (or blocks or pennies or something — 10 of the same kind of thing)
3. step 2 Count out 3 legos and connect them (or stack together whatever you collected).
4. step 3 Add on one more. To do that connect one more lego (or add one more thing) to your stack.
5. step 4 Count how many are in your stack now.
You just learned that 3 plus 1 more is 4. We say 3 plus 1 equals 4. We write 3 + 1 = 4 .
1. Repeat steps 2, 3 and 4 but count out 4 legos.
• You just learned that 4 plus 1 more is 5. We say 4 plus 1 equals 5. We write 4 + 1 = 5 .
• Try again but count out 5 blocks this time.
• Keep experimenting.
*Day 18
1. Write 3 + 1 = 4 on a piece of paper and then draw a picture of that problem. Think about stacking and counting from day 12.
2. *Do the first two lines of the worksheet . You can do more if you like. Save your paper in your notebook.
Day 19
1. Get 11 pieces of scrap paper. Computer paper used on one side would be perfect. Write a number big on each piece of paper from 0 to 10.
2. Lay the papers out in order. This is a number line.
3. Stand on zero. Add one. Stand on the answer. Say, “Zero plus 1 equals 1.” Now add one again. Stand on the answer. Say, “1 + 1 = 2″ Keep doing the same until you get to ten.
Day 20
1. Build math problems on the computer. Click on “Manipulatives” and “Blocks” and build the problem 5 + 1 = 6 just like you drew a problem the other day. What other problems can you make a picture
Day 21*
1. *Print out this addition worksheet. Read the worksheet carefully and practice with the number line.
2. Now try this addition game.
Day 22
1. Draw a picture of 2 plus 3 equals 5, 2 + 3 = 5, like my baseball picture I made for you.
2. Do you remember how to count to add? Try this online activity.
Day 23
1. Do two more lines on your adding 0 (zero) and adding 1 (one) papers.
Day 24
1. Now, every day I’m going to tell you a new addition problem I want you to remember. Today’s is two plus two equals four. Say it out loud. Now write on a new page of your notebook, 2 + 2 = 4. At
the top of the page write Addition Facts.
2. Then you are going to practice what you know. Change the first ten to a 2. Change the second ten to a 2. Do ten problems. Click on the button “Generate” to start. (You might want to have a parent
check to make sure you are following these directions right the first time.)
Day 25
1. Here’s your problem for today. 2 + 3 = 5
2. Now I want you to look at your left hand. How many fingers are on it? 5, right? Now, hold three of your fingers together with your right hand. You have two fingers free and three fingers being
held. That’s two plus three equals five. Now, hold onto just two fingers. You have three fingers free and two fingers being held. That’s three plus two. So what does 3 + 2 = ? 5! You still
have five fingers! It doesn’t matter which way you hold them. So, we learned that 2 + 3 = 5 AND 3 + 2 = 5
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 2. Change the second ten to a 3. Do ten problems.
5. Play musical memory turtle.
Day 26
1. Do a fun dot to dot. Count by 2!
2. Now try a BIG dot to dot.
3. Now here’s your math problem to learn today. I want you to remember that 3 + 3 = 6. 3 + 3 = 6, Say it out loud, “Three plus three equals six.”
4. Add it to your facts list.
5. Let’s practice. Change the first ten to a 3. Change the second ten to a 3. Do ten problems.
Day 27
1. Get out six coins (or you could use something else). Put them all together in a pile. That’s 6 + 0. Six coins plus no more coins. Move one coin off all by itself. That’s 5 coins plus 1 coin. You
still have six coins, right? 5 + 1 = 6. And, if you look at it the other way it’s 1 + 5 = 6. Now move another coin to be with the one coin. Now you have a pile of 4 coins and a pile of 2 coins.
That’s 4 + 2 = 6. Move one more coin so they both have three coins. That’s 3 + 3 = 6. Do you see how there is always the same number of coins? The answer is always 6. But there are lots of ways
to get that answer because you can move the coins into different combinations.
2. Now here’s your math problem to learn today. I want you to remember that 2 + 4 = 6. That means that 4 + 2 = 6 too! 2 + 4 = 6 and 4 + 2 = 6 Say it out loud, “Two plus four equals six. Four plus
two equals six.”
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 2. Change the second ten to a 4. Do fifteen problems.
Day 28
1. Draw the problem 3 + 4 = 7 Draw three stars (or whatever you want) and then a + sign. Then draw four more. How many do you have? 7!
2. Here’s your math problem of the day. I want you to remember that 3 + 4 = 7. Say it out loud, “Three plus four equals seven. Four plus three equals seven.” 3 + 4 = 7 and 4 + 3 = 7
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 3. Change the second ten to a 4. Do fifteen problems.
Day 29
1. Either get out your number papers and line them up on the floor, or get out your baseball addition paper with the number line on the bottom. Find 4 and either stand on it or put your finger on
it. Now move four more. What number are you on?
2. Here’s today’s addition problem to remember. 4 + 4 = 8 Say it, “Four plus four equals 8.”
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 4. Change the second ten to a 4. Do fifteen problems.
Day 30
1. Addition counting game Remember you can count to add. Use the marbles IF you need to.
Counting (backwards, by ten)
Day 31
1. Let’s count backwards! Start at 20 and count down by connecting the dots.
2. Let’s practice addition. Change the first ten to a 4. Change the second ten to a 4. Do fifteen problems.
Day 32
1. Get a whole bunch of legos or something else you can stack. Count out ten and make them into a stack.
2. If you have enough, make another stack of ten.
3. Do you have more? If so, make another stack of ten. (It’s okay if you don’t.)
4. Put away the rest.
5. So, you should have 3 stacks of ten.
6. 1 stack of ten is ten legos, right? 10
7. 2 stacks of ten is twenty legos. 20 Is that right? (Count if you’re not sure.)
8. 3 stacks of ten would be thirty legos. 30
9. 4 stacks of ten would be forty legos. 40
10. 5 stacks of ten would be fifty, 50.
11. Here’s your math problem of the day. I want you to remember that 2 + 5 = 7. 2 + 5 = 7 and 5 + 2 = 7 Say it out loud, “Two plus five equals seven. Five plus two equals seven.”
12. Add it to your facts list.
13. Let’s practice. Change the first ten to a 5. Change the second ten to a 2. Do fifteen problems.
Day 33
1. Watch the video on counting by tens.
2. Here’s your math problem of the day. three plus five equals eight, 3 + 5 = 8, five plus three equals eight, 5 + 3 = 8
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 5. Change the second ten to a 3. Do fifteen problems.
Day 34
1. Play this connect the dots game. You want to count by 10. Click on the up arrow until it says 10.
2. Here’s your math problem of the day. four plus five equals nine, 4 + 5 = 9, five plus four equals nine, 5 + 4 = 9
3. Add it to your facts list.
4. Let’s practice. Change the first ten to a 4. Change the second ten to a 5. Do fifteen problems.
Day 35
1. Here’s your math problem to remember. Look at your two hands. Hold them out in front of you. You have five fingers on your left hand and five fingers on your right hand. That’s five plus five
fingers. How many fingers do you have in all? 10! So, 5 + 5 = 10, say it, “five plus five equals ten” That’s an easy one to remember, right?
2. Add it to your facts list.
3. Play this addition game. Click on "Practice Facts 1 – 5".
Addition Practice
Day 36
1. Play this addition game. Click on Practice Facts 1 – 5.
Day 37
1. Play this addition game.
Day 38
1. Play this addition game. Click on Relaxed Mode under Sums to 10.
Day 39
1. Play this addition game. Click on Practice Facts 1 – 5.
Day 40
1. Play this math game. Click on addition and easier.
2. Good job!
Addition and Review
Day 41*
1. *Print out this addition worksheet. Fill in all the blanks. (Give this to your homeschool parent to put in your portfolio.)
2. Remember odd and even? Read this story (click on the arrow to go to the next page) and answer the questions.
Day 42
1. Take this odd or even test.
2. Choose a game from Math 1 – Addition.
Day 43
1. Put the kids in their ride in order.
2. Choose a game from Math 1 – Addition.
Day 44
1. How well do you know your ordinal numbers? (first, second, third…)
2. Choose a game from Math 1 – Addition.
Day 45
1. Count by tens. Color in the square as you count: ten, twenty, thirty, forty… Click on a color and then the square.
2. Choose a game from Math 1 – Addition.
Patterns, Addition Practice
Day 46
1. Do this pattern game. Click on start.
2. Choose a game from Math 1 – Addition.
Day 47
1. Do this matching pattern game. Click on go and follow the instructions. Click on the space shuttle as many times as you
2. Choose a game from Math 1 – Addition.
Day 48
1. Do this pattern activity. Click on the color that comes next.
2. Choose a game from Math 1 – Addition.
Day 49
1. Can you answer all the questions in this pattern lesson?
2. Choose a game from Math 1 – Addition.
Day 50
1. Match the patterns. Choose level 1 and complete it. Then from the “Menu” at the top, choose level 2.
2. Choose a game from Math 1 – Addition.
Comparing Numbers
Day 51
1. Read the instruction at the top of the page. It will say something like “greater than 18.”
2. Double click on a number that is greater than 18. You have to get the rabbit just right.
3. When you are right, the rabbit will eat the carrot and a new instruction will come up.
4. Play several times and then close.
5. Choose a game from Math 1 – Addition.
Day 52
1. Follow the directions to label the first number is more than or less than the second number. (hint: The crocodile wants to eat the bigger number.)
2. Choose a game from Math 1 – Addition.
Day 53*
1. Print out and fill in this worksheet.
2. > means greater than
3. < means less than
4. The easiest thing to remember is that the big end points to the bigger number and the little end points to the smaller number.
5. Choose a game from Math 1 – Addition.
Day 54
1. Click on Compare Numbers. Click on “Numbers to 100.” Click on the Check button when you are done.
2. Choose a game from Math 1 – Addition.
Day 55
1. Practice your addition. Choose a game under Math 1 – Addition.
Day 56
1. Measure! Read the ruler and tell how long things are in centimeters.
2. Choose a game from Math 1 – Addition.
Day 57
1. Choose a game from Math 1 – Addition.
Day 58
1. Measure the teddy bear. Try different levels.
2. Choose a game from Math 1 – Addition.
Day 59
1. What to use to measure? Click on the arrow to turn the page.
2. Choose a game from Math 1 – Addition.
Day 60*
1. Do this addition facts to 5 worksheet.
Day 61*
1. Do this addition worksheet.
Day 62
1. Play with the thermometer. F on the right stands for Fahrenheit, which is how we measure temperature in America. C on the left stands for Celsius, which is how temperature is measured almost everywhere else.
2. How hot is the desert?
3. How cold is ice?
4. Choose a game from Math 1 – Addition.
Day 63
1. What temperature is it? Read the number at the top of the red line.
2. Choose a game from Math 1 – Addition.
Day 64
1. Weigh the poddles. Put the weights on the scale to balance it.
2. Choose a game from Math 1 – Addition.
Day 65
1. Choose a game from Math 1 – Addition.
Day 66
1. Less than maze — jump to a smaller number
2. Choose a game from Math 1 – Addition.
Day 67
1. Choose a game from Math 1 – Addition.
Day 68
1. Which cat is orange? Click on the right ordinal number (first, second, third…)
2. Choose a game from Math 1 – Addition.
Day 69
1. Match the pattern of the egg.
2. Choose a game from Math 1 – Addition.
Day 70
Day 71
1. Choose a game from Math 1 – Addition.
2. Can you find what’s the same?
Day 72
1. Find the pattern.
2. Choose a game from Math 1 – Addition.
Day 73
1. What can you build? Click on the different shapes and move them around.
2. Choose a game from Math 1 – Addition.
Day 74
1. Make shapes. Click on the bands — these are rubber bands. Then click on a red dot — these are pegs. Then click and drag the end of the band to another dot. Then stretch out the side of the band
to another dot. When you like your shape, click on a color to fill it in.
2. Choose a game from Math 1 – Addition.
Day 75*
1. Practice your addition. Choose a game under Math 1 – Addition.
2. Watch the shape video.
3. *Print the shapes on page 3. There is a mistake on the page! Can you find it?
4. Color or cut out the shapes.
5. Look at the ideas on page 4.
Day 76
1. Here’s a too easy shape quiz.
2. Choose a game from Math 1 – Addition.
Day 77
1. Here’s a shapes story to read.
2. Choose a game from Math 1 – Addition.
Day 78
1. Do this shape puzzle.
2. Choose a game from Math 1 – Addition.
Day 79*
1. * Print and color the shapes.
2. Choose a game from Math 1 – Addition.
Day 80
1. Practice your addition. Choose a game under Math 1 – Addition.
2. Play the shape game.
Day 81
1. Fractions are part of a whole number. You already know more about fractions than you think.
2. When you break a candy bar in half in order to share it with someone, that's a fraction. You each have one half. We write that as a one over a two with a line in between. We type it like this 1/2.
3. If you have a small pizza, it is cut into four slices. If there are four people, you each take one of the four slices. We write 1/4. That just means one of the four. That’s how you write it in
math language. We say “one fourth.”
4. Go to this website and color in one block blue. One fourth of the square is blue.
5. Color two blocks red. 1/2 of the square is red, right? You could also say that 2 of the 4 squares are red, 2/4.
6. Click on clear. Color all the blocks the same color. Now four of the four blocks are colored, 4/4. We say the “whole” thing is colored.
7. Click on clear. Now color three of the blocks all the same color. “Three fourths” of the square is colored. In math we write 3/4 . That means that three of the four parts is colored.
8. Play around with it. Make different types of fractions.
Day 82
1. Let’s see if you can count the colored parts. Click on start. The square is divided into four parts like the square you painted. Count how many of the parts are colored blue. If one part is
colored blue, then it is 1/4, one fourth which just means in math language that one of the four parts is blue.
Day 83
1. Use the arrows at the bottom of the screen to choose into what fractions you want to divide your flag.
Day 84
1. Play Around the World in 80 Seconds. Click on addition and EASIER.
Day 85
1. Read about Frank and Fran’s Fabulous Fractions. (You can only shade the top half. If you attempt to shade the bottom half it will take you back to the first screen.)
Day 86
1. Let’s see if you can get across the river. Remember if there are three parts to the circle. Then the number on the bottom is a three. The number on the bottom tells us how many parts it’s divided
into to.
Day 87
1. Make the right fraction. Choose the right number of parts by using the arrow. Then click on the parts to color them in. Example: if it says 4 out of 4, 4/4, then you would click on the arrow
until it said 4. Then you would click on all four parts of the circle to color them all red.
2. Then click on check.
3. Click on New Whole.
Day 88
1. See if you can figure out these fractions. 1/5 means one of the five parts.
Day 89
1. Read this story about halves, thirds, fourths and fifths. Thirds are when something is divided into three parts, like 1/3. Fourths are when something is divided into four parts, like 3/4. Fifths
is when something is divided into five parts, like 2/5. In math language, two fifths, 2/5, just means two of the five parts. It could mean two of the five pieces of cake, or two of the five kids
are wearing hats. In math we say two fifths and write, 2/5.
Day 90
1. Play Dude’s Dilemma. Click on addition and EASY.
Day 91
1. Learn about pennies. Then click on the penny to practice counting money.
2. Take a penny quiz. Click on NEXT each time.
3. Get out some pennies and count them with a parent or sibling. If you are using a different currency, you can compare practice with your own coins.
4. Paint a car. Choose 0 – 5.
Day 92
1. Watch the video on counting by fives.
2. Count by five dot to dot.
3. Learn about nickels. Then click on the nickel to count money.
4. Take a nickel quiz. Click on NEXT each time.
5. Get out some nickels and count how much they are worth with a parent or sibling.
Day 93
1. Watch the video on counting by tens.
2. Connect the dots. Click on the up arrow to tell the game you want to count by 10.
3. Learn about dimes. Then click on the dime to count money.
4. Take a dime quiz. Click on NEXT each time.
5. Get out some dimes and count how much they are worth with a parent or sibling.
Day 94
1. How much money? BINGO Choose pennies and the 3×3 box.
2. Do your best to catch 10 blocks.
Day 95
Day 96
1. Count the nickels by counting by five. Then add on the pennies. Example: 3 nickels and 4 pennies. Hold up 3 fingers or make three marks on a paper and count them by five: 5, 10, 15. Add on 4
pennies. Hold up four fingers or make four marks on a paper and count ON from 15: 16,17,18,19.
2. How much money? BINGO Choose pennies and nickels.
Day 97
1. Count the dimes by counting by ten. Then add on the pennies. Example: 4 dimes, 1 nickel and 6 pennies. Hold up four fingers or make four marks on a paper and count them by ten: 10, 20, 30, 40.
Count on one five, 45. Add on 6 pennies: 46, 47, 48, 49, 50, 51.
2. How much money? BINGO Choose pennies, nickels and dimes.
3. Choose a game from Math 1 – Addition.
Day 98
1. Count the nickels and pennies.
2. Learn about quarters. Click on the quarter. 4 quarters is 100 cents or 1 dollar
3. Get out some quarters and count up how much they are worth with a parent or sibling.
Day 99
1. Learn about money and count it up!
2. If you are having trouble, get out some coins and practice adding them up with a grownup. Practice every day until it’s easy peasy for you.
Day 100
1. Click on practice facts 1 – 5.
Day 101
1. Take a trip to the candy store. If any of these money games are too hard, skip them and get out your parents’ coins and count money that way. Have a parent or older sibling check and make sure
you added correctly.
Day 102
1. Click on relaxed mode level 1. If the ad on the page is yucky, reload.
Day 103
1. This one is a bit trickier. Can you make the right amount?
Day 104
1. Go shopping with the farmer. Includes dollar bills. Each is 1 dollar. $1.00
Day 105
1. Choose a game from Math 1 – Addition.
Day 106
1. Get out a handful of coins.
2. Sort them into groups: all pennies into one group, all quarters into one group (or whatever currency you are using)
3. Now count up how many are in each group and write it down. For example: write "penny" or draw a penny and write "4" if you have four pennies. Do that for each row, for each type of coin.
4. Now take the paper over to where you have Legos or some kind of block.
5. Make a tower for each type of coin. Get red Legos and if you have 4 pennies then build a tower with 4 red Legos.
6. Do that for each coin. Use a different colored Lego for each tower.
7. Here’s another example. If you have 6 nickles, then take 6 blue Legos and build a tower.
8. When you have all of your towers, line them up next to each other. This is a bar graph.
9. Save your paper. You are going to keep working on your towers and make more bar graphs.
Day 107
1. Get out your paper from yesterday.
2. Build your towers again using this online tool. Just like yesterday, if you had 4 pennies, then you will make a tower of four blocks. Each tower must be different. Use a different color or a
different shape for each tower. (If that online tool doesn’t work, here’s another graphing tool. You can delete all of the zeros in the side numbers. If you had four blocks in a tower, then you
will make your picture go up to four in that color.)
3. This is just an example. Your graph will look different because you have a different number of coins.
4. When you are finished making your towers, explain to someone what each tower means. For example, you will show them your tower with four blocks and tell them that means you had 4 pennies. Tell
them it is a bar graph.
5. Play this addition game. Click on practice facts 1-5.
Day 108*
1. Get out your paper again.
2. *Now I want you to draw towers for each of your piles. You can use special paper called graph paper. Print out this first grade graph paper.
3. If you have 4 pennies, then in the first column you will color in four blocks. Then you can turn the paper sideways and write penny next to the column. Make sure you use a different color for
each coin.
4. In the space on the left, you can turn the paper sideways and write “Coin Count.”
5. Play this addition game. Click on practice facts 1-5.
Day 109*
1. Print out this bar graph worksheet. Color in the right number of blocks for each kind of fruit. If there are none of a certain kind, color in zero blocks. You can click on the link to check your
graph when you are finished.
2. Play this addition game. Click on practice facts 1-5.
Day 110
1. Choose a game from Math 1 – Addition.
Day 111*
1. Play with this pie chart or pie graph. We built bar graphs before to show how much we had of different things. We can use fractions and pie graphs to show how much as well. (Note: 25% is 1/4,
50% is 1/2, 33% is 1/3)
2. We’re going to make a pie chart to show how many boys and girls are in your family. Go to this pie chart maker.
3. Type in Girls and the number of girls in your family.
4. Type in Boys and the number of boys in your family.
5. Your family is made up of boys and girls, so boys and girls together fill the whole circle. Is your family half and half or almost all boy or all girl?
6. *Draw your family boy/girl pie chart. Use this My Family pie graph worksheet. Choose which colors to use and label your graph by coloring in the boxes by the words boy and girl. If you color the
box by girl pink, then color the girl portion of your pie chart pink.
Day 112*
1. Choose a game from Math 1 – Addition.
2. Play this graph making game. There is a bar graph and a circle graph.
3. Here are the answers to check when you are done.
Day 113
1. Read “I Am Special.”
2. Play Fruit Fall.
Day 114
1. Read “Kids Have Pets.”
2. Choose a game from Math 1 – Addition.
Day 115
1. Read “Kinds of Graphs.” (If there is no next button, scroll down.)
2. Play with this bar graph maker. You can click on the words to delete them and write your own. Can you make a graph of how you spend your time? You can make your labels: school, play, read, eat,
sleep, and whatever else. Each block could be one hour. (You can click on the numbers to change them too.)
Day 116
1. Get out 5 blocks or coins or something, all the same. I’m going to use blocks.
2. You have 5 blocks. Lay them down. Pick up one in your hand. How many are laying there now? 4, of course!
3. There are 4 down and 1 in your hand. 4 + 1 = 5. You knew that. Now we are seeing that five take away one is four. In math we say: five minus one equals four or 5 – 1 = 4
4. This is called subtraction.
5. Write that big word on the top of a piece of paper and write underneath it 5 – 1 = 4
6. Now play with your blocks. If you take away 2, how many are left? If you take away 5, how many are left?
Day 117
1. Get out ten blocks (or whatever).
2. Lay five blocks out together.
3. Add on one block. 5 + 1 = 6 Say it out loud, “Five plus one equals six.”
4. Add another block. 6 + 1 = 7 Each time say the math problem out loud.
5. Add another block. 7 + 1 = 8
6. Add another block. 8 + 1 = 9
7. Add another block. 9 + 1 = 10
8. Now take away a block. 10 – 1 = 9 Say it out loud, “Ten minus one equals nine.”
9. Take away another block. 9 – 1 = 8 Continue to say each problem out loud.
10. Continue until you have no blocks.
Day 118*
1. If you had 100 blocks, and I took all 100 away, how many blocks would you have? (answer: zero)
2. If you had 1 million blocks, and I took away 1 million blocks, how many blocks would you have? (answer: zero)
3. If you had one block, and you gave me one block, how many blocks would you have? (answer: zero)
4. If you had five blocks, and you didn’t give me any, how many blocks would you have? (answer: you would still have five blocks)
5. If you had nine blocks, and you gave me zero blocks, how many blocks would you have? (answer: nine)
6. If you have seven blocks, and I took from you zero blocks, how many blocks would you have? (answer: seven)
7. Fill in subtracting zero and one worksheet. There is one problem at the beginning that we didn’t talk about (3 – 2). See if you can think about it and figure it out.
Day 119*
1. Fill in another subtracting zero and one worksheet.
2. Feeling good about starting to subtract?
Day 120
1. Play subtraction harvest. The wiggling apples are going to fall. How many apples will be left on the tree? If you aren’t sure, count the still apples.
2. Choose a game from Math 1 – Addition.
Subtraction Introduction cont. and Fraction Review
Day 121
1. When we subtract, we take away from what we already have. If you have 5 and take 0 away, you still have 5. If you have 5 and take 1 away, you have four. That’s 5 minus 1 equals 4. 5 – 1 = 4
Subtraction is the opposite of addition. If you have four and add back on one, then you have five. These facts are all relatives: 1 + 4 = 5 , 4 + 1 = 5 , 5 – 1 = 4, 5 – 4 = 1 Whenever you
see a subtraction problem with the two numbers right next to each other on the number line (like 5 and 4 or 6 and 7 or 8 and 9) then the answer will be one. If you have nine candies and I
take eight, then you will only have one left. If I take all nine away, you will have zero.
2. Play feed the frog. This game has missing numbers. It might ask 4 + __ = 5 In this game you will have to think, “What plus 4 equals 5?” The answer is 1. 1 + 4 = 5 This is practice for
3. Choose a game from Math 1 – Addition.
Day 122
1. Remember if you take away 5 from 5 you have nothing, 0. If you take 4 away from 5, you still have 1 left. 5 – 4 = 1
2. Do this subtraction worksheet.
3. Do you remember how to build fractions? The bottom number tells you how many pieces to divide the shape into. The top number tells you how many parts to fill in.
Day 123
1. Choose a game from Math 1 – Addition.
2. Here’s a new problem: 5 – 3 = 2 , 5 – 2 = 3 These are related to: 2 + 3 = 5 , 3 + 2 = 5
3. Do this subtraction worksheet.
Day 124
1. Play this fraction golf game. Remember that the bottom number tells how many parts the circle is divided into and the top part tells you how many of the parts should be colored in. Follow the
directions on the screen to get a hole in one!
2. Play this subtraction game. Pop the balloons to count down.
Day 125
1. Choose a game from Math 1 – Addition.
2. Play Fraction Pizza. Choose “Go Easy” under “Difficulty.”
3. Play subtraction harvest.
Day 126
1. Play feed the frog.
2. Hold one hand up. You have five fingers. Put down your thumb. That’s 5 – 1 = 4. Now switch your fingers. Put down four and leave your thumb up. That’s 5 – 4 = 1. Switch your fingers back and
forth. Now hold up four fingers. Now lift up your thumb. That’s 4 + 1 = 5. With your fingers you can make the whole 1, 4, 5 family. 1 + 4 = 5 , 4 + 1 = 5 , 5 – 1 = 4 , 5 – 4 = 1
3. What other subtraction problem can you make on your hand? How about 5 – 2 = 3 and 5 – 3 = 2?
4. Draw a picture to show 5 – 2 = 3.
Day 127
1. Choose a game from Math 1 – Addition.
2. Count down subtraction. You can just do 10 problems.
3. Get out blocks or coins or something and show that 3 + 3 = 6 and 6 – 3 = 3.
Day 128
1. Here’s your family of the day. 2 + 4 = 6 , 4 + 2 = 6 , 6 – 4 = 2 , 6 – 2 = 4
2. Draw a picture or use blocks to show how this family works.
3. Do these subtraction flash cards.
Day 129
1. Here’s your problem of the day. 4 + 4 = 8 , 8 – 4 = 4
2. Hold up four fingers on both hands and show that four plus four equals eight and eight minus four equals four.
3. Choose a game from Math 1 – Addition.
Day 130*
1. Here’s your problem of the day. 5 + 5 = 10 , 10 – 5 = 5
2. Do this subtraction worksheet.
Day 131*
1. Let’s fill in some fact families. Fact Families (You might want to print out three of these.)
2. In the top triangle of circles, write 5 in the “whole” circle. In the bottom two circles write 2 and 3. Now explain how they are a family. 2 and 3 are two parts of the whole. If you put them
together, they make the whole. 2 + 3 = 5 and 3 + 2 = 5. If you start with the whole, and take away one of the parts, you have the other part leftover. 5 – 2 = 3 and 5 – 3 = 2.
3. Now do the same thing with the other triangle of circles. This time fill it in for the 2, 4, 6 family. Explain how they are a fact family.
4. Choose a game from Math 1 – Addition.
Day 132*
1. Let’s learn a new subtraction fact. You know that 3 + 4 = 7 and that 4 + 3 = 7, right?
2. Here is the subtraction half of that fact family. 7 – 4 = 3 and 7 – 3 = 4.
3. Do this subtraction worksheet. Subtraction worksheet 2
4. Play this subtraction game. Click on facts 1 and 2. If you don’t know one of the answers, count down or try and think of its fact family.
Day 133(*)
1. Let’s learn another subtraction fact. You know that 2 + 5 = 7 and that 5 + 2 = 7.
2. To subtract we say 7 – 5 = 2 and 7 – 2 = 5.
3. *Fill in another fact family sheet for the families 3, 4, 7 and 2, 5, 7.
4. Choose a game from Math 1 – Addition.
Day 134
1. Let's do one more fact family this week. 3 + 5 = 8 , 5 + 3 = 8
2. Subtract them. 8 – 3 = 5 and 8 – 5 = 3.
3. Take out 8 coins or blocks and show that 8 – 5 = 3 and 8 – 3 = 5.
4. Play this subtraction game. Click on facts 1 and 2.
Day 135
1. Choose a game from Math 1 – Addition.
2. Do you remember that 10 – 5 = 5 , 8 – 4 = 4 , 6 – 3 = 3 , 4 – 2 = 2 ? Go tell someone all those facts!
Day 136
1. Play feed the frog.
2. Do 15 Count Down Subtraction problems. Fill in the minimum and maximum as 3 and 8. Fill in the bottom numbers minimum and maximum as 0 and 5.
Day 137(*)
1. Let’s do another fact family. You know that 4 + 5 = 9 and that 5 + 4 = 9.
2. Let’s subtract them. 9 – 5 = 4 9 – 4 = 5 Go and tell someone.
3. Hold up your hands and fold down one thumb. On one hand you have 5 fingers showing. On the other hand you have four fingers showing. Your hands together show that 5 + 4 = 9.
4. Hide the hand with all five fingers out. That shows that 9 – 5 = 4.
5. Now show that 9 – 4 = 5.
6. Let’s fill in another fact family page. Do the families 4, 5, 9 and 3, 5, 8.
7. Choose a game from Math 1 – Addition.
Day 138
1. Play subtraction bowling.
2. Play subtraction baseball .
Day 139
1. Go fishing.
Day 140
1. Do you remember that 10 – 5 = 5 , 8 – 4 = 4 , 6 – 3 = 3 , 4 – 2 = 2 ? Go tell someone all those facts!
Review Money
Day 141
1. Choose a game from Math 1 – Addition.
2. Play this nickels and pennies game.
Day 142
1. Choose a game from Math 1 – Subtraction.
2. Add nickles and pennies. Which is worth more?
Day 143
1. Choose a game from Math 1 – Addition.
Day 144
1. Choose a game from Math 1 – Subtraction.
2. Match the quarters.
Day 145
1. Choose a game from Math 1 – Addition.
2. Do you have enough money?
Review graphs, fractions
Day 146
1. Play Minus Mission.
2. Play fruit fall. Remember bar graphs!
Day 147
1. Play Around the World in 80 Seconds, addition.
2. Make a circle graph (or pie chart). Type in each kind of fruit you have in your house. Then type in how many of each type of fruit you have. If you have 8 oranges, then type orange 8. When you
are done, click on Draw Chart. Which color is the biggest? Look at the color boxes at the bottom of your graph. Which fruit is marked by that color? That’s the fruit that you have the most of!
Day 148
1. Play Dude’s Dilemma, subtraction.
2. Make fractions. Make the right number of pieces by clicking on the arrow. The number of pieces should be the same as the bottom number. Click on the pieces to color them in. Here’s an example. 3
/4 means 3 out of 4. Make 4 pieces and then click on 3 to color them in. Three out of the four pieces are colored in.
Day 149
1. Choose a game from Math 1 – Addition.
2. Play cross the river.
Day 150
1. Choose a game from Math 1 – Subtraction.
Day 151
1. Choose a game from Math 1 – Addition.
2. Learn to tell time. Use the mouse to point to a number. It will tell you the time on the clock below. When you see 7:00, you read it “seven o’clock.” Click on the arrow to go to the next page.
Day 152
1. Choose a game from Math 1 – Subtraction.
2. *You could print out a clock to practice with.
Day 153
1. Choose a game from Math 1 – Addition.
Day 154
1. Learn about telling time. Click on the little clocks to turn the page.
Day 155
Day 156
Day 157
Day 158
1. Choose a game from Math 1 – Subtraction.
2. Play the time board game. Roll the die by clicking on it (the square at the bottom with a dot on it). Then click the gear to turn the clock hands. Match the clocks and click on done.
Day 159
Day 160
1. Choose a game from Math 1 – Addition.
2. Play with a clock. Type in a time.
Patterns/Beginner Algebraic Concepts
Day 161*
1. Build a train with Caillou. Click on one of the pictures. Then choose to play with the train.
2. *Fill in the time on this clock worksheet. Sometimes you have to draw the hands on the clock. Sometimes you have to write the time. Have a parent check your answers.
Day 162*
1. *Fill in the time on this clock worksheet. Have a parent check your answers.
Day 163*
Day 164
Day 165
1. Choose subtraction and addition games from Math 1.
Day 171
1. Practice with coins. First click on the papers on the bottom and choose “four numbers.” Put twenty-five cents in each square, but make each square different! When you are done, click on the
numbers "1 2 3" at the bottom and it will count up your coins and see if you are right. Then you can play around with it!
Day 172
1. Today play with fractions. Use the scissors to cut your circle into parts. Then use the paint to color in some of the parts. Click on the numbers at the bottom to see what fractions you made.
Day 173
1. See if you can find all of the correct matches.
Day 174
1. Click on boxes in a row that add up to the target number.
Day 175
1. Here’s the same type of game as Day 174.
Day 176
2 Responses
awesome!! I had been looking for simple to use, yet effective curriculum for my son. This is the kind of stuff I want to do. But I’m thinking he might be more Kindergarten level. He is 6. But
maybe he’ll surprise me. Thank you for all your work into this and for making it available!!!
You are welcome. How is his reading? Use his reading level to decide if he should do kindergarten or 1st. | {"url":"http://allinonehomeschool.com/individual-courses-of-study/math/","timestamp":"2014-04-19T01:59:45Z","content_type":null,"content_length":"129369","record_id":"<urn:uuid:4fdfde9d-4937-4cfb-8972-8030e4c6e2ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Waveshaping can in some ways be thought of as being related to modulation techniques such as frequency or phase modulation, and it can achieve quite dramatic sound transformations through the application of a very simple process. Whereas in FM (frequency modulation) synthesis modulation occurs between two oscillators, waveshaping is implemented using a single oscillator (usually a simple sine oscillator) and a so-called 'transfer function'. The transfer function transforms and shapes the incoming amplitude values using a simple lookup process: if the incoming value is x, the outgoing value becomes y. This can be written as a table with two columns. Here is a simple example:
Incoming (x) Value         Outgoing (y) Value
-0.5 or lower              -1
between -0.5 and 0.5       remain unchanged
0.5 or higher              1
Illustrating this in an x/y coordinate system results in the following image:
Basic Implementation Model
Implementing this as Csound code is pretty straightforward. The x-axis is the amplitude of every single sample, which is in the range of -1 to +1.^1 This number has to be used as index to a table
which stores the transfer function. To create a table like the one above, you can use Csound's sub-routine GEN07^2 . This statement will create a table of 4096 points in the desired shape:
giTrnsFnc ftgen 0, 0, 4096, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
Now, two problems must be solved. First, the index of the function table is not -1 to +1. Rather, it is either 0 to 4095 in the raw index mode, or 0 to 1 in the normalized mode. The simplest solution
is to use the normalized index and scale the incoming amplitudes, so that an amplitude of -1 becomes an index of 0, and an amplitude of 1 becomes an index of 1:
aIndx = (aAmp + 1) / 2
The other problem stems from the difference in the accuracy of possible values in a sample and in a function table. Every single sample is encoded in a 32-bit floating point number in standard audio
applications - or even in a 64-bit float in recent Csound.^3 A table with 4096 points results in a 12-bit number, so you will have a serious loss of accuracy (= sound quality) if you use the table
values directly.^4 Here, the solution is to use an interpolating table reader. The opcode tablei (instead of table) does this job. This opcode then needs an extra point in the table for
interpolating, so it is wise to use 4097 as size instead of 4096.^5
This is the code for the simple waveshaping with the transfer function which has been discussed so far:
EXAMPLE 04E01_Simple_waveshaping.csd
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1
giTrnsFnc ftgen 0, 0, 4097, -7, -0.5, 1024, -0.5, 2048, 0.5, 1024, 0.5
giSine ftgen 0, 0, 1024, 10, 1
instr 1
aAmp poscil 1, 400, giSine
aIndx = (aAmp + 1) / 2
aWavShp tablei aIndx, giTrnsFnc, 1
outs aWavShp, aWavShp
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
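For readers who want to prototype the same idea outside Csound, here is a small NumPy sketch (not from the original manual) that mirrors the example above: build the same transfer table as the GEN07 call, scale the sine wave's samples into the table's index range, and read the table with linear interpolation.

import numpy as np

sr = 44100
t = np.arange(sr) / sr
a_amp = np.sin(2 * np.pi * 400 * t)        # input oscillator: 400 Hz sine

# The same shape as the GEN07 table above (4097 points): flat at -0.5 for the
# first quarter, a linear ramp over the middle half, flat at 0.5 for the last quarter.
table = np.concatenate([
    np.full(1024, -0.5),
    np.linspace(-0.5, 0.5, 2048),
    np.full(1025, 0.5),
])

idx = (a_amp + 1) / 2 * (len(table) - 1)   # map amplitudes -1..1 to indices 0..4096
a_shaped = np.interp(idx, np.arange(len(table)), table)   # interpolating table read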
Chebychev Polynomials as Transfer Functions
Coming in a future release of this manual... | {"url":"http://en.flossmanuals.net/csound/e-waveshaping/","timestamp":"2014-04-17T04:35:05Z","content_type":null,"content_length":"15645","record_id":"<urn:uuid:4a8fd9fd-9416-4211-a9f5-304e01c1499e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: "The Complete Theory of Everything"
Stephen G Simpson simpson at math.psu.edu
Fri Oct 31 11:20:32 EST 1997
There is a vast gulf between the foundational point of view and the
typical pure mathematician's point of view. I'd like to illustrate
this with an extreme example, namely the divergent views of Harvey's
"The Complete Theory of Everything: Validity in the Universal Domain"
which is available at
Harvey's motivation was (of course) foundational, going all the way
back to Frege's original purpose in inventing first order logic, in
the late 19th century. Frege's intention was also (of course)
foundational: to give a precise analysis of the philosophical notion
of logical validity, by setting up formal axioms and rules to capture
this. Much later Frege's analysis was nonconstructively vindicated by
the G"odel completeness theorem. Harvey revisits and critically
reexamines Frege's work from a fresh perspective, which Frege himself
might have found congenial.
Harvey's new twist on Frege is that, while first order logic captures
validity in structures whose domain is a set, it arguably does not
capture validity in the universal domain, which consists of absolutely
everything that exists. This is because the universal domain arguably
has certain indiscernibility properties which, for instance, prevent
it from being linearly ordered. Harvey writes down appropriate and
elegant axioms for the universal domain and then proves completeness
of the resulting axiom system. It's a clever proof, using familiar
techniques of model theory in a novel way. And it's a striking
result, which says something of basic philosophical importance about
logical validity. As might be expected, philosophers interested in
mathematical logic have been captivated. Also as might be expected,
pure mathematicians have been largely indifferent, although some of
them also get the point.
Recently Harvey visited Anand's university and spoke about "The
Complete Theory of Everything" as well as other results in f.o.m.
Here is Anand's somewhat patronizing description of the encounter.
Read this carefully:
Harvey - you told me some results when you were in Urbana. What I
remember is something about the first order validities in a
structure in which both a linear ordering and pairing function are
defined. I recall it being an interesting result. The abstract you
sent around (list of your recent talks) is too vague. In any case,
if I remember, this looked like a nice result firmly in the sphere
of modern model theory (which by the way doesn't need to be
marketed as the "theory of everything" ). Anyway I'd like to know
more details.
What I see here is that Anand, being a pure mathematician who uses
model-theoretic methods in his own pure mathematical research, is
readily able to grasp and remember the technical details of Harvey's
proof (indiscernibles, linear orderings, pairing functions, etc.) And
Anand appreciates this technical, mathematical, model-theoretic
aspect; he finds it "nice". But Anand is totally turned off by and
left completely cold by the foundational aspect, i.e. the analysis of
validity in the universal domain. To Anand's mind, the foundational
aspect of Harvey's result is comprehensible only as a marketing ploy.
In short, Anand just doesn't get it.
It's also interesting to observe the applied model theorists' reaction
to other foundational programs and results. Anand's expressed reaction
to "The Complete Theory of Everything" is similar to his expressed
reaction to Reverse Mathematics:
Rather paradoxically (and provocatively), from what I understand of
results in reverse mathematics, I tend to view the results as quite
interesting from the mathematical point of view, but just a
curiosity from the foundational point of view. But I'd be happy to
revise my judgement after looking at Steve's book.
In other words, Anand finds the purely mathematical, technical details
of Reverse Mathematics amusing at some level. But the foundational
aspects of Reverse Mathematics (including a rich and precise analysis
of various classical f.o.m. programs such as finitistic reductionism a
la Hilbert, predicative reductionism a la Weyl, etc., and the
limitations of these programs with respect to mathematical practice)
is totally alien to Anand. Once again, Anand probably regards the
invocation of Hilbert and Weyl as nothing but a marketing ploy. Once
again, Anand just doesn't get it.
I view these remarks by Anand as excellent examples of the typical
pure mathematician's mindset with respect to foundational issues. Or
maybe they are representative of a similar but more combative mindset,
such as might be expressed by a pure mathematician's adoring little
I hope that nobody here is offended by what Anand and I have said. In
my opinion, these revealing exchanges are one of the fascinating
aspects of what has been going on here in the FOM list. I want this
to continue and grow. Hurrah for the FOM list!
-- Steve
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1997-October/000126.html","timestamp":"2014-04-19T12:00:32Z","content_type":null,"content_length":"7224","record_id":"<urn:uuid:70816a4b-f9f3-4a37-b63a-57c193ea5911>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Folding the future: From origami to engineering
Old-fashioned navigation aid.
Back in the days before smart phones with GPS functions became ubiquitous, we had maps. They were big, flat paper objects that worked pretty well, except when you were done navigating and needed to
fold them back up. It was never obvious how to re-fold them. Even though they came pre-folded and the creases were on a simple grid pattern. To get the map back to a rectangular shape that fit neatly
in your pocket, you needed to carefully fold it section by section along straight lines. It seemed so simple an idiot could do it, but invariably you would get some critical fold backwards or
forwards when it was supposed to be the opposite and the map would skew and you would curse and try again and again and again until you threw the thing down in disgust and lost it somewhere under one
of the car seats.
Mathematicians feel your pain. No one has ever been able to figure out a general formula to calculate how many different ways a grid of arbitrary size can be folded. It's bedeviled researchers for
generations. People are still writing papers about it.
But a recent insight by a maths student suggests there might be another way to approach the problem, one that makes an unlikely connection between combinatorics, colouring problems, and origami.
Tom Hull, a mathematician at Western New England University in Springfield, Massachusetts, has been exploring the Miura map fold, which is a variant of the classic map. Instead of a grid of
perpendicular folds, the Miura fold is a grid where the lines intersect at 84/96 degree angles.
The Miura fold was invented by Japanese astrophysicist Koryo Miura, who was just as frustrated with the typical map fold as everyone else. In addition to making pocket maps more convenient, the Miura
fold allows space satellites to safely stow and unfold their solar panels.
Let me count the ways...
The genius of the Miura map is that it is easy to open and close the same way every time, just by pushing two diagonally opposite corners together or pulling them apart. But that's not the only
possible way to fold it — just the easiest. There are many other possibilities. Almost, but not quite, as many as a standard 90 degree angle map fold, which allows the paper to fold in eight
different arrangements around each intersection point, or vertex. A Miura map can fold in only six arrangements around a vertex. Hull thought the Miura's more constraining geometry would make
calculating all the possible ways to fold it a more tractable problem.
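As an aside that is not from the article: the figure of eight arrangements for a standard degree-four vertex can be recovered from Maekawa's theorem, which states that at any vertex of a flat-folded crease pattern the numbers of mountain and valley creases differ by exactly two. A quick brute-force check in Python:

from itertools import product

# Count mountain/valley assignments of the four creases meeting at one vertex
# that satisfy Maekawa's theorem (|mountains - valleys| = 2).
count = sum(
    1
    for assignment in product("MV", repeat=4)
    if abs(assignment.count("M") - assignment.count("V")) == 2
)
print(count)   # 8

The Miura vertex has to satisfy the same condition, but its particular angles rule out two of these eight assignments, leaving the six arrangements mentioned above.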
He gave the challenge to an undergraduate student, Jessica Ginepro. Initially Ginepro and Hull hoped to find a generalised formula for a Miura map grid of arbitrary size.
"It was nasty. Really, really nasty," Hull says. Ginepro is still working on finding a formula for the Online Encyclopedia of Integer Sequences, but when the sequence of counts was looked up in the encyclopedia, much to her surprise, it spat out a match. The numbers were exactly the same as those generated by an apparently unrelated colouring problem.
It was wonderfully unexpected. Colouring problems represent constraint satisfaction challenges. A classic example would be colouring countries on a map so that no two countries of the same colour share a border. Sudoku, in which the value of each box in the puzzle is constrained by the surrounding boxes' values, is another example. Commercial entities such as freight shippers and airlines deal with constraint problems all the time, and would like a better way to frame and solve them. But no one had ever thought colouring problems could have a connection with folding.
Hull and Ginepro have not yet found a simple way to convert the Miura map folding problem to a colouring problem, much less a method to generalise it for any fold. But to get a concrete idea of how
colouring and folding are related, consider an origami figure that can fold flat in a plane, like this crane:
If you wanted to colour every region on the crane such that no two regions that share a crease are the same colour, you would only need two colours to do it. And when you fold the crane back up, all
the regions of one colour will face one direction, and the regions of the other color will face the other direction.
Most colouring problems are more abstract, but the colouring connection provides a promising approach.
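To make the crane example slightly more concrete, here is a generic Python sketch (not from the article) that two-colours the regions of a crease pattern, given which pairs of regions share a crease. The adjacency data below is made up purely for illustration; for a real model you would substitute the regions of the actual crease pattern.

from collections import deque

# Hypothetical crease pattern: which regions share a crease with which.
shares_crease = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

colour = {}
for start in shares_crease:
    if start in colour:
        continue
    colour[start] = 0
    queue = deque([start])
    while queue:
        region = queue.popleft()
        for neighbour in shares_crease[region]:
            if neighbour not in colour:
                colour[neighbour] = 1 - colour[region]   # opposite side of a crease
                queue.append(neighbour)
            elif colour[neighbour] == colour[region]:
                raise ValueError("this pattern cannot be two-coloured")

print(colour)   # e.g. {'A': 0, 'B': 1, 'C': 1, 'D': 0}

In a flat-foldable pattern like the crane, the two colours correspond to the regions that end up facing one way or the other once the model is folded flat.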
Buckling down
And that is not all. Folding problems can also be related to statistical mechanics, a branch of physics that deals with the behaviour of various materials from a probabilistic point of view. There
are physicists collaborating with Hull and Ginepro on the statistical mechanics of folding. They don't care so much about finding every single possible way a map or collapsed cone or other origami
object might fold itself into existence; they want to know how to make it fold the one, right way. Or rather, make the right fold the most probable fold. "The math doesn't care. But the physicists
do," says Hull.
If the mountains and valleys don't fold properly, this collapsed cone is nothing but crumpled paper.
The physicists care because they are trying to create self-assembling objects from sheets of polymer. When the polymers swell — from getting wet, or changing pH — they buckle and fold. The
researchers can control the outline of the buckles and folds, but not whether they fold into mountains or valleys. That's the key. Whether the fold becomes a mountain or a valley depends on the
stresses within the material as it is forming. Those stresses can be described by statistical mechanics, but it's a nonlinear problem. In the twentieth century when nonlinear problems were unpopular,
most researchers dismissed folding and buckling as junk. But more recent work has shown that our intestines, brains and other tissues self-assemble into predictable folded shapes that are controlled
by the material properties of the tissue. The leaves of plants and torn plastic bags seem to follow similar rules. Evolution has managed to master origami engineering, so why shouldn't we?
There are three ways in which formalising the mathematical connection between colouring, folding, and statistical mechanics problems might be useful. One, it might allow us to understand why
structures get stuck in undesirable shapes. Two, it might allow us to identify the important folds. Perhaps there are a subset of folds in a figure that, if folded the right way, force the whole
object to self-assemble properly. And third, knowing that might allow us to engineer origami shapes that have two forms and can switch back and forth, depending on whether certain folds become
mountains or valleys, says Christian Santangelo, a physicist at University of Massachusetts at Amherst who is collaborating on the project.
It's an unwieldy leap from folding maps, to colouring problems, to a statistical mechanics tour-de-force of soft-materials engineering. But the greatest research breakthroughs are often found in the
most unlikely of places. "The real hope is that if map coloring turns out to be related to origami, Tom [Hull] can use that to say something truly cool about origami," Santangelo says.
About the author
Kim Krieger is a freelance writer based in Norwalk, Connecticut. She has covered science policy in Washington, D.C., commodities prices from the floor of the New York Mercantile Exchange, and maths
innovation everywhere.
Submitted by Anonymous on May 26, 2013.
"...the Miura fold is a grid where the lines intersect at 60 degree angles."
Typo? I think it's more like 90 degrees, plus or minus 6 degrees.
A good read, much fodder for thought.
Philip Chapman-Bell
Northampton, MA
Submitted by mf344 on June 6, 2013.
Dear Philip,
Thanks very much for pointing that out, we have corrected the error. | {"url":"http://plus.maths.org/content/folding-future","timestamp":"2014-04-16T05:12:58Z","content_type":null,"content_length":"36392","record_id":"<urn:uuid:e164c61c-e529-43d0-b9ce-36f50405250d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Excel T.TEST Function
The Excel T.TEST Function
Related Function:
T.DIST Function
Basic Description
The Excel T.TEST function calculates the probability associated with the Student's T Test, which is commonly used for identifying whether two data sets are likely to have come from the same two
underlying populations with the same mean.
The T.TEST function is new to Excel 2010. However, this is simply a renamed version of the TTEST function, that is available in earlier versions of Excel.
The format of the T.Test function is :
T.TEST( array1, array2, tails, type )
where the function arguments are:
array1 - The first data set
array2 - The second data set (must have the same length as array1)
tails - The number of tails for the distribution. This can be either:
1 - uses the one-tailed distribution
2 - uses the two-tailed distribution
type - An integer that represents the type of t-test. This can be either:
1 - Paired T-Test
2 - Two-sample equal variance T-Test
3 - Two-sample unequal variance T-Test
T.Test Function Examples
Columns A and B of the example spreadsheet contain two arrays of data.
The probability associated with the Student's paired t-test with a one-tailed distribution, for the two arrays of data can be calculated using the Excel T.Test function as follows :
=T.TEST( A1:A12, B1:B12, 1, 1 )
This gives the result 0.449070689.
The probability associated with the Student's paired t-test with a two-tailed distribution, for the same two arrays of data, is calculated as follows :
=T.TEST( A1:A12, B1:B12, 2, 1 )
This gives the result 0.898141378.
Note that the probability associated with the two-tailed T-Test is double that of the one-tailed T-Test.
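For readers working outside Excel, the same kind of paired t-test can be reproduced with SciPy. This is only a rough cross-check: the two lists below are placeholder data, not the A1:A12 and B1:B12 values from the example above, so the p-values will differ from 0.449070689.

```python
# Paired (type = 1) t-test, similar to what T.TEST computes, using placeholder data.
from scipy import stats

array1 = [5.1, 4.8, 6.2, 5.9, 5.5, 6.0, 4.9, 5.3, 5.8, 6.1, 5.0, 5.6]
array2 = [5.0, 5.2, 6.0, 5.7, 5.9, 5.8, 5.1, 5.2, 5.9, 6.3, 4.8, 5.5]

t_stat, p_two_tailed = stats.ttest_rel(array1, array2)  # paired test, like type = 1
p_one_tailed = p_two_tailed / 2                          # tails = 1 halves the two-tailed value, as noted above

print(p_one_tailed, p_two_tailed)
```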
Further information and examples of the Excel T.Test function are provided on the Microsoft Office website.
T.Test Function Errors
If you get an error from the Excel T.Test Function, this is likely to be one of the following :
Common Errors
#N/A - Occurs if the two supplied arrays have different lengths
#NUM! - Occurs if either:
- the supplied tails argument has any value other than 1 or 2
- the supplied type argument is not equal to 1, 2 or 3
#VALUE! - Occurs if either the tails argument or the type argument is non-numeric | {"url":"http://www.excelfunctions.net/Excel-T-Test-Function.html","timestamp":"2014-04-19T19:39:28Z","content_type":null,"content_length":"16636","record_id":"<urn:uuid:02201cad-6217-4d71-bc62-a977f2087e21>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00185-ip-10-147-4-33.ec2.internal.warc.gz"} |
2nd Annual Kepler Science Conference - 10 NEW Earthlike Planets in Habitable Zones -LIVE STREAM- Oct
Any wonder we are not alone and being visited for a very long time. So if there are so many Earth-like planets, and if we were to guess that 0.1% had life, how many intelligent species would there be? More than 57, that's for sure!
Gets more interesting by the month.
I did a rough calculation:
200 billion (number of stars in our Milky Way galaxy) *times* 11% (the percentage of stars like our Sun) = 22,000,000,000 Sunlike stars
22,000,000,000 Sunlike stars *times* 22% (the percentage of Sunlike stars that have habitable Earth sized planets based on today's announcement) = 440,000,000 Earthlike habitable planets in our Milky
Way galaxy
440,000,000 Earthlike habitable worlds /divided by volume of the Milky Way Galaxy.....
Uh oh... time to do some more math...
The volume of the Milky Way can be approximated by a disk with a thickness of 1000 light years and a radius of 50,000 light years.
For simplicity I will use the formula for calculating the volume of a sphere: The formula is V = 4/3 π r cubed.
r = 50,000 light years
The cube of the radius - 1,350,000 light years
Multiply 1,350,000 light years by 4 over 3 (4/3 or 1.33333333333 in decimals) = 1,800,000 light years
Multiply 1,800,000 light years by π (pi) = 5,654,867
So the volume of the Milky Way - 5,654,867 light years
So to conclude 440,000,000 Earths divided by 5,654,867 (the volume of our Milky Way galaxy in light years) = 78.6 Earthlike planets cubed per a 100 light year volume on average = 647 Earths within a
100 light year sphere
So about 647 stars within 100 light years should have a planet like the Earth.
If we assume only 1% of those Earthlike planets produce technological intelligences then that = around 6.5 intelligent species within 100 light years of the Earth and a total of 44 million
intelligent species in the Galaxy.
(I put money on us being the 0.5 in that figure by the way )
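If you want to re-run this back-of-the-envelope chain with your own assumptions, here is a minimal sketch in Python. It uses the inputs quoted above but a disk (cylinder) volume for the galaxy, so the intermediate figures will not reproduce the post's arithmetic exactly; every number is an adjustable guess, not an established value.

```python
# Rough Drake-style chain of estimates; all inputs are adjustable guesses.
import math

stars_in_galaxy      = 200e9   # stars in the Milky Way
fraction_sunlike     = 0.11    # fraction of stars like the Sun
fraction_habitable   = 0.22    # Sunlike stars with an Earth-sized habitable-zone planet
fraction_intelligent = 0.01    # guess: habitable planets producing a technological species

habitable_planets = stars_in_galaxy * fraction_sunlike * fraction_habitable

disk_radius_ly    = 50_000
disk_thickness_ly = 1_000
galaxy_volume_ly3 = math.pi * disk_radius_ly**2 * disk_thickness_ly  # disk, not sphere

density = habitable_planets / galaxy_volume_ly3      # habitable planets per cubic light year
nearby_volume = (4 / 3) * math.pi * 100**3            # sphere of radius 100 light years

print(f"habitable planets in the galaxy:        {habitable_planets:.3g}")
print(f"habitable planets within 100 light years: {density * nearby_volume:.3g}")
print(f"technological species in the galaxy (1%): {habitable_planets * fraction_intelligent:.3g}")
```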
Numerical and Computing Techniques in Finance
MSc in Mathematical Finance
by Online Distance Learning
make options your future
make futures your option
Numerical and Computing Techniques in Finance (Online Version)
Level M, Diploma Stage, 20 credits
Old code: 0571002 (until 2010/11)
New code: MAT00031M (from 2011/12)
Pre-requisites: Mathematical Methods of Finance (Online Version), Discrete Time Modelling and Derivative Securities (Online Version), Portfolio Theory and Risk Management (Online Version), or
Aims and Distinctive Features: The aim of the module is to provide programming skills required for the implementation of mathematical models in quantitative finance. The focus will be on the C++
programming language, which is widely accepted as the main tool amongst practitioners in the financial community. The implementation of a given model rarely narrows down to the pricing of a single
particular financial instrument. Most often it is possible to devise general numerical schemes which can be applied to various types of derivatives. The code should be designed so that it easily
integrates with the work of other developers and can be modified by other users. The student will learn such skills by writing C++ programs designed for pricing various types of derivatives, starting
from the simplest discrete time models and finishing with continuous time models based on finite difference or Monte Carlo methods.
Learning Outcomes: By the end of the module, students should:
a) be able to write comprehensive C++ programs;
b) be familiar with functions and function pointers;
c) be familiar with classes and handle virtual functions, inheritance and multiple inheritance;
d) be able to implement non-linear solvers;
e) be familiar with data structures and dynamic memory allocation;
f) understand and have experience of using class and function templates;
g) be familiar with standard numerical methods (finite difference, Monte Carlo) for solving representative problems;
h) be able to price European and American options under the CRR model;
i) be able to price American options by means of finite difference methods under assumptions of the Black Scholes model;
j) be able to price barrier and Asian options by means of Monte Carlo simulation.
Indicative Content:
Numerical techniques
1. Modelling principles. Using data in mathematical modelling.
2. Pricing by backward induction (see the code sketch after these lists).
3. Representative equations of Black-Scholes type and elementary finite difference approximations.
4. Basic concepts of consistency, convergence and stability.
5. Explicit and implicit difference methods.
6. Monte Carlo Methods for exotic options. Generating random variables with multidimensional normal distributions by the Box-Muller method. Generating sample paths of Brownian Motion.
7. Generating sample paths of solutions to stochastic differential equations. Application to pricing barrier options and Asian call options.
C++ programming
9. Basic features: syntax, variables and their types, arrays, pointers, conditional statements and loops.
10. Functions and function pointers.
11. Classes.
12. Dynamic memory allocation.
13. Inheritance.
14. Virtual functions and polymorphism.
15. Templates.
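As a language-agnostic illustration of item 2 above (pricing by backward induction) and learning outcome (h), here is a minimal CRR binomial pricer for a European call. The module's own implementations are in C++; this Python sketch, with arbitrary parameter values, only shows the shape of the algorithm.

```python
# Minimal CRR binomial tree with backward induction (illustrative only).
import math

def crr_european_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))     # up factor
    d = 1 / u                               # down factor
    q = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    disc = math.exp(-r * dt)

    # payoffs at maturity: node j has seen j up moves and steps - j down moves
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]

    # step backwards through the tree, discounting expected values
    for step in range(steps - 1, -1, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step + 1)]
    return values[0]

# arbitrary example parameters
print(crr_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200))
```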
Learning and Teaching Strategy: Interactive presentations recorded on CD/DVD in lieu of lectures, equivalent to 30 one-hour lectures, and 10 one-hour one-to-one online tutorials scheduled at regular
intervals covering the three core modules comprising the Diploma Stage of the programme. Individual feedback and advice will be offered to students during scheduled online tutorials and via a VLE
discussion forum. The final online tutorial will include a recorded online viva.
Arrangements for Revision and Private Study: Students are expected to contribute a considerable amount (of the order of 180 hours) of individual study time, including interactive CD/DVD presentations
in lieu of lectures, exercises and assessed coursework assignments, library and textbook work. The final week (for Fast Track students) or two weeks (for Standard Track students) of the Diploma Stage
preceding the online viva will be devoted to revision and no new material will be covered during that period.
Assessment: Four assessed computer-based coursework assignments comprising 100% of the final mark followed by a recorded online viva to authenticate the work submitted for assessment. The weightings
of individual coursework assignments to be advised prior to commencing the Diploma Stage of the online programme. Marking will be based exclusively on written work submitted electronically for each
assignment, whereas the online viva scheduled at the end of the Diploma Stage will serve to authenticate the work submitted for assessment but will not otherwise affect the marks. Assessed work will
be routinely screened using online tools for the detection of unfair means such as unacknowledged copying of material or collusion.
Recommended Texts:
1. K. Back, A course in Derivative Securities: Introduction to Theory and Computation.
2. D.J. Duffy, Introduction to C++ for Financial Engineers. An Object-Oriented Approach, John Wiley & Sons (2006).
3. P. Glasserman, Monte Carlo Methods in Financial Engineering.
4. M. Joshi, C++ Design Patterns and Derivatives Pricing, Cambridge University Press (2004).
5. D. Lamberton, B. Lapeyre Introduction to Stochastic Calculus Applied to Finance, Second Edition, Chapman & Hall/Crc Financial Mathematics Series.
6. P. Wilmott, Paul Wilmott Introduces Quantitative Finance, John Wiley & Sons, Chichester (2001).
7. D. Yang, C++ and Object-Oriented Numeric Computing for Scientists and Engineers. | {"url":"http://maths.york.ac.uk/www/NCTF_Online","timestamp":"2014-04-21T12:21:38Z","content_type":null,"content_length":"21466","record_id":"<urn:uuid:31c4f38d-1893-4b07-95a4-539bc21133ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
The gyroscopic stability condition
Notation used below:
c[Ma] - overturning moment coefficient derivative, c[Ma](B, Ma)
s[g] - gyroscopic stability factor
A spin-stabilized projectile is said to be gyroscopically stable, if, in the presence of a yaw angle d, it responds to an external wind force F[1] with the general motion of nutation and precession.
In this case the longitudinal axis of the bullet moves into a direction perpendicular to the direction of the wind force.
It can be shown by a mathematical treatment that this condition is fulfilled, if the gyroscopic stability factor s[g] exceeds unity. This demand is called the gyroscopic stability condition. A bullet
can be made gyroscopically stable by sufficiently spinning it (by increasing w!).
As the spin rate w decreases more slowly than the velocity v[w], the gyroscopic stability factor s[g], at least close to the muzzle, continuously increases. A practical example is shown in a separate figure.
Thus, if a bullet is gyroscopically stable at the muzzle, it will be gyroscopically stable for the rest of its flight. The quantity s[g] also depends on the air density r, and this is the reason why special attention has to be paid to guaranteeing gyroscopic stability in extremely cold weather conditions.
Bullet and gun designers usually prefer s[g] > 1.2...1.5, but it is also possible to introduce too much stabilization. This is called over-stabilization.
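The page does not reproduce the formula for s[g] itself. Purely as an illustration, the sketch below assumes the classical textbook expression s_g = (I_x p)^2 / (4 I_y M_alpha) with M_alpha = ½ ρ v² S d c[Ma]; both that choice of formula and every number in it are assumptions made here, not values taken from this article.

```python
# Illustrative gyroscopic stability check; formula and inputs are assumed placeholders.
import math

def gyroscopic_stability(Ix, Iy, p, v, d, cMa, rho=1.225):
    S = math.pi * d**2 / 4                      # reference (cross-sectional) area
    M_alpha = 0.5 * rho * v**2 * S * d * cMa    # overturning moment per unit yaw angle
    return (Ix * p)**2 / (4 * Iy * M_alpha)

# invented values roughly in the range of a small-calibre bullet
sg = gyroscopic_stability(Ix=1.5e-7, Iy=1.2e-6, p=15000.0, v=900.0, d=0.00782, cMa=2.5)
print(sg, "gyroscopically stable" if sg > 1.0 else "unstable",
      "(designers usually prefer 1.2 to 1.5 or more)")
```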
The gyroscopic (also called static) stability factor depends on only one aerodynamic coefficient (the overturning moment coefficient derivative c[Ma]) and thus is much easier to determine than the
dynamic stability factor. This may be the reason why some ballistic publications only consider static stability when it comes to stability considerations.
However, the gyroscopic stability condition is only a necessary condition to guarantee a stable flight; it is by no means sufficient. Two other conditions - the condition of dynamic stability and the tractability condition - must also be fulfilled.
Molecular Expressions Microscopy Primer: Anatomy of the Microscope - Objective Magnification in Infinity Optical Systems: Interactive Java Tutorial
Interactive Java Tutorials
Objective Magnification in Infinity Optical Systems
Infinity-corrected microscope optical systems are designed to enable the insertion of auxiliary devices, such as vertical illuminators and intermediate tubes, into the optical pathway between the
objective and eyepieces without introducing spherical aberration, requiring focus corrections, or creating other image problems. In a finite optical system, light passing through the objective
converges at the image plane to produce an image. The situation is quite different for infinity-corrected optical systems where the objective produces a flux of parallel light wavetrains imaged at
infinity, which are brought into focus at the intermediate image plane by the tube lens. This tutorial explores how changes in tube lens and objective focal length affect the magnification power of
the objective in infinity-corrected microscopes.
The tutorial initializes with the major optical train components (condenser, specimen, objective, tube lens, and eyepiece) of a virtual infinity-corrected microscope appearing in the window. A beam
of semi-coherent light generated by the source passes through the condenser and is focused onto the specimen plane, subsequently being collected by the objective. The parallel flux of light rays
exiting the objective are focused by the tube lens onto the intermediate image plane positioned at the fixed diaphragm of the eyepiece. The distance between the tube lens and the fixed eyepiece
diaphragm is adjustable within a range of 160 and 200 millimeters using the Reference Focal Length (L) slider (equivalent to the tube length in older microscopes). In addition, the objective focal
length can be varied from 2 to 40 millimeters by translating the Objective Focal Length (F) slider. As these sliders are translated, the individual components of the virtual microscope are readjusted
to new positions.
To operate the tutorial, use the Reference Focal Length and Objective Focal Length sliders to alter the specifications of the virtual infinity optical system. The objective magnification (M) is
calculated by dividing the reference focal length (L) of the tube lens by the objective focal length (F). As the critical focal length parameters of the microscope are varied, this calculation is
automatically performed and the result is continuously updated and displayed in the space to the right of the objective drawing in the tutorial window. For example, a reference focal length of 180
millimeters and an objective focal length of 18 millimeters yield a magnification of 10x. The objective working distance is also presented graphically and updated as the microscope focal lengths are varied.
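The calculation described above is simply M = L/F; a few lines of code make the slider behaviour explicit. The 180 mm / 18 mm pair is the example from the text; the other pairs are extra illustrative values, not taken from the tutorial.

```python
# Objective magnification in an infinity-corrected system: M = L / F.
def objective_magnification(reference_focal_length_mm, objective_focal_length_mm):
    return reference_focal_length_mm / objective_focal_length_mm

for L, F in [(180, 18), (180, 4.5), (200, 20)]:
    print(f"L = {L} mm, F = {F} mm  ->  M = {objective_magnification(L, F):g}x")
```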
As previously listed, the basic optical components of an infinity system are the objective, tube lens, and the eyepieces. The specimen is located at the front focal plane of the objective, which
gathers light transmitted through or reflected from the central portion of the specimen and produces a parallel bundle of rays projected along the optical axis of the microscope toward the tube lens.
A portion of the light reaching the objective emanates from the periphery of the specimen, and enters the optical system at oblique angles, advancing diagonally (but still in parallel bundles) toward
the tube lens. All of the light gathered by the tube lens is then focused at the intermediate image plane, and subsequently enlarged by the eyepiece.
In a finite optical system of fixed tube length, light passing through the objective is directed toward the intermediate image plane (located at the front focal plane of the eyepiece) and converges
at that point, undergoing constructive and destructive interference to produce an image. The situation is quite different for infinity-corrected optical systems where the objective produces a flux of
parallel light wavetrains imaged at infinity (often referred to as infinity space, and labeled in the tutorial window), which are brought into focus at the intermediate image plane by the tube lens.
It should be noted that objectives designed for infinity-corrected microscopes are usually not interchangeable with those intended for a finite (160 or 170 millimeter) optical tube length microscope
and vice versa. Infinity lenses suffer from enhanced spherical aberration when used on a finite microscope system due to lack of a tube lens. In some circumstances it is possible, however, to utilize
finite objectives on infinity-corrected microscopes, but with some drawbacks. The numerical aperture of finite objectives is compromised when they are used with infinity systems, which leads to
reduced resolution. Also, parfocality is lost between finite and infinity objectives when used in the same system. The working distance and magnification of finite objectives will also be decreased
when they are used with a microscope having a tube lens.
The tube length in infinity-corrected microscopes is referred to as the reference focal length and ranges between 160 and 200 millimeters, depending upon the manufacturer. Correction for optical
aberration in infinity systems is accomplished either through the tube lens or the objective(s). Residual lateral chromatic aberration in infinity objectives can be easily compensated by careful tube
lens design, but some manufacturers choose to correct for spherical and chromatic aberrations in the objective lens itself. This is possible because of the development of proprietary new glass
formulas that have extremely low dispersions. Other manufacturers utilize a combination of corrections in both the tube lens and objectives.
Contributing Authors
William K. Fester and Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive., Melville, New York, 11747.
Ian D. Johnson, Robert T. Sutter, Matthew J. Parry-Hill, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee,
Florida, 32310.
Linear dependence of a function set of m variables with vanishing generalized Wronskians.
(English) Zbl 0724.15004
The author considers necessary and sufficient conditions for a set of $n$ functions $\phi_i : E^m \to E^1$, which together with their partial derivatives of order at least $n-1$ are continuous, to be linearly dependent. After giving some definitions he shows that the vanishing of all generalized Wronskians of $\phi = (\phi_1(t), \dots, \phi_n(t))$ in an open set $G \subset E^m$ implies that $G$ contains a countable set of disjoint, open, connected components of the interiors of sets of constant order such that (1) on each such component $\phi$ is linearly independent, (2) the union of these components is dense in $G$.
15A03 Vector spaces, linear dependence, rank
53A45 Vector and tensor analysis
26B12 Calculus of vector functions | {"url":"http://zbmath.org/?q=an:0724.15004&format=complete","timestamp":"2014-04-18T11:19:55Z","content_type":null,"content_length":"22116","record_id":"<urn:uuid:2da7eb3f-1333-4075-9953-ff2aef567b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: March 1997 [00173]
Plot3D precision limits?
• To: mathgroup at smc.vnet.net
• Subject: [mg6504] Plot3D precision limits?
• From: Jim Hicks <jim at cern36.ce.uiuc.edu>
• Date: Thu, 27 Mar 1997 02:42:45 -0500 (EST)
• Organization: University of Illinois at Urbana-Champaign
• Sender: owner-wri-mathgroup at wolfram.com
I have two questions regarding the statements listed below that find the
numerical solution to a system of two equations and two unknowns (x and
y). This solution happens to be a maximum likelihood solution to the
problem of estimating two unknown parameters, x and y. The solution
provided by FindRoot using Mathematica 3.0 under Solaris 2.5 for SPARC
is the following:
{x-> -0.237575444848816, y-> -0.0531098274657849}
I am interested in visualizing the surface of the log-likelihood
function in the immediate vicinity of the optimal solution. As you can
see, I have used Plot3D to plot this function near the solution.
Question 1:
As I increase the number of digits specified for the ranges of x and y
to zoom in closer to the actual solution, Plot3D produces a non-smooth
surface. Plot3D produces a smooth surface if fewer digits are specified
following the decimal for the plot ranges. I assume that I am crossing
some machine precision threshold. Is this true and is there anyway to
overcome it so that I can see an accurate representation of the surface
near the solution given by x and y above?
Question 2:
If you reproduce the plot that I am considering, you will notice that
all the axis labels are printed with up to 6 digits after the decimal.
Is it possible to change this by some option such that for example, as
many as 15 digits would be displayed?
Thank you very much for your consideration of my questions.
je-hicks at uiuc.edu
PlotPoints->15, BoxRatios->{1.0,1.0,0.4}] | {"url":"http://forums.wolfram.com/mathgroup/archive/1997/Mar/msg00173.html","timestamp":"2014-04-16T07:17:04Z","content_type":null,"content_length":"36063","record_id":"<urn:uuid:b0660cf9-e405-4d57-a76a-a037c3947c95>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Ask Dr. Math Archives: High School Logarithms
Browse High School Logarithms
Stars indicate particularly interesting answers or good places to begin browsing
Why might empirical scientific data be transformed to logarithmic data and presented on a logarithmic scale?
What is a logarithm?
How do I explain why logarithms work to non-math-oriented people? Also, what is the history of the development of the concept of logarithm? Why is it called a "logarithm"?
I understand what logs are, and their relation to Euler's constant, but I don't understand why they are what they are.
Provided only that, in some bases, log 2 = 0.607 and log 3 = 0.959, find a good value for log 5.
How many significant digits are there in a number with no non-zero digits? Example: 00.000 Are there any?
I have two formulas for growth and decay: q = q_0*e^(rt), and q = q_0*a^t. What is the difference between them?
A math class asks how to convert between bases in logarithms.
Could you please explain the concept of anti-logarithms?
If log sub(a)10 = 0.250, then log sub(10)a equals ? The solution requires me to the take the base(a) antilogarithm of both sides. That would be 10 = a^0.250. Why is this the final answer?
How can I calculate the logarithm function on a number without a calculator? What are some of the uses of the logarithm function?
To solve the equation ln x^2 = -7, I can use two methods, each supported by properties of logarithms. But with one method I get two answers and with the other method I only get one answer. Is
there an inconsistency in the properties of logarithms?
Why is the base of common logarithms 10? Is it easier to work with, or is there a mathematical explanation?
The Ph of an acidic pond is 5. What is the hydrogen ion concentration in moles per liter?
Why did Briggs raise 2 to the tenth power? Why did he extract the square root of 1.024 instead of 1024? Why did he extract the square root of 1.024 forty seven times?...
I want to know how Briggs constructed logarithmic tables of common logarithms.
I need to find an algorithm to determine any root of a number. I was told I could determine the estimated value by using Newton's Method...
How do you find the characteristic and mantissa of a negative logarithm?
How long will coal reserves last if consumption increase at the rate of 3.1 percent per year?
I am currently studying logarithims and I see that logarithms can take the form of ln or log. What is the difference between the two?
How do you decide which large number is greater? For instance, how do you know whether 9!^(9!^9) is greater than 10^(10!^10)?
Suppose f and g are functions defined by f(x) = x+2 and g(x) = x. Find all x > -2 for which: 3^[g(x)*logbase3 f(x)] = f(x).
2147483647 is one less than what power of 2?
How can I use the logarithm function in base e to find a logarithm in base 10 or a logarithm in base n?
I found this equation during a radio receiver discussion: 4x10-12mW = - 114dBm. How does it equate?
Find x^y, where y is a decimal, without using a calculator.
In the definition of logarithm, a^x = b iff x = log_a(b) where a and b are greater than 0 and a is not equal to 1, why the stipulations? Are there any problems or contradictions that arise if
they are not given?
How can I find the derivative of an exponential function like y = 4^x?
I'm studying logs and have learned the change of base formula that says log_a(x) = log_b(x) / log_b(a). I know how to use the formula, but I don't understand why it works. Can you show me?
What is the difference between log and natural log?
I know that logs and exponents are inverse functions. If an exponential function correlates to repeated multiplication, does a logarithmic function correlate to repeated division?
How can I show that e^pi is greater than pi^e without using a calculator?
I have a given graph of a straight line on a log-log axis system and need to find the equation of that line. How do I do it?
Find those values of 'm' for which this equation has three different solutions: x^3 - (3/2)x^2 + 5/2 = log (base 1/4) (m).
I need to know how to calculate the addition of numbers using logarithms: 1 + 2 + 3. There is a step that requires adding numbers that exceed Excel's limits. How can I use logs to add the
We were discussing a problem in precalculus today and seemed to discover a basic flaw in one of the exponent laws.
How can I find log_4 (12) without a calculator?
How does the term 'logarithm' relate to the terms 'exponent' and 'knots', and to nautical logbooks?
How do you evaluate 2^71 * 3? Can you use logarithms?
I can solve these problems but I don't know how to prove them: 9^1+x= 27; 2^-x-4=1/32; 243=(1/3)^x+4; 64=0.5)^3-x.
Related Rates Word Problem
November 27th 2011, 04:51 PM
Related Rates Word Problem
"Coffee is draining from a conical filter(height 6" diameter 6") into a cylindrical coffeepot (diameter 8" height not given) at a rate of 10in^3/min. How fast is the level in the pot rising and
how fast is the level in the cone falling when the coffee is 5"deep?"
For the first part, I got as far as h'=(v'-2pirh)/(pir^2) or h'= (10-8pih)/(16pi) but I'm not sure how to find h without aditional conditions. Am I even headed in the right direction?
I havent the slightest idea how to go about the second part. Is the 5" deep is referring to the coffee in the pot?
Any and all help appreciated.
November 27th 2011, 05:19 PM
Re: Related Rates Word Problem
$\frac{dV}{dt} = -10$ cubic in per min for the cone
$\frac{dV}{dt} = 10$ cubic in per min for the cylindrical pot
coffee volume in the cone ...
$V = \frac{\pi}{3} r^2 h$
since $\frac{r}{h} = \frac{1}{2}$ ...
$V = \frac{\pi}{12} h^3$ , where $h$ = depth of coffee in the cone
coffee volume in the pot
$V = 16\pi y$ , where $y$ = depth of coffee in the pot
as far as the 5" depth info ... I'd say it's depth of coffee in the cone since the question wants the rate of change of depth in the cone at that time.
try it again ...
November 27th 2011, 05:43 PM
Re: Related Rates Word Problem
That definitely helped. I got both answers, but on the volume of the cone I'm not quite sure how you went from v= (pi/3)r^2h to v=(pi/12)h^3
November 27th 2011, 06:08 PM
Re: Related Rates Word Problem
$V = \frac{\pi}{3} r^2 h$
note that for the cone ...
$\frac{r}{h} = \frac{3}{6} = \frac{1}{2}$
therefore, $r = \frac{h}{2}$
substitute $\frac{h}{2}$ for $r$ in the volume formula above to get V strictly in terms of h | {"url":"http://mathhelpforum.com/calculus/192853-related-rates-word-problem-print.html","timestamp":"2014-04-16T08:16:51Z","content_type":null,"content_length":"8512","record_id":"<urn:uuid:ea8feca9-cbee-4774-aac8-6a6e3b27ed91>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
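For completeness, a sketch of the final step (an editorial addition, not part of the original exchange): with $V = \frac{\pi}{12}h^3$, differentiating gives $\frac{dV}{dt} = \frac{\pi}{4}h^2 \frac{dh}{dt}$, so at $h = 5$ and $\frac{dV}{dt} = -10$ we get $\frac{dh}{dt} = \frac{4(-10)}{25\pi} = -\frac{8}{5\pi} \approx -0.51$ inches per minute, i.e. the coffee level in the cone is falling at about half an inch per minute. For the pot, $V = 16\pi y$ gives $\frac{dy}{dt} = \frac{10}{16\pi} \approx 0.20$ inches per minute rising.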
Borough Park, New York, NY
New York, NY 10029
Expert ISEE/SSAT/SAT/ACT/GRE/GMAT Tutor
...TANDARDIZED TEST PREPARATION I help students achieve their best possible results by combining expert instruction with a comprehensive practice and feedback system for the SAT, ACT, ISEE, SSAT, GRE, and GMAT. I provide an assessment of their strengths and weaknesses...
Offering 10+ subjects including GMAT | {"url":"http://www.wyzant.com/Borough_Park_New_York_NY_GMAT_tutors.aspx","timestamp":"2014-04-21T05:37:38Z","content_type":null,"content_length":"61723","record_id":"<urn:uuid:679ad566-e716-4130-8f14-b2b4712eda6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calumet City Algebra 1 Tutor
Find a Calumet City Algebra 1 Tutor
...In short, I can help you learn and master Excel! I've completed the equivalent of 3 full first-year undergraduate physics courses, and I earned solid A's or better every single time! My
greatest strength is in kinematics, but I'm no slouch when it comes to optics, electromagnetism, and other first year topics, either.
28 Subjects: including algebra 1, English, reading, physics
...I also offer online lessons with a special discount--contact me for more details if interested.Before I knew I was going to teach French, I was originally going to become a math teacher. I have
had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math.
16 Subjects: including algebra 1, chemistry, English, ACT Math
...This is probably because I tend to think in a logical and orderly way and I am a determined and creative problem solver. I first tutored math as I helped students prepare for standardized
tests. I've spent the last six years teaching math (and English) prerequisite courses at a small private nursing college.
17 Subjects: including algebra 1, English, reading, grammar
...I have been playing guitar from about the age of twelve. I first learned by ear how to make chords. Later on, I played with others in high school and took music classes in college which taught
me a basic knowledge of guitar.
40 Subjects: including algebra 1, reading, English, ASVAB
...I can also teach mechanical engineering and basic electrical engineering subjects as well. I work as a full time employee at CNH, Inc. Currently I have free time on weekday evenings (after 6PM)
& weekends.
16 Subjects: including algebra 1, chemistry, physics, calculus | {"url":"http://www.purplemath.com/calumet_city_algebra_1_tutors.php","timestamp":"2014-04-20T06:35:25Z","content_type":null,"content_length":"24091","record_id":"<urn:uuid:86c40ef6-5db5-48f6-b96f-cd73910250e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
ChaCha and Wolfram|Alpha Bring Computation to Q&A and SMS
June 28, 2011
Posted by
Today we are pleased to share that ChaCha, the #1 free real-time question and answer service, has enhanced the depth, accuracy, and speed of its online and mobile Q&A service by adding computational
knowledge from Wolfram|Alpha. The Wolfram|Alpha integration provides ChaCha users with instantly computed facts and answers to questions from over 100 topic areas, such as demographics, definitions,
mathematics, geography, and celebrity facts.
ChaCha has historically used people to answer difficult questions to ensure a high-quality answer. We think ChaCha’s decision to incorporate Wolfram|Alpha, the world’s largest curated data
repository, made accessible via free-form queries, is a natural extension of ChaCha’s service. Our match up with ChaCha also opens up Wolfram|Alpha to the larger community serviced by SMS mobile text
“At ChaCha we are constantly looking at ways in which we can better our users’ experience and provide the fastest and most accurate answers to their questions,” said Scott Jones, ChaCha’s CEO. “By
partnering with Wolfram|Alpha and tapping into their vast database of computational knowledge, we are enhancing the scope and efficiency of our service.”
On the first day of integration, Wolfram|Alpha answered 32,000 of ChaCha’s incoming questions in a wide range of topics through ChaCha’s SMS service and mobile app. For example, need some quick
homework help? Text “what is the inverse of Xlog3(4)?” to 242-242, and in just seconds ChaCha will text you the correct answer:
Or download the ChaCha mobile app and get Wolfram|Alpha-powered responses, such as “How old is Snoop Dogg?”:
We’re looking forward to providing unique and dynamically computed facts to the ChaCha Community!
9 Comments
ok – how can i use the sms service via android in germany ?
i have tried +1242242 but this seems to be incorrect.
Posted by recompile June 28, 2011 at 4:23 pm Reply
I have same problem in Alaska
Posted by Steve February 10, 2012 at 8:51 pm Reply
To Wolframalpha Management:
Somehow I would like to think that this is one of the way that wolframalpha can generate for cash.
I am wondering if wolframalpha can be made like HowStuffWorks and ChaCha, Facebook, and Twitter. I believe this is one of the way for cashing of knowledge.
Perhaps wolframalpha can work together with Facebook ?
Wolframalpha should become a number 1 of people choice in answering any question, from science to fictional.
Posted by Business June 28, 2011 at 10:22 pm Reply
You can’t be serious about this example. What on earth is “Xlog3(4)” supposed to mean?! And what was “inverse” supposed to mean in this ill-defined context?
Alpha interpreted the input as x*log(3)*4, and “inverse” as inverse function, a gallant try. But it is just as likely that the input was intended to mean x times the logarithm base 3 of 4, or that
“inverse” meant reciprocal.
I guess the point is that Alpha can produce possibly helpful answers to poorly-phrased or even nonsensical questions. Is that the message you want to convey?
Posted by Richard June 29, 2011 at 8:54 am Reply
When input is ambiguous Wolfram Alpha is intended to guess the most likely intention but say that this is its assumption. It is also intended to offer all the alternatives and allow the user to
select the alternative/s if they wish to do so. W|A is still under development so when it fails the user should report it using the feedback facility at the bottom of the output.
Posted by Brian Gilbert July 1, 2011 at 3:19 am Reply
Sadly not available in Canada.
Posted by Jordan October 6, 2011 at 8:12 pm Reply
can i use in mexico with telcel as an operator?
Posted by Arturo z October 12, 2011 at 7:45 pm Reply
Is it available in Australia?
Posted by dom December 12, 2011 at 4:10 am Reply
*** Problems
Two numbers x and y are chosen at random from the set [1,2,3,4,..........,3n]. What is the probability that x² -y² is divisible by 3?
Character is who you are when no one is looking.
Re: *** Problems
This sounded like a trick question at first, but having spent a little time on it, the pattern is somewhat surprising and gives a remarkably high number of results divisible by 3!
The answer I get seems to tend towards , but it does vary depending on the value of n. (eg, if n=1, then it is 33.33%, as I had expected).
Having worked on this the hard way, let's now try to get a formula.
Re: *** Problems
I'm afraid not. If n = 1, then there are 9 possible combinations for the two numbers.
1-1=0 --> Yes
4-1=3 --> Yes
9-1=8 --> No
1-4=-3 --> Yes
4-4=0 --> Yes
4-9=-5 --> No
1-9=-8 --> No
9-4=5 --> No
9-9=0 --> Yes
So there we have 5 out of 9 combinations, which is 55.55%.
But actually, it doesn't matter what the number is, all that matters is whether the numbers you pick are of the form 3k, 3k-1 or 3k-2. And as the set ends in 3n, there is always an equal chance of
picking any of these. So if the above list was generalised, it could be used to show that the chance is always 55.55% for any value of n.
Why did the vector cross the road?
It wanted to be normal.
Re: *** Problems
Mathsy, I agree with your logic, but the question asked for two numbers to be selected at random, not the same number to be selected twice. Hence, I excluded the possibility of x=y. This has the
effect of removing 3n from both numerator and denominator. Progressing through the series, up to very high values of n will still tend towards 55.55%, but always slightly less than this and, at very
low values of n, significantly less.
A technicality, maybe, but if I am playing poker, I like to know how many aces are in the deck!
Re: *** Problems
Ah. True point. Well, in that case, you're completely correct. If n = 1, then the probability is 1/3 and as n increases the probability will tend to 5/9 without ever reaching it.
It'll probably be extremely difficult to calculate the formula though. I'll have a go, but not right now.
Why did the vector cross the road?
It wanted to be normal.
Re: *** Problems
Try this:
[((3n)^2 x 5/9) -3n] / [3n x (3n-1)]
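(An editorial sanity check, not part of the original thread: the closed form above simplifies to (5n - 3)/(3(3n - 1)), which a brute-force count confirms for small n.)

```python
# Brute-force count of ordered pairs x != y from {1, ..., 3n} with x^2 - y^2 divisible by 3,
# compared with the closed form (5n - 3) / (3(3n - 1)).
from fractions import Fraction

def brute_force(n):
    values = range(1, 3 * n + 1)
    hits = sum(1 for x in values for y in values
               if x != y and (x * x - y * y) % 3 == 0)
    return Fraction(hits, 3 * n * (3 * n - 1))

for n in range(1, 6):
    formula = Fraction(5 * n - 3, 3 * (3 * n - 1))
    print(n, brute_force(n), formula, brute_force(n) == formula)
```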
Re: *** Problems
Good going, mathsyperson and ashwil! I shall wait for more responses. In a day or two, I shall post the solution.
Character is who you are when no one is looking.
Re: *** Problems
Solve for x.
Character is who you are when no one is looking.
Re: *** Problems
find the sum
1² + 3²/1! + 5²/3! + 7²/5! + ............∞
Character is who you are when no one is looking.
Re: *** Problems
do you want complex numbers?
so x=3 or x=3+I√6 or x=3-I√6
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
cool. Very interesting question. I get 1+5e but my solution is stinky. I want to see better.
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
Well, the answer is x = 3, but my algebra is so rusty that I am struggling to come up with the elegant solution rather than the trial and error version! Give me time and I WILL get there!
Re: *** Problems
If you factorize the equation you got, you'll get the above equation.
From the first factor x=3, but the second factor is a quadratic equation:
The discriminant is < 0, so there are 2 complex answers too.
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
Thanks, Krassi.
You were obviously posting your answer while I was still working on mine, so I saw yours after posting mine! I had got as far as multiplying out and simplifying, but I just hadn't got as far as the
final factorisation. It used to be so easy when I was 18, as I was doing it all the time, but you don't have much need for this kind of maths as a hotelier, so it takes me a bit longer to get my
thinking in gear!!
Re: *** Problems
You mathz is (the moderators won't allow me to say this
Do you know how many people know a 1/10 of what you know?
I would say 5-10% MAX!
So... Get back to work!!!
(If you understand me
Can you give some solution of ***3?
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
What a lovely compliment! I wish that my teacher had agreed with you in 1978 (or, even the examination board!!). I was far from the perfect student and my maths was only average at best. But what I
have done in life since then is to learn from mistakes, work from first principles wherever possible and always try to understand WHY things are the way they are. It may take a lot longer than
remembering formulae, but it is much more sound in the long-term.
Alas, question ***3 is currently beyond me and I really need to dust away a lot more cobwebs before I can suggest a properly worked solution, but I do agree with 1 + 5e being the correct answer,
based on observation rather than on algebraic proof.
Re: *** Problems
I used ugly algebra so I prefer you to post your proof.
For the first: Your teacher drags!!!
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
I wanted to say another thing from drags but the word blocker from the forum didn't allowed me to do it.
IPBLE: Increasing Performance By Lowering Expectations.
Re: *** Problems
krassi_holmz and ashwil, I shall post the solutions (with steps) to ***2 and ***3 by the weekend!
Character is who you are when no one is looking.
Re: *** Problems
How many n-digit numbers can be made with nonzero digits in which no two consecutive digits are the same?
Character is who you are when no one is looking.
Re: *** Problems
9 × 8^(n-1): there are 9 choices for the first digit (1-9) and 8 for each digit after it, since it only has to differ from the digit before.
Why did the vector cross the road?
It wanted to be normal.
Re: *** Problems
Excellent, mathsyperson!
Character is who you are when no one is looking.
Re: *** Problems
Prove that
is greater than or equal to where a, b, and c are positive real numbers.
Character is who you are when no one is looking.
Re: *** Problems
ganesh, I am seriously struggling with the proof of ****3. I can work through the logical structure of the 4 given equations and I can see an obvious similarity with the expression to be solved.
However, for the life of me, I just cannot get an expression that absolutely links this expression to those functions of e. The problem is that the numerator's progression is a function of (1+2n)^2
rather than x^n, as given in the examples, which is destined to diverge greatly as n increases.
My solution was simply based around common sense, in that the answer HAD to have a relationship to e, otherwise the examples are pointless! Doing a sum of the first few values of the expression soon
led to an answer of 14.5914, which is clearly 1 + 5e. Equally, the progression follows a very similar pattern to the formula for e, so it was clear that it is not necessary to bring in high values of
x as their effect is negligible.
The worked example that gives this missing link would be greatly appreciated!
Re: *** Problems
I shall post the proof soon.
I shall try to do it this evening.
Character is who you are when no one is looking. | {"url":"http://mathisfunforum.com/viewtopic.php?id=3043&p=1","timestamp":"2014-04-16T16:00:38Z","content_type":null,"content_length":"38201","record_id":"<urn:uuid:dbc293f7-ee43-41b7-9727-1150431e1dd2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
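(Editorial sketch of one route to $1 + 5e$, added here for reference; this is not the proof ganesh went on to post.) After the leading $1$, the general term of the series is $\frac{(n+2)^2}{n!}$ with $n$ running over the odd integers $1, 3, 5, \dots$. Writing $(n+2)^2 = n(n-1) + 5n + 4$ gives $\frac{(n+2)^2}{n!} = \frac{1}{(n-2)!} + \frac{5}{(n-1)!} + \frac{4}{n!}$, with the first fraction read as $0$ when $n = 1$. Summing over odd $n$, the first and third pieces together contribute $5\sum_{m \text{ odd}} \frac{1}{m!}$, and the middle piece contributes $5\sum_{m \text{ even}} \frac{1}{m!}$, so the whole sum is $5\sum_{m\ge 0}\frac{1}{m!} = 5e$. Adding the leading $1$ gives $1 + 5e$.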
MATHEMATICA BOHEMICA, Vol. 124, No. 2–3, pp. 245-254 (1999)
Local solvability and regularity results for a class of semilinear elliptic problems
in nonsmooth domains
M. Bochniak, A.-M. Sändig
M. Bochniak, A.-M. Sändig, Mathematisches Institut A/6, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany, e-mails: bochniak @mathematik.uni-stuttgart.de,
Abstract: We consider a class of semilinear elliptic problems in two- and three-dimensional domains with conical points. We introduce Sobolev spaces with detached asymptotics generated by the
asymptotical behaviour of solutions of corresponding linearized problems near conical boundary points. We show that the corresponding nonlinear operator acting between these spaces is Fréchet
differentiable. Applying the local invertibility theorem we prove that the solution of the semilinear problem has the same asymptotic behaviour near the conical points as the solution of the
linearized problem if the norms of the given right hand sides are small enough. Estimates for the difference between the solution of the semilinear and of the linearized problem are derived.
Keywords: semilinear elliptic problems, spaces with detached asymptotics, asymptotic behaviour near conical points
Classification (MSC2000): 35A07, 35B65, 35C20, 35J60
Full text of the article:
[Previous Article] [Next Article] [Contents of this Number] © 2004—2005 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/MB/124.2-3/11.html","timestamp":"2014-04-19T17:03:01Z","content_type":null,"content_length":"3354","record_id":"<urn:uuid:8684bdfd-7261-4bd7-96c7-d53fee83d20b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
\mathbb{Q}\times\mathbb{Q} is not cyclic
Prove that $\mathbb{Q}\times\mathbb{Q}$ is not cyclic. How do I do this my book says nothing.
If $\mathbb{Q}^2$ was cyclic, then $\mathbb{Q}$ being the image of $\mathbb{Q}^2$ under the projection mapping would be cyclic, so it suffices to show that $\mathbb{Q}$ is not cyclic. To see this
suppose that there was an isomorphism $f:\mathbb{Q}\to\mathbb{Z}$ we see then $f(1)=f(m\frac{1}{m})=mf(\frac{1}{m})$ so that $f(1)$ is divisible by $m$ for every $m\in\mathbb{Z}$ and so $f(1)=0$, but
since $f(0)=0$ this contradicts injectivity of $f$.
To show that $\mathbb{Q}$ is not cyclic, it seems conceptually more straightforward to take an arbitrary number $m/n$ (say, in lowest terms) and say that it can never meet $1/p$, where $p$ is some
prime that is not a factor of $n$. (For example, no multiples of $3/16$ will ever be $1/5$.) So no single number can generate all of $\mathbb{Q}$.
Sure, either way works. Another good way to look at it! My method shows more generally that any divisible group is not cyclic. | {"url":"http://mathhelpforum.com/advanced-algebra/189873-mathbb-q-times-mathbb-q-not-cyclic.html","timestamp":"2014-04-18T15:11:13Z","content_type":null,"content_length":"52441","record_id":"<urn:uuid:bb9f56d9-1d4c-4967-ad21-83d9355beed1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by Jess
Total # Posts: 816
Solve: cos(2x-180) - sin(x-90)=0 my work: cos2xcos180 + sin2xsin180= sinxcos90 - sin90cosx -cos2x - sin2x= cosx -cos^2x + sin^2x -2sinxcosx=cosx I'm stuck here. I tried subtracting cosx from both
sides and making sin^2x into 1- cos^2x, but I still can't seem to solve t...
How do I solve this? My work has led me to a dead end. tan(45-x) + cot(45-x) =4 my work: (tan45 - tanx)/(1+ tan45tanx) + (cot45 - cotx)/(1 + cot45cotx) = 4 (1-tanx)/(1+tanx) + (1-cotx)/(1+cotx) = 4
Then I found a common denominator, giving me this: (2-2cotxtanx)/(1+cotx+tanx+c...
Does my work here look right? 3. One mole of an ideal gas expands adiabatically into a vacuum. Calculate q, delta-e, w, and delta-H for the process. q= 0 [adiabatic] w= -Pext(V2-V1) w= 0 delta-e = w
+ q delta-e = 0 + 0 = 0 delta-h= delta-e + P(delta V) delta-h= 0 + 0 = 0 does ...
1. How much work is necessary to accelerate a bullet having a mass of 5.0 g to a velocity of 1000. m/sec? This was on a chemistry problem set of mine, and it seems like a very physics-oriented
question [I have not taken physics.] Could someone point me in the right direction a...
A few classmates and I were arguing about a problem we had to do for homework. The problem states: Limestone, CaCO3, when subjected to a temperature of 900 degrees C in a kiln, decomposes to calcium
oxide and carbon dioxide . How much heat is evolved/absorbed when one gram of ...
8th grade algebra
ten over eight = what over ten????? please help me!!!!!!!!!! 10 / 8 = x / 10 This means that 8 * x = 10 * 10 Solve for x.
Strontium metal is responsible for the red color in fireworks. Fireworks manufacturers use strontium carbonate, which can be produced by combining strontium metal, graphite (C), and oxygen gas. The
formation of one mole of SrCO3 releases 1.220 x 10^3 kJ of energy. a) Write the...
3x + 1 < -2 subtract one from each side, then divide both sides by three. first you need to look at each side of what ever your looking at and subtract 1 from each number then divide the numbers by 3
I have trouble with math all the time. so I was wondering if you could hel...
suppose that the gas tank of a car holds 12 gallons and that the car uses 1/20 of a gallon per mile. let x be the number of miles the car has gone since the tank was filled? (1/20)x = 12 so now get x
by itself! ((1/20)x)/(1/20) = 12/(1/20) x = 240! yeah I am good! that is good! go...
97x+1/4=4 okay well subtract 1/4 from 4 which gives you?? 97x = 15/4 now get x by itself by dividing 15/4 by 97 wow! 1). Subtract 1/4 from both sides 97x+1/4-1/4=4-1/4 97x=15/4 (3.75)
2). Divide both sides by 97 97x/97=3.75/97 <- or 3.75 divided by 97 x=?
97x+14=4 First, subtract 14 from both sides. Then, let us know if you get stuck.
gasoline consumption problem suppose that the gas tank of a car holds 12 gallons and that the car uses 1/ 20 of a gallon per mile .let x be the nuber of miles the car has gone since the tank was
filled? car holds 12 gallons 1/20 gallon/mile 12*20 = 240 miles possible what is t...
word unscramble
daes sldrp
The main difference between block letter style and modified block letter style is/are the
G > (J > O) ~ ~ M v V ~ O * ~ ~ J ~ V > K / ~ G ~ (K * M) / V (L * A) > Z W v S ~(~ L * T) S > ~ (P > V) A / S = ~ W / Z v T Rules of replacement - DeMorgan's Rule**
Cabin John Trigonometry Tutor
Find a Cabin John Trigonometry Tutor
...Since graduating I have begun to tutor again and being new to the area I am currently working to expand my number of students. I believe that everyone can learn math and value it as a problem
solving tool. I expect my students to be open about their needs, concerns and communicate what works well for them.
22 Subjects: including trigonometry, calculus, geometry, GRE
I have Bachelor of Science degrees in Physics and Electrical Engineering and PhD in Physics. I have more than 10 years of experience in teaching math, physics, and engineering courses to science
and non-science students at UMCP, Virginia Tech, and in Switzerland. I am a dedicated teacher and I alw...
16 Subjects: including trigonometry, calculus, physics, statistics
My name is Bekah and I graduated from BYU with a degree in Math Education. While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice
a week, and working one-on-one with students. After graduating, I taught high school math for one year...
10 Subjects: including trigonometry, calculus, geometry, algebra 1
...I played on my high school varsity team for three years and earned team MVP for my senior season. I encourage a smooth, easy swing in my students in order to lessen stress on the body and
improve accuracy. I also cover short game, etiquette, and how to choose shots well so you can reach your goal out on the course.
13 Subjects: including trigonometry, writing, calculus, algebra 1
I have received my BA from The George Washington University few years ago and now am attending George Mason University pursuing a chemistry degree to finish medical school pre-requisites. I was on
Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help student...
17 Subjects: including trigonometry, chemistry, linear algebra, SOL | {"url":"http://www.purplemath.com/Cabin_John_Trigonometry_tutors.php","timestamp":"2014-04-18T08:18:02Z","content_type":null,"content_length":"24390","record_id":"<urn:uuid:c3a9e702-10d1-4ea4-a645-77c154f017e2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Larkspur, CA Algebra 2 Tutor
Find a Larkspur, CA Algebra 2 Tutor
...I also work with students who are looking for a leg up in college admissions by accelerating their progress. WHY I AM A GOOD MATH TUTOR I am passionate about teaching and about math. I have
much insight into tutoring, and I am compassionate, patient, and dedicated in doing it.
12 Subjects: including algebra 2, calculus, geometry, statistics
...I have taught the math part to both the SAT and ACT prep courses through Silvan Learning for the past 3 years. I have helped my students raise their scores significantly with this program.
Given one of the test prep manuals I can help you raise your scores too.
13 Subjects: including algebra 2, French, algebra 1, biology
...My students typically improve their overall scores by 275+ points. Many of my students have been admitted to UC schools, and a few have been admitted to ivy league schools. My test prep
students have seen their overall test scores and certain subject scores increase by 3 - 4+ points.
34 Subjects: including algebra 2, Spanish, reading, English
...I have taught students at all levels of competency and comfort with these concepts. I earned a B.A. in Philosophy (Cum Laude) from the University of California-Santa Cruz. During that time, I
served as an Undergraduate Tutor.
29 Subjects: including algebra 2, reading, English, algebra 1
...I also have experience in the "Lindamood-Bell" literacy, comprehension, and math techniques. I graduated Summa Cum Laude from Creighton University with a B.S. in Environmental Science and
Spanish. I love teaching and have experience with a wide range of students.
24 Subjects: including algebra 2, chemistry, Spanish, reading
{"url":"http://www.purplemath.com/Larkspur_CA_Algebra_2_tutors.php","timestamp":"2014-04-20T02:27:50Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:4b8b093e-2c8b-4433-854c-6b53077582fa>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
5.7 feet in cms
You asked:
5.7 feet in cms
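(For reference, 1 foot is exactly 30.48 cm, so 5.7 feet = 5.7 × 30.48 ≈ 173.7 cm.)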
{"url":"http://www.evi.com/q/5.7_feet_in_cms","timestamp":"2014-04-16T04:33:38Z","content_type":null,"content_length":"58460","record_id":"<urn:uuid:c304a231-9659-4e1b-a854-900dec9eb6ee>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Construction Loan Calculator
So what is a Construction Loan Calculator and how do you know what it calculates? First you need to understand what a construction loan is and how it works. Then you can start to use a construction
loan calculator. Only then can you compare different construction loans to see which is best for you.
A construction loan is a special mortgage. Let's say you have a piece of land and would like to build a house. You don't have the cash necessary to build the house, so you need a mortgage. The trick is
that to get a mortgage you'll have to pledge the house against the loan, so that if for some reason you can't repay, the bank can take and sell the house. Since there is no house yet, the bank has nothing
to use to secure the loan and protect itself. This is where a construction loan comes in. A bank will ask you to put together a building plan and budget, and you will need to front a percentage of the
money to get started. Then, when you have completed the agreed-upon first step of the building, they'll come and inspect. If you have indeed completed what is required, they'll give you the first
payment. This process continues until all the steps are completed and the house is ready to be lived in. Until you have finished the whole house you'll be paying monthly interest-only payments to the
bank based on the principal they've paid you so far. Once your house is complete, the loan will convert into a traditional mortgage.
A construction loan calculator is a piece of software or a page on the internet where you can plug in some basic factors and it will calculate how much the construction loan will cost. You tell
it how much you'll need for each step and how long each step will take to complete. It'll then give you a listing of the monthly interest payments as well as the amortized interest over the life of the
traditional mortgage. Once you know the formulas, a loan calculation isn't all that difficult; it's the repetition that'll get you. Say you find an interest rate of 4%. You do the math to see what a
4% rate will look like for a payment over 30 years. Then you want to see what the payments look like over 15 years, and you have to do all the math over again. This is where a construction loan
calculator really comes in handy.
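For readers who want to check the arithmetic themselves, the standard formula most such calculators use for the traditional-mortgage stage is M = P·r / (1 − (1 + r)^(−n)), where P is the principal, r is the monthly interest rate and n is the number of monthly payments. As a made-up illustration, a $200,000 loan at 4% has r = 0.04/12 and n = 360 over 30 years, which works out to roughly $955 a month; over a 15-year term (n = 180) the payment is roughly $1,479. The total interest in dollars is simply the monthly payment times the number of payments, minus the principal.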
To compare all your construction loan offers and see which is best for you, you’ll need to take the total amount of interest in dollars for each loan given to you from the construction loan
calculator. You’ll then compare these amounts, again in dollars, for all the loans you are considering and find out which is the cheapest.
So a construction loan calculator can be a great tool to use to find the right loan for you when you are considering building a home.
{"url":"http://bestbuildingloan.com/construction-loan-calculator/","timestamp":"2014-04-18T05:30:37Z","content_type":null,"content_length":"16569","record_id":"<urn:uuid:17d14490-5bbe-46bb-ae46-46bfac6e98c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
In order to prove any hypothesis false, it is enough to find a single counterexample. Assuming such counter-examples exist and are discrete, we can conclude from the well-ordering axiom that one of
them must be the smallest. This smallest counter-example is sometimes called the minimal criminal. Furthermore, in order to prove our hypothesis true, it is enough to show that there is no such
minimal criminal. After all, if no counter-example is smallest, there can be no counter-example.
One of the most famous uses of minimal criminals is in the proof of the four-colour theorem, where German geometer Heinrich Heesch realized in 1948 that if he could find a master set of patterns that
can't appear in a minimal criminal, he would rule out all possible counter-examples and have a proof for it. These patterns were subsequently found using computer aid in 1976. | {"url":"http://everything2.com/title/minimal+criminal","timestamp":"2014-04-19T17:30:58Z","content_type":null,"content_length":"20092","record_id":"<urn:uuid:ca1bd168-ca44-4764-9839-53e9735aba7a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
centrifugal force
If you want to have fun with someone who is pedantic about these things, do what I do and call it by a more correct name: the
Centrifugal Effect
Nine times out of ten the person you used this term on will begin their rant about there being
no such thing as centrifugal force
Occasionally what you actually said will dawn on him or her during the rant, but more often you will have to interrupt the rant and point it out.
Interrupting a rant makes me feel all warm and mooshey inside | {"url":"http://everything2.com/title/Centrifugal+Force","timestamp":"2014-04-17T18:32:11Z","content_type":null,"content_length":"35188","record_id":"<urn:uuid:dca8000b-ce62-4dd3-986b-b1cd14e64f2a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Which statement about a kite and trapezoid is true?
1. A trapezoid is similar to a kite because both have one pair of parallel opposite sides.
2. A trapezoid is similar to a kite because both have two pairs of opposite sides parallel.
3. A trapezoid is different from a kite because it does not have any pair of parallel sides.
4. A trapezoid is different from a kite because it does not have diagonals that intersect at right angles.
i think its the 3rd option
Well, a trapezoid [drawing of a trapezoid] has one pair of parallel sides.
And do we have like a picture of the kite? Because kites come in ALL shapes and sizes.
no unfortunately not =/ that's why i needed some help with this question cuz i know trapezoids and kites can look different.
I see, well let's base it off a standard kite like this one: [drawing of a kite].
It cannot be #1 because this kite has NO parallel sides. It cannot be #2 because the kite has NO parallel sides and the trapezoid has only one pair of parallel sides. I guess it can't be #4 because neither the kite nor the trapezoid has any right angles, so I guess you are right with option 3.
Does this help?
yes it does thank-you! =)
{"url":"http://openstudy.com/updates/4fc640e3e4b022f1e12db02b","timestamp":"2014-04-20T01:05:11Z","content_type":null,"content_length":"114213","record_id":"<urn:uuid:897fcd11-cf4e-48f0-9895-e41cc29300e7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Microsoft® Office Excel® 2007: Data Analysis and Business Modeling, Second Edition
Master the techniques that business analysts at leading companies use to transform data into bottom-line results. For more than a decade, well-known consultant and business professor Wayne Winston
has been teaching corporate clients and MBA students the most effective ways to use Microsoft Office Excel for data analysis, modeling, and decision making. Now this award-winning educator shares the
best of his classroom experience in this practical, business-focused guide—updated and expanded for Excel 2007. Each chapter advances your data analysis and modeling expertise using real-world
examples and learn-by-doing exercises. You’ll learn how to create best, worst, and most-likely scenarios for sales, estimate a product’s demand curve, forecast using trend and seasonality, and
determine which product mix will yield the greatest profit. You’ll even discover how to interpret the effects of price and advertising on sales and how to assign a dollar value to customer loyalty.
Subscriber Reviews
Average Rating: Based on 1 Rating
"Great Examples, Clear Exposition" - by Jane H on 05-JAN-2012
Reviewer Rating:
For those who want plain language instructions on how to use Excel to solve common business problems, this book, I believe, is unequalled.
{"url":"http://my.safaribooksonline.com/9780735623965?portal=oreilly&cid=orm-cat-readnow-9780735623965","timestamp":"2014-04-20T17:09:02Z","content_type":null,"content_length":"128326","record_id":"<urn:uuid:000ff212-7aa5-419d-a57f-be45ef776210>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Lines
You've now worked pretty extensively with equations containing one variable. We are now going to briefly work with ones involving two variables, x and y.
Equations that have an x term and a y term, or only one of the two, can be graphed as lines on a basic coordinate grid.
You may see a line's equation written in two ways: for example, as y = −2x plus a constant, or with the −2x moved to the other side so the x and y terms sit together. Both are different
representations of the same line. The first version is in what mathematical people call "slope-intercept form" and the second is in "standard form" (more on these later).
Slope: the slope of a line (usually represented by the variable m) measures its steepness; the larger the absolute value of the slope, the greater the steepness.
Slope is calculated as the change in the vertical direction (y) ÷ the change in the horizontal direction (x); this is often called rise over run.
To find the slope you can pick two points on the line and find the difference in the y values, and divide it by the difference in the x values. The important thing to remember is to keep the points
in the same order in the numerator and denominator.
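For example (the points here are chosen just for illustration), if a line passes through (1, 2) and (4, 11), the difference in the y values is 11 − 2 = 9 and the difference in the x values is 4 − 1 = 3, so the slope is m = 9/3 = 3. Because 3 is positive, this line goes uphill.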
A positive slope tells us that the line goes uphill, from low to high. A negative slope tells us that the line goes downhill, from high to low.
The two graphs on the left have steeper slopes, so the absolute value of their slopes is larger than the absolute value of the slopes on the right.
To calculate the slope of a line, start by picking any two points on the line. Calculate the vertical difference and divide by the horizontal difference, you can do this by counting, or by using the
formula. If you use the counting method, don't forget to add the sign, positive if it slopes uphill and negative if it slopes downhill (the formula will do this for you). | {"url":"http://www.shmoop.com/basic-algebra/graphing-lines.html","timestamp":"2014-04-19T04:30:26Z","content_type":null,"content_length":"37741","record_id":"<urn:uuid:cafd446e-9059-49be-ad90-54e5b48bf5c0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Caddo Mills Prealgebra Tutor
...I specialize in preparing students for the TAKS test. I utilize previous test questions and implement strategic techniques to ensure students' comprehension of the material. I focus on
illustrating the certain "traps" and "pitfalls" that students can fall into.
27 Subjects: including prealgebra, reading, geometry, statistics
...I believe that enjoyment and encouragement can be fundamental motivators for success. These will necessarily be qualities I commit to in any tutoring relationship. Also, I stand by the need to
enable students to be their own teachers for continued and future individual success. I am a college graduate with a Bachelor of Arts in Philosophy from Texas A&M University.
17 Subjects: including prealgebra, reading, chemistry, geometry
...Students practice using sample tests and see improvement after 4-5 sessions. I have helped two students achieve Semifinalist status (224+ scores). I teach an ACT prep class at a private school,
helping students grasp underlying concepts so they can work a variety of problems. Students improve an average of 6+ points.
11 Subjects: including prealgebra, algebra 1, algebra 2, SAT math
...I can also give you valuable advice on the right resources (there are plenty of resources available out there, but some are far more useful than others) to use to prepare for it. I was also
instructor for the GMAT (along with the GRE) at Kaplan. The GMAT is slightly different format-wise and sc...
41 Subjects: including prealgebra, chemistry, French, calculus
...Finally, I research topical threads in the Bible using Thompson's Chain Reference Study Bible. I have tutored students for the last 15 years in social studies, from Kindergarten through 8th
grade. I have taught and reviewed with children a wide variety of topics regarding countries, cultures, and current events.
41 Subjects: including prealgebra, English, reading, elementary (k-6th) | {"url":"http://www.purplemath.com/caddo_mills_prealgebra_tutors.php","timestamp":"2014-04-16T13:39:43Z","content_type":null,"content_length":"24130","record_id":"<urn:uuid:2ac1ad46-0ca1-4d9f-9f0a-bb679cffedcb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
neutrinos change flavor, energy transfer
as just energy has to be conserved.
...and momentum, of course.
The kinetic energies and momenta of the beta-decay electron (or positron) and remaining (recoiling) nucleus are slightly different for different neutrino masses. For that matter, so is the kinetic
energy and momentum of the outgoing neutrino.
Therefore, for three different neutrino masses, the entire final state of the decay (neutrino, electron and nucleus) is a superposition of three states with (very) slightly differing neutrino mass,
and with (very) slightly differing kinetic energies and momenta for the electron and nucleus. | {"url":"http://www.physicsforums.com/showthread.php?t=733451","timestamp":"2014-04-16T19:02:53Z","content_type":null,"content_length":"71702","record_id":"<urn:uuid:9ed21a98-0bb3-4d21-84b1-2f1851ff340c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expected value of ratio: please help! [Archive] - Statistics Help @ Talk Stats Forum
03-19-2010, 02:49 PM
I am trying to find an unbiased estimator for E(X/Y), where X is Normal(mu_1,sigma_1) and Y is Normal (mu_2,sigma_2). Does anyone happen to know such an unbiased estimator?
Also, I was wondering if anyone knows the bias of the naive estimator:
(Sample mean of X) / (Sample mean of Y).
Thank you so much in advance! | {"url":"http://www.talkstats.com/archive/index.php/t-11318.html","timestamp":"2014-04-20T13:21:31Z","content_type":null,"content_length":"3419","record_id":"<urn:uuid:dd920580-a9dc-490a-964b-1adb437d3b08>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Algebra/Proportions and Proportional Reasoning/Proportions
Ratio: The comparison of two numbers.
Reciprocal: The multiplicative inverse of a number -- when the numerator and denominator of a fraction are switched around.
Equivalent Ratios: Two ratios that reduce to the same ratio.
Proportion: An equation stating two ratios are equal.
To Cross-multiply: see below.
There are several ways to express a "ratio". Let's compare the number of boys with the number of girls in a particular classroom. Let's say that our classroom has 25 students, 10 of whom are boys.
That means there are 15 girls. So, the ratio of boys to girls is 10 to 15.
Writing a Ratio
There are three ways of expressing a ratio. You can simply use words like we did above, or you can separate the two numbers using a colon or a fraction (the mathematician's choice).
10:15 or 10/15 or $\frac{10}{15}$
In mathematics, we always use fractions to represent ratios. In this classroom example, many other ratios can be made:
$\frac{10}{25}$, the number of boys out of the total number of students
$\frac{15}{25}$, the number of girls out of the total number of students
$\frac{15}{10}$, the number of girls to the number of boys
Simplifying a Ratio
Since we're using a fraction to represent ratios, the ratios can sometimes be reduced. For example, the ratio of boys to girls in our hypothetical classroom is $\frac{10}{15}$, but can be reduced to
$\frac{2}{3}$. So, we would be correct if we said that the ratio of boys to girls in our hypothetical classroom is 2:3, or two boys for every 3 girls.
The following equation is a proportion:
$\frac{10}{15} = \frac{2}{3}$
Any proportion is simply an equation that states two ratios are equal to one another. Sometimes, a proportion may contain variables:
$\frac{2}{3} = \frac{x}{30}$
If we wish to solve such an equation, we can use a process called cross-multiplication.
Solving Proportions using Cross-Multiplication
If $\frac{a}{b} = \frac{c}{d}$, then the products that are formed by diagonals across the equal sign are also equal: $ad = bc$. Note that this would be equivalent to saying $bc = ad$.
Example Problems
1) Is the ratio $\frac{4}{6}$ equal to the ratio $\frac{12}{16}$?
We can test by cross-multiplying. If the cross-products are equal, then so are the original two ratios.
$4 \cdot 16$ ?= $6 \cdot 12$
$64$ ?= $72$
No, $64 \neq 72$, so $\frac{4}{6} \neq \frac{12}{16}$.
2) Solve the following proportion for x.
$\frac{3}{4} = \frac{x}{10}$
$3 \cdot 10 = 4x$
$30 = 4x$
Solve for x by dividing both sides of the equation by 4. Your result: $x = \frac{30}{4} = 7.5$.
3) Solve for x:
$\frac{x + 1}{4} = \frac{3}{8}$
Cross multiply:
$8 \cdot (x+1) = 3 \cdot 4$
$8x + 8 = 12$
Now, solve for x:
Subtract 8 from both sides: $8x = 4$
Divide both sides by 8: $x = \frac{1}{2}$
4) A 9th-grade algebra classroom has a ratio of boys to girls of $\frac{1}{2}$. If there are 14 girls, how many boys are in the class?
We can create a proportion that compares the ratios of boys to girls, where x is the number of boys in the class:
$\frac{1}{2} = \frac{x}{14}$
Solve the proportion by cross-multiplying:
$2x = 14$
$x = 7$
There are 7 boys in the class.
Practice Games
Practice Problems
Reduce each ratio.
1) $\frac{2}{10}$
2) $\frac{12}{48}$
3) $\frac{3x}{9x}$
Solve each proportion.
4) $\frac{x}{4} = \frac{8}{12}$
5) $\frac{10}{17} = \frac{14}{x}$
6) $\frac{2}{x-2} = \frac{3}{7}$
Set up a proportion to answer the following questions.
7) If the ratio of boys to girls in a particular classroom was 2 to 3, and there are 12 boys, how many girls are in the class?
8) A recipe for pizza dough calls for 10 cups of flour and 2 cups of water. If Susan wants to only use 3 cups of flour, how many cups of water must she use?
1) 1/5
2) 1/4
3) 1/3
4) 8/3 = 2.67
5) 238/10 = 23.8
6) 20/3 = 6.67
7) 18 girls
8) 3/5 cups = 0.6 cups
{"url":"https://en.m.wikibooks.org/wiki/Basic_Algebra/Proportions_and_Proportional_Reasoning/Proportions","timestamp":"2014-04-16T10:46:22Z","content_type":null,"content_length":"22557","record_id":"<urn:uuid:0fd9ca10-df60-4bbb-9911-361452f5cd01>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
New look-ahead Lanczos-type algorithms for linear systems.
(English) Zbl 0932.65040
Authors’ summary: A breakdown (due to a division by zero) can arise in the algorithms for implementing Lanczos’ method because of the non-existence of some formal orthogonal polynomials or because
the recurrence relationship used is not appropriate. Such a breakdown can be avoided by jumping over the polynomials involved. This strategy was already used in some algorithms such as the MRZ and
its variants.
In this paper, we propose new implementations of the recurrence relations of these algorithms which only need the storage of a fixed number of vectors, independent of the length of the jump. These
new algorithms are based on Horner's rule and on a different way for computing the coefficients of the recurrence relationships. Moreover, these new algorithms seem to be more stable than the old ones
and they provide better numerical results.
Numerical examples and comparisons with other algorithms will be given.
65F10 Iterative methods for linear systems | {"url":"http://zbmath.org/?q=an:0932.65040","timestamp":"2014-04-18T08:37:43Z","content_type":null,"content_length":"21380","record_id":"<urn:uuid:a42b8348-6028-46fd-8654-b105ed5bb274>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
TMT Thesis Project
From OpenWetWare
Revision as of 18:40, 5 January 2006
Thesis Topic
The main objective of my work is to develop the tools to perform time-dependent stimulation and analysis of signaling pathways, and to show that this is more powerful than traditional time-independent
or step response analysis. I am using a computational model of the prototype system, the yeast pheromone response pathway, to generate hypotheses about the pathway. In order to test these hypotheses,
time-dependent stimuli will be delivered to cells via a microfluidic device, and in vivo fluorescent reporters will be used to observe the system state. In addition to showing the strengths of this
new approach to studying biological systems, I would like to use it to further our understanding of the pheromone response pathway.
Research Goals
My research can be broken down into 4 main goals that follow (for the most part) chronologically.
1. Build a model of the pheromone response pathway
□ Develop a model of the pheromone response pathway that can be used in conjunction with time-dependent stimulation and analysis of the pathway to propose and test hypotheses. Once completed,
this model can be used as a predictive tool for pathway response.
☆ This model is largely already built (with instances in Matlab and Moleculizer). Also, using BioNetGen2 I have generated an SBML version of my model, which can be read as input by Jacobian
, SloppyCell, and the SimBio toolbox in Matlab.
☆ It needs to be further refined using data from the literature, and data that I will generate myself.
2. Build a microfluidic device for time-dependent stimulation of cells
□ Design, build and characterize a device to allow for rapid variation of extracellular conditions for cells fixed in a microfluidic channel.
☆ This chip has been designed using the technology out of the Quake Lab at Stanford (formerly Caltech). See protocols for more info on chip design. Early versions of the chip (called the
Stimulator) have shown great promise for my purposes. Preliminary tests have shown that I can vary the extracellular environment (with NO cells in the channel) on a sub 100ms timescale.
I've also successfully adhered cells to the bottom of the channel, and had them resist detachment under fluid flow, though this needs further characterization. I made a movie (Cells in
stimulator.avi) with the most recent version showing that I can change the fluid environment of cells in the channel (video in real time, with food dye used to color one of the fluids).
3. Investigate the pathway with time-dependent stimulation
□ Examine the frequency filtering characteristics of the pheromone response pathway in order to study the limits of propagation of time-varying signals through the pathway. Use the model to
form and test hypotheses generated by studying the response of the pathway to time-dependent stimulation.
4. Identify and apply techniques for non-linear system identification
□ Identify and apply tools developed for other fields to the analysis of signaling pathways, particularly with respect to time-dependent stimulation.
□ Notes on Parameter Estimation in Matlab
Q. What will determine if using time-varying stimuli is a success?
A. I think that it would be sufficient for me to show that you can get better parameter estimates using time varying stimuli than you can with step increase. When I say better estimate, I mean that
we can decrease the error bounds on parameters. This hinges on some intelligent way to put bounds or confidence limits on parameters. This is probably linked to the independence/coupling of parameters
topic listed below under Signal Design.
Q. I say that a time varying stimulus can drive a system to a state that it won't normally attain in response to a step increase stimulus. For what types of systems is this true?
A. I think that I can concoct systems that this is true for, but I should try to show that this is indeed true for the pheromone response pathway.
Near Future Plan
1. Show that cells can live on chip
□ Stick yeast cells down in the channel, and flow media (at a slow rate) over them. Take a picture every 5 minutes and compile into a movie of yeast cells growing (hopefully). Need to start
with cells growing exponentially, and concentrate to OD 1.5-2. Try sonicating briefly to break up clumps (talk to Jeff).
2. Show that you can control in ON/OFF fashion response of cells to alpha factor
□ Using strain with YFP driven by P[prm1] promoter. Show that cells won't react (i.e., fluoresce) when they are not in the part of the channel where alpha factor is flowing, and that they do react when
they are exposed to alpha factor.
3. Find out if reset of receptor/G protein sub-system is limited by pheromone dissociation or Ste2 internalization.
□ Hit cells with a short dose of pheromone and see if reset is on the order of 4-5 minutes (internalization) or 10 minutes (dissociation). See if Alejandro has already done this.
4. (Is yeast pheromone response the best model system for this project?)
Data Collection/Analysis
1. Show that you can measure Ste5-YFP translocation to membrane
□ This will involve either using or reimplementing the image analysis tools used by Alejandro and Andrew. Also, I might want to use/reimplement their autofocus routine. I should look into this
Signal Design
Find out the extent of coupling and independence of parameters
• How can we use the model to get an idea of how well we're going to be able to estimate parameters? How many of the parameters are linked (eg, what if only the ratio of param1 to param2 matters)?
• SloppyCell appears to be useful for this purpose. I need to do some validation to prove to myself that SloppyCell is working as expected
1. Compare species timecourses (vs Matlab, Jacobian & BNG2)to ensure input file and simulation engine are good.
2. Check sensitivities and normalized sensitivities to make sure they're as expected.
☆ Can compare with Jacobian (generates automatically) and Matlab (need to make a numerical approximation).
○ So far, the sensitivities are not matching. I need to debug this. I'm waiting on hearing back from Ryan about some sloppycell questions.
3. Interpret parameter groupings.
☆ Should be able to compare the stiffness of the parameter groupings (as given by the relative eigenvalues) with the sensitivities of the individual parameters in that grouping. Higher
eigenvalue/stiffness should correlate with higher sensitivity?
Parameter Estimation
1. Get jacobian working
2. Would knowledge of parameter groupings affect parameter estimation? I want to think about this some more, maybe chat with some people in the lab, and try to talk to John Tolsma (@Jacobian) about
Thesis Committee
Q1. Do we need Thorner on the committee?
Q2. Do we need a yeast person on the committee?
Q3. Do we need a dynamic systems person on the committee?
• It's looking more and more like the answer is yes. I don't know enough to be efficient at guiding myself through the parameter estimation and dynamic system analysis. Figuring out who we can add
should be a top priority.
• I talked to Bree about this, and she had some advice (since she went through the ordeal of trying to find the best systems person for her committee). One drawback to putting a systems person who
doesn't know biology on your committee is that a lot of time is spent explaining why the system isn't Hamiltonian(?) and why you don't need to conserve energy, and stuff like that. So she is
collaborating with math types, but she has van Oudenaarden on her committee as an in-between guy. He understands the biology, and he can understand the math/analysis stuff, but he might not be
the best guy to direct her in the math/analysis direction (hence collaborating with math-types). Anyway, she suggested talking to Doug, who she called the great connecter. She said that he would
probably be very good at helping me find the appropriate person. She also suggested talking to Jacob White, who is getting more and more interested in biological systems analysis, and is a pretty
smart guy. He doesn't know the bio really well, but she said that he is very good at giving you his full attention until he gets the biology and might be able to give some help. So course of
action would be to set up meetings with Jacob and Doug. Problem is that it might be ~2 weeks until Doug has time to meet.
Should have a committee meeting ASAP to discuss current directions.
Should have another committee meeting mid/late Spring 2006 to update and plan. | {"url":"http://www.openwetware.org/index.php?title=TMT_Thesis_Project&diff=16279&oldid=16228","timestamp":"2014-04-18T21:37:13Z","content_type":null,"content_length":"28464","record_id":"<urn:uuid:4acd14f9-bd63-44f3-872b-c3a2e25504dc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
{"url":"http://openstudy.com/users/alyssa_may_de_jesus/asked","timestamp":"2014-04-16T22:50:47Z","content_type":null,"content_length":"105485","record_id":"<urn:uuid:e8b60bc9-4746-4a91-bc40-6b26f0197452>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Archives of the Caml mailing list > Message from Jon Harrop
Date: -- (:)
From: Jon Harrop <jon@j...>
Subject: Re: [Caml-list] Complexity of Set.union
On Friday 25 February 2005 17:48, Xavier Leroy wrote:
> My hope is that union takes time O(N log N) where N is the sum of the
> numbers of elements in the two sets. I'm thoroughly unable to prove
> that, though, in particular because the complexity of the "split"
> operation is unclear to me.
Am I correct in thinking that your derivation of this assumes roughly
equal-sized sets and that your complexity could be tightened a bit by using
the two different set cardinalities explicitly?
I ask this because the STL set_union is probably O(n+N) (inserting an already
sorted range into a set is apparently linear) which is worse than the O((n+N)
log(n+N)) which you've suggested for OCaml.
But my OCaml code is vastly faster, so OCaml's complexity seems to be
significantly better than that. At least in the special case of one small and
one large set, which my code is bound by. Specifically, the sets have O(1)
and O(i^2) elements when looking for the "i"th nearest neighbour. In reality
this corresponds to computing the unions of sets containing 4 elements with
sets containing 10^4 elements.
Hmm, now that I come to think of it, my performance measurements have all been
specific to silicon (that's where the 4 comes from). I'll try retiming on
some other atomic structures, where the small sets will contain about 12
elements. I predict the OCaml code will do better relative to the C++ code,
because the smaller sets won't be so small...
> This bound is "reasonable", however, in that the trivial union
> algorithm (repeatedly add every element of one of the sets to the
> other one) achieves this bound, and the trick with "joining" is,
> intuitively, just an optimization of this trivial algorithm.
I see. This could be improved in the unsymmetric case, by adding elements from
the smaller set to the larger set. But the size of the set isn't stored so
you'd have to make do with adding elements from the shallower set to the
deeper set. I've no idea what the complexity of that would be...
As I know which of the two sets will be the larger and which will be the
smaller, I'll try a customized union function which folds Set.add over the
smaller set.
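For reference, such a customized union is a one-liner over the Set module. The sketch below is mine, not from the original thread; integer elements and the polymorphic compare are just illustrative choices:

  module IntSet = Set.Make (struct type t = int let compare = compare end)

  (* Fold the elements of the known-smaller set into the known-larger one.
     Each IntSet.add costs O(log N), so the whole union is roughly O(n log N)
     for n elements in the small set and N elements in the large set. *)
  let union_small_into_large small large = IntSet.fold IntSet.add small large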
> > Now, what about best case behaviour: In the case of the union of two
> > equal height, distinct sets, is OCaml's union T(1)?
> Did you mean "of two equal height sets such that all elements of the
> first set are smaller than all elements of the second set"?
Yes, that's what I meant. :-)
> That
> could indeed run in constant time (just join the two sets with a
> "Node" constructor), but I doubt the current implementation achieves
> this because of the repeated splitting.
Having said that, wouldn't it take the Set.union function O(log n + log N)
time to prove that the inputs are non-overlapping, because it would have to
traverse to the min/max elements of both sets?
Dr Jon D Harrop, Flying Frog Consultancy Ltd. | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2005/02/ecd6a26a48cdc44fff2fa860135eca66.en.html","timestamp":"2014-04-16T09:38:40Z","content_type":null,"content_length":"9716","record_id":"<urn:uuid:444b54ba-ff2e-40e0-9af1-9d7020f330cf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
A plane old stuffed cube - New Logic/Math Puzzles
A plane old stuffed cube
Started by
Sep 24 2013 02:44 AM
9 replies to this topic
Posted 24 September 2013 - 02:44 AM
What is the area of the largest square that can fit entirely within a unit cube?
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
- Bertrand Russell
Posted 24 September 2013 - 03:25 AM
I just realized where I went wrong in my first guess... I'll have to figure a different answer:
Spoiler for Estimation now...
Edited by Dariusray, 24 September 2013 - 03:34 AM.
Posted 24 September 2013 - 03:26 AM
Spoiler for First guesses
Perfecting Mafia suicide since August 2008
Posted 24 September 2013 - 03:37 AM
I just realized where I went wrong in my first guess... I'll have to figure a different answer:
Spoiler for Estimation now...
Since it's a unit cube, your answer simplifies (if I'm reading it correctly) to Sqrt(2) / 2 = .707.
This is smaller than a cube face. But maybe I don't interpret your answer correctly.
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
- Bertrand Russell
Posted 24 September 2013 - 03:39 AM
Spoiler for First guesses
A little too large, but close. And the answer is in fact a rational number.
If it helps, there is a close relation to the question that asks whether a cube can be pushed through a square hole in a smaller cube.
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
- Bertrand Russell
Posted 25 September 2013 - 03:42 AM
So far the best that my horribly inefficient and possibly buggy java code came up with is an area of 1.0984. But it's still running.
I probably ought to have searched for a reasonably efficient Windows C compiler instead.
Unfortunately I don't see any easy way of modifying the code to work in 4 dimensions. Especially since it uses cross products which I don't think are defined in 4D.
Posted 26 September 2013 - 02:42 AM
This is what my brute force approach came up with. If you look at the coordinates of the square that it fit into the unit cube, it should become clear how to imagine that it's oriented, and allow you
to come up with a more analytical approach to solving the problem. Which I'm not going to attempt myself for now.
Spoiler for Coordinates for the vertices of the square
Spoiler for The Java code (obviously written by someone not used to naming conventions)
Spoiler for How to run the code on a Windows machine (if you're comfortable working from a command line and editing system environment variables)
Posted 26 September 2013 - 07:23 AM
Thanks for the programming detail.
I'm learning how to do java programming.
You get an area of 1.0984, a rational number.
But it can be bigger than that.
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
- Bertrand Russell
Posted 27 September 2013 - 04:49 AM Best Answer
If no one else is going to go after this one... I can give an answer based on, well, working in the spirit of the best solution the program could find. But I can't prove that there aren't any larger
squares that can fit in the unit cube.
Spoiler for
Posted 11 October 2013 - 08:47 PM
I'm late marking this one solved. Nice, clear approach.
The greatest challenge to any thinker is stating the problem in a way that will allow a solution.
- Bertrand Russell
{"url":"http://brainden.com/forum/index.php/topic/16616-a-plane-old-stuffed-cube/","timestamp":"2014-04-23T09:20:52Z","content_type":null,"content_length":"110121","record_id":"<urn:uuid:4d0bd57d-4bbc-4f4b-b829-d0a87869a028>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Microsoft Research
Microsoft Research -- Silicon Valley
Address: 1065 La Avenida, Mountain View, CA 94043, USA
Phone/fax: +1(650) 693-1787 / +1(650) 693-2005 (recipient name required)
Email: <last_name><at>microsoft<dot>com
Short Bio
Andrew V. Goldberg is a Principal Researcher at Microsoft Research -- Silicon Valley. His research interests include design, analysis, and experimental evaluation of algorithms, data structures,
algorithm engineering, and computational game theory. Goldberg received his PhD degree in Computer Science from M.I.T. in 1987. Before joining Microsoft, he worked for Stanford University, NEC
Research Institute, and InterTrust STAR Lab. His graph algorithms are taught in computer science and operations research classes and their implementations are widely used in industry and
academia. Goldberg received a number of awards, including the NSF Presidential Young Investigator Award, the ONR Young Investigator Award, the Mathematical Programming Society A.W. Tucker Prize,
and INFORMS Optimization Society Farkas Prize. He is a Fellow of ACM and SIAM.
More information, including software downloads, is available here.
Below is my work published since I joined Microsoft. More can be found here.
Books and Portions of Books
Refereed Journals and Equivalent | {"url":"http://research.microsoft.com/en-us/people/goldberg/default.aspx","timestamp":"2014-04-16T12:08:54Z","content_type":null,"content_length":"32355","record_id":"<urn:uuid:a8e87c6e-5952-45c8-bde1-e840134e6d36>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binding Energy
San José State University
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley, U.S.A.

The Binding Energies of Nuclei as Determined by the Energy of Formation of their Substructures and the Interactions Among the Nucleons in their Shells
Dedicated to Betty
A true and precious friend
The purpose of this material is to develop an equation explaining the binding energies of about three thousand nuclides in terms of the nucleonic substructures they contain and the interaction of
their nucleons through the nuclear strong force. A previous study had good success using the total neutrons and total protons to determine the number of interactions. The errors in the estimates
based on the regression equation found were associated with shell structure. This study uses an abbreviated shell structure and determines the number of interactions between the nucleons in different
shells as well as in the same shell.
Substructures of a Nucleus
One type of substructure in nuclei is a spin pair of nucleons. There are neutron-neutron spin pairs, proton-proton spin pairs and neutron-proton spin pairs. There is an exclusiveness in the formation
of spin pairs in that a neutron can form a spin pair with only one other neutron and one proton and likewise for a proton. This means that there are linkages of neutrons and protons of the form
-n-p-p-n-, or equivalently, -p-n-n-p-. These linkages induce binding energy effects similar to alpha particles. In the following analysis these linkages are called alpha modules. Below is shown a
depiction of a simple chain of four alpha modules.
Binding energy is also determined by the interactions of the various substructures but the analysis below presumes that the interactions of the substructures reduce down to interactions among
neutrons and protons contained in those substructures.
The notation which is used is
• #a = number of alpha modules
• #nn = number of neutron-neutron spin pairs
(inside and outside of alpha modules)
• #pp = number of proton-proton spin pairs
(inside and outside of alpha modules)
• #np = number of neutron-proton spin pairs
(inside and outside of alpha modules)
• N1 = number of neutrons in nucleus in a shell of 0 to 50
• P1 = number of protons in nucleus in a shell of 0 to 50
• N2 = number of neutrons in nucleus in a shell of 51 or more
• P2 = number of protons in nucleus in a shell of 51 or more
• N1N1 = number of neutron-neutron interactions in first neutron shell (N1(N1-1)/2)
• N1P1 = number of interactions of neutrons in the first neutron shell with protons in the first proton shell (N1P1)
• N1N2 = number of neutron-neutron interactions in first neutron shell with neutrons in the second neutron shell (N1N2)
• N1P2 = number of interactions of neutrons in first neutron shell with protons in the second proton shell (N1P2)
• P1P1 = number of proton-proton interactions in the first proton shell (P1(P1-1)/2)
• P1N2 = number of interactions of protons in first proton shell with neutrons in the second neutron shell (P1N2)
• P1P2 = number of interactions of protons in first proton shell with protons in the second proton shell (P1P2)
• N2N2 = number of neutron-neutron interactions of neutrons in second neutron shell (N2(N2-1)/2)
• N2P2 = number of interactions of neutrons in second neutron shell with protons in the second proton shell (N2P2)
• P2P2 = number of proton-proton interactions in the second proton shell (P2(P2-1)/2)
The number of interactions of Q nucleons with each other is Q(Q-1)/2. The number of separate interactions of N neutrons with P protons is NP. The binding energy BE is then assumed to be a linear
homogeneous function of the numbers of substructures and the numbers of interactions.
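For example, a nuclide with N1 = 20, P1 = 20, N2 = 6 and P2 = 2 (occupancies chosen purely to illustrate the counting) would have N1N1 = 20·19/2 = 190, N1P1 = 20·20 = 400, N1N2 = 120, N1P2 = 40, P1P1 = 190, P1N2 = 120, P1P2 = 40, N2N2 = 6·5/2 = 15, N2P2 = 12 and P2P2 = 2·1/2 = 1.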
This scheme was used in previous studies and the results were good, but the presumption in those studies was that the binding energies due to the formation of substructures were constants independent
of the scale of the nucleus in which they are formed. It has been found that this is not true. Below are shown the estimates of the binding energies due strictly to the formation of an alpha module.
The form of this relationship is approximately
e(x) = c + b/x²
and hence
∫e(x)dx = cx + ∫(b/x²)dx = cx − b/x
If the integration is carried out from 1 to x then the binding energy associated with the formation of #a alpha modules is of the form
c[0] + k#a −b/#a
and likewise for the formation of #nn, #pp and #np spin pairs.
The regression equation based upon the above is:
BE = −11.2264 + 8.3481#a + 14.1161#nn + 14.2735#pp −29.0548 #np
+ 0.3745N1N1 −0.2319N1P1 −0.2774N1N2 − 0.6836N1P2
−0.3786P1P1 + 0 P1N2 −0.1967P1P2 + 0.2870N2N2 − 0.54068P2P2
+ 18.6838(1/#a) + 25.79166531(1/#nn) +7.4881(1/#pp) −0.1925(1/#np)
The regression package (EXCEL) did not compute a coefficient for P1N2 due to some problem with the data, such as that that variable is a linear combination of the other variables.
The data for the regression included all nuclides except the neutron, the proton and Beryllium 5, which has a negative binding energy. The coefficient of determination for this equation is 0.999945
and the standard error of the estimate is 3.75 MeV. The t-ratios (ratio of coefficient to its standard deviation) are all significantly different from zero at the 99.9 percent level of confidence.
Implications of the Results
If the strong force charge of a proton is taken to be 1.0 and that of a neutron denoted as q, where q may be a negative number, then the regression coefficients should be related to q. (See Appendix
I.) However the interaction of protons is modified by the effect of the electrostatic repulsion between protons. Let the effective charge of a proton in proton-proton interactions be denoted as
(1+δ). (See Appendix II for the justification.) The parameter δ is positive if the interaction of protons through the strong force is of the same nature as through the electrostatic force; i.e.,
repulsion. The relationships for the coefficients for interactions in the same shell are:
C[NN]/C[NP] = q²/q = q
C[NP]/C[PP] = q/(1+δ)
C[NN]/C[PP] = q²/(1+δ)
Previous studies have concluded that q is equal to −2/3.
When the regression coefficients for the second shell are applied the results are:
C[N2N2]/C[N2P2] = −0.6854
C[N2P2]/C[P2P2] = −0.5307
C[N2N2]/C[P2P2] = 0.3638
The value of −0.6854 is close enough to −2/3 to treat it as a confirmation of that value. If q=−2/3 then the second equation implies that (1+δ) is equal to 1.233. If q=−2/3 then the third equation
implies that (1+δ) is equal to 1.245. This is a good correspondence.
The statistical significance of the coefficient of #a is welcome; the coefficient of 1/#a is also statistically significantly different from zero at the 95 percent level of confidence. The total
effect for the formation of alpha modules is given by
BE(#a) = 8.3481#a + 18.6838(1/#a)
and hence
(∂BE/∂#a) = 8.3481 − 18.6838(1/#a²)
Thus for large values of #a the effect of the formation of an alpha module approaches a level of 8.3481 MeV but for smaller levels of #a the effect is of a lesser magnitude.
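For illustration, evaluating this expression at #a = 2 gives 8.3481 − 18.6838/4 ≈ 3.68 MeV per additional alpha module, while at #a = 10 it has already risen to 8.3481 − 18.6838/100 ≈ 8.16 MeV, close to the asymptotic value.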
The Nature of the Errors in the Regression
Equation Estimates of Binding Energy
The scatter diagram for the errors in the regression estimates plotted against the actual values shows that there is an association of the size of the error and the filling of the neutron and proton shells.
The Allowance of the Occupancy of Three Shells
In an attempt to improve the statistical performance of the model, three shell categories were incorporated: shell one was from 0 to 28, shell two from 29 to 82 and shell three from 83 and
above. This generated 21 = 6*7/2 interaction variables to be included with the 8 variables representing the substructure formation. However, the EXCEL regression program was not able to compute
coefficients for 6 of the interaction variables and 2 of the substructure formation variables. As a result the coefficient of determination was lower and the standard error of the estimate was larger
than those found for the previous version of the model. So it appears that the results of that previous version are the best that can be achieved; i.e., 0.999945 for the coefficient of determination and
3.75 MeV for the standard error of the estimate.
The previous results indicate
• that the binding energies of nuclides can be overwhelmingly explained by the binding energies associated with their substructures and the interaction of their nucleons
• that definitely the strong force charge of a neutron is opposite in sign to that of a proton
• that definitely the strong force charge of a neutron is smaller in magnitude than that of a proton
• that very likely the value of the strong force charge of a neutron relative to that of a proton is −2/3
Appendix I
The force F between two particles of charges Q[1] and Q[2] separated by a distance S can be represented as
F = HQ[1]Q[2]f(S)/S²
where H is a constant and f(S) is a function characteristic of the type of force.
The potential energy V(S) is then
V(S) = −∫[S]^∞F(s)ds
= −HQ[1]Q[2]∫[S]^∞(f(s)/s²)ds
= −HQ[1]Q[2]W(S)
where W(S)=∫[S]^∞(f(s)/s²)ds.
The binding energy for the two particles is just the negative of the potential energy. Thus
B(S) = HQ[1]Q[2]W(S)
Thus the ratio of binding energies of two pairs of particles each having the same separation distance S is the ratio of the products of the charges in each pair.
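For example, writing B[NN] and B[NP] for the binding energies of a neutron-neutron pair and a neutron-proton pair at the same separation S, and taking the strong force charge of the proton as 1 and that of the neutron as q,
B[NN]/B[NP] = Hq²W(S)/(HqW(S)) = q
which is the first of the coefficient ratios used in the main text.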
Appendix II
The force between two protons separated by a distance S due to the strong force and the electrostatic force is given by
F = Hf(S)/S² + Ke²/S²
where the strong force charge of the proton is taken to be unity and H is the corresponding strong force constant. The constant for the electrostatic force is K and the electrostatic charge of the
proton is e. The binding energy for the protons is then
B(S) = HW(S) + Ke²/S
which can be expressed as
B(S) = HW(S)[1 + Ke²/(HW(S)S)]
or as
B(S) = HW(S)[1 + δ]
where δ = Ke²/(HW(S)S).
Age Dependency Ratios
The age dependency ratio expresses the relationship between three age groups within a population: ages 0-15, 16-64 and 65-plus. Higher values indicate a greater level of age-related dependency in the
population. In WISH, the "dependent population" is defined as people ages 0-15 and 65-plus, while the "working age population" is defined as people between ages 16 and 64. This is consistent with the
definition used by the U.S. Bureau of Labor Statistics.
There are three types of age dependency ratio. The youth dependency ratio is the population ages 0-15 divided by the population ages 16-64. The old-age dependency ratio is the population ages 65-plus
divided by the population ages 16-64. The total age dependency ratio is the sum of the youth and old-age ratios. All three ratios are commonly multiplied by 100 and WISH follows this convention.
As an example, consider the dependency ratios for 2011 by Hispanic ethnicity in Wisconsin. The total dependency ratio for non-Hispanics is 51.6, slightly lower than the total of 52.4 for
Wisconsin. The total dependency ratio for Hispanics is higher, at 66.0. The youth and old-age dependency ratios differ dramatically between the Hispanic and non-Hispanic populations. The Hispanic
youth dependency ratio of 60.7 is much higher than the non-Hispanic ratio of 29.5, while the Hispanic old-age dependency ratio of 5.2 is much lower than the non-Hispanic ratio of 22.1.
These ratios indicate that while the total age dependency in the Hispanic population is higher compared to non-Hispanics, this is entirely due to the relative youthfulness of the Hispanic population.
The total dependency ratio is calculated as:
(([Population ages 0-15] + [Population ages 65-plus]) ÷ [Population ages 16-64]) × 100
The old-age dependency ratio is calculated as:
([Population ages 65-plus] ÷ [Population ages 16-64]) × 100
The formula for the youth dependency ratio is:
([Population ages 0-15] ÷ [Population ages 16-64]) × 100
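The computation is simple enough to script. The following Python sketch (the population counts are made-up illustrative numbers, not WISH figures) applies the three formulas above:
def dependency_ratios(pop_0_15, pop_16_64, pop_65_plus):
    # dependent group divided by working-age group, times 100
    youth = pop_0_15 / pop_16_64 * 100
    old_age = pop_65_plus / pop_16_64 * 100
    return youth, old_age, youth + old_age
# hypothetical example counts
youth, old_age, total = dependency_ratios(1200000, 3700000, 800000)
print(round(youth, 1), round(old_age, 1), round(total, 1))  # 32.4 21.6 54.1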
It is important to note that these definitions do not take into account labor force participation rates by age group. Some portion of the "dependent" population may be employed and not necessarily
economically dependent. The Bureau of Labor Statistics addresses this through a related measure, the economic dependency ratio, which includes survey-based estimates of labor force participation
rates by age group.
Economic dependency ratio (exit DHS)
Age dependency ratios by state: 2000 and 2010 (exit DHS, PDF, 56 KB)
Last Revised: November 28, 2012 | {"url":"http://www.dhs.wisconsin.gov/wish/main/wis_pop/dependencyratio.htm","timestamp":"2014-04-19T11:59:36Z","content_type":null,"content_length":"11815","record_id":"<urn:uuid:a0ca301a-9f65-4ebb-998c-4bceb73b9f43>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
Subsampling: An Interesting Fail
Suppose I have a large data set $\{ (X_i, Y_i) \}$ sampled i.i.d. from a distribution $D$ on $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ are features and $\mathcal{Y}$ are labels. The data
set $\{ (X_i, Y_i) \}$ is a random variable but for the moment consider it fixed (i.e., condition on it). I could use my large data set to compute empirical means of functions $f: \mathcal{X} \times
\mathcal{Y} \to [-1, 1]$, where $f$ is something like the regret of a hypothesis, \[
\frac{1}{n} \sum_{i=1}^n f (X_i, Y_i).
\] However this data set doesn't fit on my laptop so I don't want to use all of it; instead I'm going to censor some of the data points to construct an alternate estimator, \[
\frac{1}{n} \sum_{i=1}^n \frac{Q_i}{P_i} f (X_i, Y_i).
\] Here $Q_i$ is an indicator variable which says whether I use the $i^\mathrm{th}$ data point and $P_i = E[Q_i | \{ (X_i, Y_i) \}]$ is the probability of using the $i^\mathrm{th}$ data point, which
I have to scale the values by in order to remain unbiased.
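Concretely, this is just an inverse-propensity-weighted mean. A minimal numpy sketch of the censored estimator (all names here are mine, not from any particular library):
import numpy as np

def censored_mean(f_values, keep_prob, rng):
    # f_values: f(X_i, Y_i) for each point; keep_prob: the P_i in (0, 1]
    q = rng.random(len(f_values)) < keep_prob   # Q_i: indicator that point i is kept
    return np.mean(q * f_values / keep_prob)    # unbiased for the full-data mean

rng = np.random.default_rng(0)
f_values = np.clip(rng.normal(size=10000), -1, 1)
print(f_values.mean(), censored_mean(f_values, np.full(10000, 0.1), rng))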
So far I've just described the importance-weighted active learning framework. However suppose I'm lazy and instead of using a real active learning algorithm I'm going to consider two strategies for shrinking my data set: the first is uniform subsampling, and the
second is subsampling data associated with the more prevalent label (which I'll just assume is label 0). I want my estimates to be good, so I'll try to minimize a bound on \[
\mathrm{Pr} \left( \left| \frac{1}{n} \sum_{i=1}^n \frac{Q_i}{P_i} f (X_i, Y_i) - \frac{1}{n} \sum_{i=1}^n f (X_i, Y_i) \right| \geq \delta \right).
\] Hoeffding's inequality on uniform subsampling $P_i = p_u$ applies to the sequence \[
\begin{aligned}
A_i &= \frac{Q_i}{P_i} f (X_i, Y_i) - f (X_i, Y_i), \\
\max (A_i) - \min (A_i) &= \left( \frac{1}{p_u} - 1 \right) |f (X_i, Y_i)| \leq \left( \frac{1}{p_u} - 1 \right),
\end{aligned}
\] and yields the bound, \[
\mathrm{Pr} \left( \left| \frac{1}{n} \sum_{i=1}^n \frac{Q_i}{P_i} f (X_i, Y_i) - \frac{1}{n} \sum_{i=1}^n f (X_i, Y_i) \right| \geq \delta \right) \leq 2 \exp \left( -\frac{2 \delta^2 n}{\left(\frac{1}{p_u} - 1\right)^2} \right).
\] Similarly for one-label subsampling $P_i = p_o 1_{Y_i=0} + 1_{Y_i=1}$, \[
\begin{aligned}
A_i &= \frac{Q_i}{P_i} f (X_i, Y_i) - f (X_i, Y_i), \\
\max (A_i) - \min (A_i) &= \left( \frac{1}{p_o} - 1 \right) |f (X_i, Y_i)| 1_{Y_i=0} \leq \left( \frac{1}{p_o} - 1 \right) 1_{Y_i=0},
\end{aligned}
\] yielding \[
\mathrm{Pr} \left( \left| \frac{1}{n} \sum_{i=1}^n \frac{Q_i}{P_i} f (X_i, Y_i) - \frac{1}{n} \sum_{i=1}^n f (X_i, Y_i) \right| \geq \delta \right) \leq 2 \exp \left( -\frac{2 \delta^2 n^2}{\left( \frac{1}{p_o} - 1 \right)^2 \sum_{i=1}^n 1_{Y_i=0}}\right).
\] Both bounds are minimized at $p \to 1$, which is just a fancy way of saying ``not subsampling is the most accurate.'' To get a more interesting statement I'll compare them by equating their
expected data set sizes, \[
p_u = \frac{1}{n}\left(p_o \sum_{i=1}^n 1_{Y_i=0} + \left(n - \sum_{i=1}^n 1_{Y_i=0}\right)\right),
\] and then I'll take the strategy with the better bound, \[
\begin{aligned}
\log \left( \frac{\mathrm{uniform}}{\mathrm{onelabel}} \right) &= -2 \delta^2 n \left(n - \sum_{i=1}^n 1_{Y_i = 0} \right) \frac{n - (1 - p_o)^2 \sum_{i=1}^n 1_{Y_i=0}}{(1 - p_o)^2 \left(\sum_{i=1}^n 1_{Y_i=0}\right)^2} \\
&\leq 0.
\end{aligned}
\] Yikes! The uniform subsampling bound is always better.
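This is easy to poke at empirically. A rough simulation sketch (the parameters are arbitrary choices of mine) compares the deviation of the two estimators at matched expected sample sizes; note that the worst-case bound ordering need not match what happens for any particular $f$:
import numpy as np

rng = np.random.default_rng(0)
n, p_o = 100000, 0.1
y = (rng.random(n) < 0.05).astype(int)        # label 1 is the rare one
f = np.where(y == 1, 1.0, -0.2)               # some bounded f(X_i, Y_i)
m = (y == 0).sum()
p_u = (p_o * m + (n - m)) / n                 # equate expected kept sizes

def deviation(p):
    q = rng.random(n) < p
    return abs(np.mean(q * f / p) - f.mean())

print(np.mean([deviation(np.full(n, p_u)) for _ in range(200)]))             # uniform
print(np.mean([deviation(np.where(y == 0, p_o, 1.0)) for _ in range(200)]))  # one-label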
I don't think this means subsampling the more prevalent label is a bad idea; after all, I've seen it work in practice. However what I think this does mean is that the details of the $f$ being evaluated matter. In the above bounds I just used $|f| \leq 1$ but the result is too pessimistic. In particular the $f$ I
really care about is the instantaneous regret between an empirical risk minimizing hypothesis and a true risk minimizing hypothesis, so I'll have to step up my game and understand some concepts like the disagreement coefficient. I suspect incorporating that will allow me to leverage the assumption that one label is far more prevalent than the other in the above analysis, which I presume is critical.
Project 1: Scientific Calculator
We intend to start a series of VB projects so that together we can make this tutorial website a more interesting learning platform for VB hobbyists around the world. We will first post the ideas for
each project on our blog (http://vbtutorblog.blogspot.com/) and then gradually work through the steps until it is completed. We hope that everyone will contribute ideas so that we can build better and
more efficient VB programs.
Our first project is a scientific calculator. We have entered the second phase of the project and the details were posted in the vbtutor blog.
Phase 1
Creating a scientific calculator is not an easy task in Visual Basic 6. However, you don't have to worry: we will start with a simple interface and gradually progress to a more complex
interface, and then to a fully functional scientific calculator. So, please follow our tutorial or our blog as often as possible for updates on this project. Meanwhile, if you have any ideas about the
calculator, please share them with us. The interface for Phase 1:
The code:
Private Sub Command1_Click(index As Integer)
    Dim x As Double          ' angle entered in the display, in degrees
    Select Case index
        Case 0               ' Sin button
            x = Val(Text1.Text)
            Text1.Text = Round(Sin(x * 3.14159265 / 180), 4)
        Case 1               ' Cos button
            x = Val(Text1.Text)
            Text1.Text = Round(Cos(x * 3.14159265 / 180), 4)
        Case 2               ' Tan button
            x = Val(Text1.Text)
            Text1.Text = Round(Tan(x * 3.14159265 / 180), 4)
    End Select
End Sub
*In Visual Basic, we have to convert degrees to radians before we can compute the values of the trigonometric functions. The conversion is based on the formula pi radians = 180 degrees, therefore 1 degree = pi/180 radians.
Phase 2
Now, we shall add the number buttons, a cancel button and a dot button. The number buttons are created as an array of buttons, and each button will be identified with its index, corresponding to
its caption.
We will also replace the display panel with a label so that the users won't be able to erase the number. They can only clear the number by clicking the cancel button. The alignment property of the
label is set to right justified, so that the digit entered will start from the right position, similar to the conventional calculator.
The value of pi is now obtained by using the Atn function (Arc Tangent). We know that Tan(pi/4)=1, so Atn(1)=pi/4, therefore pi=Atn(1)*4. By using this function, we will obtain a more accurate
value of pi.
In this project, we shall also limit the numbers to less than 30 digits. If a user enters a number with more than 30 digits, an error message "data overflow" will be shown.
Society Investigating Mathematical Mind-Expanding Recreations
March 1998 Feature Presentation
Hands-On Algebraic Topology
Jonathan Scott
This is a summary of what was presented and discussed at the March 26th SIMMER meeting, along with some problems and questions to think about. We will be investigating objects called surfaces. Rather
than giving a precise definition, we'll just give some examples. Note that in studying topology, any surface that can be bent, stretched, or otherwise deformed into a given surface X, and back again,
is considered to be the same thing as X (or topologically equivalent, or homeomorphic).
1. the cylinder. This is the surface formed by taking a rectangular sheet and joining two opposite edges.
2. the Möbius band. This surface is created much like the cylinder, but the sheet is given a half-twist before joining the opposite edges.
3. the sphere. This is the surface of a ball.
4. the torus. The torus is a doughnut-shaped surface formed by taking a cylinder and joining the two circular ends together.
Stranger surfaces can be created which cannot be embedded in three-dimensional space; examples are the Klein bottle and the projective plane, which we'll see later. Also, two surfaces may be joined
together to form their connected sum. For example, the connected sum of two tori is the two-holed torus.
How can we predict what will happen if we cut a surface apart, join two surfaces together, or perform any other such "surgery"? How can we tell when it is possible to transform one surface into
another? One way of approaching such questions is through the use of plane diagrams to represent the surfaces.
We will give plane diagrams for the cylinder, the Möbius band, and the torus. Then two new surfaces, the Klein bottle and the projective plane, will be introduced via their diagrams.
As an example of the utility of plane diagrams, we use them to represent the slicing of a surface along a curve on the surface.
Problem 1: Using the plane diagram for the Möbius band, predict the outcome of cutting the surface down the middle (laterally). Verify experimentally. Then repeat the experiment, this time cutting
the surface starting one-third of the way across.
Problem 2: Find a way to cut the Klein bottle to get (i) a Möbius band, and (ii) two Möbius bands.
We next show how to join two surfaces together, (i.e. perform connected sums) by way of their plane diagrams.
Problem 3: Find a plane diagram for the two-holed torus, then the three-holed torus. Try to find a pattern, and find a plane diagram for the general n-holed torus.
Problem 4: Show that the Klein bottle is the connected sum of two projective planes.
In fact, the last problem is just a consequence of the classification theorem, which states that any surface is either a sphere, or the connected sum of tori, or the connected sum of projective planes.
On any surface, we can draw graphs, or networks. Edges meet at vertices, and divide the surface up into faces. For a graph on a surface, let V be the number of vertices, E be the number of edges, and
F be the number of faces. It turns out that for a given surface, the quantity V - E + F is constant, for any graph on the surface. This quantity is what is called an invariant, and has been dubbed
the Euler characteristic, χ.
We can calculate the Euler characteristic of a surface by looking at its plane diagram. For example, the characteristic of a sphere is 2, while that of a torus is 0.
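For instance, a tetrahedron drawn on the sphere has V = 4, E = 6 and F = 4, so V - E + F = 2; the square plane diagram for the torus, with all four corners identified to a single vertex, has V = 1, E = 2 and F = 1, so V - E + F = 0.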
Problem 5: Find the Euler characteristic for the projective plane, the Klein bottle, and the two-holed torus, the n-holed torus, and the connected sum of n projective planes.
Problem 6: Show that the two-holed torus and the connected sum of the one-holed torus and the projective plane are not topologically equivalent.
Problem 7: Compare the Euler characteristic of the torus to that of the Klein bottle. Are the two surfaces topologically equivalent? Why or why not?
uniqueness of linear map
January 3rd 2009, 07:46 AM #1
I have a set of 4 vectors (a1,a2,a3,a4) in R3 and a set of 4 vectors (b1,b2,b3,b4) in R4. I need to show that there is precisely one linear map f: R3 --> R4 with f(ai) = bi
a1 = (1,0,0)
a2= (0,1,0)
a3 = (0,0,1)
a4 = (2,1,3)
b1 = (1,2,4,1)
b2 = (1,1,0,1)
b3 = (-1,0,4,-1)
b4 = (0,5,20,0)
I have found the linear map f(x,y,z) = (x+y-z, 2x+y, 4x+4z, x+y-z).
How would I show that this is unique? I then need to find the kernel and the image of f. Would the image just be (x+y-z, 2x+y, 4x+4z, x+y-z)
and the kernel would be (1,-2,-1)^T ?
many thanks
January 3rd 2009, 09:13 AM #2
Let $g:\mathbb{R}^3\to\mathbb{R}^4$ be any linear transformation satisfying $g(a_i)=b_i$ for $i=1,2,3,4.$
Given any vector $\mathbf{u}=(x,y,z)\in\mathbb{R}^3,$ we have
$g(\mathbf{u})\ =\ g(xa_1+ya_2+za_3)$
$=\ xg(a_1)+yg(a_2)+zg(a_3)$
$=\ xf(a_1)+yf(a_2)+zf(a_3)$
$=\ f(xa_1+ya_2+za_3)$
$=\ f(\mathbf{u})$
Hence $g(\mathbf{u})=f(\mathbf{u})$ for all $u\in\mathbb{R}^3,$ proving the uniqueness of $f.$
and the kernel would be (1,-2,-1)^T ?
You need to write it down properly.
$\ker{f}\ =\ \{t(1,-2,-1)\in\mathbb{R}^3:t\in\mathbb{R}\}$
would the image just be (x+y-z,2x+y,4x+4z,x+y-z)
$\mathrm{image}\,{f}\ =\ \{(x+y-z,2x+y,4x+4z,x+y-z)\in\mathbb{R}^4:x,y,z\in\mathbb{R}\}$
Write $(x+y-z,2x+y,4x+4z,x+y-z)=(u,v,w,u)$ and express $w$ in terms of $u$ and $v$.
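For readers who want to check this numerically, here is a small numpy sketch (mine, not part of the original thread) that verifies f(a4) = b4 and recovers the kernel direction:
import numpy as np

# matrix of f in the standard basis: rows are the four output coordinates
A = np.array([[1, 1, -1],
              [2, 1, 0],
              [4, 0, 4],
              [1, 1, -1]], dtype=float)

print(A @ np.array([2.0, 1.0, 3.0]))   # [ 0.  5. 20.  0.] = b4

# kernel: right singular vector for the zero singular value
_, _, vt = np.linalg.svd(A)
print(vt[-1] / vt[-1][0])              # proportional to [ 1. -2. -1.]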
The Influence of k-Dependence on the Complexity of Planning
Last modified: 2009-10-16
A planning problem is k-dependent if each action has at most k pre-conditions on variables unaffected by the action. This concept is well-founded since k is a constant for all but a few of the
standard planning domains, and is known to have implications for tractability. In this paper, we present several new complexity results for P(k), the class of k-dependent planning problems with
binary variables and polytree causal graphs. The problem of plan generation for P(k) is equivalent to determining how many times each variable can change. Using this fact, we present a polytime plan
generation algorithm for P(2) and P(3). For constant k > 3, we introduce and use the notion of a cover to find conditions under which plan generation for P(k) is polynomial. | {"url":"http://aaai.org/ocs/index.php/ICAPS/ICAPS09/paper/viewPaper/704","timestamp":"2014-04-21T10:44:44Z","content_type":null,"content_length":"11936","record_id":"<urn:uuid:5742e4f3-263b-4e27-837f-a3dabaa9e1ab>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
There Are no Hybrid Systems - A Multiple-Modeling Approach to Hybrid Modeling
P. Struss
There are no hybrid systems, there are only hybrid models. Whether or not a change is modeled as a continuous or discontinuous one, depends on the purpose of the model. A proper treatment of hybrid
models is, hence, a matter of multiple modeling and model abstraction and approximation. More specifically, a proper theory of hybrid models has to be a theory of temporal (or behavioral) abstraction
and approximation. The primary problem is: How and under which circumstances can we transform a continuous change into a discontinuous one and vice versa? The core of this question is whether or not
a certain distinction is significant. This depends on the context which includes the overall system and the purpose of its modeling. The paper deals with the problem of deriving the sets of
qualitative values of model variables that allow to generate the distinctions required by the goal of model based prediction and the structure of the system. We present a formal definition and
analysis of the problem and an algorithm for computing appropriate qualitative values based on propagation of distinctions. We outline how this generic solution can be used for deriving models of
time scale abstraction including discontinuous changes.