Higher Dimensions
At the moment, we can produce three different objects by tapering a torus. We make torus -> circle by reducing the minor radius to zero. We make torus -> sphere by reducing the major radius. And we
make torus -> point by reducing (or increasing) both radii simultaneously.
Now, there is a fourth object we can make, by reducing one radius while simultaneously increasing the other.
We would start with a circle, i.e. a torus with a minor radius of 0. Then we increase the minor radius and decrease the major radius while going up the w-axis. Eventually we end up with a sphere,
i.e. a torus with a major radius of 0.
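The interpolation described above can be sketched numerically. This is only an illustration, not anything from the original post: it assumes a simple linear trade-off of the two radii along w, so the surface degenerates to a circle at w = 0 and to a sphere at w = 1.

```python
import math

def tapered_torus_point(w, theta, phi, R0=2.0, r1=1.0):
    """Point on the cross-section torus at height w in [0, 1].

    Assumption (not from the original post): the major radius shrinks
    linearly from R0 to 0 while the minor radius grows from 0 to r1.
    """
    R = (1.0 - w) * R0   # major radius: R0 -> 0
    r = w * r1           # minor radius: 0 -> r1
    x = (R + r * math.cos(theta)) * math.cos(phi)
    y = (R + r * math.cos(theta)) * math.sin(phi)
    z = r * math.sin(theta)
    return (x, y, z)

# At w = 0 every point lies on a circle of radius R0 in the z = 0 plane;
# at w = 1 every point lies on a sphere of radius r1.
```

The particular radii and the linear schedule are arbitrary choices; any monotone trade-off gives the same circle-to-sphere family.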
I don't think it's a good idea to add this to the official list of tapertopes, but it's an interesting object. Its RNS representation might be (2<sup>1</sup>1)<sup>-1</sup> or (2<sup>-1</sup>1)<sup>1</sup>. It could also be called "circle -> sphere via torus".
Re: the fourth tapered torus
Why didn't you post this in cone-like objects?
PWrong wrote:(2<sup>1</sup>1)<sup>-1</sup> or (2<sup>-1</sup>1)<sup>1</sup>
I wouldn't agree with either of those. How can you taper negative times?
Why didn't you post this in cone-like objects?
I just didn't want to change the subject.
I wouldn't agree with either of those. How can you taper negative times?
It just means you're tapering once in the opposite direction.
Well, we don't have to use that notation. The object itself is interesting.
Anyway, I noticed something interesting about the torus->sphere. Suppose the angle at the base is pi/4. Then take a cross section, so the torus becomes a pair of circles. The object will be a pair of
cylinders intersecting at right angles. So the torus->sphere is somehow related to the crind.
Affine Finite Crystals
In this document we briefly explain the construction and implementation of the Kirillov–Reshetikhin crystals of [FourierEtAl2009].
Kirillov–Reshetikhin (KR) crystals are finite-dimensional affine crystals corresponding to Kirillov–Reshetikhin modules. They were first conjectured to exist in [HatayamaEtAl2001]. The proof of
their existence for nonexceptional types was given in [OkadoSchilling2008] and their combinatorial models were constructed in [FourierEtAl2009]. Kirillov-Reshetikhin crystals \(B^{r,s}\) are indexed
first by their type (like \(A_n^{(1)}\), \(B_n^{(1)}\), ...) with underlying index set \(I = \{0,1,\ldots, n\}\) and two integers \(r\) and \(s\). The integer \(s\) only needs to satisfy \(s >0\), whereas \(r\) is a node of the finite Dynkin diagram, \(r\in I \setminus \{0\}\).
Their construction relies on several cases which we discuss separately. In all cases, when the zero arrows are removed, the crystal decomposes as a direct sum of classical crystals, which gives the crystal structure for the index set \(I_0 = \{ 1,2,\ldots, n\}\). The zero arrows are then added either by exploiting a symmetry of the Dynkin diagram or by using embeddings of crystals.
Type \(A_n^{(1)}\)
The Dynkin diagram for affine type \(A\) has a rotational symmetry mapping \(\sigma: i \mapsto i+1\) where we view the indices modulo \(n+1\):
sage: C = CartanType(['A',3,1])
sage: C.dynkin_diagram()
The classical decomposition of \(B^{r,s}\) is the \(A_n\) highest weight crystal \(B(s\omega_r)\) or equivalently the crystal of tableaux labelled by the rectangular partition \((s^r)\):
\[B^{r,s} \cong B(s\omega_r) \quad \text{as a } \{1,2,\ldots,n\}\text{-crystal}\]
In Sage we can see this via:
sage: K = KirillovReshetikhinCrystal(['A',3,1],1,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['A', 3] and shape(s) [[1]]
sage: K.list()
[[[1]], [[2]], [[3]], [[4]]]
sage: K = KirillovReshetikhinCrystal(['A',3,1],2,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['A', 3] and shape(s) [[1, 1]]
One can change between the classical and affine crystal using the methods lift and retract:
sage: K = KirillovReshetikhinCrystal(['A',3,1],2,1)
sage: b = K(rows=[[1],[3]]); type(b)
<class 'sage.combinat.crystals.kirillov_reshetikhin.KR_type_A_with_category.element_class'>
sage: b.lift()
[[1], [3]]
sage: type(b.lift())
<class 'sage.combinat.crystals.tensor_product.CrystalOfTableaux_with_category.element_class'>
sage: b = CrystalOfTableaux(['A',3], shape = [1,1])(rows=[[1],[3]])
sage: K.retract(b)
[[1], [3]]
sage: type(K.retract(b))
<class 'sage.combinat.crystals.kirillov_reshetikhin.KR_type_A_with_category.element_class'>
The \(0\)-arrows are obtained using the analogue of \(\sigma\), called the promotion operator \(\mathrm{pr}\), on the level of crystals via:
\[f_0 = \mathrm{pr}^{-1} \circ f_1 \circ \mathrm{pr}\]\[e_0 = \mathrm{pr}^{-1} \circ e_1 \circ \mathrm{pr}\]
In Sage this can be achieved as follows:
sage: K = KirillovReshetikhinCrystal(['A',3,1],2,1)
sage: b = K.module_generator(); b
[[1], [2]]
sage: b.f(0)
sage: b.e(0)
[[2], [4]]
sage: K.promotion()(b.lift())
[[2], [3]]
sage: K.promotion()(b.lift()).e(1)
[[1], [3]]
sage: K.promotion_inverse()(K.promotion()(b.lift()).e(1))
[[2], [4]]
KR crystals are level \(0\) crystals, meaning that the weight of all elements in these crystals is zero:
sage: K = KirillovReshetikhinCrystal(['A',3,1],2,1)
sage: b = K.module_generator(); b.weight()
-Lambda[0] + Lambda[2]
sage: b.weight().level()
0
The KR crystal \(B^{1,1}\) of type \(A_2^{(1)}\) looks as follows:
In Sage this can be obtained via:
sage: K = KirillovReshetikhinCrystal(['A',2,1],1,1)
sage: G = K.digraph()
sage: view(G, pdflatex=True, tightpage=True) # optional - dot2tex graphviz
Types \(D_n^{(1)}\), \(B_n^{(1)}\), \(A_{2n-1}^{(2)}\)
The Dynkin diagrams for types \(D_n^{(1)}\), \(B_n^{(1)}\), \(A_{2n-1}^{(2)}\) are invariant under interchanging nodes \(0\) and \(1\):
sage: n = 5
sage: C = CartanType(['D',n,1]); C.dynkin_diagram()
sage: C = CartanType(['B',n,1]); C.dynkin_diagram()
sage: C = CartanType(['A',2*n-1,2]); C.dynkin_diagram()
The underlying classical algebras obtained when removing node \(0\) are type \(\mathfrak{g}_0= D_n, B_n, C_n\), respectively. The classical decomposition into a \(\mathfrak{g}_0\) crystal is a direct sum:
\[B^{r,s} \cong \bigoplus_\lambda B(\lambda) \quad \text{as a } \{1,2,\ldots,n\}\text{-crystal}\]
where \(\lambda\) is obtained from \(s\omega_r\) (or equivalently a rectangular partition of shape \((s^r)\)) by removing vertical dominoes. This in fact only holds in the ranges \(1\le r\le n-2\)
for type \(D_n^{(1)}\), and \(1\le r\le n\) for types \(B_n^{(1)}\) and \(A_{2n-1}^{(2)}\):
sage: K = KirillovReshetikhinCrystal(['D',6,1],4,2)
sage: K.classical_decomposition()
The crystal of tableaux of type ['D', 6] and shape(s) [[], [1, 1], [1, 1, 1, 1], [2, 2], [2, 2, 1, 1], [2, 2, 2, 2]]
For type \(B_n^{(1)}\) and \(r=n\), one needs to be aware that \(\omega_n\) is a spin weight and hence corresponds in the partition language to a column of height \(n\) and width \(1/2\):
sage: K = KirillovReshetikhinCrystal(['B',3,1],3,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['B', 3] and shape(s) [[1/2, 1/2, 1/2]]
As for type \(A_n^{(1)}\), the Dynkin automorphism induces a promotion-type operator \(\sigma\) on the level of crystals. In this case it can, however, happen that the automorphism changes between classical components:
sage: K = KirillovReshetikhinCrystal(['D',4,1],2,1)
sage: b = K.module_generator(); b
[[1], [2]]
sage: K.automorphism(b)
[[2], [-1]]
sage: b = K(rows=[[2],[-2]])
sage: K.automorphism(b)
This operator \(\sigma\) is used to define the affine crystal operators:
\[f_0 = \sigma \circ f_1 \circ \sigma\]\[e_0 = \sigma \circ e_1 \circ \sigma\]
The KR crystals \(B^{1,1}\) of types \(D_3^{(1)}\), \(B_2^{(1)}\), and \(A_5^{(2)}\) are, respectively:
Type \(C_n^{(1)}\)
The Dynkin diagram of type \(C_n^{(1)}\) has a symmetry \(\sigma(i) = n-i\):
sage: C = CartanType(['C',4,1]); C.dynkin_diagram()
The classical subalgebra when removing the 0 node is of type \(C_n\).
However, in this case the crystal \(B^{r,s}\) is not constructed using \(\sigma\), but rather using a virtual crystal construction. \(B^{r,s}\) of type \(C_n^{(1)}\) is realized inside \(\hat{V}^{r,s}\) of type \(A_{2n+1}^{(2)}\) using:
\[e_0 = \hat{e}_0 \hat{e}_1 \quad \text{and} \quad e_i = \hat{e}_{i+1} \quad \text{for} \quad 1\le i\le n\]\[f_0 = \hat{f}_0 \hat{f}_1 \quad \text{and} \quad f_i = \hat{f}_{i+1} \quad \text{for} \quad 1\le i\le n\]
where \(\hat{e}_i\) and \(\hat{f}_i\) are the crystal operators in the ambient crystal \(\hat{V}^{r,s}\):
sage: K = KirillovReshetikhinCrystal(['C',3,1],1,2); K.ambient_crystal()
Kirillov-Reshetikhin crystal of type ['B', 4, 1]^* with (r,s)=(1,2)
The classical decomposition for \(1\le r<n\) is given by:
\[B^{r,s} \cong \bigoplus_\lambda B(\lambda) \quad \text{as a } \{1,2,\ldots,n\}\text{-crystal}\]
where \(\lambda\) is obtained from \(s\omega_r\) (or equivalently a rectangular partition of shape \((s^r)\)) by removing horizontal dominoes:
sage: K = KirillovReshetikhinCrystal(['C',3,1],2,4)
sage: K.classical_decomposition()
The crystal of tableaux of type ['C', 3] and shape(s) [[], [2], [4], [2, 2], [4, 2], [4, 4]]
The KR crystal \(B^{1,1}\) of type \(C_2^{(1)}\) looks as follows:
Types \(D_{n+1}^{(2)}\), \(A_{2n}^{(2)}\)
The Dynkin diagrams of types \(D_{n+1}^{(2)}\) and \(A_{2n}^{(2)}\) look as follows:
sage: C = CartanType(['D',5,2]); C.dynkin_diagram()
sage: C = CartanType(['A',8,2]); C.dynkin_diagram()
The classical subdiagram is of type \(B_n\) for type \(D_{n+1}^{(2)}\) and of type \(C_n\) for type \(A_{2n}^{(2)}\). The classical decomposition for these KR crystals for \(1\le r < n\) for type \(D_{n+1}^{(2)}\) and \(1\le r\le n\) for type \(A_{2n}^{(2)}\) is given by:
\[B^{r,s} \cong \bigoplus_\lambda B(\lambda) \quad \text{as a } \{1,2,\ldots,n\}\text{-crystal}\]
where \(\lambda\) is obtained from \(s\omega_r\) (or equivalently a rectangular partition of shape \((s^r)\)) by removing single boxes:
sage: K = KirillovReshetikhinCrystal(['D',5,2],2,2)
sage: K.classical_decomposition()
The crystal of tableaux of type ['B', 4] and shape(s) [[], [1], [2], [1, 1], [2, 1], [2, 2]]
sage: K = KirillovReshetikhinCrystal(['A',8,2],2,2)
sage: K.classical_decomposition()
The crystal of tableaux of type ['C', 4] and shape(s) [[], [1], [2], [1, 1], [2, 1], [2, 2]]
The KR crystals are constructed using an injective map \(S\) into a KR crystal of type \(C_n^{(1)}\):
\[S : B^{r,s} \to B^{r,2s}_{C_n^{(1)}} \quad \text{such that } S(e_ib) = e_i^{m_i}S(b) \text{ and } S(f_ib) = f_i^{m_i}S(b)\]
where
\[(m_0,\ldots,m_n) = (1,2,\ldots,2,1) \text{ for type } D_{n+1}^{(2)} \quad \text{and} \quad (1,2,\ldots,2,2) \text{ for type } A_{2n}^{(2)}.\]
sage: K = KirillovReshetikhinCrystal(['D',5,2],1,2); K.ambient_crystal()
Kirillov-Reshetikhin crystal of type ['C', 4, 1] with (r,s)=(1,4)
sage: K = KirillovReshetikhinCrystal(['A',8,2],1,2); K.ambient_crystal()
Kirillov-Reshetikhin crystal of type ['C', 4, 1] with (r,s)=(1,4)
The KR crystals \(B^{1,1}\) of type \(D_3^{(2)}\) and \(A_4^{(2)}\) look as follows:
As you can see from the Dynkin diagram for type \(A_{2n}^{(2)}\), mapping the nodes \(i\mapsto n-i\) yields the same diagram, but with relabelled nodes. In this case the classical subdiagram is of
type \(B_n\) instead of \(C_n\). One can also construct the KR crystal \(B^{r,s}\) of type \(A_{2n}^{(2)}\) based on this classical decomposition. In this case the classical decomposition is the sum
over all weights obtained from \(s\omega_r\) by removing horizontal dominoes:
sage: C = CartanType(['A',6,2]).dual()
sage: Kdual = KirillovReshetikhinCrystal(C,2,2)
sage: Kdual.classical_decomposition()
The crystal of tableaux of type ['B', 3] and shape(s) [[], [2], [2, 2]]
Looking at the picture, one can see that this implementation is isomorphic to the other implementation based on the \(C_n\) decomposition up to a relabeling of the arrows:
sage: C = CartanType(['A',4,2])
sage: K = KirillovReshetikhinCrystal(C,1,1)
sage: Kdual = KirillovReshetikhinCrystal(C.dual(),1,1)
sage: G = K.digraph()
sage: Gdual = Kdual.digraph()
sage: f = { 1:1, 0:2, 2:0 }
sage: for u,v,label in Gdual.edges():
....: Gdual.set_edge_label(u,v,f[label])
sage: G.is_isomorphic(Gdual, edge_labels = True, certify = True) #todo not implemented (see #10904 and #10549)
(True, {[[-2]]: [[1]], [[-1]]: [[2]], [[1]]: [[-2]], []: [[0]], [[2]]: [[-1]]})
Exceptional nodes
The KR crystals \(B^{n,s}\) for types \(C_n^{(1)}\) and \(D_{n+1}^{(2)}\) were excluded from the above discussion. They are associated to the exceptional node \(r=n\), and in this case the classical decomposition is irreducible:
\[B^{n,s} \cong B(s\omega_n)\]
In Sage:
sage: K = KirillovReshetikhinCrystal(['C',2,1],2,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['C', 2] and shape(s) [[1, 1]]
sage: K = KirillovReshetikhinCrystal(['D',3,2],2,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['B', 2] and shape(s) [[1/2, 1/2]]
The KR crystals \(B^{n,s}\) and \(B^{n-1,s}\) of type \(D_n^{(1)}\) are also special. They decompose as:
\[B^{n,s} \cong B(s\omega_n)\]\[B^{n-1,s} \cong B(s\omega_{n-1}).\]
sage: K = KirillovReshetikhinCrystal(['D',4,1],4,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['D', 4] and shape(s) [[1/2, 1/2, 1/2, 1/2]]
sage: K = KirillovReshetikhinCrystal(['D',4,1],3,1)
sage: K.classical_decomposition()
The crystal of tableaux of type ['D', 4] and shape(s) [[1/2, 1/2, 1/2, -1/2]]
Type \(E_6^{(1)}\)
In [JonesEtAl2010] the KR crystals \(B^{r,s}\) for \(r=1,2,6\) in type \(E_6^{(1)}\) were constructed exploiting again a Dynkin diagram automorphism, namely the automorphism \(\sigma\) of order 3
which maps \(0\mapsto 1 \mapsto 6 \mapsto 0\):
sage: C = CartanType(['E',6,1]); C.dynkin_diagram()
The crystals \(B^{1,s}\) and \(B^{6,s}\) are irreducible as classical crystals:
sage: K = KirillovReshetikhinCrystal(['E',6,1],1,1)
sage: K.classical_decomposition()
Direct sum of the crystals Family (Finite dimensional highest weight crystal of type ['E', 6] and highest weight Lambda[1],)
sage: K = KirillovReshetikhinCrystal(['E',6,1],6,1)
sage: K.classical_decomposition()
Direct sum of the crystals Family (Finite dimensional highest weight crystal of type ['E', 6] and highest weight Lambda[6],)
whereas for the adjoint node \(r=2\) we have the decomposition
\[B^{2,s} \cong \bigoplus_{k=0}^s B(k\omega_2)\]
sage: K = KirillovReshetikhinCrystal(['E',6,1],2,1)
sage: K.classical_decomposition()
Direct sum of the crystals Family (Finite dimensional highest weight crystal of type ['E', 6] and highest weight 0,
Finite dimensional highest weight crystal of type ['E', 6] and highest weight Lambda[2])
The promotion operator on the crystal corresponding to \(\sigma\) can be calculated explicitly:
sage: K = KirillovReshetikhinCrystal(['E',6,1],1,1)
sage: promotion = K.promotion()
sage: u = K.module_generator(); u
sage: promotion(u.lift())
[(-1, 6)]
The crystal \(B^{1,1}\) is already of dimension 27. The elements \(b\) of this crystal are labelled by tuples which specify their nonzero \(\phi_i(b)\) and \(\epsilon_i(b)\). For example, \([-6,2]\)
indicates that \(\phi_2([-6,2])= \epsilon_6([-6,2])=1\) and all others are equal to zero:
sage: K = KirillovReshetikhinCrystal(['E',6,1],1,1)
sage: K.cardinality()
27
An important notion for finite-dimensional affine crystals is perfectness. The crucial property is that a crystal \(B\) is perfect of level \(\ell\) if there is a bijection between level \(\ell\)
dominant weights and elements in
\[B_{\mathrm{min}} = \{ b \in B \mid \mathrm{lev}(\varphi(b)) = \ell \}\;.\]
For a precise definition of perfect crystals see [HongKang2002]. In [FourierEtAl2010] it was proven that for the nonexceptional types \(B^{r,s}\) is perfect as long as \(s/c_r\) is an integer. Here \(c_r=1\), except that \(c_r=2\) for \(1\le r<n\) in type \(C_n^{(1)}\) and for \(r=n\) in type \(B_n^{(1)}\).
Here we verify this using Sage for \(B^{1,1}\) of type \(C_3^{(1)}\):
sage: K = KirillovReshetikhinCrystal(['C',3,1],1,1)
sage: Lambda = K.weight_lattice_realization().fundamental_weights(); Lambda
Finite family {0: Lambda[0], 1: Lambda[1], 2: Lambda[2], 3: Lambda[3]}
sage: [w.level() for w in Lambda]
[1, 1, 1, 1]
sage: Bmin = [b for b in K if b.Phi().level() == 1 ]; Bmin
[[[1]], [[2]], [[3]], [[-3]], [[-2]], [[-1]]]
sage: [b.Phi() for b in Bmin]
[Lambda[1], Lambda[2], Lambda[3], Lambda[2], Lambda[1], Lambda[0]]
As you can see, both \(b=1\) and \(b=-2\) satisfy \(\varphi(b)=\Lambda_1\). Hence there is no bijection between the minimal elements in \(B_{\mathrm{min}}\) and level 1 weights. Therefore, \(B^{1,1}\) of type \(C_3^{(1)}\) is not perfect. However, \(B^{1,2}\) of type \(C_n^{(1)}\) is a perfect crystal:
sage: K = KirillovReshetikhinCrystal(['C',3,1],1,2)
sage: Lambda = K.weight_lattice_realization().fundamental_weights()
sage: Bmin = [b for b in K if b.Phi().level() == 1 ]
sage: [b.Phi() for b in Bmin]
[Lambda[0], Lambda[3], Lambda[2], Lambda[1]]
Perfect crystals can be used to construct infinite-dimensional highest weight crystals and Demazure crystals using the Kyoto path model [KKMMNN1992].
Energy function and one-dimensional configuration sum
For tensor products of Kirillov–Reshetikhin crystals, there also exists the important notion of the energy function. It can be defined as the sum of certain local energy functions and the \(R\)-matrix. In Theorem 7.5 in [SchillingTingley2011] it was shown that for perfect crystals of the same level the energy \(D(b)\) is the same as the affine grading (up to a normalization). The affine grading is defined as the minimal number of applications of \(e_0\) to \(b\) needed to reach a ground state path. Computationally, this algorithm is a lot more efficient than the computation involving the \(R\)-matrix and has been implemented in Sage:
sage: K = KirillovReshetikhinCrystal(['A',2,1],1,1)
sage: T = TensorProductOfCrystals(K,K,K)
sage: hw = [b for b in T if all(b.epsilon(i)==0 for i in [1,2])]
sage: for b in hw:
....: print b, b.energy_function()
[[[1]], [[1]], [[1]]] 0
[[[1]], [[2]], [[1]]] 2
[[[2]], [[1]], [[1]]] 1
[[[3]], [[2]], [[1]]] 3
The affine grading can be computed even for nonperfect crystals:
sage: K = KirillovReshetikhinCrystal(['C',4,1],1,2)
sage: K1 = KirillovReshetikhinCrystal(['C',4,1],1,1)
sage: T = TensorProductOfCrystals(K,K1)
sage: hw = [b for b in T if all(b.epsilon(i)==0 for i in [1,2,3,4])]
sage: for b in hw:
....: print b, b.affine_grading()
[[], [[1]]] 1
[[[1, 1]], [[1]]] 2
[[[1, 2]], [[1]]] 1
[[[1, -1]], [[1]]] 0
The one-dimensional configuration sum of a crystal \(B\) is the graded sum by energy of the weight of all elements \(b \in B\):
\[X(B) = \sum_{b \in B} x^{\mathrm{weight}(b)} q^{D(b)}\]
Here is an example of how you can compute the one-dimensional configuration sum in Sage:
sage: K = KirillovReshetikhinCrystal(['A',2,1],1,1)
sage: T = TensorProductOfCrystals(K,K)
sage: T.one_dimensional_configuration_sum()
B[-2*Lambda[1] + 2*Lambda[2]] + (q+1)*B[-Lambda[1]]
+ (q+1)*B[Lambda[1] - Lambda[2]] + B[2*Lambda[1]]
+ B[-2*Lambda[2]] + (q+1)*B[Lambda[2]]
Help with Surds
January 5th 2012, 12:09 PM
Help with Surds
I need a bit of help solving a question on surds if anyone can push me in the right direction with it.
The question is:
I've started multiplying out the bottom row, and have got as far as this:
$\frac{4\sqrt{3}+3\sqrt{7}}{3\sqrt{3}+\sqrt{7}} * \frac{3\sqrt{3}-\sqrt{7}}{3\sqrt{3}-\sqrt{7}}$
I then proceed to multiplying out the bottom row;
I then proceed to collecting the like terms;
$= 5 + 6\sqrt{3}$
From here, I really can't see how to 'rationalise the denominator'.
I'm only working on the bottom row right now, but if anyone can tell me where I'm going wrong with this that would be great. I'm sure it's something simple, I'm just not seeing it.
Thanks in advance.
January 5th 2012, 12:37 PM
Re: Help with Surds
I need a bit of help solving a question on surds if anyone can push me in the right direction with it.
The question is:
I've started multiplying out the bottom row, and have got as far as this:
$\frac{4\sqrt{3}+3\sqrt{7}}{3\sqrt{3}+\sqrt{7}} * \frac{3\sqrt{3}-\sqrt{7}}{3\sqrt{3}-\sqrt{7}}$
I then proceed to multiplying out the bottom row;
Not sure what you've done in the next steps.
Use $(a + b)(a - b) = a^2 - b^2$
That means the denominator becomes:
$({3\sqrt{3}+\sqrt{7}}) \cdot ({3\sqrt{3}-\sqrt{7}})=9\cdot 3 - 7 =20$
- expand the numerator
- collect those terms with $\sqrt{21}$ and those with simple integers
- factor out 5 at numerator and denominator
- cancel the common factor
You should come out with $\frac{3+\sqrt{21}}4$
January 5th 2012, 01:04 PM
Re: Help with Surds
Thanks for the reply, this is starting to make sense.
The only confusion I still have is how multiplying out the numerator becomes $15+\sqrt{21}$
$(4*3)*3 = 36$
$(3\sqrt{7}*-\sqrt{7}) = -21$
$\Rightarrow 36 - 21$ to me is making 15, only.
In what order is it multiplied out in to make $15+\sqrt{21}$?
Thanks again.
January 5th 2012, 01:33 PM
Re: Help with Surds
So you need to expand this guy?
Follow this method $(a+b)(c+d) = ac+ad+bc+bd$
Does this make sense?
January 5th 2012, 02:19 PM
Re: Help with Surds
Ah, yes. I was trying to use the $(a+b)(c+d) = ac + ad + bc + bd$ before except I was separating the coefficients from the roots, completely, hence why my multiplying out was so long because I
was multiplying through several times too many. Basically, I didn't understand that $3\sqrt{7}$ was a single term, I was treating the $3$ and $\sqrt{7}$ as if they were separate.
Many thanks.
There's just one last thing I don't get... For the numerator, I got $15+5\sqrt{21}$. Where does the 5 go? I know earboth said about factoring it out at the numerator and denominator, but I'm not
sure I understand.
January 5th 2012, 02:33 PM
Re: Help with Surds
$15+5\sqrt{21} = 5\times 3 +5\times \sqrt{21} = 5(3 + \sqrt{21})$ now what did you get for the denominator?
January 5th 2012, 02:51 PM
Re: Help with Surds
I got $20$ for the denominator, I worked this out through;
$= 9\sqrt{9} -3\sqrt{21}+3\sqrt{21}-7$
$= 9(3)-7$
$= 27 - 7 = 20$
I'm still a bit unsure about the last bit, where the 5 goes. I can understand that $15+5\sqrt{21}$ is equivalent to $5(3+\sqrt{21})$, but multiplied out that still gives $15+5\sqrt{21}$.
Apologies, I really don't get this last bit.
January 5th 2012, 02:56 PM
Re: Help with Surds
Ok, you have done all the hard work here, just need to cancel some terms,
$\frac{15+5\sqrt{21}}{20} = \frac{5\times 3 +5\times \sqrt{21}}{5\times 4} = \frac{5(3 + \sqrt{21})}{5\times 4} = \frac{3 + \sqrt{21}}{4}$
as given in post #2.
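As a quick sanity check (an editorial aside, not part of the thread), the whole simplification can be verified numerically in Python:

```python
import math

# Original expression from the thread:
# (4*sqrt(3) + 3*sqrt(7)) / (3*sqrt(3) + sqrt(7))
original = (4 * math.sqrt(3) + 3 * math.sqrt(7)) / (3 * math.sqrt(3) + math.sqrt(7))

# Rationalised form worked out in the thread: (3 + sqrt(21)) / 4
rationalised = (3 + math.sqrt(21)) / 4

# The two values agree to floating-point precision.
assert math.isclose(original, rationalised)
```

A floating-point check like this cannot replace the algebra, but it catches slips such as dropping the factor of 5 discussed above.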
January 5th 2012, 03:08 PM
Re: Help with Surds
Ahh.. I understand now. It's just simplifying the fraction ultimately.
Many thanks pickslides and earboth.
dont want answers want explanation
Graph y=3x+13 y=x-3
Start with the graph of x. You know that the graph of x is a line, because y = mx + b. Now, you have 3x + 13, and you know that b is the y-intercept. So that means you're shifting 3x UP by 13 units on the y-axis. Same with the 2nd function, it's a linear function, except now you're not adding, you're subtracting, meaning negative: shifting your line down on the y-axis.
sorry, still slightly confused
y = mx + b is the equation of a LINE, where m = slope and b = the y-intercept. For instance, if I have 3x + 2, then the graph is 3(x) + 2; the y-intercept is 2, so the line of 3x is shifted UP two. If it was -2, then I would have shifted the line DOWN two.
Math Activities
Winter weather presents many opportunities to integrate math skills into literature and science lessons. Teachers often find that winter themes, such as snowmen or penguins, motivate students to play
games to help master basic facts or skills. Geometry is also a natural winter theme as students examine snowflake symmetry or symmetry in holiday shapes and cookie cutters.
Winter Geometry
Symmetry: Incorporate a study of symmetry as students create holiday projects. Ellison cutouts and cookie cutters are very often symmetrical figures. Have students trace, cut out and fold the objects
to find the line of symmetry. Mrs. Soranno's kindergarten students created lots of symmetric holiday decorations, pictured above as well as the Symmetry Forest, pictured below on the right.
Shapes: Students also used marshmallows and toothpicks to create marshmallow shapes, such as those pictured to the right. This is a great way to reinforce student knowledge of 2-dimensional shapes.
Additionally, the marshmallows make it easy to count vertices.
Challenge students to predict and draw what a snowflake will look like when the given pattern is cut out. Then, have students actually cut out the snowflake pattern to check their prediction.
Finally, students will identify and draw in lines of symmetry for the snowflake pattern. Train students to think about and visualize the Snowflake Symmetry.
• Make a Flake. Challenge students to draw what the opened snowflake pattern will look like before previewing the finished product. Repeat several times so that students develop spatial sense and
are better able to predict what the cut pattern will create.
• Enrichment: provide copies of the Snowflake Symmetry Student Handout so that students can record their cut design and challenge their classmates to predict what the finished product will look
• Bulletin Board: display both the folded designs and the finished products in random order. Label each design with a number and each finished product with a letter. Challenge students to correctly
match each pair.
Match the Snowflake Halves: Mount student original snowflake designs on dark blue or black construction paper. Cut student snowflakes in half along a line of symmetry. Challenge students to find the
matching snowflakes, based on their analysis of the symmetry of each design.
Symmetric Snowflakes: Check out these sites for snowflake patterns that can be printed for students to fold and cut into symmetric snowflakes. Once students have mastered these patterns, they're
ready for their own creations.
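The line-symmetry idea behind these snowflake activities can also be checked programmatically. This is a minimal sketch, using a made-up 0/1 grid as a stand-in for a cut paper pattern:

```python
def has_vertical_line_symmetry(grid):
    """True if each row of the 0/1 grid reads the same left-to-right
    and right-to-left, i.e. the design is symmetric about a vertical
    line through its centre."""
    return all(row == row[::-1] for row in grid)

symmetric = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]
asymmetric = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
]
# has_vertical_line_symmetry(symmetric)  -> True
# has_vertical_line_symmetry(asymmetric) -> False
```

The same reversal test, applied to columns instead of rows, checks horizontal line symmetry; real six-fold snowflake symmetry would need rotations as well.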
Pattern Block Snowflakes
Use Ellison cutouts of the pattern blocks on white construction paper, if possible. If not, download the Pattern Blocks and provide copies so that students may cut apart the blocks. Students then
arrange the blocks in a symmetric snowflake pattern on blue or black paper for the best contrast. When students are pleased with the arrangement, they may glue the pieces in place.
Variation: Provide students with the Symmetric Snowflake made from pattern blocks and challenge them to complete the design so that it is has line symmetry. Have students use the blank symmetric
snowflake design handout to create their own symmetric snowflake challenges.
Snowflake Quilt Squares
Visit Mrs. Burns' Quilt Monthly Quilt Squares to download patterns for snowflake quilt squares. Bookmark this site for your math quilting unit. Extend the activity by including an investigation of
the symmetry in the patterns.
Teachers and students love snow days and snow men. These math activities capture student enthusiasm for this seasonal theme.
Math Activities Themes: Snowmen
See more snowman math activities on the new Mathwire.com Snowman Math Activities theme page, which is a collection of all things snowmen on the Mathwire.com site, conveniently pulled together for easy browsing. The collection includes new Snowman Math Activities, including a new snowman math mat and a new Last Snowman Standing Game.
View the Snowman Math Activities theme page.
These activities capitalize on student fascination with penguins.
Math Activities Themes: Penguins
See more penguin math activities on the new Penguin Math theme page, which is a collection of all things penguin on the Mathwire.com site, conveniently pulled together for easy browsing. The collection includes new Penguin Math Activities, including a new penguin math mat, a math-literature connection for 365 Penguins, and a new Free the Penguins Game.
Gingerbread men and gingerbread houses enjoy special popularity around the holidays, but many of these gingerbread activities are timeless and complement literature titles that teachers use at the
beginning of school or after the holidays. It's very easy to incorporate mathematics into a study of gingerbread men, and students will enjoy the data collection activities and games while learning
math skills and deepening their understanding of important mathematical concepts.
Data Collection Activity: Run, Gingerbread Men, Run! Game
This game was designed to introduce students to the randomness of spinners and dice. Each color gingerbread man starts at the same place and has the same chance of winning by crossing the finish
line, but does it work out that way? Students will enjoy playing the game AND use a clothespin graph [see sample on right] to collect some useful data on the winners.
Once students have collected class data from playing many games, they will come together to analyze the clothespin graph results. Students will be asked to discuss whether or not they think the game
is fair for all of the gingerbread men and explain their reasoning.
Download the Run, Gingerbread Men, Run! Game so that students can get started playing and collecting data. The pdf file contains the spinner, gameboard, clothespin graph icons, and an optional tally
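The fairness question students investigate with the clothespin graph can also be explored with a quick simulation. This is only a sketch: the colors and track length below are assumptions for illustration, since the printable game defines the real rules.

```python
import random
from collections import Counter

def play_race(colors=("red", "yellow", "green", "blue"), track_length=10, rng=random):
    """One game: each spin advances a uniformly random color one space;
    the first color to reach the finish line wins."""
    position = {c: 0 for c in colors}
    while True:
        mover = rng.choice(colors)
        position[mover] += 1
        if position[mover] >= track_length:
            return mover

def winner_tallies(games=2000, seed=0):
    """Tally winners over many games, like the class clothespin graph."""
    rng = random.Random(seed)
    return Counter(play_race(rng=rng) for _ in range(games))

# With a fair spinner each color should win roughly a quarter of the time,
# though any single classroom session can look lopsided, which is exactly
# the discussion point of the clothespin graph.
```

Comparing a small tally (one class period) with a large one (many simulated games) mirrors the intended lesson about randomness and sample size.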
Math Activities Themes: Gingerbread Man
See more gingerbread math activities on the new Gingerbread Man Math theme page, which is a collection of all things gingerbread on the Mathwire.com site, conveniently pulled together for easy browsing. This new collection also includes a gingerbread person math mat and links to more gingerbread math activities on the web.
Each of these glyphs captures data about students in a visual mode. Students should analyze the class data by creating tally charts, Venn diagrams, bar graphs, etc. Students should talk about
and write about what they learned from looking at the glyphs of their classmates. Or, play a guessing game where students try to match a student's answers to the legend by identifying which glyph
belongs to that student.
Consider turning some of your favorite winter crafts projects into math glyphs. It's easy to create a legend that fits your students. Older students enjoy working with teachers to create a class
legend for the project.
Winter Glyphs
• Mrs. Arocho's kindergarten students created Snowman glyphs. See the picture on the right to view the legend students used and to see samples of their glyphs which were proudly displayed in the
school hallway for all to admire.
• Mrs. Ritenour's Snowman glyph includes a questionnaire, legend and pictures of student glyphs.
• Lynn Weber's Snowperson Glyph is a legend for creating glyphs.
• This Snowman Glyph is a simple coloring project
More Winter Glyphs
• See Winter Glyphs from the Mathwire.com Winter 2005 Math Activities collection.
• See Winter Glyphs from the Mathwire.com Winter 2006 Math Activities collection.
Coordinate Graphing Picture:     Holiday Quilt
This activity requires students to use coordinate pairs to correctly color in the squares of the grid to create a picture. Students need only red, yellow, blue and green crayons or markers to color
in squares to create this Holiday Quilt of traditional quilt squares.
If students like this activity, try the Quilt Square Challenge. Complete directions are given as are the pdf files for the quilt squares and quilt mats. This activity was designed to help students
develop spatial memory and spatial sense.
Coordinate Graphing Game:     Catch the Gingerbread Men
For this game, students toss two dice (one regular and one marked A-B-C-D-E-F), form an ordered pair (e.g. B5), then remove the gingerbread man from that space, if there is one. Play continues until
the timer rings or until one player has caught 10 gingerbread men. Students love playing the game and they get to practice their coordinate graphing skills in the process.
Download Catch the Gingerbread Men game mat, game pieces and directions for playing the game.
If students enjoy this game, they will also like the Capture the Penguins Game from the Mathwire Winter 2006 collection. Students especially love the clothespin penguins so they're worth the time and
effort to make!
More Winter Coordinate Graphing Activities
• See more Winter Coordinate Graphing Activities from the Winter 2005 collection including a Gingerbread House activity and Grab the Candy game.
• See Winter Coordinate Graphing Games from the Winter 2006 collection to download directions and game mat for the Capture the Penguins game.
• Have students assemble a Snowman Quilt Square, designed by Ms. Burns. Students place colored squares in the correct locations to form a snowman picture.
More Seasonal Coordinate Graphing Resources
Consult these teacher resources for additional coordinate graphing activities, especially for the holidays. Look through closets as these are oldies-but-goodies. These activities introduce young math
students to coordinate graphing and allow teachers to plan seasonal math activities that effectively develop math skills and concepts.
• Holiday Graph Art by Erling and Dolores Freeberg, published by Teacher Created Materials, Inc., 1987. This book contains graphing art directions for these winter activities: Santa Claus, Rudolph,
Christmas Tree, Candle, Angel, Baby New Year. See Teacher Created Materials Website to view sample pages from this book.
These open-ended assessments require students to apply mathematical concepts and skills to solve problems and explain their thinking using words, pictures and numbers.
• Let students practice fraction skills to solve Winter Fraction Words, then challenge students to make up their own fraction word puzzles.
• Challenge students with a Wrapping Paper problem. Will either sheet of paper completely wrap the birthday present? Students are asked to use numbers, words, pictures or diagrams to explain their
best thinking. Or, actually place paper and boxes in a math center and challenge students to figure out the smallest piece of paper that will completely wrap the gift.
• Students will use statistical skills to solve the Weekly Weather problem. Two temperatures are missing but the teacher has supplied some statistical clues to help students figure out the missing temperatures.
• Because sales are a part of holiday shopping, students will practice percentage skills in Shopping for Sales as they figure out where they should buy mom's present to get the cheapest price.
• Motivate students to try their hand at some code-breaking with Crypto-Lists for Winter.
• See Snowman Problem Solving from the Winter 2005 collection which includes Frosty's Estimation Station, Dress the Snowman templates, and links to additional snowman activities on the web.
• Holiday Problem Solving from the Winter 2005 collection includes links to the Twelve Days of Christmas and Pascal's triangle math activities on the web.
• More Winter Problem Solving from the Winter 2006 collection includes Gingerbread and Snowman combination problems as well as several different patterning problems.
Many books may be used as a springboard for mathematical discussions and activities. These are included to integrate winter themes into mathematics:
The Elves and the Shoemaker retold from the Brothers Grimm and illustrated by Jim La Marche. After reading the book, have fun with patterns!
• Download Scholastic's The Elves and The Shoemaker short story and math lesson that uses input-output tables to study the patterns.
• Challenge students to make up their own variations on the elves' patterns.
• Read the book and then do A Shoe In, an extension activity found in NCTM's Amazing Attributes online lesson. Students use a Venn Diagram to sort and classify shoes worn by students in the class.
• Create an Elf Glyph. This Winter 2006 math activity includes legend and patterns to create the elves.
The Mitten by Jan Brett: After reading the book, have students do a math estimation activity:
Download Math-Literature Connections: The Mitten by Jan Brett for a copy of these Mathwire.com lesson plans.
How Big Is the Mitten?: a lesson on volume of a 3-dimensional object
• Student Estimates: Show students a mitten and several linking cubes. Ask students to estimate how many cubes would fit inside the mitten. Record student estimates and ask students to explain how
they figured out their estimate and why they believe it is correct.
• Data Collection: Provide mittens and linking cubes for each small group. Ask students to work together to fill the mitten with cubes. Be sure to explain what filled means for your class (e.g. no
cubes sticking over the edge, or no cubes falling out when you hold the mitten in the middle).
• Organizing Data: Draw a line plot on the chalkboard or chart paper. Ask student groups to make an X to mark how many cubes they fit into their mitten.
• Analyzing the Data: Lead a discussion about the data, including an informal discussion of both range (everyone in the class was between ____ and ____) and mode (most groups were able to fit about
____ cubes in their mittens) to introduce the mathematical language of statistics.
• Math Vocabulary: Tell students that when mathematicians talk about filling an object, they are talking about the volume of the object. Also model using the terms range and mode in discussing the
line plot.
How Many Cubes Will Cover the Mitten?: a lesson on area of a 2-dimensional object
• Student Estimates: Show students a mitten cutout and several linking cubes. Ask students to estimate how many cubes would completely cover the mitten. Record student estimates and ask students to
explain how they figured out their estimate and why they believe it is correct.
• Data Collection: Provide mitten cutouts and linking cubes for each student. Ask students to completely cover the mitten with cubes. Be sure to explain that no cubes should hang over the edge of
the mitten.
• Organizing Data: Draw a line plot on the chalkboard or chart paper. Ask student groups to make an X to mark how many cubes they fit on their mitten.
• Analyzing the Data: Lead a discussion about the data, including an informal discussion of both range (everyone in the class was between ____ and ____) and mode (most groups were able to fit about
____ cubes on their mittens) to introduce the mathematical language of statistics.
• Math Vocabulary: Tell students that when mathematicians talk about covering an object, they are talking about the area of the object. Also model using the terms range and mode in discussing the
line plot.
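For teachers who want to check the statistics themselves, the lesson's informal range and mode can be computed directly; the cube counts below are invented sample data, not real class results:

```python
from statistics import mode

# Hypothetical class results: how many cubes each group fit on its mitten.
cube_counts = [18, 20, 20, 21, 20, 19, 22, 20, 18]

low, high = min(cube_counts), max(cube_counts)
spread = high - low              # the "range" of the class data
most_common = mode(cube_counts)  # the "mode": the most frequent count

print(f"Everyone was between {low} and {high} cubes (range {spread}); "
      f"most groups fit about {most_common} cubes.")
```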
Graphing Ideas: Do you wear mittens or gloves in the winter?
• Create a clothespin graph
• Create a Venn diagram which allows for the possibility that some students wear both, depending on the day
• Create a pictograph with mitten and glove cutouts
• Create a bar graph using colored index cards with each student's name that can be placed end to end to form bars
Jan Brett activities for The Mitten
These math activities are organized by seasons. Elementary teachers often incorporate seasonal activities as craft projects. Many of these seasonal craft projects can be mathematical as
well with a little forethought. Browse the activities for projects to add that reinforce mathematical concepts and skills through seasonal and holiday themes.
Teachers can find many shared activities on the internet. These activities integrate mathematical ideas using fall materials and themes. The activities address multiple mathematical strands (e.g.
measurement, number sense, probability, estimation, money, data collection, etc.), making it possible for teachers to plan effective mathematics instruction that also captures students' seasonal
interest. Use or modify the lessons to fit the needs of your students. Build upon the ideas shared by other teachers through internet sites. These are presented in alphabetical order by activity.
Browse by Author
Jump to:
Baker, R
Chunong, C
Chuong, C
Hughes, M
Jiang, T
Lin, C
Maini, P
Paus, R
Plikus, M
Widelitz, R
Number of items: 3.
Baker, R
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Chunong, C
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Chuong, C
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Hughes, M
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Jiang, T
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Lin, C
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Maini, P
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Paus, R
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Plikus, M
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
Widelitz, R
Lin, C.M. and Jiang, T.X. and Baker, Ruth E. and Maini, P. K. and Widelitz, R. B. and Chunong, C.M. (2009) Spots & stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis
shown by feather morphogenesis & mathematical simulation. Developmental Biology, 334 (2). pp. 369-382.
Lin, C. and Jiang, T. and Baker, Ruth E. and Maini, P. K. and Hughes, M. and Widelitz, R. B. and Chuong, C. (2008) Periodic patterning stem cells and induction of skin appendages: p-ERK-dependent
mesenchymal condensation is coupled with Turing mechanism to convert stripes to spots. Journal of Investigative Dermatology, 128 (S1). S156.
Widelitz, R. B. and Baker, Ruth E. and Plikus, M. V. and Lin, C. and Maini, P. K. and Paus, R. and Chuong, C. M. (2006) Distinct mechanisms underlie pattern formation in the skin and skin appendages.
Birth Defects Research (Part C), 78 (3). pp. 280-291.
This list was generated on Fri Apr 18 06:30:55 2014 BST.
Convert Pa-s to centipoise - Conversion of Measurement Units
›› Convert Pascal-second to centipoise
›› More information from the unit converter
How many Pa-s in 1 centipoise? The answer is 0.001.
We assume you are converting between Pascal-second and centipoise.
You can view more details on each measurement unit:
Pa-s or centipoise
The SI derived unit for dynamic viscosity is the pascal second.
1 pascal second is equal to 1 Pa-s, or 1000 centipoise.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between Pascal seconds and centipoise.
Type in your own numbers in the form to convert the units!
›› Definition: Centipoise
A unit of dynamic viscosity in the CGS system of units. A centipoise is one millipascal second (mPa·s) in SI units. Water has a viscosity of 0.0089 poise at 25 °C, or 1 centipoise at 20 °C.
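Since 1 Pa·s = 1000 cP, the conversion is a single factor of 1000 in either direction. A minimal sketch:

```python
# 1 Pa·s = 1000 cP (a centipoise is one millipascal second),
# so converting is just a factor of 1000 either way.
def pascal_seconds_to_centipoise(pa_s):
    return pa_s * 1000.0

def centipoise_to_pascal_seconds(cp):
    return cp / 1000.0

# Water at 20 °C: about 1 cP, i.e. 0.001 Pa·s.
print(pascal_seconds_to_centipoise(0.001))  # 1.0
```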
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Piecewise functions in matlab
Not the most difficult thing to do by any means. But here’s a handy conversion from a math formula to matlab. Say you have the piecewise polynomial, m, defined as:
        / 0               if x < 0,
m(x) =  | -2x²(x - 3/2)   if 0 ≤ x < 1,
        | 1 + (x - 1)²    if 1 ≤ x < 3/2,
        \ x - 1/4         if x ≥ 3/2.
So in matlab if I have some variable x with some values, i.e.
x = -1:0.01:2;
then I can formulate the above as:
m = ...
(0 ) .* (x < 0 ) + ...
(-2*x.^2.*(x-3/2)) .* (0 <= x & x < 1 ) + ...
(1+(x-1).^2 ) .* (1 <= x & x < 3/2) + ...
(x-1/4 ) .* (x >= 3/2 );
Now, I can show a plot:
This works because the conditions in matlab are now logicals that return a vector the same size as x, with 1′s if the condition was true and 0′s otherwise. Multiplied against the value for the
condition and added to the next gives the correct solution. I guess this method is somewhat risky in the sense that if you mess up your logicals or inequalities the addition could sum up erroneous
values without recognizing the error. But I think the presentation is very nice and is easily broken up if need be.
Note: The polynomial above is from Higher Order Barycentric Coordinates by Torsten Langer and Hans-Peter Seidel.
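For readers working outside MATLAB, the same mask-and-multiply trick carries over to Python/NumPy; this port is illustrative and not part of the original post:

```python
import numpy as np

def m(x):
    """The piecewise polynomial above: each boolean mask is 0 or 1, so
    exactly one term contributes at every point."""
    x = np.asarray(x, dtype=float)
    return (0.0                      * (x < 0)
          + (-2 * x**2 * (x - 1.5))  * ((0 <= x) & (x < 1))
          + (1 + (x - 1)**2)         * ((1 <= x) & (x < 1.5))
          + (x - 0.25)               * (x >= 1.5))

x = np.arange(-1, 2.01, 0.01)
y = m(x)  # ready to plot, e.g. with matplotlib's plot(x, y)
```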
Tags: math, matlab, plot, polynomial
Very nice example, thank you for sharing.
you are a fucking genius. Thank you so much. I think this will work.
Alec, would you happen to know how to sum an iterated piecewise function? Say f(x) = x+x^2 and you need to sum on an interval [a,b] the iteration of f as long as x_i is in [a,b] else the sum is zero.
Thank you in advance.
I’m not sure what you mean. Do you mean something like this:
a = 0;
b = 1;
f = a:0.1:b;
max_iterations = 10;
for i = 1:max_iterations
f = (f + f.^2) .* (f >= a & f <=b) + ...
(0) .* (f<a | f >b);
Otherwise what do you mean by “sum an iterated piecewise function” can you explain how you would write this function with math notation?
it doesn’t work
but i used if/else conditions instead
@Tom Which doesn’t work?!?!
This didn’t work for my piecewise function
@mac What function are you trying to do?
THANK YOU SO MUCH!!!!
Many thanks , it’s very simple and useful
using your same function m(x) say now i want to plot m(-x+1) how would i do so?
m’s only a function of x, right before it’s evaluated. So to do what you want you could try:
y = -x+1;
m = ...
(0 ) .* (y < 0 ) + ...
(-2*y.^2.*(y-3/2)) .* (0 <= y & y < 1 ) + ...
(1+(y-1).^2 ) .* (1 <= y & y < 3/2) + ...
(y-1/4 ) .* (y >= 3/2 );
Just what I was looking for…
Many thanks.
Awfully useful, thank you very much.
sir, thank you you are a genius
Please, how to calculate the integral of this function on the interval [-1,2]?
Andre, remember that you can just calculate the sum of the integrals of each subfunction on its respective interval. A more detailed explanation.
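To make that concrete for the m(x) in this post: summing the per-piece antiderivatives over [-1, 2] gives 43/24 (my own calculation — the antiderivatives are in the comments), and a dense trapezoid rule agrees:

```python
import numpy as np

def m(x):  # the same piecewise polynomial as in the post
    x = np.asarray(x, dtype=float)
    return (0.0                      * (x < 0)
          + (-2 * x**2 * (x - 1.5))  * ((0 <= x) & (x < 1))
          + (1 + (x - 1)**2)         * ((1 <= x) & (x < 1.5))
          + (x - 0.25)               * (x >= 1.5))

# Piece by piece, evaluating each antiderivative at its own breakpoints:
#   [-1, 0):   0
#   [0, 1):    -x^4/2 + x^3      -> 1/2
#   [1, 3/2):  x + (x-1)^3/3     -> 1/2 + 1/24
#   [3/2, 2]:  x^2/2 - x/4       -> 3/4
exact = 0.5 + (0.5 + 1/24) + 0.75   # = 43/24

x = np.linspace(-1, 2, 300001)
y = m(x)
numeric = float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2)  # trapezoid rule
```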
Dear Ajx,
Thank you for your answer, but I really would like to know if there is a way to do it directly, that is, once we have the expression, is there a command that makes the calculation without me having to enter the expression interval by interval?
Sorry for my English … my current language is Portuguese.
I’m not sure exactly what you want to do. You could try MATLAB’s symlink, or maple or mathematic to compute the integral automatically. All of them should be able to easily handle piecewise
thanks a lot dear… exactly what I was googling for
How would you do a multi-variable piecewise function?
0 when x<0 and y<0
x*y when 0<=x<=1 and 0<=y<=1
x when 0<=x<=1 and y>1
y when 0<=y<=1 and x>1
1 when x>1 and y>1
I tried the method given but it adds up the values
z= (0) .*(x<0 | y<0)+…
(x.*y) .*(0<=x<=1 | 0<=y<=1)+…
(x) .*(0<=x<=1 | y>1)+…
(y) .*(0<=y<=1 | x>1)+…
(1) .*(x>1 | y>1);
any help would be great, thanks
Hi Frank,
I’m not sure I understand. How many variables are there? Just x and y? What are x1 and y1? Also does “x<0 y<0″ mean x less than zero OR y less than zero or does it mean x less than zero AND y less
than zero? I have a feeling the component-wise ‘or’ operator ‘|’ is the culprit.
          / 0    if x<0 and y<0
          | x*y  if 0<=x<=1 and 0<=y<=1
F(x,y) =  | x    if 0<=x<=1 and y>1
          | y    if 0<=y<=1 and x>1
          \ 1    if x>1 and y>1
Sorry about that, I must have typed my previous comment half asleep when I was frustrated trying to figure out how to use your method to plot the function. The one I wrote is the correct function, I
also changed the “|” to “&”. eventually I plotted the function, though it required a bit of loops. it would be great if there was a way to do it your way. There are 2 variables, x and y.
Hi ajx,
I highly appreciate the fact that you took your time to write such a nice program for graphing piecewise functions. I have tried your code on some other similar functions like the one you have, and
it works very well, except for the one that I have to do for my project.
I am working on an estimation problem in which I need to generate a plot of the following function (a cubic spline, K0 = -3):
B0(x) =
0.25*y.^3 if k0 ≤ x < k1,
-0.75*y.^3+0.75*y.^2+0.75*y+0.75 if k1 ≤ x < k2,
0.75*y.^3 – 0.75*y.^2 +1 if k2 ≤ x < k3,
-0.25*y.^3+0.75*y.^2-0.75*y+0.75 if k3 ≤ x < k4,
where x is generated from
N =100;
x = rand(1,N); and decomposed into an interval kx given:
kx = floor(K*x) and
y = K*x-kx
Since B0 has 4 intervals, K = 4.
The parameter to be estimated is a sum given by:
g = aK0*BK0(x) +aK0+1*BK0+1(x) +…aK-1BK-1(x)
where the other B-splines are translations of B0(x) given by:
BJ(x) = B0(x-kj). for j = K0, K0+1,….K-1, I also would like to plot these translated version of B0(x). K0 = -3 for cubic splines, and is not the same as for the intervals for B0, k0.
I would really be glad if you can help me.
does anybody know how to handle relational operations in symbolic computation? E.g., if I want to check a certain condition: if a>=b else NaN.
does anybody have an idea in general about looping in symbolic computation…
This was a Great Trick. hatsoff
Could have read documentation. But thanks to google and you I got to the stuff i needed without reading boring definitions of this and that!
This is a nice trick, but it has a serious flaw that comes up regularly when I try to use. Consider this example:
When x==0, the function "should" return 0, but it returns NaN. Is there a real way to deal with piecewise functions in matlab?
I’ve run into this too. Usually I give up on having a one-liner in these situations, but it seems you can taylor-make something for your situation. There are a few options here: http://
www.mathworks.com/matlabcentral/newsreader/view_thread/147044. I like this one for example:
% Assumes x is in ascending order
g = @(x) [x(x<=0) 1./x(x>0)];
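The same pitfall exists in Python/NumPy, where the masked product would compute 0 * inf = nan; the usual fix is likewise to evaluate each branch only on its own subset. An illustrative sketch (not from the thread):

```python
import numpy as np

def f_safe(x):
    """f(x) = 0 for x <= 0 and 1/x for x > 0, evaluating 1/x only where
    it is safe.  A naive mask-and-multiply version would compute
    0 * inf = nan at x == 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = 1.0 / x[pos]   # only evaluated on the safe subset
    return out

print(f_safe(np.array([-1.0, 0.0, 2.0])))  # 0, 0, 0.5 -- no NaNs
```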
Absolute genius, worked like a charm. Thanks for your help!
THANK YOU!!!! I’ve been trying to figure out how to do this!!
Hey just curious, I tried using trig such as
ff = …
(sin(x)) .* (0 <= x <= 10) + …
(2) .* (10 <= x);
When I use trig it seems to behave differently, any help?
Actually you can delete it haha, I see my mistake. Thanks for the code!
I want to use truncated ramp function defined as below:
f(t)=t for t=[0:1:100];
f(t)=1 for t=[100:1:6000];
I want to use this function in the form such that I can use f(t+3) + f(t-2) etc.
How should I define this function?
@Preet, aside from the fact that your f is defined twice for 100 (does f(100) equal 100 or 1?) then you would use f = @(t) (t<100).*t + (t>=100).*1;
I know this is out of date, but the way I deal with singularities is to make a second variable then substitute that in. It is two lines, but it allows anonymous functions to still be used->
NewX = @(x) (x==0).*(1) + (x~=0).*(x)
MySinc = @(x) (x==0).*(1) + (x~=0).*(sin(pi*x)./(pi*NewX(x)))
Not sure where my dot-times went, tho
@Michael, nice. The asterisks got lost during the markdown parsing. I fixed your comment accordingly.
Thanks so much
it worked!
Excel formulas are not automatically updating
I did a search for this question and came up with a post, but the response did not make sense to me, so I am posting my question as a new one (rather than replying to the other, since it was from over a year ago).
I know next to nothing about VBA (yet). I found the code I am using from this website (and altered it a bit); its purpose is to sum the cells that have a particular font color. It is:
'Written by Ozgrid Business Applications
'Sums cells based on a specified fill color.
Function SumColor(rColor As Range, rSumRange As Range)
    Dim rCell As Range
    Dim iCol As Integer
    Dim vResult

    iCol = rColor.Interior.ColorIndex
    For Each rCell In rSumRange
        If rCell.Font.ColorIndex = iCol Then
            vResult = WorksheetFunction.Sum(rCell) + vResult
        End If
    Next rCell
    SumColor = vResult
End Function
When cells within the range are given a particular color, the function does not automatically update; it requires me to double click on the cell w/ the function. The only time it does auto update is
when I subtract a cell with a particular color.
How can I make this function update automatically, just like the functions that are built into Excel do?
Here is the forum post from last year with the answer I do not quite understand.
...how do I incorporate 'Application.Volatile' in the above code?
Thank you for your time & help...
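For reference, the usual way to incorporate it is to call Application.Volatile as the first statement of the function body, which marks the UDF for recalculation whenever Excel recalculates anything (a sketch only — untested here):

```vba
Function SumColor(rColor As Range, rSumRange As Range)
    Application.Volatile   ' recalc this UDF on every workbook recalculation
    ' ... same body as above ...
End Function
```

One caveat: changing a cell's color does not by itself trigger a recalculation, so even a volatile function only refreshes on the next recalculation — e.g. after pressing F9 or editing any cell.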
Project Austin Part 3 of 6: Ink Smoothing
Hi, my name is Eric Brumer. I’m a developer on the C++ compiler optimizer, but I’ve spent some time working on Project Code Name Austin to help showcase the power and performance of C++ in a real
world program. For a general overview of the project, please check out the introduction blog post.
This blog post describes how we perform ink smoothing.
Consider a straightforward ink drawing mechanism: draw straight lines between each stylus input point that is sampled. The devices and drivers we have been using on Windows 8 sample 120 input points
per second. This may seem like a lot, but very swift strokes can sometimes cause visible straight edges. Here’s a sample from the app (without ink smoothing) which shows some straight edges:
Here is the same set of ink strokes, but with the ink stroke smoothed.
We are using a spline technique to do real time ink smoothing. Other options were considered, but the spline (a) can be done in real time so the strokes you draw are always smooth as new input points
are sampled and (b) are computationally feasible.
There is plenty of literature online about spline smoothing techniques, but in my (limited) research I have either found descriptions that are too simplistic, or descriptions that require a degree in
computer graphics to understand. So here’s my shot at something in the middle...
Before computers, a technique was used to create smoothed curves using a tool called a spline. This was a flexible material (heavy rope, a flexible piece of wood, etc) that could bend into shape, but
also be fixed at certain locations along its body. For example, you could take a piece of heavy rope, pin the rope to a wall using a bunch of pins in different locations along the rope, then trace
the outline of the bendy rope to yield a spline-smoothed curve.
Fast forward several decades and now we are using the same principles to create a smoothed line between a set of points. Say we have a line with many points P0, P1, P2, … To smooth it using a spline,
we take the first 4 points (P0, P1, P2, P3) and draw a smooth curve that passes through P1 & P2. Then we move the window of 4 points to (P1, P2, P3, P4) and draw a smooth curve that passes through P2
& P3. Rinse and repeat for the entire curve. The reason it’s a spline technique is that we consider the two points as being ‘pinned’, just like pinning some rope to a wall.
Before going into how we draw the smoothed line between those points, let’s examine the benefits:
1. We only need four points to draw a smoothed line between the middle two. As you are drawing an ink stroke with your stylus, we are constantly able to smooth the stroke. I.e. we can do real time
2. The computation is bounded, and by some neat compiler optimizations and limiting the number of samples when drawing the smoothed line (see item 2 below) we can ensure ink smoothing won’t be on
the critical path of performance.
There are a few things to keep in mind:
1. We need to handle drawing a smoothed line between the first two points (P0 & P1), as well as drawing the smoothed line between the last two points on the curve. I do these by faking up those
points and applying the same spline technique.
2. I keep writing “draw a smoothed line between two points”. We can’t draw a smoothed line; we can only draw a bunch of straight lines that look smooth. So when I say “draw a smoothed line between
two points” what I mean to say is “draw many straight lines that look smooth which connect two points”. We just sample points along the curved line at regular intervals which are known to look
smooth at the pixel level.
Cubic Spline & Cardinal Spline
Now on to the mathematical meat… When a graphics person says that a line is smooth at a given point, what they are saying is that the line is continuous at that point, the first derivative of the
line is continuous at that point, and the second derivative is continuous at that point. Apologies if I’m bringing back horrible memories of high school or college calculus.
Here’s a visual of five points with the smoothed line already drawn in blue.
We can define each segment of the smoothed blue curve as being parameterized by a parameter “t” which goes from 0 to 1. So the blue line is the concatenation of 4 curves given by:
P01(t) where t ranges from 0 to 1 for the first segment (from P0 to P1)
P12(t) where t ranges from 0 to 1 for the second segment (from P1 to P2)
… etc …
Using the ` character to mean derivative, applying the definition of smooth at the endpoints of each of the segments yields a bunch of equations:
P01(t=1) = P12(t=0),   P`01(t=1) = P`12(t=0),   P``01(t=1) = P``12(t=0)
P12(t=1) = P23(t=0),   P`12(t=1) = P`23(t=0),   P``12(t=1) = P``23(t=0)
… etc …
Solving those equations exactly is trying. See spline interpolation. In general, if you are looking for a polynomial to satisfy an equation with second derivatives, you are shopping for a polynomial
of degree 3, aka a cubic polynomial. Hence the ‘cubic’ in cubic spline.
The Wikipedia page shows a solution to fit the smoothness equations, but a lot of work has been done in this space to come up with a more computationally feasible solution that looks just as smooth.
Basically, we relax the second derivative equations and say P``01(t=1) ~= P``12(t=0), etc. This opens up many possibilities – look up any cubic spline and you’ll see many options.
After much experimenting, I found that the Cardinal spline works best for our ink strokes. The cardinal spline solution for the smoothed curve between 4 points P0, P1, P2, P3 is as follows:
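Written out explicitly (reconstructed here from the C++ code below rather than quoted from the post), the segment between P2 and P3, given the four-point window (P1, P2, P3, P4), is:

```latex
P_{23}(t) = (2t^3 - 3t^2 + 1)\,P_2
          + (-2t^3 + 3t^2)\,P_3
          + (t^3 - 2t^2 + t)\,L\,(P_3 - P_1)
          + (t^3 - t^2)\,L\,(P_4 - P_2),
\qquad t \in [0, 1]
```

The first two basis polynomials blend the pinned points; the last two scale the end tangents by the tension factor L.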
The factor L is used to simulate the “tension in the heavy rope”, and can be tuned as you see fit. We chose a value around 0.5. If you are so inclined, you can also write out P23(t), take a bunch of
derivatives and see this fits the smoothness equations. If you are a high school calculus teacher, please don’t make your students do this for homework.
The formula can be expressed in C++:
for (int i = 0; i < numPoints; i++)
{
    float t = (float)i / (float)(numPoints - 1);

    smoothedPoints_X[i] = (2*t*t*t - 3*t*t + 1) * p2x
                        + (-2*t*t*t + 3*t*t) * p3x
                        + (t*t*t - 2*t*t + t) * L*(p3x-p1x)
                        + (t*t*t - t*t) * L*(p4x-p2x);

    smoothedPoints_Y[i] = (2*t*t*t - 3*t*t + 1) * p2y
                        + (-2*t*t*t + 3*t*t) * p3y
                        + (t*t*t - 2*t*t + t) * L*(p3y-p1y)
                        + (t*t*t - t*t) * L*(p4y-p2y);
}
numPoints (the number of points to sample on our smoothed line) is based on the minimum interval for what we thought looked good.
Like I mentioned before, we do real-time ink smoothing. That is to say an ink stroke is smoothed as it is drawn. We need to make sure that drawing a smooth line does not take too long otherwise we’ll
notice a drop in frame rate where the ink stroke lags behind your stylus.
One of the benefits of writing this app in C++ is the opportunity for compiler optimizations to kick in. In this particular case, the cardinal spline equations are auto-vectorized by the Visual
Studio 2012 C++ compiler. This yields a 30% performance boost when smoothing ink strokes, ensuring we can smooth ink points as fast as Windows can sample them. Also, any extra computing time saved
lets us (a) do more computations to make the app better, or (b) finish our computations early, putting the app to sleep thus saving power.
Read all about the auto vectorizer here: http://blogs.msdn.com/b/nativeconcurrency/archive/2012/04/12/auto-vectorizer-in-visual-studio-11-overview.aspx
That's nice, but you should really use the GPU for this; anything that has to do with rendering is not a job for a latency-optimized processor. Google for "stencil then cover". Try it, then compare
the performance and power consumption (if you can). An order-of-magnitude difference is common.
When you see the bright flash, stencil and cover!
Houston Heights, TX
Houston, TX 77027
Yale Graduate, Tutoring College Essay, ACT, SAT, and more
...) Send me a line (or, you know, message) and let's see if I'm the right tutor for you! I received As in Algebra II, Geometry, Trigonometry, Pre-Calculus, and Calculus in high school. I also received a B in 115 (Calculus II) and an A- in Statistics at Yale. I'm...
Offering 10+ subjects including algebra 1, prealgebra and SAT math | {"url":"http://www.wyzant.com/geo_Houston_Heights_TX_Math_tutors.aspx?d=20&pagesize=5&pagenum=6","timestamp":"2014-04-20T19:13:04Z","content_type":null,"content_length":"61293","record_id":"<urn:uuid:b845eddd-f642-441e-9a76-83e118c0083f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practice Questions for test
March 23rd 2008, 01:48 PM #1
Hey Everyone,
First let me start by saying, I'm not sure if such a book exist.
Does anyone know of any good textbooks/books/reference books for first year (single variable, early transcendentals) Calculus, that have more challenging/not common/bit more thinking and
application questions? Or tough definate/indefinate integrals.
For example questions like, $\int ^b_a \frac{dx}{\sqrt{(x-a)(b-x)}}$ or If $\int ^{\frac{\pi}{4}}_0 \tan^6 x\sec x\;dx=L$ find the value of $\int ^{\frac{\pi}{4}}_0 \tan^8 x\sec x\;dx$ in terms
of $L$.
Prove for $m,n\ge0$ that $\int_0^1 x^m (1 - x)^n \,dx = \frac{m!\,n!}{(m + n + 1)!}.$ This is actually a direct application of the Beta function.
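For what it's worth, here is a quick numerical sanity check of that identity (plain C++ with midpoint-rule integration; a check, not a proof, and tgamma(k+1) stands in for k!):

```cpp
#include <cmath>

// Midpoint-rule approximation of the integral of x^m (1-x)^n over [0,1].
double beta_integral(int m, int n, int steps = 200000)
{
    double h = 1.0 / steps, total = 0.0;
    for (int i = 0; i < steps; ++i) {
        double x = (i + 0.5) * h;
        total += std::pow(x, m) * std::pow(1.0 - x, n);
    }
    return total * h;
}

// The claimed closed form m! n! / (m + n + 1)!.
double closed_form(int m, int n)
{
    return std::tgamma(m + 1.0) * std::tgamma(n + 1.0) / std::tgamma(m + n + 2.0);
}
```

For example m = 2, n = 3 gives 2!·3!/6! = 12/720 = 1/60 from both functions.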
This is also a really nice problem, the first time I saw it, I attempted to compute both values of the integrals.
No matter what $a$ is. Let $I=\int_0^1\frac{e^u}{1+u}\,du$ & $J=\int_{a-1}^a\frac{e^{-x}}{x-a-1}\,dx.$ Assume that $\lambda\cdot I=J,$ compute $\lambda.$
I recently posted (in another forum) a proof for $\left(\frac12\right)!=\frac{\sqrt\pi}2,$ that also could be another nice problem to do. (You can see its solution here. It's written in Spanish,
but I think you can get the main ideas.)
Compute $\int_0^1\ln x\ln(1-x)\,dx.$ When I saw this problem, I developed a solution which involves double integration & series application. (If someone is interested, see my solution here.)
Evaluate $\lim_{n \to \infty } \left\{ {\frac{1}{{n^2 }}\sum\limits_{k = 0}^{n - 1} {\left[ {k\int_k^{k + 1} {\sqrt {(x - k)(k + 1 - x)} \,dx} } \right]} } \right\}.$ This one I love it! It's
easy and nice!
I have many others problems, but I don't remember them now, but try these ones.
Thank You VERY much; I appreciate the effort and time! ....However...
these are not exactly the types of questions I'm looking for. I'm looking for easier questions to do with integration and series. So I guess Krizalid maybe instead of your interesting questions
give me YOUR "very easy" question.
Let me tell ya something: you asked for tough integrals and I haven't given you tough integrals.
Take a look at my problems, last two involves series (first one more than the second one), and the penultimate problem not necessarily has to be computed with double integration, I actually made
that solution to simplify things. The second one is a really nice problem, if you don't try it, you can't say it is a "hard" problem. As for the third problem, it's also a nice problem which
involves integration, take a look at my solution, if you want to, I can translate my message.
I'd like to see more "easy-interesting" problems from another members.
P.S.: by the way, it's "definite integral."
perplexus.info :: Science : Sliding blocks
Triangle PQR with area A is given in a plane parallel to the gravitational acceleration g, with the length QR perpendicular to g, and the point P is above QR.
Now we let two blocks (considered as points) slide down PQ and PR respectively. This gives the different times t1 and t2 respectively for each block. If one of the angles Q or R is obtuse, the
corresponding block will slide on the inside of the triangle.
If we assume there is no friction, find the minimum of T=t1+t2 in terms of A and g and determine the triangle when this occurs. | {"url":"http://perplexus.info/show.php?pid=6164&cid=40975","timestamp":"2014-04-16T10:11:49Z","content_type":null,"content_length":"12594","record_id":"<urn:uuid:cf03fd9c-0296-4d52-886b-b907fc93a081>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
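For anyone setting the problem up, the slide times follow from standard frictionless-incline kinematics (textbook physics, not part of the puzzle statement). A block starting from rest on a chord of length $\ell$ inclined at angle $\theta$ to the horizontal accelerates at $g\sin\theta$, so $\ell=\frac12 g\sin\theta\, t^2$, giving $t=\sqrt{2\ell/(g\sin\theta)}$. Hence $t_1=\sqrt{2\,|PQ|/(g\sin Q)}$ and $t_2=\sqrt{2\,|PR|/(g\sin R)}$.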
IB Mathematics
Horizontal and Vertical Shifts
If $f(x)$ is the original function and $c>0$, then the graph of $f(x)+c$ is shifted up $c$ units,
and the graph of $f(x)-c$ is shifted down $c$ units.
A vertical shift means that every point $(x,y)$ on the graph of the original function $f(x)$ is transformed to $(x,y\pm c)$ on the graph of the transformed function $f(x)+c \ or \ f(x)-c$
The graph of $f(x+c)$ is shifted left $c$ units
The graph of $f(x-c)$ is shifted right $c$ units
A horizontal shift means that every point $(x,y)$ on the graph of the original function $f(x)$ is transformed to $(x\pm c,y)$ on the graph of the transformed function $f(x-c) \ or \ f(x+c)$
If $f(x)$ is the original function then
The graph of $-f(x)$ is a reflection in the x-axis.
The graph of $f(-x)$ is a reflection in the y-axis.
Absolute value transformation
$|f(x)|$ : Every part of the graph which is below x-axis is reflected in x-axis.
$f(|x|)$ : For $x \geq 0$ the graph is exactly the same as this of the original function.
For $x < 0$ the graph is a reflection of the graph for $x \geq 0$ in the y-axis.
Stretching and Shrinking
If $f(x)$ is the original function, $c>1$ then
The graph of $cf(x)$ is a vertical stretch by a scale factor of $c$
If $f(x)$ is the original function, $0<c<1$ then
The graph of $cf(x)$ is a vertical shrink by a scale factor of $c$.
A vertical stretch or shrink means that every point $(x,y)$ on the graph of the original function $f(x)$ is transformed to $(x,cy)$ on the graph of the transformed function $cf(x)$.
If $f(x)$ is the original function, $c>1$ then
The graph of $f(cx)$ is a horizontal shrink by a scale factor of $\frac{1}{c}$.
If $f(x)$ is the original function, $0<c<1$ then
The graph of $f(cx)$ is a horizontal stretch by a scale factor of $\frac{1}{c}$.
A horizontal stretch or shrink means that every point $(x,y)$ on the graph of the original function $f(x)$ is transformed to $(\frac{x}{c},y)$ on the graph of the transformed function $f(cx)$.
Order of Tranformation
When we perform multiple transformations the order of these transformations may affect the final graph. Therefore we could follow the proposed order (with some exceptions) below to avoid possible
wrong final graphs.
1. Horizontal Shifts
2. Stretch / Shrink
3. Reflections
4. Vertical Shifts
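A short worked example of this order (made up here for illustration): transform $f(x)=x^2$ into $g(x)=2f(x+1)-3$.
1. Horizontal shift: $f(x+1)=(x+1)^2$, a shift left by 1 unit.
2. Stretch / Shrink: $2f(x+1)=2(x+1)^2$, a vertical stretch by a scale factor of 2.
3. Reflections: none in this example.
4. Vertical shift: $2f(x+1)-3=2(x+1)^2-3$, a shift down by 3 units.
Each point $(x,y)$ on $y=f(x)$ is transformed to $(x-1, 2y-3)$; for example the vertex $(0,0)$ of $y=x^2$ moves to $(-1,-3)$.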
A Joint Meeting with the Canadian Society for the History & Philosophy of Mathematics (CSHPM)
Hartford, CT
MAA MathFest 2013 will be held at the Connecticut Convention Center and Hartford Marriott Downtown in Hartford, Connecticut. There will be a complimentary Grand Opening Reception on the evening of
Wednesday, July 31, and the mathematical sessions will take place from Thursday, August 1 through Saturday, August 3.
- See more at: http://www.maa.org/meetings/mathfest
5 July 2013. Over 127,000 students worldwide are today receiving their results from the May 2013 IB Diploma Programme examination session.
Jeffrey Beard, Director General of the International Baccalaureate, says: “I would like to congratulate all students on their great achievements. Today’s IB diploma graduates can be confident that
they possess the skills needed to excel in an increasingly international world, with students uniquely poised for success both at university and beyond. I wish every individual the very best and look
forward to hearing of their accomplishments through our global network of IB alumni.”
IB Maths Revision Notes - IB Mathematics HL, SL, Studies Revision Notes by www.IBmaths4u.com
Complex Numbers for IB Mathematics HL
Mathematical Induction for IB Mathematics HL
Trigonometry for IB Mathematics HL
Sequences-Series and Binomial Theorem for IB Mathematics HL
Exponential and Logarithmic Functions for IB Mathematics HL
Differentiation for IB Mathematics HL
Integration for IB Mathematics HL
Applications of Integration and Differential Equations for IB Mathematics HL
Probability, Set Theory and Counting Principles for IB Mathematics HL
Continuous Probability Distributions, Normal Distribution - IB Maths HL
How can we find the standard deviation of the weight of a population of cats which is found to be normally distributed with mean 2.1 Kg, given that 60% of the cats weigh at least 1.9 Kg?
The Answer is from www.ibmaths4u.com
IB Mathematics HL – Continuous Probability Distribution, Normal Distribution
A normal distribution is a continuous probability distribution for a random variable X. The graph of a normal distribution is called the normal curve. A normal distribution has the following properties:
1. The mean, median, and mode are equal.
2. The normal curve is bell shaped and is symmetric about the mean.
3. The total area under the normal curve is equal to one.
4. The normal curve approaches, but never touches, the x-axis as it extends farther and farther away from the mean.
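For reference, the normal curve described above is the graph of the probability density function (a standard formula, not stated in these notes):
$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$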
Approximately 68% of the area under the normal curve is between $\mu - \sigma$ and $\mu + \sigma$. Approximately 95% of the area under the normal curve is between $\mu - 2 \sigma$ and $\mu + 2 \sigma$. Approximately 99.7% of the area under the normal curve is between $\mu - 3 \sigma$ and $\mu + 3 \sigma$.
The standard normal distribution is a normal probability distribution that has a mean of 0 and a standard deviation of 1.
$Z\sim N(0, 1 ^2)$
Concerning your question
Let the random variable $C$ denotes the weight of the cats, so that
$C\sim N(2.1, \sigma ^2)$
We know that $P(C \geq 1.9)=0.6$
Since we don’t know the standard deviation, we cannot use the inverse normal. Therefore we have to transform the random variable $C$ to that of
$Z\sim N(0,1)$ , using the transformation $Z= \frac{C- \mu}{\sigma}$
we have the following
$P(C \geq 1.9)=0.6 \Rightarrow P(\frac{C- 2.1}{\sigma} \geq \frac{1.9- 2.1}{\sigma})=0.6$
$\Rightarrow P(Z \geq \frac{-0.2}{\sigma})=0.6$
Using GDC Casio fx-9860G SD
MAIN MENU > STAT>DIST(F5)>NORM(F1)>InvN>
Setting Tail: right
Area: 0.6
We find that the standardized value is -0.2533471
$\frac{-0.2}{\sigma}=-0.2533471\Rightarrow \sigma=\frac{-0.2}{-0.2533471}=0.789 (3 s.f.)$ | {"url":"http://ib-mathematics.blogspot.com/","timestamp":"2014-04-18T15:38:41Z","content_type":null,"content_length":"85037","record_id":"<urn:uuid:0dc807fc-b734-41c9-b3f8-f4e5e0319e93>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
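The GDC value can be cross-checked in plain C++ (shown only as a sanity check on the inverse-normal step; exam work would of course still use the GDC). The inverse CDF here is a simple bisection on the standard normal CDF:

```cpp
#include <cassert>
#include <cmath>

// Standard normal CDF, via the complementary error function.
double phi(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

// Invert phi by bisection; plenty accurate for a sanity check.
double inv_phi(double p)
{
    double lo = -10.0, hi = 10.0;
    for (int i = 0; i < 200; ++i) {
        double mid = 0.5 * (lo + hi);
        if (phi(mid) < p) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

Then inv_phi(0.4) is approximately -0.2533471, and -0.2 / inv_phi(0.4) is approximately 0.789, matching the value above.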
The TooN library is a set of C++ header files which provide basic numerics facilities:
It provides classes for statically- (known at compile time) and dynamically- (unknown at compile time) sized vectors and matrices and it can delegate advanced functions (like large SVD or
multiplication of large matrices) to LAPACK and BLAS (this means you will need libblas and liblapack).
The library makes substantial internal use of templates to achieve run-time speed efficiency whilst retaining a clear programming syntax.
Why use this library?
• Because it supports statically sized vectors and matrices very efficiently.
• Because it provides extensive type safety for statically sized vectors and matrices (you can't attempt to multiply a 3x4 matrix and a 2-vector).
• Because it supports transposition, subscripting and slicing of matrices (to obtain a vector) very efficiently.
• Because it interfaces well to other libraries.
• Because it exploits LAPACK and BLAS (for which optimised versions exist on many platforms).
• Because it is fast, but not at the expense of numerical stability.
Design philosophy of TooN
• TooN is designed to represent mathematics as closely as possible.
• TooN is a linear algebra library.
□ TooN is designed as a linear algebra library and not a generic container and array mathematics library.
• Vectors are not matrices.
□ The Vector and Matrix objects are distinct. Vectors and matrices are closely related, but distinct objects which makes things like outer versus inner product clearer, removes ambiguity and
special cases and generally makes the code shorter.
• TooN generally doesn't allow things which don't make much sense.
□ Why would you want to multiply or add Zeros?
• A vector is always a Vector and a matrix is always a Matrix
□ Both concrete and generic functions take variations on the Vector and Matrix class, no matter where the data comes from. You will never see anything like a BaseVector.
How to use TooN
This section is arranged as a FAQ. Most answers include code fragments. Assume using namespace TooN;.
Getting the code and installing
To get the code from cvs use:
cvs -z3 -d:pserver:anoncvs@cvs.savannah.nongnu.org:/cvsroot/toon co TooN
The home page for the library with a version of this documentation is at:
The code will work as-is, and comes with a default configuration, which should work on any system.
On a unix system, ./configure && make install will install TooN to the correct place. Note there is no code to be compiled, but the configure script performs some basic checks.
On non-unix systems, e.g. Windows and embedded systems, you may wish to configure the library manually. See Manual configuration.
Getting started
To begin, just in include the right file:
Everything lives in the TooN namespace.
Then, make sure the directory containing TooN is in your compiler's search path. If you use any decompositions, you will need to link against LAPACK, BLAS and any required support libraries. On a
modern unix system, linking against LAPACK will do this automatically.
Compilation errors on Win32 involving TOON_TYPEOF
If you get errors compiling code that uses TooN, look for the macro TOON_TYPEOF in the messages. Most likely the file internal/config.hh is clobbered: open it, remove all the defines present, and define the appropriate macro in internal/config.hh.
Also see Manual configuration for more details on configuring TooN, and Functions using LAPACK, if you want to use LAPACK and BLAS.
How do I create a vector?
Vectors can be statically sized or dynamically sized.
Vector<3> v1; //Create a static sized vector of size 3
Vector<> v2(4); //Create a dynamically sized vector of size 4
Vector<Dynamic> v2(4); //Create a dynamically sized vector of size 4
See also Can I have a precision other than double?.
How do I create a matrix?
Matrices can be statically sized or dynamically sized.
Matrix<3> m; //A 3x3 matrix (statically sized)
Matrix<3,2> m; //A 3x2 matrix (statically sized)
Matrix<> m(5,6); //A 5x6 matrix (dynamically sized)
Matrix<3,Dynamic> m(3,6); //A 3x6 matrix with a dynamic number of columns and static number of rows.
Matrix<Dynamic,2> m(3,2); //A 3x2 matrix with a dynamic number of rows and static number of columns.
See also Can I have a precision other than double?.
How do I write a function taking a vector?
To write a function taking a local copy of a vector:
template<int Size> void func(Vector<Size> v);
To write a function taking any type of vector by reference:
template<int Size, typename Precision, typename Base> void func(const Vector<Size, Precision, Base>& v);
See also Can I have a precision other than double?, How do I write generic code? and Why don't functions work in place?
Slices are strange types. If you want to write a function which uniformly accepts const whole objects as well as slices, you need to template on the precision.
Note that constness in C++ is tricky (see What is wrong with constness?). If you write the function to accept Vector<3, double, B>& , then you will not be able to pass it slices from const Vectors.
If, however you write it to accept Vector<3, const double, B>& , then the only way to pass in a Vector<3> is to use the .as_slice() method.
See also How do I write generic code?
What is wrong with constness?
In TooN, the behaviour of a Vector or Matrix is controlled by the third template parameter. With one parameter, it owns the data, with another parameter, it is a slice. A static sized object uses

double my_data[3];

to hold the data. A slice object uses:

double* my_data;
When a Vector is made const, C++ inserts const in to those types. The const it inserts it top level, so these become (respectively):
const double my_data[3];
double * const my_data;
Now the types behave very differently. In the first case my_data[0] is immutable. In the second case, my_data is immutable, but my_data[0] is mutable.
Therefore a slice const Vector behaves like an immutable pointer to mutable data. TooN attempts to make const objects behave as much like pointers to immutable data as possible.
The semantics that TooN tries to enforce can be bypassed with sufficient steps:
//Make v look immutable
template<class P, class B> void fake_immutable(const Vector<2, P, B>& v)
{
    Vector<2, P, B> nonconst_v(v);
    nonconst_v[0] = 0; //Effectively mutate v
}

void bar()
{
    Vector<3> v;
    fake_immutable(v.slice<0,2>());
    //Now v is mutated
}
See also How do I write a function taking a vector?
What elementary operations are supported?
Assignments are performed using =. See also Why does assigning mismatched dynamic vectors fail?.
These operators apply to vectors or matrices and scalars. The operator is applied to every element with the scalar:

*, /, *=, /=

Vector and vectors or matrices and matrices:

+, -, +=, -=

Dot product:

Vector * Vector

Matrix multiply:

Matrix * Matrix

Matrix multiplying a column vector:

Matrix * Vector

Row vector multiplying a matrix:

Vector * Matrix

3x3 Vector cross product:

Vector ^ Vector
All the functions listed below return slices. The slices are simply references to the original data and can be used as lvalues.
Getting the transpose of a matrix:
Accessing elements:
Vector[i] //get element i
Matrix(i,j) //get element i,j
Matrix[i] //get row i as a vector
Matrix[i][j] //get element i,j
Turning vectors in to matrices:
Vector.as_row() //vector as a 1xN matrix
Vector.as_col() //vector as a Nx1 matrix
Slicing with a start position and size:
Vector.slice<Start, Length>(); //Static slice
Vector.slice(start, length); //Dynamic slice
Matrix.slice<RowStart, ColStart, NumRows, NumCols>(); //Static slice
Matrix.slice(rowstart, colstart, numrows, numcols); //Dynamic slice
Slicing diagonals:
Matrix.diagonal_slice(); //Get the leading diagonal as a vector.
Vector.as_diagonal(); //Represent a Vector as a DiagonalMatrix
Like other features of TooN, mixed static/dynamic slicing is allowed. For example:
Vector.slice<Dynamic, 2>(3, 2); //Slice starting at index 3, of length 2.
See also What are slices?
How I initialize a vector/matrix?
Vectors and matrices start off uninitialized (filled with random garbage). They can be easily filled with zeros, or ones (see also TooN::Ones):
Vector<3> v = Zeros;
Matrix<3> m = Zeros
Vector<> v2 = Zeros(2); //Note in the dynamic case, the size must be specified
Matrix<> m2 = Zeros(2,2); //Note in the dynamic case, the size must be specified
Vectors can be filled with makeVector:
Vector<> v = makeVector(2,3,4,5,6);
Matrices can be initialized to the identity matrix:
Matrix<2> m = Identity;
Matrix<> m2 = Identity(3);
note that you need to specify the size in the dynamic case.
Matrices can be filled from data in row-major order:
Matrix<3> m = Data(1, 2, 3,
4, 5, 6,
7, 8, 9);
A less general, but visually more pleasing syntax can also be used:
Vector<5> v;
Fill(v) = 1,2,3,4,5;
Matrix<3,3> m;
Fill(m) = 1, 2, 3,
4, 5, 6,
7, 8, 9;
Note that underfilling is a run-time check, since it can not be detected at compile time.
They can also be initialized with data from another source. See also I have a pointer to a bunch of data. How do I turn it in to a vector/matrix without copying?.
How do I add a scalar to every element of a vector/matrix?
Addition to every element is not an elementary operation in the same way as multiplication by a scalar. It is supported through the ::Ones object:
Vector<3> a, b;
b = a + Ones*3; // b_i = a_i + 3
a+= Ones * 3; // a_i <- a_i + 3
It is supported the same way on Matrix and slices.
Why does assigning mismatched dynamic vectors fail?
Vectors are not generic containers, and dynamic vectors have been designed to have the same semantics as static vectors where possible. Therefore trying to assign a vector of length 2 to a vector of
length 3 is an error, so it fails. See also How do I resize a dynamic vector/matrix?
How do I store Dynamic vectors in STL containers?
As C++ does not yet support move semantics, you can only safely store static and resizable Vectors in STL containers.
How do I resize a dynamic vector/matrix?
Do you really want to? If you do, then you have to declare it:
Vector<Resizable> v;
v = makeVector(1, 2, 3);
v = makeVector(1, 2); //resize
v = Ones(5); //resize
v = Zeros; // no resize
The policy behind the design of TooN is that it is a linear algebra library, not a generic container library, so resizable Vectors are only created on request. They provide fewer guarantees than
other vectors, so errors are likely to be more subtle and harder to track down. One of the main purposes is to be able to store Dynamic vectors of various sizes in STL containers.
Assigning vectors of mismatched sizes will cause an automatic resize. Likewise assigning from entities like Ones with a size specified will cause a resize. Assigning from an entities like Ones with
no size specified will not cause a resize.
They can also be resized with an explicit call to .resize(). Resizing is efficient since it is implemented internally with std::vector. Note that upon resize, existing data elements are retained but
new data elements are uninitialized.
Currently, resizable matrices are unimplemented. If you want a resizable matrix, you may consider using a std::vector, and accessing it as a TooN object when appropriate. See I have a pointer to a
bunch of data. How do I turn it in to a vector/matrix without copying?. Also, the speed and complexity of resizable matrices depends on the memory layout, so you may wish to use column major matrices
as opposed to the default row major layout.
What debugging options are there?
By default, everything which is checked at compile time in the static case is checked at run-time in the dynamic case (with some additions). Checks can be disabled with various macros. Note that the
optimizer will usually remove run-time checks on static objects if the test passes.
Bounds are not checked by default. Bounds checking can be enabled by defining the macro TOON_CHECK_BOUNDS. None of these macros change the interface, so debugging code can be freely mixed with
optimized code.
The debugging checks can be disabled by defining either of the following macros:

NDEBUG
TOON_NDEBUG
Additionally, individual checks can be disabled with the following macros:
• Static/Dynamic mismatch
□ Statically determined functions accept and ignore dynamically specified sizes. Nevertheless, it is an error if they do not match.
□ Disable with TOON_NDEBUG_MISMATCH
• Slices
□ Disable with TOON_NDEBUG_SLICE
• Size checks (for assignment)
□ Disable with TOON_NDEBUG_SIZE
• overfilling using Fill
□ Disable with TOON_NDEBUG_FILL
• underfilling using Fill (run-time check)
□ Disable with TOON_NDEBUG_FILL
Errors result in a call to std::abort().
TooN does not initialize data in a Vector or Matrix. For debugging purposes the following macros can be defined:
• TOON_INITIALIZE_QNAN or TOON_INITIALIZE_NAN Sets every element of newly defined Vectors or Matrixs to quiet NaN, if it exists, and 0 otherwise. Your code will not compile if you have made a
Vector or Matrix of a type which cannot be constructed from a number.
• TOON_INITIALIZE_SNAN Sets every element of newly defined Vectors or Matrixs to signalling NaN, if it exists, and 0 otherwise.
• TOON_INITIALIZE_VAL Sets every element of newly defined Vectors or Matrixs to the expansion of this macro.
• TOON_INITIALIZE_RANDOM Fills up newly defined Vectors and Matrixs with random bytes, to trigger non repeatable behaviour. The random number generator is automatically seeded with a granularity of
1 second. Your code will not compile if you have a Vector or Matrix of a non-POD type.
What are slices?
Slices are references to data belonging to another vector or matrix. Modifying the data in a slice modifies the original object. Likewise, if the original object changes, the change will be reflected
in the slice. Slices can be used as lvalues. For example:
Matrix<3> m = Identity;
m.slice<0,0,2,2>() *= 3; //Multiply the top-left 2x2 submatrix of m by 3.
m[2] /=10; //Divide the third row of M by 10.
m.T()[2] +=2; //Add 2 to every element of the third column of M.
m[1].slice<1,2>() = makeVector(3,4); //Set m_1,1 to 3 and m_1,2 to 4
Slices are usually strange types. See How do I write a function taking a vector?
Can I have a precision other than double?
Vector<3, float> v; //Static sized vector of floats
Vector<Dynamic, float> v(4); //Dynamic sized vector of floats
Vector<Dynamic, std::complex<double> > v(4); //Dynamic sized vector of complex numbers
Likewise for matrix. By default, TooN supports all builtin types and std::complex. Using custom types requires some work. If the custom type understands +,-,*,/ with builtin types, then specialize
TooN::IsField on the types.
If the type only understands +,-,*,/ with itself, then specialize TooN::Field on the type.
Note that this is required so that TooN can follow the C++ promotion rules. The result of multiplying a Matrix<double> by a Vector<float> is a Vector<double>.
How do I return a slice from a function?
If you are using C++11, returning slices is now easy:
auto sliceof(Vector<4>& v) -> decltype(v.slice<1,2>())
{
    return v.slice<1,2>();
}
If not, some tricks are required. Each vector has a SliceBase type indicating the type of a slice.
They can be slightly tricky to use:
Vector<2, double, Vector<4>::SliceBase> sliceof(Vector<4>& v)
{
    return v.slice<1,2>();
}

template<int S, class P, class B>
Vector<2, P, typename Vector<S, P, B>::SliceBase> sliceof(Vector<S, P, B>& v)
{
    return v.template slice<1,2>();
}

template<int S, class P, class B>
const Vector<2, const P, typename Vector<S, P, B>::ConstSliceBase> foo(const Vector<S, P, B>& v)
{
    return v.template slice<1,2>();
}
How do I invert a matrix / solve linear equations?
You use the decomposition objects (see below), for example to solve Ax=b:
Matrix<3> A;
Vector<3> b = makeVector (2,3,4);
// solve Ax=b using LU
LU<3> luA(A);
Vector<3> x1 = luA.backsub(b);
// solve Ax=b using SVD
SVD<3> svdA(A);
Vector<3> x2 = svdA.backsub(b);
Similarly for the other decomposition objects
For 2x2 matrices, the TooN::inv function can be used.
Which decompositions are there?
For general size matrices (not necessarily square) there are: LU , SVD , QR, LAPACK's QR and gauss_jordan()
For square symmetric matrices there are: SymEigen and Cholesky
If all you want to do is solve a single Ax=b then you may want gaussian_elimination()
What other stuff is there?
Look at the modules .
What handy functions are there (normalize, identity, fill, etc...)?
See here .
Does TooN support automatic differentiation?
TooN has built-in support for FADBAD++. Just do:
#include <functions/fadbad.h>
Then create matrices and vectors of FADBAD types. See functions/fadbad.h for available functions and parameterisations.
TooN is type generic and so can work on any reasonable types, including AD types, if a small amount of interfacing is performed.
Why don't functions work in place?
Consider the function:
void func(Vector<3>& v);
It can accept a Vector<3> by reference, and operate on it in place. A Vector<3> is a type which allocates memory on the stack. A slice merely references memory, and is a subtly different type. To
write a function taking any kind of vector (including slices) you can write:
template<class Base> void func(Vector<3, double, Base>& v);
A slice is a temporary object, and according to the rules of C++, you can't pass a temporary to a function as a non-const reference. TooN provides the .ref() method to escape from this restriction,
by returning a reference as a non-temporary. You would then have to write:
Vector<4> v;
func(v.slice<0,3>().ref());
to get func to accept the slice.
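The restriction on temporaries is plain C++, not something TooN-specific. The idiom can be sketched standalone (the Slice type below is invented for illustration and is not TooN's real slice class):

```cpp
#include <cassert>

// A slice is a temporary, non-owning view into someone else's storage.
struct Slice {
    double* data;
    Slice& ref() { return *this; }  // escape hatch: lets a temporary bind as an lvalue
};

void scale(Slice& s) { s.data[0] *= 2.0; }  // non-const reference parameter

double storage[4] = {1, 2, 3, 4};
// scale(Slice{storage});        // error: cannot bind a temporary to Slice&
// scale(Slice{storage}.ref());  // compiles: ref() returns an lvalue reference
```

The reference returned by ref() is valid until the end of the full expression, which is exactly long enough for the function call.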
You may also wish to consider writing functions that do not modify structures in place. The unit function of TooN computes a unit vector given an input vector. In the following context, the code:
//There is some Vector, which may be a slice, etc called v;
v = unit(v);
produces exactly the same compiler output as the hypothetical Normalize(v) which operates in place (for static vectors). Consult the ChangeLog entries dated ``Wed 25 Mar, 2009 20:18:16'' and ``Wed 1
Apr, 2009 16:48:45'' for further discussion.
Can I have a column major matrix?
Matrix<3, 3, double, ColMajor> m; //3x3 Column major matrix
I have a pointer to a bunch of data. How do I turn it in to a vector/matrix without copying?
To create a vector use:
double d[]={1,2,3,4};
Vector<4,double,Reference> v1(d);
Vector<Dynamic,double,Reference> v2(d,4);
Or, a functional form can be used:
double d[]={1,2,3,4};
wrapVector<4>(d); //Returns a Vector<4>
wrapVector<4,double>(d); //Returns a Vector<4>
wrapVector(d,3); //Return a Vector<Dynamic> of size 3
wrapVector<double>(d,3); //Return a Vector<Dynamic> of size 3
To create a matrix use:
double d[]={1,2,3,4,5,6};
Matrix<2,3,double,Reference::RowMajor> m1(d);
Matrix<2,3,double,Reference::ColMajor> m2(d);
Matrix<Dynamic, Dynamic, double, Reference::RowMajor> m3(d, 2, 3);
Matrix<Dynamic, 3, double, Reference::RowMajor> m4(d, 2, 3); // note two size arguments are required for semi-dynamic matrices
See also wrapVector() and wrapMatrix().
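The Reference base works because the resulting object merely stores the pointer it was handed, so no data is copied and writes go straight to the original memory. A standalone sketch of the same idea (again, not TooN's actual implementation):

```cpp
#include <cassert>
#include <cstddef>

// A non-owning vector "view" over memory that someone else manages.
// The view must not outlive the memory it references.
struct VecRef {
    double* data;
    std::size_t size;
    double& operator[](std::size_t i) { return data[i]; }
};

double raw[4] = {1, 2, 3, 4};
VecRef view{raw, 4};  // no allocation, no copy
```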
How do I write generic code?
The constructors for TooN objects are very permissive in that they accept run-time size arguments for statically sized objects, and then discard the values. This allows you to easily write generic
code which works for both static and dynamic inputs.
Here is a function which mixes up a vector with a random matrix:
template<int Size, class Precision, class Base> Vector<Size, Precision> mixup(const Vector<Size, Precision, Base>& v)
{
    //Create a square matrix, of the same size as v. If v is of dynamic
    //size, then Size == Dynamic, and so Matrix will also be dynamic. In
    //this case, TooN will use the constructor arguments to select the
    //matrix size. If Size is a real size, then TooN will simply ignore
    //the constructor values.
    Matrix<Size, Size, Precision> m(v.size(), v.size());

    //Fill the matrix with random values that sum up to 1.
    Precision sum = 0;
    for(int i=0; i < v.size(); i++)
        for(int j=0; j < v.size(); j++)
            sum += (m[i][j] = rand());
    m /= sum;

    return m * v;
}
Writing functions which safely accept multiple objects requires assertions on the sizes since they may be either static or dynamic. TooN's built in size check will fail at compile time if mismatched
static sizes are given, and at run-time if mismatched dynamic sizes are given:
template<int S1, class B1, int S2, class B2> void func_of_2_vectors(const Vector<S1, double, B1>& v1, const Vector<S2, double, B2>& v2)
{
    //Ensure that vectors are the same size
    SizeMismatch<S1, S2>::test(v1.num_rows(), v2.num_rows());
}
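This mixed compile-time/run-time check can be mimicked in a few lines (illustrative only; TooN's real SizeMismatch differs in detail). A sentinel value stands in for dynamic sizes and defers the check to run time:

```cpp
#include <cassert>

const int Dyn = -1;  // sentinel standing in for TooN's Dynamic

// Fail at compile time if two static sizes differ; fall back to a
// run-time assertion when either size is dynamic.
template<int S1, int S2>
struct SizeCheck {
    static_assert(S1 == S2 || S1 == Dyn || S2 == Dyn,
                  "static vector sizes differ");
    static void test(int n1, int n2) { assert(n1 == n2); }
};
```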
For issues relating to constness, see the related FAQ entries.
What about C++ 11 support?
TooN compiles cleanly under C++ 11, but does not require it. It can also make use of some C++11 features where present. Internally, it will make use of decltype if a C++11 compiler is present and no
overriding configuration has been set. See Typeof for more information.
Are there any examples?
Create two vectors and work out their inner (dot), outer and cross products
// Initialise the vectors
Vector<3> a = makeVector(3,5,0);
Vector<3> b = makeVector(4,1,3);
// Now work out the products
double dot = a*b; // Dot product
Matrix<3,3> outer = a.as_col() * b.as_row(); // Outer product
Vector<3> cross = a ^ b; // Cross product
cout << "a:" << endl << a << endl;
cout << "b:" << endl << b << endl;
cout << "Outer:" << endl << outer << endl;
cout << "Cross:" << endl << cross << endl;
Create a vector and a matrix and multiply the two together
// Initialise a vector
Vector<3> v = makeVector(1,2,3);
// Initialise a matrix
Matrix<2,3> M(d);
M[0] = makeVector(2,4,5);
M[1] = makeVector(6,8,9);
// Now perform calculations
Vector<2> v2 = M*v; // OK - answer is a static 2D vector
Vector<> v3 = M*v; // OK - vector is determined to be 2D at runtime
Vector<> v4 = v*M; // Compile error - dimensions of matrix and vector incompatible
How is it implemented
Static-sized vectors and matrices
One aspect that makes this library efficient is that when you declare a 3-vector, all you get are 3 doubles - there's no metadata. So sizeof(Vector<3>) is 24. This means that when you write Vector<3>
v; the data for v is allocated on the stack and hence new/delete (malloc/free) overhead is avoided. However, for large vectors and matrices, this would be a Bad Thing since Vector<1000000> v; would
result in an object of 8 megabytes being allocated on the stack and potentially overflowing it. TooN gets around that problem by having a cutoff at which statically sized vectors are allocated on the
heap. This is completely transparent to the programmer, the objects' behaviour is unchanged and you still get the type safety offered by statically sized vectors and matrices. The cutoff size at
which the library changes the representation is defined in TooN.h as the const int TooN::Internal::max_bytes_on_stack=1000;.
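The cutoff mechanism can be sketched with std::conditional (an illustration of the idea only, not TooN's implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <type_traits>

const std::size_t MaxBytesOnStack = 1000;

// Small vectors: data lives inline, so a local variable lives on the stack.
template<int N> struct StackStore {
    double data[N];
    double* ptr() { return data; }
};

// Large vectors: data lives on the heap, but the interface is identical.
template<int N> struct HeapStore {
    std::unique_ptr<double[]> data{new double[N]};
    double* ptr() { return data.get(); }
};

// The representation is chosen from the size, transparently to the user.
template<int N>
struct Vec : std::conditional<(N * sizeof(double) <= MaxBytesOnStack),
                              StackStore<N>, HeapStore<N> >::type {};
```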
When you apply the subscript operator to a Matrix<3,3>, the function simply returns a vector which points to the appropriate hunk of memory as a reference (i.e. it basically does no work apart from moving around a pointer). This avoids copying and also allows the resulting vector to be used as an l-value. Similarly, the transpose operation applied to a matrix returns a matrix which refers to the same memory but with the opposite layout, which also means the transpose can be used as an l-value, so M1 = M2.T(); and M1.T() = M2; do exactly the same thing.
Warning: This also means that M = M.T(); does the wrong thing. However, since .T() essentially costs nothing, it should be very rare that you need to do this.
Dynamic sized vectors and matrices
These are implemented in the obvious way using metadata with the rule that the object that allocated on the heap also deallocates. Other objects may reference the data (e.g. when you subscript a
matrix and get a vector).
Return value optimisation vs Lazy evaluation
When you write v1 = M * v2; a naive implementation will compute M * v2 and store the result in a temporary object. It will then copy this temporary object into v1. A method often advanced to avoid
this is to have M * v2 simply return an special object O which contains references to M and v2. When the compiler then resolves v1 = O, the special object computes M*v2 directly into v1. This
approach is often called lazy evaluation and the special objects lazy vectors or lazy matrices. Stroustrup (The C++ programming language, Chapter 22) refers to them as composition closure objects.
The killer is this: What if v1 is just another name for v2? i.e. you write something like v = M * v;. In this case the semantics have been broken because the values of v are being overwritten as the
computation progresses and then the remainder of the computation is using the new values. In this library v1 in the expression could equally well alias part of M, thus you can't even solve the
problem by having a clever check for aliasing between v1 and v2. This aliasing problem means that the only time the compiler can assume it's safe to omit the temporary is when v1 is being constructed
(and thus cannot alias anything else) i.e. Vector<3> v1 = M * v2;.
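The aliasing hazard is easy to reproduce without any library (a minimal sketch, not TooN code): computing a matrix-vector product directly into its own input corrupts values that later rows still need, while a temporary keeps the semantics intact.

```cpp
#include <cassert>

// WRONG when output aliases input: row 1 reads y[0], which row 0
// has already overwritten.
void mul_in_place(const double M[2][2], double y[2]) {
    for (int i = 0; i < 2; i++) {
        double s = 0;
        for (int j = 0; j < 2; j++) s += M[i][j] * y[j];
        y[i] = s;  // clobbers an input that later rows still need
    }
}

// Correct: accumulate into a temporary, then copy back.
void mul_via_temp(const double M[2][2], double y[2]) {
    double t[2] = {0, 0};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) t[i] += M[i][j] * y[j];
    y[0] = t[0]; y[1] = t[1];
}
```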
TooN provides this optimisation by providing the compiler with the opportunity to use a return value optimisation. It does this by making M * v2 call a special constructor for Vector<3> with M and v2
as arguments. Since nothing is happening between the construction of the temporary and the copy construction of v1 from the temporary (which is then destroyed), the compiler is permitted to optimise
the construction of the return value directly into v1.
Because a naive implementation of this strategy would result in the vector and matrix classes having a very large number of constructors, these classes are provided with template constructors that
take a standard form. The code that does this, declared in the header of class Vector is:
template <class Op>
inline Vector(const Operator<Op>& op)
    : Base::template VLayout<Size, Precision> (op)
{}
How it all really works
This documentation is generated from a cleaned-up version of the interface, hiding the implementation that allows all of the magic to work. If you want to know more and can understand idioms like:
template<int, typename, int, typename> struct GenericVBase;
template<int, typename> struct VectorAlloc;
struct VBase {
    template<int Size, class Precision>
    struct VLayout : public GenericVBase<Size, Precision, 1, VectorAlloc<Size, Precision> > {
        ...
    };
};

template <int Size, class Precision, class Base=VBase>
class Vector: public Base::template VLayout<Size, Precision> {
    ...
};
then take a look at the source code ...
Manual configuration
Configuration is controlled by internal/config.hh. If this file is empty then the default configuration will be used and TooN will work. There are several options.
TooN needs a mechanism to determine the type of the result of an expression. One of the following macros can be defined to control the behaviour:
• TOON_TYPEOF_DECLTYPE
□ Use the C++11 decltype operator.
• TOON_TYPEOF_TYPEOF
□ Use GCC's typeof extension. Only works with GCC and will fail with -pedantic
• TOON_TYPEOF___TYPEOF__
□ Use GCC's __typeof__ extension. Only works with GCC and will work with -pedantic
• TOON_TYPEOF_BOOST
□ Use the Boost.Typeof system. This will work with Visual Studio if Boost is installed.
• TOON_TYPEOF_BUILTIN
□ The default option (does not need to be defined)
□ Only works for the standard builtin integral types and std::complex<float> and std::complex<double>.
Under Win32, the builtin typeof needs to be used. Comment out all the TOON_TYPEOF_ defines to use it.
If no configuration is present and C++11 is detected, then decltype will be used.
Functions using LAPACK
Some functions use internal implementations for small sizes and may switch over to LAPACK for larger sizes. In all cases, an equivalent method is used in terms of accuracy (eg Gaussian elimination
versus LU decomposition). If the following macro is defined:
• TOON_USE_LAPACK then LAPACK will be used for large systems, where optional. The individual functions are:
• TooN::determinant is controlled by TOON_DETERMINANT_LAPACK
□ If the macro is undefined or defined as -1, then LAPACK will never be used. Otherwise it indicates the size at which LAPACK should be used.
Note that these macros do not affect classes that are currently only wrappers around LAPACK. | {"url":"http://www.edwardrosten.com/cvd/toon/html-internals/index.html","timestamp":"2014-04-20T09:14:16Z","content_type":null,"content_length":"56498","record_id":"<urn:uuid:efdf610c-569e-4aef-b1d4-60438baf8672>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electric Power
Power Dissipated in Resistor
Convenient expressions for the power dissipated in a resistor can be obtained by the use of Ohm's Law.
These relationships are valid for AC applications also if the voltages and currents are rms or effective values. The resistor is a special case, and the AC power expression for the general case
includes another term called the power factor which accounts for phase differences between the voltage and current.
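The convenient expressions referred to above are the standard ones obtained by substituting Ohm's law, V = IR, into the definition of electric power, P = VI (reproduced here since the original page presents them graphically):

```latex
P = VI = I^2 R = \frac{V^2}{R}
```

In particular, the I²R form shows that for a fixed delivered power, doubling the transmission voltage halves the current and cuts resistive line losses by a factor of four.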
The fact that the power dissipated in a given resistance depends upon the square of the current dictates that for high power applications you should minimize the current. This is the rationale for
transforming up to very high voltages for cross-country electric power distribution. | {"url":"http://hyperphysics.phy-astr.gsu.edu/hbase/electric/elepow.html","timestamp":"2014-04-17T19:08:55Z","content_type":null,"content_length":"6202","record_id":"<urn:uuid:d10af0a4-7d26-4951-86dd-5bca4cc11eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Best way to print the result of a bool as 'false' or 'true' in c?
I have to write a program in which main calls other functions that test a series of number if any are less than a number, if all the series' numbers are between two limits, and if any are negative.
My code returns the values of 1 for true and 0 for false, but the assignment asks that they be printed as 'true' or 'false'. I'm not sure how to get the bool answers to print as a string from printf.
I used if (atl == false) printf("false"); in my at_least.c and in main.c, but it returns only a long string of true or false (ex: truetruetrue....). I'm not sure if that is the correct coding and I'm
putting it in the wrong spot or there was some other code that I need to use.
This is my main.c:
#include "my.h"
int main (void)
int x;
int count = 0;
int sum = 0;
double average = 0.0;
int largest = INT_MIN;
int smallest = INT_MAX;
bool atlst = false;
bool bet = true;
bool neg = false;
int end;
while ((end = scanf("%d",&x)) != EOF)
sumall(x, &sum); //calling function sumall
larger_smaller(x, &largest, &smallest); //calling function larger_smaller
if (atlst == false)
at_least(x, &atlst); //calling function at_least if x < 50
if (bet == true)
between(x, &bet); //calling function between if x is between 30 and 40 (inclusive)
if (neg == false)
negative(x, &neg); //calling function negative if x < 0
average = (double) sum / count;
print(count, sum, average, largest, smallest, atlst, bet, neg);
my results for a set of numbers:
The number of integers is: 15
The sum is : 3844
The average is : 256.27
The largest is : 987
The smallest is : -28
At least one is < 50 : 1 //This needs to be true
All between 30 and 40 : 0 //This needs to be false
At least one is negative : 1 //This needs to be true
This is in C, which I can't seem to find much on.
Thanks in advance for your help!
This is repeated from an answer below.
This worked for the at_least and negative functions, but not for the between function. I have
void between(int x, bool* bet)
if (x >= LOWER && x <= UPPER)
*bet = false;
as my code. I'm not sure what's wrong.
Thanks again!
1 Just a side note, x == true is redundant in a Boolean expression; you can just say x. Similarly, x == false is just !x. – Jon Purdy Oct 1 '11 at 20:57
4 Answers
You could use C's conditional (or ternary) operator:
(a > b) ? "True" : "False";
or perhaps in your case:
x ? "True" : "False";
This worked for the at_least and negative functions, but not for the between function. I have void between(int x, bool* bet) { if (x >= LOWER && x <= UPPER) *bet = false;
return; } as my code. I'm not sure what's wrong. – Piseagan Oct 1 '11 at 20:28
Alternate branchless version:
"false\0true"+6*x
1 Very unreadable though. But nice solution otherwise. – DeCaf Oct 1 '11 at 14:49
12 An inline function or macro with a self-documenting name (e.g. bool2str) would fix that. – R.. Oct 1 '11 at 15:07
Very true indeed. :) – DeCaf Oct 1 '11 at 17:38
Sorry to bring this ancient question up again, but I figured I'd say that I like the solution, but a macro wouldn't work unless x was casted to a BOOL which could only be a 1 or a
0. This one took me a minute to figure out. – Josh The Geek Aug 27 '13 at 1:40
1 @JoshTheGeek: Yes, I didn't mean for it to be copied verbatim to a macro. There are a few changes you'd need to make like proper parentheses and collapse of the value to 0/1.
("false\0true"+6*!!(x)) is probably the cleanest way. – R.. Aug 27 '13 at 5:14
x ? "true" : "false"
The above expression returns a char *, thus you can use it like this:
puts(x ? "true" : "false"); or printf(" ... %s ... ", x ? "true" : "false");
You may want to make a macro for this.
1 Technically it's a const char*, not a char* (though there is a deprecated conversion from the former to the latter), but that's irrelevant here. – Adam Rosenfield Oct 1 '11 at
1 There's no point in creating a macro when an inline function can do the job. – kevin cline Oct 1 '11 at 1:49
2 No, it's char *. This is C not C++. – R.. Oct 1 '11 at 4:44
So what about this one:
#include <stdio.h>
#define BOOL_FMT(bool_expr) "%s=%s\n", #bool_expr, (bool_expr) ? "true" : "false"
int main(int iArgC, char ** ppszArgV)
{
    int x = 0;
    int y = 1;
    printf(BOOL_FMT(x));
    printf(BOOL_FMT(y));
    return 0;
}
This prints out the following:
x=false
y=true
This uses int variables, but bool should work the same way.
You probably want to add the printf call into the macro. Although because of how the comma operator works, you can use extra parentheses to do stuff like puts((BOOL_FMT(1))) to
print true. – Oscar Korz Oct 1 '11 at 17:27
{"url":"http://stackoverflow.com/questions/7617479/best-way-to-print-the-result-of-a-bool-as-false-or-true-in-c/7618231","timestamp":"2014-04-21T08:02:09Z","content_type":null,"content_length":"86369","record_id":"<urn:uuid:0975f5b4-23b8-4193-8ba7-447f65d9df76>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving Linear Equations
Systems of Linear Equations
Solving Linear Equations Graphically
Algebra Expressions
Evaluating Expressions and Solving Equations
Fraction rules
Factoring Quadratic Trinomials
Multiplying and Dividing Fractions
Dividing Decimals by Whole Numbers
Adding and Subtracting Radicals
Subtracting Fractions
Factoring Polynomials by Grouping
Slopes of Perpendicular Lines
Linear Equations
Roots - Radicals 1
Graph of a Line
Sum of the Roots of a Quadratic
Writing Linear Equations Using Slope and Point
Factoring Trinomials with Leading Coefficient 1
Writing Linear Equations Using Slope and Point
Simplifying Expressions with Negative Exponents
Solving Equations 3
Solving Quadratic Equations
Parent and Family Graphs
Collecting Like Terms
nth Roots
Power of a Quotient Property of Exponents
Adding and Subtracting Fractions
Solving Linear Systems of Equations by Elimination
The Quadratic Formula
Fractions and Mixed Numbers
Solving Rational Equations
Multiplying Special Binomials
Rounding Numbers
Factoring by Grouping
Polar Form of a Complex Number
Solving Quadratic Equations
Simplifying Complex Fractions
Common Logs
Operations on Signed Numbers
Multiplying Fractions in General
Dividing Polynomials
Higher Degrees and Variable Exponents
Solving Quadratic Inequalities with a Sign Graph
Writing a Rational Expression in Lowest Terms
Solving Quadratic Inequalities with a Sign Graph
Solving Linear Equations
The Square of a Binomial
Properties of Negative Exponents
Inverse Functions
Rotating an Ellipse
Multiplying Numbers
Linear Equations
Solving Equations with One Log Term
Combining Operations
The Ellipse
Straight Lines
Graphing Inequalities in Two Variables
Solving Trigonometric Equations
Adding and Subtracting Fractions
Simple Trinomials as Products of Binomials
Ratios and Proportions
Solving Equations
Multiplying and Dividing Fractions 2
Rational Numbers
Difference of Two Squares
Factoring Polynomials by Grouping
Solving Equations That Contain Rational Expressions
Solving Quadratic Equations
Dividing and Subtracting Rational Expressions
Square Roots and Real Numbers
Order of Operations
Solving Nonlinear Equations by Substitution
The Distance and Midpoint Formulas
Linear Equations
Graphing Using x- and y- Intercepts
Properties of Exponents
Solving Quadratic Equations
Solving One-Step Equations Using Algebra
Relatively Prime Numbers
Solving a Quadratic Inequality with Two Solutions
Operations on Radicals
Factoring a Difference of Two Squares
Straight Lines
Solving Quadratic Equations by Factoring
Graphing Logarithmic Functions
Simplifying Expressions Involving Variables
Adding Integers
Factoring Completely General Quadratic Trinomials
Using Patterns to Multiply Two Binomials
Adding and Subtracting Rational Expressions With Unlike Denominators
Rational Exponents
Horizontal and Vertical Lines | {"url":"http://www.polymathlove.com/math-tutorials/solving-non-linear-differentia.html","timestamp":"2014-04-21T09:49:11Z","content_type":null,"content_length":"40777","record_id":"<urn:uuid:c8ebb549-a06b-40f3-a955-df494e6a3053>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. A public storage company charges new customers an initial fee of $10.50. Each month of storage costs $13.50. For how many months could a customer rent storage space for $301.00? 2. The time
between a lightning flash and the following thunderclap may be used to estimate, in kilometers, how far away a storm is. How far away is a storm if 6 seconds elapse between the lightning and the
thunderclap? Use the formula d =t/3, where t is the time, in seconds, between the flash and the thunderclap. For number 1. I got 21 months and for 2. I got 2 kilometers away. Is that right?
• one year ago
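Both answers check out; here is a quick computational verification (a hedged sketch with invented function names, following the problem statements above):

```cpp
#include <cassert>

// Problem 1: months m must satisfy 10.50 + 13.50*m <= budget.
// 301.00 - 10.50 = 290.50, and 290.50 / 13.50 is about 21.5,
// so 21 whole months fit (22 months would cost $307.50).
int max_months(double budget, double initial, double per_month) {
    return (int)((budget - initial) / per_month);  // truncate to whole months
}

// Problem 2: d = t / 3 kilometers.
double storm_distance_km(double t_seconds) {
    return t_seconds / 3.0;
}
```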
{"url":"http://openstudy.com/updates/4fb4ede8e4b05565342aa8d4","timestamp":"2014-04-18T03:47:18Z","content_type":null,"content_length":"35232","record_id":"<urn:uuid:4b09f088-22ac-4715-9a4c-4133a6b9125a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Mc Cook, IL Algebra 1 Tutor
Find a Mc Cook, IL Algebra 1 Tutor
...During my two and half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by
the South Carolina Department of Education. During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2.
12 Subjects: including algebra 1, calculus, algebra 2, geometry
...I am a licensed attorney in the State of Illinois and have passed the Illinois bar exam. I have been using Microsoft Access since 2001 in a corporate setting. I use Access to gather client
phone numbers, create inventory and guest lists, and overall to track various pieces of information.
36 Subjects: including algebra 1, English, reading, writing
...I believe that World History, Art History, and Archaeology are important because they tell the story of our world. World History develops critical thinking and asks students to think through
issues unique to each time and place. Art demands that we tap into our cultural side to understand how we as people create and share ideas visually that are central to our experiences and
10 Subjects: including algebra 1, calculus, trigonometry, geometry
...I taught three of my four daughters algebra at home so they could enter high school math a year ahead. They all performed in the >90% range in the New York State algebra regents exam. I have
also tutored neighbors, both high school age and middle age adults, in math and physics.
17 Subjects: including algebra 1, reading, chemistry, statistics
...Please inquire for details. My cancellation/rescheduling policy is as follows: I require a 12-hour notice of cancellation/rescheduling. If I am notified greater than 12 hours before the
session, there is no fee to cancel or reschedule.
15 Subjects: including algebra 1, chemistry, physics, geometry
Related Mc Cook, IL Tutors
Mc Cook, IL Accounting Tutors
Mc Cook, IL ACT Tutors
Mc Cook, IL Algebra Tutors
Mc Cook, IL Algebra 2 Tutors
Mc Cook, IL Calculus Tutors
Mc Cook, IL Geometry Tutors
Mc Cook, IL Math Tutors
Mc Cook, IL Prealgebra Tutors
Mc Cook, IL Precalculus Tutors
Mc Cook, IL SAT Tutors
Mc Cook, IL SAT Math Tutors
Mc Cook, IL Science Tutors
Mc Cook, IL Statistics Tutors
Mc Cook, IL Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Argo, IL algebra 1 Tutors
Brookfield, IL algebra 1 Tutors
Countryside, IL algebra 1 Tutors
Forest View, IL algebra 1 Tutors
Hodgkins, IL algebra 1 Tutors
La Grange Park algebra 1 Tutors
La Grange, IL algebra 1 Tutors
Lyons, IL algebra 1 Tutors
Mccook, IL algebra 1 Tutors
North Riverside, IL algebra 1 Tutors
Riverside, IL algebra 1 Tutors
Summit Argo algebra 1 Tutors
Summit, IL algebra 1 Tutors
Western, IL algebra 1 Tutors
Willow Springs, IL algebra 1 Tutors | {"url":"http://www.purplemath.com/Mc_Cook_IL_algebra_1_tutors.php","timestamp":"2014-04-16T21:54:16Z","content_type":null,"content_length":"24190","record_id":"<urn:uuid:4b2c6e98-622a-4168-9a19-83dffad14fc3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
32nd Annual Math Field Day
Math Field Day
Rules for Math Bowl
The Math Bowl will be conducted in two divisions: Junior Varsity and Varsity. A Junior Varsity Team will consist of 4 students in grades 10 or lower. The Varsity Team will consist of 4 students in
grades 12 or lower. In each division, team members are to be ranked by the advisor according to ability: 1, 2, 3, and 4, with "1" being the strongest, "2" being the next strongest, etc. Each contest
will consist of 4 rounds. The number 4 mathlete of each of the competing schools will compete in Round 1, the number 3 mathlete in Round 2, the number 2 mathlete in Round 3, and the number 1 mathlete in the final round. The difficulty of questions increases after each round.
Prior to the competition, each mathlete will receive from the team advisor (who received them at morning registration) a set of answer sheets and fill in the top of each sheet with his/her name,
school, round number and question number.
At the beginning of the competition, the round 1 AND 2 mathletes go on stage, bringing their answer sheets. The round 1 student takes his/her seat and the round 2 player stands behind the chair and
will serve as checker. When everyone is in position, the answer packets will be passed out to the checkers. At this time they are not allowed to open the answers.
A question will be shown on the screen. The seated mathletes will begin to work on the problem. At the same time, the standing mathletes open the answer to that question. Each problem solver will
write the answer on the answer form in the appropriate place, in dark pencil or black or blue ink, and hand the form to the checker. The checker of the first correct answer will announce "ONE"; of the second correct answer, "TWO"; and of the third correct answer, "THREE," etc. After their rank is confirmed by the judge, the checker will mark the rank on the answer form with a provided red pen. At
the end of the allotted time, checkers will hand all correct answer sheets to a contest official who will take them to the scoring table. Incorrect answers should be placed in a pile on the floor
under the seat. Scorekeepers will award 5 points for a first, 3 for a second, 2 for a third, and 1 for all other correct answers. The scorekeepers will verify that the answers are correct. Any answer that is deemed incorrect will invoke a penalty equal to how many points that team would have gotten if the answer had been correct (e.g. an incorrect second-place answer results in 3 points deducted from the score instead of added). Also, of course, if an answer is incorrect, all lower rankings move up by one (this is why it is important to keep track of whose answer is fourth, fifth, etc.).
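The point rules above can be summarized as a small function (an illustrative sketch only, not official contest software; rank is the order in which a correct answer was announced):

```cpp
#include <cassert>

// Points earned at a given rank (1 = first, 2 = second, 3 = third,
// 4+ = all other correct answers). A wrong answer at that rank costs
// the points it would otherwise have earned.
int points(int rank, bool correct) {
    int p = (rank == 1) ? 5 : (rank == 2) ? 3 : (rank == 3) ? 2 : 1;
    return correct ? p : -p;
}
```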
At the end of round 1, the two mathletes from each school switch places to prepare for round 2. At the end of round 2, both mathletes return to the audience and the other two members of each team
take their place on stage for rounds 3 and 4. Scoring is cumulative, and the winning team is the one that accumulates the most points over the four rounds.
Furthermore, please note:
During competition there is to be no communication of any kind between the checkers and the problem solvers. Violations will be dealt with at the discretion of the math bowl director.
Though 9th and 10th graders are eligible to be on a varsity team, no contestant
may participate in both the junior varsity and varsity competitions.
A team may compete with fewer than 4 participants. A team with 1, 2, or 3
members participates in the last 1, 2, or 3 rounds.
USE OF CALCULATORS: No calculators allowed. | {"url":"http://www.csub.edu/~dgove/Math%20%20Bowl%20Rules%202004.htm","timestamp":"2014-04-19T17:03:58Z","content_type":null,"content_length":"10208","record_id":"<urn:uuid:07625a86-ffda-4129-bf34-0783621f3908>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Polkowski, Lech (ed.) et al., Rough sets in knowledge discovery 1. Methodology and applications. Heidelberg: Physica-Verlag. Stud. Fuzziness Soft Comput. 18, 286-318 (1998).
From the introduction: This paper is a sequel of the review by Y. Y. Yao, S. K. M. Wong and T. Y. Lin [A review of rough set models. T. Y. Lin (ed.) et al., Rough sets and data mining: analysis of imprecise data. Selected papers presented at a workshop of the 1995 ACM computer science conference, CSC ’95. Boston, MA: Kluwer Academic Publishers. 47-75 (1997; Zbl 0861.68101
)]. We present some new results on generalized rough set models from both the constructive and the algebraic point of view. The rest of the paper is organized as follows. From Section 2 to 5, we
concentrate on an operator-oriented view of rough sets. In Section 2, we review a constructive method of rough set theory, which builds approximation operators from binary relations. In Section 3, we
introduce and examine alternative representations of approximation operators, and transformations from one to another. In Section 4, we present an algebraic method of rough set theory. Axioms for
approximation operators are studied. In Section 5, we study the connections between the theory of rough sets and other related theories of uncertainty. Two special classes of rough set models are
studied. They are related to belief and plausibility functions, and necessity and possibility functions, respectively. Section 6 deals with a set-oriented view of rough sets based on probabilistic
rough set models and rough membership functions. It enables us to draw connections between rough sets and fuzzy sets. The notion of interval rough membership functions is introduced. For simplicity,
we restrict our discussion to finite and nonempty universes. Some of the results may not necessarily hold for infinite universes.
68T30 Knowledge representation
Oscillations, quasi-oscillations and joint continuity.
(English) Zbl 1216.54005
The following property of mappings of two variables is introduced. A function $f:X×Y\to ℝ$ is called quasi-separately continuous at a point $(x_0,y_0)$ if: (1) the $x_0$-section of $f$ is continuous at $y_0$, and (2) for every finite set $F\subset Y$ and every $\epsilon >0$ there is an open set $V\subset X$ with ${x}_{0}\in \text{cl}\left(V\right)$ such that $|f(x,y)-f(x_0,y)|<\epsilon$ for all $x\in V$ and $y\in F$. A function $f$ is quasi-separately continuous provided it is quasi-separately continuous at each point $\left(x,y\right)\in X×Y$. It is shown that if $X$ is a separable Baire space and $Y$ is compact then every quasi-separately continuous function $f:X×Y\to ℝ$ has the Namioka property, i.e., there exists a dense ${G}_{\delta }$ set $D\subset X$ such that $f$ is jointly continuous at each point of $D×Y$. To prove this result the author introduces a new version of a topological game and the notion of quasi-oscillation of a function. The game is a modification of the Saint-Raymond game [J. Saint Raymond, Proc. Am. Math. Soc. 87, 499–504 (1983; Zbl 0511.54007)].
54C08 Weak and generalized continuity
54C05 Continuous maps
26B05 Continuity and differentiation questions (several real variables)
54C30 Real-valued functions on topological spaces
91A44 Games involving topology or set theory | {"url":"http://zbmath.org/?q=an:1216.54005&format=complete","timestamp":"2014-04-18T15:52:31Z","content_type":null,"content_length":"24852","record_id":"<urn:uuid:12df18ab-d531-4b83-b544-e1982170ba5b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symmetry solutions and conservation laws for some partial differential equations in fluid mechanics
ABSTRACT In jet problems the conserved quantity plays a central role in the solution process. The conserved quantities for laminar jets have been established either from physical arguments or by integrating Prandtl's momentum boundary layer equation across the jet and using the boundary conditions and the continuity equation. This method of deriving conserved quantities is not entirely systematic and in problems such as the wall jet requires considerable mathematical and physical insight. A systematic way to derive the conserved quantities for jet flows using conservation laws is presented in this dissertation. Two-dimensional, radial and axisymmetric flows are considered and conserved quantities for liquid, free and wall jets for each type of flow are derived. The jet flows are described by Prandtl's momentum boundary layer equation and the continuity equation. The stream function transforms Prandtl's momentum boundary layer equation and the continuity equation into a single third-order partial differential equation for the stream function. The multiplier approach is used to derive conserved vectors for the system as well as for the third-order partial differential equation for the stream function for each jet flow. The liquid jet, the free jet and the wall jet satisfy the same partial differential equations but the boundary conditions for each jet are different. The conserved vectors depend only on the partial differential equations. The derivation of the conserved quantity depends on the boundary conditions as well as on the differential equations. The boundary conditions therefore determine which conserved vector is associated with which jet. By integrating the corresponding conservation laws across the jet and imposing the boundary conditions, conserved quantities are derived. This approach gives a unified treatment to the derivation of conserved quantities for jet flows and may lead to a new classification of jets through conserved vectors.
The conservation laws for second order scalar partial differential equations and systems of partial differential equations which occur in fluid mechanics are constructed using different approaches. The direct method, Noether's theorem, the characteristic method, the variational derivative method (multiplier approach) for arbitrary functions as well as on the solution space, symmetry conditions on the conserved quantities, the direct construction formula approach, the partial Noether approach and the Noether approach for the equation and its adjoint are discussed and explained with the help of an illustrative example. The conservation laws for the non-linear diffusion equation for the spreading of an axisymmetric thin liquid drop, the system of two partial differential equations governing flow in the laminar two-dimensional jet and the system of two partial differential equations governing flow in the laminar radial jet are discussed via these approaches.
The group invariant solutions for the system of equations governing flow in two-dimensional and radial free jets are derived. It is shown that the group invariant solution and similarity solution are the same. The similarity solution to Prandtl's boundary layer equations for two-dimensional and radial flows with vanishing or constant mainstream velocity gives rise to a third-order ordinary differential equation which depends on a parameter. For specific values of the parameter the symmetry solutions for the third-order ordinary differential equation are constructed. The invariant solutions of the third-order ordinary differential equation are also derived.
How to Convert Decimals to Ratios
Converting decimals to ratios can be tricky at times. Many nursing students do not find basic math calculations easy in nursing school, so I will show you how to convert decimals to ratios in a quick and easy manner.
To change a decimal to a ratio without much trouble, you need to understand that percentages play a key role: you convert a decimal to a ratio by writing the decimal as a percentage first.
Hence, if you had to change the decimal number 0.25 to a ratio, you would first write 0.25 as a percentage by moving the decimal point two places to the right. Because "percent" means "out of 100", moving the decimal point two places is the same as multiplying 0.25 by 100.
Doing so produces 25%, which is written as the fraction 25 / 100.
Now reduce the fraction 25 / 100, with 25 as the numerator and 100 as the denominator. Dividing the numerator and denominator by 5 twice (or by 25 once) reduces the fraction to its lowest terms, 1 / 4.
The ratio is then written as 1 : 4. This 1 : 4 is your answer when 0.25 is changed to a ratio.
You should now know how to convert decimals to ratios easily after reading everything here.
Tips & Warnings
• In the fraction step, I reduced 25 / 100 to its lowest terms, which gave 1 / 4.
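If you like to double-check the arithmetic with a computer, the same conversion can be sketched with Python's fractions module (my own example, not part of the method above):

```python
from fractions import Fraction

def decimal_to_ratio(d):
    """Convert a decimal to a ratio string by reducing the
    equivalent fraction to lowest terms (e.g. 0.25 -> '1 : 4')."""
    f = Fraction(d).limit_denominator()  # 0.25 -> Fraction(1, 4)
    return f"{f.numerator} : {f.denominator}"

print(decimal_to_ratio(0.25))  # 1 : 4
```

This does in one step what the percentage method does by hand: 0.25 = 25/100, reduced to 1/4, written as 1 : 4.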
One factor of 4x^3+15x^2-31x-30 is x-2. Find the other factors
You could use long division or synthetic division. Your choice.
with long division
Well to start with you know that \[4x^2 \times (x-2) = 4x^3-8x^2\] so just like you would do long division with numbers you can also do it with variables, like this: \[ 4x^3+15x^2 -\bigg[4x^3-8x^2 \bigg]=23x^2\] You drop the -31x down, just like in long division, to get \[23x^2 -31x\] and then divide this by (x-2) again, and so on
so \[4x^3+15x^2 -\bigg[4x^3-8x^2 \bigg]=23x^2\] \[23x\times (x-2) = 23x^2-46x\] \[23x^2-31x -\bigg[23x^2-46x\bigg]=+15x\] \[15\times(x-2)=15x-30\] \[+15x-30 -\bigg[15x-30 \bigg]=0\] so \[4x^3+15x
^2-31x-30 \to (x-2)(4x^2+23x+15)\] and you can factor the rest... hopefully
Explaining how to do long division or synthetic division is a real pain on open study, I would suggest you take the time outside of this to study it from a book with examples. Or go online and
search for a youtube video. It isn't hard.
thank you
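For anyone curious, the synthetic-division shortcut mentioned at the start of the thread can be sketched in a few lines of Python (the helper name is mine):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients in descending order)
    by (x - r); returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])  # bring down, multiply by r, add
    return out[:-1], out[-1]

# Divide 4x^3 + 15x^2 - 31x - 30 by (x - 2):
q, rem = synthetic_division([4, 15, -31, -30], 2)
print(q, rem)  # [4, 23, 15] 0  -> quotient 4x^2 + 23x + 15, remainder 0
```

The zero remainder confirms (x - 2) is a factor, and 4x^2 + 23x + 15 factors further as (4x + 3)(x + 5).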
On the complexity of vertex-coloring edge-weightings
Andrzej Dudek, David Wajc
Given a graph G = (V, E) and a weight function w : E → ℝ, a coloring of the vertices of G induced by w is defined by χ_w(v) = ∑_{e ∋ v} w(e) for all v ∈ V. In this paper, we show that determining whether
a particular graph has a weighting of the edges from {1, 2} that induces a proper vertex coloring is NP-complete.
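To make the decision problem concrete, here is a brute-force check over all 2^|E| edge-weightings (an illustrative sketch of my own, feasible only for tiny graphs; the NP-completeness result says no efficient method is expected in general):

```python
from itertools import product

def has_proper_12_weighting(n, edges):
    """Does some w: E -> {1,2} make chi_w(v) = sum of weights of edges
    incident to v differ across every edge (a proper vertex coloring)?"""
    for w in product((1, 2), repeat=len(edges)):
        chi = [0] * n
        for (u, v), wt in zip(edges, w):
            chi[u] += wt
            chi[v] += wt
        if all(chi[u] != chi[v] for (u, v) in edges):
            return True
    return False

print(has_proper_12_weighting(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True (a 4-cycle works)
print(has_proper_12_weighting(2, [(0, 1)]))  # False (both endpoints of a single edge get the same sum)
```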
Hi - I have written a simple macro that takes a column of data & creates 3 temperature columns of data tempK, tempC, tempF. These are actually Names that I create.
When I run the macro on a new file, it doesn't seem to re-define tempK (for instance) as a cell range, and instead tries to use the original tempK cell range defined when I had created the macro...
How can I modify my macro so that the tempK Name is re-defined anew every time the macro is invoked ?
Sub temp_uniformity()
    Selection.Cut Destination:=Range("B25:J25")
    Range("L25").Select                    ' header cells assumed from the fill ranges below
    ActiveCell.FormulaR1C1 = "tempK"
    Range("M25").Select
    ActiveCell.FormulaR1C1 = "tempC"
    Range("N25").Select
    ActiveCell.FormulaR1C1 = "tempF"
    Range("L26").Select
    ActiveCell.FormulaR1C1 = "=(RC[-8]/0.0000000000366)^0.25"
    Selection.AutoFill Destination:=Range("L26:L5025"), Type:=xlFillDefault
    ActiveWorkbook.Names.Add Name:="tempK", RefersToR1C1:= _
        "=R26C12:R5025C12"                 ' reference reconstructed from the AutoFill range
    Range("M26").Select
    ActiveCell.FormulaR1C1 = "=RC[-1]-273.15"
    Selection.AutoFill Destination:=Range("M26:M5025"), Type:=xlFillDefault
    ActiveWorkbook.Names.Add Name:="tempC", RefersToR1C1:= _
        "=R26C13:R5025C13"                 ' reconstructed
    Range("N26").Select
    ActiveCell.FormulaR1C1 = "=(RC[-2]-273.15)*1.8+32"
    Selection.AutoFill Destination:=Range("N26:N5025"), Type:=xlFillDefault
    ActiveWorkbook.Names.Add Name:="tempF", RefersToR1C1:= _
        "=R26C14:R5025C14"                 ' reconstructed
    Range("P26").Select                    ' result cells assumed
    Selection.FormulaArray = "=STDEV(IF(tempK=0,"""",tempK))"
    Range("P27").Select
    Selection.FormulaArray = "=STDEV(IF(tempC=-273.15,"""",tempC))"
    Range("P28").Select
    Selection.FormulaArray = "=STDEV(IF(tempF=-459.67,"""",tempF))"
End Sub
Thanks !
Exploiting Data-Independence for Fast Belief-Propagation
Julian McAuley and Tiberio Caetano
International Conference on Machine Learning , 2010.
Maximum a posteriori (MAP) inference in graphical models requires that we maximize the sum of two terms: a data-dependent term, encoding the conditional likelihood of a certain labeling given an
observation, and a data-independent term, encoding some prior on labelings. Often, the data-dependent factors contain fewer latent variables than the data-independent factors -- for instance, many
grid and tree-structured models consist of only first-order conditionals despite having pairwise priors. In this paper, we note that MAP-inference in any such graphical model can be made
substantially faster by appropriately preprocessing its data-independent terms. Our main result is to show that message-passing in any such pairwise model has an expected-case exponent of only $1.5$
on the number of states per node, leading to significantly faster algorithms than the standard quadratic time solution.
Calculating the Pi constant - rcor cs
The mathematical constant π (Greek pi) is commonly used in mathematics. It is also known as Archimedes' constant.
π is an irrational number, which means that its value cannot be expressed exactly as a fraction m/n, where m and n are integers. Consequently, its decimal representation never ends or repeats.
This post shows a program that calculates an estimation of the
π constant. This approach is based on the attached file Pi as an integral.
The advantage of this approach is that it calculates the π with a summation, it means that the task can be easily divided among computers so that the final result, that is the π estimation, will be
the sum of the result of each computer.
The following program is a naive implementation of the method shown in Pi as an integral.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
	long double pi = 0.0;
	long long int n = 9999999999;
	long long int i;
	for( i = 1; i <= n; i++ ){
		pi += sqrt(
			(long double)1.0 -
			pow( (long double)(2.0*i)/n - (long double)1.0, 2 )
		)*((long double)2.0/n);
	}
	pi *= (long double)2.0;
	printf("%.20Lf\n", pi);
	return 0;
}
Note that the greater is n the more precise will be the estimation.
For this example the value calculated for π was 3.141592653589788512... .
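As a quick cross-check (not part of the original post), the same Riemann sum can be written in Python with a much smaller n:

```python
import math

def estimate_pi(n):
    """Right-endpoint Riemann sum of sqrt(1 - x^2) over [-1, 1] at
    x = 2*i/n - 1, doubled -- the same summation as the C program."""
    total = 0.0
    for i in range(1, n + 1):
        x = 2.0 * i / n - 1.0
        total += math.sqrt(1.0 - x * x) * (2.0 / n)
    return 2.0 * total

print(estimate_pi(100_000))  # ~3.14159..., already accurate to several decimals
```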
by rcor
I'm not sure why but I dreamed about this equation
Re: I'm not sure why but I dreamed about this equation
Hi Maiya
That is just an equality that many people find intriguing because it connects 5 major constants in math: e, pi, i, 1 and 0.
The limit operator is just an excuse for doing something you know you can't.
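In case it helps: the equality being described is presumably Euler's identity, e^(i*pi) + 1 = 0, which is easy to check numerically (my own sketch, not from the thread):

```python
import cmath
import math

# Euler's identity ties together e, pi, i, 1 and 0:
value = cmath.exp(1j * math.pi) + 1
print(abs(value))  # ~1.2e-16, i.e. zero up to floating-point rounding
```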
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
three-dimensional space
The topic three-dimensional space is discussed in the following articles:
linear algebra
• TITLE: mathematics; SECTION:
Linear algebra
...spaces. These are sets whose elements can be added together and multiplied by arbitrary numbers, such as the family of solutions of a linear differential equation. A more familiar example is
that of three-dimensional space. If one picks an origin, then every point in space can be labeled by the line segment (called a vector) joining it to the origin. Matrices appear as ways of
• TITLE: mechanics (physics); SECTION:
Projectile motion
Projectile motion may be thought of as an example of motion in space—that is to say, of three-dimensional motion rather than motion along a line, or one-dimensional motion. In a suitably
defined system of Cartesian coordinates, the position of the projectile at any instant may be specified by giving the values of its three coordinates, x(t), y(t), and...
Outcomes and Assessment Plan
The Department of Mathematics has adopted the following Student Learning Outcomes for our programs, which are divided into two subsets. The first is applicable to all our classes and programs while
the second applies only to our majors. Following our outcomes below, we detail our recently adopted Assessment Plan for Majors, as well as describe the main Assessment Tools we use to evaluate our
Student Outcomes
We expect all students who complete math classes to demonstrate the ability to:
a) understand and utilize course contents at an appropriate level;
b) use problem solving skills by developing a strategic overview of a mathematical situation and using this overview to analyze that situation;
c) recognize that a problem can have different useful representations (graphical, numerical, or symbolic) and select the most appropriate methods and formats;
d) model real world problems mathematically and interpret the results appropriately;
e) use appropriate software and technological tools and judge when such use is helpful;
f) communicate mathematical results and arguments clearly, both orally and in writing;
g) appreciate the central role of mathematics in the sciences and the wider world.
The Mathematics Department offers a total of 10 major options, including combined majors with other departments. These can be grouped into three degree categories: Bachelor of Arts, Bachelor of Arts
in Education and Bachelor of Science.
Below is a list of student learning outcomes that are relevant to one or more of our major options. None of our majors require the achievement of all of them. A table below summarizes which of these
outcomes we expect for each of our major options.
In completing a major in the Mathematics Department, for each of the following items relevant to that major we expect a student to demonstrate:
1) Mastery of the essentials of core lower division mathematics courses: calculus and linear algebra (Core Math);
2) Understanding of the importance of abstraction and rigor in mathematics, ability to construct complete proofs and to critically examine the correctness of mathematical work and logical arguments (
3) Knowledge of concepts and techniques from a variety of mathematical areas, by demonstrating understanding of material in upper division courses in at least two of the following disciplines:
abstract algebra, differential equations, geometry, linear algebra, mathematical analysis, number theory, optimization, numerical analysis and probability and statistics (Breadth);
4) Awareness of the historical context of areas of mathematics studied and familiarity with major contributions of some prominent mathematicians of the past and present (History);
5) In-depth understanding of at least two mathematical subjects at an advanced level, by showing understanding of material in a second course of a sequence in these subjects (Depth);
6) Completion of the appropriate professional preparation program, including the earning of the appropriate professional certification (Certification).
The table below summarizes which student learning outcomes we expect for each of our major options. The combined majors combine in-depth study of another discipline with the mathematics most relevant
for that subject.
│Degree/Major │Core Math│Rigor│Breadth│History│Depth│Certification│
│BA Math │ X │ X │ X │ X │ X │ │
│BA Econ/Math │ X │ X │ X │ X │ │ │
│BAE Math Elem │ X │ X │ X │ X │ │ X │
│BAE Math Sec │ X │ X │ X │ X │ │ X │
│BAE Chem/Math │ X │ X │ X │ X │ │ X │
│BAE Phys/Math │ X │ X │ X │ X │ │ X │
│BS Math │ X │ X │ X │ X │ X │ │
│BS Applied Math │ X │ X │ X │ X │ X │ │
│BS Math/CS │ X │ X │ X │ X │ X │ │
│BS Bio/Math │ X │ │ X │ X │ X │ │
The following table indicates when and how we will assess each of the outcomes for majors over the next six years, realizing that the results of and experience with assessment in the beginning of
this schedule may suggest changes to this schedule and the way in which the outcomes themselves are assessed. ES stands for Exit Survey, given to each of our graduating students, and Ct stands for
Count. More specific comments about the assessment of each of them follow.
│ │2010-11 │2011-12 │2012-13 │2013-14 │2014-15 │2015-16 │
│Core Math* │ Grades, ES │Grades, ES │Grades, ES │Grades, ES │Grades, ES │Grades, ES │
│Rigor │ 312, ES │ ES │ 302, ES │ ES │ 360, ES │ ES │
│Breadth │ Ct, ES │331, Ct, ES│ Ct, ES │304, Ct, ES│ Ct, ES │341, Ct, ES│
│History │ Ct, ES │ Ct, ES │ Ct, ES │ Ct, ES │ Ct, ES │ Ct, ES │
│Depth │430/432, Ct, ES │ Ct, ES │475, Ct, ES│ Ct, ES │402, Ct, ES│ Ct, ES │
│Certification│ Count │ Count │ Count │ Count │ Count │ Count │
Core Math: Assessed every year. Since success in Math 224 depends heavily on success in Math 124 and 125, we will record the grades of graduating seniors in Math 224 and 204. Since calculus and Math
204 classes are made up largely of non-majors, assessment of learning in those courses does not tell us how our majors are achieving this outcome. Exit Survey. *In later courses that do require the
mastery of this material, instructors who are teaching courses that are used for assessment of other outcomes (say Math 331 or 341) could be encouraged to collect data to measure how well students
understand the core material.
Rigor: Assessed by in-class performance every other year, using a three-course cycle (Math 312, 302, 360). Instructor of each section of that course could count students who "met/exceeded/did not
meet expectations" concerning, for example, the ability to independently construct a complete and correct proof of a theorem not seen before. How this will be measured would be up to the individual
instructor, but there should be agreement among instructors about what the expectations are. Data collected and used to improve course, if warranted. Exit Survey.
Breadth: Assessed by in-class performance every other year, using a three-course cycle (Math 331, 304, 341). These courses are taken by a large number of students from all of our major options. At
the beginning of the year, instructors of course used for assessment agree on which course objectives to measure that year. Instructors choose how to assess the achievement of those objectives in
their classes. Data compiled and used to improve course, if warranted. Count of number of different areas studied (successfully) at upper division level by graduating seniors (every year). Exit
Survey (New question needed).
History: Assessed every year. Since many instructors incorporate history into their classes as time permits and when appropriate, this is perhaps best measured by the question on the exit survey. It
seems that most of our majors take Math 419, although it is required only of our BA and BAE students. Count the number of graduating students who take Math 419. Maintain a list of topics of term
papers completed by students in Math 419 to document what students actually study outside of class, and maintain an archive of completed term papers.
Depth: Assessed by in-class performance every other year, using the following courses: Math 402, Math 475, and Math 430 and/or 432. Most of these courses, with the possible exception of 475, are
taken by a large number of students. Math 475 is required for the BS Applied Math major, and 435 is required for the Operations Research concentration for the BS Applied. Similar to assessment in the
breadth category (except that there will typically only be one section of each course used for assessment of this outcome): instructor of the course could choose some combination of homework and exam
questions clearly connected to course objectives to measure student understanding of course material. Data collected and analyzed to improve course, if warranted. Count the number of sequences
successfully completed by graduating seniors. Exit survey (new question needed).
Certification: Assessed every year. Count the number of students who get certification.
Review of Assessment Data and Activities
The department's assessment coordinator will collect data from the instructors of the courses used for assessment, and assist those instructors in their assessment activities. This person will also
analyze the data from the exit survey and from the analysis of transcripts of graduating majors. The results of all of these activities will be reviewed and discussed in the department's curriculum
committee. This committee will also decide what action and additional assessment activities, if any, should be taken as a result of the collected information.
Here we describe some of the tools which are used broadly to assess many aspects of our program.
Student Exit Surveys
Exit surveys are administered to all graduating students with majors in mathematics, including joint majors and math education majors.
The survey contains multiple choice questions as well as opportunities for more extensive open-ended responses.
These surveys are distributed and collected by our office staff, who compile the results and present them for review to the Department Chair and the Undergraduate Committee.
The survey seeks undergraduate student input in four areas:
(i) how well the department and the student's particular academic program are perceived to have satisfied each of the listed desired student learning outcomes for our programs;
(ii) an evaluation of other aspects of the student's experience in the department, such as the quality of advisement and teaching, the range and availability of courses offered, and the quality of
computing facilities.;
(iii) identifying what the department does particularly well;
(iv) identifying areas in which the department might improve.
A copy of the Undergraduate Student Exit Survey may be found here.
Graduate Students
Exit surveys are administered to all graduate prior to completing their degree programs.
The survey contains multiple choice questions as well as opportunities for more extensive open-ended responses.
These surveys are distributed and collected by our office staff, who compile the results and present them for review to the Department Chair and the Graduate Committee.
The survey seeks graduate student input in six areas, with a number of specific topics in each area being addressed:
(i) how well the department and the student's particular academic program are perceived to have satisfied each of the listed desired student learning outcomes for our programs;
(ii) the quality of the academic program, such as the range and quality of the courses offered and required;
(iii) the admission process, academic advising and wider professional support;
(iv) their experience in their roles as graduate teaching assistants;
(v) the physical environment, such as office space and computing facilities;
(vi) the departmental human / social environment.
A copy of the Graduate Student Exit Survey may be found here.
Individual Course Assessment
Course Goals and Assessment
Every class syllabus includes a detailed set of desired student learning outcomes, formulated specifically for the content and other goals of that course. Such outcomes are typically measured in the
course of the usual student evaluation process, namely as particular items in examinations or components of student assignments. Data to assess the extent to which each desired outcome is met is
accumulated by flagging the particular exam items or assignments that relate to that particular goal and then collecting data that documents the level of student success. Faculty maintain records on
the particular course goals, related assessment items, and measured levels of student success on those items. The data is analyzed to determine whether the associated goal is being satisfactorily met
or needs to be addressed further by curricular or instructional changes. Records of the relevant items, data and faculty responses are maintained by individual faculty.
Lists of the course objectives / student learning outcomes (both generic and specific) for each course taught are maintained in the department office and shared amongst faculty.
Skills Tests
To ensure that the goals of basic computational proficiency are met in the most fundamental courses, namely the pre-calculus and calculus sequences, we have established skills tests for students in
those classes. This requires that students score at least 80% on a test focused entirely on one particular skill (such as differentiating 10 functions of specific types). Failure to pass the test
(which may be taken, in different forms, several times) results in either lowering the course grade by one letter or course failure (depending on the course). This distinguishes such fundamental
skills from the higher order goals of these courses, and is a very effective tool in forcing students to acquire the necessary basic computational skills.
Feedback Loop
The primary conduit for curricular and programmatic review and change is the Math Curriculum Committee. This committee formulated the overall program objectives, coordinates the formulation of the
particular course objectives, and is responsible for all aspects of the outcomes assessment process and data review. All the assessment data from individual faculty and the student surveys is
available to that group. The Curriculum Committee meets regularly to discuss programmatic initiatives and course modifications.
While such changes have historically been at the instigation of particular faculty responding to perceived needs in specific areas, or sometimes at the behest of the Chair promoting more extensive
programmatic changes in response to perceived weaknesses in the program or changes in the field, these discussions have not generally been driven by data on outcomes assessment. This has now changed,
with the relevant assessment data being available to drive and direct the discussion. While this is unlikely to accelerate the on-going evolutionary changes in the nature and sequencing of most
courses, it has facilitated more rapid and substantial changes to the structure and emphases of courses offered in multiple sections, enhancing the consistency of these offerings and the extent to
which all students meet the desired learning outcomes.
A Tangled Tale
Problem. — (1) Two travelers, starting at the same time, went opposite ways round a circular railway. Trains start each way every 15 minutes, the easterly ones going round in 3 hours, the westerly in
2. How many trains did each meet on the way, not counting trains met at the terminus itself? (2) They went round, as before, each traveler counting as “one” the train containing the other traveler.
How many did each meet?
Answers. — (1) 19. (2) The easterly traveler met 12; the other 8.
The trains one way took 180 minutes, the other way 120. Let us take the l.c.m., 360, and divide the railway into 360 units. Then one set of trains went at the rate of 2 units a minute and at
intervals of 30 units; the other at the rate of 3 units a minute and at intervals of 45 units. An easterly train starting has 45 units between it and the first train it will meet: it does 2/5 of this
while the other does 3/5, and thus meets it at the end of 18 units, and so all the way round. A westerly train starting has 30 units between it and the first train it will meet: it does 3/5 of this
while the other does 2/5, and thus meets it at the end of 18 units, and so all the way round. Hence if the railway be divided, by 19 posts, into 20 parts, each containing 18 units, trains meet at
every post, and, in (1) each traveler passes 19 posts in going round, and so meets 19 trains. But, in (2), the easterly traveler only begins to count after traversing 2/5 of the journey, i.e. on
reaching the 8th post, and so counts 12 posts: similarly, the other counts 8. They meet at the end of 2/5 of 3 hours, or 3/5 of 2 hours, i.e. 72 minutes.
Forty-five answers have been received. Of these, 12 are beyond the reach of discussion, as they give no working. I can but enumerate their names: Ardmore, E. A., F. A. D., L. D., Matthew Matticks, M.
E. T., Poo-Poo, and The Red Queen are all wrong. Beta and Rowena have got (1) right and (2) wrong. Cheeky Bob and Nairam give the right answers, but it may perhaps make the one less cheeky, and
induce the other to take a less inverted view of things, to be informed that, if this had been a competition for a prize, they would have got no marks. (N.B. — I have not ventured to put E. A.‘s name
in full, as she only gave it provisionally, in case her answer should prove right.)
Of the 33 answers for which the working is given, 10 are wrong; 11 half-wrong and half-right; 3 right, except that they cherish the delusion that it was Clara who traveled in the easterly train — a
point which the data do not enable us to settle; and 9 wholly right.
The 10 wrong answers are from Bo-Peep, Financier, I. W. T., Kate B., M. A. H., Q. Y. Z., Sea-Gull, Thistle-Down, Tom-Quad, and an unsigned one. Bo-Peep rightly says that the easterly traveler met all
trains which started during the 3 hours of her trip, as well as all which started during the previous 2 hours, i. e. all which started at the commencements of 20 periods of 15 minutes each; and she
is right in striking out the one she met at the moment of starting; but wrong in striking out the last train, for she did not meet this at the terminus, but 15 minutes before she got there. She makes
the same mistake in (2). Financier thinks that any train, met for the second time, is not to be counted. I. W. T. finds, by a process which is not stated, that the travelers met at the end of 71
minutes and 26½ seconds. Kate B. thinks the trains which are met on starting and arriving are never to be counted, even when met elsewhere. Q. Y. Z. tries a rather complex algebraic solution, and
succeeds in finding the time of meeting correctly: all else is wrong. Sea-Gull seems to think that, in (1), the easterly train stood still for 3 hours; and says that, in (2) the travelers meet at the
end of 71 minutes 40 seconds. Thistledown nobly confesses to having tried no calculation, but merely having drawn a picture of the railway and counted the trains; in (1) she counts wrong; in (2) she
makes them meet in 75 minutes. Tom-Quad omits (1); in (2) he makes Clara count the train she met on her arrival. The unsigned one is also unintelligible; it states that the travelers go “1/24 more
than the total distance to be traversed”! The “Clara” theory, already referred to, is adopted by 5 of these, viz., Bo Peep, Financier, Kate B., Tom-Quad, and the nameless writer.
The 11 half-right answers are from Bog-Oak, Bridget, Castor, Cheshire Cat, G. E. B., Guy, Mary, M. A. H., Old Maid, R. W., and Vendredi. All these adopt the "Clara" theory. Castor omits (1). Vendredi
gets (1) right, but in (2) makes the same mistake as Bo-Peep. I notice in your solution a marvellous proportion-sum: “300 miles: 2 hours:: one mile: 24 seconds.” May I venture to advise your
acquiring, as soon as possible, an utter disbelief in the possibility of a ratio existing between miles and hours? Do not be disheartened by your two friends’ sarcastic remarks on your “roundabout
ways”. Their short method, of adding 12 and 8, has the slight disadvantage of bringing the answer wrong: even a “roundabout” method is better than that! M. A. H., in (2) makes the travelers count
“one” after they met, not when they met. Cheshire Cat and Old Maid get “20” as answer for (1), by forgetting to strike out the train met on arrival. The others all get “18” in various ways. Bog-Oak,
Guy and R. W. divide the trains which the westerly traveler has to meet into 2 sets viz., those already on the line which they (rightly) make “11”, and those which started during her 2 hours’ journey
(exclusive of train met on arrival), which they (wrongly) make "7"; and they make a similar mistake with the easterly train. Bridget (rightly) says that the westerly traveler met a train every 6
minutes for 2 hours, but (wrongly) makes the number “20”; it should be “21”. G. E. B. adopts Bo-Peep’s method, but (wrongly) strikes out (for the easterly traveler) the train which started at the
commencement of the previous 2 hours. Mary thinks a train met on arrival must not be counted, even when met on a previous occasion.
The 3 who are wholly right but for the unfortunate “Clara” theory, are F. Lee, G. S. C., and X. A. B.
And now “descend, ye classic ten!” who have solved the whole problem. Your names are Aix-les-Bains, Algernon Bray (thanks for a friendly remark, which comes with a heart-warmth that not even the
Atlantic could chill), Arvon, Bradshaw of the Future, Fifee, H. L. R., J. L. O., Omega, S. S. G., and Waiting for the Train. Several of these have put Clara, provisionally, into the easterly train: but
they seem to have understood that the data do not decide that point.
Class List
H. L. R.
Algernon Bray.
Bradshaw of the Future.
S. S. G.
Waiting for the Train.
F. Lee.
G. S. C.
X. A. B.
Coronado Prealgebra Tutor
Find a Coronado Prealgebra Tutor
...I earned a 35 on my ACT Exam. Since then, I have been attending UCSD, from which I will be graduating in June. In the fall, I will be leaving San Diego to attend medical school.
42 Subjects: including prealgebra, English, Spanish, writing
...Within the first session, I am able to pinpoint exactly what is hindering a student from understanding the subject matter. From there, I work with the student to help them understand the basics
of the problem by using effective math methods, such as real life mathematical concepts. This is very important, because math is a series of building blocks.
35 Subjects: including prealgebra, reading, writing, English
...As well as middle school mathematics and language arts. I also have experience tutoring college algebra and statistics. I am a certified elementary (K-6) teacher in Kansas.
18 Subjects: including prealgebra, reading, geometry, GED
...I hold myself to the highest standards and understand that hard work and dedication can help any person succeed at whatever it is they are working towards. Therefore my goal in tutoring is to
keep the student challenged but engaged by creating each lesson specifically to match their interests. I specialize in tutoring math, reading, writing and study skills.
23 Subjects: including prealgebra, English, reading, writing
I am a full time student at San Diego State University, and am working to become an elementary school teacher or a middle school science teacher. My goal is to instill a love of learning in all my
future students. I hope to do this by making the subjects I teach more enjoyable and relevant to my students' lives.
13 Subjects: including prealgebra, reading, English, writing
Coordinate geometry
April 1st 2008, 06:38 AM #1
Jul 2007
Coordinate geometry
Dear forum members,
I have a following problem
A circle touches both the x-axis and the y-axis and passes through the point (1,2). Find the equation of the circle. How many possibilities are there?
My solution
If the circle touches the x-axis and the y-axis, it means that both the x-axis and the y-axis are tangents to the circle, meaning that the distance from the centre to these tangents equals the
radius, right? And the distance from the point (1,2) to the centre equals the radius as well.
Using a distance-from-a-line formula, if the center is marked as (x,y), I get that the distance from the center to the y-axis is |x|, and the distance from the center to the x-axis is |y|.
That means |x| = |y|.
the distance from the point (1,2) to (x,y) is
$(x-1)^2+(y-2)^2=y^2$, which simplifies to $x^2-2x-4y+5=0$ (I plugged in y instead of the radius)
My question is, is the above argument |y| = |x| good enough for me to be able to substitute x in place of y in the above equation?
Thank you in advance!
All your considerations are OK.
I would use C(r, r) as the center of the circle. Thus the distance between the given point and the center of the circle must be r too:
$(1-r)^2+(2-r)^2 = r^2$
Solve for r. I've got (1,1) or (5,5)
Ok, thank you so much!
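The quadratic in the reply above can be double-checked numerically; a small Python sketch:

```python
import math

# Centre (r, r), tangent to both axes, passing through (1, 2):
# (1 - r)^2 + (2 - r)^2 = r^2, i.e. r^2 - 6r + 5 = 0.
a, b, c = 1, -6, 5
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(roots)                    # [1.0, 5.0] -> centres (1,1) and (5,5)

# Sanity check: both circles really pass through (1, 2).
for r in roots:
    assert math.isclose((1 - r) ** 2 + (2 - r) ** 2, r ** 2)
```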
Law of series
Law of series
From Scholarpedia
Say that a series is noted when a random event (usually extremely rare) happens several (at least two) times in a relatively short period of time (much shorter than the intuitive average waiting
time), without an obvious reason for such untimely repetitions. In colloquial meaning the law of series is the belief that such series happen more often than they should by "pure chance". This belief
is usually associated with another, that there exists some unexplained physical force or statistical rule behind this "law".
Mystery or science
• In July 1891 Charles Wells went to Monte Carlo with 4,000 pounds that he had defrauded from investors in a bogus invention, a "musical jump rope". In an eleven-hour session Wells "broke the bank"
twelve times, winning a million francs. At one stage he won 23 times out of 30 successive spins of the wheel. Wells returned to Monte Carlo in November of that year and won again. During this
session he made another million francs in three days, including successful bets on the number five for five consecutive turns. Despite hiring private detectives the Casino never discovered
Wells's system; Wells later admitted it was just a lucky streak. His system was the high-risk martingale, doubling the stake to make up losses. (after Wikipedia, Charles Wells)
• For 250 years, people had wondered whether a certain bell-ringing pattern -- called "common Bob Stedman triples" -- could be rung. On 22 January 1995 a team at St John's Church, London, finally
succeeded. Within days it emerged that two other groups, both working independently, had also solved the centuries-old mystery. (after Robert Matthews)
• In Lower Silesia, Poland, two major floods occurred in 1997 and 1998, after many years of silence.
Debate around the law of series
Serial occurrences of certain types of events is perfectly understandable as a result of physical dependence. Various events reveal increased frequency of occurrences in so-called periods of
propitious conditions, which in turn, follow a slowly changing process. For example, volcanic eruptions appear in series during periods of increased tectonic activity, floods prevail in periods of
global warming. In some other cases, the first occurrence physically provokes subsequent repetitions. A good example here are series of people falling ill due to a contagious disease. The dispute
around the law of series clearly concerns only such events for which there are no obvious clustering mechanisms, and they are expected to appear completely independently from each-other, and yet,
they do appear in series. With this restriction the law of series belongs to the category of unexplained mysteries, such as synchronicity, telepathy or Murphy's Law, and is often considered a
manifestation of paranormal forces that exist in our world and escape scientific explanation. It is a subject of a long-lasting controversy centered around two questions:
• 1. Does there indeed exist a law of series in reality or is it just an illusion, a matter of our selective perception or memory?
• 2. Assuming it does exist, what could it be caused by?
This debate has avoided strict scientific language; even its subject is not precisely defined, and it is difficult to imagine appropriate repetitive experiments in a controlled environment. Thus, in
this approach, the dispute is probably fated to remain an exchange of speculations.
There is also a scientific approach, embedded in the ergodic theory of stochastic processes. Surprisingly, the study of stochastic processes supports the law of series against the skeptic point of
The Austrian biologist Dr. Paul Kammerer (1880-1926) was the first scientist to study the law of series (law of seriality, in some translations). His book Das Gesetz der Serie (Kammerer, 1919) contains many examples from his own life and the lives of those near him. Here is a sample:
(22) On July 28, 1915, I experienced the following progressive series: (a) my wife was reading about "Mrs Rohan", a character in the novel Michael by Hermann Bang; in the tramway she saw a man who
looked like her friend, Prince Josef Rohan; in the evening Prince Rohan dropped in on us. (b) In the tram she overheard somebody asking the pseudo-Rohan whether he knew the village of Weissenbach at
Lake Attersee, and whether it would be a pleasant place for a holiday. When she got out of the tram, she went to the delicatessen shop on the Naschmarkt, where the attendant asked her whether she
happened to know Weissenbach on Lake Attersee - he had to make a delivery by mail and did not know the correct postal address.
Richard von Mises, in his book (von Mises, 1981), describes how Kammerer conducted many (rather naive) experiments, spending hours in parks noting occurrences of pedestrians with certain features (glasses, umbrellas, etc.), or in shops, noting the precise arrival times of clients, and the like. Kammerer "discovered" that the number of time intervals (of a fixed length) in which the number of objects under observation agrees with the average is much smaller than the number of intervals where that number is either zero or larger than the average. This, he argued, provided evidence for
clustering. From today's perspective, Kammerer merely noted the perfectly normal spontaneous clustering of signals in the Poisson process. Nevertheless, Kammerer's book attracted some attention of
the public and even of some serious scientists toward the phenomenon of clustering. Kammerer himself lost authority due to accusations of manipulating his biological experiments (unrelated to our
topic), which eventually drove him to suicide.
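Kammerer's interval counts are exactly what a neutral (Poisson) model predicts. A quick simulation sketch (the unit intervals and the mean of 1 signal per interval are arbitrary illustrative choices made here):

```python
import random

# Signals arrive as a Poisson process with rate 1 per unit interval.
random.seed(1)
T = 100_000                       # number of unit intervals observed
counts = [0] * T
t = 0.0
while True:
    t += random.expovariate(1.0)  # i.i.d. exponential gaps between signals
    if t >= T:
        break
    counts[int(t)] += 1

# Intervals hitting the average (exactly one signal) vs. all the others:
exactly_avg = sum(c == 1 for c in counts) / T
others = 1 - exactly_avg
print(round(exactly_avg, 2), round(others, 2))  # ≈ 0.37 vs ≈ 0.63
```

Intervals with zero signals or more than one outnumber those hitting the average, with no hidden force at work.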
Pauli and Jung
Examples of series are, in the literature, mixed with examples of other kinds of "unbelievable" coincidences. Their list is long and fascinating, but quoting them would lead away from the subject.
Pioneer theories about coincidences (including series) were postulated not only by Kammerer, but also by the noted Swiss psychologist Carl Gustav Jung (1875-1961) and the Austrian Nobel prize winner in physics Wolfgang Pauli (1900-1958). They believed that there exist undiscovered physical "attracting" forces driving objects that are alike, or have common features, closer together in time and space (the so-called theory of synchronicity). See Jung's book Synchronicity: An Acausal Connecting Principle.
The law of series and synchronicity interests the investigators of spirituality, magic and parapsychology. It fascinates with its potential to generate "meaningful coincidences". A Frenchman, Jean
Moisset (born 1924), a self-educated specialist in parapsychology, wrote a number of books on synchronicity, law of series, and similar phenomena. He connects the law of series with psychokinesis and
claims that it is even possible to use it (Moisset, 2000). It is believed that Adolf Hitler, apart from hiring an astrologist and a fortune-teller to help him plan, also trusted that "forces of synchronicity" could be employed for a purpose.
In opposition to the theory of synchronicity is the belief, represented by many statisticians, among others by an American mathematician, Warren Weaver (1894-1978), that any series, coincidences and
the like, appear exclusively by pure chance and that there is no mysterious or unexplained force behind them. Around the world in every instant of time reality combines so many different names,
numbers, events, etc., that there is nothing unusual if some combinations considered series or "unbelievable coincidences" occur here and there from time to time. Every such coincidence has nonzero
probability, which implies not only that it can, but even must occur, if sufficiently many trials are performed. People's perception has the tendency to ignore all those sequences of events, which do
not posses the attribute of being unusual, so that we largely underestimate the enormous number of "failures" accompanying every single "successful" coincidence. Human memory registers coincidences
as more frequent simply because they are more distinctive. This is the "mysterious force" behind synchronicity. A similar point of view is explained by Robert Matthews in his essay The laws of freak
With regard to series of repetitions of identical or similar events, the skeptics' argumentation refers to the effect of spontaneous clustering. For an event, to repeat in time by "pure chance" means
to follow a trajectory of a Poisson process. In a typical realization of a Poisson process the distribution of signals along the time axis is far from being uniform; the gaps between signals are
sometimes bigger, sometimes smaller. Places where several smaller gaps accumulate (which obviously happens here and there along the time axis) can be interpreted as "spontaneous clusters" of signals.
It is nothing but these natural clusters that are being observed and over-interpreted as the mysterious "series". It is this kind of "seriality" that has been seen by Kammerer in most of his observations.
Yet another "cool-minded" explanation of synchronicity (including the law of series) asserts that very often events that seem unrelated (hence should appear independently of each-other) are in fact
strongly related. Many "accidental" coincidences or series of similar events, after taking a closer look at the mechanisms behind them, can be logically explained as "not quite accidental".
"Ordinary" people simply do not bother to seek the logical connection. After all, it is much more exciting to "encounter the paranormal".
Mathematical approach
A systematic, purely mathematical approach to the phenomenon can be found in papers of Downarowicz, Lacroix et al. (Downarowicz and Lacroix, 2006, Downarowicz, Lacroix and Leandri, preprint,
Downarowicz, Grzegorek and Lacroix, preprint). The law of series is formally defined in terms of ergodic theory of stochastic processes, and linked to the notion of attracting (or clustering) of
occurrences of an event. Using entropy theory, it has been proved that in non-deterministic processes for events of certain type (rare cylinder sets) the opposite effect to attracting, i.e.,
repelling can be at most marginal, while in the majority of processes such events will in fact reveal strong attracting properties. This can be regarded as a positive answer to the question 1 of the
general debate. Also some answer to the question 2 can be deduced. The details are described in the following section. Clearly, the theory investigates the behavior of mathematical models, not of
reality itself, hence one can continue to speculate whether it applies or not to the law of series in the colloquial meaning.
Law of series in ergodic theory
The starting point is the assertion that the clustering observed for signals arriving completely independently from each-other (i.e., forming a Poisson process) will be considered neutral, i.e.,
neither attracting nor repelling. Attracting (and repelling) is defined as the deviation of a signal process from the Poisson process toward stronger (weaker) clustering. It turns out that both
deviations can be defined in terms of only one variable associated with the signal process, namely with the waiting time. The precise meaning is stated below.
Formal definitions
A signal process is a continuous time stochastic process i.e., one-parameter family of random variables \((X_t)_{t\ge 0}\) defined on a probability space \((\Omega,P)\) and assuming integer values,
with the following two properties: 1. \(X_0 = 0\) almost surely, 2. the trajectories \(t\mapsto X_t(\omega)\) are almost surely nondecreasing in \(t\ .\) Clearly, the trajectories must have
discontinuities (jumps from one integer to a higher one). These jumps are interpreted as signals. A signal process is homogeneous if for any fixed \(s\ge 0\) the finite-dimensional distributions of \((X_t)\) are the same as those of \((X_{t+s}-X_s)\ .\) An example of a homogeneous signal process is the Poisson process.
Given a homogeneous signal process, the waiting time is the random variable defined on \(\Omega\) as the time of the first signal after time 0: \[V(\omega) = \inf\{t: X_t(\omega)\ge 1\}.\]
Assume that \(X_1\) has finite and nonzero expected value, denoted by \(\lambda\) and called the intensity of the process. Let \(F\) denote the distribution function of the waiting time \(V\ .\) It
is well known that the waiting time of a Poisson process has exponential distribution, i.e., it satisfies \[F(t) = 1 - e^{-\lambda t}\] (indeed, \(1-F(t)\) is the probability of seeing no signals in the interval \([0,t]\ ,\) which for the Poisson process equals \(e^{-\lambda t}\)). The key notions of attracting and repelling are defined below:
Definition 1. Consider a homogeneous signal process with intensity \(\lambda\ .\) The signals attract each other from a distance \(t>0\ ,\) if \(F(t)< 1-e^{-\lambda t}\ .\) Analogously, the signals
repel each other from a distance \(t>0\ ,\) if \(F(t)> 1-e^{-\lambda t}\ .\) The difference \(|1-e^{-\lambda t}-F(t)|\) is called the intensity of attracting (or repelling) at \(t\ .\)
Why is attracting (repelling) defined as above? By elementary properties of homogeneous processes, it is seen that the expected number of signals \(EX_t\) in the interval of time \([0,t]\) equals \(\lambda t\ .\) The value \(F(t)\) is the probability that there will be at least one signal in this interval. Hence the ratio \(\frac {\lambda t}{F(t)}\) represents the conditional expectation of the
number of signals in \([0,t]\) for all these \(\omega\in\Omega\ ,\) for which at least one signal is observed there. We now compare this expected value with an analogous value computed for the
Poisson process with the same intensity \(\lambda\ .\) The numerators \(\lambda t\) are the same for both processes. So, this conditional expectation in the process \((X_t)\) is larger than in the
Poisson process if and only if \(F(t)< 1-e^{-\lambda t}\ .\) In such case, if we observe the process for time \(t\ ,\) there are two possibilities: either we detect no signals, or, once the first
signal occurs, we can expect a larger global number of observed signals than if we were dealing with the Poisson process. The first signal attracts further signals. By stationarity, the same happens
in any interval \([s,s+t]\) of length \(t\ ,\) contributing to an increased clustering effect. Repelling is the converse: the first signal lowers the expected number of signals in the observation
period, contributing to a decreased clustering, and a more uniform distribution of signals in time.
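Definition 1 can be probed numerically. The bursty toy process below is our own arbitrary construction (any clustered process would do): its empirical waiting-time distribution stays below the exponential benchmark, which is attracting in the sense of Definition 1.

```python
import math, random

# Build a bursty 0/1 signal sequence: occasionally a 5-slot burst begins,
# during which each slot carries a signal with probability 1/2.
random.seed(2)
T = 200_000
signal = [False] * T
t = 0
while t < T:
    if random.random() < 0.01:                  # a burst starts here
        for k in range(t, min(t + 5, T)):
            signal[k] = random.random() < 0.5
        t += 5
    else:
        t += 1

lam = sum(signal) / T                           # empirical intensity

# Waiting time from each moment: distance to the nearest signal at or after it.
dist, d = [0.0] * T, math.inf
for s in reversed(range(T)):
    d = 0.0 if signal[s] else d + 1
    dist[s] = d

F = lambda u: sum(x < u for x in dist) / T      # empirical F(t)
for u in (10, 50, 200):
    print(u, round(F(u), 3), round(1 - math.exp(-lam * u), 3))
# At each distance, F(t) < 1 - e^(-lam*t): the signals attract each other.
```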
If a given process reveals attracting from some distance and repelling from another, the tendency to clustering is not clear and depends on the applied time perspective. However, if there is only
attracting (without repelling), then at any time scale we shall see the increased clustering. Thus it is natural to say that
Definition 2. The signal process obeys the law of series if the following two conditions hold:
• 1. there is no repelling from any distance and
• 2. there is attracting from at least one distance.
With this definition, all speculations about the law of series in a perfectly independent process are ruled out. However, in practice perfect independence of arriving signals never happens. It is
only approximate; in the universe there exist residual connections for any pair of events. Any reasonable interpretation of the mathematical law of series should postulate that these residual
dependencies cannot generate repelling, while they can and in many cases do generate attracting of occurrences for certain events. This is exactly what the theorems below say, which can be regarded
as an (at least partial) answer to the question 2 in the general debate. They support the hypothesis that in processes occurring in reality (at least in their mathematical models) attracting is a
much more common phenomenon than repelling. Moreover, strong attracting prevails for some types of events.
The theorems
There are so far three major results in this direction. They concern signal processes associated with ergodic measure preserving transformations (here referred to as master processes). In the master
process one fixes a measurable set \(B\) of small probability (a so called rare event) and obtains a signal process by letting the signals be the occurrences of the event \(B\) in the realization of
the master process. This signal process is homogeneous and has intensity \(\lambda\) equal to the measure of \(B\) (by the ergodic theorem). Although such a signal process has discrete time, for
events of very small probability the signals are so rare that the increment of time becomes relatively very small and the time can be considered continuous. Two theorems describe the behavior of such
signal processes, where the event \(B\) is a cylinder set, i.e., the occurrence of a fixed finite sequence of symbols (a word) in the symbolic representation of the master process generated by a
finite partition. The last result concerns slightly larger events, more fit to modeling some types of experiments in reality.
Theorem 1. (Downarowicz and Lacroix, 2006) Consider an ergodic measure preserving transformation \((X,\mu, T)\) and a finite measurable partition \(\mathcal P\) of \(X\ .\) If the corresponding
symbolic system is not deterministic (i.e., has positive entropy) then for every \(\epsilon >0\) the joint measure of all cylinders (words) of length \(n\) which reveal repelling (for any \(t\)) with
intensity exceeding \(\epsilon\) converges to zero as \(n\) tends to infinity.
Interpretation: The majority of sufficiently long words do not reveal repelling (other than marginal). Note that Theorem 1 says nothing about attracting. It only claims that repelling decays as \(n\)
tends to infinity. This corresponds to postulate 1 in the Definition 2 of the law of series.
Theorem 2. (Downarowicz, Lacroix and Leandri, preprint) In every measure preserving system \((X,\mu, T)\) the symbolic system associated with a finite partition \(\mathcal P\) of \(X\) has, for a
typical (in the sense of category) such partition, the following property: There exists a subset of natural numbers of upper density 1, such that all cylinders associated to words of lengths \(n\)
from this subset reveal attracting with intensity close to 1.
Interpretation: Although neutral processes cannot be theoretically eliminated (the Poisson process exists), no process in reality fits precisely to this perfect independent model. Perfect
independence is only theoretic, and in practice - approximate. This gives room for non-neutral behavior in nearly any process. This non-neutral behavior happens to be the attracting. This corresponds
to the postulate 2 in Definition 2. For example, consider a perfectly independent process with finitely many states, (e.g. the process of flipping the coin). Clearly, occurrences of any long word
form a Poisson process, hence are neutral. Now perturb slightly the generating partition. Typically the new process will now reveal strong attracting for all words of "special" lengths belonging to a
rather large set of integers.
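A toy counterpart of the cylinder events in the theorems can be computed for very short words in fair coin flips (Theorem 1 is asymptotic, so short words may still repel): the self-overlapping word "111" attracts, since runs of ones produce several occurrences in a row, while the non-overlapping word "100" repels at this distance.

```python
import math, random

random.seed(3)
N = 200_000
bits = "".join(random.choice("01") for _ in range(N))

def F_vs_poisson(word, t):
    """Empirical P(an occurrence of `word` starts within t steps of a random
    moment) vs. the benchmark 1 - e^(-lam*t), lam = the word's frequency."""
    occ = [bits.startswith(word, i) for i in range(N)]
    lam = sum(occ) / N
    within = sum(any(occ[s:s + t]) for s in range(N - t)) / (N - t)
    return round(within, 3), round(1 - math.exp(-lam * t), 3)

print(F_vs_poisson("111", 8))   # empirical F well below the benchmark: attracting
print(F_vs_poisson("100", 8))   # empirical F above the benchmark: repelling
```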
The above theorems have a certain weakness: they deal with signals that are repetitions of one and the same long word. In reality, one is more interested in occurrences of similar events rather than repetitions of precisely the same event. For example, when noting the repetitions of some meteorological phenomenon, say a tornado in Denton County, Texas, every tornado differs from the others in many parameters. Yet they are all classified as "the same" event: a tornado in Denton County, Texas. This leads to studying the occurrences of events consisting not of one but of several united cylinders (we will call them composite events). The problem becomes seriously difficult when the number of added cylinders grows exponentially with their length. A special but very natural case of a composite event of this kind occurs when one agrees to identify all cylinders (words) that differ from a specific word \(B\) on a small percentage of coordinates; that is, the event is a ball with respect to the Hamming distance in the space of words of a certain length. Such a model fits very well the type of experiment in which the observed event is positively recognized whenever it is sufficiently similar to some master pattern. The following theorem deals with this case.
Theorem 3. (Downarowicz, Grzegorek and Lacroix, preprint) For every measure preserving system \((X, \mu, T)\) of positive entropy, and every sufficiently small \(\delta>0\) (depending only on the
cardinality of the partition) the symbolic system associated with a finite partition \(\mathcal P\) of \(X\) has, for a typical (in the sense of category) such partition, the following property:
There exists a subset of natural numbers of upper density 1, such that for every word \(B\) of length \(n\) from this subset the composite event \(B^\delta\) (the \(\delta\)-ball around \(B\) in the
Hamming distance) reveals attracting with intensity close to 1.
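The composite event \(B^\delta\) is easy to make concrete (an illustration; the word, its length, and \(\delta\) below are arbitrary choices): over a binary alphabet, the \(\delta\)-ball around a word of length \(n\) unites all cylinders differing from it on at most \(\delta n\) coordinates, and the number of such cylinders, \(\sum_{k\le\delta n}\binom{n}{k}\), grows exponentially with \(n\) for fixed \(\delta\), as noted above.

```python
from itertools import product
from math import comb

def hamming_frac(u, v):
    """Fraction of coordinates on which the words u and v differ."""
    return sum(a != b for a, b in zip(u, v)) / len(u)

def in_delta_ball(u, center, delta):
    """Membership test for the composite event B^delta (Hamming ball)."""
    return hamming_frac(u, center) <= delta

B, delta = "10110100", 0.25          # n = 8; the ball allows up to 2 flipped bits
ball = ["".join(w) for w in product("01", repeat=len(B))
        if in_delta_ball("".join(w), B, delta)]

# Over a binary alphabet the ball size is sum_{k <= delta*n} C(n, k):
n = len(B)
print(len(ball), sum(comb(n, k) for k in range(int(delta * n) + 1)))  # 37 37
```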
It must be understood that the above theorems apply to a rather limited variety of real events. First, not every event can be described as a word with respect to some partition or as a ball in the Hamming distance around some word; most events have the structure of complicated unions of many dissimilar cylinders of different lengths. Second, even if an event is a single word, the theorems require that word to be very long. Certainly they do not govern the single-number outcomes of roulette spins or multi-number lotto drawings, or incidental repetitions of names of people (several letters in length) encountered by someone during his life.
Nonetheless, these theorems can be applied to some types of phenomena, for example in genetics, computer science, or data transmission, where one deals with really long strings of symbols. Beyond such possible applications, these results have a philosophical meaning: even if the observed process is believed to be completely independent, tiny imperfections of the independence make the occurrences of events of some kind indeed obey the law of series. Thus, the law of series is not merely an illusion or an unexplained paranormal phenomenon, but a rigorous statistical law. It is now a matter of further investigation (not speculation) to extend the range of its applicability.
• Diaconis, P. and Mosteller, F.: Methods for Studying Coincidences, Journal of the American Statistical Association, Vol. 84, 1989
• Downarowicz, T. and Lacroix, Y.: The law of series, preprint
• Downarowicz, T., Lacroix, Y. and Leandri, D.: Spontaneous clustering in theoretical and some empirical stationary processes, ESAIM: P&S, to appear
• Downarowicz, T., Grzegorek, P. and Lacroix, Y.: Attracting and repelling in stationary signal processes - a survey, preprint
• Jung, C. G.: Synchronicity: An Acausal Connecting Principle (1952), 2nd ed., Princeton, N.J.: Princeton University Press, 1973
• Kammerer, P.: Das Gesetz der Serie, eine Lehre von den Wiederholungen im Lebens- und im Weltgeschehen, Stuttgart und Berlin, 1919
• von Mises, R.: Probability, Statistics and Truth, New York, Dover, 1981
• Moisset, J.: La loi des séries, JMG Editions, 2000
• Koestler, A.: The Case of the Midwife Toad, London, Hutchinson, 1971
Kearny, NJ Statistics Tutor
Find a Kearny, NJ Statistics Tutor
...I have taken both AP Macroeconomics and AP Microeconomics. Having written college essays and essays for AP English, I am very familiar with the proofreading process. People call me a "grammar
nazi" and find my attention to proper spelling somewhat obsessive.
43 Subjects: including statistics, English, calculus, reading
...Now, I am more than ready and able to explain all facets of algebra to any struggling student. My goal is to make whatever is challenging you most seem easy. I'm very familiar with Algebra 1.
15 Subjects: including statistics, geometry, algebra 1, algebra 2
...I am really good at reading music. I ran track in high school. I ran in the 400 meter, 400 meter hurdles, and the 200 meter. I also participated in the high jump and the long jump.
16 Subjects: including statistics, geometry, precalculus, elementary math
...Do you have a student who has difficulties with writing such as generating or getting ideas onto paper, organizing writing and grammatical problems? Do you have a student whose life you would
like to enrich with piano lessons? If you answered yes to any of these questions, I can help.
30 Subjects: including statistics, English, piano, reading
...I am a patient and effective professor and alter my teaching style to meet the learning needs of my students. In addition to teaching, I have tutored students in statistics for the past six
years and have worked with students on writing/editing skills outside of the classroom. While I have work...
9 Subjects: including statistics, reading, writing, social studies
M-Theory and the Higgs boson
The discovery of a potential Higgs boson particle plays a crucial role in super-symmetry - just one more of the ingredients needed to provide evidence of the M-Theory of strings, writes Dr Henryk Frystacki.
Energetic pieces of threads may finally explain all four fundamental forces of nature and our perceived reality with space, time, matter and motion.
The basic elements of this so far purely mathematical concept are so-called 'strings' and 'membranes' - subatomic one-dimensional energy threads and built areas. The mere vibrations of tiny strings and membranes, only about a hundredth of a billionth of a billionth of the size of an atomic nucleus, generate everything: all elements of the periodic system, the vacuum of space and progressive time.
Acceptance of this 'theory of everything' relies on the super-symmetry of forces and matter.
Particle physicists need proof of super-symmetry, which is also needed to explain their contemporary model of weakly interacting massive particles (WIMPs) - particles currently supposed to form extensive, galaxy-stabilising dark matter halos that apparently provide the majority of matter in the universe.
The announcement by CERN last week that there is a high probability that the new particle they've found is the Higgs boson is an important step toward doing this.
Enter the 11th dimension
String theories, which emerged in the 1980s, postulate that 10 dimensions exist in nature. Only Einstein's three-dimensional space and one-dimensional time are 'rolled out', the other six spatial
dimensions are 'curled up' and invisible.
Varying approaches led to different mathematical solutions and descriptions. Five variants seemed to be promising, but did not yet produce suitable solutions for all existing elementary particles,
space, time, and quantum gravity.
Then in 1994, the so-called M-Theory caused a second superstring revolution. It attempts to unify all five previously developed theories, introducing an 11th dimension and a staggering amount of
mathematical solutions. The M-Theory considers those five set-ups to describe the same, but from different perspectives.
M-Theory formulates relationships between each of the five previous theories, calling those relationships 'dualities'. Each duality provides a mathematical solution to convert one string theory into
another. The 11th dimension is supposed to acquire sufficient energy to infinitely expand.
String specialists ponder on a 'floating membrane' and consider the existence of our universe along such a membrane. Infinite parallel universes accompany our universe with their own floating
membranes. Leakages between those universes lead to a mathematically feasible concept of gravity.
One distinctive feature of the M-theory is the assumed existence of multidimensional spaces within any single point of space and time. Endless string solutions are the result, creating far too many
variations to find the suitable ones randomly; but powerful computers may help scientists find feasible results.
All elementary particles that have been observed are either fermions or bosons; fermions are supposed to build all known types of matter and elementary bosons are either photons or W- and Z-bosons or
gluons. Photons carry the forces of the electromagnetic fields. W- and Z-bosons mediate a weak force of radioactive decay and neutrino interactions, and gluons the strong force in the atomic nuclei.
A feasible solution for quantum gravity would be necessary to cover all four fundamental forces of nature.
The bosons challenge string physicists most; currently they need 26 dimensions for a boson string theory, meaning 15 dimensions on top of the M-Theory.
The Higgs quantum field and the Higgs boson play a crucial role in providing proof of super-symmetry because they give elementary particles a mass by spontaneous breaking of electroweak symmetry; the
Higgs boson is an excitation of the Higgs quantum background field above its ground state.
The basic theories for all elementary particles need getting accustomed to because each material particle is described as a distinguishable excitation state of basic energy strings and areas with
quantum mechanical aspects.
The classical observations of nature completely fade in the imaginations of theoretical string physicists. The quantised approach to all forces and energies of nature already challenges these
scientists from the very beginning. For example, look at a simple electron: like any photon, any electron either behaves as concentrated particle or spreading wave, only depending on the set-up of
the experiment. This peculiarity has been called the dualism of wave and particle.
Quantum physicists handle this remaining inexplicable contradiction by the superposition of several possible states and conditions. There is only a probability that any one of these states and conditions takes place. The whole of the possible states is mathematically expressed by so-called 'wave' functions. Any single result of an observation appears accidentally. In this way, quantum physics can predict atomic processes with extraordinarily high precision.
Rotational symmetry
A 'theory of everything' also requires rotational symmetry concepts of space-time. Rotational symmetry describes a successive exchange of physical quantities and states by energy impacts, for example
of a length into time, time into energy density, energy density into time compression and, closing the circle, back into a space length.
Einstein described these rotational features in his theory of relativity by energy tensors and rotary functions. This circular exchange chain of physical quantities and states has been proven
experimentally, but now needs completion with additional dimensions.
Quantum physics enters this picture by innovative time compression, representing the opposite function of time dilation.
Cosmology will strongly influence further development of string theory and theory of everything.
We postulate that the accelerating expansion of the universe, explained by dark energy, is being driven by scalar fields. Fields of this kind serve as a description of changing super-symmetries that
have their origin in one single type of initial force. These fields determine the development of the hierarchy of today's fundamental forces of nature. Rotational space-time symmetry accommodates the
types of scalar fields that are needed to explain the peculiar negative pressure and adiabatic nature of dark energy. It explains the location and nature of the Higgs quantum field as well.
The M-Theory may soon culminate in the successful programming of a powerful computer, but only the experimental proofs of super-symmetry and the identification of the circular exchange chain of
parameters will open a new chapter in the contemporary standard model of physics.
About the author: Dr Henryk Frystacki is the author of Einstein's Ignorance of Dark Energy. Dr Frystacki earned a PhD in Applied Physics and Engineering from the Technical University of Munich. He is an external board member of the Institute for Gravitation and the Cosmos at Penn State, and a member of the Russian Academy of Technical Sciences.
Published 10 July 2012 | {"url":"http://www.abc.net.au/science/articles/2012/07/10/3542763.htm?topic=enviro","timestamp":"2014-04-18T15:05:43Z","content_type":null,"content_length":"64355","record_id":"<urn:uuid:c8f5c49d-2c6d-4f04-a57a-2899355e17c1>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
lossless compression techniques
DATA COMPRESSION AND HUFFMAN ALGORITHM - posted by smart paper boy, 14 July 2011.
A Pseudo Lossless Image Compression Method - posted by smart paper boy, 20 June 2011.
In this study, decorrelation is performed by subtraction between adjacent pixels; the subtraction is a one-dimensional version of differential pulse code modulation (DPCM).
C. Reversible image compression
Arithmetic coding represents a message as an interval of real numbers between 0 and 1. It exploits the distribution of the image histogram by assigning wider sub-intervals to the most frequently occurring amplitudes and narrower ones to the others, so that frequent symbols cost fewer bits.
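The decorrelation step can be illustrated in one dimension (a hedged sketch with an invented "scanline", not the study's actual pipeline): subtracting adjacent pixels concentrates the histogram on a few small residual values, lowering the zeroth-order entropy that a subsequent entropy coder, such as an arithmetic coder, can exploit - while remaining perfectly invertible.

```python
from collections import Counter
from itertools import accumulate
from math import log2

def entropy(xs):
    """Empirical zeroth-order entropy in bits per symbol."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

# A smooth synthetic "scanline": neighbouring pixels are highly correlated.
pixels = [100 + round(20 * (i % 50) / 50) for i in range(5000)]

# 1-D DPCM decorrelation: keep the first pixel, then adjacent differences.
residuals = [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

# The prefix sums of the residuals recover the pixels exactly (lossless):
assert list(accumulate(residuals)) == pixels

print(round(entropy(pixels), 2), round(entropy(residuals), 2))
```

The residual entropy is far below the raw-pixel entropy, which is exactly the headroom an entropy coder turns into compression.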
III. RESULTS (cont.)
Our method modifies the noi…
An Improved Lossless Image Compression Algorithm (LOCO-R) - posted by project topics, 28 April 2011.
…Weinberger, Seroussi and Sapiro; with modifications and improvements, the algorithm clearly reduces the implementation complexity. Experiments show that this algorithm outperforms Rice compression, typically by around 15 percent.
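For context, the Rice compression used as the benchmark above is built on the Golomb-Rice code. Below is a minimal sketch of that code alone (the parameter k is fixed by hand here; real coders such as LOCO estimate it adaptively, and this is not an implementation of LOCO-R itself):

```python
def rice_encode(m, k):
    """Rice code of a nonnegative integer m with parameter k:
    unary quotient (q ones, then a terminating zero), then k remainder bits."""
    q, r = m >> k, m & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                       # length of the unary quotient
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)

# Small values -- the typical prediction residuals of a lossless image
# coder -- get short codes; large, rare values pay with a long unary part.
for m in (0, 1, 5, 18):
    print(m, rice_encode(m, 2))

for m in (0, 1, 5, 18, 100):
    assert rice_decode(rice_encode(m, 2), 2) == m   # the code is lossless
```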
Fractal Image Compression - posted by computer science crazy, 20 September 2008.
The redundancies in images cannot be easily detected, and certain minute details in pictures can also be eliminated while storing, so as to reduce the number of pixels. These can then be incorporated while reconstructing the image for minimum error. This is the basic idea behind image compression. Most image compression techniques are said to be lossy, as they reduce the information being stored.
The present method being employed consists of storing the image by eliminating the high-frequency Fourier coefficients and storing only the low-frequency coefficients. This is the principle…
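The "present method" described above - keep the low-frequency Fourier coefficients, drop the rest - can be sketched in one dimension with a naive DFT (an illustration on an invented signal; real image coders work in 2-D and quantize rather than simply zero coefficients): when the signal genuinely contains only low frequencies, the truncation loses nothing; otherwise it is lossy.

```python
from math import cos, sin, pi, sqrt

def dft(xs):
    n = len(xs)
    return [sum(x * complex(cos(-2 * pi * k * t / n), sin(-2 * pi * k * t / n))
                for t, x in enumerate(xs)) for k in range(n)]

def idft(cs):
    n = len(cs)
    return [sum(c * complex(cos(2 * pi * k * t / n), sin(2 * pi * k * t / n))
                for k, c in enumerate(cs)).real / n for t in range(n)]

signal = [cos(2 * pi * t / 64) + 0.3 * cos(6 * pi * t / 64) for t in range(64)]
coeffs = dft(signal)

# "Store" only the low-frequency coefficients; zero the rest (the lossy step):
kept = [c if min(k, 64 - k) <= 4 else 0 for k, c in enumerate(coeffs)]
approx = idft(kept)

err = sqrt(sum((a - b) ** 2 for a, b in zip(signal, approx)) / 64)
print(round(err, 6))   # 0.0 here: this signal fits entirely in the kept band
```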
Data Compression Techniques - posted by computer science crazy, 20 September 2008.
…on is based on knowledge about colour images and human perception.
Lossless Compression
In this type of compression no information is lost during…
Solar Power Generation - posted by computer science crazy, 20 September 2008.
…the panel. In late morning or afternoon, we cannot get maximum output from the panel. Thus, for maximum utilization of the Sun throughout the day, the panel must be kept perpendicular to the Sun. This is possible only by tra…
Electrical Seminar Topics List - posted by computer science crazy, 20 September 2008.
…i diagrams
Motors Without Mechanical Transmissions
Distribution System Relaying
Print Verification System
White LED: The Future Lamp
Robotics and its Applications
12 Phase Capacitor
Electric Field Optimization of High Voltage Electrodes Based on Neural Networks
Electrowetting
Pumped Hydroelectric Energy Storage
Voltage Sag Analysis
Modelling of Transformers with Internal Incipient Faults
Superconducting Generator
Telluric Current
Transient Overvoltages in Electrical Distribution Systems and Suppression Techniques
Intrusion Detection with Snort
Frequency and Time Domain Tec…
Mechanical Seminar Topics List - posted by computer science crazy, 20 September 2008.
Variable Speed Drives
Durable Prototyping
Simple Constitutive Models for Linear and Branched Polymers
Hydrogen Fuel Tank
Portable Power
Cryogenic Ball Valves
Computer Modelling
LASER Sintering
In Mould Lamination Technique
Thermostatic Refrigerator
Space Shuttle
Semisolid Casting
The Atomic Battery
Smart Combustors
Magnetic Refrigeration
Hydro Jetting
E85
Amoeba Organization
Recent Advances in Statistical Quality Control
Cylinder Deactivation
Sustainable Engineering
Hydro Drive
Expert Technician System
Re-Entry of Space Vehicles
Superca…
E.W.Dijkstra Archive: Bulterman's theorem on shortest tree (EWD 1131)
Bulterman’s theorem on shortest tree
We consider a finite complete graph in which each edge has a length; as usual, the length of a tree is defined as the sum of the lengths of its edges. The shortest subspanning tree of such a graph is
not necessarily unique. At the last meeting of the ETAC, however, Ronald W. Bulterman pointed out that all shortest subspanning trees give rise to the same bag of edge lengths. Here is a proof.
* * *
Lemma A subset of nodes that in the complete graph can be connected by edges of minimum length only, is so connected in a shortest subspanning tree.
Proof Consider for a given subspanning tree a nonextensible set Q of nodes, connected in the tree by edges of minimum length only. Let Q be a true subset of P, a set of nodes that can be connected in
the complete graph by edges of minimum length only. The lemma is then proved by constructing a subspanning tree shorter than the given one.
Q being a true subset of P, there exists an edge from a node in Q to a node in P-Q and of minimal length. Because Q is nonextensible in the given tree, adding that edge to the given tree closes a
cyclic path that contains a longer edge. Removal of the latter yields a tree that is shorter than the given tree. (End of Proof.)
This lemma gives rise to a (for me at least) new algorithm for shortest subspanning trees. To begin with we only consider the edges of minimum length; they partition the vertices in connected
components —and because at least 1 component contains at least 2 vertices, there are fewer components than the original graph has vertices—. To construct the shortest subspanning tree we first select
for each component of p vertices p-1 edges of minimum length that form a subspanning tree for the p vertices of that component. The remaining edges are produced as solution of the shortest tree
problem for a reduced graph. The above components are the vertices of the reduced graph, whose edges are the shortest edges connecting any pair of components.
Because the number of minimum length edges and the reduced graph are both only dependent on the partitioning and hence independent of how the nondeterminacy is resolved, all shortest subspanning
trees yield the same bag of edge lengths. And this concludes my proof of Bulterman’s theorem.
The theorem is not surprising at all: it holds in the extreme cases —all edge lengths different and all edge lengths equal— and for sufficiently incommensurable edge lengths I guess that additive
decompositions are unique anyhow. This note has been written because the proof is very satisfactory: it disentangles precisely what has to be disentangled.
Nuenen, 20 July 1992
prof.dr. Edsger W.Dijkstra
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712-1188 | {"url":"http://www.cs.utexas.edu/~EWD/transcriptions/EWD11xx/EWD1131.html","timestamp":"2014-04-18T06:11:12Z","content_type":null,"content_length":"4773","record_id":"<urn:uuid:dee9cf54-033e-467e-bd66-928a3aff1e28>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Obstruction theories on non-smooth spaces with smooth fibres
Given a perfect obstruction theory $E^\bullet$ over a space $X$, we know that if $X$ is smooth, the virtual fundamental class $[X, E^\bullet]$ is given by
$$[X, E^\bullet] = c_{top}\big((E^{-1})^\vee\big)$$
Suppose now that we can write $X = A \times B$ where $B$ is smooth. Let $p : X \to A$ be the projection. Is there something that we can say about $p_*[X,E^\bullet]$? In particular, if $A$ were smooth
then we could compute $p_*[X, E^\bullet]$ by integration along the fibre, $B$.
If $A$ is not smooth, then can we still obtain $p_*[X, E^\bullet]$ by integrating the top chern class of a relative obstruction bundle of some kind?
What if $A$ is 0-dimensional?
Tags: ag.algebraic-geometry, gromov-witten-theory
Did you check the final section of Behrend-Fantechi? They define "relative obstruction theories" and the associated virtual classes. Also, in Behrend's follow-up paper on Gromov-Witten invariants,
he uses the relative version to verify the Kontsevich-Manin axioms (so that might also be a place to check). – Jason Starr Mar 28 '12 at 20:24
... I did not check that. I looked a lot at the comparison lemma, but I somehow missed that section. I'll go look now. – Simon Rose Mar 28 '12 at 21:00
GAMGI: Screenshots
Orbitals 6hz5, 6hy5, 6s are shown here, the last one represented only by its lower half, to show its 5-node internal structure. Each orbital is represented as a cloud of points above a probability density of 1E-6: 300,000 points for 6hz5 and 6hy5, and 3,000,000 for 6s (to properly show the inner node structure). Users can choose which octants of the orbital to show, allowing a detailed analysis of its internal structure. Gnome on Debian 6.0. Size: 28,703 bytes.
The five 3d orbitals are shown here with the same orientation, inside a xyz frame, in orthographic projection. Each orbital is represented as a cloud of 150,000 points above a probability density of 1E-6. 3dz2 and 3dx2-y2 are aligned with the xyz axes, while 3dxy, 3dxz, 3dyz are aligned along the xyz bisectors. Gnome on Debian 6.0. Size: 25,325 bytes.
Octahedral interstices are smaller in BCC (r/R = 0.155) than in FCC (r/R = 0.414) structures. However, the corresponding octahedra are larger in BCC (V = 4.1 Angstrom**3) than in FCC (V = 3.8 Angstrom**3) structures! Gnome on Debian 6.0. Size: 91,906 bytes.
6px Hydrogen orbital, represented as the outer solid isosurfaces with 1E-5 probability density, sampled with an accuracy of 150 cells per direction per octant. Gnome on Debian 6.0. Size: 116,699
6gz4 and 6s Hydrogen orbitals, represented as the outer solid isosurfaces with 1E-5 and 1E-6 probability densities, respectively. The sampling accuracy is 150 and 100 sampling cells per direction per
octant, respectively. To show the inside, one and two octants have been removed in the 6gz4 and 6s orbitals, respectively. Removing arbitrary octants from the representation allows users to view the
orbital inside, particularly useful in s orbitals. Gnome on Debian 6.0. Size: 126,971 bytes.
The Open Group Base Specifications Issue 6
IEEE Std 1003.1, 2004 Edition
Copyright © 2001-2004 The IEEE and The Open Group, All Rights reserved.
scalb - load exponent of a radix-independent floating-point number
The scalb() function shall compute x*r^n, where r is the radix of the machine's floating-point arithmetic. When r is 2, scalb() shall be equivalent to ldexp(). The value of r is FLT_RADIX, which
is defined in <float.h>.
An application wishing to check for error situations should set errno to zero and call feclearexcept(FE_ALL_EXCEPT) before calling these functions. On return, if errno is non-zero or fetestexcept
(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW) is non-zero, an error has occurred.
Upon successful completion, the scalb() function shall return x*r^n.
If x or n is NaN, a NaN shall be returned.
If n is zero, x shall be returned.
If x is ±Inf and n is not -Inf, x shall be returned.
If x is ±0 and n is not +Inf, x shall be returned.
If x is ±0 and n is +Inf, a domain error shall occur, and either a NaN (if supported), or an implementation-defined value shall be returned.
If x is ±Inf and n is -Inf, a domain error shall occur, and either a NaN (if supported), or an implementation-defined value shall be returned.
If the result would cause an overflow, a range error shall occur and ±HUGE_VAL (according to the sign of x) shall be returned.
If the correct value would cause underflow, and is representable, a range error may occur and the correct value shall be returned.
If the correct value would cause underflow, and is not representable, a range error may occur, and 0.0 shall be returned.
The scalb() function shall fail if:
Domain Error
If x is zero and n is +Inf, or x is Inf and n is -Inf.
If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [EDOM]. If the integer expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the
invalid floating-point exception shall be raised.
Range Error
The result would overflow.
If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the
overflow floating-point exception shall be raised.
The scalb() function may fail if:
Range Error
The result underflows.
If the integer expression (math_errhandling & MATH_ERRNO) is non-zero, then errno shall be set to [ERANGE]. If the integer expression (math_errhandling & MATH_ERREXCEPT) is non-zero, then the
underflow floating-point exception shall be raised.
The following sections are informative.
Applications should use either scalbln(), scalblnf(), or scalblnl() in preference to this function.
IEEE Std 1003.1-2001 only defines the behavior for the scalb() function when the n argument is an integer, a NaN, or Inf. The behavior for other values of the n argument is unspecified.
On error, the expressions (math_errhandling & MATH_ERRNO) and (math_errhandling & MATH_ERREXCEPT) are independent of each other, but at least one of them must be non-zero.
feclearexcept(), fetestexcept(), ilogb(), ldexp(), logb(), scalbln(), the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.18, Treatment of Error Conditions for Mathematical Functions,
<float.h>, <math.h>
First released in Issue 4, Version 2.
Issue 5
Moved from X/OPEN UNIX extension to BASE.
The DESCRIPTION is updated to indicate how an application should check for an error. This text was previously published in the APPLICATION USAGE section.
Issue 6
This function is marked obsolescent.
Although this function is not part of the ISO/IEC 9899:1999 standard, the RETURN VALUE and ERRORS sections are updated to align with the error handling in the ISO/IEC 9899:1999 standard.
End of informative text.
UNIX ® is a registered Trademark of The Open Group.
POSIX ® is a registered Trademark of The IEEE.
In chemistry 22.4litres of gas weighs 70g at S.T.P.Calculate weight of gas if occupies volume of 20litres at... - Homework Help - eNotes.com
In chemistry: 22.4 litres of a gas weighs 70 g at S.T.P. Calculate the weight of the gas if it occupies a volume of 20 litres at 27 degrees Celsius and 700 mmHg of pressure.
It's a chemistry numerical problem based on Boyle's law and Charles's law.
By gas law:
PV = nRT, where P is the pressure of the gas at temperature T, R is the gas constant, and n is the amount of gas in moles.
Substitute the values into the equation for STP and for the given temperature and pressure.
(760)(22.4) = (70/M) R (273.15)   (1)
(700)(20) = (x/M) R (273.15+27)   (2)
where M is the molar mass of the gas (both R and M cancel). Dividing (1) by (2) eliminates R and M and reduces to:
760*22.4/(700*20) = (70/x)(273.15/300.15)
x = 52.4 g
It is about 52 grams, correct to the nearest gram. It is based on the formula known as the gas law, PV = nRT. The figures can be put in directly and solved; the rest is up to you to decide
how you want to proceed.
Ruelle inequality on a noncompact space
Does anyone have a reference in which the Ruelle inequality is proved in the following context?
Let $M$ be a non compact smooth manifold, and $f:M\to M$ be a $C^1$-diffeomorphism (or $C^2$, or smooth), whose differential is uniformly bounded ($\sup_{x\in M}\|T_xf\|<\infty$) on $M$.
Assume maybe that $M$ satisfies an additional assumption : [???? to complete ??]
Let $\mu$ be an $f$-invariant probability measure on $M$. Then $$ h_\mu(f)\le \int_M \sum_{i:\chi_i(x)>0} \chi_i(x)\dim E_i(x)\, d\mu(x) $$ where the numbers $\chi_i(x)$ are the Lyapunov exponents, and
$E_i(x)$ the corresponding spaces in the Oseledets decomposition.
ds.dynamical-systems ergodic-theory dg.differential-geometry
What is $h_\mu (f)$ ? – Alexander Chervov Jul 11 '12 at 9:39
It is the measure-theoretic (or Kolmogorov-Sinai) entropy of $\mu$. See any standard book in ergodic theory for a definition. Or also here : scholarpedia.org/article/Kolmogorov-Sinai_entropy –
Barbara Schapira Jul 11 '12 at 11:28
Chapter 6 of the thesis of van Bargen seems to be relevant in the case M=R^d opus.kobv.de/tuberlin/volltexte/2010/2571/pdf/… – Thomas Sauvaget Jul 11 '12 at 13:49
1 Answer
Dear Barbara,
I don't know whether this is useful in your case, but one can get a Ruelle inequality if $M$ admits a "nice compactification" and $f$ behaves "well" near the boundary of this compactification (i.e., at "infinity"), because in this context the results in the book "Invariant manifolds, Entropy and Billiards" of A. Katok and J.-M. Strelcyn may be applied. More precisely, suppose that $M$ can be viewed as an open and dense subset of a compact metric space $N$ satisfying conditions (A), (B), (C) and (1.1) in Katok-Strelcyn's book, and $f$ is a $C^2$ diffeomorphism preserving a probability $\mu$ verifying conditions (1.3) and (1.4) in Katok-Strelcyn's book and the usual integrability condition $\int \log^+\|df\|\,d\mu<\infty$. Then, the Ruelle inequality holds.

Of course, the integrability condition is true under your assumption of a uniform bound on $\|df\|$, so the main issue is to figure out whether such a nice compactification of $M$ exists in your setting.
Hello, I forgot to thank you earlier for this answer. But in Katok-Strelcyn, you need, as you said, a nice compactification. The example I had in mind is geodesic flows on complete noncompact negatively curved manifolds, which are unbounded and cannot be compactified without strongly changing the metric, which of course changes the Lyapunov exponents. Best, Barbara –
Barbara Schapira Nov 8 '12 at 11:18
[R] difficult R-problem
Andreas Pauling pauling at giub.unibe.ch
Fri Feb 15 12:28:30 CET 2002
Hi there
In the course of my diploma thesis in climatology I have encountered
a difficult R problem that I cannot solve. I want to fill R objects
(whose names should depend on j) with numbers at the i-th position.
The resulting Objects should be something like:
RQuadratStep1, RQuadratStep2, RQuadratStep3 ... filled with Elements like
c(0.324, 0.456, 0.657 ...)
Below is a short version of how I have tried to solve it but I don't
know how to insert numbers at the i-th place. The problem seems to be
the "assign" line. Thanks a lot for all hints.
for(j in 1:10) {
for(i in 1:30) {
assign(eval(parse(text=paste("RQuadratStep",j,sep=""))),round(summary(fit)$r.squared, dig=3))
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !) To: r-help-request at stat.math.ethz.ch
Power and bipower variation with stochastic volatility and jumps
- REVIEW OF ECONOMICS AND STATISTICS, FORTHCOMING, 2006
Cited by 79 (7 self)
A rapidly growing literature has documented important improvements in financial return volatility measurement and forecasting via use of realized variation measures constructed from high-frequency
returns coupled with simple modeling procedures. Building on recent theoretical results in Barndorff-Nielsen and Shephard (2004a, 2005) for related bi-power variation measures, the present paper
provides a practical and robust framework for non-parametrically measuring the jump component in asset return volatility. In an application to the DM/$ exchange rate, the S&P500 market index, and
the 30-year U.S. Treasury bond yield, we find that jumps are both highly prevalent and distinctly less persistent than the continuous sample path variation process. Moreover, many jumps appear
directly associated with specific macroeconomic news announcements. Separating jump from non-jump movements in a simple but sophisticated volatility forecasting model, we find that almost all of the
predictability in daily, weekly, and monthly return volatilities comes from the non-jump component. Our results thus set the stage for a number of interesting future econometric developments and
important financial applications by separately modeling, forecasting, and pricing the continuous and jump components of the total return variation process.
- In , 2006
Cited by 38 (17 self)
Summary. Consider a semimartingale of the form $Y_t = Y_0 + \int_0^t a_s\,ds + \int_0^t \sigma_{s-}\,dW_s$, where $a$ is a locally bounded predictable process and $\sigma$ (the "volatility") is an adapted right-continuous process with left limits and $W$ is a Brownian motion. We consider the realised bipower variation process $V(Y; r, s)^n_t$.
, 2007
Cited by 36 (2 self)
This article introduces a new nonparametric test to detect jump arrival times and realized jump sizes in asset prices up to the intra-day level. We demonstrate that the likelihood of
misclassification of jumps becomes negligible when we use high-frequency returns. Using our test, we examine jump dynamics and their distributions in the U.S. equity markets. The results show that
individual stock jumps are associated with prescheduled earnings announcements and other company-specific news events. Additionally, S&P 500 Index jumps are associated with general market news
announcements. This suggests different pricing models for individual equity options versus index options. (JEL G12, G22, G14) Financial markets sometimes generate significant discontinuities,
so-called jumps, in financial variables. A number of recent empirical and theoretical studies proved the existence of jumps and their substantial impact on financial management, from portfolio and
risk management to option and bond pricing
, 2003
Cited by 24 (3 self)
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency
returns coupled with relatively simple reduced form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c) for related bi-power variation
measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in the realized volatility
measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 aggregate market index, and the 30-year U.S. Treasury Bond, we find the
jump components to be distinctly less persistent than the contribution to the overall return variability originating from the continuous sample path component of the price process. Explicitly
including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for the realized volatilities results in highly significant jump coefficient estimates at
the daily, weekly and quarterly forecasts horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling,
forecasting and pricing of the continuous and jump components of the total return variability.
, 2006
Cited by 20 (5 self)
This paper provides a methodology for computing optimal filtering distributions in discretely observed continuous-time jump-diffusion models. Although it has received little attention, the filtering
distribution is useful for estimating latent states, forecasting volatility and returns, computing model diagnostics such as likelihood ratios, and parameter estimation. Our approach combines
time-discretization schemes with Monte Carlo methods to compute the optimal filtering distribution. Our approach is very general, applying in multivariate jump-diffusion models with nonlinear
characteristics and even non-analytic observation equations, such as those that arise when option prices are available. We provide a detailed analysis of the performance of the filter, and analyze
four applications: disentangling jumps from stochastic volatility, forecasting realized volatility, likelihood based model comparison, and filtering using both option prices and underlying returns.
Cited by 20 (3 self)
We propose bootstrap methods for a general class of nonlinear transformations of realized volatility which includes the raw version of realized volatility and its logarithmic transformation as
special cases. We consider the i.i.d. bootstrap and the wild bootstrap (WB) and prove their first-order asymptotic validity under general assumptions on the log-price process that allow for drift and
leverage effects. We derive Edgeworth expansions in a simpler model that rules out these effects. The i.i.d. bootstrap provides a second-order asymptotic refinement when volatility is constant, but
not otherwise. The WB yields a second-order asymptotic refinement under stochastic volatility provided we choose the external random variable used to construct the WB data appropriately. None of
these methods provide third-order asymptotic refinements. Both methods improve upon the first-order asymptotic theory in finite samples.
, 2005
Cited by 18 (0 self)
This paper tries to explain the credit default swap (CDS) premium, using a novel approach to identify the volatility and jump risks of individual firms from high-frequency equity prices. Our
empirical results suggest that the volatility risk alone predicts 50 percent of the variation in CDS spread levels, while the jump risk alone forecasts 19 percent. After controlling for credit
ratings, macroeconomic conditions, and firms ’ balance sheet information, we can explain 77 percent of the total variation. Moreover, the pricing effects of volatility and jump measures vary
consistently across investmentgrade and high-yield entities. The estimated nonlinear effects of volatility and jump risks on credit spreads are in line with the implications from a calibrated
structural model with stochastic volatility and jumps, although the challenge of simultaneously matching credit spreads and default probabilities remains.
, 2003
Cited by 15 (2 self)
In this paper we review some recent work on limit results on realised power variation, that is sums of powers of absolute increments of various semimartingales. A special case of this analysis is
realised variance and its probability limit, quadratic variation. Such quantities often appear in financial econometrics in the analysis of volatility. The paper also provides some new results and
discusses open issues.
Math 307-101 Home Page, Fall 2013
This is a second course in linear algebra, whose goal is to introduce numerous applications and some practical aspects of linear algebra. In addition, some computation will be done in Matlab. We
assume that you have taken one semester of linear algebra (e.g., Math 152, 221, or 223).
The midterm will be given during class time on October 25; location to be announced (probably not Buch B215).
Secondary Page: This webpage is the course homepage for Math 307-101. A secondary Math 307-101 webpage has more details.
News:
September 3, 2013: Note that I am often adding more sample exam problems. It is also possible, in principle, that I will modify a problem to look more like an exam problem, or even delete a problem if it isn't realistic. Hence the problem numbers will change over time. I'll try to figure out a better way. Any suggestions (to jf at math dot ubc etc.)?
September 6, 2013: Homework must be submitted on 8 1/2 x 11 (i.e., "letter size") paper, to fit in an envelope of slightly larger dimensions, and written (or printed out) in dark ink or pencil.
September 8, 2013: Homework 1 is now set in stone. The homework grading scheme and instructions now appear on the course detailed web page; Problem 1 of Homework 1 is due on Monday, September 16; Problems 2-6 will be due one week from when undergraduate computer accounts are set up (perhaps due on September 16, perhaps later).
September 10, 2013: These Sept 11 slides have just a few pages.
September 15, 2013: Migrating to Beamer/LaTeX PDF slides, as of September 16; September 16 slides are now (more or less) ready; some material, including the first slide after the title, is for me (the instructor) rather than the students. Exam problems (additions and stand-alone version) and class notes are currently under revision/restructuring.
Slides: Here are some skeletal files which I discuss during the computer part of class:
September text files: 4, 6, 9, 11, 13.
We are switching to Beamer/LaTeX PDF files!
September Beamer/LaTeX PDF files: 13, 16.
Class Schedule: Computer & Blackboard
Buchanan B215, MWF, 1:00-1:50pm. Generally class is scheduled as follows:
1. 12:45pm: I will generally arrive five minutes before setting up, and be available for brief questions in the lounge opposite the doors of Buch B215. Please speak softly.
2. 12:50pm to 1:00pm: I will set up the computer demos for class and review my notes; I am generally unavailable for questions at this time.
3. 1:00pm to 1:10pm: Computer Part: Matlab calculations for class and homework, plus some text slides with important notes and a rough outline of class. The time 1:10pm is rough; at times it will be a bit shorter or longer. The Matlab demo may be interactive and is not set in stone in advance; for example, I may take student questions and comments and perform some computations to address such feedback.
4. 1:10pm to 1:50pm: Blackboard Part: Discuss new material and go over sample exam problems on the blackboard. Which sample exams we cover and other aspects of this discussion are not set in stone in advance; I will give you a plan beforehand on some slides (you may wish to print this out before class). The plan may be modified "on the fly" based on student questions and comments.
5. 1:50pm: I gather my materials and will be available for brief questions outside the classroom in the lounge area. Please speak softly.
Here are more details, including holidays, midterm location, and office hours.
Required Text: The main text is the following online set of notes; the text's appendix is a good place to start, as it describes which parts were covered on which dates. We will use some free material from Experiments with MATLAB, by Cleve Moler, and his exm toolbox.
Recommended Text: First-semester linear algebra will be reviewed only briefly. I will suggest review problems from 3,000 Solved Problems in Linear Algebra by Seymour Lipschutz. However, this material should not be new, and your materials from Math 152, 221, and 223 should suffice.
Recommended Software: I will assign computational homework in Matlab, with some sample code in Matlab; you will be given undergraduate math lab accounts for Matlab (version R2009a). It may be more practical to purchase the student version of Matlab from the UBC Bookstore for $120; I will demo this version in class (student version R2013a), and, at times, Matlab on a command line (version R2009a).
Other Software: You may also use any version of Matlab, or any software (e.g., Maple, Octave, Mathematica, Fortran packages, C packages, etc.). You will not be examined on Matlab, but will be examined on the results of the computational homework.
More Info: Here is a link to a more detailed webpage on Math 307, including sample exam problems, office hours, homework, grading policy, etc.
What's New This Year: This course may differ from previous versions of Math 307 and previous math courses you have taken, in that this course will:
1. be problem based: This year's course will be largely problem based. I will supply a bunch of sample final exam problems, and we will work towards these problems in class and homework.
2. require critical thinking: This math course will show you that most real world math problems have many solutions and options; you will be required to think critically, and to explain on homework and exams, for example, which norm, or which algorithm, is most appropriate to various problems. Math, in the real world, is not "one size fits all."
3. have new applications this year: we will cover a few new applications this year, such as (1) (2,7) codes, (2) the Laplacian and harmonics in music, (3) how condition numbers tell you about "bad situations" in interpolation (this happens in polynomial interpolation, y=poly(x), when two observed x values are very close). Accordingly, we will cover fewer applications from Chapters 1-4 of the notes.
st: RE: Re: RE: Re: RE: Re: -hpfilter- question
st: RE: Re: RE: Re: RE: Re: -hpfilter- question
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Re: RE: Re: RE: Re: -hpfilter- question
Date Fri, 27 Feb 2004 17:01:45 -0000
As far as I can see, this is just a longer-winded
way of doing what was done with my loop. The
answers it gives should be the same, and thus no
better. In short, this approach will not solve any
problem with what -hpfilter- does.
Instead of finding a different way of using the
program, you may have to look inside it yourself
to see what it is going on.
Giorgio Ricchiuti
> I'll try to apply the filter for each country and then add all in a
> variable. Could it be a good, even long, solution?
> something like
> hpfilter myvar if country==1, s(c1)
> hpfilter myvar if country==2, s(c2)
> and then
> gen filtermyvar=H_c1
> replace filtermyvar=H_c2 if country==2
> thanks again
> Giorgio
> ----- Original Message -----
> From: "Nick Cox" <n.j.cox@durham.ac.uk>
> To: <statalist@hsphsun2.harvard.edu>
> Sent: Friday, February 27, 2004 5:15 PM
> Subject: st: RE: Re: RE: Re: -hpfilter- question
> > Sorry, Giorgio: I can't see your data from here
> > to experiment. Nor am I an expert on that program.
> >
> > But I do know that -hpfilter- is full of bizarre
> > things. Some interest was shown a while ago
> > in rewriting it, but I'm not sure what became of
> > that. Possibly good intentions were not enough,
> > and the project was swamped by other things.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
Summary: On the Stability of Kähler-Einstein Metrics
Xianzhe Dai
Xiaodong Wang
Guofang Wei
April 19, 2005
Using spin^c structures we prove that Kähler-Einstein metrics with nonpositive scalar curvature are stable (in the direction of changes in conformal structures) as the critical points of the total scalar curvature functional. Moreover, if all infinitesimal complex deformations of the complex structure are integrable, then the Kähler-Einstein metric is a local maximum of the Yamabe invariant, and its volume is a local minimum among all metrics with scalar curvature bigger than or equal to the scalar curvature of the Kähler-Einstein metric.
1 Introduction
The stability issue comes up naturally in variational problems. One of the most important geometric variational problems is that of the total scalar curvature functional. Following [Bes87, Page 132] we call an Einstein metric stable if the second variation of the total scalar curvature functional is non-positive in the direction of changes in conformal structures (we have weakened the notion by allowing kernels). By the well-known formula, this is to say,
Tougher Integral
That's great!!! I thought that cross multiplying would end up with $x^3$ terms, as in $(Ax+B)(x^2+4) + (Cx+D)(x^2+1)$, so I tried to find A and B instead... but it was probably a long way to go (Headbang)
Sorry, I don't know how to write it in proper format.
Here is what I did:
The function should have the following form:
A/(x^2+1) + B/(X^2+4)
then let A = 1 => solve for B:
=> B = -4/(x^2+1)
substitute B back into the function
=> we have something like:
1/(x^2+1) - 4/[(x^2+4)*(x^2+1)]
The second integral need to split up again (Lipssealed). My question is how do you know a proper way to decompose it (as you did)?
Thank you.
$A \;\text{and}\; B$ are numbers, not variables! To decompose, say, $\frac{x^2}{x^4+5x^2+4}$,
notice that
$x^4+5x^2+4 = (x^2+1)(x^2+4),$
so we seek numbers A, B, C and D such that
$\frac{Ax+B}{x^2+1} + \frac{Cx+D}{x^2+4} = \frac{x^2}{(x^2+1)(x^2+4)}$
cross multiplying gives
$(Ax+B)(x^2+4) + (Cx+D)(x^2+1) = x^2$.
Expanding and re-grouping gives
$(A+C)x^3 + (B+D)x^2 +(4A+C)x + 4B+D = x^2$
comparing gives the equations
$A+C=0,\;\;\;B+D = 1,\;\;\;4A+C=0,\;\;\;4B+D = 0$
from which we solve giving
$A = 0,\;\;\;B = - \frac{1}{3},\;\;\;C=0,\;\;\;D=\frac{4}{3}$
$\frac{- \frac{1}{3}}{x^2+1} + \frac{\frac{4}{3}}{x^2+4} = \frac{x^2}{(x^2+1)(x^2+4)}$
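As a quick check that these values are right, recombining the two fractions recovers the integrand:

```latex
-\frac{1}{3}\cdot\frac{1}{x^2+1} + \frac{4}{3}\cdot\frac{1}{x^2+4}
= \frac{-\frac{1}{3}(x^2+4) + \frac{4}{3}(x^2+1)}{(x^2+1)(x^2+4)}
= \frac{x^2}{(x^2+1)(x^2+4)},
```

since $-\frac{1}{3}(x^2+4)+\frac{4}{3}(x^2+1) = \left(-\tfrac{1}{3}+\tfrac{4}{3}\right)x^2 + \left(-\tfrac{4}{3}+\tfrac{4}{3}\right) = x^2$.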
That's "partial fractions" and you typically learn it in "Calculus II", a Freshman or Sophomore class (unless you learned calculus in secondary school). I see now that Krizalid did not need
partial fractions because he was able to do the fractions easily; he's sharper than I am!
Note that $\left( x^{2}+4 \right)-\left( x^{2}+1 \right)=3,$ hence, no partial fractions method involved.
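Spelled out, the observation presumably means the following (no partial fractions machinery needed):

```latex
\frac{x^2}{(x^2+1)(x^2+4)}
= \frac{x^2}{3}\cdot\frac{(x^2+4)-(x^2+1)}{(x^2+1)(x^2+4)}
= \frac{1}{3}\left(\frac{x^2}{x^2+1}-\frac{x^2}{x^2+4}\right)
= \frac{4}{3}\cdot\frac{1}{x^2+4}-\frac{1}{3}\cdot\frac{1}{x^2+1},
```

using $\frac{x^2}{x^2+1}=1-\frac{1}{x^2+1}$ and $\frac{x^2}{x^2+4}=1-\frac{4}{x^2+4}$ in the last step.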
Do partial fraction decomposition as danny points out,
$\int_0^{\infty} \frac{4}{3} \frac{1}{x^2+4} \, dx - \int_0^{\infty}\frac{1}{3} \frac{1}{x^2+1}\, dx$$= \left. \frac{2}{3} \tan^{-1}\frac{x}{2} - \frac{1}{3}\tan^{-1} x \right|_0^{\infty} = \frac
{\pi}{3} - \frac{\pi}{6} = \frac{\pi}{6}$
Yes you can but I think the first approach is the best one.
Notice that, $\frac{1}{2}\int_{-\infty}^{\infty} \frac{x^2}{x^4+5x^2+4} dx = \int_0^{\infty} \frac{x^2}{x^4+5x^2+4}dx$
Now define $f(z) = \frac{z^2}{z^4+5z^2+4}$ and find its residues in the upper half plane.
Read my tutorial on it, it should go smoothly for thee.
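For completeness, here is a sketch of that residue computation, using the problem's denominator $z^4+5z^2+4=(z^2+1)(z^2+4)$, which has simple poles at $z=i$ and $z=2i$ in the upper half plane:

```latex
\operatorname{Res}_{z=i}\frac{z^2}{z^4+5z^2+4} = \frac{z^2}{4z^3+10z}\bigg|_{z=i} = \frac{-1}{6i} = \frac{i}{6},
\qquad
\operatorname{Res}_{z=2i}\frac{z^2}{z^4+5z^2+4} = \frac{-4}{-12i} = -\frac{i}{3},
```

so $\int_{-\infty}^{\infty}\frac{x^2}{x^4+5x^2+4}\,dx = 2\pi i\left(\frac{i}{6}-\frac{i}{3}\right) = \frac{\pi}{3}$, and halving gives $\frac{\pi}{6}$, agreeing with the real-variable computation.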
I found this one on a Harvard graduate level exam:
Any clues?
Couldnt you use Countour integration on this? (Nerd). I am not sure how but I am sure TPH or someone could.
Mr. Red,
Would you mind revealing in what class you saw this problem? I am just curious which classes at the graduate level test students on their calculus abilities. Also, was this already set up or was
it part of a word problem? Any more information on this would be very interesting.
I hope this is right...
$\frac{4}{3}\int_0^\infty{\frac{1}{x^2 + 4}\,dx} - \frac{1}{3}\int_0^\infty{\frac{1}{x^2 + 1}\,dx} = \frac{4}{3}\left[\frac{1}{2}\arctan{\frac{x}{2}}\right]_0^\varepsilon - \frac{1}{3}\left[\arctan{x}\right]_0^\varepsilon$
$= \frac{2}{3}[\lim_{\varepsilon \to \infty}(\arctan{\frac{\varepsilon}{2}}) - \arctan{0}] - \frac{1}{3}[\lim_{\varepsilon \to \infty}(\arctan{\varepsilon}) - \arctan{0}]$
$= \frac{2}{3}\left[\frac{\pi}{2} - 0 \right] - \frac{1}{3}\left[\frac{\pi}{2} - 0 \right]$
$= \frac{\pi}{3} - \frac{\pi}{6}$
$= \frac{\pi}{6}$.
Looks good to me but on a personal note, I would use something other than $\varepsilon$. For me, $\varepsilon$ is something very small not large :)
Could someone kindly do this improper integral? It has been a while since I have done it, I am very interested in seeing the solution, and would appreciate a proper refresher. Pun intended.
Yes, $\frac{x^2}{x^4+5x^2+4}$ decomposes into
$\frac{4}{3} \frac{1}{x^2+4} - \frac{1}{3} \frac{1}{x^2+1}$
then integrate each separately.
I thought that cross multiplying would end up with x^3 as in | {"url":"http://mathhelpforum.com/calculus/66147-tougher-integral-print.html","timestamp":"2014-04-19T03:54:20Z","content_type":null,"content_length":"30343","record_id":"<urn:uuid:bca5efc7-0703-4f88-8b47-4d7fb288c5cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Change your subject.
Re: Change your subject.
I repeated it because of this:
But when 9 is in a square root sign it gives two answers, i.e. -3 and 3.
I wanted him to be sure that the answer is only 3. 1 answer not 2.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19847&p=5","timestamp":"2014-04-16T04:28:15Z","content_type":null,"content_length":"11569","record_id":"<urn:uuid:fee31b1d-ebd0-4686-8be2-94916513af92>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
help me in simple Bitwise operator :(
08-13-2010, 06:25 AM
help me in simple Bitwise operator :(
On the way in learning Java I came across bitwise operators, and I don't understand how these bitwise operators really work. I mean, please give me the real logic of the following program:
class BitOp {
public static void main(String args[]){
int a=1;
int b=2;
int c=3;
a=a |4; // Bitwise OR operator
b>>=1; // Right shift assignment operator
c<<=1; // Left shift assingment operator
a=a^c; // Bitwise XOR operator
System.out.println("a=" +a);
System.out.println("b=" +b);
System.out.println("c=" +c);
}
}
result for the above example code is :
a=3
b=1
c=6
Now my request is :
please explain me the logic for :
1. Bitwise OR operator
2. Right shift operator
3 .Left shift operator
4. Bitwise XOR operator
of the above code ..........thanks in advance guys ..... keep rocking
08-13-2010, 07:36 AM
Alright, first off, the code above doesn't compile. The variable "c" is never declared.
But here are explanations of each of those things:
1. Bitwise OR (Inclusive Or)
Every number is represented as a string of 0s and 1s; 5 is 101, 7 is 111, 8 is 1000, and so on. When you do a bitwise IOR, for each bit (0 or 1), you keep it if the bit from the left is 1, or the
bit from the right is 1, otherwise it's 0. Let me give an example.
Imagine X = 8, Y = 6. Therefore, in binary, X == 1000 and Y == 110 (use a binary calculator, such as windows calculator, to calculate these if you need to; otherwise see my related links section
for binary conversion lessons).
Now, we do this (line up the bits, note the added leading 0 on Y):
1000
0110
Now since the first bit is 1 in X and 0 in Y, we keep it. The second bit is 1 in Y and 0 in X, so we keep it. Same with the third bit. The fourth bit, however, is 0 in both, so we do not keep it.
We end up with the result 1110, which is 14. Therefore, 8 | 6 == 14.
2 and 3. Arithmetic shifts
When shifting, once again imagine bits. Imagine you have 10 (1010) as your number. When you left shift it, you add a 0 to the end (10100), which is 20. When you right shift it, you pop the last
bit off the binary. 1010 becomes 101, which is 5. (Notice how LSH is just multiplying by 2, RSH is dividing by 2.)
4. Bitwise XOR (Exclusive Or)
This is just like IOR, except that ONLY ONE bit can be 1 to keep it.
Let's do 7 and 10.
0111
1010
The first bit is 1 in 10, 0 in 7, so we keep it. The second bit is 0 in 10, 1 in 7, so we keep it. The third bit, however, is 1 in both, so it is not kept. Fourth bit is the same as the second.
Therefore, our result is 1101, which is 7 ^ 10 == 13.
Hope that's thorough enough. For more, here's related links:
Arithmetic shift - Wikipedia, the free encyclopedia
Bitwise operation - Wikipedia, the free encyclopedia
Hexadecimal, decimal and binary conversion chart. - AtariAge Forums
Binary Numbers - An intro to binary numbers & conversion formulas
Good luck!
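If you want to verify the worked examples above yourself, a tiny throwaway class (the name is just illustrative) prints the values computed by hand:

```java
public class BitwiseCheck {
    public static void main(String[] args) {
        System.out.println(8 | 6);   // inclusive OR:  1000 | 0110 = 1110, prints 14
        System.out.println(7 ^ 10);  // exclusive OR:  0111 ^ 1010 = 1101, prints 13
        System.out.println(10 << 1); // left shift appends a 0 bit: 1010 -> 10100, prints 20
        System.out.println(10 >> 1); // right shift drops the last bit: 1010 -> 101, prints 5
    }
}
```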
08-13-2010, 08:21 AM
Most, if not all of the Java textbooks describe bitwise operators in great detail, including examples and little tables. Why ask here? Reading doesn't hurt your eyeballs.
kind regards,
08-13-2010, 10:48 AM
thank you very much for your help my dear friend , i clearly understand the concept of IOR and XOR ,you are really rocking because you clearly understood where i am lacking ......... thank you
again JACK :)
I am extremely sorry mate, actually this program I have mentioned above is taken from one of the Java course books available in our place; there they haven't explained the bitwise operators clearly, and also nothing struck my mind to do a Google search for bitwise. Suddenly only our forum came to my mind for clearing my doubt regarding bitwise operators.. so sorry, and thank you for your reply mate :) | {"url":"http://www.java-forums.org/new-java/31618-help-me-simple-bitwise-operator-print.html","timestamp":"2014-04-17T04:30:14Z","content_type":null,"content_length":"14038","record_id":"<urn:uuid:538aa87f-38c6-4e4b-8ef2-ae0d7e5d6403>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00441-ip-10-147-4-33.ec2.internal.warc.gz"} |
prove 3^n < n! for all n>=7
June 18th 2010, 09:47 PM #1
Jun 2010
prove 3^n < n! for all n>=7
I'm familiar with induction with equal signs. We did it in high school. However, I can't seem to understand how to prove by induction using the logical operator '<' (less than).
And I'm stuck on this question. All help is greatly appreciated.
So the base case is n = 7. You need to show that
3^7 < 7!
Then consider that whenever you increase n by 1, you multiply the left side by 3, and the right side by a number greater than 3. Thus the right side will continue to be greater than the left
side. This is what you will write formally as the induction step. (Assume 3^n < n!. Then show that 3^(n+1) < (n+1)! is true.)
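Written out, the inductive step sketched above is: assuming $3^n < n!$ for some $n \ge 7$,

```latex
3^{n+1} = 3\cdot 3^{n} < 3\cdot n! < (n+1)\cdot n! = (n+1)!,
```

where the last inequality uses $3 < n+1$, which holds for every $n \ge 7$. Together with the base case $3^7 = 2187 < 5040 = 7!$, this completes the induction.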
so we have to explicitly state that the RHS grows quicker than the LHS in the inductive step...
thank you kindly, THank you very much indeed.
June 18th 2010, 09:59 PM #2
June 18th 2010, 10:02 PM #3
Jun 2010
June 18th 2010, 10:45 PM #4 | {"url":"http://mathhelpforum.com/discrete-math/148843-prove-3-n-n-all-n-7-a.html","timestamp":"2014-04-19T21:03:33Z","content_type":null,"content_length":"39548","record_id":"<urn:uuid:77a862f7-8b65-4eb6-a80d-9dae011b2ffc>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Transfer homomorphisms with coefficients
In group cohomology, for $H$ a finite-index subgroup of $G$ and $M$ a $G$-module, there is a transfer (or corestriction) map $Cor : H^* (H;M) \to H^*(G;M)$.
In homotopy theory, there is a transfer map for finite covering spaces $\bar{X} \to X$, and it exists for all coefficient systems on $X$. It is given on homology (say) by sending a small simplex in
$X$ to its finitely-many lifts to $\bar{X}$. The group transfer is obtained from the topological one by the finite covering space $BH \to BG$.
There is a fancier transfer, due to Becker and Gottlieb, for any map $E \to B$ whose homotopy fibre is stably equivalent to a finite complex. Can this be extended to give a transfer map for
coefficient systems on $B$?
group-cohomology at.algebraic-topology homotopy-theory
2 Hi Oscar. If your map $E\to B$ is smooth, there is a Gysin map with local coefficients, described here: ams.org/journals/tran/2003-355-09/S0002-9947-03-03345-2/… – Mark Grant Nov 11 '10 at 10:45
2 Answers
I believe the answer is yes. The idea should be to generalize the construction of the group-theoretic transfer. I'll describe how I think this ought to go. I'm pretty sure what I'm
describing is a known construction, but I haven't been able to dig up a reference which describes it.
Let $f:X\to Y$ be a map of spaces (CW-complexes), and assume (for simplicity) that both are path connected. Then there is an inclusion of topological groups $\phi: G\to H$, such that $B\phi: BG\to BH$ is equivalent to $f:X\to Y$.
A local system on $X$, in the most general possible context, is a spectrum $M$ equipped with a $G$-action. (I'm not doing anything fancy here; in particular, I'm not doing equivariant
stable homotopy theory. Just spectra with a $G$-action; a $G$ map $f:M\to N$ of such things is a weak equivalence if it is a weak equivalence of the underlying spectra.)
Let $S_G$ be the category of $G$-spectra. It'll be important to note that this is a closed monoidal category: the smash product $X\wedge Y$ of two objects in $S_G$ is defined in $S_G$
(smash the spectra, and use the diagonal $G$-action), as well as a function object $\mathrm{Hom}(X,Y)$ (take the spectrum function object, with diagonal $G$-action).
The Becker-Gottlieb transfer $f_!$ of $f$ associated to an $M\in S_H$ should be a map $M_{hH}\to M_{hG}$ (where "$hG$" is "homotopy orbits"). So, if $M=S^0$ with trivial $H$-action,
you get a map $S^0_{hH}\to S^0_{hG}$, which is a map $\Sigma^\infty Y_+\to \Sigma^\infty X_+$. You should interpret the map $\pi_* f_!$ as being a map $$ H_*(Y,M) \to H_*(X,f^*M), $$
where these are "homology with coefficients in the local systems $M$ and $f^*M$". There should be a similar way to get a cohomolgy transfer.
Let $F=\Sigma^\infty(H/G)_+$, as a spectrum with $H$-action. Note that the space $H/G$ is just the fiber of $f:X\to Y$.
Here is a sequence of maps in $S_H$: $$ M\to \mathrm{Hom}(F,F\wedge M) \leftarrow \mathrm{Hom}(F,M)\wedge F \to \mathrm{Hom}(F,M)\wedge F\wedge F \to M\wedge F \approx M\wedge_G H. $$
The first is smashing with the identity map of $F$; the second comes from smashing a map $F\to M$ with a map $S^0\to F$; the third comes from the diagonal map $H/G\to H/G\times H/G$;
the fourth is "evaluation".
If $F$ has the homotopy type of a finite CW-spectra (i.e., if $H/G$ is finitely dominated), then the backwards map in this sequence is a weak equivalence. In this case, we get a map $M\
to M\wedge_G H$ of $H$-spectra, and taking homotopy orbits gives $M_{hH}\to M_{hG}$.
If $Y$ is contractible, this amounts to a map $$ S^0 \to \mathrm{Hom}(F,F) \leftarrow \mathrm{Hom}(F, S^0)\wedge F \to \mathrm{Hom}(F, S^0)\wedge F\wedge F \to S^0\wedge F, $$ which is
to say, a map $S^0\to \Sigma^\infty X_+$, and in cohomology this will send $1\in H^0(X)$ to $\chi(X)\in H^0(\mathrm{point})$. So apparently we recover the Becker-Gottlieb transfer.
If the spaces aren't connected, you can still do all this, but you need to allow $H$ and $G$ to be groupoids.
(I learned this way of thinking from some paper of John Klein about the "dualizing spectrum" of a toplogical group; I can't find one where he addresses the BG transfer this way.)
I think what Charles has written up above is closely related to what Becker and Gottlieb did in
Becker, J. C.; Gottlieb, D. H. Transfer maps for fibrations and duality. Compositio Math. 33 (1976), no. 2, 107–133.
They work fiberwise, but that is equivalent to working equivariantly in the sense that a $G$-space $X$ gives rise to a fibration $X \times_G EG \to BG$ (Borel construction; this
induces a Quillen equivalence).
| {"url":"http://mathoverflow.net/questions/45670/transfer-homomorphisms-with-coefficients?sort=newest","timestamp":"2014-04-17T18:25:15Z","content_type":null,"content_length":"57426","record_id":"<urn:uuid:c0552133-c2e9-48b4-8af3-8139ff2b0fd3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
River Rouge Precalculus Tutor
Find a River Rouge Precalculus Tutor
...Obviously, Math is my specialty and ACT/SAT Test prep is my other passion. When I was in school Math did not come easily, and now it has become my gift. My ability to understand what students
are struggling with and guide them into the right direction is my greatest strength.
9 Subjects: including precalculus, geometry, algebra 1, algebra 2
...I have a degree in Finance from the University of Michigan. I have tutored students at the college level in various classes concerning finance. I also run an independent consulting firm that
regularly examines varying companies' finances.
21 Subjects: including precalculus, calculus, geometry, statistics
...Relating Pre-calc to Other Subjects (including: calculus, physics, chemistry, investing, and more) k. Much More ...... Always expect to have fun while learning! :) Are you ready?! Let's go!! :)
Training is available for successful use of many reading and comprehension aspects, including: rea...
30 Subjects: including precalculus, chemistry, reading, writing
...I also know how to apply Calculus in Physics and Engineering in general. I have been tutoring Physics for more than 2 years and consistently receive the highest marks as a Wyzant Tutor from
students and their parents. I have my Master's Degree in Civil and Structural Engineering and have a deep understanding of all math subjects from pre-Algebra to Calculus.
10 Subjects: including precalculus, calculus, physics, ASVAB
...There are two math portions on this test: Arithmetic Reasoning and Mathematics Knowledge. The Arithmetic Reasoning portion contains word problems, while the Mathematics Knowledge portion
contains algebra and geometry problems, which may or may not start off as a word problem. Currently, no calculator is allowed, so mental calculations, and tricks for simplifying are very useful.
13 Subjects: including precalculus, calculus, geometry, GRE | {"url":"http://www.purplemath.com/river_rouge_precalculus_tutors.php","timestamp":"2014-04-19T00:00:01Z","content_type":null,"content_length":"24357","record_id":"<urn:uuid:8acc4456-e2fd-497d-bf74-e4a357565c40>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Describe at least two ways that gravitational and strong nuclear forces are alike and two ways that they are different.
for similarity.. look at the formulae and try to figure out
they are different because the electric force is very, very strong but everything is in a neutral form, whereas the gravitational force pretty much everyone can feel
| {"url":"http://openstudy.com/updates/50fd8396e4b0860af51ed947","timestamp":"2014-04-20T21:08:05Z","content_type":null,"content_length":"30161","record_id":"<urn:uuid:f2b480ba-2837-49a6-b19b-0fc027b78cc0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can a Vitali Set be constructed without AC?
For the purposes of this discussion, let a Vitali Set be any subset $V\subseteq{}[0,1)$ such that for $V_q:=\{x+q\;|\;x<1-q,\;x\in{}V\}\cup\{x+q-1\;|\;x\geq{}1-q,\;x\in{}V\}$ there is a countable
subset $I\subset[0,1)$ such that
1. $[0,1)=\bigcup{}_{q\in{}I}V_q$
2. For $r,q\in{}I$ distinct, $V_r\cap{}V_q=\emptyset$
Can such a $V$ be constructed without AC?
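(For context: the classical construction, which does use AC, fits this definition with $I=\mathbb{Q}\cap[0,1)$, choosing $V$ to contain exactly one representative of each class of the equivalence relation)

```latex
x \sim y \iff x - y \in \mathbb{Q}
```

restricted to $[0,1)$: every $x\in[0,1)$ then lies in $V_q$ for exactly one $q\in I$, namely $q \equiv x - v \pmod 1$ where $v\in V$ is the representative of the class of $x$, so conditions 1 and 2 hold.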
set-theory axiom-of-choice
1 Answer
No Vitali set in your sense can be measurable. I am assuming this is the reason for defining a Vitali set in this way. But Solovay has shown (assuming the consistency of a certain
large cardinal, namely an inaccessible) that there is a model of ZF in which all sets of reals are Lebesgue measurable. In particular, there is no Vitali set in Solovay's model. Hence
you need some fragment of AC to construct one.
I actually just cited this article in response to mathoverflow.net/questions/72904/… but apparently countable additivity of Lebesgue Measure is not a theorem of ZF. So, the standard
proof of the nonmeasurability of $V$ would not work in ZF alone. Do you know of a proof of the nonmeasurability of $V$ in ZF alone? – user17100 Aug 15 '11 at 7:12
2 Solovay's model also satisfies DC, so Lebesgue measure is countably additive in that model. – François G. Dorais♦ Aug 15 '11 at 7:26
I don't know a proof of the nonmeasurability of $V$ in ZF alone, but the countable additivity of Lebesgue measure can certainly be proved using the axiom of dependent choice (DC),
which holds in Solovay's model. DC says that given a relation $R$ on a set such that every element has an upper bound with respect to $R$, then you can choose an $R$-increasing
sequence of ordertype $\omega$. This is stronger than AC for countable families. – Stefan Geschke Aug 15 '11 at 7:27
Oh, Francois beat me in his comment. – Stefan Geschke Aug 15 '11 at 7:28
Thanks, I just saw this after posting to the other question. Thank you! – user17100 Aug 15 '11 at 8:54
| {"url":"https://mathoverflow.net/questions/72908/can-a-vitali-set-be-constructed-without-ac","timestamp":"2014-04-16T22:15:36Z","content_type":null,"content_length":"56160","record_id":"<urn:uuid:4e4f44c5-5434-4671-8ced-eb8f31054790>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two Identical Police Cars Are Chasing A Robber. ... | Chegg.com
Two identical police cars are chasing a robber. When at rest, their sirens have a frequency of 462 Hz. A stationary observer watches as the two cars approach. The siren of one car (#1) has a
frequency of 587 Hz, while the other (#2) has a frequency of 749 Hz.
(a) Which car is moving faster? 2
(b) What are the speeds of the two cars?
car (#1) 73.04 m/s
car (#2) 131.43 m/s
(c) If the robber hears a frequency of 647 Hz from car #2, what is the speed of the robber? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/two-identical-police-cars-chasing-robber-rest-sirens-frequency-462-hz-stationary-observer--q1705053","timestamp":"2014-04-18T18:58:50Z","content_type":null,"content_length":"21390","record_id":"<urn:uuid:b4b2ce41-c38f-4060-9872-1a2d82a8a251>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
eCommons Collection: Relative Succinctness of Representations of Languages and Separation of Complexity Classes; One-Way Log-Tape Reductions; On the Succinctness of Different Representations of Languages; On Log-Tape Isomorphisms of Complete Sets; On Polynomial Time Isomorphism of Complete Sets; On Isomorphisms and Density of NP and Other Complete Sets; Relations Between Diagonalization, Proof Systems, and Complexity Gaps; A Note on Tape Bounds for SLA Language Processing; Structural Complexity Theory: Recent Surprises; Godel, von Neumann and the P=?NP Problem
http://hdl.handle.net/1813/14935 (updated 2014-04-20T21:05:12Z)

http://hdl.handle.net/1813/7467 (1978-08-01)
Title: Relative Succinctness of Representations of Languages and Separation of Complexity Classes
Authors: Hartmanis, Juris
Abstract: In this paper we study the relative succinctness of different representations of polynomial time languages and investigate what can and cannot be formally verified about these representations. We also show that the relative succinctness of different representations of languages is directly related to the separation of the corresponding complexity classes; for example, PTIME $\neq$ NPTIME if and only if the relative succinctness of representing languages in PTIME by deterministic and nondeterministic clocked polynomial time machines is not recursively bounded, which happens if and only if the relative succinctness of these representations is not linearly bounded.

http://hdl.handle.net/1813/7465 (1978-07-01)
Title: One-Way Log-Tape Reductions
Authors: Hartmanis, Juris; Immerman, Neil; Mahaney, Stephen R.
Abstract: One-way log-tape (1-L) reductions are mappings defined by log-tape Turing machines whose read head on the input can only move to the right. The 1-L reductions provide a more refined tool for studying the feasible complexity classes than the P-time [2,7] or log-tape [4] reductions. Although the 1-L computations are provably weaker than the feasible classes L, NL, P and NP, the known complete sets for those classes are complete under 1-L reductions. However, using known techniques of counting arguments and recursion theory we show that certain log-tape reductions cannot be 1-L and we construct sets that are complete under log-tape reductions but not under 1-L reductions.

http://hdl.handle.net/1813/7464 (1978-06-01)
Title: On the Succinctness of Different Representations of Languages
Authors: Hartmanis, Juris
Abstract: The purpose of this paper is to give simple new proofs of some interesting recent results about the relative succinctness of different representations of regular, deterministic and unambiguous context-free languages and to derive some new results about how the relative succinctness of representations changes when the representations contain a formal proof that the languages generated are in the desired subclass of languages.

http://hdl.handle.net/1813/7439 (1977-07-01)
Title: On Log-Tape Isomorphisms of Complete Sets
Authors: Hartmanis, Juris
Abstract: In this paper we study $\log n$-tape computable reductions between sets and investigate conditions under which $\log n$-tape reductions between sets can be extended to $\log n$-tape computable isomorphisms of these sets. As an application of these results we obtain easy to check necessary and sufficient conditions that sets complete under $\log n$-tape reductions in NL, CSL, P, NP, PTAPE, etc. are $\log n$-tape isomorphic to the previously known complete sets in the respective classes. As a matter of fact, all the "known" complete sets for NL, CSL, P, NP, PTAPE, etc. are now easily seen to be, respectively, $\log n$-tape isomorphic. These results strengthen and extend substantially the previously known results about polynomial time computable reductions and isomorphisms of NP and PTAPE complete sets. Furthermore, we show that any set complete in CSL, PTAPE, etc. must be dense and therefore, for example, cannot be over a single letter alphabet.

http://hdl.handle.net/1813/7111 (1976-12-01)
Title: On Polynomial Time Isomorphism of Complete Sets
Authors: Berman, L.; Hartmanis, Juris
Abstract: In this note we show that the recently discovered NP complete sets arising in number theory, the PTAPE complete sets arising in game theory and EXPTAPE complete sets arising from algebraic word problems are polynomial time isomorphic to the previously known complete sets in the corresponding categories.

http://hdl.handle.net/1813/7101 (1975-10-01)
Title: On Isomorphisms and Density of NP and Other Complete Sets
Authors: Hartmanis, Juris; Berman, L.
Abstract: If all NP complete sets are isomorphic under deterministic polynomial time mappings (p-isomorphic) then P $\neq$ NP and if all PTAPE complete sets are p-isomorphic then P $\neq$ PTAPE. We show that all NP complete sets known (in the literature) are indeed p-isomorphic and so are the known PTAPE complete sets. Thus showing that, in spite of the radically different origins and attempted simplification of these sets, all the known NP complete sets are identical but for polynomially time bounded permutations. Furthermore, if all NP complete sets are p-isomorphic then they must have similar densities and, for example, no language over a single letter alphabet can be NP complete, nor can any sparse language over an arbitrary alphabet be NP complete. We show that complete sets in EXPTIME and EXPTAPE cannot be sparse and therefore they cannot be over a single letter alphabet. Similarly, we show that the hardest context-sensitive languages cannot be sparse. We also relate the existence of sparse complete sets to the existence of simple combinatorial circuits for the corresponding truncated recognition problem of these languages.

http://hdl.handle.net/1813/7016 (1977-03-01)
Title: Relations Between Diagonalization, Proof Systems, and Complexity Gaps
Authors: Hartmanis, Juris
Abstract: In this paper we study diagonal processes over time-bounded computations of one-tape Turing machines by diagonalizing only over those machines for which there exist formal proofs that they operate in the given time bound. This replaces the traditional "clock" in resource bounded diagonalization by formal proofs about running times and establishes close relations between properties of proof systems and existence of sharp time bounds for one-tape Turing machine complexity classes. Furthermore, these diagonalization methods show that the Gap Theorem for resource bounded computations does not hold for complexity classes consisting only of languages accepted by Turing machines for which it can be formally proven that they run in the required time bound.

Hartmanis, Juris; Berman, L. http://hdl.handle.net/1813/6988
2007-12-09T12:51:23Z 1975-05-01T00:00:00Z Title: A Note on Tape Bounds for SLA Language Processing Authors: Hartmanis, Juris; Berman, L. Abstract: In this note we show that the tape bounded
complexity classes of languages over single letter alphabets are closed under complementation. We then use this result to show that there exists an infinite hierarchy of tape bounded complexity
classes of sla languages between log n and log log n tape bounds. We also show that every infinite sla language recognizable on less than log n tape has infinitely many different regular subsets,
and, therefore, the set of primes in unary notation, P, requires exactly log n tape for its recognition and every infinite subset of P requires at least log n tape. 1975-05-01T00:00:00Z Hartmanis,
Juris Chang, Richard Ranjan, Desh Rohatgi, Pankaj http://hdl.handle.net/1813/6957 2007-12-09T13:30:53Z 1990-04-01T00:00:00Z Title: Structural Complexity Theory: Recent Surprises Authors: Hartmanis,
Juris; Chang, Richard; Ranjan, Desh; Rohatgi, Pankaj Abstract: This paper reviews the impact of some recent results on the research paradigms in structural complexity theory. 1990-04-01T00:00:00Z
Hartmanis, Juris http://hdl.handle.net/1813/6910 2007-12-09T12:59:31Z 1989-04-01T00:00:00Z Title: Godel, von Neumann and the P=?NP Problem Authors: Hartmanis, Juris Abstract: In a 1956 letter, Godel
asked von Neumann about the computational complexity of an NP complete problem. In this column, we review the historic setting of this period, discuss Godel's amazing letter and why von Neumann did
not solve the P = ?NP problem. 1989-04-01T00:00:00Z | {"url":"http://ecommons.library.cornell.edu/feed/atom_1.0/1813/14935","timestamp":"2014-04-20T21:05:12Z","content_type":null,"content_length":"12712","record_id":"<urn:uuid:c2f3e64c-e870-4664-b07f-619460b04722>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Matheology § 191
Date: Jan 13, 2013 4:45 PM
Author: Virgil
Subject: Re: Matheology § 191
In article
WM <mueckenh@rz.fh-augsburg.de> wrote:
> There are not uncountably many finite (initial segments of) paths. And
> also any anti-diagonal can only differ from other paths in its (and
> their) finite initial segments. Unless your silly idea of nodes at
> level aleph_0 was correct (it is not) there is no chance to differ at
> other places than finite (initial segments of) paths. But that is
> impossible if all of them are already there. And the latter is
> possible, because they form a countable set.
A set which WM cannot count!
The definition of a set being countable is that there is a surjection
from |N to that set.
Thus in order to PROVE a set is countable one must show a surjection
from |N to that set, which is just a listing, possibly with repetitions,
of that set's members.
But any listing of the paths of a Complete Infinite Binary Tree (as
infinite binary sequences) proves itself incomplete.
Thus the set of paths cannot be made to fit the "countable" definition.
At least not outside of WMytheology!
Chapter 1: Introduction

1- Define: surveying, the meter, 1 radian, 1 degree, 1 grad.
2- What are the primary types of surveying? What are the basic types of surveying (according to the applications)? What are the types of surveying (according to the method of surveying)?
3- Using suitable sketches, outline the basic methods of locating a detail point with respect to a surveying line.
4- What is meant by working from the whole to the part (use a sketch)?
5- Convert 123.8765 grads to deg.min.sec and to radians.
6- Convert 2.765476 rad to deg.min.sec and to grads.
7- Make a comparison between small scale maps and large scale maps.
8- What are the benefits of using graphical scales?
9- A distance of 83.87 m was measured by a 30 m tape. After the measurements, it was found that the first 15 cm of the tape was cut. What is the correct distance?
10- The following distances were measured with a 30 m tape: a- 18.52 m, b- 52.86 m.
After the measurements it was found that the zero point was not at the beginning of the tape but set back by 28 cm, and the tape was mistakenly held at the tip of the tape. Find the corrected distances.
11- Convert 129.7865 grads into deg.min.sec; convert 5.876544 rad into grads.
12- If cosec α = 2.675432, find α in grads.
13- A man bought a piece of land of 20 m × 25 m = 500 m^2. Later he found that the 25 m side is sloped by 20%, while the 20 m side is horizontal. What is the actual area he bought?
14- If an area measured on a map plotted with a scale of 1/2000 equals 400 cm^2, what would the area be on a map having a scale of 1/5000?
Path integral over probability functional
Hi. Can anyone tell me how to solve the path integral
[tex] \int D F \exp \left\{ - \frac{1}{2} \int_{t'}^{t} d \tau \int_{t'}^{\tau} ds F(\tau) A^{-1}(\tau - s) F(s) + i \int_{t'}^{t} d\tau F(\tau) \xi(\tau) \right\} [/tex]
In case my LaTeX doesn't work, the integral is over all possible forces F of the functional

\exp \left\{ - \frac{1}{2} \int_{ t' } ^{ t } d \tau \int_{ t' } ^{ \tau } ds F( \tau ) A^{-1} ( \tau - s ) F( s ) + i \int_{ t' } ^{t} d \tau F( \tau ) \xi ( \tau ) \right\}

I have tried to solve it by taking the discrete Fourier transform of the functions F, A^{-1} and \xi, but I run into some trouble when doing that.
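For reference, the standard route for such integrals (generic textbook material, not a worked answer to this specific thread): discretizing F(τ) on N grid points turns the expression into an ordinary multivariate Gaussian, for which

```latex
\int \mathrm{d}^N F \; \exp\left( -\tfrac{1}{2}\, F^{T} M F + i\, F^{T} \xi \right)
  \;=\; \frac{(2\pi)^{N/2}}{\sqrt{\det M}}\,
        \exp\left( -\tfrac{1}{2}\, \xi^{T} M^{-1} \xi \right),
```

valid for symmetric positive-definite M. Taking M to be the discretized (symmetrized) kernel A^{-1}(\tau - s) then formally gives a result proportional to \exp\{ -\tfrac{1}{2} \int d\tau \int ds\, \xi(\tau) A(\tau - s) \xi(s) \}, up to a \xi-independent normalization.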
Weird Binary
Imagine you are in charge of humanity's search-for-extraterrestrials program. One day, after scanning the skies, you find a signal. The signal consists of a series of pulses, and after a little bit
of work, you discover that they form an image 197×199 in size. The image contains what looks like simple arithmetic. However, no normal number-system seems to work.
101110 + 100100 = 100010
Welcome to the world of "weird binary". So what exactly is going on? It turns out that even though a number system can be based on two different symbols, it doesn't have to be the traditional binary
system we all know and love. These new binary number systems have differing properties, and come with their own strengths and weaknesses.
So what defines a binary number system? Firstly we require two symbols. Let us use "0" and "1" for simplicity. We also require a place-based number system, which we will assume operates in the normal
right-to-left increasing powers of a single base. Note that we could imagine a left-to-right system, but that just corresponds to using the inverse of a right-to-left base, so doesn't add anything
Once we have this, we can define a simple multiplication table; anything multiplied by zero is zero, and 1×1=1. Similarly, addition by zero is idempotent. Thus only 1+1=2 needs to be defined, since
"2" doesn't fit in our binary system. By choosing different representations of the number 2, we can define different binary number systems. Thus by enumerating these choices we can see which weird
binary systems exist.
Before we will proceed, it is nice to define a format for such numbers. Unfortunately, the complexity of the addition operation will be quite large. This is due to the carries being unlike those of
normal binary numbers. Thus, to simplify the carry calculation, we will provide an integer per bit. Such a layout would look like the following in C:
#include <stdio.h>
#include <string.h>

static void print_wb(unsigned *x, int num_size)
{
    int i;
    char c[2] = {' ', '1'};

    for (i = num_size - 1; i > 0; i--)
    {
        printf("%c", c[x[i]]);

        /* Print zeros as spaces until the leading one has appeared */
        if (x[i] == 1) c[0] = '0';
    }

    printf("%c\n", x[0] + '0');
}

static void clear_wb(unsigned *x, int num_size)
{
    memset(x, 0, num_size * sizeof(unsigned));
}

static void copy_wb(unsigned *x, unsigned *y, int num_size)
{
    memcpy(y, x, num_size * sizeof(unsigned));
}

static void carry_wb(unsigned *x, int num_size, unsigned *add_spec, int add_spec_size)
{
    int i, j;
    unsigned c;

    for (i = 0; i < num_size; i++)
    {
        if (x[i] <= 1) continue;

        /* Replace each pair of ones by the representation of two */
        c = x[i] / 2;
        x[i] &= 1;

        for (j = 0; (j < add_spec_size) && (j < num_size - 1 - i); j++)
        {
            x[j + i + 1] += c * add_spec[j];
        }
    }
}

static void add_wb(unsigned *x, unsigned *y, int num_size, unsigned *add_spec, int add_spec_size)
{
    int i;

    for (i = 0; i < num_size; i++)
    {
        x[i] += y[i];
    }

    carry_wb(x, num_size, add_spec, add_spec_size);
}

static void mult(unsigned *x, unsigned *y, int num_size, unsigned *add_spec, int add_spec_size)
{
    int i, j;
    unsigned t[num_size];

    for (i = 0; i < num_size; i++)
    {
        t[i] = x[i];
        x[i] = 0;
    }

    for (i = 0; i < num_size; i++)
    {
        if (t[i])
        {
            for (j = 0; j < num_size; j++)
            {
                /* Note the >= here, so that x[num_size] is never touched */
                if (j + i >= num_size) break;
                x[j + i] += y[j];
            }
        }
    }

    carry_wb(x, num_size, add_spec, add_spec_size);
}
The above has the form of the weird binary encoded in the value of the number 2. This specification is stored within the add_spec array, of size add_spec_size. So long as the numbers aren't so large that the carries overflow an unsigned integer, the above can be used to add and multiply arbitrary precision weird binary numbers.
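To make the layout concrete, here are add_spec arrays for several of the systems discussed below (these initializers are my own worked examples of the convention, not code from the original article; recall that digit zero of two is always 0 in the interesting systems, so add_spec[j] holds digit j+1):

```c
/* add_spec[j] = digit j+1 of the representation of two */
static unsigned two_binary[]     = {1};             /* 1+1 = 10     (base 2)               */
static unsigned two_negabinary[] = {1, 1};          /* 1+1 = 110    (base -2)              */
static unsigned two_cplx[]       = {0, 1, 1};       /* 1+1 = 1100   (base -1+i)            */
static unsigned two_isqrt2[]     = {0, 1, 0, 1};    /* 1+1 = 10100  (base i*sqrt(2))       */
static unsigned two_alien[]      = {1, 1, 1, 0, 1}; /* 1+1 = 101110 (base (1+i*sqrt(7))/2) */
```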
To create weird binary numbers, we can use induction. We know the value of one, and can then add it multiple times to obtain any natural number we require.
static void create_wb_natural(unsigned *x, int num_size, unsigned *add_spec, int add_spec_size, int n)
{
    unsigned temp[num_size];
    unsigned temp2[num_size];

    /* Clear values */
    clear_wb(x, num_size);
    clear_wb(temp, num_size);
    temp[0] = 1;

    /* Log2(n) steps */
    while (n)
    {
        /* Is the current bit set? */
        if (n & 1) add_wb(x, temp, num_size, add_spec, add_spec_size);

        /* Move to the next bit */
        n = n / 2;

        /* Double the size of the thing to add */
        copy_wb(temp, temp2, num_size);
        add_wb(temp, temp2, num_size, add_spec, add_spec_size);
    }
}
These C functions assume that the symbolic representation for the number two lies solely to the left of the radix point. This means that carries only propagate to the left. Allowing carries to
propagate to the left and right simultaneously makes it rather difficult to construct an addition routine. (In effect it is possible to make a linear cellular automaton this way, and such things are
subject to the halting problem.) Of course, some cases are solvable. i.e. The case where 1+1=10.1 yields b+1/b=2, and thus b=1. This base-one solution also appears in other situations, see the 1+1=11
case below.
1+1=0

The first possibility results in a rather boring number system. In this system, addition works like the xor operator. Multiplication still mixes number places together. However, no carries take place, so all number positions operate as if they are alone with the "addition" step. This system is somewhat useful in cryptography, and Intel has added the PCLMULQDQ instruction to perform this operation.
1+1=1

This number system is even less interesting, with addition acting like a logical-or. Once you have a 1, you can never remove it. This means that negative numbers cannot exist in this system, since
you can never add two non-zero numbers to get zero.
1+1=10

This is the traditional binary number system used in computers. By using twos-complement arithmetic, we can represent negative numbers. All normal mathematical operations work as you would expect in
this system, (unlike the previous two systems). However, as you can see, this isn't the end of the story, as several more interesting binary systems exist.
1+1=11

Any other binary system that defines two as ending with a 1 is problematic. In such systems, you cannot form minus one. (In other words, the equation 1 + y = 0 has no solution.) This severely
restricts the usefulness of such systems. However, there is one system which is of note. 1+1=11 describes the "stick counting" system of natural numbers. The more sticks you have, the larger the
number. The total number of sticks exactly corresponds to the number you have. Another way of looking at this system, is saying that it is "base 1". Unfortunately, working in base 1 is extremely
inefficient, as the exponential savings in symbol compression don't happen. i.e. 999 only requires three symbols to write in decimal... but would require 999 symbols in base 1.
Since all the interesting systems will define two ending with a zero symbol, we can now evaluate what negative numbers are. This can be done by first calculating what minus one is, and then
multiplying that by the correct natural number.
static void find_m1(unsigned *m1, int num_size, unsigned *add_spec, int add_spec_size)
{
    int i, j;
    unsigned t[num_size];

    clear_wb(m1, num_size);
    clear_wb(t, num_size);

    /* Handle position zero: 1 + 1 = two, which starts at position one */
    m1[0] = 1;
    for (i = 0; i < add_spec_size; i++)
    {
        if (i + 1 >= num_size) break;
        t[i + 1] = add_spec[i];
    }

    /* Greedily cancel each remaining one in the running sum 1 + m1 */
    for (j = 1; j < num_size; j++)
    {
        if (!t[j]) continue;

        m1[j] = 1;
        for (i = 0; i < add_spec_size; i++)
        {
            if (i + 1 + j >= num_size) break;
            t[i + j + 1] += add_spec[i];
        }

        carry_wb(t, num_size, add_spec, add_spec_size);
    }
}

static void create_wb_integer(unsigned *x, int num_size, unsigned *add_spec, int add_spec_size, int n,
        unsigned *m1)
{
    /* Are we a natural number? */
    if (n >= 0)
    {
        create_wb_natural(x, num_size, add_spec, add_spec_size, n);
        return;
    }

    /* Negative number */
    create_wb_natural(x, num_size, add_spec, add_spec_size, -n);

    /* Multiply by minus one */
    mult(x, m1, num_size, add_spec, add_spec_size);
}
1+1=100

Using the above, we have b^2=2, where b is the base. Thus, we find that the bases that satisfy this system are ±√2. This system thus can exactly store values proportional to the square root of two.
This comes at a cost, numbers written in this form are twice as large as in normal binary. Normal binary can approximate the square root of two by including digits below the radix point. By using
enough digits, the error can be made as small as needed. This means that this system is only really useful when exact calculations are required.
1+1=110

This system is defined by b^2+b=2, and thus b=1, or b=-2. It turns out that the b=1 solution is spurious, leaving the base = -2, or "negabinary" number system. This number system is fairly well
known. See Knuth's Art of Computer Programming, Seminumerical Algorithms for a discussion of its properties. Since this case is very similar to normal binary, (it is only the odd positions that
vary), there exist some fast algorithms to convert from binary to negabinary and back. There are also fast ways to add, multiply and divide these numbers.
Such numbers are more uniform than normal binary. No twos-complement trick is required to represent negative numbers. Thus there is no longer a difference between signed and unsigned multiplication.
A table of the representation of such numbers is:
-16: 110000
-15: 110001
-14: 110110
-13: 110111
-12: 110100
-11: 110101
-10: 1010
-9: 1011
-8: 1000
-7: 1001
-6: 1110
-5: 1111
-4: 1100
-3: 1101
-2: 10
-1: 11
0: 0
1: 1
2: 110
3: 111
4: 100
5: 101
6: 11010
7: 11011
8: 11000
9: 11001
10: 11110
11: 11111
12: 11100
13: 11101
14: 10010
15: 10011
16: 10000
1+1=1000

This system is somewhat similar to the case where 1+1=100. It is a "fat binary" system, where the base is the cube root of two, instead of the square root. The result is that numbers take three times as much space as normal. Such a system is only useful if you need exact arithmetic with such numbers.
1+1=1010

This case has b^3+b=2, and has the following three solutions b=1, b=(-1±i√7)/2. The first base-one case is again spurious, leaving the two bases that are complex conjugates of each other. These bases
allow one to calculate complex arithmetic using a single binary string. (The previous case also allowed complex solutions, but symmetry prevented them from being usefully different from the real
Integers in this representation look like:
-16: 101110000
-15: 101110001
-14: 101111010
-13: 101111011
-12: 101010100
-11: 101010101
-10: 101011110
-9: 101011111
-8: 101011000
-7: 101011001
-6: 100010
-5: 100011
-4: 111100
-3: 111101
-2: 110
-1: 111
0: 0
1: 1
2: 1010
3: 1011
4: 11100100
5: 11100101
6: 11101110
7: 11101111
8: 11101000
9: 11101001
10: 11100110010
11: 11100110011
12: 11001100
13: 11001101
14: 11100010110
15: 11100010111
16: 11100010000
Notice how the length of the numbers doesn't increase monotonically as they increase away from zero. This is due to the fact that the boundary of numbers of a given symbol length in this system is a
fractal on the complex number plane:
1+1=1100

This case has b^3+b^2=2, resulting in b = 1, -1±i. Again, the base one solution is spurious. The resulting systems are quite well known, and are often presumed as the only complex binary ones. As can
be seen by the previous case, this isn't so. See the book "Hacker's Delight" for a description, and the inverse problem of solving for the number two in these systems.
Integers in this representation are:
-16: 1110100000000
-15: 1110100000001
-14: 1110100001100
-13: 1110100001101
-12: 11010000
-11: 11010001
-10: 11011100
-9: 11011101
-8: 11000000
-7: 11000001
-6: 11001100
-5: 11001101
-4: 10000
-3: 10001
-2: 11100
-1: 11101
0: 0
1: 1
2: 1100
3: 1101
4: 111010000
5: 111010001
6: 111011100
7: 111011101
8: 111000000
9: 111000001
10: 111001100
11: 111001101
12: 100010000
13: 100010001
14: 100011100
15: 100011101
16: 100000000
The symbol lengths for numbers in this system are increasing more rapidly than in the previous case. This is due to the fact that the previous system in effect de-weights imaginary numbers by a
factor of the square root of seven. This weighting means that the previous system cannot exactly represent numbers such as the imaginary unit, i. In exchange for the larger verbosity, the current
system doesn't suffer this problem.
Again, it turns out that the symbol lengths do not increase monotonically away from zero. The numbers of a given length form a "Dragon Fractal":
1+1=1110

This final case where the number two is represented by four symbols isn't particularly interesting. The cubic equation for the base produces horribly messy solutions. Only when you would like exact
arithmetic with such a base would this system be better than others already discussed.
1+1=10000

This is yet another fat-binary case, where the base is the fourth root of two. Other than that, it is uninteresting.
1+1=10010

This yields a quartic equation, which unfortunately produces messy solutions just like the 1+1=1110 case. It isn't useful.
1+1=10100

This gives the equation b^4+b^2=2, having the solution b=±1,±i√2. Ignoring the non-imaginary solutions yields the interesting case of a pure-imaginary base. (Which one of the two we choose doesn't
matter due to symmetry under complex conjugates.) This number system is also mentioned in Knuth's Art of Computer Programming, in where he states that its disadvantage over the -1+i system is the
fact that the complex unit is represented by an infinitely long non-repeating string. However, if we use floating-point, then the small error introduced is usually ignorable. This is especially true
because this system has no trouble exactly representing integers:
-16: 10100000000
-15: 10100000001
-14: 10100010100
-13: 10100010101
-12: 10100010000
-11: 10100010001
-10: 1000100
-9: 1000101
-8: 1000000
-7: 1000001
-6: 1010100
-5: 1010101
-4: 1010000
-3: 1010001
-2: 100
-1: 101
0: 0
1: 1
2: 10100
3: 10101
4: 10000
5: 10001
6: 101000100
7: 101000101
8: 101000000
9: 101000001
10: 101010100
11: 101010101
12: 101010000
13: 101010001
14: 100000100
15: 100000101
16: 100000000
This complex number system technically is also fractal, but the system of nested rectangles isn't particularly complicated:
1+1=10110 ... 1+1=100010
These systems are again like 1110 and 10010, except with quartic and quintic equations needing to be solved. The resulting solutions are complex expressions containing several nested roots, and as a result do not make very interesting bases.
1+1=101110

This case has b^5+b^3+b^2+b=2, which amongst its solutions has b=(1±i√7)/2. Thus this case is very similar to that of 1+1=1010, and has the integer table:
-16: 110110000
-15: 110110001
-14: 101011110
-13: 101011111
-12: 101010100
-11: 101010101
-10: 101010010
-9: 101010011
-8: 101101000
-7: 101101001
-6: 101110110
-5: 101110111
-4: 1100
-3: 1101
-2: 1010
-1: 1011
0: 0
1: 1
2: 101110
3: 101111
4: 100100
5: 100101
6: 100010
7: 100011
8: 1111000
9: 1111001
10: 10111000110
11: 10111000111
12: 101100011100
13: 101100011101
14: 101100011010
15: 101100011011
16: 101100010000
And fractal boundary of:
This is the base used by the aliens described in the introduction. Their crazy mathematical statement is simply showing that 2+4=6.
Creating complex numbers in weird binary
As was shown in the beginning, creating natural numbers is easy, induction can be used to create any number once we have the definition of the number two. Integers can be created once we can evaluate
what minus one is, which again only depends on the definition of the number two. Unfortunately, complex numbers aren't so simple. There, we need to know which of the possibly many solutions for the
base we are using in order to obtain a value for the imaginary unit, i.
It turns out that the last number system described contains the identity i√7=1+b+b^2, when b = (1+i√7)/2. Thus we can simply divide by the square root of seven, and use induction again to evaluate
any pure imaginary integer. There is one other case though, the fact that the base includes the factor of 1/2 means that this system also has half-integer complex numbers. These can be created by
adding or subtracting the base, and simplifying the problem into the integer case. A C function which does this is:
#include <math.h>
#include <complex.h>

static void b_to_wb(unsigned *x, int num_size, unsigned *add_spec, int add_spec_size, complex double n,
        unsigned *m1)
{
    double r = creal(n);
    double im = cimag(n);
    double eps = 1e-8;

    unsigned temp[num_size];
    unsigned imval[num_size];

    /* Clear output */
    clear_wb(x, num_size);
    clear_wb(imval, num_size);

    /* Are we not an integer? */
    if (fabs(floor(r + eps) - r) > eps)
    {
        /* Half integer: subtract the base b = (1 + i*sqrt(7))/2 and
         * record digit one, reducing to the integer case */
        r -= 0.5;
        im -= 0.5 * sqrt(7.0);
        x[1] = 1;
    }

    /* Create real part */
    create_wb_integer(temp, num_size, add_spec, add_spec_size, (int) r, m1);
    add_wb(x, temp, num_size, add_spec, add_spec_size);

    /* Scale imaginary part */
    im /= sqrt(7.0);

    /* i*sqrt(7) = 1 + b + b^2 */
    imval[0] = 1;
    imval[1] = 1;
    imval[2] = 1;

    /* Create imaginary part */
    create_wb_integer(temp, num_size, add_spec, add_spec_size, (int) im, m1);
    mult(temp, imval, num_size, add_spec, add_spec_size);

    /* Combine real and imaginary parts */
    add_wb(x, temp, num_size, add_spec, add_spec_size);
}
For other weird binary bases, such as b=-1+i, the procedure is somewhat different, especially in that case, where i can be represented exactly.
The reverse process, of converting a weird binary number back to binary is relatively simple. We just add the relevant powers of the base. C code that does this is:
static complex double wb_to_b(unsigned *x, int num_size, complex double base)
{
    int i;
    complex double b = base;
    complex double out = x[0];

    for (i = 1; i < num_size; i++)
    {
        out += b * x[i];
        b *= base;
    }

    return out;
}
Are there any other interesting bases for weird binary? It turns out that no, there aren't. For a base to be interesting, its complex squared norm must be equal to two. A pure imaginary base with
this is the b=±i√2, discussed above. Similarly, there is the pure real case b=±√2 also described. This leaves complex cases. Normalizing, we have:
b = [±1±i√ (2n^2-1)]/n
If we evaluate b^2, we have:
b^2 = 2[(1-n^2) ± i√(2n^2-1)]/n^2
We need 2/n^2, when reduced to lowest terms, to be a multiple of 1/n. If this isn't the case, then we can never build a terminating expression that evaluates to be the number two. The denominators will
increase without limit as we increase the powers of the base, and no cancellation will occur. To prevent that, n needs to be 1 or 2. If n is one, then we get b = ±1 ±i, and if n is two, we obtain the
other pure complex solutions; b = (±1±i√7)/2.
So binary number systems can be quite complicated. However, they unfortunately cannot represent quaternions or octonions, since the complex numbers are algebraically closed, so the roots of the base polynomials never leave the complex plane. But still, as can be seen, there are several complex binary number systems, some more well known than others.
Justin said...
Not to pick nits, but when you say "anything multiplied by zero is zero, and 1×1=1. Similarly, addition by zero is idempotent", wouldn't it be more useful to say that zero is the additive identity, and that *multiplication* by zero is idempotent?
Both addition and multiplication by zero are idempotent, but idempotence is implied in the case of addition by zero being invariant.
Very interesting post however! Hopefully no aliens read this.
Awana56 said...
Someday I will come back to this page and make myself fully comprehend this.
Warren D. Smith said...
I don't agree with your proof at the end that "there are no other interesting bases for weird binary." You are assuming B has a specific quadratic irrational
form without justification, and indeed earlier you had made assumptions
contradicting that (note your remarks about quartic and quintic equations).
However... despite your proof being bogus, I suspect its conclusion is correct.
The demand that
sum(k=0..D) B^k * {0 or 1} = 1+1
where B=squareroot(2)*exp(i*q) is a complex number on the circle of radius squareroot(2), is (for any given D-bit string) 2 equations (both the real & imaginary parts) with only one real degree of
freedom (q) usable to solve them. So in general we would expect no solutions to exist, hence it is a miracle anytime a solution exists, hence presumably there are only a finite set of solutions.
My computer just did a search for all solutions with D<=10 and it does not agree that your quartic/quintic cases 1+1=10110 and 1+1=100010 and 1+1=10010 are even solutions at all. It claims these 6
are the only solutions:
base=sqrt2*exp(i*q)= 1.414214+0.000000i where q=0.000000(rad)=  0.000000(deg), 1+1=0000000100
    note sqrt(2)=1.41421356237309504880
base=sqrt2*exp(i*q)= 0.500000+1.322876i where q=1.209429(rad)= 69.295189(deg), 1+1=0000101110 *
    note sqrt(7)/2=1.32287565553229529525
base=sqrt2*exp(i*q)= 0.000000+1.414214i where q=1.570796(rad)= 90.000000(deg), 1+1=0000010100 *
base=sqrt2*exp(i*q)=-0.500000+1.322876i where q=1.932163(rad)=110.704811(deg), 1+1=0000001010 *
base=sqrt2*exp(i*q)=-1.000000+1.000000i where q=2.356194(rad)=135.000000(deg), 1+1=0000001100 *
base=sqrt2*exp(i*q)=-1.414214+0.000000i where q=3.141593(rad)=180.000000(deg), 1+1=0000000100 *
I was looking at this because I was interested in "halvable objects." A compact connected measurable set in d-dimensional space is "halvable" if it can be cut into two 2^(-1/d) scaled copies of itself.
I proved the only 2D halvable objects are the 45-45-90 right triangle and
parallelograms with sidelength ratio squareroot(2). That is for CONVEX ones.
However... upon seeking nonconvex halvable 2D objects, I discovered the
weird binary examples, where the set in the complex plane is all sums of
B^(-k) * {0 or 1}
for k=1,2,3...
and B is any base of a (complex) weird binary system. It would be nice to create a picture of all these with one in red, other in blue, showing how the red and blue translated copies (translated by 0
and 1) fit together to make a sqrt(2)-times larger copy of the same shape (rotated). You've already made some pictures that nearly are what I want, but not quite.
I do not currently know if any other 2D halvable objects exist.
--Warren D. Smith
warren.wds AT gmail.com
IBRAHIM said...
Dear sir,
kindly help me to send the answer of THE SQUARE ROOT OF 100100 in base two TO BASE TWO
ishaq umar abdullahi said...
Enter your comments here please kindly assist us with the answer.thanks for your anticipated perusal.
sfuerst said...
Sorry you two, this website isn't particularly helpful when you want others to do your homework. ;-)
jet bundle
Differential geometry
For $p : P \to X$ a surjective submersion of smooth manifolds and $k \in \mathbb{N}$, the order-$k$ jet bundle $J_k P \to X$ is the bundle whose fiber over a point $x \in X$ is the space of equivalence classes of germs of sections of $p$, where two germs are considered equivalent if their first $k$ derivatives coincide.
General abstract
We discuss a general abstract definition of jet bundles.
For $X \in \mathbf{H}$, write $\mathbf{\Pi}_{inf}(X)$ for the corresponding de Rham space object.
Notice that we have the canonical morphism
$i : X \to \mathbf{\Pi}_{inf}(X)$
(“inclusion of constant paths into all infinitesimal paths”).
In the context of D-schemes this is (Beilinson–Drinfeld, 2.3.2). See (Paugam, section 2.3) for a review. There this is expressed dually in terms of algebras in D-modules. We indicate how the translation works:
A quasicoherent (∞,1)-sheaf on $X$ is a morphism of (∞,2)-sheaves
$X \to Mod \,.$
We write
$QC(X) := Hom(X, Mod)$
for the stable (∞,1)-category of quasicoherent (∞,1)-sheaves.
A D-module on $X$ is a morphism of (∞,2)-sheaves
$\mathbf{\Pi}_{inf}(X) \to Mod \,.$
We write
$DQC(X) := Hom(\mathbf{\Pi}_{inf}(X), Mod)$
for the stable (∞,1)-category of D-modules.
The Jet algebra functor is the left adjoint to the forgetful functor from commutative algebras over $\mathcal{D}(X)$ to those over the structure sheaf $\mathcal{O}(X)$
$(Jet \dashv F) : Alg_{\mathcal{D}(X)} \stackrel{\overset{Jet}{\leftarrow}}{\underset{F}{\to}} Alg_{\mathcal{O}(X)} \,.$
Typical Lagrangians in quantum field theory are defined on jet bundles. Their variational calculus is governed by Euler-Lagrange equations.
The abstract characterization of jet bundles as the direct images of base change along the de Rham space projection is noticed on p. 6 of
The explicit description in terms of formal duals of commutative monoids in D-modules is in
An exposition of this is in section 2.3 of
Standard textbook references include
• G. Sardanashvily, Fibre bundles, jet manifolds and Lagrangian theory, Lectures for theoreticians, arXiv:0908.1886
• Shihoko Ishii, Jet schemes, arc spaces and the Nash problem, arXiv:math.AG/0704.3327
• D. J. Saunders, The geometry of jet bundles, London Mathematical Society Lecture Note Series 142, Cambridge Univ. Press 1989.
A discussion of jet bundles with an eye towards discussion of the variational bicomplex on them is in chapter 1, section A of
• Ian Anderson, The variational bicomplex (pdf)
Discussion of jet-restriction of the Haefliger groupoid is in
• Arne Lorenz, Jet Groupoids, Natural Bundles and the Vessiot Equivalence Method, Thesis (pdf)
Source localization
From Scholarpedia
Neuroelectromagnetic source imaging (NSI) is the scientific field devoted to modeling and estimating the spatiotemporal dynamics of neuronal currents throughout the brain that generate the electric
potentials and magnetic fields measured with noninvasive or invasive electromagnetic (EM) recording technologies (Ha93; Ba01; Ha05). Unlike the images produced by fMRI , which are only indirectly
related to neuroelectrical activity through neurovascular coupling (e.g., the BOLD signal), the current source density or activity images that NSI techniques generate are direct estimates of
electrical activity in neuronal populations. In the past few decades, researchers have developed better source localization techniques that are robust to noise and that are well informed by anatomy,
neurophysiology, magnetic resonance imaging (MRI), and realistic volume conduction physics (Io90; Ge95; Fu99; Gr01; Vr01; Li02; Wo03). State-of-the-art algorithms can localize many simultaneously
active sources and even determine their variable spatial extents and single-trial dynamics.
Neuronal origin of the electromagnetic signals
The measured EM signals that are generated by the brain are thought to be primarily due to ionic current flow in the apical dendrites of cortical pyramidal neurons and the associated return currents
throughout the volume conductor (Ok93; Ok97). Neurons that have dendritic arbors with closed field geometry (e.g., interneurons) produce no externally measurable signals (Hu69). However, some
non-pyramidal neurons, such as the Purkinje neurons, do produce externally measurable signals (Ni71). Although still controversial, it is believed that some source localization methods can accurately
image the activity of deep brain structures, such as the basal ganglia, amygdala, hippocampus, brain stem, and thalamus (Ri91; Vo96; Te97; Li99). However, single neurons produce weak fields and if
the current flow is spatiotemporally incoherent (e.g., a local desynchronization) the fields end up canceling. Thus, EM recordings are particularly suited for studying spatiotemporally coherent and
locally synchronized collective neural dynamics. There is a limit to how much current density a patch of cortex can support and this seems to be similar across species (Ok93). Thus, large amplitude
fields/potentials entail distributed synchronized oscillations.
Measurement modalities
All the EM recording technologies share the important benefit of high temporal resolution (< 1 ms). However, they measure different, yet closely related, physical quantities at different spatial
scales that are generated by the current density source distribution located throughout the brain, heart, muscles, eyes, and the environment. In principle, the inverse methods described here can be
applied to data acquired using magnetoencephalography (MEG), scalp electroencephalography (sEEG), intracranial EEG (iEEG) or Electrocorticography (ECoG), or combinations of these measurement
modalities, as long as the appropriate realistic forward models are available to compute the gain vectors needed for localization (Wo03; Wo04; Wo06).
Magnetoencephalography (MEG)
In MEG, an array of sensors is used to measure components of the magnetic vector field surrounding the head (Ha93; Vr01). Although the first magnetoencephalogram was recorded with a single coil (Co68
), after the invention of the superconducting quantum interference device (SQUID) (Zi71), MEG data has been recorded with SQUIDs (Co72). Current state-of-the-art systems have many sensors (>275)
organized as a helmet-array of magnetometers and/or gradiometers (planar or radial). Distant reference sensors are used to eliminate noise (Vr01). Also, magnetic shielded rooms have been improved and
active shielding technology has been developed to further reduce the effects of sources outside the shielded room and to reduce the need for heavily shielded rooms.
Electroencephalography (EEG)
In sEEG, an array of electrodes is placed on the scalp surface to measure the electric potential scalar field relative to a reference electrode (Ni05; Nu05). EEG recording technology has progressed
much since the first human recordings by Hans Berger in 1924 (Be29) and the later work by Edgar Adrian (Ad34). Modern state-of-the-art EEG systems use caps with as many as 256 active electrodes. Some
EEG systems can be used for simultaneous EEG/MEG or EEG/fMRI recordings. Researchers are also working on wireless acquisition and on dry electrode technologies that do not use conductive gel.
Invasive electrical recordings
In patients undergoing iEEG or ECoG, grid or strip electrode arrays are neurosurgically placed to record the electric potential more closely and undistorted by the skull (La03; Le05; Ca06). Grid
electrode arrays are commonly rectangular in shape and have inter-electrode distances of about 1 cm or lower. Invasive measurements of the local field potential (LFP) can be recorded by depth
electrodes, electrode grids, and laminar electrodes (Pe54; Ul01; Sa03). Invasive recordings are usually not localized because the LFP is treated as strictly local. However, there is also a local
mixing process that generates the LFPs, and electrodes can pick up far field potentials of strong sources. Thus, inverse models are still desired.
An overview of all the models
This section presents a quick overview of several aspects of modeling that directly affect source estimation, even though they are not technically inverse modeling per se.
Source modeling
The source model refers to the mathematical model used to approximate the current density. The source model most often used is the equivalent electric dipole model, which approximates the primary
current density within a patch or volume as a point source \(\mathbf{j^p(r')}=\mathbf{q}\delta(\mathbf{r'}-\mathbf{r})\ ,\) where \(\delta(\mathbf{r})\) is the Dirac delta function with moment \(\
mathbf{q}=\textstyle \int \mathbf{j^p(r')}\mathbf{dr'}\) (Ha93; Ba01). Other source models, such as the blurred dipole (i.e., a dipole with surrounding monopoles), the quadrupole, or higher
multipolar expansions, can also be used for forward modeling. In addition, an alternative irrotational current density source model, which excludes the solenoidal component of the current density,
can be used to formulate the forward problem as a mapping from the scalar field of local field potentials throughout the brain to the scalar field of measured potentials of EEG or ECoG. This approach
reduces the number of unknowns, thereby making the inverse problem less underdetermined. This irrotational source model approach for forward/inverse modeling is called ELECTRA (Gr00). Once the LFP
vector is estimated, the irrotational current density vector can be computed by taking its gradient. Although we will assume a dipole source model for the rest of this article, the linear system of
equations of ELECTRA can also be solved with many of the inverse methods explained here.
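As a minimal numerical illustration of the equivalent-dipole source model, the sketch below evaluates the electric potential of a current dipole in an infinite homogeneous conductor, \(V(\mathbf{r}) = \mathbf{q}\cdot(\mathbf{r}-\mathbf{r}_0)/(4\pi\sigma|\mathbf{r}-\mathbf{r}_0|^3)\). This deliberately ignores all boundary and return-current effects of a realistic head model, and the default conductivity is only a typical brain value:

```python
import numpy as np

def dipole_potential(r, r_q, q, sigma=0.33):
    """Electric potential (V) of a current dipole with moment q (A*m) at r_q,
    in an infinite homogeneous conductor of conductivity sigma (S/m):
    V(r) = q . (r - r_q) / (4*pi*sigma*|r - r_q|^3)."""
    d = np.asarray(r, float) - np.asarray(r_q, float)
    return q @ d / (4.0 * np.pi * sigma * np.linalg.norm(d) ** 3)

# A z-oriented dipole at the origin: the potential is positive "above" it,
# negative "below" it, and zero on the equatorial plane.
q = np.array([0.0, 0.0, 1e-8])              # 10 nA*m moment
v_above = dipole_potential([0.0, 0.0, 0.1], [0, 0, 0], q)
v_below = dipole_potential([0.0, 0.0, -0.1], [0, 0, 0], q)
v_side  = dipole_potential([0.1, 0.0, 0.0], [0, 0, 0], q)
```

The antisymmetric positive/negative pattern is the familiar dipolar field topography; realistic BEM/FEM forward models replace this closed form but keep the same linear dependence on the moment \(\mathbf{q}\).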
Anatomical modeling
An MRI scan must be obtained from each individual for doing subject-specific anatomical modeling. The T1 scan is segmented into white and gray matter volumes from which topologically correct white
and gray matter cortical surfaces are tessellated. Sources can then be constrained to reside only in the gray matter volume by constructing a volumetric source space (i.e., a set of coordinates where
candidates dipoles are allowed to have nonzero amplitudes). However, the cortical gray matter volume is usually modeled as a surface since NSI technology probably does not have the spatial resolution
to discriminate sources at different cortical layers. With this surface approach, dipole orientations can easily be constrained to point normal to the cortical surfaces (i.e., in the direction of the
apical dendrites of pyramidal neurons). Non-cortical structures can be modeled as volumetric source sub-spaces without dipole orientations constraints. MRI scans are also used to segment other head
subvolumes (e.g., scalp/skin, skull, CSF, and brain envelope volume) and to tessellate their associated surfaces (e.g., scalp/skin, inner and outer skull, and brain envelope surfaces), which are used for forward modeling of volume conduction. If a subject's MRI is not available, a standardized MRI brain atlas (e.g., MNI) can be warped to optimally fit the subject's anatomy based on the
subject's digitized head shape. The warped atlas can then be used as a standardized volumetric source space and for standardized forward modeling.
Coordinate transformations
To compute the forward EM signals that would be produced by a unit dipole somewhere in the brain, both the sensor and dipole positions and orientations must be expressed in the same coordinate
system. This is usually done by transforming (i.e., translating and rotating based on fiducial markers) the sensor positions and orientations to the coordinate system of the MRI where the dipoles are
modeled. Errors in finding the true fiducial marker coordinates in the MRI space can result in poor co-registration. For better co-registration an automatic algorithm (e.g., iterative closest point
(ICP) matching) can be used to match the digitized head-shape or EEG electrode surface to the skin surface segmented from the MRI. Even with optimal co-registration, one must keep in mind that some
of the modeled meshes may be misaligned with the true brain structures not only due to transformation errors but also due to intrinsic structural errors in the MRI data sets (e.g., due to
susceptibility artifacts and/or mistaken tissue segmentation).
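The fiducial-based co-registration step can be sketched as a least-squares rigid alignment (the SVD-based Kabsch algorithm); the fiducial coordinates below are made-up illustrative numbers, not real data:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping the
    point set `src` onto `dst` (rows are corresponding 3D points), via the
    SVD-based Kabsch algorithm. Returns (R, t) with dst ~ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so the result is a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Fiducials (nasion, LPA, RPA) in "device" coordinates and after a known
# rotation + translation (hypothetical numbers, for illustration only).
fid_dev = np.array([[0.00, 0.10, 0.0], [-0.07, 0.0, 0.0], [0.07, 0.0, 0.0]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
fid_mri = fid_dev @ R_true.T + np.array([0.01, -0.02, 0.03])

R, t = rigid_align(fid_dev, fid_mri)
err = np.abs(fid_dev @ R.T + t - fid_mri).max()   # alignment residual
```

ICP-style head-shape matching mentioned above iterates this same alignment step after re-pairing each digitized point with its nearest surface point.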
Forward modeling
In order to localize the primary current density one has to model the EM signals produced by both the primary (i.e., impressed) and the secondary (i.e., volume, return) current density throughout the
head volume conductor, which in reality has an inhomogenous and anisotropic conductivity profile. Analytic MEG forward solutions can be computed if the volume conductor is approximated by an
isotropic sphere or a set of overlapping spheres (Sa87; Hu99). The same is true for EEG but using concentric spherical shells with different isotropic conductivities. The level of realism of the
forward volume conduction head model is particularly important since the measured signals have significant contributions from volume currents. The different volume conductor materials (e.g., scalp,
skull layers, CSF, gray matter, white matter) have different conductivities and different levels of anisotropy. These need to be modeled in great spatial detail to compute a realistic basis of gain vectors used to represent the data. Much progress has been made toward realistic EM forward modeling using numerical techniques such as the Boundary Element Method (BEM) and the Finite Element Method
(FEM) (Ha87; Ak04; Wo06). BEM assumes homogenous and isotropic conductivity through the volume of each tissue shell (e.g., brain, CSF, skull, skin) but not across the boundaries of the shells. FEM
usually also assumes homogeneity and isotropy within each tissue type, but in contrast to BEM, can also be used to model the anisotropy of white matter (using DTI) and that of the skull's spongiform
and compact layers (using geometric models). Although realistic modeling exploits any available subject-specific information from Magnetic Resonance Imaging (MRI) (e.g., T1, T2, PD, DTI),
standardized BEM or FEM head models can be used as a first approximation for subjects without an MR scan (Fu02; Da06).
Inverse modeling
The goal of inverse modeling, the topic of this paper, is to estimate the location and strengths of the current sources that generate the measured EM data. This problem of source localization is an
ill-posed inverse problem. There are an infinite number of solutions that explain the measured data equally well because silent sources (i.e., sources that generate no measurable EM signals) exist,
and these can always be added to a solution without affecting the data fit (He53). Because of this nonuniqueness, a priori information is needed to constrain the space of feasible solutions (Sa87;
Ha93). Nonuniqueness is handled by making assumptions about the nature of the sources (e.g., number of sources, anatomical and neurophysiological constraints, prior probability density functions,
norms, smoothness, correlation, covariance models, sparsity, diversity measures, spatial extent constraints, etc.). Thus, the accuracy and validity of the estimates depend to some extent on the
biological correctness of the assumptions and priors adopted in our models. This is why priors should not only be informed by neurophysiology domain knowledge, but should also be flexible and
adaptive to particular data sets. The rest of this paper focuses on different inverse modeling approaches. In such a short space it would be impossible to mention all inverse algorithms that exist.
Thus, we focus on three basic approaches that encompass most algorithms: 1) non-linear dipole or source fitting; 2) spatial scanning or beamforming; and 3) source-space based algorithms explained
within a general Bayesian framework.
Data pre-processing
Before localization, data is usually preprocessed with uni- and multivariate techniques. The measured univariate time-series of each electric or magnetic recording channel is typically represented as
a row vector, and is usually cleaned by mean subtraction, high- and low-pass filtering, detrending, harmonic analysis, and/or downsampling. Signals can also be transformed to the time-frequency/scale
domains with Fourier/wavelet methods, or can be band-pass filtered and Hilbert transformed to extract the instantaneous amplitude and phase (Ta96; Se99; Gr01; Fr07). Frequency-specific noise can
easily be filtered out in the frequency or time domains. Filtering or time-frequency analysis is also useful for studying wavelike activity and oscillations at specific frequency bands: the slow
oscillation (<1 Hz), delta (1-4 Hz), theta (5-8Hz), alpha (8-12 Hz), mu (8-12 Hz and 18-25 Hz), spindles (~14 Hz), low beta (13-20 Hz), high beta (20-30 Hz), gamma (30-80 Hz), high gamma or omega
(80-200 Hz), ripples (~200 Hz), and sigma bursts (~600 Hz).
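A minimal sketch of the band-pass plus Hilbert (analytic-signal) step described above, using a brick-wall FFT filter on synthetic data; a real pipeline would use a tapered filter and handle edge effects:

```python
import numpy as np

def bandpass_analytic(x, fs, f_lo, f_hi):
    """Zero-phase FFT band-pass of a real signal followed by the analytic
    signal (FFT implementation of the Hilbert transform). Returns the
    instantaneous amplitude and phase."""
    n = len(x)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(x)
    X[(np.abs(f) < f_lo) | (np.abs(f) > f_hi)] = 0.0   # brick-wall band-pass
    h = np.zeros(n)                   # analytic-signal frequency weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0           # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0               # keep Nyquist bin as-is
    analytic = np.fft.ifft(X * h)     # negative frequencies are zeroed
    return np.abs(analytic), np.angle(analytic)

# A 10 Hz "alpha" oscillation buried in 50 Hz line noise (synthetic data).
fs = 256.0
t = np.arange(1024) / fs
x = 2.0 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
amp, phase = bandpass_analytic(x, fs, 8.0, 12.0)   # alpha-band envelope/phase
```

The recovered envelope is flat at the oscillation's amplitude, and the phase advances linearly at 10 Hz, exactly the instantaneous amplitude/phase quantities used in the analyses cited above.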
The continuously and simultaneously acquired time-series of all MEG, EEG, and/or ECoG channels can be concatenated to form a multivariate data matrix \(\mathbf{B}\in\Re^{{d_m}\times{d_t} }\ ,\) where
\(d_m\) is the number of measurement channels and \(d_t\) is the number of time points. Correlated noise, which is generated by non-brain sources such as the heart, eyes, and muscles, can be filtered
out using blind source separation, subspace methods, and machine learning methods (Ma97; Uu97; Ma02; Ta02; Pa05; Zu07). Different linear transformations that use only the second-order statistics of the data can be applied to the data for noise removal (e.g., signal-space projection (SSP), Principal Component Analysis (PCA)). The denoised \(\mathbf{B}\) matrix can be cut into epochs time-locked
to an event (e.g., stimulus onset) for single trial analysis or averaged across epochs to extract the event-related potential and/or field (ERP/F) (Lu05). The ERP/F can then be localized by many
different inverse methods. Alternatively, blind source separation algorithms that use higher order statistics or temporal information (e.g., infomax/maximum-likelihood independent component analysis
(ICA) or second-order blind identification (SOBI)), can be applied to the entire unaveraged multivariate time series to learn a data representation basis of sensor mixing vectors (associated with
maximally independent time-courses) that can be localized separately, and to reject non-brain components (i.e., denoising) (Be95; Ma97; Ma02; Ta02). Time-frequency analysis can be performed on the
activation functions of each component, and the source event related spectral perturbation (ERSP), or inter-trial coherence (ITC) can be computed (De04). Convolutive ICA algorithms in the time and
time-frequency domains can model more complex spatiotemporal source dynamics (An03; Dr07; Le07).
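The epoching-and-averaging step for extracting an ERP/F can be sketched on synthetic data as follows (channel count, event timing, and waveform are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

fs, d_m = 100, 8                     # sampling rate (Hz), channels
n_trials, epoch_len = 60, 50
erp = np.sin(2 * np.pi * 5 * np.arange(epoch_len) / fs)   # hypothetical 5 Hz ERP

# Continuous recording B: the same ERP repeats at known event onsets,
# superimposed on unit-variance sensor noise on every channel.
onsets = np.arange(n_trials) * 100 + 25
B = rng.standard_normal((d_m, n_trials * 100))
for o in onsets:
    B[:, o:o + epoch_len] += erp

# Cut B into event-locked epochs and average across trials (cf. Lu05).
epochs = np.stack([B[:, o:o + epoch_len] for o in onsets])  # (trials, d_m, time)
erp_hat = epochs.mean(axis=0)        # averaged evoked response, (d_m, time)
```

Averaging suppresses the noise by a factor of \(\sqrt{n_{trials}}\), which is why the averaged evoked response closely tracks the embedded waveform even at this low single-trial SNR.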
Regardless of any transformation or averaging on the data, the data to be simultaneously solved can be represented as a \({d_m}\times{d_v}\) real or complex matrix \(\mathbf{B}\ ,\) which contains \
(d_v\) measurement column vectors. For example, if \(\mathbf{B}\) is a time-series, \({\mathbf{b}}_\tau = (b_1 ,...,b_\tau )^T\) is the \(d_m\) dimensional measurement column vector at time \(\tau\ .
\) But \(\mathbf{B}\) need not be a time-series; it can be any given set of vectors obtained from the data that benefit from simultaneous source localization (e.g., a subspace of the data).
When \(d_v=1\ ,\) the single measurement problem is recovered. This case is also used for localizing individual sensor maps obtained from a decomposition of the data (e.g., ICA).
Parametric dipole modeling
The goal of inverse methods is to estimate the location and strengths of the current sources that generate \(\mathbf{B}\ .\) However, because of nonuniqueness a priori assumptions are needed to
constrain the space of feasible solutions. The most common assumption is that the measurements were generated by a small number of brain regions that can be modeled using equivalent dipoles. These
algorithms minimize a data-fit cost function defined in a multidimensional space with dimension equal to the number of parameters. Once the algorithm converges to a local minimum in the multidimensional space of parameters, the optimal parameters (each corresponding to a dimension) are found. The algorithm estimates five nonlinear parameters per dipole: the x, y, and z dipole
position values, and the two angles necessary to define dipole orientations in 3D space. However, in the MEG spherical volume conductor model only one angle (on the tangent space of the sphere) is
necessary because the radial dipole component is silent. The amplitudes are linear parameters estimated directly from the data as explained below for the cases of uncorrelated or correlated noise.
Parametric dipole fitting algorithms minimize a data-fit cost function such as the Frobenius norm of the residual, \[\tag{1} \min_{\mathbf{s}} ||\mathbf{B}-\mathbf{\hat B}||_F^2=||\mathbf{B}-\mathbf{L_s\hat J_s}||_F^2=||\mathbf{B}-\mathbf{L_sL_s^{\dagger}B}||_F^2=||(\mathbf{I}-\mathbf{L_sL_s^{\dagger}})\mathbf{B}||_F^2=||\mathbf{P_{L_s}^\perp}\mathbf{B}||_F^2\]
where \(\mathbf{s}\) refers to all the nonlinear parameters, \(\mathbf{s}=\lbrace \mathbf{r}_i , \mathbf{\theta}_i \rbrace\ ,\) which are the positions and orientations of all the dipoles in the
model that are optimized to minimize the data fit cost (Sc91; Ba01). \(\mathbf{\hat B}\) is the explained forward data given by the generative linear model\[\mathbf{\hat B}=\mathbf{L_s\hat J_s}\ ,\]
where \(\mathbf{\hat J_s}=\mathbf{L_s^{\dagger}B}\in\Re^{{d_s}\times{d_v} }\) is the estimated current matrix containing the moments of \(d_s\) dipoles. The ith row vector of \(\mathbf{\hat J_s}\)
contains the moments of a dipole located at position \(\mathbf{r}_i\) with orientation \(\mathbf{\theta}_i\ .\) \(\mathbf{L_s}\) is the lead-field matrix containing \(d_s\) m-dimensional column
vectors called gain vectors. They are computed for \(d_s\) unit dipoles located at different positions \(\mathbf{r}_i=(x_i,y_i,z_i)^T\) and with different orientations \(\mathbf{\theta}_i=(\phi_i,\
omega_i)^T\ .\) The orientations of the dipoles however can be obtained linearly if we optimize only the positions and include the gain vectors of all 3 dipole components pointing in the (x,y,z)
directions, so that \(\mathbf{L_s}\) is a \(d_m\) by \(3d_s\) matrix and \(\mathbf{J_s}\) is a \(3d_s\) by \(d_v\) matrix. \(\mathbf{I}\) is the \(d_m\)-dimensional identity matrix, \(\mathbf{L_s}^{\
dagger}\) is the pseudoinverse of \(\mathbf{L_s}\ ,\) and \(\mathbf{P_{L_s}^\perp}\) is the orthogonal projection operator onto the null space of \(\mathbf{L_s}\ .\) Note that in this approach the
gain matrix needs to be recomputed at each iteration for any given parameters. Also, note that this approach is equivalent to a maximum likelihood estimate of the parameters using a Gaussian
likelihood noise model.
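Numerically, Eq. (1) reduces to projecting the data onto the orthogonal complement of the span of the gain vectors; the linear moments drop out via the pseudoinverse. The sketch below uses a random (hypothetical) gain matrix rather than a real leadfield:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_cost(B, L_s):
    """Frobenius-norm dipole-fit cost ||P_perp B||_F^2 of Eq. (1): project
    the data onto the orthogonal complement of span(L_s). The linear moments
    are absorbed by the pseudoinverse: J_s = pinv(L_s) @ B."""
    P = L_s @ np.linalg.pinv(L_s)     # orthogonal projector onto span(L_s)
    R = B - P @ B                     # residual (I - L_s L_s^+) B
    return np.sum(R ** 2)

# Toy example: 32 sensors, one dipole with 2 tangential components.
L_s = rng.standard_normal((32, 2))    # hypothetical gain vectors
J_s = rng.standard_normal((2, 50))    # true moments over 50 samples
B = L_s @ J_s                         # noiseless data

cost_true = fit_cost(B, L_s)          # vanishes: correct source model
cost_wrong = fit_cost(B, rng.standard_normal((32, 2)))  # wrong gain vectors
```

A nonlinear optimizer would recompute `L_s` for each trial position/orientation and keep the parameters with the smallest cost, exactly the iteration described above.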
In the presence of correlated noise, a modified cost function can be minimized (Sa87). If \(\mathbf{C}_\varepsilon\) is the noise covariance matrix, which can be decomposed as \(\mathbf{C}_\
varepsilon=\mathbf{V}\Lambda^2 \mathbf{V}^T\ ,\) where \(\mathbf{V}\) is an orthonormal matrix, and \(\Lambda=diag(\lambda _1 , \ldots ,\lambda_{d_m})\ ,\) then the new cost function can be expressed
as \[\tag{2} \min_{\mathbf{s}} \left\| {{\mathbf{PB} } - {\mathbf{P\hat B} } } \right\|_F^2 = \left\| {{\mathbf{PB} } - {\mathbf{PL_s} } {\mathbf{\hat J_s} } } \right\|_F^2\ ,\]
where \(\mathbf{P}= \mathbf{V}\Lambda^{-1}\mathbf{V}^T\ .\)
Global minimization
These cost functions are usually minimized using nonlinear optimization algorithms (e.g., Nelder-Meade downhill simplex, Levenberg-Marquardt). Unfortunately, when the number of dipoles is increased
(e.g., \(d_s>2\)), the cost suffers from many local minima. Furthermore, it should be noted that by adding a spatial term to the data-fit cost function dipoles can be constrained to reside as close
as desired to the gray matter volume. However, such spatial penalties can introduce more local minima problems. Global minimization can be achieved for a time-series containing maybe up to seven
dipoles by using more computationally intensive algorithms such as simulated annealing, multistart simplex algorithms, or genetic algorithms (Hu98; Uu98). Alternatively, instead of selecting a point
estimate, one can use Markov Chain Monte Carlo algorithms to make Bayesian inferences about the number of sources and their spatial extents and to compute probabilistic maps of activity anatomically
constrained to gray matter (Sc99; Be01). It is important to distinguish the cost function from the optimization algorithms. Although the standard costs for dipole fitting have many local minima,
other costs, such as the negative log marginal likelihood (see the sparse Bayesian learning section), have fewer local minima and can also be minimized with nonlinear dipole fitting algorithms.
Spatial scanning and beamforming
An alternative approach to the ill-posed bioelectromagnetic inverse problem is to independently scan for dipoles within a grid containing candidate locations (i.e., source points). Here the goal is
to estimate the activity at a source point or region while avoiding the crosstalk from other regions, so that they affect the estimate at the region of interest as little as possible.
Matched filter
The simplest spatial filter, a matched filter, is obtained by normalizing the columns of the leadfield matrix and transposing this normalized dictionary. The spatial filter for location s is given by
\[\tag{3} {\mathbf{W}}_s^T = \frac{{ {{\mathbf{L} }_s^T } } }{ {\left\| {{\mathbf{L} }_s } \right\|_F } }\ .\] This approach essentially projects the data to the column vectors of the dictionary.
Although this guarantees that when only one source is active, the absolute maximum of the estimate corresponds to the true maximum, this filter is not recommended since this single-source assumption
is usually not valid, and since the spatial resolution of the filter is so low given the high correlation between dictionary columns. This approach can be extended to fast recursive algorithms, such
as matching pursuit and its variants, which sequentially project the data or residual to the non-used dictionary columns to obtain fast suboptimal sparse estimates.
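A sketch of the matched filter of Eq. (3) on a synthetic leadfield; with a single noise-free source, the peak of the map lands on the true location, which is exactly the single-source guarantee noted above:

```python
import numpy as np

rng = np.random.default_rng(1)

d_m, d_s = 32, 200
L = rng.standard_normal((d_m, d_s))       # hypothetical leadfield (unit dipoles)

# Matched filter of Eq. (3): normalize each gain vector, then project the
# data onto the normalized dictionary.
W = L / np.linalg.norm(L, axis=0)         # column-normalized dictionary

true_src = 17
b = 5.0 * L[:, true_src]                  # single-source, noise-free data
estimate = W.T @ b                        # matched-filter source map
found = int(np.argmax(np.abs(estimate)))  # location of the map's peak
```

Because dictionary columns are highly correlated in practice, this map is very blurred; the more selective filters below (MUSIC, LCMV) exist precisely to sharpen it.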
Multiple signal classification (MUSIC)
The MUSIC cost function is given by \[\tag{4} M_s = \frac{ {\left\| {\left( {{\mathbf{I} } - {\mathbf{U} }_s {\mathbf{U} }_s^T } \right){\mathbf{L} }_s } \right\|_2^2 } }{ {\left\| {{\mathbf{L} }_s }
\right\|_2^2 } } = \frac{ {\left\| {P_{{\mathbf{U} }_s }^ \bot {\mathbf{L} }_s } \right\|_2^2 } }{ {\left\| {{\mathbf{L} }_s } \right\|_2^2 } }\ ,\] where \({\mathbf{B}} = {\mathbf{USV}}^T\) is the
singular value decomposition (SVD) of the data, \({\mathbf{U}}_s \) is a matrix with the first \(d_s\) left singular vectors that form the signal subspace, and \(\mathbf{L}_s\) is the gain vector for
the dipole located at \(r_i\) and with orientation \(\theta_i\) (obtained from anatomy or using the generalized eigenvalue decomposition) (Mo98). \(P_{\mathbf{U}_s }^ \perp\) is an orthogonal
projection operator onto the data noise subspace. The MUSIC map is the reciprocal of the cost function at all locations scanned. This map can be used to guide a recursive parametric dipole fitting procedure (e.g., RAP-MUSIC).
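Eq. (4) can be scanned over a grid in a few lines. In the sketch below the leadfield and the two source locations are synthetic, and the signal-subspace dimension is assumed known:

```python
import numpy as np

rng = np.random.default_rng(2)

d_m, d_s, d_t = 32, 200, 500
L = rng.standard_normal((d_m, d_s))       # hypothetical fixed-orientation leadfield

# Two asynchronous sources plus sensor noise.
srcs = [40, 120]
J = rng.standard_normal((2, d_t))
B = L[:, srcs] @ J + 0.1 * rng.standard_normal((d_m, d_t))

# MUSIC scan of Eq. (4): project each gain vector onto the noise subspace;
# the cost dips (its reciprocal peaks) at the true source locations.
U, S, Vt = np.linalg.svd(B, full_matrices=False)
Us = U[:, :2]                             # signal subspace (rank assumed known)
P_perp = np.eye(d_m) - Us @ Us.T          # noise-subspace projector
cost = np.sum((P_perp @ L) ** 2, axis=0) / np.sum(L ** 2, axis=0)
peaks = set(np.argsort(cost)[:2])         # two smallest costs
```

In practice the signal-subspace dimension must itself be estimated (e.g., from the singular-value spectrum), and correlated sources degrade the scan, which motivates the multi-source extensions discussed later.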
Linearly constrained minimum-variance (LCMV) beamforming
In contrast, the minimum variance beamformer attempts to minimize the beamformer output power subject to a unity gain constraint: \[\tag{5} \min_{{\mathbf{W} }_s} \quad tr\left( {{\mathbf{W} }_s^T {\
mathbf{CW} }_s } \right)\]
subject to \({\mathbf{W} }_s^T {\mathbf{L} }_s = {\mathbf{I} }\)
where \({\mathbf{C}}\) is the data covariance matrix, \({\mathbf{L}}_s\) is the \(d_m\) by 3 gain matrix at source point s, and \({\mathbf{W}}_s\) is the \(d_m\) by 3 spatial filtering matrix (Va97
). The solution to this problem is given by \[\tag{6} {\rm{ }}{\mathbf{W}}_s^T = \left( {{\mathbf{L} }_s^T {\mathbf{C} }^{ - 1} {\mathbf{L} }_s } \right)^{ - 1} {\mathbf{L} }_s^T {\mathbf{C} }^{ - 1}
\ .\]
The parametric source activity at source point s is given by \({\mathbf{A}}_{\rm{s}} = {\mathbf{W}}_s^T {\mathbf{B}}\ .\) This can be performed at each source-point of interest to obtain a
distributed map of activity. This beamforming approach can be expanded to a more general Bayesian graphical model that uses event timing information to model evoked responses, while suppressing
interference and noise sources (Zu07). This approach uses a variational Bayesian EM algorithm to compute the likelihood of a dipole at each grid location.
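A direct numerical sketch of Eqs. (5)-(6) on simulated data (the gain matrix and source time courses are random placeholders): the weights satisfy the unity-gain constraint exactly, and the beamformer output tracks the simulated source activity:

```python
import numpy as np

rng = np.random.default_rng(3)

d_m, d_t = 32, 2000
L_s = rng.standard_normal((d_m, 3))       # hypothetical gain matrix at point s

# Simulate data with an active source at s plus uncorrelated sensor noise.
J = rng.standard_normal((3, d_t))         # true 3-component source activity
B = L_s @ J + 0.5 * rng.standard_normal((d_m, d_t))
C = np.cov(B)                             # data covariance (32 x 32)

# LCMV weights of Eq. (6): W_s^T = (L_s^T C^-1 L_s)^-1 L_s^T C^-1.
Ci = np.linalg.inv(C)
W_sT = np.linalg.solve(L_s.T @ Ci @ L_s, L_s.T @ Ci)

unity_gain = W_sT @ L_s                   # equals the 3x3 identity by construction
A_s = W_sT @ B                            # source-activity estimate at point s
```

Because \(\mathbf{W}_s^T \mathbf{L}_s = \mathbf{I}\) holds exactly, the only estimation error is leaked noise and interference, which the minimum-variance criterion suppresses.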
Synthetic aperture magnetometry (SAM)
In synthetic aperture magnetometry (SAM), a nonlinear beamformer, an optimization algorithm is used at each source-point to find the dipole orientation that maximizes the ratio of the total source power to noise power, the so-called pseudo-Z deviate \[\tag{7} Z = \sqrt{\frac{{\mathbf{W}}_s^T {\mathbf{C}}{\mathbf{W}}_s}{{\mathbf{W}}_s^T {\mathbf{C}}_n {\mathbf{W}}_s}} = \sqrt{\frac{P}{n}}\]
where \({\mathbf{C}}_n\) is the noise covariance, usually based on some control recording or assumed to be a multiple of the identity matrix (Vr01). This maximization generates a scalar beamformer
(i.e., \({\mathbf{L}}_s\) is a vector) with optimal dipole orientations in terms of SNR. This improves the spatial resolution of SAM relative to that of LCMV beamforming. To generate statistical
parametric maps between an active task period (a) and a control period (c), the so-called pseudo-T statistic can be computed as \[\tag{8} T = \frac{P(a) - P(c)}{n(a) + n(c)}\ .\]
Such maps usually have more focal activities since they contrast the differences between two states. Other scalar beamformers can be implemented. For example, an anatomically constrained beamformer
(ACB) can be obtained by simply constraining the dipole orientations to be orthogonal to the cortical surfaces (Hi03).
Dynamic imaging of coherent sources (DICS)
Beamforming can be performed in the frequency domain using the dynamic imaging of coherent sources (DICS) algorithm, whose spatial filter matrix for frequency f is given by \[\tag{9} {\rm{ }}{\mathbf
{W}}_s^T (f) = \left( {{\mathbf{L} }_s^T {\mathbf{C} }(f)^{ - 1} {\mathbf{L} }_s } \right)^{ - 1} {\mathbf{L} }_s^T {\mathbf{C} }(f)^{ - 1}\]
where \({\mathbf{C}}(f)\) is the cross-spectral density matrix for frequency f (Gr01). Note that the covariance matrix has simply been replaced by the cross-spectral density matrices.
Other spatial filtering methods
All the spatial filter vectors explained so far depend on the gain vectors associated only with the region of interest (i.e., they don't depend on the gain vectors associated with the rest of the
source space). There are other more direct approaches to spatial filtering that incorporate the gain vectors associated with both the region of interest and the rest of the source space, and that do
not necessarily use the measured covariance matrix. In the Backus-Gilbert method, a different spread matrix is computed for each candidate source location (Gr98; Gr99). The goal is to penalize the
side lobes of the resolution kernels (i.e., the row vectors of the resolution matrix, defined as \(\mathbf{R}=\mathbf{O}\mathbf{L}\ ,\) where \(\mathbf{L} \) is the leadfield matrix for the entire
sourcespace (see next section) and \(\mathbf{O}\) is any optimized linear operator that gives the source estimates when multiplied with the data). This usually results in a wider main lobe. In the
spatially optimal fast initial analysis (SOFIA) algorithm, virtual leadfields are constructed that are well concentrated within a region of interest compared to the rest of the sourcespace (Bo99).
The region of interest can be moved to every source-point. A similar approach is adopted in the Local Basis Expansion (LBEX) algorithm which solves a generalized eigenvalue problem to maximize the
concentration of linear combinations of leadfields (Mi06). As a final remark, it should be emphasized that all of the spatial filtering algorithms presented scan one source-point or local region at a
time, but can be expanded with multi-source scanning protocols that search through combinations of sources. Although multi-source scanning methods can recover perfectly synchronized sources (which
are usually missed by single-source methods), there is no agreed protocol to scan the immense space of multisource configurations.
Sourcespace-based distributed and sparse methods
Instead of performing low-dimensional nonlinear optimization or spatial scanning, one can assume dipoles at all possible candidate locations of interest within a grid and/or mesh called the
sourcespace (e.g., source-points in grey matter) and then solve the underdetermined linear system of equations \[\tag{10} {\mathbf{B}} = {\mathbf{LJ}} + \Upsilon\]
for \(\mathbf{\hat J}\in\Re^{d_s \times d_v}\) (\(d_s\) now being the number of dipole components throughout the sourcespace). The leadfield matrix \(\mathbf{L}\in\Re^{d_m \times d_s}\) relates the current space with the measurement space. \(\Upsilon \in \Re^{d_m \times d_v}\) is the noise matrix, usually assumed to be Gaussian. Since there is no unique solution to this
problem, priors are needed to find solutions of interest. The remaining algorithms are best presented within a general Bayesian framework that makes the source prior assumptions explicit using probability density functions. Bayes' theorem \[\tag{11} p({\mathbf{J}}|{\mathbf{B}}) = \frac{{p({\mathbf{B} }|{\mathbf{J} })p({\mathbf{J} })} }{{p({\mathbf{B} })} }\]
says that the posterior probability given the measurements is equal to the likelihood multiplied by the marginal prior probability and divided by a normalization factor called the evidence.
Bayesian maximum a posteriori probability (MAP) estimates
A Gaussian likelihood model is assumed, \[\tag{12} p({\mathbf{B}}|{\mathbf{\hat J}}) = \exp \left( { - \frac{1}{{2\sigma ^2 } }\left\| {{\mathbf{B} } - {\mathbf{L\hat J} } } \right\|_F^2 } \right)\] together with a very useful family of prior models using generalized Gaussian marginal probability density functions (pdf) \[\tag{13} p({\mathbf{\hat J}}) \propto \exp \left( { - {\mathop{\rm sgn}}(p)\sum\limits_{i = 1}^{n} {\left\| {{\mathbf{\hat J} }(i,\;:)} \right\|_q^p } } \right)\]
where p specifies the shape of the pdf, which determines the sparsity of the estimate, and q (usually 2) specifies the norm of \({\mathbf{\hat J}}(i,\,:)\ ,\) the ith row vector of \({\mathbf{\hat J}}\ .\) Because \(p({\mathbf{B}})\) does not affect the location of the posterior mode, it can be ignored, and thus the MAP point estimate can be computed by \[\tag{14} {\mathbf{\hat J}}_{MAP} = \arg \; \max_{{\mathbf{\hat J} } } \;\ln p({\mathbf{\hat J} }|{\mathbf{B} }) = \;\ln p({\mathbf{B} }|{\mathbf{\hat J} }) + \ln p({\mathbf{\hat J} }) = \ln p_\Upsilon ({\mathbf{B} } - {\mathbf{L\hat J} }) + \ln p({\mathbf{\hat J} })\ .\]
Maximizing the log posterior (a monotonic function of the posterior) rather than the posterior itself shows that these MAP approaches are equivalent to algorithms that minimize p-norm-like measures \[\tag{15} \min_{\mathbf{\hat J}} \left\| {{\mathbf{B} } - {\mathbf{L\hat J} } } \right\|_F^2 + \lambda {\mathop{\rm sgn} } (p)\,\sum_{i = 1}^{n} {\left\| {{\mathbf{\hat J} }(i,\;:)} \right\|_q^p } ,\;\forall \left\| {{\mathbf{\hat J} }(i,\;:)} \right\|_q \ne 0\]
where \(\lambda\) is the noise regularization parameter, which can be fixed across iterations, learned, stabilized (e.g., l-curve, generalized cross validation), or adjusted to achieve a desired
representation error using the discrepancy principle, \[\tag{16} {\rm{ }}\left\| {{\mathbf{B} } - {\mathbf{L\hat J} } } \right\|_F^2 = \varepsilon = {\rm{ } }\left\| \Upsilon \right\|_F^2\ .\]
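For intuition, the penalized cost of eq. 15 is easy to evaluate directly. In this minimal numpy sketch (toy dimensions; the `map_cost` helper is hypothetical, not from any toolbox), the exponent p switches between the Gaussian (p=2) and Laplacian (p=1) row penalties:

```python
import numpy as np

def map_cost(B, L, J, lam, p, q=2):
    """Eq. 15: Frobenius data-fit term plus a p-norm-like row penalty."""
    fit = np.linalg.norm(B - L @ J, 'fro') ** 2
    row_norms = np.linalg.norm(J, ord=q, axis=1)   # ||J(i,:)||_q for each row
    return fit + lam * np.sign(p) * np.sum(row_norms ** p)

rng = np.random.default_rng(1)
L = rng.standard_normal((6, 15))    # 6 sensors, 15 source components
J = rng.standard_normal((15, 4))    # 4 time samples
B = L @ J                           # noiseless data: fit term vanishes at J

# p = 2: Gaussian prior (minimum-norm); p = 1: Laplacian prior (sparsity).
c2 = map_cost(B, L, J, lam=0.1, p=2)
c1 = map_cost(B, L, J, lam=0.1, p=1)
```

Only the penalty term differs between the two costs here, which is why the choice of p alone controls how sparse the minimizing estimate will be.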
MAP estimates using a Gaussian prior (p=2) are equivalent to noise-regularized minimum-\(\ell_2\)-norm solutions, \[\tag{17} {\mathbf{\hat J}} = {\mathbf{L}}^T \left( {{\mathbf{LL} }^T + \lambda {\mathbf{I} } } \right)^{ - 1}{\mathbf{B} }\ ,\] which are widely distributed and suffer from depth bias (i.e., deep sources are mislocalized to superficial source-points) (Ah92; Wa92; Ge98). Weighted minimum-norm algorithms can be used to partially compensate for this depth bias. The inverse operator is then given by \[\tag{18} {\mathbf{\hat J}} = {\mathbf{W}}_a {\mathbf{W}}_a^T {\mathbf{L}}^T \left( {{\mathbf{LW} }_a {\mathbf{W} }_a^T {\mathbf{L} }^T + \lambda {\mathbf{I} } } \right)^{ - 1} {\mathbf{B} }\] where \({\mathbf{W}}_a\) is a diagonal matrix (e.g., \({\mathbf{W}}_a = diag\left( {\left\| {l_i } \right\|_2^{ - 1} } \right)\ ,\) \({\mathbf{W}}_a = diag\left( {\left\| {l_i } \right\|_2^{ - 1/2} } \right)\ ,\) a 3-D Gaussian function, or fMRI priors) (Io90; Fu99). Non-diagonal \({\mathbf{W}}_a {\mathbf{W}}_a^T\) matrices can be used to incorporate source covariance and smoothness constraints (Pa99). To suppress correlated noise, the matrix \(\lambda {\mathbf{I}}\) can be replaced with the noise covariance matrix \(C_\upsilon\ ,\) which can be estimated from the data. Such approaches are
often called adaptive. Likewise, MAP estimates using a Laplacian prior (p=1) are equivalent to obtaining minimum-\(\ell_1\)-norm solutions. These are usually computed using linear programming, and have recently been called minimum current estimates (MCE) (Ma95; Uu99), but they can be computed efficiently for a time-series using a generalized Focal Underdetermined System Solver (FOCUSS) algorithm \[\tag{19} {\mathbf{\hat J}}_k = {\mathbf{W}}_k {\mathbf{W}}_k^T {\mathbf{L}}^T \left( {{\mathbf{LW} }_k {\mathbf{W} }_k^T {\mathbf{L} }^T + \lambda {\mathbf{I} } } \right)^{ - 1} {\mathbf{B} }\ ,\] where \[\tag{20} {\mathbf{W}}_k = {\mathbf{W}}_a diag\left( {\left\| {{\mathbf{\hat J} }_{k - 1} (i,:)} \right\|_q^{1 - {\textstyle{p \over 2} } } } \right)\] and \({\mathbf{\hat J}}_{k - 1}\) is the previous estimated matrix at iteration k-1 (Go95; Go97; Ra99; Ra02; Co05; Ra05; Ra08). Importantly, FOCUSS is equivalent to an Expectation Maximization (EM) algorithm used to find MAP estimates in a hierarchical Bayesian framework in which the hyperparameters are integrated out and the prior pdf is a Gaussian scale mixture (Wi07b; Fi02). To simultaneously localize a very long time-series very quickly, the time-series matrix \(\mathbf{B}\) or its covariance matrix can be decomposed with the SVD and replaced in eq. 19 with the matrix \(\mathbf{U}\left(\mathbf{S}\right)^{-1/2}\ ,\) where \(\mathbf{U}\) and \(\mathbf{S}\) are the left singular vector and singular value matrices (up to whatever rank desired), respectively. Although FOCUSS works optimally with a Laplacian prior (p=1), it can also be used to find MAP estimates with other generalized Gaussian priors. When p=-2, the Magnetic Field Tomography (MFT) algorithm is recovered if the update rule is based on the current modulus, there is only one iteration, and the a priori weight matrix is a 3-D Gaussian used for depth-bias compensation (Io90; Ri91; Ta99). Importantly, MAP estimates have different modes for different \(\mathbf{W}_a\) matrices (Wi07b), and computer simulations have shown that \({\mathbf{W}}_a = diag\left( {\left\| {l_i } \right\|_2^{ - 1} } \right)\) and p=1 work optimally (Ra05). If one is not sure whether to use a Gaussian or Laplacian prior, one can use Markov chain Monte Carlo (MCMC) methods to learn which \(\ell_p\)-norm is optimal for the particular data set (Au05).
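A minimal numpy sketch of the FOCUSS iteration of eqs. 19-20 (toy random leadfield; \(\mathbf{W}_a=\mathbf{I}\) for simplicity, i.e., no depth weighting; all dimensions and parameter values are illustrative):

```python
import numpy as np

def focuss(B, L, lam=0.01, p=1, q=2, n_iter=10):
    """Iteratively reweighted minimum-norm estimation (eqs. 19-20).

    The first pass (W = I) is the plain regularized minimum-norm solution
    of eq. 17; reweighting by the previous row norms (eq. 20) with p = 1
    drives the estimate toward a sparse solution.
    """
    d_m, d_s = L.shape
    W = np.eye(d_s)                     # W_a = I here: no depth-bias weighting
    for _ in range(n_iter):
        WWt = W @ W.T
        # Eq. 19: J_k = W W^T L^T (L W W^T L^T + lam I)^{-1} B
        J = WWt @ L.T @ np.linalg.solve(L @ WWt @ L.T + lam * np.eye(d_m), B)
        # Eq. 20 with W_a = I: reweight by the previous row norms
        row_norms = np.linalg.norm(J, ord=q, axis=1)
        W = np.diag(row_norms ** (1.0 - p / 2.0))
    return J

rng = np.random.default_rng(2)
L = rng.standard_normal((10, 40))               # 10 sensors, 40 source-points
J_true = np.zeros((40, 3))                      # 3 time samples
J_true[[4, 17]] = rng.standard_normal((2, 3))   # two active sources
B = L @ J_true

J_hat = focuss(B, L)
```

Rows whose norm shrinks at one iteration are down-weighted at the next, which is the mechanism behind the sparsifying behavior described above.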
Dynamic statistical parametric maps (dSPM)
Another approach to compensate for depth bias is the noise-normalized dynamic statistical parametric map (dSPM) technique (Da00; Li02). First, the linear inverse operator, which is equivalent to a correlated-noise-regularized pseudoinverse or Wiener operator, is computed by \[\tag{21} {\mathbf{P}} = {\mathbf{C}}_s {\mathbf{L}}^T \left( {{\mathbf{LC} }_s {\mathbf{L} }^T + {\mathbf{C} }_n } \right)^{ - 1}\] where \(\mathbf{C}_s\) is the source covariance matrix, usually assumed to be the identity matrix. Then the noise-normalized operator is computed, which in the case of fixed dipole orientations is given by \[\tag{22} {\mathbf{P}}_{norm} = diag\left( v \right)^{ - 1/2} {\mathbf{P}}\ ,\] where \(v = diag\left( {{\mathbf{PC} }_n {\mathbf{P} }^T } \right)\ .\) Note that dSPM performs noise normalization after the inverse operator \(\mathbf{P}\) has been computed.
An alternative approach for depth-bias compensation is the sLORETA technique (Pa02). In contrast to the dSPM method, the inverse operator is weighted as a function of the resolution matrix, \(\mathbf{R}=\mathbf{PL}\ ,\) that is associated with the inverse and forward operators \(\mathbf{P}\) and \(\mathbf{L}\ .\) For fixed dipole orientations, the pseudo-statistics of power and absolute activation are respectively given by \[\tag{23} \varphi_i=\frac{\mathbf{j}_i^2}{\mathbf{R}_{ii}}\]
and \(\phi_i=\sqrt{\varphi_i}\ .\)
Thus, the standardized sLORETA inverse operator is given by \[\tag{24} \mathbf{P}_{sloreta}=diag(r)^{-1/2}\mathbf{P}\]
where \(r=diag(\mathbf{R})\ .\) Interestingly, the sLORETA algorithm is equivalent to the first step of the Sparse Bayesian (SBL) algorithm explained in the next section.
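Both standardizations amount to rescaling the rows of the same linear operator, as this numpy sketch illustrates (toy random leadfield and diagonal noise covariance; all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
d_m, d_s = 8, 25
L = rng.standard_normal((d_m, d_s))   # leadfield (fixed orientations)
C_s = np.eye(d_s)                     # source covariance (identity, as in the text)
C_n = 0.1 * np.eye(d_m)               # noise covariance

# Eq. 21: Wiener-type inverse operator.
P = C_s @ L.T @ np.linalg.inv(L @ C_s @ L.T + C_n)

# dSPM (eq. 22): normalize each row by the noise power it passes through.
v = np.diag(P @ C_n @ P.T)
P_dspm = np.diag(v ** -0.5) @ P

# sLORETA (eq. 24): normalize by the diagonal of the resolution matrix R = P L.
r = np.diag(P @ L)
P_sloreta = np.diag(r ** -0.5) @ P
```

The two methods share \(\mathbf{P}\) and differ only in the diagonal scaling, which is why both can be applied after the inverse operator has been computed.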
Sparse Bayesian learning (SBL) and automatic relevance determination (ARD)
Instead of finding MAP point estimates using fixed priors, one can use the evidence framework of sparse Bayesian learning (SBL) to learn adaptive parametrized priors from the data itself (Ma92; Ne96;
Ti01; Wi04; Sa04; Ra05; Ra06a; Ra07a; Nu07a; Ra08). This approach is an important alternative because the posterior mode may not be representative of the full posterior; a better point estimate, the posterior mean, may then be obtained by tracking the posterior probability mass. This is achieved by maximizing a tractable Gaussian approximation of the evidence, also known as the
type-II likelihood or marginal likelihood \[\tag{25} p({\mathbf{B}};\Sigma _{\mathbf{B}} ) = \int {p({\mathbf{B}}|{\mathbf{J}})p({\mathbf{J}};\Sigma _{\mathbf{J}} )} d{\mathbf{J}} = N(0,\Sigma _{\mathbf{B}} )\]
or equivalently by minimizing the negative log marginal likelihood \[\tag{26} L\left( \gamma \right) = - \log p({\mathbf{B}};\Sigma _{\mathbf{B}} ) = d_v \log \left| {\Sigma _{\mathbf{B}} } \right| +
trace\left( {{\mathbf{B} }^T \Sigma _{\mathbf{B} }^{ - 1} {\mathbf{B} } } \right)\]
where \(\Sigma _{\mathbf{B}} = {\mathbf{L}}\Sigma _{\mathbf{J}} {\mathbf{L}}^T + \Sigma _\varepsilon\ .\) The diagonal matrix \(\Sigma _{\mathbf{J}} = \Gamma = diag(\gamma _i )\) is the prior source
covariance matrix which contains the vector of hyperparameters on the diagonal (i.e., the variances). In the ARD framework the precisions (i.e., inverse variances) are Gamma distributed. The matrix \
(\Sigma _\varepsilon\) is the noise covariance matrix, which can be assumed to be a multiple of the identity matrix (e.g., \(\sigma_\varepsilon^2\mathbf{I}\ ,\) where \(\sigma_\varepsilon^2\) is the
noise variance, a hyperparameter that can also be learned from the data or empirically obtained from the measurements). The evidence maximization is achieved by using an Expectation-Maximization
update rule \[\tag{27} \gamma _i^{(k + 1)} = \frac{1}{{d_v r_i } }\left\| {\gamma _i^{(k)} {\mathbf{L} }_{(:,\;i)}^T \left( {\Sigma _{\mathbf{B} }^{(k)} } \right)^{ - 1} {\mathbf{B} } } \right\|_F^2 + \frac{1}{{r_i } }trace\left( {\gamma _i^{(k)} {\mathbf{I} } - \gamma _i^{(k)} {\mathbf{L} }_{(:,\;i)}^T \left( {\Sigma _{\mathbf{B} }^{(k)} } \right)^{ - 1} {\mathbf{L} }_{(:,\;i)} \gamma _i^{(k)} } \right)\] or alternatively using a fixed-point gradient update rule \[\tag{28} \gamma _i^{(k + 1)} = \frac{1}{{d_v r_i} }\left\| {\gamma _i^{(k)} {\mathbf{L} }_{(:,\;i)}^T \left( {\Sigma _{\mathbf{B} }^{(k)} } \right)^{ - 1} {\mathbf{B} } } \right\|_F^2 \left( {trace\left( {\gamma _i^{(k)} {\mathbf{I} } - \gamma _i^{(k)} {\mathbf{L} }_{(:,\;i)}^T \left( {\Sigma _{\mathbf{B} }^{(k)} } \right)^{ - 1} {\mathbf{L} }_{(:,\;i)} \gamma _i^{(k)} } \right)} \right)^{ - 1}\]
where \(r_i\) is the rank of \({\mathbf{L}}_{(:,\;i)}{\mathbf{L}}_{(:,\;i)}^T\ ,\) and \({\mathbf{L}}_{(:,\;i)}\) is a matrix with column vectors from \(\mathbf{L}\) that are controlled by the same hyperparameter (Ra06b; Wi07b). With fixed dipole orientations \({\mathbf{L}}_{(:,\;i)}\) is a vector, but with loose orientations \({\mathbf{L}}_{(:,\;i)}\) is a \(d_m \times 3\) matrix. For patch source models involving dipoles within a region, \({\mathbf{L}}_{(:,\;i)}\) is a matrix containing all gain vectors associated with the local patch of cortex. The gradient update rule is much faster than the EM rule, and is almost identical to the update rule used in the sLORETA/FOCUSS hybrid algorithm, which is not expressed in the ARD framework (Sc05; Ra05). Once the optimal hyperparameters have been learned, the posterior mean is given by \[\tag{29} {\mathbf{\hat J}}=E[{\mathbf{J}}|{\mathbf{B}};\Sigma _{\mathbf{J}} ] = \Sigma _{\mathbf{J}} {\mathbf{L}}^T \left( {\Sigma _{\mathbf{B}} } \right)^{ - 1} {\mathbf{B}}\ .\]
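A minimal numpy sketch of this evidence-maximization loop, using the fixed-point rule of eq. 28 for the simplest parametrization (fixed orientations, one hyperparameter per source-point, so \(r_i=1\)) and the posterior mean of eq. 29 (all dimensions and parameter values are illustrative):

```python
import numpy as np

def sbl(B, L, sigma2=0.01, n_iter=30):
    """Sparse Bayesian learning with one hyperparameter per source-point.

    Each L[:, i] is a single gain vector (fixed orientations, r_i = 1);
    hyperparameters follow the fixed-point rule of eq. 28, and the returned
    estimate is the posterior mean of eq. 29.
    """
    d_m, d_s = L.shape
    d_v = B.shape[1]
    gamma = np.ones(d_s)
    for _ in range(n_iter):
        # Sigma_B = L Gamma L^T + sigma^2 I, with Gamma = diag(gamma)
        Sigma_B = (L * gamma) @ L.T + sigma2 * np.eye(d_m)
        Sinv = np.linalg.inv(Sigma_B)
        SB = Sinv @ B
        for i in range(d_s):
            li = L[:, i]
            num = gamma[i] ** 2 * np.sum((li @ SB) ** 2) / d_v
            den = gamma[i] - gamma[i] ** 2 * (li @ Sinv @ li)
            gamma[i] = num / max(den, 1e-12)   # guard against division by ~0
    Sigma_B = (L * gamma) @ L.T + sigma2 * np.eye(d_m)
    # Eq. 29: posterior mean J = Gamma L^T Sigma_B^{-1} B
    J = gamma[:, None] * (L.T @ np.linalg.solve(Sigma_B, B))
    return J, gamma

rng = np.random.default_rng(4)
L = rng.standard_normal((10, 30))
J_true = np.zeros((30, 5))
J_true[[3, 20]] = rng.standard_normal((2, 5))   # two active sources
B = L @ J_true

J_hat, gamma = sbl(B, L)
```

Hyperparameters of irrelevant source-points are driven toward zero, pruning those sources from the posterior mean; this is the automatic relevance determination behavior described above.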
It is important to note that many SBL algorithms are distinguished by the parametrization of the source covariance matrix \(\Sigma _{\mathbf{J}} =\sum_{i = 1}^{d_s } {{\mathbf{C} }_i \gamma _i }\) (Wi07b). In fact, if only a few hyperparameters are used, each controlling many source-points, then the parametrization cannot support sparse estimates. For example, in the restricted maximum likelihood (ReML) algorithm one of the source covariance components is the identity matrix, which is controlled by a single hyperparameter (Fr02; Ph05; Ma06; Wi07b).
In standard SBL, \({\mathbf{C}}_i=e_i e_i^T\ ,\) where \(e_i\) is a vector with zeros everywhere except at the i-th element, where it is one. This delta-function parametrization can be extended to box-car functions in which \(e_i\) takes a value of 1 for all three dipole components or for a patch of cortex. More interestingly, the \(e_i\) can be substituted by geodesic basis functions \(\psi _i\) (e.g., a 2-D Gaussian current density function) centered at the i-th source-point and with some spatial standard deviation (Sa04; Ra06a; Ra06b; Ra07a; Ra08). More powerfully, the source covariance can be composed of components across many possible spatial scales, by using multiple \(\psi _i\) located at the i-th source-point but with different spatial standard deviations (Ra06a; Ra06b; Ra07a; Ra08). This approach can be used to estimate the spatial extent of distributed sources by using a mixture model of geodesic Gaussians at different spatial scales. Such a multiscale approach has now been extended to MAP estimation as well (Ra07b; Ra08).
Although we have assumed here a non-informative hyperprior on the precisions, \(p(\gamma _i^{-1})=\gamma _i\ ,\) since the degrees-of-freedom parameter of the Gamma distribution is set to 0, this does not need to be the case (Sa04; Wi07b). The problem of finding optimal hyperpriors to handle multimodal posteriors and to eliminate the use of improper priors has been dealt with by making this parameter non-zero (yet small) or by introducing MCMC strategies (Nu07a; Nu07b). In practice, the non-informative hyperprior works well, and helps avoid the problem of determining the optimal hyperprior. Finally, as explained for MAP estimation, to simultaneously localize a very long time-series very quickly, instead of localizing the time-series matrix \(\mathbf{B}\) one can localize the matrix \(\mathbf{U}\left(\mathbf{S}\right)^{-1/2}\ ,\) where \(\mathbf{U}\) and \(\mathbf{S}\) are the left singular vector and singular value matrices of \(\mathbf{B}\) (up to whatever rank desired).
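The SVD-based data reduction mentioned above can be sketched as follows (toy data; the reduced matrix follows the text's prescription \(\mathbf{U}\mathbf{S}^{-1/2}\), truncated to a chosen rank):

```python
import numpy as np

rng = np.random.default_rng(5)
d_m, d_v = 12, 5000                        # 12 sensors, 5000 time samples
B = rng.standard_normal((d_m, 20)) @ rng.standard_normal((20, d_v))

# Thin SVD of the data matrix, truncated to the desired rank.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
rank = 6
B_reduced = U[:, :rank] * s[:rank] ** -0.5   # U S^{-1/2}, per the text

# B_reduced has `rank` columns instead of 5000, so an inverse operator
# (e.g., eq. 19 or eq. 29) can be applied to it at a fraction of the cost.
```

The inverse computation then scales with the chosen rank rather than with the number of time samples, which is what makes very long recordings tractable.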
The relative strengths of different localization algorithms offer an opportunity to select the most appropriate algorithm, constraints, and priors for a given experiment. If one expects only a few
focal sources, then dipole fitting algorithms may be sufficient. If one expects distributed sources, then beamforming, spatial scans, or distributed MAP-based estimation algorithms are appropriate.
If one expects sparse sources, then MAP estimation with a Laplacian prior may better reflect the true sources. If one is more interested in finding representative sparse estimates of the whole
posterior, then SBL is the right choice. If one expects distributed sources with variable levels of spatial extent, then SBL or MAP (with Laplacian prior) estimation using a mixture model of
multiscale basis functions is recommended.
• Adrian E, Mathews B. (1934): The Berger Rhythm: Potential changes from the occipital lobes in man. Brain 57:355-385.
• Ahlfors SP, Ilmoniemi RJ, Hamalainen MS. (1992): Estimates of visually evoked cortical currents. Electroencephalogr Clin Neurophysiol 82(3):225-36.
• Aine C, Huang M, Stephen J, Christner R. (2000): Multistart algorithms for MEG empirical data analysis reliably characterize locations and time courses of multiple sources. Neuroimage 12
• Akalin-Acar Z, Gencer NG. (2004): An advanced boundary element method (BEM) implementation for the forward problem of electromagnetic source imaging. Physics in Medicine and Biology 49
• Anemuller J, Sejnowski TJ, Makeig S. (2003): Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks 16(9):1311-1323.
• Auranen T, Nummenmaa A, Hamalainen MS, Jaaskelainen IP, Lampinen J, Vehtari A, Sams M. (2005): Bayesian analysis of the neuromagnetic inverse problem with l(p)-norm priors. Neuroimage 26
• Baillet S, Mosher JC, Leahy RM. (2001): Electromagnetic Brain Mapping. IEEE Signal Processing Magazine 18(6):14-30.
• Bell AJ, Sejnowski TJ. (1995): An information-maximization approach to blind separation and blind deconvolution. Neural Comput 7(6):1129-59.
• Berger H. (1929): Über das Elektroenkephalogramm des Menschen. Archiv für Psychiatrie 87:527-570.
• Bertrand C, Ohmi M, Suzuki R, Kado H. (2001): A probabilistic solution to the MEG inverse problem via MCMC methods: the reversible jump and parallel tempering algorithms. IEEE Trans Biomed Eng 48
• Bolton JPR, Gross J, Liu AK, Ioannides AA. (1999): SOFIA: spatially optimal fast initial analysis of biomagnetic signals. Phys Med Biol 44:87-103.
• Canolty RT, Edwards E, Dalal SS, Soltani M, Nagarajan SS, Kirsch HE, Berger MS, Barbaro NM, Knight RT. (2006): High gamma power is phase-locked to theta oscillations in human neocortex. Science
• Cohen D. (1968): Magnetoencephalography: evidence of magnetic fields produced by alpha-rhythm currents. Science 161:784-6.
• Cohen D. (1972): Magnetoencephalography: Detection of the brain's electrical activity with a superconducting magnetometer. Science 175:664-6.
• Cotter SF, Rao BD, Engan K, Kreutz-Delgado K. (2005): Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans Signal Processing 53(7):2477-2488.
• Dale AM, Liu AK, Fischl BR, Buckner RL, Belliveau JW, Lewine JD, Halgren E. (2000): Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical
activity. Neuron 26(1):55-67.
• Darvas F, Ermer JJ, Mosher JC, Leahy RM. (2006): Generic head models for atlas-based EEG source analysis. Hum Brain Mapp 27(2):129-43.
• Delorme A, Makeig S. (2004): EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134(1):9-21.
• Dyrholm M, Makeig S, Hansen LK. (2007): Model Selection for Convolutive ICA with an Application to Spatiotemporal Analysis of EEG. Neural Computation 19(4):934-955.
• Figueiredo MAT. 2002. Adaptive sparseness using Jeffreys prior. Advances in Neural Information Processing Systems: MIT Press. p 697-704.
• Freeman WJ. 2007. Hilbert Transform for Brain Waves. Scholarpedia. p. 7514.
• Friston KJ, Penny W, Phillips C, Kiebel S, Hinton G, Ashburner J. (2002): Classical and Bayesian inference in neuroimaging: theory. Neuroimage 16(2):465-83.
• Fuchs M, Kastner J, Wagner M, Hawes S, Ebersole JS. (2002): A standardized boundary element method volume conductor model. Clin Neurophysiol 113(5):702-12.
• Fuchs M, Wagner M, Kohler T, Wischmann HA. (1999): Linear and nonlinear current density reconstructions. J Clin Neurophysiol 16(3):267-95.
• Gencer NG, Williamson SJ. (1998): Differential characterization of neural sources with the bimodal truncated SVD pseudo-inverse for EEG and MEG measurements. IEEE Trans Biomed Eng 45(7):827-38.
• George JS, Aine CJ, Mosher JC, Schmidt DM, Ranken DM, Schlitt HA, Wood CC, Lewine JD, Sanders JA, Belliveau JW. (1995): Mapping function in the human brain with magnetoencephalography, anatomical
magnetic resonance imaging, and functional magnetic resonance imaging. J Clin Neurophysiol 12(5):406-31.
• Gorodnitsky IF, George JS, Rao BD. (1995): Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroencephalogr Clin Neurophysiol 95(4):231-51.
• Gorodnitsky I, Rao BD. (1997): Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans Signal Processing 45(3):600–616.
• Grave de Peralta Menendez R, Hauk O, Gonzalez Andino S, Vogt H, Michel C. (1998): Linear inverse solutions with optimal resolution kernels applied to electromagnetic tomography. Human Brain
Mapping 5(6):454-467.
• Grave de Peralta Menendez R, Gonzalez Andino SL. (1999): Backus and Gilbert method for vector fields. Hum Brain Mapp 7(3):161-5.
• Grave de Peralta Menendez R, Gonzalez Andino SL, Morand S, Michel CM, Landis T. (2000): Imaging the electrical activity of the brain: ELECTRA. Hum Brain Mapp 9(1):1-12.
• Gross J, Ioannides AA. (1999): Linear transformations of data space in MEG. Phys Med Biol 44(8):2081-97.
• Gross J, Kujala J, Hamalainen M, Timmermann L, Schnitzler A, Salmelin R. (2001): Dynamic imaging of coherent sources: Studying neural interactions in the human brain. Proc Natl Acad Sci U S A 98
• Halchenko YO, Hanson SJ, Pearlmutter BA. 2005. Multimodal Integration: fMRI, MRI, EEG, MEG. In: Landini L, Positano V, Santarelli MF, editors. Advanced Image Processing in Magnetic Resonance Imaging: Dekker. pp. 223-265.
• Hamalainen M, Hari R, Ilmoniemi R, Knuutila J, Lounasmaa O. (1993): Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain.
Rev.Mod.Phys. 65(2):413-497.
• Hamalainen M, Sarvas J. (1987): Feasibility of the homogenous head model in the interpretation of the magnetic fields. Phys Med Biol 32:91-97.
• Helmholtz Hv. (1853): Ueber einige Gesetze der Vertheilung elektrischer Strome in korperlichen Leitern, mit Anwendung auf die thierisch-elektrischen Versuche. Ann Phys Chem 89:211-233,353-377.
• Hillebrand A, Barnes GR. (2003): The use of anatomical constraints with MEG beamformers. Neuroimage 20(4):2302-13.
• Huang M, Aine CJ, Supek S, Best E, Ranken D, Flynn ER. (1998): Multi-start downhill simplex method for spatio-temporal source localization in magnetoencephalography. Electroencephalogr Clin
Neurophysiol 108(1):32-44.
• Huang MX, Mosher JC, Leahy RM. (1999): A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG. Phys Med Biol 44(2):423-40.
• Huang MX, Dale AM, Song T, Halgren E, Harrington DL, Podgorny I, Canive JM, Lewis S, Lee RR. (2006): Vector-based spatial-temporal minimum L1-norm solution for MEG. Neuroimage 31(3):1025-37.
• Hubbard JI, Llinás RR, Quastel DMJ. 1969. Electrophysiological analysis of synaptic transmission. Baltimore: Williams & Wilkins Co. ix, 372 p.
• Ioannides AA, Bolton JP, Clarke CJS. (1990): Continuous probabilistic solutions to the biomagnetic inverse problem. Inverse Probl. 6:523-542.
• Lachaux JP, Rudrauf D, Kahane P. (2003): Intracranial EEG and human brain mapping. Journal of Physiology Paris 97:613-628.
• Leahy RM, Mosher JC, Spencer ME, Huang MX, Lewine JD. (1998): A study of dipole localization accuracy for MEG and EEG using a human skull phantom. Electroencephalogr Clin Neurophysiol 107
• Lee I, Kim T, Lee TW. (2007): Fast fixed-point independent vector analysis algorithms for convolutive blind source separation. Signal Process 87(8):1859-1871.
• Lee IK, Worrell G, Makeig S. 2005. Relationships between concurrently recorded scalp and intracranial electrical signals in humans. 11th Annual Meeting of the Organization for Human Brain
Mapping. Toronto, Ontario.
• Liu AK, Dale AM, Belliveau JW. (2002): Monte Carlo simulation studies of EEG and MEG localization accuracy. Hum Brain Mapp 16(1):47-62.
• Liu H, Gao X, Schimpf PH, Yang F, Gao S. (2004): A recursive algorithm for the three-dimensional imaging of brain electric activity: Shrinking LORETA-FOCUSS. IEEE Trans Biomed Eng 51
• Liu L, Ioannides AA, Streit M. (1999): Single trial analysis of neurophysiological correlates of the recognition of complex objects and facial expressions of emotion. Brain Topogr 11(4):291-303.
• Luck SJ. 2005. An Introduction to the Event-Related Potential Technique: The MIT Press.
• Mackay DJC. (1992): Bayesian Interpolation. Neural Computation 4(3):415-447.
• Makeig S. (1993): Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones. Electroencephalogr Clin Neurophysiol 86(4):283-93.
• Makeig S, Bell AJ, Jung TP, Sejnowski TJ. 1996. Independent Component Analysis of Electroencephalographic Data. Advances in Neural Information Processing Systems: NIPS, 8. Denver, CO: MIT Press.
p 145-151.
• Makeig S, Jung TP, Bell AJ, Ghahremani D, Sejnowski TJ. (1997): Blind separation of auditory event-related brain responses into independent components. Proceedings of the National Academy of
Sciences of the United States of America 94(20):10979-10984.
• Makeig S, Westerfield M, Jung TP, Enghoff S, Townsend J, Courchesne E, Sejnowski TJ. (2002): Dynamic brain sources of visual evoked responses. Science 295(5555):690-694.
• Matsuura K, Okabe Y. (1995): Selective minimum-norm solution of the biomagnetic inverse problem. IEEE Trans Biomed Eng 42(6):608-15.
• Mattout J, Phillips C, Penny WD, Rugg MD, Friston KJ. (2006b): MEG source localization under multiple constraints: an extended Bayesian framework. Neuroimage 30(3):753-67.
• Mitra PP, Maniar H. (2006): Concentration maximization and local basis expansions (LBEX) for linear inverse problems. IEEE Trans on Biomed Eng 53(9):1775-1782.
• Mosher JC, Lewis PS, Leahy RM. (1992): Multiple dipole modeling and localization from spatio-temporal MEG data. IEEE Trans Biomed Eng 39(6):541-57.
• Mosher JC, Leahy RM. (1998): Recursive MUSIC: a framework for EEG and MEG source localization. IEEE Trans Biomed Eng 45(11):1342-54.
• Neal R. 1996. Bayesian Learning in Neural Networks.: Springer.
• Nicholson C, Llinas R. (1971): Field potentials in the alligator cerebellum and theory of their relationship to Purkinje cell dendritic spikes. J Neurophysiol 34(4):509-31.
• Niedermeyer E, Lopes da Silva FH. 2005. Electroencephalography : basic principles, clinical applications, and related fields. Philadelphia: Lippincott Williams & Wilkins. xiii, 1309 p.
• Nummenmaa A, Auranen T, Hamalainen MS, Jaaskelainen IP, Lampinen J, Sams M, Vehtari A. (2007a): Hierarchical Bayesian estimates of distributed MEG sources: theoretical aspects and comparison of
variational and MCMC methods. Neuroimage 35(2):669-85.
• Nummenmaa A, Auranen T, Hamalainen MS, Jaaskelainen IP, Sams M, Vehtari A, Lampmen J. (2007b): Automatic relevance determination based hierarchical Bayesian MEG inversion in practice. Neuroimage
• Nunez PL. 2005. Electric fields of the brain, 2nd edition. Oxford University Press.
• Okada Y. (1993): Empirical bases for constraints in current-imaging algorithms. Brain Topogr 5(4):373-7.
• Okada YC, Wu J, Kyuhou S. (1997): Genesis of MEG signals in a mammalian CNS structure. Electroencephalogr Clin Neurophysiol 103(4):474-85.
• Oostendorp TF, Vanoosterom A. (1989): Source Parameter-Estimation in Inhomogeneous Volume Conductors of Arbitrary Shape. IEEE Trans on Biomed Eng 36(3):382-391.
• Parra LC, Spence CD, Gerson AD, Sajda P. (2005): Recipes for the Linear Analysis of EEG. Neuroimage 28(2):326-341.
• Pascual-Marqui RD, Lehmann D, Koenig T, Kochi K, Merlo MC, Hell D, Koukkou M. (1999): Low resolution brain electromagnetic tomography (LORETA) functional imaging in acute, neuroleptic-naive,
first-episode, productive schizophrenia. Psychiatry Res 90(3):169-79.
• Pascual-Marqui RD. (2002): Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find Exp Clin Pharmacol 24 Suppl D:5-12.
• Penfield W, Jasper HH. 1954. Epilepsy and the functional anatomy of the human brain. Boston: Little. 898 p.
• Phillips C, Mattout J, Rugg MD, Maquet P, Friston KJ. (2005): An empirical Bayesian solution to the source reconstruction problem in EEG. Neuroimage 24(4):997-1011.
• Ramirez RR, Kronberg E, Ribary U, Llinás R. 2003. Recursive Weighted Minimum-norm Algorithms for Neuromagnetic Source Imaging Using Diversity Measure Minimization: Analysis of Spatial Resolution.
Soc. Neurosci. Abstr., Vol. 29, Program No. 863.20.
• Ramirez RR. 2005. Neuromagnetic Source Imaging of Spontaneous and Evoked Human Brain Dynamics. New York: New York University School of Medicine. 452 p.
• Ramírez RR, Makeig S. 2006a. Neuroelectromagnetic source imaging using multiscale geodesic neural bases and sparse Bayesian learning. 12th Annual Meeting of the Organization for Human Brain
Mapping. Florence, Italy.
• Ramírez RR, Wipf D, Rao B, Makeig S. 2006b. Sparse Bayesian Learning for estimating the spatial orientations and extents of distributed sources. Biomag 2006 - 15th International Conference on
Biomagnetism. Vancouver, BC, Canada.
• Ramírez RR, Makeig S. 2007a. Neuroelectromagnetic source imaging of spatiotemporal brain dynamical patterns using frequency-domain independent vector analysis (IVA) and geodesic sparse Bayesian
learning (gSBL). 13th Annual Meeting of the Organization for Human Brain Mapping. Chicago, USA.
• Ramirez RR, Makeig S. 2007b. Neuroelectromagnetic source imaging (NSI) toolbox and EEGLAB module. 37th annual meeting of the Society for Neuroscience, November, San Diego, CA.
• Ramirez RR, Makeig S. (2008): Neuroelectromagnetic Source Imaging Using Multiscale Geodesic Basis Functions with Sparse Bayesian Learning or MAP estimation. Neural Computation. (in preparation).
• Rao B, Kreutz-Delgado K. (1999): An affine scaling methodology for best basis selection. IEEE Trans Signal Processing 1:187–202.
• Rao BD, Engan K, Cotter SF, Palmer J, Kreutz-Delgado K. (2002): Subset selection in noise based on diversity measure minimization. IEEE Trans Signal Processing 51(3):760-770.
• Ribary U, Ioannides AA, Singh KD, Hasson R, Bolton JP, Lado F, Mogilner A, Llinas R. (1991): Magnetic field tomography of coherent thalamocortical 40-Hz oscillations in humans. Proc Natl Acad Sci
U S A 88(24):11037-41.
Racine, WI Calculus Tutor
Find a Racine, WI Calculus Tutor
...Geosciences/Geology, University of Wisconsin-Milwaukee; M.S. Geosciences/Geology/Sedimentology, Colorado State University. I completed Algebra 1 Honors in my advanced eighth grade math class. I have, since then, consistently helped family, friends and other students with their math homework, assignments and test preparation.
10 Subjects: including calculus, algebra 1, ACT Math, ACT Science
...I was keeping up with the language to give myself the background and qualifications to teach it should I be given the chance, which actually happened back in 2002. I was given the chance to
teach German for the Spring semester in 2002. I have tried to keep up with the language by taking conversational German classes over the years, or by doing some reading.
12 Subjects: including calculus, statistics, geometry, ESL/ESOL
...As far as my tutoring philosophy, I believe that there's more than one way to explain something. If a student isn't comprehending something, I will try to explain it in a different manner.
After all, not everyone learns the same way.
32 Subjects: including calculus, Spanish, English, chemistry
...I’ve taught every math class at the high school, but I now teach primarily advanced precalculus and AP Statistics. I also have tutored all levels from elementary through advanced and do some part-time work at the college. In the summer, I am employed by ETS as a grader for the AP Statistics exams and also as a grader for the Praxis exam, the exam for math majors.
26 Subjects: including calculus, statistics, geometry, ACT Math
...I am qualified to teach several topics in discrete mathematics including set theory, graph theory, probability, number theory, algebra, discrete calculus, geometry, game theory and
discretization. I have taken several mathematics courses covering these topics during my PhD in Engineering from Mi...
22 Subjects: including calculus, physics, geometry, statistics
Normalising and Solving Type Equalities
The following is based on ideas for the new, post-ICFP'08 solving algorithm. Most of the code is in the module TcTyFuns.
Normal equalities
Central to the algorithm are normal equalities, which can be regarded as a set of rewrite rules. Normal equalities are carefully oriented and contain synonym families only as the head symbols of
left-hand sides. They assume one of the following three forms:
1. co :: F t1..tn ~ t,
2. co :: x ~ t, where x is a flexible type variable, or
3. co :: a ~ t, where a is a rigid type variable (skolem) and t is not a flexible type variable.
The types t, t1, ..., tn may not contain any occurrences of synonym families. Moreover, in Forms (2) & (3), the left- and right-hand side need to be different, and the left-hand side may not occur in
the right-hand side.
Coercions co are either wanteds (represented by a flexible type variable) or givens aka locals (represented by a type term of kind CO).
NB: We explicitly permit equalities of the form x ~ y and a ~ b, where both sides are either flexible or rigid type variables.
In GHC, TcTyFuns.RewriteInst represents normal equalities, emphasising their role as rewrite rules.
• Whenever an equality of Form (2) or (3) would be recursive, the program can be rejected on the basis of a failed occurs check. (Immediate rejection is always justified, as right-hand sides do not contain synonym families; hence, any recursive occurrences of a left-hand side imply that the equality is unsatisfiable.)
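The occurs check itself is a short recursion over the type term. A minimal sketch (in Python, with types modelled as nested tuples rather than GHC's actual Type representation — the names here are illustrative only):

```python
def occurs(var, ty):
    """True if type variable `var` appears anywhere in type term `ty`.

    A variable is a plain string; an application F t1 .. tn is a tuple
    ('F', t1, ..., tn). This is a toy model, not GHC's representation.
    """
    if isinstance(ty, str):
        return ty == var
    return any(occurs(var, arg) for arg in ty[1:])

def reject_recursive(lhs_var, rhs):
    # An equality of Form (2)/(3) whose left-hand variable occurs in the
    # right-hand side is unsatisfiable: right-hand sides contain no
    # synonym families, so nothing can rewrite the occurrence away.
    return occurs(lhs_var, rhs)
```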
The Note [skolemOccurs loop] in the old code explains that equalities of the form x ~ t (where x is a flexible type variable) may not be used as rewrite rules, but only be solved by applying Rule
Unify. As Unify carefully avoids cycles, this prevents the use of equalities introduced by the Rule SkolemOccurs as rewrite rules. For this to work, SkolemOccurs also had to apply to equalities of
the form a ~ t[[a]]. This was a somewhat intricate set up that's being simplified in the new algorithm. Whether equalities of the form x ~ t are used as rewrite rules or solved by Unify doesn't
matter anymore. Instead, we disallow recursive equalities completely. This is possible as right-hand sides are free of synonym families.
Typesetting Math in LaTeX
LaTeX Lesson 5
Typesetting Math in LaTeX
Typing Mathematics
Let's take a look at how to handle some typical problems in using LaTeX to typeset mathematics. We have already looked at various math environments.
In order to get fractions in-line, you have a choice of using the solidus to get a fraction like 1/2, or of using the math mode command \frac{a}{b}, which will give a small fraction with numerator a
and denominator b for in-line formulas. Using the "displayed fraction" command \dfrac{a}{b} gives an upright fraction, and makes the line a little larger vertically to provide sufficient vertical
space for the displayed fraction. In a displaymath environment or equation environment it is not necessary to use the \dfrac command; the command \frac{a}{b} will give an upright fraction, and the
resulting line will also take extra vertical space.
Warning: note that the fraction command has two arguments, each enclosed in curly braces. Avoid non-LaTeX-style argument lists, like \frac{a,b}, an error that many beginners make when they first
start typing LaTeX documents.
The integral sign is produced by the command \int, and one can attach the limits of integration by giving them as subscript and superscript to the command \int.
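For example, combining the two, a definite integral with its limits attached can be typeset in a displayed equation like this (the thin space \, before the dx is explained below):

```latex
\begin{displaymath}
  \int_{0}^{1} x^{2} \, dx = \frac{1}{3}
\end{displaymath}
```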
Expanding Delimiters
LaTeX provides parenthesis-like symbols that will expand vertically so that they are tall enough to look well-matched to the height of the formula that they enclose. As an example, take the source
\left| \frac{A+B}{3} \right|
It produces a displayed version of the formula |(A+B)/3| in which the absolute value delimiters are expanded somewhat in length.
NOTE: The symbol | is located above the backslash on most keyboards. It works properly only in math environments. You can also use \vert to get the same result.
Now please look at the page of integral calculus provided by one of the two links below.
As an exercise, you are going to use LaTeX to typeset your own version of this page. [WORK IN PROGRESS]
In it we encounter the problem of using a single right delimiter, ], (with subscript and superscript attached) to indicate the difference of values of a function. LaTeX produces typeset copy in which
the symbol ] is too small.
It is natural to think of using the idea introduced above for matching delimiters (such as []) to the text inside, and to try the code
\int_0^1 \cdots dx = x \tan^{-1} x \right]
but LaTeX complains if you try to use \right] without a corresponding \left] command earlier in the file. To get only a single right delimiter, say ], sized correctly you can use \big] or, if that is
still too small, \bigg]; but you can also let LaTeX adjust the size of the delimiter by putting in a blank delimiter with the command \left., that is \left followed immediately by a period. For
\left. F(x) \right]_{0}^{1} = F(1) - F(0)
In math mode, the spaces in your source file are completely ignored by LaTeX with just two exceptions.
1. a space is one of the characters that indicates the end of a command.
2. you can insert a text box (see below) in which everything is in text mode.
For example,
$a b$
is typeset as ab.
Therefore, to achieve the appearance that you want, you may have to add or remove space. For example, in an integral, between the integrand and the differential it is usually better to add a
"thinspace" \,.
Perhaps you will find this feature of LaTeX to be disagreeable, but eventually you should be able to appreciate the balance struck in LaTeX between convenience and the power to control the whitespace
completely whenever it seems desirable. Here is a list of the most widely used commands for horizontal spacing.
These commands work both in math environments and in text environments.
There are other ways of getting space. If you want 12 points of horizontal whitespace between two characters, you can get it by using the command \hspace{12pt} between them. The argument to the \
hspace command can be specified in inches, for example, .5in if you want a half inch of horizontal whitespace. This command also works in both math and text environments. You may want to keep in mind
that there are 72 points to the inch.
If you want whitespace at the beginning of a line, you need the variant form of the \hspace command, namely \hspace* because LaTeX generally swallows all space at the beginning of a line.
While we are on the subject of creating space, we should mention that there is a command \vspace which works very much like \hspace but creates vertical space in a document. This is needed, for
instance, if you want to create vertical space for including a picture.
For simple vertical spacing one can use the predefined vertical spaces provided by the commands
• \smallskip
• \medskip
• \bigskip
It is sometimes convenient to use relative size units rather than absolute size units. You can refer to the relative size units in the current font by using the units em and ex. Traditionally, em was the width of the letter M and ex was the height of the lowercase letter x in whatever font was being used at that point. In LaTeX that is no longer exactly true, but it is true that the size of these units will scale with the font, so that a change to a different font size will not disturb the proportions of the typeset material.
Here is some LaTeX code that will create a small table showing the effects of the various horizontal spacing commands.
\begin{center}
\begin{tabular}{ l l }
\verb=||= gives & $||$ \\
\verb=|\,|= gives & $|\,|$ \\
\verb=|\;|= gives & $|\;|$ \\
\verb=|\quad|= gives & $|\quad|$ \\
\verb=|\qquad|= gives & $|\qquad|$ \\
\verb=|\hspace{.5in}|= gives & $|\hspace{.5in}|$ \\
\verb=|\hspace{6em}|= gives & $|\hspace{6em}|$ \\
\end{tabular}
\caption{Horizontal Spacing} %NOTE: this is within the center environment
\end{center}
It gives this table:
You can also get horizontal space in a more flexible way by using LaTeX's text boxes.
You may already have wanted to write some text when you were in a math environment. There are several box constructions which do the job. Without using the amsmath package, one could use the command
\makebox, which has a short form \mbox. These take
• an optional length argument, such as 3.5in (or cm or pt),
• an optional alignment argument
□ c for centered (this is the default)
□ l for flushed left,
□ r for flushed right,
□ s for stretched,
• a text argument.
Thus, you could have a command
\makebox[3in][r]{This is in a line box. }
This gives a 3 inch horizontal space in which the text "This is in a line box." is right justified. If you want to put a frame around your box, that is easy. Use
\framebox[3in][r]{This is in a line box. }
Your instructor has a handout with some material marked for you to typeset in LaTeX as an exercise, which you should do now. If your browser and platform are set up to use a PostScript viewer, like
ghostview or ghostscript, then you can see the material by following the link: Material for Typesetting: PostScript version, or the link Material for Typesetting: Acrobat (PDF) version,.
Tutorial at Cornell.
There is a nicely done tutorial at Cornell University. LaTeX tutorial at Cornell University
The part of the tutorial on errors in LaTeX is appropriate reading now since the most common error messages in LaTeX, besides misspelled commands, involve "overfull hbox" errors.
Go to their tutorial now.
Here is an example of some LaTeX code that will typeset a matrix.
\left[
\begin {array}{ccc}
a & b & c \\ \noalign{\medskip}
d & e & f \\ \noalign{\medskip}
g & h & i
\end {array}
\right]
If you put this code inside a LaTeX displaymath environment, you will get the matrix typeset. (Since matrices are large, they are almost always set as displays.) Here are some points to observe about
this code.
• The \left[ and \right] are delimiters of adjustable size that make the brackets around the matrix. If you want parentheses instead of square brackets, use \left( and \right). To make vertical
bars for determinants, use \left\vert and \right\vert. You can also make curly braces via \left\{ and \right\}. (For curly braces, you need to put a backslash in front of the braces so that LaTeX
realizes they are not LaTeX grouping symbols.)
• The LaTeX array environment has an argument, in this case ccc, that determines how the entries in each column are aligned. In this example, all entries are centered in their columns. If you
change ccc to lrc, for example, then the entries in the first column will be left-aligned, the entries in the second column will be right-aligned, and the entries in the third column will be centered.
• The ampersand & character is used to separate entries in different columns.
• A double backslash \\ is used to terminate each row of the matrix except the last one.
• This example was produced by Maple. The source code \noalign{\medskip} in this example gives an extra space between the rows of the matrix. This extra space is not necessary and could be deleted.
(If you \usepackage{amsmath}, and you are willing to settle for centered entries in matrices, then you can replace the array environment with the matrix environment, which does better spacing. The
amsmath package also has a pmatrix environment that has enclosing parentheses built in.)
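For instance, with the amsmath package loaded, the bracketing and alignment are done for you (a small illustrative example):

```latex
% Requires \usepackage{amsmath} in the preamble
\begin{displaymath}
  \begin{pmatrix}
    1 & 2 & 3 \\
    4 & 5 & 6
  \end{pmatrix}
\end{displaymath}
```

The pmatrix environment supplies the enclosing parentheses itself, so no \left( and \right) are needed.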
Exercise: Typeset the following matrix.
[ 47 41 31 33 ]
[ ]
[ 1 12 27 12 ]
[ ]
[ 41 1 28 58 ]
[ ]
[ 35 24 23 34 ]
email: Martin Karel
Physics Department, Princeton University
Informal HET Seminar - IAS - Boaz Keren-Zur, Ecole Polytechnique Federale de Lausanne, CH - The Local Callan-Symanzik Equation: Structure & Application
The local Callan-Symanzik equation describes the response of a quantum field theory to local scale transformations in the presence of background sources. The consistency conditions associated with
this anomalous equation can be used to derive powerful constraints on RG flows. We will discuss various aspects of the equation and present new results regarding the structure of the anomaly. We then
use the equation to write correlation functions of the trace of the energy-momentum tensor off-criticality.
Location: Bloomberg Lecture Hall - Institute for Advanced Study
Date/Time: 09/11/13 at 1:30 pm - 09/11/13 at 2:30 pm
Category: High Energy Theory Seminar
Department: Physics
Star Clusters
Student Activity:
This is a level 4 number activity from the Figure It Out series. It relates to Stage 7 of the Number Framework.
Specific Learning Outcomes:
find fractions of whole numbers
Description of mathematics:
Number Framework Links
Use this activity to help students consolidate and apply their knowledge of fractions (stages 6 and 7).
Required Resource Materials:
FIO, Level 3+, Proportional Reasoning, Star Clusters, page 5
Question 1 has 12 parts to it, so your students may find it helpful if they create a suitable table and then complete it as they work out each fraction.
same time as they create meaning for algebraic representation and substitution.
Suggest using this shorthand: A (for one 6-star packet), B (8 stars), C (12 stars), and D (20 stars).
We can see that 144 is divisible by 6 (24 A-packets make 144), by 8 (18 B-packets make 144), and by 12 (12 C-packets make 144). Also, B + C = D and 2A = C, so by substitution, we can get many more combinations:
6D + 2C = 144
5D + 3C + B = 144 (replacing 1D with one B + C)
4D + 4C + 2B = 144 (replacing 1D with one B + C)
3D + 5C + 3B = 144 (replacing 1D with one B + C)
and so on until we end up with 8C + 6B = 144.
We know that 12C = 144 and 2A = C, so we can keep replacing 1C with 2A to get another group of solutions:
2A + 11C = 144 (replacing 1C with 2A)
4A + 10C = 144 (replacing 1C with 2A)
6A + 9C = 144 (replacing 1C with 2A)
and so on until we end up with 24A = 144.
Other combinations such as 6A + 6B + 3D can provide the starting point for further lists of possibilities.
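If you want to be sure the class has found (or could find) every combination, a short brute-force search confirms them. This Python sketch uses the same A/B/C/D shorthand (6, 8, 12 and 20 stars per packet):

```python
def pack_combinations(total=144):
    """All (A, B, C, D) pack counts with 6A + 8B + 12C + 20D = total."""
    solutions = []
    for a in range(total // 6 + 1):
        for b in range((total - 6 * a) // 8 + 1):
            for c in range((total - 6 * a - 8 * b) // 12 + 1):
                rest = total - 6 * a - 8 * b - 12 * c
                if rest % 20 == 0:
                    solutions.append((a, b, c, rest // 20))
    return solutions
```

Each tuple checks out against the solutions above, e.g. (24, 0, 0, 0) is twenty-four 6-packs and (6, 6, 0, 3) is six 6-packs, six 8-packs and three 20-packs.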
Question 3 asks students to calculate 1/3 of 144 and 1/4 of 144. They should share their strategies for these calculations. Those who know their basic facts are at stage 7 and more likely to use the
short form of division.
Question 4 requires students to express 108/144 in its simplest form. Although these numbers lie outside the range of known facts for many students, they have been working with 144 in questions 2 and 3 and should have little trouble finding a strategy that they can use to simplify the fraction. Those who remember that 144 = 12 x 12 have access to the most efficient strategy of all, the one that avoids finding the product 12 x 9 and sees that Anita is in effect using just 9 stars out of every 12, so she uses 9/12 = 3/4 of the total.
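Students (or teachers preparing answers) can check any of these simplifications mechanically; for example, in Python:

```python
from math import gcd

def simplify(numerator, denominator):
    """Reduce a fraction to lowest terms by dividing out the gcd."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g
```

Here simplify(108, 144) returns (3, 4), agreeing with the 9-out-of-every-12 reasoning above.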
Answers to Activity
1. 3-star clusters:
1/2 of a 6-pack; 3/8 of an 8-pack; 1/4 of a 12-pack; 3/20 of a 20-pack.
4-star clusters:
4/6 or 2/3 of a 6-pack; 1/2 of an 8-pack; 1/3 of a 12-pack; 1/5 of a 20-pack.
5-star clusters:
5/6 of a 6-pack; 5/8 of an 8-pack; 5/12 of a 12-pack; 1/4 of a 20-pack.
2. Many answers are possible. They include: twenty four 6-packs; twelve 12-packs; six 6-packs + six 8-packs + three 20-packs.
3. 16 x 3-star clusters. (1/3 of 144 = 48)
9 x 4-star clusters. (1/4 of 144 = 36)
12 x 5-star clusters. (144 – 48 – 36 = 60)
4. 3/4. (12 x 9 = 108, and 108 is 3/4 of 144.)
How can I Calculate days between 2 dates using a 360 day year.
• How can I Calculate days between 2 dates using a 360 day year.
Jun 15, 2005 03:48 PM|rayfusion|LINK
I can do this very easily with the datediff("d",date1,date2) function, which uses a 365-day year. But I want the same behaviour as the Excel function that uses a 360-day year. Thanks in advance for your help.
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 02:21 AM|crpietschmann|LINK
When you compare two dates and find the difference in the number of days, it doesn't matter how many days are in a year. All that matters is the number of day difference between the dates.
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 12:43 PM|rayfusion|LINK
That is true. But for purpose of this application, accountants use a 360 day year which is 12 months of 30 days each. Excel has a function for this called Days360. I need to be able to perform
the same function in asp.net.
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 03:28 PM|PDraigh|LINK
Roll your own, I'm afraid. I think the logic is pretty easy. Probably a combination of the datediff in months and the day of the month. Something like 1/1/05 to 3/7/05 is 2 months difference (2x30) plus 7-1 = 66 days. Not sure what to do when you have 1/31/05. Put all the possible types of cases in Excel and reverse-engineer its formula.
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 05:58 PM|rayfusion|LINK
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 06:06 PM|shahramk|LINK
The number of days inbetween two dates will still remain the same .. whether there are 365 or 360 days in a year.
For example, There will always be two days between the 14th and the 16th of any month.
Now, if you need to display that information in Month/Day format , yeah, you would need to create your own formula.
• Re: How can I Calculate days between 2 dates using a 360 day year.
Jun 16, 2005 07:26 PM|rayfusion|LINK
You are absolutely correct. But when calculating the days between 11/01/2003 and 11/01/2004, the number of days will be 360 with the below code. Therefore, every month will have 30 days. This is what a lot of accounting programs use instead of your typical 365 days per year.
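For reference, the core of Excel's DAYS360 (US/NASD 30/360 convention) is only a few lines. Sketch in Python for brevity — the same arithmetic ports directly to VB.NET. Note the end-of-February special cases of the full NASD rule are left out here:

```python
from datetime import date

def days360(start, end):
    """Days between two dates on a 30/360 (US) basis, like Excel's DAYS360.

    Simplified sketch: day-31 adjustments only; the end-of-February
    special cases of the full NASD convention are omitted.
    """
    d1, d2 = start.day, end.day
    if d1 == 31:
        d1 = 30
    if d2 == 31 and d1 == 30:
        d2 = 30
    return ((end.year - start.year) * 360
            + (end.month - start.month) * 30
            + (d2 - d1))
```

With this, days360(date(2003, 11, 1), date(2004, 11, 1)) gives 360, matching the example above.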
Density Altitude Calculator
Density Altitude Calculator
Additional Information:
Example 1: at 5050 feet elevation, 95 deg F air temp, 29.45 inches-Hg barometric pressure and a dew point of 67 deg F, the Density Altitude is calculated as 9252 feet.
Example 2: at 1540 meters elevation, 35 deg C air temp, 997 hPa barometric pressure and a dew point of 19 deg C, the Density Altitude is calculated as 2821 meters.
Air density is affected by the air pressure, temperature and humidity. The density of the air is reduced by decreased air pressure, increased temperatures and increased moisture. A reduction in air
density reduces the engine horsepower, reduces aerodynamic lift and reduces drag.
Input Values:
The elevation (or altitude) is the geometric elevation above mean sea level, and is the elevation at which the altimeter setting, temperature and dew point have been measured.
The altimeter setting is the value in the altimeter's Kollsman window when the altimeter is set to correctly read a known elevation. The altimeter setting is generally included in NWS reports. The
altimeter setting is not the same as the sea level corrected barometric pressure.
This calculator uses dew-point rather than relative humidity because the dew point is fairly constant for a given air mass, while the relative humidity varies greatly as the temperature changes.
Output Values:
The density altitude is the altitude in the International Standard Atmosphere that has the same density as the air being evaluated.
The absolute air pressure is the actual air pressure, not corrected for altitude, and is also called the station pressure.
Relative density is the ratio of the actual air density to the standard sea level density, expressed as a percentage.
The ICAO International Standard Atmosphere standard conditions for zero density altitude are 0 meters (0 feet) altitude, 15 deg C (59 deg F) air temp, 1013.25 mb (29.921 in Hg) pressure and 0 %
relative humidity (absolute zero dew point).
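A rough version of the calculation above can be done by hand with the common dry-air rule of thumb: pressure altitude from the altimeter setting, then about 118.8 ft per degree C of deviation from the ISA temperature. The sketch below ignores the dew point, so it comes out a little lower than the full humidity-aware calculation — roughly 9195 ft rather than 9252 ft for Example 1:

```python
def density_altitude_dry(elevation_ft, temp_f, altimeter_inhg):
    """Approximate density altitude in feet, ignoring humidity.

    Rule-of-thumb method: pressure altitude from the altimeter setting
    (about 1000 ft per inch of mercury below 29.92), then 118.8 ft per
    degree C that the air temperature exceeds the ISA temperature.
    """
    pressure_alt = elevation_ft + 1000.0 * (29.92 - altimeter_inhg)
    temp_c = (temp_f - 32.0) * 5.0 / 9.0
    isa_temp_c = 15.0 - 0.0019812 * pressure_alt  # ISA lapse: ~1.98 C / 1000 ft
    return pressure_alt + 118.8 * (temp_c - isa_temp_c)
```

On a standard day (sea level, 59 deg F, 29.92 in Hg) it returns essentially 0, as it should.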
Conceptual Calculus
From Wikibooks, open books for an open world
Welcome to calculus, a branch of mathematics rooted in the study of "infinity". Many books about calculus have already been written, with some more approachable than others. The purpose of this text
is to provide an accessible and well-motivated introduction to the subject, without sweeping the formalities under the rug.
1. Core concepts
• Everything you'll ever need to know about calculus (the rest is details)
2. Infinite Questions with Finite Answers
• Infinite sequences of numbers
• The real numbers
• Functions of real numbers
• The limit of a function
3. Geometry of Functions
• Familiar Functions, Familiar shapes
• Slope
• Area
• Length
• Volume
4. The derivative
• Secant lines and rates of change
• Defining the derivative
• Computing derivatives
• Linear approximation
5. How to use the derivative
• Approximating a function by a polynomial
6. Infinite Series
7. Integration
8. How to use the integral
Rockdale, IL Algebra 2 Tutor
Find a Rockdale, IL Algebra 2 Tutor
...I have had numerous students tell me that I should teach math, as they really enjoy my step-by-step, simple breakdown method. I have helped a lot of people conquer their fear of math. Before I
knew I was going to teach French, I was originally going to become a math teacher.
16 Subjects: including algebra 2, English, chemistry, French
...I have a Bachelor's of Science from California Institute of Technology (CIT), an incredibly challenging university. My teaching style is one of: * LISTENING to see what your student is doing;
to learn how he or she thinks. * ENCOURAGING students to push a little farther; to show them what they...
21 Subjects: including algebra 2, chemistry, calculus, statistics
I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural
Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece.
41 Subjects: including algebra 2, chemistry, English, writing
...I used to solve the "question of the day" on the SAT web site just for fun. I am sure that I will be helpful to my students in how to "think outside the box" to succeed in SAT. I practiced ACT
Math with my three sons and with some of their friends.
8 Subjects: including algebra 2, physics, algebra 1, SAT math
...Also, I understand how difficult algebra can be, but I have the skills and patience to help any student reach the goals they need to reach. I have a masters in Advertising and bachelors in
Advertising/German. In undergrad, I tutored non –native English speakers as part of community service.
4 Subjects: including algebra 2, English, German, prealgebra
Don't understand
Write a program that prompts the user to enter an integer and then displays that integer as a product of its primes and if it is a prime then it should say so??
Above is the question I have been given
#include <iostream>
#include <vector>
#include <cstdlib>  // for system()
using namespace std;

int main ()
{
    int num, result;
    cout << "Enter A Number " << endl;
    system ("pause");
    return 0;
}
This is what I have so far. Do I have to use a for loop, a while loop or a do loop?
And if it is possible, can someone give me an example of how to do it?
Thanks, very much appreciated
Need to use vectors in the code
I have changed the code around to this,
#include <iostream>
using namespace std;

int main() {
    // Declaring Variables
    int perfect;
    int counter;
    int num1;
    //initialize variables
    cout<<"This program will tell you the prime factorization of a number"<<endl;
    cout<<"Please input a number and press [ENTER]."<<endl;
    cin>>num1;
    for (counter=1;counter<=num1;counter++)
        if (num1%counter==0)
            cout<<"The possible factors of that number are "<< counter <<endl;
    return 0;
}
and it is showing me all the possible factors of the number that I enter, I only want it to show the Prime numbers
I have now changed my code to this,
#include <iostream>
#include <cstdio>   // printf, scanf
#include <cstdlib>  // system
using namespace std;

int main()
{
    int n, c = 2;
    printf("Enter a number to check if it is prime\n");
    scanf("%d", &n);
    for ( c = 2 ; c <= n - 1 ; c++ )
    {
        if ( n%c == 0 )
        {
            printf("%d is not prime.\n", n);
            break;
        }
    }
    if ( c == n )
        printf("%d is prime.\n", n);
    system ("pause");
    return 0;
}
Can anyone explain to me how I can get just the prime factors shown for the number that is entered?
Last edited on
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/116115/","timestamp":"2014-04-16T10:17:02Z","content_type":null,"content_length":"13380","record_id":"<urn:uuid:0e901e89-8368-46dc-bbcb-34937f3e7001>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Research Group on Orthogonal Polynomials and Approximation Theory
Main Research Topics
♦ Constructive Approximation Theory. Study of the Sobolev orthogonality as well as the general orthogonal polynomials with respect to a wide variety of measures. Matrix moment problems, Matrix
orthogonal polynomials, q-polynomials.
♦ Integrable functions, Interpolation of the space of Integrable functions, vectorial measures, ...
♦ Applications in mathematical physics, non-linear mathematical physics and ratchet effect (non-linear direct transport).
Key words: Approximation theory; Special functions; Orthogonal polynomials; Sobolev orthogonality; hypergeometric functions; Padé and rational approximations; asymptotics; vectorial measures;
Solitons; Ratchet effect.
AMS MSC 2000:28AXX, 33C45, 33C90, 33D15, 33D45, 33D80, 33D90, 33F10, 35Q51, 39A13, 42C05, 70KXX
The complete version of the MSC can be found at the address http://www.ams.org/msc/
Preprints are available from the personal web pages of the members of the group.
The research of the group is partially supported by Ministerio de Economía y Competitividad, Fondos FEDER and Junta de Andalucía. | {"url":"http://euler.us.es/~opap/","timestamp":"2014-04-20T05:47:45Z","content_type":null,"content_length":"6489","record_id":"<urn:uuid:64f4151a-6ca8-4982-aa70-add36739b5e3>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
nag_zero_cont_func_cntin_rcomm (c05axc)
NAG Library Function Document
nag_zero_cont_func_cntin_rcomm (c05axc)
1 Purpose
nag_zero_cont_func_cntin_rcomm (c05axc) attempts to locate a zero of a continuous function using a continuation method based on a secant iteration. It uses reverse communication for evaluating the function.
2 Specification
#include <nag.h>
#include <nagc05.h>
void nag_zero_cont_func_cntin_rcomm (double *x, double fx, double tol, Nag_ErrorControl ir, double scal, double c[], Integer *ind, NagError *fail)
3 Description
nag_zero_cont_func_cntin_rcomm (c05axc) uses a modified version of an algorithm given in Swift and Lindfield (1978) to compute a zero $\alpha$ of a continuous function $f\left(x\right)$. The algorithm used is based on a continuation method in which a sequence of problems
$f\left(x\right)-{\theta }_{r}f\left({x}_{0}\right)=0\text{, }r=0,1,\dots ,m$
are solved, where $1={\theta }_{0}>{\theta }_{1}>\cdots >{\theta }_{m}=0$ (the value of $m$ is determined as the algorithm proceeds) and where ${x}_{0}$ is your initial estimate for the zero of $f\left(x\right)$. For each ${\theta }_{r}$ the current problem is solved by a robust secant iteration using the solution from earlier problems to compute an initial estimate.
You must supply an error tolerance tol. tol is used directly to control the accuracy of solution of the final problem (${\theta }_{m}=0$) in the continuation method, and $\sqrt{{\mathbf{tol}}}$ is used to control the accuracy in the intermediate problems (${\theta }_{1},{\theta }_{2},\dots ,{\theta }_{m-1}$).
4 References
Swift A and Lindfield G R (1978) Comparison of a continuation method for the numerical solution of a single nonlinear equation Comput. J. 21 359–362
5 Arguments
Note: this function uses reverse communication. Its use involves an initial entry, intermediate exits and re-entries, and a final exit, as indicated by the argument ind. Between intermediate exits and re-entries, all arguments other than fx must remain unchanged.
1: x – double * Input/Output
On initial entry: an initial approximation to the zero.
On intermediate exit: the point at which $f$ must be evaluated before re-entry to the function.
On final exit: the final approximation to the zero.
2: fx – double Input
On initial entry: if ${\mathbf{ind}}=1$, fx need not be set. If ${\mathbf{ind}}=-1$, fx must contain $f\left({\mathbf{x}}\right)$ for the initial value of x.
On intermediate re-entry: must contain $f\left({\mathbf{x}}\right)$ for the current value of x.
3: tol – double Input
On initial entry: a value that controls the accuracy to which the zero is determined. tol is used in determining the convergence of the secant iteration used at each stage of the continuation process. It is used directly when solving the last problem (${\theta }_{m}=0$ in Section 3), and $\sqrt{{\mathbf{tol}}}$ is used for the problem defined by ${\theta }_{r}$, $r<m$. Convergence to the accuracy specified by tol is not guaranteed, and so you are recommended to find the zero using at least two values for tol to check the accuracy obtained.
Constraint: ${\mathbf{tol}}>0.0$.
4: ir – Nag_ErrorControl Input
On initial entry: indicates the type of error test required, as follows. Solving the problem defined by ${\theta }_{r}$, $1\le r\le m$, involves computing a sequence of secant iterates ${x}_{r}^{0},{x}_{r}^{1},\dots \text{}$. This sequence will be considered to have converged only if:
$\left|{x}_{r}^{i+1}-{x}_{r}^{i}\right|\le \mathit{eps}\times \max \left(1.0,\left|{x}_{r}^{i}\right|\right),$
for some $i$; here $\mathit{eps}$ is either tol or $\sqrt{{\mathbf{tol}}}$ as discussed above. Note that there are other subsidiary conditions (not given here) which must also be satisfied before the secant iteration is considered to have converged.
Constraint: ${\mathbf{ir}}=\mathrm{Nag_Mixed}$, $\mathrm{Nag_Absolute}$ or $\mathrm{Nag_Relative}$.
5: scal – double Input
On initial entry: a factor for use in determining a significant approximation to the derivative of $f\left(x\right)$ at $x={x}_{0}$, the initial value. A number of difference approximations to ${f}^{\prime }\left({x}_{0}\right)$ are calculated using ${f}^{\prime }\left({x}_{0}\right)\sim \left(f\left({x}_{0}+h\right)-f\left({x}_{0}\right)\right)/h$, where $\left|h\right|<\left|{\mathbf{scal}}\right|$ and $h$ has the same sign as scal. A significance (cancellation) check is made on each difference approximation and the approximation is rejected if insignificant.
Suggested value: $\sqrt{\epsilon }$, where $\epsilon $ is the machine precision returned by nag_machine_precision (X02AJC).
Constraint: ${\mathbf{scal}}$ must be sufficiently large that ${\mathbf{x}}+{\mathbf{scal}}\ne {\mathbf{x}}$ on the computer.
6: c[$26$] – double Communication Array
(${\mathbf{c}}\left[4\right]$ contains the current ${\theta }_{r}$, this value may be useful in the event of an error exit.)
7: ind – Integer * Input/Output
On initial entry: must be set to $1$ or $-1$. If ${\mathbf{ind}}=1$, fx need not be set. If ${\mathbf{ind}}=-1$, fx must contain $f\left({\mathbf{x}}\right)$.
On intermediate exit: contains $2$, $3$ or $4$. The calling program must evaluate $f$ at x, storing the result in fx, and re-enter nag_zero_cont_func_cntin_rcomm (c05axc) with all other arguments unchanged.
On final exit: contains $0$.
Constraint: on entry ${\mathbf{ind}}=-1$, $1$, $2$, $3$ or $4$.
8: fail – NagError * Input/Output
The NAG error argument (see
Section 3.6
in the Essential Introduction).
6 Error Indicators and Warnings
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
Continuation away from the initial point is not possible. This error exit will usually occur if the problem has not been properly posed or the error requirement is extremely stringent.
Current problem in the continuation sequence cannot be solved. Perhaps the original problem had no solution or the continuation path passes through a set of insoluble problems: consider refining the initial approximation to the zero. Alternatively, tol is too small, and the accuracy requirement is too stringent, or too large and the initial approximation too poor.
Final problem (with ${\theta }_{m}=0$) cannot be solved. It is likely that too much accuracy has been requested, or that the zero is at $\alpha =0$ and ${\mathbf{ir}}=\mathrm{Nag_Relative}$.
On initial entry, ${\mathbf{ind}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ind}}=-1$ or $1$.
On intermediate entry, ${\mathbf{ind}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ind}}=2$, $3$ or $4$.
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
On entry, ${\mathbf{scal}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{x}}+{\mathbf{scal}}\ne {\mathbf{x}}$ (to machine accuracy).
On entry, ${\mathbf{tol}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{tol}}>0.0$.
Significant derivatives of $f$ cannot be computed. This can happen when $f$ is almost constant and nonzero, for any value of $x$.
7 Accuracy
The accuracy of the approximation to the zero depends on tol and ir. In general decreasing tol will give more accurate results. Care must be exercised when using the relative error criterion (${\mathbf{ir}}=\mathrm{Nag_Relative}$). If the zero is at ${\mathbf{x}}=0$, or if the initial value of x and the zero bracket the point ${\mathbf{x}}=0$, it is likely that an error exit will occur.
It is possible to request too much or too little accuracy. Since it is not possible to achieve more than machine accuracy, a value of ${\mathbf{tol}}\ll \mathit{machine precision}$ should not be input and may lead to an error exit. For the reasons discussed under Section 6, tol should not be taken too large, say no larger than ${\mathbf{tol}}=\text{1.0e−3}$.
8 Further Comments
For most problems, the time taken on each call to nag_zero_cont_func_cntin_rcomm (c05axc) will be negligible compared with the time spent evaluating $f\left(x\right)$ between calls to nag_zero_cont_func_cntin_rcomm (c05axc). However, the initial value of x and the choice of tol will clearly affect the timing. The closer that x is to the root, the fewer evaluations of $f$ required. The effect of the choice of tol will not be large, in general, unless tol is very small, in which case the timing will increase.
9 Example
This example calculates a zero of $x-{e}^{-x}$ with initial approximation ${x}_{0}=1.0$, and ${\mathbf{tol}}=\text{1.0e−3}$ and $\text{1.0e−4}$.
9.1 Program Text
9.2 Program Data
9.3 Program Results | {"url":"http://www.nag.com/numeric/cl/nagdoc_cl23/html/C05/c05axc.html","timestamp":"2014-04-20T23:56:54Z","content_type":null,"content_length":"29250","record_id":"<urn:uuid:f0602343-1837-4a6c-869b-a18b3f2798b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
what is the meaning of this?
• one year ago
{"url":"http://openstudy.com/updates/509f43ede4b05517d5362d04","timestamp":"2014-04-17T07:04:43Z","content_type":null,"content_length":"48733","record_id":"<urn:uuid:ba32e639-06a3-436c-b1f2-e3f7623455c6>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
State of the art?
November 29th 2010, 09:00 PM #1
State of the art?
Hello MHF,
has it been (dis)proven yet that efficient polynomial integer factorization on a classical computer is impossible, (we know it is possible and easy on a quantum computer)? I can't seem to find
any references on this topic on google and it sure would put many minds at rest knowing the answer. Does anybody have any information on this?
November 29th 2010, 10:45 PM #2
Grand Panjandrum
Nov 2005
Quote:
Hello MHF,
has it been (dis)proven yet that efficient polynomial integer factorization on a classical computer is impossible (we know it is possible and easy on a quantum computer)? I can't seem to find any references on this topic on google and it sure would put many minds at rest knowing the answer. Does anybody have any information on this?
Does >>this<< address this?
November 29th 2010, 10:53 PM #3
Negligible function question.
There is a question:
In constructing obfuscators for point-functions theory, there is a statement that there exists a polynomial-time computable permutation
[tex]\pi : B^n \to B^n[/tex] and a constant c such that for every polynomial s(n) and every adversary A of size s(n), for all sufficiently large n,
[tex] Prob[A(\pi(x)) = x] \le s(n)^c/2^n. [/tex]
I am trying to prove that
[tex] s(n)^c/2^n, [/tex]
where s is a polynomial and c is a constant, is also a negligible function.
Could anyone help me with that? | {"url":"http://www.physicsforums.com/showthread.php?p=937799","timestamp":"2014-04-21T02:12:44Z","content_type":null,"content_length":"19844","record_id":"<urn:uuid:bd6c661d-8db0-47a6-a494-8c14b0c41a68>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure This Math Challenges for Families - Answer
The probability that your team wins is 9/16, or about 56% of the time.
Complete Solution:
There are many ways to answer the question:
Make a rectangular diagram with four rows of the same size. Shade three of the four rows to represent making the first shot.
Make four columns of the same size and shade three of them to represent making the second shot.
There should now be 16 cells in the grid. The 9 cells that are shaded twice represent success on both shots, which means your team wins without any overtime play. Since 9 of the 16 equally likely outcomes represent wins, the probability of winning is 9/16.
Complete Solution (cont.)
A different strategy is to draw a tree diagram labeled with all outcomes and their probabilities for each shot. The probability of winning is found by multiplying the probabilities on the appropriate
branches of the tree.
If your teammate makes the first shot 3/4 of the time, then 3/4 of those times that she makes the first shot, she will make the second shot; that is:
(3/4) × (3/4) = 9/16
In this case, the probability of winning without overtime is 9/16.
[Multiplying probabilities here is correct only if the two shots are independent events. Assume that they are.] | {"url":"http://www.figurethis.org/challenges/c19/answer.htm","timestamp":"2014-04-20T01:35:48Z","content_type":null,"content_length":"19297","record_id":"<urn:uuid:5cc2a4ca-5937-46d7-b874-fca9effd817c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Trivia About Algebra With Answers
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
The new version is sooo cool! This is a really great tool will have to tell the other parents about it... No more scratching my head trying to help the kids when I get home from work after a long
day, especially when the old brain is starting to turn to mush after a 10 hour day.
Marsha Stonewich, TX
I am a 9th grade Math Teacher. I use the Algebrator application in my class room, to assist in the learning process. My students have found the easy step by step instructions, and the explanations on
how the formula works to be a great help.
George Miller, LA
It is tremendous. Any difficult problem and I get the step-by-step. Not seen anything better than this. All that I can say is it seems I got a personal tutor for me.
William Marks, OH
Search phrases used on 2007-06-03 :
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• middle school math pizzazz book e
• sample of lesson plan in converting exponential notation to radical notation
• free math printouts
• real life application of permutations
• radical calculator online
• balance equations calculator online
• algebra for beginners
• answers key to college preparatory mathematics algebra 2 linear systems and matrices unit
• solving equation by subtracting worksheets
• 7.4 study guide biology mcdougal littell
• high school math exam with answer keys
• adding integers absolute value worksheet
• rules of radicals in math
• consumer arithmetic questions
• second year math trivia
• multiplying integer fractions
• simplification calculator
• free 10th grade math worksheets
• polynomial inequalities answers
• how to factor an equation to the third power
• What is the difference between evaluation and simplification of an expression
• how to get from decimal to radical form
• math tile questions
• clep intermediate algebra
• algebra buster free download
• glencoe worksheet answers
• small answer to large equation
• Simple Steps to Balance Chemical Equations
• solving equations with grouping symbols
• soft math
• matlab solve many variables
• grade 10 Equations and working
• dividing variables with exponents
• algebra test online ks3
• phoenix calculator cheats
• solving college algebra word problems
• rotation worksheets
• percentage equations
• math machine multiplying rational expressions
• free ratios worksheets
• radical expression calculator free
• how to go from standard form to vertex form
• SOLVE MY NTH TERM QUADRATICS
• list of fractions
• ellipse graphing calculator
• Translating Word Problems into equation in Grade 7
• 4th Grade Math Logical Reasoning
• matrix to the nth power calculator
• Balancing Equations Quiz pdf
• printable math worksheets for canadian ged prep
• simplifying radical solver
• writing algebraic equation in grade 7
• powerpoint position term to term rules
• the cube root of a variable with a negative exponent
• help EXPONENTS OF EXPONENTIAL EXPRESSIONS
• ellipse calculator
• gre math formulas
• excel "quadratic programming"
• free inequalities worksheet
• solving 2nd order homogeneous
• Matlab Cramer's rule with 4x4
• uses of quadratic equation in real life
• circle graph algebraic problems
• algebra help rational equation calculator
• nth term worksheet
• how to get x on graphing calculator
• algebra 2 book mcdougal littell online
• why do you need to factor the numerator and the denominator? When adding and subtracting rational ex pressions , why do you need a LCD ?
• first order nonlinear ode
• hyperbola converter
• ADDING radical fraction
• fx-115ms hack
• absolute value of linear equation in two variable
• parabola quadratic equation interactive
• 8th grade compound interest problems
• ti 83 log base
• How to solve simplified radical form
• simplify online calculator
• online graphing calculator ti-83
• graphing inequalities
• adding and subtracting matrices worksheets
• simplifying fractional exponents calculator
• binomial table | {"url":"http://www.softmath.com/algebra-help/mathematical-trivia-about-alge.html","timestamp":"2014-04-18T03:02:16Z","content_type":null,"content_length":"26029","record_id":"<urn:uuid:0686e2a7-53cf-4988-ac14-0c0879ca10cf>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Resources for Teaching Linear Algebra
The articles in this collection discuss both the content of linear algebra courses and approaches to teaching such courses. The authors address elementary topics, such as row reduction, and more
advanced topics, such as sparse matrices, iterative methods and pseudo-inverses. There are agreements:
• "Only under torture would I tell a student about Kramer's rule..."(Almon);
• "As an example of how familiarity with determinants can rot your brain...." (Axler).
There are, of course, also areas of disagreement, as when Dubinsky disagrees with the LACSG recommendations. There are well-known applications, such as Markov chains, and some not-so-well-known
applications, such as Fisher's theorem on complete bipartite subgraphs (bicliques). There are articles from the users of matrix algebra: computer graphics, computer science and others. The articles
together constitute a thoughtful, well-written, challenging and often entertaining discussion of this important area of mathematics.
PART I, "The Role of Linear Algebra," is a perfect way to begin this book. Alan Tucker gives a nice survey of the history of linear algebra with many historical notes on people such as Leibniz,
(Wilhelm) Jordan, Babbage, von Neumann and Turing. He emphasizes the importance of linear algebra for its applicability and its role in computation. He discusses the pedagogical importance of linear
algebra as a "very accessible geometrically based theory whose study serves as preparation to more abstract upper-division courses." (p.10). He also gives a nice overview of the history of the linear
algebra course in the undergraduate curriculum.
In PART II, "Linear Algebra as Seen from Client Disciplines," four professionals who use linear algebra in their work discuss their views on linear algebra. We learn about applications in computer
graphics, computer science, economics and engineering. "Matrix Algebra in Economics," written by Clopper Almon is quite enthusiastic about applications of linear algebra to statistics (least square
models and related topics), to modeling an economy, to the maximization of functions of many variables subject to constraints and to dynamical systems defined by difference equations. Margaret Wright
in "Linear Algebra for Computer Science Students" finds that most of her coworkers at AT&T Bell Laboratories use linear algebra and describes what she sees as the important topics for computer
science majors. Rosemary Chang, in her article " A View from a Client Discipline, Computer Graphics," says that the more sophisticated the techniques in computer graphics, the more the need for a
strong background in linear algebra. The concluding article in this section is by David P. Young, "Linear Algebra Use at Boeing: Implications for Undergraduate Education." He states that while there
is not much use of linear algebra in the programming section at Boeing Computer services, there is in research and development. Useful topics include iterative methods, approximation theory, sparse
matrix techniques, least squares, eigenvalues and eigenvectors and Gaussian elimination theory.
PART III, "The Teaching of Linear Algebra," is very useful. It shows that many of us who teach linear algebra face the same problems: we have all seen "the fog roll in" as we hit the more abstract
portion of the course. This section includes a great discussion about the recommendations made by the Linear Algebra Curriculum Study Group (LACSG), including differing viewpoints by others. There is
a discussion of various approaches used in the linear algebra classroom, and a useful article on conceptualization.
The first article, "Teaching Linear Algebra: Must the Fog Always Roll In?" by David Carlson, describes the common problems with ideas such as subspace, rank, basis, span and linear independence. He
identifies the reasons he feels that the "fog rolls in" and makes some suggestions on how to deal with the "fog." The next article is "The Linear Algebra Curriculum Study Group Recommendations for
the First Course in Linear Algebra." The recommendations are made with convincing justifications for their suggested core syllabus (included). Charles C. Cowen describes an interesting project that
has come from problems faced in consulting work at Ford Motor Company in his article "A Project on Circles in Space." The project uses many basic concepts of linear algebra to determine if a set of
points lie on a circle.
One of the most useful articles for those of us who are trying to include MATLAB in our courses is Jane M. Day's "Teaching Linear Algebra New Ways." This article describes the changes she has made in
her linear algebra class and says it is a work still in progress. She has included MATLAB projects in her course, includes numerical issues, allows partners for computer work and has changed her
style of teaching as well as testing. She makes suggestions of textbooks that include MATLAB exercises and has a lengthy bibliography, which I found most helpful.
In the well-written and well-documented article "Some Thoughts on a First Course in Linear Algebra at the College Level," Ed Dubinsky expresses his somewhat negative reaction to the recommendations
of the LACSG and of David Carlson. He describes his philosophy on the way to approach undergraduate mathematics instruction in general and linear algebra in particular. Guershon Harel's article
responds to the LACSG recommendations; he endorses the incorporation of technology and proposes the use of MATLAB. He has specific recommendations of his own including changing high school curriculum
to enhance the learning of linear algebra on the college level, use of MATLAB in calculus in preparation for linear algebra, and how to teach proofs. Two articles follow dealing with specific topics
in linear algebra, namely L-U factorization and iterative methods.
Robert Mena discusses the evolution of the undergraduate linear algebra course through the years in his article "Reflections (1988)". Gerald Porter in "Writing About Linear Algebra: Report on an
Experiment" talks about a linear algebra course for non-mathematics majors in which he had the students write a ten page chapter to supplement the text material on subspaces, spanning sets, basis and
dimension, lines, planes and hyperplanes. The final article in this section is "Scenes from Linear Algebra Classes" by Shlomo Vinner. He would like to see a change in teaching to an approach that
emphasizes concepts, ideas and thought. He discusses and gives examples of conceptual and pseudoconceptual behavior of the students. He feels the place of proof in service courses is controversial
but agrees with the LACSG recommendations emphasizing problem solving (including some proofs) and motivating applications. He concludes with the observation that there is a need to study cognitive
issues as well as curricular issues.
Part IV, "Linear Algebra Exposition," contains very insightful and useful articles on the teaching of various topics in linear algebra. Sheldon Axler's "Down with Determinants!" takes the reader
through a convincing and thought-provoking presentation of eigenvalues and eigenvectors without the use of determinants. He feels that the proper place to introduce determinants is late in the course
(defining then as the products of eigenvalues), for use in the change of variables formula for multi-variable integrals. In "Subspaces and Echelon Forms" David Lay encourages linear algebra
instructors to make sure that students can easily move between the explicit form of subspaces (all linear combinations of some vectors) and the implicit form (a solution set of a system of
homogeneous linear equations). This paper describes how to do this using matrix echelon forms. Maron and Manwani in "A Geometric Interpretation of the Columns of the (Pseudo) Inverse of A" describes
how the columns of the (pseudo)inverse of a matrix A can be used to project the i-th row of A on the span of the other rows. The final article in this section is "The Fundamental Theorem of Linear
Algebra" by Gilbert Strang, which centers on figures illustrating the relations of the four important subspaces related to an mxn matrix A.
PART V, "Applications of Linear Algebra," completes the book by including six articles dealing with various applications. Many of the applications seem to require most of the material covered in a
first linear algebra course and thus could only be presented near the end of the semester. This in no way lessens the value of these applications. In "Some Applications of Elementary Linear Algebra
in Combinatorics" Brualdi and Quinn give three applications, two of which can be done without linear algebra (Fisher's inequality and Hall's Marriage Theorem), although using linear algebra
techniques may simplify the solution. I especially liked the third application, Biclique Partitions, where linear algebra is essential to the solution. Clark and Datta present an enjoyable
student-pleasing card trick that is based on invariance properties of certain matrix subspaces. Lange and Miller present an intriguing ladder game that is used in Japan to determine Christmas gifts.
Gerald Porter in "Linear Algebra and Affine Planar Transformations" gives concrete ideas which are used in computer graphics and which are easily accessible to the linear algebra student. Again, this
is a case of linear algebra simplifying the work even though it is not, strictly speaking, needed. This section ends with two wonderful articles by Gilbert Strang "Patterns in Linear Algebra" and
"Graphs, Matrices, and Subspaces". In the first article, Strang gives two "remarkable families of matrices... which illustrate the central ideas of elimination and diagonalization and
orthogonality... with numbers that make you smile." And, yes indeed, they make you smile. In his second article, Strang explains that "what y = f(x) is to calculus, matrices and subspaces are to
linear algebra." He gives an application using connected directed graphs associated with incidence matrices. These matrices and their subspaces illustrate Kirkhoff's laws. It is a wonderful,
accessible application.
Overall, this book is a source of much information and should be useful to many teachers of linear algebra. Even though many of the articles have been published elsewhere before, having them in one
place and arranged by topic is an excellent idea. The client discipline articles give many examples of how linear algebra is used in the "real world," something students always want to know. The
articles on the LACSG recommendations give us much food for thought, and the application section is filled with wonderful examples. The incorporation of technology is discussed in several articles
and sources for projects are given. For those who teach linear algebra, this one's a "must get".
Rebecca Berg (rberg@bowiestate.edu) is a professor of mathematics at Bowie State University. | {"url":"http://www.maa.org/publications/maa-reviews/resources-for-teaching-linear-algebra","timestamp":"2014-04-19T03:12:37Z","content_type":null,"content_length":"107034","record_id":"<urn:uuid:c87c6076-884d-48b6-9b87-e47a6023a326>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kaplansky's 6th conjecture: dim(Irrep) | dim(algebra) - for semi-simple Hopf algebras
Let $H$ be a semisimple Hopf algebra. One of the Kaplansky's conjectures states that the dimension of any irreducible $H$-module divides the dimension of $H$.
In which cases the conjecture is known to be true?
From Shlomo Gelaki's research statement (which is a nice survey, by the way):
We also proved that the dimension of an irreducible representation of a semisimple Hopf algebra H, which is either quasitriangular or cotriangular, divides the dimension of H. This result partially answers a celebrated conjecture of Kaplansky, which is still open.
Yorck Sommerhäuser has a very nice survey about Kaplansky's conjectures. Section 6 is devoted to Kaplansky's 6th conjecture.
In Sommerhäuser's survey it is mentioned that Richmond and Nichols proved that the conjecture is true if the simple module has dimension two:
Theorem (Nichols & Richmond). The dimension of a semisimple Hopf algebra over $\mathbb{C}$ is even if the Hopf algebra has a simple module of dimension 2.
In Sommerhäuser's survey it is also mentioned that Montgomery and Witherspoon proved that Kaplansky's conjecture holds if it holds for a subalgebra.
In this paper
Cohen, Miriam; Gelaki, Shlomo; Westreich, Sara. Hopf algebras. Handbook of algebra. Vol. 4, 173--239, Handb. Algebr., 4, Elsevier/North-Holland, Amsterdam, 2006. MR2523421 (2010j:16076)
it is written that Kaplansky's conjecture has been proved
• if $H$ is triangular,
• if $H$ is semisolvable,
• if $H$ is cotriangular,
• if $R(H)$ is central in $H^\*$, where $R(H)$ is the span in $H^\*$ of all the characters on $H$.
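For the classical case $H=\mathbb{C}[G]$, the group algebra of a finite group $G$, the divisibility statement is a classical theorem of Frobenius. For symmetric groups it can even be sanity-checked by machine with the hook length formula; the following short Python sketch (illustrative only, not part of the question above) verifies it for $S_5$:

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def irrep_dim(p):
    """Dimension of the S_n irrep labelled by partition p (hook length formula)."""
    n = sum(p)
    hooks = []
    for i, row in enumerate(p):
        for j in range(row):
            arm = row - j - 1                          # cells to the right
            leg = sum(1 for r in p[i + 1:] if r > j)   # cells below
            hooks.append(arm + leg + 1)
    return factorial(n) // prod(hooks)

n = 5
dims = [irrep_dim(p) for p in partitions(n)]
assert sum(d * d for d in dims) == factorial(n)      # sum of squares = |G|
assert all(factorial(n) % d == 0 for d in dims)      # each dim divides |G| (Frobenius)
print(sorted(dims))  # [1, 1, 4, 4, 5, 5, 6]
```

The general Hopf-algebra conjecture is, of course, much harder than this group-algebra special case.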
if $R(H)$ is central in $H^\*$, where $R(H)$ is the span in $H^\*$ of all the characters on $H$. | {"url":"https://mathoverflow.net/questions/108404/kaplanskys-6th-conjecture-dimirrep-dimalgebra-for-semi-simple-hopf-alg/108405","timestamp":"2014-04-20T03:35:45Z","content_type":null,"content_length":"57996","record_id":"<urn:uuid:161c5d81-8b59-45de-b036-cb8be283623c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractals/Cantor's Set
From Wikibooks, open books for an open world
Cantor's set is a subset of an interval in the real line, normally the unit interval [0,1]. It is constructed by removing the open middle third of the interval while leaving the two endpoints in place (see open intervals). The same step is then applied, again and again, to every line segment that remains. In the limit, after infinitely many iterations, what remains of the original interval is the Cantor set.
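The construction is easy to simulate. A short Python sketch using exact fractions (illustrative; the variable names are mine):

```python
from fractions import Fraction

def cantor_step(intervals):
    """Remove the open middle third of every closed interval."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))   # left closed third
        out.append((b - third, b))   # right closed third
    return out

intervals = [(Fraction(0), Fraction(1))]
for n in range(1, 6):
    intervals = cantor_step(intervals)
    assert len(intervals) == 2 ** n                                  # intervals double each step
    assert sum(b - a for a, b in intervals) == Fraction(2, 3) ** n   # length shrinks by 2/3
print(len(intervals))  # 32 intervals after 5 steps, total length (2/3)^5 = 32/243
```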
Taking a measure over the line (that is, asking how much of the original length is left at any given stage), one sees that each iteration multiplies the remaining length by 2/3. Hence the length left decreases exponentially to zero as the number of iterations increases: after $n$ iterations it is $f(n)=\left ( \frac{2}{3} \right )^n$, whose limit is zero. The Cantor set, the limiting residue of the line after an infinite number of iterations as described above, is therefore a set of measure zero: individual points have no length. It is nonetheless an infinite set, since $2^n$ new endpoints are created at the $n$-th step and every endpoint remains in the set. It is in fact an uncountable set, which means that the natural numbers ("whole" or "counting" numbers) cannot be put into one-to-one correspondence with it; in particular, the set contains far more than just the (countably many) endpoints. | {"url":"http://en.wikibooks.org/wiki/Fractals/Cantor's_Set","timestamp":"2014-04-25T03:33:13Z","content_type":null,"content_length":"25158","record_id":"<urn:uuid:7421e609-cd48-40f5-a0e4-00f8f12631ce>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Rafael Araujo’s illustrations are bafflingly complex—so complex that you might assume the artist uses a computer to render the exacting angles and three-dimensional illusions. And true, if you were
to recreate his intricate mathematical illustrations using software, it probably wouldn’t take you long at all. But the craziest part of all is that Araujo doesn’t use modern technology to create his
intricately drawn Calculations series—unless, of course, you count a ruler and protractor.
[MORE: Wildly Detailed Drawings That Combine Math and Butterflies]
(via wired) | {"url":"http://sundrian.tumblr.com/","timestamp":"2014-04-18T05:30:58Z","content_type":null,"content_length":"44055","record_id":"<urn:uuid:9334b36c-e012-4e11-9531-a8b2c6a57d49>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
R: st: Bootstrapping new observations to add to an existing dataset
R: st: Bootstrapping new observations to add to an existing dataset
From "Carlo Lazzaro" <carlo.lazzaro@tiscalinet.it>
To <statalist@hsphsun2.harvard.edu>
Subject R: st: Bootstrapping new observations to add to an existing dataset
Date Mon, 22 Jun 2009 17:34:46 +0200
Davide wrote:
"I just don't want to create new observations by
filling up the dataset with -invnorm(uniform())- because I want to
preserve the properties of the variables: some are binary.."
Dear Davide, if it is only the variables' properties that keep you from creating new
observations, you can use -invibeta- to create random observations with
given parameters a and b (in Stata 9.2/SE it goes like this: g
A=invibeta(a,b,uniform())) for binary variables.
The same approach can be replicated for continuous or count variables, with
-invgammap- (in Stata 9.2/SE it goes like this: g B=b*invgammap(a,uniform())).
Two interesting textbooks on fitting beta and gamma distributions (as well
as taking advantage of their inverse distributions) are:
Spiegelhalter DJ, Abrams KR, Myles JP. Bayesian approach to clinical trials
and health-care evaluation. Chichester: Wiley, 2004
Gelman A, Carlin JB. Bayesian Data Analysis. Second edition. Boca Raton:
Chapman & Hall/CRC, 2004
Kind Regards,
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu
[mailto:owner-statalist@hsphsun2.harvard.edu] On behalf of Davide Cantoni
Sent: Monday, 22 June 2009 14:59
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Bootstrapping new observations to add to an existing
Thank you, gentlemen. To clarify: yes, in a sense I want more of the
same observations. I just don't want to create new observations by
filling up the dataset with -invnorm(uniform())- because I want to
preserve the properties of the variables: some are binary, some are
integers, and some are just any kind of number. The values of these
variables are irrelevant for my regressions, their underlying
distribution is not: I want binary variables to remain binary etc.
Since there are more than 300 variables in the dataset, I do not want
to do this one-by-one, finding out whether var4 consists only of
integers between 1 and 10, var231 is binary etc.
So one way to go ist just take more of the same observations. E.g.,
obs 201 is the same as obs 133, obs 202 is the same as obs 78 and so
on. Or, and that's the other thing I was thinking of, I could create
obs 201 by drawing a new value from the existing distribution (given
by obs1-obs200) of var1, then drawing a new value from the
distribution of var2 and so on...
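The two approaches just sketched (resampling whole rows versus drawing each variable separately from its own empirical distribution) can be contrasted outside Stata as well; a rough Python sketch on made-up data:

```python
import random

random.seed(0)
# Made-up dataset: 200 rows of (binary var1, continuous var2).
data = [(random.randint(0, 1), random.gauss(0, 1)) for _ in range(200)]

# Approach 1: resample whole rows -- keeps variable types and the joint distribution.
rows = [random.choice(data) for _ in range(20000)]

# Approach 2: resample each variable independently -- keeps only the marginals.
col1 = [r[0] for r in data]
col2 = [r[1] for r in data]
indep = list(zip((random.choice(col1) for _ in range(20000)),
                 (random.choice(col2) for _ in range(20000))))

# Either way, var1 stays binary; only approach 1 preserves any var1-var2 dependence.
assert set(r[0] for r in rows) <= {0, 1}
assert set(r[0] for r in indep) <= {0, 1}
```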
2009/6/22 Martin Weiss <martin.weiss1@gmx.de>:
> <>
> Davide said that he wanted to keep "the same (unknown to me) data
> process". Every advice so far has assumed that this means that he just
> "more of the same observations". If that was the case, he could also use a
> random frequency weight for every observation which would reduce the size
> his dataset.
> Davide could clarify whether he merely wants to duplicate observations
> randomly or whether he really wants "new" observations...
> HTH
> Martin
> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu
> [mailto:owner-statalist@hsphsun2.harvard.edu] On behalf of Austin
> Sent: Monday, 22 June 2009 14:33
> To: statalist@hsphsun2.harvard.edu
> Subject: Re: st: Bootstrapping new observations to add to an existing
> dataset
> Davide Cantoni <davide.cantoni@gmail.com> :
> You don't say how many more obs you want--let's assume you want about
> 100 times as many:
> expand 100
> will do it, or
> g u=round(uniform()*200)
> expand u
> for a random-sized sample about 100 times as big with the same DGP.
> You could also
> loc n=_N*100
> g u=round(uniform()*1000)
> expand u
> drop u
> g u=uniform()
> sort u
> drop if _n>`n'
> for a sample 100 times as big but with random numbers of replications
> of each obs.
> On Sun, Jun 21, 2009 at 11:58 PM, Davide Cantoni
> <davide.cantoni@gmail.com> wrote:
>> Hello, I am stuck while thinking about this issue and I would
>> appreciate your suggestions. I have a dataset which I use for
>> simulation purposes, to test whether my do-files run correctly. The
>> issue is that this dataset is too short for many applications, as it
>> has only 200 observations.
>> What I want to do is expand this dataset to include more observations,
>> but keeping the same (unknown to me) data generating process that
>> created the first 200 observations. So I was thinking to proceed in a
>> bootstrapping manner, by drawing the values for each one of the
>> variables (var1, var2 etc etc) for the new observations from the
>> empirical distributions of var1, var2,... in the first 200
>> observations. Yet, I have no idea on how to implement this. I'm
>> grateful for any idea. Thanks for your interest,
>> Davide
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-06/msg00798.html","timestamp":"2014-04-19T21:29:59Z","content_type":null,"content_length":"12159","record_id":"<urn:uuid:457bf1dc-ba15-4fad-a45f-d8f2bb56a08b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
The difference between Principle Components Analysis (PCA) and Factor Analysis (FA)
I am trying to understand the difference between PCA and FA. Through google research, I have come to understand that PCA accounts for all variance, while FA accounts for only common variance and
ignores unique variance.
However, I am having a difficult time wrapping my head around how exactly this occurs. I know PCA rotates the axis used to describe the data in order to eliminate all covariance. Does this step still
occur in FA? If not, what differentiates FA from PCA? Thanks in advance.
It's "principal", not "principle". – Hans Lundmark Sep 27 '10 at 18:52
You will probably get a better answer to this on stats.stackexchange.com/questions – Robby McKilliam Sep 28 '10 at 1:35
The difference between PCA and FA can be thought of in terms of the underlying statistical models (regardless of estimation methods, although these will change depending on the model).

Consider $n$ iid observations of a $p$ dimensional (column) vector $X$. Suppose that for each $X_i$, $i \in \lbrace 1, \dots, n\rbrace$, we also had a $k$ dimensional vector $f_i$, with $k \leq p$. These are our "latent factors". A (linear) factor model assumes that $\mbox{E}(X_i \mid f_i) = Bf_i$, where $B$ is a $p \times k$ "factor loadings" matrix and $\mbox{Cov}(X_i \mid f_i) = \Psi$, a diagonal matrix. If we further assume that $\mbox{V}(f_i) = \mbox{I}_k$, so that the factors are independent, we see that the marginal covariance is $\Sigma \equiv \mbox{Cov}(X_i) = BB^t + \Psi$.

Roughly, you can think of PCA as making the assumption that $\Psi$ is the zero matrix. In both cases the goal is to find/estimate rotations ($B$) that explain covariance patterns.

If we remove the estimation part of the problem and assume we have $\Sigma$ in hand, the difference is between two ways of decomposing a covariance matrix. We either want a "factor decomposition" $\Sigma = BB^t + \Psi$ or a principal component decomposition $\Sigma = BB^t$.
I think the key really is this: Any covariance matrix will admit either kind of decomposition, but often the rank of $B$ will be substantially smaller if we allow the diagonal
elements of $\Psi$ to be non-zero as in the factor decomposition.
Incidentally, finding the factor decomposition for a given covariance that minimizes the rank of $B$ is known as the Frisch problem and is computationally demanding.
PS. I hope this isn't merely a restatement of your remark that "PCA accounts for all variance, while FA accounts for only common variance and ignores unique variance".
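The two decompositions are easy to see numerically; a minimal NumPy sketch (randomly generated $B$ and $\Psi$, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 5, 2
B = rng.normal(size=(p, k))                   # factor loadings (rank k)
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))  # diagonal unique variances
Sigma = B @ B.T + Psi                         # marginal covariance under the factor model

# The "common" part has rank k < p, while Sigma itself is full rank:
assert np.linalg.matrix_rank(B @ B.T) == k
assert np.linalg.matrix_rank(Sigma) == p

# All off-diagonal covariance comes from B alone, since Psi is diagonal:
off = lambda M: M - np.diag(np.diag(M))
assert np.allclose(off(Sigma), off(B @ B.T))

# PCA instead works with the full eigendecomposition of Sigma (as if Psi were 0):
w, V = np.linalg.eigh(Sigma)
assert np.allclose(V @ np.diag(w) @ V.T, Sigma)
```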
| {"url":"http://mathoverflow.net/questions/40191/the-difference-between-principle-components-analysis-pca-and-factor-analysis?sort=votes","timestamp":"2014-04-19T07:16:21Z","content_type":null,"content_length":"55433","record_id":"<urn:uuid:f5a39340-de59-49ee-b1b9-fbf096fddbb8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
A true Idiot
Apparently someone bid up to $3,000 - for a country with a current account balance of -$8,000,000,000 (2005 est), that may have been quite brave! (In other words, an 8 billion dollar loss)
Hmmm ... maybe they were planning on creating some "efficiencies" in the economy so it would make a profit?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=34786","timestamp":"2014-04-20T06:06:25Z","content_type":null,"content_length":"12847","record_id":"<urn:uuid:c86607a3-2fc4-49be-8520-6ddbbd1cb0b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
Backwards Derivative Rules
April 21st 2007, 04:56 PM #1
I came up with two problems which I find interesting.
1) If f(x)+g(x) is differentiable at c, and f(x) is differentiable at c, then is g(x) differentiable at c?
2) If f(x)g(x) is differentiable at c, and f(x) is differentiable at c, then is g(x) differentiable at c?
The first is true, for obvious reasons.
(f + g)' = f' + g'
If f(x)+g(x) is differentiable at c then f and g must be defined and continuous at c, and therefore, each is independently differentiable at c.
The second may not be as easy to prove:
If f(x)*g(x) is differentiable at c and f(x) is differentiable at c, then does g(x) necessarily have to be?
(f*g)' = f'g + fg'
This shows that f(x) times g'(x) exists at c. I think it can be proven that this is not true, but I'll just provide a counter example:
Let f(x) = x^3 and g(x) = 1/x, and c = 0
(f(x)*g(x)) = x^3*1/x = x^2
(f(x)*g(x))' = 3x^2*1/x + x^3*-1/x^2 = 3x - x = 2x
(f(x)*g(x))'(0) = 0
g'(x) = -1/x^2
g'(0) = undefined!
You are right! It is true, but how do you prove it? It really is easy.
The second may not be as easy to prove:
If f(x)*g(x) is differentiable at c and f(x) is differentiable at c, then does g(x) necessarily have to be?
(f*g)' = f'g + fg'
This shows that f(x) times g'(x) exists at c. I think it can be proven that this is not true, but I'll just provide a counter example:
Let f(x) = x^3 and g(x) = 1/x, and c = 0
(f(x)*g(x)) = x^3*1/x = x^2
(f(x)*g(x))' = 3x^2*1/x + x^3*-1/x^2 = 3x - x = 2x
(f(x)*g(x))'(0) = 0
g'(x) = -1/x^2
g'(0) = undefined!
Oh no! Very bad, very bad counterexample.
Let f and g be two real functions with domains dom(f) and dom(g) respectively. We define the function:
f * g to be f(x)*g(x) with domain, dom(f) (intersect) dom(g) (hoping the intersection is non-empty).
So given, f(x)=x^3 on R and given g(x)=1/x on R-{0} then their product is (f*g)(x)=x^2 for R-{0}. Thus, your reasoning does not work. In high school precalculus what you did was okay, but not if
you want to be totally mathematically correct.
Oh no! Very bad, very bad counterexample.
Let f and g be two real functions with domains dom(f) and dom(g) respectively. We define the function:
f * g to be f(x)*g(x) with domain, dom(f) (intersect) dom(g) (hoping the intersection is non-empty).
So given, f(x)=x^3 on R and given g(x)=1/x on R-{0} then their product is (f*g)(x)=x^2 for R-{0}. Thus, your reasoning does not work. In high school precalculus what you did was okay, but not if
you want to be totally mathematically correct.
If f(x) is continuous and differentiable over some domain D(f) containing x = c (and f(x) != 0), and if f(x)*g(x) is continuous and differentiable over some domain D(f*g) containing x = c, where D(f*g) is the intersection of D(f) and D(g), then g(x) must also be differentiable at x = c.
For f(x) to be continuous at c, f(c) = C is a constant, real number solution.
For f(x) to be differentiable and g(x) to be continuous at c, f'(c)g(c) = R is some constant, real number solution.
For (f(x)*g(x)) to be differentiable at c, f'(c)g(c) + f(c)g'(c) = Q is a constant, real number solution.
(f(x)*g(x))'(c) = f'(c)g(c) + f(c)g'(c) = R + Cg'(c) = Q --> g'(c) = (Q - R)/C is some constant, real number solution if C = f(c) != 0.
However, if C = f(x) = 0, then R = Q and R - Q = 0, and so
g'(c) = lim{x->c} (Q - R)/C
= lim{x->c} (f'(x)g(x) + f(x)g'(x) - f'(x)g(x))/f(x)
I'm not sure where to go from here if C = 0 (if I'm even going in the right direction). I've tried a few different routes, but I keep getting stuck.
If f(x) is continuous and differentiable over some domain D(f) containing x = c and f(x)+g(x) is continuous and differentiable over some domain D(f+g) containing x = c, where D(f+g) is the
intersection of D(f) and D(g), then g is continuous at x = c.
For f(x) to be differentiable at c, f'(c) = C is a constant, real number solution.
For f(x)+g(x) to be differentiable at c, f'(c)+g'(c) = Q is a constant, real number solution.
[f(x) + g(x)]' = f'(x) + g'(x) = C + g'(x) = Q --> g'(x) = Q - C is some constant, real number solution.
Let me post the solutions to show you how easy they really are:
1) If f(x)+g(x) is differentiable at c and f(x) is differentiable at c, then [f(x)+g(x) - f(x)] is differentiable at c. Thus, g(x) is differentiable at c.
2) If f(x)g(x) is differentiable at c and f(x) is differentiable at c and f(c)!=0, then f(x)g(x)/f(x) is differentiable at c, hence g(x) is differentiable at c.
But if f(x) is differentiable at c and f(c)=0 then it is not true.
For example f(x)=x and g(x) = |x| and c=0.
Let me post the solutions to show you how easy they really are:
1) If f(x)+g(x) is differentiable at c and f(x) is differentiable at c, then [f(x)+g(x) - f(x)] is differentiable at c. Thus, g(x) is differentiable at c.
2) If f(x)g(x) is differentiable at c and f(x) is differentiable at c and f(c)!=0, then f(x)g(x)/f(x) is differentiable at c, hence g(x) is differentiable at c.
But if f(x) is differentiable at c and f(c)=0 then it is not true.
For example f(x)=x and g(x) = |x| and c=0.
I wrote more but I pretty much had the same conclusions.
Here is an easy counterexample for #2.
Consider the case of f(x)=x and g(x)=1/(1+|x|).
Did you graph x/(1+|x|)?
Say that p(x)=x/(1+|x|) then p(0+h)= h/(1+|h|), p(0)=0, &
[p(0+h)-p(0)]/h =1/(1+|h|).
P.S. The function p is a very interesting example. It has a derivative at 0 but at the same time the second derivative does not exist. Nonetheless, the point (0,0) is a point of inflection for
the graph.
Last edited by Plato; April 22nd 2007 at 12:21 PM. Reason: Post Script
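Both counterexamples are easy to probe numerically. A small Python sketch for the pair f(x) = x, g(x) = |x| (difference quotients only, so a sanity check rather than a proof):

```python
def f(x):
    return x

def g(x):
    return abs(x)

def diff_quotient(func, c, h):
    return (func(c + h) - func(c)) / h

# (f*g)(x) = x*|x|: the quotient at 0 is h*|h|/h = |h|, which tends to 0,
# so the product is differentiable at 0 with derivative 0 ...
for h in (0.5, 0.25, 2.0 ** -10):   # binary powers keep the arithmetic exact
    assert diff_quotient(lambda t: f(t) * g(t), 0.0, h) == h
    assert diff_quotient(lambda t: f(t) * g(t), 0.0, -h) == h

# ... while g(x) = |x| has one-sided quotients +1 and -1 at 0: not differentiable.
assert diff_quotient(g, 0.0, 1e-6) == 1.0
assert diff_quotient(g, 0.0, -1e-6) == -1.0
```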
| {"url":"http://mathhelpforum.com/calculus/14002-backwards-derivative-rules.html","timestamp":"2014-04-20T00:11:40Z","content_type":null,"content_length":"73427","record_id":"<urn:uuid:67ed8e0a-47d0-4d35-bc98-bf5e016de02b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
please check if my proof is correct
October 5th 2009, 06:37 PM #1
Check if $\langle t^{n} \rangle$ is a retract of $\langle t \rangle$.
Define $\phi : \langle t \rangle \rightarrow \langle t \rangle$ by $\phi (x)=x$ for all $x \in \langle t \rangle$.
Let $x_{1}, x_{2} \in \langle t \rangle$.
Then $x_{1}=t^{a}, x_{2}=t^{b}$ where $a,b$ are integers.
$\phi (x_{1}x_{2})= \phi (t^{a}t^{b})=\phi (t^{a+b})=t^{a+b}=t^{a}t^{b}=\phi (t^{a} \phi (t^{b})=\phi (x_{1}) \phi(x_{2})$
So, $\phi$ is homomorphism.
Clearly, $\phi (x^*)=x^*$ for all $x^* \in \langle t^{n} \rangle$.
Since $\phi (x_{1})= \phi (t^{a})=t^{a}$, $t^{a} \in \langle t^{n} \rangle \Leftrightarrow a | n$.
Hence, $\langle t^{n} \rangle$ need not be a retract of $\langle t \rangle$.
Check if $\langle t^{n} \rangle$ is a retract of $\langle t \rangle$.
Define $\phi : \langle t \rangle \rightarrow \langle t \rangle$ by $\phi (x)=x$ for all $x \in \langle t \rangle$.
Let $x_{1}, x_{2} \in \langle t \rangle$.
Then $x_{1}=t^{a}, x_{2}=t^{b}$ where $a,b$ are integers.
$\phi (x_{1}x_{2})= \phi (t^{a}t^{b})=\phi (t^{a+b})=t^{a+b}=t^{a}t^{b}=\phi (t^{a} \phi (t^{b})=\phi (x_{1}) \phi(x_{2})$
So, $\phi$ is homomorphism.
Clearly, $\phi (x^*)=x^*$ for all $x^* \in \langle t^{n} \rangle$.
Since $\phi (x_{1})= \phi (t^{a})=t^{a}$, $t^{a} \in \langle t^{n} \rangle \Leftrightarrow a | n$.
Hence, $\langle t^{n} \rangle$ need not be a retract of $\langle t \rangle$.
What is t, anyway? A normal subgroup of a group is a retract of the group iff it has a normal complement.
$\langle t \rangle$ is a cyclic group generated by $t$.
$t$ is a generator.
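If $\langle t \rangle$ is infinite cyclic, so $\langle t \rangle \cong \mathbb{Z}$ and $\langle t^{n} \rangle$ corresponds to $n\mathbb{Z}$, then every endomorphism of $\mathbb{Z}$ has the form $x \mapsto kx$, and the retraction criterion can be probed by brute force. A small Python sketch (finite ranges only, so a sanity check rather than a proof):

```python
def retraction_onto(n, k):
    """Does x -> k*x send Z into nZ and fix every element of nZ?"""
    image_in_nZ = all((k * x) % n == 0 for x in range(1, 20))    # image inside nZ
    fixes_nZ = all(k * (n * x) == n * x for x in range(1, 20))   # identity on nZ
    return image_in_nZ and fixes_nZ

for n in range(1, 8):
    ks = [k for k in range(-10, 11) if retraction_onto(n, k)]
    print(n, ks)  # only n = 1 admits a retraction (k = 1)
```

This matches the homomorphism argument: fixing nZ forces k = 1, whose image is all of Z, so no proper subgroup nZ (n > 1) is a retract.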
| {"url":"http://mathhelpforum.com/advanced-algebra/106365-please-check-if-my-proof-correct.html","timestamp":"2014-04-19T21:26:02Z","content_type":null,"content_length":"47094","record_id":"<urn:uuid:d427124f-5cbe-4532-84ae-1ecbe95f5e85>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Announcing the second batch of 2013 Ambassadors!
poopsiedoodle Andriod09 energeia EvonHowell Ashleyisakitty SWAG Naveen klimenkov lexiaimee TheViper
A Hearty Congratulations to all!!
@poopsiedoodle @Andriod09 @energeia @EvonHowell @Ashleyisakitty @SWAG @Naveen @klimenkov @lexiaimee @TheViper
congratulations everybody! (:
Congrats everyone!!!
great job!!!!!!!!!!!1
What do they do?
Great to be here serving as a Open Study Ambassador!
congrats, and no worries godorovg, there is always next time!
Hey new amby's congratulations
How many are there now?
Do all the amb's get the purple A next to their name?
Best Response
You've already chosen the best response.
When do they get placed by our names?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@rebeccaskell94 i think theyres like 20... Honestly, its pretty exciting when you first become one, but after that, you realize it's just a purple A
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Ambassadors don't have any special powers but a purple A? Well I guess that beats a scarlet A. :P
Best Response
You've already chosen the best response.
Theyre technically supposed to welcome new users around the site and stuff..
Best Response
You've already chosen the best response.
lol oh... enjoy yourself as one of the prime elite. ;P
Best Response
You've already chosen the best response.
Uhh.. sure i will.
Best Response
You've already chosen the best response.
I'm sure you will. :) I'm just joking with you. Almost Jealous.
Best Response
You've already chosen the best response.
Dont need to be jealous haha..
Best Response
You've already chosen the best response.
Honestly, I was rather saddened when I did not make it in the first round. But this is great! :D
Best Response
You've already chosen the best response.
Oh wow, this is wonderful! woohoo!
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Congratulations everyone ! :)
Best Response
You've already chosen the best response.
Yay! I think the new ambassadors are all great :3. Congratulations new ambassadors.
Best Response
You've already chosen the best response.
Congratulations guys :)
Best Response
You've already chosen the best response.
Congrats to all new ambassadors:)
Best Response
You've already chosen the best response.
Maybe You forgot to put my name there ? ahaha :v
Best Response
You've already chosen the best response.
Where is @Carniel?
Best Response
You've already chosen the best response.
Lol it wasn't exciting and it is *just* a purple A.
Best Response
You've already chosen the best response.
is there a third batch?
Best Response
You've already chosen the best response.
@AravindG Preetha said: "Rebecca, c'mon. Why does it suck for new Ambis? Here is the deal. We need to be sure that you are who you say you are (hence photo id). And we need your real name because
we want to make sure you are who you say you are. Now you are a member of a team, and representing OpenStudy. Its big time. And opCode, I hope you will apply to become an Ambi. One of your fans
has already recommended you!" So either there is one or you apply now for next year. Source: http://openstudy.com/updates/510b4695e4b0d9aa3c46e68a
fans ? :)
Glad to be server everyone!
Not to be pushy, but when do we get the purple (A) by our name?
Congrats all. From a former amby. Saifoo.khan
how exciting! congrats
congratulations everyone... :)
So officiall :) @saifoo.khan
#Team @karatechopper
lol #TeamKC ftw
TEAM KZ.
heheh I see an imposter is trying to go against my team
TEAM KC WILL CRUSH KZ >:D But seriously, when are the moderator olympics going to happen?
I don't think you made it :<
What just happened^
>wants to become an ambassador >uses curse words
Carniel *has* been asking for over a year and he keeps being told he'll be added and the mods and admin aren't keeping their side of the deal. I'd be frustrated, too. I also suggest they add him
like they SAID they would.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
@Carniel did you put in an application for ambassadors?
Best Response
You've already chosen the best response.
@karatechopper He did. Back in the day When cavemen dug caves And the mountains sucked lava Back in the day. When dinosaurs wandered Around the land When the man dreamed Of free meals Oh, KC, oh.
Where were you sleeping? It's been almost a year Since Carniel requested
Best Response
You've already chosen the best response.
What rhyme scheme would that be? T_T
Best Response
You've already chosen the best response.
when are they calling a new batch??
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Well...we need to start training soon..so I would think about another 3 weeks at the least. but I am sure preetha will know more. @stgreen
Best Response
You've already chosen the best response.
What does training start? @karatechopper
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Grats everyone :D
Best Response
You've already chosen the best response.
That's bc you're not an ambassador.
Best Response
You've already chosen the best response.
good job guys
Best Response
You've already chosen the best response.
@Karatechopper yes... I did another application around Jan 8th :\
Best Response
You've already chosen the best response.
@Carniel For ambassador!
Best Response
You've already chosen the best response.
Seriously doubt its gonna happen.... They can't ever keep true to what they say apparently.
Best Response
You've already chosen the best response.
Well @Carniel dont lose hopes , might be you get chance in the third batch of ambassadors
Best Response
You've already chosen the best response.
I'm not waiting for a third batch... Seems to me they just picking random ppl
Best Response
You've already chosen the best response.
Well , be positive friend. And of course not, they are picking the people ( ambassadors ) from the statements given by them and their performance basis as per my thinking... You just keep helping
the needy ones and of course get help from here. Even you will not believe but most probably they monitor how you are helping students and responding to their problems. The mods and admins will
recognise you soon :) Best of luck.
Best Response
You've already chosen the best response.
I mean seriously bro there is a guy with a SS of 15 and another 32 who made it with ease...
Best Response
You've already chosen the best response.
So it takes 2 years to recognize me o_O? Blues is the only person that actually notices me
Best Response
You've already chosen the best response.
Well when you told me this at a first glance I also thought of "random picking" BUT one thing we are forgetting that the statements and why do you want to be an ambassador matters a lot, oh! that
is what I am guessing, I don't know the reality but friend really they can not do random picking :) as per my experience in this site they are not partial and are fair to all.
Best Response
You've already chosen the best response.
Doesn't seem that way...
Best Response
You've already chosen the best response.
There is something going in the minds of admins and moderators , I am currently thinking of a pattern like they are selecting some ambassadors first from 1-20 serial numbers and then so on...
this is just my imagination but just wait for the third batch or any announcement or any comment here by admin or a mod.
Best Response
You've already chosen the best response.
I'm not gonna wait for a third batch... Everyone who was defending me thought I was gonna get the position, yet I didn't get anything. Even people who didn't reply thought I was gonna get it...
Best Response
You've already chosen the best response.
Well Carnie I still remember that you were also a candidate ( choice ) in the coming batch when i became amby, I was also shocked when you were not declared as an amby even in the first batch but
still I request you to keep a hope at least a little one ?
Best Response
You've already chosen the best response.
1. Applied las year 2. Amby committee 3. Applied this year
Best Response
You've already chosen the best response.
coming one : 4 . Selected as amby.
Best Response
You've already chosen the best response.
0.000000001% chance .-.
Best Response
You've already chosen the best response.
^ This is enough in mathematics for a miracle... :)
Best Response
You've already chosen the best response.
Of me not getting Amby? I agree
Best Response
You've already chosen the best response.
Of you getting Amby?
Best Response
You've already chosen the best response.
Nah .-.
Best Response
You've already chosen the best response.
I respect the work done by the Announcing..
Best Response
You've already chosen the best response.
Hey great work @Preetha
Best Response
You've already chosen the best response.
valid phto id mean simply.........photo......or smthing else while submitting application for amby membership???????
Best Response
You've already chosen the best response.
A photo of yourself.
Best Response
You've already chosen the best response.
Where do we need to send in the photograph?
Best Response
You've already chosen the best response.
uhhh I think it's emailed to somewhere I forgot the address sorry
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
YAY! Happy to be joining the OpenStudy Ambassador Team Finally!!! Thank You @Preetha
Best Response
You've already chosen the best response.
Congrats everyone!!!!!!!!!!! :)
Best Response
You've already chosen the best response.
Congrats, ladies and gents. c:
Best Response
You've already chosen the best response.
What is an ambassador? Oh and congrats to anyone who got it.
Best Response
You've already chosen the best response.
@Opcode, thank you, I was going for link, but settled on Zelda :p, anyhow thanks.
Best Response
You've already chosen the best response.
Sooo, you think that ambys are chosen at random? Wrong, we look through every application and we nominate by looking to see if you have the characteristics we are looking for. Some people give a
totally lame excuse for them needing to be an amby, therefore we cross them out and move on. Don't give up! @Carniel Keep trying!!
Best Response
You've already chosen the best response.
Yeah, what KC said.
Best Response
You've already chosen the best response.
No KC I don't think they're chosen at random and LONG before you were a mod Carniel was told he'd be made an ambassador. So if he was already "chosen" I think the mods need to keep their word and
make him one. Damn he can have my A if he wants it. It doesn't make me feel any better or worse about myself.
Best Response
You've already chosen the best response.
@Carniel Don't give up bro, same thing happened to me, you will certainly make it! :)
Best Response
You've already chosen the best response.
@Swag @Karatechopper @rebeccaskell94 Whats the point of constantly trying when they said I was suppose to become one and yes I think you guys pick at random since I see ppl who just started in OS
became Amby
Best Response
You've already chosen the best response.
@Carniel they dont really pick at random but more pick from what they see in the application but i do agree that they should pick more carefully and that they should have kept their word. i'm
sorry you did not get picked. Trust me i know how it feels i think i have been waiting longer than you for this and they have been telling me foreverrrrrrrrrrrr that i would get picked but i
NEVER did. After a year of waiting i finally made it. Don't worry bro you will make it.
Best Response
You've already chosen the best response.
Yeah, I think you'd be a good amby so don't stop trying. I'll do what I can to help ya out on it, because you deserve it.
Best Response
You've already chosen the best response.
@Carniel I believe that They do NOT pick at random they go through the applications 1 by 1 and read the reasons they applicant is applying and if they like what they see on the application then
they put you on the list for Amby, if they don't like the reason or think that it is just some lame excuse they cross out your name but they Don't pick at random in my belief. @Carneil You will
probably be in the 3rd batch of Amby this year. DON'T lose hope there is still time!
Best Response
You've already chosen the best response.
@Preetha Can explain the process of choosing an amby..
Best Response
You've already chosen the best response.
3rd batch is most likely a few months or 2 away and I'm not gonna w8 that long...
Best Response
You've already chosen the best response.
@carniel did you actually apply for the second batch? just wondering...
Best Response
You've already chosen the best response.
@poopsiedoodle No I just waited and expected to become Amby... (Srry for sarcasm) That's like me standing in line for food and there isn't any...
Best Response
You've already chosen the best response.
We look at your statement, your history, behavior patterns and ask the mods for their opinions. Mods have veto power!!! So if you want to be an Ambi, be nice, be nice and stay calm!!!
Best Response
You've already chosen the best response.
@Carneil You don't need to have that kind of attitude honestly just by the way you've been complaining and your sarcasm I wouldn't put you as an Amby just my opinion.
Best Response
You've already chosen the best response.
@carniel at the end of the day, ambassadors are just like regular users, and just have the purple A.. theres really nothing special about it, your extent of power is the same.
Best Response
You've already chosen the best response.
but it makes you feel better @zaynahf :P
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Yeah, but that feeling lasts for like a few days.. until you realize theres nothing to it.
Best Response
You've already chosen the best response.
Thanx @Preetha & everyone who supported me :)
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/510aeb90e4b0d9aa3c46b944","timestamp":"2014-04-18T13:45:43Z","content_type":null,"content_length":"318867","record_id":"<urn:uuid:bf87afc6-5c27-437b-baa1-6000c913f187>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
function help
November 13th 2008, 01:28 PM #1
Find the minimum value of the quadratic y = 2x^2 - 8x + 8. At what x-value does the minimum occur? State the domain and range of this function.
This is a parabola that faces upwards, so the minimum point is going to be the vertex of the parabola. Write the equation in the form $y=a(x-h)^2+k$; then the vertex is the point (h,k). The domain is all possible values of x, which you should see quickly. The range is the values of y covered by this graph - you know this graph has a minimum, so that's one bound of the range. Is there a maximum value?
This is a parabola that faces upwards, so the minimum point is going to be the vertex of the parabola. Write the equation in the form $y=a(x-h)^2+k$; then the vertex is the point (h,k). The domain is all possible values of x, which you should see quickly. The range is the values of y covered by this graph - you know this graph has a minimum, so that's one bound of the range. Is there a maximum value?
The vertex i get is (2,0), is this correct?
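As a worked check of the hint above (my own addition, not part of the original thread), completing the square gives:

$$y = 2x^2 - 8x + 8 = 2(x^2 - 4x + 4) = 2(x - 2)^2 + 0$$

so $a = 2$, $h = 2$, $k = 0$: the vertex is indeed $(2, 0)$, i.e. the minimum value $0$ occurs at $x = 2$. The domain is all real numbers and the range is $y \ge 0$.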
November 13th 2008, 01:37 PM #2
MHF Contributor
Oct 2005
November 14th 2008, 05:50 AM #3 | {"url":"http://mathhelpforum.com/pre-calculus/59427-function-help.html","timestamp":"2014-04-20T14:06:55Z","content_type":null,"content_length":"37882","record_id":"<urn:uuid:366e6065-2636-4c3f-8231-2ee9706af647>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
NearPy provides two classes for experiments, RecallPrecisionExperiment and DistanceRatioExperiment. They allow you to evaluate different engine configurations, hashes, distances and filters on a custom data set.
Distance Ratio
We found that recall and precision are not good measures when it comes to ANN, because they focus on the actual vectors in the result set. In ANN we are more interested in the preservation of spatial structure and do not care too much whether the result set contains all the exact neighbours or not. So in our eyes a much better measure is the average ANN distance ratio of all the vectors in the data set. We do not know if this has been used before, but we find it to be a really good measure of how a certain ANN method performs on a given data set.
The distance ratio of an ANN y is its distance to the minimal hypersphere around the query vector x that contains all exact nearest neighbours n, clamped to zero and normalized by this hypersphere's radius.
This means that if the average distance ratio is 0.0, all ANNs are within the exact neighbour hypersphere. A ratio of 1.0 means the average ANN is 2*R away from the query vector.
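Read literally, the definition can be sketched in a few lines of Python (the function name and the exact clamping formula below are my reading of the prose above, not code from NearPy):

```python
import math

def avg_distance_ratio(query, exact_nn, approx_nn):
    """Average ANN distance ratio per the definition above.

    R is the radius of the smallest hypersphere around `query` that
    contains all exact nearest neighbours; each ANN's distance beyond
    that sphere is clamped to zero and normalised by R.
    """
    R = max(math.dist(query, n) for n in exact_nn)
    ratios = [max(0.0, (math.dist(query, y) - R) / R) for y in approx_nn]
    return sum(ratios) / len(ratios)
```

Here `exact_nn` would come from an exact linear scan and `approx_nn` from the engine configuration under test.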
These three plots are the result of one experiment with random data (dim=100, count=10000) and N=10. Experiments return average ANN distance ratio, result size and search time (with respect to exact
search time) for each engine configuration in the experiment. In this experiment four random discretized 1-dim projection hashes were used with varying bin widths.
From the plots one can see that with increasing bin width, search time and size of the result list increase. This is simply because the larger the bin, the more vectors are contained in each bin. At the same time, the distance ratio decreases, because more vectors in each bin mean more close neighbours of the query vector. So in this particular case, with a search time of only about 6% of the exact search time, the average ANN is only 12% outside of the exact neighbour hypersphere (N=10), which is not bad for a first shot.
Download the source code for this experiment here. | {"url":"http://nearpy.io/","timestamp":"2014-04-18T20:45:16Z","content_type":null,"content_length":"16212","record_id":"<urn:uuid:07508fed-e9c8-4562-94ab-d517cfce8ea5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bottom up hand-written parsers in under an hour
mark@freenet.uwm.edu (Mark Hopkins)
Sat, 12 Feb 1994 04:44:43 GMT
From comp.compilers
Newsgroups: comp.compilers
From: mark@freenet.uwm.edu (Mark Hopkins)
Keywords: parse
Organization: Compilers Central
Date: Sat, 12 Feb 1994 04:44:43 GMT
Context Free Grammars can be thought of as systems of recursive
equations whose variables are the grammar's non-terminals. These
variables denote context free sets. The equations can be solved, with the
solution expressed in terms of context free expressions. With the right
kind of notation the solution will also form a compact representation of a
parser for the grammar.
The notation to be presented by way of a case study is actually more
powerful than this, it originates from mid '92 and has been described here
briefly on several occasions. With it, we can not only represent
context free sets, but also sets generated by translation grammars and so
is ideally suitable for parsing applications. The form of the solution
will be a linear system of equations that will be called a Quantum
Grammar. One could ideally convert this system into regular expressions
(over algebraic sets called Monoids, for all you theorists out there), but
that's aside from our purpose.
In general, there are three sets: the input set (X), the action set
(Y), and the operator set (D). The operator set has the form:
{ <0|, <1|, ..., <m|, |0>, |1>, ..., |m> }
and the following properties hold:
(A) x y = y x, x d = d x, d y = y d
for all x in X, y in Y, d in D
(B) <m| |n> = 1 if m = n, 0 else
The symbol 1 denotes the empty string, and 0 effectively denotes the fail
string, and they have the following properties (expressed using the
notation of regular expressions):
(C) A 0 = 0 = 0 A, A + 0 = A = 0 + A
A 1 = A = 1 A
(Technically, this discussion amounts to saying that we're considering
regular expressions taken over non-free Monoids with 0). Also, the
following axiom will be invoked:
(D) |0><0| + ... + |m><m| = 1
but note that being a regular expression equation, this leads to some
subtle changes in the way regular expressions are defined in terms of
sets. As an exercise given the algebraic relations defined by (A) and
(B), try coming up with a definition of regular expressions in terms of
sets that also satisfies (C), or (C) and (D) (C's easy, but C and D
together is a bit of a challenge).
Now, consider the grammar:
S -> w E d S | i E t S | i E t S e S
| x q E z | o L c
L -> | L S
E -> E b E | u E | E p | x | m E n
In equational form, using the notation of regular expressions, this can be
presented as the system:
S = w E d S + i E t S + i E t S e S + x q E z + o L c
L = 1 + L S
E = E b E + u E + E p + x + m E n
The input set, X, here is { w, d, i, t, e, x, q, z, o, c, b, u, p, m, n }.
It is assumed that some of the terms in this grammar are augmented by a
set of action symbols:
S = w E d S y0 + i E t S y1 + i E t S e S y2 + x q E z y3 + o L c
L = y4 + L S y5
E = E b E y6 + u E y7 + E p y8 + x y9 + m E n
Thus, Y = { y0, ..., y9 }. They are appended to some of the terms here,
but that's not a necessary restriction. These actions can stand for
anything, but are usually understood to stand for code segments in parsing
Before undergoing the process, the system is first left-factored:
S = w E d S y0 + i E t S (y1 + e S y2) + x q E z y3 + o L c
L = y4 + L S y5
E = E (b E y6 + p y8) + u E y7 + x y9 + m E n
Two simple steps are required to get a parser from this, and that's
all. Step 1: insert operator symbols, step 2: chop up the terms.
In the first step, each non-terminal, other than those in
left-recursive contexts, is bracketed by a balanced pair of operators.
Each occurrence of a given non-terminal should be bracketed by distinct
operators, but two different non-terminals may share the same operators if
neither occurs in the left-recursive context of the other (as is the case
S = w <1| E |1> d <1| S |1> y0
+ i <2| E |2> t <2| S |2> (y1 + e <3| S |3> y2)
+ x q <4| E |4> z y3
+ o <5| L |5> c
L = y4 + L <6| S |6> y5
E = E (b <7| E |7> y6 + p y8)
+ u <8| E |8> y7
+ x y9
+ m <9| E |9> n
Thus, D = { <0|, ..., <9|, |0>, ..., |9> }, here. In addition, we add
another non-terminal, start, and the following equation:
start = <0| S |0>.
In the second step, we define a new set of non-terminals, SF, LF, EF
and make the following (re)definitions:
N redefined as N NF.
NF = { x in (X + Y)*: start ->* w N x }
These sets can be derived directly from the grammar above in the
following way: (i) append NF to each term in the equation for N
(except the new start symbol):
start = <0| S |0>.
S = w <1| E |1> d <1| S |1> y0 SF
+ i <2| E |2> t <2| S |2> (y1 SF + e <3| S |3> y2 SF)
+ x q <4| E |4> z y3 SF
+ o <5| L |5> c SF
L = y4 LF
+ L <6| S |6> y5 LF
E = E (b <7| E |7> y6 EF + p y8 EF)
+ u <8| E |8> y7 EF
+ x y9 EF
+ m <9| E |9> n EF
and (ii) starting at the right of each term, chop the term off past the
last non-terminal, N, and place the remainder in the equation for NF. For
example, |0> is removed from the first equation and placed in the equation
for SF, |1> y0 SF is removed from the second equation, placed in SF's
equation, and then |1> d <1| S is chopped off and placed in the equation
for EF. The final result of this process is:
start = <0| S.
S = w <1| E
+ i <2| E
+ x q <4| E
+ o <5| L
SF = |0>
+ |1> y0 SF
+ |2> (y1 SF + e <3| S)
+ |3> y2 SF
+ |6> y5 LF
E = E
+ u <8| E
+ x y9 EF
+ m <9| E
EF = |1> d <1| S
+ |2> t <2| S
+ |4> z y3 SF
+ |7> y6 EF
+ |8> y7 EF
+ |9> n EF
+ (b <7| E + p y8 EF)
L = y4 LF
+ L
LF = |5> c SF
+ <6| S
The spurious left-recursions are then eliminated and the result is:
start = <0| S.
S = w <1| E
+ i <2| E
+ x q <4| E
+ o <5| L
SF = |0>
+ |1> y0 SF
+ |2> (y1 SF + e <3| S)
+ |3> y2 SF
+ |6> y5 LF
E = u <8| E
+ x y9 EF
+ m <9| E
EF = |1> d <1| S
+ |2> t <2| S
+ |4> z y3 SF
+ |7> y6 EF
+ |8> y7 EF
+ |9> n EF
+ (b <7| E + p y8 EF)
L = y4 LF
LF = |5> c SF
+ <6| S
This is a Quantum Grammar, essentially a listing of a push down
transducer in equational form. Each equation can be interpreted as the
specification of a state, the <m| operator means "push m", the |m>
operator means "pop and test for equality to m". For instance, the state
representing SF is:
SF: Input Stack Context Stack Writes Actions Next State
|0> (end) <0| - accept -
|1> y0 SF - <1| - y0 SF
|2> y1 SF - <2| - y1 SF
|2> e <3| S e <2| <3| - S
|3> y2 SF - <3| - y2 SF
|6> y5 LF - <6| - y5 LF
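The "push m" / "pop and test" reading can be made concrete with a small illustration (my own, not from the original post): over words containing only operator symbols, axiom (B) reduces exactly the way a stack machine would.

```python
def simplify(word):
    """Reduce a word over D using <m| |n> = 1 if m == n, else 0.

    `word` is a sequence of ('bra', m) and ('ket', m) tokens standing
    for <m| and |m>.  Returns the reduced word (a list of tokens), an
    empty list meaning the word equals 1, or None meaning it equals 0.
    """
    stack = []
    for kind, m in word:
        if kind == 'ket' and stack and stack[-1][0] == 'bra':
            if stack[-1][1] == m:
                stack.pop()             # <m| |m> = 1: the pop succeeds
            else:
                return None             # <m| |n> = 0: pop-and-test fails
        else:
            stack.append((kind, m))     # <m| behaves exactly like "push m"
    return stack
```

A run of the transducer succeeds precisely when the operator symbols it accumulates simplify to the empty word.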
On the surface, parts of this grammar may look non-deterministic, such
as state LF. But the determinism can be brought out by substitution and
algebra, e.g.,
LF = |5> c SF + <6| S
= c |5> SF
+ w <6| <1| E + i <6| <2| E + x q <6| <4| E + o <6| <5| L
In the other cases, the non-determinism is inherent,
In equation SF:
|2> y1 SF vs. |2> e <3| S
In equation EF:
{ |7> y6 EF, |8> y7 EF } vs. { b <7| E, p y8 EF }
The first non-determinism corresponds to the ambiguity:
i E t (i E t S e S) vs. i E t (i E t S) e S
(the famous dangling-else ambiguity), and the second set corresponds
to cases such as:
E b (E b E) vs. (E b E) b E
(the precedence conflicts).
To handle the first case, we can impose a disambiguation rule, e.g.
that |2> e <3| S take precedence over |2> y1 SF.
To handle the latter set of conflicts, the same method can be used and
an action table can be created, which amounts to a 2 x 2 table filled by
whatever disambiguation rules exist. In the present case, the
disambiguation rules are none other than the precedence rules of the
various operators in the syntax.
The contexts corresponding to the first set of terms (the reduce
contexts) are respectively:
|7> y6 EF: E -> E b E .
|8> y7 EF: E -> u E .
The contexts corresponding to the second set of terms (the shift
contexts) are:
b <7| E: E -> E . b E
p y8 EF: E -> E . p
each table entry is filled out by combining a shift context and reduce
context and deciding which way to disambiguate. For example, the |7> vs b
entry corresponds to the context:
E -> E b E . b E
and is filled out with a shift action if b is right-associative, or a
reduce otherwise. The entire table is listed below:
Context |7> y6 EF |8> y7 EF
b <7| E E b E . b E u E . b E
p y8 EF E b E . p u E . p
Note that this treatment is similar in spirit to how precedence
grammars are handled, but substantially different in detail. I claim that
taking tables by Context vs. Lexical Item is actually the more natural way
to be approaching precedence relations and that the customary approach to
precedence grammars bungles up this insight. For one thing, you don't get
a plethora of extraneous and impossible combinations (e.g. a p vs. u entry
in a precedence table), but only the possible ones.
Another thing to note is that the conflicts are easy to resolve because
they bear such a direct relation to the original syntax, which in turn
arises from the fact that the transformation made from the syntax was so direct.
On a more theoretical side, also note that the general parsing problem
in light of the developments above can be cast as the problem of finding
the intersection of w Y* with the Translation Expression, start, where w
is the input word, and w Y* (in view of the commutation relations (A)) the
set of interleaves of actions in the input. The result of the parsing
process will be the determination of the set, F, where:
(w Y*) & (start) = w F
This set, F, is in general a context free set compactly representing the
set of all parses of w, but for deterministic parsers is a regular
expression representing one string (the parse sequence).
Now, after all that I'm sure you're dying to see actual code (ha,
and you thought it was all a bluff!). Well, it's below. When the sample
input, w:
w x b x b x d o
x q u x z
x q x z c
is given, the result will be that:
F = y9 y9 y6 y9 y6 y4 y9 y7 y3 y5 y9 y3 y5 y0
will be output. And yes, it did take less than an hour to write and
test. I challenge you to find any inconsistencies in it.
Stay tuned for future developments, there's more to come...
/* Must use an ANSI-C compiler to compile this program. */
#include <stdio.h>
#include <ctype.h>
#include <stdarg.h>
#include <stdlib.h> /* for exit() */
static unsigned ERRORS = 0;
void ERROR(const char *Format, ...) {
   va_list AP;
   va_start(AP, Format);
   vfprintf(stderr, Format, AP); fputc('\n', stderr);
   va_end(AP);
   if (++ERRORS >= 24)
      fprintf(stderr, "Too many errors. Aborting.\n"), exit(1);
}

void FATAL(const char *Format, ...) {
   va_list AP;
   va_start(AP, Format);
   vfprintf(stderr, Format, AP); fputc('\n', stderr);
   va_end(AP);
   exit(1);
}

typedef unsigned char Lexical;

Lexical Scan(void) {
   int Ch;
   do Ch = getchar(); while (isspace(Ch));
   switch (Ch) {
      case EOF: return 0;
      case 'w': case 'd': case 'i': case 't': case 'e':
      case 'q': case 'z': case 'o': case 'c': case 'b':
      case 'u': case 'p': case 'x': case 'm': case 'n': return Ch;
      default: return 'x'; /* Anything else is considered to be x. */
   }
}

typedef unsigned char Context;
#define STACK_MAX 0x10
Context Stack[STACK_MAX], *SP;

void PUSH(Context Tag, ...) {
   if (SP >= Stack + STACK_MAX) FATAL("Syntax too complex.");
   *SP++ = Tag;
}
void Parse(void) {
   Lexical X;
   SP = Stack, *SP++ = 0;
   X = Scan();
qS:
   switch (X) {
      case 'w': PUSH(1); X = Scan(); goto qE;
      case 'i': PUSH(2); X = Scan(); goto qE;
      case 'o': PUSH(5); X = Scan(); goto qL;
      default: ERROR("Missing 'x'."); goto HaveX;
      case 'x': X = Scan();
      HaveX:
         if (X != 'q') ERROR("Missing 'q'."); else X = Scan();
         PUSH(4);
         goto qE;
   }
qSF:
   switch (*--SP) {
      case 0:
         if (X != 0) ERROR("Extra symbols present in input.");
         return;
      case 1: printf(" y0"); goto qSF;
      case 2:
         /* Incorporates the precedence |2> e <3| S over |2> y1 SF */
         if (X == 'e') { X = Scan(), PUSH(3); goto qS; }
         printf(" y1");
         goto qSF;
      case 3: printf(" y2"); goto qSF;
      case 6: printf(" y5"); goto qLF;
      default: FATAL("Internal error (SF).");
   }
qE:
   switch (X) {
      case 'u': PUSH(8); X = Scan(); goto qE;
      case 'm': PUSH(9); X = Scan(); goto qE;
      default: ERROR("Missing 'x'."); goto HaveX2;
      case 'x':
         X = Scan();
      HaveX2:
         printf(" y9");
         goto qEF;
   }
qEF:
   /* The action table here is incorporated directly into the code,
      with the following precedences:
         E b E b E = (E b E) b E, E b E p = E b (E p)
         u E b E = (u E) b E, u E p = (u E) p
   */
   switch (*--SP) {
      case 7: /* |7> y6 EF has precedence only over b <7| E. */
         if (X == 'p') goto ShiftP;
         printf(" y6");
         goto qEF;
      case 8: /* |8> y7 EF has precedence. */
         printf(" y7");
         goto qEF;
   }
   switch (X) {
      case 'b': ShiftB: SP++, X = Scan(); PUSH(7); goto qE;
      case 'p': ShiftP: SP++, X = Scan(); printf(" y8"); goto qEF;
   }
   switch (*SP) {
      case 1:
         if (X != 'd') ERROR("Missing 'd' in w ... d.");
         else X = Scan();
         SP++; goto qS; /* re-enter the <1| context: |1> d <1| S */
      case 2:
         if (X != 't') ERROR("Missing 't' in i ... t.");
         else X = Scan();
         SP++; goto qS; /* re-enter the <2| context: |2> t <2| S */
      case 4:
         if (X != 'z') ERROR("Missing 'z' in x q ... z.");
         else X = Scan();
         printf(" y3");
         goto qSF;
      case 9:
         if (X != 'n') ERROR("Missing 'n' in m ... n.");
         else X = Scan();
         goto qEF;
      default: FATAL("Internal error (EF).");
   }
qL: printf(" y4");
qLF:
   if (SP[-1] == 5 && X == 'c') {
      SP--, X = Scan(); goto qSF;
   }
   PUSH(6); goto qS; /* LF = |5> c SF + <6| S */
}

void main(void) {
   Parse();
   putchar('\n');
   if (ERRORS > 0)
      fprintf(stderr, "%d error(s) present in input.\n", ERRORS);
}
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/94-02-079","timestamp":"2014-04-20T00:47:01Z","content_type":null,"content_length":"44536","record_id":"<urn:uuid:857712fd-48b3-4f53-a84d-afd949fcac67>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lecture 10: Mathematical Induction
5121 views, 1 rating - 01:19:47
More information about this course:
Licensed under Creative Commons Attribution ShareAlike 2.0:
A formal lecture explaining in depth what mathematical induction is and how to use it.
• What is mathematical induction and how is it used to prove conjectures?
• What is the Josephus problem and how do you solve it and know where to sit to avoid execution?
• If every other person around a circle is killed until just one person remains, how can you determine who will be saved using recursion?
• How do you prove the Josephus problem using induction?
• How do you sum up the nodes of a binary tree?
• What are some example induction conjectures in which the base cases are not true but all other values work?
• How do you prove that a binary tree of height n has less than or equal to 2^n leaves using induction?
• What is the difference between weak and strong induction?
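As a concrete companion to the Josephus question above (my own sketch, not material from the lecture itself): with every second person eliminated, the survivor's seat satisfies the classic recursion J(1) = 1, J(2n) = 2J(n) - 1, J(2n + 1) = 2J(n) + 1, which solves to J(2^m + l) = 2l + 1.

```python
def josephus_recursive(n):
    """Survivor's seat (1-indexed) when every 2nd person in a circle is killed."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus_recursive(n // 2) - 1
    return 2 * josephus_recursive(n // 2) + 1

def josephus_closed(n):
    """Closed form: write n = 2**m + l with 0 <= l < 2**m; survivor is 2*l + 1."""
    m = 1 << (n.bit_length() - 1)   # largest power of two <= n
    return 2 * (n - m) + 1
```

For the famous n = 41 of the original story, both give seat 19.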
An interesting problem called the Josephus problem is explained and analyzed in the first part of this lesson. Recursion turns out to be a central part of this analysis, and induction is used to
prove that a conjecture is true. It turns out to be a really cool solution and a cool inductive proof. Then, more problems are done using induction. This is a fun, very useful Discrete Math | {"url":"http://mathvids.com/topic/20-discrete-math/major_subtopic/72/lesson/620-lecture-10-mathematical-induction/mathhelp","timestamp":"2014-04-18T18:24:41Z","content_type":null,"content_length":"80259","record_id":"<urn:uuid:3c2dbaf7-b5e1-47a0-8a86-53f8fc8c485a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portfolio Optimization Algorithm Showdown: GTAA Edition - QUSMA
I was revisiting the choice of portfolio optimization algorithm for the GTAA portion of my portfolio and thought it was an excellent opportunity for another post. The portfolio usually contains 5
assets (though at times it may choose fewer than 5) picked from a universe of 17 ETFs and mutual funds, which are picked by relative and absolute momentum. The specifics are irrelevant to this post
as we’ll be looking exclusively at portfolio optimization techniques applied after the asset selection choices have been made.
Tactical asset allocation portfolios present different challenges from optimizing portfolios of stocks, or permanently diversified portfolios, because the mix of asset classes is extremely important
and can vary significantly through time. Especially when using methods that weigh directly on volatility, bonds tend to have very large weights. During the last couple of decades this has been
working great due to steadily dropping yields, but it may turn out to be dangerous going forward. I aim to test a wide array of approaches, from the crude equal weights, to the trendy risk parity,
and the very fresh minimum correlation algorithm. Standard mean-variance optimization is out of the question because of its many and well-known problems, but mainly because forecasting returns is an
exercise in futility.
The algorithms
The only restriction on the weights is no shorting; there are no minimum or maximum values.
• Risk Parity

Risk parity (often confused with equal risk contribution) is essentially weighting proportional to the inverse of volatility (as measured by the 120-day standard deviation of returns, in this case). I will be using an unlevered version of the approach. I must admit I am still somewhat skeptical of the value of the risk parity approach for the bond-related reasons mentioned above.
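The unlevered inverse-volatility weighting can be sketched as follows (my own illustration; the post uses a 120-day standard deviation as the volatility input):

```python
def risk_parity_weights(vols):
    """Unlevered risk-parity weights: proportional to 1/volatility, summing to 1."""
    inverse = [1.0 / v for v in vols]
    total = sum(inverse)
    return [w / total for w in inverse]

# Annualised vols of 10%, 20%, 20% give weights of roughly [0.5, 0.25, 0.25]:
# the low-volatility asset (often bonds) dominates, which is exactly the
# bond-concentration concern raised above.
```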
Minimum volatility portfolios take into account the covariance matrix and have weights that minimize the portfolio’s expected volatility. This approach has been quite successful in optimizing equity
portfolios, partly because it indirectly exploits the low volatility anomaly. You’ll need a numerical optimization algorithm to solve for the minimum volatility portfolio.
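A minimal long-only minimum-volatility solver, using SciPy's SLSQP as a stand-in for whatever numerical optimizer the reader prefers (the post itself uses MATLAB); the covariance matrix is made up for the example:

```python
import numpy as np
from scipy.optimize import minimize

def min_vol_weights(cov):
    # Minimize portfolio variance w' * cov * w subject to
    # full investment (sum(w) = 1) and no shorting (0 <= w_i <= 1).
    n = cov.shape[0]
    res = minimize(
        lambda w: w @ cov @ w,
        x0=np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        options={"ftol": 1e-12},
    )
    return res.x

# Two uncorrelated assets with 20% and 10% volatility; for a diagonal
# covariance matrix the analytic answer is inverse-variance weights.
cov = np.diag([0.04, 0.01])
w = min_vol_weights(cov)  # ~[0.2, 0.8]
```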
A note on shrinkage (not that kind of shrinkage!): one issue with algorithms that make use of the covariance matrix is estimation error. The number of covariances that must be estimated grows quadratically with the number of assets in the portfolio, and these covariances are naturally not constant through time. The errors in the estimation of these covariances have negative effects
further down the road when we calculate the desired weightings. A partial solution to this problem is to “shrink” the covariance matrix towards a “target matrix”. For more on the topic of shrinkage,
as well as a description of the shrinkage approach I use here, see Honey, I Shrunk the Sample Covariance Matrix by Ledoit & Wolf.
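Mechanically, shrinkage is just a convex blend of the sample matrix and a target. The sketch below uses a diagonal target and a fixed shrinkage intensity purely for illustration; Ledoit-Wolf use a constant-correlation target and estimate the intensity from the data:

```python
import numpy as np

def shrink_covariance(sample_cov, delta=0.2):
    # Blend the sample covariance with a structured target:
    #   sigma_shrunk = delta * target + (1 - delta) * sample
    # Target here: same variances, zero covariances.
    target = np.diag(np.diag(sample_cov))
    return delta * target + (1.0 - delta) * sample_cov

S = np.array([[0.04, 0.01],
              [0.01, 0.09]])
S_shrunk = shrink_covariance(S, delta=0.25)
# Variances are unchanged; off-diagonal entries are pulled toward zero.
```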
• Equal Risk Contribution (ERC)
The ERC approach is sort of an advanced version of risk parity that takes into account the covariance matrix of the assets’ returns (here‘s a quick comparison between the two). This difference
results in significant complications when it comes to calculating weights, as you need to use a numerical optimization algorithm to minimize the sum over all pairs (i, j) of ( x[i](Σx)[i] − x[j](Σx)[j] )²,
subject to the standard restrictions on the weights, where x[i] is the weight of the i^th asset, and (Σx)[i] denotes the i^th element of the vector resulting from the product of Σ (the covariance matrix)
and x (the weight vector). To do this I use MATLAB’s fmincon SQP algorithm.
For more on ERC, a good overview is On the Properties of Equally-Weighted Risk Contributions Portfolios by Maillard, et. al.
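An ERC sketch with SciPy's SLSQP standing in for fmincon; the objective is the sum of squared pairwise differences between risk contributions, following the Maillard et al. formulation (the covariance matrix is invented for the example):

```python
import numpy as np
from scipy.optimize import minimize

def erc_weights(cov):
    # Equalize the risk contributions x_i * (cov @ x)_i across assets
    # by minimizing the sum of squared pairwise differences, subject
    # to full investment and no shorting.
    n = cov.shape[0]

    def objective(x):
        rc = x * (cov @ x)  # risk contribution of each asset
        return np.sum((rc[:, None] - rc[None, :]) ** 2)

    res = minimize(
        objective,
        x0=np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
        options={"ftol": 1e-12, "maxiter": 500},
    )
    return res.x

# For uncorrelated assets ERC reduces to inverse-volatility weights:
# with vols of 20% and 10% the answer is ~[1/3, 2/3].
w = erc_weights(np.diag([0.04, 0.01]))
```

The uncorrelated case doubles as a sanity check, since it is the one situation where ERC and naive risk parity should agree.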
• Minimum Correlation Algorithm (MCA)
A new optimization algorithm, developed by David Varadi, Michael Kapler, and Corey Rittenhouse. The main object of the MCA approach is to under-weigh assets with high correlations and vice versa,
though it’s a bit more complicated than just weighting by the inverse of assets’ average correlation. If you’re interested in the specifics, check out the paper: The Minimum Correlation Algorithm: A
Practical Diversification Tool.
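To give a flavor of the idea, here is a deliberately cruder scheme than the actual MCA (which ranks and transforms the correlations; see the paper for the real algorithm): weighting by the inverse of each asset's average correlation to the rest of the portfolio.

```python
import numpy as np

def inverse_avg_correlation_weights(corr):
    # Crude stand-in for MCA, for illustration only: weight each asset
    # by the inverse of its average correlation with the other assets.
    # The "1 +" shift keeps weights positive when average correlations
    # are negative.
    n = corr.shape[0]
    avg_corr = (corr.sum(axis=1) - 1.0) / (n - 1)  # drop self-correlation
    inv = 1.0 / (1.0 + avg_corr)
    return inv / inv.sum()

# Asset 0 is weakly correlated with the other two, which move together,
# so it should receive the largest weight.
corr = np.array([[1.0, 0.1, 0.1],
                 [0.1, 1.0, 0.8],
                 [0.1, 0.8, 1.0]])
w = inverse_avg_correlation_weights(corr)
```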
The results
Moving on to the results, it quickly becomes clear that there isn’t much variation between the approaches. Most of the returns and risk management are driven by the asset selection process, leaving
little room for the optimization algorithms to improve or screw up the results.
Predictably, the “crude” approaches such as equal weights or the inverse of maximum drawdown don’t do all that well. Not terribly by any means, but going up in complexity does seem to have some
advantages. What stands out is that the minimum correlation algorithm outperforms the rest in both risk-adjusted return metrics I like to use.
Risk parity, despite its popularity, wallows in mediocrity in this test; its only redeeming feature being a bit of positive skew which is always nice to have.
The minimum volatility weights are an interesting case. They do what it says on the box: minimize volatility. Returns suffer consequently, but are excellent on a volatility-adjusted basis. On the
other hand, the performance in terms of maximum drawdown is terrible. Some interesting features to note: the worst loss for the minimum volatility weights is by far the lowest of the pack: the worst
day in over 15 years was -2.91%. This is accompanied by the lowest average time to recover from drawdowns, and an obscene (though also rather unimportant) longest winning streak of 22 days.
Finally, equal risk contribution weights almost match the performance of minimum volatility in terms of CAGR / St.Dev. while also giving us a lower drawdown. ERC also comes quite close to MCA; I
would say it is the second-best approach on offer here.
A look at the equity curves below shows just how similar most of the allocations are. The results could very well be due to luck and not a superior algorithm.
To investigate further, I have divided the equity curves into three parts: 1996 – 2001, 2002-2007, and 2008-2012. Consistent results across these sub-periods would increase my confidence that the
best algorithms actually provide value and weren’t just lucky.
As expected there is significant variation in results between sub-periods. However, I believe these numbers solidify the value of the minimum correlation approach. If we compare it to its closest
rival, ERC, minimum correlation comes out ahead in 2 out of 3 periods in terms of volatility-adjusted returns, and in 3 out of 3 periods in terms of drawdown-adjusted returns.
The main lesson here is that as long as your asset selection process and money/risk management are good, it’s surprisingly tough to seriously screw up the results by using a bad portfolio
optimization approach. Nonetheless I was happily surprised to see minimum correlation beat out the other, more traditional, approaches, even though the improvement is marginal.
I really like the way you write and conduct your analysis. Please keep up your work. Can I ask which software you use to conduct your backtests?
Thanks a lot. These tests were done in MATLAB.
Very nice post! How often do you adjust positions, and what’s the effect of including transaction costs and slippage? Since some of the algos require more transactions, their cost might be a drag on performance.
Thanks Christian. The number of transactions is (almost) completely independent of the portfolio optimization algorithm. What drives trades are the asset allocation choices, and these happen
before I get to the stage where I optimize the weights.
I usually adjust positions weekly, unless there are no changes in the asset mix and the weight adjustments are too small to warrant paying commissions.
Another thing is that all the algos have one or more parameters. How have you chosen the parameters for this test? Are they optimized in any way? Wrong parameters might be a reason for future
performance failure.
Ah, I should of course have mentioned something about this in the post.
If we ignore the shrinkage factor (which has a tiny effect), the only parameter to change is the length of the period of data that I use to calculate volatility, max drawdown, the
covariance matrix, etc.
These values have been subject to optimization…I might write a post in the future about it, but in most cases the difference between the best and worst results within a “reasonable” range
of values is very small and statistically insignificant. In most cases I simply use 120 days, which is also the period I use to calculate momentum.
I’ve done some tests with these portfolio algorithms. Minimum variance, risk parity and now minimum correlation…all show better risk-adjusted performance because they over-weigh the best asset class of the last decades: bonds. In my simulation with 5 asset classes (US equity, international equity, 10-year Treasuries, REITs and commodities) the MCA algorithm suggests a weight of 73% on 10y Treasuries. My question is about the practical utility of these algorithms for real investment. How is it possible today to put 73% in long Treasuries?
What do you think about this observation, and what kind of solutions (or compromises) do you propose?
You are of course correct that most of these algorithms tend to over-weigh fixed income, which has been extremely beneficial during the last few decades of decreasing interest rates.
If we take them in the context of a permanent portfolio that always allocates to all asset classes, then I would consider the use of risk parity/minimum volatility/MCA quite dangerous going
forward. There is little upside (a good post at marketsci over how much upside there’s left in regards to interest rates: http://marketsci.wordpress.com/2012/10/16/
follow-up-to-timely-portfolios-what-if-we-go-to-zero/ ) and a LOT of risk if things go badly.
However, in the context of a relative & absolute momentum-based tactical allocation approach, I think there is much less of a problem. If bonds start doing badly, there simply won’t be any
capital allocated to them. Thus the issue of their weighting becomes irrelevant. Now it could be the case that these algorithms only do an outstanding job when bonds are included in the portfolio
(which can be tested by running the same system but without any of the bond funds). But that would simply lead to somewhat sub-optimal returns and not outright catastrophe.
Yes, I think the difference between permanent and tactical portfolios exists and is very important. But at the same time I don’t think that this difference makes the question irrelevant.
It’s unwise to put 73% into bonds (as the algo suggests), even in the context of tactical strategies. We can have a rapid move in that 73% asset class, with the paradoxical consequence that the same algo whose aim is to reduce risk becomes the principal cause of higher (even catastrophic) risk.
I think we need to find a better way and incorporate this practical consideration into the algo.
You can always institute some minimum/maximum “sanity” limits on the weights, just as you would in standard mean-variance optimization. It would probably hurt performance in the long run,
but let you sleep better at night…
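One simple way to implement such a maximum-weight cap is to clip and redistribute; this is one redistribution scheme among several, shown purely as a sketch (the input weights are illustrative):

```python
import numpy as np

def cap_weights(w, cap=1.0 / 3.0):
    # Clip weights at `cap` and hand the excess to assets that still
    # have room, proportionally to their original weights. Iterate in
    # case the redistribution pushes more assets over the cap.
    w = np.asarray(w, dtype=float)
    capped = np.minimum(w, cap)
    for _ in range(len(w)):
        excess = w.sum() - capped.sum()
        room = cap - capped
        free = room > 1e-12
        if excess <= 1e-12 or not free.any():
            break
        share = np.where(free, w, 0.0)
        capped = np.minimum(capped + excess * share / share.sum(), cap)
    return capped

# A 73%-style allocation forced under a 1/3 cap; with three assets and
# a 1/3 cap the only feasible result is equal weights.
w = cap_weights([0.73, 0.15, 0.12])
```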
So your opinion is to always follow the model, is that right?
Well, it depends. Personally I don’t use MCA but risk parity, so I don’t get weights THAT extreme.
Fundamentally I would say that it is a question of whether you believe your backtest sample is (at least roughly) representative of what you can expect to see in the future. Perhaps
with <20 years of data it's not…in which case it could be a good idea to impose limits on the maximum weights that your optimization algorithm spits out.
Maybe the next Black Monday style event will be in Treasuries…nobody knows of course, but there are certainly good arguments for such limits.
I just ran a quick test with a 33% max weight on the MCA algo. The end results are very similar…the max limit has worse volatility-adjusted performance, but better drawdown-adjusted
performance. The differences are quite small though. So my opinion would be to impose maximum weights that you're comfortable with.
I agree with you on “<20 years…”.
Reading what you wrote, I can’t help asking you a question.
If the numbers seem to show the superiority of MC (and equal risk contribution and minimum variance also seem better), why is your choice risk parity? By risk parity you mean weights proportional to the inverse of the standard deviation, is that correct? What are the reasons that convinced you not to use the best algo? I agree that numbers are not everything and that it’s important to think beyond them. I’m interested to hear your ideas about this not-so-simple matter.
A very reasonable question. The answer is that MC is very new, and this is the first and still only test I have used it in. I’m just not confident enough in it yet. I plan to run a
ton of tests with it, optimizing different sets of assets (I’m particularly interested in how it would handle a portfolio of stocks for example), seeing how it performs during sharp
volatility increases, etc. If the results are convincing, I’ll probably switch to MC (or even better, an improved version of it if I can find something to improve on).
I understand your view. Please let your readers know what you discover in your further analysis.
BTW, did you add commissions to your analysis in this post? Different algos have different turnover (number of trades). Some theoretical advantage of an algo could be due to a smaller number of trades. I think it’s a good idea to use a conservative commissions policy for a real comparison between different optimization algos. We have to penalize algos which trade more.
MC is new but ERC and MV have been in the arena for a long time. I don’t know, I can’t form a clear picture of which algo convinces me more. More thought and testing are necessary. | {"url":"http://qusma.com/2012/10/08/portfolio-optimization-algorithm-battle-tactical-asset-allocation-edition/","timestamp":"2014-04-19T14:28:56Z","content_type":null,"content_length":"69921","record_id":"<urn:uuid:be2247f6-5293-42c1-9db4-4541afa512a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluids eBook: Hydrostatic Force -Curved Surface
For a curved surface AB, as shown in the figure, the magnitude and the line of action of the resultant force F[R] exerted on the surface can be best derived by splitting the force into its horizontal
and vertical components.
The x-component of the resultant force F[Rx] is the normal force acting on the vertical projection of the curved surface (i.e., surface AE). This force F[Rx] passes through the center of pressure for
the projected area AE.
The y-component of the resultant force F[Ry] is the weight of the liquid directly above the curved surface (i.e., volume ABCD). Note that this volume can be either real or imaginary. In this case,
the volume is real since the liquid actually occupies this volume. Another example will be given later to illustrate an imaginary volume. This force, F[Ry], passes through the center of gravity of
volume ABCD. If the gravitational acceleration is assumed to be constant and the fluid is incompressible, then the center of gravity is the same as the centroid of the fluid volume.
The magnitude of the resultant force is then determined by
F[R] = (F[Rx]^2 + F[Ry]^2)^0.5
By inspection, it is noted that pressure forces are always perpendicular to the surface AB (i.e., the normal stresses). Since all points on a circle have a normal passing through the center of a
circle, the resultant force F[R] has to pass through point E. The direction of the resultant force is given by
θ = tan^-1 (F[Ry] / F[Rx])
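A worked numeric sketch of these formulas; the geometry and numbers are invented for illustration (a quarter-circle surface of radius R and width b, with its top edge at the free surface, in water):

```python
import math

# Illustrative values only (not from the text):
R = 2.0         # radius of the quarter-circle surface, m
b = 1.0         # width into the page, m
gamma = 9810.0  # specific weight of water, N/m^3

# Horizontal component: force on the vertical projection (height R),
# i.e. pressure at the projection's centroid depth (R/2) times its area.
F_Rx = gamma * (R / 2.0) * (R * b)

# Vertical component: weight of the liquid volume above the surface,
# here a quarter cylinder: (pi * R^2 / 4) * b.
F_Ry = gamma * (math.pi * R**2 / 4.0) * b

F_R = math.sqrt(F_Rx**2 + F_Ry**2)            # magnitude of the resultant
theta = math.degrees(math.atan2(F_Ry, F_Rx))  # direction from horizontal
```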
For non-circular shapes, the resultant angle is not used and the horizontal and vertical gravity center may not align with the actual surface. Again, it is easiest to keep the horizontal and vertical
components separate in all calculations. | {"url":"https://ecourses.ou.edu/cgi-bin/ebook.cgi?doc=&topic=fl&chap_sec=02.4&page=theory","timestamp":"2014-04-16T13:03:19Z","content_type":null,"content_length":"12365","record_id":"<urn:uuid:d61ae824-d666-46af-8f7c-135ec9f966f0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |