The importance of being linear
Next: The controller equations Up: number11 Previous: Introduction
While there is a good deal of mathematics behind adaptive controllers, it's not particularly hard mathematics. The reason for this is that traditionally most controllers are linear. We can take
advantage of this linearity to make the equations relatively easy to manipulate. Let's first consider what it means for a system to be linear. Essentially, linearity means that the system obeys a
superposition principle. Suppose that input u1(t) produces the response y1(t) and input u2(t) produces the response y2(t). If it is true that,

    input a u1(t) + b u2(t) produces the response a y1(t) + b y2(t)    (1)

then the system is said to be linear. Many familiar systems have this property. It is equation (1) that allows us to decompose a periodic signal into frequency bands and calculate a power spectrum.
Potential fields (electric, magnetic and gravitational) are also linear. The main reason why linear systems are so familiar is not that they are so ubiquitous (in fact, one author has pointed out
that dividing nature into linear and nonlinear systems is like having ``nonelephant'' biology as a special subfield, and missing the fact that most systems are not linear), but that equation (1)
makes linear systems solvable. Many nonlinear systems are handled by making them approximately linear, e.g.:

    nonlinear system = linear system + a little bit    (2)

The work then primarily concentrates on how small that little bit actually is and under what circumstances it stays small. For many nonlinear systems (2) is a practical approach that gives useful
answers. Systems that can be analyzed this way generally get described with phrases like ``small amplitude''; a dead giveaway that something like (2) was used. A simple example of this is the
ordinary pendulum. A pendulum is actually a nonlinear system, but for small amplitude excursions (say 10 degrees), the nonlinear effects are extremely small and can be ignored for typical
applications. Some nonlinear systems cannot be broken down to something like (2) without completely missing the real solutions. Any system that has chaotic behavior is like this: the chaos comes from
the nonlinearity; there are no chaotic systems that are linear.
Many methods have been invented for dealing with linear systems; one that we will find useful here is the Laplace transform. For the system response f(t), the Laplace transform is

    F(s) = integral from 0 to infinity of f(t) e^(-st) dt    (3)

(strictly speaking this applies only for functions defined for t >= 0). The Laplace transform has two properties that make it especially useful:
• Like the Fourier transform, it converts a linear differential equation into a polynomial. (You might not have realized this, but we can transform - Fourier or Laplace - an equation, not just a
stream of data.)
• Unlike the Fourier transform, it treats transients efficiently. The Laplace transform of a system's impulse response is, in fact, the system's transfer function.
It's a bit tedious to do the integrations required to do either a forward or inverse transform by hand, so the Laplace transform is often done with the help of symbolic integration software or the use
of tables in a handbook. Table 1 gives the Laplace transform of several useful mathematical functions. Combining this with some general transformation properties given in Table 2 gives us the
ability to determine the Laplace transform of a large number of useful functions without the need to explicitly solve (3). The forward Laplace transform is not too difficult to do numerically, but
calculating the inverse transform numerically leads to problems with the numerical stability of the calculation; this is not a problem with software that is capable of doing the inverse transform
symbolically.
We will use the Laplace transform to work out how the controller will respond to its inputs. We need to be able to do this because there is no unique way to set up an adaptive controller - a motor
speed controller that uses a shaft angle encoder will be quite different from one that uses a tachometer.
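As an illustration of the claim that the forward transform is easy to compute numerically, here is a minimal sketch (my own; the function name, truncation point and step size are choices not taken from the article) that integrates (3) by the trapezoid rule and checks the result against the handbook entry for f(t) = e^(-at), whose transform is 1/(s + a):

```python
import math

def laplace_numeric(f, s, T=20.0, dt=1e-3):
    """Approximate F(s) = integral_0^T f(t) e^(-s t) dt by the trapezoid rule.

    T must be chosen large enough that the integrand has decayed to ~0,
    which stands in for the infinite upper limit in (3).
    """
    n = int(T / dt)
    # endpoints get weight 1/2 in the trapezoid rule
    total = 0.5 * (f(0.0) + f(n * dt) * math.exp(-s * n * dt))
    for k in range(1, n):
        t = k * dt
        total += f(t) * math.exp(-s * t)
    return total * dt

# Handbook check: f(t) = e^(-2t)  ->  F(s) = 1 / (s + 2), so F(1) should be ~1/3
F = laplace_numeric(lambda t: math.exp(-2.0 * t), s=1.0)
print(F)   # close to 1/3
```

The inverse direction has no comparably simple recipe, which is exactly the numerical-stability problem mentioned above.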
Skip Carter 2008-08-20
Are the points I, L, J, and G coplanar? What is the intersection of KJ¯ and plane IJGH?
What is the intersection of AEFB and CDEA?
What is the intersection of the plane ABGH with the plane ADEH? Are the points A, D, F, and G coplanar?
If m AB¯ is 3.2 cm, find m AC¯.
If BD¯ ≅ DC¯, then which of the following statements is true?
Find m PS¯ and m SR¯, if m PQ¯ and m QR¯ are 2.3 m and 1.5 m respectively.
How many lines can be drawn in a plane passing through a given point?
There are 5 points marked on a plane, out of which 3 are collinear. How many lines can be drawn joining these points if a line shall contain at least 2 marked points?
Choose the correct statement(s).
I. The intersection of a line and a plane is a point.
II. The intersection of two planes is a line.
III. Three points on the same plane are always collinear.
What is the maximum number of points of intersection of two distinct lines in a plane?
There are two lines on a plane with 3 points marked on each line. Find the total number of additional lines that can be drawn joining these points if a line has to contain a minimum of two marked points.
Select the correct statement(s).
1. A line is a set of points that extends in two opposite directions without end.
2. Lines of different sizes have different thickness.
3. A line is of finite length.
4. A line is a straight one-dimensional figure having no thickness.
Which of the following points is not collinear with respect to the other points?
Are the points I, L, J, and G coplanar? What is the intersection of FG¯ and plane IJGH?
A line in a plane separates the plane into
Identify the correct statement(s).
1. A space is the set of all points.
2. A line is a set of points that extends in two directions without end.
3. A point is a location in a plane with fixed dimensions.
A(1, 1) and B(4, 4) are 2 distinct points. How many lines can be drawn passing through both these points?
What is the intersection of BFDC and AEDC?
Choose the incorrect statement(s).
1. A plane is a flat surface that extends in all directions without end.
2. A plane has two dimensions.
3. All the points and lines that lie in the same plane are coplanar.
4. Two planes intersect to form a plane.
Select the correct statement(s).
1. A plane has infinite thickness.
2. Two planes will never intersect.
3. A pyramid is a plane figure.
4. All points of a circle lie on a single plane, whereas all points of a sphere lie on different planes.
Select the correct statement(s).
1. The ends of a line are points.
2. A straight line is a line which lies evenly with the points on itself.
3. A plane surface is a surface which lies evenly with the straight lines on itself.
If two planes intersect, then they intersect in a _____.
Which of the following points are non-collinear?
The points A(- 2, - 3), B(1, y) and C(4, 1) are collinear. Find y.
Choose the correct statement(s).
1. Points F, A, L, I, C, G, E, O, B are coplanar.
2. Points G, E, O, B are collinear.
3. O, A, and B are coplanar.
4. Points F, A, L, I, C are coplanar.
What is the intersection of the plane ABCD with the plane BCFG? Are the points A, D, F, and G coplanar?
How many lines can be drawn connecting two points, if there are 5 points and no three of them are collinear?
What are the possible number of intersections between two lines?
Four parallel lines are intersected by another four parallel lines. Find the number of points of intersection.
Which of the following statements best describes the points A, B, C?
I. The points A, B, C are collinear.
II. The points A, B, C are non-collinear.
III. The points A, B, C are non-coplanar.
Points and lines that lie in the same plane are called
Find the number of regions into which two parallel lines divide a plane.
Which points are collinear in the figure shown?
Select the correct statement(s).
1. A point is that which has no part.
2. A line is breadthless length.
3. A surface is that which has length and breadth only.
Identify three collinear points in the figure shown.
What is the least number of points needed to draw the figure?
From the figure:
1. Identify one set of collinear points.
2. Name any one plane.
3. Find the point of intersection of AD↔ and BE↔.
Identify the number of planes and the number of line segments in the figure.
Where do the two planes intersect?
Identify which of the following can be best represented as a point.
Identify which of these can be best represented by a line.
Identify which of the following can be best represented by a plane.
Identify the graph representing the coordinate plane containing the points A(- 2, - 3), B(1, - 1) and C(4, 1). The points A, B and C are collinear and D is a point that does not lie on the line.
Refer to the figure to answer the questions.
(i) Are the points B, C and D collinear?
(ii) Name any one plane.
Find the maximum number of possible names for the plane shown.
In how many ways can the line be represented?
Refer to the figure to answer the questions.
(i) Are points K, E and F collinear?
(ii) Are points A, L, D and B coplanar?
The points on the map represent the important places of Sicilia, Italy. Use the map to select the choices that are collinear.
Use the figure to answer the questions.
1. Name two planes that do not contain R.
2. Are the points R, Q and T coplanar?
What type of geometric intersection do you find in the given figure?
What kind of geometric intersection does the photograph resemble?
What kind of geometric intersection does the photograph indicate?
Name the two lines that intersect at point M.
Which of the following is true for the figure?
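One of the questions above (find y so that A(- 2, - 3), B(1, y) and C(4, 1) are collinear) can be checked mechanically. The helper below is my own illustration, not part of the worksheet: three points are collinear exactly when the cross product of the vectors AB and AC is zero.

```python
def collinear(a, b, c):
    """True when points a, b, c lie on one line (zero cross product of AB and AC)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) == 0

# The line through A(-2, -3) and C(4, 1) has slope 4/6 = 2/3,
# so at x = 1 we get y = -3 + (1 - (-2)) * 2/3 = -1.
print(collinear((-2, -3), (1, -1), (4, 1)))   # True
```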
Topology/Free group and presentation of a group
From Wikibooks, open books for an open world
Free monoid spanned by a set
Let $V$ be a vector space and $v_1,\ldots,v_n$ be a basis of $V$. Given any vector space $W$ and any elements $w_1,\ldots,w_n \in W$, there is a linear transformation $\varphi : V \rightarrow W$ such
that $\forall i \in \{1,\ldots,n\}, \, \varphi(v_i) = w_i$. One could say that this happens because the elements $v_1,\ldots,v_n$ of a basis are not "related" to each other (formally, they are
linearly independent). Indeed, if, for example, we had the relation $v_1 = \lambda v_2$ for some scalar $\lambda$ (and then $v_1,\ldots,v_n$ would not be linearly independent), then the linear
transformation $\varphi$ could not exist.
Let us consider a similar problem with groups: given a group $G$ spanned by a set $X = \{x_i : i \in I\} \subseteq G$ and given any group $H$ and any set $Y = \{y_i : i \in I\} \subseteq H$, does
there always exist a group morphism $\varphi : G \rightarrow H$ such that $\forall i \in I, \, \varphi(x_i) = y_i$? The answer is no. For example, consider the group $G = \mathbb{Z}_n = \mathbb{Z}/n\mathbb{Z}$,
which is spanned by the set $X = \{1\}$, the group $H = \mathbb{R}$ (with the addition operation) and the set $Y = \{2\}$. If there exists a group morphism $\varphi : \mathbb{Z}_n \rightarrow \mathbb{R}$
such that $\varphi(1) = 2$, then $2n = n \varphi(1) = \varphi(n \, 1) = \varphi(0) = 0$, which is impossible. But if instead we had chosen $G = \mathbb{Z}$, then such a group
morphism does exist and is given by $\varphi(t) = 2t$. Indeed, given any group $H$ and any $y \in H$, we have the group morphism $\varphi : \mathbb{Z} \rightarrow H$ defined by $\varphi(t) = y^t$
(in multiplicative notation) that verifies $\varphi(1) = y$. In a way, we can think that this happens because the elements of the set $X = \{1\} \subseteq \mathbb{Z}$ (that spans $\mathbb{Z}$)
don't verify relations like $nx = 1$ (like $\mathbb{Z}_n$) or $xy = yx$. So, it seems that $\mathbb{Z}$ is a group more "free" than $\mathbb{Z}_n$.
Our goal in this section will be, given a set $X$, to build a group spanned by $X$ that is as "free" as possible, in the sense that it doesn't have to obey relations like $x^n = 1$ or $xy = yx$.
To do so, we begin by constructing a "free" monoid (in the same sense). Informally, this monoid will be the monoid of the words written with the letters of the alphabet $X$, where
the identity will be the word with no letters (the "empty word"), and the binary operation of the monoid will be concatenation of words. The notation $x_1 \ldots x_n$ that we will use for the elements
of this monoid reflects the idea that the elements of this monoid are the words $x_1 \ldots x_n$ where $x_1,\ldots,x_n$ are letters of the alphabet $X$. Here is the definition of this monoid.
Definition Let $X$ be a set.
1. We denote the $n$-tuples $(x_1,\ldots,x_n)$ with $x_i \in X$ and $n \in \mathbb{N}$ by $x_1 \ldots x_n$.
2. We denote $()$, that is $(x_1,\ldots,x_n)$ with $n = 0$, by $1$.
3. We denote by $FM(X)$ the set $\{x_1 \ldots x_n : n \in \mathbb{N}, x_i \in X\}$.
4. We define in $FM(X)$ the concatenation operation $*$ by $x_1\ldots x_m * y_1\ldots y_n = x_1\ldots x_m y_1\ldots y_n$.
Next we prove that this structure is indeed a monoid. It's an easy result to prove: we need to show associativity of $*$ and that $1*x=x*1=x$.
Proposition $(FM(X),*)$ is a monoid with identity $1$.
Proof The operation $*$ is associative because, given any $x_1 \ldots x_m, y_1 \ldots y_n, z_1 \ldots z_p \in FM(X)$, we have
$(x_1 \ldots x_m * y_1 \ldots y_n) * z_1 \ldots z_p$
$= x_1 \ldots x_m y_1 \ldots y_n * z_1 \ldots z_p$
$= x_1 \ldots x_m y_1 \ldots y_n z_1 \ldots z_p$
$= x_1 \ldots x_m * (y_1 \ldots y_n z_1 \ldots z_p)$
$= x_1 \ldots x_m * (y_1 \ldots y_n * z_1 \ldots z_p)$.
It's clear that $1$ is the identity of $(FM(X),*)$, as $1 * x_1 \ldots x_n = x_1 \ldots x_n = x_1 \ldots x_n * 1$ by the definition of $1$ and $*$. $\square$
Following the idea that the monoid $(FM(X),*)$ is the most "free" monoid spanned by $X$, we will call it the free monoid spanned by $X$.
Definition Let $X$ be a set. We denote the free monoid spanned by $X$ by $(FM(X),*)$.
1. Let $X = \{x\}$. Then $FM(X) = \{1,x,xx,xxx,\ldots\}$ and, for example, $xx * xxx = xxxxx$.
2. Let $X = \{x,y,z\}$. Then $1,x,y,z,xxx,yxz,xyzzz \in FM(X)$ and, for example, $xxx * yxz = xxxyxz$.
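The definition can be mirrored directly in code. The sketch below is my own illustration (the representation of words as Python tuples and the name `star` are assumptions, not from the text): elements of $FM(X)$ are tuples of letters, the empty tuple plays the role of $1$, and $*$ is tuple concatenation.

```python
# Elements of FM(X) as tuples of letters; the empty tuple is the identity 1.
one = ()

def star(u, v):
    """Concatenation, the monoid operation * on FM(X)."""
    return u + v

u = ('x', 'x')           # the word xx
v = ('x', 'x', 'x')      # the word xxx
print(star(u, v))                                   # the word xxxxx
print(star(one, u) == u == star(u, one))            # identity law, True
print(star(star(u, v), u) == star(u, star(v, u)))   # associativity on a sample, True
```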
Free group spanned by a set
Now let us construct the most "free" group spanned by a set $X$. Informally, what we will do is insert into the monoid $FM(X)$ the inverse elements that are missing for it to be a group. In a
more precise way, we will take a set $\bar X$ equipotent to $X$, choose a bijection from $X$ to $\bar X$ and in this way set up an "association" between the elements of $X$ and the elements of $\bar X$.
Then we regard $x_1 \ldots x_n \in FM(X)$ (with $x_1,\ldots,x_n \in X$) as having the inverse element $\overline{x_n} \ldots \overline{x_1}$ (with $\overline{x_1},\ldots,\overline{x_n} \in \bar X$),
where each $x_i \in X$ is associated with $\overline{x_i} \in \bar X$. Let us note that the order of the elements in $\overline{x_n} \ldots \overline{x_1}$ is
"reversed" because the inverse of the product $x_1 \ldots x_n = x_1 * \cdots * x_n$ must be $x_n^{-1} * \cdots * x_1^{-1}$, and the $x_1^{-1},\ldots,x_n^{-1}$ are, respectively,
$\overline{x_1},\ldots,\overline{x_n}$. The way we make $\overline{x_n} \ldots \overline{x_1}$ the inverse of $x_1 \ldots x_n$ is to take a congruence relation $R$ that identifies
$x_1 \ldots x_n \overline{x_n} \ldots \overline{x_1}$ with $1$, and pass to the quotient of $FM(X \cup \bar X)$ by this relation (defining then, in a natural way, the binary operation of the group, $[u]_R \star [v]_R = [u * v]_R$).
By taking the quotient, we are formalizing the intuitive idea of identifying $x_1 \ldots x_n \overline{x_n} \ldots \overline{x_1}$ with $1$, because in the quotient we have the equality
$[x_1 \ldots x_n \overline{x_n} \ldots \overline{x_1}]_R = [1]_R$. Let us give the formal definition.
Definition Let $X$ be a set. Let us take another set $\overline{X}$ equipotent to $X$ and disjoint from $X$ and let $f : X \rightarrow \overline{X}$ be a bijective map.
1. For each $x \in X$ let us denote $f(x)$ by $\bar x$, for each $x \in \overline{X}$ let us denote $f^{-1}(x)$ by $\bar x$, and for each $x_1 \ldots x_n \in FM(X \cup \overline{X})$ let us denote
$\overline{x_n} \ldots \overline{x_1}$ by $\overline{x_1 \ldots x_n}$.
2. Let $R$ be the congruence relation of $FM(X \cup \overline{X})$ spanned by $G = \{(u * \bar u,1) : u \in X \cup \overline{X}\}$, that is, $R$ is the intersection of all the congruence relations
in $FM(X \cup \overline{X})$ which have $G$ as a subset. We denote the quotient set $FM(X \cup \overline{X})/R$ by $FG(X)$.
Frequently, abusing notation, we represent an element $[u]_R \in FG(X)$ simply by $u$.
Because the operation $[u]_R \star [v]_R = [u * v]_R$ that we want to define in $FM(X \cup \bar X)/R$ is defined using particular representatives $u$ and $v$ of the equivalence classes $[u]_R$ and
$[v]_R$, a first precaution is to verify that the definition does not depend on the chosen representatives. It's an easy verification.
Lemma Let $X$ be a set. The binary operation $\star$ given by $[u]_R \star [v]_R = [u * v]_R$ (where $R$ is the congruence relation of the previous definition) is well defined in $FG(X)$.
Proof Let $u,u',v,v' \in FM(X \cup \bar X)$ be any elements such that $[u]_R = [u']_R$ and $[v]_R = [v']_R$, that is, $uRu'$ and $vRv'$. Because $R$ is a congruence relation in $FM(X \cup \bar X)$, we have
$u*v \, R \, u'*v'$, that is, $[u*v]_R = [u'*v']_R$. $\square$
Having checked that it is well defined, we can now state the definition.
Definition Let $X$ be a set. We define in $FG(X)$ the binary operation $\star$ by $[u]_R \star [v]_R = [u * v]_R$.
Finally, we verify that the structure we constructed is indeed a group.
Proposition Let $X$ be a set. $(FG(X),\star)$ is a group with identity $[1]_R$ and where $\forall [u]_R \in FG(X), \, {[u]_R}^{-1} = [\bar u]_R$.
Proof
1. $\star$ is associative because $\forall [u]_R,[v]_R,[w]_R \in FG(X), \, ([u]_R \star [v]_R) \star [w]_R = [u * v]_R \star [w]_R = [(u * v) * w]_R =$ $[u * (v * w)]_R = [u]_R \star [v * w]_R = [u]_R \star ([v]_R \star [w]_R)$.
2. Let us see that $[1]_R$ is the identity of $(FG(X),\star)$. Let $[u]_R \in FG(X)$ be any element. We have $[u]_R \star [1]_R = [u * 1]_R = [u]_R$ and, in the same way, $[1]_R \star [u]_R = [u]_R$.
3. Let $[u]_R \in FG(X)$ be any element and let us see that $[u]_R \star [\bar u]_R = [1]_R$. We have $[u]_R \star [\bar u]_R = [u * \bar u]_R$ and, by the definition of $R$, $u * \bar u \, R \, 1$, that is,
$[u * \bar u]_R = [1]_R$; therefore $[u]_R \star [\bar u]_R = [1]_R$ and, in the same way, $[\bar u]_R \star [u]_R = [1]_R$. $\square$
In the same way as we did with the free monoid, we will call this most "free" group spanned by the set $X$ the free group spanned by $X$.
Definition Let $X$ be a set. We call $(FG(X),\star)$ the free group spanned by $X$.
Example Let $X = \{x\}$. Let us choose any set $\bar X = \{y\}$ disjoint from (and equipotent to) $X$. Let $f : X \rightarrow \bar X$ be any (in fact, the only) bijective map from $X$ to $\bar X$.
Then we denote $f(x) = y$ by $\bar x$ and we denote $f^{-1}(y) = x$ by $\bar y$. We regard $x$ and $y$ as inverse elements. Let $R$ be the congruence relation of $FM(X \cup \bar X)$ spanned by
$G = \{(x \bar x,1),(\bar x x,1)\}$. $FG(X) = FM(\{x,\bar x\})/R$ is the set of all "words" written in the alphabet $\{[x]_R,[\bar x]_R\}$. For example, $[1]_R,[x]_R,[\bar x]_R, [xx\bar x xx]_R \in FG(X)$.
We have $G \subseteq R$ and, for example, $(xx \bar x,x) \in R$: because $(x\bar x,1) \in G \subseteq R$ (therefore $x \bar x R 1$) and because $R$ is a congruence relation, we can "multiply" both
"members" of the relation $x \bar x R 1$ by $x$ and obtain $xx \bar x R x$. We see $xx \bar x R x$ as meaning that in $FG(X)$ we have $xx \bar x = x$ (more precisely, $[xx \bar x]_R = [x]_R$), and we
think of this equality as the result of one $x$ being "cancelled" by $\bar x$ in $xx \bar x$.
Given $u \in FM(X \cup \bar X)$, let us denote the exact number of times that the "letter" $x$ appears in $u$ by $|u|_x$ and the exact number of times the "letter" $\bar x$ appears in
$u$ by $|u|_{\bar x}$. Then, "cancelling" $x$'s with $\bar x$'s, there remains a reduced word with $|u|_x - |u|_{\bar x}$ occurrences of the letter $x$ (if $|u|_x - |u|_{\bar x} < 0$, there are no
letters $x$ and there remain $-(|u|_x - |u|_{\bar x})$ occurrences of the letter $\bar x$). Let us denote $|u|_x - |u|_{\bar x}$ by $|u|_{x - \bar x}$. We have
1. $[u]_R = [v]_R$ if and only if $|u|_{x - \bar x} = |v|_{x - \bar x}$, and
2. $\forall [u]_R,[v]_R \in FG(X), \, |uv|_{x - \bar x} = |u|_{x - \bar x} + |v|_{x - \bar x}$.
In this way, each element $[u]_R \in FG(X)$ is determined by the integer $|u|_{x - \bar x}$, and the product $\star$ of two elements $[u]_R,[v]_R \in FG(X)$ corresponds to the sum of their
associated integers $|u|_{x - \bar x}$ and $|v|_{x - \bar x}$. Therefore, it seems that the group $(FG(X),\star)$ is "similar" to $(\mathbb{Z},+)$. Indeed $(FG(X),\star)$ is isomorphic to
$(\mathbb{Z},+)$, and the map $|\cdot|_{x - \bar x} : FG(X) \rightarrow \mathbb{Z}$ is a group isomorphism.
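This isomorphism can be illustrated computationally. The sketch below is my own (the encoding of $\bar x$ as the capital letter `X` and the names `reduce` and `weight` are invented): it freely cancels adjacent inverse pairs in a word and checks that the integer $|u|_{x - \bar x}$ classifies words up to the relation $R$.

```python
def reduce(word):
    """Freely cancel adjacent inverse pairs until none remain.

    'x' stands for the generator x, 'X' for its inverse x-bar.
    """
    stack = []
    for c in word:
        if stack and {stack[-1], c} == {'x', 'X'}:
            stack.pop()          # cancel x x-bar or x-bar x
        else:
            stack.append(c)
    return ''.join(stack)

def weight(word):
    """|u|_{x - x-bar}: the integer that determines the class [u]_R."""
    return word.count('x') - word.count('X')

print(reduce('xxXxx'))                                        # 'xxx'
# Two words are equal in FG({x}) iff their weights agree:
print(weight('xxXxx') == weight('xxx'))                       # True
# weight is a morphism onto (Z, +):
print(weight('xx' + 'XXX') == weight('xx') + weight('XXX'))   # True
```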
Presentation of a group
Informally, it seems that $\mathbb{Z}_n$ is obtained from the "free" group $\mathbb{Z}$ by imposing the relation $nx = 1$. Let us try to formalize this idea. We start with a set $X$ that spans the
group $G$ that we want to create and a set of relations $R$ (such as $x^n = 1$ or $xy = yx$) that the elements of $G$ must verify, and we obtain a group spanned by $X$ that verifies the relations of
$R$. More precisely, we write each relation $u = v$ in the form $uv^{-1} = 1$ (for example, $xy = yx$ is written in the form $xyx^{-1}y^{-1} = 1$) and we see each $uv^{-1}$ as a "word" of $FG(X)$.
Because $R$ doesn't have to be a normal subgroup of $FG(X)$, we cannot simply consider the quotient $FG(X)/R$, so we consider the quotient $FG(X)/N$ where $N$ is the normal subgroup of $FG(X)$ spanned by $R$.
In $FG(X)/N$, we will have $uv^{-1}N = 1N$, which we read as meaning that in $FG(X)/N$ the elements $uv^{-1}$ and $1$ are the same. In this way, $FG(X)/N$ will verify all the relations that we want and will
be spanned by $X$ (more precisely, by $\{xN : x \in X\}$). Let us formalize this idea.
Definition Let $G$ be a group. We call an ordered pair $(X,R)$, where $X$ is a set, $R \subseteq FG(X)$ and $G \simeq FG(X)/N$ with $N$ the normal subgroup of $FG(X)$ spanned by $R$, a presentation
of $G$, and denote it by $< X:R >$. Given a presentation $<X:R>$, we call $X$ the spanning set and $R$ the set of relations.
Let us see some examples of presentations of the free group $FG(X)$ and the groups $\mathbb{Z}_n$, $\mathbb{Z} \oplus \mathbb{Z}$, $\mathbb{Z}_m \oplus \mathbb{Z}_n$ and $S_3$. We also use the
examples to present some common notation and to show that a presentation of a group does not need to be unique.
1. Let $X$ be a set. $<X:\emptyset>$ is a presentation of $FG(X)$ because $FG(X) \simeq FG(X)/\{1\}$, where $\{1\}$ is the normal subgroup of $FG(X)$ spanned by $\emptyset$. In particular,
$<\{x\}:\emptyset>$ is a presentation of $(\mathbb{Z},+) \simeq FG(\{x\})$, more commonly denoted by $<x:\emptyset>$. Another presentation of $(\mathbb{Z},+)$ is $<\{x,y\}:\{xy^{-1}\}>$, more
commonly denoted by $<x,y:xy^{-1}>$. Informally, in the presentation $<x,y:xy^{-1}>$ we insert a new element $y$ into the spanning set, but we impose the relation $xy^{-1} = 1$, that is, $x = y$,
which is the same as not having introduced the element $y$ and having stayed with the presentation $<x:\emptyset>$.
2. Let $X = \{x\}$. $<X:\{x^n\}>$ (where $x^n = x \star \cdots \star x \in FG(X)$, $n$ times) is a presentation of $\mathbb{Z}_n$. Indeed, the normal subgroup of $FG(X)$ spanned by $\{x^n\}$ is
$N = \{\ldots,\bar{x}^{2n},\bar{x}^n,1,x^n,x^{2n},\ldots\} \simeq n\mathbb{Z}$ and $FG(X) \simeq \mathbb{Z}$, therefore $\mathbb{Z}_n = \mathbb{Z}/n\mathbb{Z} \simeq FG(X)/N$. It is more common to
denote $<\{x\} : \{x^n\}>$ by $<x : x^n>$.
3. Let $X = \{x,y\}$ (with $x$ and $y$ distinct) and $R = \{xyx^{-1}y^{-1}\}$. $<X:R>$ is a presentation of $\mathbb{Z} \oplus \mathbb{Z}$. Informally, what we do is impose commutativity in $FG(X)$,
that is, $xy = yx$, that is, $xyx^{-1}y^{-1} = 1$, obtaining a group isomorphic to $\mathbb{Z} \oplus \mathbb{Z}$. It is more usual to denote $<\{x,y\} : \{xyx^{-1}y^{-1}\}>$ by $<x,y : xyx^{-1}y^{-1}>$.
4. Let $X = \{x,y\}$ and $R = \{xyx^{-1}y^{-1},x^m,y^n\}$. $<X:R>$ is a presentation of $\mathbb{Z}_m \times \mathbb{Z}_n$. Informally, what we do is impose commutativity in the same way as in the
previous example, and we impose $x^m = 1$ and $y^n = 1$ to obtain $\mathbb{Z}_m \times \mathbb{Z}_n$ instead of $\mathbb{Z} \oplus \mathbb{Z}$. It is more common to denote
$<\{x,y\}:\{xyx^{-1}y^{-1},x^m,y^n\}>$ by $<x,y:xyx^{-1}y^{-1},x^m,y^n>$.
5. $<\{a,b,c\}:\{aa,bb,cc,abac,cbab\}>$, more commonly written $<a,b,c:a^2,b^2,c^2,abac,cbab>$, is a presentation of $S_3$, the group of the permutations of $\{1,2,3\}$ with the composition of
maps. To verify this, one can check that any group with presentation $<a,b,c:a^2,b^2,c^2,abac,cbab>$ has exactly six elements $id$, $a$, $b$, $c$, $ab$ and $ac$, and that the
multiplication of these elements produces the following Cayley table, which is equal to the Cayley table of $S_3$. Just to give an idea of how this can be achieved: a group with presentation
$<a,b,c:a^2,b^2,c^2,abac,cbab>$ has exactly the elements $id$, $a$, $b$, $c$, $ab$ and $ac$ because none of these elements coincide (the relations $a^2 = b^2 = c^2 = abac = cbab = 1$ don't allow us
to conclude that two of the elements are equal) and because "other" elements like $bc$ are actually one of the previous elements (for example, from $cbab = id$ we have $cb = ba$, and taking
inverses of both members, we have $b^{-1}c^{-1} = a^{-1}b^{-1}$, which, using $a^2 = b^2 = c^2 = id$, that is, $a = a^{-1}$, $b = b^{-1}$ and $c = c^{-1}$, results in $bc = ab$). Then, using the
relations of the presentation, one can compute the Cayley table. For example, $a(ab) = b$ because we have the relation $a^2 = 1$. Another example: we have $b(ac) = a$ because we can multiply
both members of the relation $abac = id$ by $a$ and then use $a^2 = id$. One could have suspected this presentation by taking $a = (1 \ 2)$, $b = (1 \ 3)$ and $c = (2 \ 3)$ and then, trying to
construct the Cayley table of $S_3$, finding out that it was possible once one knew that $a^2 = b^2 = c^2 = abac = cbab = 1$.
│ $\times$ │ $id$ │ $a$ │ $b$ │ $c$ │ $ab$ │ $ac$ │
│ $id$ │ $id$ │ $a$ │ $b$ │ $c$ │ $ab$ │ $ac$ │
│ $a$ │ $a$ │ $id$ │ $ab$ │ $ac$ │ $b$ │ $c$ │
│ $b$ │ $b$ │ $ac$ │ $id$ │ $ab$ │ $c$ │ $a$ │
│ $c$ │ $c$ │ $ab$ │ $ac$ │ $id$ │ $a$ │ $b$ │
│ $ab$ │ $ab$ │ $c$ │ $a$ │ $b$ │ $ac$ │ $id$ │
│ $ac$ │ $ac$ │ $b$ │ $c$ │ $a$ │ $id$ │ $ab$ │
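As a quick sanity check on this presentation (my own illustration, not part of the original text; it assumes right-to-left composition, i.e. `compose(p, q)` applies q first, and 0-indexed points), one can substitute the concrete permutations $a = (1 \ 2)$, $b = (1 \ 3)$, $c = (2 \ 3)$ and confirm the defining relations:

```python
# Permutations of {0, 1, 2} as tuples: p[i] is the image of i.
ID = (0, 1, 2)
a = (1, 0, 2)   # the transposition (1 2)
b = (2, 1, 0)   # the transposition (1 3)
c = (0, 2, 1)   # the transposition (2 3)

def compose(p, q):
    """p after q: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def word(*ps):
    """Multiply a word of permutations left to right, e.g. word(a, b, a, c) = abac."""
    r = ID
    for p in ps:
        r = compose(r, p)
    return r

# The defining relations of <a, b, c : a^2, b^2, c^2, abac, cbab>:
for w in (word(a, a), word(b, b), word(c, c), word(a, b, a, c), word(c, b, a, b)):
    print(w == ID)          # True for every relation
print(word(b, c) == word(a, b))   # bc = ab, as derived above: True
```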
It's natural to ask if all groups have a presentation. The following theorem tells us that the answer is yes, and it gives us a presentation.
Theorem Let $(G,\times)$ be a group.
1. The map $\varphi : FG(G) \rightarrow G$ defined by $\varphi([x_1]_R \star \cdots \star [x_n]_R) = x_1 \times \cdots \times x_n$ (where $x_1,\ldots,x_n \in G$) is an epimorphism of groups.
2. $<G:\textrm{ker} \, \varphi>$ is a presentation of $(G,\times)$.
Proof
1. $\varphi$ is well defined because every element of $FG(G)$ has a unique representation in the form $[x_1]_R \star \cdots \star [x_n]_R$ with $x_1,\ldots,x_n \in G$, except that $[1]_R$ may appear
several times in the representation, which doesn't affect the value of $x_1 \times \cdots \times x_n$. Let $[x_1]_R \star \cdots \star [x_m]_R,[y_1]_R \star \cdots \star [y_n]_R \in FG(G)$ be
any elements, where $x_1,\ldots,x_m,y_1,\ldots,y_n \in G$. We have $\varphi(([x_1]_R \star \cdots \star [x_m]_R) \star ([y_1]_R \star \cdots \star [y_n]_R)) = (x_1 \times \cdots \times x_m) \times
(y_1 \times \cdots \times y_n) = \varphi([x_1]_R \star \cdots \star [x_m]_R) \times \varphi([y_1]_R \star \cdots \star [y_n]_R)$, therefore $\varphi$ is a morphism of groups. Because
$\forall x \in G, \, \varphi([x]_R) = x$, $\varphi$ is an epimorphism of groups.
2. Using the first isomorphism theorem (for groups), we have $FG(G)/\textrm{ker} \, \varphi \simeq \varphi(FG(G)) = G$, therefore $<G:\textrm{ker} \, \varphi>$ is a presentation of $(G,\times)$. $\square$
The previous theorem, despite giving us a presentation of the group $G$, doesn't give us a "good" presentation, because the spanning set $G$ is usually much larger than other spanning sets, and the
set of relations $\mathrm{ker} \, \varphi$ is also usually much larger than other sufficient sets of relations (it is even a normal subgroup of $FG(G)$, when it would be enough for it to span an
appropriate normal subgroup).
Common Core Standards : CCSS.Math.Content.HSN-VM.A.1
Common Core Standards: Math
1. Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., v, |v|, ||v||, v).
Sometimes, a number is enough. You know, that dream you have, where all you seem to hear is "Thirty million dollars!" The rest of the dream is a blur, but that one single number is enough to evoke
images of private jets, vacations in the Caribbean, and that long-awaited Netflix subscription.
Other times, numbers need a little help. They need direction. That's where vectors come in.
Students should know that vectors are directed line segments, having both magnitude and direction. That means a vector not only tells you how fast the wind is blowing, but also the direction it's
blowing in.
They should also know the symbols for vectors. (Mathematicians love symbols and shortcuts, and vectors are no exception.) Instead of calling a vector AB, which is far too much work, mathematicians
prefer to call it v (for vector, obviously). In general, boldfaced lowercase letters represent vectors.
If students want to refer to the magnitude, or length, of the vector, they put it inside a double absolute value: ||v||.
Two vectors that have the same magnitude and direction are said to be equivalent (though we use an equal sign to represent equivalency). Equivalent vectors can be thought of as congruent segments on
parallel lines: same direction and same length.
The direction of a vector is determined by finding the slope of the segment between its initial point and its terminal point, while the magnitude of a vector is the same as the distance between its
endpoints. A vector is said to be in standard form if its initial point is the origin.
1. Is it possible for two vectors to have the same length and not be equivalent?
Correct Answer:
Yes, if their directions are different
Answer Explanation:
Equivalent vectors need to fulfill two requirements: they must have both the same length and the same direction. If one of these requirements isn't met (or if both aren't met, obviously), then
the two aren't equivalent.
2. Which formula would help you find the magnitude of a vector?
Answer Explanation:
"Magnitude" is just a fancy word for "size." The size of a vector is its length, and length means distance. Slope would only get us direction and the midpoint wouldn't tell us anything useful.
While the absolute value of a vector means the same as its magnitude, the term "absolute value" isn't a formula that we can use to help us calculate it.
3. What information is needed to determine whether or not two vectors are equivalent?
Correct Answer:
Both sets of initial and terminal points
Answer Explanation:
Determining the equivalency of two vectors means knowing their directions and their magnitudes. If we know only the initial points or terminal points of the vectors, their directions and
magnitudes are still unknown. Knowing one set of initial and terminal points will give us the direction and magnitude of one of the vectors, but what about the other one? (D) is the only answer
that tells us the directions and magnitudes of both vectors.
4. Vector v has an initial point of (0, 2) and a terminal point of (3, 5). What is the vector's slope?
Answer Explanation:
Bust out your skis, because it's time to hit some slopes. The slope of a vector is like the slope of a line: rise over run. That means we take the difference of its y coordinates and put it over
the difference of its x coordinates. If we do that, we end up with (5 − 2)/(3 − 0) = 3/3 = 1.
5. Vector v has an initial point of (0, 2) and a terminal point of (3, 5). What is the vector's magnitude?
Answer Explanation:
The magnitude of a vector is its length. To calculate the length of a segment, we just use the distance formula. That means we have ||v|| = √((3 − 0)² + (5 − 2)²) = √18 = 3√2 ≈ 4.24.
6. Both vectors n and m have slopes of 2. Are they equivalent?
Correct Answer:
The magnitude of each vector is needed
Answer Explanation:
Equivalent vectors must have identical slopes and identical magnitudes. Since n and m have the same slope, we need their magnitudes before we can conclusively say that they're equivalent. (We
could just say so, but we'd be lying.) That eliminates (A) and (B) automatically. Since we don't have the initial points of the vectors, the terminal points wouldn't help us find their
magnitudes. That means (D) is right.
7. Vectors p and q both have magnitudes of
Correct Answer:
The initial and terminal points of each vector are needed
Answer Explanation:
Vectors have both length (magnitude) and direction. In order to be considered equivalent to each other, both their magnitudes and directions must be the same. We've got magnitude down, but what
about direction? Since we don't know their directions yet, (A) and (B) aren't true. We already have the magnitudes, so they aren't needed. That leaves (C) as our only choice.
8. Vector b has a slope of -¾ and a magnitude of 5. If its initial point is (-3, 2), what is its terminal point?
Answer Explanation:
The easiest way to approach this problem is to plug in and check. While it's possible to do fancy algebraic manipulations with the slope and distance formulas, it might take a while just to solve
for one variable. If we plug in each point and check, we'll quickly see whether or not something matches or doesn't. If we do so, the only point that matches both slope and magnitude is (A).
9. Vector v has an initial point at (1, 2) and a terminal point at (4, 4), while vector w has an initial point at (2, 1) and a terminal point at (4, 4). Are the two vectors equivalent?
Correct Answer:
No, they have the same magnitudes but different directions
Answer Explanation:
We can use the distance formula to calculate the magnitudes of the vectors and the slope formula to calculate the directions. Looking at the magnitudes, we have ||v|| = √(3² + 2²) = √13, compared to ||w|| = √(2² + 3²) = √13, so they both have the same magnitude. The slopes are 2/3 for v and 3/2 for w, so they have different directions.
10. Vector g has an initial point (1, 1) and a terminal point of (9, 3). If vector h has an initial point at (-7, 3), what must its terminal point be in order to be equivalent with g?
Answer Explanation:
First order of business: find the direction and magnitude of g. Plugging its points into the distance formula gives ||g|| = √(8² + 2²) = √68 = 2√17, and its slope is 2/8 = ¼. Starting from h's initial point (-7, 3) and moving 8 right and 2 up preserves both the slope and the magnitude, though sketching g and h on the x-y axis might be more your
style. Either way, the terminal point should be (B).
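Several of the quiz answers above can be checked numerically. Here is a small sketch (the helper names `slope` and `magnitude` are ours, not Shmoop's) reproducing problems 4, 5, and 9:

```python
import math

def slope(p, q):
    """Direction of the vector from initial point p to terminal point q: rise over run."""
    return (q[1] - p[1]) / (q[0] - p[0])

def magnitude(p, q):
    """Length of the vector from p to q, via the distance formula."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Problems 4 and 5: v runs from (0, 2) to (3, 5)
print(slope((0, 2), (3, 5)))      # 1.0
print(magnitude((0, 2), (3, 5)))  # 4.2426..., i.e. 3*sqrt(2)

# Problem 9: v from (1, 2) to (4, 4) vs. w from (2, 1) to (4, 4):
# equal magnitudes (sqrt(13)) but slopes 2/3 vs. 3/2, so not equivalent.
print(magnitude((1, 2), (4, 4)) == magnitude((2, 1), (4, 4)))  # True
print(slope((1, 2), (4, 4)) == slope((2, 1), (4, 4)))          # False
```

Equivalence needs both checks to come back equal, which is exactly the two-requirement test from problem 1.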
Successful Strategies for Solving Problems on Assignments
Solving complex problems is a challenging task and warrants ongoing effort throughout your career. A number of approaches that expert problem-solvers find useful are summarized below, and you may
find these strategies helpful in your own work. Any quantitative problem, whether in economics, science, or engineering, requires a two-step approach: analyze, then compute. Jumping directly to
“number-crunching” without thinking through the logic of the problem is counter-productive. Conversely, analyzing a problem and then computing carelessly
will not result in the right answer either.
So, think first, calculate, and always check your results. And remember, attitude matters. Approach solving a problem as something that you know you can do, rather than something you think that you
can’t do. Very few of us can see the answer to a problem without working through various approaches first.
Analysis Stage
• Read the problem carefully at least twice, aloud if possible, then restate the problem in your own words.
• Write down all the information that you know in the problem and separate, if necessary, the “givens” from the “constraints.”
• Think about what can be done with the information that is given. What are some relationships within the information given? What does this particular problem have in common conceptually with
course material or other questions that you have solved?
• Draw pictures or graphs to help you sort through what’s really going on in the problem. These will help you recall related course material that will help you solve the problem. However, be sure
to check that the assumptions underlying the picture or graph you have drawn are the same as the assumptions made in the problem. If they are not, you will need to take this into consideration
when setting up your approach.
Computing Stage
• If the actual numbers involved in the problem are too large, small, or abstract and seem to be getting in the way of your thinking, substitute simple numbers and plan your approach. Then, once
you get an understanding of the concepts in the problem, you can go back to the numbers given.
• Once you have a plan, do the necessary calculations. If you think of a simpler or more elegant approach, you can try it afterwards and use it as a check of your logic. Be careful about changing
your approach in the middle of a problem. You can inadvertently include some incorrect or inapplicable assumptions from the prior plan.
• Throughout the computing stage, pause periodically to be sure that you understand the intuition behind each concept in the problem. Doing this will not only strengthen your understanding of the
material, but it will also help you in solving other problems that also focus on those concepts.
• Resist the temptation to consult the answer key before you have finished the problem. Problems often look logical when someone else does them; that recognition does not require the same knowledge
as solving the problem yourself. Likewise, when soliciting help from the AI or course head, ask for direction or a helpful tip only—avoid having them work the problem for you. This approach will
help ensure that you really understand the problem—an essential prerequisite for successfully solving problems on exams and quizzes where no outside help is available.
• Check your results. Does the answer make sense given the information you have and the concepts involved? Does the answer make sense in the real world? Are the units reasonable? Are the units the
ones specified in the problem? If you substitute your answer for the unknown in the problem, does it fit the criteria given? Does your answer fit within the range of an estimate that you made
prior to calculating the result? One especially effective way to check your results is to work with a study partner or group. Discussing various options for a problem can help you uncover both
computational errors and errors in your thinking about the problem. Before doing this, of course, make sure that working with someone else is acceptable to your course instructor.
• Ask yourself why this question is important. Lectures, precepts, problem sets, and exams are all intended to increase your knowledge of the subject. Thinking about the connection between a
problem and the rest of the course material will strengthen your overall understanding.
If you get stuck, take a break. Research has shown that the brain works very productively on problems while we sleep—so plan your problem-solving sessions in such a way that you do a “first pass.”
Then, get a night’s rest, return to the problem set the next day, and think about approaching the problem in an entirely different way.
References and Further Reading:
Adapted in part from Walter Pauk, How to Study in College, 7th edition, Houghton Mifflin Co., 2001.
Word Problem on Geometric Series
Explain what the sum on an infinite geometric series is if the first term is positive and the common ratio is greater than 1.
Griffin wrote:Explain what the sum on an infinite geometric series is if the first term is positive and the common ratio is greater than 1.
If the ratio is larger than 1, are the successive terms getting larger or smaller?
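To make the hint above concrete, a quick numerical sketch (the function name is ours): with a positive first term and common ratio greater than 1, the partial sums grow without bound, so the infinite series has no finite sum — it diverges.

```python
def partial_sum(a, r, n):
    """Sum of the first n terms of the geometric series a + a*r + a*r**2 + ...
    Uses the closed form a*(r**n - 1)/(r - 1), valid for r != 1."""
    return a * (r**n - 1) / (r - 1)

for n in (5, 10, 20):
    print(n, partial_sum(1, 2, n))
# 5 31.0
# 10 1023.0
# 20 1048575.0  -- the sums keep growing, never settling toward a limit
```

With a = 1 and r = 2 each partial sum roughly doubles the previous one, which is why no finite total exists.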
Area of a Kite
Year 8 Interactive Maths - Second Edition
A kite has two pairs of adjacent sides equal and one pair of opposite angles equal. Diagonals intersect at right angles. One diagonal is bisected by the other.
Consider the area of the following kite.
Diagonals of a kite cut one another at right angles; here, diagonal AC bisects diagonal BD.
Example 7
Find the area of the following kite.
So, the area of the kite is 176 cm^2.
• Two pairs of adjacent sides of a kite are equal and one pair of opposite angles are equal.
• The diagonals intersect each other at right angles.
• One diagonal is bisected by the other.
• The area of a kite is given by the following formula, where x and y are the lengths of the kite's diagonals: Area = ½xy.
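A minimal sketch of the formula in code. (The diagonal lengths from Example 7's diagram were lost in extraction; 16 cm and 22 cm are one pair consistent with the stated answer of 176 cm².)

```python
def kite_area(x, y):
    """Area of a kite with diagonal lengths x and y: half the product of the diagonals."""
    return 0.5 * x * y

print(kite_area(16, 22))  # 176.0 -- matches Example 7 if the diagonals are 16 cm and 22 cm
```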
The Dubner PC Cruncher, a microcomputer coprocessor card for doing integer arithmetic, review in
"... The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance,
because the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends o ..."
Cited by 41 (17 self)
Add to MetaCart
The problem of finding the prime factors of large composite numbers has always been of mathematical interest. With the advent of public key cryptosystems it is also of practical importance, because
the security of some of these cryptosystems, such as the Rivest-Shamir-Adleman (RSA) system, depends on the difficulty of factoring the public keys. In recent years the best known integer
factorisation algorithms have improved greatly, to the point where it is now easy to factor a 60-decimal digit number, and possible to factor numbers larger than 120 decimal digits, given the
availability of enough computing power. We describe several algorithms, including the elliptic curve method (ECM), and the multiple-polynomial quadratic sieve (MPQS) algorithm, and discuss their
parallel implementation. It turns out that some of the algorithms are very well suited to parallel implementation. Doubling the degree of parallelism (i.e. the amount of hardware devoted to the
problem) roughly increases the size of a number which can be factored in a fixed time by 3 decimal digits. Some recent computational results are mentioned – for example, the complete factorisation of
the 617-decimal digit Fermat number F11 = 2^(2^11) + 1, which was accomplished using ECM.
, 1996
"... . We describe the complete factorization of the tenth and eleventh Fermat numbers. The tenth Fermat number is a product of four prime factors with 8, 10, 40 and 252 decimal digits. The eleventh
Fermat number is a product of five prime factors with 6, 6, 21, 22 and 564 decimal digits. We also note a ..."
Cited by 17 (8 self)
Add to MetaCart
. We describe the complete factorization of the tenth and eleventh Fermat numbers. The tenth Fermat number is a product of four prime factors with 8, 10, 40 and 252 decimal digits. The eleventh
Fermat number is a product of five prime factors with 6, 6, 21, 22 and 564 decimal digits. We also note a new 27-decimal digit factor of the thirteenth Fermat number. This number has four known prime
factors and a 2391-decimal digit composite factor. All the new factors reported here were found by the elliptic curve method (ECM). The 40-digit factor of the tenth Fermat number was found after
about 140 Mflop-years of computation. We discuss aspects of the practical implementation of ECM, including the use of special-purpose hardware, and note several other large factors found recently by
ECM. 1. Introduction For a nonnegative integer n, the n-th Fermat number is F_n = 2^(2^n) + 1. It is known that F_n is prime for 0 ≤ n ≤ 4, and composite for 5 ≤ n ≤ 23. Also, for n ≥ 2, the factors of F_n are of th...
- Math. Comp , 1999
"... Abstract. Extending previous searches for prime Fibonacci and Lucas numbers, all probable prime Fibonacci numbers Fn have been determined for 6000 < n ≤ 50000 ..."
Cited by 6 (0 self)
Add to MetaCart
Abstract. Extending previous searches for prime Fibonacci and Lucas numbers, all probable prime Fibonacci numbers Fn have been determined for 6000 < n ≤ 50000 and all probable prime Lucas numbers Ln
have been determined for 1000 < n ≤ 50000. A rigorous proof of primality is given for F9311.
, 1997
"... Abstract. We report the discovery of new 27-decimal digit factors of the thirteenth and sixteenth Fermat numbers. Each of the new factors was found by the elliptic curve method. After division
by the new factors and other known factors, the quotients are seen to be composite numbers with 2391 and 19 ..."
Cited by 5 (2 self)
Add to MetaCart
Abstract. We report the discovery of new 27-decimal digit factors of the thirteenth and sixteenth Fermat numbers. Each of the new factors was found by the elliptic curve method. After division by the
new factors and other known factors, the quotients are seen to be composite numbers with 2391 and 19694 decimal digits respectively. 1.
- Math. Comp , 2000
"... We report the discovery of a new factor for each of the Fermat numbers F 13 ,F 15 ,F 16 . These new factors have 27, 33 and 27 decimal digits respectively. Each factor was found by the elliptic
curve method. After division by the new factors and previously known factors, the remaining cofactors are ..."
Cited by 4 (0 self)
Add to MetaCart
We report the discovery of a new factor for each of the Fermat numbers F 13 ,F 15 ,F 16 . These new factors have 27, 33 and 27 decimal digits respectively. Each factor was found by the elliptic curve
method. After division by the new factors and previously known factors, the remaining cofactors are seen to be composite numbers with 2391, 9808 and 19694 decimal digits respectively. 1.
, 1997
"... We consider the problem of distributing potentially dangerous information to a number of competing parties. As a prime example, we focus on the issue of distributing security patches to
software. These patches implicitly contain vulnerability information that may be abused to jeopardize the security ..."
Add to MetaCart
We consider the problem of distributing potentially dangerous information to a number of competing parties. As a prime example, we focus on the issue of distributing security patches to software.
These patches implicitly contain vulnerability information that may be abused to jeopardize the security of other systems. When a vendor supplies a binary program patch, different users may receive
it at different times. The differential application times of the patch create a window of vulnerability until all users have installed the patch. An abuser might analyze the binary patch before
others install it. Armed with this information, he might be able to abuse another user's machine.
Here's the question you clicked on:
please help me solve 20g^2=16g
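The thread shows no worked reply, so here is one way to solve it, checked with exact rational arithmetic: factor out the common 4g rather than dividing both sides by g (dividing would silently discard the root g = 0).

```python
from fractions import Fraction

# 20g^2 = 16g  =>  20g^2 - 16g = 0  =>  4g(5g - 4) = 0  =>  g = 0 or g = 4/5
roots = [Fraction(0), Fraction(4, 5)]
for g in roots:
    assert 20 * g**2 == 16 * g  # both roots satisfy the original equation
print(roots)  # [Fraction(0, 1), Fraction(4, 5)]
```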
On the complexity of teaching
Results 1 - 10 of 81
, 1998
"... We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a
setting in which the description of each example can be partitioned into two distinct views, motivated by the ta ..."
Cited by 1244 (28 self)
Add to MetaCart
We consider the problem of using a large unlabeled sample to boost performance of a learning algorithm when only a small set of labeled examples is available. In particular, we consider a setting in
which the description of each example can be partitioned into two distinct views, motivated by the task of learning to classify web pages. For example, the description of a web page can be
partitioned into the words occurring on that page, and the words occurring in hyperlinks that point to that page. We assume that either view of the example would be sufficient for learning if we had
enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled examples. Specifically, the presence of two distinct views
of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm's predictions on new unlabeled examples are used to enlarge the
training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled
and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. As part of our
analysis, we provide new re-
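The training loop sketched in the abstract above — two learners, one per view, each labeling unlabeled data for the other — can be caricatured in a few lines. The toy one-feature classifier and every name here are ours, not Blum and Mitchell's:

```python
class MeanThreshold:
    """Toy one-feature classifier: label 1 if x is above the midpoint of the class means."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self
    def predict(self, x):
        return 1 if x > self.t else 0

def co_train(view1, view2, labels, unlabeled1, unlabeled2, rounds=3):
    """Each round, each view's classifier labels the unlabeled pool, and those
    (example, guessed-label) pairs enlarge the *other* view's training set.
    unlabeled1[i] and unlabeled2[i] are the two views of the same example."""
    x1, y1 = list(view1), list(labels)
    x2, y2 = list(view2), list(labels)
    for _ in range(rounds):
        c1 = MeanThreshold().fit(x1, y1)
        c2 = MeanThreshold().fit(x2, y2)
        x2 += unlabeled2; y2 += [c1.predict(u) for u in unlabeled1]
        x1 += unlabeled1; y1 += [c2.predict(u) for u in unlabeled2]
    return c1, c2

c1, c2 = co_train([0.0, 1.0], [0.0, 1.0], [0, 1],
                  [0.1, 0.9, 0.2, 0.8], [0.1, 0.9, 0.2, 0.8])
print(c1.predict(0.95), c2.predict(0.05))  # 1 0
```

The design point the paper argues is that the augmentation step helps precisely because the two views are (conditionally) independent sources of evidence about the label.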
- Machine Learning , 1993
"... Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in
some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge mus ..."
Cited by 198 (4 self)
Add to MetaCart
Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some
form must be inserted into a neural network. Second, the network must be refined. Third, knowledge must be extracted from the network. We have previously described a method for the first step of this
process. Standard neural learning techniques can accomplish the second step. In this paper, we propose and empirically evaluate a method for the final, and possibly most difficult, step. This method
efficiently extracts symbolic rules from trained neural networks. The four major results of empirical tests of this method are that the extracted rules: (1) closely reproduce (and can even exceed)
the accuracy of the network from which they are extracted; (2) are superior to the rules produced by methods that directly refine symbolic rules; (3) are superior to those produced by previous
techniques fo...
- Evolutionary Computation , 1996
"... We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites."
Competitive coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to ..."
Cited by 121 (3 self)
Add to MetaCart
We consider "competitive coevolution," in which fitness is based on direct competition among individuals selected from two independently evolving populations of "hosts" and "parasites." Competitive
coevolution can lead to an "arms race," in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as
test problems to explore three new techniques in competitive coevolution. "Competitive fitness sharing" changes the way fitness is measured, "shared sampling" provides a method for selecting a
strong, diverse set of parasites, and the "hall of fame" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and
mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race
progress measurements, a...
, 2003
"... We present a new approach to clustering based on the observation that 'it is easier to criticize than to construct.' Our approach of semi-supervised clustering allows a user to iteratively
provide feedback to a clustering algorithm. The feedback is incorporated in the form of constraints which ..."
Cited by 100 (2 self)
Add to MetaCart
We present a new approach to clustering based on the observation that 'it is easier to criticize than to construct.' Our approach of semi-supervised clustering allows a user to iteratively provide
feedback to a clustering algorithm. The feedback is incorporated in the form of constraints which the clustering algorithm attempts to satisfy on future iterations. These constraints allow the user
to guide the clusterer towards clusterings of the data that the user finds more useful. We demonstrate semi-supervised clustering with a system that learns to cluster news stories from a Reuters data
set. Introduction Consider the following problem: you are given 100,000 text documents (e.g., papers, newsgroup articles, or web pages) and asked to group them into classes or into a hierarchy such
that related documents are grouped together. You are not told what classes or hierarchy to use or what documents are related; you have some criteria in mind, but may not be able to say exactly w...
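At its core, the feedback-as-constraints idea reduces to checking a clustering against user-supplied pairs. A minimal sketch (the names and the must-link/cannot-link vocabulary are ours; the paper's constraint forms may differ):

```python
def satisfies(assignment, must_link, cannot_link):
    """Check a cluster assignment (item -> cluster id) against user feedback
    expressed as must-link pairs (same cluster) and cannot-link pairs (different)."""
    return (all(assignment[a] == assignment[b] for a, b in must_link) and
            all(assignment[a] != assignment[b] for a, b in cannot_link))

clusters = {"doc1": 0, "doc2": 0, "doc3": 1}
print(satisfies(clusters, [("doc1", "doc2")], [("doc1", "doc3")]))  # True
print(satisfies(clusters, [("doc1", "doc3")], []))                  # False
```

A constrained clusterer would re-run its assignment step until `satisfies` holds (or penalize violations), which is the iterative loop the abstract describes.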
- In Neural Information Processing Systems , 2005
, 1996
"... We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of concept classes that are learnable with a
polynomial number of polynomial sized queries in this model. We give applications of this characterization, i ..."
Cited by 64 (8 self)
Add to MetaCart
We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of concept classes that are learnable with a
polynomial number of polynomial sized queries in this model. We give applications of this characterization, including results on learning a natural subclass of DNF formulas, and on learning with
membership queries alone. Query complexity has previously been used to prove lower bounds on the time complexity of exact learning. We show a new relationship between query complexity and time
complexity in exact learning: If any "honest" class is exactly and properly learnable with polynomial query complexity, but not learnable in polynomial time, then P ≠ NP. In particular, we
show that an honest class is exactly polynomial-query learnable if and only if it is learnable using an oracle for Σ^p_4. 1 Introduction Today concept learning is studied under two rigorous
frameworks which model t...
- In Proceedings of 25th Annual ACM Symposium on the Theory of Computing , 1993
"... ) Angus Macintyre Mathematical Inst., University of Oxford Oxford OX1 3LB, England, UK E-mail: ajm@maths.ox.ac.uk Eduardo D. Sontag 3 Dept. of Mathematics, Rutgers University New Brunswick, NJ
08903 E-mail: sontag@hilbert.rutgers.edu Abstract Proc. 25th Annual Symp. Theory Computing , San Diego, ..."
Cited by 44 (12 self)
Add to MetaCart
) Angus Macintyre Mathematical Inst., University of Oxford Oxford OX1 3LB, England, UK E-mail: ajm@maths.ox.ac.uk Eduardo D. Sontag 3 Dept. of Mathematics, Rutgers University New Brunswick, NJ 08903
E-mail: sontag@hilbert.rutgers.edu Abstract Proc. 25th Annual Symp. Theory Computing , San Diego, May 1993 This paper deals with analog circuits. It establishes the finiteness of VC dimension,
teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively
decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the "standard sigmoid" commonly used
in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general
analytic gate functions...
- Journal of Computer and System Sciences , 1994
"... We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a
model remedies the non-intuitive aspects of other models in which the teacher must successfully teach ..."
Cited by 39 (1 self)
Add to MetaCart
We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a
model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic
polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomialtime learner. In addition, we present other
general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time
algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences. 1 Introduction Recently, there has been interest in developing formal models of teaching [4, 10,
- In Proceedings of the Fifth Annual Workshop on Computational Learning Theory , 1992
"... Goldman and Kearns [GK91] recently introduced a notion of the teaching dimension of a concept class. The teaching dimension is intended to capture the combinatorial difficulty of teaching a
concept class. We present a computational analog which allows us to make statements about bounded-complexity tea ..."
Cited by 27 (0 self)
Add to MetaCart
Goldman and Kearns [GK91] recently introduced a notion of the teaching dimension of a concept class. The teaching dimension is intended to capture the combinatorial difficulty of teaching a concept
class. We present a computational analog which allows us to make statements about bounded-complexity teachers and learners, and we extend the model by incorporating trusted information. Under this
extended model, we modify algorithms for learning several expressive classes in the exact identification model of Angluin [Ang88]. We study the relationships between variants of these models, and
also touch on a relationship with distribution-free learning. 1 INTRODUCTION In the eight years since Valiant's seminal paper on learnability was published [Val84], computational learning theory has
been an active and productive field. Several different learning models have been proposed, each attempting to model a different aspect of learning. Many of these models envision a teacher who
interacts in some w...
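The Goldman-Kearns teaching dimension these abstracts build on can be computed by brute force for toy classes: for each concept, find the smallest set of labeled examples consistent with it and with no other concept in the class. A sketch (exponential in the domain size, illustration only; all names ours):

```python
from itertools import combinations

def teaching_dim(concepts, domain):
    """Brute-force teaching dimension of a finite concept class: the worst case,
    over concepts c, of the smallest example set whose labels under c rule out
    every other concept in the class."""
    def td(c):
        for k in range(len(domain) + 1):
            for S in combinations(domain, k):
                consistent = [d for d in concepts
                              if all(d(x) == c(x) for x in S)]
                if consistent == [c]:
                    return k
    return max(td(c) for c in concepts)

# Toy class: singletons over {0, 1, 2} -- each concept is positive on one point.
domain = [0, 1, 2]
singletons = [lambda x, i=i: x == i for i in domain]
print(teaching_dim(singletons, domain))  # 1: the single positive example suffices
```

For singletons one positive example pins down the concept, while richer classes (e.g. all subsets of the domain) force the teacher to label every point.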
Patent US7941057 - Integral phase rule for reducing dispersion errors in an adiabatically chirped amplitude modulated signal
Publication number US7941057 B2
Publication type Grant
Application number US 11/964,321
Publication date May 10, 2011
Filing date Dec 26, 2007
Priority date Dec 28, 2006
Also published as US20080159747
Publication number 11964321, 964321, US 7941057 B2, US 7941057B2, US-B2-7941057, US7941057 B2, US7941057B2
Inventors Daniel Mahgerefteh, Parviz Tayebati, Xueyan Zheng, Yasuhiro Matsui
Original assignee Finisar Corporation
Patent citations (101), Non-patent citations (58), Classifications (14), Legal events (1)
External links: USPTO, USPTO assignment, Espacenet
An optical transmitter is disclosed for transmitting a signal along a dispersive medium to a receiver. The optical transmitter generates pulses having an adiabatically chirped profile, with an initial pulse width and frequency excursion chosen such that high frequency data sequences include one bits that interfere destructively at a middle point of an intervening zero bit upon arrival at the receiver.
1. An optical transmission system comprising:
an optical transmitter;
an optical receiver; and
a digital signal source coupled to the optical transmitter and operable to generate an electrical data signal effective to cause the optical transmitter to emit a digital signal onto an optical fiber
having a first end coupled to the optical transmitter and a second end coupled to the optical receiver, the optical fiber comprising a dispersive material and defining an optical path length between
the first and second ends, wherein:
the digital signal comprises a train of zero and one bits, the one bits comprising adiabatic pulses comprising a frequency excursion between a base frequency and a peak frequency;
the train of zero and one bits includes a high frequency sequence comprising a first one bit followed by a zero bit followed by a second one bit;
the frequency excursion has a value such that the first one bit and second one bit are between π/2 and −π/2 radians out of phase at a middle point of the zero bit when the high frequency sequence
arrives at the receiver so as to decrease the bit error rate of the received digital signal at the receiver;
the adiabatic pulses have a 1/e² pulse width τ₀ upon exiting the transmitter and a 1/e² pulse width τ at the receiver after propagation through a length of dispersive fiber; and
the difference between the base frequency and peak frequency excursion of the pulses at the transmitter approximately satisfies
Δv_AD(τ·erf(T/τ) − τ₀·erf(T/τ₀)) = ¼.
2. The optical transmission system of claim 1, wherein the frequency excursion has a value such that the first one bit and second one bit are about π radians out of phase at a middle point of the
zero bit when the high frequency sequence travels a distance equal to the optical path length through the optical fiber.
3. An optical transmission system comprising:
an optical transmitter;
an optical receiver;
an optical fiber having a first end coupled to the optical transmitter and a second end coupled to the optical receiver, the optical fiber comprising a dispersive material and defining an optical
path length between the first and second ends; and
a digital signal source coupled to the optical transmitter and operable to generate an electrical data signal effective to cause the optical transmitter to emit a digital signal, wherein:
the digital signal comprises a train of zero and one bits, the one bits comprising adiabatic pulses comprising a frequency excursion between a base frequency and a peak frequency;
the train of zero and one bits includes a high frequency sequence comprising a first one bit followed by a zero bit followed by a second one bit;
the frequency excursion has a value such that the first one bit and second one bit are between π/2 and −π/2 radians out of phase at a middle point of the zero bit when the high frequency sequence
arrives at the receiver so as to decrease the bit error rate of the received digital signal at the receiver;
the adiabatic pulses have a 1/e² pulse width τ₀ upon exiting the transmitter and a 1/e² pulse width τ at the receiver after propagation through a length of dispersive fiber; and
the difference between the base frequency and peak frequency excursion of the pulses at the transmitter approximately satisfies
Δv_AD(τ·erf(T/τ) − τ₀·erf(T/τ₀)) = ¼.
4. The optical transmission system of claim 1, wherein the frequency excursion between the base frequency and the peak frequency of the pulses at the transmitter is a decreasing function of the transmission distance.
5. The optical transmission system of claim 1, wherein the optical transmitter comprises a directly modulated laser.
6. The optical transmission system of claim 1, wherein the optical transmitter comprises a directly frequency modulated laser coupled to an optical spectrum reshaper.
7. The optical transmission system of claim 1, wherein the optical transmitter comprises a distributed feedback laser.
8. The optical transmission system of claim 1, wherein the optical transmitter comprises an independent DFB laser for FM generation and a tandem external modulator for AM generation.
9. The optical transmission system of claim 1, where Δv_AD is the frequency excursion and T is a bit period of the digital signal.
10. The optical transmission system of claim 3, where Δv_AD is the frequency excursion and T is a bit period of the digital signal.
11. A method for reducing dispersion-related errors in an optical transmission system comprising an optical fiber coupled to a receiver and having an optical path length, the method comprising:
generating a train of zero and one bits, including a high frequency sequence comprising a first one bit followed by a zero bit followed by a second one bit, the first and second one bits comprising adiabatic pulses having a frequency excursion (Δv_AD) between a base frequency and a peak frequency; and
transmitting the train of zero and one bits through the optical fiber, Δv_AD having a value such that the first one bit and second one bit are between π/2 and −π/2 radians out of phase at a middle point of the zero bit when the high frequency sequence arrives at the receiver;
wherein the adiabatic pulses have a 1/e² pulse width τ₀ upon exiting the transmitter and a 1/e² pulse width τ at the receiver after propagation through a length of dispersive fiber; and wherein the difference between the base frequency and peak frequency excursion of the pulses at the transmitter approximately satisfies
Δv_AD(τ·erf(T/τ) − τ₀·erf(T/τ₀)) = ¼.
12. The method of claim 11, wherein the frequency excursion has a value such that the first one bit and second one bit are about π radians out of phase at a middle point of the zero bit when the high
frequency sequence arrives at the receiver.
13. The method of claim 11, wherein the frequency excursion between the base frequency and the peak frequency of the pulses at the transmitter is a decreasing function of the transmission distance.
14. The method of claim 11, wherein generating the train of zero and one bits comprises modulating a directly modulated laser.
15. The method of claim 11, wherein generating the train of zero and one bits comprises directly modulating a laser coupled to an optical spectrum reshaper.
16. The method of claim 11, wherein generating the train of zero and one bits comprises modulating a distributed feedback laser.
17. The method of claim 11, wherein generating the train of zero and one bits comprises modulating an independent DFB laser for FM generation and a tandem AM modulator.
18. The method of claim 11, where T is a bit period of a digital signal including the train of zero and one bits.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/877,425, filed Dec. 28, 2006.
1. The Field of the Invention
The present invention relates to dispersion resistant digital optical transmitters.
2. The Relevant Technology
The quality and performance of a digital transmitter is determined by the distance over which the transmitted digital signal can propagate without severe distortions. This is typically characterized as the distance over which the dispersion penalty reaches a level of about 1 dB. A standard 10 Gb/s optical digital transmitter, such as an externally modulated optical source (e.g., a laser), can transmit up to a distance of about 50 km in standard single mode fiber, at 1550 nm, before the dispersion penalty reaches the level of about 1 dB. This distance is typically called the dispersion limit.
The Bit Error Rate (BER) of an optical digital signal after propagation through fiber, and the resulting distortion of the signal, are determined mostly by the distortions of a few bit sequences. The 101 bit sequence and the single bit 010 sequence are two examples of bit sequences that have high frequency content and tend to distort most after dispersion in a fiber, leading to errors in the bit sequence. Transmission techniques that can alleviate the distortion for these bit sequences increase the dispersion tolerance of the entire data pattern.
In view of the foregoing, it would be an advancement in the art to provide an apparatus and method for increasing the dispersion tolerance of an optical digital transmitter, particularly for high-frequency data.
In one aspect of the invention, an optical transmission system includes an optical transmitter, an optical receiver, and an optical fiber having a first end coupled to the optical transmitter and a
second end coupled to the optical receiver. The optical fiber includes a dispersive material and defines an optical path length between the first and second ends. The optical transmitter includes a
laser transmitter operable to emit a digital signal comprising a train of zero and one bits, the one bits comprising adiabatic pulses. The pulses have an adiabatic frequency excursion between a base
frequency and a peak frequency.
The train of zero and one bits may include a high frequency sequence comprising a first one bit followed by a zero bit followed by a second one bit. The frequency excursion has a value such that the
phase difference between the first one bit and the second one bit at a middle point of the zero bit between them is between π/2 and −π/2 radians when the bit sequence arrives at the receiver.
In another aspect of the invention, the adiabatically chirped pulses of the one bits have a 1/e² pulse width τ₀ upon exiting the transmitter and a 1/e² pulse width τ upon traveling to the receiver through the optical fiber. The frequency excursion (Δv_AD) between the base frequency and the peak frequency approximately satisfies the relation Δv_AD(τ − τ₀)erf(1) = ¼, such that the 1 bits interfere destructively at a middle point of an intervening zero bit having a duration T.
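As a quick numerical check, this simplified relation can be solved for Δv_AD. The widths below are illustrative only (they are the τ = 90 ps and τ₀ = 50 ps used in a worked example later in the description, treated here as inputs):

```python
import math

def chirp_from_simplified_rule(tau, tau0):
    # Summary-section relation: dv_AD * (tau - tau0) * erf(1) = 1/4,
    # solved for the adiabatic chirp dv_AD (all times in seconds).
    return 0.25 / ((tau - tau0) * math.erf(1.0))

# Illustrative widths (taken from the worked example later in the text):
dv_ad = chirp_from_simplified_rule(90e-12, 50e-12)
print(dv_ad / 1e9)   # ≈ 7.4 GHz
```

This lands in the same few-GHz range as the adiabatic chirp values discussed in the detailed description.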
To further clarify the above and other advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof
which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope.
The invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
FIG. 1 is a schematic block diagram of a laser transmitter suitable for use in accordance with an embodiment of the present invention;
FIG. 2 is an eye diagram representation of a pseudo-random sequence of ones and zeros with various duty cycle values at 10 Gb/s;
FIG. 3 is a graph illustrating adiabatically chirped pulse shapes as transmitted from a laser transmitter; and
FIG. 4 is a graph illustrating adiabatically chirped pulses shaped in accordance to an embodiment of the present invention after traveling through a dispersive medium.
Referring to FIG. 1, a digital signal source 10 supplies an electrical digital bit sequence to an optical transmitter 12, such as a laser. The output of the optical transmitter 12 is transmitted
through a dispersive medium, such as an optical fiber 14. A receiver 16 is coupled to an end of the optical fiber 14 and receives optical signals transmitted from the transmitter 12. The optical
fiber 14 defines an optical path length between the optical transmitter 12 and the receiver 16.
The optical transmitter 12 may be a directly frequency modulated laser coupled to an optical spectrum reshaper, such as is used in the commercially available Chirp Managed Laser (CML™).
Alternatively, the transmitter 12 includes a directly modulated distributed feedback (DFB) laser for FM generation and a separate amplitude modulator (AM). In the preferred embodiment of the present
invention, the optical transmitter generates optical pulses that are amplitude modulated and frequency modulated such that the temporal frequency modulation profile of the pulses substantially
follows the temporal amplitude modulation profile. We call these pulses adiabatically chirped amplitude modulated pulses (ACAM).
Dispersion tolerance of pulses generated by the optical transmitter 12 is enhanced when pulses have a flat-top chirp and the adiabatic chirp is chosen to produce a π phase shift between 1 bits separated by an odd number of 0 bits. This is evident by considering a 101 bit sequence. In this case, as the 1 bits spread in time, they interfere destructively in the middle due to the uniform π phase shift across the pulse. Accordingly, the dispersion tolerance tends to be relatively independent of distance, because the phase across each pulse is constant and any overlap adds destructively.
In a pulse generated according to embodiments of the present invention, the optical transmitter 12 is modulated to produce an adiabatically chirped amplitude modulated (ACAM) pulse sequence that
manifests superior dispersion tolerance. In some embodiments, the chirp is not flat-topped, but varies adiabatically with the amplitude of the pulse. Hence the phase across the pulse is not constant but varies.
The adiabatic chirp and the crossing percentage can be arranged according to a novel integral rule, described below, to optimize transmission at a particular distance. Optical cross-over is a convenient representation of the pulse duty cycle for a random digital bit sequence, and is defined below. For example, for a pulse in which the single 1 bit duration is equal to the bit period, the cross-over is 50%.
Digital data consists of 1s and 0s at a bit rate B = 1/T, where T is the bit period. For a B = 10 Gb/s system, T = 100 ps. The 1 and 0 bits occupy time durations T₁ and T₀, respectively. The duty cycle is defined as the fraction of the duration of the 1s to twice the bit period:
D = T₁/2T. (2)
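The duty-cycle definition is easy to sanity-check numerically. The sketch below assumes the 10 Gb/s rate from the text; the specific 1-bit durations are hypothetical:

```python
def bit_period(bit_rate_hz):
    """T = 1/B, the bit period in seconds."""
    return 1.0 / bit_rate_hz

def duty_cycle(t1_s, bit_rate_hz):
    """Duty cycle D = T1 / (2T): the fraction of the 1-bit duration
    to twice the bit period."""
    return t1_s / (2.0 * bit_period(bit_rate_hz))

B = 10e9                          # 10 Gb/s, so T = 100 ps
print(duty_cycle(100e-12, B))     # 50% duty cycle -> crossing at 50%
print(duty_cycle(120e-12, B))     # above 50% -> crossing moves above 50%
print(duty_cycle(80e-12, B))      # below 50% -> crossing moves below 50%
```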
A non-return-to-zero digital data stream is often shown on a sampling oscilloscope in the form of an “eye diagram,” as in FIG. 2, in which all the bits in the bit stream are folded on top of each other on the same two bit periods. In the eye diagram, the rising edge of a 1 bit crosses the falling edge of another bit at a point along the vertical amplitude axis which, as used in this application, is called the crossing point; it is determined by the duty cycle and the rise and fall times. For a bit stream having a 50% duty cycle, the crossing point is in the middle between the 1 level and the zero level, or 50%. The crossing point moves above 50% for duty cycles higher than 50% (1s pulses longer than the bit period) and below 50% for duty cycles less than 50% (1s pulses shorter than the bit period). FIG. 2 shows a 50% duty cycle with a 50% crossing point (a), a 60% duty cycle (b), and a 40% duty cycle (c).
In some embodiments, pulses are formed according to an integral rule such that the phase difference between the peaks of two 1 bits separated by a 0 bit are adjusted such that the phase difference
between the two pulses in the middle of the 0 bit becomes equal to π at a desired propagation distance. This guarantees that the interference of the 1 bits in the middle of the 0 bit, which is
separating them, is maximally destructive, leading to a minimum at the desired distance. This causes the phase margin near the 0 bit and the extinction ratio to increase with propagation distance.
For a fixed crossing percentage, the optimum adiabatic chirp decreases with increasing propagation distance. The optimum chirp also increases for a higher crossing percentage. It should be noted that the
integral rule assumes that the bit sequence limiting propagation is the 101 bit sequence. So the optimum conditions of the transmitter may be somewhat different to accommodate other limiting bit
sequences. For example, single 1 bits spread less if they have higher crossing (longer 1s width). So it is advantageous to use a high crossing. However, the 101 bit should still maintain integrity
for lower crossing, as long as the integral rule is satisfied.
FIG. 3 illustrates the instantaneous frequency of a 101 sequence of an ACAM signal. It is assumed for this model that there is either minimal or no transient chirp. It is also assumed that the
amplitude (not shown) has the same profile as the frequency. Since the absolute phase is arbitrary, the phase of the first bit, E₁, is assumed to be zero at its peak, which we take to be at t=0. The phase at time t relative to this point is given by
$\Phi = 2\pi\int_0^t (\Delta v_{AD} - \Delta v(t'))\,dt' \qquad (1)$
where Δv_AD is the adiabatic chirp, defined as the peak frequency excursion of the frequency profile of the pulse, and Δv(t) is the time-varying instantaneous frequency profile of the pulse. For example, as shown in FIG. 3, the phase difference between the peaks of the first 1 bit, E₁, and the second 1 bit, E₂, is given by the shaded area, where T is the bit period. This phase difference is a function of the adiabatic chirp, rise time, fall time, and pulse shape.
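Eq. (1) can be evaluated numerically once a pulse shape is assumed. The raised-cosine frequency profile below is our own illustrative assumption, not the patent's measured profile; with it, the peak-to-peak phase difference comes out to 2πΔv_AD·T, about 1.3π for the Δv_AD = 6.5 GHz quoted later in the description:

```python
import math

T = 100e-12      # bit period at 10 Gb/s
dv_AD = 6.5e9    # adiabatic chirp (peak frequency excursion), Hz

def dv(t):
    # Hypothetical raised-cosine frequency profile of a 1-0-1 sequence:
    # full excursion dv_AD at the 1-bit peaks (t = 0 and t = 2T),
    # zero excursion at the centre of the 0 bit (t = T).
    return dv_AD * 0.5 * (1.0 + math.cos(math.pi * t / T))

def phase(t_end, steps=20000):
    # Eq. (1): Phi = 2*pi * integral_0^t (dv_AD - dv(t')) dt', trapezoid rule
    h = t_end / steps
    s = 0.5 * ((dv_AD - dv(0.0)) + (dv_AD - dv(t_end)))
    for k in range(1, steps):
        s += dv_AD - dv(k * h)
    return 2.0 * math.pi * s * h

phi2 = phase(2 * T)      # phase difference between the two 1-bit peaks
print(phi2 / math.pi)    # ≈ 1.3 for this profile
```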
This ACAM signal can be generated by a variety of ways, including using a directly frequency modulated laser coupled to an optical spectrum reshaper, such as is used in the commercially available
Chirp Managed Laser (CML™). The ACAM signal may be generated by an independent distributed feedback (DFB) laser for FM generation and a separate amplitude modulator placed after the laser modulator.
When the frequency modulation is generated by a DFB laser, the resulting output field has continuous phase. Hence the phase in the center of the 0 bit between the two 1 bits is ½ the phase difference between the peaks of E₁ and E₂.
Upon propagation through a dispersive fiber, the pulses broaden and their wings overlap. The instantaneous frequency of the pulses has two contributions: 1) the adiabatic chirp of the original pulse,
and 2) the linear chirp introduced by fiber dispersion, which introduces a quadratic phase variation across the pulse. In the absence of adiabatic chirp this quadratic phase is the same for the two 1
bit pulses in the 101 sequence. Because of the quadratic symmetry, the dispersion-induced phase is the same for the E₁ and E₂ pulses in the middle of the 0 bit between the 1 bits, where they
overlap. Hence the overlapped pulses interfere constructively, causing the 0 level to rise at the 0 bit and increase the 0→1 bit error rate. This is a key feature of the distorted eye for a
chirp-free externally modulated transmitter after fiber propagation.
FIG. 4 shows the frequency profile of an adiabatically chirped amplitude modulated signal (ACAM) after fiber dispersion. The amplitude (not shown separately) is the same as the frequency profile. The
linear chirp introduced by dispersion, which would introduce a tilt, is not shown. The two 1 bit pulses overlap in the middle of the zero bit, t=T, and interfere. We neglect the dispersion-induced
phase for the moment, because it gives the same phase to the two pulses at t=T. The adiabatic chirp, on the other hand, will introduce a phase difference, which can be adjusted to cause cancellation. Note that if the adiabatic chirp is high enough, it will cause the pulse to broaden asymmetrically. The method presented here still applies; however, the cancellation occurs away from the center of the 0 bit between the two bits. This is evident, for example, when the adiabatic chirp is 7-8 GHz for a 10 Gb/s bit sequence.
The curve 18 of FIG. 4 shows the intensity of the sum of the fields when there is a π phase shift between them in the middle of the 0 bit. The resulting intensity is given by Equation 2:
I(t) = E₁²(t) + E₂²(t) + 2E₁(t)E₂(t)cos(Φ₁(t) − Φ₂(t)) (2)
Here Φ₁(t) and Φ₂(t) are the phases of the field at time t for the 1 bits E₁ and E₂. In order to have destructive interference, the phase difference ideally has to be π; however, any value in the range π/2 ≤ Φ₂(t) − Φ₁(t) ≤ −π/2 (modulo 2π) will cause some destructive interference, since the cosine function is negative in this range. This accounts for the large range of usable distances and adiabatic chirp values for which the resulting optical eye is relatively open and the BER is acceptably low. Using Eq. 1, the phases at t=T are given in terms of the shaded areas A₁ and A₂ to be
Φ₁(T) = Φ₁ + A₁ = A₁
Φ₂(T) = Φ₂ − A₂ (3)
In the case that the pulses broaden approximately symmetrically, A₁ = A₂ = A(z), the condition for destructive interference becomes
Φ₂ − 2A(z) = π. (4)
According to Eq. 4, optimum cancellation is achieved when the phase difference between the peaks of two 1 bits separated by a zero is given by
$\Phi_2 = 2\pi\Delta v_{AD}\int_0^{2T} (1 - \Delta v(t')/\Delta v_{AD})\,dt' = \pi + 2A(z) \qquad (5)$
Note that the phase difference, Φ₂, between the two 1 bits separated by a single 0 bit has to be larger than π in order to get cancellation at distance z. This is distinctly different from the case of flat-top chirp, in which the phase difference is equal to π. It is interesting to note that, since the phase difference has to be π modulo 2π, a phase difference of 2A(z)−π will also provide a cancellation at the middle of the pulses. In Eq. 5, the integral is a dimensionless factor which depends only on the pulse shape, rise time and fall times. This factor decreases with increasing
pulse duty cycle, i.e. increasing eye crossing percentage. So a higher chirp is expected to be required for pulses with higher duty cycle (higher crossing percentage). For experimental conditions using a
directly frequency modulated laser coupled to an optical spectrum reshaper, such as the commercially available CML™, we find that for Δv_AD = 6.5 GHz, a crossing percentage of 55%, a rise time of ˜35 ps, and a fall time of ˜35 ps, which were optimized for 2300 ps/nm dispersion, the phase difference is Φ₂ = 1.3π. This value was calculated from a measured pulse shape assuming adiabatic chirp. For this condition the CML™ gave a <10^−6 bit error rate at 10.7 Gb/s at 22 dB optical signal to noise ratio (OSNR) after 2300 ps/nm of dispersion, satisfying the industry requirements. It is important to note that the receiver used in the preferred embodiment of the present invention is a standard 10 Gb/s direct detection receiver having a bandwidth of approximately 75% of the bit rate. Also, the optical eye diagram of the resulting signal at the receiver is a standard two-level intensity modulated eye diagram. This is because the destructive interference between bits keeps the optical eye open.
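The interference described by Eq. 2 can be sketched numerically. The Gaussian envelopes and the 90 ps width below are illustrative assumptions; the point is only that a π phase difference nulls the overlap at the middle of the 0 bit, while equal phases raise the 0 level:

```python
import math

T = 100e-12     # bit period
tau = 90e-12    # illustrative 1/e^2 width after propagation

def E(t, t0):
    # Illustrative Gaussian field envelope of a broadened 1 bit centred at t0
    return math.exp(-((t - t0) / tau) ** 2)

def intensity(t, dphi):
    # Eq. 2: I = E1^2 + E2^2 + 2*E1*E2*cos(phi1 - phi2)
    e1, e2 = E(t, 0.0), E(t, 2 * T)
    return e1 * e1 + e2 * e2 + 2 * e1 * e2 * math.cos(dphi)

mid = T  # centre of the intervening 0 bit
print(intensity(mid, math.pi))  # 0.0 -> complete cancellation of the overlap
print(intensity(mid, 0.0))      # > 0 -> the 0 level rises (constructive case)
```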
The valley area, between the two overlapping pulses, A(z), decreases with increasing distance, as the pulses broaden. This implies that the optimum adiabatic chirp decreases with increasing distance.
For a Gaussian pulse the area A(z), up to the middle of the zero bit between the two 1 bits at t=T, can be approximated by
A(z) = 2πΔv_AD(T − √(τ₀² + β₂²z²/τ₀²)·erf(T/τ)) (6)
where τ₀ is the 1/e² pulse width of the 1 bit before propagation, τ = √(τ₀² + β₂²z²/τ₀²) is the pulse width after propagation, β₂ is the fiber dispersion in ps²/km, and z is the propagation distance. Substituting Eq. 6 for the area into Eq. 5, the integral rule for Gaussian pulses, to calculate Φ₂ in terms of the adiabatic chirp and initial pulse width τ₀, we
obtain an explicit dependence of optimum adiabatic chirp on pulse width:
Δv_AD(τ·erf(T/τ) − τ₀·erf(T/τ₀)) = ¼ (7)
As an example, according to Eq. 7, for τ = 90 ps and τ₀ = 50 ps, the optimum adiabatic chirp is ˜7 GHz. It is important to note that τ is an increasing function of the transmission distance, τ = √(τ₀² + β₂²z²/τ₀²), and so the optimum chirp according to Eq. 7 will decrease with increasing distance:
$\Delta v_{AD} = \frac{1}{4}\,\frac{1}{\sqrt{\tau_0^2 + \beta_2^2 z^2/\tau_0^2}\,\mathrm{erf}(T/\tau) - \tau_0\,\mathrm{erf}(T/\tau_0)} \qquad (8)$
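The integral rule is straightforward to evaluate. The sketch below (our own illustration, with hypothetical broadened widths) solves Eq. 7 for Δv_AD; a direct evaluation at τ = 90 ps and τ₀ = 50 ps gives a value of about 8 GHz, the same few-GHz scale as the ˜7 GHz quoted above, and scanning τ upward, as happens with increasing distance, reproduces the predicted decrease of the optimum chirp:

```python
import math

T = 100e-12  # bit period of a 10 Gb/s signal

def optimum_chirp(tau, tau0):
    # Eq. (7): dv_AD * (tau*erf(T/tau) - tau0*erf(T/tau0)) = 1/4,
    # solved for the optimum adiabatic chirp (all times in seconds).
    return 0.25 / (tau * math.erf(T / tau) - tau0 * math.erf(T / tau0))

tau0 = 50e-12  # 1/e^2 width at the transmitter
# tau grows with distance, so scanning tau upward mimics increasing z;
# the optimum chirp falls monotonically, as Eq. (8) predicts:
for tau in (60e-12, 75e-12, 90e-12):
    print(round(optimum_chirp(tau, tau0) / 1e9, 1), "GHz")
```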
According to some embodiments of the invention, for a given dispersive medium having an optical path length between the transmitter 12 and the receiver 16, the initial pulse width τ₀ and frequency excursion Δv_AD are chosen such that Eq. 7 is satisfied near the receiver, so as to generate a phase shift equal to π between 1 bits separated by single 0 bits, for a given pulse width τ near the receiver 16 after dispersion of the pulse while traveling through the dispersive medium.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as
illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and
range of equivalency of the claims are to be embraced within their scope.
US6577013 5 sept. 10 juin 2003 Amkor Technology, Inc. Chip size semiconductor packages with stacked dies
US6580739 12 juil. 17 juin 2003 Agility Communications, Inc. Integrated opto-electronic wavelength converter assembly
US6618513 3 août 9 sept. 2003 Fibercontrol Apparatus for polarization-independent optical polarization scrambler and a method for use therein
US6628690 12 juil. 30 sept. 2003 Agility Communications, Inc. Opto-electronic laser with integrated modulator
US6650667 15 mai 18 nov. 2003 The Furukawa Electric Co., Ltd. Semiconductor laser apparatus, semiconductor laser module, optical transmitter and wavelength division
2001 multiplexing communication system
US6654564 7 août 25 nov. 2003 Jds Uniphase Inc. Tunable dispersion compensator
US6658031 6 juil. 2 déc. 2003 Intel Corporation Laser apparatus with active thermal tuning of external cavity
US6665351 29 janv. 16 déc. 2003 Telefonaktiebolaget Lm Ericsson (Publ) Circuit and method for providing a digital data signal with pre-distortion
US6687278 12 juil. 3 févr. 2004 Agility Communications, Inc. Method of generating an optical signal with a tunable laser source with integrated optical amplifier
US6690686 3 mars 10 févr. 2004 University Of Central Florida Method for reducing amplitude noise in multi-wavelength modelocked semiconductor lasers
US6738398 1 déc. 18 mai 2004 Yokogawa Electric Corporation SHG laser light source and method of modulation for use therewith
US6748133 26 nov. 8 juin 2004 Alliance Fiber Optic Products, Inc. Compact multiplexing/demultiplexing modules
US6778307 28 mars 17 août 2004 Beyond 3, Inc. Method and system for performing swept-wavelength measurements within an optical system
US6785308 28 mars 31 août 2004 Nortel Networks Limited Spectral conditioning system and method
US6810047 27 juin 26 oct. 2004 Electronics And Telecommunications Research Institute Wavelength tunable external resonator laser using optical deflector
US6834134 23 mars 21 déc. 2004 3M Innovative Properties Company Method and apparatus for generating frequency modulated pulses
US6836487 31 août 28 déc. 2004 Nlight Photonics Corporation Spectrally tailored raman pump laser
US6847758 14 août 25 janv. 2005 Fujitsu Limited Method, optical device, and system for optical fiber transmission
US6943951 10 mai 13 sept. 2005 Oyokoden Lab Co., Ltd. Optical component and dispersion compensation method
US6947206 18 juil. 20 sept. 2005 Kailight Photonics, Inc. All-optical, tunable regenerator, reshaper and wavelength converter
US6963685 6 nov. 8 nov. 2005 Daniel Mahgerefteh Power source for a dispersion compensation fiber optic system
US7013090 21 févr. 14 mars 2006 Matsushita Electric Industrial Co., Ltd. Transmitting circuit apparatus and method
US7027470 11 août 11 avr. 2006 Spectrasensors, Inc. Laser wavelength locker
US7054538 6 oct. 30 mai 2006 Azna Llc Flat dispersion frequency discriminator (FDFD)
US7073956 20 juil. 11 juil. 2006 Samsung Electronics Co., Ltd. Optical transceiver and passive optical network using the same
US7076170 14 mai 11 juil. 2006 University Of Maryland, Baltimore County System and method for generating analog transmission signals
US7123846 17 juil. 17 oct. 2006 Nec Corporation Optical receiving device, waveform optimization method for optical data signals, and waveform
2002 optimization program for optical data signals
US7164865 15 mars 16 janv. 2007 Opnext Japan, Inc. Optical fiber communication equipment and its applied optical systems
US7187821 18 janv. 6 mars 2007 Yasuhiro Matsui Carrier suppression using adiabatic frequency modulation (AFM)
US7263291 8 juil. 28 août 2007 Azna Llc Wavelength division multiplexing source using multifunctional filters
US7280721 17 déc. 9 oct. 2007 Azna Llc Multi-ring resonator implementation of optical spectrum reshaper for chirp managed laser technology
US7352968 17 déc. 1 avr. 2008 Finisar Corporation Chirped managed, wavelength multiplexed, directly modulated sources using an arrayed waveguide grating
2004 (AWG) as multi-wavelength discriminator
US7356264 17 déc. 8 avr. 2008 Azna Llc Chirp managed laser with electronic pre-distortion
US7376352 17 déc. 20 mai 2008 Finisar Corporation Chirp managed laser fiber optic system including an adaptive receiver
US7406266 18 mars 29 juil. 2008 Finisar Corporation Flat-topped chirp induced by optical filter edge
US7406267 2 sept. 29 juil. 2008 Finisar Corporation Method and apparatus for transmitting a signal using thermal chirp management of a directly modulated
2004 transmitter
US7555225 * 28 févr. 30 juin 2009 Finisar Corporation Optical system comprising an FM source and a spectral reshaping element
US20060029358 28 févr. 9 févr. 2006 Daniel Mahgerefteh Optical system comprising an FM source and a spectral reshaping element
* 2005
1 Alexander et al., Passive Equalization of Semiconductor Diode Laser Frequency Modulation, Journal of Lightwave Technology, Jan. 1989, 11-23, vol. 7, No. 1.
2 Binder, J. et al., 10 Gbit/s-Dispersion Optimized Transmission at 1.55 um Wavelength on Standard Single Mode Fiber, IEEE Photonics Technology Letters, Apr. 1994, 558-560, vol. 6, No. 4.
3 CA 2510352, Mar. 17, 2020, Office Action.
4 CN 200380108289.9, Aug. 29, 2008, Office Action.
5 CN 200380108289.9, Nov. 21, 2008, Office Action.
6 CN 200380108289.9, Nov. 23, 2007, Office Action.
7 CN 200580012705.4, Mar. 29, 2010, Office Action.
8 CN 200580015245.0, Mar. 29, 2010, Office Action.
9 CN 200580015245.0, Sep. 25, 2009, Office Action.
10 CN 200580037807, May 27, 2010, Office Action.
11 CN 200880009551.7, Jul. 14, 2010, Office Action.
12 Corvini, P.J. et al., Computer Simulation of High-Bit-Rate Optical Fiber Transmission Using Single-Frequency Lasers, Journal of Lightwave Technology, Nov. 1987, 1591-1596, vol. LT-5, No. 11.
13 Dischler, Roman, Buchali, Fred, Experimental Assessment of a Direct Detection Optical OFDM System Targeting 10Gb/s and Beyond, Optical Fiber Communication/National Fiber Optic Engineers
Conference, Feb. 24-28, 3 pages, San Diego, CA., 2008.
14 Dong Jae Shin, et al., Low-cost WDM-PON with Colorless Bidirectional Transceivers, Journal of Lightwave Technology, Jan. 2006, pp. 158-165, vol. 24, No. 1.
15 EP 05731268.8, Jan. 16, 2008, Office Action.
16 EP 05731268.8, May 12, 2010, Office Action.
17 EP 05764209.2, Jun. 9, 2009, Exam Report.
18 Hyryniewicz, J.V., et al., Higher Order Filter Response in Coupled Microring Resonators, IEEE Photonics Technology Letters, Mar. 2000, 320-322, vol. 12, No. 3.
19 JP 2009-504345, Apr. 27, 2010, Office Action.
20 JP 2009-504345, Oct. 26, 2010, Office Action.
21 JP2004-551835, Jul. 18, 2008, Office Action.
22 JP2004-551835, Mar. 2, 2010, Office Action.
23 Kikuchi, Nobuhiko, et al., Experimental Demonstration of Incoherent Optical Multilevel Staggered-APSK (Amplitude- and Phase-Shift Keying) Signaling, Optical Fiber Communication/National Fiber
Optic Engineers Conference, Feb. 24-28, 2008, 3 pages, San Diego, CA.
24 Kiyoshi Fukuchi, Proposal and Feasibility Study of a 6-level PSK modulation format based system for 100 Gb/s migration, 2007, 3 pages.
25 Koch, T. L. et al., Nature of Wavelength Chirping in Directly Modulated Semiconductor Lasers, Electronics Letters, Dec. 6, 1984, 1038-1039, vol. 20, No. 25/26.
26 KR 102008-7027139, Apr. 28, 2010, Office Action.
27 Kurtzke, C., et al., Impact of Residual Amplitude Modulation on the Performance of Dispersion-Supported and Dispersion-Mediated Nonlinearity-Enhanced Transmission, Electronics Letters, Jun. 9,
1994, 988, vol. 30, No. 12.
28 Lammert et al., MQW DBR Lasers with Monolithically Integrated External-Cavity Electroabsorption Modulators Fabricated Without Modification of the Active Region, IEEE Photonics Technology Letters,
vol. 9, No. 5, May 1997, pp. 566-568.
29 Lee, Chang-Hee et al., Transmission of Directly Modulated 2.5-Gb/s Signals Over 250-km of Nondispersion-Shifted Fiber by Using a Spectral Filtering Method, IEEE Photonics Technology Letters, Dec.
1996, 1725-1727, vol. 8, No. 12.
30 Li, Yuan P., et al., Chapter 8: Silicon Optical Bench Waveguide Technology, Optical Fiber Communications, 1997, 319-370, vol. 111B, Lucent Technologies, New York.
31 Little, Brent E., Advances in Microring Resonators, Integrated Photonics Research Conference 2003.
32 Mahgerefteh, D. and Fan, F., Chirp-managed-laser technology delivers > 250-km reach, Lightwave Online, 2005, PennWell Corporation. Accessed online Jul. 1, 2009 at: http://www.finisar.com/
34 Matsui, Yasuhiro et al, Chirp-Managed Directly Modulated Laser (CML), IEEE Photonics Technology Letters, Jan. 15, 2006, pp. 385-387, vol. 18, No. 2.
35 Mohrdiek, S. et al., 10-Gb/s Standard Fiber Transmission Using Directly Modulated 1.55-um Quantum-Well DFB Lasers, IEEE Photonics Technology Letters, Nov. 1995, 1357-1359, vol. 7, No. 11.
36 Morton, P.A. et al., "38.5km error free transmission at 10Gbit/s in standard fibre using a low chirp, spectrally filtered, directly modulated 1.55um DFB laser", Electronics Letters, Feb. 13,
1997, vol. 33(4).
37 Nakahara, K. et al., 40-Gb/s Direct Modulation With High Extinction Ratio Operation of 1.3-μm InGaAlAs Multiquantum Well Ridge Waveguide Distributed Feedback Lasers, IEEE Photonics Technology
Letters, Oct. 1, 2007, pp. 1436-1438, vol. 19, No. 19.
39 Proakis, John G., Digital Communications, 2001, 202-207, Fourth Edition, McGraw Hill, New York.
40 Rasmussen, C.J., et al., Optimum Amplitude and Frequency-Modulation in an Optical Communication System Based on Dispersion Supported Transmission, Electronics Letters, Apr. 27, 1995, 746, vol.
31, No. 9.
41 Ronald Freund, Dirk Daniel Gross, Matthias Seimetz, Lutz Molle, Christoph Casper, 30 Gbit/s RZ 8-PSK Transmission over 2800 km Standard Single Mode Fibre without Inline Dispersion Compensation,
2007, 3 pages.
42 Sato, K. et al, Chirp Characteristics of 40-Gb/s Directly Modulated Distributed-Feedback Laser Diodes, Journal of Lightwave Technology, Nov. 2005, pp. 3790-3797, vol. 23, No. 11.
43 Sekine, Kenro, et al., Advanced Multi-level Transmission Systems, Optical Fiber Communication/National Fiber Optic Engineers Conference, Feb. 24-28, 2008, 3 pages, San Diego, CA.
44 Shalom, Hamutal et al., On the Various Time Constants of Wavelength Changes of a DFB Laser Under Direct Modulation, IEEE Journal of Quantum Electronics, Oct. 1998, pp. 1816-1822, vol. 34, No.
45 Tokle, Torger et al., Advanced Modulation Formats for Transmission Systems, Optical Fiber Communication/National Fiber Optic Engineers Conference, Feb. 24-28, 2008, 3 pages, San Diego, CA.
46 U.S. Appl. No. 11/964,315, mail date Aug. 25, 2010, Office Action.
47 U.S. Appl. No. 12/014,676, mail date Oct. 4, 2010, Office Action.
48 U.S. Appl. No. 12/025,573, mail date Oct. 6, 2010, Office Action.
49 U.S. Appl. No. 12/047,017, mail date Aug. 6, 2010, Office Action.
50 U.S. Appl. No. 12/047,017, mail date Jun. 1, 2010, Restriction Requirement.
51 U.S. Appl. No. 12/047,017, mail date Sep. 27, 2010, Notice of Allowance.
52 U.S. Appl. No. 12/053,344, mail date Sep. 3, 2010, Notice of Allowance.
53 U.S. Appl. No. 12/115,337, mail date Aug. 20, 2010, Office Action.
54 U.S. Appl. No. 12/115,337, mail date Mar. 4, 2010, Office Action.
55 U.S. Appl. No. 12/115,337, mail date Oct. 28, 2010, Notice of Allowance.
56 Wedding, B., Analysis of fibre transfer function and determination of receiver frequency response for dispersion supported transmission, Electronics Letters, Jan. 6, 1994, 58-59, vol. 30, No. 1.
57 Wedding, B., et al., 10-Gb/s Optical Transmission up to 253 km Via Standard Single-Mode Fiber Using the Method of Dispersion-Supported Transmission, Journal of Lightwave Technology, Oct. 1994,
1720, vol. 12, No. 10.
58 Yu, et al., Optimization of the Frequency Response of a Semiconductor Optical Amplifier Wavelength Converter Using a Fiber Bragg Grating, Journal of Lightwave Technology, Feb. 1999, 308-315, vol.
17, No. 2.
U.S. Classification 398/193, 398/183, 398/199, 398/185
International Classification H04B10/12, H04B10/04
Cooperative Classification H04B10/5563, H04B10/5161, H04B10/541, H04B10/25137
European Classification H04B10/541, H04B10/5161, H04B10/25137, H04B10/5563
Date Code Event Description
3 Mar. 2008 AS Assignment Owner name: FINISAR CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAHGEREFTEH, DANIEL;TAYEBATI, PARVIZ;ZHENG, XUEYAN;AND OTHERS;SIGNING DATES FROM 20071221 TO 20071224;REEL/FRAME:020589/0618
Quadratic Equation
Date: 6/22/96 at 22:34:20
From: Anonymous
Subject: The Quadratic Equation - In Laypeople's Terms
We were testing our math literacy and found that we could not recall
what a quadratic equation is used for. Could you please enlighten us?
We searched and searched on the Web but could not find an answer in
laypeople's terms. Any insight would be appreciated.
Date: 6/22/96 at 23:46:36
From: Doctor Sarah
Subject: The Quadratic Equation - In Laypeople's Terms
Hi -
R. Seiden, whose web pages have since fallen off the web, once wrote
this explanation, which might help refresh your memory:
"Where would you use it in the real world? Outside of helping you pass
math, the Quadratic Equation can be very helpful. Let's say you throw
a baseball up in the air to your friend. That ball follows a
parabolic path, which means there is a quadratic equation involved,
and you can use the Quadratic Formula to figure out how far the ball
will go. You can also use it to figure out how hard you need to throw
it to make it to your friend."
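As a worked illustration of the baseball example above, here is a short Python sketch (not part of the original exchange; the function names and the choice of g = 9.81 m/s^2 are my own assumptions). It uses the quadratic formula to find when the ball lands and, from that, how far it travels:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*t^2 + b*t + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real solutions
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

def throw_range(speed, angle_deg, g=9.81):
    """Horizontal distance a thrown ball travels before landing on flat ground."""
    theta = math.radians(angle_deg)
    # Height over time is quadratic: y(t) = speed*sin(theta)*t - 0.5*g*t^2
    roots = quadratic_roots(-0.5 * g, speed * math.sin(theta), 0.0)
    t_land = max(roots)  # t = 0 is the launch; the other root is the landing time
    return speed * math.cos(theta) * t_land
```

For a 10 m/s throw at 45 degrees this reproduces the textbook range v^2 sin(2θ)/g.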
If you need a more detailed explanation, write back with a specific
question and we'll try to help you out.
-Doctor Sarah, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 6/24/96 at 10:31:2
From: Doctor Ethan
Subject: The Quadratic Equation - In Laypeople's Terms
The most basic example is any time that you have an equation
describing the motion of an object where there is a constant
acceleration. Then you have a quadratic equation. This occurs a lot.
Gravity is a good example.
-Doctor Ethan, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/52959.html","timestamp":"2014-04-19T00:48:57Z","content_type":null,"content_length":"6611","record_id":"<urn:uuid:54733660-bd24-49f0-ac30-ee7ca29df897>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00143-ip-10-147-4-33.ec2.internal.warc.gz"} |
Does a finite universe make sense to you?
Marcus -
First of all I apologize for my error in saying amongst most theoretical physicists today.
Big ups to Leonard Susskind and Lisa Randall for opening me up to these ideas.
no need to apologize! I just meant to point out that there is a disconnect between the real research literature---stuff published and cited in peer-review professional journals, and what you get in
New Scientist and in pop-sci mass market books.
Huge difference. Can't take NewSci seriously, if they give you the impression of a consensus amongst some professional group. Lot of ga-ga stuff in NewSci.
In the case of Susskind and Randall,
1. they are just 2 scientists out of many hundreds that sometimes do cosmology. not representative of community of working cosmologists (really in other specialties, string, braneworld models)
2. watch what they do, not what they say
3. both Susskind and Randall have authored popularization books. they naturally talk up the stuff they present in their books.
Susskind wrote a pop-sci book called Cosmic Landscape. It came out in 2005 and he talked it up a lot on the media. It didn't sell well. Now three years later, he has just brought out A DIFFERENT
popularization book that has
nothing about multiverse
or Landscape. It hits the market July 2008.
When they had that informal show of hands at Strings 05 in Toronto it was a room full of about 400-some string theorists. They voted over 3 to 1 against Susskind's pet idea of the anthropic string
theory Landscape. Of course science is not a democracy and Susskind has support money and visibility and tenure at Stanford. He is prominent and carries a lot of weight. But you can't say he
represents a majority or a consensus.
Science in the media is to some extent personality-driven. It is different from actual science.
Neither Susskind nor Randall got invited to give talks at the main annual string meeting, Strings 2008.
whereas they were very big in past years. Indeed in 2005, in Toronto, Susskind gave one of the two public lectures in the big auditorium. The other big talk was given by Robbert Dijkgraaf. Multiverse
and Landscape were very big that year.
Now there is a quiet unpublicized reaction against that stuff. Coupled with a cutback in faculty jobs for string theorists in the US.
Basically Susskind, a smart guy, is changing his message and how he presents himself. He recently said he doesn't like to be labeled as a string theorist. He has other research interests, other
directions, he points out. And he has stopped promoting the Landscape so vociferously as he was back in 2003-2005. His new book is about something else. He is presenting a new face.
Maybe in 2009 he will be invited to give a talk at Strings 09----and if so it will probably not be about Multiverses or the landscape of possible string theories. We'll see. Nobody can predict the
future course of fundamental physics research. We can bet, though. Would you like to bet? No money, just go on record with a prediction. | {"url":"http://www.physicsforums.com/showthread.php?p=1785286","timestamp":"2014-04-17T04:03:13Z","content_type":null,"content_length":"86778","record_id":"<urn:uuid:2d575ada-db56-4d0b-ab5b-f58130e7a1a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Velocity Reviews - Beginner square root question
Stefan Ram 07-08-2008 03:32 AM
Re: Beginner square root question
Ray Leon <popeyeray@qwest.net> writes:
>I have the following algorithm
>I would appreciate someone to check if this is mostly correct.
To check whether it is correct, one needs to
compare its actual behavior with the specification
for its required behavior.
Without a specification, it is »not even wrong«.
Also, this is a Java newsgroup, but your post does
not refer to Java.
Ray Leon 07-08-2008 04:23 AM
Beginner square root question
I have the following algorithm
1. Input: a real number X
If (X < 0) Then
Display: X "cannot be negative."
sqrt(X) = X^0.5
Print "The square root of X is" sqrt(X)
2. Exit
My flowchart is on the following web page:
I would appreciate someone to check if this is mostly correct.
I also am not sure about the error message and how to put it into my
flowchart.
Thank you
Mark Space 07-08-2008 06:22 AM
Re: Beginner square root question
Stefan Ram wrote:
> To check whether it is correct, one needs to
> compare its actual behavior with the specification
> for its required behavior.
To the OP:
To expand on this a bit, what is your algorithm supposed to do when X is
negative? Obviously, you display "X cannot be negative" and stop, but
was that the required behavior? What if in the case of negative number
input the algorithm was really supposed to make X positive, take the
square root, then display "Ri" to signify an imaginary number? (R =
square root.)
So, your algorithm is probably correct, but you do need to get the idea
down of a specification. Please make sure to tell us first what the
algorithm is supposed to do, then give us an implementation to check.
It's like those problems in geometry where you're "Given X, Y, Z, show
that A is true." Then show the work. You've just shown us the work, we
aren't sure what you were given to do.
And it would be nice if you included Java in there somehow.
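Since this is a Java group, one minimal way the original pseudocode might be turned into runnable Java is sketched below. This is only an illustration — the class and method names are my own, and it assumes the "display an error and stop" behaviour rather than returning imaginary roots:

```java
public class SquareRoot {

    // Mirrors the pseudocode: reject negative input, otherwise report sqrt(X).
    static String describe(double x) {
        if (x < 0) {
            return "X cannot be negative.";
        }
        return "The square root of " + x + " is " + Math.sqrt(x);
    }

    public static void main(String[] args) {
        System.out.println(describe(9.0));   // The square root of 9.0 is 3.0
        System.out.println(describe(-4.0));  // X cannot be negative.
    }
}
```

Whether an error message or an imaginary result is the right behaviour is exactly the kind of thing the specification should say.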
Predicting gene expression in T cell differentiation from histone modifications and transcription factor binding affinities by linear mixture models
BMC Bioinformatics. 2011; 12(Suppl 1): S29.
The differentiation process from stem cells to fully differentiated cell types is controlled by the interplay of chromatin modifications and transcription factor activity. Histone modifications or
transcription factors frequently act in a multi-functional manner, with a given DNA motif or histone modification conveying both transcriptional repression and activation depending on its location in
the promoter and other regulatory signals surrounding it.
To account for the possible multi functionality of regulatory signals, we model the observed gene expression patterns by a mixture of linear regression models. We apply the approach to identify the
underlying histone modifications and transcription factors guiding gene expression of differentiated CD4+ T cells. The method improves the gene expression prediction in relation to the use of a
single linear model, as often used by previous approaches. Moreover, it recovered the known role of the modifications H3K4me3 and H3K27me3 in activating cell specific genes and of some transcription
factors related to CD4+ T differentiation.
All cells in a multi-cellular organism arise from the same zygote and thus carry the same genetic information. However, complex regulatory programs allow stem cells to differentiate into distinct
cell types. For instance, in response to different infectious agents Naive CD4+ T cells differentiate into at least four types of T helper cells—Th1, Th2, Th17, and inducible regulatory T cells
(iTregs) [1]. While all of these cell types are involved in the adaptive immune response they serve distinct roles by secreting different cytokines. For example, Th1 acts against mycobacterial
infections by releasing IFNγ, which activates the response of macrophages [1] while Th2 cells secrete various interleukins helping B-cells to induce humoral immunity.
On the transcriptional level, the differentiation process from stem cells to fully differentiated cell types is controlled by the interplay of chromatin modifications and transcription factor
activity [2]. Chromatin structure is shaped primarily by histones. The presence or absence of these large globular protein complexes determines the accessibility of the promoter regions for the
transcriptional machinery and thus performs a high-level control on gene expression [3,4]. The affinity of histones to DNA is modified by the cell via a large repertoire of post-translational protein
modifications including acetylations and methylations.
The resulting epigenetic histone code appears highly intricate, with a given histone frequently carrying several different modifications at a time. Despite this complexity, it has become clear that
certain modifications, such as the trimethylation of the lysine 4 residue in the tail of histone H3 (abbreviated H3K4me3) are mainly associated with active promoters while other modifications such as
H3K27me3 tend to be associated with inactive promoters [5]. The importance of histone modifications for the differentiation of Naive CD4 T-cells into Th1 cells has recently been verified at [6],
which demonstrated that IFNγ expression is controlled by the histone methylation status of its promoter.
Aside from chromatin structure, transcription factors (TFs), play an essential role in controlling cell differentiation by guiding the transcriptional machinery to its target promoters and
facilitating the initiation of transcription. For instance, in T-cell differentiation, in vitro studies demonstrated that either high levels of the transcription factor GATA3 or strong signalling via
the transcription factor STAT5 is sufficient to determine the Th2 cell fate [1].
Particularly in the context of genome wide studies, computational biology analyses have become an essential component of elucidating the regulatory signals underlying observed gene expression
patterns. Usually, the problem of identifying the promoter elements guiding differentiation and cell type specific gene expression is tackled by first selecting the genes which are most specifically
expressed in the particular cell type and then performing motif over-representation analysis on their promoter sequences as in [7,8] (see [9] for a recent review). While such methods allow
identifying potentially regulating transcription factors they have the intrinsic drawback of requiring a previous grouping of genes and of being able to explain only the expression of the genes with
highest specificity for the condition.
In contrast, linear regression models, as first proposed by [10,11], combine all regulatory signals in order to explain the expression pattern of the genes. In their work, Bussemaker et al. [10]
focused on explaining gene expression based on combinations of predicted TF binding sites. The coefficients of the linear model indicate the importance of a particular regulatory signal. That is,
signals which obtain large positive coefficients likely correspond to putative activators while signals with large negative coefficients likely act as suppressors. Recently, Karlic et al. [12] also
used a linear regression model in order to estimate promoter activity based on histone modification data. By design, the above approaches assume that a given regulatory signal exerts the same
regulatory effect on all its target genes.
However, transcription factors and thus their DNA binding motifs frequently act in a bi-functional manner, with a given DNA motif conveying both transcriptional repression and activation depending on
its location with respect to the transcription start site (TSS) and the sequence motifs surrounding it. For instance, RUNX1 and RUNX3 have been shown to act both as repressors and activators in
different tissues and are involved in determining T-cell fate [13].
To account for the possible multifunctionality of regulatory signals, in this study we propose to extend [10-12] by allowing the observed gene expression patterns to be explained by not just one,
but by a mixture of several linear regression models [14,15]. This permits, for instance, finding mixture models such that genes with high maximal expression are controlled by a different group of
regulatory signals than genes with low maximal expression (see Fig. 1). That is, a regulatory signal might act as a repressor when associated with lowly expressed genes while it may function as an
activator or neutral bystander when present in the promoter of highly expressed genes.
Example of Linear Regression Mixture Model. Illustrative example of the use of linear regression mixture models to predict gene expression from the regulatory signals. Gene expression profiles,
indicated by the heat map on the left (red and green correspond ...
In order to find the regression models best explaining the expression data, our method takes as input the matrix Y of observed expression profiles from all genes as well as a matrix X, containing the
regulatory signals for the corresponding promoters (i.e. predicted TF binding affinities and presence of histone modifications). For each gene, it then estimates the coefficient vector B,
representing the relative importance of each regulatory signal and its effect on gene expression (activation or repression).
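The estimation of B from X and Y can be sketched as a small expectation-maximization (EM) procedure. The Python toy version below is an illustrative assumption, not the authors' implementation; the ridge constant, random initialization, and fixed iteration count are simplifications:

```python
import numpy as np

def fit_mixture_regression(X, y, k=2, n_iter=100, seed=0):
    """Toy EM for a k-component mixture of linear regressions.

    X : (genes x signals) matrix of regulatory signals
    y : expression vector for one condition
    Returns per-component coefficients B (k x signals) and
    per-gene responsibilities resp (genes x k).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    resp = rng.dirichlet(np.ones(k), size=n)  # random soft assignments
    B = np.zeros((k, d))
    sigma2 = np.ones(k)
    for _ in range(n_iter):
        # M-step: weighted least squares for each component
        for j in range(k):
            w = resp[:, j]
            A = X.T @ (X * w[:, None]) + 1e-6 * np.eye(d)  # small ridge for stability
            B[j] = np.linalg.solve(A, X.T @ (w * y))
            r = y - X @ B[j]
            sigma2[j] = max((w * r ** 2).sum() / w.sum(), 1e-8)
        pi = resp.mean(axis=0)  # mixing proportions
        # E-step: posterior responsibility of each component for each gene
        res2 = (y[:, None] - X @ B.T) ** 2
        log_p = np.log(pi) - 0.5 * (res2 / sigma2 + np.log(2 * np.pi * sigma2))
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
    return B, resp
```

Each gene's expression would then be predicted by the component with the highest responsibility, and model quality scored by the mean squared error of those predictions.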
We apply this novel approach to identify the underlying regulatory signals guiding gene expression in each of the four differentiated CD4+ T-helper cell types. As potential regulatory signals we
consider both, histone modifications (HM) as measured by Chip-Seq [16] as well as predicted binding affinities [17] from a set of TFs related to lymphoiesis [1,18-20]. As we are mainly interested in
cell type specific signals, we restrict the analysis to genes with low CpG content in their promoters [21] as such genes tend to be expressed in a tissue and stage specific manner while genes with
high CpG promoter content tend to be broadly expressed. Using this method we expect to improve the gene expression prediction in relation to the use of single linear model, but also to reveal the
regulatory roles for histone modifications and transcription factors.
Results and discussion
Regulatory signals predicts expression
As a first step, we want to determine which set of regulatory signals, X, best explains the observed gene expression data, Y. To this end, we supply our algorithm with a matrix X containing only predicted TF binding affinities, only histone modification data, or both sets of regulatory signals, and assess how well the resulting regression models capture the data. As a measure of quality for the different models we compute the mean square error between the predicted gene expression values and the actual measurements (see Methods for details).
Predicting gene expression in the four T-cell types by means of a single regression model yields MSEs of about 0.5 for HM and HM+TF on all data sets (see red bars in Fig. 2). A mixture of two regression models further reduces the MSEs to an average value of 0.25 across all cell types. In all scenarios, the difference in MSE between one and two models was statistically significant (t-test p-value < 0.01), indicating the advantage of using mixtures to predict expression. The model selection procedure (see Methods) indicates that the data is optimally explained by a combination of 2-4 regression models (see Fig. 2) and that gene expression can be well predicted from histone modification data alone.
Regression Prediction Error. We depict the MSE for 1 to 6 models for the prediction of expression on Th1, Th2, Th17 and iTreg. Bars marked with * indicate the number of linear models chosen by model selection. The MSE with TF is higher than the ...
In contrast, using a single regression model to predict gene expression from TF binding affinities alone yields considerably larger MSEs across all cell types (average MSE = 2, see blue bars in Fig. 2). Interestingly, supplying our algorithm with the combined data from both histone modifications and TF binding affinities yields MSEs similar to the ones obtained with histone modification data alone (see Fig. 2). This indicates that the utilized histone modification and TF binding data cover redundant rather than complementary information about gene expression. As histone modifications and HM+TF affinities yield the solutions with the lowest MSEs, we continue the analysis with the models based on these data sets.
Control of Th1 gene expression
Having established that histone modification data together with a mixture of two regression models yields the best results, we now investigate which modifications contribute most strongly to these models. We restrict our analysis to the data from Th1 cells and the corresponding regulatory signals that obtain the largest absolute regression coefficients (results from the other cell types closely resemble those from Th1 cells; see Additional File 1 for details).
For model 1, which explains the expression pattern of the most highly as well as moderately expressed genes, the histone modifications with the largest influence are H3K4me3 and H3K27me3, with regression coefficients of +0.7975 and -0.4533, respectively. As shown in the top part of Fig. 3, these two modifications form a gradient, with H3K4me3 most frequently found in highly expressed genes while being absent in moderately to lowly expressed genes. In contrast, H3K27me3 is consistently detected in the promoters of lowly expressed genes but appears weaker or even absent in the promoters of highly expressed genes.
Regression Results Th1. Results for mixtures of two regression models utilizing only histone modification data on Th1. Model 1 captures the expression of 2231 genes and model 2 of 3923 genes.
Corresponding gene expression levels are shown by the vectors ...
For model 2, which explains the transcriptional activity of a small subset of highly expressed as well as most of the lowly expressed genes, we again find H3K4me3 to have the strongest positive regression coefficient (b = 0.71). This is reflected by a strong association of this modification with the most highly expressed genes of this set (see Fig. 3). In contrast, H3K27me3 obtains a regression coefficient close to zero (b = 0.03) in this model, as this modification appears with the same intensity in nearly all genes assigned to model 2.
An alternative view of these results is presented in Fig. 4, which shows the interpolated values of the histone modifications against gene expression, the linear model of each component, and the resulting mixture model. Clearly, the dependence of expression on H3K27me3 is not linear, as lowly expressed genes all show a high level of this modification. This non-linearity is captured by the mixture model (red line) and explains the lower MSEs obtained when more than one linear model is applied. To see whether the influence of TFs may contribute to this effect, we next look in detail at the results obtained from the combined histone and TF data together with a mixture of three regression models. As shown in Fig. 5, we see similar results with respect to the histone marks: H3K4me3 as an enhancer and H3K27me3 as an inhibitor of expression for genes with high expression, and H3K4me3 as an enhancer for genes with low expression. Moreover, only for the genes with high expression are there TFs (Pax5, Stat5, Meis/Hox, Iscbp) promoting gene expression and a TF (MyB) inhibiting it. For all TFs, regression coefficients were in the range of 0.1 to 0.15 (see Additional File 1 for additional results). For genes with low expression, we found no relation between TF binding affinities and expression. Th1 cells are known to be regulated by T-bet and Stat4 [1]. While our study lacked a PFM for T-bet, it listed the closely related Stat5 as a positive regulator of the genes with high expression. Regarding factors related to inhibition, c-Myb has recently been shown to bind H3 histone tails and to promote histone acetylation in humans [22]. These results indicate a putative role of MyB in down-regulating the expression of genes during CD4+ T-cell differentiation by promoting epigenetic changes. However, further acetylation data would be required for a better characterization of the role of this factor.
Histone Modification against Th1 Gene Expression. We depict the values of Th1 gene expression against H3K27me3 modification (left) and H3K4me3 modification (right). The blue line represents a
nearest-neighbor interpolation (30 samples) of the histone ...
Regression Coefficients on Th1 with HM/TF data. We depict the regression coefficients of the most relevant regulatory signals and mean expression values for the mixture with three linear models on
Th1 with HM/TF data.
Comparison with previous studies
Several computational methods have previously used linear models for predicting gene expression, in the context of transcription factor binding [10,11] or histone modifications [12]. In all cases, distinct data sets were used, so the results are not directly comparable. The analysis in [12] was based on human naive CD4+ T cells and included 38 histone modifications. Their model obtained a correlation coefficient of 0.72 on genes with low CpG content using H3K4me3 and H3K79me1, while our method achieved coefficients in the range 0.64-0.68 for one model and 0.85-0.87 for two models using H3K4me3 and H3K27me3. The increase in correlation coefficient from a single linear model to two linear models is an indication that all these approaches would profit from the mixture of linear regressions framework.
Conclusions
Predicting gene expression from regulatory signals is an important but unmet goal in bioinformatics. In this study, we propose a novel approach which uses mixtures of linear regression models together with transcription factor binding and histone modification data to estimate the transcriptional activity of CpG-depleted promoters. In addition, the approach allows us to determine the functional activity of the various regulatory signals. We show that our approach obtains significantly smaller errors in predicting gene expression than the simple linear regression models used in previous approaches. For gene expression data from CD4+ T helper cells we find that both histone modification data alone and histone modifications together with predicted TF binding affinities yield the best expression predictors. In accordance with previous dedicated studies, we recover the well-known regulatory roles of H3K4me3 as an enhancing and H3K27me3 as a repressive signal for gene expression. Moreover, our predictions suggest that histone modifications act not in a binary on/off fashion but rather in a continuous way, with levels of H3K4me3 and H3K27me3 steadily rising or falling over a large range of expression values in a non-linear manner. Using TF binding affinities, we also partially recover the main factors, such as the Stat family, involved in T helper cell-type-specific gene expression. Interestingly, we observe a negative effect of c-Myb on expression in all T helper cell types. This raises the question whether MyB, which has recently been shown to promote histone acetylation marks in hematopoiesis [22], could play a role in the down-regulation of genes in T helper cell types.
The advent of next-generation sequencing provides an ever-growing stock of high-quality data on the full range of histone modifications, DNA methylation states and transcription factor occupancy across the entire genome in various cell types and differentiation stages. Several methodological improvements will be required to integrate this wealth of data in order to shed light on the complex interplay between the different regulatory signals acting in eukaryotes. Moreover, in the ideal case where all possible regulatory signals have been measured, advanced feature selection procedures such as those postulated in [23] will be vital for detecting all the players involved in determining gene expression.
Methods
Mixture of linear regressions
In the following we model the observed expression levels of all N genes using different linear combinations of the M regulatory signals associated with their promoters (i.e. binding affinities for various TFs and different histone modifications). To this end, let $y_i$ be the gene expression level of gene i (the dependent variable) and $x_i$ a corresponding vector of M regulatory signals (the regressor variables). The single linear regression model is then defined as

$y_i = b_0 + x_i B^T + \epsilon_i$, (1)

where $B = (b_1, \ldots, b_M)$ is a vector of regression coefficients and $\epsilon_i$ is an error term. For mathematical convenience, we redefine the vector of regressor variables as $x_i = (1, x_{i1}, \ldots, x_{iM})$ and prepend the bias parameter $b_0$ to $B$, that is $B = (b_0, b_1, \ldots, b_M)$. Assuming the error follows a Normal distribution with variance $\sigma^2$, the linear regression model has the distribution

$p(y_i \mid x_i, B, \sigma^2) = N(y_i \mid x_i B^T, \sigma^2)$. (2)
A mixture of linear regression models is defined as a convex combination of K such distributions,

$p(y_i \mid x_i, \Theta) = \sum_{k=1}^{K} \pi_k \, N(y_i \mid x_i B_k^T, \sigma_k^2)$, (3)

where $\Pi = (\pi_1, \ldots, \pi_K)$ are the mixture coefficients, which satisfy $\pi_k \geq 0$ and $\sum_{k=1}^{K} \pi_k = 1$, and $\Theta = (\Pi, B_1, \ldots, B_K, \sigma_1^2, \ldots, \sigma_K^2)$ collects all model parameters.
For given data X and Y, where X is a set of N observations $x_i$ and Y a vector of N observations $y_i$, the mixture of linear regression models can be estimated with the Expectation-Maximization (EM) algorithm [14,24]. We resort to maximum-a-posteriori (MAP) estimates of the parameters, as described in the next section, to avoid over-fitting [25]. The EM algorithm finds estimates $\Theta$ maximizing the posterior distribution over the data X and Y,

$p(\Theta \mid X, Y, Z) \propto p(\Theta)\, p(Y, Z \mid X, \Theta)$, (4)

where Z is the vector of hidden variables, with $z_i \in \{1, \ldots, K\}$ indicating which linear model observation i belongs to. $p(Y, Z \mid X, \Theta)$ is the complete-data likelihood,

$p(Y, Z \mid X, \Theta) = \prod_{i=1}^{N} \prod_{k=1}^{K} \left[ \pi_k \, N(y_i \mid x_i B_k^T, \sigma_k^2) \right]^{z_{ik}}$, (5)

with $z_{ik} = 1$ if $z_i = k$ and 0 otherwise. In the EM algorithm these indicators are replaced by their expectations $r_{ik}$, the posterior probability (or responsibility) [25] that observation i belongs to linear model k:

$r_{ik} = \frac{\pi_k \, N(y_i \mid x_i B_k^T, \sigma_k^2)}{\sum_{j=1}^{K} \pi_j \, N(y_i \mid x_i B_j^T, \sigma_j^2)}$. (6)
For further details on mixture models we refer the reader to [25].
The EM algorithm works by iteratively estimating the model assignments $r_{ik}$ and the model parameters $\Theta$ until some convergence criterion is reached. In the context of the mixture of linear regression models, we need estimates of the linear regression parameters $(B_k, \sigma_k^2)$; all other parameters $(r_{ik}, \Pi)$ follow the usual EM algorithm [25].
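The EM iteration just described can be sketched in a few lines of NumPy. This is a simplified, maximum-likelihood illustration without the Bayesian priors introduced in the next section; the initialization strategy (random or user-supplied responsibilities) is our own assumption, as the text does not specify one.

```python
import numpy as np

def fit_mixture_regression(X, y, K, n_iter=100, R0=None, seed=0):
    """EM for a K-component mixture of linear regressions (sketch).

    X: (N, M) regressor matrix (first column may be a bias of ones),
    y: (N,) responses.  R0 optionally supplies initial responsibilities.
    """
    rng = np.random.default_rng(seed)
    N, M = X.shape
    R = rng.dirichlet(np.ones(K), size=N) if R0 is None else R0.copy()
    B = np.zeros((K, M))
    sigma2 = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # M-step: weighted least squares for each component
        for k in range(K):
            w = R[:, k]
            XtW = X.T * w                                   # X^T diag(w)
            B[k] = np.linalg.solve(XtW @ X + 1e-8 * np.eye(M), XtW @ y)
            resid = y - X @ B[k]
            sigma2[k] = (w * resid**2).sum() / w.sum()
            pi[k] = w.sum() / N
        # E-step: responsibilities r_ik from the component densities
        log_p = np.empty((N, K))
        for k in range(K):
            resid = y - X @ B[k]
            log_p[:, k] = (np.log(pi[k])
                           - 0.5 * np.log(2 * np.pi * sigma2[k])
                           - 0.5 * resid**2 / sigma2[k])
        log_p -= log_p.max(axis=1, keepdims=True)           # numerical stability
        R = np.exp(log_p)
        R /= R.sum(axis=1, keepdims=True)
    return B, sigma2, pi, R
```

On synthetic data generated from two well-separated regression lines, the procedure recovers the component-wise coefficients.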
Once the mixture model is estimated, the predicted value $\hat{y}_i$ for a particular regressor observation $x_i$ is given by

$\hat{y}_i = \sum_{k=1}^{K} r_{ik} \, x_i B_k^T$. (7)

That is, the mixture prediction is the sum of the predictions of the individual components, each weighted by the posterior probability that observation i belongs to model k. In our particular application, we are interested in estimating the models themselves, that is, the coefficients indicating whether a regulatory signal plays an important repressive or activating role; this corresponds to an unsupervised learning problem. The predictions $\hat{y}$ are thereby only used to evaluate the fit of our model. In cases where one wants to predict the expression level of unseen genes, that is, to estimate $\hat{y}$ in a supervised learning setting, the above equation should not be used, as the posterior probabilities are based on the response variable y, which is usually unknown in a predictive scenario. In such a context, methods for combining predictors, such as [26], are required.
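The responsibility-weighted fitted values and the MSE used to compare models can be written compactly; the snippet below is an illustrative sketch, with shapes chosen to match the notation of this section.

```python
import numpy as np

def mixture_fit_values(X, B, R):
    """Responsibility-weighted fitted values: y_hat_i = sum_k r_ik (x_i . B_k).

    Only valid for evaluating fit, since the responsibilities R are
    computed from the observed responses y (see text).
    X: (N, M), B: (K, M), R: (N, K).
    """
    return (R * (X @ B.T)).sum(axis=1)

def mse(y, y_hat):
    """Mean square error used to compare the fitted models."""
    return np.mean((y - y_hat) ** 2)
```

With equal responsibilities, the fit is the plain average of the component predictions; with a hard assignment it reduces to the assigned component's prediction.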
Bayesian linear regression estimates
We resort to a Bayesian approach for obtaining MAP estimates of the linear regression models, as proposed in [27]. Thereby we avoid problems related to over-fitting, which usually occur with the EM algorithm and mixture models [25]. More formally, the prior distribution in Eq. 4 can be decomposed as

$p(\Theta) = p(\Pi) \prod_{k=1}^{K} p(B_k)$. (8)

We use the following conjugate prior for the regression coefficients $B_k$:

$p(B_k) = N(B_k \mid 0, \beta_k I)$, (9)

where 0 is a vector of M zeros, I is the M × M identity matrix and $\beta_k$ is a hyper-parameter.
Let $r_k$ be the N-dimensional vector $(r_{1k}, \ldots, r_{Nk})$ containing the posterior probabilities of the observations belonging to model k, and let $W_k = \mathrm{diag}(r_k)$. The estimates for model k maximizing Eq. 4 are then

$B_k = (X^T W_k X + \beta_k^{-1} I)^{-1} X^T W_k Y$. (10)

From Eq. 10 we can see that $\beta_k$ acts by shrinking the regression coefficients: a small $\beta_k$ imposes a stronger shrinkage. Furthermore, for $\beta_k \to \infty$ we have a non-informative prior and the regression coefficients reduce to the maximum likelihood estimates.
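The shrinkage behaviour can be seen directly in a small numerical sketch. The closed form below absorbs the noise variance into the hyper-parameter beta, which is an assumption of this illustration rather than something stated in the text.

```python
import numpy as np

def map_coefficients(X, y, w, beta):
    """Weighted MAP regression estimate under the prior N(0, beta * I):
    B = (X^T W X + beta^-1 I)^-1 X^T W y, with the noise variance absorbed
    into beta (sketch-level assumption).  beta -> inf gives the ML estimate."""
    M = X.shape[1]
    XtW = X.T * w                 # X^T diag(w)
    return np.linalg.solve(XtW @ X + (1.0 / beta) * np.eye(M), XtW @ y)
```

A large beta reproduces the ordinary weighted least-squares fit, while a small beta pulls the coefficients toward zero.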
We estimate the hyper-parameter $\beta_k$ with an Empirical Bayes approach,

$\beta_k = \frac{B_k B_k^T}{\gamma_k}, \qquad \gamma_k = \sum_{j=1}^{M} \frac{\lambda_j}{\lambda_j + \beta_k^{-1}}$, (11)

where $\lambda_j$ is the jth eigenvalue of the PCA decomposition of the weighted data matrix $X^T W_k X$ (see [27] for details). Note that $\beta_k$ requires the definition of $B_k$, which in our context is taken from the previous iteration of the EM algorithm.
For the mixing coefficients, we use a symmetric Dirichlet distribution as prior,

$p(\Pi) = \mathrm{Dir}(\Pi \mid \alpha) \propto \prod_{k=1}^{K} \pi_k^{\alpha - 1}$, (12)

where $\alpha$ is the hyper-parameter. Hence, the mixing coefficient estimates used by the EM algorithm are

$\pi_k = \frac{\sum_{i=1}^{N} r_{ik} + \alpha - 1}{N + K(\alpha - 1)}$. (13)

We use $\alpha = 2$, which avoids models with only a low number of observations assigned to them.
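The effect of the Dirichlet prior on the mixing-weight update can be illustrated with the textbook MAP formula for a symmetric Dirichlet(α) prior; the original text does not print its exact update, so the expression below is the standard one rather than a quotation.

```python
import numpy as np

def map_mixing_coefficients(R, alpha=2.0):
    """MAP mixing weights under a symmetric Dirichlet(alpha) prior:
    pi_k = (sum_i r_ik + alpha - 1) / (N + K * (alpha - 1)).
    With alpha = 2, a component keeps positive weight even when no
    observation is assigned to it."""
    N, K = R.shape
    return (R.sum(axis=0) + alpha - 1.0) / (N + K * (alpha - 1.0))
```

Even a component with zero total responsibility keeps a nonzero weight, which is the regularizing effect mentioned above.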
Transcription factor affinity
TF binding motifs are traditionally described in the form of position frequency matrices (PFMs). A PFM records how often each base occurs at a given position in the alignment of known binding sites of the TF. To predict the binding strength of a given TF to a promoter sequence we use the TRAP method [17]. In contrast to motif matching algorithms, which make a binary distinction between binding sites and non-binding sites, TRAP avoids this artificial separation and instead computes the probability of the TF binding site i in the sequence as

$p_i = \frac{R_0 \, e^{-\delta E_i(\lambda)}}{1 + R_0 \, e^{-\delta E_i(\lambda)}}$, (16)

where $\delta E_i(\lambda)$ is the energy difference between the state in which the factor is bound to site i and the state in which the factor is bound to its consensus site. This so-called mismatch energy is scaled by a parameter $\lambda$, previously determined to have an optimal value of 0.7 [17]. The second, transcription-factor-dependent parameter $R_0$ captures both the binding energy between the factor and its consensus site and the TF concentration. $R_0$ is derived for each PFM individually as

$R_0 = \exp(0.6 \cdot W - 6)$, (17)
where W is the number of columns in the PFM with information content exceeding 0.1 bits. Matrix positions falling below this information-content cutoff also do not contribute to the mismatch energy in Eq. 16. The nucleotide-dependent mismatch energy of each site in the promoter sequence is computed as

$\delta E_i(\lambda) = \frac{1}{\lambda} \sum_{j=1}^{W} \ln \frac{v_{j,\max}}{v_{j,\alpha}}$, (18)

where $v_{j,\max}$ is the frequency of the consensus base at position j of the PFM and $v_{j,\alpha}$ is the frequency of the observed base $\alpha$ at position j. Eventually, TRAP obtains the expected number N of TFs bound to the promoter by summing the individual probabilities over all L sites in the sequence:

$N = \sum_{i=1}^{L} p_i$. (19)
As input, TRAP requires for each TF a PFM suitable for computing the mismatch energies and a DNA sequence of interest (see [17] for details).
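The TRAP computation described above can be combined into a small affinity calculator. This is a single-strand simplification for illustration only: the published TRAP implementation also scores the reverse complement, and the pseudo-frequency used here for bases unseen in a PFM column is our own assumption.

```python
import math

def trap_affinity(seq, pfm, lam=0.7):
    """Single-strand sketch of the TRAP affinity (cf. Eqs. 16-19).

    pfm: one dict {base: frequency} per motif column.  Columns with
    information content below 0.1 bits contribute neither to W nor to
    the mismatch energy, as stated in the text.
    """
    def info_content(col):
        # information content of one column, 4-letter alphabet
        return 2.0 + sum(p * math.log2(p) for p in col.values() if p > 0)

    keep = [info_content(col) >= 0.1 for col in pfm]
    W = sum(keep)
    R0 = math.exp(0.6 * W - 6.0)                         # R_0 for this PFM
    total = 0.0
    for i in range(len(seq) - len(pfm) + 1):             # all L sites
        dE = 0.0
        for j, col in enumerate(pfm):
            if not keep[j]:
                continue
            v_max = max(col.values())
            v_obs = max(col.get(seq[i + j], 0.0), 1e-4)  # assumed pseudo-frequency
            dE += math.log(v_max / v_obs)
        dE /= lam                                        # mismatch energy, scaled
        x = R0 * math.exp(-dE)
        total += x / (1.0 + x)                           # binding probability of site i
    return total
```

A consensus match scores much higher than a complete mismatch, while the sigmoid form keeps each site's contribution between 0 and 1.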
For our study we use a selection of 102 PFMs from the Transfac database, version 11.1 [28], corresponding to TFs involved in lymphoid development (see Additional File 1 for the TF list). As we are mainly interested in binding sites near the promoter, the analysis was based on the 200 base pairs upstream of the transcription start site (TSS) of each gene. We restrict the analysis to genes with normalized CpG content < 0.5 in their promoter sequence [21], as such genes tend to be expressed in a tissue- and stage-specific way. Finally, we calculate the affinity (Eq. 19) for all selected genes and PFMs. This yields the matrix X containing the TF binding data, where $x_{ij}$ corresponds to the affinity of TF j to the promoter of gene i.
T-cell gene expression and histone modification data
We use the gene expression and histone modification data from Th1, Th2, Th17 and iTreg cells published by [16]. The histone modification data was measured by ChIP-Seq on the Illumina platform. We used the CisGenome tool [29] to align the sequence reads and to detect peaks. As we are only interested in modifications near the promoter, we consider the region from 8000 bps upstream to 2000 bps downstream of the TSS and keep the tag count of the highest peak. Finally, we add a pseudo count to avoid zero values and apply a log transform. This yields the matrix X containing the histone modification data, where $x_{ij}$ corresponds to the number of ChIP-Seq tags of histone modification j mapped to the promoter of gene i.
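The promoter-level summarization just described can be sketched as follows; the peak representation and coordinate handling are assumptions of this illustration, since the text does not spell them out.

```python
import math

def hm_promoter_signal(peaks, tss, up=8000, down=2000, pseudo=1.0):
    """Summarize one histone modification at one promoter: keep the tag
    count of the highest peak within [TSS - 8000, TSS + 2000] and apply a
    log transform with a pseudo count, as described in the text.

    peaks: iterable of (position, tag_count) pairs from the peak caller;
    positions are assumed to share the TSS coordinate system.
    """
    counts = [c for pos, c in peaks if tss - up <= pos <= tss + down]
    best = max(counts) if counts else 0.0
    return math.log(best + pseudo)
```

Promoters with no peak in the window receive log(pseudo), i.e. zero with the default pseudo count of one.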
The expression data was measured with Affymetrix 430 chips. The raw data was normalized using the variance stabilization method of [30], and expression values in each tissue were centered to zero mean. Microarray probes were mapped to ENSEMBL gene identifiers with the help of the biomart tool [31]. We kept all genes whose expression was measured by multiple probe sets. In the following, we restrict our analysis to the 6154 genes with low CpG content for which both gene expression and histone modification data are available. The final data sets used in this analysis can be found at http://www.cin.ufpe.br/~igcf/MixLin.
Experimental design
We model gene expression in four different T helper cell types (Th1, Th2, Th17 and iTreg) using either transcription factor affinities (TF), histone modifications (HM) or both regulatory signals combined (HM+TF). As a parameter of our method, we vary the number of linear models, K, from 1 to 6. To select the optimal model for each cell type, we first perform 10-fold cross-validation for each parameter setting and then estimate the mean square errors (MSE) on the validation sets. As the MSE tends to decrease with higher K [32], we use a model selection procedure, the Bayesian Information Criterion (BIC) [25], to indicate the optimal number of models. The method has been implemented with PyMix [33] and is freely available at http://www.pymix.org.
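The BIC-based choice of K can be sketched as below. The parameter count for a mixture of linear regressions is our own accounting (K coefficient vectors of length M, K variances, K-1 free mixing weights) and may differ from PyMix's internals.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def n_mixture_params(K, M):
    """Free parameters of a K-component mixture of linear regressions
    with M coefficients (incl. bias) per component."""
    return K * M + K + (K - 1)

def select_K(loglik_by_K, M, n_obs):
    """Pick the K whose fitted log-likelihood gives the smallest BIC."""
    return min(loglik_by_K,
               key=lambda K: bic(loglik_by_K[K], n_mixture_params(K, M), n_obs))
```

The penalty term grows with K, so a small likelihood gain from extra components is not enough to justify them.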
Authors' contributions
IGC, RH, TGR implemented the approach and performed the experiments. IGC, RH and FATC designed the study and evaluated the results. All authors wrote the manuscript. All authors read and approved the
final manuscript.
Competing interests
The authors declare that they have no competing interests.
Supplementary Material
Additional file 1:
Supplementary Figures and Tables This file contains additional Figures and Tables.
Acknowledgements and funding
This work has been partially supported by Brazilian research agencies: FACEPE, CNPq and CAPES.
This article has been published as part of BMC Bioinformatics Volume 12 Supplement 1, 2011: Selected articles from the Ninth Asia Pacific Bioinformatics Conference (APBC 2011). The full contents of
the supplement are available online at http://www.biomedcentral.com/1471-2105/12?issue=S1.
• Zhu J, Paul WE. CD4 T cells: fates, functions, and faults. Blood. 2008;112(5):1557–1569. doi: 10.1182/blood-2008-05-078154. [PMC free article] [PubMed] [Cross Ref]
• Goldberg AD, Allis CD, Bernstein E. Epigenetics: a landscape takes shape. Cell. 2007;128(4):635–638. doi: 10.1016/j.cell.2007.02.006. [PubMed] [Cross Ref]
• Kouzarides T. Chromatin modifications and their function. Cell. 2007;128(4):693–705. doi: 10.1016/j.cell.2007.02.005. [PubMed] [Cross Ref]
• Turner BM. Defining an epigenetic code. Nat Cell Biol. 2007;9:2–6. doi: 10.1038/ncb0107-2. [PubMed] [Cross Ref]
• Bibikova M, Laurent LC, Ren B, Loring JF, Fan JB. Unraveling epigenetic regulation in embryonic stem cells. Cell Stem Cell. 2008;2(2):123–134. doi: 10.1016/j.stem.2008.01.005. [PubMed] [Cross Ref]
• Schoenborn JR, Dorschner MO, Sekimata M, Santer DM, Shnyreva M, Fitzpatrick DR, Stamatoyannopoulos JA, Stamatoyonnapoulos JA, Wilson CB. Comprehensive epigenetic profiling identifies multiple
distal regulatory elements directing transcription of the gene encoding interferon-gamma. Nat Immunol. 2007;8(7):732–742. doi: 10.1038/ni1474. [PMC free article] [PubMed] [Cross Ref]
• Costa IG, Roepcke S, Schliep A. Gene expression trees in lymphoid development. BMC Immunol. 2007;8:25. doi: 10.1186/1471-2172-8-25. [PMC free article] [PubMed] [Cross Ref]
• Costa IG, Roepcke S, Hafemeister C, Schliep A. Inferring differentiation pathways from gene expression. Bioinformatics. 2008;24(13):i156–i164. doi: 10.1093/bioinformatics/btn153. [PMC free
article] [PubMed] [Cross Ref]
• Bussemaker HJ, Foat BC, Ward LD. Predictive modeling of genome-wide mRNA expression: from modules to molecules. Annu Rev Biophys Biomol Struct. 2007;36:329–347. doi: 10.1146/
annurev.biophys.36.040306.132725. [PubMed] [Cross Ref]
• Bussemaker HJ, Li H, Siggia ED. Regulatory element detection using correlation with expression. Nat Genet. 2001;27(2):167–171. doi: 10.1038/84792. [PubMed] [Cross Ref]
• Keles S, van der Laan M, Eisen MB. Identification of regulatory elements using a feature selection method. Bioinformatics. 2002;18(9):1167–1175. doi: 10.1093/bioinformatics/18.9.1167. [PubMed] [
Cross Ref]
• Karlic R, Chung HR, Lasserre J, Vlahovicek K, Vingron M. Histone modification levels are predictive for gene expression. Proc Natl Acad Sci U S A. 2010;107(7):2926–2931. doi: 10.1073/
pnas.0909344107. [PMC free article] [PubMed] [Cross Ref]
• Woolf E, Xiao C, Fainaru O, Lotem J, Rosen D, Negreanu V, Bernstein Y, Goldenberg D, Brenner O, Berke G, Levanon D, Groner Y. Runx3 and Runx1 are required for CD8 T cell development during
thymopoiesis. Proc Natl Acad Sci U S A. 2003;100(13):7731–7736. doi: 10.1073/pnas.1232420100. [PMC free article] [PubMed] [Cross Ref]
• DeSarbo W, Cron W. A maximum likelihood methodology for clusterwise linear regression. Journal of Classification. 1988;5(2):249–282. doi: 10.1007/BF01897167. [Cross Ref]
• Hinton GE, Revow M, Dayan P. In: NIPS. Tesauro G, Touretzky DS, Leen TK, editor. MIT Press; 1994. Recognizing Handwritten Digits Using Mixtures of Linear Models; pp. 1015–1022.
• Wei G, Wei L, Zhu J, Zang C, Hu-Li J, Yao Z, Cui K, Kanno Y, Roh TY, Watford WT, Schones DE, Peng W, Sun HW, Paul WE, O’Shea JJ, Zhao K. Global mapping of H3K4me3 and H3K27me3 reveals specificity
and plasticity in lineage fate determination of differentiating CD4+ T cells. Immunity. 2009;30:155–167. doi: 10.1016/j.immuni.2008.12.009. [PMC free article] [PubMed] [Cross Ref]
• Roider HG, Kanhere A, Manke T, Vingron M. Predicting transcription factor affinities to DNA from a biophysical model. Bioinformatics. 2007;23(2):134–141. doi: 10.1093/bioinformatics/btl565. [
PubMed] [Cross Ref]
• Barreda DR, Belosevic M. Transcriptional regulation of hemopoiesis. Dev Comp Immunol. 2001;25(8-9):763–789. doi: 10.1016/S0145-305X(01)00035-0. [PubMed] [Cross Ref]
• Matthias P, Rolink AG. Transcriptional networks in developing and mature B cells. Nat Rev Immunol. 2005;5(6):497–508. doi: 10.1038/nri1633. [PubMed] [Cross Ref]
• Rothenberg EV, Moore JE, Yui MA. Launching the T-cell-lineage developmental programme. Nat Rev Immunol. 2008;8:9–21. doi: 10.1038/nri2232. [PMC free article] [PubMed] [Cross Ref]
• Roider HG, Lenhard B, Kanhere A, Haas SA, Vingron M. CpG-depleted promoters harbor tissue-specific transcription factor binding signals-implications for motif overrepresentation analyses. Nucleic
Acids Res. 2009;37(19):6305–6315. doi: 10.1093/nar/gkp682. [PMC free article] [PubMed] [Cross Ref]
• Mo X, Kowenz-Leutz E, Laumonnier Y, Xu H, Leutz A. Histone H3 tail positioning and acetylation by the c-Myb but not the v-Myb DNA-binding SANT domain. Genes Dev. 2005;19(20):2447–2457. doi:
10.1101/gad.355405. [PMC free article] [PubMed] [Cross Ref]
• Zou H, Hastie T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B. 2005;67(2):301–320. doi: 10.1111/j.1467-9868.2005.00503.x. [Cross Ref]
• Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B. 1977;39:1–38.
• McLachlan GJ, Peel D. Finite Mixture Models. Wiley Series in Probability and Statistics., Wiley, New York; 2000.
• Breiman L. Bagging Predictors. Machine Learning. 1996. pp. 123–140.
• MacKay DJC. Bayesian Interpolation. Neural Computation. 1992;4(3):415–447. doi: 10.1162/neco.1992.4.3.415. [Cross Ref]
• Matys V, Fricke E, Geffers R, Gössling E, Haubrock M, Hehl R, Hornischer K, Karas D, Kel AE, Kel-Margoulis OV, Kloos DUU, Land S, Lewicki-Potapov B, Michael H, Münch R, Reuter I, Rotert S, Saxel
H, Scheer M, Thiele S, Wingender E. TRANSFAC: transcriptional regulation, from patterns to profiles. Nucleic acids research. 2003;31:374–378. doi: 10.1093/nar/gkg108. [PMC free article] [PubMed]
[Cross Ref]
• Ji H, Jiang H, Ma W, Johnson DS, Myers RM, Wong WH. An integrated software system for analyzing ChIP-chip and ChIP-seq data. Nat Biotechnol. 2008;26(11):1293–1300. doi: 10.1038/nbt.1505. [PMC
free article] [PubMed] [Cross Ref]
• Huber W, von Heydebreck A, Sültmann H, Poustka A, Vingron M. Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics.
2002;18(Suppl 1):S96–104. [PubMed]
• Smedley D, Haider S, Ballester B, Holland R, London D, Thorisson G, Kasprzyk A. BioMart-biological queries made easy. BMC Genomics. 2009;10:22. doi: 10.1186/1471-2164-10-22. [PMC free article] [
PubMed] [Cross Ref]
• Brusco MJ, Cradit JD, Steinley D, Fox GL. Cautionary Remarks on the Use of Clusterwise Regression. Multivariate Behavioral Research. 2008;43:29–49. doi: 10.1080/00273170701836653. [Cross Ref]
• Georgi B, Costa IG, Schliep A. PyMix - The Python mixture package - a tool for clustering of heterogeneous biological data. BMC Bioinformatics. 2010;11:9. doi: 10.1186/1471-2105-11-9. [PMC free
article] [PubMed] [Cross Ref]
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
ALEX Lesson Plans
Subject: Mathematics (9 - 12), or Science (9 - 12)
Title: Minerals
Description: The students will gain information on the five characteristics of minerals. The information can be related to nonrenewable resources. This lesson should facilitate discussion of the difference between precious and semi-precious gems. This lesson plan was created as a result of the Girls Engaged in Math and Science (GEMS) Project, funded by the Malone Family Foundation.
Subject: Mathematics (9 - 12)
Title: Platonic Solids Ornaments
Description: This is a hands-on activity that introduces students to the five Platonic solids. Students will discover the special relationship between faces, vertices, and edges. Students will
research the Platonic solids and then construct and decorate Platonic solids ornaments.
Thinkfinity Lesson Plans
Subject: Mathematics
Title: Polygon Capture: A Geometry Game
Description: In this lesson, from Illuminations, students classify polygons according to more than one property at a time. They move from a simple description of shapes to an analysis of how
properties are related, all in the context of an enjoyable game
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Building Using the Front-Right-Top View
Description: In this lesson, one of a multi-part unit from Illuminations, students explore drawing the front-right-top view when given a three dimensional figure built from cubes. Students also
explore building a three dimensional figure when given the front-right-top view.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Using Cubes and Isometric Drawings
Description: In this unit of six lessons, from Illuminations, students explore polyhedra using different representations and perspectives for three dimensional block figures. In addition, students
examine area and volume concepts for block figures within this context.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Exploring the Isometric Drawing Tool
Description: In this lesson, one of a multi-part unit from Illuminations, students explore using an isometric drawing tool and gain practice and experience in manipulating drawings. They explore
polyhedra using different representations and perspectives.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Cubes Everywhere
Description: In this Illuminations lesson, students use cubes to develop spatial thinking and review basic geometric principles through real-life applications. Students are given the opportunity to
build and take apart structures based on cubes.
Thinkfinity Partner: Illuminations
Grade Span: 6,7,8
Subject: Mathematics
Title: Soda Cans
Description: This reproducible activity sheet, from an Illuminations lesson, guides students through a simulation in which they try different arrangements to make the most efficient use of space and
thus pack the most soda cans into a rectangular packing box.
Thinkfinity Partner: Illuminations
Grade Span: 9,10,11,12
Web Resources
Geometry Pad App
Create geometric shapes, explore/change their properties, and calculate metrics.
Thinkfinity Learning Activities
Subject: Mathematics
Title: Isometric Drawing Tool
Description: Create dynamic drawings on isometric dot paper with this interactive tool. Draw 2-D and 3-D figures using edges, faces, or cubes that you can shift, rotate, color, or decompose.
Thinkfinity Partner: Illuminations
Grade Span: 3,4,5,6,7,8,9,10,11,12
Subject: Mathematics
Title: Geometric Solids
Description: This student interactive, from Illuminations, allows students to explore various geometric solids and their properties. Students can virtually manipulate and color each solid to
investigate properties such as the number of faces, edges, and vertices. Students can explore tetrahedrons, cubes, octahedrons, dodecahedrons, icosahedrons, and irregular polyhedrons.
Thinkfinity Partner: Illuminations
Grade Span: K,1,2,3,4,5,6,7,8,9,10,11,12 | {"url":"http://alex.state.al.us/all.php?std_id=54235","timestamp":"2014-04-19T02:08:41Z","content_type":null,"content_length":"73865","record_id":"<urn:uuid:29a4ee50-480a-4958-8987-283e47fde4fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calc. Word Problems
heya, I'm absolutely horrible with Calculus word problems and was wondering if anyone could help me with these. http://i304.photobucket.com/albums/n...0/calculus.jpg
#148 We are told that $\frac{\,dV}{\,dt}$ is inversely proportional to the square of $t+1$. This tells us that $\frac{\,dV}{\,dt}=\frac{k}{(t+1)^2}$, where $k$ is a constant of proportionality. You need to solve this differential equation, and the best way to do so would be with separation of variables: $\frac{\,dV}{\,dt}=\frac{k}{(t+1)^2}\implies\,dV=k\frac{\,dt}{(t+1)^2}$. Thus, we see that $V=-\frac{k}{t+1}+C$.

Now, this is where two conditions come into play. The first condition, "The initial value of the machine was $500,000," is saying that $V(0)=500000$. Applying this condition to the equation we have for $V$, we see that $500000=-k+C$. The second condition, "Its value decreased by $100,000 in the first year," is saying that $V(1)=400000$. Applying this condition to the equation we have for $V$, we see that $400000=-\frac{k}{2}+C$.

We have to solve this system for $k$ and $C$: $\left\{\begin{array}{rcr}-k+C&=&500000\\-\frac{1}{2}k+C&=&400000\end{array}\right.$ I leave it for you to verify that $k=-200000$ and $C=300000$. Thus, our equation for $V$ is $V(t)=\frac{200000}{t+1}+300000$. Now all you have to do is find $V(4)$: $V(4)=\frac{200000}{(4)+1}+300000=\dots$

# 96 This question is similar to the first one here. Following the same idea, we see that the equation modeling the number of sales per week is $\frac{\,dS}{\,dt}=\frac{k}{t}$, where $k$ is the constant of proportionality. Using separation of variables, we see that $\,dS=\frac{k}{t}\,dt\implies S=k\ln|t|+C$. We are given two conditions: $S(2)=200$ and $S(4)=300$. Use a similar process to #148 to get an equation for $S(t)$.

# 68 Set up the integral: $\int_0^{\ln\sqrt{3}}\frac{e^x}{1+e^{2x}}\,dx$. Make the substitution $u=e^x$. I leave it for you to verify that $\int_0^{\ln\sqrt{3}}\frac{e^x}{1+e^{2x}}\,dx=\int_1^{\sqrt{3}}\frac{\,du}{1+u^2}$. Then evaluate the integral. I hope this makes sense! (Sun) --Chris | {"url":"http://mathhelpforum.com/differential-equations/48016-calc-word-problems-print.html","timestamp":"2014-04-21T04:42:47Z","content_type":null,"content_length":"11563","record_id":"<urn:uuid:b346aff6-d246-4170-a247-12193994048e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
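The #148 reply above can be sanity-checked numerically; a quick sketch (not part of the original answer):

```python
# Sanity check of the #148 solution: V(t) = -k/(t+1) + C
# with conditions V(0) = 500000 and V(1) = 400000.
k = -2 * (500000 - 400000)   # subtracting the two conditions gives -k/2 = 100000
C = 500000 + k               # back-substitute into -k + C = 500000

def V(t):
    return -k / (t + 1) + C

print(k, C, V(4))   # -200000 300000 340000.0
```

Both conditions are reproduced exactly, and the value after four years comes out to $340,000.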
Properties of Angles and Triangles. Classification of Angles. Classification of Triangles
Geometry involves the understanding of geometric language. It's important to learn the properties for figures. This gallery classifies angles and names triangles by their sides and angles.
Angles: Acute Angle, Obtuse Angle, Reflex Angle, Right Angle, Straight Angle
Triangles: Acute Triangle, Equilateral Triangle, Isosceles Triangle, Obtuse Triangle, Right Triangle, Scalene Triangle | {"url":"http://math.about.com/od/geometry/ig/Angles-and-Triangles/","timestamp":"2014-04-18T18:11:27Z","content_type":null,"content_length":"37908","record_id":"<urn:uuid:9b96ba96-3506-4f35-a984-e09117b79fa3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
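The angle categories in the gallery follow directly from the degree measure; a minimal classifier sketch:

```python
def classify_angle(degrees):
    """Name an angle by its measure, matching the gallery's five categories."""
    if 0 < degrees < 90:
        return "acute"
    if degrees == 90:
        return "right"
    if 90 < degrees < 180:
        return "obtuse"
    if degrees == 180:
        return "straight"
    if 180 < degrees < 360:
        return "reflex"
    raise ValueError("expected a measure strictly between 0 and 360 degrees")

print([classify_angle(d) for d in (45, 90, 120, 180, 270)])
# -> ['acute', 'right', 'obtuse', 'straight', 'reflex']
```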
Volume of Solid of Revolution
June 8th 2009, 10:52 AM #1
Mar 2009
[Solved]Volume of Solid of Revolution
Hello guys,
I'm given the equation but I keep on getting the wrong answer:
Here's the problem.
f(x) = (2x+1)^(0.5), y=0, x=1, x=2.
2x+1 is under the square root sign.
Can anyone show how they got the answer?
thanks in advance
Solved. If mods want to delete this feel free
Last edited by Redeemer_Pie; June 8th 2009 at 11:03 AM.
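Since the working never made it into the thread before it was marked solved, here is a disk-method sketch of the computation (not the original poster's code): $V=\pi\int_1^2\left(\sqrt{2x+1}\right)^2\,dx=\pi\int_1^2(2x+1)\,dx=\pi\left[x^2+x\right]_1^2=4\pi$.

```python
import math

# Disk method for f(x) = sqrt(2x+1) rotated about the x-axis on [1, 2]:
#   V = pi * integral of f(x)^2 = pi * integral of (2x + 1) = 4*pi.
# Midpoint Riemann sum as a cross-check (exact here, since the integrand is linear):
a, b, n = 1.0, 2.0, 1000
dx = (b - a) / n
V = math.pi * sum((2 * (a + (i + 0.5) * dx) + 1) * dx for i in range(n))

print(V, 4 * math.pi)   # both approximately 12.566
```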
{"url":"http://mathhelpforum.com/calculus/92200-volume-solid-revolution.html","timestamp":"2014-04-19T17:14:31Z","content_type":null,"content_length":"28987","record_id":"<urn:uuid:0c47495e-d9f8-421a-86c8-3201c32d8013>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximizing the number of 'correct' literals in planar monotone 3SAT
I'm trying to find the complexity of this optimization problem:
Given an instance of planar monotone 3SAT, with positive clauses $C_i = v_{i1} \vee v_{i2} \vee v_{i3}$ and negative clauses $D_i = \neg w_{i1} \vee \neg w_{i2} \vee \neg w_{i3}$ (it's possible that $v_{ij} = w_{kl}$ for some $i,j,k,l$), find the 0-1 assignment that maximizes the number of agreeing literals.
Basically, maximize $\left(\sum_{C_i} v_{i1}+v_{i2}+v_{i3}\right) + \left(\sum_{D_i} 3 - w_{i1} - w_{i2} - w_{i3}\right)$.
lo.logic co.combinatorics
1 It should be polynomial time to maximize the given sum. Add both sums together and find which literals occur negatively more often than positively, and vice versa. Unless you meant something else?
Gerhard "Ask Me About System Design" Paseman, 2011.03.02 – Gerhard Paseman Mar 3 '11 at 1:12
1 If you ask for an assignment for which the instance evaluates to 1, it is probably something like #P-hard to get one and maximize the sum as well. Gerhard "Ask Me About System Design" Paseman,
2011.03.02 – Gerhard Paseman Mar 3 '11 at 1:14
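Gerhard's first comment can be made concrete: each variable's contribution to the sum is independent of the others, so a per-variable majority vote over literal occurrences maximizes the number of agreeing literals. A sketch on a made-up toy instance (the clause data below is illustrative, not from the question):

```python
from collections import Counter

# C_i: clauses whose literals are all positive; D_i: all negated.
pos_clauses = [("a", "b", "c"), ("a", "b", "d")]
neg_clauses = [("a", "c", "d")]

pos = Counter(v for cl in pos_clauses for v in cl)
neg = Counter(v for cl in neg_clauses for v in cl)
# Set each variable true iff it occurs positively at least as often as negatively.
assignment = {v: pos[v] >= neg[v] for v in set(pos) | set(neg)}

# Agreeing literals: true literals in positive clauses + false literals in negative ones.
score = sum(assignment[v] for cl in pos_clauses for v in cl) \
      + sum(not assignment[v] for cl in neg_clauses for v in cl)
print(score)   # 6 of the 9 literals agree, which is optimal for this instance
```

Note this only maximizes the sum; additionally requiring the instance to evaluate to true is a different (and, per the second comment, likely much harder) problem.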
{"url":"http://mathoverflow.net/questions/57190/maximizing-the-number-of-correct-literals-in-planar-monotone-3sat","timestamp":"2014-04-19T09:52:40Z","content_type":null,"content_length":"48704","record_id":"<urn:uuid:57d184a9-aba4-42a5-b50b-3c82f7962474>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
If physical objects are in motion they have KE. But if they are not... do they have PE? In a way, all objects in the universe do have energy, and I think energy is not a constant value in that
sense, is it? Also... everything around us that's not in motion has potential energy... and only when a force is applied to it can it be converted to kinetic energy, and so on?
@Jemurray3 @experimentX @ghazi @Carl_Pham
KE is 1/2(mv^2) so if an object is moving then it has KE. PE just means that it has stored energy but is not using it.
@Fellowroot yea, so basically every object has PE or it's PE is used in KE yea? So every physical object has one of those to in motion or not. KE or PE ?
The law of Conservation of Energy states that the total energy in an isolated system is the sum of all the energies within it. Since energy cannot be created or destroyed, the sum of energy
within an isolated system must be constant. Mathematically, that means for any object: \[\frac{1}{2}mv_i^2+mgh_i=\frac{1}{2}mv_f^2+mgh_f\] Note that objects that are moving can have PE as well as
KE. An example of this would be a moving projectile which has KE due to its motion and PE due to its position (and gravity).
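The conservation statement above can be checked numerically; a quick sketch with made-up numbers, taking g = 10 m/s² for simplicity:

```python
# A 2 kg object released from rest at 10 m (no air resistance, g = 10):
m, g = 2.0, 10.0
h_i, v_i = 10.0, 0.0

E_i = 0.5 * m * v_i**2 + m * g * h_i      # total mechanical energy at release
h_f = 4.0
v_f = (2 * g * (h_i - h_f)) ** 0.5        # speed after falling to 4 m, from kinematics
E_f = 0.5 * m * v_f**2 + m * g * h_f

print(E_i, E_f)   # both approximately 200 J: KE grows exactly as PE shrinks
```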
True, its really gains KE with its PE. So thats just adding up all te energies in that one isolated system. However, my point is even objects that don't have KE still at rest have PE. @Shane_B
"However, my point is even objects that don't have KE still at rest have PE. " Absolutely true.
If you think about it, on a universal scale, everything is moving so in that system, everything has KE.
If I'm holding a bowling ball, it looks like it's at rest to me. No kinetic energy. Somebody is running by, and they see a bowling ball whizzing by with some velocity, so they calculate a kinetic
energy. Who's right?
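The bowling-ball question can be put in numbers (illustrative values): kinetic energy is frame-dependent, so both observers are right in their own frames.

```python
# KE of the same ball measured by two observers.
m = 7.0   # kg, roughly a bowling ball (illustrative value)
u = 3.0   # runner's speed relative to me, m/s

KE_my_frame = 0.5 * m * 0.0**2      # ball at rest in my frame
KE_runner_frame = 0.5 * m * u**2    # in the runner's frame the ball moves at 3 m/s

print(KE_my_frame, KE_runner_frame)   # 0.0 J vs 31.5 J
```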
That's why when we do calcs we always do them from 1 reference frame :)
Since we're speaking nonrelativistically, the point I'm trying to make is that the total energy possessed by an object or system is different for different observers. If the ball is sitting on
the ground at my feet at the top of a mountain, someone at the bottom would calculate a potential energy but I wouldn't see any particular reason to give it any, and somebody flying by in a plane
may well give it both kinetic and potential energy.
Potential energy is a property of systems, not of objects. If I have two identical blocks, one higher than the other, the fact that we say that one has higher gravitational potential energy
doesn't make them somehow distinguishable in any meaningful way, only that if we release them, when they hit the ground, one will be moving faster than the other. In that sense, potential energy
is not a directly meaningful quantity. It serves merely as a construct from which we might deduce the behavior of a system.
I mean youre always gonna have electric potential energy on a microscopic level
Yeah, but I could add 100 joules to whatever electric potential energy you calculate and my physics would work just as well.
@Jemurray3 True, Im just saying that most example's of PE is about a ball or a rock on top of a hill,mountain,building, etc... Truth is the rock in any condition has PE. All physical objects that
work in a "system" has PE. If not, you can't move or do anything... Everything in the universe has the two main forms of energy in motion KE or at rest really having PE. I want to generalize all
objects at rest, you're bike doing nothing has PE your're table holding you're PC/Laptop etc... Has PE only when a force is acted on it/them only then its converted to KE and it still has PE in
its motion you see where I'm going here?
I mean yea potential energy can be defined at whatever zero you want but @Shane_B 's equation is wrong potential energy is more than just gravitational potential energy it follows the law of
Energy as we mostly agreed earlier (in that older question) is just the ability for a thing to do anything... Since energy is the capability for a lot, so EVERYTHING in the universe has that
ability... If its in motion KE if at rest simply PE :P that how I feel about the matter really. Im just trying to prove the point that all objects at rest what even orientation they are in...
Have PE. I do agree that some have a higher PE and some a lower one but still they have PE it can never be zero can it?
correct in a real system the potential energy wouldn't be zero but in an introductory physics text book where all you had to account for was mgy then i guess it would sadly :/
It can be zero...
If I define potential energy relative to the ground, and there's a ball sitting on the ground, its potential energy is zero.
ok but what about external forces on the system. U=-W, and all electrons exert forces between each other, that would never be zero?
of course i would define my \[U_0=0 \] at \[r=\infty\]
I'll define mine so that on the average the particles in the ball have potential energy equal to zero.
then you're arguing on a base of relativity in a reference frame that has no application in any real physics.
That is just as valid as your reference frame, yes.
So, in what way am I incorrect in my assertion that at least some of the objects in my system have potential energy equal to zero?
I mean yeah of course i can just define my coordinate axis to be s,j,l but i don't because x,y,z or i,j,k is standard in most practices. I'm answering this question as if this person is actually
trying to apply it to any real problem.
The question does not pertain to any physical situation in particular. It is a general question, essentially "Does every object in the universe have energy?". The answer to that question is that
the question itself is not physically meaningful. The numerical value of energy is not a given thing (again, nonrelativistically...). Whether I calculate the total energy of a system to be 10 or
100 or 1000000 or precisely zero has absolutely no physical meaning or relevance. The only things of importance are (a) in closed systems, whatever value I calculate the total energy to be stays
the same all the time, and (b) the spatial gradients of the respective potential energies determine the value of the forces exerted on my system.
Its a not a practical answer. Im a realist, or an engineer whatever you want to call it so numbers count because physics is practically seen through a reference frame. The air around me is moving
in a flow that has a vector field. its moving over objects statically in equilibrium it exerts a force on that object and therefore changes the potential energy.You can therefore say in a
relativistic perspective that as \[t= \infty\] normal forces and friction forces and all that will keep the entirety of the system as a whole at 0. but you would eventually have to define the
system to be the entire universe which is physically irrelevant because the universe is ever expanding.
@Jemurray3 You do agree at a point that everything in the universe has energy right?
Everything that is capable of doing work...
I wanted to state out the point if an object is at rest it still has the potential to do work. Or specifically the "Ability" to do work only when a force is acted on it to do work...
How is it possible of an object to have ZERO energy? Explain please.
yes and energy cant be created or destroyed just transferred so work is always being done and in order to discount all internal forces and have work equal to 0 your system would have to be the
entire universe which is of no practical use really
Either you are misunderstanding my argument or you have a fundamental misconception about how the universe works. I am not saying that numbers don't matter, and I would appreciate it if you did
not assume that I am too high up on my pedestal to appreciate the fact that physics is applied to real life situations. All I'm saying is nowhere in the history of physics or engineering have you
or anybody else used the numerical value of energy ITSELF in any physically meaningful calculation.
If you disagree, give me an example.
And please make it nonrelativistic since that's the entire premise of my argument.
what do you mean i want to see what potential an object has, that is the literral potential an object has to do work around its surroundings. capacitance is one example, electrical engineering's
bread and butter. you want to find the potential something has.
Group hug.
No, you want to find the potential something has relative to some arbitrarily defined zero.
Sorry for hijacking your discussion, by the way...
it's not arbitrarily defined....you said what use is it for. like i said again unless you define your system to be an ever expanding entity you're always going to have an external force related
to work due to the deltar of even one electron which is related to U by deltaU=-W, which means whatever zero you defined at t=0 U will not =0. sorry for the sloppy equations im sleepy and need to
do stop this pointless argument
after time deltat
It's not a pointless argument. It speaks to a deep point in physics, which is that only differences in energy are physically relevant, not the numerical value of energy itself. I didn't ask why
its useful to calculate potential energy, I said that you have never EVER said "The value of the potential energy is ______ and therefore I can use it to calculate ______" without defining the
potential energy to have some zero point that either a) makes the problem easier to solve or b) you just pull out of a hat. It's subtle, admittedly, but it's important.
@Jemurray3 Im really really lost here... What going on? :S
I agree with you with the point energy has no constant value like 10 or 1000 or 1000000x1000000x10000 to the power of 10000 lol, but Im just trying to say don't all objects have energy? :s
I don't understand what @AEsocooldood saying too... There is a confusing argument here that I think just want far far far beyond :S
Let me try to give you a better example of what I'm saying. Take g=10 m/s^2 for the moment. Let's say I have a 1 kg object that's sitting at rest at a height of 1 meter. Then, we would say it has
kinetic energy = 0, potential energy = 10 J.
Yes all objects have energy. there's a car moving somewhere in this world so it has energy.so E notequalto 0. since energy can't be created or destroyed by the fundamental law of conservation of
energy e cannot equal 0 unless youre in some sort of void 0-D crazy physics this whack job is talking about.
@Jemurray3 is pulling the hypercube 17 dimensions on us man get outta here go preach your physics to...oh wait. no practical value to anyone. everything's 0 in my dimension.
I've tried to be polite and calm but your ignorance is reaching such staggering levels that I'm beginning to lose my patience. The question being addressed is rather fundamental in nature and so
I thought it warranted an answer that did not take shortcuts or speak at the level of a five year old incoherently reading off random sentences from an introductory physics textbook. Before you
start prattling off about conservation of energy and the expanding universe you may want to crack a book that doesn't have quite as many pictures in it.
@Jemurray3 Calm down @AEsocooldood Stop it!
if it's fundamental, why are you getting mad that I'm referencing fundamental theories?
Because you don't understand them. Where does the law of conservation of energy come from?
Argue in a respectful mature manner. This is a place of learning and sharing of ideas not the other way around.
I am doing my best. I have to leave soon, and I've said all I can on the subject. Hopefully it was helpful, but if not, I'm sure you'll find your answers in a different way.
Joe I'll talk to you aside from this question and thank you for you're inputs so far...
where E can't be created or destroyed; that is, if that E is zero you're living in a dimension of no energy, and that's not good, because I need some energy from that system (gas) to go to work man.
Who says that energy can't be created or destroyed? Nobody just pulled that out of a hat, there is a deep fundamental reason that that's the case. Go ask any professor which matters, the
numerical value of energy or the differences in energy. That's all I can say on the subject.
well youre saying energy of objects is 0 so since i cant create energy out of my fingertips i guess its gonna stay 0 forever
Good luck in your quest to learn about the universe, ladies and gentlemen. Goodnight.
wow... this is going way out hand
@Jemurray3 check you're private messages please!!
I pretty much got the basic's of my answer thanks guys...
this doesnt take a phd proff on physics to solve. Any 4th grader will tell you the energy of the universe is not zero. ill leave on that
And my theoretical physics knowledge is ALWAYS based on the intuition of 4th graders.
Just for posterity, and in case anybody else comes by and reads this whole monstrosity, I did not assert that the energy of the universe is zero. I argued that in general, when you ask me about a
particular object, I am free to choose a frame of reference in which the total energy of that object is equal to zero. That automatically nails down the value of energy for every other particle
in the universe, but that initial freedom is exercised every time you choose a reference level or a rest frame. Energy gradients matter, not the numerical value of energy itself, which is in
general frame-dependent. Nonrelativistically.
Good point. I do agree that energy can never have a constant value... In the whole universe because that makes no sense at all. @Jemurray3
{"url":"http://openstudy.com/updates/5071fec7e4b04aa3791dbd80","timestamp":"2014-04-18T23:55:51Z","content_type":null,"content_length":"199415","record_id":"<urn:uuid:f931799e-ad4e-4ac5-82d8-ca3b3dd16485>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 2004 [00292]
[Date Index] [Thread Index] [Author Index]
Re: Sum question and general comment
• To: mathgroup at smc.vnet.net
• Subject: [mg50673] Re: [mg50638] Sum question and general comment
• From: Bob Hanlon <hanlonr at cox.net>
• Date: Wed, 15 Sep 2004 07:55:02 -0400 (EDT)
• Reply-to: hanlonr at cox.net
• Sender: owner-wri-mathgroup at wolfram.com
excludedSum[f_, (* the heads of these two definitions were lost in extraction; reconstructed as a best guess *)
  {i_Symbol, imin_Integer:1, imax_Integer},
  exclude:{_Integer..}] :=
  Tr[f /. ({i -> #}& /@
    DeleteCases[Range[imin, imax],
      _?(Or@@Thread[# == exclude]&)])];

excludedSum[f_,
  {i_Symbol, imin_Integer:1, imax_Integer},
  {j_Symbol, jmin_Integer:1, jmax_Integer}] :=
  Sum[excludedSum[f, {j, jmin, jmax}, {i}], {i, imin, imax}];
f[1] + f[3] + f[5]
f[1, 2] + f[1, 3] + f[2, 1] + f[2, 3] + f[3, 1] + f[3, 2]
Tr[f /@ {1,2,3,5,7,8,21}]
f[1] + f[2] + f[3] + f[5] + f[7] + f[8] + f[21]
Bob Hanlon
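For readers outside Mathematica, the three sum patterns from Steve's quoted question below map directly onto Python generator expressions (f here is a stand-in term function):

```python
def f(i):          # placeholder term; any function of the index works
    return i * i

# "Sum over i=1 to 100 except i != 23 and 36":
s1 = sum(f(i) for i in range(1, 101) if i not in (23, 36))

# "Sum over i belonging to {1,2,3,5,7,8,21}":
s2 = sum(f(i) for i in [1, 2, 3, 5, 7, 8, 21])

# "Sum (i=1 to 10) Sum (over j=1 to 10 but j != i)":
s3 = sum(f(i) + f(j)
         for i in range(1, 11)
         for j in range(1, 11)
         if j != i)

print(s1, s2, s3)
```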
> From: Steve Gray <stevebg at adelphia.net>
> Date: 2004/09/15 Wed AM 01:49:43 EDT
> To: mathgroup at smc.vnet.net
> Subject: [mg50673] [mg50638] Sum question and general comment
> I don't want to overload the group with my questions, so I only post after
> not being able to find the answer in the Help or at the site. Part of the
> problem of course is that it isn't clear how to state the question so that
> I can look it up*. Anyway, the current question has to do with Sum and
> similar "indexed" operations:
> I find no way to do, for example,
> "Sum over i=1 to 100 except i!= 23 and 36", etc., or Sum over values
> belonging to a list, such as
> "Sum over i (belonging to) {1,2,3,5,7,8,21}", etc., or
> "Sum (over i=1 to 10) Sum (over j=1 to 10 but j != i)", etc. (this can be
> awkwardly done with j=1 to i-1
> and j=i+1 to 10)
> In some cases there can be workarounds using things like
> (1- KroneckerDelta[i,j]), etc., but these can get complicated and obscure.
> I would have thought that Mathematica could do operations like these
> but ??. Thank you for any information.
> Steve Gray
> * Someone who makes major progress on the problem of letting users
> communicate with a computer in
> ordinary, appropriate technical "people" language will have big success.
> Currently in almost all
> software one must ask using exactly the right terms. (I realize that
> Microsoft and others are trying
> to make progress here, but it's negligible so far in my opinion.) Part of the
> answer would be a
> greatly expanded index, compiled knowing what terms people are likely to
use for their questions. | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Sep/msg00292.html","timestamp":"2014-04-18T13:31:11Z","content_type":null,"content_length":"36560","record_id":"<urn:uuid:a54eeea6-95c0-43b8-a871-41ea2296ec4b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roslyn Heights Prealgebra Tutor
Find a Roslyn Heights Prealgebra Tutor
...As an author, I've spent eight years leading convention seminars. Through NASA TV, I've spoken in front of 19 million people worldwide at a time. Was I afraid?
55 Subjects: including prealgebra, reading, English, Spanish
...Students will learn the 9 elementary argument forms (the rules of inference) and the 10 logical equivalencies (rules of replacement) and how to use these forms to construct argument proofs.
They will also learn how to construct and work with truth tables and truth trees. I try to teach not just...
34 Subjects: including prealgebra, English, reading, GED
...In addition to English and math, I feel comfortable teaching music theory and composition, general history, and literature.Passing levels 1-3 of the Chartered Financial Analyst (CFA)
Examination, the gold standard for financial analysis, qualifies me to teach the quantitative sections of the MCAT...
37 Subjects: including prealgebra, English, reading, writing
...I took college classes English 101 and English 102 and received an A and B+. Subjects such as math, history, and a select few sciences I am also good at. I am great at earth science, physics
and lower grade science. I am very patient and enjoy tutoring.
19 Subjects: including prealgebra, English, grammar, writing
...At one point during the year, I was ranked number 6 in the country in Public Forum Debate, putting me in the top .5% of debaters nation-wide. Public speaking is my thing, and I have worked with
over 200 students over the past five years and assisted them in honing their public speaking skills. ...
43 Subjects: including prealgebra, English, reading, algebra 1 | {"url":"http://www.purplemath.com/Roslyn_Heights_Prealgebra_tutors.php","timestamp":"2014-04-17T11:05:51Z","content_type":null,"content_length":"24207","record_id":"<urn:uuid:4ed11c2f-c96c-41b5-9613-9f73e740997a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Is there a way to visualize 4D data?
Replies: 1 Last Post: Jun 28, 2006 3:23 PM
Re: Is there a way to visualize 4D data?
Posted: Jun 28, 2006 3:23 PM
no; the sequence of digital blares that one has heard
for years from car-alarms, which funded the bogus gubenatorial recall
in California, always does that for me. so,
what is a "military-pavlovian style," if you prefer
not to address any of my questions?
Sir David is quite a guy, as proved in _A NEW Kind of Science_.
> > did you say that you could find the Bornouli numbers,
> > using the buckynumbers, that is not a triviality?
> > thus:
> > I note that that last graph had 3 coordinates,
> > with different-sized tetrahedra and "hue for 0
> > through 2;" so, What?
> > what possible use could your Buckynumbers have,
> > that has not already been covered by other homogenous 3d formats?...
> > when you find a mathematical-physical application,
> > I'm sure that you'll announce it!
> >
> > >One four-dimensional point is a regular wireframe tetrahedron in the
> > >Synergetics coordinate system and you can make each point a different
> > >hue and you might see something you're looking for, who knows?
> > >
> > >See the last graphic in the Section The Vector Equilibrium at:
> > >http://users.adelphia.net/~cnelson9/
> Does the name Pavlov ring a bell? Your military-Pavlovian style
> propaganda against Wolfram's Mathematica will work unfortunately. I even
monsieur Magadin was, I think, working with scalars, so that
the "subspace" was artificially mooted. all of this was covered
by Hamilton in _Quaternions_, where the terminology was coined....
one problem is that common parlance of "subspace,"
would be merely a region of a larger space, although it's not a
since the parlance is really only common to math and
sciencefiction, as far as I know, and it's mostly going
to deal with dimensionality, otherwise. if so, then
you're not going to look at silly degenerate cases (say,
a Hilbert space of infinite dimensions,
all pointing on the same complex vector to your forehead ... although
this is exactly what Hamilton did with his first "2D" complex numbers,
on one line (I think, homogenous coordinates, oppositely directed .-)
> If you do not hand wave or lie, then why should you feel insulted?
good point. although there is a 2.5-page proof of the isomorphism
of deductive & inductive proofs, I don't know of one
from induction to "bizarre circumlocution."
> "Thus together with step n = 3 the extension of the statement of the
> overlapping to the step n = 4
> is that each even number in the step n = 4 can be written as the sum of
> two prime numbers and these two prime numbers are each from a pair
> of twin prime numbers in the steps n = 1, 2, 3, 4 and that there
> is a pair of twin prime numbers in the step n = 4 (and not in the step
> n
> = 1, 2, 3) such that the sum of this pair of twin prime numbers is an
> even number in the step n = 5. "
> How does Sze know that in the steps n>5 there are twin primes?
> Between 1322 and 1432 there are no twin primes.
--it takes some to jitterbug! | {"url":"http://mathforum.org/kb/message.jspa?messageID=4859301","timestamp":"2014-04-20T13:31:49Z","content_type":null,"content_length":"18325","record_id":"<urn:uuid:ad20558f-2e70-4a78-9afb-05b828d0d111>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
arbitrary-axis rotation matrix - Graphics Programming and Theory
Not sure if this is the right place to post this, but I am trying to build a 4x4 matrix that will rotate a 3D object around an arbitrary axis (x,y,z). The axis vector (x,y,z) need not be normalized.
The angle a is measured in radians. If the rotation axis faces the user, the rotation will be counterclockwise. x --> Specifies the X component of the axis of rotation. y --> Specifies the Y
component of the axis of rotation. z --> Specifies the Z component of the axis of rotation. a --> Specifies the rotation angle, in radians. I am working in C++ and I think I am close, but while the
object rotates it changes shape. Any help would be great, thanks. | {"url":"http://www.gamedev.net/topic/395904-arbitrary-axis-rotation-matrix/","timestamp":"2014-04-23T14:30:09Z","content_type":null,"content_length":"92581","record_id":"<urn:uuid:8960a6e9-a8d9-4e1a-99c2-57eb027c8e50>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
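The standard answer to the question above is the axis-angle (Rodrigues) matrix, in the same convention as OpenGL's glRotate (counterclockwise when the axis points at the viewer, column vectors). A hedged NumPy sketch follows; the poster's C++ code isn't shown, so this is a reference implementation of the formula rather than a fix of their code, and the function name is mine. The usual cause of a mesh "changing shape" under such a rotation is a non-unit axis, so the sketch normalizes first.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """4x4 matrix rotating by `angle` radians about `axis` through the origin.

    The axis is normalized here; skipping this step when the incoming
    axis is not unit length scales the rotation block, which makes the
    rotated object appear to change shape (a likely culprit for the bug
    described above). Convention: counterclockwise when the axis points
    toward the viewer, applied to column vectors.
    """
    x, y, z = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    t = 1.0 - c
    return np.array([
        [t*x*x + c,   t*x*y - s*z, t*x*z + s*y, 0.0],
        [t*x*y + s*z, t*y*y + c,   t*y*z - s*x, 0.0],
        [t*x*z - s*y, t*y*z + s*x, t*z*z + c,   0.0],
        [0.0,         0.0,         0.0,         1.0],
    ])
```

A quick sanity check is that the matrix is orthonormal (its transpose is its inverse); if repeated incremental rotations still distort the object, re-orthonormalizing the accumulated matrix periodically helps with floating-point drift.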
Math Forum Discussions
Topic: Cantor's absurdity, once again, why not?
Replies: 77 Last Post: Mar 19, 2013 11:02 PM
Messages: [ Previous | Next ]
Re: Cantor's absurdity, once again, why not?
Posted: Mar 15, 2013 7:28 PM
On Friday, March 15, 2013 6:18:08 AM UTC-7, Jesse F. Hughes wrote:
> I assumed that this relationship between "falsifiability" and
> mathematics allowed one to distinguish non-mathematical claims from
> mathematical claims. If not, what role does falsifiability play? In
> science, it distinguishes scientific hypotheses from non-scientific.
Yes, exactly, I'm suggesting it would be reasonable to have falsifiability play the same role in mathematics that it plays in science. Why do I need to keep repeating that for you?
| {"url":"http://mathforum.org/kb/thread.jspa?threadID=2440785&messageID=8645825","timestamp":"2014-04-21T08:14:25Z","content_type":null,"content_length":"108498","record_id":"<urn:uuid:0e604545-5ec1-44d5-ab8b-549a61978dd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Intersections of Polygon Diagonals
Date: 8/26/96 at 9:32:56
From: Anonymous
Subject: Intersections of polygon diagonals
Given a regular polygon of v vertices, write a formula f that gives the
number of distinct zones z into which that polygon is divided by all its
diagonals, i.e. z = f(v).
I have tried to find a solution by myself or to find the problem
addressed in some text but with no luck.
By drawing the polygons and using a computer program to isolate the
zones, I have found the following sequence:
v zo ze
17 2446 (I am not really sure about
18 2446 these last two)
I have divided the sequence into even and odd numbers of vertices
because, working on the problem, I noticed same regularities within a
class (e.g. I found a formula for the number of layers of distinct
intersection for the odd class).
After having tried several types of functions and series (obviously
the sequence is not a polynomial of n < 17 as many other polygon
properties are) I had to give up.
Even to know that the problem does not have a solution would be a help.
Thanks very much for your help.
Franco Languasco
Date: 9/6/96 at 14:4:25
From: Doctor Ceeks
Subject: Re: Intersections of polygon diagonals
This problem is actually quite complicated and has only recently
been solved by Bjorn Poonen and Mike Rubinstein. It has a long
history; apparently it was posed several decades ago as a problem
with an award, and then the award was claimed by someone who thought
he had a solution. Then Bjorn and Mike tackled the problem without
knowing of this other "solution" and got an answer, which showed, in
fact, that the other "solution" (which got the award) was flawed!
I suggest you write e-mail to poonen@math.princeton.edu and ask him
for a preprint of his paper regarding this matter. You can tell him
what you have below and say that you were referred to him by Dr.
Math...he'll know because he knows me (Doctor Ceeks).
Do you know about roots of unity and Euler characteristic? If not,
the paper isn't going to be so easy to read. Still, the final answer
should be easy to get from the paper.
-Doctor Ceeks, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/51733.html","timestamp":"2014-04-20T06:33:29Z","content_type":null,"content_length":"7392","record_id":"<urn:uuid:757968ce-6949-4e01-aaf6-00e53b014ee1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
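A hedged supplement to the exchange above: for odd v, no three diagonals of a regular v-gon meet at an interior point (a fact established in the Poonen-Rubinstein work), so the general-position count from Euler's formula applies and has a simple closed form. The even case, with its many concurrent diagonals, is where the real difficulty lies. The function name below is mine:

```python
from math import comb

def regions_odd(v):
    """Interior regions cut by all diagonals of a regular v-gon, v odd.

    With no three diagonals concurrent, Euler's formula gives
        z = C(v,4) + C(v,2) - v + 1 = C(v,4) + C(v-1,2).
    Even v requires correcting for concurrences (Poonen & Rubinstein).
    """
    assert v % 2 == 1 and v >= 3
    return comb(v, 4) + comb(v - 1, 2)
```

For example, regions_odd(5) gives the familiar 11 pieces of a pentagram-cut pentagon. Note that this formula gives 2500 for v = 17 rather than the 2446 tabulated in the question, which is consistent with the questioner's own remark that the last two values were uncertain.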
Langhorne Math Tutor
Find a Langhorne Math Tutor
...I am currently certified to teach Social Studies in grades 7-12 in both PA and NJ. I have much experience in that field and I also have knowledge in the other subjects I have specified as
being able to tutor. I have worked one-on-one with several students throughout my teaching experience and in my collegiate career.
30 Subjects: including algebra 1, prealgebra, reading, public speaking
...I have had many successful students during this time so long as they are willing to be flexible and think a bit abstractly. My approach tends to be to break down abstract ideas into bite-sized
chunks and have a student reconstruct that information to create a coherent thought. Proceeding throug...
5 Subjects: including discrete math, algebra 2, calculus, precalculus
...I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom
instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my success.
21 Subjects: including algebra 1, algebra 2, calculus, SAT math
...I have had much success using this technique in my tutoring over the past five years. I can tutor general chemistry as well as organic chemistry and I would be happy to help you prepare for
the chemistry portion of the MCAT, the GRE subject exam in chemistry, or the chemistry AP exam. See my subject section below for a complete list of the subjects I tutor!
8 Subjects: including algebra 1, chemistry, biology, American history
...I am a qualified Special Education teacher with a certification in Middle School Math. I enjoy working with preschool (Early Intervention) through 8th grade. I worked in another county for 35
years and now am looking for work close to home.
12 Subjects: including geometry, ESL/ESOL, algebra 1, reading
Nearby Cities With Math Tutor
Bristol, PA Math Tutors
Delanco Township, NJ Math Tutors
Fairless Hills Math Tutors
Feasterville Trevose Math Tutors
Feasterville, PA Math Tutors
Hulmeville, PA Math Tutors
Levittown, PA Math Tutors
Middletown Twp, PA Math Tutors
Morrisville, PA Math Tutors
Parkland, PA Math Tutors
Penndel, PA Math Tutors
Penns Park Math Tutors
Richboro Math Tutors
Roebling Math Tutors
Yardley, PA Math Tutors | {"url":"http://www.purplemath.com/langhorne_pa_math_tutors.php","timestamp":"2014-04-18T16:18:55Z","content_type":null,"content_length":"23857","record_id":"<urn:uuid:5fc4bb25-2331-460d-90a9-28ec526f8ed5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 13
, 1997
Cited by 43 (8 self)
In recent years, there has been a considerable amount of work on using continuous domains in real analysis. Most notably are the development of the generalized Riemann integral with applications in
fractal geometry, several extensions of the programming language PCF with a real number data type, and a framework and an implementation of a package for exact real number arithmetic. Based on
recursion theory we present here a precise and direct formulation of effective representation of real numbers by continuous domains, which is equivalent to the representation of real numbers by
algebraic domains as in the work of Stoltenberg-Hansen and Tucker. We use basic ingredients of an effective theory of continuous domains to spell out notions of computability for the reals and for
functions on the real line. We prove directly that our approach is equivalent to the established Turing-machine based approach which dates back to Grzegorczyk and Lacombe, is used by Pour-El &
Richards in their found...
- Theoretical Computer Science , 2002
Cited by 33 (13 self)
Solid modelling and computational geometry are based on classical topology and geometry in which the basic predicates and operations, such as membership, subset inclusion, union and intersection, are
not continuous and therefore not computable. But a sound computational framework for solids and geometry can only be built in a framework with computable predicates and operations. In practice,
correctness of algorithms in computational geometry is usually proved using the unrealistic Real RAM machine model of computation, which allows comparison of real numbers, with the undesirable result
that correct algorithms, when implemented, turn into unreliable programs. Here, we use a domain-theoretic approach to recursive analysis to develop the basis of an effective and realistic framework for
solid modelling. This framework is equipped with a well-defined and realistic notion of computability which reflects the observable properties of real solids. The basic predicates and operations o...
Cited by 16 (0 self)
The iRRAM is a very efficient C++ package for error-free real arithmetic based on the concept of a Real-RAM. Its capabilities range from ordinary arithmetic over trigonometric functions to linear
algebra even with sparse matrices. We discuss the concepts and some highlights of the implementation.
- Theoretical Computer Science , 1998
Cited by 15 (2 self)
This paper extends the order-theoretic approach to computable analysis via continuous domains to complete metric spaces and Banach spaces. We employ the domain of formal balls to define a
computability theory for complete metric spaces. For Banach spaces, the domain specialises to the domain of closed balls, ordered by reversed inclusion. We characterise computable linear operators as
those which map computable sequences to computable sequences and are effectively bounded. We show that the domain-theoretic computability theory is equivalent to the wellestablished approach by
Pour-El and Richards. 1 Introduction This paper is part of a programme to introduce the theory of continuous domains as a new approach to computable analysis. Initiated by the various applications of
continuous domain theory to modelling classical mathematical spaces and performing computations as outlined in the recent survey paper by Edalat [6], the authors started this work with [9] which was
concerned with co...
- In Proc. 13th Canad. Conf. Comput. Geom , 2001
Cited by 12 (5 self)
We present a new model of geometric computation which supports the design of robust algorithms for exact real number input as well as for input with uncertainty, i.e. partial input. In this
framework, we show that the convex hull of N computable real points in R^d is indeed computable. We provide a robust algorithm which, given any set of N partial inputs, i.e. N dyadic or rational
rectangles, approximating these points, computes the partial convex hull in time O(N log N) in 2d and 3d. As the rectangles are refined to the N points, the sequence of partial convex hulls converges
effectively both in the Hausdorff metric and the Lebesgue measure to the convex hull of the N points.
- MATH. LOGIC QUART , 2005
Cited by 10 (0 self)
We discuss the question whether the Mandelbrot set is computable. The computability notions which we consider are studied in computable analysis and will be introduced and discussed. We show that the
exterior of the Mandelbrot set, the boundary of the Mandelbrot set, and the hyperbolic components satisfy certain natural computability conditions. We conclude that the two–sided distance function of
the Mandelbrot set is computable if the hyperbolicity conjecture is true. We formulate the question whether the distance function of the Mandelbrot set is computable also in terms of the escape time.
- Proceedings of CCA-2000
Cited by 3 (3 self)
Based on an effective theory of continuous domains, notions of computability for operators and real-valued functionals defined on the class of continuous functions are introduced. Definability and semantic
characterisation of computable functionals are given. Also we propose a recursion scheme which is a suitable tool for formalisation of complex systems, such as hybrid systems. In this framework the
trajectories of continuous parts of hybrid systems can be represented by computable functionals.
, 1996
Cited by 3 (2 self)
It is shown that formulas in monadic second order logic (mso) with one free variable can be mimicked by attribute grammars with a designated boolean attribute and vice versa. We prove that mso
formulas with two free variables have the same power in defining binary relations on nodes of a tree as regular path languages have. For graphs in general, mso formulas turn out to be stronger. We
also compare path languages against the routing languages of Klarlund and Schwartzbach. We compute the complexity of evaluating mso formulas with free variables, especially in the case where there is
a dependency between free variables of the formula. Last, it is proven that mso tree transducers have the same strength as attributed tree transducers with the single use requirement and flags.
- LNCS , 2001
Cited by 2 (2 self)
We propose semantic characterisations of second-order computability over the reals based on -definability theory. Notions of computability for operators and real-valued functionals defined on the class
of continuous functions are introduced via domain theory. We consider the reals with and without equality and prove theorems which connect computable operators and real-valued functionals with
validity of finite -formulas.
Cited by 1 (1 self)
Abstract. We promote the concept of object directed computability in computational geometry in order to faithfully generalise the well-established theory of computability for real numbers and real
functions. In object directed computability, a geometric object is computable if it is the effective limit of a sequence of finitary objects of the same type as the original object, thus allowing a
quantitative measure for the approximation. The domain-theoretic model of computational geometry provides such an object directed theory, which supports two such quantitative measures, one based on
the Hausdorff metric and one on the Lebesgue measure. With respect to a new data type for the Euclidean space, given by its non-empty compact and convex subsets, we show that the convex hull, Voronoi
diagram and Delaunay triangulation are Hausdorff and Lebesgue computable. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=419306","timestamp":"2014-04-21T11:11:16Z","content_type":null,"content_length":"36806","record_id":"<urn:uuid:0adaa477-74ff-4605-860e-e51e15229ec3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Sinkhorn-Knopp Algorithm: Convergence and Applications
when quoting this document, please refer to the following
URN: urn:nbn:de:0030-drops-10644
URL: http://drops.dagstuhl.de/opus/volltexte/2007/1064/
Knight, Philip A.
The Sinkhorn-Knopp Algorithm: Convergence and Applications
As long as a square nonnegative matrix $A$ contains sufficient nonzero elements, the Sinkhorn-Knopp algorithm can be used to balance the matrix, that is, to find a diagonal scaling of $A$ that is
doubly stochastic. We relate balancing to problems in traffic flow and describe how balancing algorithms can be used to give a two sided measure of nodes in a graph. We show that with an appropriate
modification, the Sinkhorn-Knopp algorithm is a natural candidate for computing the measure on enormous data sets.
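The core iteration the abstract refers to is easy to sketch: alternately rescale rows and columns of A until both sets of sums equal 1. Below is a minimal NumPy version of the classical loop (not the modified, PageRank-style variant the abstract proposes for enormous data sets); variable names and the stopping rule are my own.

```python
import numpy as np

def sinkhorn_knopp(A, tol=1e-9, max_iter=1000):
    """Return diagonal scalings r, c such that diag(r) @ A @ diag(c)
    is (approximately) doubly stochastic.

    Classical alternating normalization: fix the column sums, then the
    row sums, and repeat. By the Sinkhorn-Knopp theorem this converges
    when A has total support, e.g. when A is strictly positive.
    """
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    for _ in range(max_iter):
        c = 1.0 / (A.T @ r)                 # make column sums equal 1
        r = 1.0 / (A @ c)                   # make row sums equal 1
        P = A * np.outer(r, c)
        if np.abs(P.sum(axis=0) - 1.0).max() < tol:
            break
    return r, c
```

Each sweep is two matrix-vector products, which is what makes the method attractive at the scale of web-graph matrices discussed in the abstract.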
BibTeX - Entry
author = {Philip A. Knight},
title = {The Sinkhorn-Knopp Algorithm:Convergence and Applications},
booktitle = {Web Information Retrieval and Linear Algebra Algorithms},
year = {2007},
editor = {Andreas Frommer and Michael W. Mahoney and Daniel B. Szyld},
number = {07071},
series = {Dagstuhl Seminar Proceedings},
ISSN = {1862-4405},
publisher = {Internationales Begegnungs- und Forschungszentrum f{\"u}r Informatik (IBFI), Schloss Dagstuhl, Germany},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2007/1064},
annote = {Keywords: Matrix balancing, Sinkhorn-Knopp algorithm, PageRank, doubly stochastic matrix}
Keywords: Matrix balancing, Sinkhorn-Knopp algorithm, PageRank, doubly stochastic matrix
Seminar: 07071 - Web Information Retrieval and Linear Algebra Algorithms
Issue date: 2007
Date of publication: 28.06.2007
| {"url":"http://drops.dagstuhl.de/opus/volltexte/2007/1064/","timestamp":"2014-04-18T02:59:32Z","content_type":null,"content_length":"6575","record_id":"<urn:uuid:40593540-9ca8-4c69-ae3b-dbd267f1fb05>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
my new score
pressure washer to be made into a jetter
18HP 5000 psi 5 GPM
i would say 7 out of 10
• Re: my new score
We used to rent those big V-twin Briggs motors. Good score for you. Keep the RPMs as low as possible to get the PSI you need.
Seattle Drain Service
• Re: my new score
Do you have a hose reel setup already on another pressure washer?
Got pics?
I have a 13hp 3000psi 5gpm jet, and I think my 3/8 hose is only rated for 4000psi.
Although my pressure has only been hitting 1800-2000 psi lately.
• Re: my new score
pump is a genral TS2021N
5.6 GPM
3500 PSI
• Re: my new score
same pump i have on my 18 h.p. machine.
there is no way to get 5 gpm @ 5000 psi from an 18hp gas engine. the math doesn't work.
phoebe it is
• Re: my new score
I was mistaken and had reposted the correct GPM and PSI
5.6 GPM
3500 PSI
• Re: my new score
Good lookin' out.
• Re: my new score
how do you do the math? i'm not putting my finger in your eye, but i don't understand. ace sewer posted a formula last year. hp = gpm x psi x .0005833. does that work? if not what is the formula?
i know that is for a 100% efficient system, which in reality does not exist. do we have a huge power loss? i am planning on doing what appletondrain did, so i would like to know how to figure
this out. thank you. breid
• Re: my new score
Originally posted by
breid1903 View Post
how do you do the math? i'm not putting my finger in your eye, but i don't understand. ace sewer posted a formula last year. hp = gpm x psi x .0005833. does that work? if not what is the formula?
i know that is for a 100% efficient system, which in reality does not exist. do we have a huge power loss? i am planning on doing what appletondrain did, so i would like to know how to figure
this out. thank you. breid
I'm not sure Ace's formula takes pressure loss from the hose into consideration. I'm sure more than one person has been tripped up buying nozzles sized for their pump output vurses the output at
the nozzle end of the hose...
• Re: my new score
Originally posted by
breid1903 View Post
how do you do the math? i'm not putting my finger in your eye, but i don't understand. ace sewer posted a formula last year. hp = gpm x psi x .0005833. does that work? if not what is the formula?
i know that is for a 100% efficient system, which in reality does not exist. do we have a huge power loss? i am planning on doing what appletondrain did, so i would like to know how to figure
this out. thank you. breid
i believe those #'s are for an electric motor and not a gas engine.
the #'s that are realistic for gas is hp= gpm x psi / 1100.
so 5.6 x 3500 = 19,600
19,600/ 1100 =17.818 h.p
keep in mind that is the max h.p for an engine all out. doesn't take into account pressure loss from friction and h.p loss from altitude.
gas engines are approx 2/3 of an electric motor.
phoebe it is
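Reading the garbled operator in the quoted formula as multiplication (1/0.0005833 is roughly 1714, the familiar hydraulic-horsepower constant), the two rules of thumb from this thread can be put side by side. These are the thread's own constants, not manufacturer specs, and the function names are mine:

```python
def electric_hp(gpm, psi):
    # thread's electric-motor rule of thumb: hp = gpm x psi x 0.0005833
    return gpm * psi * 0.0005833

def gas_hp(gpm, psi):
    # thread's gas-engine rule of thumb: hp = gpm x psi / 1100
    return gpm * psi / 1100.0

# the pump in question, 5.6 GPM at 3500 PSI:
# gas_hp(5.6, 3500) works out to about 17.8 hp, matching the posts above
```

As the posts note, these are best-case figures and do not account for pressure loss from hose friction or horsepower loss from altitude.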
• Re: my new score
it needs 13.4 EBHP (electric brake HP) per specs
on gas it needs 17 HP
now this is for the best results
• Re: my new score
Just some FYI.... Rick will go on forever that you can't have anything better than what he's got
• Re: my new score
is that what I'm reading?
Re: my new score
pr. thank you. that is an optimal situation where everything has to be spot on. that ain't likely. printed that out. my other formula wasn't working. thank you. breid | {"url":"https://www.ridgidforum.com/forum/mechanical-trades/drain-cleaning-discussion/25094-my-new-score?t=24449","timestamp":"2014-04-20T02:10:54Z","content_type":null,"content_length":"156119","record_id":"<urn:uuid:8ff1f95f-e8d2-4f7c-b00b-790777119ce9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tewksbury Calculus Tutor
Find a Tewksbury Calculus Tutor
...Prior to my teaching career a lifelong interest in--and deep affection for--mathematics and physics led me to wonderful adventures as an aerospace engineer. While a NASA employee on the Apollo
Project, I made extensive use of algebra and calculus in the development of orbital rendezvous techniqu...
7 Subjects: including calculus, physics, algebra 1, algebra 2
...With over 15 years experience teaching math, and science I am well versed in various topics from pre-algebra through calculus (including algebra 1 and 2, geometry, pre-calculus and probability)
as well as SAT and GRE prep.Algebra 2 includes a more thorough exploration of functions with an emphasi...
23 Subjects: including calculus, physics, geometry, statistics
...I hold a Massachusetts Math 9-12 Educator’s License, which includes Trigonometry. The [SAT] Mathematics Level 1 Subject Test assesses the knowledge you’ve gained from three years of
college-preparatory mathematics, including two years of algebra and one year of geometry. The [SAT] Mathematics L...
9 Subjects: including calculus, geometry, algebra 1, algebra 2
I'm a very experienced and patient Math Tutor with a wide math background and a Ph.D. in Math from West Virginia University. I teach high school through college students and can teach in person
or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not doing that!
14 Subjects: including calculus, geometry, GRE, algebra 1
...I have worked with many students of different academic levels from elementary to college students. Whether you want to solidify your knowledge and get ahead or get a fresh perspective if your
are struggling, I am confident I can help you. I have the philosophy that anything can be understood if it is explained correctly.
19 Subjects: including calculus, chemistry, Spanish, public speaking | {"url":"http://www.purplemath.com/tewksbury_calculus_tutors.php","timestamp":"2014-04-17T13:15:18Z","content_type":null,"content_length":"24009","record_id":"<urn:uuid:332c946f-bb89-48eb-953e-b13ab6cd0608>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
generating function
February 25th 2011, 04:54 PM #1
Junior Member
Nov 2009
generating function
What's the generating function for the number of compositions of n such that each part is a multiple of 5?
I know that it's zero if n is not a multiple of 5 (or clearly if n is less than k).
If n is a multiple of 5, each part has generating function x^5 + x^10 + x^15 + ...
But I end up getting stuck.
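For what it's worth, here is a sketch of how the calculation usually finishes (my own illustration, not from the thread): since each part is a positive multiple of 5, one part has generating function x^5 + x^10 + ... = x^5/(1 - x^5), and allowing any number of parts gives F(x) = 1/(1 - x^5/(1 - x^5)) = (1 - x^5)/(1 - 2x^5). A short script can expand this and cross-check by brute force:

```python
def gf_coefficients(n_max):
    """Coefficients of F(x) = (1 - x^5)/(1 - 2x^5), i.e. the number of
    compositions of n into parts that are positive multiples of 5
    (with f(0) = 1 counting the empty composition)."""
    f = [0] * (n_max + 1)
    f[0] = 1
    # (1 - 2x^5) F(x) = 1 - x^5  gives  f(n) = 2 f(n-5) - [n == 5]
    for n in range(1, n_max + 1):
        f[n] = (2 * f[n - 5] if n >= 5 else 0) - (1 if n == 5 else 0)
    return f

def brute_force(n):
    """Directly count compositions of n with every part in {5, 10, 15, ...}."""
    if n == 0:
        return 1
    return sum(brute_force(n - part) for part in range(5, n + 1, 5))

coeffs = gf_coefficients(30)
assert all(coeffs[n] == brute_force(n) for n in range(31))
```

As suspected in the thread, the coefficient is zero unless n is a multiple of 5; for n = 5m it comes out to 2^(m-1), matching the 2^(m-1) compositions of m scaled up by a factor of 5.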
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/discrete-math/172637-generating-function.html","timestamp":"2014-04-17T12:59:06Z","content_type":null,"content_length":"28781","record_id":"<urn:uuid:9960179e-9b4f-4f51-b240-330fd0e27ddf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00525-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] hough transform again
Kfir Breger kbreger at science.uva.nl
Fri Aug 18 00:56:29 CDT 2006
I am working on a voxel coloring implementation with scipy and
used (in the beginning) the Hough transformation to detect lines.
The algorithm I used is quite simple:
1. get the PIL library, which is quite good imo
2. use the built-in filter function for edge detection
3. make a scipy array from the image
4. to eliminate noise, do a closing operation on the edge image
5. set a threshold on edge value (i.e. pixel value > t is a point on
   the edge). I used a double threshold with 2 runs
6. for each of the pixels found in 5, calculate the p value for
   a discrete number of thetas (I used 360 values between 0 and pi),
   as in x cos theta + y sin theta = p, where x, y and theta are known, and
   store for each pair of theta and p how often they come up
7. set a threshold on the count to determine if it's a real line or
   not (I used max value found / 2); all (p, theta) tuples with
   a total value bigger than this are your lines.
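A minimal NumPy sketch of the voting in steps 6-7 (my own illustration, not the poster's actual code; the function name, bin counts and rho range are arbitrary choices):

```python
import numpy as np

def hough_vote(edge_points, n_theta=360, n_rho=100, max_rho=50.0):
    """Accumulate (theta, rho) votes for each edge pixel (step 6) and
    keep cells with at least half the maximum count (step 7)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)  # x cos t + y sin t = rho
        bins = np.rint((rho + max_rho) * n_rho / (2 * max_rho)).astype(int)
        acc[np.arange(n_theta), np.clip(bins, 0, n_rho - 1)] += 1
    lines = np.argwhere(acc >= acc.max() / 2)          # surviving (theta, rho) cells
    return thetas, acc, lines

# ten collinear points on the horizontal line y = 5
thetas, acc, lines = hough_vote([(x, 5) for x in range(10)])
```

All ten points vote for the same cell at theta = pi/2 (a horizontal line), so that cell collects the maximum count and survives the step-7 threshold.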
I eventually dropped this in favour of radon transform as the results
are better imo.
Kfir Breger
On Aug 17, 2006, at 10:56 PM, Brent Pedersen wrote:
> right, extending the example at the bottom of the houghtf.py,
> i do:
> import scipy as S
> import numpy as N
> ss = S.where(out + delta > max(out.flatten()))
> largevals = N.array(zip(ss[0], ss[1]))
> which gives an array of r,thetas that are within delta of the
> maximum. now, to find img coordinates that match those values...
> On 8/17/06, stephen emslie <stephenemslie at gmail.com> wrote:
> I'm busy doing just that at the moment (in fact literally right
> now), and I'll be happy to post any results here.
> My understanding is that you'll need to search the output of the
> hough transform for cells with a high count as those will be the
> most likely to correspond to lines in the main image. The output is
> a matrix relating combinations of rho and theta to the number of
> feature points that that line passes through - so the combinations
> of rho and theta with that go through the most feature points will
> be the strongest lines.
> Or something like that - I'm also really new to this stuff so I'd
> be happy to be corrected by someone that knows more.
> The second to last part of this document is a good read on the
> hough transform: http://homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/
> Chap006.pdf
> Stephen
> On 8/17/06, Brent Pedersen <bpederse at gmail.com > wrote:
> hi, i found this in recent archives and the script is useful.
> http://projects.scipy.org/pipermail/scipy-user/2006-August/008841.html
> has anyone written the code to go from the hough transform back to
> the image with the lines/edges enhanced or with non-lines removed?
> it's bending my mind a bit so if someone's already done it, i'd be
> glad of it--or any pointers.
> thanks.
> -brent
> [please include my email in the reply, i've subscribed to scipy-
> users, but not sure if it went through yet]
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2006-August/008990.html","timestamp":"2014-04-18T10:41:30Z","content_type":null,"content_length":"7351","record_id":"<urn:uuid:409931b3-0234-4827-b071-3c7ba69dd979>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
This adding up of partials to make a complex waveform might make sense acoustically, but in order to really understand how to add phasors from a mathematical standpoint, we first need to understand
how to add vectors, or arrows.
How should we define an arithmetic of arrows? It sounds funny, but in fact it's a pretty natural generalization of what we already know about adding regular old numbers. When we add a negative
number, we go backward, and when we add a positive number, we go forward.
Our regular old numbers can be thought of as arrows on a number line. Adding any two numbers, then, simply means taking the two corresponding arrows and placing them one after the other, tip to tail.
The sum is then the arrow from the origin pointing to the place where "adding" the two arrows landed you.
Really, what we are doing here is thinking of numbers as vectors. They have a magnitude (length) and a direction (in this case, positive or negative, or better yet 0 radians or π radians).
Now, to add phasors, we need to enlarge our worldview and allow our arrows to get not just 2 directions, but a whole 2π radians' worth. In other words, we allow our arrows to point anywhere in the plane. We
add, then, just as before: place the arrows tip to tail, and draw an arrow from the origin to the final destination.
So, to recap: to add phasors, at each instant as our phasors are spinning around, we add the two arrows. In this way, we get a new arrow spinning around (the sum) at some frequency: a new phasor. Now
it's easy to see that the sum of two phasors of the same frequency yields a new phasor of the same frequency. We can also see that the sum of a cosine and sine of the same frequency is simply a
phase-shifted sine of the same frequency, with a new amplitude given by the square root of the sum of squares of the two original phasors. That's the Pythagorean theorem!
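That last claim is easy to check numerically. Here is a quick NumPy sanity check (not from the original text; the amplitudes a and b and the 5 Hz frequency are arbitrary):

```python
import numpy as np

a, b = 3.0, 4.0                 # cosine and sine amplitudes (arbitrary)
w = 2 * np.pi * 5               # both phasors spin at the same frequency
t = np.linspace(0.0, 1.0, 1000)

combined = a * np.cos(w * t) + b * np.sin(w * t)

R = np.hypot(a, b)              # sqrt(a^2 + b^2): the Pythagorean theorem
phi = np.arctan2(a, b)          # phase shift of the resulting sine
shifted = R * np.sin(w * t + phi)

assert np.allclose(combined, shifted)   # same waveform: amplitude 5, phase-shifted
```

The identity behind it is R sin(wt + phi) = R cos(phi) sin(wt) + R sin(phi) cos(wt), so choosing phi = arctan2(a, b) makes the two expansions match term by term.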
Sampling and Fourier Expansion
The decomposition of a complex waveform into its component phasors (which is pretty much the same as saying the decomposition of an acoustic waveform into its component partials) is called Fourier
In practice, the main thing that happens is that analog waveforms are sampled, creating a time-domain representation inside the computer. These samples are then converted (using what is called a fast
Fourier transform, or FFT) into what are called Fourier coefficients.
Figure 3.17 shows a common way to show timbral information, especially the way that harmonics add up to produce a waveform. However, it can be slightly confusing. By running an FFT on a small
time-slice of the sound, the FFT algorithm gives us the energy in various frequency bins. (A bin is a discrete slice, or band, of the frequency spectrum. Bins are explained more fully in Section
3.4.) The x-axis (bottom axis) shows the bin numbers, and the y-axis shows the strength (energy) of each partial.
The slightly strange thing to keep in mind about these bins is that they are not based on the frequency of the sound itself, but on the sampling rate. In other words, the bins evenly divide the
sampling frequency (linearly, not exponentially, which can be a problem, as we’ll explain later). Also, this plot shows just a short fraction of time of the sound: to make it time-variant, we need a
waterfall 3D plot, which shows frequency and amplitude information over a span of time. Although theoretically we could use the FFT data shown in Figure 3.17 in its raw form to make a lovely,
synthetic gamelan sound, the complexity and idiosyncrasies of the FFT itself make this a bit difficult (unless we simply use the data from the original, but that’s cheating).
Figure 3.18 shows a better graphical representation of sound in the frequency domain. Time is running from front to back, height is energy, and the x-axis is frequency. This picture also takes the
essentially linear FFT and shows us an exponential image of it, so that most of the "action" happens in the lower 2k, which is correct. (Remember that the FFT divides the frequency spectrum into
linear, equal divisions, which is not really how we perceive sound; it's often better to graph this exponentially so that there's not as much wasted space "up top.")
The waterfall plot in Figure 3.18 is stereo, and each channel of sound has its own slightly different timbre.
Here's a fact that will help a great deal: if the highest frequency is B times the fundamental, then you only need 2B + 1 samples to determine the Fourier coefficients. (It's easy to see that you
should need at least 2B, since you are trying to get 2B pieces of information: B amplitudes and B phase shifts.)
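To see this in action (an illustrative sketch; the signal and its amplitudes are made up): sample a signal whose highest partial is B = 3 times the fundamental at 2B + 1 = 7 evenly spaced points over one period, and the FFT recovers every partial's amplitude exactly.

```python
import numpy as np

B = 3                       # highest harmonic number
N = 2 * B + 1               # 7 samples are enough
t = np.arange(N) / N        # one period, evenly sampled

# fundamental with amplitude 2.0, third harmonic with amplitude 0.5
signal = 2.0 * np.sin(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * 3 * t + 0.8)

spectrum = np.fft.rfft(signal) / N
amplitudes = 2.0 * np.abs(spectrum[1:])    # harmonics 1..B

assert np.allclose(amplitudes, [2.0, 0.0, 0.5])
```

With fewer than 2B + 1 samples the harmonics alias onto each other and the coefficients can no longer be separated.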
Imperial Beach Algebra 2 Tutor
Find a Imperial Beach Algebra 2 Tutor
...I hope to be able to help someone in any course that they are struggling in soon.In obtaining both my physics and engineering degree, calculus has been a necessary part of my everyday life. I
can give examples of why the subject is useful as well as explain the best way to apply the different co...
19 Subjects: including algebra 2, chemistry, calculus, writing
I am a San Diego native that chose to stay in this beautiful city and go to University of California, San Diego (UCSD).I graduated from high school as a lifetime member of the California Scholars
Federation, as an AP Scholar with Distinction, as a National AP Scholar, and as an IB Diploma recipient....
42 Subjects: including algebra 2, reading, English, Spanish
...Additionally, I lived in France for five years, which helped develop very strong and natural French language skills. I also spent one year tutoring in Seattle, WA, working with special needs
students pursuing their GEDs. I am an effective tutor because of my skill in assessing my student's needs, but also because of my ability to empathize with young learners.
14 Subjects: including algebra 2, French, geometry, ESL/ESOL
...For my PhD in Complexity Science, I earned this degree through research contributions based around computer programming. I work now as a post doc, where computer programming is a major
component of my work. I hold a master's degree in computer science and a PhD in computational neuroscience.
26 Subjects: including algebra 2, physics, calculus, statistics
I am passionate about teaching and learning mathematics. I've been working in the classroom and with students one-on-one for 14 years. One of the greatest thrills in life is to see the spark of
understanding in a student's eyes when a new concept is learned.
6 Subjects: including algebra 2, algebra 1, GED, trigonometry | {"url":"http://www.purplemath.com/Imperial_Beach_algebra_2_tutors.php","timestamp":"2014-04-18T00:29:48Z","content_type":null,"content_length":"24277","record_id":"<urn:uuid:26cb1426-48fe-4ad8-973f-a800977fc102>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
Python - SciPy : forecast from a periodic set of points
I'm learning SciPy and I have a problem that should be easy to solve if I find the right function : I have a set of points (2D) that cover 10 periods. I want to forecast what would be values for the
next period.
I suppose I have to create a periodic function that corresponds to my points and then to take points on this model, but I don't find how to do that !
Could you help me ?
Thanks in advance
python scipy fft
closed as not a real question by Josh Caswell, TheWhiteRabbit, Saul, Jon Egerton, Rory McCrossan Feb 11 '13 at 10:29
1 Answer
Extrapolation is never easy. It's almost always poor, except if you have some strong assumptions about the data.
In your case, you could try it, but I don't think there's something available immediately.
I'd try:
• determine the first maximum of the autocorrelation
• extend your signal by shifting it with a multiple of this value
If needed, do interpolation afterwards.
import numpy as np
import matplotlib.pyplot as plt

def autocorr(x):
    result = np.correlate(x, x, mode='full')
    return result[result.size // 2:]   # keep non-negative lags only

data = np.sin(np.linspace(0, 30, 300)) + np.random.random(300) * 0.1
acorr = autocorr(data)
acorr_diff = np.diff(acorr)
# local maxima of the autocorrelation: slope changes from >= 0 to < 0
maxima = [i + 1 for i in range(acorr_diff.shape[0] - 1)
          if acorr_diff[i] >= 0 and acorr_diff[i + 1] < 0]
for m in maxima:
    plt.axvline(m, color="b", alpha=0.5)
first_max = maxima[0]   # lag of the first maximum = estimated period
# extend the signal by four whole periods
new_data = np.hstack([data[:4 * first_max], data])
# plt.plot(data, "b-", alpha=0.1)
This is only a very basic implementation. It has limitations, for sure, but the principle should be clear.
Thank you for your answer. I don't look for a very precise result; the most important thing is how I manage to get a result. But here my points are pseudo-periodic, so I can't just paste
another period behind – Corentin Geoffray Feb 12 '13 at 15:35
Minimal Letter Frequency in N-Th Power-Free Binary Words (1997)
by Roman Kolpakov , Gregory Kucherov
Venue: in Mathematical Foundations of Computer Science 1997, Lecture Notes in Comput. Sci., 1295, eds. I. Privara and P. Ru˘zička
Citations: 1 - 0 self | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.8021","timestamp":"2014-04-16T18:06:13Z","content_type":null,"content_length":"24295","record_id":"<urn:uuid:aff3a8d3-2aef-42cb-8bb0-fd38bb375a74>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patterns with Polygons
All around us we see geometrical designs – on wallpaper, carpets and fabrics for instance. Some of the simplest patterns are constructed using regular polygons, which are straight line shapes with
equal sides and equal angles. If these are fitted together on a flat surface, edge to edge with no gaps, to make a repeating pattern which could be continued indefinitely in any direction, the result
is called a plane tessellation.
Regular tessellations are made of only one type of regular polygon. As only equilateral triangles, squares and hexagons fit together in the right way, there are only three regular tessellations. More
interesting are tessellations constructed using two or more types of regular polygon. Patterns made so that every vertex is the same are called semi-regular tessellations.
(A vertex is a point where two or more lines meet.)
This example of a semi-regular tessellation has 2 squares and 3 equilateral triangles at each vertex – always arranged in the same order:
2 triangles, then a square, a triangle and a square.
A neater way of writing down this vertex pattern uses index notation: 3².4.3.4.
A different vertex pattern using the same polygons may produce a different tessellation. Try it out – and then try to find all the semi-regular tessellations. There are only 8 different types, but
one of these has 2 forms which are mirror images of each other.
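Why only a handful? Around any vertex the interior angles must total exactly 360°, and a regular n-gon's interior angle is 180(n−2)/n degrees. A short search (a sketch in Python, not part of the original article; exact arithmetic with fractions avoids rounding trouble) finds every multiset of regular polygons that fits:

```python
from fractions import Fraction

def vertex_figures():
    """Every sorted tuple of regular-polygon side counts whose interior
    angles total 360 degrees.  With k polygons meeting at a vertex,
    sum of 180(n-2)/n = 360 rearranges to sum of 1/n = (k - 2)/2."""
    results = []
    for k in range(3, 7):                 # 3 to 6 polygons can meet at a vertex
        def search(prefix, n_min, remaining):
            slots = k - len(prefix)
            if slots == 0:
                if remaining == 0:
                    results.append(tuple(prefix))
                return
            if remaining <= 0:
                return
            n = n_min
            while Fraction(slots, n) >= remaining:  # slots polygons can still reach the target
                if Fraction(1, n) <= remaining:
                    search(prefix + [n], n, remaining - Fraction(1, n))
                n += 1
        search([], 3, Fraction(k - 2, 2))
    return results

figures = vertex_figures()
```

Seventeen multisets pass the angle test — from (3, 7, 42) down to six triangles — but only eleven of them extend to edge-to-edge tilings of the whole plane: the 3 regular and 8 semi-regular tessellations.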
Drawing out tessellations using ruler, compasses and protractor for each shape can take a very long time, so you might like to experiment with other methods. You could, for instance, carefully cut
out accurate polygon shapes in cardboard, then draw round them. (Your school might have ready-made shapes or templates for this kind of activity.)
If you enjoy working with computers, you could use Logo or a graphics package to construct tessellations. Some of you may even have access to software specially designed for exploring tessellations.
What about repeating patterns using regular polygons which have two or more types of vertex? We illustrate two of these here – the one above uses two different shapes and three different vertex
patterns. The design below involves three shapes and three vertex patterns:
3.4.6.4, 3.4².6 and 3.6.4².
Borrowing from the language of music, we will call these patterns demi-semi-regular tessellations. Can you find more – or any other interesting patterns with polygons?
simplify this function
October 28th 2007, 09:11 AM #1
Oct 2007
simplify this function
r(s) = ((1+s^2)^-1, root 2 s (1+s^2)^-1, 1 - (1+s^2)^-1)
I understand this simplification up to a point:
r(s) = (1+s^2)^-1 (1, root 2 s, s^2) (green I understand, red I don't)
The (1+s^2)^-1 substitutes nicely for the first 2 sections, but what is the algebra required for the last part?
I mean s^2 multiplied by (1+s^2)^-1 is equal to 1- (1+s^2)^-1 but I cannot work out how to derive this.
Can anyone help please?
1 - (1+s^2)^-1 = (1+s^2)/(1+s^2) - 1/(1+s^2)
= ((1+s^2) - 1)/(1+s^2)
= s^2/(1+s^2)
correct me if I'm wrong
Ni hao! [Hello!] and thanks for the help.
your message:
(1+s^2)^-1 = 1/ 1+s^2
times s^2
= s^2/ 1+ s^2
(1+s^2)^-1 = 1/ 1+s^2 - yes understand that
times s^2 - why?
how did you know to multiply by s^2 at the end?
What were the algebraic steps on the run up to that answer?
feichang ganxie [thank you very much] =]
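For anyone following along, the identity the thread is after is easy to spot-check numerically (my own quick sanity check, not from the original posts):

```python
# verify  1 - (1 + s^2)^-1  ==  s^2 (1 + s^2)^-1  at a few sample points;
# algebraically: 1 - 1/(1+s^2) = ((1+s^2) - 1)/(1+s^2) = s^2/(1+s^2)
for s in (-2.0, -0.5, 0.0, 1.0, 3.7):
    lhs = 1 - 1 / (1 + s**2)
    rhs = s**2 / (1 + s**2)
    assert abs(lhs - rhs) < 1e-12
```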
October 28th 2007, 10:03 AM #2
Junior Member
May 2007
October 28th 2007, 10:13 AM #3
Oct 2007 | {"url":"http://mathhelpforum.com/calculus/21500-simplify-function.html","timestamp":"2014-04-18T06:54:48Z","content_type":null,"content_length":"33043","record_id":"<urn:uuid:f97c92f5-1048-4350-92cd-9faa04d0a263>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Should numpy.sqrt(-1) return 1j rather than nan?
Travis Oliphant oliphant.travis at ieee.org
Thu Oct 12 01:26:17 CDT 2006
David Goldsmith wrote:
> Travis Oliphant wrote:
>> pearu at cens.ioc.ee wrote:
>>> Could sqrt(-1) made to return 1j again?
>> Not in NumPy. But, in scipy it could.
> Ohmigod!!! You are definitely going to scare away many, many potential
> users - if I wasn't obliged to use open source at work, you'd be scaring
> me away.
Why in the world does it scare you away. This makes no sense to me.
If you don't like the scipy version don't use it. NumPy and SciPy are
not the same thing.
The problem we have is that the scipy version (0.3.2) already had this
feature (and Numeric didn't). What is so new here that is so scary ?
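For readers landing here now: both behaviours survive in modern NumPy, so the distinction the thread argues about can be seen side by side (np.emath is today's home of the complex-promoting variant, roughly what scipy's top-level sqrt did at the time):

```python
import numpy as np

with np.errstate(invalid="ignore"):   # silence the invalid-value warning
    real_domain = np.sqrt(-1)         # NumPy's default stays real -> nan

complex_domain = np.emath.sqrt(-1)    # promotes to complex -> 1j

assert np.isnan(real_domain)
assert complex_domain == 1j
```

Keeping the fast, real-domain behaviour as the default while offering an explicit complex-aware alternative is the compromise NumPy ultimately settled on.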
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-October/011362.html","timestamp":"2014-04-19T04:38:18Z","content_type":null,"content_length":"4054","record_id":"<urn:uuid:a1b162c9-b386-4102-835b-8b694c9dd939>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roslyn Heights Prealgebra Tutor
Find a Roslyn Heights Prealgebra Tutor
...As an author, I've spent eight years leading convention seminars. Through NASA TV, I've spoken in front of 19 million people worldwide at a time. Was I afraid?
55 Subjects: including prealgebra, reading, English, Spanish
...Students will learn the 9 elementary argument forms (the rules of inference) and the 10 logical equivalencies (rules of replacement) and how to use these forms to construct argument proofs.
They will also learn how to construct and work with truth tables and truth trees. I try to teach not just...
34 Subjects: including prealgebra, English, reading, GED
...In addition to English and math, I feel comfortable teaching music theory and composition, general history, and literature.Passing levels 1-3 of the Chartered Financial Analyst (CFA)
Examination, the gold standard for financial analysis, qualifies me to teach the quantitative sections of the MCAT...
37 Subjects: including prealgebra, English, reading, writing
...I took college classes English 101 and English 102 and received an A and B+. Subjects such as math, history, and a select few sciences I am also good at. I am great at earth science, physics
and lower grade science. I am very patient and enjoy tutoring.
19 Subjects: including prealgebra, English, grammar, writing
...At one point during the year, I was ranked number 6 in the country in Public Forum Debate, putting me in the top .5% of debaters nation-wide. Public speaking is my thing, and I have worked with
over 200 students over the past five years and assisted them in honing their public speaking skills. ...
43 Subjects: including prealgebra, English, reading, algebra 1 | {"url":"http://www.purplemath.com/Roslyn_Heights_Prealgebra_tutors.php","timestamp":"2014-04-17T11:05:51Z","content_type":null,"content_length":"24207","record_id":"<urn:uuid:4ed11c2f-c96c-41b5-9613-9f73e740997a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
A simple polytomy resolver for dated phylogenies
1. Tyler S. Kuhn^1,
2. Arne Ø. Mooers^2 and
3. Gavin H. Thomas^3,*
Article first published online: 21 MAR 2011
DOI: 10.1111/j.2041-210X.2011.00103.x
Methods in Ecology and Evolution
Additional Information
How to Cite
Kuhn, T. S., Mooers, A. Ø. and Thomas, G. H. (2011), A simple polytomy resolver for dated phylogenies. Methods in Ecology and Evolution, 2: 427–436. doi: 10.1111/j.2041-210X.2011.00103.x
Publication History
1. Issue published online: 10 OCT 2011
2. Article first published online: 21 MAR 2011
3. Received 13 August 2010; accepted 3 February 2011 Handling Editor: Emmanuel Paradis
• birth–death model;
• gamma;
• imbalance;
• phylogenetics;
• polytomy;
• simulation;
• supertree
1. Unresolved nodes in phylogenetic trees (polytomies) have long been recognized for their influences on specific phylogenetic metrics such as topological imbalance measures, diversification rate
analysis and measures of phylogenetic diversity. However, no rigorously tested, biologically appropriate method has been proposed for overcoming the effects of this phylogenetic uncertainty.
2. Here, we present a simple approach to polytomy resolution, using biologically relevant models of diversification. Using the powerful and highly customizable phylogenetic inference and analysis
software beast and r, we present a semi-automated ‘polytomy resolver’ capable of providing a distribution of tree topologies and branch lengths under specified biological models.
3. Utilizing both simulated and empirical data sets, we explore the effects and characteristics of this approach on two widely used phylogenetic tree statistics, Pybus’ gamma (γ) and Colless’
normalized tree imbalance (I[c]). Using simulated pure birth trees, we find no evidence of bias in either estimate using our resolver. Applying our approach to a recently published Cetacean
phylogeny, we observed the expected small positive bias in γ and decrease in I[c].
4. We further test the effect of polytomy resolution on diversification rate analysis using the Cetacean phylogeny. We demonstrate that using a birth–death model to resolve the Cetacean tree with
20%, 40% and 60% of random nodes collapsed to polytomies gave qualitatively similar patterns regarding the tempo and mode of diversification as the same analyses on the original, fully resolved
5. Finally, we applied the birth–death polytomy resolution approach to a large (>5000 tips), but unresolved, supertree of extant mammals. We report a distribution of fully resolved model-based trees,
which should be useful for many future analysis of the mammalian supertree.
In phylogenetic analysis, polytomous nodes (multifurcations rather than bifurcations) can be considered ‘soft’ (incomplete taxonomic resolution; Maddison 1989; DeSalle, Absher, & Amato 1994) or
‘hard’ (multiple simultaneous splitting events; Hoelzer & Melnick 1994a,b). Unlike ‘hard’ polytomies that reflect the true topology, the presence of ‘soft’ polytomies, which represent missing or
ambiguous data, will influence results from many types of phylogenetic analysis. Although some phylogenetic methods (e.g. BiSSE, Maddison, Midford, & Otto 2007) have been adapted to allow for
polytomies (FitzJohn, Maddison, & Otto 2009), most methods require complete, bifurcating trees, e.g. identifying changes in diversification rates through time (e.g. with laser, Rabosky 2006),
estimating phylogenetic diversity and isolation scores (e.g. the EDGE of Existence program, Isaac et al. 2007), and calculating tree shapes (most indices require fully resolved trees). The problem of
resolving polytomies is particularly acute when missing tips are added to phylogenies based on taxonomic information, as is frequently the practice when constructing clade-wide supertrees (see, e.g.
Angiosperms, Davies et al. 2004; Ruminants, Hernandez & Vrba 2005; Primates, Ranwez et al. 2007). In doing this, many nodes are formed with no age estimates, and many additional polytomies are
We suggest that there are two general approaches for appropriately dealing with polytomous nodes, both of which can be implemented within a Bayesian framework. The first involves the addition of
missing taxa as empty sequences at the tree inference stage where the placement of missing species can be constrained using priors on topology. Topology priors might be derived from published
phylogenies or taxonomic information. This has the advantage that the full suite of Bayesian phylogenetic tools (e.g. relaxed molecular clocks, molecular evolutionary parameters, tree priors) can be
readily incorporated into the tree-building process along with the missing taxa. However, many previously published supertrees, with a high proportion of polytomies and representing a significant
amount of research time, cannot be dealt with in this manner. We present a simple approach suitable for application to previously published trees (particularly supertrees) by modelling
diversification at polytomies in dated phylogenetic trees. At present, there is no widely accepted model-based method of dealing with soft polytomies. Current methods either involve random resolution
of the tree topology (either without specifying branch lengths or with branch lengths drawn from a specific distribution) or conversely by allowing analyses to work around the effects of polytomies
rather than explicitly attempting to resolve them (see, e.g. FitzJohn, Maddison, & Otto 2009). Many uses of phylogenies require branch length distribution for all tips, so we do not consider
topology-only approaches here.
Several methods of assigning branch lengths exist. First, Purvis (1995) introduced an approach suggested by Sean Nee in which unknown node ages are proportional to the log of the daughter clade
divided by the log of the parent clade (the LnN approach). This has been applied to dating unknown nodes within published phylogenies (Purvis 1995; Bininda-Emonds et al. 2007; Fritz, Bininda-Emonds,
& Purvis 2009), but we note that this approach cannot properly be applied to polytomies that contain resolution nested within polytomies (see discussion in model comparison section). Second, branch
lengths can be distributed evenly between the known parent age and the known daughter age (the equal splits, or EQS, approach; see Webb, Ackerly, & Kembel 2008). Third, branch lengths can be randomly
assigned to the paths created during polytomy resolution. To our knowledge, this has not been published (but see, Day, Cotton, & Barraclough 2008), and we refer to two variants of the random
approach: RND and RND2. The EQS, RND and RND2 approaches may be viable alternative approaches to polytomy resolution, as they do not make reference to any particular a priori model. However, their
behaviour, inherent biases and impacts on phylogenetic inference have yet to be studied.
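To make the random variants concrete, here is a minimal sketch (my own illustration, not the authors' code, which uses r/APE and beast) of one way an RND-style resolver can work: repeatedly join two random lineages at an age drawn uniformly between the older member of the pair and the parent node's age, which guarantees non-negative branch lengths:

```python
import random

def resolve_polytomy_rnd(parent_age, child_ages, seed=1):
    """Randomly resolve one polytomy.  Lineages are (age, newick) pairs;
    the tip labels t0, t1, ... are hypothetical placeholders."""
    rng = random.Random(seed)
    lineages = [(age, "t%d" % i) for i, age in enumerate(child_ages)]
    while len(lineages) > 1:
        a = lineages.pop(rng.randrange(len(lineages)))
        b = lineages.pop(rng.randrange(len(lineages)))
        age = rng.uniform(max(a[0], b[0]), parent_age)   # always at least as old as both children
        newick = "(%s:%.3f,%s:%.3f)" % (a[1], age - a[0], b[1], age - b[0])
        lineages.append((age, newick))
    return lineages[0]

# a four-way polytomy: three extant tips plus one tip dated 2 units back,
# hanging from a parent node aged 10 (units and ages are made up)
root_age, tree = resolve_polytomy_rnd(10.0, [0.0, 0.0, 0.0, 2.0])
```

Each draw gives a different topology and set of node ages; repeating the draw many times yields the kind of pseudo-posterior distribution of resolved trees that the BD approach samples more formally under an explicit diversification model.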
We propose an alternative approach that uses the constant rate birth–death model to sample from topologies and branch length distributions at polytomies, referred to herein as the BD approach.
Analyses can then be applied to the resulting pseudo-posterior distributions of trees. This approach leverages the power of the beast phylogenetic inference package (Drummond et al. 2002; Drummond &
Rambaut 2007) to explore tree space. We demonstrate the efficacy of this model-based approach to polytomy resolution by comparing its behaviour to other previously used resolution approaches, as well
as through its application to both simulated and published phylogenies. We explore how the polytomy-resolved phylogenies perform in commonly used tests of lineage diversification. We also show that
the method can be applied to large trees by providing a distribution of fully resolved mammal supertrees generated from a recently updated mammalian supertree containing 5020 terminal taxa but >2500
unresolved nodes (Fritz, Bininda-Emonds, & Purvis 2009).
Resolving polytomies with a birth–death model
beast (Drummond & Rambaut 2007) implements Bayesian approaches to phylogenetic and phylogeographic analyses. Priors can be placed on, for example, the molecular evolutionary parameters, branch rates
and tree topology. A useful (though not unique) property of beast is that it allows sampling from the prior only and the application of prior constraints to both the tree topology and branch lengths.
Importantly, beast does not produce negative branch lengths, a common stumbling block for many polytomy resolution approaches. This, in addition to the prior-only sampling scheme and flexible XML
input language, makes beast particularly well-suited as a general polytomy resolution tool. Specifically, it is possible to input a partially resolved tree in which the known resolved topology and node
ages are constrained, allowing the Bayesian Markov chain Monte Carlo search algorithm to permute the unresolved portions of the tree under a specific biological model, such as (but not limited to)
the constant rate birth–death model.
The polytomy resolution approach presented here comprises two stages: (1) production of an XML input file containing the topology constraints and (2) model-based tree permutations in
beast. We provide two scripts (see supplementary materials) using the library APE (Paradis, Claude, & Strimmer 2004) for the r statistical language (R Development Core Team 2010) that define topology
constraints in which the dichotomous portions of the user-input tree remain fixed, leaving the polytomies free to be permuted. The first (stand-alone) script writes a complete XML input file
including topology constraints and a full set of beast input commands, including specification of a birth–death tree prior. The second script only defines topology constraints, allowing the user to
adjust the model settings either by directly editing the XML or using the program BEAUti (Drummond & Rambaut 2007). We encourage users to take advantage of the flexibility of the Bayesian framework
to explore broad but appropriate prior distributions. The specific model used will of course depend on each researcher’s interests and data set. In the example scenarios, we will utilize the
stand-alone BD model r script. Within the present BD script, a uniform prior is employed for both the mean growth rate (λ − μ) and relative death rate (μ/λ) parameters. The beast MCMC is then used to
estimate these parameters based on the distribution of constrained nodes. In some situations, it may be appropriate or required to specify different prior distributions on these parameters, or even
fix their values; however, here, we wish to provide a widely applicable preset approach and demonstrate its functionality.
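The constraint-extraction step can be illustrated with a small sketch. The actual scripts are written in R using APE; the following stdlib-only Python version (with a toy nested-list tree representation, not the scripts' real data structures) shows the core idea: every fully bifurcating clade of the input tree is recorded as a monophyly constraint, while polytomies are left free for the MCMC to permute.

```python
def tip_set(node):
    """Return the set of tip labels under a node.
    Tips are strings; internal nodes are lists of children."""
    if isinstance(node, str):
        return {node}
    tips = set()
    for child in node:
        tips |= tip_set(child)
    return tips

def resolved_clades(node, constraints=None):
    """Collect the tip sets of internal nodes whose subtree is fully
    bifurcating; these are the clades to constrain in the BEAST input."""
    if constraints is None:
        constraints = []
    if isinstance(node, str):
        return True, constraints              # a tip is trivially resolved
    child_ok = [resolved_clades(c, constraints)[0] for c in node]
    is_resolved = len(node) == 2 and all(child_ok)
    if is_resolved:
        constraints.append(frozenset(tip_set(node)))
    return is_resolved, constraints

# A 5-tip tree with a basal trichotomy: only (A,B) and (D,E) get constrained,
# so the MCMC is free to rearrange how those clades and C attach to the root.
ok, cons = resolved_clades([["A", "B"], "C", ["D", "E"]])
```

Each constrained tip set would then be written out as a taxon-set/monophyly block in the XML; the polytomy-resolver scripts additionally constrain the known node ages.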
We stress that by using a birth–death prior to resolve polytomies, our proposed method is necessarily biased towards favouring the birth–death model in analyses of diversification. Because most known
phylogenies do not conform to a constant rate birth–death process, most applications of our approach will therefore be biased. However, we demonstrate below that the bias is predictable and, in the
context of diversification analyses, conservative because constant rate birth–death is the standard null model.
Testing the approach
proof of concept
We resolved a single 10-tip polytomy to ensure that the estimated model parameters (e.g. the birth and death rates) were appropriately optimized and that the resulting tree distribution conformed to
a birth–death model. We suggest that, assuming convergence and mixing of relevant tree statistics (see below), a posterior distribution of 10 000 trees will generally be adequate to explore tree
space. We conducted preliminary analyses (not shown) to estimate the likely burnin period and found it to be short even for large trees. Consequently, we ran analyses for 11 111 000 iterations,
sampling trees every 1000 iterations with a 10% burnin to yield posterior distributions of 10 000 trees. This is the default within the stand-alone r script but can readily be changed as appropriate
in either the r script or the XML file generated by the script. For all beast output, we assessed mixing, convergence and the adequacy of the 10% burnin by visual inspection of three statistics in
Tracer v1.5 (Rambaut & Drummond 2009): net diversification rate (λ − μ), relative extinction rate (μ/λ) and root age.
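The run-length arithmetic above can be checked directly:

```python
iterations = 11_111_000
sample_every = 1_000
samples = iterations // sample_every   # 11 111 sampled trees
kept = 10_000                          # target posterior size
burnin = samples - kept                # 1 111 trees discarded
assert burnin / samples < 0.10001      # i.e. a ~10% burnin
```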
We consider these three statistics to be the most relevant for determining whether tree space has been adequately sampled because they refer directly to the tree-sampling prior or to the tree
structure itself. With beast, a particular node age will be changed in 50% of the possible move types affecting that node (Drummond et al. 2002), and as a result, a trace of the node’s age represents
a conservative estimate of the number of changes made to it. We therefore use the root node as a standard marker for all the other nodes that are being sampled (i.e. nodes involved in polytomies) by
incorporating a small amount of uncertainty in the prior on root age. As in standard beast analyses, we regard a post-burnin effective sample size (ESS) value >200 as evidence that
stationarity has been reached. We note that because it is tightly constrained, the root age may not be useful as a means of assessing convergence between independent runs. For this 10-tip polytomy,
ESS values calculated in Tracer v1.5 were between 7000 and 9500 for λ − μ, μ/λ and root age (although we note that estimated λ − μ and μ/λ are not independent of one another).
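The ESS corrects the raw sample count for autocorrelation in a trace. As a rough illustration only (a simplified truncated-autocorrelation estimator, not Tracer's exact procedure):

```python
def ess(x, max_lag=None):
    """Effective sample size: ESS = N / (1 + 2 * sum of positive-lag
    autocorrelations), truncating at the first non-positive lag.
    A common heuristic; Tracer's implementation differs in detail."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0:
        return float(n)
    tau = 1.0                      # integrated autocorrelation time
    limit = max_lag or n // 2
    for lag in range(1, limit):
        cov = sum((x[i] - mean) * (x[i + lag] - mean)
                  for i in range(n - lag)) / n
        rho = cov / var
        if rho <= 0:               # stop once autocorrelation dies out
            break
        tau += 2 * rho
    return n / tau

# An alternating trace is effectively independent (ESS ~ N), whereas a
# trace stuck in one value then another is highly autocorrelated (ESS << N).
```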
Comparison among resolution methods
To assess the relevance of our proposed polytomy resolution approach, we compare its behaviour to that of two previously used approaches (LnN and EQS) as well as two unpublished random resolution
approaches (RND and RND2). These non-model-based methods involve a two-step process of random topology resolution followed by branch length estimation. The first step is identical in all four
non-model-based approaches, whereas branch length inference differs. In contrast, tree topology and branch lengths are estimated simultaneously in the BD approach. All r-scripts used for method
comparisons are available from the authors upon request.
Purvis (1995) developed a method to determine unknown node ages based on the theoretical relationship between clade size and node age distribution within either the pure birth or random birth–death
process (Grafen 1989; Nee in Purvis 1995). Purvis proposed the following relationship:

T[D] = T[A] × ln(N[D]) / ln(N[A])

where the age of the daughter node (T[D]) is a function of the size of the daughter clade (N[D]), the size of the ancestral clade (N[A]) and the age of the ancestral node (T[A]). We refer to
this approach as the LnN approach. Although this approach has been used for providing an age estimate for undated nodes within supertrees (Bininda-Emonds et al. 2007), it is prone to generating negative
branch lengths where there is resolved tree topology nested within a polytomy. The standard response to this is simply to place the node age equidistant between the age of the mother and daughter
node. A fuller discussion of this challenge with examples is included in the supplementary materials.
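The LnN rule and its negative-branch-length failure mode can be sketched as follows (Python used purely for illustration; the clade sizes and ages below are made up):

```python
import math

def lnn_age(t_anc, n_anc, n_dau):
    """Purvis' expected daughter-node age: T[D] = T[A] * ln(N[D]) / ln(N[A])."""
    return t_anc * math.log(n_dau) / math.log(n_anc)

# Dating a 5-species daughter clade inside a 10-species clade of age 1.0:
lnn_age(1.0, 10, 5)   # ~0.699

# Failure mode: a 2-species clade nested inside the polytomy gets LnN age
# ~0.301, so if one of its own descendant nodes is already dated at, say,
# 0.8, the connecting branch would be negative. The standard fallback then
# places the new node midway between the ages of mother and daughter nodes.
assert lnn_age(1.0, 10, 2) < 0.8
```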
Unlike the LnN approach, both the EQS and RND approaches were developed for resolving polytomies with nested constraints. For each polytomous node, the total path length from the polytomy to a
constrained daughter node along a newly resolved path is divided using a broken stick method. For the EQS approach, the total path length between polytomy and constrained daughter node is split
equally, assigning the length of each branch, l[b], along this path using

l[b] = l[T] / n

where n is the number of edges (or sticks) and l[T] is the total path length between the polytomy and the constrained daughter node. For the RND approach, the total path length from polytomy to
constrained daughter node is split into n sections of random length where the sum of all n random sections must equal the total path length. To avoid the negative branch length issues discussed for
the LnN approach, the EQS and RND approaches must estimate the path lengths of the shortest polytomy to constrained daughter node path first. If no constrained daughter nodes exist, path lengths will
then be estimated sequentially starting with the path with the most new nodes.
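In code, the two branch-length rules amount to the following (an illustrative Python sketch, not the R implementation used in the comparisons):

```python
import random

def eqs_split(total, n):
    """EQS: split a path of length `total` into n equal edges (l_b = l_T / n)."""
    return [total / n] * n

def rnd_split(total, n, rng=random):
    """RND: split into n random segments that sum to `total` (broken stick),
    by sorting n-1 uniform break points along the path."""
    cuts = sorted(rng.uniform(0, total) for _ in range(n - 1))
    points = [0.0] + cuts + [total]
    return [b - a for a, b in zip(points, points[1:])]
```

Both rules, by construction, return non-negative edge lengths that sum exactly to the path length, which is why the shortest constrained path must be handled first.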
The RND2 approach was developed for testing the single polytomy scenarios and is not easily transferable to a nested constraint polytomy resolution application. It estimates edge
lengths sequentially through the tree, beginning with the first node up from the root, then following that path to a tip, before returning to the next node to tip path. At each edge, a random number
is drawn from a uniform 0–1 distribution. This value represents the proportion of the remaining path length that is assigned to the current edge.
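A sketch of this sequential rule (again illustrative Python):

```python
import random

def rnd2_path(total, n, rng=random):
    """RND2: walk root-to-tip along a path of length `total`; at each of the
    first n-1 edges draw u ~ U(0,1) and assign u * (remaining length); the
    last edge takes whatever remains."""
    remaining = total
    edges = []
    for _ in range(n - 1):
        e = rng.uniform(0, 1) * remaining
        edges.append(e)
        remaining -= e
    edges.append(remaining)
    return edges
```

Because each draw takes a uniform fraction of what remains, rootward edges are longer on average than tipward ones, which is at least qualitatively consistent with the positive γ bias reported for RND2 below.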
We test the behaviour and inherent biases of these four approaches, as well as the BD approach, using the simplest-case scenario: a single polytomy (number of taxa = 10, 100, 500) with no internal
constraints and a root age of 1. We simulate trees (10 000 trees for N = 10; 1000 trees for N = 100 and 500) under each method and compare the summary tree statistic, Pybus’ gamma, γ (Pybus & Harvey
2000) using the program TreeStat v1.2 (Rambaut & Drummond 2008). Distributions for these parameters are shown using a modified violin plot (Hintze & Nelson 1998; Adler 2005).
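Pybus' γ summarizes whether internal nodes sit closer to the root (γ < 0) or to the tips (γ > 0) than expected under a pure birth process. The paper uses TreeStat; the following stdlib sketch computes the statistic itself from the internode intervals g[2]…g[n] of an ultrametric tree (g[k] being the time during which k lineages exist):

```python
import math

def pybus_gamma(g):
    """Pybus & Harvey's (2000) gamma statistic from internode intervals.
    g[0] is the interval with 2 lineages, g[1] with 3, ..., g[-1] with n."""
    n = len(g) + 1                   # number of tips
    if n < 3:
        raise ValueError("gamma needs at least 3 tips")
    weighted = [(k + 2) * gk for k, gk in enumerate(g)]   # k+2 = lineage count
    T = sum(weighted)                # total (lineage-weighted) tree length
    partial = 0.0
    acc = 0.0
    for w in weighted[:-1]:          # cumulative sums up to n-1 lineages
        acc += w
        partial += acc
    mean_partial = partial / (n - 2)
    return (mean_partial - T / 2) / (T * math.sqrt(1.0 / (12 * (n - 2))))
```

For instance, with three tips, intervals g = [2, 1] (a long early two-lineage interval, nodes near the tips) give γ > 0, while g = [1, 2] give γ < 0.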
For the smallest polytomy (N = 10), the EQS, LnN and RND2 methods perform poorly (Fig. 1a). At this tree size, both the EQS and LnN approaches produce negatively biased γ estimates. This behaviour
for the LnN approach was also noted by Vos (2006). Conversely, the RND2 approach produces positively biased γ estimates. In all cases, these biases appear to be strongly size dependent (Fig. 1b,c).
The RND approach performs better at all tree sizes but shows evidence of a size-dependent bias in γ. Most significantly, this γ bias shifts from slightly negative for N = 10, to positive for N = 100
and 500. This bias is problematic for application of the RND approach to a wide variety of size-varying trees, in particular for very large supertrees with N >> 1000. In contrast with these four
approaches, the BD approach, although it does produce slightly positively biased γ estimates, shows no evidence of size dependency. This small, size-independent bias appears to be a result of the birth–death tree
prior implemented in beast. When the single polytomy scenario is run through beast using a Yule Process tree prior, rather than a birth–death tree prior, there is no observed bias in γ (see
supplementary materials). It is worth noting that although a single polytomy represents the simplest scenario, it is the most challenging scenario for the BD approach. This is because there are no
internal node constraints providing information for estimation of the λ − μ and μ/λ parameters. Because of the significant size-dependent biases in the LnN and RND2 approaches, we do not attempt to
modify these approaches for nested constraint polytomy scenarios.
Simulated birth–death trees
To compare the behaviour of the BD, EQS and RND approaches on a more relevant scenario involving a tree with nested node constraints, we simulated two sets of 10 trees, one with 64 tips and one with
250 tips. For each data set, we randomly selected and collapsed 40% of the nodes back to polytomies. Both sets of trees were simulated under a birth–death model where λ = 0·1 and μ = 0·0 using the r
package geiger (Harmon et al. 2008). Further simulations using varying λ and μ parameters are discussed below for the BD approach.
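The polytomizing step can be sketched as follows (illustrative Python on a nested-list tree; here each internal node is collapsed into its parent independently with probability p, rather than selecting exactly 40% of nodes as in the simulations):

```python
import random

def collapse(node, p, rng=random):
    """Collapse each internal node into its parent with probability p,
    splicing its children in and turning bifurcations into polytomies.
    Trees are lists of children; tips are strings; the root is never spliced."""
    if isinstance(node, str):
        return node
    new_children = []
    for child in node:
        child = collapse(child, p, rng)
        if isinstance(child, list) and rng.random() < p:
            new_children.extend(child)   # splice grandchildren in: polytomy
        else:
            new_children.append(child)
    return new_children
```

With p = 1 every internal node collapses, yielding a star tree; with p = 0 the tree is returned unchanged.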
To assess the behaviour of the three polytomy approaches, we compared the original values with the recovered pseudo-posterior distributions for two summary tree statistics, Pybus’ γ (Fig. 2, top) and
Colless’ normalized tree imbalance, I[c] (Fig. 2, bottom; Colless 1982; Mooers & Heard 1997). Only results for the 250-tip tree are reported here; however, results from the 64-tip tree are included
in the supplementary materials. For all trees, the BD method was the only approach able to reliably recover γ and I[c]. The bias in I[c] for EQS and RND is easy to explain: each polytomy is resolved
under a Yule model of diversification, but there may be structure between it and the tips, because the lineages emanating from any given polytomy may represent larger clades. After resolution, such trees
will be less balanced than the Yule expectation.
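Colless' normalized imbalance sums, over every bifurcating internal node, the difference in tip counts between the node's two daughter clades, scaled by its maximum possible value. A minimal sketch for fully bifurcating trees (represented as nested 2-tuples of tip labels):

```python
def colless(node):
    """Raw Colless index: sum over internal nodes of |tips(left) - tips(right)|.
    Returns (tip count, imbalance); tips are strings."""
    if isinstance(node, str):
        return 1, 0
    (nl, il), (nr, ir) = colless(node[0]), colless(node[1])
    return nl + nr, il + ir + abs(nl - nr)

def colless_normalized(tree):
    n, raw = colless(tree)
    return raw / ((n - 1) * (n - 2) / 2)   # max imbalance: caterpillar tree
```

A fully balanced 4-tip tree scores 0; the maximally unbalanced (caterpillar) 4-tip tree scores 1.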
Both the EQS and RND methods showed a strong negative bias in the recovered γ value. This was expected for the EQS approach, which was shown to have a strong negative bias in the single polytomy
scenario. The negative bias observed in the RND approach is unexpected, and at odds with the observed positive bias in the single polytomy scenario. In this case, the bias appears to reflect the
different behaviours of the RND approach when nested constraints exist. For the simple single polytomy scenarios, there was no ‘shortest path’ for the RND method to select first; thus, edge lengths
were systematically assigned, beginning with the first node (in the cladogram) up from the root. Once edge lengths on this path were assigned, the next node-to-tip path was dealt with. However, in
scenarios where there are nested constraints, and thus where a ‘shortest path’ does exist, the RND approach must assign edge lengths to that shortest path first to avoid negative branch lengths. It
appears that as a result of this requirement, node ages are on average shifted more towards the root, resulting in a negative γ bias.
We further illustrate the behaviour of these three methods by showing plots of lineages through time for one of the 10 250-tip trees (Fig. 3), comparing the original tree, the polytomized tree and
the resolved distribution. Results from the other nine 250-tip trees and the ten 64-tip trees are not shown, but are consistent with the result shown in Fig. 3. The BD approach again results in a
better fit of the pseudo-posterior distribution of branching times to the original tree (Fig. 3; top, red line). Both the EQS and RND methods do shift branching times towards the tips (i.e. towards
the original tree), but not far enough; this shortfall is consistent with the observed negative γ bias (Fig. 2; top, grey and black distributions).
These comparisons suggest that although the EQS and RND approaches do not bias the diversification rate to any particular a priori model (e.g. the birth–death process), they do introduce
size-dependent biases in both γ and I[c]. Of particular concern regarding the RND approach is the inconsistency of the observed biases. For small single polytomy trees, the bias in γ is negative, but
as the size of the tree increases, the bias becomes increasingly positive. Further complicating inference made from an RND resolved distribution, if nested constraints are present, the γ bias shifts
to a negative bias. We consider this sufficient evidence to support the BD approach as the most consistently characterizable method for resolving polytomies.
We checked the behaviour of the BD approach on a suite of constant rate birth–death parameters (λ = 0·1, 0·2, 0·3 and μ = 0, 0·05, 0·09, 0·15, 0·25) by simulating multiple 64-tip 10-tree data sets
using the r package geiger. Once simulated, for each of these 10-tree data sets, 40% of the nodes, chosen at random, were collapsed to polytomies (r script available upon request). A less
conservative approach, with 20% of nodes collapsed, produced similar results but with tighter confidence limits. In addition to these 20% and 40% polytomized trees, we generated one data set of 60%
polytomized trees – exceeding the proportion of polytomies within the mammalian supertree. The results were not qualitatively different from those reported for the 40% polytomized trees (see
supplementary materials, Fig. S5). For each of the 10-tree sets, we resolved the trees with the BD approach and recorded the estimated λ − μ and μ/λ parameters as recovered by beast (Fig. 4a,b) and
the two summary tree statistics, Pybus’ γ (Fig. 4c) and Colless’ normalized tree imbalance, I[c] (Fig. 4d).
As expected, both the λ and μ estimates from the pseudo-posterior distribution of resolved trees encompass the original values (λ = 0·1, μ = 0·0), and there was no bias in estimates of γ (pure birth
simulation mean γ = 0; Fig. 4c). Similarly, I[c] does not appear to change substantially between the original simulated tree (mean I[c] = 0·1 for N = 64) and resolved polytomous trees (Fig. 4d).
Empirical test – cetacean radiation
We tested whether our approach could have been used to capture a recently published diversification pattern. Steeman et al. (2009) utilized a near-complete phylogeny of cetaceans (N = 87) to explore
competing hypotheses about the tempo of modern whale diversification. Their results supported a pulse of increased diversification related to periods of ocean restructuring, rather than an initial
radiation of cetacean lineages.
We obtained the fully resolved cetacean phylogeny (provided by Dan Rabosky) and measured its shape with I[c] and γ. We then produced three sets of 10 randomly ‘polytomized’ trees, with either 20%,
40% or 60% of the internal nodes collapsed to polytomies, to mimic a partially resolved and dated supertree of the same group. We then resolved the polytomous nodes for each of these 30 trees using
the BD approach. The birth and death rates for the original fully resolved tree were calculated with beast and compared to those from the BD resolved trees. The estimated net diversification rates (λ
− μ) from the BD resolved trees were similar to, and unbiased with respect to, the original tree (original tree: λ − μ = 0·0952; mean from 20% trees λ − μ = 0·0955; mean from 40% trees λ − μ = 0·0955; mean from
60% trees λ − μ = 0·0954). However, estimates of the relative extinction rate (μ/λ) increased with the proportion of polytomies (original tree: μ/λ = 0·1400; median from 20% trees μ/λ = 0·1519; median from
40% trees μ/λ = 0·1611; median from 60% trees μ/λ = 0·2129).
Comparison of the γ and I[c] metrics between the original, fully resolved cetacean tree and the 30 resolved polytomy trees is presented in Fig. 5. Unlike the simulated birth–death trees discussed
earlier, there is a noticeable, if non-significant, increase in the γ value, indicating a shift in internal nodes towards the tips relative to the original tree, and the resolved cetacean trees are
biased to be more balanced than the true input phylogeny. Both of these patterns are to be expected, because the expected γ for a Yule tree is 0 (vs. the observed value on the original tree of
−0·623), and real-world trees are known to be more imbalanced (here I[c] = 0·164) than expected under a null model of speciation/diversification [here, E(I[c]) = 0·08 for N = 87; Mooers & Heard 1997].
We then reanalysed the 30 resolved polytomy tree distributions using the methods described by Steeman et al. (2009), using the r library laser (Rabosky 2006) and additional code kindly provided by
Dan Rabosky. For these analyses, we combined and resampled the 10 sets of trees from the 20%, 40% and 60% resolved distributions respectively to produce three distributions of 10 000 trees. Steeman
et al. (2009) compared the fit of seven models describing the diversification of cetaceans. These included two constant rates models (pure birth and constant rate birth–death), two models of
diversity dependence (linear diversity dependence and exponential diversity dependence) and three rate shift models based on the timings of major periods of ocean restructuring. The latter three
models were used to test for rate shifts at 35–31 Ma, 13–4 Ma or both periods combined. Steeman et al. (2009) rejected the constant rates models and favoured the combined ocean restructuring model.
Our results are consistent with those from the original cetacean tree (Table 1). The ocean restructuring model is, on average, the best-fitting model and is favoured in 99·2%, 75·5% and 58·9% of
polytomy-resolved trees from the 20%, 40% and 60% polytomy tree distributions, respectively. The major change with decreasing topology resolution (from 20% polytomy through to 60% polytomy trees) is
in the number of times that a constant rates model cannot be rejected. With 20% polytomies, the pure birth model is favoured in only 0·5% of trees, while it is acceptable in 20% of trees in analyses
with 40% polytomies and 32% of trees with 60% polytomies. Again, this is not surprising because the polytomies are resolved using the constant rates model. Nonetheless, this suggests that resolving
polytomies using constant rate birth–death models does not mask strong patterns in the data, even with a high proportion of polytomies. Where the favoured model departs from the true best model, it
does so conservatively. We consider this departure conservative because the standard null hypothesis for diversification analysis is that of a constant rates pure birth or birth–death model, and the
bias inherent in our approach will diminish the chances of rejecting the null hypothesis. Moreover, we note that the parameter estimates for both background and elevated diversification rates for the
original tree fall well within the 95% sampling intervals of the polytomy-resolved trees. We caution that it is not appropriate to use results from the BD approach to determine whether
diversification of a particular phylogeny follows a constant rates model. The BD approach may only be used for testing whether alternate diversification models better fit the data. Although this
distinction is subtle, it is necessary to acknowledge the limitations of the approach. In this manner, acceptance of an alternate diversification model will be conservative.
Table 1. Maximum likelihood analysis of diversification rates in complete and polytomy-resolved cetacean phylogenies. Number of parameters (k),
log-likelihood (LogL), P (from likelihood ratio statistic for each model against the pure birth model) and Akaike Information Criterion (AIC) values are
based on fitting models to the original cetacean tree. Background and elevated diversification rates (lineages/million years) are based on the original
cetacean tree with 95% sampling intervals from 10 000 trees with 40% polytomies resolved in parentheses. The Best model columns are the number of times
each model is the favoured best AIC model from 10 000 trees with 20%, 40% and 60% polytomies, respectively
Model | k | LogL | P | AIC | Best model 20% | Best model 40% | Best model 60% | Background | Elevated
Pure birth | 1 | 22·527 | – | −43·053 | 49 | 2008 | 3233 | 0·104 (0·100–0·112) | –
Birth–death (constant rate) | 2 | 22·527 | – | −41·053 | 0 | 1 | 15 | 0·104 (0·095–0·110) | –
Density dependent, linear | 2 | 22·385 | 0·595 (0·086–0·996) | −40·770 | 0 | 2 | 0 | – | –
Density dependent, exponential | 2 | 22·590 | 0·722 (0·543–0·993) | −41·180 | 0 | 0 | 0 | – | –
Ocean restructuring | 2 | 25·465 | 0·015 (0·003–0·592) | −46·930 | 9916 | 7552 | 5895 | 0·081 (0·075–0·102) | 0·137 (0·113–0·148)
35–31 Ma only | 2 | 23·114 | 0·278 (0·091–0·838) | −42·229 | 35 | 45 | 740 | 0·102 (0·097–0·111) | 0·207 (0·080–0·267)
13–4 Ma only | 2 | 24·753 | 0·035 (0·009–0·795) | −45·507 | 0 | 395 | 117 | 0·085 (0·079–0·104) | 0·134 (0·109–0·146)
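The AIC column of Table 1 follows directly from AIC = 2k − 2 log L, so the model ranking can be reproduced from the k and LogL columns alone:

```python
def aic(k, log_lik):
    """Akaike Information Criterion: AIC = 2k - 2*logL (lower is better)."""
    return 2 * k - 2 * log_lik

# (k, logL) pairs for two of the Table 1 models fitted to the original tree
models = {
    "pure birth": (1, 22.527),
    "ocean restructuring": (2, 25.465),
}
scores = {name: aic(k, l) for name, (k, l) in models.items()}
best = min(scores, key=scores.get)   # ocean restructuring, AIC = -46.930
```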
Mammalian supertree
One of the main applications of phylogenetic analysis to phylogenies with unresolved polytomous nodes is in analyses of taxonomically complete supertrees. It is important that a polytomy resolution
approach is capable of dealing with the compounding issues of such large data sets. In this section, we applied our ‘polytomy resolver’ to the recently published mammalian supertree (Bininda-Emonds
et al. 2007; Davies et al. 2008; as updated by Fritz, Bininda-Emonds, & Purvis 2009). This large supertree (N = 5020 tips), which represents the most complete summary of phylogenetic relationships
for all mammalian taxa, is only 50% resolved. Although the possibility remains that some of these nodes may represent hard polytomies, the majority result from insufficient information.
Owing to the complexity of this analysis, with 2503 bifurcating node constraints, several important considerations needed to be addressed. The first was related to the sheer volume of information.
The input file coding the 2503 constraints required more than 155 000 lines of XML. For this reason, we recommend using the stand-alone input file generator, as some versions of the BEAUti
user interface are not capable of accepting the 5020 mammalian taxa. Similarly, the tree log file output from beast that contained the entire 10 000 trees exceeds 3·8 gigabytes in size and cannot be
opened in most graphical text editors. In addition to these logistic issues, the complexity of this analysis meant we needed to decrease the sampling frequency of the beast analysis. Previously, we
sampled at a frequency of once per 1000 iterations, but test runs of the full supertree showed this was insufficient for this data set, and so we recorded samples every 2000 iterations. We divided
the analysis into seven independent runs, each lasting between 2·5 and 5 million iterations (several analyses were cut short because of power failures). For all analyses, careful examination of
parameter estimates using Tracer v1.5 indicated a burnin period of c. 500 000 steps was required to achieve stationarity. In all independent runs, the parameter estimates converged to similar values.
Once we suspected that a sufficiently independent sample had been obtained, we used the standard diagnostics available to confirm stationarity. We required ESS values for the parameters of interest, mean growth
rate, relative death rate and root age (with a small amount of uncertainty incorporated into this constraint) be well above the accepted value of 200. In this case, ESS values of the final
compilation ranged from 730 to 1250.
From this final compilation, we report two distributions of fully resolved 5020 tip trees; 10 000 trees, representing the full data set, and a much smaller resampled set of 100 trees. Both
distributions are available in the online supplementary materials; note the full 10 000 trees are contained in an 800+ megabyte zip file. Pybus’ γ and I[c] values for both the 100 and 10 000 tree
distributions are shown in Fig. S14 (supplementary materials). As there are no estimates of the ‘true’ γ and I[c] values, we report only the γ and I[c] distributions (mean γ 4·92 [3·85, 5·96] and mean
I[c] 5·55 E−3 [5·35 E−3,5·77 E−3]). In addition, we include a plot of the lineages through time for the full 100 tree distribution and the unresolved mammal tree (Fig. S15; supplementary materials).
As expected, the branch length distribution of the resolved mammalian supertree is markedly different from that of the unresolved tree, with branching times shifted noticeably towards the tips (Fig. S15).
The Bayesian polytomy resolution approach presented here has several important benefits over previous approaches. Rather than designing metrics that ignore polytomies, or approaches that address only
terminal polytomies, this approach allows for inference based on a biologically relevant model-based simulation of branch lengths for all nodes within a polytomous tree, including very large
supertrees. This makes it possible to utilize previously developed metrics that require fully resolved trees, without modification and with minimal violation of assumptions.
We document the behaviour and biases of several published and unpublished polytomy resolution approaches – approaches that estimate a distribution of node ages or branch lengths. The birth–death
approach (the BD approach) generally recovered the expected starting values for diversification parameters (mean growth rate and relative death rate) as well as two phylogenetic shape metrics
(Pybus’ γ and Colless’ tree imbalance). There was a small positive bias in Pybus’ γ; however, this bias and the behaviour of the BD approach showed no size dependency. Biases, notably related to
phylogeny size, were detected in all of the other, non-model-based approaches (LnN, EQS, RND and RND2), making inferences on diversification from such approaches inappropriate.
Trees based on our method are necessarily biased towards constant rate birth–death models but are generally conservative because this is the typical null model for diversification rate analyses. Even
with this known and predictable bias, our analyses of cetacean diversification, with 60% of nodes in the cetacean phylogeny resolved using a birth–death model, still recovered the same best model
(Ocean Restructuring) as that obtained from the fully resolved phylogeny. However, there will inevitably be a loss of power, as the proportion of polytomies increases such that constant rates models
are more likely to be favoured. We typically find that polytomy-resolved trees return slightly higher (more positive) values of γ than the ‘correct’ fully resolved tree. Consequently, while false
inference of slow-downs will be rare using our approach, some instances of real slowdowns may be missed.
Although the implementation of our approach here is based on a birth–death model, it can in principle be extended to other tree priors such as rate heterogeneous birth–death models. A particularly
interesting extension would be to develop priors that build in information from species traits. For example, body size has frequently been shown to have a strong phylogenetic signal and may be
informative about the possible relationships between species in unresolved parts of the phylogeny. To our knowledge, such a model has not yet been implemented in beast.
We thank Dan Rabosky for providing the cetacean phylogeny and r-scripts necessary to replicate their analysis, Rakesh Parhar for help with tree generation and measurement, and Karen Magnuson-Ford for
scripting the RND approach we tested. We thank the SFU evolution group (FAB*) for discussion and comments. This work was supported by funding from the Canadian Natural Sciences and Engineering
Research Council (NSERC Canada) and by a Natural Environment Research Council (NERC, UK) Postdoctoral Research Fellowship (grant number NE/G012938/1).
• Adler, D. (2005) vioplot: Violin plot. R package version 0.2. http://cran.r-project.org/web/packages/vioplot/index.html (accessed 25 October 2010).
• Bininda-Emonds, O.R.P. et al. (2007) The delayed rise of present-day mammals. Nature, 446, 507–512.
• Colless, D.H. (1982) Review of Phylogenetics: the theory and practice of phylogenetic systematics. Systematic Zoology, 31, 100–104.
• Davies, T.J. et al. (2004) Darwin’s abominable mystery: insights from a supertree of the angiosperms. Proceedings of the National Academy of Sciences, 101, 1904–1909.
• Davies, T.J. et al. (2008) Phylogenetic trees and the future of mammalian biodiversity. Proceedings of the National Academy of Sciences, 105, 11556.
• Day, J.J., Cotton, J.A. & Barraclough, T.G. (2008) Tempo and mode of diversification of Lake Tanganyika cichlid fishes. PLoS ONE, 3, e1730.
• (1994) Speciation and phylogenetic resolution. Trends in Ecology and Evolution, 9, 297–298.
• Drummond, A.J. & Rambaut, A. (2007) BEAST: Bayesian evolutionary analysis by sampling trees. BMC Evolutionary Biology, 7, 214.
• Drummond, A.J., Nicholls, G.K., Rodrigo, A.G. & Solomon, W. (2002) Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data. Genetics, 161, 1307.
• FitzJohn, R.G., Maddison, W.P. & Otto, S.P. (2009) Estimating trait-dependent speciation and extinction rates from incompletely resolved phylogenies. Systematic Biology, 58, 595–611.
• Fritz, S.A., Bininda-Emonds, O.R.P. & Purvis, A. (2009) Geographical variation in predictors of mammalian extinction risk: big is bad, but only in the tropics. Ecology Letters, 12, 538–549.
• Grafen, A. (1989) The phylogenetic regression. Philosophical Transactions of the Royal Society B: Biological Sciences, 326, 119–157.
• Harmon, L.J., Weir, J.T., Brock, C.D., Glor, R.E. & Challenger, W. (2008) GEIGER: investigating evolutionary radiations. Bioinformatics, 24, 129–131.
• Hernández Fernández, M. & Vrba, E.S. (2005) A complete estimate of the phylogenetic relationships in Ruminantia: a dated species-level supertree of the extant ruminants. Biological Reviews, 80, 269–302.
• Hintze, J.L. & Nelson, R.D. (1998) Violin plots: a box plot-density trace synergism. The American Statistician, 52, 181–184.
• Hoelzer, G.A. & Melnick, D.J. (1994a) Patterns of speciation and limits to phylogenetic resolution. Trends in Ecology & Evolution, 9, 104–107.
• Hoelzer, G.A. & Melnick, D.J. (1994b) Reply from G.A. Hoelzer and D.J. Melnick. Trends in Ecology and Evolution, 9, 298–299.
• Isaac, N.J.B., Turvey, S.T., Collen, B., Waterman, C. & Baillie, J.E.M. (2007) Mammals on the EDGE: conservation priorities based on threat and phylogeny. PLoS ONE, 2, e296.
• Maddison, W.P. (1989) Reconstructing character evolution on polytomous cladograms. Cladistics, 5, 365–377.
• Maddison, W.P., Midford, P.E. & Otto, S.P. (2007) Estimating a binary character’s effect on speciation and extinction. Systematic Biology, 56, 701–710.
• Mooers, A.Ø. & Heard, S.B. (1997) Inferring evolutionary process from phylogenetic tree shape. Quarterly Review of Biology, 72, 31–54.
• Paradis, E., Claude, J. & Strimmer, K. (2004) APE: analyses of phylogenetics and evolution in R language. Bioinformatics, 20, 289–290.
• Purvis, A. (1995) A composite estimate of primate phylogeny. Philosophical Transactions of the Royal Society B: Biological Sciences, 348, 405–421.
• Pybus, O.G. & Harvey, P.H. (2000) Testing macro-evolutionary models using incomplete molecular phylogenies. Proceedings of the Royal Society B, 267, 2267–2272.
• R Development Core Team (2010) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0. http://www.r-project.org (accessed 25 October 2010).
• Rabosky, D.L. (2006) LASER: a maximum likelihood toolkit for detecting temporal shifts in diversification rates from molecular phylogenies. Evolutionary Bioinformatics, 2, 247–250.
• Rambaut, A. & Drummond, A.J. (2008) TreeStat v1.2: tree statistic calculation tool. http://tree.bio.ed.ac.uk/software/treestat/ (accessed 5 June 2010).
• Rambaut, A. & Drummond, A.J. (2009) Tracer v1.5: an MCMC trace analysis tool. http://beast.bio.ed.ac.uk/ (accessed 1 December 2009).
• Ranwez, V. et al. (2007) PhySIC: a veto supertree method with desirable properties. Systematic Biology, 56, 798–817.
• Steeman, M.E. et al. (2009) Radiation of extant cetaceans driven by restructuring of the oceans. Systematic Biology, 58, 573–585.
• Vos, R.A. (2006) A new dated supertree of the primates. In: Inferring Large Phylogenies: The Big Tree Problem, pp. 94–164. PhD thesis. Simon Fraser University, Burnaby, Canada.
• Webb, C.O., Ackerly, D.D. & Kembel, S.W. (2008) Phylocom: software for the analysis of phylogenetic community structure and trait evolution. Bioinformatics, 24, 2098–2100.
Supporting Information
Fig. S1. Inferred gamma bias for the simplest single polytomy scenarios using a Yule tree prior in BEAST rather than a birth-death tree prior. Two sizes of polytomies (100 tips and 500 tips) are shown.
In both cases, there is no evidence of a bias in the pseudo-posterior distribution of gamma estimates [E(γ) = 0.0]. This is in contrast with the slight positive γ bias shown for the BD model (Fig.
1). Researchers wishing to use the Yule prior can do so by running the included ‘PolytomyResolverConstraints’ R script, and combining the output XML tags with a user generated BEAUti XML file where
the tree prior is set to a Yule prior.
Figs S2–S4. Comparison between BD, EQS and RND approaches for simulated 64-tip trees. Three different sets of 10 trees were simulated (λ = 0.1, μ = 0.0, 0.05, 0.09). None of these polytomy resolution
approaches appear affected by the different birth and death rates used to simulate trees. Similar to Fig. 2 from the main text, the EQS approach has a strong negative γ bias (grey). Comparison of
results for these smaller trees with the 250-tip tree results shown in Fig. 2 demonstrates the size-dependent bias observed in the RND approach (black). In Figs S2–S4, the RND approach
recovers the true γ values (red lines) better than it does in the 250-tip trees (Fig. 2). The BD approach does not demonstrate any strong bias, with the 95% confidence interval overlapping
the best estimate in all the 10-tree datasets. As expected there is no bias in the imbalance estimate (I[c]) for any of these three 10-tree datasets (panel B).
Fig. S5. Parameters estimated from the pseudo posterior distribution of trees resolved using the BD approach. Sixty percent of internal nodes chosen at random from the starting trees, the same
10-tree dataset presented in the main content (Fig. 4), were collapsed to polytomies. Similar to the 40% polytomized trees presented in the main content, the parameter estimates (grey) for the BD
resolved tree distributions encompass the initial values (red bars) for all four parameters. No biases are apparent in γ or I[c]. The mean growth rate (λ − μ) does appear to be consistently
underestimated. This is likely related to the challenges of estimating a relative death rate (μ/λ).
Figs S6–S13. Parameter estimates from pseudo posterior distribution of 40% polytomized 64-tip simulated trees resolved using the BD approach. For Figs S6–S12, the BD approach was able to recover the
original value for all four parameters (mean growth rate, λ − μ; relative death rate, μ/λ; Pybus’ gamma, γ; and Colless’ tree imbalance, I[c]). In Figure S13, where the birth and death rates were
high, the BD approach was not able to reliably estimate the birth and death rate parameters. However, there is again no observable bias in γ or I[c].
Fig. S14. Pybus’ gamma (γ) and Colless’ tree imbalance (I[c]) for the mammalian supertree resolved using the BEAST birth-death (BD) method. Two pseudo-posterior tree distributions are shown, the
complete set of 10 000 trees, and a subsampled set of 100 trees. Although the γ and I[c] distributions are less smooth, there does not appear to be a difference between the full set and the subsample.
The true γ and I[c] values for the mammalian supertree are not known.
Fig. S15. A lineage through time plot showing both the original unresolved mammalian supertree (blue line), and the 100-tree distribution of resolved trees (grey lines, 100 tightly overlapping lines
shown). It is clear from this figure that the polytomy resolution approach has a noticeable effect on the node ages, shifting nodes towards the tips as more and more polytomies are resolved. The true
distribution of lineages through time is not known.
Data S1. Pseudo-posterior distribution of resolved mammalian supertrees. Mammalian supertree resolution was done using the stand-alone Polytomy Resolver script. This approach resolved all polytomies
under a constant rates birth-death model. Both the complete distribution of 10,000 trees and a resampled set of 100 trees are available. Log files are available from the authors upon request.
Data S2. Polytomy Resolver scripts. See supplementary materials for detailed instructions on running the "PolytomyResolver.R" standalone R script or the "PolytomyResolverConstraints.R" customizable script.
Cudahy, CA Algebra 1 Tutor
Find a Cudahy, CA Algebra 1 Tutor
...I like putting my mathematical skills to use in a way that will benefit others. I've been tutoring math for the past seven years. I tend to tutor all grade levels and I like parents to see
6 Subjects: including algebra 1, geometry, SAT math, elementary (k-6th)
...Knowledge of Algebra 2 is important for success on both the ACT and college mathematics entrance exams. American history is one of the few subjects that American students are expected to study
throughout most of their educational careers. I have studied it more or less continuously, starting in el...
27 Subjects: including algebra 1, reading, English, GED
...I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions. I am organized, professional and friendly.
14 Subjects: including algebra 1, reading, Spanish, ESL/ESOL
...Something I have noticed is that students, especially middle school/high school students, are not told why they are learning something or what use does the information have in the real world.
They are not told the history behind the scientific discoveries. I always try to give the student real ...
31 Subjects: including algebra 1, chemistry, reading, English
...Have tutored students in Algebra 2. Received 5 on Calc BC exam. I have taken several advanced math classes at Caltech since then, and have used Calculus regularly over the course of my physics
15 Subjects: including algebra 1, reading, calculus, geometry
Teaching Physics - Education for Problem Solving
This page describes some especially interesting parts of the book, Power Tools for Problem Solving in Physics (it was the best of books, and the worst) and helps you discover features that include
Aesop's Problems (each designed to illustrate principles) and — to help students master strategies for solving problems — repetition of ideas in different contexts (as in a spiral curriculum),
flashcard reviews that help students store ideas in long-term memory so these ideas will be available when they are needed for solving problems, and one of my favorite parts, the chapter summaries
that provide "big picture overviews" with logically meaningful visual organization.
A Suggestion for Evaluation: As explained in the homepage, while writing this book in the late-1980s my goal was "just to explain ideas-and-skills in ways that are logical and clear. Since then, I've
learned... that effective teaching requires more than just clear explanations. ..... So I'll ask you to evaluate my book based on what [in the 1980s] it was intended to achieve, by simply asking
whether its explanations are logical and clear." But the basics remain similar. Then and now, many benefits are offered by eclectic instruction — for example, by combining explanation-based
instruction (as in my book) with interactive computer games or simulations like PhET (FAQ - research) or EpiGame or EGame (HomePage) — to help students achieve multiple educational goals for
improving their ideas-and-skills.
This mini-section is about links-navigation. You can skip it if you want, and move on to Quantitative...Understanding below.
Here is an option: If you want links to open in a separate new window, so this page remains open in this window, it will happen in another version of this page.
I made "named destinations" within each PDF file, so clicking a link usually takes you directly to the correct location in the file. For example, the first link below (for "2.2") goes to Section 2.2.
/ link-problems with Macs: These links work properly in Windows, and also with inside-the-browser viewing on Mac's Safari or Chrome or Firefox, but not with Mac's Firefox when (in "Applications")
it's set to open PDF files in Adobe Reader or Preview.
Quantitative (mathematical) Understanding
Mathematical skill is essential for physics. Students can quickly learn what they need to know about geometry and trigonometry in Sections 1.1-1.2. (also, check the logically organized Chapter 1 summary.)
Section 2.3 explains how a "system of five equations" (from 2.2) makes it easy to Choose a Useful Equation. When students realize how easily they can choose an equation (which can be difficult
without the 5-equation system) they will feel free to focus on the more important qualitative understanding in Step 1 of the equation-choosing strategy, when they "read carefully, think, draw
pictures; do whatever is needed to form a clear idea of the problem situation." Later, 4.12 explains how to Choose an Equation from Chapters 2 (motion), 3 (F = ma), 4A (work-energy), and 4B
(impulse-momentum), while 5F encourages you to rationally Cope with Equation Overload by understanding (in ways that are both qualitative & quantitative) the equations you're using.
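To see why a five-variable system makes equation choice easy, here is a rough numerical illustration in Python (a generic sketch using the constant-acceleration variables, not code from the book): pick any three independent values, compute the other two from two basic equations, and every remaining equation in the system is automatically satisfied.

```python
# Constant-acceleration kinematics with the five "tvvax" variables:
# t (time), v0 (initial velocity), v (final velocity), a (acceleration),
# and x (displacement). Each equation in the system omits one variable.
v0, a, t = 3.0, 2.0, 4.0

v = v0 + a * t                   # equation omitting x
x = v0 * t + 0.5 * a * t**2      # equation omitting v

assert v == 11.0 and x == 28.0
assert abs(v**2 - (v0**2 + 2 * a * x)) < 1e-9    # equation omitting t
assert abs(x - 0.5 * (v0 + v) * t) < 1e-9        # equation omitting a
assert abs(x - (v * t - 0.5 * a * t**2)) < 1e-9  # equation omitting v0
```

Choosing "the equation that omits the variable you neither know nor want" is the whole trick; the checks above show the equations are mutually consistent, so any valid choice leads to the same answer.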
4.7 shows the conceptual and practical utility of a Many-Sided Equation by explaining that "each of the 8 boxes is equal to every other box" so "you can equate any two of these boxes to make an
equation that fits the needs of a particular problem." / a comment in 2011: Oops! Because the term "Many-Sided Equation" might encourage a bad habit, such as writing "F = ma = m(-9.8 m/s2)" if "a" is
9.8 m/s2 downward and "up" is defined to be +, instead I should have called it "A Multi-Option System of Equations" or something similar. To discourage the bad habit of writing 3-sided equations, I
emphasize (in Section 18.2) an important strategy for algebra — "USE VERTICAL SUBSTITUTION. Don't substitute horizontally; by definition, an equation has 2 (and only 2) sides" — and in this case
vertical substitution is done by writing "F = ma" and then below it "F = m(-9.8 m/s2)" to keep the equation 2-sided. I emphasize the importance of vertical substitution in 3.2, and explain another
reason for it, but I should have included both reasons in 3.2.
The characteristics of motion graphs (point, slope, shape, area) are in 2.10 and their connections with calculus are in 19.1-19.2.
Qualitative (non-mathematical) Understanding
It's also important to construct a conceptual understanding that is is accurate and extensive. Helping students improve their qualitative understanding, and connecting this with quantitative
understanding, is a frequent goal throughout the book.
For example, the section above describes a 5-equation system which shows that in physics the key to problem solving is qualitative understanding, along with translations of
qualitative-into-quantitative; this theme is continued later, with similar strategies for choosing equations in other areas, including physical situations involving work-energy and impulse-momentum
The section above ends with graphs-and-calculus connections, which illustrates my goal of helping students see "the big picture" and how different parts of it fit together to form the whole.
Relationships between ideas are explained in every chapter, and are shown (verbally-visually-mathematically) in the chapter summaries.
The section below describes playing with a problem, using several thinking modes and translating between them, and imagining you're an object being pushed-and-pulled by forces. Some parts of the book
explain how to avoid common misconceptions (about Newton's Third Law, and in other areas) and "the similarities and differences between related concepts." There is a flowchart to explain the process
of finding a friction force, a creative way to think about simple harmonic motion, and much more, continuing throughout the book.
Physics Thinking
3.2-3.4 explains how to use the cause-effect relationship summarized in "F = ma" (*) and, in 3.4, how to "play with a problem" in order to fluently translate between thinking modes (verbal, visual,
and mathematical) and skillfully coordinate their concrete manifestations (in words, pictures, and equations) while solving problems. * To make a force diagram, for example, "choose an object, look
at a drawing of the problem-situation, imagine you are the object and ask ‘What forces do I feel pushing and pulling on me?’, then draw and label these forces." / comment in 2011: This visualizing is
easier for contact forces; but students must also develop the skill of imagining non-contact forces, such as gravitational, electrical, and magnetic forces. Therefore, I should have emphasized this
contact/non-contact distinction in the book, as I did with my in-person teaching, which also included "hand waving" visual gestures to show what I (as an object) was feeling, to illustrate the
process-of-imagining and make it more dramatic.
Later, Chapter 8 illustrates a combining of modes: 8.1 helps a student explore (and intuitively understand) a cycle of simple harmonic motion, 8.2 explains how imaginary circular motion can be used
as a visual-mathematical model for real harmonic motion, and 8.3 summarizes math-formulas and shows the difference between constants, constant-variables, and changing-variables. { This distinction
between variables is ignored in most textbooks. }
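The circular-motion model of harmonic motion is easy to verify numerically. The sketch below is a generic Python illustration (not taken from Chapter 8): it checks that the projection x(t) = A cos(wt + phi) satisfies the defining SHM equation and repeats with period T, and its comments flag the distinction between constants, the changing variable, and a "constant-variable".

```python
import math

# SHM as the shadow of imaginary circular motion: a point moving on a
# circle of radius A at angular rate w has horizontal projection
# x(t) = A*cos(w*t + phi), which is exactly simple harmonic motion.
A, w, phi = 2.0, 3.0, 0.5        # constants: amplitude, frequency, phase

def x(t):                        # t is the one changing variable
    return A * math.cos(w * t + phi)

# Check the defining SHM equation x'' = -w**2 * x with a second difference:
t, h = 0.7, 1e-4
xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
assert abs(xpp + w**2 * x(t)) < 1e-4

# The period T = 2*pi/w is a "constant-variable": fixed for this system,
# but it would change if w changed.
T = 2 * math.pi / w
assert abs(x(t) - x(t + T)) < 1e-9
```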
Misconceptions: Problem 2-G and Section 3.5 (plus a "lazy horse" challenge in 3.91) are designed to help students replace wrong ideas — things they “know” that just ain't so — with correct ideas. {
2-G compares Aristotelian Intuition and Galilean Relativity, while 3.5 shows why forces that are "equal and opposite" may not be related by Newton's Third Law } / Here is a comment in 2005 while
writing this page: I learned much more about "misconceptions research" after writing this book, so although it clearly explains a useful way to think about many "tough concepts" it doesn't do this
for all of the common misconceptions; and it doesn't explicitly use strategies of Teaching for Conceptual Change, such as those of Posner, et al (1982) who suggest first producing dissatisfaction
with an alternative preconception, before showing that the corresponding scientific concept is intelligible, plausible, and fruitful.
The similarities and differences between related concepts are explained in Sections 3.7 (FRICTION: kinetic versus static), 4.8 (FORCE: internal vs external, and CONSERVATION: of momentum vs kinetic
energy), and in 5A and 5D (for MOTION: linear vs tangential vs angular) and 5F (for a rotational analogy of F = ma, and rotational applications of work-energy and impulse-momentum).
Two right-hand rules (for moving charge producing magnetism in 12.1, and moving charge being affected by magnetism in 12.2) are combined in 12.3.
2.6-2.8 show three types of motion problems — involving two intervals, two objects, or two dimensions — and the tools you'll need to solve them. Disciplined step-by-step strategies are explained in
Sections 3.7 (with a flowchart for friction force) and 5G (for torque statics) and elsewhere. Strategies for circuit analysis, showing similarities and differences between V=IR and Q=VC, are in 11.1-11.4.
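A friction flowchart of the kind Section 3.7 describes can be expressed as a small decision procedure. The Python function below is a generic sketch of the standard static/kinetic friction logic, not a transcription of the book's flowchart:

```python
# Friction decision logic: static friction adjusts to balance the applied
# force up to a maximum of mu_s*N; beyond that (or while sliding) kinetic
# friction of magnitude mu_k*N opposes the motion. One axis; newtons.
def friction_force(f_applied, v, normal, mu_s, mu_k):
    if v != 0:                             # already sliding: kinetic
        return -mu_k * normal * (1 if v > 0 else -1)
    if abs(f_applied) <= mu_s * normal:    # at rest and holding: static
        return -f_applied                  # balances the push exactly
    # breaks loose: kinetic friction opposes the impending motion
    return -mu_k * normal * (1 if f_applied > 0 else -1)

# A 10 N push on a resting block (N = 20 N, mu_s = 0.6) is held:
assert friction_force(10.0, 0.0, 20.0, 0.6, 0.4) == -10.0
# A 15 N push exceeds mu_s*N = 12 N, so kinetic friction (8 N) acts:
assert friction_force(15.0, 0.0, 20.0, 0.6, 0.4) == -8.0
# A block already sliding in the -x direction feels +mu_k*N = +8 N:
assert friction_force(0.0, -1.0, 20.0, 0.6, 0.4) == 8.0
```

The branch structure mirrors the questions a student must ask in order: is the object moving, and if not, can static friction supply what equilibrium demands?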
│ LINKS EARLIER IN PAGE │ LINKS LATER IN PAGE │
│ 1.1-1.2 (geometry & trigonometry) │ memory and problem solving │
│ 2.2-2.3 (a "tvvax equation-system" ) │ with flashcards and summaries │
│ 4.12 (equation choice from 4 chapters) │ extra problems (for Chapters 1-3,...) │
│ 5F (coping with equation-overload) │ Chapter 5: Introduction & Summary │
│ 4.7 (a "many-sided equation") │ │
│ 2.10 & 19.1 (motion graphs & calculus) │ more mathematics: │
│ 3.2-3.4 (Aesop's Problems for F = ma) │ 2.2 (linking equations, plus 3.3 & 4.1) │
│ Chapter 8 (shm: cycle, model, variables) │ 2.9 (ratio logic, by intuition & math) │
│ 2-G (the "release principle" of Galileo) │ 3.6 (the meaning of + and − signs) │
│ 3.5 (equal & opposite twice, lazy horse) │ 10.93-10.95 (visual-math symmetry) │
│ 4.8 (force on system: internal & external) │ Chapter 1 (geometry, trig, prefixes) │
│ 5A & 5D (motion: linear, tangential,...) │ Chapter 18 (algebra for physics) │
│ 5F (rotational F=ma, work-energy,...) │ Chapter 19 (calculus for physics), │
│ 12.3 for combining the right-hand rules │ 2.10 & 19.1 (graphs: points,...) │
│ 2.6-2.8 (for two intervals, objects, or...) │ │
│ 3.7 (step-by-step flowchart for friction) │ 2.1 & 2.6 (principles for thinking) │
│ 5G (a careful method for torque statics) │ │
│ 11.1-11.4 (circuit analysis: V=IR Q=VC) │ │
Memory and Problem Solving — Review & Organization with Flashcards & Summaries
Yes, memory is extremely useful because it "provides raw materials... for creativity and critical thinking" and "although memory is not sufficient for productive thinking, it is necessary," as
explained in my web-page about Productive Thinking.
Two key memory-improvers are review and organization,* and both principles are used in this book: at the end of each chapter is a flashcard review that will help students review what they have
learned, and an overview-summary that provides logical organization. These two reviewing-and-organizing tools, when used in the context of personal experience with solving problems, will help
students “put it all together” and master the effectively coordinated use of their problem-solving tools. The cumulative result of Principles plus Practice — building a working memory of
useful principles and working with those principles during problem-solving practice — is an improved quality of problem solving. / * In
scientific studies of learning techniques the two best were practice testing (and "there is one familiar approach that captures its benefits: using flash cards") plus distributed practice (and flash
cards are an excellent way to do this). {also, The Educational Value of Organization}
Here are practical tips for using these two tools: 1) The first time you try the flashcard review for a chapter, you'll have a feeling of "trying to guess what's in the teacher's mind" to fill the
blanks; but if you think about the logic of WHY each blank is filled the way it is (this is a great way to learn!) your later reviews will be easier and more effective for helping you
understand-and-remember. Or you can use my flash cards as a starting point to make your own personally customized flash cards. 2) When you invest time in a deep study of the "visually organized
logic" in chapter summaries, you will be rewarded with improved understanding of the concepts and their inter-relationships, with better ideas (about the physics) and skills (in using the ideas to
solve problems).
Some ideas (especially concepts) are only in a chapter's flashcard review, while some (including most equations) are only in the summary, and some central ideas are in both. Most chapters end with a
summary, and all available summaries (for 1 2 3 4 5, 8, 10, 11, 19) are collected in a file for Chapter Summaries. Together, the summaries for Chapters 2-5 provide a nice overview of motion physics,
and Chapter 1 summarizes the geometry-and-trig commonly used in physics, while Chapter 10 shows a useful perspective on electrostatic relationships between F, E, V, and W.
Extra Problems
Some "Aesop's Problems" are inside the body of each chapter, but there are also end-of-chapter problems for most chapters. For three chapters (1-3) these problems are in camera-ready format with text
and diagrams, but most chapters ( 4 5 6 7 8 9 10 11 14 15 16 17 ) have the text but — at least for a while — they don't have any diagrams. (but I may scan-and-post these missing diagrams during the
summer of 2011) Although some problems & solutions are mainly for practice, to help students build good habits and confidence, most problems teach principles that are not essential (so they don't
have to be in the main part of the chapter) but are still very useful. Some "recommended" problems are marked with •, and you may want to look at Problems 1-1, 1-4, 2-5, 2-12, 2-14, 2-16, 2-17, 2-19,
2-21, 2-26, 3-6, 3-8, 3-13, 3-19, 3-21, 3-25, 3-33, and 3-35.
The Chapter 5 Introduction shows how creative structure can be used to meet the challenge of making a chapter "internally logical" and easy for students to integrate with the corresponding parts of
their main text.
More Mathematics
When the same variable appears in two equations, you can solve for it in one equation and substitute it into the other, thus linking the equations with each other. Most equation derivations and many
problem solutions use this tool. A strategy of "linking equations" is introduced in Section 2.2 and reinforced in 3.3 & 4.1, and is used throughout the book. Ratio Logic (intuitive and algebraic) is
in 2.9.
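Both tools can be shown in a few lines. The following is a generic worked example in Python (a made-up catch-up problem and a pendulum ratio, not examples from the book): linking equations through their shared variable, and ratio logic with no proportionality constant needed.

```python
import math

# "Linking equations": two equations share a variable, so solve one for
# it and substitute into the other. Catch-up problem: car A starts at
# x = 0 moving at 30 m/s; car B starts 200 m ahead moving at 20 m/s.
#   x_A = 30*t        x_B = 200 + 20*t
# Setting x_A = x_B links the two equations: t = 200 / (30 - 20).
t = 200 / (30 - 20)
assert t == 20.0
assert 30 * t == 200 + 20 * t == 600     # both equations agree at the link

# Ratio logic: a pendulum's period T is proportional to sqrt(L), so
#   T2 / T1 = sqrt(L2 / L1)
# and quadrupling L doubles T, with no constants required.
assert math.sqrt(4.0 / 1.0) == 2.0
```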
Useful physics-math concepts are scattered throughout the book, as in The Meaning of ± Signs in Section 3.6, or the visual-math "symmetry logic" of Gauss's Law in 10.93-10.95. And three whole
chapters are devoted to math:
Chapter 1 teaches Math for Physics: geometry, trigonometry, metric prefixes (two meanings), and conversion factors.
Chapter 18 covers a variety of useful algebra tools, including How to Make an Equation (18.1), An Overall Equation-Solving Strategy (18.4), Exponents and Logarithms (18.6), and Optimization Analysis
of Conflicting Factors (18.10).
Chapter 19 begins with Motion Graphs (by explaining Point, Slope, Shape, and Area, in 2.10 & 19.1) for students in either non-calculus or calculus-based physics courses. The rest of the chapter helps
students develop an intuitive understanding of how physical concepts are expressed in the "language" of calculus, beginning with ideas from Chapter 2 (in 19.2) and continuing with goal-directed
Aesop's Problems (to accompany sections in Chapters 4, 5, and 10) to teach skills that are essential for a calculus-based approach to physics: constucting equations (either derivative or integral),
making variables match, using a tangent line approximation, setting up integrals using the logic of "mass-ratio" and "density", and more.
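Two of those calculus habits can be demonstrated numerically. The Python sketch below is an illustration under my own assumptions, not an Aesop's Problem from the book: a tangent-line approximation, and an integral set up with density-style slice logic.

```python
import math

# 1. Tangent-line approximation: near x = a, f(x) ~ f(a) + f'(a)*(x - a).
a, h = 1.0, 0.01
approx = math.sqrt(a) + (0.5 / math.sqrt(a)) * h    # sqrt(1.01), linearized
assert abs(approx - math.sqrt(a + h)) < 1e-4

# 2. "Density" logic for setting up an integral: a 2 m rod with linear
# density rho(x) = 1 + x kg/m. Each slice dx carries mass rho(x)*dx, and
# summing the slices approaches integral(1 + x, 0, 2) = 4 kg.
n = 100_000
dx = 2.0 / n
mass = sum((1 + (i + 0.5) * dx) * dx for i in range(n))  # midpoint slices
assert abs(mass - 4.0) < 1e-8
```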
This book takes time to explain math tools more clearly than in most physics books. And it covers ideas that are valuable but aren't discussed at all in most books and courses.
Principles for Learning-and-Thinking
Useful principles are in Sections 2.1 and 2.6, in Learning from Mistakes (how I didn't learn to ski), Aesop's Problems, Principles plus Practice, and The Most Important Strategy. { Since 1989, these
ideas have been expanded and revised in web-pages about Aesop's Activities for Goal-Directed Education and Motivations & Strategies for Learning. And general "learning skills," originally in Chapter
20, are now in Study Skills for Effective Learning and Strategies for Problem Solving. }
Two features of this book are:
1) The specific "power tools" that can be learned from each problem are clearly stated, thus the name Aesop's Problems, by analogy to Aesop's Fables that each have a specific, clearly stated "lesson"
to be learned.
2) To help students remember these tools and incorporate them into an effective system of problem-solving, essential strategies are re-emphasized in later problems (in a miniature spiral curriculum),
gathered into a flashcard review at the end of the chapter, and "visually organized" in a chapter summary that follows the flashcard review. {summaries for Chapters 1, 2, 3, 4A-4B, 5, 8, 10, 19}
The nature of problem-solving tools varies from one section to another. Some sections (like 2.3,...) focus on "how to choose a formula" because this is a common student difficulty that, if it isn't
overcome, destroys a student's chance to become a competent problem solver. In other sections (like 3.5) the emphasis is on physical concepts.
Although an individual section may have its primary focus on formula knowledge or physics intuition, when the book is viewed as a whole it is well balanced, and will help the student develop both of
these valuable skills. One goal is to help students improve their ability to fluently translate ideas between different thinking modes (verbal, visual, and mathematical) in the concrete form of
words, pictures, and equations.
Because the book is intended to be supplementary, my main goal is to give a student "added value" so the time they invest in using the book will be time well invested.
Many years of one-to-one tutoring conversation, plus reading about physics teaching, have helped me develop a feeling for concepts that students usually understand (the book sails through these with
little comment) and concepts that are inherently difficult (these are explained in greater detail).
Personal Inventions
Many ideas in the book are, as far as I know, my own inventions. These include the tvvax system (2.2), many-sided equations (4.7), friction flowchart (3.7), distinctions between constant-variables
and changing-variables (8.3), and more. And many other ideas — such as "imagining you're the object" (in 3.2) and most teaching techniques (in 2.6-2.8, 3.5, 5D & 5F, 8.1-8.3, 11.1-11.4, 2.10 &
19.1,...) — were developed by me, although probably most of these have also been independently developed by others.
Numerical Computing with MATLAB
How much did the reduced air resistance in Mexico City contribute to Bob Beamon’s extraordinary performance in the long jump at the 1968 Olympic games? What is the effect of burning fossil fuels on
the carbon dioxide concentration in the Earth’s atmosphere? How does the expected return for the game of blackjack change if you remove all the kings from the deck of cards? How does Google rank Web pages?
These are examples of the programming projects that tie together several of the topics covered in Numerical Computing with MATLAB, my new textbook for an introductory course in numerical methods,
MATLAB, and technical computing. I am pleased to describe the book for you here.
The emphasis in the book is on informed use of mathematical software. I want students to learn enough about the mathematical functions in MATLAB that they will be able to use them correctly,
appreciate their limitations, and modify them when necessary to suit their own needs.
The Chapters
1. Introduction to MATLAB
2. Linear Equations
3. Interpolation
4. Zero and Roots
5. Least Squares
6. Quadrature
7. Ordinary Differential Equations
8. Random Numbers
9. Fourier Analysis
10. Eigenvalues and Singular Values
11. Partial Differential Equations
George Forsythe initiated such a software-based numerical methods course at Stanford University in the late 1960s. The 1977 textbook by Forsythe, Malcolm, and Moler and the 1989 textbook by
Kahaner, Moler, and Nash that evolved from the Stanford course were based upon libraries of Fortran subroutines. This new book can be thought of as a modern, MATLAB-oriented replacement for those
Two editions of the book are available, an electronic edition published by The MathWorks and a conventional print edition published by the Society for Industrial and Applied Mathematics (SIAM).
The book is intended for students in engineering and science who want to have a better understanding of the numerical methods implemented in MATLAB and similar mathematical software systems. The
prerequisites for the course, and the book, include:
• Calculus
• Some familiarity with ordinary differential equations
• Some familiarity with matrices
• Some computer programming experience
If students have never used MATLAB, the first chapter will help them get started. If they are already familiar with MATLAB, they can glance over most of the first chapter quickly. Everyone should
read the section in the first chapter about floating-point arithmetic.
Regular readers of Cleve’s Corner will find the book familiar. Several sections of the book, including the ones on floating-point arithmetic, stiffness, random number generation, and the L-shaped
membrane, are expanded versions of previous columns in MATLAB News & Notes.
A collection of more than 70 M-files, which I refer to as NCM, forms an essential part of the book. These are available from the MathWorks Web site devoted to the book. There are three types of NCM
• gui files: interactive graphical demonstrations
• tx files: textbook implementations of built-in MATLAB functions
• others: miscellaneous files, primarily associated with exercises
For example, one of the tx files, lutx, shows the algorithm used by the built-in function involved in the most important computation in MATLAB: the solution of simultaneous linear equations.
Another tx file, ffttx, provides a compact implementation of the fast algorithm for the finite Fourier transform. The original Forsythe et al. text was successful in part because its Fortran programs
were small enough to be read and understood. In retrospect, I think the codes distributed with Kahaner et al. were too large and unwieldy. MATLAB enables us to return to programs that can be printed
in the book and discussed in class.
The book makes extensive use of computer graphics. When you have NCM available, the MATLAB statement
produces a window where each of 20 thumbnail plots launches a graphical demonstration of some problem, algorithm, or mathematical application (Figure 1). Most of these GUI programs are interactive.
You may already be familiar with eigshow, because it is distributed with MATLAB. The other GUIs have the same spirit as eigshow. For example, with lugui, you choose the pivots in Gaussian
elimination. With fzerogui, the plot zooms in on the point where a function crosses the x-axis.
There are more than 200 exercises. Some of them involve modifying and extending the programs in NCM. I don’t want students to write their own programs from scratch. I would rather they start with the
programs that I’ve written. For example, one exercise asks them to modify lutx to keep track of the sign of the permutation and then compute the determinant. Another exercise has students remove the
step size calculation from the ODE solver, ode23tx, and implement the classical Runge-Kutta algorithm with fixed-step size.
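That fixed-step classical Runge-Kutta exercise boils down to a short routine. Here is a sketch in Python rather than as a modification of ode23tx, purely as an illustration; the function name and interface are my own, not the book's:

```python
def rk4_fixed(f, t0, y0, tf, n):
    """Classical fourth-order Runge-Kutta with a fixed step size.

    Integrates dy/dt = f(t, y) from t0 to tf in n equal steps and
    returns the approximate value of y(tf).
    """
    h = (tf - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        # weighted average of the four slope samples
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

For dy/dt = y with y(0) = 1, `rk4_fixed(lambda t, y: y, 0.0, 1.0, 1.0, 100)` approximates e to about eight digits, which is the kind of accuracy check such an exercise invites.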
Some of the exercises involve ill-posed problems. What does bslashtx do with a singular system of linear equations? How does quadtx behave if the integral doesn't exist?
An exercise from Forsythe et al. that involved U. S. Census data from 1900 through 1970 now has three more data points and a GUI. Exercises in the chapter on least squares examine the Statistical
Reference Datasets from the National Institute of Standards and Technology.
A Cleve’s Corner about Google, "The World’s Largest Matrix Computation," has been expanded to become a section on "PageRank and Markov Chains" and several exercises about sparse matrices and Web
graphs. The example about touch-tone dialing that has been in MATLAB for many years, phone.m, has been expanded into the section introducing the FFT.
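The PageRank computation behind that section is, at heart, a power iteration on a Markov chain. A minimal sketch in plain Python (the three-page link graph is hypothetical, and this is a generic illustration, not code from NCM):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration for PageRank on a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # a page passes its rank evenly to the pages it links to
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
            else:
                # a dangling page spreads its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical three-page web: A and C link to B, B links back to A.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
```

B, with two incoming links, ends up with the largest rank, and the ranks always sum to 1, which is the Markov-chain interpretation in miniature.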
I’ve been working on this book for several years. Along with a few colleagues, I’ve had a chance to use it in both undergraduate and graduate university courses, and in two-day MathWorks training
courses. I hope that students and teachers will find it useful for their own courses, and in their own individual study.
South Gate Trigonometry Tutor
Find a South Gate Trigonometry Tutor
...It describes how the world around us works, and it is the foundation of the other sciences (chemistry, biology, etc., have their roots in physics). I love talking about and teaching physics, as
it can be applied to (and describe) many common real-life situations. For example: the next time you...
11 Subjects: including trigonometry, calculus, physics, statistics
...I know how to break math problems down into simple steps. I can analyze what your son or daughter needs to succeed in math. I have a life teaching credential and a bilingual credential in
14 Subjects: including trigonometry, reading, Spanish, ESL/ESOL
...I have taken 4 semesters of algebra-based physics in high school and college and received an A every semester. In high school, I had my first exposure to physics, where I gained a strong
understanding of the subject and used it to ace the course as well as aid my classmates in understanding the ...
9 Subjects: including trigonometry, chemistry, physics, geometry
...I want to be a math professor one day and help out many students the way my teachers have helped me throughout the years. I have been tutoring for this website for almost one year and had the
pleasure of meeting all types of people. I've tutored subjects as low as third grade math, and as high as trigonometry.
10 Subjects: including trigonometry, calculus, geometry, algebra 1
...I offer tutoring and assistance/guidance for the following subjects:* Math: Pre-algebra, Algebra, Trigonometry, Geometry, Pre-calculus, calculus* Science: Biology, Chemistry, Anatomy,
Physiology, Earth Sciences, Physics* Test prep: SAT, ACT, GRE, AP, Standardized testing* Computer skills: Microso...
39 Subjects: including trigonometry, English, reading, chemistry
Norcross, GA Geometry Tutor
Find a Norcross, GA Geometry Tutor
...I also took several other courses that included Differential Equations in the solution process. When I graduated from college I worked as a Stress Analysis Engineer on turbine blades for jet
engines and for that job I constantly used differential equations to solve stress analysis problems. I have taken all the GACE tests for secondary math and passed them.
20 Subjects: including geometry, calculus, algebra 1, GRE
...I have personal experience tutoring students with this diagnosis as my own child is ADHD. I hold a BA in Theater Management. I was required to take extensive courses in film and editing.
33 Subjects: including geometry, reading, English, writing
I have loved math all my life and I took lots of it in school. I even attended math contests. Now I have the opportunity to help others with their math.
22 Subjects: including geometry, calculus, GRE, ASVAB
...My goal is always to make sure that the student, not only understands the material, but also feels confident in what they are doing. I feel that every student is different in what builds their
confidence in the material, so try to figure out what that is as we work together. I also ask for some...
9 Subjects: including geometry, chemistry, calculus, algebra 1
...I have also put together training classes for my associates and peers in several companies I have worked for to teach newly-hired employees as well as progressive learning for the experienced
ones. I received my MBA with a concentration in Human resources and I achieved all A's in these classes....
42 Subjects: including geometry, reading, English, writing
Proof....rational numbers
March 4th 2011, 10:57 PM #1
Jan 2011
Proof....rational numbers
Hey all,
how can i prove that if $a$ and $b$ are non-zero rational numbers, then $a + b\sqrt{2} \not\in \mathbb{Q}$
...using the fact that $\sqrt{2} \not\in \mathbb{Q}$
Does this need proof? It should be obvious that a rational times an irrational is irrational, so $\displaystyle b\sqrt{2}$ is irrational.
The sum of a rational and an irrational is also irrational, so $\displaystyle a + b\sqrt{2}$ is irrational.
If it requires proof, then you probably need to use the fact that the set of rationals is closed under addition and multiplication...
Let $a \ne 0$ and $b \ne 0$ be rational and suppose $a+b\sqrt{2}$ is rational. Hence there exists a rational $c$ such that:

$a + b\sqrt{2} = c$

But the rationals are closed under addition, so:

$b\sqrt{2} = c - a$

is rational. Also the rationals are closed under division (by non-zero elements anyway) so:

$\sqrt{2} = \frac{c - a}{b}$

is rational, which is a contradiction, so our hypothesis that $a+b\sqrt{2}$ is rational fails; hence $a+b\sqrt{2}$ is irrational.
Inequalities etc. for tomorrow!!!!!!!
January 13th 2008, 08:20 AM #1
Jan 2008
Inequalities etc. for tomorrow!!!!!!!
Hi, i'm a new member who really needs help. We've had a supply teacher for a while who was terrible at teaching us. Now the real teacher is back (very scary) and wants homework on Inequalities in
for tomorrow. I don't have a clue how to do them but she'll go mad if they're not done!
Please may i have some help?
The following questions i need help with:
Solve the following linear inequalities:
1a. x + 3 < 8
g. (x + 3)/2 < 8
m. 3x + 1 ≥ 2x - 5
p. 3x + 2 ≥ x + 3
1. You solve inequalities in the same manner as you solve equations with a few exceptions:
a) If you multiply/divide both sides of an inequality by a negative number you have to change the < sign into a > sign, and the > sign into a < sign.
to #1a: Subtract 3 from both sides: x < 5.
to #d:
to #g: Do you mean:
$\frac{x+3}{2}<8$ ? Multiply by 2, subtract 3, done!
to #j: Expand the bracket, add 6, divide by 2
to #m: subtract 1, subtract 2x,
and the last one is entirely for you
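Rule (a) above, flipping the inequality sign when dividing by a negative number, can also be sketched in code. A Python illustration, not part of the original replies:

```python
def solve_linear_lt(a, b, c):
    """Solve a*x + b < c for x, returning (relation, bound).

    Dividing both sides by a negative a flips '<' to '>',
    exactly as in rule (a) above.
    """
    if a == 0:
        raise ValueError("not a linear inequality in x")
    bound = (c - b) / a
    return ("<", bound) if a > 0 else (">", bound)

# 1a: x + 3 < 8  gives  x < 5
# and e.g. -2x < 6  gives  x > -3 (the sign flips)
```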
I can't thank you enough earboth, i was extremely worried but thanks to you i now know not just the answers, but also how to work them out which i have shown in my exercise book. I was also able
to do p-i wish you were my teacher.
Thanks a lot!
A Scalable Scheduling Scheme for Functional Parallelism on Distributed Memory Multiprocessor Systems
April 1995 (vol. 6 no. 4)
pp. 388-399
Abstract—We attempt a new variant of the scheduling problem by investigating the scalability of the schedule length with the required number of processors, by performing scheduling partially at
compile time and partially at run time.
Assuming an infinite number of processors, the compile-time schedule is found using a new concept of the threshold of a task that quantifies a trade-off between the schedule length and the degree of parallelism. The schedule is found to minimize either the schedule length or the number of required processors, and it satisfies:
1. A feasibility condition, which guarantees that the schedule delay of a task from its earliest start time is below the threshold, and
2. An optimality condition, which uses a merit function to decide the best task-processor match for a set of tasks competing for a given processor.
At run time, the tasks are merged, producing a schedule for a smaller number of available processors. This allows the program to be scaled down to the processors actually available at run time.
Usefulness of this scheduling heuristic has been demonstrated by incorporating the scheduler in the compiler backend for targeting Sisal (Streams and Iterations in a Single Assignment Language) on
Index Terms—Compile time scheduling, dataflow graphs, distributed memory multiprocessors, functional parallelism, runtime scheduling, scaling, schedule length.
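The run-time step, scaling a task set down onto the processors actually available, can be illustrated with a generic greedy list-scheduling sketch. This is only an illustration of the merging idea (longest task first onto the earliest-free processor), not the paper's threshold-based algorithm:

```python
import heapq

def list_schedule(durations, p):
    """Greedily assign each task (longest first) to the processor
    that becomes free earliest; return the resulting makespan."""
    free = [0.0] * p              # earliest free time of each processor
    heapq.heapify(free)
    for d in sorted(durations, reverse=True):
        t = heapq.heappop(free)   # processor that frees up first
        heapq.heappush(free, t + d)
    return max(free)
```

With tasks of length [3, 3, 2, 2, 2] on 2 processors this gives a makespan of 7; the heuristic is simple and fast, but not always optimal (6 is achievable here), which is why more careful schemes like the paper's are worth the effort.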
Santosh Pande, Dharma P. Agrawal, Jon Mauney, "A Scalable Scheduling Scheme for Functional Parallelism on Distributed Memory Multiprocessor Systems," IEEE Transactions on Parallel and Distributed
Systems, vol. 6, no. 4, pp. 388-399, April 1995, doi:10.1109/71.372792
A greedy algorithm is an algorithm that follows the problem solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems, a
greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable
What does a greedy algorithm look like?
Images from Wikipedia illustrate the concept well. In the first one, we begin our search of the space for a maximum at point A. If we're greedy, the next highest point takes us along a path that
maximizes at point (lower case) m. However, the optimal solution is at point (upper case) M.
If you're greedily searching (again, for a maximum) in three-dimensional space, and start in a poor position (like near the left in the image below), you'll never make it off the shorter hill onto
the taller one:
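That failure mode is easy to reproduce in code. A minimal greedy hill-climbing sketch in Python (the two-hump landscape function is made up for illustration):

```python
import math

def hill_climb(f, x0, step=0.1, max_iters=1000):
    """Greedy 1-D hill climbing: move to a neighbour only if it
    improves f; stop at the first local maximum reached."""
    x = x0
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x          # no uphill neighbour: a local maximum
        x = best
    return x

# A short hill near x = -1 and a taller one near x = 2.
f = lambda x: math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)
```

Starting at x = -2, the climb tops out near -1 (height about 1) and never sees the taller hill near 2 (height about 2); starting at x = 3 finds the tall one. Where you start determines which peak you get.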
What does a greedy algorithm for yourself look like?
Perhaps we should first define what it is for which we're optimizing: To keep it simple, I'm going to identify it as maximizing income.
You could also be seeking to minimize the distance (in time) between working for clients, or minimizing the time you spend acquiring each customer -- maximization is not an intrinsic property of
I purposefully exclude things like time and happiness (or optimizing for some combination of all three). While I value those things and would like to think I've tried to optimize for them, upon
closer inspection I recently realized I've tended to simply accept whatever side-effect optimizing for money has had on those values.
I think it's a fair assumption that plenty of other people are doing the same. If you're not one of them: congrats! But before you dismiss the idea entirely, you ought to consider that you might be,
just to be sure.
So what does it look like? Again from Wikipedia, I like the idea of moving from node to node on a graph:
At each point in time (a vertex on the graph) we are presented with a series of options whose positive return is the value at that vertex. The options available to us are connected by the edges. (You
might also assign a cost to each edge, and change your greedy-optimization heuristic to be return minus cost).
In the graphic I've used, only two options are available at each point, but in reality there are normally a lot more options from which to choose, and maybe even times when you have only one option.
If we're starting at 7 and being greedy in our search for a maximum, we miss out on the global maximum.
You might mention -- hey, we could backtrack and search the entire space. But you might not have the resources available once you've gone down the path to 6. You might have reached your maximum
workload by that point, and by the time you free up some time, the opportunity for the 99 is gone.
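In code, the vertex-to-vertex walk looks like this. The graph and payoffs below are hypothetical, arranged like the situation just described: from the 7, a 12 looks better than a 3, but the 99 is hiding behind the 3:

```python
def greedy_path(graph, values, start, steps):
    """Follow edges greedily: from each vertex, move to the
    neighbour with the highest payoff. Returns the visited path."""
    path, v = [start], start
    for _ in range(steps):
        if not graph.get(v):
            break                      # dead end
        v = max(graph[v], key=values.get)
        path.append(v)
    return path

values = {"a": 7, "b": 3, "c": 12, "d": 99, "e": 6, "f": 5, "g": 4}
graph = {"a": ["b", "c"], "b": ["d"], "c": ["e", "f"], "e": ["g"]}
```

Greedy from "a" takes a-c-e-g for a total of 29, while the patient path a-b-d collects 109. And by the time you notice, you may not have the resources left to backtrack.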
How do you know if you're running your business like a greedy search algorithm?
Here are some symptoms:
• Taking the first job offer or client you get when you need work
• If you're a freelancer, filling all your time with client work
• Taking the first business model that seems to be working and trying to maximize it
Can you think of any others?
Prior to last year, I was executing a totally greedy algorithm with regards to my career. But it's not just something I've noticed in myself: almost every job I've had utilized greedy methods for
making money.
As part of my greedy search (which was not just about maximizing money, but more towards minimizing unemployment, even if it meant working for people who would end up stiffing me on paychecks), I
ended up burning through about four months of savings (I had six). Last year though, I started making some changes to a better approach. When I made my way back up to a six month cushion and took
time to reflect on some of the changes that got me there, I realized the parallels to algorithms I'd learned about in college.
Improving your odds
How can you improve your chances of finding a global (or at least higher) maximum?
In the general problem, given enough time and resources, you could do an exhaustive search -- or in our graph model, visit every node.
But I don't have the resources to exhaustively search the entire space to produce an optimal solution, and I'm betting you don't either.
Instead of straight greedy hill-climbing, we could explore some alternatives. My favorite here is called "shotgun hill climbing," where instead of following a path all the way to its maximum, we'll randomly restart (but we don't forget where we came from on those restarts, in case we don't end up in a better position).
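A sketch of the shotgun strategy: the same greedy climb as before, but launched from several random starting points, keeping the best peak found (the landscape function is again made up, and the parameter choices are arbitrary):

```python
import math
import random

def shotgun_climb(f, lo, hi, restarts=20, step=0.1, iters=500, seed=0):
    """Hill climbing with random restarts: greedy climbs from
    several random starts in [lo, hi]; keep the best peak found."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        x = rng.uniform(lo, hi)
        for _ in range(iters):
            nxt = max((x - step, x + step), key=f)
            if f(nxt) <= f(x):
                break              # this restart's local peak
            x = nxt
        if best is None or f(x) > f(best):
            best = x               # don't forget where we came from
    return best

# Two-hump landscape: short hill at x = -1, tall one at x = 2.
f = lambda x: math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)
```

A single greedy climb from the wrong side stalls on the short hill; twenty restarts over [-4, 4] reliably land at least one start in the tall hill's basin and return a point near x = 2.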
It's what I like about Amy Hoy's stacking bricks metaphor. A freelancer can start taking time between gigs (or not remain fully engaged) and spend some time developing products. Each successive product is a new brick, and over time you can build a wall.
Savings and product revenue are like ammunition for your shotgun hill climbing business algorithm. They allow you to explore different starting positions, as long as you're not being greedy with
minimizing the distance (in time) between clients or booking all your time to work for others.
In practice, all those ideas you have floating around in your head (or ideas.txt file) are the different options you have in paths to take (in addition to other jobs, clients, ideas for "upselling",
etcetera). You have to spend a significant piece of time on one to find out if it gets you further up the hill or not -- but not so much so that you can't get back to your last local maximum or
randomly try another.
(Side note: I'm not advocating strictly random here -- you can apply some heuristic to affect the "probability" of choosing a certain idea).
Last year I took my first steps to taking a shotgun approach. This year I plan to take it further.
If you have approaches of your own, I'd love to hear them below.
MathGroup Archive: January 2010 [00036]
[Date Index] [Thread Index] [Author Index]
Re: Re: Replace and ReplaceAll -- simple application
• To: mathgroup at smc.vnet.net
• Subject: [mg106108] Re: [mg106038] Re: Replace and ReplaceAll -- simple application
• From: DrMajorBob <btreat1 at austin.rr.com>
• Date: Fri, 1 Jan 2010 05:38:17 -0500 (EST)
• References: <200912270006.TAA12080@smc.vnet.net> <hh72dp$kud$1@smc.vnet.net>
• Reply-to: drmajorbob at yahoo.com
All true... but I know about the Conjugate function without ever (as far
as I recall) having a need for it. Mathematica users should expect such a
function to exist, besides, and it's easy to find if you're looking for it.
One certainly might, of course, want to see a function's conjugate without
a series of operations like:
expr = (R + I L) (R - I w L);
(-I L + R) (R + I L w)
...and that sequence won't work in general, anyway.
If we want to see that, we might have to make the I -> -I substitution
manually. For instance,
expr /. {-I -> I, I -> -I}
(-I L + R) (R + I L w)
That works when the terms are symbolic, but not if the symbols are
replaced with constants:
expr = (1 + 2 I) (1 - 3 I)
expr /. {-I -> I, I -> -I}
7 - I
7 - I
That is annoying... that a simple Replace (though not as simple as I ->
-I) works for symbols, but not for numbers.
The following works for numbers AND for symbols, and it's no more
complicated nor less intuitive, I suppose:
expr = (1 + 2 I) (1 - 3 I)
expr /. Complex[a_, b_] :> Complex[a, -b]
7 - I
7 + I
expr = (R + I L) (R - I w L);
expr /. Complex[a_, b_] :> Complex[a, -b]
(-I L + R) (R + I L w)
...but the right path isn't obvious until we've had this discussion (or
something like it).
On Thu, 31 Dec 2009 02:14:13 -0600, AES <siegman at stanford.edu> wrote:
> In article <hhf5kg$go6$1 at smc.vnet.net>,
> Murray Eisenberg <murray at math.umass.edu> wrote:
> [Re the I -> -I problem in particular:]
>> On the other hand, not every possible issue can be addressed immediately
>> at the top of documentation just because this or that user happened to
>> experience some difficulty with it.
>> Only gathering usage statistics, or having a focus group of users trying
>> stuff, might suffice to escalate some issues to the point of requiring
>> more prominent warnings.
>> I wonder how many users in fact experience this issue.
> I'll give you one sizable group.
> Engineering and science students and practitioners, at all levels down
> to at least college sophomores and even advanced high school students,
> are taught to solve systems of coupled linear differential equations
> (e.g., the loop or node equations for linear electrical networks with
> current and/or voltage sources, or forcing functions) using the phasor
> approach.
> The first step in doing this is of course to replace d^n /dt^n by
> (I w)^n (w as shorthand for omega), thereby converting these to coupled
> algebraic equations. The next step is then to solve these equations to
> obtain a matrix-valued transfer function or scattering matrix, whose
> elements contain only *real-valued* parameters (R's, L's and C's in the
> electrical circuit case) and I -- elements that look like R + I w L.
> In practice, the instructor and the students do a few problems of this
> type by hand, with just one or two variables; define and examine the
> poles and zeros of the transfer function; learn about concepts like
> resonance and impedance and admittance, and scattering matrices and
> input and output ports; and so on. But the instant one goes to anything
> more realistic and interesting, with three or more variables, the
> algebra and the numerical calculations just become too tedious.
> But, hey, Mathematica is just beautiful for this task. The Solve[ ]
> function is perfect for doing the algebra to find the transfer function
> -- simple, easy to understand, obvious; and all the elementary Plot
> functions (David Park's "set pieces") will give you all the plots you
> could ask for. And since the output variables are phasors, e.g.
> voltages and currents, vc(t) and ic(t) (generally indexed and often
> written with superimposed tildes to indicate that they are complex
> variables), you can get numerical results for power flows and energy
> densities using notations like p[t_] = Re[vc(t)] Re[ic(t)].
> But at some point you may want to get analytical formulas as well, e.g.
> the modulus and argument of the transfer function from an input to an
> output port. And, maybe move on to the ideas of "lossless", that is,
> unitary, and Hermitian scattering matrices. At which point, the idea of
> the transfer function, call it tFunc, and its complex conjugate,
> tFuncStar, become significant. EE students say "v" and "vStar" and "i"
> and "iStar" all the time!
> And at that point, if you're focusing on the system properties and not
> specific numerical calculations it's very tempting to note that these
> tFunc's contain nothing but purely real circuit elements (R's. L's and
> C's, or masses and spring constants, or whatever), and I.
> And a quick test confirms that the rule {a->-a} does what it's supposed
> to (whether or not a has a minus sign in front of it). Or, a quick test
> confirms that I->-I properly converts R + I w L into R - I w L. Why
> shouldn't it??? It just does what you'd expect a global find and
> replace to do, or what you'd do "by hand" -- right?
> Take a look at the Mathworld entries for "phasor" and "transfer
> function" and see how far down you'd have to dig to get an explicit
> warning that the previous paragraph is misleading. (And note that the
> entry for "Phasor" does not contain a "SEE ALSO:" for the term "Complex
> Conjugation", and the link to that term within the text does not -- so
> far as I can see -- give any hint that the rule I->-I will fail for an
> expression containing -I.)
DrMajorBob at yahoo.com
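The pitfall described above can be modelled outside Mathematica. The sketch below is a toy Python model (not Mathematica's actual engine, but the same idea: a complex constant such as -I is stored as one atom, like Complex[0,-1], so a structural rewrite rule that matches the atom I never fires on it, while a real conjugation operator recurses through the tree):

```python
# A tiny model of how a CAS can store complex numbers as atoms.
# Symbols are strings (assumed real); a complex constant is the atom
# ('cplx', re, im); '+' and '*' nodes are tuples.
I = ('cplx', 0, 1)
neg_I = ('cplx', 0, -1)

def subs(expr, old, new):
    """Naive structural find-and-replace, like a rewrite rule I -> -I."""
    if expr == old:
        return new
    if isinstance(expr, tuple) and expr[0] in ('+', '*'):
        return (expr[0],) + tuple(subs(a, old, new) for a in expr[1:])
    return expr

def conj(expr):
    """Proper conjugation: recurse and flip the sign of imaginary parts."""
    if isinstance(expr, tuple):
        if expr[0] == 'cplx':
            return ('cplx', expr[1], -expr[2])
        return (expr[0],) + tuple(conj(a) for a in expr[1:])
    return expr  # real symbol, unchanged under conjugation

z_plus = ('+', 'R', ('*', I, 'w', 'L'))      # R + I w L
z_minus = ('+', 'R', ('*', neg_I, 'w', 'L'))  # R - I w L: -I is ONE atom

# The rewrite rule works on R + I w L ...
assert subs(z_plus, I, neg_I) == z_minus
# ... but silently does nothing on R - I w L, because the atom
# ('cplx', 0, -1) never matches the pattern ('cplx', 0, 1):
assert subs(z_minus, I, neg_I) == z_minus  # unchanged: the wrong answer
# The conjugation operator gets both directions right:
assert conj(z_minus) == z_plus
assert conj(z_plus) == z_minus
```

This is exactly why a dedicated conjugation operation is safer than a global find-and-replace on the imaginary unit.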
value of k and roots
July 10th 2011, 07:07 AM #1
Can anybody help? How is it possible to find the roots of 16x^2-24x+k = 0 when one root is 1/2, and to find k?
Is it possible to find the value of k if the sum and product of the roots are equal?
If I'm going to solve it, I understand that k is the c part of the quadratic equation,
so the product of the roots = c/a = k/16, but at this part I am stuck.
It has kept me solving for hours but I still can't get the answer correctly.
thank you so much and more power.
Re: value of k and roots
Since 1/2 is a root then you know that $(2x -1)(Ax+B) = 16x^2-24x+k$ where A and B are constants.
Comparing coefficients
$x^2 \rightarrow 2A = 16 \text{ and } x^0 \rightarrow -B = k$
To find the value of B you can compare the coefficients of x. Once you have B it should be easy to find k
Re: value of k and roots
-B = k
and that is -24 = k — is this correct, sir? Or should I make it -24 = r1 + r2, so -24 = 1/2 + r2, then -24 - 1/2 = r2, so r2 = -24 1/2, which is the other root now? Oh my gosh, please hope this is
Last edited by mr fantastic; July 10th 2011 at 04:19 PM.
Re: value of k and roots
Comparing the coefficients of x gives $2B-A = -24$ and since we know A we have $2B-8 = -24$ which makes $B = -8$. Therefore I get $k=8$.
If we sub this back we get $16x^2-24x+8 = 8(2x-1)(x-1) = 0$
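The answer can also be checked mechanically: substitute the known root x = 1/2 to get k, then use Vieta's formulas for the other root. A quick Python check using the thread's numbers:

```python
from fractions import Fraction

a, b = 16, -24
r1 = Fraction(1, 2)  # the given root

# Substitute the known root: 16*(1/2)^2 - 24*(1/2) + k = 0  =>  k = 8
k = -(a * r1**2 + b * r1)
print(k)  # 8

# Vieta: r1 * r2 = k/a, so the other root is
r2 = (k / a) / r1
print(r2)  # 1

# Sanity check: both roots satisfy the equation exactly
for r in (r1, r2):
    assert a * r**2 + b * r + k == 0
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt about whether the roots really satisfy the equation.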
Re: value of k and roots
thank you sir... hope i can learn this more... i am not so good in mathematics
more power
Re: value of k and roots
Hello RCS,
given one root, substitute it into your equation and find k = 8;
factor the new equation after simplification and find the other root (1).
Hooke's Law
"The power (sic.) of any springy body is in the same proportion with the extension."
Hooke's Law can best be explained as the relationship between the force exerted on a mass and its position x. Consider an object of mass m on a frictionless surface, attached to a spring with spring constant k. The force exerted by the spring is proportional to the spring's compression or extension, making the force in question a function of the mass's position. Any object which undergoes a slight displacement from a stable point of equilibrium will oscillate about its equilibrium position, experiencing a restoring force that is proportional to its displacement from the stable point of equilibrium.
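In symbols, the restoring force is F = -k x: proportional to the displacement x and always pointing back toward equilibrium. A minimal numerical sketch (the spring constant and displacements are made-up example values):

```python
def hooke_force(k, x):
    """Restoring force of an ideal spring: F = -k * x."""
    return -k * x

k = 50.0  # N/m, assumed spring constant
for x in (-0.02, 0.0, 0.02):  # displacements in metres
    print(x, hooke_force(k, x))

# The force always opposes the displacement (a restoring force):
assert hooke_force(k, 0.02) < 0 < hooke_force(k, -0.02)
assert hooke_force(k, 0.0) == 0.0
```

For small displacements this linear force is what produces the simple harmonic oscillation mentioned above.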
Difference Between Series and Parallel Circuits
Series vs Parallel Circuits
An electrical circuit can be set up in many ways. Electronic devices such as resistors, diodes, switches, and so on, are components placed and positioned in a circuit structure. The placement of such
components is crucial to the operation of the circuit, as different kinds of setups create a different kind of output, result, or purpose. Two of the simplest electronic or electrical connections are
called the series and parallel circuits. These two are actually the most basic setup of all electrical circuits, but are significantly different from each other.
Fundamentally, a series circuit aims to have the same amount of current flow through all the components placed inline. It is called a ‘series’ because of the fact that the components are in the same
single path of the current flow. For instance, when components such as resistors are put in a series circuit connection, the same current flows through these resistors, but each will have different
voltages, assuming that the amount of resistance is dissimilar. The voltage of the whole circuit will be sum of the voltages in every component or resistor.
In series circuits:
Vt = V1 + V2 + V3…
It = I1 = I2 = I3…
Rt = R1 + R2 + R3…
Vt = total circuit voltage
V1, V2, V3, and so on = voltage in each component
It = total current
I1, I2, I3, and so on = current through each component
Rt = total resistance from components/resistors
R1, R2, R3, and so on = resistance values of each component
The other type of connection is called ‘parallel’. Components of such a circuit are not inline, or in series, but parallel to each other. In other words, the components are wired in separate loops.
This circuit splits the current flow, and the current flowing through each component will ultimately combine to form the current flowing in the source. The voltages across the ends of the components
are the same; the polarities are also identical. Let’s draw out the same example given in the series circuit, and assume that the resistors are connected in parallel. The other term for ‘parallel’
circuits is ‘multiple’, because of the multiple connections.
In parallel circuits:
Vt = V1 = V2 = V3
It = I1 + I2 + I3 = Vt (1/R1 + 1/R2 + 1/R3), since
1/Rt = 1/R1 + 1/R2 + 1/R3
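These formulas are easy to check numerically. A short Python sketch (the resistor values and source voltage are arbitrary examples) computing the total resistance both ways:

```python
def series_resistance(resistors):
    """Rt = R1 + R2 + R3 + ..."""
    return sum(resistors)

def parallel_resistance(resistors):
    """1/Rt = 1/R1 + 1/R2 + 1/R3 + ..."""
    return 1.0 / sum(1.0 / r for r in resistors)

rs = [8.0, 8.0, 4.0]  # ohms, example values

print(series_resistance(rs))    # 20.0
print(parallel_resistance(rs))  # 2.0

# In series the same current flows everywhere; in parallel the same
# voltage appears across every branch and the branch currents add up:
v = 10.0  # volts, example source
i_parallel = sum(v / r for r in rs)
assert abs(i_parallel - v / parallel_resistance(rs)) < 1e-12
```

Note that the parallel combination (2 ohms) is smaller than the smallest individual resistor, while the series combination is the plain sum.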
One of the major differences – besides the voltage, current, and resistance formulas – is the fact that series circuits will break if one component, such as a resistor, burns out; the
circuit won't be complete. In parallel circuits, however, the other components will continue functioning, as each component has its own circuit and is independent.
1. Series circuits are basic types of electrical circuits in which all components are joined in a sequence so that the same current flows through all of them.
2. Parallel circuits are types of circuits in which the identical voltage occurs in all components, with the current dividing among the components based on their resistances, or the impedances.
3. In series circuits, the connection or circuit will not be complete if one component in the series burns out.
4. Parallel circuits will still continue to operate, at least with other components, if one parallel-connected component burns out.
In a series circuit, the downstream wire of the first device is connected to the upstream wire of the second device, so the current has to flow through device one, then device two. In this case, total resistance is increased because it is harder for the current to flow through two devices.
In a parallel circuit, the upstream wires of the two devices are connected to each other, and the downstream wires of the two devices are connected. So, part of the current will flow through device one, and the rest of the current will flow through device two. In this case, total resistance is decreased because the current can flow more easily through the resistors (this takes a little math to really prove, but the conclusion still holds).
Some circuits can have some devices wired in series, and others in parallel, but these circuits should not be confused with simple series and parallel circuits.
Series circuit: there is only one path from one end of the battery back to the other end.
Parallel circuit: there are at least two different paths from one end of the battery back to the other end.
Throughput Analysis for a High-Performance FPGA-Accelerated Real-Time Search Application
International Journal of Reconfigurable Computing
Volume 2012 (2012), Article ID 507173, 16 pages
Research Article
^1School of Computing Science, University of Glasgow, Glasgow, G12 8QQ, UK
^2Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA
Received 13 October 2011; Accepted 20 December 2011
Academic Editor: Miaoqing Huang
Copyright © 2012 Wim Vanderbauwhede et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
We propose an FPGA design for the relevancy computation part of a high-throughput real-time search application. The application matches terms in a stream of documents against a static profile, held
in off-chip memory. We present a mathematical analysis of the throughput of the application and apply it to the problem of scaling the Bloom filter used to discard nonmatches.
1. Introduction
The focus on real-time search is growing with the increasing adoption and spread of social networking applications. Real-time search is equally important in other areas, such as analysing emails for spam or searching web traffic for particular patterns.
FPGAs have great potential for speeding up many types of applications and algorithms. By performing a task in a fraction of the time of a conventional processor, large energy savings can be achieved.
Therefore, there is a growing interest in the use of FPGA platforms for data centres. Because of the dramatic reduction in the required energy per query, data centres with FPGA search solutions could
operate at a fraction of the power of current data centres, eliminating the need for cooling infrastructure altogether. As the cost of cooling is actually the dominant cost in today’s data centres [1
], the savings would be considerable. In [2, 3] we presented our initial work on applying FPGAs to the acceleration of search algorithms. In this paper, we present a novel design for the scoring part of
an FPGA-based high-throughput real-time search application. We present a mathematical analysis of the throughput of the system. This novel analysis is applicable to a much wider class of applications
than the one discussed in the paper; any algorithm that performs nondeterministic concurrent accesses to a shared resource can be analysed using the model we present. In particular, the technology
presented in this paper can also be used for “traditional,” that is, inverted index based, web search.
2. Design of the Real-Time Search Application
Real-time search, in information retrieval parlance called “document filtering,” consists of matching a stream of documents against a fixed set of terms, called the “profile.” Typically, the profile
is large and must therefore be stored in external memory.
The algorithm implemented on the FPGA can be expressed as follows. (i) A document d is modelled as a "bag of words," that is, a set of pairs (t, f), where f is the term frequency, that is, the number of occurrences of the term in the document d, and t is the term identifier. (ii) The profile is a set of pairs (t, w), where the term weight w is determined using the "Relevance Based Language Model" proposed by Lavrenko and Croft [4].
In this work we are concerned with the computation of the document score, which indicates how well a document matches the profile. The document has been converted to the bag-of-words representation
in a separate stage. We perform this stage on the host processor using the Open Source information retrieval toolkit Lemur [5]. We note that this stage could also be very effectively performed on the FPGA.
Simplifying slightly, to determine if a document matches a given profile, we compute the sum of the products of term frequency and term weight:

score(d) = Σ_{t ∈ d} f_t w_t.  (1)

The weight w is typically a high-precision word (64 bits), stored in a lookup table in the external memory. If the score is above a given threshold, we return the document identifier and the score by writing them into the external memory.
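The score computation is, in effect, a sparse dot product between the document's term-frequency vector and the profile's weight vector. A minimal Python illustration (term identifiers, weights, and the threshold are invented example values):

```python
# Profile: term identifier -> weight (invented example values)
profile = {101: 0.5, 205: 1.5, 999: 0.25}

# Document in bag-of-words form: a set of (term id, term frequency) pairs
document = [(101, 3), (404, 7), (205, 1)]

# score = sum of f * w over the document's terms; terms absent from
# the profile contribute a zero weight
score = sum(f * profile.get(t, 0.0) for t, f in document)
print(score)  # 3*0.5 + 1*1.5 = 3.0

threshold = 2.0  # invented
matches = score > threshold
```

Only documents whose score exceeds the threshold are reported back, which is what keeps the output traffic to the host small.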
2.1. Target Platform
The target platform for this work is the Novo-G FPGA supercomputer [6] hosted by the NSF Center for high-performance reconfigurable computing (CHREC) (http://www.chrec.org/). This machine consists of
24 compute servers, each hosting a GiDEL PROCStar-III board. The board contains 4 FPGAs with 2 banks of DDR SDRAM per FPGA: one bank is used for the document collection and one for the profile. The data width is 64 bits, which means that the FPGA can read 128 bits per memory bank per clock cycle [7]. For more details on the platform, see Section 4.
2.2. Term-Scoring Algorithm
To simplify the discussion, we first consider the case where terms are scored sequentially, and that, as in our original work, we use a Bloom filter to limit the number of external memory accesses.
For every term in the document, the application needs to look up the corresponding profile term to obtain the term weight. As the profile is stored in the external SDRAM, this is an expensive
operation (typically 20 cycles per access). The purpose of document filtering is to identify a small amount of relevant documents from a very large document set. As most documents are not relevant,
most of the lookups will fail (i.e., most terms in most documents will not occur in the profile). Therefore, it is important to discard the negatives first. For that purpose we use a “trivial” Bloom
filter implemented using the FPGA’s on-chip memory.
2.2.1. “Trivial” Bloom Filter
A Bloom filter [8] is a data structure used to determine membership of a set. False positives are possible, but false negatives are not. With this definition, the design we use to reject negatives is a Bloom filter. However, in most cases a Bloom filter uses a number k of hash functions to compute several keys for each element, and an element of the set is added to the table by assigning a "1" at each of its key positions. As a result, hash collisions can lead to false positives.
Our Bloom filter is a "trivial" edge case of this more general implementation: our hashing function is the identity function h(x) = x, and we only use a single hash function (k = 1), so every element in the set corresponds to exactly one entry in the Bloom filter table. As a result, the size of the Bloom filter is the same as the size of the set, and there are no false positives. Furthermore, no elements are added to the set at run time.
2.2.2. Bloom Filter Dimensioning
The internal block RAMs of the Altera Stratix-III FPGA that support efficient single-bit access are limited to 4 Mb; on a Stratix-III SE260, there are 864 M9K blocks that can be configured as 8K×1 [9]. On the other hand, the vocabulary size of our document collection is 16M terms (based on English documents using unigrams, bigrams, and trigrams). We therefore used a very simple "hashing function," h(x) = x div 4. Thus we obtain one entry for every four elements, which leads to three false positives out of four on average. This obviously results in a four times higher access rate to the external memory than if the Bloom filter were 16 Mb. As the number of positives in our application is very low, the effect on performance is limited.
2.2.3. Document Stream Format
The document stream is a list of (document identifier, document term pair set) pairs. Physically, the FPGA accepts a fixed number of streams of words with fixed width w. The document stream must be encoded onto these word streams. As both elements in the document term pair (t, f) are unsigned integers, m pairs can be encoded onto a word if w is larger than or equal to m times the sum of the number of bits required for the maximum values of t and f:

w ≥ m (⌈log2 t_max⌉ + ⌈log2 f_max⌉).

To mark the start of a document we insert a header word (identified by a reserved marker value), followed by the document ID.
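For example, with w = 64 bits, a 24-bit term identifier and an 8-bit term frequency (the widths used later in Section 2.3.1), m = 2 pairs fit in one word, since 2 × (24 + 8) = 64. A Python sketch of such a packing (the header marker value is an invented placeholder, not the one used on the FPGA):

```python
TERM_BITS, TF_BITS = 24, 8
PAIR_BITS = TERM_BITS + TF_BITS  # 32 bits per (t, f) pair
HEADER = 0xFFFFFFFFFFFFFFFF     # invented 64-bit header marker

def pack_pair(t, f):
    assert t < (1 << TERM_BITS) and f < (1 << TF_BITS)
    return (t << TF_BITS) | f

def encode(doc_id, pairs):
    """Encode one document as 64-bit words: header, doc id, 2 pairs/word."""
    words = [HEADER, doc_id]
    for i in range(0, len(pairs), 2):
        word = 0
        for j, (t, f) in enumerate(pairs[i:i + 2]):
            word |= pack_pair(t, f) << (j * PAIR_BITS)
        words.append(word)
    return words

stream = encode(42, [(101, 3), (205, 1), (404, 7)])
print(len(stream))  # header + doc id + 2 data words = 4
```

The decoder on the FPGA side simply slices each 64-bit word into two 32-bit pairs and splits each pair into its 24-bit and 8-bit fields.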
2.2.4. Profile Lookup Table Implementation
In the current implementation, the lookup table that stores the profile is implemented in the most straightforward way: as the vocabulary size is 2^24 and the weight for each term in the profile can be stored in 64 bits, a profile consisting of the entire vocabulary requires 2^24 × 8 B = 128 MB, which is less than the size of the fixed 256 MB SDRAM on the PROCStar-III board. Consequently, there is no need for hashing; the memory contains zero weights for all terms not present in the profile.
2.2.5. Sequential Implementation
The diagram for the sequential implementation of the design is shown in Figure 1.
Using the lookup table architecture and document stream format described above, the actual lookup and scoring system is quite straightforward: the input stream is scanned for header and footer words. The header word action is to set the document score to 0; the footer word action is to collect and output the document score. For every term in the document, first the Bloom filter is used to
discard negatives, and then the profile term weight is read from the SDRAM. The score is computed and accumulated for all terms in the document, and finally the score stream is filtered against a
threshold before being output to the host memory. The threshold is chosen so that only a few tens or hundreds of documents in a million are returned.
If we were simply to look up every term in the external memory, the maximum achievable throughput would be 1/T_ext terms per cycle, with T_ext the number of cycles required to look up the term weight in the external memory and compute the term score. The use of a Bloom filter greatly improves the throughput, as the Bloom filter access will typically be much faster than the external memory access and subsequent score computation. If the probability for a term to occur in the profile is p and the access time to the Bloom filter is T_BF, the average access time becomes T_BF + p·T_ext. In practice p will be very low, as most document terms will not occur in the profile (because otherwise the profile would match all documents). The more selective the profile, the fewer the number of document terms that match it.
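The control flow of the sequential scorer, and the average-access-time estimate T_BF + p·T_ext, can be modelled in a few lines of Python (the profile, cycle counts, and document are invented; the Bloom filter is replaced by an exact membership set for brevity):

```python
profile = {101: 0.5, 205: 1.5}  # term id -> weight (invented)
bloom = set(profile)            # stand-in for the on-chip Bloom filter
T_BF, T_EXT = 1, 20             # cycles (assumed values)

def score_document(pairs):
    score, cycles = 0.0, 0
    for t, f in pairs:
        cycles += T_BF          # every term probes the Bloom filter
        if t in bloom:          # only positives touch external memory
            cycles += T_EXT
            score += f * profile[t]
    return score, cycles

score, cycles = score_document([(101, 3), (404, 7), (303, 2), (205, 1)])
print(score, cycles)  # 3.0 and 4*T_BF + 2*T_EXT = 44 cycles

# Average access time per term is T_BF + p*T_EXT, with p the hit rate:
p = 2 / 4
assert cycles / 4 == T_BF + p * T_EXT
```

With a realistic p of a fraction of a percent, the average cost per term stays close to the single Bloom filter cycle, which is the whole point of the filter.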
2.3. Parallelising Lookups
The scoring process as described above is sequential. However, as in the bag-of-words representation all terms are independent, there is scope for parallelisation. In principle, all terms of a
document could be scored in parallel, as they are independent and ordering is of no importance.
2.3.1. Parallel Document Streams
In practice, even without the bottleneck of the external memory access, the amount of parallelism is limited by the I/O width of the FPGA, in our case 64bits per memory bank. A document term can be
encoded in 32bits (a 24-bit term identifier and an 8-bit term frequency). As it takes at least one clock cycle of the FPGA clock to read in two new 64-bit words (one per bank), the best case for
throughput would be achieved if 4 terms per document were scored in parallel in a single cycle. However, in practice scoring requires more than one cycle; to account for this, the process can be further parallelised by demultiplexing the document stream into a number of parallel streams. If, for example, scoring takes 4 cycles, then by scoring 4 parallel document streams the application can reach the maximal throughput.
2.4. Parallel Bloom Filter Design
Obviously, the above solution would be of no use if there would be only a single, single-access Bloom filter. The key to parallelisation of the lookup is that because the Bloom filter is stored in
on-chip memory, accesses to it can be parallelised by partitioning the Bloom filter into a large number of small banks. The combined concepts of using parallel streams and a partitioned Bloom filter
are illustrated in Figure 2. To keep the diagram uncluttered, only the paths of the terms (Bloom filter addresses) have been shown.
Every stream is multiplexed to all Bloom filter banks; every bank is accessed through an arbiter with one port per stream. It is intuitively clear that, for large numbers of banks, the probability of contention approaches zero, and hence the throughput will approach the I/O limit (or would, if none of the lookups resulted in an external memory access and score computation).
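The intuition that contention vanishes for a large number of banks can be checked with a quick Monte Carlo model of the multiplexer (the stream and bank counts below are example values, not the design's actual parameters):

```python
import random

def contention_prob(n_streams, n_banks, trials=20000, seed=1):
    """Fraction of cycles in which at least two of the n_streams
    concurrent accesses pick the same Bloom filter bank."""
    rng = random.Random(seed)
    clashes = 0
    for _ in range(trials):
        banks = [rng.randrange(n_banks) for _ in range(n_streams)]
        if len(set(banks)) < n_streams:
            clashes += 1
    return clashes / trials

for b in (16, 64, 256, 1024):
    print(b, contention_prob(4, b))

# Exact no-clash probability is B(B-1)...(B-N+1)/B^N, which tends to 1
# as B grows, so the contention probability tends to 0.
```

For 4 streams, the clash probability drops from roughly a third at 16 banks to well under a percent at 1024 banks, matching the intuition above.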
3. Throughput Analysis
In this section, we present the mathematical throughput analysis of the Bloom filter-based document scoring system. The analysis consists of four parts. (i) In Section 3.1 we derive an expression to enumerate all possible access patterns for N concurrent accesses to a Bloom filter built of B banks and use it to compute the probability of each pattern. (ii) In Section 3.2 we compute the average access time for each pattern, given that k accesses out of N will result in a lookup in the external memory. We consider in particular the cases of k = 0 and k = 1 and propose an approximation for higher values of k. (iii) In Section 3.3 we compute the probability that k accesses out of N will result in a lookup in the external memory. (iv) In Section 3.4, combining the results from Sections 3.2 and 3.3, we compute the average access time over all k for a given access pattern; finally, we combine this with the results from Section 3.1 to compute the average access time over all access patterns.
3.1. Bloom Filter Access Patterns
We need to calculate the probability of contention between the N accesses, for a Bloom filter with B banks. Each bank has an arbiter which sequentialises the contending accesses, so m contending accesses to a given bank will take a time m·T_BF, with T_BF the time required for a single lookup in the Bloom filter. We also account for a fixed cost of contention, T_c. We use a combinatorial approach: we count all possible arrangements of N accesses to B banks; then we count the arrangements that result in concurrent accesses to a bank.
To do so, we first need to compute the integer partitions of N [10], as they constitute all possible arrangements of accesses. For the remainder of the paper, we will refer to "all possible arrangements that result in x" as the weight of x. Each partition of N will result in a particular average access time over all accesses. If we know the probability that each partition will occur and its resulting average access time, we can compute the total average access time.
3.1.1. Integer Partitions
A partition of a positive integer N is a nonincreasing sequence of positive integers with N as their sum. Each integer is called a part. Thus, with N in our case being the number of access ports to our Bloom filter, each partition is a possible access pattern for the Bloom filter. For example, the partition (5, 3, …) means that the first bank in the Bloom filter gets 5 concurrent accesses, the next 3, and so on. For N ≤ B, every partition of N is a possible pattern; if N > B, we must restrict the set of partitions, because we cannot have more than B parts in the partition, as B is the number of banks. In other words, the number of parts n_p satisfies n_p ≤ B. We denote a partition of N with n_p parts as π = (π_1, …, π_{n_p}).
3.1.2. Probability of Each Partition
For each partition, we can compute the probability of it occurring as follows. If there are N concurrent accesses to the Bloom filter's B banks, then each access pattern can be written as a sequence of N bank numbers. We are not interested in the actual numbers, but in the patterns: for example, a sequence in which one bank number occurs twice and two other bank numbers occur once each corresponds to the partition (2, 1, 1). Consequently, given a partition we need to compute the probability of the sequences which it represents. The probability of each bank number occurring is the same, 1/B. We can compute this probability as a product of three terms. First, we consider the probabilities for sequences of length N of r events, where event i has probability p_i and occurs n_i times. These are given by the multinomial distribution

P(n_1, …, n_r) = (N! / (n_1! ⋯ n_r!)) p_1^{n_1} ⋯ p_r^{n_r},

where n_1 + … + n_r = N and p_1 + … + p_r = 1.
In our case, each event has the same probability 1/B, and the number of times each event occurs is the size of each part in the partition, so

P_1(π) = (N! / (π_1! ⋯ π_{n_p}!)) (1/B)^N.

This gives the probability for a sequence of n_p groups of events, N events in total.
The actual sequence will consist of bank numbers drawn from 1, …, B, so we must consider the total number of different sets of bank numbers that result in a given partition. This is simply the number of possible combinations of n_p numbers out of B, C(B, n_p).
Finally, we must consider the permutations as well; for example, for the partition (2, 1, 1) we must also consider the assignments (1, 2, 1) and (1, 1, 2) of the part sizes to the chosen banks. This is a combinatorial problem in which the bins are distinguishable by the number of elements they contain; however, the actual number of elements is irrelevant, only the fact that the bins are distinguishable. The derivation is slightly more complicated. We proceed as follows: we transform the partition into a tuple with as many elements as the number of different integers in the partition, where the value of each element is the number of times this integer occurs in the partition; for example, the partition (3, 2, 2, 1) yields (1, 2, 1). We call the new set the frequencies of the partition, F(π). As partitions are nonincreasing sequences, the transformation is quite straightforward.
First we create an ordered set A = (A_1, …, A_q) with A_1 ∪ … ∪ A_q = π; that is, A is a set partition of π. The elements of A are defined recursively as

A_1 = {π_i ∈ π : π_i = π_1}, and, for j > 1, A_j = {π_i ∈ π'_j : π_i = max(π'_j)} with π'_j = π − (A_1 ∪ … ∪ A_{j−1}).

That is, A_1 contains all parts of π identical to the first part of π; for j > 1, we remove all elements of A_1, …, A_{j−1} from π and repeat the process, and we continue recursively until the remaining set is empty. Finally, we create the (ordered) set of the cardinal numbers of all elements of A: F(π) = (|A_1|, …, |A_q|). We are looking for the permutations with repetition of the parts, which is given by

P_2(π) = n_p! / (F_1! ⋯ F_q!),

where n_p = F_1 + … + F_q.
Thus the final probability for each partition π of N and a given B becomes

P(π) = (N! / (π_1! ⋯ π_{n_p}!)) (1/B)^N C(B, n_p) (n_p! / (F_1! ⋯ F_q!)).

We observe that the probabilities of all admissible partitions sum to 1, Σ_π P(π) = 1, regardless of the value of B.
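The partition probability (a multinomial term, times C(B, n_p) choices of banks, times the permutations of the parts over the chosen banks, as reconstructed here) can be checked exhaustively for small N and B: summed over all partitions, it must equal 1. A Python sketch:

```python
from math import factorial, comb
from collections import Counter

def partitions(n, max_part=None):
    """All nonincreasing integer partitions of n, as tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def prob(partition, n, b):
    """Probability of the access pattern `partition` for n accesses
    spread uniformly over b banks."""
    np_ = len(partition)
    if np_ > b:          # more parts than banks: impossible pattern
        return 0.0
    multinom = factorial(n)
    for part in partition:
        multinom //= factorial(part)
    perms = factorial(np_)
    for f in Counter(partition).values():   # frequencies F(pi)
        perms //= factorial(f)
    return multinom * comb(b, np_) * perms / b**n

for n, b in [(4, 4), (5, 3), (6, 16)]:
    total = sum(prob(p, n, b) for p in partitions(n))
    print(n, b, total)  # each total should be 1.0 (up to rounding)
```

The check also covers N > B: partitions with more than B parts simply get probability zero, and the remaining ones still sum to 1.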
In the next section we derive an expression for the access time for a given partition, depending on the number of accesses that will result in an external memory lookup.
3.2. Average Access Time per Pattern
The time to perform the N lookups in the Bloom filter is of course determined by the number of contending accesses: for m contending accesses, it will take a time T_c + m·T_BF. However, not all Bloom filter lookups will result in a subsequent access to the external memory; in fact most of them will not, and this is exactly the reason for having the Bloom filter. We will call a Bloom filter lookup that results in an access to the external memory a hit.
3.2.1. Case of No Hits
First, we will consider the case of 0 hits, that is, the most common case. In this case, the average access time for a given partition is the average over all the parts in the partition:

t_0(π) = T_c + (T_BF / n_p) Σ_i π_i = T_c + T_BF N / n_p,

where n_p is the number of parts of π. For the case of n_p = N there is no contention, so there is no fixed cost of contention T_c. Note again that n_p ≤ B.
In practice, a small number of Bloom filter lookups will result in a hit, and consequently there is a chance of having one or more hits for N concurrent accesses.
3.2.2. Case of a Single Hit
Consider the case of a single hit (out of the N lookups). The question we need to answer is: how long on average will it take to encounter the hit? Because as soon as we encounter the hit, we can proceed to perform the external memory access, without having to wait for the remaining lookups. This time depends on the particular integer partition. To visualise the partition, we use a so-called Ferrers diagram [11], in which every part is arranged vertically as a list of dots. For example, consider the Ferrers diagram for the partition (8, 4, 1, 1, 1, 1), that is, N = 16 (Figure 3). Each row can be interpreted as the number of concurrent accesses to different banks; each column represents the number of contending accesses to a particular bank.
From this diagram it is clear that the probability of finding the hit on the first cycle is 6/16; on the second to fourth cycle, 2/16; and on the fifth to eighth cycle, 1/16. Consequently, the average time to encounter the hit will in this case be

(6·1 + 2·(2 + 3 + 4) + 1·(5 + 6 + 7 + 8))/16 · T_BF = 50/16 · T_BF.
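The 6/16, 2/16, 1/16 pattern above corresponds to the partition (8, 4, 1, 1, 1, 1) of 16; transposing its Ferrers diagram and weighting each cycle by the number of accesses it serves reproduces the 50/16 figure (in units of the Bloom filter lookup time). A Python check:

```python
def conjugate(partition):
    """Transpose the Ferrers diagram: entry j = number of parts > j."""
    return [sum(1 for p in partition if p > j) for j in range(max(partition))]

def mean_cycles_to_hit(partition):
    """Expected service cycle of the hit, the hit being uniformly
    distributed over all N accesses."""
    n = sum(partition)
    conj = conjugate(partition)
    return sum((j + 1) * c for j, c in enumerate(conj)) / n

part = [8, 4, 1, 1, 1, 1]
print(conjugate(part))           # [6, 2, 2, 2, 1, 1, 1, 1]
print(mean_cycles_to_hit(part))  # 50/16 = 3.125
```

The conjugate partition tells us how many accesses are served in each cycle, which is exactly the weight needed for the expectation.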
To generalise this derivation, we observe first that the transposition of the Ferrers diagram of an integer partition yields a new integer partition of the same integer, called the conjugate partition π'. In our example, π = (8, 4, 1, 1, 1, 1) gives π' = (6, 2, 2, 2, 1, 1, 1, 1) (Figure 4).
We observe that the time it takes to reach a hit served in cycle j is j·T_BF. Using the conjugate partition π', we can write the lower bound for the average time it takes to reach a hit in partition π as

t_hit(π) = (T_BF / N) Σ_j j·π'_j + P_c T_c.

The term in T_c only occurs when the hit is in a bank with contention, that is, in a part greater than 1. There are m_1 parts of size 1, so the chance of the hit occurring in one of them (i.e., a hit on a bank without contention) is m_1/N. Thus, the probability of the term in T_c is

P_c = 1 − m_1/N.

And of course, as the hit results in an external access, the average access time is

t_1(π) = t_hit(π) + T_ext.  (15)

For the case of n_p = N (no contention), the equation reduces to T_BF + T_ext.
3.2.3. Case of Two or More Hits
If there are two or more hits, the exact derivation would require enumerating all possible ways of distributing the k hits over a given partition; furthermore, simply enumerating them is not sufficient: we would have to consider the exact time of occurrence of each hit to be able to determine whether a subsequent hit was encountered during or after the time T_ext taken to perform the external lookup and compute the score for a given hit. It is easy to see that, for large N, this problem is so complex as to be intractable in practice. However, we can make a simplifying assumption: in practice, T_ext will be much larger than the time T_BF to perform a Bloom filter lookup.
If that is the case, a good approximation for the total elapsed time is the time until the first hit is encountered plus k times the time for an external access. This approximation is exact as long as the time it takes to sequentially perform all external lookups is longer than the time between the best-case and worst-case Bloom filter access times for hits on a single bank, in other words as long as k·T_ext > (π_1 − 1)·T_BF. The worst case is of course π_1 = N, but this case has a very low probability; for example, for N = 16, the average value of all parts is 2.5; even considering only the parts > 1, the average is still < 4. For a larger N, the numbers are, respectively, 3 and 5. In practice, if T_ext >> T_BF, the error will be negligible.
Conversely, we could consider the time until the last hit is encountered plus k times T_ext. This approximation provides an upper bound for the access time.
Therefore, we are only interested in these two cases, that is, the lowest, respectively the highest, part of the partition containing at least one hit. We need to compute the probability that the lowest (resp. highest) part will contain a hit, then the next-but-lowest (resp. -highest) one, and so forth. For simplicity, we leave off the constant time factors in the following derivation.
Lower Bound
The number of all possible cases is , all possible arrangements of elements in bins. To compute the weight of a hit in the lowest part , we compute the complement, all possible arrangements without
any hits in . That means that we remove from . Then, using the notation for “not a hit in ,” we compute
These are all the possible cases for not having a hit in . Thus, is the number of possible arrangements with hits in .
We now do the same for , and so forth. That gives us all possible cases for not having a hit in
Obviously, there must be enough space in the remaining parts to accommodate hits, so is restricted to values where We call the highest index for which (19) holds, .
To obtain the weight of a hit in , we must of course subtract the weight of a hit in , because would give the weight for having a hit in all parts up to . It is easy to show (by substitution of (17))
Finally, the average time it takes to reach a part in a given with at least one hit out of is
With the above assumption, the average access time for hits can then be approximated as
We observe that for , (22) indeed reduces to (15) as and . For the equation reduces to .
Upper Bound
The upper bound is given by the probability that the highest part is occupied, and so forth, so the formula is the same as (18) but starting from the highest part , that is, with the corresponding
restriction on that
As we will see in Section 3.5, in practice the bounds are usually so close together that the difference is negligible.
3.3. Probability of External Memory Access
The chance that a term will occur in the profile depends on the size of the profile and the size of the vocabulary
This is actually a simplified view; it assumes that the terms occurring in the profile and the documents are drawn from the vocabulary in a uniform random way. In reality, the probability depends on
how discriminating the profile is. As the aim of a search is of course to retrieve only the relevant documents, we can assume that actual profiles will be more discriminating than the random case. In
that case (25) provides a worst case estimate of contention.
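Under the uniform-draw assumption, this worst-case hit probability is simply the ratio of profile size to vocabulary size. A one-line sketch, with illustrative sizes (a 16K-term profile over the 16M-term vocabulary addressable with 24-bit term ids; both values assumed for illustration):

```python
def profile_hit_probability(profile_terms, vocabulary_terms):
    """Worst-case (uniform-draw) probability that a document term
    occurs in the profile."""
    return profile_terms / vocabulary_terms

# Illustrative sizes (assumed): 16K-term profile, 16M-term vocabulary.
print(profile_hit_probability(16 * 1024, 16 * 1024 * 1024))  # 0.0009765625
```

This comes out at 1/1024, the same order of magnitude as the hit rates discussed in Section 3.5.3.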
The probability of hits, that is, contention between accesses to the external memory is then
That is, there are arrangements of accesses out of , and, for each of them, the probability that it occurs is . Furthermore, contending accesses will take a time . Of course, if no external access is
made, the external access time is 0.
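The hit-count distribution described here is binomial. The following sketch computes it and the resulting expected external-access time under the stated serialisation model; the parameter values (4 parallel lookups, hit rate 1/512) are assumed for illustration:

```python
from math import comb

def hit_count_distribution(n_parallel, p_hit):
    """Binomial P(k hits among n_parallel concurrent lookups)."""
    return [comb(n_parallel, k) * p_hit ** k * (1 - p_hit) ** (n_parallel - k)
            for k in range(n_parallel + 1)]

def expected_external_time(n_parallel, p_hit, t_ext):
    """k contending hits serialise into k external accesses of t_ext each;
    zero hits cost nothing."""
    return sum(k * pk * t_ext
               for k, pk in enumerate(hit_count_distribution(n_parallel, p_hit)))

dist = hit_count_distribution(4, 1 / 512)
print(sum(dist))                                  # ~1.0 (sanity check)
print(expected_external_time(4, 1 / 512, 100.0))  # ≈ n * p * t_ext = 0.78125
```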
3.4. Average Overall Throughput
3.4.1. Average Access Time over All for a Given Pattern
We can now compute the average access time over all for a given access pattern by combining (22) and (26)
3.4.2. Average Access Time over All Patterns for Given and
Finally, using (9) and (27), we can compute the average access time over all patterns for given and , that is, the average overall throughput of the application with parallel threads and an -bank
Bloom filter
3.5. Analysis
In this section the expression obtained in Section 3.4 is used to investigate the performance of the system and the impact of the values of , , , , and on the throughput.
3.5.1. Accuracy of Approximation
To evaluate the accuracy of the approximations introduced in Section 3.2.3, we compute the relative difference between the “first hit” approximation and the “upper bound” approximation. From Figure 5
, it can be seen that the difference is less than 1% of the throughput over all simulated cases. As the upper bound always overestimates the delay, and the “first hit” approximation will in most
cases return the correct delay, this demonstrates that both approximations are very accurate. An interesting observation is that for the error is almost the same as for , which illustrates that the
condition is sufficient but not necessary.
Next, we consider a more radical approximation; we assume that, for , , in other words we ignore all cases with more than 1 hit.
From Figure 6 we see that the relative difference between the throughput using this approximation and the “first hit” is very small, to such an extent that in almost all cases it is justified to
ignore . This is a very useful result as this approximation speeds up the computations considerably.
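To see why dropping the multi-hit cases is justified, one can compare the probability mass of two or more hits against that of at least one hit. For the illustrative values below (4 parallel lookups at the relatively high hit rate of 1/512, both assumed), the multi-hit share is well below 1%:

```python
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def multi_hit_share(n_parallel, p_hit):
    """Fraction of hit events that involve more than one hit:
    P(k >= 2) / P(k >= 1)."""
    p_ge1 = 1 - binom_pmf(n_parallel, 0, p_hit)
    p_ge2 = p_ge1 - binom_pmf(n_parallel, 1, p_hit)
    return p_ge2 / p_ge1

# Assumed: 4 parallel lookups, hit rate 1/512.
print(multi_hit_share(4, 1 / 512))  # roughly 0.003, i.e. well below 1%
```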
3.5.2. Maximum Achievable Throughput
The throughput depends on the number of hits in the Bloom filter. Let us consider the case where the Bloom filter contains no hits at all. This is the maximum throughput the system could achieve, and
it corresponds to a profile for which no document in the stream has any matches. We can use (11) and (9) to calculate the best-case average access time for a Bloom filter with banks and access ports
Note that for , .
The results are shown in Figure 7. The figure shows that for , (the values for our current implementation), the I/O-limited throughput (4 terms/cycle for the PROCStar-III board) is achieved with and
. That means that we need to demultiplex both input streams into 4 parallel streams because each 64-bit word contains 2 terms.
3.5.3. Throughput Including External Access
Figure 8 shows the effect of the external memory access and score computation. The important observation is that the performance degradation is quite small for low hit rates, and still only around
25% for a relatively high hit rate of 1/512. This demonstrates that the assumptions underlying our design are justified.
3.5.4. Impact of Bloom Filter Access Time
A further illustration of the impact of is given in Figure 9, which plots the throughput as a function of on a log/log scale. This figure illustrates clearly how a reduction in throughput as a result of slower Bloom filter access can be compensated for by increasing the number of access streams. Still, with , we would need 32 parallel streams per input stream, or we would need a very large number (128) of Bloom filter banks. On the one hand, the upper limit is 512 (the number of M9K blocks on the Stratix-III 260E FPGA); on the other hand, the size of the demultiplexers and arbiters would become
prohibitive as it grows as .
3.5.5. Impact of Profile Hit Probability and External Memory Access Time
The final figure (Figure 10) is probably the most interesting one. It shows how, for very selective profiles (i.e., profiles resulting in very low hit rates), the effect of long external memory
access times is very small.
4. FPGA Implementation
We implemented our design on the GiDEL PROCStar-III development board (Figure 11). This system provides an extensible high-capacity FPGA platform with the GiDEL PROC-API library-based developer kit
for interfacing with the FPGA.
4.1. Hardware
Each board contains four Altera Stratix-III 260E FPGAs running at 125MHz. Each FPGA supports a five-level memory structure, with three kinds of memory blocks embedded in the FPGA: (i) 5,100 MLAB RAM blocks (320bit), (ii) 864 M9K RAM blocks (9Kbit), and (iii) 48 M144K blocks (144Kbit), and 2 kinds of external DRAM memory: (i) 256MB DDR2 SDRAM onboard memory (Bank A) and (ii) two 2GB SODIMM DDR2 DRAM memories (Bank B and Bank C).
The embedded FPGA memories run at a maximum frequency of 300MHz, Bank A and Bank B at 667MHz, and Bank C at 360MHz. The FPGA board is connected to the host platform via an 8-lane PCI Express I/O interface. The host system consists of a quad-core 64-bit Intel Xeon X5570 CPU with a clock frequency of 2.93GHz and 3.5GB DDR2 DRAM memory; the operating system is 32-bit Windows XP. The host computer transfers data to the FPGA using 32-bit DMA channels.
4.2. Development Environment
FPGA-accelerated applications for the PROCStar board are implemented in C++ using the GiDEL PROC-API libraries for interacting with the FPGA. This API defines a hardware abstraction layer that
provides control over each hardware element in the system; for example, Memory I/O is implemented using the GiDEL MultiFIFO and MultiPort IPs. To achieve optimal performance, we implemented the FPGA
algorithm in VHDL (as opposed to Mitrion-C as used in our previous work). We used the Altera Quartus toolchain to create the bitstream for the Stratix-III.
4.3. FPGA Implementation Description
Figure 12 presents the overall workflow of our implementation. The input stream of document term pairs is read from the SDRAM via a FIFO. A Bloom filter is used to discard negatives (terms that do
not appear in the profile) for multiple terms in parallel. Profile weights are read corresponding to the positives, and the scores are computed for each term in parallel and accumulated to achieve
the final score described in (1). Below, we describe the key modules for the implementation: document streaming, profile negative hit filtering, and profile lookup and scoring.
4.3.1. Document Streaming
Using a bag-of-words representation (see Section 2) for the document, the document stream is a list of (document id, document term tuple set) pairs. The FPGA accepts a stream of 64-bit words from the
2GB DRAM (Bank B). Consequently, the document stream must be encoded onto this word stream. The document term tuple can be encoded in 32bits: 24bits for the term id (supporting a vocabulary of 16
million terms) and 8bits for the term frequency. Thus, we can combine two tuples into a 64-bit word. To mark the start and end of a document, we insert a marker word (64bits) followed by the document id (64bits).
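The packing scheme can be sketched as follows. Which half of the 64-bit word carries which tuple, and the exact marker encoding, are not specified in this excerpt, so the layout below is an assumption:

```python
def pack_tuple(term_id, freq):
    """One 32-bit tuple: 24-bit term id (16M-term vocabulary), 8-bit frequency."""
    assert 0 <= term_id < 1 << 24 and 0 <= freq < 1 << 8
    return (term_id << 8) | freq

def pack_word(first, second):
    """Two 32-bit tuples per 64-bit stream word (ordering assumed)."""
    return (first << 32) | second

def unpack_word(word):
    for tup in (word >> 32, word & 0xFFFFFFFF):
        yield tup >> 8, tup & 0xFF

word = pack_word(pack_tuple(0xABCDEF, 3), pack_tuple(42, 7))
print(list(unpack_word(word)))  # [(11259375, 3), (42, 7)]
```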
4.3.2. Profile Negative Hit Filtering
As described in Section 2, we implemented a Bloom filter in the FPGA’s on-chip MRAM (M9K blocks). The higher internal bandwidth of the MRAMs leads to very fast rejection of negatives. Although the
MRAM is fast, concurrent lookups lead to contention. To reduce contention we designed a distributed Bloom filter. Based on the analysis presented in this paper, the Bloom filter memory is distributed
over a large number of banks (16 in the current design) and a crossbar switch connects the document term streams to the banks. As shown in our analysis, in this way contention is significantly
reduced. The design was implemented as shown in Figure 2, but due to an issue with the board we could only use one SDRAM to store the collection. As a result we have only two parallel terms in the
current implementation.
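A software model of the banked, "trivial" (single-hash) Bloom filter might look like the sketch below. The hash function, bank-selection rule, and bank size are illustrative assumptions, not the actual M9K/crossbar design:

```python
import hashlib

class BankedBloomFilter:
    """Software model of a single-hash Bloom filter split over banks."""

    def __init__(self, n_banks=16, bits_per_bank=9 * 1024):
        self.n_banks, self.bits_per_bank = n_banks, bits_per_bank
        self.banks = [0] * n_banks          # each bank as one big integer

    def _locate(self, term_id):
        digest = hashlib.sha256(term_id.to_bytes(4, "little")).digest()
        h = int.from_bytes(digest[:8], "little")
        return h % self.n_banks, (h // self.n_banks) % self.bits_per_bank

    def add(self, term_id):
        bank, bit = self._locate(term_id)
        self.banks[bank] |= 1 << bit

    def __contains__(self, term_id):
        bank, bit = self._locate(term_id)
        return bool((self.banks[bank] >> bit) & 1)

bf = BankedBloomFilter()
bf.add(12345)
print(12345 in bf)  # True: a Bloom filter has no false negatives
```

With uniform hashing, two parallel lookups land on the same bank (and so contend) with probability 1/n_banks, i.e. 1/16 for the 16-bank design, matching the contention figure quoted in Section 5.3.1.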
4.3.3. Profile Lookup and Scoring
As explained in Section 2.2, the actual lookup and scoring system is quite straightforward; the input stream is scanned for header and footer words. The header word action is to store the subsequent
document ID and to set the corresponding document score to 0; the footer word action is to collect and output the (document ID, document score) pair if the score exceeds the threshold. For every two
terms in the document, first the Bloom filter is used to discard negatives, and then the weights corresponding to positives are read from the SDRAM. The score is computed for each of the terms in
parallel and added. The score is accumulated for all terms in the document, and finally the score stream is filtered against a limit before being output to the host. Figure 13 summarises the
implementation of the profile lookup and scoring.
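The lookup-and-scoring loop can be modelled in software as below. The marker value and the plain weighted-sum score are assumptions, since the paper's marker encoding and score function (1) are not reproduced in this excerpt:

```python
MARKER = 0xFFFFFFFFFFFFFFFF  # hypothetical marker value (not specified here)

def score_stream(words, weights, threshold):
    """Model of the pipeline: `words` is the 64-bit word stream
    (MARKER, doc_id, term words..., MARKER, doc_id); each term word
    packs two (24-bit term id, 8-bit freq) tuples.  Scoring is a plain
    weighted sum (an assumption); terms absent from the profile, i.e.
    those the Bloom filter would reject, contribute 0."""
    results = []
    doc_id, score, in_doc = None, 0.0, False
    it = iter(words)
    for w in it:
        if w == MARKER:
            if in_doc:                      # footer: emit if above threshold
                if score > threshold:
                    results.append((doc_id, score))
                next(it)                    # consume the trailing doc id
                in_doc = False
            else:                           # header: start a new document
                doc_id, score, in_doc = next(it), 0.0, True
            continue
        for tup in (w >> 32, w & 0xFFFFFFFF):
            term, freq = tup >> 8, tup & 0xFF
            score += weights.get(term, 0.0) * freq
    return results

# One document (id 7) with terms 5 (freq 3) and 9 (freq 1):
word = ((5 << 8 | 3) << 32) | (9 << 8 | 1)
print(score_stream([MARKER, 7, word, MARKER, 7], {5: 2.0}, 1.0))  # [(7, 6.0)]
```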
4.3.4. Discussion
The implementation above leverages the advantages of an FPGA-based design, in particular the memory architecture of the FPGA; on a general-purpose CPU-based system, it is not possible to create a
very fast, very low-contention Bloom filter to discard negatives. Also, a general-purpose CPU-based system only has a single, shared memory. Consequently, reading the document stream will contend for
memory access with reading the profile terms, and as there is no Bloom filter, we have to look up each profile term. We could of course implement a Bloom filter, but, as it will be stored in main
memory as well, there is no benefit; looking up a bit in the Bloom filter is as costly as looking up the term directly. Furthermore, the FPGA design allows for lookup and scoring of several terms in parallel.
4.4. FPGA Utilisation Details
Our implementation used only 11,033 of the 203,520 logic elements (LEs), or a 5% utilisation of the logic in the FPGA, and 4,579,824 out of 15,040,512 bits, for a 30% utilisation of the RAM. Of the 11,033LEs utilised by the whole design on the FPGA, the actual document filtering algorithm only occupied 1,655LEs, which is less than 1% utilisation; the rest was used by the GiDEL Memory IPs. The memory utilised for the whole design (4,579,824bits) was mainly for the Bloom filter that is mapped on embedded memory blocks (MRAMs). The Quartus PowerPlay Analyzer tool estimates the power consumption of the design to be 6W. The largest contribution to the power consumption is from the memory I/O.
5. Evaluation
In this section we discuss our evaluation results. We present our experimental methodology and the data summarising the performance of our FPGA implementation and its comparison with non-FPGA-accelerated baselines, and we conclude with the lessons learned from our experiments.
5.1. Creating Synthetic Data Sets
To accurately assess the performance of our FPGA implementation, we need to exercise the system on real-world input data; however, it is hard to get access to such data: large collections such as patents are not freely available and are governed by licenses that restrict their use. For example, although the researchers at Glasgow University have access to the TREC Aquaint collection and a large patent corpus, they are not allowed to share these with a third party. In this paper, therefore, we use synthetic document collections statistically matched to real-world collections. Our
approach is to leverage summary information about representative datasets to create corresponding language models for the distribution of terms and the lengths of documents; we then use these
language models to create synthetic datasets that are statistically identical to the original data sets. In addition to addressing IP issues, synthetic document collections have the advantages of
being fast to generate and easy to experiment with, and not taking up large amounts of disk space.
5.1.1. Real-World Document Collections
We analysed the characteristics of several document collections—a newspaper collection (TREC Aquaint) and two collections of patents from the US Patent Office (USPTO) and the European Patent Office
(EPO). These collections provide good coverage on the impact of different document lengths and sizes of documents on filtering time. We used the Lemur (http://www.lemurproject.org/) Information
Retrieval toolkit to determine the rank frequency distribution for all the terms in the collection. Table 1 shows the summary data from the collections we studied as templates.
5.1.2. Term Distribution
It is well known (see, e.g., [12]) that the rank-frequency distribution for natural language documents is approximately Zipfian, where is the frequency of the term with rank in a randomly chosen text of natural language, is the number of terms in the collection, and is an empirical constant. If , the series becomes a value of a Riemann -function and will therefore converge. This type of distribution approximates a straight line on a log-log scale. Consequently, it is easy to match this distribution to real-world data with linear regression.
Special purpose texts (scientific articles, technical instructions, etc.) follow variants of this distribution. Montemurro [13] has proposed an extension to Zipf’s law which better captures the linguistic properties of such collections. His proposal is based on the observation that, in general, after some pivot point , the probability of finding a word of rank in the text starts to decay much faster than in the beginning. In other words, in log-log scale, the low-frequency part of the distribution has a steeper slope than the high-frequency part. Consequently, the distribution can be divided into two regions, each obeying a power law but with different slopes
We determine the coefficients from curve-fitting on the summary statistics from the real-world data collections. Specifically, we use the sum of absolute errors as the merit function combined with a
binary search to obtain the pivot. We then use a least-squares linear regression, with statistics as a measure of quality (taken from [14]). A final normalisation step is added to ensure that the
piecewise linear approximation is a proper probability density function.
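The two-slope fit can be illustrated on a curve that is exactly piecewise linear in log-log space; the exponents and pivot below are illustrative, not the fitted values from the template collections (which are elided in this extraction):

```python
import math

def ols_slope(xs, ys):
    """Closed-form least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Exactly piecewise power-law curve (illustrative exponents and pivot),
# made continuous at the pivot:
pivot, s1, s2 = 100, 1.0, 2.0
ranks = range(1, 1001)
freq = [r ** -s1 if r <= pivot else pivot ** (s2 - s1) * r ** -s2 for r in ranks]

low = [(math.log(r), math.log(f)) for r, f in zip(ranks, freq) if r <= pivot]
high = [(math.log(r), math.log(f)) for r, f in zip(ranks, freq) if r > pivot]
print(round(-ols_slope(*zip(*low)), 3), round(-ols_slope(*zip(*high)), 3))
# -> 1.0 2.0: OLS on each log-log region recovers the slopes
```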
5.1.3. Document Length
Document lengths are sampled from a truncated Gaussian. The hypothesis that the document lengths in our template collections have a normal distribution was verified using a test with 95% confidence.
The sampled values are truncated at the observed minimum and maximum lengths in the template collection.
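Rejection sampling gives a simple implementation of the truncated Gaussian; the mean, deviation, and truncation points below are assumed for illustration, loosely patterned on the patent collections:

```python
import random

def truncated_gaussian_lengths(n, mean, sd, lo, hi, seed=1):
    """Rejection-sample document lengths from a Gaussian truncated at the
    observed minimum and maximum of the template collection."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mean, sd)
        if lo <= x <= hi:
            out.append(int(round(x)))
    return out

# Assumed parameters for illustration:
lengths = truncated_gaussian_lengths(1000, mean=2048, sd=600, lo=100, hi=6000)
print(min(lengths), max(lengths))
```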
Once the models for the distribution of terms and document lengths are determined, we use these models to create synthetic documents of varying lengths. Within each document, we create terms that
follow the fitted rank-frequency distribution. Finally, we convert the documents into the standard bag-of-words representation, that is, a set of unordered (term, frequency) pairs.
5.2. Experimental Parameters
Statistically, the synthetic collection will have the same rank-frequency distribution for the terms as the original data sets. Consequently, the probability that a term in the collection matches a
term in the profile will be the same in the synthetic collection and the original collection. The performance of the algorithm on the system now depends on (i) the size of the collection, (ii) the size of the profile, and (iii) the “hit probability,” that is, the probability that the profile weight corresponding to a term is nonzero.
To evaluate these effects, we studied a number of different configurations—with different document sizes, different profile lengths, and different profile constructions. Specifically, we studied
profile sizes of 4K, 16K, and 64K terms; the first two are of the same order of magnitude as the profile sizes for TREC Aquaint and EPO as used in our previous work [2], and the third, larger profile was added to investigate the impact of the profile size. We studied two different document collections: 128K documents of 2048 terms, which is representative of the patent collections, and
512K documents of 512 terms, similar to the Aquaint collection. Note that the total size of the collection is not important for the performance evaluation; for both the CPU and FPGA implementation,
the time taken to filter a collection is proportional to its size.
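As a quick arithmetic check (computed here, not stated in the paper), the two synthetic collections contain exactly the same total number of terms, so timing comparisons between them are not skewed by collection size:

```python
patent_like = 128 * 1024 * 2048   # 128K documents of 2048 terms each
aquaint_like = 512 * 1024 * 512   # 512K documents of 512 terms each
print(patent_like, aquaint_like)  # 268435456 268435456
assert patent_like == aquaint_like == 2 ** 28
```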
We evaluated four ways of creating profiles. The first way (“Random”) is by selecting a number of random documents from the collection until the desired profile size is reached. These documents were
then used to construct a relevance model. The relevance model defined the profiles which each document in the collection was matched against (as if it were being streamed from the network). The
second type of profiles (“Selected”) was obtained by selecting terms that occur in very few documents (less than ten in a million). For our performance evaluation purpose, the main difference between
these profiles is the hit probability, which was for the “Random” profiles and for the “Selected” profiles. For reference, we also compared the performance against an “Empty” profile (one that
results in no hits).
5.3. FPGA Performance Results
5.3.1. Access Time Measurements
The performance of the FPGA was measured using a cycle counter. The latency between starting the FPGA and the first term score is 22 cycles. For the subsequent terms, the delay depends on a number of factors. We considered three different cases: (i) “Best Case”: no contention on the Bloom filter access and no external memory access; (ii) “Bloom Filter Contention”: contention on the Bloom filter access for every term but no external memory access; (iii) “External Access”: no contention on the Bloom filter access, external memory access for every term.
These cases were obtained by creating documents with contending/not contending term pairs and by setting all Bloom filter bits to 0 (no external access, which corresponds to an empty profile) or 1 (which corresponds to a profile that would contain all terms in the vocabulary).
The results are shown in Table 2. As we read two terms in parallel, the Best Case (i.e., the case of no contention and no hits) demonstrates that the FPGA implementation does indeed work at I/O
rates, that is, . The table also shows the probability for each case when filtering an actual document collection.
The most interesting result in Table 2 is the “Bloom Filter Contention,” which shows that in our design . The case of “External Access,” which means no contention on the Bloom filter, and lookup of
both terms in the external memory shows that .
As explained in Section 3, the Bloom filter contention depends on the number of Bloom filter banks (2 parallel terms, 16 banks), and, in the current design, the contention probability is 1/16 (from (
9)). The probability for external access depends on the actual document collection and profile, but as the purpose of a document filter is to retrieve a small set of highly relevant documents, this
probability is typically very small, as demonstrated by the experiments discussed in the next section. Consequently, the typical performance is determined by the cycle counts for Best Case and Bloom
Filter Contention. Using (29), we get cycles per term. At a clock speed of 125MHz, this results in a throughput of 200 million terms per second (200MT/s) per FPGA.
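The throughput arithmetic can be reconstructed as follows. The figure of 1.25 average cycles per two-term group is an inference from the reported numbers (Section 6 states the design runs at 80% of the 2-terms-per-cycle I/O rate); the measured cycle counts themselves are elided in this extraction:

```python
clock_hz = 125e6             # FPGA clock (Section 4.1)
terms_per_group = 2          # two parallel terms per 64-bit word
avg_cycles_per_group = 1.25  # assumed: inferred from "80% of the I/O rate"

io_limit = terms_per_group * clock_hz
throughput = io_limit / avg_cycles_per_group
print(io_limit / 1e6, throughput / 1e6, 4 * throughput / 1e6)
# 250.0 200.0 800.0  (MT/s: I/O limit, achieved per FPGA, board total)
```

This reproduces the 200MT/s per FPGA quoted above and the 800MT/s across the four FPGAs reported in the next section.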
5.3.2. Comparison with CPU Reference Systems
Table 3 presents performance results for our FPGA implementation for various workload types. Focusing on a “Random” profile of 16K terms for 128K documents, our measured performance is close to 800
million terms/second for the design—close to the estimated performance; the earlier calculations showed 200 million terms/second per FPGA: across the four FPGAs in the GiDEL board, that translates to
800 million terms/second for the design. Table 3 also shows the sensitivity to various other parameters. The performance of the FPGA design is comparable for different profile sizes and document
sizes. However, as expected, the performance varies based on different hit probabilities for different profiles.
To compare the FPGA performance against a conventional CPU, we ran the experiments discussed in Section 5.1 on an optimised reference implementation (compared to the Lemur-based implementation used
in our previous work), written in C++, compiled with g++ with optimisation -O3, and run on two different platforms: System1 (an iMac) has an Intel Core 2 Duo Mobile E8435 CPU with clock frequency
3.06GHz and 8GB RAM, bus speed 1067MHz. System2 (a Linux server) has an Intel Core i7-2600 CPU running at 3.4GHz, with 16GB RAM, bus speed 1333MHz. The higher memory configurations are required to provide sufficient memory for the algorithm; it is not possible to run the reference implementation on the 32-bit Windows XP server which hosts the FPGA board, as its 3.5GB of memory is not sufficient. We could of course run the algorithm several times on smaller data sets, but in that case the time required to read the data from disk would dominate the performance. We keep the
entire data set in memory because the memory I/O is much higher than the disk I/O. While this approach might not be practical on a CPU-based system, on the FPGA-based system this is entirely
practical as the PROCStar-III board has a memory capacity of 32GB. This means that, for example, the Novo-G FPGA supercomputer, which hosts 48 PROCStar-III boards, can support a collection of
1.5TB. Note also that the format in which the documents are stored on the disk is a very efficient bag-of-words representation, which is much smaller than the actual textual representation of the documents.
The results are summarised in Table 3. For example, focusing on one example case, for the random profile with 16K terms and 128K documents, compared to the 800 million terms/second achieved by our design across the four FPGAs, the System2 system achieves 41 million terms/second, and the System1 system achieves 24 million terms/sec. This translates to a 36-fold speedup for the FPGA-based design
relative to the System1 system and a 20-fold speedup relative to the System2 system. Additionally, examining the results for various workload configurations, the FPGA’s performance is relatively
constant across different workload inputs. This bears out the rationale for our design: because in general hits are rare, the FPGA works at the speed determined by I/O and Bloom filter performance.
Unlike the FPGA-based design, the CPU-based system sees more variation in performance with profile size (degraded performance with increased profile size) and document size (degraded performance with
larger documents) and a bigger dropoff in performance between various profile types compared to the FPGA-based design.
6. Discussion
In the above sections we have used a preliminary implementation of our proposed design to validate the analytical model. The design does indeed behave in line with the model, for the case of two
parallel terms and a 16-bank Bloom filter. The performance is 200Mterms/s. This design is not optimal for several reasons. On the one hand, the original aim was to support four parallel terms, but
an issue with the access to one of the memories prevented this. On the other hand, as is clear from the model, a 16-bank implementation does not result in operation close to I/O rates. For four
parallel terms, this would require 64 banks; even for two parallel terms, the performance is 80% of the I/O rate. Our aim was not so much to achieve optimal performance as to implement and evaluate
our novel design and compare it to the analytical model. We therefore decided to limit the number of banks to 16 to reduce the complexity of the design, as the implementation was undertaken as a
summer project.
This means that there is a lot of scope for improving the current implementation. (i) We will deploy our design on a PROCStar-IV board, which does not have this issue, and thus we will be able to score 4 terms in parallel rather than 2. (ii) Even with a single SDRAM, we can be more efficient; the SDRAM I/O rate is 4GB/s (according to the PROCStar-III databook), while our current rate is only 1GB/s. By demultiplexing the scoring circuit, it should be possible to increase this rate to 4GB/s. (iii) Combining both improvements, an improved design could score 16 terms in parallel. This will of course
require a Bloom filter with more banks to reduce contention, but considering the current resource utilisation that is not a limitation. Consequently, the improved design should be able to operate up
to 8× faster than the current design.
In terms of the analytical model itself, there is some scope for further refinement, in particular for the external access; we currently use a single access time for one hit and for multiple hits. Just like for
the Bloom filter, we can include a fixed cost for concurrent accesses on the external memory. We also want to refine the model to include the effect of grouping terms: that is, the parallel terms are
usually grouped per two or four depending on the I/O width. This affects the waiting time on contention, as all terms in a single group need to wait before a new group can be fetched. Currently, the
model assumes all terms are independent. For the case of two terms, this assumption is correct; for more terms there is a slight underestimation of the access time in the case of contention. The
counting problem for this case is complicated as it requires enumerating all the possible groupings and working out the effect if one or more accesses per group are in contention.
7. Conclusion
In this paper we have presented a novel design for a high-performance real-time information filtering application using a low-latency “trivial” Bloom filter. The main contribution of the paper is the
derivation of an analytical model for the throughput of the application. This combinatorial model takes into account the access times to the Bloom filter and the external memory, the access
probability, and the probability and cost of contention on the Bloom filter. The approach followed and the intermediate expressions are applicable to a large class of resource-sharing problems.
We have implemented our design on the GiDEL PROCStar-III board. The analysis of the system performance clearly demonstrates the potential of the design for delivering high-performance real-time
search; we have shown that the system can in principle achieve the I/O-limited throughput of the design. Our current, suboptimal implementation works at 80% of its I/O rate, and this already results
in speedups of up to a factor of 20 at 125MHz compared to a CPU reference implementation on a 3.4GHz Intel Core i7 processor. Our analysis indicates how the system should be dimensioned to achieve
I/O-limited operation for different I/O widths and memory access times.
Our future work will focus on achieving higher I/O bandwidth by using both memory banks on the board and time-multiplexing the memory access. Our aim is to achieve an additional 8× speedup.
Acknowledgments
The authors acknowledge the support from HP, who hosted the FPGA board and provided funding for a summer internship. In particular, we’d like to thank Mitch Wright for technical support and Partha
Ranganathan for managing the project.
We’d like to acknowledge Anton Frolov who implemented the synthetic document model.
Wim Vanderbauwhede wants to thank Dr. Catherine Brys for fruitful discussions on probability theory and counting problems.
References
1. C. L. Belady, “In the data center, power and cooling costs more than the IT equipment it supports,” Electronics Cooling, vol. 13, no. 1, 2007.
2. W. Vanderbauwhede, L. Azzopardi, and M. Moadeli, “FPGA-accelerated information retrieval: High-efficiency document filtering,” in the 19th International Conference on Field Programmable Logic and Applications (FPL '09), pp. 417–422, September 2009.
3. L. Azzopardi, W. Vanderbauwhede, and M. Moadeli, “Developing energy efficient filtering systems,” in the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '09), pp. 664–665, July 2009.
4. V. Lavrenko and W. Bruce Croft, “Relevance-based language models,” in the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 120–127,
September 2001.
5. Lemur, “The Lemur toolkit for language modeling and information retrieval,” 2005, http://www.lemurproject.org/.
6. V. Kindratenko, R. Wilhelmson, R. Brunner, T. J. Martínez, and W. M. Hwu, “High-performance computing with accelerators,” Computing in Science and Engineering, vol. 12, no. 4, Article ID 5492949, pp. 12–16, 2010.
7. GiDEL Ltd, “PROCStar III, Data Book,” September 2009.
8. B. H. Bloom, “Space/time trade-offs in hash coding with allowable errors,” Communications of the ACM, vol. 13, no. 7, pp. 422–426, 1970.
9. Altera Corp, “Stratix III, Device Handbook,” July 2010.
10. G. Andrews and K. Eriksson, Integer Partitions, Cambridge University Press, 2004.
11. C. Chen and K. Koh, Principles and Techniques in Combinatorics, World Scientific, 1992.
12. R. M. Losee, “Term dependence: a basis for Luhn and Zipf models,” Journal of the American Society for Information Science and Technology, vol. 52, no. 12, pp. 1019–1025, 2001.
13. M. A. Montemurro, “Beyond the Zipf-Mandelbrot law in quantitative linguistics,” Physica A, vol. 300, no. 3-4, pp. 567–578, 2001.
14. W. Press, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 2007.
canonical bundle
For $X$ a space with a notion of dimension $n \coloneqq dim X \in \mathbb{N}$ and a notion of (Kähler) differential forms on it, the canonical bundle or canonical sheaf over $X$ is the line bundle (or its sheaf
of sections) of $n$-forms on $X$, the $n$-fold exterior product
$L_{can} \coloneqq \Omega^n_X$
of the bundle $\Omega^1_X$ of 1-forms.
The first Chern class of this bundle is also called the canonical characteristic class or just the canonical class of $X$.
Often this bundle is regarded via its sheaf of sections.
A square root of the canonical class, hence another characteristic class $\Theta$ such that $2 \Theta$ equals the canonical class, is called a Theta characteristic
(see also metalinear structure).
Notice that if $X$ is for instance a complex manifold regarded over the complex numbers, then Kähler differential forms are holomorphic forms.
The following table lists classes of examples of square roots of line bundles
In the context of algebraic geometry:
• Vladimir Lazić Lecture 7. Canonical bundle, I and II (2011) (pdf I, pdf II)
Revised on January 7, 2014 10:23:35 by
Urs Schreiber
Let R be the region in the first quadrant bounded by the line y = 5 and the graph of y = 5 – x^2. Write an integral expression for the area of R, using horizontal slices.
The region R looks like this:
This problem is very similar to the previous one, except that now the horizontal slice stretches from the graph of y = 5 – x^2 on the left to the graph of
The width of the slice at height y is now
The variable y ranges from 1 to 5, so the area of R is
Public Key Encryption
One of the weaknesses some point out about symmetric key encryption is that two users attempting to communicate with each other need a secure way to do so; otherwise, an attacker can easily pluck the
necessary data from the stream. In November 1976, a paper published in the journal IEEE Transactions on Information Theory, titled "New Directions in Cryptography," addressed this problem and offered
up a solution: public-key encryption.
Also known as asymmetric-key encryption, public-key encryption uses two different keys at once -- a combination of a private key and a public key. The private key is known only to your computer,
while the public key is given by your computer to any computer that wants to communicate securely with it. To decode a message encrypted with a public key, a computer must use the matching private key. Although the public key used for encryption is published and available to anyone, anyone who intercepts the message can't read it without the private key. The key pair is based on prime numbers (numbers whose only divisors are one and themselves, such as 2, 3, 5, 7, 11 and so on) of great length. This makes the system extremely secure: since there are infinitely many primes to choose from, there are practically unlimited possibilities for keys. One very popular
public-key encryption program is Pretty Good Privacy (PGP), which allows you to encrypt almost anything.
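To make the prime-based key pair concrete, here is a toy RSA sketch in Python. The primes 61 and 53, the exponent 17, and the message 65 are illustrative values only; real keys use primes hundreds of digits long together with padding schemes, so this is emphatically not secure.

```python
# Toy RSA key pair from two small primes (illustration only, NOT secure).
p, q = 61, 53                  # the two secret primes
n = p * q                      # public modulus shared by both keys
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

def encrypt(message, key=(e, n)):
    return pow(message, key[0], key[1])   # modular exponentiation

def decrypt(ciphertext, key=(d, n)):
    return pow(ciphertext, key[0], key[1])

c = encrypt(65)
print(decrypt(c))              # 65 -- the round trip recovers the message
```

Note that the three-argument `pow` with a negative exponent (the modular inverse) requires Python 3.8 or later.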
The sending computer encrypts the document with a symmetric key, then encrypts the symmetric key with the public key of the receiving computer. The receiving computer uses its private key to decode
the symmetric key. It then uses the symmetric key to decode the document.
To implement public-key encryption on a large scale, such as a secure Web server might need, requires a different approach. This is where digital certificates come in. A digital certificate is
basically a unique piece of code or a large number that says that the Web server is trusted by an independent source known as a certificate authority. The certificate authority acts as a middleman
that both computers trust. It confirms that each computer is in fact who it says it is, and then provides the public keys of each computer to the other.
How to invert three signals with only two NOT gates (and *no* XOR gates): Part 1 | EE Times
Good Grief Charlie Brown! Ever since I presented the "Black Box Brain Boggler" in my Logically Speaking column in the 8th May issue of EE Times, I've been running around in ever-decreasing circles
shouting "Don't Panic, Don't Panic!"
Why? I will explain, but first let me say to those who have been playing with this puzzle: (a) You can not use XOR gates and (b) there really is a solution. In fact, as "The Black Adder" would say
(for those au fait with British comedy): "This solution is so cunning that we could pin a tail on it and call it a weasel!"
Note: If you are a hardened logic designer, you might want to skip the next section and go directly to the Proposed solutions that didn't make the grade topic.
But suppose we could use XOR gates
OK, let's take this from the top. The idea is that we start with a black box (no, it doesn't matter whether it's a big box or a small one... look, if you're not going to take this seriously...).
There are three inputs to our black box called A, B, and C. Similarly, there are three outputs called not-A, not-B, and not-C. Each of these outputs is the logical inversion of its corresponding
input; that is, if A = 0, then not-A = 1, and vice versa. Obviously, if we had access to three NOT gates, then the solution would be simple (Fig 1).
1. Life would be simple if we had three NOT gates.
However, the rules of this "Brain Boggler" say that we have access to only two NOT gates. On the bright side, we also have access to as many AND and OR gates as we wish to use. The problem is that –
when I first posed this conundrum in my Logically Speaking column – I mistakenly said that in addition to the AND and OR gates, you could also use as many XOR gates as you wish.
Arrggghhhh! This prompted an immediate flood of responses. Half of these were from logic design experts who immediately recognized what they perceive as being a blatantly obvious solution. In fact,
some of these messages kicked-off along the lines of: "Is this the sad state of affairs to which EE Times has sunk..."
To these folks, I responded by saying: "Sorry, my bad, yes this does have an obvious solution, however I made a mistake, and you aren't allowed to use XOR gates." This was typically followed by a
pause for a few hours before I received a follow-on message saying: "Oh, well, that makes things a little harder doesn't it ... let me think about this some more." In fact, at this stage many readers
said: "I don't think this can be achieved; you can't create a negation from non-negating gates."
Having said this, I also received literally hundreds of emails from folks who aren't well-versed in logic theory, but who do like a good puzzle. Many of these guys and gals grimly fought their way to
a solution the old-fashioned way by drawing a truth table of the inputs and outputs, pondering things, and eventually realizing how XOR gates could be used to provide a solution. So, before we leap
into the more complex problem without XOR gates, let's first consider the XOR-based possibilities. And just to remind those who don't play with logic gates every day, the truth tables for the basic
two-input functions at our disposal (if we assume that we do have access to XOR gates) are shown in Fig 2
2. Truth tables for the logic gates at our disposal
(assuming we have access to XOR gates).
Observe the use of the symbols '&', '+', and '^' to represent logical AND, OR, and XOR functions; we'll be using these symbols in our equations later (also, we will use the symbol '!' to indicate the
NOT function). Note especially the XOR gate. As we see, if one of the inputs is held at a logic one value, then the output will be the inverse of the other input. This immediately allows us to come
up with a solution to our black box problem using two NOT gates and one XOR gate as shown in Fig 3.
3. Solution using two NOT gates and one XOR.
Alternatively, if we wished to make our diagram look more aesthetically pleasing, we could opt to use three XOR gates as shown in Fig 4.
4. Solution using three XOR gates.
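For the skeptical, the three-XOR solution of Fig 4 is easy to verify by brute force over all eight input combinations; the snippet below is a sketch in which 1 plays the constant logic-1 line:

```python
from itertools import product

# Exhaustive check of Fig 4: with one input of each XOR tied to logic 1,
# every output equals the logical NOT of the corresponding input.
for a, b, c in product((0, 1), repeat=3):
    outputs = (a ^ 1, b ^ 1, c ^ 1)       # the three XOR gates
    assert outputs == (1 - a, 1 - b, 1 - c)
print("all 8 input combinations check out")
```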
Ah, Ha! You say! But what if we aren't allowed to use a constant logic '1' value? Well, no problemo; if necessary, we can generate this value using one of our NOT gates with an OR gate as illustrated
in Fig 5. (Observe that the convention in these schematics is that if two wires cross but there is no dot shown at their intersection, then there is no electrical connection between these wires. By
comparison, a black dot at the intersection of two wires indicates that these wires are connected together.)
5. Generating a '1' using a NOT and an OR.
Note that we could have used any of our input signals to generate our logic 1 value; we just happened to pick input C because of the way this schematic was evolving. The trick here is that if input C
is 0, then the output from the NOT gate will be 1, so the output of the OR gate will be 1 (remember that the output from an OR gate is 1 if either of its inputs are 1). Alternatively, if input C is
1, then the output from the NOT gate will be 0; but this doesn't matter, because input C is connected directly to the other input to the OR gate, so once again the output from the OR will be a 1.
I bet you're thinking this is all really easy, aren't you? Well, before we plunge into the non-XOR-based solutions, let me give you a taste of the things I've been going through this last week. A
reader called Keith emailed me as follows:
"Mr. Maxfield, I just was reading my May 8th, EE Times and came across your puzzles. I left my brain working on the black box over lunch and here is what I came up with. The key is being able to
invert one of the inputs gives us two points that we know are logical 1 and 0 (we just don't know which is which, but it doesn't matter). We can then compare the other two inputs to those states.
Anyway, attached is a PDF of my solution to your inverting black box (using only one of the invertors)."
As you will see from Keith's PDF he's obviously put a lot of thought into this. The problem is that by the time this arrived, I'd waded through so many potential solutions that my eyes were watering
and my brain was leaking out of my ears. In fact, I still haven't decided exactly what this circuit does, how it does it, and – most importantly – if it actually solves the black box problem. Perhaps
you would like to ponder this circuit.
But we digress... As I mentioned at the beginning of this article, we are not allowed to use XOR gates as part of our solution. When I let folks know this, they responded with gusto and abandon, and
once again my "In Box" was flooded with emails. The problem now was working out which solutions worked and which fell at the first fence...
Tewksbury Calculus Tutor
Find a Tewksbury Calculus Tutor
...I hold a Massachusetts Math 9-12 Educator’s License, which includes Trigonometry. The [SAT] Mathematics Level 1 Subject Test assesses the knowledge you’ve gained from three years of
college-preparatory mathematics, including two years of algebra and one year of geometry. The [SAT] Mathematics L...
9 Subjects: including calculus, geometry, algebra 1, algebra 2
...In addition to my love for the subject, I also have a natural impulse to explain it to others. Over the past five years I have acquired hundreds of hours of experience tutoring a variety of
different levels of calculus. As a tutor I am highly adaptable and can accommodate students with busy sch...
14 Subjects: including calculus, geometry, GRE, algebra 1
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including calculus, English, reading, chemistry
I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7
years old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus.
14 Subjects: including calculus, geometry, trigonometry, SAT math
...Once we create a positive and respectful relationship, learning about roots of a quadratic equation or why apples fall from trees will become natural and easy. I live in North Andover and I am
willing to travel as far as Cambridge and Boston, which are within 25 miles radius from N. Andover.
10 Subjects: including calculus, geometry, algebra 1, algebra 2
Study of 'Shell and Tube' Heat Exchangers exhibiting different tube arrangements
Short description and remarks
• Engineering point of view :
'Shell and Tube'-Heat-Exchanger are widely spread in the field of chemical engineering. Still it is not clear how to arrange the tubes inside the appartus in order to get an optimal heat-transfer
behaviour. Beginning from a simple configuration with only one tube(Rad=1.5; Area=Pi*Rad^2). We studied cases for 2x2 through 5x5 tube arrangemnets for constant circumference as well as constant
• Mathematical point of view :
We studied a 2D cutplane of the apparatus described above: a fluid flow calculation in a square of 4 units with 25 circles, each of radius 0.3, placed equidistantly inside the square. The calculation starts from a state of rest. The flow is computed at a Reynolds number of Re = 1,000. This simulation is part of a Software-Praktikum at the University of Dortmund in the year 2004.
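As a sanity check on the geometry, the constant-area constraint fixes the per-tube radius of each bundle; the sketch below starts from the single tube of radius 1.5 and reproduces the radius 0.3 quoted for the 5x5 case.

```python
import math

# Per-tube radius for an n x n bundle whose total cross-sectional area
# equals that of one tube of radius 1.5 (the reference configuration).
base_area = math.pi * 1.5 ** 2
for n in range(2, 6):                                # the 2x2 ... 5x5 cases
    r = math.sqrt(base_area / (n * n * math.pi))     # simplifies to 1.5 / n
    assert abs(n * n * math.pi * r ** 2 - base_area) < 1e-9
    print(f"{n}x{n}: r = {r:.3f}")                   # 0.750, 0.500, 0.375, 0.300
```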
Research Interests of Frank Sottile
My original research interests were in the Schubert calculus, which is at the interface of algebraic geometry, representation theory and algebraic combinatorics. Over the years, this has evolved in
many directions, including real algebraic geometry, computational algebraic geometry, Hopf algebras in combinatorics, discrete and computational geometry, and tropical geometry. Currently (autumn
2013), I am particularly interested in numerical computation in algebraic geometry and in studying Galois groups in enumerative geometry. I have research projects in most of these areas, and have
projects involving over two dozen collaborators.
More information may be found browsing my web page; I particularly recommend some mathematical short stories or some more involved web pages that I have created in the course of my research.
You may also find the descriptions of my research in recent successful grant proposals worthwhile.
NSF individual research grant Applications and Combinatorics in Algebraic Geometry, 1 August 2010 -- 31 July 2013. $235,395. DMS-1001615.
NSF Individual Research grant, Numerical Real Algebraic Geometry, 1 October 2009 -- 31 July 2012. DMS-0915211.
Co-PI with Professor Luis García-Puente of Sam Houston State University on a Texas Advanced Research Projects grant 010366-0054-2007, Algebraic Geometry in Algebraic Statistics and Geometric
Modeling, 15 May 2008 -- 14 May 2010. Proposal.
NSF Individual Research grant, "Applicable Algebraic Geometry: Real Solutions, Applications, and Combinatorics" 1 September 2007 -- 31 August 2010. DMS-0701050 Proposal.
Last modified: Tue Jan 28 02:52:19 CST 2014
Summary: Introduction to Generalized Linear Modelling
P.M.E.Altham, Statistical Laboratory, University of Cambridge.
August 20, 2010
1 Introduction
Preliminary statement. When I first wrote my lecture notes for the Part II course, Sarah Shea-Simonds very kindly typed the core notes in TeX, and I added to them bit by bit, again in TeX. However, my style was still rather like a telegram, partly as I was trying to save on paper. Now that I am retired, I have time to retype the notes in LaTeX. I have tried to make the style rather more 'flowing', and have included various graphs, exercises, Tripos questions and solutions. This editing process is quite enjoyable but rather slow. I'll put the revisions on my webpage from time to time, and of course would appreciate comments and suggestions. Special thanks are due to Professor Yuri Suhov for his comments and suggestions.

There are already several excellent books on this topic. For example, McCullagh and Nelder (1989) have written the classic research monograph, and Aitkin et al. (1989) have an invaluable introduction to the pioneering software GLIM. Although I was very glad to learn a great deal by using GLIM, that particular software was superseded some years ago by excellent and powerful languages such as S-Plus and R.

Students will naturally gain a much deeper understanding of the theory by putting it into practice on real (if small) datasets. An excellent text book to help them do this in S-Plus and/or R is the one by Venables and Ripley (2002), particularly their Chapters 6 and 7.
The scaling properties of two-dimensional compressible magnetohydrodynamic turbulence
FIG. 1.
The temporal evolution of the space averaged kinetic energy (lower curve) and magnetic energy (upper curve) for runs 4 (panel a) and 3 (panel b) (see Table I). The solid part of the curves shows the
period over which steady-state averages are calculated. The horizontal lines show the time averages calculated over this period. The broken part of the curves shows the temporal evolution before
steady-state averages are calculated.
FIG. 2.
The temporal evolution of the alignment of the velocity and magnetic fields for runs 4 (panel a) and 3 (panel b) (see Table I). The horizontal lines show the time averages calculated over this
FIG. 3.
ESS scaling in the field, order 6 against order 3. The boxes surrounding the data points indicate the standard error present in the time averaging process. Marked on the plots are the approximate
values of and . Benzi et al. (Refs. 22,23) found that ESS scaling should extend down to approximately . The break in scaling occurs at approximately in these plots, showing that the viscous cutoff of
the cascade is higher than expected from Kolmogorov phenomenology (K41) and is closer to that of Iroshnikov and Kraichnan (IK).
FIG. 4.
ESS scaling of Elsässer field variables [compare Eq. (7) with and ] for run 4. We exclude since this necessarily gives perfect scaling. The boxes surrounding the data points indicate the standard
error present in the time averaging process. See column 4 of Table II for the corresponding inferrred values of .
FIG. 5.
Relative scaling exponents obtained from the driven 2D simulation of run 4 (error bars), the driven 2D simulation of Ref. 8 (upper broken line , lower broken line ), and the 2D decaying simulation of
Ref. 7 (solid line with square markers). Values reported from the 3D decaying simulation of Ref. 10 (solid line) and the MHD intermittency model of Politano and Pouquet (Ref. 12) (dotted line) are
shown for comparison.
FIG. 6.
Time-averaged energy spectra [compare Eq. (3)] for high-resolution runs 3 (lower) and 4 (upper). The vertical axis is normalized by (a) and (b). The bold lines indicate the power-law index as
estimated by a power-law fit over the inertial range. It can be seen that the power law has a negative index whose magnitude is less than the 5/3 predicted by K41 but is greater than the 3/2 predicted by IK.
The scaling range is extended to higher wavenumbers for run 4 compared to run 3, since run 4 is performed at a higher Reynolds number.
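The power-law fit mentioned in the caption is an ordinary least-squares straight line in log-log coordinates. The snippet below demonstrates the procedure on a synthetic spectrum with an exact K41 index of -5/3; the wavenumbers and spectrum values are made-up illustration data, not the simulation's.

```python
import math

# Estimate the index alpha in E(k) ~ k**alpha by regressing log E on log k,
# the standard way a spectral slope is measured over an inertial range.
ks = [2 ** i for i in range(1, 11)]          # synthetic wavenumbers
E = [k ** (-5.0 / 3.0) for k in ks]          # spectrum with exact K41 slope

xs = [math.log(k) for k in ks]
ys = [math.log(e) for e in E]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 4))                       # -1.6667, i.e. -5/3 recovered
```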
FIG. 7.
Scaling of the ESS-consistent refined similarity hypothesis for the Kolmogorov case [compare Eq. (10a)] (top) and the IK case [compare Eq. (10b)] (bottom) for run 4, order . Here the scaling range
extends to below as found in Ref. 23. However, the ideal gradient of unity is not recovered, see Table III. The boxes surrounding the data points indicate the standard error present in the time
averaging process.
FIG. 8.
Scatter plots of normalized energy spectra for decaying turbulence runs for the isothermal high-order code (a) and the Lagrangian remap code that solves an equation of state (b). The vertical axis is
normalized by .
Table I.
Parameters of driven turbulence runs. is the number of the grid points, is the time period over which the steady state is tracked in terms of the nonlinear turnover time, is the ratio of the
simulation box length to the Kolmogorov dissipation scale, and is the kinetic Reynolds number. Statistics are calculated from snapshots taken approximately every two nonlinear turnover times. The
steady-state kinetic energy is approximately twice that of the magnetic energy. The viscosity is set equal to the magnetic diffusivity making the magnetic and kinetic Reynolds numbers equal and the
magnetic Prandtl number equal to unity. The rms sonic Mach number is and the steady-state rms fluctuations in density are for all runs.
Table II.
Ratios of scaling exponents calculated from the Elsässer field variables for different runs (see Table I). Errors are an estimate of the possible range of straight line fits that can be drawn on the
ESS plots (see Fig. 4).
Table III.
Test of the refined similarity hypothesis as modified for consistency with extended self-similarity. K41 or IK refers to Eqs. (10a) and (10b), respectively. The symbols or represent scaling derived
from the and Elsässer field variables, respectively. Exact agreement would be indicated by a value of 1.0 across all columns.
Pls help - math of investment: Simple interest/discount & proceeds
January 9th 2013, 10:00 PM
Pls help - math of investment: Simple interest/discount & proceeds
Your Open Question:
1. Levy availed of a P67k loan at the bank at a 17% discount rate for 13 months. Find the bank discount of the loan and the proceeds.
2. The WSS is currently charging a 15% discount rate. Find the proceeds on a P17800 note for 60 days.
3. A discount of P300 is charged for a 72-day loan of August 1. If the discount rate is 16500, how much must be repaid on the due date?
4. Isabel signed a P9700 note that her bank discounted. The note will mature in 108 days. The proceeds were P9350.80, just enough to pay an invoice due after 108 days. What was the discount rate?
6. A bank charges 15% interest in advance as simple discount. Find the amount of a 10-month note which Pangilinan must sign in order to receive P39500 from the bank.
7. Rodel Morada goes to a bank and borrows P100k with his car to be used as collateral. The bank charges a 15% discount rate. Find the proceeds if the note is for 180 days.
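All seven exercises turn on the same pair of formulas: bank discount D = F * d * t (with the term t in years) and proceeds P = F - D. A quick sketch applied to problem 1, using only the figures stated there:

```python
# Simple bank discount: D = F * d * t with t in years; proceeds = F - D.
def bank_discount(face, rate, years):
    return face * rate * years

def proceeds(face, rate, years):
    return face - bank_discount(face, rate, years)

# Problem 1: a P67,000 loan at a 17% discount rate for 13 months.
D = bank_discount(67_000, 0.17, 13 / 12)
P = proceeds(67_000, 0.17, 13 / 12)
print(round(D, 2), round(P, 2))   # 12339.17 54660.83
```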
Geometry Ch. 5
If your printer has a special "duplex" option, you can use it to automatically print double-sided. However for most printers, you need to:
1. Print the "odd-numbered" pages first
2. Feed the printed pages back into the printer
3. Print the "even-numbered" pages
If your printer prints pages face up, you may need to tell your printer to reverse the order when printing the even-numbered pages.
42 pence = how many pound
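The conversion is direct, since 100 pence make one pound sterling:

```python
# 100 pence = 1 pound, so divide by 100.
pence = 42
pounds = pence / 100
print(pounds)   # 0.42
```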
Sensation and Perception - Syllabus - Laura Snodgrass, Ph.D.
Text: Sensation and Perception: An Integrated Approach 4th Ed. by H.R. Shiffman.
August 27 & 29 Introduction pp. 2 - 12
August 31 & September 3 Psychophysics Ch. 2
September 5 - 7 Sensory Coding pp. 12-21; 71-86; 112-114; 142-148
September 10 Review and discussion of papers
September 12 Brief Exam 1 and work on papers Study Guide for Exam 1
September 14 & 17 Visual System Anatomy pp. 52 - 70
September 19 Paper topic due for approval
September 19 - 24 Basic Visual Functions Lab pp. 89 - 102
September 26 Journal article due
September 26 - October 1 Color Ch. 5
October 3 Review
October 5 Exam 2 Study Guide for Exam 2
October 10 Introduction and Methods Section due
October 10 - 17 Distance Lab Ch. 9
October 19 Ethics Proposals
October 19th - November 2 Shape Ch 7; pp.103-107; 250 -261; 149-166
November 5th Exam 3 Study Guide for Exam 3
November 7 - 12 Audition Ch. 12,13 & 14
November 14 & 16 Touch Lab Ch. 16
November 19 Rough Drafts Due
November 19 - 26 Taste & Smell Lab Ch. 17 & 18
November 28 Lab
November 30 -December 5 Learning and Development Ch. 11; pp. 187 - 192
Final Exam Review
(either last day of class or scheduled)
December 10 Final Papers due
December 15th Final Exam - 9:00 AM Study Guide for Final Exam
Grading: Exam 1 is worth 50 pts. Exams 2, 3, and the final are each worth 75 pts. The Introduction and Methods section of the paper is worth 20 points and the rewritten final paper is worth 50 points. The two lab write-ups are each worth 5 points, for a total of ten points. You may earn extra credit by participating in APPROVED experiments. You may participate in up to three experiments; each one is worth three points, for a total of nine points.

There are a total of 355 points in the class (not counting the extra credit). I grade on a straight percentage. For example, 90% of 355 is 319.5 points; this would give you an A-.
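The point total and the worked 90% example above can be checked in two lines (the letter-grade cutoffs themselves are the instructor's, not computed here):

```python
# Points: exam 1 (50), exams 2, 3 and the final (75 each), intro/methods (20),
# final paper (50), two lab write-ups (10) -- extra credit excluded.
total = 50 + 75 + 75 + 75 + 20 + 50 + 10
print(total)           # 355
print(0.90 * total)    # 319.5, the A- example from the text
```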
Exams: Please note that the material for the exams is drawn approximately 70% from the lecture and 30% from the book. Thus you must have GOOD lecture notes.
Lab write ups. We will be doing several in-class labs. For each of these labs I will hand out some thought questions. You must answer two of the set of thought questions. You may turn in as many of
these as you like and I will count the two highest grades, but you may not use them for extra credit.
Paper: The paper is based on a small experiment. Experimenters can work together in groups of up to 4 people. HOWEVER, EACH PERSON MUST WRITE HIS OR HER OWN PAPER AND PARTICIPATE IN RUNNING THE
EXPERIMENT!! The papers in each group must be substantially different from each other. Group members may NOT share graphs, bibliography, methods sections or any other aspect of the writing of the
paper. There are previous papers on reserve in the library to give you a sense of what the paper should look like. Topics MUST be approved in advance! You may (and I suggest you do) turn in a TYPED
rough draft in advance of the deadline and I will return it with comments. Be VERY careful about quoting sources!
Students who do not turn in the Introduction and Methods section on time lose 5 pts a day from the paper total; after 5 days, a total of 25 pts will have been subtracted from the paper grade.
LATE FINAL PAPERS LOSE 5 POINTS A DAY!
Trim the chart to no less than 10 NM from all course lines.
Write the name of the route on the front of the chart in a conspicuous place. Also, write the ESA in red on the front of the chart.
On the back of the chart, record your name, chart construction date, date of the latest CHUM, and chart number(s) and chart edition. Each time the CHUM is updated, the current CHUM date will be added. As a technique, tape the chart legend on the back of the chart.
Stick diagrams. The stick diagram is a one-page information sheet that displays all pertinent data needed to fly the LL without reference to the low-level chart. It is used by the pilot flying the plane to back up the pilot navigating the route. Incomplete stick diagrams are located in the TAC Flimsy. The stick diagrams must be completed prior to brief time.
On the day of the flight, get 500 foot altitude winds from the weather forecaster (1500 foot
winds for night flight). Get wind information at NGP and from at least one other airfield along
the route of flight.
Using the CR-2, or "whiz wheel", calculate the leg time based on 180 knots groundspeed,
the preflight winds, and the distance for each leg of the route. Determine drift corrections
necessary for each leg of the route and calculate the Magnetic Heading (MH). Write this
information beside the printed route information on each leg of the stick diagram.
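The CR-2 "whiz wheel" leg-time computation above boils down to time = distance ÷ groundspeed. A sketch of the same arithmetic in code — the 21 NM distance is a made-up example; only the 180-knot groundspeed comes from the text:

```python
# Leg-time computation as done on the CR-2: time = distance / groundspeed.
# The 21 NM sample distance is hypothetical; 180 kts comes from the text.

def leg_time_min(distance_nm, groundspeed_kts=180):
    """Leg time in minutes at a given groundspeed."""
    return distance_nm * 60 / groundspeed_kts

print(leg_time_min(21))  # → 7.0 (at 180 kts you cover 3 NM per minute)
```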
For your continuation fuels, start at NGP and work backwards through the entire route. We
need to land with 530 lbs plus 200 lbs for an alternate. Estimating 125 lbs for the approach from
Shamrock, we must have 855 lbs over Shamrock (530+200+125). Now calculate the leg time
from Shamrock to the point immediately before Shamrock. For T-44s, multiply this time by 10
(burn rate is 10 lbs per minute). For TC-12s, multiply this time by 12 (burn rate is 12 lbs per
minute). Keep up this entire process until you get to the entry point for the first route. These
fuel calculations are placed next to each point in the route timing box on the lower left side of
the flimsy. Extra spaces are available to account for transition between the routes and for the
recovery to NAS CC.
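The backward continuation-fuel computation described above can be sketched in a few lines. Only the 530/200/125 lb figures and the 10 and 12 lb/min burn rates come from the text; the function name and the sample leg times are my own:

```python
# Hedged sketch of the backward continuation-fuel bookkeeping. Start at the
# landing requirement over Shamrock and walk the route in reverse, adding
# each leg's fuel burn. Sample leg times are hypothetical.

BURN_LB_PER_MIN = {"T-44": 10, "TC-12": 12}

def continuation_fuels(leg_times_min, aircraft="T-44",
                       landing=530, alternate=200, approach=125):
    """Work backwards from NGP: fuel required over each route point."""
    required = landing + alternate + approach      # 855 lbs over Shamrock
    fuels = [required]
    rate = BURN_LB_PER_MIN[aircraft]
    for t in reversed(leg_times_min):              # walk the route in reverse
        required += t * rate                       # add that leg's burn
        fuels.append(required)
    return list(reversed(fuels))                   # entry point first

print(continuation_fuels([7.0, 6.5]))  # → [990.0, 920.0, 855]
```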
To calculate times at each point, start with your takeoff time. In the sample flimsy, the
takeoff time is assumed to be 1200. It takes about 8 minutes to get to the entry point (point B),
so the time over that point would be 1208.00. You must now use your CR-2 to get each leg time
based on 180 knots groundspeed. Take this total and add it to 1208.0. This no wind example
has us arriving at the DZ at 1300.1 (1208.0+52.1=1300.1). Now subtract this ".1" from the entry
point and takeoff time. This has us taking off at 1159.9 and getting to the entry point at 1207.9.
Now go to the route timing block and locate the time it takes to get from B to I (7.0). Add this to
the B time of 1207.9 to get 1214.9. Put this in the block next to point I. Continue this process
for the entire route. We now have to calculate the entry point of the next route. In the recovery
block of the stick diagram, calculate the times to arrive at each point after the DZ/LZ (note that
the third point is the entry point into the LOU ONE route). Once you calculate your time at
point L, transfer this to the LOU ONE stick diagram and do the same process for that route. | {"url":"http://navyflightmanuals.tpub.com/P-557/P-5570096.htm","timestamp":"2014-04-17T00:48:21Z","content_type":null,"content_length":"43030","record_id":"<urn:uuid:9a42f5fc-faca-4587-952b-1dacabf15dbb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
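The point-timing arithmetic above is minute addition with an hour carry, in the HHMM.M clock notation of the sample flimsy. A sketch — the helper name is my own; the 1200 takeoff, 8.0-minute, 52.1-minute, and 0.1 figures come from the text:

```python
# Point-timing arithmetic in the flimsy's HHMM.M notation. The clock_add
# helper is my own naming; it splits hours/minutes, adds, and carries.

def clock_add(hhmm, minutes):
    """Add minutes (tenths allowed) to an HHMM.M clock time, carrying hours."""
    hours, mins = divmod(hhmm, 100)
    total = hours * 60 + mins + minutes
    h, m = divmod(total, 60)
    return round(h * 100 + m, 1)

takeoff = 1200.0
entry = clock_add(takeoff, 8.0)              # time over point B
dz = clock_add(entry, 52.1)                  # total no-wind route time to DZ
adjusted_takeoff = clock_add(takeoff, -0.1)  # back off the extra 0.1
print(entry, dz, adjusted_takeoff)           # → 1208.0 1300.1 1159.9
```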
Easy Mode Math
Most people recognize that the symbol ∞ means “infinity” and take it to represent a really big number. But just how big is it? What does it actually mean? Can you do things to infinity?
In mathematics, infinity doesn’t actually represent a number in the typical way we think about numbers. With typical numbers, we can add, subtract, multiply, and do a whole range of calculations to
them, and typically get another answer. But what happens when we do things forever? What does, say, 1 + 1 + 1 + … end up adding to? | {"url":"http://easymodemath.tumblr.com/","timestamp":"2014-04-19T09:41:23Z","content_type":null,"content_length":"35865","record_id":"<urn:uuid:691cea3e-126a-4231-99e3-578e1343ea44>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
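One concrete way to see what 1 + 1 + 1 + … "adds to" (my own illustration, not from the post): look at the partial sums, which grow without bound — which is why this sum is said to diverge to infinity rather than equal any ordinary number.

```python
# The partial sums of 1 + 1 + 1 + ... never settle: each step adds 1, so the
# running total exceeds any fixed number eventually. That is divergence.
partial_sums = []
total = 0
for _ in range(5):
    total += 1
    partial_sums.append(total)
print(partial_sums)  # → [1, 2, 3, 4, 5]
```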
"Structured vector bundles define differential K-theory," (2008) arXiv:0810.4935
"... Following Hopkins and Singer, we give a definition for the differential equivariant K-theory of a smooth manifold acted upon by a finite group. The ring structure for differential equivariant
K-theory is developed explicitly. We also construct a pushforward map which parallels the topological pushfo ..."
Cited by 1 (0 self)
Add to MetaCart
Following Hopkins and Singer, we give a definition for the differential equivariant K-theory of a smooth manifold acted upon by a finite group. The ring structure for differential equivariant
K-theory is developed explicitly. We also construct a pushforward map which parallels the topological pushforward in equivariant K-theory. An analytic formula for the pushforward to the differential
equivariant K-theory of a point is conjectured, and proved in the boundary case, in the case of a free action, and for ordinary differential K-theory in general. The latter proof is due to K.
Klonoff. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=14747308","timestamp":"2014-04-18T20:56:39Z","content_type":null,"content_length":"12036","record_id":"<urn:uuid:162b001f-dfff-466f-badf-fa941005c62a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to invert three signals with only two NOT gates (and *no* XOR gates): Part 1 | EE Times
Design How-To
How to invert three signals with only two NOT gates (and *no* XOR gates): Part 1
Good Grief Charlie Brown! Ever since I presented the "Black Box Brain Boggler" in my Logically Speaking column in the 8th May issue of EE Times, I've been running around in ever-decreasing circles
shouting "Don't Panic, Don't Panic!"
Why? I will explain, but first let me say to those who have been playing with this puzzle: (a) you cannot use XOR gates and (b) there really is a solution. In fact, as "The Black Adder" would say
(for those au fait with British comedy): "This solution is so cunning that we could pin a tail on it and call it a weasel!"
Note: If you are a hardened logic designer, you might want to skip the next section and go directly to the Proposed solutions that didn't make the grade topic.
But suppose we could use XOR gates
OK, let's take this from the top. The idea is that we start with a black box (no, it doesn't matter whether it's a big box or a small one... look, if you're not going to take this seriously...).
There are three inputs to our black box called A, B, and C. Similarly, there are three outputs called not-A, not-B, and not-C. Each of these outputs is the logical inversion of its corresponding
input; that is, if A = 0, then not-A = 1, and vice versa. Obviously, if we had access to three NOT gates, then the solution would be simple (Fig 1).
1. Life would be simple if we had three NOT gates.
However, the rules of this "Brain Boggler" say that we have access to only two NOT gates. On the bright side, we also have access to as many AND and OR gates as we wish to use. The problem is that –
when I first posed this conundrum in my Logically Speaking column – I mistakenly said that in addition to the AND and OR gates, you could also use as many XOR gates as you wish.
Arrggghhhh! This prompted an immediate flood of responses. Half of these were from logic design experts who immediately recognized what they perceive as being a blatantly obvious solution. In fact,
some of these messages kicked-off along the lines of: "Is this the sad state of affairs to which EE Times has sunk..."
To these folks, I responded by saying: "Sorry, my bad, yes this does have an obvious solution, however I made a mistake, and you aren't allowed to use XOR gates." This was typically followed by a
pause for a few hours before I received a follow-on message saying: "Oh, well, that makes things a little harder doesn't it ... let me think about this some more." In fact, at this stage many readers
said: "I don't think this can be achieved; you can't create a negation from non-negating gates."
Having said this, I also received literally hundreds of emails from folks who aren't well-versed in logic theory, but who do like a good puzzle. Many of these guys and gals grimly fought their way to
a solution the old-fashioned way by drawing a truth table of the inputs and outputs, pondering things, and eventually realizing how XOR gates could be used to provide a solution. So, before we leap
into the more complex problem without XOR gates, let's first consider the XOR-based possibilities. And just to remind those who don't play with logic gates every day, the truth tables for the basic
two-input functions at our disposal (if we assume that we do have access to XOR gates) are shown in Fig 2.
2. Truth tables for the logic gates at our disposal
(assuming we have access to XOR gates).
Observe the use of the symbols '&', '+', and '^' to represent logical AND, OR, and XOR functions; we'll be using these symbols in our equations later (also, we will use the symbol '!' to indicate the
NOT function). Note especially the XOR gate. As we see, if one of the inputs is held at a logic one value, then the output will be the inverse of the other input. This immediately allows us to come
up with a solution to our black box problem using two NOT gates and one XOR gate as shown in Fig 3.
3. Solution using two NOT gates and one XOR.
Alternatively, if we wished to make our diagram look more aesthetically pleasing, we could opt to use three XOR gates as shown in Fig 4.
4. Solution using three XOR gates.
Ah, Ha! You say! But what if we aren't allowed to use a constant logic '1' value? Well, no problemo; if necessary, we can generate this value using one of our NOT gates with an OR gate as illustrated
in Fig 5. (Observe that the convention in these schematics is that if two wires cross but there is no dot shown at their intersection, then there is no electrical connection between these wires. By
comparison, a black dot at the intersection of two wires indicates that these wires are connected together.)
5. Generating a '1' using a NOT and an OR.
Note that we could have used any of our input signals to generate our logic 1 value; we just happened to pick input C because of the way this schematic was evolving. The trick here is that if input C
is 0, then the output from the NOT gate will be 1, so the output of the OR gate will be 1 (remember that the output from an OR gate is 1 if either of its inputs are 1). Alternatively, if input C is
1, then the output from the NOT gate will be 0; but this doesn't matter, because input C is connected directly to the other input to the OR gate, so once again the output from the OR will be a 1.
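Two of the claims above are easy to brute-force: that an XOR gate with one input tied to logic 1 acts as an inverter, and that the Fig 5 arrangement of one NOT gate and one OR gate yields a constant logic 1 for either value of C. A quick check — the gate helpers are my own; plain ints 0/1 stand in for logic levels:

```python
# Brute-force verification of the two gate-level claims in the text.
# NOT/OR/XOR are modeled on 0/1 integers.

def NOT(a):
    return 1 - a

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

for x in (0, 1):
    assert XOR(x, 1) == NOT(x)   # claim 1: XOR with one input held at 1 inverts
for c in (0, 1):
    assert OR(NOT(c), c) == 1    # claim 2 (Fig 5): constant 1 from NOT + OR
print("both claims verified")
```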
I bet you're thinking this is all really easy, aren't you? Well, before we plunge into the non-XOR-based solutions, let me give you a taste of the things I've been going through this last week. A
reader called Keith emailed me as follows:
"Mr. Maxfield, I just was reading my May 8th, EE Times and came across your puzzles. I left my brain working on the black box over lunch and here is what I came up with. The key is being able to
invert one of the inputs gives us two points that we know are logical 1 and 0 (we just don't know which is which, but it doesn't matter). We can then compare the other two inputs to those states.
Anyway, attached is a PDF of my solution to your inverting black box (using only one of the invertors)."
As you will see from Keith's PDF he's obviously put a lot of thought into this. The problem is that by the time this arrived, I'd waded through so many potential solutions that my eyes were watering
and my brain was leaking out of my ears. In fact, I still haven't decided exactly what this circuit does, how it does it, and – most importantly – if it actually solves the black box problem. Perhaps
you would like to ponder this circuit...
But we digress... As I mentioned at the beginning of this article, we are not allowed to use XOR gates as part of our solution. When I let folks know this, they responded with gusto and abandon, and
once again my "In Box" was flooded with emails. The problem now was working out which solutions worked and which fell at the first fence... | {"url":"http://www.eetimes.com/document.asp?doc_id=1274508","timestamp":"2014-04-16T07:30:42Z","content_type":null,"content_length":"138635","record_id":"<urn:uuid:88ca132c-a161-41c0-95e6-39443cb699db>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving a linear system by substitution
re: hi, I am not sure what substitution means exactly, but here is my guess. You've got your first equation ax + by = c (general case). It can be rewritten as by = c - ax, so y = (c - ax)/b (you should be familiar with that sort of manipulation; check that b is nonzero). Then in your second equation you replace y by the expression found above. You should get an equation of the form ax = b (when compacted), so you know how to find x (if a is zero then b must be zero, or there is no solution). Then you find y using the expression you were... using! No need to say that you must check your answer, as you always should in math; twice or more is better. Bye!
josh_c: Substitution here means you express the value of one variable in terms of the other variable from one of the two equations. Say, get x in terms of y from the 1st equation. Then substitute that into the 2nd equation, so you will have an equation involving only one variable.
From the 1st equation, we get the value of x in terms of y:
x - 2y = 4 ------(1)
x = 2y + 4 ------(1a)
Then we substitute that into the 2nd equation:
2x - 3y = 7 ------(2)
2(2y + 4) - 3y = 7
4y + 8 - 3y = 7
4y - 3y = 7 - 8
y = -1 ---------***
Substitute that into, say, (1a) ----[you can substitute it into any of the (1), (1a), or (2) equations above]:
x = 2(-1) + 4
x = -2 + 4
x = 2 ---------***
Check those findings against the original equations (1) and (2):
x - 2y = 4 ------(1)
2 - 2(-1) =? 4
2 + 2 =? 4
4 =? 4 Yes, so OK.
2x - 3y = 7 ------(2)
2(2) - 3(-1) =? 7
4 + 3 =? 7
7 =? 7 Yes, so OK.
Therefore, x = 2 and y = -1. ---------answer. | {"url":"http://mathhelpforum.com/algebra/1736-solving-linear-system-substitution.html","timestamp":"2014-04-19T02:13:41Z","content_type":null,"content_length":"39176","record_id":"<urn:uuid:becb49fb-1bc6-4958-8bd3-3bda86aeea4d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
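The substitution recipe in the replies above can be cross-checked with a short script. This is a sketch — the solver function name is my own, and exact rational arithmetic is used so the check is unambiguous:

```python
# Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by substitution, exactly.
from fractions import Fraction

def solve_by_substitution(a1, b1, c1, a2, b2, c2):
    """Requires a1 != 0 and a non-degenerate system."""
    # From the 1st equation: x = (c1 - b1*y) / a1.
    # Substituting into the 2nd: a2*(c1 - b1*y)/a1 + b2*y = c2,
    # which rearranges to y*(b2*a1 - a2*b1) = c2*a1 - a2*c1.
    y = Fraction(c2 * a1 - a2 * c1, b2 * a1 - a2 * b1)
    x = (Fraction(c1) - b1 * y) / a1
    return x, y

# The worked example from the thread: x - 2y = 4 and 2x - 3y = 7.
print(solve_by_substitution(1, -2, 4, 2, -3, 7))  # → (Fraction(2, 1), Fraction(-1, 1))
```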
How to calculate standard deviation
The Math Store
How to calculate standard deviation on TI graphing calculators
Online graphing calculator tutorial on Standard Deviation
Many of us go out to the store and buy a graphing calculator, and then when we get home we wonder: what next? The purpose of these tutorials is to answer that question!
All modern TI graphing calculators (that I know about) allow you to calculate various types of statistics. The TI-84 Plus, in particular, has a powerful suite of statistics tools.
Demonstration problem: Calculate the standard deviation of the following set of data { 50, 20, 33, 40, 55 }
Part I: Enter the data into L1
Step 1) Press "Stat"
Step 2) Hit "enter" button and you should see the three lists on the right. In the next step we will enter all of the scores into L1
Step 4) Press "50" then "enter"
Step 5) Enter the rest of the data into the calculator by pressing each of the numbers then 'enter' :20, 33, 40, 55
Step 6) Return to the main calculator screen by pressing "2nd" then "quit"
Step 7) Press "Stat"
Step 8) Scroll right to highlight "Calc"
Step 9) Hit "enter"
Step 10) hit "2nd" then "L1"
Step 11) Press "enter"
Your good old TI graphing calculator came to the rescue. All of the information that you see in the picture above is quite useful.
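If you want to sanity-check the calculator, the same statistics take a few lines in Python. Mapping pstdev/stdev to the TI's σx and Sx labels is my reading of the 1-Var Stats screen, not something stated in the tutorial:

```python
# Cross-check of the demonstration data { 50, 20, 33, 40, 55 } in plain Python.
import statistics

data = [50, 20, 33, 40, 55]
print(statistics.mean(data))    # mean of the data set: 39.6
print(statistics.pstdev(data))  # population standard deviation (sigma-x on the TI)
print(statistics.stdev(data))   # sample standard deviation (Sx on the TI)
```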
| {"url":"http://mathstore.net/graphing-calculators/TI-calculators/how-to-calculate-standard-deviation.php","timestamp":"2014-04-18T03:24:18Z","content_type":null,"content_length":"13413","record_id":"<urn:uuid:e26d82f5-cbee-48cf-af12-0fb17286c2af>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve quadratic equation by completing the square. Check both roots. problem below
\[2x^2-12x +16=0\]
please explain each step I am having trouble with it
2(x^2-6x+8)=0 2[(x-4)(x-2)]=0
please do explain why you do each step
1st step- Factor out 2. 2nd step- What two numbers add to the middle coefficient (-6) and multiply to the constant term (8)?
Those 2 values are then put into the brackets as you see them.
And here, the positive and negative signs are reversed because when you solve each factor for x individually, x-4=0 gives x=4. Ok?
ok but there is supposed to be two answers
4 is one of them
eh, you might be making this more difficult than it actually is
I did what the question asked. Solve quadratic equation by completing the square.
ay. missed that. thx.
You made a small mistake. The -8 should also be multiplied by -2.
Just 2 rather.
Leaving you with\[2(x^2-6x+9-9+8)=2((x-3)^2-1)=2(x-3)^2-2\]
Set equal to 0, and get \[2(x-3)^2-2=0\implies(x-3)^2=1\]Take the root, and solve. \[x-3=\pm1\implies x=4\quad\text{or}\quad x=2\]
ok george you are right now can you explain each step
The hardest part is the first couple of steps. You start with \[2x^2-12x+16=2(x^2-6x+8)\]The next part is where you complete the square. You take half of \(-6\), square it, and then add/subtract it. So you get\[2(x^2-6x+8)=2\left(x^2-6x+\left(\frac{-6}{2}\right)^2-\left(\frac{-6}{2}\right)^2+8\right)=2(x^2-6x+9-9+8)\]
Now you simplify as I described before. It's important that you simplify \[x^2-6x+9-9+8\]as \((x-3)^2-1\). That's the whole point of completing the square. You add/subtract just the right amount
so you can have the square of a binomial. From there, you can solve for that square easily, and then just take a square root.
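The algebra above is easy to verify numerically — the completed-square form matches the original quadratic everywhere, and the two roots really are 2 and 4:

```python
# Numeric check of the completing-the-square derivation from the thread.
for x in range(-10, 11):
    # 2x^2 - 12x + 16 must equal 2(x-3)^2 - 2 for every x
    assert 2*x**2 - 12*x + 16 == 2*(x - 3)**2 - 2
for r in (2, 4):
    # both roots make the original quadratic vanish
    assert 2*r**2 - 12*r + 16 == 0
print("completed-square form and roots check out")
```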
Did this make sense?
yes thank you but please view my next question
| {"url":"http://openstudy.com/updates/4fd69d24e4b04bec7f17b527","timestamp":"2014-04-17T07:15:30Z","content_type":null,"content_length":"162787","record_id":"<urn:uuid:c446e403-f71a-40f1-9234-f3111bfc010d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Two rectangles are similar. Rectangle ABCD has a length of 5 in and a width of 10 in. Rectangle ABCD is dilated by a scale factor of 4 to form rectangle FGHJ. Answer these questions using the
information provided: 1. How is the area affected by the dilation? 2. How is the perimeter affected by the dilation?
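This capture ends before any answers were posted, so as a hedged sketch: the standard dilation facts say a scale factor k multiplies the perimeter by k and the area by k². A quick numeric check with the given 5 in × 10 in rectangle and k = 4:

```python
# Dilation check: rectangle ABCD (5 in x 10 in) scaled by k = 4 to FGHJ.
k = 4
length, width = 5, 10
area, perimeter = length * width, 2 * (length + width)
area_dilated = (k * length) * (k * width)
perimeter_dilated = 2 * (k * length + k * width)
assert area_dilated == k**2 * area          # area grows 16-fold
assert perimeter_dilated == k * perimeter   # perimeter grows 4-fold
print(area, area_dilated, perimeter, perimeter_dilated)  # → 50 800 30 120
```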
| {"url":"http://openstudy.com/updates/511edb26e4b03d9dd0c4d76b","timestamp":"2014-04-17T12:31:23Z","content_type":null,"content_length":"136283","record_id":"<urn:uuid:a7fdf564-bfd4-48f8-9581-f35460fa1698>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
4pPA7. Relation for calculating the heat capacity at constant volume for multicomponent systems by using the values of sound velocity for the pure components and binary systems.
Session: Thursday Afternoon, December 5
Time: 3:30
Author: Dmitrii A. Denisov
Location: Dept. of the Colloid Chemistry, Mendeleev Chemical Technol. Univ. of Russia, Miusskaya pl. 9, Moscow 125047, Russia
It is known that heat capacities at constant volume C_V can be calculated by using the relation connecting C_V with values of sound velocity for these substances [A. J. Matheson, Molecular Acoustics (Wiley, London, 1971)]. The relation for calculating C_V for mixtures following the lattice model of regular mixtures in the zero quasichemical approximation [I. Prigogine, The Molecular Theory of Solutions (North Holland, Amsterdam; Interscience, New York, 1957)] on the basis of some data including sound velocity referring to the pure components has been derived. The analogous problem for mixed solutions, namely for solutions containing several solutes, has been solved. The relation for calculating C_V for a mixed solution consisting of the solvent and one of the solutes and having the same solvent chemical potential value (which the considered mixed solution has) follows the same conditions.
ASA 132nd meeting - Hawaii, December 1996 | {"url":"http://www.auditory.org/asamtgs/asa96haw/4pPA/4pPA7.html","timestamp":"2014-04-17T09:43:54Z","content_type":null,"content_length":"1954","record_id":"<urn:uuid:9e7ce361-c9c2-4bbc-ae40-9517edce8c48>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Kernel Two-Sample Test; From Neurons to Circuits: Linear Estimation of Local Field Potentials; Inferring Spike Trains From Local Field Potentials; Phase-of-Firing Coding of Natural Visual Stimuli in Primary Visual Cortex; Integrating Structured Biological data by Kernel Maximum Mean Discrepancy; A functional hypothesis for adult hippocampal neurogenesis: Avoidance of catastrophic interference in the dentate gyrus; On the Kinetic Design of Transcription; A Kernel Method for the Two-Sample-Problem; A Kernel Approach to Comparing Distributions; A Kernel Method for the Two-sample Problem; Predicting local field potentials from spike trains; What is the functional role of adult neurogenesis in the hippocampus?; Analysis of neural signals: Interdependence, information coding, and relation to network models
This file was created by the Typo3 extension sevenpack version 0.7.14 --- Timezone: CEST Creation date: 2014-04-18 Creation time: 23-54-16 --- Number of references 13 article Scholkopf2012 Journal of
Machine Learning Research 2012 3 13 723−773 We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from
different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean
discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed
in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are
obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian
marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests. http://
www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Schölkopf Research Group Borgwardt Department Logothetis http://jmlr.csail.mit.edu/papers/v13/
gretton12a.html arthurAGretton karstenKBorgwardt raschMRasch bsBSchölkopf smolaASmola article RaschLK2009 Journal of Neuroscience 2009 11 29 44 13785-13796 Extracellular physiological recordings are
typically separated into two frequency bands: local field potentials (LFPs) (a circuit property) and spiking multiunit activity (MUA). Recently, there has been increased interest in LFPs because of
their correlation with functional magnetic resonance imaging blood oxygenation level-dependent measurements and the possibility of studying local processing and neuronal synchrony. To further
understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same electrode or nearby electrodes. We used “signal
estimation theory” to show that a linear filter operation on the activity of one or a few neurons can explain a significant fraction of the LFP time course in the macaque monkey primary visual
cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The
filter was similar across different neocortical regions and behavioral conditions, including spontaneous activity and visual stimulation. The estimations had a spatial resolution of ∼1 mm and a
temporal resolution of ∼200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than the
negative time lags. Additionally, we showed that spikes occurring within ∼10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In summary, our results
suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons. http://www.kyb.tuebingen.mpg.de http://
www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Logothetis http://www.jneurosci.org/content/29/44/13785.full.pdf+html 10.1523/JNEUROSCI.2390-09.2009 raschMRasch nikosNKLogothetis
GKreimann article 4946 Journal of Neurophysiology 2008 3 99 3 1461-1476 We investigated whether it is possible to infer spike trains solely on the basis of the underlying local field potentials
(LFPs). Using support vector machines and linear regression models, we found that in the primary visual cortex (V1) of monkeys, spikes can indeed be inferred from LFPs, at least with moderate
success. Although there is a considerable degree of variation across electrodes, the low-frequency structure in spike trains (in the 100-ms range) can be inferred with reasonable accuracy, whereas
exact spike positions are not reliably predicted. Two kinds of features of the LFP are exploited for prediction: the frequency power of bands in the high gamma range (40–90 Hz) and information contained in low-frequency oscillations ( 10 Hz), where both phase and power modulations are informative. Information analysis revealed that both features code (mainly)
independent aspects of the spike-to-LFP relationship, with the low-frequency LFP phase coding for temporally clustered spiking activity. Although both features and prediction quality are similar
during seminatural movie stimuli and spontaneous activity, prediction performance during spontaneous activity degrades much more slowly with increasing electrode distance. The general trend of data
obtained with anesthetized animals is qualitatively mirrored in that of a more limited data set recorded in V1 of non-anesthetized monkeys. In contrast to the cortical field potentials, thalamic LFPs
(e.g., LFPs derived from recordings in the dorsal lateral geniculate nucleus) hold no useful information for predicting spiking activity. http://www.kyb.tuebingen.mpg.de http://
www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Schölkopf Department Logothetis http://jn.physiology.org/cgi/reprint/99/3/1461 Biologische Kybernetik Max-Planck-Gesellschaft en
doi:10.1152/jn.00919.2007 raschMJRasch arthurAGretton yusukeYMurayama WMaass nikosNKLogothetis article 5115 Current Biology 2008 3 18 5 375-380 We investigated the hypothesis that neurons encode rich
naturalistic stimuli in terms of their spike times relative to the phase of ongoing network fluctuations rather than only in terms of their spike count. We recorded local field potentials (LFPs) and
multiunit spikes from the primary visual cortex of anaesthetized macaques while binocularly presenting a color movie. We found that both the spike counts and the low-frequency LFP phase were reliably
modulated by the movie and thus conveyed information about it. Moreover, movie periods eliciting higher firing rates also elicited a higher reliability of LFP phase across trials. To establish
whether the LFP phase at which spikes were emitted conveyed visual information that could not be extracted by spike rates alone, we compared the Shannon information about the movie carried by spike
counts to that carried by the phase of firing. We found that at low LFP frequencies, the phase of firing conveyed 54% additional information beyond that conveyed by spike counts. The extra
information available in the phase of firing was crucial for the disambiguation between stimuli eliciting high spike rates of similar magnitude. Thus, phase coding may allow primary cortical neurons
to represent several effective stimuli in an easily decodable format. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Logothetis http://
cb1a6bb2099bc83600ddc8cc9656be3b&ie=/sdarticle.pdf Biologische Kybernetik Max-Planck-Gesellschaft en http://dx.doi.org/10.1016/j.cub.2008.02.023 MAMontemurro raschMJRasch yusukeYMurayama
nikosNKLogothetis stefanoSPanzeri article 3981 Bioinformatics 2006 8 22 4: ISMB 2006 Conference Proceedings e49-e57 Motivation: Many problems in data integration in bioinformatics can be posed as one
common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different
if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test
statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data
types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data,
cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate
in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same
distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments. http://www.kyb.tuebingen.mpg.de http://
www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Schölkopf http://bioinformatics.oxfordjournals.org/cgi/reprint/22/14/e49 Biologische Kybernetik Max-Planck-Gesellschaft en 10.1093/
bioinformatics/btl242 karstenKMBorgwardt arthurAGretton raschMRasch H-PKriegel bsBSchölkopf smolaASmola article 4702 Hippocampus 2006 1 16 3 329-343 The dentate gyrus is part of the hippocampal
memory system and special in that it generates new neurons throughout life. Here we discuss the question of what the functional role of these new neurons might be. Our hypothesis is that they help
the dentate gyrus to avoid the problem of catastrophic interference when adapting to new environments. We assume that old neurons are rather stable and preserve an optimal encoding learned for known
environments while new neurons are plastic to adapt to those features that are qualitatively new in a new environment. A simple network simulation demonstrates that adding new plastic neurons is
indeed a successful strategy for adaptation without catastrophic interference. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://
www3.interscience.wiley.com/cgi-bin/fulltext/112305178/PDFSTART Biologische Kybernetik Max-Planck-Gesellschaft en 10.1002/hipo.20167 LWiskott raschMJRasch GKempermann article 4701 Genome Informatics
2005 9 16 1 73 We analyse a stochastic model of transcription that describes transcription initiation by promoter activation and subsequent polymerase recruitment. Explicit expressions are derived
for the control of an activator on the mean mRNA number and for the mRNA noise. Both properties are strongly influenced by the kinetics of promoter activation, mRNA synthesis and degradation. Low
transcriptional noise is obtained either when the transcription initiation complex has a long life-time or when its components associate and dissociate rapidly. However, the ability of an activator
to regulate the mRNA level is low in the first and high in the second case. Large noise is generated when the initial activation step of the promoter is slow. In this case, transcription can be
burst-like; the mRNA distribution becomes bimodal while regulability of the mean copy number is maintained. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://
www.kyb.tuebingen.mpg.de http://www.jsbi.org/journal/IBSB05/IBSB05F020.html Biologische Kybernetik Max-Planck-Gesellschaft en THöfer raschMJRasch inproceedings 4193 Advances in Neural Information
Processing Systems 19: Proceedings of the 2006 Conference 2007 9 513-520 We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both
cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the
second is based on the asymptotic distribution of this statistic. The test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for
databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests
currently exist. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/NIPS2006_0583_4193[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department
Schölkopf http://nips.cc/Conferences/2006/ Schölkopf, B. , J. Platt, T. Hofmann MIT Press Cambridge, MA, USA Advances in Neural Information Processing Systems 19 Biologische Kybernetik
Max-Planck-Gesellschaft Vancouver, BC, Canada Twentieth Annual Conference on Neural Information Processing Systems (NIPS 2006) en 0-262-19568-2 arthurAGretton karstenKMBorgwardt raschMRasch
bsBSchölkopf smolaASmola inproceedings 4426 Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07) 2007 7 1637-1641 We describe a technique for comparing distributions
without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a Reproducing Kernel Hilbert Space. We apply this technique to construct a
two-sample test, which is used for determining whether two sets of observations arise from the same distribution. We use this test in attribute matching for databases using the Hungarian marriage
method, where it performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist. http://www.kyb.tuebingen.mpg.de
/fileadmin/user_upload/files/publications/Gretton_4426[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Schölkopf http://www.aaai.org/Library/AAAI/aaai07contents.php
AAAI Press Menlo Park, CA, USA Biologische Kybernetik Max-Planck-Gesellschaft Association for the Advancement of Artificial Intelligence Vancouver, BC, Canada Twenty-Second AAAI Conference on
Artificial Intelligence (IAAI-07) en 978-1-577-35323-2 arthurAGretton karstenKMBorgwardt raschMRasch bsBSchölkopf smolaAJSmola techreport 5111 2008 4 157 We propose a framework for analyzing and
comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over
functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic
distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are
recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g. a Banach space). We apply our two-sample tests to a variety of problems, including
attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are
the first such tests. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/MPIK-TR-157_5111[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department
Schölkopf Biologische Kybernetik Max-Planck-Gesellschaft Max-Planck-Institute for Biological Cybernetics Tübingen en arthurAGretton karstenKBorgwardt raschMRasch bsBSchölkopf smolaASmola poster
CalabreseRLK2008 2008 11 38 459.9 Spiking activity provides information about the outputs of neurons. Recently, there has been increased interest in the study of local field potentials (LFPs), partly
due to their correlation with fMRI BOLD measurements [1], to the possibility of studying local inputs [2] and as a tool to assess neuronal synchrony [3]. The LFP is operationally defined by low-pass
filtering (100 Hz) the extracellular recordings, and its precise biophysical origin remains only poorly understood. Recently, Rasch and colleagues used an SVM algorithm to infer the spiking
activity at a given site from the LFPs [4]. To further understand the relationship between spikes and LFPs, we asked whether we could predict the detailed timecourse of the LFP based solely on the
spiking activity of units recorded from the same electrode or nearby electrodes. We used a Wiener-Kolmogorov approach to derive the optimum linear filter that estimates the LFPs [5, 6]. We considered
electrophysiological recordings in the macaque lateral geniculate nucleus and primary visual cortex during spontaneous activity (86 electrodes, 7 monkeys) [4]. We found that it is possible to predict
LFPs from V1 solely using spike trains from single electrodes in that area. The mean correlation coefficient (r) between the predictions and the actual LFP varied between 0.23 and 0.65. We found that
the estimations were highly significant (p < 10, based on generating a Poisson spike train with the same rate and re-estimating the filters). In contrast, trying to predict LGN LFPs resulted in a
performance hardly above chance level. The reconstruction filter was closely related to the spike-triggered average of the LFPs. A causal filter that used only the spikes occurring before the actual
time of the LFP yielded a higher error (p < 0.01) than a filter that used only the spikes occurring after. It was possible to predict LFPs in V1 from spike trains recorded in LGN (r = 0.3 to 0.7).
The algorithm performed at chance level when trying to predict LFPs in LGN from spikes in V1. In sum, these results support the notion that LFPs represent the input and local processing while spikes
represent the output and suggest that a linear convolution can account for a large fraction of the timecourse of the LFP. We have observed similar results in recordings from macaque monkey inferior
temporal cortex and the human temporal lobe, suggesting that there may be a universal relationship between spikes and LFPs. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://
www.kyb.tuebingen.mpg.de Department Logothetis http://www.sfn.org/annual-meeting/past-and-future-annual-meetings Washington, DC, USA 38th Annual Meeting of the Society for Neuroscience (Neuroscience
2008) AMCalabrese raschMJRasch nikosNKLogothetis GKreiman poster 4703 2005 3 299 The hippocampus is a brain structure that is instrumental for episodic memory, i.e. for memorizing facts and events.
It is often thought to be an intermediate memory that can store new input patterns quickly, which subsequently get transferred into more permanent cortical memory. The entorhinal cortex serves as an
interface between hippocampus and other cortical areas. Within the hippocampal formation the different substructures form a loop: entorhinal cortex (layers II/III) - dentate gyrus - CA3 - CA1 -
subiculum - entorhinal cortex (layers V/VI). Because of its recurrent connectivity, CA3 is thought to be the actual memory. Dentate gyrus would then be an encoding network, preparing the input
patterns for storage in CA3; CA1 and subiculum would perform the decoding to reconstruct the stored patterns in entorhinal cortex. The dentate gyrus is special in that it generates new neurons
throughout life, a phenomenon referred to as adult neurogenesis. Why does adult neurogenesis occur in the dentate gyrus and not in any of the other structures? Assume the dentate gyrus adapts to the
environment the animal lives in in order to optimize its encoding for the input-pattern distribution encountered in this environment. If the animal moves to another environment, new adaptation takes
place and the dentate gyrus is faced with the problem of catastrophic interference. As a new encoding is learned, the old encoding degrades quickly, and as a consequence old patterns can no longer be
addressed and retrieved from the CA3-memory. In artificial neural networks catastrophic interference is usually avoided by interleaved training, i.e. the training patterns are presented repeatedly in
an alternating fashion. However, this is not possible in real life, because many patterns occur only once. How can the dentate gyrus solve this problem? We hypothesize that new neurons are the
solution to this problem. If the dentate gyrus keeps old neurons and their connections fixed but adds new neurons that are plastic, it can adapt to qualitatively new input patterns but at the same
time maintain the encoding capabilities for old patterns. Note that new neurons are required only for qualitatively new patterns and not for new patterns that belong to the old input distribution,
because we assume the encoding to be characteristic for a distribution and not for individual patterns. As a proof of principle we have simulated a linear auto-encoder network modelling the loop
within the hippocampal formation. We assume that the animal first lives in environment A with a certain input-pattern statistics, then moves to a new environment B with a different input-pattern
statistics, and finally returns to environment A. We assume that the animal has time to adapt to environment A and then B, but when it returns to A we only test the performance without giving the
time for new adaptation. We also assume that the decoding (CA1/subiculum) stays plastic and can adapt to environment A and B in any case (but not when the animal returns to A). We have considered
three different scenarios: (a) No DG-adaptation: The dentate gyrus adapts to environment A and keeps the synaptic weights fixed after that. No adaptation to environment B occurs. (b) Neurogenesis:
The dentate gyrus starts with fewer units and first adapts to environment A. In environment B the old units and connections are fixed but a few new units are added and used to adapt to the new
input-pattern distribution. (c) Full adaptation: The dentate gyrus always fully adapts to the current environment, first A then B. In the simulations we find that the networks always perform
reasonably well on the pattern distributions they are adapted to. However, in scenario (a) the performance is poor in environment B, because although the decoding can adapt to environment B the
encoding is still optimal for A and misses important dimensions of B. Performance is also poor when the animal returns to A, because the decoding has adapted to B. In scenario (c) performance is
particularly poor when the animal returns to environment A, because the network is then fully adapted to B. Only in scenario (b) is the effect of catastrophic interference largely avoided and the
performance good in environments A and B and also as the animal returns to A. Our model is consistent with a number of anatomical and physiological facts: New neurons are found to be more plastic
than old ones as required by our model. Since new units are only required for qualitatively new input patterns, there is decreasing need for neurogenesis with age, because the animal has more and
more experience and encounters fewer and fewer qualitatively new stimuli. This is consistent with the decrease in neurogenesis observed experimentally. A relatively small number of newly added units
can have a large effect, since only missing dimensions have to be newly encoded. This is consistent with the relatively low level of neurogenesis of 30% new neurons in mice over the whole lifetime.
Since the generation of new neurons takes weeks but the demand for new neurons can be on a much shorter time scale when the animal changes its environment, it is reasonable that new neurons are
generated all the time to have some in stock when needed. New neurons not needed die after some time. This is what is found experimentally. The level of neurogenesis is regulated by rather unspecific
factors such as physical activity or hunger. It is clear that if new neurons are only needed if qualitatively new input-pattern distributions are encountered, there are no specific factors that would
be available early enough. Thus the unspecific factors might actually be fairly decent predictors of the need for new neurons, since hunger and running foster exploration of new environments. In
summary we hypothesize that adult neurogenesis in the dentate gyrus helps to solve the problem of catastrophic interference when an animal adapts to new environments. Our network simulations confirm
that adding new neurons can reduce the effect of catastrophic interference significantly. The model is also qualitatively consistent with a number of anatomical and physiological facts about adult
neurogenesis. See http://cogprints.org/4012/ for more information. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.cosyne.org/c/index.php?
title=Cosyne_05 Biologische Kybernetik Max-Planck-Gesellschaft Salt Lake City, UT, USA Computational and Systems Neuroscience Meeting (COSYNE 2005) LWiskott raschMJRasch GKempermann thesis 5260 2008
6 3 http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RASCH_thesis_5260[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Logothetis Biologische
Kybernetik Max-Planck-Gesellschaft Graz University of Technology, Graz, Austria PhD en raschMJRasch | {"url":"http://www.kyb.tuebingen.mpg.de/nc/employee/details/rasch.html?tx_sevenpack_pi1%5Bshow_abstracts%5D=0&tx_sevenpack_pi1%5Bshow_keywords%5D=0&tx_sevenpack_pi1%5Bexport%5D=xml","timestamp":"2014-04-18T21:54:17Z","content_type":null,"content_length":"34607","record_id":"<urn:uuid:f8fb56b2-1de5-4f2b-9d19-9e2eb68ae270>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Power series
August 24th 2012, 07:49 PM
Power series
Hi I'm having troubles with a question and was wondering if someone could guide me through it:
Given two mystery functions called anna and bob. Both have power series representations
that work for all $x \in \mathbb{R}$
Anna(x) = $\sum_{n=0}^{\infty }a_{n}x^{n}$, bob(x)= $\sum_{n=0}^{\infty }b_{n}x^{n}$
Furthermore, anna(0) = 0, bob(0) = 1, and we know that differentiating bob gives anna and differentiating
anna gives bob.
a) Find anna and bob, i.e. find the coefficients $a_{n}$ and $b_{n}$ of their power series.
b) Express the exponential function $e^{x}$ and $e^{-x}$ as a combination of anna and bob. Conversely,
express anna and bob as a combination of $e^{x}$ and $e^{-x}$
Thanks in advance :)
August 24th 2012, 08:35 PM
Prove It
Re: Power series
Hi I'm having troubles with a question and was wondering if someone could guide me through it:
Given two mystery functions called anna and bob. Both have power series representations
that work for all $x \in \mathbb{R}$
Anna(x) = $\sum_{n=0}^{\infty }a_{n}x^{n}$, bob(x)= $\sum_{n=0}^{\infty }b_{n}x^{n}$
Furthermore, anna(0) = 0, bob(0) = 1, and we know that differentiating bob gives anna and differentiating
anna gives bob.
a) Find anna and bob, i.e. find the coefficients $a_{n}$ and $b_{n}$ of their power series.
b) Express the exponential function $e^{x}$ and $e^{-x}$ as a combination of anna and bob. Conversely,
express anna and bob as a combination of $e^{x}$ and $e^{-x}$
Thanks in advance :)
Here is the important information: You know that differentiating Anna will give Bob, and differentiating Bob will give Anna. So differentiating Anna TWICE will get back to Anna. This gives us the
differential equation
\displaystyle \begin{align*} A'' &= A \\ A'' - A &= 0 \\ \textrm{Characteristic Equation: } m^2 - 1 &= 0 \\ m^2 &= 1 \\ m &= \pm 1 \\ \textrm{Therefore } A &= C_1 e^x + C_2 e^{-x} \end{align*}
We also know that \displaystyle \begin{align*} A(0) = 0 \end{align*} and \displaystyle \begin{align*} A'(0) = 1 \end{align*}, so we get
\displaystyle \begin{align*} 0 &= C_1 e^0 + C_2 e^{-0} \\ 0 &= C_1 + C_2 \\ \\ 1 &= C_1 e^0 - C_2 e^{-0} \\ 1 &= C_1 - C_2 \\ \\ C_1 = \frac{1}{2}, C_2 = -\frac{1}{2} \end{align*}
So therefore
\displaystyle \begin{align*} A = \frac{1}{2}e^x - \frac{1}{2}e^{-x} \end{align*} and \displaystyle \begin{align*} B = \frac{1}{2}e^x + \frac{1}{2}e^{-x} \end{align*}
Go from here. | {"url":"http://mathhelpforum.com/calculus/202523-power-series-print.html","timestamp":"2014-04-18T11:54:28Z","content_type":null,"content_length":"10894","record_id":"<urn:uuid:ff0637ad-8aba-41d1-8690-c14a802af57c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"} |
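To connect the closed forms back to part (a): the solution gives A = sinh x and B = cosh x, so the coefficients are $a_n = 1/n!$ for odd n (zero for even n) and $b_n = 1/n!$ for even n (zero for odd n). A short Python sketch (an illustrative check, not from the thread itself) confirms both the coefficients and the part (b) identities numerically:

```python
import math

def anna(x, terms=40):
    # Part (a): a_n = 1/n! for odd n, 0 for even n  (this is sinh x)
    return sum(x**n / math.factorial(n) for n in range(1, terms, 2))

def bob(x, terms=40):
    # Part (a): b_n = 1/n! for even n, 0 for odd n  (this is cosh x)
    return sum(x**n / math.factorial(n) for n in range(0, terms, 2))

for x in (-2.0, 0.0, 1.5):
    assert math.isclose(anna(x), math.sinh(x), abs_tol=1e-12)
    assert math.isclose(bob(x), math.cosh(x), abs_tol=1e-12)
    # Part (b): e^x = bob(x) + anna(x) and e^{-x} = bob(x) - anna(x)
    assert math.isclose(bob(x) + anna(x), math.exp(x), abs_tol=1e-12)
    assert math.isclose(bob(x) - anna(x), math.exp(-x), abs_tol=1e-12)
```

Conversely, anna(x) = (e^x - e^{-x})/2 and bob(x) = (e^x + e^{-x})/2, matching the constants C_1 and C_2 found above.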
Figure 7.
Frame-shifting penalty function. The figure shows the insertion and deletion penalty, P = P_0 - P_h * f, with P_h = P_0 * (1 - h), on the left y-axis, as well as the gap penalty reduction factor, f, on the right y-axis, for flowpeak values between 0 and 10 (x-axis). The penalties intersect at each integer peak value, marked by squares, and the dotted line shows the 1/n slope of the f-function. The homopolymer length in this plot is assumed to be the integer value of the flowpeak value p (x-axis); thus p satisfies n + 0.5 >= p > n - 0.5, except for deletion penalties for p < 0.5 where a 1-mer is
Lysholm BMC Bioinformatics 2012 13:230 doi:10.1186/1471-2105-13-230
| {"url":"http://www.biomedcentral.com/1471-2105/13/230/figure/F7","timestamp":"2014-04-19T04:36:46Z","content_type":null,"content_length":"12359","record_id":"<urn:uuid:22e0a3bc-4aee-4c8d-9d26-b97df2cac1e5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Calculator---worth it?
Hand-held calculators are useless. If I want to calculate something, I use calculator software on a computer. I haven't owned a hand-held calculator in 30 years, and I've never felt that I was
lacking a useful tool.
To you, maybe. I still find the dedicated interface and separation from the computer to be useful.
I work for a comms engineering company and it is a fact that the majority of the couple of hundred people on my floor (systems engineers, support engineers and programme management (project management, finance, commercial, etc.), age range 18 to 66) have calculators on their desk and use them despite having multi-screen PCs available. Calculators have the advantages of not occupying the same physical screen space (so the visibility and overlap
problems don't occur), of having a tactile-feedback, dedicated GUI and, like books or pdas/smartphones, having greater freedom in positioning. The majority of people use their calculators for
checking PC results (eg, I've picked up a few Excel worksheet errors) and for generating simple order of magnitude estimates / one-off calculations - in the latter case, it is often easier to use the
calculator than to fire up an app and jiggle around with the mouse, particularly if, say, the results are intended for log book use, lab use or in meetings, where use of a PC is either not an option
or is physically inconvenient or imposes a poor workflow (eg, looking up to the PC, poking around with the mouse, looking down and refinding place in notebook as opposed to putting the calculator by
the side of the notebook and tapping the keys with minimal hand, head & eye movement). Improvements in touch-screen systems may well erode these advantages in the future, but I suspect the dedicated
nature of the calculator will keep it fixed on many people's desks for a few more years to come. | {"url":"http://www.physicsforums.com/showthread.php?p=4282005","timestamp":"2014-04-19T22:54:10Z","content_type":null,"content_length":"64280","record_id":"<urn:uuid:97be5a1c-e72e-4fe6-9138-bb22897886b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry for Elementary School/Transformation
From Wikibooks, open books for an open world
Transformation is when we change the size, orientation, and/or position of a shape. Note that transformation is usually done on graph paper to avoid excessive measurements and ensure accuracy.
Reflection is when a shape is reflected along an axis to produce a reflectionally symmetrical figure. The axis of reflection is also the axis of symmetry of the new figure.
Look at the diagram on the right. Imagine you are given the left part. How can you reflect the figure? First, find out the distance between A and the axis. That's four. Then find point A′ (pronounced 'A prime'), which should be the same distance from the axis but on the other side. Look at the right-hand part of the figure. If you found that point, you're right. Put a little cross
there - if your teacher doesn't let you do it, just erase it later! Now do the same for the other two points. Join them up. Now you have formed the reflected figure!
A common mistake while reflecting a figure is forgetting to mark the points. That costs you a lot in marks, so don't make this mistake! Also, never mark the point wrongly. Remember to add the ' ′ '
symbol and check if the points are corresponding!
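For readers who know coordinates, the reflection rule can be written down directly: the image of (x, y) across the vertical line x = a is (2a - x, y), the same distance from the axis but on the other side. Here is a small Python sketch of the rule (an extra illustration; the axis position and point are made-up example values):

```python
def reflect_across_vertical_axis(point, axis_x):
    """Reflect (x, y) across the vertical line x = axis_x.

    The image is the same distance from the axis, on the other side."""
    x, y = point
    return (2 * axis_x - x, y)

# Example: A is 4 squares to the left of the axis x = 5,
# so A' lands 4 squares to the right of it.
A = (1, 2)
A_prime = reflect_across_vertical_axis(A, 5)
print(A_prime)  # (9, 2)
```

Reflecting twice across the same axis gives back the original point, which matches the symmetry of the finished figure.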
Rotation is the most difficult kind of transformation. It requires rotating a figure with reference to a single point. Therefore, you will only be asked to rotate 90°, 180° and 270° at this stage.
(360° is meaningless, but if you do manage to come across it you're very lucky!) There are three things that we need to note before rotating a figure.
• The number of degrees to rotate
• Whether you should rotate something clockwise or anticlockwise
• Where the centre of rotation is
Let us look at the example on the left. Imagine we are only given the top-left one. We need to rotate it in a number of ways.
1. 90° anticlockwise through the point in the middle
2. 180° anticlockwise through the point in the middle
3. 270° anticlockwise through the point in the middle
4. 90° clockwise through the point in the middle
5. 180° clockwise through the point in the middle
6. 270° clockwise through the point in the middle
Look at #1 and #6. If you look at them carefully, you should be able to see that #1 and #6 are actually the same! This can be explained through the example. To rotate triangle ABC 90° anticlockwise through the point in the middle, we can start with point A. Point A is three squares to the left of and one square above the centre of rotation, so we rotate it 90°. This is easier to see than to describe, so try looking at the figure on the right. As you can see, A and A′ are the same distance from the centre of rotation. We do the same to the other two points, producing
a triangle that looks like the one in the figure. If we do the same for #6, you will produce an identical triangle! The same goes for #3 and #4.
What about #2 and #5? As we can see, they both involve rotating 180°; however, one is clockwise and the other is anticlockwise. Let's try rotating it clockwise first. You should get the triangle A′′B′′C′′ in the figure. Then do the same anticlockwise. Do they look the same?
Now that we have tried rotating 6 possibilities from one centre of rotation, and found that there are really only three distinct results, can you try rotating triangle ABC through point A? If you are reading this as an
e-book, please copy out triangle ABC on a piece of graph paper. If you are using the print version, you can draw the new triangle directly on your book. Name your new triangle DEF.
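The coordinate rule behind these rotations can also be sketched in Python (an extra illustration; the triangle vertex and centre below are made-up example values). A 90° anticlockwise turn about a centre (cx, cy) sends (x, y) to (cx - (y - cy), cy + (x - cx)), and every other rotation is just repeated quarter turns:

```python
def rotate_90_anticlockwise(point, centre):
    """Rotate (x, y) by 90° anticlockwise about centre (cx, cy)."""
    x, y = point
    cx, cy = centre
    return (cx - (y - cy), cy + (x - cx))

def rotate(point, centre, quarter_turns):
    """Rotate anticlockwise by quarter_turns * 90°.

    Negative values mean clockwise turns (Python's % 4 handles the wrap)."""
    for _ in range(quarter_turns % 4):
        point = rotate_90_anticlockwise(point, centre)
    return point

A, centre = (2, 5), (5, 4)
print(rotate(A, centre, 1))  # 90° anticlockwise -> (4, 1)
# #1 and #6 give the same result: 270° clockwise is 90° anticlockwise.
assert rotate(A, centre, 1) == rotate(A, centre, -3)
# #2 and #5 agree too: 180° either way is a point reflection through the centre.
assert rotate(A, centre, 2) == (2 * 5 - 2, 2 * 4 - 5)
```

This makes the table of equal rotations above concrete: clockwise and anticlockwise turns that add up to 360° always land on the same point.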
Translation is a simple transformation. Put simply, translation is a change of the position of a shape. For example, if we want to translate a figure five units to the left, then we just move it
five units to the left.
The process of translating is very easy. Imagine we have a right-angled triangle ABC on a piece of graph paper. Our task is to translate it four squares upwards and two squares to the left. We take
the vertex of the right angle, named A, as our point, move it four squares upwards, then two squares to the left. Draw a little cross there and mark it A′. Now we re-create the shape by referring
to the original shape. Remember to name the points correctly.
We are often asked to work out which translation a figure has gone through. We are given the original figure and the new figure. When dealing with this type of question, it may be helpful
to use a point like we did above. Take the same right-angled triangle. We see that A has been translated four squares upwards. So we write: Translate A upwards four squares. Then we see that A has
been translated two squares to the left, so we write: then translate A two squares to the left. Then we have finished the question!
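In coordinates, translating is just adding the same offsets to every point, and tracing a translation back is just subtracting one point from its image. A small Python sketch (an extra illustration; the point A below is a made-up example, with up as +y and left as -x):

```python
def translate(point, dx, dy):
    """Move (x, y) by dx squares horizontally and dy squares vertically."""
    x, y = point
    return (x + dx, y + dy)

def find_translation(original, image):
    """Recover the translation that maps a point onto its image."""
    (x0, y0), (x1, y1) = original, image
    return (x1 - x0, y1 - y0)

# Translate A four squares upwards and two squares to the left.
A = (6, 1)
A_prime = translate(A, -2, 4)
print(A_prime)                        # (4, 5)
print(find_translation(A, A_prime))   # (-2, 4)
```

Because every vertex moves by the same offsets, checking a single marked point (like A above) is enough to recover the whole translation.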
Enlargement and reduction
Table of conclusion
The following table shows what things are changed when a transformation is gone through.
│Type of transformation│ Size │Shape│Orientation (direction) │Position │
│Reflection │Never │Never│Always │Sometimes│
│Rotation │Never │Never│Always │Sometimes│
│Translation │Never │Never│Never │Always │
│Enlargement (Dilation)│Always│Never│Never │Sometimes│
│Reduction (Dilation) │Always│Never│Never │Sometimes│
Note: For all of these changes the shape never changes. A triangle will always be a triangle, a rectangle will always be a rectangle. | {"url":"http://en.wikibooks.org/wiki/Geometry_for_Elementary_School/Transformation","timestamp":"2014-04-17T13:01:38Z","content_type":null,"content_length":"35403","record_id":"<urn:uuid:85ce1175-8b36-4383-bf80-2e476687d166>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
multiple change-point models
Results 1 - 10 of 47
, 1999
"... We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have
been? 3) What's the nature of the structural break? For this purpose, we employ a Bayesian approach to dealin ..."
Cited by 255 (13 self)
We hope to be able to provide answers to the following questions: 1) Has there been a structural break in postwar U.S. real GDP growth toward more stabilization? 2) If so, when would it have been? 3)
What's the nature of the structural break? For this purpose, we employ a Bayesian approach to dealing with structural break at an unknown changepoint in a Markov-switching model of business cycle.
Empirical results suggest that there has been a structural break in U.S. real GDP growth toward more stabilization, with the posterior mode of the break date around 1984:1. Furthermore, we #nd a
narrowing gap between growth rates during recessions and booms is at least as important as a decline in the volatility of shocks. Key Words: Bayes Factor, Gibbs sampling, Marginal Likelihood,
Markov-Switching, Stabilization, Structural Break. JEL Classi#cations: C11, C12, C22, E32. 1. Introduction In the literature, the issue of postwar stabilization of the U.S. economy relative to the
prewar period has...
, 2000
"... A long return history is useftil in estimating the current equity premium even if the historical distribution has experienced structural breaks. The long series helps not only if the timing of
breaks is uncertain but also if one believes that large shifts in the premium are unlikely or that the prem ..."
Cited by 43 (4 self)
A long return history is useful in estimating the current equity premium even if the historical distribution has experienced structural breaks. The long series helps not only if the timing of breaks
is uncertain but also if one believes that large shifts in the premium are unlikely or that the premium is associated, in part, with volatility. Our framework incorporates these features along with a
belief that prices are likely to move opposite to contemporaneous shifts in the premium. The estimated premium since 1834 fluctuates between four and six percent and exhibits its sharpest drop in the
last decade.
"... Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas
such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and predic ..."
Cited by 26 (0 self)
Add to MetaCart
Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas such as
finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here
we examine the case where the model parameters before and after the changepoint are independent and we derive an online algorithm for exact inference of the most recent changepoint. We compute the
probability distribution of the length of the current “run,” or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular so that the algorithm
may be applied to a variety of types of data. We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.
, 2003
"... Empirical evidence suggests that many macroeconomic and financial time series are subject to occasional structural breaks. In this paper we present analytical results quantifying the effects of
such breaks on the correlation between the forecast and the realization and on the ability to forecast ..."
Cited by 24 (3 self)
Add to MetaCart
Empirical evidence suggests that many macroeconomic and financial time series are subject to occasional structural breaks. In this paper we present analytical results quantifying the effects of such
breaks on the correlation between the forecast and the realization and on the ability to forecast the sign or direction of a time-series that is subject to breaks. Our results suggest that it can be
very costly to ignore breaks. Forecasting approaches that condition on the most recent break are likely to perform better than unconditional approaches that use expanding or rolling estimation
windows provided that the break is reasonably large.
- Journal of Business and Economic Statistics
"... Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib’s (1998) algorithm to draw the break dates. We outline some
advantages of an alternative approach in which breaks come through mixture distributions in state innovation ..."
Cited by 17 (1 self)
Add to MetaCart
Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib’s (1998) algorithm to draw the break dates. We outline some
advantages of an alternative approach in which breaks come through mixture distributions in state innovations, and for which the sampler of Gerlach, Carter and Kohn (2000) allows reliable and
efficient inference. We show how the same sampler can be used to (i) model shifts in variance that occur independently of shifts in other parameters, (ii) draw the break dates in O(n) rather than
O(n^3) operations in the change-point model of Koop and Potter (2004b), the most general to date. Finally, we introduce to the time series literature the concept of adaptive Metropolis-Hastings sampling
for discrete latent variable models. We develop an easily implemented adaptive algorithm that improves on Gerlach et al. (2000) and promises to significantly reduce computing time in a variety of
problems including mixture innovation, change-point, regime-switching, and outlier detection. The efficiency gains on two models for U.S. inflation and real interest rates are 257% and 341%.
- Electronics Letters , 1999
"... This paper describes an implementation of Bayesian change point detection for extracting transients captured from radio transmitter transmissions. When a radio transmitter is activated, it goes
through a relatively short transient phase during which the signals generated by the unit have characteris ..."
Cited by 12 (0 self)
Add to MetaCart
This paper describes an implementation of Bayesian change point detection for extracting transients captured from radio transmitter transmissions. When a radio transmitter is activated, it goes
through a relatively short transient phase during which the signals generated by the unit have characteristics that can be unique. If these turn-on transients can be separated, they can be analyzed to
identify the radio transmitter. In this study, radio transmissions from 30 different radio transmitters were analyzed by using an experimental setup. The estimated transient starting point is
compared to the visually observed starting point. The probabilistic automatic segmentation algorithm has been found to be effective in detecting turn-on transients in the presence of noise.
, 2006
"... This paper develops a new approach to change-point modeling that allows the number of change-points in the observed sample to be unknown. The model we develop assumes regime durations have a
Poisson distribution. It approximately nests the two most common approaches: the time varying parameter model ..."
Cited by 12 (1 self)
Add to MetaCart
This paper develops a new approach to change-point modeling that allows the number of change-points in the observed sample to be unknown. The model we develop assumes regime durations have a Poisson
distribution. It approximately nests the two most common approaches: the time varying parameter model with a change-point every period and the change-point model with a small number of regimes. We
focus considerable attention on the construction of reasonable hierarchical priors both for regime durations and for the parameters which characterize each regime. A Markov Chain Monte Carlo
posterior sampler is constructed to estimate a version of our model which allows for change in conditional means and variances. We show how real time forecasting can be done in an efficient manner
using sequential importance sampling. Our techniques are found to work well in an empirical exercise involving US GDP growth and inflation. Empirical results suggest that the number of change-points
is larger than previously estimated in these series and the implied model is similar to a time varying parameter (with stochastic volatility) model.
"... This paper discusses Bayesian inference in change-point models. The main existing approaches either attempt to be noninformative by using a Uniform prior over change-points or use an informative
hierarchical prior. Both these approaches assume a known number of change-points. We show how they have s ..."
Cited by 8 (1 self)
Add to MetaCart
This paper discusses Bayesian inference in change-point models. The main existing approaches either attempt to be noninformative by using a Uniform prior over change-points or use an informative
hierarchical prior. Both these approaches assume a known number of change-points. We show how they have some potentially undesirable properties and discuss how these properties relate to the
imposition of a fixed number of change-points. We develop a new Uniform prior which allows some of the change-points to occur out of sample. This prior has desirable properties, can reasonably be
interpreted as “noninformative”, and handles the case where the number of change-points is unknown. We show how the general ideas of our approach can be extended to informative hierarchical priors.
With artificial data and two empirical illustrations, we show how these different priors can have a substantial impact on estimation and prediction even with moderately large data sets. (The authors
thank Edward Leamer for useful conversations, as well as seminar participants at the Federal Reserve Bank of St. Louis and the University of Kansas. The views expressed in this paper are those of the
authors and do not necessarily reflect the views of the Federal Reserve Bank of New York or the Federal Reserve System.)
- Ann. Inst. Statist. Math , 2007
"... We consider the problem of detecting change points (structural changes) in long sequences of data, whether in a sequential fashion or not, and without assuming prior knowledge of the number of
these change points. We reformulate this problem as the Bayesian filtering and smoothing of a non standard ..."
Cited by 7 (1 self)
Add to MetaCart
We consider the problem of detecting change points (structural changes) in long sequences of data, whether in a sequential fashion or not, and without assuming prior knowledge of the number of these
change points. We reformulate this problem as the Bayesian filtering and smoothing of a non-standard state-space model. Towards this goal, we build a hybrid algorithm that relies on particle
filtering and MCMC ideas. The approach is illustrated by a GARCH change point model.
In this question you have to write a complete function.
We want to determine what the population of Neverland will be a certain number of years from now, assuming that
Neverland has a steady population growth of 12% per year. You have to write a C++ function population to
calculate this. The function should be a value-returning function of type int and is called in the main function as
answer = population(popul, nrYears);
where popul indicates the population of Neverland in 2007, and nrYears indicates the number of years from now
(for example 9, if we are interested in the population in 2016). Thus the function should determine what the
population of Neverland will be in 2007 + nrYears.
You may assume:
• that answer and nrYears have been declared as int,
• that popul has been declared as float, and
• that values have been assigned to both popul and nrYears in the main function.
Write ONLY the complete function population. Use a for loop.
I've tried this; I don't know if it is right.
int population(float popul, int nrYears)
{
    float total = popul;
    for (int i = 1; i <= nrYears; i++)
        total += total * 0.12f;   // grow the population by 12% each year
    return (int) total;
}
Good Bye MD5
A complementary article, showing how to exploit this is here.
There is a Crisis in Hashland. The latest results in cryptology are opening a debate about the security of cryptographic one-way hash functions.
I suggested in another article that:
Don't store the user password on your database. No matter how many security measures you take, there is no perfect security system. Use a hash method for the passwords, like SHA1 or MD5.
SHA1 and MD5 aren't secure anymore; because of projects like passcracking, we can't trust these hash functions for one-way encryption.
In fact, experts suggest that:
"Given the number of practical attacks on MD5, it may be time to move
to a Federal Information Processing Standards (FIPS) approved hash
algorithm, such as SHA-256, or SHA-512. Note that vulnerabilities have
recently been found in SHA-1, however, and NIST is already planning to
phase it out by 2010." (Quoted from cn.bbs.comp.security.)
Update - here are some reactions to these issues:
Microsoft is banning certain cryptographic functions from new computer code, citing increasingly sophisticated attacks that make them less secure, according to a company executive.
The Redmond, Wash., software company instituted a new policy for all developers that bans functions using the DES, MD4, MD5 and, in some cases, the SHA1 encryption algorithm, which is becoming
"creaky at the edges," said Michael Howard, senior security program manager at the company. (Source)
To understand the consequences, this article first explains what one-way hash functions are, shows one of their common uses in password storage, describes the nature of the current attacks and their
consequences, and suggests alternative hash functions that are stronger at present.
One Way Hash Functions
The following definitions are taken from Bruce Schneier's Book: Applied Cryptography Second Edition:
A one-way hash function, H(M), operates on an arbitrary-length pre-image message, M. It returns a fixed-length hash value, h.
h = H(M), where h is of length m
Many functions can take an arbitrary-length input and return an output of fixed length, but one-way hash functions have additional characteristics that make them one-way [1065]:
Given M, it is easy to compute h.
Given h, it is hard to compute M such that H(M)= h.
Given M, it is hard to find another message, M’, such that H(M) = H(M’).
In some applications, one-wayness is insufficient; we need an additional requirement called collision-resistance.
It is hard to find two random messages, M and M’, such that H(M) = H(M’).
Hash Functions Used for Password Storage
The definitions mentioned before led to the development of secure password storage.
The use of a one-way hashing function follows this algorithm:
1. For password storage, request input plain_password from the user, then apply a one-way hash function H on plain_password and store it. In code, stored_password = H(plain_password)
2. For password checking, request probe_password from the user, apply H on it, and compare with stored_password, i.e. check ::= H(probe_password) == stored_password.
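A minimal Python sketch of these two steps (illustrative only; SHA-256 stands in for H, per the recommendation at the end of this article, and the function names are made up):

```python
import hashlib

def store_password(plain_password):
    # Step 1: hash the plaintext and keep only the digest.
    return hashlib.sha256(plain_password.encode("utf-8")).hexdigest()

def check_password(probe_password, stored_password):
    # Step 2: hash the probe and compare it with the stored digest.
    return hashlib.sha256(probe_password.encode("utf-8")).hexdigest() == stored_password

stored = store_password("s3cret")
print(check_password("s3cret", stored))   # → True
print(check_password("guess", stored))    # → False
```

Note that the plaintext never needs to be stored; only digests are ever compared.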
This scheme is good as long as H is a strong hashing function.
Today, MD5 is under heavy attack; one reason is that it is used in the GNU implementation of the POSIX function crypt. In fact, on this Web site, you can find a lot of collisions for this function.
Other papers about the MD5 attacks can be found here.
Other Security Side Effects
In recent works, three investigators, Lenstra, Wang and de Weger, showed that it is feasible to build colliding electronic X.509 certificates, using the MD5 collision techniques developed by Wang. You
can read their paper here.
This work violates the basic trust principles underlying PKI (Public Key Infrastructure).
Moreover, Chinese investigators have shown an attack on another hash function, SHA-1. (Read about it here).
Some others side effects are mentioned in the paper "MD5 To Be Considered Harmful Someday".
For example, digital signature schemes like RSA, DSA/ElGamal, and elliptic curve never sign the data directly, but rather a hash of the data, and often the choice is MD5. Also consider DRM (Digital
Rights Management) implementations using MD5. All these protection signatures and checksums are at risk because of these findings.
If you read the paper, you can learn that it is possible to add a payload to the data, or alter the data without being noticed.
Another example is shown in the paper, Practical Attacks on Digital Signatures Using MD5 Message Digest.
A Proof of Concept
In this article, I wrote about how to implement the attack in Microsoft.NET.
Finding Collisions for MD5
The typical way of collision search is to use a brute-force algorithm: given a hash value h for a plain message m written in an alphabet A, so that h = MD5(m), a brute-force collision search tries
every possible combination over alphabet A until we find a message m' such that MD5(m') = h. m' may or may not be equal to m.
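A brute-force preimage search over a toy alphabet can be sketched in Python (illustrative; a real search over MD5's full output space is computationally infeasible, which is exactly why the structural attacks described above matter):

```python
import hashlib
from itertools import product

def brute_force_preimage(target_hex, alphabet, max_len):
    # Try every string over `alphabet`, shortest first, until its MD5 matches.
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None   # exhausted the search space without a match

target = hashlib.md5(b"cab").hexdigest()
print(brute_force_preimage(target, "abc", 3))   # → cab
```

The running time grows as |A|^n with the candidate length n, which is what precalculated-table tools trade memory to avoid.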
RainbowCrack uses precalculated tables of intermediate steps in the process, which can accelerate cracking. For example, a password of up to 14 characters from the charset
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_+=" can be cracked in a few minutes.
Better One Way Hash Functions
Some people suggest using more complicated applications of MD5, for example stored_password = crypt(plain_password + salt), where the salt is a fixed value, the user id, or some other fixed value.
This scheme is not much stronger, and several flaws in this approach can be shown.
Other alternatives are:
• Use a key based hash function
• Combine algorithms
• Use other functions
The first one relies on a key, which can be stolen. Combining algorithms is better, but probably results in more CPU usage than real protection.
Alternative Functions
Using other hash functions can be a solution, but what criteria should be used to choose a good one?
The answer is simple: use hash functions with a bigger domain of results. For example, MD5 generates a 128-bit value, so the space of possible resulting values is 2^128 in size. By simple logic, if
your hash function has an output domain bigger than that, it's a good alternative.
All of the following functions are stronger than MD5:
• WHIRLPOOL, which generates a 512-bit output
• RIPEMD, with 128-, 160-, or 320-bit outputs
• SHA-2, which generates 256-, 384-, or 512-bit outputs
SHA-2 is available in the Crypto API and Microsoft.NET, so I suggest you use it. SHA-2 is a family of functions; in Microsoft.NET you have the following classes:
• System.Security.Cryptography.SHA256Managed
• System.Security.Cryptography.SHA384Managed
• System.Security.Cryptography.SHA512Managed
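As an aside (the article's platform is Microsoft.NET, but the same family ships in Python's hashlib), the output sizes can be checked directly:

```python
import hashlib

# Digest sizes in bits for MD5 and the SHA-2 family.
for name in ("md5", "sha256", "sha384", "sha512"):
    bits = hashlib.new(name).digest_size * 8
    print(name, bits)   # md5 128, sha256 256, sha384 384, sha512 512
```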
Change Log
• September 7^th, 2005: Added some quotes from Chinese security groups, and more links on papers about the MD5 collisions. Also a section about other effects on DRM and checksums is included. Some
grammar corrections.
• September 8^th, 2005: Some title changes, to be more precise about cryptology terminologies. Some typos corrected.
• September 9^th, 2005: Added list of SHA-2 algorithms available on Microsoft.NET 1.1.
• September 14^th, 2005: Added link to proof of concept article
HUT / Department of Mathematics / teaching /
Mat-1.150 Real Analysis (4 credits), Fall 2004
The course is intended for math students and also for students in all areas of technology who need measure and integration theory for studying partial differential equations or probability theory.
Prerequisites: Mat-1.015 Principles of Modern Analysis (ModA) is almost necessary; Mat-1.140 Principles of Functional Analysis (FAP) helps, but you can do without it.
Contents: Lebesgue integration, L^p spaces, measure theory, differentiation, and the Fourier transform.
Lectures: We 12--14 and Fri 12--4, U345 (starting 19.9.2004), professor Matti Lassas, e-mail: Matti.Lassas
Exercises: We 14--16 U345. Assistant Niko Marola, e-mail: nmarolal
Book: Walter Rudin: Real And Complex Analysis.
Matti Lassas
Niko Marola
Updated 8.9.2004
Countryside, IL
Chicago, IL 60610
Talented Math, Music Tutor
...Young high school, one of the leading high schools in Chicago. I was a part of their Academic Decathlon team, a prestigious academic competition in ten different subjects encompassing Music,
Economics, Science, History, Art, Literature, Speech, Interview,...
Offering 10+ subjects including algebra 1, algebra 2 and geometry
Revenue and cost function.
July 15th 2011, 09:56 AM #1
Jul 2011
Revenue and cost function.
The total revenue function of a firm is given as R = 21q - q^2. The total cost function is C = 5q^3 + 10q^2 + 15q + 200, where q is the output. Find the output at which total revenue is maximum and
total cost is minimum.
Re: Revenue and cost function.
Did you try to solve R'=0 and C'=0?
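Following that hint, the revenue part works out as (an illustrative solution, assuming the functions as stated above):

```latex
R(q) = 21q - q^2, \qquad R'(q) = 21 - 2q = 0 \;\Longrightarrow\; q = 10.5,
```

and R''(q) = -2 < 0 confirms a maximum. On the cost side, C'(q) = 15q^2 + 20q + 15 has discriminant 20^2 - 4(15)(15) = -500 < 0, so C'(q) > 0 for every q; on q >= 0 total cost is therefore smallest at the boundary q = 0.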
July 15th 2011, 04:01 PM #2
Des Moines, WA ACT Tutor
Find a Des Moines, WA ACT Tutor
...I scored in the 99% my first time taking the official GMAT, GRE and LSAT and I've scored perfect on subsequent exams. I enjoy taking exams (I know that sounds weird), and I enjoy helping
others do the same.I worked in Taiwan for two years as a missionary speaking daily with native Chinese people...
16 Subjects: including ACT Math, geometry, Chinese, algebra 1
...I want to thank you for considering me as your tutor, and I sincerely hope I will be able to assist you in your educational journey.I am a college graduate with a minor in Biology. As such I
have taken numerous math and science classes such as algebra, geometry, chemistry, precalculus, physics, anatomy, and calculus. I have previous experience tutoring math up to the precalculus
22 Subjects: including ACT Math, reading, English, writing
...During that time, I worked in 3 schools in 3 different cities, all with a very different student population. So I have experience teaching Geometry to students at every level. Before becoming
a middle school and high school teacher, I worked as a para-educator in elementary schools for 3 years.
16 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...My own scores are a 170 Verbal and 169 Quantitative. I have a degree in Linguistics from the University of Washington and have a passion for grammar. I have helped many students revise their
writing and develop their own proofreading skills.
32 Subjects: including ACT Math, English, reading, geometry
...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles.
17 Subjects: including ACT Math, calculus, geometry, statistics
The Girlfriend Paradox: Two Questions With No Good Answer
The Girlfriend Paradox is one of the few questions in which any answer is the wrong answer. A male counterpart was recently discovered.
The Girlfriend Paradox: A Brief And Completely Factual History
The first known precursor of the Girlfriend Paradox occurred circa 250 BC, when Archimedes pondered the question "Does this toga make my feet look large?" The solution proposed by Archimedes was,
from a modern viewpoint, laughably simplistic: he answered "You look just fine." Unsurprisingly, he was nagged mercilessly. Eventually, he completely lost it and ran stark naked out of his bathtub
and into the streets, screaming "Okay! I get it! I get it!" (or, in Greek, "Eureka! Eureka!") Thankfully, togas and the Greek obsession with feet size are both archaic now.
OK, togas are still awesome.
The next major attempt at solving the Girlfriend Paradox was made by Isaac Newton, who was trying to solve an updated version of Archimedes' problem that involved hat size and nose length. His
solution, which he had to create calculus to formulate, was the groundbreaking equation u = ^xy. Sadly, this too would prove incorrect, as all the women found Newton "creepy."
We can't imagine why.
The Girlfriend Paradox was finally formalized into the version we know today by Pierre de Fermat ("Operor illa induviae planto mihi vultus pinguis?" or "Do these clothes make me look fat?") Fermat
famously wrote that he "had discovered a truly marvelous proof of this, but there is not enough space left on this page for me to write it out." While this led men for centuries to believe that there
was a relatively simple solution to the Girlfriend Paradox, modern mathematicians now believe that Fermat was "just being a dick." This has led to the alternate name for the Paradox: Fermat's Douche
Or, to be complete, Fermat's A Complete Douche And Should Go Die Theorem
357 years later, the Paradox was finally cracked by British mathematician Andrew Wiles who, in a proof that was more than a hundred pages long, shockingly showed that the Paradox is irresolvable for
all values of n, where n is the response given. He concluded that, should the Girlfriend Paradox be invoked, the invokee was "basically screwed."
The average reaction to Wiles's proof
Recently, a bold new solution was proposed by a team of mathematicians from Stanford University: Never ever have contact with a female, thus circumventing the Paradox altogether. Interestingly
enough, the only sure-fire way to accomplish this proof is to be a mathematician at Stanford University.
The "S" stands for "Single."
The Boyfriend Paradox: An Even Briefer And More Factual History
The Boyfriend Paradox was both postulated and proven to be a zero-sum problem by Pythagoras, who in his famous Love Triangle Theorem stated that "If Amorous girlfriend 'A' (A^2) is boning Boyfriend
'B' (B^2) who is also boning Co-worker 'C' (C^2), then B^2 better keep A^2 and C^2 on opposite sides of the equation." As a side note, a version of this theorem with lower-case letters was later used
for something entirely different.
You've probably seen this before.
Curiously, though Pythagoras's explanation has been proven time and time again to be without flaw, many amateurs keep trying to find alternate solutions. Of course, none of these have ever stood up
to repeated trial.
How To Do A Vlookup Formula In Excel
How To Use VLOOKUP in Excel 2010 - 7. Now we will create the VLOOKUP formula that will translate the "A" Pcode in cell C2 to the
Task Description: How do I use VLOOKUP Function ... VLOOKUP Microsoft Excel includes a VLOOKUP Function which stands for Vertical Lookup and helps in finding specific information in vast data
tables, such as a list of names in a contact list. Enter the data.
What is VLOOKUP? It is an Excel Function that is used within tables to help filter through large volumes of data and ... What does this formula do and how does it work? The formula is: =IF(ISNA(
you’re going to copy your VLOOKUP formula to look up several different values, you’ll want to ... It’s like the Price is Right approach to VLOOKUP. Excel gives you the closest value to your
Lookup_value, without going over.
like LOOKUP, VLOOKUP, HLOOKUP & INDEX/MATCH. ... IF statements are one of the core formula models you can use in Excel, and they can be very powerful with regards to their logic. Very simply ... or
where they’re located, you simply have the formula do it for you. In ...
VLOOKUP formula, you may want to use absolute references to “lock” the range. Which column contains ... Microsoft Excel VLOOKUP Refresher =VLOOKUP(A2, PAGES!$A$2:$B$39, 2, FALSE) On the Page Views
worksheet, the VLOOKUP formula in cell 2
Excel Lookup Formulas 2/29/12 Page 2 of 4 VLookup Formula Specifics The VLookup formula used in the figure below, =VLOOKUP(A2,’Account Lookup’!A1:E963,3,False) returns
Using VLOOKUP, HLOOKUP, INDEX and MATCH in Excel to ... The formula I’d use here is: =VLOOKUP("Barbara",A2:C6,2,FALSE) © Ray Blake, GR Business Process Solutions Page 2 ... The MATCH formula appears
at first to do something quite unremarkable.
translation and have Excel do a VLOOKUP for the party name. You might think of VLOOKUP as an Excel translator. I could then add a column called “Political ... You can see in the circled formula area,
we now have more information based on
www.cga-pdnet.org Using VLOOKUP to return a picture Using VLOOKUP to return a picture by . ... What we need to do here is change the Refers To formula from =Forecast!$A$4 to =INDIRECT ... Most
Valuable Professional — Excel award in October 2006, ...
This is a search-and-return capability of Excel. Excel will search for a value in the leftmost column (a vertical search) of a table, ... Here is the syntax for the VLOOKUP function in the cell where
you want the value returned:
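The vertical search just described can be sketched in Python (an illustrative model of VLOOKUP's exact-match and approximate-match modes, not Excel's actual implementation; the names are made up):

```python
def vlookup(lookup_value, table, col_index, range_lookup=False):
    # Search the first column of `table`; return the value from column
    # `col_index` (1-based, as in Excel) of the matching row.
    if not range_lookup:                  # FALSE in Excel: exact match only
        for row in table:
            if row[0] == lookup_value:
                return row[col_index - 1]
        return "#N/A"
    result = "#N/A"                       # TRUE: largest first-column value
    for row in table:                     # <= lookup_value; assumes the table
        if row[0] <= lookup_value:        # is sorted ascending, as Excel requires
            result = row[col_index - 1]
        else:
            break
    return result

codes = [("A", "Apple"), ("B", "Banana"), ("C", "Cherry")]
print(vlookup("B", codes, 2))             # → Banana
print(vlookup("Z", codes, 2))             # → #N/A
```

The second mode is the "Price is Right" behavior mentioned in one of the excerpts above: the closest value without going over.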
10. To assign the Lookup_value to the formula, with the cursor still in the Lookup_value field, click on the first Object Code in the Object Code
Excel Skill #3: How to Do a VLOOKUP (Mac Version) 1. ... (Note: The finished formula will read =VLOOKUP(A2,Pages!B:K,10,FALSE) in the formula bar.) 8. Hit enter, and your first author will appear. To
carry the function down the entire column, select
Excel Skill #3: How to Do a VLOOKUP (PC Version) 1. Take the pivot table you made in the last video, copy everything from “Title” down to the last row ... =VLOOKUP(A2,Pages!B:K,10,FALSE) in the
formula bar.) 8. Click OK, and your first author will appear.
VLOOKUP formula, you may want to use absolute references to “lock” the range you use here. Which column contains ... Microsoft Excel VLOOKUP Refresher =VLOOKUP(A5, PAGES!$A$2:$B$38, 2, FALSE) On the
Page Views worksheet, use the value in cell A5 as your
The VLOOKUP formula has 4 components: a. Lookup_value: The value to search in the first column of the table array. ... This tells Excel that as the formula is copied, the E2 reference should stay
constant. e. Copy the revised formula down to see the results.
Excel will try to find a match to this value in the leftmost column of the lookup table. table_array Where do you want to search? This is the lookup table. ... If you copy your VLOOKUP formula down a
column, use absolute references (or a named
Microsoft Excel 2010 Special Topics PivotTable IF Function ... Using the VLOOKUP Function ... a formula in cells B3 through B12 that will indicate if the corresponding grade is a “Pass” or a “Fail”.
MS Excel 2003: Intermediate . The VLOOKUP function . ... This is the end result of the VLOOKUP calculation. Note that the formula that was copied used the absolute cell reference function because we
applied absolute cell reference to the table array.
we begin by entering a VLOOKUP() formula for a student whose name will appear in cell B2 on this sheet into the Test 1 score location ... The equals symbol (=) tells Excel that this is a formula, all
Excel formulas start with one of those. VLOOKUP(
the formula =VLOOKUP(38, A2:C10, 3, FALSE). ... Remark: You can also type the word FALSE directly onto the worksheet or into the formula, and Microsoft Excel interprets it as the logical value FALSE.
AND Returns TRUE if all its arguments are TRUE
How can we get Excel to do this for us? Maximum aggregate size (in.) Slump(in) 0.375 0.5 0.75 1 1.5 2 3 6 ... Excel Lookup Functions VLOOKUP(lookup_value,table_array, col_index_num,range_lookup) The
lookup_value can be a value, a reference, or a text
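The exact-match behaviour described in these snippets can be sketched outside Excel. Below is a minimal Python model of VLOOKUP with `range_lookup` set to FALSE — a hypothetical `vlookup` helper and sample `pages` table for illustration, not Excel itself:

```python
def vlookup(lookup_value, table_array, col_index_num):
    """Exact-match VLOOKUP (range_lookup = FALSE): scan the leftmost
    column of table_array top to bottom and return the value from the
    1-based column col_index_num of the first matching row."""
    for row in table_array:
        if row[0] == lookup_value:
            return row[col_index_num - 1]
    raise LookupError("#N/A")  # Excel shows #N/A when nothing matches

# hypothetical lookup table: key, title, author
pages = [("p-101", "Getting Started", "A. Author"),
         ("p-102", "Pivot Tables",    "B. Author")]
print(vlookup("p-102", pages, 2))  # -> Pivot Tables
```

This mirrors why the snippets stress absolute references: the table itself must stay fixed while only the lookup value changes as the formula is copied down a column.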
"I need help with writing an "IF" formula in Excel. Background: I run a local golf league with 40 to 70 golfers playing each week. ... Whether you’re sick of people who feel superior because they can
do VLOOKUP, or someone who does VLOOKUP in their sleep, ...
¬ VLOOKUP is an Excel function to import information from one spreadsheet into another. To use VLOOKUP, there must be one unique data item or reference that ... = adding an ‘equal’ sign tells Excel
you are adding a formula rather than a value
references) you will notice that the cell references in your formula do not change. They still reference A1 and B1. Thus, ... =VLOOKUP This part of the formula tells Excel that you want it to use the
VLOOKUP function in the
Vlookup and Sumif Formulas to assist summarizing queried data . When accessing data from Foundation through the MS Query tool, at times it is necessary to ... Excel 2007 has a nice formula to address
any and all errors in calculated cells.
easiest way to do that is to use the VLOOKUP and HLOOKUP functions. If you do a lot of ... the yellow part of the formula indicates where we have asked Excel to look (column B of the ratings sheet)
Learn Excel 2010 Expert Skills with The Smart Method 14 2 www.ExcelCentral.com ... When working with the VLOOKUP function in Excel 2010 it is best ... Names and the Formula Auditing Tools). When you
see a Range Name
Check VLOOKUP Formula for #NA Error without Slowing Down Computations (Excel 2007) ... You can also do it this way: Excel 2007 users: go to Office Button>Advanced>Display options for this worksheet:>
Show Formulas in cells instead of their calculated results.
when you type this formula. Bonustable has already been defined as a range name for cells A20:C28. ... click VLOOKUP or HLOOKUP, and click OK. Excel will then prompt you for the information required
to perform the lookup. Explain that if range_lookup is
MS Excel Advanced Formulas 9/7/2010:mms Microsoft Excel 2007 . Advanced Formulas ... not stored as text values. In this case, VLOOKUP may give an incorrect or unexpected value. If
range_lookup is FALSE and lookup_value is text, ... Formula with the IF function .
Excel has a built in formula for concatenation. Click the Insert Function button ( ) ... One simple Date formula you can do is determining the number of days between two dates. ... VLookup VLookup
stands for “vertical
Merging Two Worksheets with VLOOKUP . ... http://office.microsoft.com/en-us/excel-help/vlookup-what-it-is-and-when-to-use-it-RZ101862716.aspx . Name the sheet tabs: In this example: ... data, copy
the formula down the column
This handout features the VLOOKUP formula, which looks like this (see 'diagram' below). Please study this carefully. =VLOOKUP(C3, L6:N14, 3, FALSE) ... Using
Lookup Functions in Excel (2013) Author:
into the formula so these were inserted so the formula could be dragged down the Vlookup tab worksheet. The 2 is telling Excel to return the value found in ... If the same value is placed in column F
more than once Vlookup and Excel do not care. They will return the requested value as often ...
VLOOKUP HLOOKUP PMT (Payment) FV (Future ... Figure 1 1 2 . 2 MICROSOFTEXCEL2007: ADVANCEDCALCULATIONS Relative Reference vs. Absolute Reference Relative Reference In Excel, a Relative Reference is
the address of a ... you” when you copy the formula to other locations. In order to do so, ...
formula. Excel will automatically insert the correct cell reference (including the worksheet and workbook names) into your formula. Function Cheat Sheet Functions Description Syntax Example Functions
without arguments Rand Generates a random number
how to use formulas and the common and less common functions available in Excel. Formula Basics ... Excel's VLOOKUP function, which stands for vertical lookup, is used to find specific information
that has been stored in a spreadsheet table.
If you give the reference "Feb" and want to look up the "Sausages" figure for that month then it is a horizontal lookup and the row index is 4.
6. Click on Cell A12. This is where you will enter the VLOOKUP formula. Begin to enter the formula, as shown in the example above. Inside the parentheses, there are 4 operations:
Use Excel functions to do the following: ... Use Copy-fill to copy the VLOOKUP formula to the other rows.
Excel Tips & Tricks: How to Display a Blank Cell Instead of Zeros in Vlookup In Excel, ... pulled out from the TABLE in the first sheet using the VLOOKUP function. The formula used is: =IFERROR(
You are going to write your VLOOKUP formula in cell C2. You want to look at the mark that Kirsty gained and find out which grade she should be given. ... However, if you do not enter anything in this
box, Excel will apply the default.
The file is ready to be open in Excel. To do so ... Select Lookup & Reference category and choose VLOOKUP function in the menu below. Click OK. In the dialog window first specify the Lookup_value.
... simply copy the formula from the cell just . 10.
Using the above image as an example, the VLOOKUP formula is placed in the Bonus column in the table on the right. The Sales column is the lookup value and, ... When you create a formula, Excel
displays parentheses in color as they are entered.
Excel 2007 Array Formulas (by Dr. Tomas Co 5/7/2008) ... [CTRL Shift ENTER]. A group of brackets will automatically enclose the formula to remind the user that it is an array formula. Some Excel
functions perform matrix operations such as multiplication, inverse and
Using an Excel Spreadsheet for Grades Why? • Ease of inputting grades • Use of formulas for automatic calculation of ... =VLOOKUP(Y8,AE$8:AF$20,2) This formula uses the command VLOOKUP to search
through a range bounded by AE8
VLookup Formula VLOOKUP is a function that is used in a worksheet to return a value from a table ... Paste Special: Link - Linking Data in Excel Add another simple formula to your spreadsheet (again,
two simple numbers and a sum to add them up will do).
Excel shortcuts Adjust columns Quickly move to left, right, ... You will see that all the last names do not appear in one column because some have middle initial or a suffix. ... Copy and paste the
formula =VLOOKUP(C3,A18:B28,2)
using formula functions. Excel is an excellent program to learn, as the skills that we learn in Excel apply to many other programs as well, especially Access. ... =VLOOKUP
(lookup_value,table_array,column_index_number, range_lookup) OR =HLOOKUP ... | {"url":"http://ebookily.org/pdf/how-to-do-a-vlookup-formula-in-excel","timestamp":"2014-04-24T14:43:05Z","content_type":null,"content_length":"45690","record_id":"<urn:uuid:097206ab-a503-4e21-8186-6e0d6094351b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Last change on this file since 942 was 942, checked in by ray, 6 years ago
• #1137 [Equation] FF3 fixes & improved way of avoiding formula changes in editor
• #1136 FF3 Linux select boxes in toolbar are too small
• Property svn:keywords set to LastChangedDate LastChangedRevision LastChangedBy HeadURL Id
File size: 1.8 KB
AsciiMathML Formula Editor for Xinha
_______________________

Based on AsciiMathML by Peter Jipsen (http://www.chapman.edu/~jipsen).
Plugin by Raimund Meyer (ray) xinha@raimundmeyer.de

AsciiMathML is a JavaScript library for translating ASCII math notation to Presentation MathML.

Usage

The formulae are stored in their ASCII representation, so you have to include the
ASCIIMathML library, which can be found in the plugin folder, in order to render the MathML output in your pages.

Example (also see example.html):
var mathcolor = "black"; // You may change the color of the formulae (default: red)
var mathfontfamily = "Arial"; // and the font (default: serif, which is good I think)
var showasciiformulaonhover = false; // if true, helps students learn ASCIIMath (default: true)
<script type="text/javascript" src="/xinha/plugins/AsciiMath/ASCIIMathML.js"></script>

The recommended browser for using this plugin is Mozilla/Firefox. At the moment, showing the MathML output
inside the editor is not supported in Internet Explorer.

License information

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License (at http://www.gnu.org/licenses/lgpl.html)
for more details.

NOTE: I have changed the license of AsciiMathML from GPL to LGPL according to a permission
from the author (see http://xinha.gogo.co.nz/punbb/viewtopic.php?pid=4150#p4150)
Raimund Meyer 11-29-2006
for help on using the repository browser. | {"url":"http://xinha.webfactional.com/browser/trunk/plugins/Equation/readme.txt?rev=942","timestamp":"2014-04-20T23:27:52Z","content_type":null,"content_length":"15817","record_id":"<urn:uuid:36b69015-e9a2-4fe1-9ab4-71f8799c7b21>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus Early Transcendentals Single Variable, 10th Edition 10th Edition | 9780470647684 | eCampus.com
• The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc.
• The Used copy of this book is not guaranteed to inclue any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself. | {"url":"http://www.ecampus.com/calculus-early-transcendentals-single/bk/9780470647684","timestamp":"2014-04-16T11:25:07Z","content_type":null,"content_length":"56353","record_id":"<urn:uuid:5a559138-e3f0-4a31-bf8f-44e28df6bd06>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Levent Tunçel pictures: 1992 1998 2006
Research Interests and Publications
My research program lies in the areas of mathematical optimization, mathematics of operations research and foundations of computational mathematics. You can obtain the ps and/or pdf files of some of
my recent papers by clicking on the directory publications.
Some Interesting Sites
My work address is:
Professor Levent Tunçel
Department of Combinatorics and Optimization
Faculty of Mathematics
University of Waterloo
Waterloo, Ontario N2L 3G1
Misc. info:
Email: ltuncel at math.uwaterloo.ca
Fax: (519) 725-5441
Tel: (519) 888-4567 ext. 35598 | {"url":"http://www.math.uwaterloo.ca/~ltuncel/","timestamp":"2014-04-21T07:04:37Z","content_type":null,"content_length":"5311","record_id":"<urn:uuid:b9be3b65-008a-4886-82fa-36ca945be143>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are you in Calculus, bella? If so, then you just start out with:
a = -32 ft/sec^2
Where a is acceleration. Since gravity always acts on the ball, is constant, and is negative because it pulls the ball downward, the acceleration is always -32.
Now we take this value and integrate it to get:
v = -32t + C. Since v at t=0 is 64,
64 = -32(0) + C, and C = 64
so v = -32t + 64
Integrate again, and we find that:
s(distance) = -16t^2 + 64t + C
But we know that the distance at time 0 is 0:
0 = -16(0)^2 + 64(0) + C, C = 0
So s = -16t^2 + 64t
And we have just derived the equations that gnitsuk gave.
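The derivation above is easy to sanity-check numerically. A short sketch (my own, just plugging into the v and s formulas derived above):

```python
def velocity(t):
    # v = -32t + 64, from integrating a = -32 with v(0) = 64
    return -32 * t + 64

def height(t):
    # s = -16t^2 + 64t, from integrating v with s(0) = 0
    return -16 * t**2 + 64 * t

# the ball peaks when v = 0, i.e. at t = 2 seconds
print(velocity(2), height(2))  # 0 ft/sec at the top, 64 ft high
print(height(4))               # 0: back on the ground at t = 4
```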
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=3148","timestamp":"2014-04-17T06:58:48Z","content_type":null,"content_length":"11530","record_id":"<urn:uuid:802181cd-6a3e-4804-8b7e-e210f6f3679f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00577-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wildomar SAT Math Tutor
Find a Wildomar SAT Math Tutor
...I feel pretty good with math and engineering topics. I can do some science and other stuff too, just ask me about it. I own my very own whiteboard.
11 Subjects: including SAT math, physics, calculus, geometry
...For that reason, I will provide homework and create progress reports to show the student's areas of mastery and improvement. I will not bill for a session where the student or parent is not
satisfied with my tutoring. I will often ask for ways that I can improve in my teaching methods.
31 Subjects: including SAT math, chemistry, geometry, biology
...I can provide both. I can make a difference, I understand the learning process and teaching. My goal is to transform a student from just getting a grade to excelling in the learning process.
33 Subjects: including SAT math, chemistry, geometry, biology
...Also, I tutored high school students in AP statistics last year. 5. Economics courses AP Economics, Principle/Intermediate of Microeconomics, Principle/ Intermediate of Macroeconomics,
Econometrics I took two year PhD level economic courses including Advanced Macroeconomics I, II, Advanced Mi...
19 Subjects: including SAT math, calculus, statistics, geometry
...I have over fifteen years of sales and marketing experience, including political telemarketing experience for the Republican Party of the United States of America.I have completed coursework in
mathematics from arithmetic to multivariable calculus. I am an English teacher at Elite Educational In...
40 Subjects: including SAT math, reading, English, writing | {"url":"http://www.purplemath.com/Wildomar_SAT_Math_tutors.php","timestamp":"2014-04-20T04:20:17Z","content_type":null,"content_length":"23652","record_id":"<urn:uuid:42f2f3a1-488b-48f2-a65c-c15bf54d6068>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Des Moines, WA ACT Tutor
Find a Des Moines, WA ACT Tutor
...I scored in the 99% my first time taking the official GMAT, GRE and LSAT and I've scored perfect on subsequent exams. I enjoy taking exams (I know that sounds weird), and I enjoy helping
others do the same.I worked in Taiwan for two years as a missionary speaking daily with native Chinese people...
16 Subjects: including ACT Math, geometry, Chinese, algebra 1
...I want to thank you for considering me as your tutor, and I sincerely hope I will be able to assist you in your educational journey.I am a college graduate with a minor in Biology. As such I
have taken numerous math and science classes such as algebra, geometry, chemistry, precalculus, physics, anatomy, and calculus. I have previous experience tutoring math up to the precalculus
22 Subjects: including ACT Math, reading, English, writing
...During that time, I worked in 3 schools in 3 different cities, all with a very different student population. So I have experience teaching Geometry to students at every level. Before becoming
a middle school and high school teacher, I worked as a para-educator in elementary schools for 3 years.
16 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...My own scores are a 170 Verbal and 169 Quantitative. I have a degree in Linguistics from the University of Washington and have a passion for grammar. I have helped many students revise their
writing and develop their own proofreading skills.
32 Subjects: including ACT Math, English, reading, geometry
...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles.
17 Subjects: including ACT Math, calculus, geometry, statistics
| {"url":"http://www.purplemath.com/Des_Moines_WA_ACT_tutors.php","timestamp":"2014-04-20T04:21:04Z","content_type":null,"content_length":"23655","record_id":"<urn:uuid:7c5da931-16b7-45ae-ae33-f088f1fa65fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
- Ramanujan said Hindu Goddess Namagiri whispered equations to him
Topic: Ramanujan said Hindu Goddess Namagiri whispered equations to him
Replies: 1 Last Post: Feb 28, 2008 11:16 PM
Ramanujan said Hindu Goddess Namagiri whispered equations to him
Posted: Feb 28, 2008 3:55 PM
Ramanujan said Hindu Goddess Namagiri whispered equations to him
I posted the following more than a decade ago:
[ Subject: COMPUTING THE MATHEMATICAL FACE OF GOD: S RAMANUJAN
[ From: Dr. Jai Maharaj
[ Date: 15 Mar 1995
February 1990
Computing the Mathematical Face of God: S. Ramanujan
He died on his bed after scribbling down revolutionary
mathematical formulas that bloomed in his mind like
ethereal flowers -- gifts, he said, from a Hindu Goddess.
He was 32 the same age that the advaitan advocate Adi
Shankara died. Shankara, born in 788, left earth in 820.
Srinivasa Ramanujan was born in 1887. He died in 1920 --an
anonymous Vaishnavite brahmin who became the first Indian
mathematics Fellow at Cambridge University. Both Shankara
and Ramanujan possessed supernatural intelligence, a well
of genius that leaves even brilliant men dumb-founded.
Ramanujan was a meteor in the mathematics world of the
World War I era. Quiet, with dharmic sensibilities, yet
his mind blazed with such intuitive improvisation that
British colleagues at Cambridge -- the best math brains in
England -- could not even guess where his ideas originated.
It irked them a bit that Ramanujan told friends the Hindu
Goddess Namagiri whispered equations into his ear. Today's
mathematicians -- armed with supercomputers -- are still
star-struck, and unable to solve many theorems the young
man from India proved quickly by pencil and paper.
Ramanujan spawned a zoo of mathematical creatures that
delight, confound and humble his peers. They call them
"beautiful," "humble," "transcendent," and marvel how he
reduced very complex terrain to simple shapes.
In his day these equations were mainly pure mathematics,
abstract computations that math sages often feel describe
God's precise design for the cosmos. While much of
Ramanujan's work remains abstract, many of his theorems are
now the mathematical power behind several 1990's
disciplines in astrophysics, artificial intelligence and
gas physics. According to his wife -- Janaki, who still
lives outside Madras --her husband predicted "his
mathematics would be useful to mathematicians for more than
a century." Yet, before sailing to England, Ramanujan was
largely ignorant of the prevailing highest-level math. He
flunked out of college in India. Like Albert Einstein, who
toiled as a clerk in a Swiss patent office while evolving
his Special Theory of Relativity at odd hours, Ramanujan
worked as a clerk at a port authority in Madras, spending
every spare moment contemplating the mathematical face of
God. It was here in these sea-smelling, paper-pushing
offices that he was gently pushed into destiny -- a plan
that has all the earmarks of divine design.
Ramanujan was born in Erode, a small, rustic town in Tamil
Nadu, India. His father worked as a clerk in a cloth
merchant's shop. His namesake is that of another medieval
philosophical giant -- Ramanuja -- a Vaishnavite who
postulated the Vedanta system known as "qualified monism."
The math prodigy grew up in the overlapping atmospheres of
religious observances and ambitious academics. He wasn't
spiritually preoccupied, but he was steeped in the reality
and beneficence of the Deities, especially the Goddess
Namagiri. Math, of course, was his intellectual and
spiritual touchstone. No one really knows how early in
life ramanujan awakened to the psychic visitations of
Namagiri, much less how the interpenetration of his mind
and the Goddess' worked. By age twelve he had mastered
trigonometry so completely that he was inventing
sophisticated theorems that astonished teachers. In fact
his first theorems unwittingly duplicated those of a great
mathematician of a hundred years earlier. This feat came
after sifting once through a trigonometry book. He was
disappointed that his "discovery" had already been found.
Then for four years there was numerical silence. At
sixteen a copy of an out-of-date math book from Cambridge
University came into his hands. It listed 5,000 theorems
with sparse, short-cut proofs. Even initiates in the
arcane language of mathematics could get lost in this work.
Ramanujan entered it with the giddy ambition and verve of
an astronaut leaping onto the moon. It subconsciously
triggered a love of numbers that completely saturated his
mind. He could envision strange mathematical concepts like
ordinary people see the waves of an ocean.
Ironically, his focus on math became his academic undoing.
He outpaced his teachers in number theory, but neglected
all other subjects. He could speak adequate English, but
failed in it and history and other science courses. He
lost a scholarship, dropped out, attempted a return but
fell ill and quit a second time. By this time he was
married to Janaki, a young teenager, and was supporting his
mother. Often all night he continued his personal
excursions into the math universe - being fed rice balls by
his wife as he wrote lying belly-down on a cot. During the
day he factored relatively mundane accounts at the post
office for 20 pounds a year. He managed to publish one
math paper.
As mathematicians would say, one branch of potential
reality could have gone with Ramanujan squandering his life
at the port. But with one nudge from the invisible
universe, Namagiri sent him Westward. A manager at the
office admire the young man's work and sensed significance.
He talked him into writing to British mathematicians who
might sponsor him. Ramanujan wrote a simple letter to the
renowned G. H. Hardy at Cambridge, hinting humbly at his
breakthroughs and describing his vegetarian diet and
spartan needs if he should come to the university. He
enclosed one hundred of his theorem equations.
Hardy was the brightest mathematician in England. Yet, as
he knew and would write later at the conclusion of his
life, he had done no original, mind-bending work. At
Cambridge he collaborated with an odd man named Littlewood,
who was so publicly retiring that people joked Hardy made
him up. The two, though living within a hundred yards of
each other, communicated by exchange of terse, math-laden
letters. Ramanujan's letter and equations fell to them like
a broadcast from alien worlds. AT first they dismissed it
as a curiosity. Then, they suddenly became intrigued by the
Indian's musings. Hardy later wrote: "A single look at
them is enough to show that they could only be written down
by a mathematician of the highest class. They must be
true, for if they were not true, no one would have the
imagination to invent them."
Hardy sensed an extremely rare opportunity, a "discovery,"
and quickly arranged a scholarship for the then 26-year-old
Ramanujan. The invitation came to India and landed like a
bomb in Ramanujan's family and community circle. His
mother was horrified that he would lose caste by traveling
to foreign shores. She refused to let him go unless it was
sanctioned by the Goddess. According to one version of the
story, the aged mother then dreamt of the blessing from
Namagiri. But Janaki says her husband himself went to the
Namagiri temple for guidance and was told to make the
voyage. Ramanujan consulted the astrological data for his
journey. He sent is mother and wife to another town so
they wouldn't see him with his long brahmin's hair and bun
trimmed to British short style and his Indian shirt and
wrapcloth swapped for European fashion. He left India as a
slightly plump man with apple-round cheeks and eyes like
bright zeroes.
Arriving in 1914 on the eve of World War I, Ramanujan
experienced severe culture shock at Cambridge. he had to
cook for himself and insisted on going bare foot Hindu
style on the cold floors. But Hardy, a man without airs or
inflated ego, made him feel comfortable amidst the stuffy
Cambridge tradition. Hardy and Littlewood both served as
his mentors for it took two teachers to keep pace with his
advances. Soon, as Hardy recounts, it was Ramanujan who
was teaching them, in fact leaving them in the wake of
incandescent genius.
Within a few months war broke out. Cambridge became a
military college. Vegetable and fruit shortages plagued
Ramanujan's already slim diet. The war took away Littlewood
to artillery research, and Ramanujan and Hardy were left to
retreat into some of the most recondite math possible. One
of the stunning examples of this endeavor is a process
called partitioning, figuring out how many different ways a
whole number can be expressed as the sum of other whole
numbers. Example: 4 is partitioned 5 ways (4 itself, 3+1,
2+2, 2+1+1, 1+1+1+1), expressed as p(4)=5. The higher the
number, the more the partitions. Thus p(7)=15. Deceptively
though, even a marginally larger number creates
astronomical partitions. p(200)=3,972,999,029,388. Ramanujan
-- with Hardy offering technical checks -- invented a
tight, twisting formula that computes the partitions
exactly. To check the theorem a fellow Cambridge
mathematician tallied by hand the partitions for 200. It
took one month. Ramanujan's equation was precisely
correct. U.S. mathematician George Andrews, who in the
late 1960's rediscovered a "lost notebook" of Ramanujan's
and became a lifetime devotee, describes his accuracy as
unthinkable to even attempt. Ramanujan's partition
equation helped later physicists determine the number of
electron orbit jumps in the "shell" model of atoms.
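The partition counts quoted above are easy to check by machine today. Here is a short dynamic-programming sketch (my own illustration; it counts partitions directly rather than using Ramanujan and Hardy's closed-form asymptotic formula):

```python
def partition_count(n):
    """Count the partitions of n: for each allowed part size,
    accumulate the number of ways to reach every total up to n."""
    p = [1] + [0] * n          # p[0] = 1: the empty partition
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print(partition_count(4))    # 5, matching the example above
print(partition_count(7))    # 15
print(partition_count(200))  # 3972999029388 -- the month-long hand tally
```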
Another anecdote demonstrates his mental landscape. By
1917, Ramanujan had fallen seriously ill and was
convalescing in a country house. Hardy took a taxi to
visit him. As math masters like to do he noted the taxi's
number --1729 -- to see if it yielded any interesting
permutations. To him it didn't and he thought to himself
as he went up the steps to the door that it was a rather
dull number and hoped it was not an inauspicious sign. He
mentioned 1729 to Ramanujan who immediately countered,
"Actually, it is a very interesting number. It is the
smallest number expressible as the sum of two cubes in two
different ways."
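Ramanujan's claim about 1729 can also be verified by brute force. A hypothetical sketch, searching all sums of two positive cubes up to a small bound:

```python
from collections import defaultdict

def smallest_taxicab(limit=20):
    """Find the smallest number expressible as a sum of two positive
    cubes in two different ways, searching cubes up to limit**3."""
    ways = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            ways[a**3 + b**3].append((a, b))
    return min(n for n, pairs in ways.items() if len(pairs) >= 2)

print(smallest_taxicab())  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```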
Ramanujan deteriorated so quickly that he was forced to
return to India -- emaciated -- leaving his math notebooks
at Cambridge. He spent his final year face down on a cot
furiously writing out pages and pages of theorems as if a
storm of number concepts swept through his brain. Many
remain beyond today's best math minds.
Debate still lingers as to the origins of Ramanujan's
edifice of unique ideas. Mathematicians eagerly acknowledge
surprise states of intuition as the real breakthroughs, not
logical deduction. There is reticence to accept mystical
overtones, though, like Andrews, many can appreciate
intuition *in the guise* of a Goddess. But we have
Ramanujan's own testimony of feminine whisperings from a
Devi and there is the sheer power of his achievements.
Hindus cognize this reality. As an epilogue to this story,
a seance held in 1934 claimed to have contacted Ramanujan
in the astral planes. Asked if he was continuing his work,
he replied, "No, all interest in mathematics dropped out
after crossing over."
February 1990
Jai Maharaj
Om Shanti
Hindu Holocaust Museum
Hindu life, principles, spirituality and philosophy
The truth about Islam and Muslims
o Not for commercial use. Solely to be fairly used for the educational
purposes of research and open discussion. The contents of this post may not
have been authored by, and do not necessarily represent the opinion of the
poster. The contents are protected by copyright law and the exemption for
fair use of copyrighted works.
o If you send private e-mail to me, it will likely not be read,
considered or answered if it does not contain your full legal name, current
e-mail and postal addresses, and live-voice telephone number.
o Posted for information and discussion. Views expressed by others are
not necessarily those of the poster who may or may not have read the article.
FAIR USE NOTICE: This article may contain copyrighted material the use of
which may or may not have been specifically authorized by the copyright
owner. This material is being made available in efforts to advance the
understanding of environmental, political, human rights, economic,
democratic, scientific, social, and cultural, etc., issues. It is believed
that this constitutes a 'fair use' of any such copyrighted material as
provided for in section 107 of the US Copyright Law. In accordance with Title
17 U.S.C. Section 107, the material on this site is distributed without
profit to those who have expressed a prior interest in receiving the included
information for research, comment, discussion and educational purposes by
subscribing to USENET newsgroups or visiting web sites. For more information
go to: http://www.law.cornell.edu/uscode/17/107.shtml
If you wish to use copyrighted material from this article for purposes of
your own that go beyond 'fair use', you must obtain permission from the
copyright owner.