Where does the deterministic simulation of non-deterministic ω-Turing machines fail?
An $\omega$-Turing machine is just a usual Turing machine $T=(Q,\Sigma,\Gamma,\delta,q_0,F)$ where $Q$ is the finite set of states, $\Sigma$ is the input alphabet, $\Gamma\supset\Sigma$ is the tape
alphabet, $\delta$ is the transition relation, $q_0$ is the initial state and $F\subset Q$ is the set of accepting states. We will consider a special acceptance condition on $\omega$-words
(elements of $\Sigma^\omega$): the language $L(T)\subset\Sigma^\omega$ recognised by $T$ is the set of all $\omega$-words such that, when the initial configuration is given by this word, there exists a
run of $T$ in which every input token is read only finitely many times and only states in $F$ get visited. An $\omega$-Turing machine is called deterministic iff $\delta$ is a function. The
condition that an accepting run must not visit any tape position infinitely often is non-essential in the non-deterministic case, because we can transform every non-deterministic $\omega$-Turing machine
into an equivalent one where every run is non-oscillating.
In 1978, Cohen and Gold proved that non-deterministic $\omega$-Turing machines are strictly more powerful than deterministic ones. The argument goes as follows: they consider a larger class of
deterministic $\omega$-Turing machines with more general acceptance conditions which can all be translated into equivalent non-deterministic $\omega$-Turing machines as defined above, but the
languages recognisable by these machines are closed under complementation. However, using a diagonalisation argument (using a simulation of non-deterministic $\omega$-Turing machines) they prove that
the general class of languages recognisable by $\omega$-Turing machines is not closed under complementation.
Now for me it is not clear why the standard simulation of non-deterministic Turing machines by deterministic ones does not work, let me sketch my approach, which must be wrong: We could try to
simulate the non-deterministic machine step-by-step by computing all possible next configurations (where the states are still in $F$). The possible configurations can get bigger and bigger, but their
number always remains finite. If this deterministic machine has an accepting run, then by König's lemma (which should be applicable because of the simple acceptance condition; for Büchi acceptance it
would not work, because there can be arbitrarily large gaps between two occurrences of an accepting state) the simulated non-deterministic one should have an accepting run, too. I think I am
missing some subtle point, maybe related to the non-oscillation condition (?), but it might also be an obvious detail in the definition. Could anybody clarify that?
1 Answer
You say we can remove the condition that a run read every input only finitely many times, but I don't think that's so. As you noted, König's lemma shows that acceptance is a $\Pi^0_1$
property if we remove that condition. That means that the language of such a machine is a $\Pi^0_1$-class.
On the other hand, if we retain that condition, we can build a machine that accepts Fin, the set of all infinite binary strings with only finitely many 1s. Simply make a machine that
scans right through the input until it sees a 1, then it turns that 1 into a 0, runs back to the beginning of the input and repeats. Since Fin is a properly $\Sigma^0_2$-class, this
shows that we can achieve strictly more by retaining the condition.
I looked at the paper you linked. Your definition of accepting is what the authors call 1'-accepting, while their Theorem 8.6, which I believe you were referring to when you said we
could remove the condition, is about 3-accepting. Now, the authors do show that every 3-accepting non-deterministic machine can be simulated by a 1'-accepting non-deterministic
machine, but the 1'-machine they build is explicitly oscillating.
So you claim that any non-oscillating 1'-accepting non-deterministic machine can be simulated by a non-oscillating 1'-accepting deterministic machine. I'm fairly certain your argument
is correct. There is no contradiction, because the proof that non-determinism is stronger than determinism was for 3-accepting machines; 1'-accepting non-oscillating machines are less powerful.
Note also: the authors' proof that non-determinism is strictly stronger than determinism is for the special case of non-oscillating deterministic machines. If you allowed oscillating
deterministic machines, I think you could simulate non-deterministic machines by deterministic machines without too much work.
Thank you very much, I think that resolved my confusion. I did not realise how the possibility to reject words by oscillation makes the computation more powerful (and had different
stuff in mind). – The User Jul 17 '13 at 19:20
[Tutor] A question about how python handles numbers larger than its 32-bit limit
Adam Bark adam.jtm30 at gmail.com
Tue Sep 23 17:24:48 CEST 2008
2008/9/23 John Toliver <john.toliver at gmail.com>
> Greetings,
> The book I have says when you anticipate that you will be working with
> numbers larger than what python can handle, you place an "L" after the
> number to signal python to treat it as a large number. Does this
> "treating" of the number only mean that Python won't try to represent
> the number internally as a 32bit integer? Python still appears to be
> representing the number only with an L behind it so what is happening to
> the number then. Is the L behind the number telling python to handle
> this large number in HEX instead which would fit into the 32 bit limit?
> thanks in advance,
> John T
The L stands for long integer and means that it will usually use twice the space of a regular integer, so in this case 64 bits.
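A related point worth noting: Python's long type is in fact arbitrary-precision rather than fixed at 64 bits, so integers can grow as large as memory allows. A quick check (written in Python 3 syntax, where all integers are arbitrary-precision and the L suffix is gone):

```python
# Python integers are arbitrary-precision: this value cannot fit in 32
# (or even 64) bits, yet arithmetic on it works exactly, with no overflow.
n = 2**64 + 1
print(n)               # 18446744073709551617
print(n.bit_length())  # 65 bits needed to represent n
```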
Double Root of a Polynomial
May 23rd 2010, 04:43 AM #1
If $x^3 + 3px^2 + 3qx + r = 0$ has a double root, show that the double root must be $\frac{pq-r}{2q-2p^2}$.
It's not hard to see that it's a root of $f(x)=0$ in the first place.
Now you're left with showing that it's also a root of $f'(x)=3x^2+6px+3q=0$
(which means using the abc-formula or plugging $x=\frac{pq-r}{2q-2p^2}$ into $f'(x)$)
Sorry, I'm not following. Could you please expand on that?
Since you didn't post this in the calculus subforum I assume that a calculus approach cannot be used.
Note that your cubic must have the form $(x - a)^2 (x - b)$ where $x = a$ is the repeated root and $x = b$ is the third root. So you can expand this expression and equate the coefficients of the
powers of x to the given cubic. This will give 3 equations:
$3p = -b - 2a$
$3q = 2ab + a^2$
$r = -a^2 b$
and your job is to solve for a in terms of p, q and r.
Yep, I've gotten to that part.
But from the simultaneous equations, I cannot isolate $a$. If you could give me guidance, that would be great.
Also, is it too late to migrate this post to the calculus forum? I would like to see a calculus approach if possible.
To be honest, I don't think this approach will make things easier.
It's a pretty tough system to crack, especially considering we must find the expression $a= \frac{pq-r}{2q-2p^2}$
Another way, of course, would be to plug the given expression in for " $a$" and show that it satisfies $f'(a)=0$,
instead of trying to solve for the value $a$ yourself.
To get back to my post: a root $x=a$ of an equation $f(x)=0$ is a double root when it's also a root of $f'(x)=0$. This fact can be shown quite easily, but I'm not sure if you may use this. It
would make your work a lot easier.
Try this. First we'll number our equations
$1)\;\; 2a + b + 3p = 0$
$2)\;\; a^2 + 2ab - 3q = 0$
$3)\;\; a^2b - r = 0$
First eliminate $b$ from $(1)$ and $(2)$ giving
$3a^2 + 6ap + 3q = 0,$
then from $(1)$ and $(3)$ giving
$2a^3 + 3a^2p + r = 0,$
then from $(2)$ and $(3)$ giving
$a^3 - 3aq + 2r = 0.$
Now can you manipulate these three new equations as to get $a$ all by itself? ( ${\it i.e.}$ no $a^2$ or $a^3$ )
The 3 resulting equations should be $3a^2 + 6ap + 3q = 0$, $2a^3 + 3a^2p - r = 0$ and $a^3 - 3aq - 2r = 0$.
Thank you guys very much. I've finally got the answer.
But I am still very interested in the calculus approach. Could you please expand on that? How would you do it?
Last edited by Lukybear; May 24th 2010 at 09:56 PM.
At a double root, the curve has either a local maximum or a local minimum.
We don't know which, but we know that the derivative is zero there.
The derivative is not zero at the other root.
Hence we solve $f(x)=x^3+3px^2+3qx+r=0$ together with $f'(x)=3x^2+6px+3q=0$.
Re-arranging the 2nd equation for $x^2$, we get $x^2=-2px-q$.
Substituting back into $f(x)=0$ gives $-2px^2-qx-6p^2x-3pq+3qx+r=0$.
Again using the value for $x^2$, we get $(2q-2p^2)x-pq+r=0$, hence $x=\frac{pq-r}{2q-2p^2}$.
I see Archie Meade already outlined a way to do it.
The reason why this works:
If $a$ is a double root of $f(x)=0$, then it's also a root of $f'(x)=0$.
By the product rule: if $f(x)=(x-a)^2(x-b)$ then $f'(x)=(x-a)^2+2(x-a)(x-b)$. Hence $f'(a)=0$.
Thank you very much.
I did realize before all that you had put forward, but just didn't make the connection.
Can I just ask how the equation is able to accommodate the substitution of $x^2$ twice? I've never seen this type of solving method before.
Wouldn't there be a "redundancy" in the equation?
Try this. First we'll number our equations
$1)\;\; 2a + b + 3p = 0$
$2)\;\; a^2 + 2ab - 3q = 0$
$3)\;\; a^2b {\color{red}{\;+\;}} r = 0$
First eliminate $b$ from $(1)$ and $(2)$ giving
$3a^2 + 6ap + 3q = 0,$
then from $(1)$ and $(3)$ giving
$2a^3 + 3a^2p {\color{red}{\;-\;}} r = 0,$
then from $(2)$ and $(3)$ giving
$a^3 - 3aq {\color{red}{\;-\;}} 2r = 0.$
Now can you manipulate these three new equations as to get $a$ all by itself? ( ${\it i.e.}$ no $a^2$ or $a^3$ )
Yes, there is a typo (in red above), as pointed out by Archie.
Last edited by Jester; May 25th 2010 at 07:34 AM. Reason: fixed latex
By virtue of the fact that we are looking for the double root,
we can utilise the derivative which is a quadratic.
The second root does not satisfy $f'(x)=0$;
the double root and one other value of $x$ satisfy this (if the cubic has both a local max and a local min, as in this case when $f(x)$ has no triple root).
The double root is one of the two values of x for which the derivative is zero.
However, only the double root satisfies both f(x)=0 and f'(x)=0.
Notice the way Danny solved the equations....
We are looking for the double root "a".
The 3 equations all contain the 2nd root "b", which is a fly in the ointment,
hence he proceeded to eliminate "b" as we had a system of simultaneous equations.
He also eliminated $a^3$ and $a^2$ to be left with $a$, the double root.
Using the derivative, we proceed to eliminate $x^3$ and $x^2$ to be left with x,
because this x is the double root, corresponding to f(x)=0 and f'(x)=0.
At the double root, $f'(x)=3x^2+6px+3q=0$,
hence we now have a linear expression for $x^2$ at the double root (you could use $u=x^2$):
$3x^2=-6px-3q\ \Rightarrow\ x^2=-2px-q$
Therefore we can rewrite $f(x)=0$ for the double root.
We want $x$, and $f(x)$ still contains $x^2$; but $x^2=-2px-q$, since this value of $x$ is the $x$-coordinate of the double root. So
$f(x)=(-2p)(-2px-q)-qx-6p^2x-3pq+3qx+r=0$
for the $x$-value of the double root.
We just replace all $x^2$ terms at any stage by the value of $x^2$ given by the derivative, until the only x terms left are multiples of x.
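The closing formula can also be checked numerically. The sketch below (not from the thread) builds a cubic with a chosen double root $a$ and third root $b$ using the relations $3p=-(2a+b)$, $3q=a^2+2ab$, $r=-a^2b$ from earlier in the thread, then confirms that $\frac{pq-r}{2q-2p^2}$ recovers $a$:

```python
# Build x^3 + 3px^2 + 3qx + r = (x - a)^2 (x - b) from a chosen double
# root a and third root b, then recover a from the closed-form formula.
def double_root(a, b):
    p = -(2*a + b) / 3       # from 3p = -(2a + b)
    q = (a*a + 2*a*b) / 3    # from 3q = a^2 + 2ab
    r = -a*a*b               # from r = -a^2 b
    return (p*q - r) / (2*q - 2*p*p)   # requires q != p^2 (no triple root)

print(double_root(2, 5))   # 2.0
print(double_root(-1, 4))  # ≈ -1.0
```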
I was thinking "graph" as the double root is on the x-axis and the x-axis is the tangent to the curve there.
Dinkydoe showed nicely how to work with it without that necessity.
Complex Patterns and Advanced Features
This Tutorial Is Intended for Advanced Users
Due to the complex inner workings of the Notation Package, it is helpful to outline some of the more advanced features and structures of the Mathematica front end and how they relate to the Notation
Package. The following sections give a small overview of the functionality of tag boxes, the specific tags used by the Notation Package, and the tag box option SyntaxForm.
The reader should be familiar with the concepts in "Textual Input and Output" and moreover understand the following tutorials: "The Representation of Textual Forms," "The Interpretation of Textual
Forms," "Representing Textual Forms by Boxes," "String Representation of Boxes," "Converting between Strings, Boxes, and Expressions," and "Low-Level Input and Output Rules."
Tag Boxes
A TagBox is a box structure just like RowBox, SubscriptBox, or GridBox. It is used to change the structure of an expression or indicate a grouping or interpretation of a subexpression at an
underlying level. To illustrate tag boxes, consider the following input, which contains an embedded TagBox.
All Mathematica input and output is made up of box structures at a low level. When Mathematica receives input, these box structures are parsed into internal expressions, which can be thought of as
full-form expressions. Internal evaluation then takes place, and finally the internal structures are transformed back into box structures for displaying in the Mathematica front end. You can reveal
how Mathematica sees this input at a low level by choosing Show Expression under the Cell menu.
Here is the underlying representation of the expression above in terms of boxes, displayed using the Show Expression menu item located under the Cell menu.
The above expression contains a subexpression TagBox[SuperscriptBox["x", "2"], foo]. It is important to note that this box expression as normally viewed in Mathematica looks visually like x^2 even
though it has an embedded TagBox. Information contained in the tag is visually hidden from the user. When an expression containing a TagBox is input into Mathematica, the default interpretation of
the subexpression surrounded by the TagBox is to wrap the TagBox name around the parsed subexpression, in this case to wrap foo around x^2.
The embedded foo tag has no special parsing behavior associated with it.
However, you can define your own rules for the way specific tag boxes are parsed. For instance, by using the low-level function MakeExpression, you can change how Mathematica will parse expressions
containing tag boxes.
By defining a new rule for MakeExpression, you can change how Mathematica will parse expressions containing a TagBox with the tag foo.
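A sketch of what such a rule might look like (hypothetical; the actual definition used in this page's examples is not shown in the extracted text — the rule below simply returns the raw boxes, as the surrounding prose describes):

```wolfram
(* Hypothetical rule: parse any TagBox tagged foo as just its raw boxes.
   MakeExpression must return a held expression, hence HoldComplete. *)
MakeExpression[TagBox[boxes_, foo], StandardForm] := HoldComplete[boxes]
```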
The embedded foo tag now has the special parsing behavior associated with it to just return the boxes.
Knowing that special behaviors can be set up for specific tags, you can now examine the tags defined by the Notation Package. The Notation Package defines three tags that have special behaviors:
NotationTemplateTag, NotationPatternTag, and NotationMadeBoxesTag. These are all string tags for two reasons. First, by using string tags you can avoid any potential problems to do with package
contexts and redefinition of the symbols. Second, in Mathematica if a TagBox has a string tag and there is a named style on the stylesheet path with the same name as the string tag, then the TagBox
will be displayed in that style. This lets you omit the BaseStyle option from the TagBox and consequently your box structures are smaller and more readable.
The Tag NotationTemplateTag
NotationTemplateTag is a string tag used by the Notation Package to grab box structures before they can be parsed by Mathematica. In fact, NotationTemplateTag acts rather like the tag defined above.
All Notation, Symbolize, and InfixNotation templates on the Notation palette contain tag boxes with an embedded string tag: NotationTemplateTag. The embedded TagBox ensures that the Notation Package
can obtain the correct parsing information and retain the proper styling and grouping information. This embedded tag is used to capture the box structure, and the captured structure is thus wrapped
with a ParsedBoxWrapper.
You can avoid using notation templates if you wrap raw box structures with a ParsedBoxWrapper.
Complex Patterns and the Tag NotationPatternTag
For normal purposes it is usually sufficient that the patterns present in Notation and Symbolize statements are simple patterns. However, it is sometimes necessary or desirable to use more
complicated patterns in notations. For example, a notation might only be valid when a certain pattern is a number. To allow more complex patterns, you can embed a NotationPatternTag tag box inside a
notation statement. It is critical that any notation you define that uses a complex pattern has an embedded NotationPatternTag, otherwise the pattern will be treated as a verbatim expression and not
function as a pattern. Like NotationTemplateTag, this should be a string tag. The Notation palette has a button labeled InsertPatternWrapper that will embed a NotationPatternTag around the selection,
as well as tint the background of the selection to indicate that a complex pattern is present. (This tinting occurs as a result of the named style NotationPatternTag, since this is a string tag.)
It must also be pointed out that the pattern matching on the external representation is performed on the box structures, so usually you will have to make small transformations to convert box
structures into normal expressions. Pattern matching on the internal representation follows conventional pattern matching.
This defines a function analogous to NumericQ that operates on box structures.
You should be careful to avoid unwanted evaluation through testing functions when parsing expressions (see "Parse without Evaluation Where Possible").
You can see that the patterns a_?StringNumericQ and a_?NumericQ do not appear literally since they were surrounded by a NotationPatternTag in the notation statement.
The Tag NotationMadeBoxesTag
The tag NotationMadeBoxesTag is intended for advanced users. It is also a string tag. It is used to indicate that box processing and formatting has already been done and that the Notation Package
should not perform any processing. Typically you would use this tag for surrounding your own functions that return expressions that have already been turned into boxes or parsed into expressions. To
illustrate the tag NotationMadeBoxesTag, you can examine a notation statement that might be part of a number of statements used to create a notation for tensors.
You can see from the internal definition returned that there is no further processing of the expression, i.e. it is not surrounded by a MakeBoxes[..., StandardForm].
Changing Precedences and the TagBox Option SyntaxForm
Using the option SyntaxForm, you can change the precedence of an expression containing a TagBox. A tag box containing a SyntaxForm option will look like TagBox[boxes, tag, SyntaxForm -> "string"], where string is a string
indicating the operator on which the precedence of the tag box is modeled. The following examples illustrate the SyntaxForm option.
You can define a new notation for a composite arrow surrounded by a TagBox that has the SyntaxForm option set to a low precedence.
You can illustrate the underlying groupings of the expressions above in the following table.
A table illustrating the precedences and grouping of expressions with and without precedence-changing tag boxes.
The SyntaxForm option value can be any operator string valid in Mathematica, that is, any operator contained in the UnicodeCharacters.tr file. The SyntaxForm value can also include symbols before and
after the operator to indicate whether the precedence is that of a prefix operator, an infix operator, or a postfix operator. Some typical values for the SyntaxForm option and their precedence behaviors are given in the table below.
group as the operator times
group as a symbol
group as an infix plus operator
group as a for-all operator
group as an integrate operator
group as a prefix union operator
group as white space
Typical SyntaxForm values and their associated precedence behaviors. | {"url":"http://reference.wolfram.com/mathematica/Notation/tutorial/ComplexPatternsAndAdvancedFeatures.zh.html","timestamp":"2014-04-17T04:17:45Z","content_type":null,"content_length":"58260","record_id":"<urn:uuid:0d278769-8188-4653-9690-06f46d1751f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus/Some Important Theorems
This section covers three theorems of fundamental importance to the topic of differential calculus: The Extreme Value Theorem, Rolle's Theorem, and the Mean Value Theorem. It also discusses the
relationship between differentiability and continuity.
Extreme Value Theorem
Classification of Extrema
We start out with some definitions.
Global Maximum
A global maximum (also called an absolute maximum) of a function $f$ on a closed interval $I$ is a value $f(c)$ such that $f(c)\geq f(x)$ for all $x$ in $I$.
Global Minimum
A global minimum (also called an absolute minimum) of a function $f$ on a closed interval $I$ is a value $f(c)$ such that $f(c)\leq f(x)$ for all $x$ in $I$.
Maxima and minima are collectively known as extrema.
The Extreme Value Theorem
Extreme Value Theorem
If $f$ is a function that is continuous on the closed interval [$a,b$], then $f$ has both a global minimum and a global maximum on [$a,b$]. It is assumed that a and b are both finite.
The Extreme Value Theorem is a fundamental result of real analysis whose proof is beyond the scope of this text. However, the truth of the theorem allows us to talk about the maxima and minima of
continuous functions on closed intervals without concerning ourselves with whether or not they exist. When dealing with functions that do not satisfy the premises of the theorem, we will need to
worry about such things. For example, the unbounded function $f(x)=x$ has no extrema whatsoever. If $f(x)$ is restricted to the semi-closed interval $I=$[$0,1$), then $f$ has a minimum value of $0$
at $x=0$, but it has no maximum value since, for any given value $c$ in $I$, one can always find a larger value of $f(x)$ for $x$ in $I$, for example by forming $f(d)$, where $d$ is the average of
$c$ with $1$. The function $g(x)=\frac{1}{x}$ has a discontinuity at $x=0$. $g(x)$ fails to have any extrema in any closed interval around $x=0$ since the function is unbounded below as one
approaches $0$ from the left, and it is unbounded above as one approaches $0$ from the right. (In fact, the function is undefined for x=0. However, the example is unaffected if g(0) is assigned any
arbitrary value.)
The Extreme Value Theorem is an existence theorem. It tells us that global extrema exist if certain conditions are met, but it doesn't tell us how to find them. We will discuss how to determine the
extrema of continuous functions in the section titled Extrema and Points of Inflection.
Rolle's Theorem
Rolle's Theorem
If a function, $f(x)$, is continuous on the closed interval $[a,b]$, is differentiable on the open interval $(a,b)$, and $f(a) = f(b)$, then there exists at least one number c, in the
interval $(a,b)$ such that $f'(c) = 0$.
Rolle's Theorem is important in proving the Mean Value Theorem. Intuitively it says that if you have a function that is continuous everywhere in an interval bounded by points where the function has
the same value, and if the function is differentiable everywhere in the interval (except maybe at the endpoints themselves), then the function must have zero slope in at least one place in the
interior of the interval.
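As a concrete illustration (an example chosen here, not from the original text): $f(x)=x^3-x$ satisfies $f(0)=f(1)=0$, so Rolle's Theorem guarantees some $c$ in $(0,1)$ with $f'(c)=0$; solving $3c^2-1=0$ gives $c=1/\sqrt{3}$.

```python
import math

# Rolle's Theorem on f(x) = x^3 - x over [0, 1]: f(0) = f(1) = 0, so some
# c in (0, 1) must have f'(c) = 3c^2 - 1 = 0, namely c = 1/sqrt(3).
f = lambda x: x**3 - x
c = 1 / math.sqrt(3)
print(f(0.0), f(1.0))  # 0.0 0.0  (equal endpoint values)
print(c, 3*c**2 - 1)   # ~0.577, ~0.0  (derivative vanishes, c in (0, 1))
```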
Proof of Rolle's Theorem
If $f$ is constant on $[a,b]$, then $f'(x)=0$ for every $x$ in $[a,b]$, so the theorem is true. So for the remainder of the discussion we assume $f$ is not constant on $[a,b]$.
Since $f$ satisfies the conditions of the Extreme Value Theorem, $f$ must attain its maximum and minimum values on $[a,b]$. Since $f$ is not constant on $[a,b]$, the endpoints cannot be both maxima
and minima. Thus, at least one extremum exists in $(a,b)$. We can suppose without loss of generality that this extremum is a maximum because, if it were a minimum, we could consider the function $-f$
instead. Let $f(c)$ with $c$ in $(a,b)$ be a maximum. It remains to be shown that $f'(c)=0$.
By the definition of derivative, $f'(c)=\lim_{h\to0}\frac{f(c+h)-f(c)}{h}$. By substituting $h=x-c$, this is equivalent to $\lim_{x\to c}\frac{f(x)-f(c)}{x-c}$. Note that $f(x)-f(c)\leq 0$ for all
$x$ in $[a,b]$ since $f(c)$ is the maximum on $[a,b]$.
$\lim_{x\to c^{-}}\frac{f(x)-f(c)}{x-c}\geq0$ since it has non-positive numerator and negative denominator.
$\lim_{x\to c^{+}}\frac{f(x)-f(c)}{x-c}\leq0$ since it has non-positive numerator and positive denominator.
The limits from the left and right must be equal since the function is differentiable at $c$, so $\lim_{x\to c}\frac{f(x)-f(c)}{x-c}=0=f'(c)$.
1. Show that Rolle's Theorem holds true between the x-intercepts of the function $f(x)=x^2-3x$.
Mean Value Theorem
Mean Value Theorem
If $f(x)$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a,b)$, there exists a number, $c$, in the open interval $(a,b)$ such that
$f'(c) = \frac{f(b) - f(a)}{b - a}$.
The Mean Value Theorem is an important theorem of differential calculus. It basically says that for a differentiable function defined on an interval, there is some point on the interval whose
instantaneous slope is equal to the average slope of the interval. Note that Rolle's Theorem is the special case of the Mean Value Theorem when $f(a)=f(b)$.
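For instance (a numerical illustration added here, not part of the original text), take $f(x)=e^x$ on $[0,1]$: the average slope is $e-1$, and $f'(c)=e^c$ matches it at $c=\ln(e-1)$, which indeed lies in $(0,1)$:

```python
import math

# Mean Value Theorem for f(x) = e^x on [0, 1]: the average slope over the
# interval is (e - 1)/1, and f'(c) = e^c equals it at c = ln(e - 1).
f = math.exp
a, b = 0.0, 1.0
avg_slope = (f(b) - f(a)) / (b - a)  # e - 1, about 1.718
c = math.log(avg_slope)              # ln(e - 1), about 0.541, inside (0, 1)
print(c, math.exp(c) - avg_slope)    # derivative at c equals the avg slope
```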
In order to prove the Mean Value Theorem, we will prove a more general statement, of which the Mean Value Theorem is a special case. The statement is Cauchy's Mean Value Theorem, also known as the
Extended Mean Value Theorem.
Cauchy's Mean Value Theorem
Cauchy's Mean Value Theorem
If $f(x)$, $g(x)$ are continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a,b)$, then there exists a number, $c$, in the open interval $(a,b)$ such that $f'(c)(g(b)-g(a)) = g'(c)(f(b)-f(a))$.
If $g(b)\neq g(a)$ and $g'(c)\neq 0$, then this is equivalent to
$\frac{f'(c)}{g'(c)} = \frac{f(b) - f(a)}{g(b) - g(a)}$.
To prove Cauchy's Mean Value Theorem, consider the function $h(x)=f(x)(g(b)-g(a))-g(x)(f(b)-f(a))-f(a)g(b)+f(b)g(a)$. Since both $f$ and $g$ are continuous on $[a,b]$ and differentiable on $(a,b)$,
so is $h$. Also, $h'(x)=f'(x)(g(b)-g(a))-g'(x)(f(b)-f(a))$. Since $h(a)=h(b)$ (see the exercises), Rolle's Theorem tells us that there exists some number $c$ in $(a,b)$ such that $h'(c)=0$. This implies
that $f'(c)(g(b)-g(a))=g'(c)(f(b)-f(a))$, which is what was to be shown.
2. Show that $h(a)=h(b)$, where $h(x)$ is the function that was defined in the proof of Cauchy's Mean Value Theorem.
3. Show that the Mean Value Theorem follows from Cauchy's Mean Value Theorem.
4. Find the $x=c$ that satisfies the Mean Value Theorem for the function $f(x)=x^3$ with endpoints $x=0$ and $x=2$.
5. Find the point that satisifies the mean value theorem on the function $f(x) = \sin(x)$ and the interval $[0,\pi]$.
Differentiability Implies Continuity
If $f'(x_0)$ exists then $f$ is continuous at $x_0$. To see this, note that $\lim_{x\to x_0}(x-x_0)f'(x_0)=0$. But
\begin{align}\lim_{x\to x_0}(x-x_0)f'(x_0)&=\lim_{x\to x_0}(x-x_0)\frac{f(x)-f(x_0)}{x-x_0}\\ &=\lim_{x\to x_0}(f(x)-f(x_0))\\ &=\lim_{x\to x_0}f(x)-f(x_0)\end{align}
This implies that $\lim_{x\to x_0}f(x)-f(x_0)=0$ or $\lim_{x\to x_0}f(x)=f(x_0)$, which shows that $f$ is continuous at $x=x_0$.
The converse, however, is not true. Take $f(x)=|x|$, for example. $f$ is continuous at 0 since $\lim_{x\to 0^-}|x|=\lim_{x\to 0^-}-x=0$ and $\lim_{x\to 0^+}|x|=\lim_{x\to 0^+}x=0$ and $|0|=0$, but it
is not differentiable at 0 since $\lim_{h\to 0^-}\frac{|0+h|-|0|}{h}=\lim_{h\to 0^-}\frac{-h}{h}=-1$ but $\lim_{h\to 0^+}\frac{|0+h|-|0|}{h}=\lim_{h\to 0^+}\frac{h}{h}=1$.
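The one-sided difference quotients of $|x|$ at $0$ can also be seen numerically (a small sketch added here, not from the original text):

```python
# One-sided difference quotients of f(x) = |x| at x0 = 0: they approach
# -1 from the left and +1 from the right, so no two-sided limit exists
# and |x| is not differentiable at 0, despite being continuous there.
def diff_quotient(f, x0, h):
    return (f(x0 + h) - f(x0)) / h

left = diff_quotient(abs, 0.0, -1e-8)
right = diff_quotient(abs, 0.0, 1e-8)
print(left, right)  # -1.0 1.0
```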
Last modified on 15 May 2012, at 17:09 | {"url":"http://en.m.wikibooks.org/wiki/Calculus/Some_Important_Theorems","timestamp":"2014-04-18T21:16:23Z","content_type":null,"content_length":"44879","record_id":"<urn:uuid:bb1607ca-3e5b-43e6-a49f-b9cfabcad7e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayonne Calculus Tutor
Find a Bayonne Calculus Tutor
...I also make myself available by phone and e-mail outside of lessons--My goal is for you to succeed on your tests. My expertise is in basic and advanced math: algebra 1/2, trigonometry,
geometry, precalculus/analysis, calculus (AB/BC), and statistics. I also teach high school level physics 1 and 2.
10 Subjects: including calculus, physics, geometry, statistics
...As a tutor, I feel it is my job to fill in those gaps so the student can learn effectively. I have experience tutoring in a broad subject range, from Algebra through college-level Calculus. I recently passed, and am proficient in the material on, both Exams P/1 and FM/2. I am able to tutor for the Praxis for Mathematics Content Knowledge.
21 Subjects: including calculus, geometry, statistics, accounting
...Speaking of my qualifications, I think you should know the details of my background, which include attending Rutgers University and majoring in Biomathematics with a minor in Psychology. Math
is one of those subjects that seems to follow you wherever you go in life doesn't it?? Mathemati...
14 Subjects: including calculus, geometry, algebra 1, precalculus
...Since then I have tutored Algebra 2 and Calculus. I've also had a few years' experience in tutoring the SAT (all subjects, with math being the most proficient). Currently I volunteer at a
school in Manhattan on Saturday mornings, teaching SAT math to high school students. My methods can be flexible or rigid-- I always work with students to see what they respond best to.
14 Subjects: including calculus, English, reading, writing
...My students are consistently amazed by what they learn in the sessions. I bring a level of excitement that is absolutely contagious, so even if you dread standardized tests, you will feel that
the sessions are far more interesting than you would otherwise expect. I am a former premed student wi...
27 Subjects: including calculus, chemistry, physics, writing | {"url":"http://www.purplemath.com/Bayonne_calculus_tutors.php","timestamp":"2014-04-19T19:48:22Z","content_type":null,"content_length":"24070","record_id":"<urn:uuid:6073af14-c79d-49dd-98be-220b9984d59d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flushing, NY ACT Tutor
Find a Flushing, NY ACT Tutor
...Real Experience - I have tutored over 300 students in all areas of math including ACT/SAT math, algebra, geometry, pre-calculus, Algebra Regents, and more. I specialize in SAT/ACT Math. I teach
students how to look at problems, how to break them down, which methods, strategies, and techniques to apply, and how to derive the quickest solution.
30 Subjects: including ACT Math, reading, English, writing
...I have a degree in physics and a minor in mathematics. I'm also currently working on a masters in applied mathematics & statistics. In several courses, such as Ordinary Differential Equations
(ODE) and Partial Differential Equations (PDE), we make heavy use of programs such as Mathematica and Maple.
83 Subjects: including ACT Math, chemistry, calculus, geometry
...As a workplace, I generally prefer comfortable and suitable public places, like a coffee shop with wide tables, or a library. But I also take other offers into account when needed. As a professional test taker and experienced private Math tutor, I have a lot of knowledge to share on Math ...
25 Subjects: including ACT Math, calculus, statistics, logic
...I do not have a standard formula for every student and I do not try to make my students fit a single mold. Rather I enable each student’s unique strengths to shine. My approach is rigorous: I
set high standards and expect 100% commitment from each student.
52 Subjects: including ACT Math, reading, English, writing
...I excel in differentiating curricula to maximize engagement and learning. I have an intimate knowledge of what it takes to excel academically, which in turn, enables me to provide my students
with confidence, knowledge, motivation, fostering student development across a wide spectrum. I currently teach SAT and ACT prep for an elite NYC tutoring company.
26 Subjects: including ACT Math, reading, English, writing | {"url":"http://www.purplemath.com/Flushing_NY_ACT_tutors.php","timestamp":"2014-04-18T01:14:11Z","content_type":null,"content_length":"23842","record_id":"<urn:uuid:969136ee-cf11-4d91-aac6-bf820a7fc557>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vineland Trigonometry Tutor
...As part of that job, I led calculus recitations, tutored students in a variety of math classes at a math assistance center, and also worked as a private tutor. I have a Certificate of
Eligibility to teach mathematics in New Jersey, and hope to become a teacher through the Alternate Route To Teac...
19 Subjects: including trigonometry, calculus, geometry, GRE
...Chemistry can be confusing for a lot of students, mainly because most of the work is conceptual and cannot be physically seen; however it is a very interesting field once you master the basics.
I love to get students excited about science, and chemistry is no exception. I passed the AP chemistry exam in high school and have taken 1 more year of inorganic in college.
30 Subjects: including trigonometry, chemistry, English, biology
Latoya graduated from the University of Pittsburgh in December of 2007 with a Bachelor's degree in Psychology and Medicine and a minor in Chemistry. Currently, she is pursuing her Master's in
Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities.
13 Subjects: including trigonometry, chemistry, geometry, biology
...I have also spent 2 years tutoring individuals in the different mathematics courses at The College of New Jersey's Tutoring center. I would say that my teaching style is one that reinforces
critical thinking. I do not wish for students to just memorize how to do particular problems.
9 Subjects: including trigonometry, calculus, algebra 1, algebra 2
...During my time in college, I took one 3-credit course in Linear Algebra. At least three of the other fourteen math courses I took also touched on topics from Linear Algebra. While I was
studying, I worked in the Math Center at my college.
11 Subjects: including trigonometry, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/vineland_nj_trigonometry_tutors.php","timestamp":"2014-04-21T12:52:01Z","content_type":null,"content_length":"24364","record_id":"<urn:uuid:368f035f-f861-4280-95df-719bd891318e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple definition of force?
Force is something which changes or tends to change the state of motion of a body.
@jkasdhk :)
a force is any influence that causes an object to undergo a certain change, either concerning its movement, direction, or geometrical construction. (from Wikipedia)
Force is the rate of change of momentum.
Thanks... :) I understand :)
According to Feynman, whose definition might be one of the best, force is something which causes acceleration of a body. However, we cannot say it is exactly the rate of change of momentum, because that leads to the question of what momentum is, and momentum is then defined as the integral of force with respect to time, so the definitions go in a circle. Hence the most accurate answer (yet) could be: force is mass times acceleration, as long as the mass is constant, or, better, force is a quantity describing the action on a physical body while it is accelerating.
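For constant mass, the two definitions discussed in the thread agree; a minimal numerical sketch (the values and names are illustrative):

```python
# For a body of constant mass, the rate of change of momentum dp/dt
# equals mass times acceleration, so F = dp/dt and F = m*a coincide.
m = 2.0   # mass in kg (illustrative value)
a = 3.0   # constant acceleration in m/s^2 (illustrative value)

def v(t):
    return a * t          # velocity under constant acceleration

def p(t):
    return m * v(t)       # momentum p = m * v

dt = 1e-6
t = 5.0
dp_dt = (p(t + dt) - p(t - dt)) / (2 * dt)   # central difference
print(dp_dt, m * a)       # both ≈ 6.0
```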
St Albans, NY Calculus Tutor
Find a St Albans, NY Calculus Tutor
...While I specialize in calculus and the more analytic subjects of mathematics, I am just as comfortable with the algebraic aspects. I usually tutor older high-school students or younger
undergrads, but I am willing to tutor others. When it comes to choosing a time for an appointment, I am very flexible.
3 Subjects: including calculus, logic, linear algebra
...I assign work to be done at home, so that everything being taught is retained. Critique is much appreciated, as I am always looking to strengthen my teaching skills. I am flexible with time, as long as communication is maintained to give notice of cancellation or rescheduling.
56 Subjects: including calculus, reading, Spanish, English
...My fondness for math has always been present and I plan on ultimately pursuing a career in actuarial mathematics. I enjoy helping adults and young adults both reach the class level, if they
are behind, and even excel ahead of the class. I have had experience with topics and exams from grade school mathematics through calculus.
10 Subjects: including calculus, geometry, algebra 1, algebra 2
I obtained my BSc in Applied Mathematics and BA in Economics dual-degree from the University of Rochester (NY) in 2013. I am a part-time tutor in New York City and want to help those students who
need exam preparation support or language training. I worked at the Department of Mathematics on campus as a Teaching Assistant for two years and I know how to help you improve your skills.
7 Subjects: including calculus, algebra 1, algebra 2, SAT math
...This is how all math works! Math is a system that makes sense, and once you understand it, homework, quizzes, and tests are easy! Greetings!
12 Subjects: including calculus, physics, MCAT, trigonometry
Related St Albans, NY Tutors
St Albans, NY Accounting Tutors
St Albans, NY ACT Tutors
St Albans, NY Algebra Tutors
St Albans, NY Algebra 2 Tutors
St Albans, NY Calculus Tutors
St Albans, NY Geometry Tutors
St Albans, NY Math Tutors
St Albans, NY Prealgebra Tutors
St Albans, NY Precalculus Tutors
St Albans, NY SAT Tutors
St Albans, NY SAT Math Tutors
St Albans, NY Science Tutors
St Albans, NY Statistics Tutors
St Albans, NY Trigonometry Tutors | {"url":"http://www.purplemath.com/st_albans_ny_calculus_tutors.php","timestamp":"2014-04-21T07:39:19Z","content_type":null,"content_length":"24124","record_id":"<urn:uuid:095172e6-e19c-41b1-a356-c5883ad527c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radical Equations | Algebra
Suppose your teacher has instructed the members of your math class to work in pairs, and she has asked you to find the length of a line segment. You get $\sqrt{2x+6}$$x$
Solving radical equations is no different from solving linear or quadratic equations. Before you can begin to solve a radical equation, you must know how to cancel the radical. To do that, you must
know its inverse.
Original Operation Inverse Operation
Cube Root Cubing (to the third power)
Square Root Squaring (to the second power)
Fourth Root Fourth power
“$n$th” Root Raising to the “$n$th” power
To solve a radical equation, you apply the solving equation steps you learned in previous Concepts, including the inverse operations for roots.
Example A
Solve $\sqrt{2x-1}=5$
The first operation that must be removed is the square root. Square both sides.
\begin{align}\left ( \sqrt{2x-1} \right )^2&=5^2\\ 2x-1&=25\\ 2x&=26\\ x&=13\end{align}
Remember to check your answer by substituting it into the original problem to see if it makes sense.
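The square-then-check procedure can be sketched in a few lines of Python (the helper name `solve_radical` and its parameters are ours, purely illustrative):

```python
import math

# Solve sqrt(a*x + b) = rhs by squaring both sides (a*x + b = rhs^2),
# then substitute the candidate back into the original equation to check.
def solve_radical(a, b, rhs):
    x = (rhs ** 2 - b) / a
    assert math.isclose(math.sqrt(a * x + b), rhs)  # check the candidate
    return x

print(solve_radical(2, -1, 5))    # 13.0, matching Example A
```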
Extraneous Solutions
Not every solution of a radical equation will check in the original problem. This is called an extraneous solution. This means you can find a solution using algebra, but it will not work when
checked. This is because of the rule in a previous Concept:
$\sqrt[n]{a}$ is undefined when $n$ is even and $a<0$; or, in words, even roots of negative numbers are undefined.
Example B
Solve $\sqrt{x-3}-\sqrt{x}=1$
\begin{align}\text{Isolate one of the radical expressions:} \qquad \sqrt{x-3}&=\sqrt{x}+1\\ \text{Square both sides:} \qquad \left ( \sqrt{x-3} \right )^2 & = \left ( \sqrt{x}+1 \right )^2\\ \text{Remove parentheses:} \qquad x-3&=\left ( \sqrt{x} \right )^2 + 2\sqrt{x}+1\\ \text{Simplify:} \qquad x-3&=x+2\sqrt{x}+1\\ \text{Now isolate the remaining radical:} \qquad -4&=2\sqrt{x}\\ \text{Divide all terms by } 2\text{:} \qquad -2 &= \sqrt{x}\\ \text{Square both sides:} \qquad x&=4\end{align}
Check: $\sqrt{4-3} \stackrel{?}{=} \sqrt{4}+1 \Rightarrow \sqrt{1} \stackrel{?}{=} 2+1 \Rightarrow 1 \neq 3$. The check fails, so $x=4$ is an extraneous solution and the equation has no real solution.
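The failed check can be reproduced numerically (a minimal sketch, not part of the lesson):

```python
import math

# The algebra in Example B yields the candidate x = 4. Substituting it
# back into the original equation sqrt(x - 3) - sqrt(x) = 1 gives
# sqrt(1) - sqrt(4) = -1, not 1, so x = 4 is extraneous.
x = 4
lhs = math.sqrt(x - 3) - math.sqrt(x)
print(lhs)        # -1.0
print(lhs == 1)   # False
```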
Radical Equations in Real Life
Example C
A sphere has a volume of $456 \ cm^3$. If the radius of the sphere is increased by $2 \ cm$, what is the new volume of the sphere?
1. Define variables. Let $r=$ the radius of the sphere.
2. Find an equation. The volume of a sphere is given by the formula: $V=\frac{4}{3}\pi r^3$
By substituting 456 for the volume variable, the equation becomes $456=\frac{4}{3} \pi r^3$
\begin{align}\text{Multiply by } 3\text{:} \qquad 1368 &= 4\pi r^3\\ \text{Divide by } 4\pi\text{:} \qquad 108.92&=r^3\\ \text{Take the cube root of each side:} \qquad r&=\sqrt[3]{108.92} \Rightarrow r = 4.776 \ cm\\ \text{The new radius is 2 centimeters more:} \qquad r &= 6.776 \ cm\\ \text{The new volume is:} \qquad V &= \frac{4}{3}\pi (6.776)^3 = 1302.5 \ cm^3\end{align}
Check by substituting the values of the radii into the volume formula.
$V=\frac{4}{3}\pi r^3=\frac{4}{3}\pi (4.776)^3=456 \ cm^3$
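The same computation can be sketched in Python (illustrative; small differences from the lesson's figures come from its rounded intermediate values):

```python
import math

# Recover r from V = (4/3)*pi*r^3, increase the radius by 2 cm,
# and recompute the volume.
V = 456.0
r = (3 * V / (4 * math.pi)) ** (1 / 3)
V_new = 4 / 3 * math.pi * (r + 2) ** 3
print(r)       # ≈ 4.775 (the lesson rounds to 4.776)
print(V_new)   # ≈ 1302.5
```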
Guided Practice
Solve $\sqrt{x+15}=\sqrt{3x-3}$
Begin by canceling the square roots by squaring both sides.
\begin{align}\left ( \sqrt{x+15} \right )^2&=\left ( \sqrt{3x-3} \right )^2\\ x+15&=3x-3\\ \text{Isolate the } x\text{-variable:} \qquad 18 & = 2x\\ x&=9\end{align}
Check the solution: $\sqrt{9+15}=\sqrt{3(9)-3} \rightarrow \sqrt{24}=\sqrt{24}$, so $x=9$ checks.
Sample explanations for some of the practice exercises below are available by viewing the following videos. Note that there is not always a match between the number of the practice exercise in the
videos and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Extraneous Solutions to Radical Equations
CK-12 Basic Algebra: Radical Equation Examples (5:16)
CK-12 Basic Algebra: More Involved Radical Equation Example (11:54)
In 1-16, find the solution to each of the following radical equations. Identify extraneous solutions.
1. $\sqrt{x+2}-2=0$
2. $\sqrt{3x-1}=5$
3. $2\sqrt{4-3x}+3=0$
4. $\sqrt[3]{x-3}=1$
5. $\sqrt[4]{x^2-9}=2$
6. $\sqrt[3]{-2-5x}+3=0$
7. $\sqrt{x}=x-6$
8. $\sqrt{x^2-5x}-6=0$
9. $\sqrt{(x+1)(x-3)}=x$
10. $\sqrt{x+6}=x+4$
11. $\sqrt{x}=\sqrt{x-9}+1$
12. $\sqrt{3x+4}=-6$
13. $\sqrt{10-5x}+\sqrt{1-x}=7$
14. $\sqrt{2x-2}-2\sqrt{x}+2=0$
15. $\sqrt{2x+5}-3\sqrt{2x-3}=\sqrt{2-x}$
16. $3\sqrt{x}-9=\sqrt{2x-14}$
17. The area of a triangle is $24 \ in^2$
18. The volume of a square pyramid is given by the formula $V=\frac{A(h)}{3}$, where $A=$ area of the base and $h=$ height of the pyramid. The volume of a square pyramid is 1,600 cubic meters. If its height is 10 meters, find the area of its base.
19. The volume of a cylinder is $245 \ cm^3$$(\text{Volume} = \pi r^2 \cdot h)$
20. The height of a golf ball as it travels through the air is given by the equation $h=-16t^2+256$
Mixed Review
21. Joy sells two types of yarn: wool and synthetic. Wool is $12 per skein and synthetic is $9 per skein. If Joy sold 16 skeins of synthetic and collected a total of $432, how many skeins of wool did
she sell?
22. Solve $16 \ge |x-4|$
23. Graph the solution: $\begin{cases}y \le 2x-4\\y>-\frac{1}{4} x+6\end{cases}$
24. You randomly point to a day in the month of February, 2011. What is the probability your finger lands on a Monday?
25. Carbon-14 has a half life of 5,730 years. Your dog dug a bone from your yard. It had 93% of its carbon-14 remaining. How old is the bone?
26. What is true about solutions to inconsistent systems? | {"url":"http://www.ck12.org/algebra/Radical-Equations/lesson/Radical-Equations/","timestamp":"2014-04-20T04:52:50Z","content_type":null,"content_length":"122292","record_id":"<urn:uuid:25bd1f22-b553-4815-9002-88fa543b642a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
From patterns to space groups and the eigensymmetry of crystallographic orbits: a reinterpretation of some symmetry diagrams in IUCr Teaching Pamphlet No. 14
Volume 45, Part 4, pages 834-837, August 2012
^aLaboratorio de Cristalografía, Estado Sólido y Materiales (Cryssmat-Lab)/DETEMA, Facultad de Química, Universidad de la República, Avenida Gral Flores 2124, 11800 Montevideo, Uruguay, and ^bUniversité de Lorraine, Faculté des Sciences et Technologies, Institut Jean Barriol FR 2843, Cristallographie, Résonance Magnétique et Modélisations (CRM2), UMR - CNRS 7036, Boulevard des Aiguillettes, BP 70239, F54506 Vandoeuvre-lès-Nancy Cedex, France
Correspondence e-mail: massimo.nespolo@crm2.uhp-nancy.fr
Received 17 February 2012; accepted 3 May 2012; online 12 June 2012
The space group of a crystal pattern is the intersection group of the eigensymmetries of the crystallographic orbits corresponding to the occupied Wyckoff positions. Polar space groups without symmetry elements with glide or screw components smaller than 1/2 do not contain characteristic orbits and cannot be realized in patterns (structures) made by only one crystallographic type of object (atom). The space-group diagram of the general orbit for this type of group has an eigensymmetry that corresponds to a special orbit in a centrosymmetric supergroup of the generating group. This fact is often overlooked, as shown in the proposed solution for Plates (i)-(vi) of IUCr Teaching Pamphlet No. 14, and an alternative interpretation is given.
1. Introduction
Although crystallography is a centennial science, its presence in higher education is in jeopardy. Indeed, crystallography is very often nothing more than a chapter in introductory solid-state
physics and chemistry books and in mineralogy textbooks, and therefore the treatment it receives in graduate-level courses is often only incidental. As a result of this lack of formal
crystallographic education, many young crystallographers have acquired their knowledge in the field through a rather slow and sometimes tortuous self-education process, using some excellent books
available on the different aspects and applications of crystallography, and attending schools and workshops covering basic and advanced aspects and new developments of crystallography at different
levels. This continues to be the case nowadays, especially, but not only, in the developing world, where the lack of strong crystallographic societies or associations keeps crystallography as a very
rarely taught topic at the undergraduate level and only for specific areas in graduate schools. Paradoxically, this difficulty of learning crystallography may also be at the root of its outstanding
development in the past century, since modern crystallographers come from such different knowledge areas as physics, chemistry, materials science, mineralogy and biology, permanently enriching this
already wide area of science.
This problem has been tackled by the IUCr at different times through different strategies, all aimed at compensating for the lack of formal education in crystallography. The creation of the IUCr
Teaching Commission (IUCr-TC) in 1954 was one of these actions, and the systematic work of convincing academia to include crystallography as a separate subject in undergraduate and graduate-level
courses has always been part of the work of the IUCr. Nowadays, in some universities in Europe specific graduate programmes on crystallography exist as a consequence of this push, but they are often
isolated efforts by researchers in just a few universities. Being very aware of the differences in development of crystallographic teaching in different regions of the world, in the late 1970s the
IUCr-TC undertook the task of providing academia with a series of short booklets or pamphlets directed at helping students to self-educate and teachers to introduce the basic concepts of
crystallography to advanced undergraduate or graduate students. These so-called IUCr Teaching Pamphlets have become standard and widely used crystallographic teaching materials. Since the first
series published in 1980, continued by the second series published in 1984, up to some recent additions, a total of 23 IUCr Teaching Pamphlets have been published and made available for free at the
IUCr website (http://www.iucr.org/education/pamphlets ). These are in general high-level teaching materials checked carefully for errors and inconsistencies. Nevertheless, some topics that should
form the common background of a crystallographer are not (yet?) included and their absence results in some inconsistencies, even in this professional series. Here, we point out the concepts of orbit
eigensymmetry and intersection symmetry, which are practically never presented even in graduate courses. Without them, serious oversights may occur, and indeed have occurred, as we will show.
2. Space groups as intersection groups of the eigensymmetries of crystallographic orbits
The operations of a space group G applied to an atomic position give rise to an infinite set of equivalent atoms called a crystallographic orbit or point configuration [for details of the difference
between these terms, see Koch & Fischer (1985 )]. Let the eigensymmetry of the ith orbit O[i] be E(O[i]), or E[i] for brevity. The relation between E[i] and G gives rise to the following subdivision,
where T is the normal subgroup of translations (Engel et al., 1984 ):
(1) E[i] = G: the orbit is called characteristic;
(2) E[i] > G: the orbit is called noncharacteristic; it can be further subdivided depending on whether
(2.1) T(E[i]) = T(G): the noncharacteristic orbit is non-extraordinary (term usually omitted);
(2.2) T(E[i]) > T(G): the noncharacteristic orbit is extraordinary, the latter term taking priority over the former (an extraordinary orbit is always noncharacteristic, while the opposite is not necessarily true).
A crystal structure S can be seen as the union (in the algebraic meaning) of all the crystallographic orbits O corresponding to the Wyckoff positions occupied by the atoms of the structure. The space
group of the structure G(S) is, instead, the intersection of the eigensymmetries of these orbits. In fact, for each orbit, only the symmetry operations that are common to the other orbits are
promoted to symmetry operations of the whole structure, the others being local symmetry operations [for the meaning of a local operation, see Nespolo et al. (2008)]: G(S) = ∩[i] E[i].
A space group whose general orbit is noncharacteristic does not contain any characteristic orbits. This arises immediately from the consideration that the eigensymmetry of a special orbit is at least
equal to that of the general orbit. Space groups without characteristic orbits cannot be realized in structures with only one crystallographic type of atom. In fact, if a structure S is composed of
only one type of atom, which, under the action of G, generates one orbit O, then necessarily G(S) = E(O), which requires that the orbit is characteristic. Space groups without characteristic orbits
are typically pyroelectric groups without d mirrors, or screw axes with a screw component different from 1/2 (i.e. containing only 2[1], 4[2] and 6[3] as screw axes). In these space groups, the
eigensymmetry of each orbit has an additional symmetry element q perpendicular to the symmetry element defining the polar direction(s): either a mirror perpendicular to the polar axis or a twofold
axis perpendicular to the polar plane (in the absence of metric specialization, space group P1 is an exception because the triclinic metric is not compatible with a proper or improper rotation of
order higher than 1). In fact, atoms in the orbit are in one of the following four situations: (i) on planes separated by full lattice translations; (ii) on planes separated by half lattice
translations; (iii) along directions separated by full lattice translations; and (iv) along directions separated by half lattice translations. These atoms have q in their eigensymmetry, and the
symmetry operation s(q) about q defines a coset s(q)G so that G ∪ s(q)G = E is the eigensymmetry of the orbit. As a consequence, the general orbit in G corresponds to a special orbit in E, whose site
symmetry group is precisely defined by q. For these cases, the space-group diagrams in Volume A of International Tables for Crystallography (2011 ) do not indicate any additional symmetry elements,
because for structures composed of more than one orbit these are local elements, although each diagram gives only one general orbit. However, when applying the opposite reasoning, from the orbit to
the space group, the implicit assumption that the orbit is general may result in the underestimation of the eigensymmetry and thus of the space group, as we will now show.
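The idea that the symmetry group of a pattern is the intersection of the eigensymmetries of its orbits can be illustrated with a toy computation (the coordinates, the operation set and the function names below are ours, chosen purely for illustration; real crystallographic software handles full space-group operations, not just the four point operations used here):

```python
# Toy sketch: an operation belongs to the group of the whole pattern only
# if it maps every orbit onto itself, so the pattern's group is the
# intersection of the orbits' eigensymmetries. We work in 2D with
# fractional coordinates taken mod 1.

def mod1(p):
    # Reduce a point into the unit cell, rounding to tame float noise.
    return (round(p[0] % 1.0, 6), round(p[1] % 1.0, 6))

OPS = {
    "1":   lambda p: p,                # identity
    "m_x": lambda p: (-p[0], p[1]),    # mirror x -> -x
    "m_y": lambda p: (p[0], -p[1]),    # mirror y -> -y
    "2":   lambda p: (-p[0], -p[1]),   # twofold rotation
}

def eigensymmetry(orbit):
    # Operations that map the orbit onto itself (modulo lattice translations).
    pts = {mod1(p) for p in orbit}
    return {name for name, op in OPS.items()
            if {mod1(op(p)) for p in pts} == pts}

orbit_a = [(0.2, 0.0), (0.8, 0.0)]     # kept by all four operations
orbit_b = [(0.2, 0.3), (0.8, 0.7)]     # kept only by 1 and 2

print(sorted(eigensymmetry(orbit_a)))                          # all four
print(sorted(eigensymmetry(orbit_b)))                          # ['1', '2']
print(sorted(eigensymmetry(orbit_a) & eigensymmetry(orbit_b))) # ['1', '2']
```

A pattern made of orbit A alone would have the full eigensymmetry of A; adding orbit B demotes the two mirrors to local operations of A, leaving only the identity and the twofold rotation in the group of the combined pattern.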
3. Missing symmetry elements in the IUCr Teaching Pamphlets
IUCr Teaching Pamphlet 14 (Space Group Patterns; Meier, 2001 ) is the continuation of IUCr Teaching Pamphlet 13 (Symmetry; Dent Glasser, 2001 ). It contains 15 plates showing groups of feet or hands
(or, as we interpret them, footprints and handprints) periodically and symmetrically arranged to represent crystal patterns^1 in each of the 230 types of space group in a particular setting. Pamphlet
14 is designed to put into practice the concepts of symmetry introduced in Pamphlet 13, and starts with an explanatory introduction where the symbols and rules for the use of the plates are outlined.
In general, two types of symbol are used for the symmetry patterns. Footprints are used for space groups containing only twofold symmetry operations [Plates (i)-(vi)], while handprints are used for
space groups with rotations of higher order. The feet symbols are also used to exemplify planar groups that can be obtained as a projection of a space group along the vertical axis. The difference
between a hand and a foot may not be evident from examining real hands and feet, but in the plates footprints are only used to represent polar space groups, the polar direction being taken as the
direction of projection, since footprints are always looked at from above. Feet differ, however, in their handedness (right or left). Handprints, instead, are shown both right and left and palm up or
palm down.
The polar space groups represented by footprint patterns are precisely the types without characteristic orbits: Pma2 (No. 28) [Plate (i)], Pnc2 (No. 30) [Plate (ii)], Pbn2[1] (No. 33) [Plate (iii)],
Cc (No. 9) [Plate (iv)], Cmc2[1] (No. 36) [Plate (v)] and Aea2 (former space group symbol Aba2) (No. 41) [Plate (vi)]. The plates represent only the general orbit of these groups, so that G(S) = E(O
). Because the eigensymmetry of the orbit is higher than that of the generating group, the space group given in the text is systematically a subgroup of the space group corresponding to the plates.
In other words, each plate shows a special orbit in a centrosymmetric space group, while the text describes it as a general orbit in a polar group.
Let us examine Plate (i), reproduced in Fig. 1 . This is a crystal pattern corresponding to the general orbit of a space group of type Pma2. The + symbol at the top right of the picture indicates
that the z coordinate of the feet is located away from z = 0. A footprint has eigensymmetry m. Because all feet are located at the same z coordinate, this mirror also occurs in the space group of the
pattern, perpendicular to [001] at the z coordinate of the feet (Fig. 2). The space group of the pattern shown in Plate (i) is thus Pmam [standard symbol Pmma (No. 51)]; in this space group the footprints are no longer in a general position but in a special position with site symmetry ..m (.m. in the standard setting).
Once this shift is applied, it is possible to recognize that the orbit corresponds to Wyckoff position 4i or 4j, depending on where the origin is placed with respect to the orbit. The space group of
the pattern would only be of type Pma2 if the footprint did not possess eigensymmetry m, i.e. if the top and the bottom of the footprint were different, as is the case for the handprints, for which
palm up and palm down are shown.
Figure 1. Plate (i) of IUCr Teaching Pamphlet 14, with the symmetry elements of the space group that has generated the orbit represented by the set of footprints. Axes, not shown in the original plate but described in the text, are oriented as in the first projection of each orthorhombic group, i.e. c is the projection direction, a is directed vertically down and b is directed horizontally right.
Figure 2. Plate (i) of IUCr Teaching Pamphlet 14, with the symmetry elements missing in Fig. 1. The eigensymmetry of this orbit is Pmam [standard symbol Pmma (No. 51)], which is also the space-group type of a crystal pattern composed of this orbit alone.
The same argument applies to Plates (ii)-(vi), where the supposedly polar arrangements of footprints correspond not to a general orbit in G (polar) but to a special orbit in the centrosymmetric
supergroup E. The correct space-group types are then Pncm [Plate (ii); standard symbol Pmna (No. 53)], Pbnm [Plate (iii); standard symbol Pnma (No. 62)], C2/c (No. 15) [Plate (iv)], Cmcm (No. 63)
[Plate (v)] and Aeam (No. 64) (former space group symbol Abam) [Plate (vi); standard symbol Cmce].
4. Discussion
The usual way of introducing space groups in crystallography courses is via the application of space-group operations to objects in a general position to generate a crystal pattern. The opposite
approach, from pattern to space group, is didactically more interesting, not only because a space group is indeed the a posteriori interpretation of a crystal pattern in terms of its symmetry, but
also because it underlines several features that normally go unnoticed, namely (i) the eigensymmetry of each orbit, (ii) the nature of a space group as the intersection group of these
eigensymmetries, and (iii) the presence of local symmetry operations, which are part of the eigensymmetry of an orbit but not common to the other orbits. Adopting this approach in parallel with the
more common way of introducing space-group symmetry avoids oversights like those present in Teaching Pamphlet No. 14 discussed in this article. This pamphlet is frequently downloaded from the IUCr
web site, suggesting it is still in widespread use, making our reinterpretation and this discussion of some didactic value for teachers who use it.
LS thanks the anonymous student who first solved Plate (i) `incorrectly' by placing a mirror plane in the plane of the feet, drawing our attention to the possible misinterpretation that the use of
footprints may lead to in the first six plates of pamphlet No. 14. The authors are also indebted to Brian McMahon from the Chester office of the IUCr for providing information on the download of IUCr
Teaching Pamphlet No. 14 from http://www.iucr.org/education/pamphlets/14 . The critical remarks of two anonymous reviewers are gratefully acknowledged.
Dent Glasser, L. S. (2001). Symmetry, IUCr Teaching Pamphlet No. 13, http://www.iucr.org/education/pamphlets/13 .
Engel, P., Matsumoto, T., Steinmann, G. & Wondratschek, H. (1984). The Non-characteristic Orbits of the Space Groups, Zeitschrift für Kristallographie, Supplement, Issue No. 1. Munich: R. Oldenbourg
International Tables for Crystallography (2011). Volume A, Space-group Symmetry, corrected reprint of 5th ed., edited by Th. Hahn. Heidelberg: Springer.
Koch, E. & Fischer, W. (1985). Acta Cryst. A41, 421-426.
Meier, W. M. (2001). Space Group Patterns, IUCr Teaching Pamphlet No. 14, http://www.iucr.org/education/pamphlets/14 .
Nespolo, M., Souvignier, B. & Litvin, D. B. (2008). Z. Kristallogr. 223, 605-606. | {"url":"http://journals.iucr.org/j/issues/2012/04/00/kk5107/kk5107bdy.html","timestamp":"2014-04-21T00:31:15Z","content_type":null,"content_length":"51993","record_id":"<urn:uuid:996af43a-3757-40d4-b6e1-f17607eb2dd6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Gloucester City ACT Tutor
Find a Gloucester City ACT Tutor
...I am a sociology/anthropology major at Swarthmore College. I have read numerous anthropological works. Moreover, I have done original ethnographic work in Costa Rica and am currently writing my
thesis in anthropology.
32 Subjects: including ACT Math, reading, English, writing
...If you are someone who feels like you aren't getting the productivity that you could out of your computer, I can help you. I have always spent my time tinkering with the settings in order to
maximize the efficiency with which a computer can operate. So, whether you need help setting up your wi...
21 Subjects: including ACT Math, reading, calculus, physics
...Creativity, practical applications, and fun are my trademarks. As my students would always say, "Geometry really is EVERYWHERE". It's not just formulas for area and volume, but the appreciation
of how shapes form our entire world. why bees make hexagonal honeycombs and where the words "square" and "trapezoid" originate.
12 Subjects: including ACT Math, geometry, algebra 1, ASVAB
I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I
received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school.
8 Subjects: including ACT Math, calculus, geometry, algebra 1
...I'm a 2013 Princeton graduate currently pursuing a Master of Fine Arts at the University of Virginia, where I will begin teaching undergraduate classes in the fall. After spending a fantastic
year working in the UVA Writing Center, where I tutor students in many different disciplines and on wide...
17 Subjects: including ACT Math, reading, English, writing | {"url":"http://www.purplemath.com/Gloucester_City_ACT_tutors.php","timestamp":"2014-04-19T23:28:58Z","content_type":null,"content_length":"23991","record_id":"<urn:uuid:99b3ee95-0f9b-4c14-9842-d43898c73067>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
math cheating websites
Author Message
bhoobalan Posted: Wednesday 27th of Dec 13:26
I have a difficulty with my math that requires an urgent solution. The difficulty is with math cheating websites. I have been on the lookout for someone who can teach me at once, as my
exam is drawing near. But it's tough to find somebody quick enough, besides it being costly. Can anyone guide me? It will be a huge help.
ameich Posted: Thursday 28th of Dec 09:08
I don’t think I know of any website where you can get your answers about math cheating websites checked within hours. There are, however, a couple of companies which do offer assistance,
but one has to wait at least 24 hours before getting any reply. What I know for sure is that this software called Algebrator, which I used during my college career, was really good
and I was quite satisfied with it. It almost gives the type of results you need.
Svizes Posted: Saturday 30th of Dec 08:12
Thanks for the advice. Algebrator is actually an extremely helpful math software. I was able to get answers to queries I had about subtracting exponents, parallel lines and slope. You
just have to type in a problem, click on Solve, and you get all the solutions you need. You can use it for any number of algebra topics, like Algebra 2, Basic Math and College
Algebra. I think everyone should use Algebrator.
[Secned©] Posted: Saturday 30th of Dec 21:35
Well, if the software is so effective then give me the link to it. I would like to try it once.
Jrobhic Posted: Sunday 31st of Dec 08:14
Visit http://www.mathisradical.com/rational-equations.html and you can get all the details about this tool. I would advise you to try it at least once. All it takes is thirty minutes
to get familiar with the software.
DoniilT Posted: Monday 01st of Jan 08:02
I am a regular user of Algebrator. It not only helps me complete my homework faster, the detailed explanations offered makes understanding the concepts easier. I suggest using it to
help improve problem solving skills. | {"url":"http://www.mathisradical.com/how-to-simplify-radical-expressions/powers/math-cheating-websites.html","timestamp":"2014-04-16T10:18:16Z","content_type":null,"content_length":"50592","record_id":"<urn:uuid:946d47c6-2040-4034-9ae2-1589c15bdb23>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the area of triangle
August 4th 2012, 03:09 AM
Find the area of triangle
Given that in triangle ABC, $\angle A =60^\circ, \angle B =45^\circ, AC =1cm.$ D and E are respectively the midpoints of AB and AC. Find the area of triangle ADE.
(SOLVED using the sine rule and $\frac{1}{2}ab\sin C$)
August 4th 2012, 04:10 AM
Prove It
Re: Find the area of triangle
In triangle ABC, you have two angles, so finding the third angle won't be too hard. Then use the sine rule to evaluate another side length. Finally, you can evaluate the area using $\frac{1}{2}ab\sin{C}$.
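Prove It's recipe can be mechanized; the short sketch below just carries out the steps numerically (illustrative only):

```python
import math

# Numeric check of the steps above (angles and AC = 1 are from the problem;
# the code itself is just an illustration).
A, B = math.radians(60), math.radians(45)
C = math.pi - A - B                   # angle sum gives C = 75 degrees
b = 1.0                               # AC, the side opposite B
c = b * math.sin(C) / math.sin(B)     # sine rule gives AB, the side opposite C
area_ABC = 0.5 * b * c * math.sin(A)  # (1/2) * AC * AB * sin A
area_ADE = area_ABC / 4               # D, E are midpoints: sides halved, area quartered
print(area_ADE)                       # 0.14787... = (3 + sqrt(3))/32
```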
For triangle ADE, notice that you have reduced the side lengths by \displaystyle \begin{align*} \frac{1}{2} \end{align*}. This means that the area of the triangle has been reduced by \
displaystyle \begin{align*} \left(\frac{1}{2}\right)^2 = \frac{1}{4} \end{align*}. | {"url":"http://mathhelpforum.com/geometry/201721-find-area-triangle-print.html","timestamp":"2014-04-19T10:44:54Z","content_type":null,"content_length":"5265","record_id":"<urn:uuid:4af8bbad-6ad9-4699-aff6-7caf577e0182>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using the Bisection Method
March 3rd 2010, 07:15 PM #1
Junior Member
Jan 2010
Using the Bisection Method
let $f(x)=(x-1)(x-2)(x-3)(x-4)(x-5)$. This function has five roots on the interval $[0,7]$. I have to use the bisection method on this interval and figure out what root is located.
I'm not entirely sure I know how to use this method correctly..
Here it goes,
First, since $f(0)<0$ and $f(7)>0$, there exists a $c\in\mathbb{R}$ in between those two points such that $f(c) = 0$ ..
Now would you divide the interval in two?
This is quite a strange function to use with this method as it is fully factored and gives you the roots.
Anyhow you choose the interval $[0,7]$ and find the signs of $f(0) <0$ and $f(7)>0$
Now bisect the interval $\frac{7-0}{2} = 3.5$ to make 2 smaller intervals and find the sign of $f(3.5)$
We know $f(3.5)>0$ so we choose the smaller intervals with opposite signs $f(0) <0$ and $f(3.5)>0$ , bisect again and so on until the interval is small enough.
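skeeter's procedure translates directly into code; the sketch below is illustrative (the tolerance is an arbitrary choice) and converges to the root x = 1 on [0, 7]:

```python
# A minimal implementation of the bisection loop described above.
def f(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5)

def bisect(f, a, b, tol=1e-8):
    """Approximate a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:   # sign change on [a, m]
            b, fb = m, fm
        else:             # sign change on [m, b]
            a, fa = m, fm
    return (a + b) / 2

root = bisect(f, 0, 7)
print(root)  # ≈ 1.0
```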
Ok that's kind of what I thought... and yea it gives you all the roots but they want you to find the root that will be found after using that method, which apparently ends up being $x=1$
Wow, that's crazy. I just took an exam today with almost that exact same problem, just with a little difference in one of the numbers. Anyway, the way I did it was to keep applying the
bisection method with a = 0 and b = 7: you find their signs, and when you plug in $p_n$ it matches the sign of either a or b; you keep doing this until $f(p)=0$. However,
in your case you can kind of see where it's going, and it keeps getting closer and closer to 1, so the root that will be determined is 1. I'm not sure if it's all correct, but it
worked for me on both my homework and the practice exam. Hope this helped.
How would you formally write the end of the proof, once you have a small enough interval?
I am down to the $[0,1.75]$ interval, so how would I go on to say that the root must be 1?
I don't understand. Are you supposed to play dumb? Clearly $f(x)=0$ for $x=1,2,3,4,5$ and $f(x)\neq 0$ for $x\in \mathbb{R}-\{1,2,3,4,5\}$. So clearly the zero given must be one.
Jan 2010 | {"url":"http://mathhelpforum.com/differential-geometry/131952-using-bisection-method.html","timestamp":"2014-04-17T15:59:18Z","content_type":null,"content_length":"57099","record_id":"<urn:uuid:014130a9-de45-4d2f-bca7-bf0f9243c368>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can someone help me solve this using substitution please?
February 6th 2010, 07:20 AM
Can someone help me solve this using substitution please?
I tried to get the detailed solution from Wolfram Alpha: http://www.wolframalpha.com/input/?i=integral+((tanx)^3)
but, it uses the "reduction formula" which is something I have not learned yet not to mention that this question is in the "substitution" section of my text book.
I was just hoping somebody could solve this for me using substitution because when I let u = tanθ, I get stuck and I can't see what else to do.
Any help would be GREATLY appreciated!
Thanks in advance!
If you want to see my work, it is attached. This particular problem is the last one of the PDF and is #47.
February 6th 2010, 08:00 AM
$\tan^3{t} = \tan{t} \cdot \tan^2{t} = \tan{t}(\sec^2{t} - 1) = \tan{t}\sec^2{t} - \tan{t}$
$\int \tan{t}\sec^2{t} - \tan{t} \, dt = \int \tan{t} \sec^2{t} \, dt - \int \tan{t} \, dt$
can you finish?
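Finishing skeeter's hint, the two pieces integrate to $\tan^2{t}/2$ (via $u = \tan{t}$) and $-\ln|\cos{t}|$; the sketch below is a purely numerical sanity check of the resulting antiderivative:

```python
import math

# With u = tan(t):  ∫ tan(t)sec²(t) dt = tan²(t)/2,  and  ∫ tan(t) dt = -ln|cos(t)|,
# so F(t) = tan²(t)/2 + ln|cos(t)| is an antiderivative of tan³(t).
def F(t):
    return math.tan(t) ** 2 / 2 + math.log(abs(math.cos(t)))

# F'(t) should equal tan³(t); check with a central difference at a sample point
t, h = 0.7, 1e-6
deriv = (F(t + h) - F(t - h)) / (2 * h)
print(abs(deriv - math.tan(t) ** 3) < 1e-6)  # True

# and the definite integral over a symmetric interval vanishes (odd integrand)
a = math.pi / 3
print(abs(F(a) - F(-a)) < 1e-9)              # True: F is even
```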
February 6th 2010, 08:03 AM
All you have to do is integrate for u and evaluate [F(b) - F(a)]
[F(√3) - F(-√3)]
...to solve it from your PDF work that is
February 6th 2010, 08:14 AM
February 6th 2010, 09:31 AM
@deathbycalc: my last step was incorrect because I still had du/dθ = (secθ)^2.
@skeeter: I completed it and got the right answer but could you just check if it's not just a fluke please?
February 6th 2010, 09:40 AM
for future reference, note that ...
$\int_{-a}^a (an \, odd \, function) \, dx = 0$ | {"url":"http://mathhelpforum.com/calculus/127442-can-someone-help-me-solve-using-substitution-please-print.html","timestamp":"2014-04-16T13:47:48Z","content_type":null,"content_length":"9000","record_id":"<urn:uuid:8f0d5cc8-d003-45e2-9fef-6a6056084d07>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00376-ip-10-147-4-33.ec2.internal.warc.gz"} |
A square is inscribed in a circle with diameter 2. Four smaller circles are then constructed with their diameters on each of the sides of the square.
Find the shaded area.
Consider the diagram:
This is best tackled by working towards the area of a single lune, but this needs to be done in a series of careful additive and subtractive steps.
The area of the right angle triangle is 1/2.
The area of the large circle is π × 1² = π, so the area of the (shaded) quarter circle is π/4.
Therefore the area of the shaded segment is π/4 - 1/2.
Using the Pythagorean Theorem, the length of the square's side (the hypotenuse of the right angle triangle, and the diameter of a small circle) is d = √2.
Therefore the area of one small circle is π × (√2/2)² = π/2, and so the area of the shaded semi-circle will be π/4.
Hence the area of one lune is π/4 - (π/4 - 1/2) = 1/2.
So the shaded area of the original shape is 4 × 1/2 = 2.
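The chain of areas can be double-checked with a few lines of arithmetic (illustrative only):

```python
import math

# Arithmetic check of the lune construction: large circle of diameter 2,
# small circles of diameter sqrt(2) on the sides of the inscribed square.
quarter = math.pi / 4                                # quarter of the unit-radius circle
segment = quarter - 0.5                              # quarter circle minus the right triangle
semi_small = math.pi * (math.sqrt(2) / 2) ** 2 / 2   # semicircle of diameter sqrt(2)
lune = semi_small - segment                          # one lune
shaded = 4 * lune
print(shaded)  # ≈ 2.0
```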
What would be the shaded area if the same construction was performed on the edges of an inscribed equilateral triangle?
What about other inscribed regular polygons?
Problem ID: 20 (Oct 2000) Difficulty: 3 Star | {"url":"http://mathschallenge.net/full/lunes","timestamp":"2014-04-25T04:58:26Z","content_type":null,"content_length":"7678","record_id":"<urn:uuid:b8aa40b0-797c-4bfb-8f20-70dc6bf13ced>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
question on one of the steps in a weak solution proof
November 7th 2011, 04:06 AM
question on one of the steps in a weak solution proof
My textbook has the following problem:
Show that for a continuous function $f$ the expression $u=f(x-ct)$ is a weak solution of the partial differential equation $u_t + cu_x = 0$.
[Hint: Transform for $\phi\in C_0^1(\mathbb{R}^2)$ the integral $\int\int(\phi_t+c\phi_x)u\;dx\;dt$
to the coordinates $y_1=x-ct,y_2=x$. Use $\phi=\psi(y_1)X(y_2)$.]
They apparently mean in $\mathbb{R}^2$. Recall that a "weak solution" (in this case) is a solution $u$ which satisfies
(1) $0=\int\int(\phi_t+c\phi_x)u\;dx\;dt$
for every test function $\phi\in C^1_0(\mathbb{R}^2)$, i.e. for every continuously differentiable function with compact support in $\mathbb{R}^2$. However it is (supposedly) sufficient to show
that (1) holds for all $\phi\in C^\infty_0(\mathbb{R}^2)$, i.e. all smooth functions with compact support in $\mathbb{R}^2$.
My question is this: Why can we assume that a smooth (or continuously differentiable) function of two variables $\phi:\mathbb{R}^2\to\mathbb{R}$ can be written as the product of functions of a
single variable $\phi(y_1,y_2)=\psi(y_1)X(y_2)$ ? Or am I missing something here ?
Thanks !
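As a numerical sanity check (not a proof) of the weak-solution identity (1), the sketch below approximates the double integral for a continuous but non-differentiable f. The Gaussian test function is an assumption, since it is not compactly supported, but its tails are negligible on the integration box:

```python
import math

c = 1.0
f = abs  # continuous, but not differentiable at 0

def phi_t_plus_c_phi_x(x, t):
    # for φ(x, t) = exp(-(x² + t²)):  φ_t = -2tφ  and  φ_x = -2xφ
    return (-2 * t - 2 * c * x) * math.exp(-(x * x + t * t))

def simpson2d(g, a, b, n=200):
    """Composite Simpson's rule on the square [a, b] x [a, b]."""
    if n % 2:
        n += 1
    h = (b - a) / n
    w = [1] + [4 if i % 2 else 2 for i in range(1, n)] + [1]
    total = 0.0
    for i, wi in enumerate(w):
        x = a + i * h
        for j, wj in enumerate(w):
            t = a + j * h
            total += wi * wj * g(x, t)
    return total * h * h / 9

# ∫∫ (φ_t + c φ_x) f(x - ct) dx dt should vanish for u = f(x - ct)
val = simpson2d(lambda x, t: phi_t_plus_c_phi_x(x, t) * f(x - c * t), -6, 6)
print(abs(val))  # small: numerically zero, well below 1e-3
```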
November 20th 2011, 07:36 PM
Re: question on one of the steps in a weak solution proof
I don't have an answer, but for example your conditions imply that the identity works for $\phi \in H^1(\mathbb{R}^2)=W^{1,2}(\mathbb{R}^2)$. Now, it's well known that $L^2(X)\hat{\otimes} L^2(Y)
\cong L^2( X\times Y)$ so "separated" simple functions are dense in the product (alternatively "separated" linear combinations of test functions are dense). One could then suspect that something
along the lines of $H^1(\mathbb{R}) \hat{\otimes} H^1(\mathbb{R}) \cong H^1(\mathbb{R}^2)$ which would justify the argument. I don't know if the last identity holds. | {"url":"http://mathhelpforum.com/differential-equations/191362-question-one-steps-weak-solution-proof-print.html","timestamp":"2014-04-20T23:58:22Z","content_type":null,"content_length":"8893","record_id":"<urn:uuid:ad6fed21-3a40-4fdc-bd3e-3ae054d05705>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Massachusetts Institute of Technology Online Lectures and Courses - Academic Earth
Massachusetts Institute of Technology
The Massachusetts Institute of Technology (MIT), founded in 1861, is located in Cambridge, Massachusetts, and is one of the foremost U.S. institutions in science and technology. It is comprised of
five schools and one college, including the renowned School of Engineering and Sloan School of Management, offering Bachelor's, Master's, and Doctorate degrees. Notable alumni include Ben Bernanke,
Chairman of the Federal Reserve, Benjamin Netanyahu, prime minister of Israel, and American astronaut "Buzz" Aldrin. | {"url":"http://academicearth.org/universities/mit/?pg=5","timestamp":"2014-04-19T16:02:31Z","content_type":null,"content_length":"45737","record_id":"<urn:uuid:9a33cde1-da98-4c10-b80f-11c91b06da83>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
information entropy
information entropy (uncountable)
1. (information theory) A measure of the uncertainty associated with a random variable; a measure of the average information content one is missing when one does not know the value of the random
variable (usually in units such as bits); the amount of information (measured in, say, bits) contained per average instance of a character in a stream of characters.
A passphrase is similar to a password, except it can be a phrase with a series of words, punctuation, numbers, whitespace, or any string of characters you want. Good passphrases are 10-30
characters long, are not simple sentences or otherwise easily guessable (English prose has only 1-2 bits of entropy per character, and provides very bad passphrases), and contain a mix of
upper and lowercase letters, numbers, and non-alphanumeric characters. — BSD General Commands Manual : ssh-keygen(1), October 2, 2010.
Imagine a full binary tree in which the probability of getting from any parent node to one of its child nodes is 50%. Associate labels of '0' and '1' to the paths to the child nodes of
each parent node. Then the probability of getting from the root to a leaf node is a negative power of 2, and the length of the path (from the root to that leaf node) is the negated base-2
logarithm of that probability. Now imagine a random bit stream in which each bit has an equal chance of being either 0 or 1. Think of the full binary tree as describing a Huffman encoding for
a script whose characters are located at the leaf nodes of the said full binary tree. Each character decodes a string of bits whose length is the length of the path from the root node to the
leaf node corresponding to that character. Now use "Huffman decoding" to convert the bit stream to a character stream (of the given script). The average compression ratio between the bit
stream and the character stream can be seen to be equal to the information entropy of the character stream.
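A zeroth-order version of this bits-per-character measure is easy to compute; the sketch below estimates the empirical Shannon entropy of a character stream. (The "1-2 bits per character" figure for English prose requires models that account for dependence between letters, which this single-character estimate ignores.)

```python
import math
from collections import Counter

def entropy_bits_per_char(s):
    """Empirical Shannon entropy of a character stream, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits_per_char("aaaa"))      # 0.0  (no uncertainty)
print(entropy_bits_per_char("abab"))      # 1.0  (one fair bit per character)
print(entropy_bits_per_char("abcdabcd"))  # 2.0
```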
Last modified on 21 June 2013, at 01:05 | {"url":"http://en.m.wiktionary.org/wiki/information_entropy","timestamp":"2014-04-17T22:40:11Z","content_type":null,"content_length":"18063","record_id":"<urn:uuid:fa822cf3-eef2-453e-aea6-d2b33c1f2ae3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Get Connected With Ohm's Law
Lesson Focus
Demonstrate Ohm's Law using digital multi-meters. Fun hands-on activities are presented that demonstrate Ohm's Law. Teachers use digital multi-meters to collect data that are plotted to show that
voltage and current are related by linear functions for ordinary resistors and by power functions for light bulbs.
Lesson Synopsis
Fun hands-on activities are presented that demonstrate Ohm's Law (E = I x R). Teachers use digital multi-meters to collect data that are plotted to show that voltage and current are related by linear
functions for ordinary resistors and by power functions for light bulbs.
Age Levels:
Learn about Ohm's Law.
Be able to use a digital multi-meter to collect data.
Explore the concepts of voltage and current.
Anticipated Learner Outcomes
As a result of the activities, students should develop an understanding of:
• Ohm's Law
• Relationship between Voltage, Current, and Resistance in an electrical circuit
• Measurement, plotting data, and graphing
• Basic wiring and construction of a digital multi-meter for data collection
What is Ohm's Law?
Ohm's Law is a mathematical equation explaining the relationship between Voltage, Current, and Resistance within electrical circuits. It is defined as follows:
E = I x R
• E = Voltage (Voltage is an electric potential difference between two points on a conducting wire. Voltage is measured in volts and comes from various sources such as electric outlets and batteries.)
• I = Current (Current is measured in amps. Current consists of charged particles which flow from the voltage source through conductive material to a ground.)
• R = Resistance (Resistance is the opposition that a material body offers to the passage of an electric current. Resistance is measured in ohms. Examples of items with resistance are light bulbs
and coffeemakers.)
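The linear relationship the students should see in their plots can be previewed with a short script; the 1.5 V cells and the 10-ohm resistor below are illustrative assumptions:

```python
# For an ordinary resistor, E = I x R, so the recorded (current, voltage)
# points fall on a line through the origin with slope R.
CELL_VOLTS = 1.5
R_OHMS = 10.0

for cells in range(1, 5):
    v = cells * CELL_VOLTS   # battery voltage with this many cells in series
    i = v / R_OHMS           # Ohm's Law rearranged: I = E / R
    print(f"{cells} cell(s): E = {v:.1f} V, I = {i:.3f} A, E/I = {v / i:.1f} ohms")
```

For a light bulb the resistance changes with temperature, so the plotted points bend away from a straight line, which is why the lesson distinguishes resistors (linear fits) from bulbs (power-function fits).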
Lesson Activities
The activity consists of using a nominal six-volt battery (made up of four nominal 1.5 volt dry cells connected in series) to:
• Drive current through a simple circuit element and measure and record the current through the element and the voltage across the element as the number of cells in the battery is varied from a
single cell to four cells.
• Plot points on the graph that represent the voltages and currents recorded.
• Draw a "best fit" curve through the data points for the element tested.
• Repeat the process for two or three different resistor circuit elements.
• Compare the curves and make observations about the nature of the curves for each element.
Six teacher handouts are provided:
• Ohm's Law Information Sheet
• Step By Step Lesson Plan Guidelines
• Appendix 1: Materials Sourcing Suggestions
• Appendix 2: Continuity Tester Assembly Instructions
• Appendix 3: Alternate Single Cell Battery Holder Photos and Diagrams
• Appendix 4: Optional Insulators and Conductors Activity
Two student handouts are provided for advance review:
• Ohm's Law Information Sheet
• Step By Step Procedures
See attached student worksheets and teacher resource documents.
Alignment to Curriculum Frameworks
Curriculum alignment sheet is included in PDF.
comments powered by Disqus | {"url":"http://www.tryengineering.org/lesson-plans/get-connected-ohms-law?lesson=18","timestamp":"2014-04-17T00:52:31Z","content_type":null,"content_length":"20374","record_id":"<urn:uuid:92212cfe-c617-418f-a0f7-67376d0fb0e5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interest Formula
A = P(1 + r/n)^(nt), where:
P = principal amount (the initial amount you borrow or deposit)
r = annual rate of interest (as a decimal)
t = number of years the amount is deposited or borrowed for.
A = amount of money accumulated after n years, including interest.
n = number of times the interest is compounded per year | {"url":"http://qrc.depaul.edu/StudyGuide2009/Notes/Savings%20Accounts/Compound%20Interest.htm","timestamp":"2014-04-19T01:47:41Z","content_type":null,"content_length":"6981","record_id":"<urn:uuid:71fd8da7-4b1a-444c-9247-30fc6b585422>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
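The variables above combine in the standard compound-interest formula, A = P(1 + r/n)^(nt); a quick sketch with illustrative numbers:

```python
# Compound interest: A = P(1 + r/n)**(n*t).  Values below are illustrative.
def compound_amount(P, r, t, n):
    """Amount after t years: principal P, annual rate r, compounded n times a year."""
    return P * (1 + r / n) ** (n * t)

# $1000 at 5% annual interest, compounded monthly for 10 years
print(round(compound_amount(1000, 0.05, 10, 12), 2))  # 1647.01
```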
Brianchon-Gram-Sommerville and ideal hyperbolic Dehn invariants
A beautiful identity in Euclidean geometry is the Brianchon-Gram relation (also called the Gram-Sommerville formula, or Gram’s equation), which says the following: let $P$ be a convex polytope, and
for each face $F$ of $P$, let $\omega(F)$ denote the solid angle along the face, as a fraction of the volume of a linking sphere. The relation then says:
Theorem (Brianchon-Gram relation): $\sum_{F \subset P} (-1)^{\text{dim}F} \omega(F)=0$. In other words, the alternating sum of the (solid) angles of all dimensions of a convex polytope is zero.
Sketch of Proof: we prove the theorem in the case that $P$ is a simplex $\Delta$; the more general case follows by generalizing to pyramids, and then decomposing any polytope into pyramids by coning
to an interior point. This argument is due to Shephard.
Associated to each face $F$ is a spherical polyhedron $A(F)$ in $S^{n-1}$; if the span of $F$ is the intersection of a family of half-spaces bounded by hyperplanes $H_i$ with inward normals $n_i$,
then $A(F)$ is the set of unit vectors $v \in S^{n-1}$ whose inner product with each $n_i$ is non-negative. Note further that for each $v \in S^{n-1}$ there is some $n_i$ that pairs non-negatively
with $v$; consequently to each $v \in S^{n-1}$ one can assign a subset $I(v)$ of indices, so that $n_i$ pairs non-negatively with $v$ if and only if $i \in I(v)$. On the other hand, each subset $J \
subset I(v)$ determines a unique face $F(J)$ of dimension $n - |J|$. By the inclusion-exclusion formula, we conclude that $\sum_{F} (-1)^{\text{dim}F}A(F)$ “equals” zero, thought of as a signed union
of spherical polyhedra. Since $\omega(F) = \text{vol}(A(F))/\text{vol}(S^{n-1})$, the formula follows. qed.
Another well-known proof starts by approximating the polytope by a rational polytope (i.e. one with rational vertices). The proof then goes via Macdonald reciprocity, using generating functions.
Example: Let $T$ be a triangle, with angles $\alpha,\beta,\gamma$. The solid angle at an interior point is $1$, and the solid angle at each edge is $1/2$. Hence we get $(\alpha + \beta + \gamma)/2\pi
- 3/2 + 1 = 0$ and therefore in this case Brianchon-Gram is equivalent to the familiar angle sum identity for a triangle: $\alpha + \beta + \gamma = \pi$.
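The triangle case is easy to verify numerically from coordinates; the triangle below is an arbitrary choice (vertices contribute $\omega = \text{angle}/2\pi$, each edge $1/2$, the interior $1$, with signs alternating by dimension):

```python
import math

# Brianchon-Gram check for a triangle with arbitrary vertices.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def angle(p, q, r):
    """Interior angle at vertex p of triangle pqr."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

alpha, beta, gamma = angle(A, B, C), angle(B, A, C), angle(C, A, B)
total = (alpha + beta + gamma) / (2 * math.pi) - 3 * (1 / 2) + 1
print(abs(total) < 1e-12)  # True
```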
Example: Next consider the example of a Euclidean simplex $S$ (of dimension $3$). The contribution from the interior is $-1$, and the contribution from the four facets is $+2$. There are six edges, with dihedral angles $\alpha_i$, that contribute $-\sum \alpha_i/2\pi$. Each vertex contributes one spherical triangle, with (spherical) angles $\alpha_i,\alpha_j,\alpha_k$ for certain $i,j,k$, where each $\alpha_i$ appears as a spherical angle in exactly two spherical triangles. The Gauss-Bonnet theorem implies that the area of a spherical triangle is equal to its angular excess: $\text{area}_{ijk} = \alpha_i + \alpha_j + \alpha_k - \pi$, so the vertices contribute $(2\sum \alpha_i - 4 \pi)/4\pi$, and the identity follows: $-1 + 2 - \sum \alpha_i/2\pi + (2\sum \alpha_i - 4\pi)/4\pi = 0$.
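The simplex case can also be checked numerically. In the sketch below the vertex solid angles are computed independently of the dihedral angles (via the Van Oosterom-Strackee formula), so the vanishing of the alternating sum is a genuine floating-point check of the relation; the tetrahedron itself is an arbitrary choice:

```python
import math

# Brianchon-Gram for a tetrahedron: ω(interior)=1, ω(facet)=1/2,
# ω(edge)=dihedral/2π, ω(vertex)=solid angle/4π; alternating sum is 0.
V = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 4.0)]

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a):     return math.sqrt(dot(a, a))

def dihedral(i, j):
    """Interior dihedral angle along the edge V[i]V[j]."""
    k, l = [m for m in range(4) if m not in (i, j)]
    e = sub(V[j], V[i])
    n1, n2 = cross(e, sub(V[k], V[i])), cross(e, sub(V[l], V[i]))
    return math.acos(dot(n1, n2) / (norm(n1) * norm(n2)))

def solid_angle(v):
    """Solid angle at V[v], via the Van Oosterom-Strackee formula."""
    a, b, c = [sub(V[m], V[v]) for m in range(4) if m != v]
    num = abs(dot(a, cross(b, c)))
    den = (norm(a)*norm(b)*norm(c) + dot(a, b)*norm(c)
           + dot(a, c)*norm(b) + dot(b, c)*norm(a))
    return 2 * math.atan2(num, den)

omega_vertices = sum(solid_angle(v) for v in range(4)) / (4 * math.pi)
omega_edges = sum(dihedral(i, j) for i in range(4) for j in range(i + 1, 4)) / (2 * math.pi)
total = omega_vertices - omega_edges + 4 * (1 / 2) - 1   # dims 0,1,2,3 with signs +,-,+,-
print(abs(total) < 1e-12)  # True
```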
Note in fact that the usual proof of Gauss-Bonnet for a spherical triangle is done by an inclusion-exclusion argument involving overlapping lunes, that is very similar to the proof of Brianchon-Gram
given above.
The sketch of proof above just as easily proves an identity in the spherical scissors congruence group. For $X^n$ equal to spherical, Euclidean or hyperbolic space of dimension $n$, the scissors
congruence group $\mathcal{P}(X^n)$ is the abelian group generated by formal symbols $(x_0,x_1,\cdots,x_n,\alpha)$ where $x_i \in X^n$ and $\alpha$ is a choice of orientation, modulo certain
relations, namely:
1. $(x_0,x_1,\cdots,x_n,\alpha)=0$ if the $x_i$ are contained in a hyperplane
2. an odd permutation of the points induces multiplication by $-1$; changing the orientation induces multiplication by $-1$
3. if $g$ is an isometry of $X^n$, then $(x_0,\cdots,x_n,\alpha) = (gx_0,\cdots,gx_n,g_*\alpha)$
4. $\sum_i (-1)^i (x_0,\cdots,\widehat{x_i},\cdots,x_{n+1},\alpha)=0$ for any set of $n+2$ points, and any orientation $\alpha$
(Note that this definition of scissors congruence is consistent with that of Goncharov, and differs slightly from another definition consistent with Sah; this difference has to do with orientations,
and has as a consequence the vanishing of spherical scissors congruence in even dimensions; whereas with Sah’s definition, $\mathcal{P}(S^{2n}) = \mathcal{P}(S^{2n-1})$ for each $n$)
The argument we gave above shows that for any Euclidean simplex $\Delta$, we have $\sum_F(-1)^{\text{dim}F} A(F) = 0$ in $\mathcal{P}(S^{n-1})$.
Scissors congruence satisfies several fundamental properties:
1. $S^n = 0$ in $\mathcal{P}(S^n)$. To see this, “triangulate” the sphere as a pair of degenerate simplices, whose vertices lie entirely on a hyperplane.
2. There is a natural multiplication $\mathcal{P}(S^{a-1}) \otimes \mathcal{P}(S^{b-1}) \to \mathcal{P}(S^{a+b-1})$; to define it on simplices, think of $S^{a+b-1}$ as the unit sphere in $\mathbb{R}
^{a+b}$. A complementary pair of subspaces $\mathbb{R}^a$ and $\mathbb{R}^b$ intersect $S^{a+b-1}$ in a linked pair of spheres of dimensions $a-1,b-1$; if $\Delta_a,\Delta_b$ are spherical
simplices in these subspaces, the image of $\Delta_a \otimes \Delta_b$ is the join of these two simplices in $S^{a+b-1}$.
It follows that the polyhedra $A(F)=0$ in $\mathcal{P}(S^{n-1})$ whenever $F$ is a face of dimension at least $1$; for in this case, $A(F)$ is the join of a spherical simplex with a sphere of some
dimension, and is therefore trivial in spherical scissors congruence. Hence the identity above simplifies to $\sum_v A(v)=0$ in $\mathcal{P}(S^{n-1})$.
One nice application is to extend the definition of Dehn invariants to ideal hyperbolic simplices. We recall the definition of the usual Dehn invariant. Given a simplex $P \in X^n$, for each face $F$
we let $\angle(F)$ denote the spherical polyhedron equal to the intersection of $P$ with the link of $F$. Then $D(P) = \sum_F F\otimes \angle(F) \in \oplus_i \mathcal{P}(X^{n-i})\otimes \mathcal{P}(S
^{i-1})$. Ideal scissors congruence makes sense for ideal hyperbolic simplices, except in dimension one (where it is degenerate). For ideal hyperbolic simplices (i.e. those with some vertices at
infinity), the formula above for Dehn invariant is adequate, except for the $1$-dimensional faces (i.e. the edges) $e$. This problem is solved by the following “regularization” procedure due to
Thurston: put a disjoint horoball at each ideal vertex of $P$, and replace each infinite edge $e$ by the finite edge $e'$ which is the intersection of $e$ with the complement of the union of
horoballs; hence one obtains terms of the form $e' \otimes \angle(e)$ in $D(P)$. This definition apparently depends on the choice of horoballs. However, if $H,H'$ are two different horoballs, the
difference is a sum of terms of the form $c \otimes \angle(e)$ where $c$ is constant, and $e$ ranges over the edges sharing the common ideal vertex. The intersection of $P$ with a horosphere is a
Euclidean simplex $\Delta$, and the $\angle(e)$ are exactly the spherical polyhedra $A(v)$ as $v$ ranges over the vertices of $\Delta$. By what we have shown above, the sum $\sum_v A(v)$ is trivial
in scissors congruence; it follows that $D(P)$ is well-defined.
For more general ideal polyhedra (and finite volume complete hyperbolic manifolds) one first decomposes into ideal simplices, then computes the Dehn invariant on each piece and adds. A minor
variation of the usual argument on closed manifolds shows that the Dehn invariant of any complete finite-volume hyperbolic manifold vanishes.
Update(7/29/2009): It is perhaps worth remarking that the Brianchon-Gram relation can be thought of, not merely as an identity in spherical scissors congruence, but in the “bigger” spherical polytope
group, in which one does not identify simplices that differ by an isometry. Incidentally, there is an interesting paper on this subject by Peter McMullen, in which he proves generalizations of
Brianchon-Gram(-Sommerville), working explicitly in the spherical polytope group. He introduces what amounts to a generalization of the Dehn invariant, with domain the Euclidean translational
scissors congruence group, and range a sum of tensor products of Euclidean translational scissors congruence (in lower dimensions) with spherical polytope groups. It appears, from the paper, that
McMullen was aware of the classical Dehn invariant (in any case, he was aware of Sah’s book) but he does not refer to it explicitly.
4 comments
Dear Danny, This is a very interesting post and I wish I could understand better the hyperbolic things. Peter McMullen wrote quite a few papers on valuations on convex sets (including the Dehn
invariants) and on dissections
(e.g. P. McMullen, Valuations and dissections, in Handbook of Convex Geometry (P.M. Gruber and J.M. Wills, eds.), North-Holland, Amsterdam, 1993, 933–988. MR1243000 (95f:52018).) He had a well known
conjecture on characterization of translation invariant valuations (those referred to in the update) that was settled by Alesker.
(Somehow it was not clear to me if Dehn invariants are important for hyperbolic manifolds (as volume is) or they all vanish for all of them. )
(There are some nice questions I know regarding Dehn’s invariants: 1) (due to Bob Connelly) whether all Dehn’s invariants are fixed during a flexing of a flexible polyhedron; this is known for the volume by
Sabitov; 2) whether the Dehn invariants (or some of them) can be read from the eigenvalues of the Laplacian like the volume can.)
Danny Calegari
Dear Gil – thank you for the very interesting pointer to work of McMullen and Alesker. It appears that some of their work is very close to things that I care a lot about, and yet – 1. I was
completely unaware of it, and – 2. it is apparently very well-known. I wonder what else is so close but completely out of my field of vision!
If I understand the situation correctly, the Hadwiger invariants are sufficient to determine translational scissors congruence for polyhedra (maybe this was shown by Hadwiger?) and Sah determined all
the relations between Hadwiger invariants. What Alesker apparently does (his paper is very interesting) is to characterize translation-invariant continuous (complex-valued) valuations on the space of
*all* convex compact subsets (of some Euclidean space, where continuity is with respect to Hausdorff distance).
Again, if I understand the situation correctly, Sabitov’s proof of the bellows conjecture works by showing that one can compute the volume enclosed by a Euclidean polyhedron as a root of a certain
polynomial whose coefficients are determined by the edge lengths; the volume is therefore determined up to finite ambiguity, and is “locally constant” on the space of flexings of the polyhedra (if
any). What is (very) clever about this is that it nowhere uses the hypothesis that the polyhedron is flexible. I think a dimension count, and Schlafli’s formula shows that “formally” the Dehn
invariant of a polyhedron depends on as many parameters as there are edges (maybe the polyhedron needs to be a sphere?) so I guess a similar proof for Dehn invariants is conceivable, but such a
dimension count also suggests that most polyhedra are not flexible, so it is probably not much of a guide to intuition. Are there flexible hyperbolic/spherical polyhedra in dimension 3? The same
dimension count holds there . . .
The fact that Dehn invariants (other than volume) vanish for hyperbolic manifolds is, in fact, very important, since the kernel of the Dehn invariant on hyperbolic scissors congruence is closely
related to the Bloch group, which is an important object in algebraic K-theory (of course, this is one of the themes of Goncharov’s paper). Some aspects of this theory are worked out for K_3 (of
number fields) by Walter Neumann and Jun Yang, and others.
“If I understand the situation correctly, the Hadwiger invariants are sufficient to determine translational scissors congruence for polyhedra (maybe this was shown by Hadwiger?) and Sah determined
all the relations between Hadwiger invariants.”
The information in the wiki article on Hilbert's 3rd problem (which is consistent more or less with my memory)
Hilbert’s third problem
is that this is still open in dimensions d>4. It was proved in dimension 3 by Sydler and in dimension 4 by Jessen.
Danny Calegari
Dear Gil – I fixed the link to the wikipedia article in your comment.
I have been a bit sloppy with definitions above. There are in fact several variations of what is meant by scissors congruence. Given a space X in which one has polyhedra (usually, Euclidean,
hyperbolic or spherical space) there are several different groups of equivalence classes of polyhedra that one calls scissors congruence groups. One always allows cut and paste of polyhedra (the
scissors part) as equivalence, but one also identifies pairs of polyhedra that differ by the action of an element of some fixed (pseudo)-group G acting on X (usually by isometries). The case of
classical scissors congruence (that Dehn studied) is when one takes G to be the full group of isometries. You’re quite right that the question of whether Dehn’s invariants are a complete set of
invariants of classical scissors congruence is only known in Euclidean space in dimension up to 4 (the Dehn-Sydler-Jessen theorem) and in hyperbolic/spherical space in dimension up to 2 (!). Rich
Schwartz has some great notes on this, and a Java applet illustrating a key step in the proof, at his webpage.
One also studies, in Euclidean space, the translational scissors congruence, where one is only allowed to move polyhedra around by translation (and, in some versions, by an antipodal “flip” x -> -x).
In this context, Hadwiger defined his invariants, and showed that they are a complete invariant (in every dimension). Sah later found (I think) the complete set of relations between Hadwiger’s
invariants, thus explicitly calculating the translational scissors congruence groups. I mentioned translational scissors congruence in my comment, because I think this is closest to what Alesker studies.
Incidentally, in my post, I also distinguished between two flavors of spherical scissors congruence — the “classical” spherical scissors congruence group, where one allows all isometries, and the
spherical polytope group, where one allows no isometries (except the identity).
| {"url":"http://lamington.wordpress.com/2009/07/28/brianchon-gram-sommerville-and-ideal-hyperbolic-dehn-invariants/","timestamp":"2014-04-21T15:14:18Z","content_type":null,"content_length":"95673","record_id":"<urn:uuid:64a1fd51-9f5b-42d6-8f50-f56cbdf043a6>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
The Complexity of Counting in Sparse, Regular, and Planar Graphs
Salil P. Vadhan
We show that a number of graph-theoretic counting problems remain NP-hard, indeed #P-complete, in very restricted classes of graphs. In particular, we prove that the problems of counting matchings,
vertex covers, independent sets, and extremal variants of these all remain hard when restricted to planar bipartite graphs of bounded degree or regular graphs of constant degree. We obtain
corollaries about counting cliques in restricted classes of graphs and counting satisfying assignments to restricted classes of monotone 2-CNF formulae. To achieve these results, we introduce a new
interpolation-based reduction technique which preserves properties such as constant degree.
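To make the objects being counted concrete, here is a small illustrative sketch (mine, not from the paper): a brute-force count of the matchings of a graph, including the empty matching. Brute force is of course only feasible on tiny instances; the point of the hardness results above is that nothing substantially better is expected even on planar bipartite graphs of bounded degree.

```python
from itertools import combinations

def count_matchings(edges):
    """Count the matchings (sets of pairwise vertex-disjoint edges)
    of a graph given as a list of edges, by brute force."""
    total = 0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # no shared vertex
                total += 1
    return total

# Path 0-1-2-3: the matchings are {}, {01}, {12}, {23}, {01,23}.
print(count_matchings([(0, 1), (1, 2), (2, 3)]))  # 5
```

On a path with n vertices the count is the Fibonacci number F(n+1), which gives a quick sanity check.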
| {"url":"http://people.seas.harvard.edu/~salil/research/planar-abs.html","timestamp":"2014-04-18T10:34:47Z","content_type":null,"content_length":"2098","record_id":"<urn:uuid:5d6c1fc2-9da9-4396-a48e-a4ea253dbd7a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010 [00242]
[Date Index] [Thread Index] [Author Index]
Re: Replacement Rule with Sqrt in denominator. Also Bug in Series
• To: mathgroup at smc.vnet.net
• Subject: [mg114536] Re: Replacement Rule with Sqrt in denominator. Also Bug in Series
• From: Richard Fateman <fateman at cs.berkeley.edu>
• Date: Tue, 7 Dec 2010 06:45:21 -0500 (EST)
• References: <idigif$is4$1@smc.vnet.net>
On 12/6/2010 3:14 AM, Bill Rowe wrote:
>> (RJF, putting words in Bill Rowe's mouth)...
If a designer writes a program that does exactly what the designer
>> intended, then it is not a bug.
> (Bill Rowe) Yes.
>> OK, then, what term do you wish to use to refer to the result of
>> design and programming that results from an error in the design
> I don't agree the behavior of replacement rules reflects an
> error in design.
Aha, but you don't answer the question. Let us say that the designer
put in something you truly thought was wrong, for example the
designer truly thought that Cot[x] could be simplified to 1/x*Tan[x].
If this is not a bug, perhaps you would call it a BLUNDER?
>> that causes mathematical computations to proceed in a way that is
>> internally inconsistent, and is generally considered mathematical
>> nonsense?
Mathematica does not restrict its blunders to matching and rules.
> Since replacement rules are not doing mathematics there should
> be no requirement for the result to make mathematical sense.
I don't require rules to be applications of identities. I think that
it is mathematically inconsistent to match "I" in a+b*I but not
match "I" in 3+4*I.
> That is a replacement rule such as 2->1 is perfectly valid but
> clearly doesn't make 2 equal to 1.
OK, It is perfectly valid, but it doesn't work consistently.
(a+b)^2 /. 2-> 1 returns a+b AS EXPECTED.
(a+b)^(1/2) /. 2->1 returns Sqrt[a+b] IS THIS EXPECTED??
2+a*I /. 2->1 returns 1+a*I AS EXPECTED.
2+3*I /. 2->1 ...
>>> The fact replacement rules operate on the FullForm of an expression
>>> but what is displayed is not the FullForm does mean inexperienced
>>> users of Mathematica will encounter some difficulties with
>>> replacement rules.
Actually, replacement rules operate on the internal representation of
the expression. FullForm is simply another output form that is an
alternative string representation ( or XML or whatever..) of the
internal representation. One that is somewhat more explicit in some ways
than the usual standard form.
Clearly FullForm doesn't work as you think it does, because
FullForm[Sqrt[a+b]] has a "2" in it, but the rules don't know it.
So you REALLY need to know that FullForm of a Rational is not an
accurate representation of the internal form, and is hiding something
that is the yet Fuller Form, that explains that although
Rational[1,2] is what you see, it is not REALLY an object with a
Head of Rational and two parts;
f= F[3,5] has Head F, f[[1]] is 3, f[[2]] is 5.
g=3/5 has Head Rational g[[1]] is not 3, but an error.
But, that simply doesn't equate to being a bug.
No, but it means that any program that traverses an expression
(say, to do a replacement 2->1), has to check for a subexpression
that has head Rational, and must decide NOT to do replacements
within it. Is this extra work to get the wrong answer? Of course
if the replacement were 2->x then replacing Rational[1,2] by
Rational[1,x] would be a bad idea... so it is a tradeoff of some
sort. In my view, the wrong tradeoff. That means I think it is
a bug or blunder. You may view it otherwise. I may view your
view as incorrect. That's the way it goes.
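The tradeoff described here is easy to mimic outside Mathematica. The sketch below is my own illustration (plain Python, not Wolfram code): expressions are nested tuples, and the replacer deliberately treats Fraction (playing the role of Rational) as an atom whose internal parts are never visited, reproducing the behavior that (a+b)^2 /. 2->1 rewrites the exponent while (a+b)^(1/2) /. 2->1 does nothing.

```python
from fractions import Fraction

def replace(expr, old, new):
    """Structural replacement: rewrite every subexpression equal to
    `old`, but treat Fraction (like Mathematica's Rational) as an
    atom whose internal parts are never traversed."""
    if expr == old:
        return new
    if isinstance(expr, tuple):            # compound head[args...]
        return tuple(replace(part, old, new) for part in expr)
    return expr                            # atoms: ints, Fractions, symbols

# (a+b)^2 /. 2 -> 1 : the exponent IS an Integer 2, so it is replaced.
e1 = ('Power', ('Plus', 'a', 'b'), 2)
print(replace(e1, 2, 1))   # ('Power', ('Plus', 'a', 'b'), 1)

# (a+b)^(1/2) /. 2 -> 1 : the exponent is the atom Fraction(1, 2);
# the 2 "visible" in its FullForm is never reached by the traversal.
e2 = ('Power', ('Plus', 'a', 'b'), Fraction(1, 2))
print(replace(e2, 2, 1))   # exponent still Fraction(1, 2)
```

Whether skipping the inside of Rational is the right default is exactly the tradeoff being argued over in this thread.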
>> No, I think the error is that users need to have another kind of
>> pattern matching, and that when too many people stumble over the
>> same feature, the correct response is NOT, "the customer is wrong,
>> yet again."
>> It might be , Oh, you want the semantic pattern matcher.
>> Maybe, Here's how we can set it up to be your default command pattern
>> matcher...
> In another post Daniel Lichtblau described the issues with
> trying to make replacement rules work on what is displayed
> rather than full form.
You (and DanL?) seem to think that I am advocating making replacement
rules work on what is displayed. Yet in a/b, I see no occurrence of
Numerator[] or Denominator[], which encode the semantics. Semantics
is what I think should be used for patterns. It would be possible to
translate a_/b_ into "let x=a_/b_; consider Numerator[x] and
Denominator[x] ...."
The alternative which is
presented for consumption in this newsgroup is that patterns operate on
the syntax as depicted by FullForm.
But that too is a simplification, since it
does not work for Complex[], Rational[], or for that matter, other forms
that you may not know about (yet). Example. (is this a bug??)
Series[ x^5, {x,0, 10} ] /. 5-> 6 gives x^6 + O[x]^11
Series[1+x^5, {x,0, 10} ] /. 5-> 6 gives 1 + x^5 + O[x]^11
or is it a feature?
And do you know about SparseArray, for which replacements don't work either?
and who knows what else. Blame the customer for not knowing. That's
the ticket. | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00242.html","timestamp":"2014-04-18T10:42:08Z","content_type":null,"content_length":"30225","record_id":"<urn:uuid:27d7fde3-baff-4674-a9ee-cad20f899ed7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phase to ground fault
Thank you very much for your response jim.
neutral is not grounded accidentally. The neutral was never connected to the grounding resistor, from which the 51N gets activated whenever there is flow of current through the neutral. As the
neutral is not connected, the 51N relay never got activated.
Ahhh, so we did not have a flash-boom after all. I feel better, thanks.
what i was thinking was the neutral grounding resistor will dissipate the energy in case of a ground fault.
Yes, that is its purpose. It limits the current that flows through a phase-ground fault to perhaps ten or twenty amps so there's not much arc-welding inside the machine.
Now - ten amps at 13.8kv is 138kw, so the resistor is physically quite large.
The voltage across an arc is probably only forty volts or so, so at twenty amps you're putting less than a kilowatt into damaging the machine.
if there is no neutral grounding resistor then there might be a huge explosion if the neutral is touching the ground during a ground fault in the generator. Correct me if i am wrong.
You are exactly right. During a fault......nothing limits the current except the machine constants as i described above.
Vaporized copper expands by the same gas laws as dynamite vapor.
You've heard the "BOOM" from electrical explosions?
Now - let's think about what situation i think you are saying you had, the neutral wire grounded instead of tied to resistor.
That's just bypassing the neutral resistor, eh?
So this question becomes '...what if the neutral resistor is bypassed, ie your neutral wire touching ground
in absence
of a phase-ground fault?
What current flows then ?'
Well, let us think about it - aloud.
First thought step will be Kirchoff's Current Law:
The current that flows into the neutral from ground is
the sum
of the currents that flows out of the individual phases into ground.
What path do those currents take?
Those currents flow mostly through the distributed capacitance of your generator windings, transformer windings and interconnecting wiring. That's the only connection...
So the dominant impedance is that capacitance.
Next thought step - Tesla's gift to EE's- remember the three phases are 120 degrees out, so those capacitive currents add to zero.
So neutral current will be quite small so long as there's not much imbalance. I'd wager it's less than an amp. Well, maybe two... carry on...
Next thought step: Fourier's gift to EE's - remember your generator does not make a perfect sine wave, it has harmonic content like any other real genertor. Probably two or three percent in a machine
that small.
And much if not all of that will be third harmonic.
Aha ! Third harmonic - that's 180 hz, only 5.555 milliseconds for a whole cycle , 360 degrees at 180 hz.
Aha^2 !
5.555 milliseconds is also 120 degrees
(1/3 cycle)
at 60 hz
What that means is this - the third harmonic of each generator phase is
in phase
with the third harmonic of the other two phases.
Aha^3 ! So the third harmonic currents flowing into the neutral do not cancel out as did the fundamental, they directly add.
SO ------- in normal operation, the current returning in the neutral is largely third harmonic.
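The cancellation argument is easy to check numerically. This is my own illustration with made-up amplitudes (1 A fundamental, 0.05 A third harmonic per phase), not measurements from the thread:

```python
import math

def neutral_current(t, fundamental=1.0, third=0.05, f=60.0):
    """Sum of the three phase currents at time t (amps).
    Each phase: a fundamental plus a small third-harmonic component."""
    total = 0.0
    for k in range(3):
        shift = k * 2 * math.pi / 3          # 0, 120, 240 degrees
        total += fundamental * math.sin(2 * math.pi * f * t + shift)
        total += third * math.sin(3 * (2 * math.pi * f * t + shift))
    return total

# The 120-degree-shifted fundamentals cancel at every instant, while
# 3 * shift is a whole number of turns, so the third harmonics add:
# the neutral carries 3x the per-phase third harmonic, peak 0.15 A here.
peak = max(abs(neutral_current(n / 10000 / 60)) for n in range(10000))
print(round(peak, 3))   # 0.15
```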
I have measured this with a spectrum analyzer and find neutral curent to be maybe thirty percent third harmonic, maybe the majority like 85%. It doesn't look a lot like the 60 hz sinewave produced by
the machine.
Using the amplitudes and ratios of fundamental to third harmonic in both phase voltage and neutral current, you can get a pretty good guess at what is the distributed capacitance of your
generator-transformer system.
>>> Don't Give Up On Me Yet - I haven't forgot your question...<<<
I'm going to guess that in normal operation you'll find an amp or two of neutral current , at least 50% of it third harmonic. Harmonic content goes up with excitation.
Now - what happens if you bypass the neutral resistor with no phase fault present?
Basically, nothing. The dominant impedance in the circuit is still the distributed capacitance. Neutral current will change very little.
Incidentally that's how you size a grounding resistor, make it a few times smaller than the impedance of your distributed capacitance.
This is all described very clearly in IEEE's "Green Book", standard 142 i think it is.
Every power plant guy needs to have his own personal copy. Ask your company to get you one.
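As a numeric illustration of that sizing rule (all numbers assumed for illustration, not taken from the thread): with, say, 1 microfarad of distributed capacitance per phase on a 13.8 kV system, the per-phase capacitive reactance and a resistor "a few times smaller" work out as below. The factor of 3 reflects the usual rule of thumb that the resistor's let-through current should at least equal the total three-phase charging current; see the Green Book for the real procedure.

```python
import math

# Illustrative numbers (assumed, not from the thread):
f   = 60.0        # Hz
C   = 1e-6        # distributed capacitance per phase, farads
Vll = 13800.0     # line-to-line voltage, volts

Xc  = 1 / (2 * math.pi * f * C)     # per-phase capacitive reactance, ohms
R   = Xc / 3                        # "a few times smaller" than Xc
Vln = Vll / math.sqrt(3)            # voltage across R during a ground fault
I_fault = Vln / R                   # limited ground-fault current, amps
P_resistor = I_fault ** 2 * R       # power the resistor must handle, watts

print(round(Xc), round(R), round(I_fault, 1), round(P_resistor / 1000, 1))
```

With these assumed numbers the fault current comes out near 9 A, consistent with the "perhaps ten or twenty amps" figure above, and the resistor must dissipate tens of kilowatts during a fault, which is why it is physically large.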
Well, thanks for listening. Helps an old guy feel useful.
I'm sorry the answer turned out so long -
it's just that i was trained by a stern mentor to always "answer the question that was asked"
and i hope this seemingly endless post helped.
old jim | {"url":"http://www.physicsforums.com/showthread.php?p=4242057","timestamp":"2014-04-21T02:09:35Z","content_type":null,"content_length":"58348","record_id":"<urn:uuid:5e48e6ec-6632-4566-ac7f-101a885a3c5b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section 11.2
Section 11.2: Finite Potential Energy Wells: Quantitative
To see the odd-parity solutions, check the box, then drag a slider to see the odd states' transcendental equation (TE).
Restart | Show the transcendental equation as a function of energy instead | Show the energy eigenfunction and well instead
The finite square well problem is defined by a potential energy function that is zero everywhere except^1
V(x) = −|V[0]| for −a < x < a.
Since the potential energy function is finite, quantum mechanically there will be some leakage of the wave function into the classically-forbidden regions specified by x > a and x < −a. We will have
three regions in which we must solve the time-independent Schrödinger equation. In Region I (x < −a) and Region III (x > a), for bound-state solutions, E < 0, we can write the time-independent
Schrödinger equation as
[d^2/dx^2 − κ^2] ψ(x) = 0 , (11.1)
where^2 κ^2 ≡ 2m|E|/ħ^2. The equation above has the solutions
ψ(x) = Aexp(+κx) + Bexp(−κx). (11.2)
Because the energy eigenfunction must be zero at ±∞ we have
ψ[I](x) = Aexp(+κx) and ψ[III](x) = Dexp(−κx) (11.3)
for the solutions in Region I and III, respectively.
In Region II (−a < x < a), we expect an oscillatory solution since the energy is greater than the potential energy: E > V[0] or |V[0]| > |E|. In this region we can write the time-independent
Schrödinger equation as
[d^2/dx^2 + k^2] ψ(x) = 0 , (11.4)
where k^2 ≡ 2m(|V[0]| − |E|)/ħ^2. The above equation has the solutions ψ[II](x) = Bsin(kx) + Ccos(kx) which are valid solutions for Region II.
Next, we must match the solutions across the boundaries at x = −a and x = a. Matching the energy eigenfunctions across these boundaries means that the energy eigenfunctions and the first derivatives
of the energy eigenfunctions must match at each boundary so that we have a continuous and smooth wave function (no jumps or kinks).
Since the potential energy function is symmetric about the origin, there are even and odd parity solutions to the bound-state problem.^3 We begin by considering the even (parity) solutions and
therefore the ψ[II](x) = Ccos(kx) solution in Region II.
Matching proceeds much like the scattering cases we considered in Chapter 8. At x = −a we have the conditions
ψ[I](−a) = ψ[II](−a) → Aexp(−κa) = Ccos(−ka),
ψ'[I](−a) = ψ'[II](−a) → Aκ exp(−κa) = −Cksin(−ka) = Cksin(ka).
From the symmetry in the problem, we need not consider the boundary at x = a as it yields the exact same condition on energy eigenfunctions. We now divide the resulting two equations to give a
condition for the existence of even solutions: κ/k = tan(ka). This is actually a constraint on the allowed energies, as both k and κ involve the energy.
We now consider the following substitutions in terms of dimensionless variables:
ζ ≡ ka = [2m(|V[0]| − |E|)a^2/ħ^2]^1/2 (11.5)
ζ[0] ≡ [2m|V[0]|a^2/ħ^2]^1/2 (11.6)
where ζ[0 ] > ζ. Using these variables we have
[(ζ[0]/ζ)^2 − 1]^1/2 = tan(ζ) . (11.7)
This equation is a transcendental equation for ζ which itself is related via Eq. (11.6) to the energy. In addition, Eq. (11.7) only has solutions for particular values of ζ. We can solve this
equation numerically or graphically, and we choose graphically in the animation. The right-hand side of Eq. (11.7) is shown in black and the left-hand side is shown in red. In the animation, ħ = 2m =
1. You may also select the Show the transcendental equation as a function of energy instead link to see the equations as a function of energy. You can change a and |V[0]| by dragging the sliders to a
particular value to see how the left-hand side of Eq. (11.7) changes.
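Eq. (11.7) is straightforward to solve numerically as well. The following sketch is my own illustration (with ħ = 2m = 1 as in the animation, and a sample value ζ[0] = 6 chosen here): on each branch (nπ, nπ + π/2) with nπ < ζ[0], the left-hand side of Eq. (11.7) starts above tan(ζ) while tan(ζ) diverges at the right endpoint, so bisection brackets exactly one even-parity root per branch.

```python
import math

def even_roots(zeta0, tol=1e-12):
    """Roots of sqrt((zeta0/z)**2 - 1) = tan(z), i.e. Eq. (11.7),
    giving the even-parity bound states of the finite well."""
    f = lambda z: math.sqrt((zeta0 / z) ** 2 - 1) - math.tan(z)
    roots = []
    n = 0
    while n * math.pi < zeta0:
        lo = n * math.pi + 1e-9                  # f(lo) > 0 here
        hi = min(n * math.pi + math.pi / 2 - 1e-9, zeta0 - 1e-9)
        if f(lo) > 0 > f(hi):                    # one sign change per branch
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
            roots.append(0.5 * (lo + hi))
        n += 1
    return roots

zeta0 = 6.0              # i.e. 2m|V0|a^2/hbar^2 = 36 with hbar = 2m = 1
print(even_roots(zeta0)) # two even states; each zeta gives |E| via Eq. (11.5)
```

The same bisection applied to the odd-state equation, [(ζ[0]/ζ)^2 − 1]^1/2 = −cot(ζ) on the branches (nπ + π/2, (n+1)π), recovers the odd-parity states discussed below.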
We note that as the potential energy well gets shallower and/or narrower, ζ[0 ] < π/2 and there exists just one bound state. No matter how shallow or narrow the potential energy well, there will
always be at least one bound state.
As ζ[0] gets larger (meaning larger a and |V[0]|), the number of bound-state solutions increases. In addition, the intersection of the curves on the graph approaches ζ = nπ/2, with n odd. This means
that the energy (as measured from the bottom of the well) approaches that of the infinite square well of length 2a (ζ ≈ nπ /2 yields |V[0]| − |E| = ħ^2k^2/2m ≈ n^2π^2ħ^2/2m(2a)^2).
The solution for the odd (parity) energy eigenfunctions proceeds like the even-parity case except that we use the sine solution in Region II:
ψ[I](−a) = ψ[II](−a) → Aexp(−κa) = Bsin(−ka),
ψ'[I](−a) = ψ'[II](−a) → Aκ exp(−κa) = Bk cos(−ka).
Again, we need not consider the equations for x = a because by symmetry, they yield the same result. We again divide the two equations to give κ/k = −cot(ka), and using the same substitutions as
above yields: [(ζ[0]/ζ)^2 − 1]^1/2 = −cot(ζ).
This equation for the odd-parity solutions is shown in the animation by checking the text box and moving the slider. Note that as the potential energy well gets shallower and/or narrower, ζ[0] gets
smaller, and it is possible for there to be no intersections on the graph which means that there will not be any odd-parity states. No matter how shallow or narrow the symmetric finite potential
energy well, there will always be at least one bound state and it is an even-parity state.
As ζ[0] gets larger (meaning larger a and |V[0]|), the number of bound-state solutions increases. In addition, the intersection of the curves on the graph approaches ζ = nπ/2, with n even. Again this
means that the energy as measured relative to the bottom of the well approaches that of the infinite square well of length 2a.
In order to find the energy eigenfunctions, we must solve for the constants A, B, C, and D. This requires using the matching equations and then normalizing the wave function. In practice this is time
consuming, instead you can view the numerical solution by clicking the Show the energy eigenfunction and well instead link. To see other bound states, simply click-drag in the energy level diagram
and select a level. The selected level will turn red.
^1Note the differences between the potential energy functions describing the finite well and the infinite well. The width of the finite well is 2a and its walls are at V = 0 while the well is at V =
−|V[0]|. Bound states of the finite well, therefore, have E < 0. With the infinite square well, the width is L and its walls are at V = ∞ while the well is at V = 0. Bound states of the infinite
well, therefore, have E > 0.
^2Since E < 0, we choose to write E = −|E| to avoid any ambiguity in sign.
^3This is due to the fact that for even potential energy functions, the Hamiltonian commutes with the parity operator. As a consequence, there are even states in which ψ[e](−x) = ψ[e](x), and odd
states in which ψ[o](−x) = −ψ[o](x).
| {"url":"http://www.compadre.org/PQP/quantum-theory/section11_2.cfm","timestamp":"2014-04-20T23:55:57Z","content_type":null,"content_length":"25193","record_id":"<urn:uuid:773d8b1c-167f-46cb-87e3-f44494aaf48d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Atmospheric Chemistry and Physics (ACP, Atmos. Chem. Phys.), ISSN 1680-7324, Copernicus GmbH, Göttingen, Germany. doi:10.5194/acp-8-7697-2008
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
M. Laine and J. Tamminen (Finnish Meteorological Institute, Helsinki, Finland)
Published 19 December 2008; vol. 8, issue 24, pp. 7697–7707. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. This article is available from http://www.atmos-chem-phys.net/8/7697/2008/acp-8-7697-2008.html and the full text is available as a PDF file from http://www.atmos-chem-phys.net/8/7697/2008/acp-8-7697-2008.pdf
We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.
The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide area of applications since the late 1990's. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates. | {"url":"http://www.atmos-chem-phys.net/8/7697/2008/acp-8-7697-2008.xml","timestamp":"2014-04-17T07:45:51Z","content_type":null,"content_length":"9675","record_id":"<urn:uuid:0248eb43-3d92-4b81-a580-96153d6f6b43>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
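The full AARJ machinery is beyond a short sketch, but the quantity it targets (posterior model probabilities, then used for Bayesian model averaging) can be illustrated on a toy problem. Everything below is my own construction, not from the paper: two candidate models for synthetic data, constant versus linear, with the marginal likelihood of each computed by brute-force integration over a uniform-prior parameter grid rather than by reversible-jump sampling.

```python
import math, random

random.seed(0)

# Synthetic data from a linear model y = 1.0 + 2.0*x + noise
xs = [i / 10 for i in range(11)]
ys = [1.0 + 2.0 * x + random.gauss(0, 0.1) for x in xs]

def log_lik(params, model):
    """Gaussian log-likelihood with known sigma = 0.1."""
    s = 0.0
    for x, y in zip(xs, ys):
        pred = params[0] + (params[1] * x if model == 2 else 0.0)
        s += -0.5 * ((y - pred) / 0.1) ** 2 - math.log(0.1 * math.sqrt(2 * math.pi))
    return s

def marginal_lik(model, lo=-5.0, hi=5.0, n=200):
    """Integrate likelihood * uniform prior over a parameter grid."""
    step = (hi - lo) / n
    grid = [lo + (i + 0.5) * step for i in range(n)]
    prior = 1.0 / (hi - lo)          # uniform prior density per parameter
    total = 0.0
    if model == 1:
        for a in grid:
            total += math.exp(log_lik((a,), 1)) * prior * step
    else:
        for a in grid:
            for b in grid:
                total += math.exp(log_lik((a, b), 2)) * prior ** 2 * step ** 2
    return total

m1, m2 = marginal_lik(1), marginal_lik(2)
p2 = m2 / (m1 + m2)         # posterior probability of the linear model
print(round(p2, 4))         # the data strongly favor model 2
```

A model-averaged prediction would then weight each model's prediction by its posterior probability, which is how the model-selection uncertainty enters the final error bars; RJMCMC methods such as AARJ estimate these probabilities by sampling instead of grid integration, which is what makes higher-dimensional problems tractable.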
Basic Definitions Relating to Mathematical Expressions
Definitions Relating to Expressions
□ A mathematical Expression is a combination of symbols that can designate numbers (constants), variables, operations, symbols of grouping and other punctuation.
□ Literal Numerals are letters that stand for numbers (e.g. a, b, x, y).
□ A Constant represents a known value in an algebraic expression.
□ A Variable is a symbol that represents a changeable or undetermined quantity in an algebraic expression.
□ Operations include addition, subtraction, multiplication, division and exponentiation.
□ Grouping Symbols include parentheses (), curly brackets {}, or square brackets [].
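These definitions can be made concrete with a short sketch (my own illustration, not part of the lesson): Python's standard ast module parses an expression string into a tree whose leaves are the constants and variables and whose interior nodes are the operations.

```python
import ast

def classify(expression):
    """Split an algebraic expression into its constants, variables,
    and operations, per the definitions above."""
    tree = ast.parse(expression, mode='eval')
    constants, variables, operations = [], set(), []
    for node in ast.walk(tree):
        if isinstance(node, ast.Constant):
            constants.append(node.value)        # known values
        elif isinstance(node, ast.Name):
            variables.add(node.id)              # literal numerals like x, y
        elif isinstance(node, ast.BinOp):
            operations.append(type(node.op).__name__)
    return sorted(constants), sorted(variables), sorted(operations)

print(classify('3*x + 2*y - 7'))
# ([2, 3, 7], ['x', 'y'], ['Add', 'Mult', 'Mult', 'Sub'])
```

Exponentiation appears as a BinOp with operator Pow, while grouping symbols do not become nodes at all: parentheses only change the shape of the tree.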
| {"url":"http://www.aaamath.com/B/aa51.htm","timestamp":"2014-04-20T00:40:21Z","content_type":null,"content_length":"8208","record_id":"<urn:uuid:ed2c8a58-c1ce-4354-b97a-177f39f4abb2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Tarryall, CO Math Tutor
Find a Tarryall, CO Math Tutor
...For math it helps for it to be fun. I have picked up many math games over the years from attending numerous learning workshops and then using them with my students. It's wonderful to hear
students say "now I get it!" I have given science/environmental education programs for children through the state parks, area nature centers & after-school science clubs for many (!!) years.
42 Subjects: including SAT math, geometry, algebra 1, prealgebra
...I prefer to build on the student's natural inclinations than force them to learn it "my way." Some of the tricks and techniques I use now to teach students originally came from other students.
With algebra 1 introducing the fundamentals needed to succeed in all other math areas, I like to spend ...
10 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...It takes many hours of hard work. I know this from experience. In return, I will be completely committed and do everything I possibly can to help you achieve your goals.The study of physics
encompasses the fundamental laws that govern our universe!
16 Subjects: including algebra 1, algebra 2, calculus, geometry
...My name is David. In addition to my hours on Wyzant, I have 2+ years experience working in and out of schools as a Paraprofessional Teacher and Tutor in reading, writing, math, music, and
business for Elementary and Middle School students and adults. I am also a recent college graduate with an ...
8 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I minored in Spanish at Willamette University. I've also taken graduate-level courses in Madrid at La Universidad Complutense. I've played clarinet for 16 years, including two university
13 Subjects: including prealgebra, Spanish, reading, Japanese
| {"url":"http://www.purplemath.com/Tarryall_CO_Math_tutors.php","timestamp":"2014-04-18T04:15:35Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:5ee6e582-084b-4a7c-8597-68f122221a56>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/WARC/CC-MAIN-20140423032005-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
IN MEMORIAM
CHARLES NATHANIEL FRIEDMAN
Chas Friedman joined the Department of Mathematics at UT Austin as an assistant professor in the fall of 1973. He had recently completed a dissertation in mathematical physics at Princeton under the
supervision of Edward Nelson, followed by an appointment as a C.L.E. Moore Instructor at MIT. His appointment at UT was seen as a step in building the presence here in applied mathematics/
mathematical physics, and he had colleagues of similar interests in Ralph Showalter, John Dollard, Klaus Bichteler, and Roy Kerr in mathematics, as well as a number of faculty members in physics.
There was a mathematical physics seminar with participation from both departments, in which Chas became an active participant. Presently, he began a collaboration with John Dollard that resulted in
the publication of the volume Product Integration with Applications to Differential Equations in the Addison-Wesley Encyclopedia of Applied Mathematics, which was edited by Gian-Carlo Rota. There
followed an additional extensive collaboration on product integration and quantum-mechanical scattering theory, with an accompanying stream of research publications. In fall 1979, Chas became a
tenured associate professor.
During the remainder of his career, Chas published further results in other branches of analysis. In particular, he published on probability, in which he had a lively interest. Recently, he wrote a
book chapter for Handbook of Probability, edited by Tamas Rudas, which is to be published soon. He also spent a considerable amount of time thinking about the notoriously difficult issue of
NP-completeness, which fascinated him throughout his career.
He was a popular teacher who cared deeply about his students and won a College of Natural Sciences teaching award in 1999. In addition he was an effective coach for the mathematics department's
Putnam exam team and served a stint as graduate adviser in mathematics. Starting around 2000, he became heavily involved in the department's information technology system, taking on the job of web
master and collaborating with the department's operating systems specialist, Dr. Maorong Zou, to ensure the integrity and usefulness of the departmental network.
Chas died of an exceptionally aggressive form of cancer on November 4, 2006, after an illness of less than six weeks. The material so far has been more concerned with Chas's persona. The remainder is
more concerned with the person.
Chas had a great sense of innocent fun, and he relished word play. In particular he loved utterances that could be interpreted in more than one way. This expressed itself in standard routines. If you said to him, “Will you call me a taxi?” he would say, “All right, you're a taxi.” If you said, “Did you take a shower?” he would say, “Why, is one missing?” And he would victimize people by asking them “Is Handel Baroque?” and when they replied affirmatively saying “Well, why don't you lend him some money?”
Chas was married twice, and the period of his second marriage to the former Mehri Rae Loftin was doubtless the happiest of his life. Chas and Rae established a residence in Wimberley, Texas,
combining the best features of rusticity and modernity. Over a period of twenty years, they raised the children from their previous marriages and hosted many happy occasions, particularly for their
musical friends in mathematics. In this setting, the wide range of Chas's interests came into play. While the designation "Renaissance man" is often misapplied, Chas actually was one, with interests
including mathematical research, English literature, classical music, folk music, carpentry (practitioner), guitar (player), violin (player), trombone (player), oil painting (practitioner), and
jewelry making.
Chas will be remembered as a congenial colleague and a steadfast friend, who devoted more than thirty years of his life to research and to the education of students at UT. His devotion to his
students was shown in the fact that he insisted on grading tests in the classes he was teaching until he was too weak to continue. He died bravely, telling Rae that what she had to do was harder than
what he had to do, and he apologized for "dudding out" on her. His family and his friends miss him greatly.
William Powers Jr., President
The University of Texas at Austin
Sue Alexander Greninger, Secretary
The General Faculty
This memorial resolution was prepared by a special committee consisting of Professors John Dollard (chair), Efraim Armendariz, and Klaus Bichteler. | {"url":"http://www.utexas.edu/faculty/council/2006-2007/memorials/friedman/friedman.html","timestamp":"2014-04-18T12:15:29Z","content_type":null,"content_length":"23414","record_id":"<urn:uuid:94fe34bb-7898-4180-b14f-02315d2ceea2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Filters for Real World Accelerations
Integrating the accelerations twice to get displacements is equivalent to a low-pass filter, without doing anything to the raw data. It multiplies a component with frequency ##\omega## by ##-1/\omega^2##.
Filtering the data in the time domain can be problematic if the filter introduces phase shifts in the different frequency components of the measured data. You can design finite impulse response (FIR)
digital filters that don't introduce phase shifts, but unless there is a specific noise problem (e.g. vibration or electrical noise at a particular frequency that you want to remove) you probably
don't need to do it at all.
A common problem with this sort of measurement is low frequency "drift" because the accelerometer readings are not exactly correct for zero acceleration. One way to fix that is do it empirically. If
you know the start and end positions independent of the accelerometer data, add a constant acceleration that makes the end point of the graph correct, i.e. use the formulas ##v = at##, ##x = at^2/2##
to estimate the "zero error acceleration" ##a##.
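As a sketch of that empirical correction (an illustration, not from the post; NumPy, the crude cumulative-sum integration, and the function name are my assumptions):

```python
import numpy as np

def debias(acc, dt, x_end_true=0.0):
    """Estimate a constant "zero error acceleration" a such that the doubly
    integrated displacement ends at the known end position, using x = a*t^2/2.
    Returns the corrected acceleration record and the estimated bias a."""
    v = np.cumsum(acc) * dt                 # crude first integration
    x = np.cumsum(v) * dt                   # crude second integration
    T = (len(acc) - 1) * dt                 # total record time
    a = 2.0 * (x[-1] - x_end_true) / T**2   # bias that explains the drift
    return acc - a, a
```

With a synthetic record plus a constant offset, the recovered bias matches the offset to within the discretization error of the cumulative sums.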
Trapeziod integration is probably as good as anything. It uses all the data without "inventing" anything extra, or throwing any information away.
Another practical tip is to use the highest data sampling rate that you can, to capture rapid changes of acceleration as accurately as possible. Unfortunately there is usually a trade-off between capturing the peak acceleration values without clipping and getting accurate data when the acceleration is low.
If you can do your experiment so the start and end positions are the same, you can enforce that by taking an FFT of all the data, and setting just the "zero frequency" Fourier coefficient to 0. You
can then integrate the data in the frequency domain by dividing the Fourier coefficients by ##i\omega## or ##-\omega^2## before doing the inverse FFT, rather than doing the integration in the time
domain with the trapezoidal rule. | {"url":"http://www.physicsforums.com/showpost.php?p=4267957&postcount=2","timestamp":"2014-04-17T03:59:18Z","content_type":null,"content_length":"8952","record_id":"<urn:uuid:18ba0f0e-7766-49ca-81c5-eeccb73fcb73>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
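A sketch of that frequency-domain route (again an illustration; the function name, and the assumption that the record is periodic with equal start and end positions, are mine):

```python
import numpy as np

def displacement_from_accel(acc, dt):
    """Double-integrate an acceleration record in the frequency domain:
    zero the DC (zero-frequency) coefficient, divide the remaining
    coefficients by (i*omega)^2 = -omega^2, and transform back."""
    n = len(acc)
    A = np.fft.rfft(acc)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)  # angular frequencies
    X = np.zeros_like(A)
    X[1:] = A[1:] / (-w[1:] ** 2)               # integrate twice per component
    # X[0] stays 0: removes drift and enforces equal start/end position
    return np.fft.irfft(X, n)
```

Feeding it the acceleration of a pure sine that fits an integer number of periods into the record returns the sine itself, since each Fourier bin is handled exactly.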
Procedural Fluency
Procedural fluency refers to knowledge of procedures, knowledge of when and how to use them appropriately, and skill in performing them flexibly, accurately, and efficiently. In the domain of number,
procedural fluency is especially needed to support conceptual understanding of place value and the meanings of rational numbers. It also supports the analysis of similarities and differences between
methods of calculating. These methods include, in addition to written procedures, mental methods for finding certain sums, differences, products, or quotients, as well as methods that use
calculators, computers, or manipulative materials such as blocks, counters, or beads.
Procedural fluency refers to knowledge of procedures, knowledge of when and how to use them appropriately, and skill in performing them flexibly, accurately, and efficiently.
Students need to be efficient and accurate in performing basic computations with whole numbers (6+7, 17–9, 8×4, and so on) without always having to refer to tables or other aids. They also need to
know reasonably efficient and accurate ways to add, subtract, multiply, and divide multidigit numbers, both mentally and with pencil and paper. A good conceptual understanding of place value in the
base-10 system supports the development of fluency in multidigit computation.^11 Such understanding also supports simplified but accurate mental arithmetic and more flexible ways of dealing with
numbers than many students ultimately achieve.
Connected with procedural fluency is knowledge of ways to estimate the result of a procedure. It is not as critical as it once was, for example, that students develop speed or efficiency in
calculating with large numbers by hand, and there appears to be little value in drilling students to achieve such a goal. But many tasks involving mathematics in everyday life require facility with
algorithms for performing computations either mentally or in writing.
In addition to providing tools for computing, some algorithms are important as concepts in their own right, which again illustrates the link between conceptual understanding and procedural fluency.
Students need to see that procedures can be developed that will solve entire classes of problems, not just individual problems. By studying algorithms as “general procedures,” students can gain
insight into the fact that mathematics is well structured (highly organized, filled with patterns, predictable) and that a carefully developed procedure can be a powerful tool for completing routine tasks.
It is important for computational procedures to be efficient, to be used accurately, and to result in correct answers. Both accuracy and efficiency can be improved with practice, which can also help
students maintain fluency. Students also need to be able to apply procedures flexibly. Not all computational situations are alike. For example, applying a standard pencil-and-paper algorithm to find
the result of every multiplication problem is neither neces- | {"url":"http://books.nap.edu/openbook.php?record_id=9822&page=121","timestamp":"2014-04-19T22:52:06Z","content_type":null,"content_length":"35944","record_id":"<urn:uuid:da01e197-d745-4f57-82e7-2a98b2558fc3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine asymptotic behavior of algebraic curves
Take an example polynomial $f(x, y) = y^2 x + y^3 - x^2$. A solution to $f(x,y)=0$ exists with Puiseux series given by $y(x) = x^{2/3} - x/3 + x^{4/3}/9+\cdots$. I got this by having Mathematica
directly solve $f=0$ for $y$ and then perform a Puiseux series expansion for me about $x=0$.
However, since a direct solution to $f=0$ is obviously not available for arbitrary $f$, what I'd like is a method to determine just the lowest-order term of the expansion of $y(x)$ about $x=0$ for
arbitrary $f(x,y)$. In the example above, this term would be $x^{2/3}$.
ag.algebraic-geometry algebraic-curves puiseux-series
2 Answers
Finding such solutions was in fact Newton's original motivation for inventing "Newton polygons". There's a nice exposition of this in Chapter 2 of "The Implicit Function Theorem: History, Theory, and Applications" by Steve Krantz and Hal Parks.
The magic words are "Newton Polygon". (there is a vast literature on the subject, the wikipedia article is one place to start).
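For the record, the polygon step can be sketched numerically (my own illustration, not from either answer; the function name is made up). Collect the exponent pairs $(i,j)$ of the monomials $x^i y^j$ of $f$; a leading exponent $\mu$ for a branch $y \sim c\,x^\mu$ must make the minimum of $i + \mu j$ be attained by at least two monomials:

```python
from fractions import Fraction
from itertools import combinations

def leading_exponents(monomials):
    """Candidate leading exponents mu for branches y(x) ~ c*x^mu of f(x,y)=0
    near x=0, via the Newton polygon: mu > 0 is admissible when the minimum
    of i + mu*j over the exponent pairs (i, j) is attained at least twice."""
    exps = [(Fraction(i), Fraction(j)) for i, j in monomials]
    found = set()
    for (i1, j1), (i2, j2) in combinations(exps, 2):
        if j1 == j2:
            continue
        mu = (i2 - i1) / (j1 - j2)   # balance i1 + mu*j1 == i2 + mu*j2
        if mu <= 0:
            continue                 # keep only branches vanishing at x=0
        vals = [i + mu * j for i, j in exps]
        if vals.count(min(vals)) >= 2 and i1 + mu * j1 == min(vals):
            found.add(mu)
    return sorted(found)

# f(x, y) = y^2 x + y^3 - x^2  ->  exponent pairs (i, j) of x^i y^j
print(leading_exponents([(1, 2), (0, 3), (2, 0)]))  # [Fraction(2, 3)]
```

For the question's example this recovers the $x^{2/3}$ leading term directly, without solving for $y$.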
| {"url":"http://mathoverflow.net/questions/139489/determine-asymptotic-behavior-of-algebraic-curves","timestamp":"2014-04-16T16:26:42Z","content_type":null,"content_length":"53306","record_id":"<urn:uuid:8b3299d2-445f-4ded-9d6e-37365c43a25d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modeling of Hot-Mix Asphalt Compaction: A Thermodynamics-Based Compressible Viscoelastic Model
508 Captions
Figure 1. Equation. Gradient and divergence operators with respect to reference configuration. Gradient of script a equals partial derivative of script a with respect to script X. In index notation,
gradient of script a subscript ij equals the partial derivative of script a subscript i with respect to script X subscript j. Divergence of script T equals partial derivative of script T with respect
to script X. In index notation, divergence of script T subscript i equals the partial derivative of script T subscript ij with respect to script X subscript j.
Figure 2. Equation. Gradient and divergence operators with respect to current configuration. Gradient of script a equals partial derivative of script a with respect to script x. In index notation,
Gradient of script a subscript ij equals the partial derivative of script a subscript i with respect to script x subscript j. Divergence of script T equals partial derivative of script T with respect
to script x. In index notation, divergence of script T subscript i equals the partial derivative of script T subscript ij with respect to script x subscript j
Figure 3. Illustration. Evolution of natural configurations associated with microstructural transformations resulting from the material response to an external stimulus. This graphic shows the
stress-free state that occurs when tractions are removed. The form B subscript K subscript R can go through two processes: F subscript K subscript R or G. The process F subscript K subscript R
produces the form B subscript K subscript (e times t). The process G produces the form B subscript K subscript (p times t), which goes through the process F subscript K subscript (p times t) and
produces the form B subscript K subscript (e times t)
Figure 4. Equation. Energy-dissipation inequality. Sum of the following: T scalar product with L minus rho product with material time derivative of psi minus rho product (zeta product with material
time derivative of theta) minus q scalar product with gradient of theta divided by theta, equals rho product (theta product Xi) equals xi, which is greater than or equal to zero
Figure 5. Equation. Reduced energy-dissipation inequality. Sum of the following: T scalar product with L minus rho product with material time derivative of psi equals symbol hat of xi, which is
greater than or equal to zero
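On one reading of the verbal captions for figures 4 and 5, the two inequalities transcribe to the following LaTeX (my transcription, offered only as a reading aid, with $\cdot$ the scalar product and an overdot the material time derivative):

```latex
% Figure 4: energy-dissipation inequality
T \cdot L \;-\; \rho\,\dot\psi \;-\; \rho\,\zeta\,\dot\theta
  \;-\; \frac{q \cdot \nabla\theta}{\theta}
  \;=\; \rho\,\theta\,\Xi \;=\; \xi \;\ge\; 0

% Figure 5: reduced energy-dissipation inequality
T \cdot L \;-\; \rho\,\dot\psi \;=\; \hat\xi \;\ge\; 0
```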
Figure 6. Equation. Helmholtz potential for mixture. psi equals mu (a function of roman numeral III subscript G) product (sum of: roman numeral I subscript (B subscript K subscript (p times t)) minus
three minus natural logarithm of (roman numeral III subscript (B subscript K subscript (p times t))), the sum product divided by twice rho subscript K subscript (p times t)
Figure 7. Equation. Rate of dissipation function. Symbol hat of xi equals eta (a function of roman numeral III subscript G) times the scalar product of D subscript K subscript (p times t) with the
product of B subscript K subscript (p times t) and D subscript K subscript (p times t)
Figure 8. Equation. Shear-modulus function. mu equals symbol hat of mu times (sum of: one plus lambda subscript one times (roman numeral III subscript G) exponent twice n subscript one) exponent q
subscript one
Figure 9. Equation. Viscosity function. eta equals symbol hat of eta times (sum of: one plus lambda subscript two times (roman numeral III subscript G) exponent twice n subscript two) exponent q
subscript two
Figure 10. Equation. Constitutive equation for stress. Stress T equals sum of the following: twice rho times roman numeral III subscript B subscript K subscript (p times t) times partial derivative
of psi with respect to roman numeral III subscript B subscript K subscript (p times t) times the identity tensor I plus twice rho times partial derivative of psi with respect to roman numeral I
subscript B subscript K subscript (p times t) times the tensor B subscript K subscript (p times t)
Figure 11. Equation. Evolution equation. Convected time derivative of tensor B subscript K subscript (p times t) equals minus two times sum of the following: inverse of tensor V subscript K subscript
(p times t) times tensor T times tensor V subscript K subscript (p times t) minus rho times roman numeral III subscript G times partial derivative of psi with respect to roman numeral III subscript G
times the identity tensor I, the total product divided by eta
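Under the same reading, the verbal equations of figures 6 through 11 appear to transcribe as follows (my reconstruction of the captions; I write "K subscript (p times t)" as $\kappa_{p(t)}$ and the convected time derivative with $\triangledown$):

```latex
% Figure 6: Helmholtz potential for the mixture
\psi = \frac{\mu(\mathrm{III}_G)}{2\,\rho_{\kappa_{p(t)}}}
       \left( \mathrm{I}_{B_{\kappa_{p(t)}}} - 3
              - \ln \mathrm{III}_{B_{\kappa_{p(t)}}} \right)

% Figure 7: rate of dissipation function
\hat\xi = \eta(\mathrm{III}_G)\,
          D_{\kappa_{p(t)}} \cdot \left( B_{\kappa_{p(t)}} D_{\kappa_{p(t)}} \right)

% Figures 8 and 9: shear-modulus and viscosity functions
\mu  = \hat\mu  \left( 1 + \lambda_1 \,\mathrm{III}_G^{\,2 n_1} \right)^{q_1},
\qquad
\eta = \hat\eta \left( 1 + \lambda_2 \,\mathrm{III}_G^{\,2 n_2} \right)^{q_2}

% Figure 10: constitutive equation for stress
T = 2 \rho \,\mathrm{III}_{B_{\kappa_{p(t)}}}
      \frac{\partial \psi}{\partial \mathrm{III}_{B_{\kappa_{p(t)}}}} \, I
  + 2 \rho \,\frac{\partial \psi}{\partial \mathrm{I}_{B_{\kappa_{p(t)}}}} \,
      B_{\kappa_{p(t)}}

% Figure 11: evolution equation
\overset{\triangledown}{B}_{\kappa_{p(t)}}
  = -\frac{2}{\eta} \left( V_{\kappa_{p(t)}}^{-1} \, T \, V_{\kappa_{p(t)}}
      - \rho \,\mathrm{III}_G \,\frac{\partial \psi}{\partial \mathrm{III}_G} \, I \right)
```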
Figure 12. Illustration. Schematic for the one-dimensional compression problem. This graphic shows a cylinder with downward forces T subscript zz. Other forces are shown on the sides and bottom.
Three cylindrical coordinates are labeled z, r, and theta
Figure 13. Chart. Creep solution for the material model for HMA—creep load. This chart shows a plot obtained for the given material by calculating the solution to the semi-inverse creep problem using
MATLAB^®. The creep load chart shows T bar subscript zz as a straight line at 6.6 time parameter t bar for a range of stresses, T bar subscript zz. Stress, T bar subscript zz is on the y-axis and
Time parameter t bar is on the x-axis.
Figure 14. Chart. Creep solution for the material model for HMA—Almansi strain. This chart shows a plot obtained for the given material by calculating the solution to the semi-inverse creep problem
using MATLAB^®. The Almansi strain chart shows the line e bar with e tilde times t bar on the y-axis and time parameter t bar on the x-axis
Figure 15. Chart. Total stretch calculated from the solution to the one-dimensional creep problem. This chart shows a plot of total stretch, one of the relevant field variables calculated. The
material does return to a natural (stress-free) configuration upon unloading; therefore, the dissipation occurs because of a change in the microstructure of the material. The chart shows time
parameter t bar on the x-axis and total stretch, lambda tilde on the y-axis.
Figure 16. Chart. Evolution of natural configuration calculated from the solution to the one-dimensional creep problem. This chart shows a plot of evolution of natural configuration, one of the
relevant field variables calculated. The material does return to a natural (stress-free) configuration upon unloading; therefore, the dissipation occurs because of a change in the microstructure of
the material. The chart shows time parameter t bar on the x-axis and evolution of natural configuration, b bar superscript 2 on the y-axis
Figure 17. Chart. Stored energy calculated from the solution to the one-dimensional creep problem. This chart shows a plot of stored energy, one of the relevant field variables calculated. The
material does return to a natural (stress-free) configuration upon unloading; therefore, the dissipation occurs because of a change in the microstructure of the material. The chart shows time
parameter t bar on the x-axis and stored energy, psi bar tilde on the y-axis.
Figure 18. Chart. Rate of dissipation calculated from the solution to the one-dimensional creep problem. This chart shows a plot of rate of dissipation, one of the relevant field variables
calculated. The material does return to a natural (stress-free) configuration upon unloading; therefore, the dissipation occurs because of a change in the microstructure of the material. The chart
shows time parameter t bar on the x-axis and dissipation rate, xi bar tilde on the y-axis
Figure 19. Illustration. Schematic for the one-dimensional stress-relaxation problem. This graphic shows a cylinder with downward forces e subscript zz times t equals C, a constant. Below the
cylinder is the description r times t equals R times t, theta times t equals Theta times t, z times t equals lambda times t times Z times t
Figure 20. Chart. Applied strain calculated from the solution to the one-dimensional stress-relaxation problem. This chart shows the results of step-input strain calculations. This applied strain
chart shows time parameter t bar on the x-axis and the line e tilde subscript zz with strain (times 10 superscript -3) on the y-axis.
Figure 21. Chart. Stress relaxation calculated from the solution to the one-dimensional stress-relaxation problem. This chart shows the results of step-input strain calculations. This stress
relaxation chart shows time parameter t bar on the x-axis and the line T subscript zz with stress (times 10 superscript -4) on the y-axis.
Figure 22. Chart. Stored energy calculated from the solution to the one-dimensional stress-relaxation problem. This chart shows the results of step-input strain calculations. This stored energy chart
shows time parameter t bar on the x-axis and the line psi bar tilde with stored energy (times 10 superscript -4) on the y-axis.
Figure 23. Chart. Rate of dissipation calculated from the solution to the one-dimensional stress-relaxation problem. This chart shows the results of step-input strain calculations. This rate of
dissipation chart shows time parameter t bar on the x-axis and the line xi bar tilde with rate of dissipation (times 10 superscript -4) on the y-axis
Figure 24. Chart. FE solution to the model response to applied constant stress—creep load. This chart, with figure 25, shows the comparison for the constant applied stress. The chart shows load as a
line, using two axes: stress, T bar subscript zz, and time parameter, t bar
Figure 25. Chart. FE solution to the model response to applied constant stress—total stretch. This chart, with figure 24, shows the comparison for the constant applied stress. The chart shows
analytical as a line, using total stretch, lambda tilde, on the y-axis and time parameter, t bar, on the x-axis. Finite-element model (FEM) is a line that roughly parallels analytical. The calculated
values (using MATLAB^®) of the stretch and stress in response to applied stress and the corresponding finite-element (FE) solutions agree well
Figure 26. Chart. FE solution to the model response to applied constant strain (compressive 0.05)—applied strain. This chart, with figure 27, shows the comparison for the constant applied strain. The
chart shows a line on two axes: strain (e tilde times t bar) and time parameter, t bar.
Figure 27. Chart. FE solution to the model response to applied constant strain (compressive 0.05)—normal stress response. This chart, with figure 26, shows the comparison for the constant applied
strain. The chart shows analytical as a line, using two axes: stress, T bar subscript zz, on the y-axis and time parameter, t bar, on the x-axis. Finite-element model (FEM) is a line that roughly
follows analytical. The calculated values (using MATLAB^®) of the stretch and stress in response to applied strain and the corresponding finite-element (FE) solutions agree well
Figure 28. Illustration. Schematic of the constant shear loading applied to a unit cube. This graphic shows a unit cube labeled gamma with dimensions x, y, and z. The cube shears in only a lateral
direction, x, indicating a shear deformation
Figure 29. Chart. Shear stress (T[12]) observed in response to constant shear loading. This chart represents the shear in the xy-plane. The material response corresponds with that of a nonlinear
material due to the exhibition of normal stress differences. The chart shows the lines gamma equals 0.1 and gamma equals 0.5 using two axes: T subscript 12 (MPa) and time (seconds)
Figure 30. Chart. Comparison of the first normal stress (T[11]–T[22]) response to constant shear loading. This chart represents the first normal stress. The material response corresponds with that of
a nonlinear material due to the exhibition of normal stress differences. The chart shows the lines gamma equals 0.1 and gamma equals 0.5 using two axes: T subscript 11 through T subscript 22 (MPa)
and time (seconds)
Figure 31. Chart. Comparison of the second normal stress (T[22]–T[33]) response to constant shear loading. This chart represents the second normal stress. The material response corresponds with that
of a nonlinear material due to the exhibition of normal stress differences. The chart shows the lines gamma equals 0.1 and gamma equals 0.5 using two axes: T subscript 22 through T subscript 33 (MPa)
and time (seconds)
Figure 32. Illustration. Schematic of the constant shear rate applied to a unit cube. This graphic shows a unit cube labeled Kappa with dimensions x, y, and z. The cube shears in only a lateral
direction, x, indicating a shear deformation
Figure 33. Chart. Shear stress (T[12]) observed in response to constant shear rate loading. This chart represents the shear in the xy-plane. The chart shows the lines kappa equals 0.5 and kappa
equals 0.1 using two axes: T subscript 12 (MPa) and time (seconds). The material response corresponds with that of a nonlinear material due to the exhibition of normal stress differences
Figure 34. Chart. Shear stress (T[12]) as a function of the shear rate (using the model parameters in table 1). This chart represents the shear in the xy-plane. The chart shows a straight line that
begins at 0 MPa (T subscript 12) and 0 seconds to the power of -1 (K) and ends at 1.5 MPa and 0.5 seconds to the power of -1
Figure 35. Chart. Comparison of the first normal stress (T[11]–T[22]) response to constant shear rate loading. This chart represents the first normal stress. The chart shows the lines kappa equals
0.1 and kappa equals 0.5 using two axes: T subscript 11 through T subscript 22 (MPa) and time (seconds). The material response corresponds with that of a nonlinear material due to the exhibition of
normal stress differences
Figure 36. Chart. Comparison of the second normal stress (T[22]–T[33]) response to constant shear rate loading. This chart represents the second normal stress. The chart shows the lines kappa equals
0.1 and kappa equals 0.5 using two axes: T subscript 22 through T subscript 33 (MPa) and time (seconds). The material response corresponds with that of a nonlinear material due to the exhibition of
normal stress differences
Figure 37. Photo. Superpave^® gyratory compactor. This photo shows a piece of laboratory equipment, the Superpave^® gyratory compactor. There is a metal drum housed in blue, metal casing that is
attached to an electronic control system.
Figure 38. Photo. Static steel-wheel roller. This photo shows an unmanned static steel wheel roller on pavement. The vehicle has an operator's platform with a seat and controls and two large, steel
rollers at the front and back
Figure 39. Illustration FE mesh used in modeling the SGC. This graphic shows an example of the finite-element (FE) mesh of the Superpave^® gyratory compactor (SGC). A cylinder sits on top of a wider,
shorter cylinder. The top cylinder shows S, mises, from +5.750e-01 to +6.900e-01, with one round spot in the lower part of the cylinder from +5.175e-01 to +5.750e-01. The bottom, wider but
cylinder shows S, mises, from +4.881e+06 to +6.900e-01. Below the cylinders is the following information: ODB: sh125sm1185387494.22096.odb, ABAQUS/Standard 6.4-5; Step: Step-2; Increment 13: Step
Time equals 4.513; Primary Var: S, Mises; Deformed Var: U, Deformation Scale Factor: +1.000e+00
Figure 40. Chart. Analysis of the sensitivity of compaction to estimated mu. This chart shows an example of the influence of mu, keeping all other parameters the same. More compaction is achieved as the estimated mu decreases. The chart shows three lines: estimated mu equals 1900, estimated mu equals 1700, and estimated mu equals 1500. The lines are on two axes: normalized height and time of compaction
Figure 41. Chart. Analysis of the sensitivity of compaction to n[1]. This chart shows an analysis of the parameter n subscript 1. mu is significantly affected by n subscript 1, which controls the
maximum compaction of a mixture. The chart shows three lines: n subscript 1 equals 5.0, n subscript 1 equals 4.0, and n subscript 1 equals 3.0. The lines are on two axes: normalized height and time
of compaction (seconds)
Figure 42. Chart. Analysis of the sensitivity of compaction to estimated eta. This chart shows that the parameter estimated eta controls the point at which the material behavior starts to change from a very
low-viscosity fluidlike behavior to a high-viscosity fluidlike behavior. The chart shows three lines: estimated eta equals 1600, estimated eta equals 1400, and estimated eta equals 1300. The lines
are on two axes: normalized height and time of compaction (seconds).
Figure 43. Chart. Analysis of the sensitivity of compaction to λ[2]. This chart shows that the parameter lambda subscript 2 is directly related to the initial slope of the compaction curve. The chart
shows three lines: lambda subscript 2 equals 0.26, lambda subscript 2 equals 0.25, and lambda subscript 2 equals 0.22. The lines are on two axes: normalized height and time of compaction (seconds)
Figure 44. Chart. Analysis of the sensitivity of compaction to q[2]. This chart shows that the model parameter q subscript 2 contributes to the nonlinear change of viscosity from the start of the
compaction process. The overall compaction of the mixture is higher at lower (more negative) values of q subscript 2. The chart shows three line: q subscript 2 equals -34, q subscript 2 equals -30,
and q subscript 2 equals -26. The lines are on two axes: normalized height and time of compaction (seconds)
Figure 45. Equation. Modified shear-modulus function. mu equals symbol hat of mu times (sum of: one plus 0.25 times (roman numeral III subscript G) exponent twice n subscript one) exponent -25
Figure 46. Equation. Modified viscosity function. eta equals symbol hat of eta times (sum of: one plus lambda subscript two times (roman numeral III subscript G) exponent 5) exponent q subscript two
Figure 47. Chart. Illustration of the relationship of the model's parameters to the compaction process. This graphic shows a chart with normalized height on the y-axis and time of compaction
(seconds) on the x-axis. The chart contains a line and descriptions of points and planes on the line: the initial viscous dissipation limit, due to lambda subscript 2; the onset of high viscosity, determined by the estimated eta; the degree of nonlinear transition, controlled through q subscript 2; the beginning of material consolidation, determined by the estimated mu; the rate of consolidation, governed by n subscript 1; and the asymptotic consolidation limit, due to lambda subscript 1
Figure 48. Chart. Influence of angle of gyration on the compaction curve. This chart shows the finite-element (FE) simulations of the Superpave^® gyratory compactor (SGC) at different angles of
gyration. The chart shows normalized height on the y-axis and time of compaction (seconds) on the x-axis. It has lines for 0.5, 1.25, 2.0, and 3.0 degrees
Figure 49. Chart. Maximum shear stress at the top of the specimen for a gyration angle of 1.25 degrees. The finite-element (FE) model was used to determine the maximum shear stresses at the top of
the specimens for a gyration angle of 1.25 degrees. This chart shows Tresca stress (MPa) on the y-axis and time of compaction (seconds) on the x-axis, and it depicts a series of lines. Initially, the
shear stress decreases rapidly when the material behaves as a compressible fluid and then starts to increase gradually as the mixture starts to behave like a highly viscous fluid. The shear stresses
are higher for higher angles of compaction. This is because an increase in the angle of gyration is associated with an increase in the applied shear stresses. Also, the point at which shear stress
starts to increase occurs earlier at a 2.0-degree angle than at a 1.25-degree angle. This is because the mixture compacts and gains strength faster at a 2.0-degree angle of gyration
Figure 50. Chart. Maximum shear stress at the top of the specimen for a gyration angle of 2.0 degrees. The finite-element (FE) model was used to determine the maximum shear stresses at the top of the
specimens for a gyration angle of 2 degrees. This chart shows Tresca stress (MPa) on the y-axis and time of compaction (seconds) on the x-axis, and it depicts a series of lines. Initially, the shear
stress decreases rapidly when the material behaves as a compressible fluid and then starts to increase gradually as the mixture starts to behave like a highly viscous fluid. The shear stresses are
higher for higher angles of compaction. This is because an increase in the angle of gyration is associated with an increase in the applied shear stresses. Also, the point at which shear stress starts
to increase occurs earlier at a 2.0-degree angle than at a 1.25-degree angle. This is because the mixture compacts and gains strength faster at a 2.0-degree angle of gyration
Figure 51. Chart. Fitting of the compaction data at 1.25 degrees for project IH-35. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 52. Chart. Fitting of the compaction data at 1.25 degrees for project US-259. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 53. Chart. Fitting of the compaction data at 1.25 degrees for project SH-36. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 54. Chart. Fitting of the compaction data at 1.25 degrees for project SH-21. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 55. Chart. Fitting of the compaction data at 1.25 degrees for project US-87. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 56. Chart. Prediction of the compaction data at 2.0 degrees for project IH-35. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 57. Chart. Prediction of the compaction data at 2.0 degrees for project US-259. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 58. Chart. Prediction of the compaction data at 2.0 degrees for project SH-36. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 59. Chart. Prediction of the compaction data at 2.0 degrees for project SH-21. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 60. Chart. Prediction of the compaction data at 2.0 degrees for project US-87. This chart shows the final simulation results using the parameters in table 3. Simulation and experiment are
plotted as lines with normalized height on the y-axis and time of compaction (seconds) on the x-axis. The results show that the model has a reasonable representation of the compaction curves
Figure 61. Illustration. Pavement structure typically employed for studying field compaction. This graphic shows a base layer of 18 inches (457.2 mm), an old hot-mix asphalt (HMA) layer of 2 inches
(50.8 mm), and a top HMA layer of 3 inches (76.2 mm). A very thin layer exists between the base and the old HMA layer and between the old HMA layer and the top HMA layer
Figure 62. Illustration. Sectional view of the FE mesh used for setting up field compaction simulations. This graphic shows a three-dimensional representation of asphalt layers. The newly laid loose
mix is next to the compacted mix in the roller wake. Beneath this are the old asphalt layer and impedance layers around the top layers. Beneath these are the base layer and impedance layers around
the base layers
Figure 63. Illustration. Schematic diagram illustrating the edges of the lane that correspond to fixed and free edges of the mesh in figure 62. This graphic shows the boundary conditions most
frequently used. The layers are the compacted mix, new/newly compacted asphalt layer, and old asphalt layer. The fixed edge of the pavement corresponds to the inner edge of the pavement, while the
free edge corresponds to the outer edge of the lane. The entire width of the pavement section is 17 ft (5.185 m). The width of the new asphalt layer is 12 ft (3.66 m). The roller is 7 ft (2.135 m)
wide and covers the new asphalt layer in two overlapping passes
Figure 64. Chart. Typical displacement curve for a node under the cylindrical load for a cycle with forward and return passes followed by a forward pass with the load removed. This chart shows the
typical response curve with the compaction experienced at a point in the roller path as the roller passes over it and returns for a second pass from the opposite direction. The chart shows the
vertical displacement (mm) during the first pass, second pass, and third pass with no load. Vertical displacement from -5 to 0 mm is labeled as instantaneous inelastic response; the material flows
and has a short associated relaxation time
Figure 65. Chart. Vertical displacement observed at point P on the path of the roller. This plot represents the vertical deflection observed at a point P, measured on the y-axis in millimeters (mm).
The chart notes the vertical deflection at point P at six different time instants during the motion. Four observations are made in the forward rolling in the first hundred time steps of motion, while
two observations are made when the motion is reversed in the next hundred time steps. The observations are made at time steps 10, 60, 90, 100, 120, and 200. There is very little deflection at time
steps 10 and 60, more than 5 mm at time step 90, about 3.5 mm at time step 100, about 6.5 mm at time step 120, and about 4 mm at time step 200. By the sixth observation, the amount of vertical
displacement has leveled off
Figure 66. Chart. Roller location from observation point P. This chart represents the roller location from the observation point on the y-axis and the time steps needed for compaction on the x-axis.
The roller location at six instants corresponding to time steps 10, 60, 90, 100, 120, and 200 is represented by vertical bars. The progression of the roller along the mat, relative to the point of
observation, is represented by the progression of the vertical bars, which progress upward on the chart until point 4, and then return to the original position by point 6
Figure 67. Chart. Deflection at the node of interest when a load is applied for a short duration and then removed. This chart shows that as the roller passes over a point and moves further away, the
material relaxes and is then subject to further deformation from this relaxed state during the next pass. The chart shows the creep relaxation as a line with normalized deflection on the y-axis
and time steps on the x-axis. During constant loading (time steps 1–20), normalized height decreases. During the time when the load is removed (time steps 20–50), the normalized height rebounds and
evens out.
Figure 68. Chart. Permanent deformation prediction of the model in multiple-pass loading. This chart shows how deformation builds up incrementally with each pass of the roller. However, the
increments decrease in magnitude as the number of passes increases, due to material densification. The chart shows two, four, and six roller passes. The decreasing permanent deformation is given in
three steps: delta x subscript 1 (time steps 0–200), delta x subscript 2 (time steps 200–400), and delta x subscript 3 (time steps 400–600)
Figure 69. Illustration. Roller contact geometry for static indentation and during motion. This graphic shows the distributed forces in the elements in contact during a static indentation, mimicking
the cylindrical profile of the roller. The resolution of normal reaction forces due to a static indentation into a pliable material is represented by the total angle of contact given by theta. The
angle of contact made by an arbitrary point on the profile of the roller with the material being compacted is represented by phi. The radius of the roller is represented by R. Employing the above
quantities along with the weight of the roller and the mesh used for the material being compacted, the necessary loads to be applied during simulation are calculated
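The geometric relationships described for figure 69 can be sketched in code. The following is an illustrative Python sketch of how a contact half-angle and contact half-width follow from the roller radius R and an indentation depth, and how the roller weight might be spread over the contact nodes as a first approximation; the even load split and all numerical values are assumptions for illustration, not the report's load-resolution procedure:

```python
import math

def contact_geometry(R, d):
    """Contact half-angle (phi) and contact half-width (a) for a rigid
    cylinder of radius R indenting a flat surface to depth d."""
    phi = math.acos((R - d) / R)  # half of the total contact angle theta
    a = R * math.sin(phi)         # half-width of the contact patch
    return phi, a

def nodal_loads(W, n_nodes):
    # Spread the roller weight W evenly over the contact nodes as a
    # crude first approximation; each force acts along the local normal.
    return [W / n_nodes] * n_nodes

phi, a = contact_geometry(R=500.0, d=5.0)      # mm; assumed values
loads = nodal_loads(W=120_000.0, n_nodes=7)    # N; assumed roller weight
```

In an actual FE setup the forces would instead be resolved from the pressure distribution over the cylindrical profile, so the even split above only fixes the total load, not its shape.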
Figure 70. Chart. Indentation of a cylindrical roller into pavement. This chart depicts vertical deflection (mm) on the y-axis for six full contact elements, four full and two half contact elements,
four full contact elements, two full and one half contact elements, and two full contact elements. Nodal x-coordinate (mm) is listed on the x-axis of the chart. The y-axis scale is exaggerated
relative to the x-axis
Figure 71. Chart. Change in nodal reaction forces as the load is applied over a smaller area. This chart shows the nodal reaction forces in the elements in response to the applied dead load of the
roller as the contact area (represented by the number of elements in contact) is changed. A1 and A2 represent the edge and middle nodal forces of the same orientation in an element when six elements
are in contact between the roller and the material. B1 and B2 are nodal forces at the edge and middle nodes of an element when four elements are in contact, and C1 and C2 are nodal forces when two
elements are in contact
Figure 72. Chart. Comparison of vertical displacement for the two different patterns of loading. This chart shows the differences observed in the vertical displacement response in an element in the
direct path of the traversing load over 200 time steps. Vertical displacement (mm) is on the y-axis, and the time steps are on the x-axis. The vertical displacement of the uniform pressure is
relatively steady, but the vertical displacement of the nonuniform pressure varies
Figure 73. Chart. Comparison of the normal stress distribution at the pavement's top surface due to different loading patterns. This chart shows the differences observed in the stress response in an
element in the direct path of the traversing load over 200 time steps. Normal stress (MPa) is on the y-axis, and the time steps are on the x-axis. The normal stress of the uniform pressure is
relatively steady, but the normal stress of the nonuniform pressure varies
Figure 74. Chart. Comparison of the shear-stress (XY) distribution of the pavement's top surface due to different loading patterns (on the plane of symmetry). This chart shows the differences
observed in the stress response in an element in the direct path of the traversing load over 200 time steps. XY-shear stress (MPa) is on the y-axis, and the time steps are on the x-axis. The shear
stress of the uniform pressure is relatively steady, but the shear stress of the nonuniform pressure varies
Figure 75. Equation. Mean contact pressure. Script p subscript m equals script P divided by the product of pi and the square of script a
Figure 76. Equation. Normal stress in elastic medium. Sigma subscript z divided by script p subscript m equals -4 divided by pi times (sum of: 1 minus square of script x divided by square of script
a) to the power one-half
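The two equations above (figures 75 and 76) can be checked numerically. A minimal Python sketch, with the load P and contact half-width a chosen as illustrative assumptions:

```python
import math

def mean_contact_pressure(P, a):
    # Figure 75: p_m = P / (pi * a**2)
    return P / (math.pi * a ** 2)

def normal_stress(x, a, p_m):
    # Figure 76: sigma_z = -p_m * (4 / pi) * sqrt(1 - x**2 / a**2)
    # Valid only inside the contact patch, |x| <= a
    return -p_m * (4.0 / math.pi) * math.sqrt(1.0 - (x / a) ** 2)

p_m = mean_contact_pressure(P=1000.0, a=10.0)  # assumed values
center = normal_stress(0.0, 10.0, p_m)   # compressive peak at the center
edge = normal_stress(10.0, 10.0, p_m)    # stress vanishes at the edge
```

Note that the peak compressive stress at the center exceeds the mean pressure in magnitude by the factor 4 divided by pi, which matches the semi-elliptical shape of the distribution.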
Figure 77. Chart. Comparison of the normal stress predicted by the model to the normal stress predicted for an elastic medium (for the same mean contact pressure, P[m]). This chart shows the normal
stress predicted for hot mix asphalt (HMA) material and an elastic medium. Normal stress (MPa) is on the y-axis, and X coordinate (mm) is on the x-axis. The comparatively higher stresses in the
elastic medium at a given mean contact pressure are due to the lack of any dissipative mechanism in the elastic material
Figure 78. Chart. Comparison of pavement vertical response as the normal stiffness of the interface elements is lowered by an order of magnitude from 1,450 ksi (10,000 MPa) to 145 ksi (1,000 MPa).
This chart shows the results of the finite-element (FE) analysis using the interface layer. Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis. The normal stiffness is
varied (stiff interface and soft interface) for the first and second passes. The use of interface elements does not adversely affect the vertical displacement response of the model
Figure 79. Chart. Comparison of pavement response in the rolling direction as the normal stiffness of the interface elements is lowered by an order of magnitude from 1,450 ksi (10,000 MPa) to 145 ksi
(1,000 MPa). This chart shows the results of the finite-element (FE) analysis using the interface layer. X displacement (mm) is on the y-axis, and time steps are on the x-axis. The normal stiffness
is varied (stiff interface and soft interface) for the first and second passes. The use of interface elements does not adversely affect the x-displacement response of the model
Figure 80. Chart. Comparison of pavement response to increasing the shear stiffness in the x direction. This chart shows the results of the finite-element (FE) analysis using the interface layer.
Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis. The shear stiffness in the x-direction is varied (stiff shear x-direction and soft shear x-direction) for the first and
second passes. The use of interface elements does not adversely affect the vertical displacement response of the model
Figure 81. Chart. Comparison of pavement response to increasing the shear stiffness in the z direction. This chart shows the results of the finite-element (FE) analysis using the interface layer.
Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis. The shear stiffness in the z-direction is varied (stiff shear y-direction and soft shear y-direction) for the first and
second passes. The use of interface elements does not adversely affect the vertical displacement response of the model
Figure 82. Chart. Comparison of the dampening effect provided by impedance layers surrounding the structure laterally in the x-direction. This chart shows response plots that have impedance layers
(soft impedance and stiff impedance), which are useful in reducing horizontal reflections significantly. Horizontal displacement (mm) is on the y-axis, and time steps are on the x-axis
Figure 83. Chart. Comparison of the dampening effect provided by impedance layers surrounding the structure laterally in the y-direction. This chart shows response plots that have impedance layers
(soft impedance and stiff impedance), which are useful in reducing vertical reflections significantly. Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis
Figure 84. Chart. Comparison of the horizontal dampening effect provided by impedance layers surrounding the top layer (pavement and old asphalt) laterally in the x-direction. This chart shows
response plots that have impedance layers surrounding the top layer (soft impedance and stiff impedance), which are useful in reducing horizontal reflections significantly—to the point that they are
completely eliminated. Horizontal displacement (mm) is on the y-axis, and time steps are on the x-axis
Figure 85. Chart. Comparison of the vertical dampening effect provided by impedance layers surrounding the top layer laterally in the x-direction. This chart shows response plots that have impedance
layers surrounding the top layer (soft impedance and stiff impedance), which are useful in reducing vertical reflections significantly—to the point that they are completely eliminated. Vertical
displacement (mm) is on the y-axis, and time steps are on the x-axis
Figure 86. Chart. Comparison of the volumetric component of the viscous-evolution gradient for soft (72.5 ksi (500 MPa)) versus stiff (290 ksi (2,000 MPa)) bases. This chart shows plots of the
volumetric response for a soft base and stiff base. The plots indicate that the material response is not very sensitive to changes in stiffness of the base when the response is viewed in an averaged
sense. The chart shows det(G)-1 on the y-axis and time steps on the x-axis
Figure 87. Chart. Comparison of the volumetric component of the viscous-evolution gradient at two base-stiffness moduli of interest. This chart shows plots of the volumetric response for E equals
482.6 MPa and E equals 965.3 MPa. The plots indicate that the material response is not very sensitive to changes in stiffness of the base when the response is viewed in an averaged sense. The chart
shows det(G)-1 on the y-axis and time steps on the x-axis
Figure 88. Chart. Comparison of the effect on the x-displacement of a node in the roller path as the base stiffness is varied from 72.5 ksi (500 MPa) (soft base) to 290 ksi (2,000 MPa) (stiff base).
This chart shows that the finite-element (FE) model is responsive to a drastic increase in base modulus due to increasing rigidity in the base structure, leading to interference waves reflected
internally. Horizontal displacement (mm) is on the y-axis, and time steps are on the x-axis. The horizontal displacement is given for soft base x-displacement and for stiff base x-displacement
Figure 89. Chart. Comparison of the effect on the y-displacement of a node in the roller path as the base stiffness is varied from 72.5 ksi (500 MPa) (soft base) to 290 ksi (2,000 MPa) (stiff base).
This chart shows that the finite-element (FE) model is responsive to a drastic increase in base modulus due to increasing rigidity in the base structure, leading to interference waves reflected
internally. Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis. The vertical displacement is given for soft base y-displacement and for stiff base y-displacement
Figure 90. Chart. Comparison of the deflection for two base-stiffness moduli of interest. This chart shows that the finite-element (FE) model is responsive to a drastic increase in base modulus due
to increasing rigidity in the base structure, leading to interference waves reflected internally. Vertical displacement (mm) is on the y-axis, and time steps are on the x-axis. The vertical
displacement is given for E equals 482.6 MPa and for E equals 965.3 MPa
Figure 91. Chart. Compaction over a sequence of passes as the amplitude of vibration increases. This chart shows the material response according to the mechanics of the loading algorithm implemented.
Compaction (percent) is on the y-axis, and pass number is on the x-axis. An increase in the amplitude of the vibratory load (3 mm, 5 mm, and 7 mm) leads to an increase in the amount of compaction
achieved, as observed in field compaction
Figure 92. Chart. Compaction over a sequence of passes at different frequencies. This chart shows the material response according to the mechanics of the loading algorithm implemented. Compaction
(percent) is on the y-axis, and pass number is on the x-axis. An increase in the frequency of the vibration (60 Hz and 3,600 vpm, 50 Hz and 3,000 vpm, and 40 Hz and 2,400 vpm) leads to an increase in
the amount of compaction achieved, as observed in field compaction
Figure 93. Chart. Material response to change in dead load carried by each roller. This chart shows the material response according to the mechanics of the loading algorithm implemented. Compaction
(percent) is on the y-axis, and pass number is on the x-axis. An increase in the load (30,000 lb, 27,000 lb, and 24,000 lb) leads to an increase in the amount of compaction achieved
Figure 94. Chart. Field compaction response at a constant frequency over multiple passes on a point. This chart shows the material response according to the mechanics of the loading algorithm
implemented. Compaction (per unit) is on the y-axis, and time (seconds) is on the x-axis. A decrease in the material's viscous nature (lambda equals 0.1 and lambda equals 4.0) leads to an increase in
the amount of compaction achieved, as observed in field compaction
Figure 95. Chart. Evolution of the volumetric viscous gradient with a change in values of individual parameters estimated mu, estimated eta, λ[1], and λ[2]. This chart shows the viscous evolution of the model as the different
parameters are varied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. The parameters are estimated mu equals 810, estimated eta equals 1,700, lambda subscript 1 equals 0.2, and
lambda subscript 2 equals 0.2. At the end of the time steps, lambda subscript 2 has the highest det(G)-1. Estimated eta has the next highest, and the other parameters are roughly the same
Figure 96. Chart. Evolution of the volumetric viscous gradient with a change in values of individual parameters n[1], n[2], q[1], and q[2]. This chart shows the viscous evolution of the model as the
different parameters are varied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. The parameters are estimated n subscript 1 equals 5, n subscript 2 equals 1.5, q subscript 1
equals -10, and q subscript 2 equals -15. At the end of the time steps, n subscript 1 has the highest det(G)-1. Next are q subscript 2 and q subscript 1; n subscript 2 has the lowest det(G)-1
Figure 97. Graphic. Regions of influence of model parameters in gyratory compaction.^(51) This graphic shows a chart with normalized height on the y-axis and time of compaction (seconds) on the
x-axis. The chart has a line and descriptions of points and planes on the line: the initial viscous dissipation limit, due to lambda subscript 2; the onset of high viscosity, determined by estimated
eta; the degree of nonlinear transition, controlled through q subscript 2; the beginning of material consolidation, determined by the estimated mu; the rate of consolidation, governed by n subscript
1; and the asymptotic consolidation limit, due to lambda subscript 1
Figure 98. Chart. Evolution of the volumetric viscous gradient with a change in λ[1]. This chart shows the results of simulations performed at different stiffnesses where lambda subscript 1 is varied
and its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent lambda subscript 1 equals 0.2, lambda subscript 1
equals 0.25, and lambda subscript 1 equals 0.3. The profiles of the deflection and volumetric part of viscous evolution indicate that the model response is not sensitive to lambda subscript 1. The
parameter can be a model constant and can be the same constant as in the Superpave^® gyratory compactor (SGC) simulations
Figure 99. Chart. Evolution of the volumetric viscous gradient with a change in q[1]. This chart shows the results of simulations performed at different stiffnesses where q subscript 1 is varied and
its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent q subscript 1 equals -20, q subscript 1 equals -15,
and q subscript 1 equals -10. The volumetric part of the viscous evolution exhibits no sensitivity to q subscript 1. The same constant that was used in the
Superpave^® gyratory compactor (SGC) simulations can be used for this parameter
Figure 100. Chart. Evolution of the volumetric viscous gradient with a change in n[2]. This chart shows the results of simulations performed at different stiffnesses where n subscript 2 is varied and
its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent n subscript 2 equals 1.5, n subscript 2 equals 2.5,
and n subscript 2 equals 3.5. The model exhibits considerable sensitivity to changes in n subscript 2. This response from the model shows deviation from the response exhibited by the model during the
Superpave^® gyratory compactor (SGC) simulations
Figure 101. Chart. Evolution of the volumetric viscous gradient with a change in λ[2]. This chart shows the results of simulations performed at different stiffnesses where lambda subscript 2 is
varied and its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent lambda subscript 2 equals 0.30, lambda
subscript 2 equals 0.22, and lambda subscript 2 equals 0.15. The model indicates that lambda subscript 2 in the Superpave^® gyratory compactor (SGC) simulation is also significant for field compaction
Figure 102. Chart. Evolution of the volumetric viscous gradient with a change in q[2]. This chart shows the results of simulations performed at different stiffnesses where q subscript 2 is varied and
its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent q subscript 2 equals -25, q subscript 2 equals -20,
and q subscript 2 equals -15. The model indicates that q subscript 2 in the Superpave^® gyratory compactor (SGC) simulation is also significant for field compaction
Figure 103. Chart. Evolution of the volumetric viscous gradient with a change in n[1]. This chart shows the results of simulations performed at different stiffnesses where n subscript 1 is varied and
its effect on volumetric viscous gradient is studied. The chart shows det(G)-1 on the y-axis and time steps on the x-axis. Three lines represent n subscript 1 equals 3.0, n subscript 1 equals 4.0,
and n subscript 1 equals 5.0. The model indicates that n subscript 1 in the Superpave^® gyratory compactor (SGC) is also significant for field compaction. However, while n subscript 1 is less than or
equal to 4.0, the model does not exhibit any sensitivity to this parameter, whereas at a higher value of 5.0, the model exhibits significantly higher stiffness. Hence, a correlation exists in
parametric behavior for both compaction processes
Figure 104. Chart. Plot representing the final compacted state of the material along the width of the pavement. This chart shows the overlap zone between two roller passes. The zone contains the
longitudinal joint where material shoving occurs to accommodate the compaction of the material. The chart shows the compaction predicted with a refined mesh for roller position 1 (passes 1 and 2) and
roller position 2 (passes 3 and 4). Material shoves laterally in the first and second passes, and the free edge compacts less than at the middle of the lane. This agrees with observations in the field
Figure 105. Illustration. Schematic of a roller on a material with three locations for density measurements.^(49) This graphic shows how the finite-element (FE) model is used to simulate field
rolling compaction on material 12 ft wide with the roller centered on the material. The graphic gives pavement quality indicator measurement locations A, B, and C in the material. C is depicted at a
location marked 4.5 ft from the left side of the graphic, B is depicted at a location marked 7.5 ft from the left side of the graphic, and A is depicted at a location marked 9 ft from the left side
of the graphic
Figure 106. Chart. Measurements of the %AV in the asphalt mix.^(49) This chart shows the measured percent air voids in the material for locations A, B, and C. Percent air voids (percent) are on the
y-axis, and pass number is on the x-axis. A comparison with the typical simulated responses from the finite-element (FE) model shows that the model developed predicts a trend of compaction over
multiple passes similar to that measured in the field.
Figure 107. Chart. Measurements of change in %AV in the asphalt mix.^(49) This chart shows the measured change in the percent air voids in the material for locations A, B, and C. Change in air voids
(percent) is on the y-axis, and pass number is on the x-axis. A comparison with the typical simulated responses from the finite-element (FE) model shows that the model developed predicts a
trend of compaction over multiple passes similar to that measured in the field
Figure 108. Chart. Measurement and modeling results of %AV at point A of the pavement locations shown in figure 105. The chart is a comparison of the percent air voids measured at point A of the
pavement locations shown in figure 105 against the simulation results obtained from using the model. The percent air voids are represented on the y-axis and the roller pass numbers are represented on
the x-axis. The two lines are fairly close together, with the measurement line slightly higher than the model line
Figure 109. Chart. Measurement and modeling results of %AV at point B of the pavement locations shown in figure 105. The chart is a comparison of the percent air voids measured at point B of the
pavement locations shown in figure 105 against the simulation results obtained from using the model. The percent air voids are represented on the y-axis and the roller pass numbers are represented on
the x-axis. The two lines are close together, with the model line starting higher than the measurement line and the two intersecting at pass number 2
Figure 110. Chart. Measurement and modeling results of %AV at point C of the pavement locations shown in figure 105. The chart is a comparison of the percent air voids measured at point C of the
pavement locations shown in figure 105 against the simulation results obtained from using the model. The percent air voids are represented on the y-axis and the roller pass numbers are represented on
the x-axis. The two lines are close together, with the model line starting higher than the measurement line and the two intersecting at pass number 2
Figure 111. Illustration. Pavement structure for the US-87 project. This graphic shows the structure of the pavement for the US-87 pavement project. The bottom layer, depicted in magenta, is 6 inches
(152.4 mm) of lime-treated subgrade. The next layer, depicted in red, is 6 inches (152.4 mm) of flexible base. Next, depicted in yellow, is 3.5 inches (88.9 mm) of Type B hot-mix asphalt (HMA). The
surface layer, depicted in blue, is 2 inches (50.8 mm) of Type C HMA
Figure 112. Chart. Schematic for the rolling patterns for the US-87 project. This graphic shows the simulation of the sequence and location of the roller for the US-87 project, along with the
boundary conditions representative of the restrained and unrestrained edges of the pavement. Line segments represent rollers with arrows indicating their rolling directions. An upward arrow indicates
forward rolling, and a downward arrow indicates the reverse. The scale is 1 division equals 1 ft (0.305 m). The roller is 7 ft (2.135 m) wide
Figure 113. Illustration. Pavement structure for the US-259 project. This graphic shows the structure of the pavement for the US-259 pavement project. The bottom layer, depicted in red, is 10 inches
(254 mm) of flexible base. The next layer, depicted in yellow, is 9 inches (228.6 mm) of Type B hot-mix asphalt (HMA) and asphalt concrete pavement. The surface layer, depicted in blue, is 2 inches
(50.8 mm) of Type C HMA
Figure 114. Chart. Schematic for the rolling patterns for the US-259 project. This graphic shows the simulation of the sequence and location of the roller for the US-259 project, along with the
boundary conditions representative of the restrained and unrestrained edges of the pavement. Line segments represent rollers with arrows indicating their rolling directions. An upward arrow indicates
forward rolling, and a downward arrow indicates the reverse. The scale is 1 division equals 1 ft (0.305 m). The roller is 7 ft (2.135 m) wide
Figure 115. Chart. Comparison of the total percent compaction from simulations with the general trend of the %AV measured at the end of the field compaction process for US-87. This chart shows that
the compaction for US-87 predicted by the Superpave^® gyratory compactor (SGC) parameters is outside the range of the change in percent air void (%AV) measured in the field. Therefore, the parameters
(shear modulus) are adjusted so that the compaction obtained in simulations is contained within the range of measured %AV values. Within the compaction zone, the behavior of the mix in the
simulations correlates well with the trends observed in the field. In the chart, compaction (%) is shown on the y-axis and cores (group number) are shown on the x-axis. The comparison is made per
core group, which represents a different location across the material relative to the edge
Figure 116. Chart. Total percent compaction from simulations compared to the general trend of the %AV measured at the end of the field compaction process for US-259. This chart shows that the
compaction for US-259 predicted by the Superpave^® gyratory compactor (SGC) parameters is outside the range of the change in percent air voids (%AV) measured in the field. Therefore, the parameters
(shear modulus) are adjusted so that the compaction obtained in simulations is contained within the range of measured %AV values. Within the compaction zone, the behavior of the mix in the
simulations correlates well with the trends observed in the field. In the chart, compaction (%) is shown on the y-axis and cores (group number) are shown on the x-axis. The comparison is made per
core group, which represents a different location across the material relative to the edge
Figure 117. Chart. Comparison of prediction of percent compaction per roller pass for US-87 and US-259. This chart shows that the simulations predict that the US-87 material will undergo more
compaction by the end of the whole process than the US-259 material. Compaction is calculated at a common location on the material for both projects. Percent compaction is on the y-axis, and number
of passes over entire mat is on the x-axis. The calculations were taken at a distance of 1 ft (0.305 m) from the edge
Figure 118. Chart. Prediction of percent compaction per roller pass across the material for US-87 (cores taken at four locations). This chart shows the material behavior of US-87 pavement undergoing
compaction. Percent compaction is on the y-axis, and number of passes over entire mat is on the x-axis. Core group 4 undergoes the most compaction. Core groups 2 and 3 are next in percentage of
compaction, respectively. Core group 1 undergoes the least compaction
Figure 119. Chart. Prediction of percent compaction per roller pass across the material for US-259 (cores taken at four locations). This chart shows the material behavior of US-259 pavement
undergoing compaction. Percent compaction is on the y-axis, and number of passes over entire mat is on the x-axis. Core group 2 undergoes the most compaction. Core groups 1, 5, and 3 are next in
percentage of compaction, respectively. Core group 4 undergoes the least compaction
Figure 120. Illustration. Micromechanical response of an asphalt mix. This figure is a schematic illustration of the combination of a continuum level multiplicative split of the total deformation,
script F, into the inelastic deformation gradient, script G, and the elastic deformation gradient, script F subscript (script e), that takes into account the micromechanical response of asphalt mix
Figure 121. Chart. Representation of tasks involved in modeling asphalt compaction. This figure is a schematic representation of the modeling steps involved in relating the laboratory
characterization of asphalt mixes through the conduct of gyratory experiments, simulating the gyratory experiments, and their field compaction behavior. The relationship between the continuum model
parameters and the mixture compositional properties is first understood through the laboratory experiment and its simulation. The understanding gained is used to simulate and study laboratory and
field compaction in more general cases to characterize asphalt mix compaction. Finally, an understanding is gained of the influence of the compositional properties on the field compaction process.
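The multiplicative split described in figure 120 can be written compactly. In the standard finite-strain convention (assumed here, since the figure itself is not reproduced, with the symbols matching the caption's script F, script G, and script F subscript e):

```latex
\mathcal{F} = \mathcal{F}_{e}\,\mathcal{G}
```

That is, the total deformation gradient is recovered by composing the inelastic deformation with the elastic deformation.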
Fundamental Theorem of Calculus
March 9th 2008, 08:30 PM
Fundamental Theorem of Calculus
Solon Container receives 450 drums of plastic pellets every 30 days. The inventory function (drums on hand as a function of days) is I(x) = $450-\frac{x^2}{2}$
A. Find the average daily inventory (that is, the average value of I(x) for the 30-day period)
B. If the holding cost for one drum is $0.02 per day, find the average daily holding cost (that is, the per-drum holding cost times the average daily inventory.)
I have no clue how to do this.
We have learned the F(b)-F(a) formula and that the derivative of the integral is the function.
March 9th 2008, 08:53 PM
Solon Container receives 450 drums of plastic pellets every 30 days. The inventory function (drums on hand as a function of days) is I(x) = $450-\frac{x^2}{2}$
A. Find the average daily inventory (that is, the average value of I(x) for the 30-day period)
B. If the holding cost for one drum is $0.02 per day, find the average daily holding cost (that is, the per-drum holding cost times the average daily inventory.)
I have no clue how to do this.
We have learned the F(b)-F(a) formula and that the derivative of the integral is the function.
The average of a function on an interval [a,b] is given by the formula $\frac{1}{b-a}\int_a^b f(x)\,dx$
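Applying the average-value formula $\frac{1}{b-a}\int_a^b f(x)\,dx$ to this problem gives (a worked sketch; the arithmetic below is supplied here, since the original reply is cut off):

```latex
\frac{1}{30-0}\int_0^{30}\left(450-\frac{x^2}{2}\right)dx
  = \frac{1}{30}\left[450x-\frac{x^3}{6}\right]_0^{30}
  = \frac{13500-4500}{30}
  = 300 \text{ drums}
```

For part B, the average daily holding cost is then $0.02 \times 300 = \$6$ per day.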
Jeopardy! Season 1 [Winner: DBandit70]
My favourite game show, so why not turn it into a tournament?
Premium and Non Premium are allowed. Non Premium must reserve one free space for the whole tournament.
Number of Entrants:
27 (plus 5 reserves)
Game Type:
Play Order:
Bonus Cards:
Fog of War:
Number of Games:
Minimum of 60 games, Maximum of 183.
Other Details:
For more info on playing Jeopardy, you can click
This Jeopardy will be played on the regular Jeopardy board with six categories of 5 maps each. That means you’ll be playing 30 games in the first round alone. You will be split by rank into 9 groups
of 3. Maps replace clues in this tourney version of Jeopardy.
Jeopardy! Round
You will be playing 30 games for this round, one for each map on the board. There will be 1 Daily Double hidden somewhere on the board. First clue would be chosen by random draw (as there is no
returning champion). When you win a game, you get the amount of money picked for the clue. Then you choose which clue goes next. Be warned! Some clues, if you pick, and lose that game, you lose a
specified amount of money. (No more than $1000). There are at least 5 of each money amount ($200, $400, $600, $800, $1000)
Double Jeopardy! Round
You will still be playing 30 games one for each map on the board. As the name states, everything will be doubled (number of Daily Doubles, amount of money that could be lost, and money amount for
clues). The only thing that won’t be doubled is amount of clues. There will be 2 Daily Double hidden somewhere on the board. First clue would be chosen by random draw (as there is no returning
champion). When you win a game, you get the amount of money picked for the clue. Then you choose which clue goes next. Be warned! Some clues, if you pick, and lose that game, you lose a specified
amount of money. (No more than $2000) There are at least 4 of each money amount ($400, $800, $1200, $1600, $2000)
Final Jeopardy
There will be a final clue at the end of both rounds. If you have less than $1, you are automatically disqualified for this round, and are eliminated. If nobody has $1, a random amount of money will
be given, and then you will play a game, where you wager an amount of money. If there is still a tie, I will think of a decent tie-breaker. Winner advances to next round.
Daily Double
A Daily Double is a clue, where the person who picks it gets to wager any amount of money (min. is 5, max. is however much money you have; if you have less than 5, the Daily Double is disqualified,
and not in play anymore). If the person wins the Daily Double, they would win the amount they wagered. If somebody else wins the Daily Double, they would win half of what was wagered (rounded to
nearest 10,) and person who chose the Daily Double would lose all the money they wagered.
Winning the Jeopardy game in general, would advance you to the next round.
*Joining requires commitment to this tourney. Be prepared to be staying in this tourney for several months – or even half a year, depending on how fast the games are completed.
*If I feel that a rule needs to be changed to further run the tourney smoother, I will do so without any discussion.
*IL are ignored in my tourney. If somebody is on your IL, remove them for the time being or be replaced.
*You have 5 days to join the game before you are replaced.
Last edited by dittoeevee8888 on Fri Feb 05, 2010 12:08 am, edited 8 times in total.
Re: Jeopardy!
Group A:
Group A Board
goggles paisano - $1600
aspalm - $3600
Group B:
Group B Board
Bones2484 - $2800
knighthawk - $3800
MEK - $1000
Group C:
Group C Board
Lufsen75 - $6500
AtreidesHouse - $2000
supposesublys - $3200
Bold means it's your turn to pick a clue.
Last edited by dittoeevee8888 on Fri Nov 27, 2009 11:12 pm, edited 28 times in total.
Re: Jeopardy! Season 1
Group D:
Group D Board
harvmax - $1600
Gilligan - $1400
reahma - $2600
Group E:
Group E Board
dustin800 - $5800
WPBRJ - $200
RedBaron0 - $2400
Group F:
Group F Board
sd031091 - $4200 [playing final Jeopardy with Group A]
Bold means it's your turn to pick a clue.
Last edited by dittoeevee8888 on Fri Nov 27, 2009 11:13 pm, edited 28 times in total.
Re: Jeopardy! Season 1 (0/27)
Group G:
Group G Board
denominator - $1700
DBandit70 - $7450
grant.gordon - $0
Group H:
Group H Board
KidWhisky - $3600
HighlanderAttack - $1800
shoop76- $800
Group I:
Group I Board
poptartpsycho18 - $1600
amazzony - $2000
BrotherWolf - $3000
Bold means it's your turn to pick a clue.
Strikeout means that you have no money and can't play in final Jeopardy.
Last edited by dittoeevee8888 on Fri Nov 27, 2009 11:29 pm, edited 28 times in total.
Re: Jeopardy! Season 1 (0/27)
wow seems like way to much work
Re: Jeopardy! Season 1 (0/27)
killmanic wrote:wow seems like way to much work
But I'm willing to do it...
Do you want to join?
Re: Jeopardy! Season 1 (0/27)
If ur willing to go to the effort, sign me up for it. Thx.
Re: Jeopardy! Season 1 (0/27)
I'll have a go
IL are ignored in my tourney
I like this sentence
Make sure that you think everything really really through (how to store data etc) to make your life as easy as possible with such a hard tourney
I'm in
"Thou shalt accept thy dice rolls as the will of the Gods" (Church of Gaming)
"amazzony is a beast" (Woodruff)
Re: Jeopardy! Season 1 (0/27)
Hmm ... looks like this will require quite a bit of thought and might even end up being kind of confusing (but probably more so for the organizer than us players).
Needless to say, I'm in.
Re: Jeopardy! Season 1 (0/27)
sign me up
Re: Jeopardy! Season 1 (0/27)
I am in. This sounds fun.
Re: Jeopardy! Season 1 (0/27)
I love Jeopardy. I'll play.
Re: Jeopardy! Season 1 (0/27)
amazzony wrote:
IL are ignored in my tourney
I like this sentence
Make sure that you think everything really really through (how to store data etc) to make your life as easy as possible with such a hard tourney
I'm in
Yeah...I'm just glad they made excel
poptartpsycho18 wrote:If ur willing to go to the effort, sign me up for it. Thx.
Yep, I am. If I wasn't, would I bother posting this?
geigerm wrote:Hmm ... looks like this will require quite a bit of thought and might even end up being kind of confusing (but probably more so for the organizer than us players).
Needless to say, I'm in.
Yeah...I kind of worried that it wouldn't be quick signups because some people don't have commitment for long term tourneys, and maybe a bit too confusing? But nonetheless, I'm determined to go
through this tourney. If this series continues is a different question
And I've gotten everybody up to here
Baez & Huerta in Scientific American: "The Strangest Numbers in String Theory"
Baez & Huerta in Scientific American: “The Strangest Numbers in String Theory”
“Octonions were largely neglected since their discovery in 1843, but in the past few decades they have assumed a curious importance in string theory. And indeed, if string theory is a correct
representation of the universe, they may explain why the universe has the number of dimensions it does.” John C. Baez and John Huerta, “The Strangest Numbers in String Theory,” Scientific American,
May 2011.
“The set of all real numbers forms a line, so we say that the collection of real numbers is one-dimensional. We could also turn this idea on its head: the line is one-dimensional because specifying a
point on it requires one real number. The set of all complex numbers of the form a+b i, where i²=–1, and a and b are ordinary real numbers, describe points on the plane and their basic operations
—addition, subtraction, multiplication and division— describe geometric manipulations in the plane. Almost everything we can do with real numbers can also be done with complex numbers.
On October 16, 1843, William Rowan Hamilton was walking with his wife along the Royal Canal to a meeting of the Royal Irish Academy in Dublin when he had a sudden revelation: quaternions a + b i + c j + d k, with i²=j²=k²=ijk=–1. Quaternions provide an efficient way to represent three-dimensional rotations. Hamilton’s college friend, John Graves, discovered on December 26 a new eight-dimensional
number system that he called the octaves and that are now called octonions. In 1845 the young genius Arthur Cayley rediscovered the octonions. For this reason, the octonions are also sometimes known
as Cayley numbers.”
In the generalization of numbers as tuples, division is the hard part: a number system where we can divide is called a division algebra. Not until 1958 did three mathematicians prove an amazing fact
that had been suspected for decades: any division algebra must have dimension one (which is just the real numbers), two (the complex numbers), four (the quaternions) or eight (the octonions).
Hamilton didn’t like the octonions because they break some cherished laws of arithmetic. Real and complex numbers are commutative, but quaternions are noncommutative. The octonions are much stranger.
Not only are they noncommutative, they also break another familiar law of arithmetic: the associative law (xy)z = x(yz). What would the octonions be good for? They are closely related to the geometry
of seven and eight dimensions, and we can describe rotations in those dimensions using the multiplication of octonions.
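The noncommutativity of the quaternions and the nonassociativity of the octonions can be checked concretely via the Cayley-Dickson doubling construction, in which each new algebra consists of pairs of elements from the previous one. The sketch below is illustrative only; the helper names are made up, and the doubling rule used is one of several equivalent conventions:

```python
# Cayley-Dickson doubling: a number is a nested pair (a, b), with plain
# floats at the base. Doubling the reals gives the complex numbers,
# doubling those gives the quaternions, and doubling again the octonions.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # One common doubling rule: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c)).
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Complex units as pairs of reals.
one_c, i_c, zero_c = (1.0, 0.0), (0.0, 1.0), (0.0, 0.0)

# Quaternion units i and j as pairs of complex numbers.
i_q, j_q, zero_q = (i_c, zero_c), (zero_c, one_c), (zero_c, zero_c)

# Quaternions are noncommutative: ij = -ji.
assert mul(i_q, j_q) == neg(mul(j_q, i_q))

# Octonions: embed i and j, and take the new unit introduced by doubling.
u, v, w = (i_q, zero_q), (j_q, zero_q), (zero_q, (one_c, zero_c))

# Octonions are not associative: (uv)w and u(vw) differ (by a sign here).
assert mul(mul(u, v), w) == neg(mul(u, mul(v, w)))
assert mul(mul(u, v), w) != mul(u, mul(v, w))
print("quaternions: noncommutative; octonions: nonassociative")
```

Here the two parenthesizations of the triple product differ by a sign, so associativity genuinely fails at the octonion level while it still holds inside the quaternion subalgebra.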
In the 1970s and 1980s theoretical physicists developed a strikingly beautiful idea called supersymmetry, a symmetry between matter and the forces of nature. Every matter particle (such as an
electron) has a partner particle that carries a force, and vice versa. Supersymmetry also encompasses the idea that the laws of physics would remain unchanged if we exchanged all the matter and force
particles. Even though physicists have not yet found any concrete experimental evidence in support of supersymmetry, the theory is so seductively beautiful and has led to so much enchanting
mathematics that many physicists hope and expect that it is real.
In the standard three-dimensional version of quantum mechanics that physicists use every day, spinors describe the wave motion of matter particles and vectors describe that of force particles.
Particle interactions require the combination of spinors and vectors by a simulacrum of multiplication. As an alternative, imagine a strange universe with no time, only space. If this universe has
dimension one, two, four or eight, both matter and force particles would be waves described by a single type of vectorial object (vectors and spinors coincide), just real numbers, complex numbers,
quaternions or octonions, respectively. Supersymmetry emerges naturally, providing a unified description of matter and forces.
In string theory, every object corresponds to a little string with one dimension in space and another one in time, hence two dimensions have to be added to every point in space. Instead of
supersymmetry in dimension one, two, four or eight, we get supersymmetry in dimension three, four, six or 10. Coincidentally string theorists have for years been saying that only
10-dimensional versions of the theory are self-consistent: anomalies appear in anything other than 10 dimensions, breaking down string theory. But 10-dimensional string theory is, as we have just
seen, the version of the theory that uses octonions. So if string theory is right, the octonions are not a useless curiosity, on the contrary, they provide the deep reason why the universe must have
10 dimensions: in 10 dimensions, matter and force particles are embodied in the same type of numbers—the octonions.
Recently physicists have started to go beyond strings to consider membranes. In string theory we had to add two dimensions to our standard collection of one, two, four and eight, now we must add
three. Supersymmetric membranes naturally emerge in dimensions four, five, seven and 11. Researchers tell us that M-theory (the “M” typically stands for “membrane”) requires 11 dimensions—implying
that it should naturally make use of octonions.
Neither string theory nor M-theory has yet made any experimentally testable predictions. They are beautiful dreams—but so far only dreams. The universe we live in does not look 10- or
11-dimensional, and we have not seen any symmetry between matter and force particles. Only time will tell if the strange octonions are of fundamental importance in understanding the world we see
around us or merely a piece of beautiful mathematics.”
More information about these especulations in Peter Woit, “This Week’s Hype,” Not Even Wrong, April 28th, 2011, where the expository article about octonions by John Baez that appeared in the AMS
Bulletin (copy here, a web-site here) is recommended. In the comments, Thomas Larsson recalls that “octonions is the last division algebra, but if you relax your axioms a little the Cayley-Dickson
construction gives an infinite tower of increasingly uninteresting algebras: n=1, Reals; n=2, Complex numbers; n=4, Quaternions, not commutative; n=8, Octonions, not associative; n=16: Sedenions, not
alternative but power associative, n=32: 32-ions?; …
See also Philip Gibbs, “Octonions in String Theory,” viXra log, April 29, 2011, and Lubos Motl, “John Baez, octonions, and string theory,” The Reference Frame, April 29, 2011.
4 Responses to Baez & Huerta in Scientific American: “The Strangest Numbers in String Theory”
1. The key paragraph from Lubos is “In the slow comments under the 2009 blog entry, Robert Helling argued that there is a lot of interesting fog about the closure of the supersymmetry algebra etc. I
find this whole approach to these issues irrational.” Actually, Helling refers to the papers that show that there are more structure than the naive match argued in his blog entry.
2. Hamilton in his letter of 17 October 1843 to John Graves, seems very confused about the relationships between i. j, +1 and -1. He asks what are we to do with ij when i and j are the unequal roots
of a common square. In fact there is no law of arithmetic which makes ij equal to anything but +1. It is these doubts of Hamilton which are the source of his fallacious theory of the
non-commutative properties of the multiplication of imaginary numbers. All multiplication whether of real or imaginary numbers is commutative.
3. Hamilton’s quaternions equation i^2=j^2=k^2=ijk=-1 is incorrect because -1 cannot have more than two square roots, in the same way that any real, imaginary or complex number cannot have more than
two square roots, more than three cube roots, more than four fourth roots, more than five fifth roots etc. This means that k^2=-1 is incorrect unless k can equal either i or j.
4. Further to my previous comments, -1 does not have more than two square roots, nevertheless -1 does have three cube roots which are cos60+isin60, cos180+isin180 which equals -1, and
cos300+isin300. Hamilton seemed to have no understanding of these matters.
the encyclopedic entry of binary code
Binary code is the system of representing text or computer processor instructions by the use of a two digit number system. This system is composed of only the number zero, representing the off state,
and the number one, representing the on state, combined in groups of 8. These groups of 8 bits can represent up to 256 different values and can correspond to a variety of different symbols, letters or
instructions. An example of this is the uppercase A, which in ASCII binary is 01000001.
In computing and telecommunication, it is used for any of a variety of methods of coding data, such as sequences of characters, into sequences of groups of bits, including fixed-width words or bytes,
and variable-length codes such as Huffman code and arithmetic coding.
In a fixed-width binary code, each letter, digit, or other character, is represented by a sequence of bits of the same length, usually indicated in code tables by the octal, decimal or hexadecimal
notation for the value of that sequence of bits interpreted as a binary number.
For representing texts in the Latin alphabet often a fixed width 8-bit code is used. The ISO 8859-1 character code uses 8 bits for each character e.g. "R" is "01010010" and "b" is "01100010"; the
block of 8 bits is called a byte; it extended the earlier ASCII code, based on the version of the Latin alphabet used for English, which uses 7 bits to represent 128 characters (0–127).
The Unicode standard defines several variable-width encodings and the fixed-width 32-bit (4-byte) UTF-32 code, potentially having room for billions of characters, but using barely more than 1 million
combinations as definable code points.
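These fixed- and variable-width encodings can be inspected directly; for instance, Python's built-in codecs make the byte counts visible (a quick illustration):

```python
# ISO 8859-1 ("latin-1") is a fixed-width 8-bit code: "R" is one byte, 01010010.
r = "R".encode("latin-1")
print(format(r[0], "08b"))            # 01010010, matching the text above

# UTF-32 (big-endian, no byte-order mark) is fixed-width: 4 bytes per character.
print(len("A".encode("utf-32-be")))   # 4

# UTF-8 is variable-width: ASCII characters such as "A" need only 1 byte.
print(len("A".encode("utf-8")))       # 1
```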
A binary sequence can be translated into a decimal number using the following formula, with $y_i$ being the 1/0 digit at position $i$, counting from the right:
$\left(2^0 \times y_0\right) + \left(2^1 \times y_1\right) + \left(2^2 \times y_2\right) + \dots$
Repeat the bracket and increase the exponent for every 1/0 in the sequence. It is important to remember that the formula is applied to the sequence from right to left.
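The formula translates directly into a short function; here is a sketch in Python (the function name is illustrative):

```python
def binary_to_decimal(bits: str) -> int:
    # Read the sequence from right to left: the rightmost digit is
    # multiplied by 2^0, the next by 2^1, and so on, then summed.
    total = 0
    for exponent, digit in enumerate(reversed(bits)):
        total += (2 ** exponent) * int(digit)
    return total

print(binary_to_decimal("01000001"))  # 65, the code for an uppercase "A"
print(binary_to_decimal("01100010"))  # 98, the code for "b"
```

This reproduces the values quoted above: 01000001 is 65 (uppercase A) and 01100010 is 98 (lowercase b).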
Santosh Vempala's papers
Fourier PCA (N. Goyal, Y. Xiao)
STOC 2014
Integer Feasibility of Random Polytopes (K. Chandrasekaran)
ITCS 2014
A Cubic Algorithm for Computing Gaussian Volume (B. Cousins)
SODA 2014
Near-optimal Deterministic Algorithms for Volume Computation via M-ellipsoids (D. Dadush)
Proc. National Academy of Sciences (PNAS)
The Complexity of Approximating Vertex Expansion (A. Louis and P. Raghavendra)
FOCS 2013
Statistical Algorithms and a lower bound for detecting planted cliques (V. Feldman, E. Grigorescu, L. Reyzin, Y. Xiao)
STOC 2013
The approximate rank of a matrix and its algorithmic applications. (N. Alon, T. Lee, A. Shraibman)
STOC 2013
Randomly oriented k-d trees adapt to intrinsic dimension.
FSTTCS 2012
The cutting plane method is polynomial for perfect matchings. (K. Chandrasekaran, L. Vegh)
FOCS 2012
Many sparse cuts via higher eigenvalues (A. Louis, P. Raghavendra, P. Tetali)
STOC 2012
Deterministic construction of an approximate M-ellipsoid and its applications to derandomizing lattice algorithms. (D. Dadush)
SODA 2012
Modeling High-dimensional Data: Technical Perspective.
Comm. ACM 55(2): 112, 2012.
Enumerative Lattice Algorithms in Any Norm via M-Ellipsoid Coverings (D. Dadush, C. Peikert)
Proc. of FOCS 2011.
A Deterministic Polynomial-time Approximation Scheme for Counting Knapsack Solutions(D. Stefankovic, E. Vigoda)
Proc. of FOCS 2011.
SIAM J. Comp., 41(2): 356-366, 2012
Algorithmic Extensions of Cheeger's Inequality to Higher Eigenvalues (A. Louis, P. Raghavendra, P. Tetali)
Proc. of APPROX 2011.
Semantic Communication for Simple Goals is Equivalent to On-line Learning (B. Juba)
Proc. of ALT 2011.
On Noise-Tolerant Learning of Sparse Parities and Related Problems (E. Grigorescu, L. Reyzin)
Proc. of ALT 2011.
LifeNet: A Flexible Ad hoc Networking Solution for Transient Environments (H. Mehendale, A. Paranjpe)
SIGCOMM 2011, demo.
Algorithms for Implicit Hitting Set Problems (K. Chandrasekaran, R. Karp, E. Moreno-Centeno)
Proc. of SODA 2011.
Learning Convex Concepts from Gaussian Distributions with PCA
Proc. of FOCS 2010.
On Nash-Equilibria of Approximation-Stable Games (P. Awasthi, N. Balcan, A. Blum, O. Sheffet)
Proc. of SAGT 2010.
Chipping Away at Censorship with User-Generated Content (S. Burnett, N. Feamster)
Proc. of USENIX Security 2010.
A Random Sampling Algorithm for Learning an Intersection of Halfspaces
JACM, 2010.
A New Approach to Strongly Polynomial Linear Programming (M. Bárász)
Proc. of ICS 2010.
Thin Partitions: Isoperimetry and Sampling for Star-shaped Bodies (K. Chandrasekara, D. Dadush)
Proc. of SODA 2010.
MyMANET: A Customizable Mobile Ad hoc Network (A. Paranjpe)
Proc. of ACM NSDR 2009.
Random Tensors and Planted Cliques (S. C. Brubaker)
Proc. of RANDOM 2009.
Sampling s-Concave Functions: The Limit of Convexity-based Isoperimetry (K. Chandrasekaran, A. Deshpande)
Proc. of RANDOM 2009.
Design and Deployment of a Blood Safety Monitoring Tool (S. Thomas, A. Osuntogun, J. Pitman, B. Mulenga)
Proc. of ICTD 2009.
Expanders via Random Spanning Trees (N. Goyal, L. Rademacher)
Proc. of SODA 2009.
Algorithmic Prediction of Health-Care Costs (D. Bertsimas, M. V. Bjarnad�ttir, M. A. Kane, J. C. Kryder, R. Pandey, G. Wang )
Operations Research, 2008 (special issue on Health Care).
Isotropic PCA and Affine-Invariant Clustering (S.C. Brubaker) [conf version]
Building Bridges (Eds. M. Grötschel and G.O.H. Katona), Bolyai Society Mathematical Studies, 2008. (special issue in honor of L. Lovász).
Proc. of FOCS 2008.
Path Splicing (M. Motiwala, M. Elmore and N. Feamster)
Proc. of SIGCOMM, 2008.
Logconcave Random Graphs (A. Frieze and J. Vera)
Proc. of STOC 2008.
A Discriminative Framework for Clustering via Similarity Functions (M. F. Balcan and A. Blum)
Proc. of STOC, 2008.
Life (and Routing) on the Wireless Manifold (V. Kanade)
Proc. of HotNets, 2007.
Path Splicing: Reliable Connectivity with Rapid Recovery (M. Motiwala and N. Feamster)
Proc. of HotNets, 2007.
Filtering Spam with Behavioral Blacklisting (A. Ramachandran and N. Feamster)
Proc. ACM Computer and Communication Security, 2007.
Adaptive Simulated Annealing: A Near-optimal Connection between Sampling and Counting (D. Stefankovic and E. Vigoda)
JACM 2009
Proc. of the 48th IEEE Symposium on Foundations of Computer Science (FOCS '07), 2007. (invited to special issue)
An Efficient Re-scaled Perceptron Algorithm for Conic Systems (A. Belloni, R. Freund)
Proc. of 20th Conf. on Computational Learning Theory, San Diego, 2007.
Dispersion of Mass and the Complexity of Randomized Geometric Algorithms (L. Rademacher)
Proc. of the 47th IEEE Symposium on Foundations of Computer Science (FOCS '06), 2006.
Advances in Mathematics, 2008.
Fast Algorithms for Logconcave Functions: Sampling, Rounding, Integration and Optimization (L. Lovász)
Proc. of the 47th IEEE Symposium on Foundations of Computer Science (FOCS '06), 2006.
Adaptive Sampling and Fast Low-rank Matrix Approximation (A. Deshpande)
Proc. of RANDOM, 2006.
Symmetric Network Computation (D. Pritchard)
Proc. of ACM SPAA, 2006.
Matrix Approximation and Projective Clustering via Volume Sampling (A. Deshpande, L. Rademacher and G. Wang)
Proc. of the 17th ACM-SIAM Symposium on Discrete Algorithms, 2006.
Theory of Computing, 2006
Local versus Global Properties of Metric Spaces (S. Arora, L. Lovász, I. Newman, Y. Rabani and Y. Rabinovich)
Proc. of the 17th ACM-SIAM Symposium on Discrete Algorithms, 2006.
SIAM J. Comp. 41(1): 250-271, 2012
Nash Equilibria in Random Games (I. Barany and A. Vetta)
Proc. of the 46th IEEE Symposium on Foundations of Computer Science (FOCS '05), 2005. (invited to special issue)
Random Structures and Algorithms, 2007.
The spectral method for general mixture models (R. Kannan and H. Salmasian)
Proc. of the 18th Conference on Learning Theory, 2005 (Mark Fulk award).
SIAM J. Computing, 2008.
A Divide-and-Merge Methodology for Clustering (D. Cheng, R. Kannan and G. Wang)
Proc. of the ACM Symposium on Principles of Database Systems, 2005.
ACM Trans. Database Systems, 2006.
Tensor decomposition and approximation schemes for constraint satisfaction problems.
(W. F. de la Vega, R. Kannan and M. Karpinski)
Proc. of the 37th ACM Symposium on the Theory of Computing (STOC '05), 2005.
Geometric Random Walks: A Survey.
MSRI volume on Combinatorial and Computational Geometry.
Testing geometric convexity. (Luis Rademacher)
Proc. of the 24th FST & TCS, Chennai, 2004.
On Kernels, Margins and Low-dimensional Mappings. (Nina Balcan, Avrim Blum)
Proc. of the 15th Conf. Algorithmic Learning Theory, Padua, 2004.
Machine Learning
Hit-and-run from a corner. (L. Lovász)
Proc. of the 36th ACM Symposium on the Theory of Computing (STOC '04), Chicago, 2004.
SIAM J. Computing (STOC04 special issue).
A simple polynomial-time rescaling algorithm for solving linear programs.(J. Dunagan)
Proc. of the 36th ACM Symposium on the Theory of Computing (STOC '04), Chicago, 2004.
Math. Prog. A
Simulated annealing in convex bodies and an O*(n^4) volume algorithm. (L. Lovász)
Proc. of the 44th IEEE Foundations of Computer Science (FOCS '03), Boston, 2003.
JCSS (FOCS03 special issue).
Simulated annealing for convex optimization . (Adam Kalai)
Math of OR.
Logconcave functions: Geometry and efficient sampling algorithms. (L. Lovász)
Proc. of the 44th IEEE Foundations of Computer Science (FOCS '03), Boston, 2003.
Random Structures and Algorithms
Efficient algorithms for the online decision problem. (Adam Kalai)
Proc. of 16th Conf. on Computational Learning Theory, Washington D.C., 2003.
A spectral algorithm for learning mixtures of distributions. (Grant Wang)
Proc. of the 43rd IEEE Foundations of Computer Science (FOCS '02), Vancouver, 2002.
JCSS (special issue for FOCS '02), 68(4), 841--860, 2004.
Solving convex programs by random walks. (Dimitris Bertsimas)
Journal of the ACM (JACM) 51(4), 540--556, 2004.
Proc. of the 34th ACM Symposium on the Theory of Computing (STOC '02), Montreal, 2002.
An approximation algorithm for the minimum-cost k-vertex connected subgraph.
(Joseph Cheriyan, Adrian Vetta)
SIAM J. Computing, 32(4) (2003), 1050-1055.
Network design via iterative rounding of setpair relaxations. (Joseph Cheriyan, Adrian Vetta)
A preliminary version of the previous two appeared as
Approximation algorithms for minimum-cost k-connected subgraphs.
Proc. of the 34th ACM Symposium on the Theory of Computing (STOC '02), Montreal, 2002.
Flow metrics. (Claudson Bornstein)
Proc. of the 5th Symposium on Latin American Theoretical Informatics, Cancun, 2002.
Theoretical Computer Science (special issue for LATIN '02), 321(1), 13--24, 2004.
On Euclidean embeddings and bandwidth minimization. (John Dunagan)
Proc. of the 5th Workshop on Randomization and Approximation, Berkeley, 2001.
Optimal outlier removal in high-dimensional spaces. (John Dunagan)
Proc. of the 33rd ACM Symposium on the Theory of Computing (STOC '01), Crete, 2001.
JCSS (special issue for STOC '01), 68(2), 335--373, 2004.
Edge covers of setpairs and the iterative rounding method. (Joseph Cheriyan)
Proc. of Integer Programming and Combinatorial Optimization, Utrecht, 2001.
Fences are futile: on relaxations for the linear ordering problem. (Alantha Newman)
Proc. of Integer Programming and Combinatorial Optimization, Utrecht, 2001.
On clusterings: good, bad and spectral. (Ravi Kannan and Adrian Vetta)
Journal of the ACM (JACM) 51(3), 497--515, 2004.
Proc. of the 41st Foundations of Computer Science (FOCS '00), Redondo Beach, 2000.
Efficient algorithms for universal portfolios. (Adam Kalai)
J. Machine Learning Research, 3, (2002), 423--440 (invited).
Proc. of the 41st Foundations of Computer Science (FOCS '00), Redondo Beach, 2000.
Factor 4/3 approximations for minimum 2-connected subgraphs. (Adrian Vetta)
Proc. of the 3rd Workshop on Approximation , Saarbrucken, 2000.
On the approximability of the traveling salesman problem. (Christos H. Papadimitriou)
Proc. of the 32nd ACM Symposium on the Theory of Computing (STOC '00), Portland, 2000.
To appear in Combinatorica.
Randomized meta-rounding. (Bob Carr)
Random Structures and Algorithms, 20(3), (2002), 343-352 (invited).
Proc. of the 32nd ACM Symposium on the Theory of Computing (STOC '00), Portland, 2000.
On the Held-Karp relaxation for the asymmetric and symmetric TSPs. (Bob Carr)
Proc. of the 11th ACM-SIAM Symposium on Discrete Algorithms , San Francisco, 2000.
Mathematical Programming, 100(3), 569--587, 2004.
Approximating multicast congestion. (Berthold Vocking)
Proc. of ISAAC, Chennai, 1999.
An algorithmic theory of learning: Robust Concepts and Random Projection. (Rosa I. Arriaga)
Proc. of the 40th Foundations of Computer Science (FOCS '99), New York, 1999.
To appear in Machine Learning.
A convex relaxation for the Asymmetric TSP. (Mihalis Yannakakis)[two pages only]
Proc. of the 10th ACM-SIAM Symposium on Discrete Algorithms, Baltimore, 1999.
Clustering large graphs via the Singular Value Decomposition. (Petros Drineas, Ravi Kannan, Alan Frieze and V. Vinay)
Machine Learning 56, 9--33, 2004.
Proc. of the 10th ACM-SIAM Symposium on Discrete Algorithms, Baltimore, 1999.
Random Projection: A New Approach to VLSI Layout.
Proc. of the 39th Foundations of Computer Science (FOCS '98), Palo Alto, 1998.
Fast Monte-Carlo Algorithms for Finding Low-Rank Approximations. (A. Frieze, R. Kannan)
Proc. of the 39th Foundations of Computer Science (FOCS '98), Palo Alto, 1998.
Journal of the ACM (JACM), 51(6), 1025-1041, 2004.
Semi-Definite Relaxations for Minimum Bandwidth and other Vertex-Ordering Problems.
(Avrim Blum, Goran Konjevod and R. Ravi)
Proc. of the 30th ACM Symposium on the Theory of Computing (STOC '98), Dallas, 1998.
Theoretical Computer Science, 235 (2000), 25-42 (special issue in honour of Manuel Blum).
A Random Sampling based Algorithm for learning the Intersection of Half-spaces
Proc. of the 38th Foundations of Computer Science (FOCS '97), Miami, 1997. (Machtey Prize)
Sampling Lattice Points. (Ravi Kannan)
Proc. 29th ACM Symposium on the Theory of Computing (STOC '97), El Paso, 1997.
Invited for publication in Journal of Comp. and System Sciences.
Locality-Preserving Hashing in Multidimensional Spaces.
(Piotr Indyk, Rajeev Motwani and Prabhakar Raghavan)
Proc. 29th ACM Symposium on the Theory of Computing (STOC '97), El Paso, 1997.
Latent Semantic Indexing: A Probabilistic Analysis.
(Christos Papadimitriou, Prabhakar Raghavan and Hisao Tamaki)
Proc. 17th ACM Symposium on the Principles of Database Systems, Seattle, 1998.
Journal of Comp. and System Sciences (special issue for PODS '01), 61, 2000 217-235.
Simple Markov-Chain Algorithms for Generating Bipartite Graphs and Tournaments.
(Ravi Kannan and Prasad Tetali)
Proc. 8th ACM-SIAM Symposium on Discrete Algorithms, New Orleans, 1997.
Random Structures and Algorithms, 14(4), 1999, 293-308.
A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions.
(Avrim Blum, Alan Frieze and Ravi Kannan)
Proc. 37th IEEE Symposium on the Foundations of Computer Science (FOCS '96), Burlington, 1996.
Algorithmica, 22(1), 35-52 (invited).
The Colin de Verdière Number and Sphere Representations of a Graph
(Andrew Kotlov and László Lovász)
Combinatorica, 17(4), 1997.
A Constant-Factor Approximation for the k-MST Problem. (Avrim Blum, R. Ravi)
Proc. 28th ACM Symposium on the Theory of Computing (STOC '96), Philadelphia, 1996.
J. Computer and System Sciences, 58(1), 101-108 (special issue for STOC '96).
Improved Approximations for Minimum-Weight k-Trees and Prize-Collecting Salesmen.
(Baruch Awerbuch, Yossi Azar, Avrim Blum)
Proc. 27th ACM Symposium on the Theory of Computing (STOC '95), Las Vegas, 1995.
SIAM J. on Computing, 28(1) 1999.
A Constant-Factor Approximation for the k-MST Problem in the Plane. (A. Blum, P. Chalasani)
Proc. 27th ACM Symposium on the Theory of Computing (STOC '95), Las Vegas, 1995.
A Constant-Factor Approximation Algorithm for the Geometric k-MST Problem in the Plane
(J.S.B. Mitchell, A. Blum, P. Chalasani)
Siam J. on Computing , 28(3) 1999.
Improved Approximations for Finding Minimum 2-connected Subgraphs via Better Lower-Bounding Techniques
(Naveen Garg, Aman Singla)
Proc. 4th ACM-SIAM Symposium on Discrete Algorithms, Austin, 1993.
``A Limited-Backtrack Greedy Schema for Approximation Algorithms,''
Proc. 14th Conf. on the Foundations of Software Technology and Theoretical Computer Science, Madras, 1994. | {"url":"http://www.cc.gatech.edu/~vempala/papers/AllPapers.html","timestamp":"2014-04-16T10:29:40Z","content_type":null,"content_length":"33639","record_id":"<urn:uuid:932977ca-7ea0-49ed-9e37-fa6ceb36e80f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Poisson process
November 23rd 2010, 05:08 PM #1
A store opens at 8am. From 8 until 10, customers arrive at a Poisson rate of (4) per hour. Between 10 and 12, they arrive at a Poisson rate of (8) per hour. From 12 to 2, the arrival rate increases steadily from (8) per hour at 12 to (10) per hour at 2. And from 2 to 5, the arrival rate drops steadily from (10) per hour at 2 to (4) per hour at 5. Determine the probability distribution of the number of customers that enter the store on a given day.
My approach:
Generating the intensity function (with $t$ measured in hours after 8am), I have:
$\lambda(t)=\begin{cases}4, & 0\le t<2\\ 8, & 2\le t<4\\ 8+(t-4), & 4\le t<6\\ 10-2(t-6), & 6\le t\le 9\end{cases}$
Finding the respective $m(t)$ (mean) values, I have 8, 16, 18, 21. E.g. $\int_0^2 4\,dt= 8$
Can I then conclude that the distribution follows Poi(8+16+18+21)=Poi(63)?
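For what it's worth, the integral can also be checked numerically. Below is a sketch of my own (Python, with t measured in hours after 8am, so the day runs over 0 ≤ t ≤ 9); `lam` is the piecewise rate taken from the problem statement:

```python
# lam(t): arrival rate in customers/hour, t = hours after 8 am
def lam(t):
    if t < 2:
        return 4.0                    # 8-10: constant 4 per hour
    if t < 4:
        return 8.0                    # 10-12: constant 8 per hour
    if t < 6:
        return 8.0 + (t - 4.0)        # 12-2: rises steadily 8 -> 10
    return 10.0 - 2.0 * (t - 6.0)     # 2-5: falls steadily 10 -> 4

# Midpoint rule; exact here because lam is piecewise linear and the
# breakpoints 2, 4, 6 land on subinterval boundaries.
n = 9000
h = 9.0 / n
mean = sum(lam((i + 0.5) * h) for i in range(n)) * h
print(round(mean, 6))  # 63.0
```

The total comes out as 8 + 16 + 18 + 21 = 63, and since the count over [0, T] of a nonhomogeneous Poisson process is Poisson with mean $\int_0^T \lambda(t)\,dt$, the day's total is indeed Poi(63).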
{"url":"http://mathhelpforum.com/advanced-statistics/164217-poisson-process.html","timestamp":"2014-04-20T19:10:26Z","content_type":null,"content_length":"30023","record_id":"<urn:uuid:5fb85990-9477-460c-8507-e139bc9d5a73>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Survival graphs following stcox
From "M.V. \(Trey\) Hood III" <th@uga.edu>
To <statalist@hsphsun2.harvard.edu>
Subject st: Survival graphs following stcox
Date Tue, 6 Apr 2004 13:41:05 -0400
I'm trying to draw a set of survival curves following an stcox model. Say
after stset something like:
stcox x1 x2 x3 x4 x5, robust strata(x6) cluster(x7)
I have a key variable of interest (say x1 in this case) that is binary and
so I want to plot one survival curve with the value on this variable set to
1 and another where the value is set to 0. This seems easy enough using
something like:
sts graph, by(X1)
but this command does not take into account the effects of the other
covariates in the model above.
My question: How does one run an stcox model and then plot various survival
curves holding the effects of the x's at some level (their means) while
manipulating the values on another variable of interest. This seems to be
easy to do following streg using the stcurv command, but not very
straightforward following stcox. Am I missing something simple or has
somone come up with a solution for this question?
Thanks--Trey Hood, UGA
{"url":"http://www.stata.com/statalist/archive/2004-04/msg00122.html","timestamp":"2014-04-19T07:08:38Z","content_type":null,"content_length":"5772","record_id":"<urn:uuid:c664cbf8-0578-40df-b1d9-48ac557aa0aa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
tree diagram
April 30th 2010, 06:17 AM #1
hi this is my question:
Danny has a biased coin.
The probability that the coin lands on heads is 2/3.
Danny throws the coin twice.
A) Fill in the probabilities on the tree diagram. (tree diagram at bottom of page)
on the first throw, heads would be 2/3 and tails 1/3.
on the second throw, would heads still be 2/3 and tails 1/3?
Hello, andyboy179!
Danny has a biased coin.
The probability that the coin lands on heads is 2/3.
Danny throws the coin twice.
A) Fill in the probabilities on the tree diagram.
On the first, heads would be 2/3 and tail would be 1/3.
On the second, would heads still be 2/3 and 1/3 on the tails?
The first "branch" looks like this:
      2/3 --- H
    *
      1/3 --- T
And every subsequent branch is exactly the same.
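Since the two throws are independent, the same 2/3 and 1/3 appear on the second set of branches, and multiplying along each path through the tree gives the four outcome probabilities. A small sketch of my own in Python:

```python
from fractions import Fraction

branch = {'H': Fraction(2, 3), 'T': Fraction(1, 3)}  # same on every throw

# Multiply along the two branches of each path through the tree.
outcomes = {a + b: branch[a] * branch[b] for a in 'HT' for b in 'HT'}
# HH: 4/9, HT: 2/9, TH: 2/9, TT: 1/9; the four leaves sum to 1
```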
April 30th 2010, 06:59 AM #2
{"url":"http://mathhelpforum.com/statistics/142277-tree-diagram.html","timestamp":"2014-04-18T03:40:36Z","content_type":null,"content_length":"33450","record_id":"<urn:uuid:fe31b83a-82c0-4914-879e-23fffaeac278>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Anna Virágvölgyi
I intended to make the elements carry in themselves the features of the entire set that includes them. To find these elements I start out from a marked state. The marked state here is a cylinder striped in various colours. The elements are congruent squares (they may equally be considered words, codes, propositions, concepts, cells, etc.). The ornaments on the cylinder include every combinatorially possible square with the same number of stripes, in one continuous run. As a consequence of their origin, certain elements can cohere (fit together) and others cannot. By rearranging the squares, various constraints on the coherence of the elements are accepted or rejected. So the shape and inner structure of the resulting pattern visualise coherence. (Coherence is examined like the criteria of beauty and
This is a pattern of 48 different squares. Although the arrangement of the squares is not regular, since all the elements are different, the whole surface is symmetrical. There are several inner patterns with identical outer form. Other changes in the neighbourhoods of the elements engender different outer shapes. There are innumerable patterns possible on the plane, and on the surfaces of solid figures as well.
This is a special picture of our favourite place of excursion. The level lines of the tourist map were vectorized and shaded according to the scale of height. Coauthor: Szécsi József
A picture of an unwrapped cylinder.
The universal cycle
"a b c a b a b a b c b c b c a b c a b c b c a c b a c a b a c b c a c a c a b a b c a c a b c b"
includes all possible words of length six from the alphabet (a, b, c) in which no letter of the alphabet is paired (no two adjacent letters are equal). The picture is created by substituting stripes for the letters of the universal cycle.
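A quick count of my own (in Python): there are 3 * 2^5 = 96 six-letter words over {a, b, c} with no letter immediately repeated, and they fall into exactly 48 classes when each word is identified with its reversal. This matches the 48 distinct tiles, assuming (as the artwork's count suggests) that a striped tile read from the opposite end is the same square.

```python
from itertools import product

# All length-6 words over {a, b, c} with no two equal adjacent letters.
words = [''.join(w) for w in product('abc', repeat=6)
         if all(x != y for x, y in zip(w, w[1:]))]

# Identify each word with its reversal (the stripes read from either end).
classes = {min(w, w[::-1]) for w in words}
# len(words) == 96, len(classes) == 48
```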
Due to the nature of universal cycles, all possible diagonally striped square tiles (with six stripes, each differing from its neighbour) can be found on this picture. | {"url":"http://gallery.bridgesmathart.org/exhibitions/2010-bridges-conference/anna-viragvolgyi","timestamp":"2014-04-20T23:27:36Z","content_type":null,"content_length":"18548","record_id":"<urn:uuid:2e148f69-2165-4502-a3c1-acf48de679e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: C Library Functions (3) Updated: 13 May 1991
im_match_linear_search, im_match_linear - resample to make a match
#include <vips/vips.h>
im_match_linear( IMAGE *ref, IMAGE *sec, IMAGE *out,
int xr1, int yr1, int xs1, int ys1,
int xr2, int yr2, int xs2, int ys2 )
im_match_linear_search( IMAGE *ref, IMAGE *sec, IMAGE *out,
int xr1, int yr1, int xs1, int ys1,
int xr2, int yr2, int xs2, int ys2,
int hwindowsize, int hsearchsize )
im_match_linear_search() attempts to transform sec to make it match ref. The transformation is linear, that is, it only involves scale, rotate and translate.
im_match_linear_search() requires a pair of tie points to fix the parameters of its transformation. You should pick points as far apart as possible to increase accuracy. im_match_linear_search() will
search the area in the image around each tie point for a good fit, so your selection of points need not be exact. WARNING! This searching process will fail for rotations of more than about 10 degrees
or for scales of more than about 10 percent. The best you can hope for is < 1 pixel error, since the command does not attempt sub-pixel correlation.
hwindowsize and hsearchsize set the size of the area to be searched: we recommend values of 5 and 14.
The output image is positioned and clipped so that you can immediately subtract it from orig to obtain pixel difference images.
im_match_linear() works exactly as im_match_linear_search(), but does not attempt to correlate to correct your tie points. It can thus be used for any angle and any scale, but you must be far more
careful in your selection.
J.Ph.Laurent - 12/12/92
J.Cupitt - 22/02/93
{"url":"http://www.makelinux.net/man/3/I/im_match_linear","timestamp":"2014-04-21T02:03:24Z","content_type":null,"content_length":"9898","record_id":"<urn:uuid:e60b45b5-86c7-43af-998b-ec858e3f7525>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics for Health Care Research
1. Identifying Level of Measurement: Nominal
2. Identifying Level of Measurement: Ordinal
3. Identifying Level of Measurement: Interval/Ratio
4. Understanding Percentages
5. Frequency Distributions with Percentages
6. Cumulative Percentages and Percentile Ranks
7. Interpreting Histograms
8. Interpreting Line Graphs
9. Identifying Probability and Nonprobability Sampling Methods
10. Understanding the Sampling Section of a Research Report: Sample Criteria, Sample Size, Refusal Rate, and Mortality
11. Using Statistics to Describe a Study Sample
12. Using Power Analysis to Determine Sample Size
13. Understanding Reliability Values of Measurement Methods
14. Understanding Validity Values of Measurement Methods
15. Measurements of Central Tendency: Mean, Median, and Mode
16. Mean and Standard Deviation
17. Mean, Standard Deviation, and 68% of Normal Curve
18. Mean, Standard Deviation, and 95% and 99% of Normal Curve
19. Determining Skewness of a Distribution
20. Understanding T Scores
21. Effect Size
22. Scatterplot
23. Pearson's Product-Moment Correlation Coefficient
24. Understanding Pearson's r, Effect Size, and Percentage of Variance Explained
25. Multiple Correlations I
26. Multiple Correlations II
27. Simple Linear Regression
28. Multiple Linear Regression
29. t-Tests for Independent Groups I
30. t-Tests for Independent Groups II
31. t-Tests for Dependent Groups
32. Significance of Correlation Coefficient
33. Standard Error of the Mean: 95th Confidence Interval
34. Standard Error of the Mean: 99th Confidence Interval
35. Standard Error of a Percentage and 95th Confidence Interval
36. Analysis of Variance (ANOVA) I
37. Analysis of Variance (ANOVA) II
38. Post Hoc Analyses Following ANOVA
39. Analysis of Variance (ANOVA) with Confidence Intervals
40. Chi Square I
41. Chi Square II
42. Spearman Rank-Order Correlation Coefficient
43. Mann-Whitney U Test
44. Wilcoxon Matched-Pairs Signed-Ranks Test
45. Specificity and Sensitivity | {"url":"http://www.us.elsevierhealth.com/us/product/toc.jsp?isbn=9781416002260","timestamp":"2014-04-19T04:59:08Z","content_type":null,"content_length":"3987","record_id":"<urn:uuid:01e3cea1-82de-4414-840f-b5dba0ea17db>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Glenview, IL Prealgebra Tutor
Find a Glenview, IL Prealgebra Tutor
...Based on my background and GRE scores, I have been accepted into a PhD program in Molecular and Cellular Biology beginning Fall 2013. Furthermore, I look forward to a career in
teaching high school or college level sciences, so I would love the opportunity to work with you to broaden my...
26 Subjects: including prealgebra, chemistry, GED, GRE
...Since then I have worked as a TA for "Finite Mathematics for Business" which had a major component of counting (combinations, permutations) problems, and linear programming, both of which are
common in discrete math. Other topics in which I am well versed are formulation of proofs, which is a ma...
22 Subjects: including prealgebra, calculus, computer programming, ACT Math
...I worked in the field of Architecture for 18 years and am a professional musician (songwriting and guitar).My methodology of teaching is inquiry based; it is preferable that a student discover
their own way of learning a new skill or a new set of facts than simply by me telling them.I have a cert...
41 Subjects: including prealgebra, reading, writing, geometry
...Having been raised by two mathematicians, I feel strongly that everyone has the potential to succeed at mathematics. It is unfortunate that the mathematics curriculum in this country does not
inspire creativity, at least not until upper level college courses, and this is one reason many students...
32 Subjects: including prealgebra, reading, English, calculus
I received my BA in Philosophy from Macalester College in 2010 and then decided that I wanted to pursue a second dual bachelor's degree in Biology and Chemistry with the ultimate goal of applying
to medical school and/or enter a PhD program in Chemistry or Biology. I have taken all of the required ...
24 Subjects: including prealgebra, English, chemistry, writing
Related Glenview, IL Tutors
Glenview, IL Accounting Tutors
Glenview, IL ACT Tutors
Glenview, IL Algebra Tutors
Glenview, IL Algebra 2 Tutors
Glenview, IL Calculus Tutors
Glenview, IL Geometry Tutors
Glenview, IL Math Tutors
Glenview, IL Prealgebra Tutors
Glenview, IL Precalculus Tutors
Glenview, IL SAT Tutors
Glenview, IL SAT Math Tutors
Glenview, IL Science Tutors
Glenview, IL Statistics Tutors
Glenview, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/Glenview_IL_prealgebra_tutors.php","timestamp":"2014-04-18T15:48:01Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:72ae9064-3f52-4830-8af4-ac2f16c254d7>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need some help with basic probability notation.
February 21st 2011, 05:36 PM #1
"A multiple choice test consists of four questions, each with four possible answers, only one of which is correct. You randomly guess each answer. What is the probability of:
A: Getting them all right
B: Getting at least one right"
A is (1/4)^4, which is 1/16. B is (1/4)^1, which is 1/4. I don't know what "proper" notation for this is, though.
Part A.
$\bigg(\dfrac{1}{4}\bigg)^4 \neq \dfrac{1}{16}$
Re-do your part B.
At least one correct answer means:
$P(X \geq 1) = P(X=1)+P(X=2)+P(X=3)+P(X=4)$
Well, that's embarrassing. I was thinking 2^4 instead of 4^4. Three Calculus courses, Ordinary Differential Equations, and Linear Algebra down, and I apparently can't work with exponents.
Anyway, I understand that part A is 1/256. I just don't know the proper notation for this; I am in a Calculus based Statistics university course, and my professor is very anal about notation.
I'm not following with B. The chance of getting all four wrong is (3/4)^4. 1-(3/4)^4 would mean you didn't get all four wrong, meaning at least one is correct. Is that correct? What sort of
notation is correct for this?
Human beings make mistakes. So let's not worry about that!
I'm not following with B. The chance of getting all four wrong is (3/4)^4. 1-(3/4)^4 would mean you didn't get all four wrong, meaning at least one is correct. Is that correct? What sort of
notation is correct for this?
Yes That is correct. You can conclude that the probability of getting at least one answer correct is: $P(X \geq 1) = [1-P(X=0)]= 1-\bigg(\dfrac{3}{4}\bigg)^4=0.6835$
And for part A, the probability $P(X=4)=\bigg(\dfrac{1}{4}\bigg)^4=\dfrac{1}{256}=0.0039$
You can also show the use of the binomial distribution in these problems. I assume you have used the binomial distribution to get these answers.
Is that the notation you are talking about?
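In binomial notation, X ~ B(4, 1/4) counts the correct guesses, and both answers can be checked numerically; a quick sketch of my own in Python:

```python
from math import comb

n, p = 4, 1/4                     # four questions, P(correct guess) = 1/4

def pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

all_four = pmf(4)                 # P(X = 4) = (1/4)^4 = 1/256
at_least_one = 1 - pmf(0)         # P(X >= 1) = 1 - (3/4)^4
print(all_four, at_least_one)     # 0.00390625 0.68359375
```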
February 21st 2011, 05:59 PM #2
February 21st 2011, 06:31 PM #3
February 21st 2011, 06:52 PM #4 | {"url":"http://mathhelpforum.com/statistics/172138-need-some-help-basic-probability-notation.html","timestamp":"2014-04-17T23:08:22Z","content_type":null,"content_length":"40675","record_id":"<urn:uuid:28f8ef71-478c-444d-8533-c09c403c8054>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
sequence -regex-posix -package
Evaluate each action in the sequence from left to right, and collect the results.
Evaluate each action in the sequence from left to right, and ignore the results.
Evaluate each monadic action in the structure from left to right, and ignore the results.
Evaluate each action in the structure from left to right, and ignore the results.
General purpose finite sequences. Apart from being finite and having strict operations, sequences also differ from lists in supporting a wider variety of operations efficiently. An amortized running
time is given for each operation, with n referring to the length of the sequence and i being the integral index used by some operations. These bounds hold even in a persistent (shared) setting. The
implementation uses 2-3 finger trees annotated with sizes, as described in section 4.2 of Ralf Hinze and Ross Paterson, "Finger trees: a simple general-purpose data structure", Journal of
Functional Programming 16:2 (2006) pp 197-217. http://www.soi.city.ac.uk/~ross/papers/FingerTree.html Note: Many of these operations have the same names as similar operations on lists in the Prelude.
The ambiguity may be resolved using either qualification or the hiding clause.
The subsequences function returns the list of all subsequences of the argument. > subsequences "abc" == ["","a","b","ab","c","ac","bc","abc"] | {"url":"http://www.haskell.org/hoogle/?hoogle=sequence+-regex-posix+-package","timestamp":"2014-04-16T11:42:24Z","content_type":null,"content_length":"11558","record_id":"<urn:uuid:f42cef4a-62eb-46dd-b909-e218b760e3d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free Text Editor With Math Equations Checker: Lurch
Lurch is a free computer aided instruction software which will check how well you've written the math in your text documents. This is actually a text editor, but instead of having a spell checker for checking spelling and grammar, Lurch will check mathematical equations, proofs, algebra, arithmetic and all the other expressions that are important for solving math problems.
Interface of this free text editor with a built-in math checker can be seen above. In appearance it's not that much different from any other text editor. The toolbar at the top holds all the standard text formatting options. Differences from standard text editors are noticed when you move down: just above the document you can see a large selection of math symbols, which can be used when writing math proofs and equations. Key features of this free computer aided instructional text editor with a built-in math checker are:
• Text editor – full set of text formatting tools, insert images, etc
• Mathematical expression checker for both logic and algebra problems
• Indicator which can mark on the fly errors in math – as you type
• Built-in rules for logic, number theory, set theory, and a few others
• Cross platform – works on Windows, Linux, BSD and Mac OS
• Simple and easy to use interface – large selection of math symbols
Lurch’s main goal is to allow instructors to create interactive math teaching aids. Lessons can be setup with the help of topics and also rules which are created within them. Rules are from where
this free mathematical text editor will check if the math that the student writes is correct. In order for Lurch to work, you need to have an understanding of how the match expressions have to be
written, and of course some understanding of math. Here’s a few pointers to help you get started.
Similar software: LyX.
How to write math problems and check them with Lurch
When you open up Lurch, you’ll be greeted with a short 10 step tutorial to help you get the hang of things. To write mathematical equations and expressions you can use the toolbar at the top.
Something that’s more important would be topics, rules and reasons.
There are some built-in topics, which contain rules that are gonna be used by the math checker. Setting a different topic means that different rules are gonna be applied, for example logic or number
theory rules in the math problems that you’re writing. Set topic in File >> Choose topic.
Before you can start checking if the math proofs and expressions that you’ve written are correct, you need to select them and then click on the green icon in the top right corner. This will tell
Lurch that this isn’t just text and that it’s actually a math expression. Green icon where the arrow points is where you’ll turn on the checker. If there are problems, you should see a red indicator
next to the expression that you’ve selected.
There’s a lot more that you need to know to use Lurch. What we mentioned above is just the tip of the iceberg. If you’re teaching math and if you’re looking for something that might spark the
interest of your students, this is just the things. Try it, free download.
{"url":"http://www.ilovefreesoftware.com/21/windows/productivity/math-equations-checker-lurch.html","timestamp":"2014-04-17T00:58:19Z","content_type":null,"content_length":"36081","record_id":"<urn:uuid:eea36df2-752e-4697-bada-dea2661fed7e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical Cryptanalysis as a SAT Problem
February 2000
Volume 24
Issue 1-2
pp 165-203
Cryptographic algorithms play a key role in computer security and the formal analysis of their robustness is of utmost importance. Yet, logic and automated reasoning tools are seldom used in the
analysis of a cipher, and thus one cannot often get the desired formal assurance that the cipher is free from unwanted properties that may weaken its strength.
In this paper, we claim that one can feasibly encode the low-level properties of state-of-the-art cryptographic algorithms as SAT problems and then use efficient automated theorem-proving systems and
SAT-solvers for reasoning about them. We call this approach logical cryptanalysis.
In this framework, for instance, finding a model for a formula encoding an algorithm is equivalent to finding a key with a cryptanalytic attack. Other important properties, such as cipher integrity
or algebraic closure, can also be captured as SAT problems or as quantified boolean formulae. SAT benchmarks based on the encoding of cryptographic algorithms can be used to effectively combine
features of “real-world” problems and randomly generated problems.
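The model-finding-equals-key-finding idea can be sketched on a toy example (a 4-bit one-time XOR "cipher", vastly simpler than DES; the encoding and the naive solver below are our own illustration, not the paper's):

```python
from itertools import product

def xor_cipher_cnf(plain, cipher):
    """Encode key recovery for a one-time-XOR toy cipher as CNF.

    Variable i (1-based) stands for key bit i.  Since c_i = p_i XOR k_i,
    each key bit is forced by a unit clause: k_i must equal p_i XOR c_i.
    """
    clauses = []
    for i, (p, c) in enumerate(zip(plain, cipher), start=1):
        clauses.append([i] if p ^ c else [-i])
    return clauses

def find_model(clauses, n_vars):
    """Naive model search: try every assignment (fine for tiny n)."""
    for bits in product([False, True], repeat=n_vars):
        def holds(lit):
            return bits[abs(lit) - 1] if lit > 0 else not bits[abs(lit) - 1]
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return [int(b) for b in bits]
    return None

plain  = [1, 0, 1, 1]
cipher = [0, 0, 1, 0]
key = find_model(xor_cipher_cnf(plain, cipher), 4)
print(key)  # the recovered key: [1, 0, 0, 1]
```

A satisfying assignment of the CNF is exactly a key consistent with the known plaintext/ciphertext pair, which is the sense in which model finding is cryptanalysis; real encodings of DES are far larger but follow the same pattern.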
Here we present a case study on the U.S. Data Encryption Standard (DES) and show how to obtain a manageable encoding of its properties.
We have also tested three SAT provers, TABLEAU by Crawford and Auton, SATO by Zhang, and rel-SAT by Bayardo and Schrag, on the encoding of DES, and we discuss the reasons behind their different performance.
A discussion of open problems and future research concludes the paper.
1. Abadi, M. and Needham, R.: Prudent engineering practice for cryptographic protocols, IEEE Trans. Software Engng. 22(1) (1996), 6–15.
2. Anderson, R. and Needham, R.: Programming Satan's computer, in Computer Science Today-Recent Trends and Developments, Lecture Notes in Comput. Sci. 1000, Springer-Verlag, 1996, pp. 426–440.
3. Andleman, D. and Reeds, J.: On the cryptanalysis of rotor machines and substitution-permutations networks, IEEE Trans. Inform. Theory 28(4) (1982), 578–584.
4. Ascione, M.: Validazione e benchmarking dei BDD per la criptanalisi del data encryption standard, Master's thesis, Facoltà di Ingegneria, Univ. di Roma I “La Sapienza”, March 1999. In Italian.
5. Bayardo, R. and Schrag, R.: Using CSP look-back techniques to solve real-world SAT instances, in Proc. of the 14th Nat. (US) Conf. on Artificial Intelligence (AAAI-97), AAAI Press/The MIT Press,
1997, pp. 203–208.
6. Biham, E. and Biryukov, A.: An improvement of Davies' attack on DES, in Advances in Cryptology-Eurocrypt 94, Lecture Notes in Comput. Sci., Springer-Verlag, 1994.
7. Biham, E. and Shamir, A.: Differential cryptanalysis of DES-like cryptosystems, J. Cryptology 4(1) (1991), 3–72.
8. Bryant, R.: Graph-based algorithms for Boolean function manipulation, IEEE Trans. Computers 35(8) (1986), 677–691.
9. Büning, H., Karpinski, M. and Flögel, A.: Resolution for quantified Boolean formulas, Inform. Comput. 117(1) (1995), 12–18.
10. Burrows, M., Abadi, M. and Needham, R.: A logic for authentication, ACM Trans. Comput. Systems 8(1) (1990), 18–36.
11. Cadoli, M., Giovanardi, A. and Schaerf, M.: An algorithm to evaluate quantified Boolean formulae, in Proc. of the 15th (US) Nat. Conf. on Artificial Intelligence (AAAI-98), AAAI Press/The MIT
Press, 1998, pp. 262–267.
12. Campbell, K. and Wiener, M.: DES is not a group, in Proc. of Advances in Cryptology (CRYPTO-92), Lecture Notes in Comput. Sci., Springer-Verlag, 1992, pp. 512–520.
13. Claesen, L. (ed.): Formal VLSI Correctness Verification: VLSI Design Methods, Vol. II, Elsevier Science Publishers, North-Holland, 1990.
14. Cook, S. and Mitchel, D.: Finding hard instances of the satisfiability problem: A survey, in Satisfiability Problem: Theory and Applications, Vol. 35, DIMACS Series in Discrete Math. Theoret.
Comput. Sci. Amer. Math. Soc., 1997, pp. 1–17.
15. Crawford, J. and Auton, L.: Experimental results on the crossover point in random 3SAT, Artif. Intell. 81(1–2) (1996), 31–57.
16. Cryptography Research Inc. DES key search project information, Technical report, Cryptography Research Inc., 1998. Available on the web at http://www.cryptography.com/des/.
17. Davis, M., Logemann, G. and Loveland, D.: A machine program for theorem-proving, Comm. ACM 5(7) (1962), 394–397.
18. Davis, M. and Putnam, H.: A computing procedure for quantificational theory, J. ACM 7(3) (1960), 201–215.
19. De Millo, R., Lynch, L. and Merrit, M.: Cryptographic protocols, in Proc. of the 14th ACM SIGACT Symposium on Theory of Computing (STOC-82), 1982, pp. 383–400.
20. Feistel, H., Notz, W. and Smith, L.: Some cryptographic techniques for machine-to-machine data communication, Proc. of the IEEE 63(11) (1975), 1545–1554.
21. Gomes, C. and Selman, B.: Problem structure in the presence of perturbation, in Proc. of the 14th Nat. (US) Conf. on Artificial Intelligence (AAAI-97), AAAI Press/The MIT Press, 1997.
22. Gomes, C., Selman, B. and Crato, N.: Heavy-tailed distributions in combinatorial search, in Third Internal. Conf. on Principles and Practice of Constraint Programming (CP-97), Lecture Notes in
Comput. Sci. 1330, Springer-Verlag, 1997, pp. 121–135.
23. Group of Experts on Information Security and Privacy. Inventory of controls on cryptography technologies, OLIS DSTI/ICCP/REG(98)4/REV3, Organization for Economic Co-operation and Development,
Paris, Sep. 1998.
24. Harrison, J.: Stalmarck's algorithm as a HOL derived rule, in Proc. of the 9th Internal. Conf. on Theorem Proving in Higher Order Logics (TPHOLs'96), Lecture Notes in Comput. Sci. 1125,
Springer-Verlag, 1996, pp. 221–234.
25. Johnson, D. and Trick, M. (eds): Cliques, Coloring, Satisfiability: The Second DIMACS Implementation Challenge, AMS Series in Discrete Math. and Theoret. Comput. Sci. 26, Amer. Math. Soc., 1996.
26. Kaliski, B., Rivest, R. and Sherman, A.: Is the Data Encryption Standard a group? (preliminary abstract), in Advances in Cryptology-Eurocrypt 85, Lecture Notes in Comput. Sci. 219,
Springer-Verlag, 1985, pp. 81–95.
27. Liberatore, P.: Algorithms and experiments on finding minimal models, Technical Report 09–99, Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, 1999.
28. Lowe, G.: Breaking and fixing the Needham-Schroeder public-key protocol using CSP and FDR, in Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Comput. Sci.
1055, Springer-Verlag, 1996, pp. 147–166.
29. Marraro, L.: Analisi crittografica del DES mediante logica booleana, Master's thesis, Facolta di Ingegneria, Univ. di Roma I “La Sapienza”, December 1998. In Italian.
30. Marraro, L. and Massacci, F.: A new challenge for automated reasoning: Verification and cryptanalysis of cryptographic algorithms, Technical Report 05–99, Dipartimento di Informatica e
Sistemistica, Università di Roma “La Sapienza”, 1999.
31. Massacci, F.: Using walk-SAT and rel-SAT for cryptographic key search, in Proc. of the 16th Internat. Joint Conf. on Artificial Intelligence (IJCAI-99), Morgan Kaufmann, 1999, pp. 290–295.
32. Matsui, M.: The first experimental cryptanalysis of the Data Encryption Standard, in Proc. of Advances in Cryptography (CRYPTO-94), Lecture Notes in Comput. Sci. 839, Springer-Verlag, 1994, pp.
33. Matsui, M.: Linear cryptanalysis method for DES cipher, in Advances in Cryptology-Eurocrypt 93, Lecture Notes in Comput. Sci. 765, Springer-Verlag, 1994, pp. 368–397.
34. Mitchell, J., Mitchell, M. and Stern, U.: Automated analysis of cryptographic protocols using Murphi, in Proc. of the 16th IEEE Symposium on Security and Privacy, IEEE Computer Society Press,
1997, pp. 141–151.
35. Organization for Economic Co-operation and Development OECD emerging market economy forum (EMEF): Report of the ministerial workshop on cryptography policy, OLIS SG/EMEF/ICCP(98)1, Organization
for Economic Co-operation and Development, Paris, Feb. 1998.
36. National Institute of Standards and Technology. Data encryption standard. Federal Information Processing Standards Publications FIPS PUB 46–2, National (U.S.) Bureau of Standards, Dec. 1997.
Supersedes FIPS PUB 46–1 of Jan. 1988.
37. National Institute of Standards and Technology. Request for comments on candidate algorithms for the advanced encryption standard (AES), (U.S.) Federal Register 63(177), September 1998.
38. Committee on Payment, Settlement Systems, and the Group of Computer Experts of the central banks of the Group of Ten countries, Security of Electronic Money, Banks for International Settlements,
Basle, August 1996.
39. Paulson, L.: The inductive approach to verifying cryptographic protocols, J. Comput. Security (1998).
40. Rivest, R.: The RC5 encryption algorithm, in Proc. of the Fast Software Encryption Workshop (FSE-95), Lecture Notes in Comput. Sci. 1008, Springer-Verlag, 1995, pp. 86–96.
41. Rudell, R.: Espresso 1OCTTOOLS, January 1988.
42. Rudell, R. and Sangiovanni-Vincentelli, A.: Multiple valued minimization for PLA optimization, IEEE Trans. Comput. Aided Design. 6(5) (1987), 727–750.
43. Ryan, P. and Schneider, S.: An attack on a recursive authentication protocol: A cautionary tale, Inform. Process. Lett. 65(15) (1998), 7–16.
44. Schaefer, T.: The complexity of satisfiability problems, in Proc. of the 10th ACM Symposium on Theory of Computing (STOC-78), ACM Press and Addison Wesley, 1978, pp. 216–226.
45. Schneier, B.: Applied Cryptography: Protocols, Algorithms, and Source Code in C, Wiley, 1994.
46. Selman, B. and Kautz, H.: Knowledge compilation and theory approximation, J. ACM 43(2) (1996), 193–224.
47. Selman, B., Kautz, H. and McAllester, D.: Ten challenges in propositional reasoning and search, in Proc. of the 15th Internat. Joint Conf. on Artificial Intelligence (IJCAI-97), Morgan Kaufmann,
Los Altos, 1997.
48. Selman, B., Mitchell, D. and Levesque, H.: Generating hard satisfiability problems, Artif. Intell. 81(1–2) (1996), 17–29.
49. Shannon, C.: Communication theory of secrecy systems, Bell System Technical J. 28 (1949), 656–715.
50. Suttner, C. and Sutcliffe, G.: The CADE-14 ATP system competition, J. Automated Reasoning 21(1) (1998), 99–134.
51. Zhang, H.: SATO: An efficient propositional prover, in Proc. of the 14th Internat. Conf. on Automated Deduction (CADE-97), Lecture Notes in Comput. Sci., 1997.
52. Zhang, H.: Personal communication, Nov. 1998.
53. Zhang, H. and Stickel, M.: An efficient algorithm for unit-propagation, in Proc. of the 4th Internat. Symposium on AI and Mathematics, 1996.
Kluwer Academic Publishers
Keywords: cipher verification, Data Encryption Standard, logical cryptanalysis, propositional satisfiability, quantified boolean formulae, SAT benchmarks
e^x integration problem
February 14th 2007, 10:11 AM #1
Junior Member
Feb 2007
I'm a bit rusty on my e's so:
The integral of e^x is e^x +C
So what would the integral of (e^(-x^3))^2 be? -e^(-2x^3) + C?
Last edited by Nerd; February 14th 2007 at 10:24 AM. Reason: Sorry, mistyped problem
Ugh (I'm a horrible proofreader). The actual problem I was having trouble with is:
integral of (e^(-x^2))^2
CaptainBlank said that it is not an elementary function.
Look at The Integrator (Integrals from Mathematica).
The probability function is not elementary.
Ahhh. Okay....
So...what does that mean again?
If it's not elementary, then it can't be expressed explicitly using standard mathematical operations (powers, logarithms, trig functions, etc.).
The simple integral of 2x^2 dx is obviously elementary. The answer uses numbers and powers.
The more complex and nasty integral of sqrt{1+x^2} dx is elementary as well. It involves some inverse trig, which makes it not fun for most to do, but it can still be expressed explicitly.
The integrals of e^(x^2), e^(-x^2), e^[(x^2)^2], and many others like these are NOT elementary because we cannot explicitly state them in our normal mathematical language, so to speak.
This is not a rigorous definition of elementary at all, but I hope it helps you understand the concept more.
To check this yourself, type some of the integrals I gave you into the site ThePerfectHacker gave you a link to. The site won't be able to do some of the problems, and on others it will give you
an answer with something like erf(x). All I'll say about erf(x) is that it's NOT elementary.
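Although such antiderivatives are not elementary, their definite integrals are perfectly computable numerically. A quick check in plain Python (a sketch, standard library only) recovers the classical value of the Gaussian integral, sqrt(pi), and erf itself:

```python
import math

def integrate(f, a, b, n=100_000):
    """Composite trapezoidal rule on [a, b] with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# e^(-x^2) decays so fast that [-10, 10] already captures essentially
# the whole real line: e^(-100) is negligible.
gauss = integrate(lambda x: math.exp(-x * x), -10.0, 10.0)
print(gauss, math.sqrt(math.pi))  # both approximately 1.7724538509

# erf(1) is, by definition, (2/sqrt(pi)) * integral of e^(-t^2) on [0, 1]:
approx_erf1 = 2 / math.sqrt(math.pi) * integrate(lambda t: math.exp(-t * t), 0.0, 1.0)
print(abs(approx_erf1 - math.erf(1.0)) < 1e-9)  # True
```

So "not elementary" means there is no finite closed formula for the antiderivative, not that the values are somehow uncomputable; erf is simply the name given to the (normalized) antiderivative.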
I might observe that I have had some trouble with "the integrator" recently
not integrating what I know is an elementary integral. But then Mathematica
is not the only CAS that has had problems with that integral.
No it is not. I asked several times for a better definition but nobody gave me one. It is analogous to "solution by radicals", which has the simple meaning: expressible as a finite
combination of radicals and standard arithmetical operations. See, it is similar-sounding to "elementary". There happens to be a nice way of stating what solution by radicals means, but nobody
seems to have written down, in an equally nice way, what solution by elementary functions means.
Families of quadratic Hamiltonians
Hi. What type of 2n-dimensional real symmetric matrices can be diagonalized with symplectic transformations (meaning $M \to SMS^T$, where $S^T$ denotes the transpose and $S$ is an element of the
2n-dimensional real symplectic group)? Usually the normal forms in the literature are given as representatives of orthogonal group orbits, but I need to know the symplectic version. Thanks for any
help, recommendations of literature, etc. Zoltan
linear-algebra matrices
A $2n\times 2n$ dimensional Hermitian matrix that can be diagonalized by a symplectic transformation can be viewed as an $n\times n$ matrix whose elements are $2\times 2$ blocks of the quaternion real form
$$\begin{pmatrix} \bar{z} & -\bar{w} \\ w & z \end{pmatrix},$$
so if you choose real $z$ and $w$ you have constructed a real symmetric matrix $M$ that can be diagonalized by a symplectic $S$.
@Federico: this is the general form for matrices that commute, $MT=TM$, with
$$T = 1_{N} \otimes \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} K$$
($K$ is the operator of complex conjugation); alternatively, one can take matrices that anticommute, $MT=-TM$; then the $2\times 2$ blocks have the form
$$\begin{pmatrix} \bar{z} & \bar{w} \\ w & -z \end{pmatrix}$$
and again, for a real $M$ one would choose real $w,z$. These two choices exhaust the possibilities.
In applications to physical systems, the matrix $M$ is a Hamiltonian and $T$ is the operator of time reversal. Then only commuting matrices, $MT=TM$, are permitted.
For a discussion in the physics context, see Section 1.4.2 of Forrester's book, online here:
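For real entries the conjugation $K$ acts trivially, so the commutation condition $MT=TM$ on a single block reduces to $MJ=JM$ with $J = \begin{pmatrix}0&1\\-1&0\end{pmatrix}$. A small pure-Python check (our own sanity test, not part of the answer) verifies this for the real quaternion block:

```python
def matmul(A, B):
    """2x2 matrix product, plain lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, 1], [-1, 0]]

def quaternion_block(z, w):
    # Real specialisation of the block ((z, -w), (w, z)) from the answer.
    return [[z, -w], [w, z]]

M = quaternion_block(3.0, 5.0)
print(matmul(M, J) == matmul(J, M))  # True: the block commutes with J
```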
Are the ones you construct all of them? – Federico Poloni Sep 30 '11 at 7:01
half of them, I added the other half. – Carlo Beenakker Sep 30 '11 at 10:20
I don't have a prompt answer (though I would guess "all those that are normal with respect to the scalar product induced by $J$"), but I suggest you take a look at Indefinite Linear Algebra and Applications, by Gohberg and Lancaster.
Linguistic Applications of Linear Logic
Social Sciences and Humanities Research Council of Canada
Standard Research Grant, 2006-2009
Grant #410-2006-1650
Principal Investigator: Ash Asudeh, Carleton University
Co-Investigator: Ida Toivonen, Carleton University
Collaborator: Christopher Potts, Stanford University
This interdisciplinary research project investigates potential contributions to linguistics of linear logic, an important logic in computer science and proof theory (the study of proofs as formal
mathematical objects). We will consider three key features of linear logic: 1) its notion of premises and conclusions in proofs as resources whose use is tightly controlled; 2) its rich set of
logical connectives, which allow varying perspectives on resource usage; 3) its utility in stating generalizations about and constraints on proofs. The research project builds on existing work that
investigates connections between logic and linguistics, but an important feature of this project is that we will first and foremost explore the linguistic consequences of logical properties — by
investigating empirical phenomena and proposals from linguistic theory (i.e., claims of importance to linguists) — rather than using language as a domain for logical investigations. We do, however,
expect our linguistic investigations to also be of interest to logicians — and researchers in connected fields, such as computer science, mathematics, and philosophy — since the application of logics
to different domains invariably reveals new logical properties and unsolved research problems.
The three properties of linear logic hold tremendous potential for linguistic theory as well as psycholinguistic models of language processing. There are two fundamental insights behind the proposed
research. First, the combinatorial elements of language (e.g., words, meanings, phonemes, morphemes, etc.) can be profitably construed as resources. This formally captures a common intuition behind a
variety of linguistic principles: The usage of combinatorial elements in linguistic structures is sensitive to their occurrences. For example, it is not possible to ignore the meaning contribution of
the word Sandy in the sentence Kim saw Sandy and to use the meaning of the word Kim twice to derive the meaning see(Kim, Kim). This seems like a trivial statement, yet a rather large variety of
stipulations have been made in linguistic theory to account for the effect. Furthermore, the large set of linear logic connectives permits careful consideration of exactly which kinds of resource
accounting are instantiated in natural language — in other words, which linear logic connectives are the ones that actually model linguistic processes?
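The resource intuition can be made concrete with a toy checker (our own hypothetical illustration, not part of the project's formal machinery): a derivation may consume each lexical meaning resource exactly once, so deriving see(Kim, Kim) from the premises {kim, sandy, see} is rejected while see(Kim, Sandy) is accepted.

```python
from collections import Counter

def resource_ok(premises, uses):
    """Linear-logic-style accounting: every premise resource must be
    consumed exactly once, with no duplication and no discarding."""
    return Counter(premises) == Counter(uses)

premises = ["kim", "sandy", "see"]

# see(Kim, Sandy): uses each of the three resources exactly once.
print(resource_ok(premises, ["see", "kim", "sandy"]))  # True

# see(Kim, Kim): duplicates 'kim' and ignores 'sandy'.
print(resource_ok(premises, ["see", "kim", "kim"]))    # False
```

In classical logic both derivations would be admissible (premises can be reused or dropped); the multiset comparison above is the simplest possible model of what linear logic's resource discipline rules out.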
The second insight is that linear logic proofs are themselves linguistically significant objects. This has a number of immediate implications. First, we can think of proofs as a representation of the
syntax–semantics interface itself. This is significant because a large portion of work in theoretical syntax and semantics over the last thirty-odd years has concentrated on investigating properties
of this interface, but the interface itself has arguably had no formal representation. Second, proofs allow us to investigate properties of semantic composition while abstracting away from meaning.
This approach allows us to consider semantic composition as a syntactic system that computes meanings, while setting aside potentially misleading denotational or truth conditional aspects of meaning.
This is a divide-and-conquer strategy: Certain aspects of semantics will invariably have a better proof-theoretic explanation, while others will have a better denotational/model-theoretic
explanation. Third, this aspect of the project will also benefit computational linguistics projects — not only in academia, but also in government and industry — because computing with logics is
well-understood (although not without unsolved problems) and the reduction of difficult semantic problems to computationally better-understood syntactic problems (via proof-theory) will be a welcome
result in this field. More generally, investigation of linguistic properties of proofs points to a new avenue of research on linguistic interfaces (i.e., connections between the modules of the
language faculty). | {"url":"http://users.ox.ac.uk/~cpgl0036/research/projects-sshrc.html","timestamp":"2014-04-20T20:58:31Z","content_type":null,"content_length":"6512","record_id":"<urn:uuid:0dc6f603-3012-4dc6-a038-eaa0b4b028d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Data Matrix Code Location Based on Finder Pattern Detection and Bar Code Border Fitting
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 515296, 13 pages
Research Article
^1College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
^2College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, China
Received 25 November 2011; Accepted 7 January 2012
Academic Editor: Bin Fang
Copyright © 2012 Qiang Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The 2-D bar code possesses a large data capacity, strong error-correction ability, and high security, which has led to 2-D bar code recognition technology being widely used and rapidly developed.
This paper presents a novel algorithm for locating data matrix codes based on finder pattern detection and bar code border fitting. The proposed method mainly involves three stages. It first extracts
candidate regions that may contain a data matrix code by morphological processing, and then locates the data matrix code roughly by detecting the "L" finder pattern and the dashed border in the candidate
regions. Finally, the lines fitted to the border points are used as the borders of the data matrix code. A number of data matrix code images with complex backgrounds are selected for evaluation.
Experimental results show that the proposed algorithm exhibits good performance under complex backgrounds and other undesirable conditions.
1. Introduction
A 2-D bar code consists of white and black geometric modules alternately arranged in the vertical and horizontal directions according to certain rules (see Figure 1), and it is a symbol
with a large capacity for storing information. As the smallest 2-D bar code in the world, the data matrix code is widely applied to electronic product components. 2-D bar code recognition
technology shows great commercial value, yet at present most COTS (commercial off-the-shelf) recognition algorithms are proprietary and protected by patents, so 2-D bar code recognition
technology is in great demand for research.
How to locate a 2-D bar code quickly and precisely in an image with a complex background, poor illumination, or other undesirable conditions is crucial to the recognition process. For data matrix code
location, many kinds of locating algorithms have been proposed. Donghong et al. [1] proposed an algorithm based on the Radon transform, which locates the data matrix code mainly by detecting the "L" finder pattern and
the dashed border. This algorithm has high precision and works well for data matrix codes of high density, but it is very time consuming and not suitable for real-time
applications. Chenguang et al. [2] proposed a locating algorithm based on the Hough transform. This algorithm is very time and space consuming, even though it can reduce the cost with a second
Hough transformation; what's worse, it has low precision on complex backgrounds. Wenting and Zhi [3] discussed a method for locating data matrix codes based on a convex-hull algorithm, which
determines the 3 vertexes of the "L" finder pattern from the convex hull of the edge points of the bar code. This algorithm is simple and fast but requires that the background be clean and the bar
code unstained and complete. Other locating algorithms [4, 5] are only appropriate for specific situations, such as simple backgrounds, good illumination conditions, and
low density. In reality, bar code images often come with complex backgrounds, and furthermore the images may be stained, incomplete, or printed at high density. Under these
undesirable conditions, most of the algorithms mentioned above either do not work effectively or demand high processing power and large storage space, which cannot satisfy most real-time
applications. The location problem of 2-D bar codes involves nonlinear systems [6, 7].
In this paper, we propose a data matrix code location algorithm based on finder pattern detection and bar code border fitting, which extensive experiments show to be effective and fast. In
this algorithm, the finder pattern is detected mainly by line segment detection and combination. Some work has been done on finding the "L" finder pattern by segment detection, such as references [
8–12]. For line segment detection, Grompone von Gioi et al. [13] proposed a linear-time algorithm called the Line Segment Detector (LSD), which requires no parameter tuning and gives accurate results.
The LSD algorithm improves on the line segment finder proposed by Burns et al. [14] and combines it with a validation criterion inspired by Desolneux et al. [15, 16]. In this paper, the LSD algorithm
is utilized to detect the "L" finder pattern; an introduction to LSD is given in the related work section. For border fitting, the most important step is straight-line fitting. An effective
straight-line fitting solution was proposed by Fischler and Bolles, called the RANSAC [17] algorithm, which will be used to fit the bar code borders.
The remainder of this paper is organized as follows. Section 2 introduces the related algorithm for line segment detection. Section 3 gives details of the proposed data
matrix code location algorithm. Section 4 comments on the experimental results, and Section 5 concludes the paper.
2. Related Work
LSD is a linear-time line segment detector, which draws on and improves the idea of Burns et al. (see [14]) of defining a line segment as a region determined only by gradient information, and
combines it with a validation criterion inspired by Desolneux et al. (see [15, 16]). The algorithm is implemented in 4 steps.
Step 1 (finding the line-support regions). This method defines a line segment via a region called a line-support region, which is a connected cluster of pixels sharing roughly the same
gradient orientation angle and whose gradient magnitude is greater than a threshold. To obtain the line-support regions, a region-growing algorithm is applied. Firstly, a pseudo-ordering is done:
the pixels whose gradient magnitude is larger than a threshold $\rho$ are classified into a finite number of bins according to their gradient magnitude, and pixels from higher bins are visited first
and pixels in lower bins later. Secondly, each region starts with one pixel, and the line-support region angle $\theta_{\text{region}}$ is initialized with that pixel's gradient angle; if an adjacent
pixel $p$ satisfies the condition of inequality (2.1), it is added to the line-support region and the region angle is updated as in formula (2.2):
$$\left|\operatorname{angle}(p) - \theta_{\text{region}}\right| < \tau, \tag{2.1}$$
$$\theta_{\text{region}} = \arctan\left(\frac{\sum_{j} \sin(\operatorname{angle}(j))}{\sum_{j} \cos(\operatorname{angle}(j))}\right), \tag{2.2}$$
where the sums run over the pixels $j$ already in the region. The steps are repeated until no pixel can be added to the line-support region. Here $p$ is the adjacent pixel and $\tau$ is the threshold
on the difference between the adjacent pixel's angle and the line-support region angle.
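Step 1 can be sketched as follows (a simplified, illustrative implementation: a 4-connected flood fill over a precomputed gradient-angle map; the real LSD additionally orders the seeds by gradient magnitude, and the parameter values here are illustrative):

```python
import math

def grow_region(angle, magnitude, seed, tau=math.radians(22.5), rho=2.0):
    """Grow a line-support region from `seed`.

    angle, magnitude: dicts mapping pixel (x, y) -> gradient angle / magnitude.
    A neighbour joins the region when its magnitude exceeds rho and its angle
    is within tau of the running region angle; the region angle is the angle
    of the vector sum of the member gradients (formula (2.2)).
    """
    region = {seed}
    sx, sy = math.cos(angle[seed]), math.sin(angle[seed])
    frontier = [seed]
    while frontier:
        x, y = frontier.pop()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in region or nb not in angle or magnitude[nb] <= rho:
                continue
            theta = math.atan2(sy, sx)
            diff = abs(angle[nb] - theta)
            diff = min(diff, 2 * math.pi - diff)   # wrap-around angle distance
            if diff < tau:
                region.add(nb)
                sx += math.cos(angle[nb])
                sy += math.sin(angle[nb])
                frontier.append(nb)
    return region

# Five collinear pixels sharing gradient angle 0, plus one perpendicular outlier:
angles = {(x, 0): 0.0 for x in range(5)}
angles[(2, 1)] = math.pi / 2
mags = {p: 10.0 for p in angles}
region = grow_region(angles, mags, (0, 0))
print(sorted(region))  # the 5 collinear pixels; (2, 1) is rejected
```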
Step 2 (finding the rectangular approximation of each line-support region). The line segment associated with a line-support region is defined as a rectangle that is the rectangular
approximation of the region. Parameters such as center, orientation angle, length, and width can be used to describe a line segment; see Figure 2. In the LSD algorithm, instead of using
the mean level-line angle as the line angle, which may lead to erroneous angle estimation, the orientation of the first inertia axis is used as the line segment orientation. The center of mass of
the line-support region is selected as the center, with the gradient magnitude used as each pixel's mass. The length and width are chosen so as to cover the line-support region.
Step 3 (validation of each potential line segment). After obtaining the rectangular approximations of the line-support regions, it is necessary to validate whether each approximation is a line
segment or not, according to the number of aligned points and the total number of pixels in the approximation. An aligned point is defined as a pixel whose gradient angle equals the line segment
angle up to a tolerance $\tau$.
Suppose that image background has the Gaussian white noise model , more formally, an image under the background model is a random image (defined on the grid ) such that:(1)for all is uniformly
distributed over .(2)The family is composed of independent random variables.Under the model , image is isotropic flat zones, while straight edges are exactly the opposite: highly anisotropic zones.
Thus, in practice, a set of pixels will not be accepted as a line segment if it could have been formed by an isotropic process. This algorithm defines the Number of False Alarms of a rectangle in an
image , as In the formula (2.3), is a random image under model , is the total number of potential rectangle (line segment) in image , , and represent the number of aligned points in rectangle in
image and image , respectively, is the probability that greater than or equal to .
The smaller the NFA value, the more significant the rectangle. At one-pixel accuracy there are (NM)² potential line segments in an N × M image, since the start point and the end point each have NM possible positions; in practice a line segment's width can be at most √(NM) pixels, so the number of tests is N_T = (NM)^(5/2). Under the model H0 and a tolerance τ for the difference between an aligned point's gradient angle and the line-segment angle, p = τ/π is the probability that a given point is an aligned point. Each pixel's gradient is independent within the rectangle, so the number of aligned points obeys a binomial distribution; hence the probability P[k(r, X) ≥ k(r, i)] equals the binomial tail B(n(r), k(r, i), p), where n(r) is the total number of pixels in the rectangle, and the NFA is then easy to calculate. The NFA is the key to validating whether the rectangle is a line segment: if the NFA is less than a threshold ε, the rectangle is accepted as an ε-meaningful line segment; otherwise the rectangle is rejected. The method is almost independent of ε (the dependence is actually logarithmic); thus the algorithm fixes the value of ε once and for all, as advised by Fuchao (see [17]).
Step 4 (improve the approximations of line-support regions and validate them). In Step 3 the best rectangular approximation of a line-support region is the one that gives the smallest NFA value; in order to get a better NFA, the LSD algorithm tries to adjust the width of the approximation and the probability p. Five dyadic precision steps are considered before adjusting the rectangle width, and again five dyadic precision steps afterward. Steps 2 and 3 are repeated, and the rectangular approximation with the best NFA value is kept as the final line segment.
3. The Proposed Location Algorithm
Observing the data matrix code, the "L" finder pattern and the dashed border make it distinct from other 2-D bar codes or objects, so the first idea that comes to mind is to locate the data matrix code by detecting the "L" finder pattern and the dashed border. However, this procedure may lead to an imprecise location when the bar code image suffers perspective distortion or when the image gets stained or obscured (situations that happen frequently). Therefore the bar code needs to be located more precisely. In this paper we use edge points of the data matrix code to fit 4 straight lines as the 4 borders of the bar code, and we finally achieve accurate positioning from their intersections (4 vertexes).
The location steps discussed above are the main steps of the location algorithm; they consume most of the processing time, and the computation time is closely related to the size of the image. Normally the bar code region is only a small part of the image, so it is worthwhile to extract the bar code region first to facilitate the follow-up location. This not only reduces the cost of computation but also enhances the antijamming capability of the algorithm, since the extraction of the bar code removes most of the background. After the extraction of candidate regions (which may include a data matrix code), the location algorithm is applied to them to complete the whole location. The whole data matrix code location procedure is shown in Figure 3.
3.1. Extraction of Data Matrix Code Candidate Region
In general, 2-D barcodes consist of staggered white and black modules and therefore have a vast number of closely spaced edges, while other objects and the background in the image have only a few sparse edges. This characteristic is utilized to extract candidate regions in 3 steps:
Step 1 (edge detection: remove most of the background by Canny edge detection). The proposed algorithm chooses the Canny operator for edge detection because it obtains more complete edges by suppressing non-maximum values and connecting inconsecutive edges with mathematical morphology; the Canny operator gives a good tradeoff between noise suppression and edge detection. In standard Canny edge detection the first step is Gaussian filtering, used to remove noise, but in this application we skip this step to save time, with little impact on the subsequent processing.
Step 2 (morphological processing: highlight the bar code region by a dilation and an opening). The dilation fills the bar code but expands the bar code boundary at the same time, which may connect the bar code to other objects (such as text), so an opening is used to separate these small adhesions. The result is sensitive to the shape and size of the structuring element. In practice a 2-D barcode module has a size of 3 to 8 pixels and a rectangular shape, so rectangular structuring elements are defined accordingly. (3.1)
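As an illustrative sketch of this morphological step (my own minimal implementation, not the paper's; a real system would use an optimized image-processing library, and the actual structuring-element sizes follow (3.1), which is not reproduced in the text), binary dilation and opening over a 0/1 edge map can be written as:

```python
def dilate(img, se_h, se_w):
    """Binary dilation of a 0/1 image (list of rows) with a centered
    se_h x se_w rectangular structuring element (odd sizes assumed)."""
    H, W = len(img), len(img[0])
    rh, rw = se_h // 2, se_w // 2
    return [[int(any(img[j][i]
                     for j in range(max(0, y - rh), min(H, y + rh + 1))
                     for i in range(max(0, x - rw), min(W, x + rw + 1))))
             for x in range(W)] for y in range(H)]

def erode(img, se_h, se_w):
    """Binary erosion; pixels outside the image are treated as don't-care."""
    H, W = len(img), len(img[0])
    rh, rw = se_h // 2, se_w // 2
    return [[int(all(img[j][i]
                     for j in range(max(0, y - rh), min(H, y + rh + 1))
                     for i in range(max(0, x - rw), min(W, x + rw + 1))))
             for x in range(W)] for y in range(H)]

def opening(img, se_h, se_w):
    # erosion followed by dilation: removes small adhesions and specks
    return dilate(erode(img, se_h, se_w), se_h, se_w)
```

Dilation merges the closely spaced barcode edges into a solid blob, and the opening then detaches thin connections to nearby objects such as text.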
Step 3 (contour analysis: filter candidate regions by contour perimeter and area). Mark every connected region, extract its contour, and then filter the connected regions by their perimeter and area. To make sure that the whole bar code is included in the candidate region, the bounding box of each candidate is calculated and expanded. The experimental results are shown in Figure 4.
3.2. Preliminary Location Based on Finder Pattern
Based on the data matrix code feature analysis above, we can quickly determine whether or not a candidate region contains a data matrix code, and obtain its approximate location, by detecting the "L" finder pattern. Detecting the dashed border then helps determine the positions of the top and right boundaries of the data matrix code. In this paper the "L" finder pattern and the dashed border are detected to preliminarily locate the data matrix code.
3.2.1. “L” Finder Pattern Detection
Detecting the "L" directly is relatively complex, but an "L" finder pattern can be regarded as two line segments, and there are many mature line segment detection algorithms. The LSD algorithm (introduced in Section 2) proposed by Grompone Von Gioi et al. [13] is used to detect line segments. Considering that Step 4 of LSD is not very meaningful for large, well-contrasted line segments (the "L" finder pattern of a data matrix code is always well contrasted) and is time-consuming because it repeats Steps 2 and 3, our algorithm skips this step to save time. The detection result is shown in Figure 5.
An "L" finder pattern can be detected by combining appropriate line segments. Assume that a set of line segments has been obtained in a candidate region, each described by its two endpoints. Normally, if two line segments belong to the same "L" finder pattern, they are roughly perpendicular and their nearest endpoints almost coincide, but this assumption no longer holds exactly when the image has perspective deformation; extensive experiments have demonstrated that the angle between the two segments varies over a range around a right angle. A constraint on the segments' length ratio is also added: the length of the long line segment cannot exceed 5 times the length of the short one. The combination of line segments can therefore be implemented as in Algorithm 1.
Inevitably, some pseudo-"L" patterns will be detected, so an "L" finder pattern is abandoned if the postprocessing cannot locate a data matrix code from it. The approximate locations of 3 vertexes (the two endpoints and the intersection of the "L") are obtained from the "L" finder pattern and roughly locate the data matrix code, as in Figure 5(c).
3.2.2. Dashed Border Detection
Composed of alternating black and white modules, the dashed border has many edges. Moreover, the dashed border is roughly parallel to the "L" finder pattern. A dashed border is therefore detected by scanning edge points in the direction parallel to the "L", in two steps.
Step 1 (determine a detecting region). A quadruple (x, y, width, height) is used to define the detecting region, where x and y are the coordinates of the start point of the scanning and width and height give the scanning range, arranged parallel to the "L" (see Figure 6). The 3 vertexes of the "L" and the lengths of its two segments are known from the previous detection; the detection regions are derived from them, with an adjustable margin parameter.
Step 2. Progressively scan rows in the direction parallel to the horizontal line segment of the "L" in the detecting region and count the number of edge points in each row. The row with the most edge points is kept as the horizontal dashed border; the row with the fewest edge points is treated as a static (quiet) region. The vertical dashed border is detected in the same way.
3.3. Border Fitting
The 3 vertexes obtained from the "L" finder pattern can only roughly locate the data matrix code; indeed, the location can even be wrong when the "L" is stained or partly covered, and the detection of the dashed borders is not precise. So it is necessary to gather more information for further location. In this paper the 4 borders are fitted to finally locate the data matrix code, in 3 steps.
Step 1 (scan border points). A border point is defined as the first edge point encountered from outside to inside in the direction perpendicular to a bar code border. The scanning range is the neighborhood of the bar code borders within an offset (such as 5 pixels). An example is shown in Figure 7; 4 sets of border points are obtained, one per border.
Step 2 (fit borders). In mathematics a straight line is usually described by the equation y = kx + b [18], but this fails when the line is perpendicular to the x-axis, where the slope tends to infinity. We therefore use the Hesse normal form to describe a straight line: ax + by + c = 0. This is in fact an over-parameterized expression: 2 points determine a straight line, while the equation has 3 parameters, so a constraint is added: a² + b² = 1.
In order to fit a straight line to a set of points, the sum of squared distances from each point to the line, Σᵢ (axᵢ + byᵢ + c)², should be minimized; the straight-line fitting is thus converted into a distance-minimization problem. Without the constraint, the trivial choice a = b = c = 0 would give zero error, so the constraint a² + b² = 1 is incorporated with a Lagrange multiplier. Solving the resulting stationarity conditions reduces the problem to a 2 × 2 eigenvalue problem on the scatter matrix of the points: the optimal (a, b) is the eigenvector belonging to the smaller eigenvalue, and c then follows from the centroid of the points.
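For 2 × 2 scatter matrices the constrained fit has a closed form. The following is a minimal sketch (my own naming and formulation, not the paper's); note that it handles vertical lines, which the slope-intercept form cannot:

```python
from math import hypot, sqrt

def fit_line_hesse(points):
    """Fit a*x + b*y + c = 0 with a^2 + b^2 = 1 by minimizing the sum of
    squared point-line distances (total least squares). The minimizer is
    the eigenvector of the 2x2 scatter matrix for its smaller eigenvalue."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # scatter-matrix entries about the centroid
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 - sqrt(max(tr * tr / 4 - det, 0.0))
    # eigenvector from (sxx - lam) * a + sxy * b = 0
    a, b = sxy, lam - sxx
    if a == 0 and b == 0:  # degenerate: scatter is axis-aligned
        a, b = (1.0, 0.0) if sxx < syy else (0.0, 1.0)
    norm = hypot(a, b)
    a, b = a / norm, b / norm
    c = -(a * mx + b * my)
    return a, b, c

# points on the vertical line x = 2 (slope-intercept form would fail here)
a, b, c = fit_line_hesse([(2.0, 0.0), (2.0, 1.0), (2.0, 5.0)])
```

The returned triple satisfies a² + b² = 1, so |ax + by + c| is directly the point-to-line distance.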
Once the parameter values are determined, a straight line is fitted. Considering the points in each set, some outliers will pull the line fitted by minimizing the point-to-line distances away from the true border. An effective solution is the mature and classical RANSAC algorithm: it randomly selects a minimal set of points (here, 2 points) to fit a straight line and then checks the proportion of outliers. This procedure is repeated until the proportion of outliers drops below a threshold (such as 1%); the straight line with the minimum proportion of outliers is kept, and finally those 2 points together with their inliers are used to fit the final straight line. This robust RANSAC procedure is used in this paper to fit the 4 borders from the 4 sets of border points obtained in Step 1.
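A minimal RANSAC sketch along these lines follows; the iteration count, inlier tolerance, and fixed seed are illustrative assumptions, not values from the paper:

```python
import random
from math import hypot

def ransac_line(points, n_iter=200, inlier_tol=1.0, seed=0):
    """Repeatedly pick 2 random points, form the line through them, and keep
    the hypothesis with the most inliers (points within inlier_tol)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y1 - y2, x2 - x1          # normal of the line through the pair
        norm = hypot(a, b)
        if norm == 0:
            continue
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [(x, y) for x, y in points if abs(a * x + b * y + c) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers  # refit a final least-squares line on these inliers

# 8 collinear border points plus 2 gross outliers
pts = [(float(x), 2.0 * x + 1.0) for x in range(8)] + [(3.0, 40.0), (5.0, -30.0)]
inliers = ransac_line(pts)
```

In the paper's pipeline the returned inlier set would then be refit with the constrained least-squares border fit described in Step 2.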
Step 3 (obtain the 4 vertexes of the data matrix code precisely). After Step 2, 4 straight lines have been fitted; their intersections are then calculated to obtain the data matrix code position. Figure 8 shows the final precise location.
4. Experimental Results
In order to verify the performance of the proposed algorithm, we conducted experiments on images taken under different conditions (such as complex background and perspective distortion) and give four representative results. All the experiments run on an ARM 11 hardware platform, and all test images have the same resolution. Figures 9(a)~9(d) are the four original test images. Figure 9(a) is a mobile phone battery image with a data matrix code; around the code there are texts and other objects close to the bar code, which strongly interfere with bar code location. Figure 9(b) is a USB data cable connector image with a data matrix code; the USB connector is small and the data matrix code even smaller, and there are also many texts very close to it. Figure 9(c) is an award ticket image with a data matrix code; the image is dim, with bad illumination. Figure 9(d) is an image of a printed data matrix code; some perspective distortion exists in the image because of the tilt angle when the photo was taken.
Figures 9(e)~9(h) are the corresponding location results. It can be seen that the proposed location method works effectively and precisely. Figure 9(e) shows that the proposed algorithm is robust to the interference brought by a complex background. Figure 9(f) demonstrates that the algorithm achieves high precision on a small data matrix code. Figures 9(g) and 9(h) reveal that the proposed algorithm performs well even under bad illumination or distortion.
Moreover, the proposed algorithm costs no more than 100 ms in these experiments, which fully satisfies the demands of real-time applications and makes it especially suitable for embedded devices.
5. Conclusions
A data matrix code location algorithm is proposed in this paper, which utilizes the salient features of the "L" finder pattern and the dashed border of the data matrix code. The algorithm provides 3 advantages over the algorithms mentioned above. (1) Robustness: it locates the data matrix code within candidate regions, excluding most of the interference from the background, and the two key algorithms, LSD and RANSAC, provide high robustness under complex background conditions. (2) High accuracy: a preliminary location by the finder pattern is followed by an accurate location by fitting the border lines. (3) Real-time suitability: the extraction of candidate regions greatly reduces the operating area of the location algorithm, which saves a lot of time, and instead of the time-consuming Hough transform for line segment detection, the linear-time LSD algorithm is used. The proposed method was evaluated on four images with complex background or distortion, and the experimental results show that it gives good performance.
This work was partially supported by the Natural Science Foundation of China under Grants nos. 60501026 and 60873168, the Foundation for Combination of Industry and Academy in Shenzhen under Grant no. SY200806270120A, the Science & Technology Planning Project of Shenzhen City (JC200903130300A), and the Opening Project of the Guangdong Province Key Laboratory of Computational Science of Sun Yat-Sen University (201106002).
1. H. Donghong, T. Hui, and C. Xinmeng, “Radon transformation applied in two dimensional barcode image recognition,” Journal of Wuhan University, vol. 5, pp. 584–588, 2005.
2. Z. Chenguang, Y. Na, and H. Rukun, “study of two dimensional barcode identification technology based on HOUGH transform,” Journal of Changchun Normal University, vol. 4, pp. 94–98, 2007.
3. C. Wenting and L. Zhi, “Two dimensional barcode localization algorithm based on convex,” Journal of Zhejiang University, vol. 46, pp. 669–672, 2008.
4. M. Li and W. Zhao, "Visiting power laws in cyber-physical networking systems," Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012.
5. M. Li, C. Cattani, and S. Y. Chen, "Viewing sea level by a one-dimensional random function with long memory," Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
6. M. Li, "Fractal time series—a tutorial review," Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010.
7. Z. Liao, S. Hu, D. Sun, and W. Chen, "Enclosed Laplacian operator of nonlinear anisotropic diffusion to preserve singularities and delete isolated points in image smoothing," Mathematical Problems in Engineering, vol. 2011, Article ID 749456, 15 pages, 2011.
8. X. You and Y. Y. Tang, "Wavelet-based approach to character skeleton," IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1220–1231, 2007.
9. W. Zhang, Q. M. J. Wu, G. Wang, X. You, and Y. Wang, "Image matching using enclosed region detector," Journal of Visual Communication and Image Representation, vol. 21, no. 4, pp. 271–282, 2010.
10. J. Huang, X. You, Y. Y. Tang, L. Du, and Y. Yuan, "A novel iris segmentation using radial-suppression edge detection," Signal Processing, vol. 89, no. 12, pp. 2630–2643, 2009.
11. P. L. Palmer, J. Kittler, and M. Petrou, "An optimizing line finder using a Hough transform algorithm," Computer Vision and Image Understanding, vol. 67, no. 1, pp. 1–23, 1997.
12. H. Kato, K. T. Tan, and D. Chai, "Development of a novel finder pattern for effective color 2D-barcode detection," in Proceedings of the International Symposium on Parallel and Distributed Processing with Applications (ISPA '08), pp. 1006–1013, December 2008.
13. R. Grompone Von Gioi, J. Jakubowicz, J. M. Morel, and G. Randall, "LSD: a fast line segment detector with a false detection control," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, Article ID 4731268, pp. 722–732, 2010.
14. J. B. Burns, A. R. Hanson, and E. M. Riseman, "Extracting straight lines," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 4, pp. 425–455, 1986.
15. A. Desolneux, L. Moisan, and J. M. Morel, "Meaningful alignments," International Journal of Computer Vision, vol. 40, no. 1, pp. 7–23, 2000.
16. A. Desolneux, L. Moisan, and J. M. Morel, "Computational gestalts and perception thresholds," Journal of Physiology Paris, vol. 97, no. 2-3, pp. 311–324, 2003.
17. W. Fuchao, Mathematical Method in Computer Vision, Science Press, Beijing, China, 2008.
18. C. Steger and M. Ulrich, Machine Vision Algorithms and Applications, Tsinghua University Press, 2008.
Differentiation (Show That Question)
May 4th 2008, 01:49 AM
Differentiation (Show That Question)
Q: Given that $y = \mathrm{arcsin}\left( \frac{x}{a} \right)$, where $a$ is a constant, show that $\frac{\mathrm{d}^2y}{\mathrm{d}x^2} - x \left( \frac{\mathrm{d}y}{\mathrm{d}x} \right) ^3 = 0$.
My method:
$y = \mathrm{arcsin}\left( \frac{x}{a} \right)$
$\sin y = \frac{x}{a}$
$\cos y \frac{\mathrm{d}y}{\mathrm{d}x} = \frac{1}{a}$
$- \sin y \left( \frac{\mathrm{d}y}{\mathrm{d}x} \right) ^2 + \cos y \left( \frac{\mathrm{d}^2y}{\mathrm{d}x^2} \right) = 0$
But that isn't what they require. What do I do? (Headbang) Thanks in advance for the help.
May 4th 2008, 02:18 AM
04/05/08: Today, I have made many threads for question. Thanks for all the help!
I've noticed that :D
Well, keep $y=\arcsin \frac xa$
$(\arcsin x)'=\frac{1}{\sqrt{1-x^2}}$
Hence, $\frac{dy}{dx}=\dots$
From here, you can get $\frac{d^2y}{dx^2}$ and $\left( \frac{\mathrm{d}y}{\mathrm{d}x} \right) ^3$ :)
May 4th 2008, 02:22 AM
in other words, they want you to find $\frac{dy}{dx}$ and then $\frac{d^2y}{dx^2}$, plug them into the left-hand side of the given equation, and show that you actually get 0
May 7th 2008, 11:37 AM
$y = \mathrm{arcsin}\left( \frac{x}{a} \right)$
$\therefore \frac{\mathrm{d}y}{\mathrm{d}x} = \frac{1}{\sqrt {a^2 - x^2}}$
But where would I go from here? Finding the second derivative may be difficult. :confused:
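One way to gain confidence before grinding through the algebra is a quick numerical check of the target identity (a sketch added here, not part of the original thread; the sample value of a is arbitrary):

```python
from math import sqrt

a = 2.0  # the constant in y = arcsin(x / a); any a with |x| < a works

def dydx(x):
    # closed-form first derivative: 1 / sqrt(a^2 - x^2)
    return 1.0 / sqrt(a * a - x * x)

def d2ydx2(x, h=1e-5):
    # second derivative via a central difference of dydx
    return (dydx(x + h) - dydx(x - h)) / (2.0 * h)

# residual of the claimed identity d2y/dx2 - x * (dy/dx)^3 = 0
residual = max(abs(d2ydx2(x) - x * dydx(x) ** 3) for x in (-1.5, -0.5, 0.3, 1.2))
```

The residual is at the level of finite-difference error, which is consistent with the identity holding exactly.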
May 7th 2008, 11:50 AM
taking the derivative from here is simple....
May 7th 2008, 11:54 AM
Chris L T521
There are two different ways of going through this. You can use the power rule or quotient rule.
Power Rule:
Quotient Rule:
Hope this helps out!!! :D
May 7th 2008, 01:01 PM
There are two different ways of going through this. You can use the power rule or quotient rule.
Power Rule:
Quotient Rule:
Hope this helps out!!! :D
I believe you meant to say the chain rule. and using the quotient rule for this question is long and pointless
This is slightly off topic, but seeing as Airs question has already been answered I doubt anyone will mind.
Just have a comment on your signature, you do realise that $i^i$ is multi-valued, I'll give an alternative proof of what you have written in your signature.
you can write $i$ as $e^{\frac{1+4n}{2} \pi i}$ for any integer n.
so $i^i$ becomes $(e^{\frac{1+4n}{2} \pi i})^i$
$\rightarrow e^{- \frac{1+4n}{2} \pi }$
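This multivaluedness is easy to observe numerically (a side note, assuming Python's complex power, which uses the principal branch n = 0):

```python
from math import exp, pi

principal = (1j) ** (1j)        # Python's complex pow: principal branch (n = 0)

def i_to_the_i(n):
    # the family of real values e^{-(1 + 4n) * pi / 2}, one per branch n
    return exp(-(1 + 4 * n) * pi / 2)

gap_ratio = i_to_the_i(0) / i_to_the_i(1)   # consecutive branches differ by e^{2*pi}
```

All branch values are real and positive, and each is e^{2π} times the next.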
May 7th 2008, 01:10 PM
Chris L T521
I believe you meant to say the chain rule. and using the quotient rule for this question is long and pointless
This is slightly off topic, but seeing as Airs question has already been answered I doubt anyone will mind.
Just have a comment on your signature, you do realise that $i^i$ is multi-valued, I'll give an alternative proof of what you have written in your signature.
you can write $i$ as $e^{\frac{1+4n}{2} \pi i}$ for any integer n.
so $i^i$ becomes $(e^{\frac{1+4n}{2} \pi i})^i$
$\rightarrow e^{- \frac{1+4n}{2} \pi }$
I agree that the quotient rule is long and pointless, but it may be good to know that it can be done various ways. However, I did mean Power Rule. When we differentiate it, we end up using the
chain rule anyway, for both the power rule and the quotient rule.
Thanks for the input on the signature. Your adjustments to it somewhat make sense.
rewriting a geometric sequence formula to solve for 'n'
The question is... Rewrite the equation $t_n = ar^{n-1}$ to solve for $n$.
$t_n = ar^{n - 1}$ $\Rightarrow r^{n - 1} = \frac {t_n}{a}$ $\Rightarrow ln \left( r^{n - 1} \right) = ln \left( \frac {t_n}{a} \right)$ $\Rightarrow (n - 1) ln(r) = ln \left( \frac {t_n}{a} \right)$
$\Rightarrow n = \frac {ln \left( \frac {t_n}{a} \right) }{ln(r)} + 1$
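A quick round-trip check of the rearranged formula (a sketch; the sample values are arbitrary):

```python
from math import log

a, r, n = 3.0, 2.0, 7
t_n = a * r ** (n - 1)                 # t_n = a r^(n-1) = 3 * 2^6 = 192

# rearranged formula: n = ln(t_n / a) / ln(r) + 1
n_recovered = log(t_n / a) / log(r) + 1
```

Recovering n = 7 confirms the algebraic rearrangement.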
Thank you so much for the help, now I know what I'm doing.:)
Only non-informative Bayesian prior distributions agree with the GUM Type A evaluations of input quantities
Author(s): Raghu N. Kacker;
Title: Only non-informative Bayesian prior distributions agree with the GUM Type A evaluations of input quantities
Published: June 21, 2011
Abstract: The Guide to the Expression of Uncertainty in Measurement (GUM) is self-consistent when Bayesian statistics is used for the Type A evaluations and the standard deviation of the posterior state-of-knowledge distribution is used as the Bayesian standard uncertainty. Bayesian statistics yields posterior state-of-knowledge probability distributions which have the same probabilistic interpretation as the Type B state-of-knowledge probability distributions. Thus the Type A and the Type B evaluations can be combined logically. This much is well known. We show that there are limitations on the kind of Bayesian statistics that can be used together with the GUM. The GUM recommends that the (central) measured value should be an unbiased estimate of the corresponding (true) quantity value. Also, the GUM uses the expected value of state-of-knowledge probability distributions as the measured value for both the Type A and the Type B evaluations. When Bayesian prior distributions are proper probability density functions (pdfs), the expected value of the Bayesian posterior state-of-knowledge distribution can never be an unbiased estimate. Thus a measured value can be unbiased only when non-informative prior distributions (which are not proper pdfs) are used. Therefore only non-informative improper prior distributions agree with the GUM. This note is relevant because the wide availability of computational software for doing Bayesian analysis numerically has stimulated great interest in the use of Bayesian statistics for the evaluation of uncertainty in measurement, and the Bayesian computational software requires the prior distributions to be proper pdfs.
Proceedings: Proceedings of Advanced Mathematical and Computational Tools in Metrology and Testing (AMCTM 2011)
Pages: 11 pp.
Location: Gothenburg, Sweden
Dates: June 19-22, 2011
Keywords: Bayesian statistics, Unbiased estimate, Uncertainty in measurement
Research Areas: Math
The number system of ancient Egypt was decimal in nature but did not make use of place value. As a result of this, they didn't have a symbol for zero. There were two kinds of writing used in Egypt,
hieroglyphics and demotic (hieratic). In the hieroglyphic system, the symbols used were:
│1 │vertical stroke │
│10 │heel bone │
│100 │coiled rope (snare) │
│1,000 │lotus flower │
│10,000 │bent finger │
│100,000 │burbot fish │
│1,000,000 │kneeling figure │
The number one thousand, three hundred forty two (1,342) would look like :
In many cases, hieroglyphs were stacked rather than written in a single straight line.
Rosetta Stone
The Ancient Egyptians used several methods of writing, including hieroglyphic, hieratic, and demotic scripts. The story behind how "modern" scholars were able to decipher these methods of
writing is the story of the Rosetta Stone (link courtesy of the British Museum where the Rosetta Stone is housed). Hieroglyphic script was used by Egyptians for important or religious documents,
while the demotic script was a simplified version of hieroglyphics and was the writing method of the common people. Demotic script evolved from hieratic, and was used during the "last period" of ancient
Egyptian, a 1000 year span from 500 BC to 500 AD. By 400 AD, demotic script was replaced almost entirely by the use of Greek writing.
Egyptian Fractions and the Rhind Papyrus
The Ancient Egyptians used unit fractions, e.g. 1/4, 1/7, 1/15. Unit fractions are those positive rational numbers (fractions) which have the number 1 as numerator. Their usual way of writing fractions was to use the word r, meaning part, with the denominator written below and, if need be, beside it as well.
Non-unit fractions like 2/5 or 7/8 did not exist, though there was one exception: 2/3.
Unit fractions were used because we can represent any fraction by adding unit fractions together, e.g. 4/7 would be written as 1/2 + 1/14. (See class notes for the explanation of how a unit fraction decomposition is obtained.)
To represent the sum of 1/3 and 1/5, for example, they would simply write 1/3 + 1/5, whereas we might represent this as the single fraction 8/15. The problem the Egyptians had was that although they had a notation for the unit fractions, i.e., fractions of the form 1/n, they did not have a compact notation for the general fraction m/n. Some might say their numeration system was faulty, but that would be overly critical, since they were the first (as far as we know) to have any way of giving names to fractions.
Question: How would an Egyptian scribe have written 3/8? 3/5?
Answer: One can write 3/8 = 1/4 + 1/8, and 3/5 = 1/2 + 1/10
Even though the Egyptian method of writing fractions continued to be used for a long time, there were many limitations to its use. In the work the Almagest, written by the Greek scientist Ptolemy in the second century AD, Ptolemy uses the ancient Babylonian (sexagesimal) method of writing fractions rather than the Egyptian method, because of the difficulties the Egyptian method often caused.
Much of what is known today about Egyptian fractions has been deduced from the Rhind papyrus, written by the scribe Ahmes around 1650 BC. This book consists mainly of 84 word problems of a diverse
nature, plus a few tables to aid the young scriblets (young scribes) that Ahmes taught the art of calculation. Generally speaking, Ahmes seemed to be happy to write the answer to a problem as a whole
number plus a sum of unit fractions, with no unit fraction appearing more than once in an answer.
Problem Establish the following algebraic identities
• (*) For any n except 0 and −1, 1/n = 1/(n+1) + 1/(n(n+1))
• (**) If n is odd, 2/n = 2/(n+1) + 2/(n(n+1)) = a sum of unit fractions, since 2/(n+1) and 2/(n(n+1)) can be reduced.
• (***) 2/n = 1/n + 1/(n+1) + 1/(n(n+1))
• (****) 1 = 1/2 + 1/3 + 1/6
Solution: (easy)
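The identities can also be confirmed with exact rational arithmetic (a verification sketch added here; not part of the original notes):

```python
from fractions import Fraction as F

# (*): 1/n = 1/(n+1) + 1/(n(n+1)), checked for a range of n
for n in range(2, 50):
    assert F(1, n) == F(1, n + 1) + F(1, n * (n + 1))

# (**): doubling (*); for odd n both right-hand terms reduce to unit fractions
odd_n = 7
lhs = F(2, odd_n)
rhs = F(2, odd_n + 1) + F(2, odd_n * (odd_n + 1))   # reduces to 1/4 + 1/28

# (****): 1 = 1/2 + 1/3 + 1/6
star4 = F(1, 2) + F(1, 3) + F(1, 6)
```

Because `Fraction` keeps everything exact, equality here is genuine algebraic equality, not floating-point approximation.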
With these identities, once can show that there is no unique unit fraction decomposition of a given fraction.
Tables in the Rhind Papyrus
The largest table in the Rhind Papyrus is the 2/n table, where Ahmes gives decompositions of these fractions into sums of unit fractions. Most of the entries in the table come from the second
identity (**) which is obtained by multiplying (*) by 2 on both sides . Thus
2/7 = 2/8 + 2/(7*8) = 1/4 + 1/28
Note that using (***) we obtain:
2/7 = 1/7 + 1/8 + 1/56
an expression which is "longer".
Fibonacci's theorem
The Egyptian system of writing fractions as sums of unit fractions continued in use even after much more efficient systems were developed. Fibonacci was aware of the system around 1200 AD and included in his book Liber Abaci a method for writing any fraction as a sum of unit fractions. His method is perhaps the most "natural":
Fibonacci's method: Take the fraction you wish to express in the Egyptian manner and subtract from it the largest unit fraction which is not larger than it. If the remainder is not itself a unit fraction, then repeat the process on the remainder. Continue until the remainder is a unit fraction.
For example, if the fraction is 4/5, the largest unit fraction not larger than 4/5 is 1/2 (the integer division 5 DIV 4, plus 1, gives the denominator of this unit fraction). Subtract 1/2 to obtain 3/10. The largest unit fraction not larger than 3/10 is 1/4. Subtract 1/4 to obtain 1/20.
Hence, Fibonacci's method leads to the decomposition
4/5 = 1/2 + 1/4 + 1/20
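The greedy method translates directly into code (a sketch; the ceiling is computed in exact integer arithmetic to avoid floating-point error):

```python
from fractions import Fraction

def egyptian(p, q):
    """Greedy (Fibonacci) decomposition of p/q (with 0 < p/q < 1) into
    distinct unit fractions; returns the list of denominators."""
    rest = Fraction(p, q)
    denoms = []
    while rest > 0:
        # smallest n with 1/n <= rest, i.e. n = ceil(q'/p') for rest = p'/q'
        n = -(-rest.denominator // rest.numerator)
        denoms.append(n)
        rest -= Fraction(1, n)
    return denoms

decomposition = egyptian(4, 5)   # the worked example: [2, 4, 20]
```

Note that the greedy method gives 3/8 = 1/3 + 1/24, a different decomposition from the 1/4 + 1/8 used earlier, which again illustrates that unit-fraction decompositions are not unique.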
There is no record indicating that Fibonacci had a "proof" that his method always works, but a proof can be based on the following lemma.
Lemma. Let p/q be any fraction with 0 < p/q < 1 which is not a unit fraction, and let 1/n be the largest unit fraction less than or equal to p/q. Then p/q - 1/n is a fraction r/s with r < p.
Proof (see class notes)
Theorem. Fibonacci's method works for all fractions p/q with 0 < p/q < 1.
Proof (see class notes)
Online Resources for UNIT FRACTIONS
Analysis- Subsequences
March 25th 2013, 12:13 PM #1
Mar 2013
Analysis- Subsequences
For the given sequence, find the set S of subsequential limits
[Wn]= (0,1,2,0,1,3,0,1,4...)
I know the answer to be
I am just trying to figure out what the subsequences are.
I tried thinking of them to be
please let me know if I am correct on this.
Re: Analysis- Subsequences
For the given sequence, find the set S of subsequential limits
[Wn]= (0,1,2,0,1,3,0,1,4...)
I know the answer to be
I am just trying to figure out what the subsequences are.
I tried thinking of them to be
please let me know if I am correct on this.
Those are correct. But the subsequence $u_n$ is not unique.
So you cannot list the subsequences.
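To see the three natural subsequences concretely (an illustrative sketch; the closed form for the n-th term is inferred from the pattern shown in the question), one can generate the sequence and slice it:

```python
def w(n):
    """n-th term (0-based) of the sequence 0,1,2, 0,1,3, 0,1,4, ...:
    block k contributes the triple (0, 1, k + 2)."""
    k, r = divmod(n, 3)
    return (0, 1, k + 2)[r]

first_terms = [w(n) for n in range(9)]

u = [w(3 * k) for k in range(5)]        # constant subsequence -> limit 0
v = [w(3 * k + 1) for k in range(5)]    # constant subsequence -> limit 1
z = [w(3 * k + 2) for k in range(5)]    # 2, 3, 4, ... -> diverges to +infinity
```

The two constant subsequences give the finite subsequential limits, while the third increases without bound; and as noted above, many other subsequences converge to the same limits.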
March 25th 2013, 12:21 PM #2
Math Help
August 6th 2005, 05:44 PM #1
Junior Member
Jun 2005
help me plz.
Solve the system of equations by substitution:
y = x^2 + 2x - 2 and y = 3x + 4
I tried working the problem out but I think I'm doing something wrong. I hope someone can help me.
Thank you in advance.
Since your equations are both of the form y = (something only involving x) then either can be used to substitute for y in the other and you'll get the same thing, namely an equation in x alone.
In your example it would be x^2 + 2x - 2 = 3x + 4 which is a quadratic in x. You get two values of x, namely x=-2 and x=3 and the corresponding values of y are then -2 and 13. So the solutions
are (x=-2,y=-2) and (x=3,y=13).
You might like to graph the two functions y=x^2+2x-2 and y=3x+4: one is a parabola and the other a straight line. They should intersect in two points, with coordinates giving you the two solutions.
How did you solve that? I am sorry for being a pain in the butt. I just want to learn this because my book doesn't show me how to solve this type of problem. It shows it as an example but just
gives you the answer in a graph form.
Solving by substitution applies to a system of equations where you can use one equation to express one of the variables in terms of the others.
Typically you'll encounter it it a pair of simultaneous equations such as 2x+3y = 8, 3x+5y = 13. Let's consider that example for a moment. Take one of the equations, say the first, and get y on
its own on one side with the other side containing only x but not y. We have 3y = 8-2x, so y = (8-2x)/3. Now substitute this value for y into the second equation, so that 3x + 5(8-2x)/3 = 13.
Multiplying up (by 3 in this case) we get 9x + 40 - 10x = 39, so that x=1. We said that y = (8-2x)/3 = (8-2)/3 = 2. Of course we now remember to check that x=1,y=2 satisfies both of the original equations.
Your example had two equations, each of the form y = (an expression involving only x). So it's easy to get y on its own, in fact it has already been done for you. Choose the first, say, y = x^2+2x-2 and substitute into y = 3x+4 to get x^2+2x-2 = 3x+4. Solve for x (two possible values as this is a quadratic equation) and each value of x gives you a value for y. Remember to check your answers.
The unusual thing in your example was that one of the equation involved an x^2 term. But fortunately each equation involved y only as a linear term (that is, y but not y^2 or any higher powers).
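The substitution step described above can also be checked mechanically with the quadratic formula; a quick sketch in plain Python:

```python
import math

# y = x^2 + 2x - 2 and y = 3x + 4  ->  x^2 + 2x - 2 = 3x + 4  ->  x^2 - x - 6 = 0
a, b, c = 1, -1, -6
disc = b*b - 4*a*c                                   # discriminant: 25
roots = sorted([(-b - math.sqrt(disc)) / (2*a),
                (-b + math.sqrt(disc)) / (2*a)])     # x = -2 and x = 3
solutions = [(x, 3*x + 4) for x in roots]            # back-substitute into y = 3x + 4
# solutions -> [(-2.0, -2.0), (3.0, 13.0)]
```

This reproduces the two intersection points of the parabola and the line mentioned above.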
August 6th 2005, 10:35 PM #2
August 7th 2005, 09:04 AM #3
Junior Member
Jun 2005
August 7th 2005, 11:27 AM #4 | {"url":"http://mathhelpforum.com/advanced-algebra/714-help-me-plz.html","timestamp":"2014-04-16T11:26:01Z","content_type":null,"content_length":"33248","record_id":"<urn:uuid:97921266-4b1e-45df-995e-e52b948e872e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] sum and mean methods behaviour
Todd Miller jmiller at stsci.edu
Tue Sep 2 11:34:04 CDT 2003
On Mon, 2003-09-01 at 05:34, Peter Verveer wrote:
> Hi All,
> I noticed that the sum() and mean() methods of numarrays use the precision of
> the given array in their calculations. That leads to results like this:
> >>> array([255, 255], Int8).sum()
> -2
> >>> array([255, 255], Int8).mean()
> -1.0
> Would it not be better to use double precision internally and return the
> correct result?
> Cheers, Peter
Hi Peter,
I thought about this a lot yesterday and today talked it over with
Perry. There are several ways to fix the problem with mean() and
sum(), and I'm hoping that you and the rest of the community will help
sort them out.
(1) The first "solution" is to require users to do their own up-casting
prior to calling mean() or sum(). This gives the end user fine control
over storage cost but leaves the C-like pitfall/bug you discovered. I
mention this because this is how the numarray/Numeric reductions are
designed. Is there a reason why the numarray/Numeric reductions don't
implicitly up-cast?
(2) The second way is what you proposed: use double precision within
mean and sum. This has great simplicity but gives no control over
storage usage, and as implemented, the storage would be much higher than
one might think, potentially 8x.
(3) Lastly, Perry suggested a more radical approach: rather than
changing the mean and sum methods themselves, we could alter the
universal function accumulate and reduce methods to implicitly use
additional precision. Perry's idea was to make all accumulations and
reductions up-cast their results to the largest type of the current
family, either Bool, Int64, Float64, or Complex64. By doing this, we
can improve the utility of the reductions and accumulations as well as
fixing the problem with sum and mean.
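For readers puzzled by the quoted -2 and -1.0: an Int8 value and an Int8 accumulator both wrap modulo 256 into the range [-128, 127]. A pure-Python sketch of that two's-complement behavior (an illustration, not the numarray code itself):

```python
def wrap_int8(x):
    """Reduce x into the signed 8-bit range [-128, 127]."""
    return (x + 128) % 256 - 128

# 255 does not fit in Int8: it is stored as -1
stored = [wrap_int8(v) for v in (255, 255)]      # [-1, -1]

total = 0
for v in stored:                                 # Int8 accumulator, wrapping each step
    total = wrap_int8(total + v)
mean = total / len(stored)
# total == -2 and mean == -1.0, matching the output quoted above
```

Option (3) above amounts to doing the accumulation in a wider type (e.g. Int64) so no intermediate wraps, which is essentially what modern NumPy's default integer upcasting in `sum` does.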
Todd Miller jmiller at stsci.edu
STSCI / ESS / SSB
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2003-September/002240.html","timestamp":"2014-04-16T13:34:30Z","content_type":null,"content_length":"4690","record_id":"<urn:uuid:4d70770a-6f79-4b91-9aaf-13ad236218cd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R-SIG-Finance] Multivariate GARCH
Charles Evans cevans at chyden.net
Thu Dec 9 03:39:59 CET 2010
Currently, I am working on an analysis of ETF premiums. I have
estimated ECMs for my sample, but the error terms exhibit ARCH behavior.
Is there a more straightforward way to estimate a multivariate GARCH
model than mgarch or mgarchBEKK? I have searched for a usable
tutorial, and I have been unable to find any other than Jeff Ryan's
post from 2007:
and Ruey Tsay's transcript at:
Can anyone direct me to a tutorial that lays out how one estimates a
multivariate GARCH model in R? I'm not asking for a lesson in basic
econometrics, just an R-related URL that a researcher in a hurry can use.
Specifically, I want to fit a model of the following form for a sample
of ETFs:
p = e + n + [GARCH factors] + v, where
p: price returns
e: lagged error terms from a first-pass regression of p on n
n: net asset value (NAV) returns
v: white noise error term
I have tried:
garchFit(p ~ garch(1,1), data=x.data, trace=F)
but I get exactly the same results as when I run:
garchFit(~ garch(1,1), data=x.data, trace=F)
x.data <- cbind(timeSeries(p),timeSeries(e),timeSeries(n))
colnames(x.data) <- c("p","e","n")
Apparently, I am doing something wrong. Any hints would be greatly appreciated.
Sample Data:
p <- c(0.005678744, 0.010188880, 0.004402934, -0.008585791,
0.002392346, 0.000000000, 0.013999517, 0.006107606, 0.012220111,
0.003809968, -0.008796353, -0.011222938, 0.003052004, 0.011073019,
-0.005346363, -0.008778660, 0.009826939, 0.000930882, -0.006300336,
n <- c(0.002903111, 0.013199136, 0.002619049, -0.009796972,
0.006343149, -0.004543832, 0.015105848, 0.007058022, 0.008869238,
0.011436617, -0.010623656, -0.014617579, 0.004231815, 0.011082062,
-0.002788106, -0.010055036, 0.008891027, 0.002559034, -0.008985415,
e <- c(2.144238e-03, -8.760417e-04, 9.058546e-04, 2.124476e-03,
-1.831144e-03, 2.716139e-03, 1.598335e-03, 6.425588e-04, 3.986696e-03,
-3.648639e-03, -1.813267e-03, 1.592476e-03, 4.094507e-04,
3.919921e-04, -2.164148e-03, -8.801350e-04, 4.902445e-05,
-1.581071e-03, 1.110833e-03, -1.465173e-03)
Charles Evans
More information about the R-SIG-Finance mailing list | {"url":"https://stat.ethz.ch/pipermail/r-sig-finance/2010q4/007082.html","timestamp":"2014-04-17T00:56:39Z","content_type":null,"content_length":"5281","record_id":"<urn:uuid:d2f9d091-dbb6-4ccf-8533-bc3dba3ceaed>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
The First 4,000,000 digits of Pi Condensed Into One Image
Since Pi is an irrational number with an infinite and non-repeating decimal representation, computers have been used on multiple occasions in an effort to calculate its digits, culminating in Shigeru
Kondo’s triumph of 10 trillion decimal places on October 17, 2011, after 371 days of computing.
Design studio TWO-N decided to visualize the first 4 million digits in an interactive image which assigned colors to 0-9 before rendering them as single, 1×1 pixels and lining them up in the order
designated by Pi. As you’ll see, the applet allows you to inspect 500,000-digit sections and employ a search function that lets you probe the mathematical mosaic for numbers up to eight digits in length.
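Readers who want to reproduce a (much smaller) slice of such a digit stream themselves can use Gibbons' unbounded spigot algorithm, which yields decimal digits of Pi with integer arithmetic only; a Python sketch:

```python
def pi_digits(count):
    """First `count` decimal digits of pi (Gibbons' streaming spigot algorithm)."""
    digits = []
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while len(digits) < count:
        if 4*q + r - t < n*t:
            digits.append(n)                      # next digit is settled
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:                                     # consume one more term of the series
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)
    return digits

pi_digits(10)  # -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The integers grow without bound, so this is only practical for thousands of digits, not trillions — record computations like Kondo's use very different methods.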
(TWO-N via Information Aesthetics via i09) | {"url":"http://cubiclebot.com/pictures/first-4000000-digits-of-pi-condensed-into-one-image/","timestamp":"2014-04-16T19:25:34Z","content_type":null,"content_length":"17675","record_id":"<urn:uuid:e7d67fae-f237-4f1b-a3ed-2b545b4d91fc>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interesting optimization problem
May 28th 2010, 12:50 PM #1
May 2010
Interesting optimization problem
I would like to know if the following optimization problem has any application (or has been used to solve anything):
$\begin{array}{ll} \text{maximize} & J=\sum\limits_{j = 1}^n a_j \ln p_j \\ \text{subject to} & \sum\limits_{j = 1}^n p_j = 1 \\ & p_i \geqslant 0, \quad i = 1, \ldots, n \end{array}$
where $a_i$ is a positive constant for i=1,...,n
Thank you for your comments.
Last edited by luispipe; May 28th 2010 at 01:21 PM.
Looks remarkably similar to Boltzman's characterization for the distribution of the elements of a system across states $j$, where $P_j$is the probability of occupancy of state $j$. This is
subject to the constraint that $\sum P_j = 1$. In the case of Boltzmann, $a_j = P_j$
I took a stab at this problem, and find the optimum to arise when $a_j = \lambda P_j$ and $\sum a_j = \lambda$. Since $\lambda$ is an arbitrary scalar, simply take $\lambda = 1$ and you arrive at Boltzmann's equation for H.
Thus $J_{max} = \sum_j P_j \ln P_j$
with $\sum_j P_j = 1$
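The stationarity condition above amounts to $p_j = a_j / \sum_k a_k$, which can be spot-checked numerically; a small Python sketch with made-up coefficients:

```python
import math, random

def J(a, p):
    """Objective sum_j a_j * ln(p_j)."""
    return sum(ai * math.log(pi) for ai, pi in zip(a, p))

random.seed(1)
a = [2.0, 3.0, 5.0]                       # arbitrary positive coefficients
p_star = [ai / sum(a) for ai in a]        # candidate maximiser p_j = a_j / sum(a)

for _ in range(1000):                     # compare against random feasible points
    raw = [random.random() for _ in a]
    p = [x / sum(raw) for x in raw]
    assert J(a, p) <= J(a, p_star) + 1e-12
```

No random feasible point beats the candidate, consistent with the Lagrange-multiplier argument in the reply above.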
Certain economics problems could be expressed in this way.
If a person must spread a fixed resource (say, Time) over J activities, and the Utility derived from each activity is equal to $a_k \ln(P_k)$, where $P_k$ is the proportion of your time that you spend doing activity k, you would maximise the total utility $\sum_k a_k \ln(P_k)$.
The constraints would come from:
All time must be allocated
All time must be positive
It would be stupid to assume anyone had this utility function of course, but that never stopped economists before.
Last edited by SpringFan25; May 31st 2010 at 02:22 PM.
May 31st 2010, 07:08 AM #2
Junior Member
May 2010
May 31st 2010, 09:28 AM #3
Junior Member
May 2010
May 31st 2010, 12:09 PM #4
MHF Contributor
May 2010 | {"url":"http://mathhelpforum.com/advanced-applied-math/146770-interesting-optimization-problem.html","timestamp":"2014-04-16T10:24:10Z","content_type":null,"content_length":"40221","record_id":"<urn:uuid:34dc1050-c1a2-4742-90da-cf0cf8267eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Looking for a "scientific" application of a recreational puzzle.
First of all the puzzle.
A barman's got 15 glasses which are initially somehow divided into several stacks. The barman repeats the following process a thousand times. He takes the top glass from each (nonempty) stack and
forms a new stack with these glasses. Which set of stacks (in terms of their heights) will he come up with?
It's a nice one, give it some thought =)
Having toyed with this problem and its obvious generalizations for an arbitrary number of glasses I came up with the (totally intuitive) hypothesis that such a process and its long-term behavior
might emerge in some more-or-less advanced field of research (algebra/geometry/mathematical physics). Can anybody comment?
Update. One can also notice that in this special case of 15 glasses both the problem's statement and the answer are pleasantly simple. I'd be very interested and even somewhat surprised to hear an
accordingly simple proof.
partitions recreational-mathematics reference-request
I did it with 10+3+2 and I know the answer in this case.=) – Olga May 14 '12 at 21:21
Reminds me a little of the card trick inspired by Birhoff's Ergodic theorem. It is described at the end of these lecture notes: maths.manchester.ac.uk/~cwalkden/ergodic-theory/lecture22.pdf –
Edmund Harriss May 14 '12 at 22:38
11 This is known as Bulgarian solitaire, see en.wikipedia.org/wiki/Bulgarian_solitaire. There is some literature on it, though I don't know whether any of the literature concerns scientific
applications. – Gerry Myerson May 14 '12 at 22:48
Wow, thanks! Nice to know. – Igor Makhlin May 15 '12 at 0:25
1 Brian Hopkins has written a survey, "30 Years of Bulgarian Solitaire," for the March 2012 issue of The College Mathematics Journal. Among other things, it gives a detailed history of the problem
and how it came by its name. – Barry Cipra May 15 '12 at 15:35
1 Answer
I don't know about scientific applications, but you can solve the puzzle as follows (I hope you consider this solution simple):
Look at the Ferrers diagram of the partition of 15 you have, and consider the sum of the Manhattan distances from each dot to the corner of the diagram. At every step of the barman's
process, this sum either stays the same or decreases, and it only stays the same if the number of stacks is at least as large as the size of the largest stack minus 1. In this case, we get
the new Ferrers diagram by "rotating" the first column of the Ferrers diagram to make it into the first row, and shifting the rest of the diagram diagonally. For example:
x***** xxxxx
x*** *****
x** -> ***
x **
If we assume the existence of a cycle of partitions, then this sum must be constant along the cycle, so the Ferrers diagrams in the cycle can be produced by simply cyclically shifting each
diagonal. If there are two adjacent partially filled diagonals, then since adjacent numbers are relatively prime, eventually in this cycle we will get a Ferrers diagram where a hole in the
diagonal closer to the corner is next to a dot in the further diagonal, which is not allowed. For instance, if we continue the previous example, we eventually see the hole in the fifth
diagonal line up with the dot in the sixth diagonal:
****** ***** **** ***** ***** ***** ****** ****
**** ***** **** *** **** **** **** *****
*** -> *** -> **** -> *** -> ** -> *** -> *** -> ***
* ** ** *** ** * ** **
* * * ** * *
Thus, there is at most one partially filled diagonal in any partition that cycles, and the only partition of $15$ satisfying this condition is $15 = 1+2+3+4+5$. Since the number of
partitions of $15$ is less than $1000$, the barman enters this cycle before the end of his process.
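The convergence to the fixed point 5+4+3+2+1 (including the 10+3+2 experiment mentioned in the comments) is easy to replicate; a short Python simulation of the barman's process:

```python
def step(stacks):
    """One barman move: take the top glass from each stack, form a new stack."""
    new_stack = len(stacks)                      # one glass per nonempty stack
    stacks = [s - 1 for s in stacks if s > 1]    # stacks of height 1 disappear
    stacks.append(new_stack)
    return sorted(stacks, reverse=True)

stacks = [10, 3, 2]              # any partition of 15 works as a starting point
for _ in range(1000):
    stacks = step(stacks)
# stacks == [5, 4, 3, 2, 1]
```

Since the number of partitions of 15 is below 1000, the simulation is guaranteed to have entered the (here, trivial) cycle well before the loop ends, matching the argument above.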
Well, actually this is the solution I came up with myself, so I was hoping for something considerably shorter. Something that actually employs the sum being a triangular number. A
reduction to something widely known would also be nice. – Igor Makhlin May 14 '12 at 23:53
I was afraid of that... – zeb May 14 '12 at 23:56
If the sum were not a triangular number, there would be several partitions with partially filled diagonals. For instance, 16 has the cycle 6+4+3+2+1, 5+5+3+2+1, 5+4+4+2+1, 5+4+3+3+1,
5+4+3+2+2, 5+4+3+2+1+1, etc. If the number is far from a triangular number, there are multiple cycles. – Will Sawin May 15 '12 at 3:36
Yep, true story. This is what I meant by "in this special case of 15 glasses both the problem's statement and the answer are pleasantly simple". – Igor Makhlin May 15 '12 at 11:13
Not the answer you're looking for? Browse other questions tagged partitions recreational-mathematics reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/96950/looking-for-a-scientific-application-of-a-recreational-puzzle","timestamp":"2014-04-23T19:49:41Z","content_type":null,"content_length":"64233","record_id":"<urn:uuid:49b72b64-da76-40da-bbe7-e2db0a85e2b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ideals question
1. Ideals question
This is another past exam question.
Define $(I:J)\; =\; \{ r \in R\; |\; rJ \subseteq I \}\; =\; \{ r \in R\; |\; \forall j \in J \;:\; rj \in I\}$.
For I,J,K ideals of R, where R is a commutative ring, prove that:
(a) (I:J) is an ideal of R
(b) I∩J is an ideal of R
(c) (I: (J+K)) = (I:J)∩(I:K)
Well, you have to figure out what the elements of I, J, and K look like; say r[1] ∈ R -> r[1]j ∈ I, when j ∈ J. Then show that r[1]j - r[2]j, say, is in (I:J), and that r[1]j r[2]j is in
(I:J). That's for the (a) question.
Have I got the idea here, or what?
The definition says that for elements r[1], r[2] ∈ R, j ∈ J, r[1] and r[2] are in (I:J) when r[1]j, r[2]j ∈ I, right?
So I need to show that r[1] - r[2] is also in (I:J) when (r[1] - r[2])j ∈ I, and that (r[1]r[2])j ∈ I, so r[1]r[2] is in (I:J)?
If (I:J) is an ideal of R, R/A (where A = (I:J)) is the ring of cosets {r + A | r ∈ R} such that for s,t ∈ R, (s + A)(t + A) = st + A = 0 + A when s ∈ A or t ∈ A, i.e. when s = aj or t = aj
for some a in R and j in J.
I still can't see how any of that helps to prove A = (I:J) is an ideal. Do I try to prove that R/A is a coset ring and so A is an ideal (if multiplication is 'well defined')? Just a hint
might be all I need.
Damn, I nearly got it.
The answer for part (a) is as follows:
Let a,b ∈ (I:J) such that aj, bj ∈ I, ∀j ∈ J.
So aj - bj = (a-b)j ∈ I, and hence a - b ∈ (I:J) ........(1)
Let r ∈ R. Then for all j ∈ J, (ra)j = r(aj) ∈ I, since aj ∈ I and I is an ideal of R.
So ra ∈ (I:J) ...............................................(2)
Therefore, by (1) and (2), (I:J) is an ideal of R.
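As a sanity check on the definition of (I:J), one can compute it in a concrete commutative ring. A Python sketch in Z with the toy example I = (4), J = (6), working inside a finite window of integers (example and function name are ours):

```python
def colon(i_gen, j_gen, bound=50):
    """{r : r*j in (i_gen) for every j in (j_gen)} within [-bound, bound].
    In Z it suffices to test the generator j_gen itself,
    since r*(k*j_gen) is a multiple of r*j_gen."""
    return {r for r in range(-bound, bound + 1) if (r * j_gen) % i_gen == 0}

# ((4) : (6)) = (2): r*6 lies in 4Z  iff  r is even
even = {r for r in range(-50, 51) if r % 2 == 0}
# colon(4, 6) == even
```

The result is itself (the window of) an ideal of Z, as part (a) predicts.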
| {"url":"http://www.sciforums.com/showthread.php?113579-Ideals-question&p=2933241","timestamp":"2014-04-19T09:25:32Z","content_type":null,"content_length":"42609","record_id":"<urn:uuid:6ac96f32-3cee-4ff9-997c-aeea4cd58247>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need some help with a set question:
November 8th 2012, 06:53 PM #1
Nov 2012
Need some help with a set question:
If (A is a subset B) and (C is a subset of D), then (A intersection C) is a subset of (B intersection D).
I think I have a correct answer, but I'm uncertain (I feel like I'm leaving something out and that there should be something between the last two lines that explains further) so if you can answer
/explain, that would be super helpful. Thanks in advance!
Here's what I have:
Let x be an element of A intersect C.
Since we know A is a subset of B, then we know that x is an element of A and therefore is an element of B too.
Also, since C is a subset of D and x is an element of C then x is also an element of D too.
This shows that A intersect C is a subset of B intersect D.
Re: Need some help with a set question:
Hey Linseykathleen.
I don't know about how your professor/lecturer/whatever expects of proofs but if you need a really rigorous one try considering that if A is a subset of B then A intersection B = A for all A and
B such that A is a subset of B.
This way if (A and C) is a subset of (B and D), then (A and C) and (B and D) = (A and C).
Re: Need some help with a set question:
Hey Linseykathleen.
I don't know about how your professor/lecturer/whatever expects of proofs but if you need a really rigorous one try considering that if A is a subset of B then A intersection B = A for all A and
B such that A is a subset of B.
This way if (A and C) is a subset of (B and D), then (A and C) and (B and D) = (A and C).
I thought her argument was rigorous enough. From my experience, to show $A \subset B$ you assume $x \in A$ and show $x \in B$. So, she assumed $x \in A \cap C$ and showed $x \in B \cap D$
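The element-chasing proof above can also be spot-checked against random finite sets; a quick Python sketch:

```python
import random

random.seed(0)
universe = range(20)
for _ in range(200):
    B = {x for x in universe if random.random() < 0.5}
    D = {x for x in universe if random.random() < 0.5}
    A = {x for x in B if random.random() < 0.5}    # A is a subset of B by construction
    C = {x for x in D if random.random() < 0.5}    # C is a subset of D by construction
    assert (A & C) <= (B & D)                      # A ∩ C ⊆ B ∩ D
```

Random checks are of course no substitute for the proof, but they are a cheap way to catch a wrongly stated inclusion before trying to prove it.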
November 8th 2012, 11:00 PM #2
MHF Contributor
Sep 2012
November 8th 2012, 11:46 PM #3 | {"url":"http://mathhelpforum.com/discrete-math/207072-need-some-help-set-question.html","timestamp":"2014-04-20T22:07:44Z","content_type":null,"content_length":"32998","record_id":"<urn:uuid:fc5cb209-7e06-4048-906b-b7555667babe>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
We aren't entirely clear on your question, but I will answer this line:
Lyndamurphy wrote:
I have a monthly total and want to estimate over 3 months
If you have just one month and want to estimate for 3 months, the easiest way is to multiply by 3.
Example: April=60, so April, May and June together can be estimated to be 60×3=180
More accurately you would consider the number of days in each month. Since April has 30, May has 31, and June has 30, you would calculate as follows: Daily Average in April = 60/30 = 2, so estimate
for the three months = 2×(30+31+30) = 182 | {"url":"http://www.mathisfunforum.com/post.php?tid=3567&qid=34666","timestamp":"2014-04-18T08:27:36Z","content_type":null,"content_length":"17053","record_id":"<urn:uuid:ce433df6-ae37-4bac-8949-5729c459432f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
the definition of gauss
gauss (ɡaʊs)
—n , pl gauss
the cgs unit of magnetic flux density; the flux density that will induce an emf of 1 abvolt (10^-8 volt) per centimetre in a wire moving across the field at a velocity of 1 centimetre per
second. 1 gauss is equivalent to 10^-4 tesla
[after Karl Gauss]
"unit of intensity of a magnetic field," 1882, named for Ger. mathematician Karl Friedrich Gauss (1777-1855).
gauss (gous)n. pl. gauss or gauss·es The centimeter-gram-second unit of magnetic induction.
gauss (gous) Pronunciation Key
The unit of magnetic flux density in the centimeter-gram-second system, equal to one maxwell per square centimeter, or 10^-4 tesla.
Gauss, Carl Friedrich 1777-1855.
German mathematician, astronomer and physicist who introduced significant and rapid advances to mathematics with his contributions to algebra, geometry, statistics and theoretical mathematics. He
also correctly calculated the orbit of the asteroid Ceres in 1801 and studied electricity and magnetism, developing the magnetometer in 1832. The gauss unit of magnetic flux density is named for him.
unit of magnetic induction in the centimetre-gram-second system of physical units. One gauss corresponds to the magnetic flux density that will induce an electromotive force of one abvolt (10^-8 volt)
in each linear centimetre of a wire moving laterally at one centimetre per second at right angles to a magnetic flux. One gauss corresponds to 10^-4 tesla (T), the International System Unit. The gauss
is equal to 1 maxwell per square centimetre, or 10^-4 weber per square metre. Magnets are rated in gauss. The gauss was named for the German scientist Carl Friedrich Gauss.
Learn more about gauss with a free trial on Britannica.com. | {"url":"http://dictionary.reference.com/browse/gauss","timestamp":"2014-04-21T14:47:53Z","content_type":null,"content_length":"107530","record_id":"<urn:uuid:55199d1b-bdbf-4159-bf23-e4e56d89682b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Amplitude and Period
Day 2 - Amplitude and Period
by Shannon Umberger
Objectives: To find the amplitude and period of a trigonometric function from its equation; to find the amplitude and period of a trigonometric function from its graph; to graph sine and cosine
functions with various amplitudes and periods.
"Amplitude and frequency are important concepts that are essential to the understanding of many fields, including music.
"The loudness, or amplitude, of a musical sound is a comparative measure of the strength of the sound that is heard. Loudness is related to the distance from the source and the energy of the vibration.
"Pitch is determined by the number of vibrations per second, or the frequency of the vibrations that yield a sound wave. Frequency is the reciprocal of the period."
Resource: Hayden, Jerome D. and Hall, Bettye C. (1993). Trigonometry. Prentice Hall: Englewood Cliffs, New Jersey.
Student Activity 1 - Amplitude
I. Graphing amplitude in sine functions.
1. Graph y = sinx in Graphing Calculator.
2. Go to the "Math" Menu and choose "New Math Expression". Graph y = 2sinx. How does the graph of y = 2sinx differ from the graph of y = sinx?
3. Delete the equation y = 2sinx. Graph y = 0.5sinx. How does the graph of y = 0.5sinx differ from the graph of y = sinx?
4. Delete the equation y = 0.5sinx. Graph y = -sinx. How does the graph of y = -sinx differ from the graph of y = sinx?
5. Delete the equations y = sinx and y = -sinx. Graph the equation y = nsinx, letting n vary from -10 to 10. Animate. What effect does "n" have on the graphs?
II. Graphing amplitude in cosine functions.
Graph y = ncosx, letting n vary from -10 to 10. Animate. Does the graph of y = ncosx react to the values of n the same way the graph of y = nsinx did?
III. Making sense of amplitude.
1. Answer the following questions about equations in the forms y = Asinx and y = Acosx:
a) What happens to the graph of y = sinx and y = cosx when A > 1?
b) What happens when 0 < A < 1?
c) When -1 < A < 0?
d) What about when A = -1?
e) And when A < -1?
2. Define amplitude.
3. By only looking at the equations in the form y = Asinx or y = Acosx (and not graphing), can you write an expression that gives the amplitude of the graph?
4. By only looking at the graphs of the equations in the form y = Asinx or y = Acosx (and not knowing the exact equation), can you write an expression that gives the amplitude of the graph?
Teacher Key for Student Activity 1
1. Here is a sample graph:
2. Here is a sample graph and answer:
The purple graph is "taller" up and down by 2.
3. Here is a sample graph and answer:
The purple graph is "shorter" up and down by 0.5.
4. Here is a sample graph and answer:
The purple graph is "flipped" around the x-axis.
5. Click HERE to see a sample animation. Answer: The graphs get "taller" or "shorter."
II. Click HERE to see a sample animation. Answer: Yes.
1. Here are sample answers:
a) The graph gets "taller" by A.
b) The graph gets "shorter" by A.
c) The graph flips around the x-axis and gets "shorter" by A.
d) The graph flips around the x-axis.
e) The graph flips around the x-axis and gets "taller" by A.
2. Amplitude is "related to the height of the graph."
3. Amplitude = | A |
4. Amplitude = 0.5(M-m), where M is the maximum value of the range and m is the minimum value of the range.
Student Activity 2 - Period
I. Graphing period in cosine functions.
1. Graph y = cosx in Graphing Calculator.
2. Go to the "Math" Menu and choose "New Math Expression". Graph y = cos2x. How does the graph of y = cos2x differ from the graph of y = cosx?
3. Delete the equation y = cos2x. Graph y = cos0.5x. How does the graph of y = cos0.5x differ from the graph of y = cosx?
4. Delete the equation y = cosx. Graph y = cos(-x). How does the graph of y = cos(-x) differ from the graph of y = cosx?
5. Delete the equations y = cosx and y = cos(-x). Graph the equation y = cosnx, letting n vary from -10 to 10. Animate. What effect does "n" have on the graphs?
II. Graphing period in sine functions.
Graph y = sinnx, letting n vary from -10 to 10. Animate. Does the graph of y = sinnx react to the values of n the same way the graph of y = cosnx did?
III. Making sense of period.
1. Answer the following questions about equations in the forms y = sinBx and y = cosBx:
a) What happens to the graphs of y = sinx and y = cosx when B > 1?
b) What happens when 0 < B < 1?
c) When -1 < B < 0? Be careful!!
d) What about when B = -1? Be careful!!
e) And when B < -1? Be careful!!
2. Define period.
3. By only looking at the equations in the form y = sinBx or y = cosBx (and not graphing), can you write an expression that gives the period of the graph?
4. By only looking at the graphs of the equations in the form y = sinBx or y = cosBx (and not knowing the exact equation), can you write an expression that gives the period of the graph?
Teacher Key for Student Activity 2
1. Here is a sample graph:
2. Here is a sample graph and answer:
The green graph got "squeezed" by 2.
3. Here is a sample graph and answer:
The green graph got "stretched" by 0.5.
4. Here is a sample graph and answer:
The green graph lies on top of the blue graph.
5. Click HERE to see a sample animation. Answer: The graph gets "squeezed" or "stretched."
II. Click HERE to see a sample animation. Answer: Yes, except for one difference. When n is negative, y = sinnx "flips" about the x-axis but y = cosnx does not. (This is because sin(-x) = -sinx and
cos(-x) = cosx).
1. Here are sample answers:
a) The graph gets "squeezed" by B.
b) The graph gets "stretched" by B.
c) For y = sinBx, the graph flips about the x-axis and gets "stretched" by B. For y = cosBx, the graph gets "stretched" by B.
d) For y = sinBx, the graph flips about the x-axis. For y = cosBx, the graph remains the same.
e) For y = sinBx, the graph flips about the x-axis and gets "squeezed" by B. For y = cosBx, the graph gets "squeezed" by B.
2. Period is one complete cycle or pattern of the graph.
3. Period = 2p / | B |
4. Period = | N - n |, where n is the x-value of any point on the graph and N is the x-value of a second point on the graph, found by tracing the graph until the pattern starts to repeat.
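Both expressions can be spot-checked numerically by sampling the functions densely over several periods; a short Python sketch (outside the Graphing Calculator workflow used above):

```python
import math

def amplitude(f, samples=20000):
    """Estimate 0.5*(M - m) by sampling f over [0, 20*pi)."""
    ys = [f(20*math.pi * k / samples) for k in range(samples)]
    return (max(ys) - min(ys)) / 2

# amplitude of y = A sin(Bx) or y = A cos(Bx) is |A|, independent of B
assert abs(amplitude(lambda x: -4*math.sin(5*x)) - 4) < 1e-3
assert abs(amplitude(lambda x: 0.5*math.cos(2*x)) - 0.5) < 1e-3

# period of y = sin(Bx) is 2*pi/|B|: shifting by it leaves the function unchanged
B = 5
period = 2*math.pi / abs(B)
assert all(abs(math.sin(B*(x + period)) - math.sin(B*x)) < 1e-9
           for x in [0.1, 1.0, 2.5])
```

Students could use checks like these to verify their answers in the practice problems below.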
Student Practice
1. Determine the amplitude and period of each function.
a) y = sin4x
b) y = -4cos5x
c) y = -2cos(5/4)x
d) y = 3cos(-2x)
2. Determine the amplitude and period of each function. Then write an equation of each graph.
3. Give the amplitude and period of each function. Then sketch the graph of the function over the given interval.
a) y = 3sinx, [0, 2p]
b) y = 2cos2x, [-2p, 2p]
c) y = -cos0.5x, [-p, p]
d) y = 0.5sin(-x), [-2p, 0]
Teacher Key for Student Practice
a) amp = 1, pd = p/ 2
b) amp = 4, pd = 2p/5
c) amp = 2, pd = 8p/5
d) amp = 3, pd = p
a) amp = 3, pd = p, y = 3sin2x
b) amp = 2, pd = 2p, y = -2cosx
a) amp = 3, pd = 2p
b) amp = 2, pd = p
c) amp = 1, pd = 4p
d) amp = 0.5, pd = 2p | {"url":"http://jwilson.coe.uga.edu/EMT668/EMAT6680.2000/Umberger/EMAT6690smu/Day2/Day2.html","timestamp":"2014-04-18T00:13:25Z","content_type":null,"content_length":"16471","record_id":"<urn:uuid:ca95f417-6d4c-430a-8c3a-23679e621cbe>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: General intellectual interest/challenges
Harvey Friedman friedman at math.ohio-state.edu
Mon Dec 15 11:21:54 EST 1997
Lou writes:
>Well, P=NP is certainly highly interesting as a mathematical problem,
>and so is the abc-conjecture, and scores of other big problems,
>like the Lang conjectures.
It is easy to state clearly and concisely what the general intellectual
interest of P=NP is and its importance for lots of diverse contexts way
beyond mathematics. Furthermore, this can easily be done in such a way that
people from the following disciplines can readily grasp and relate to it,
and see its intellectual and other importance:
number theory, combinatorics, geometry, scientific philosophy, statistics,
finance, computer security, complex systems, complexity theory, database
theory, algorithm design, cryptography, logic, artificial intelligence,
robotics, networking, hardware design, control theory, operations research,
neural nets, quantum computing, expert systems, code verification, proof
checking, statistical mechanics, linguistics, dynamical systems, signal
processing, etcetera; and many, many more.
I repeat: it's not just that people from these diverse areas would grasp
the problem. It's that they would grasp its significance - although they
would be convinced that they pretty much know the answer. It would not look
like a frivolous puzzle to them.
Can you do this for the abc-conjecture and the Lang conjectures?
P=NP is something that I would not quite regard as mainstream FOM, although
it is clearly close in spirit, or at least far closer in spirit and actual
connection to FOM than any part of mathematics. It is definitely something
that emerges very very quickly in foundational studies. This is extremely
rare for a yes/no mathematical problem. I will save the big guns from
mainstream FOM in reserve for later.
> I am not inclined to compare them as to
>general intellectual interest.
I am, because otherwise you get to cite mathematical tradition without any
justification, in order to minimize the unique place of FOM in the history
of ideas with impunity. I use "general intellectual interest" to jolt you
into seeing that at least there is some sort of difference. Then maybe you
will cast the difference in your own terms, and we can proceed from there.
>Who, 200 years ago, would have
>predicted the central role of elliptic curves in so much that's
>going on? Or zeta functions? (Well, Euler had some inkling, presumably.)
Ah, the old needle in the haystack justification. It goes like this: "Since
no one can tell what is important in the long run, anything might be as
good as anything else. So we should just follow our instincts. Nothing
gained by being really critical. Let a thousand flowers bloom! Glory to our
instincts!! Those who waver can't do!!!"
Well, this argument looks much less impressive when one injects a little
bit of probability theory in here. If you have a method by which you can
tell gigantic differences in probabilities of importance, you can't and
won't fail to use it. The real question is: can the method be taught?
>If these things are not of general intellectual interest, so much the
>worse for "general intellectual interests".
I have never met anybody yet who claims that elliptic curves and zeta
functions are of general intellectual interest. Of the people who care,
approximately 99.9.. percent are pure mathematicians (yes, they have had
uses - likely temporary - in some complexity theory situations). You don't
want to compare P=NP with elliptic curves and zeta functions once you step
out of the comfort of your math building - or do you?
>Of course, this is not
>to say I do not highly value good expositions on these things, and
>improvement in this direction is very desirable. But I don't see
>any reason for mathematicians to aim for approval by, for example,
>the big media, if this would subvert the intellectual standards
>we should be keeping up.
I would appreciate what you said if I thought you could delineate in a
convincing way any kind of fundamental underlying intellectual standards.
We know that mathematicians are addicted to complex and intricate
hierarchical structures, in which one can effectively use large complex
machineries - which have been built up over years. I have on several
occasions seen mathematicians, after a well known problem is solved, turn
a deaf ear because the solution did not employ big machinery. Which makes
me think that the mathematicians are not all that interested in
information, but rather in the process - one of the hallmarks of art.
What is unclear is to what extent mathematicians are driven by any wider
intellectual purposes, other than this special kind of process. To real
outsiders - and I am not really quite an outsider - it has all the
appearance of an intricate yet aimless art, which is admittedly very
"precious." Yet horrifically inbred, and very very snooty. To real
outsiders, it is supremely impenetrable. And when it is made to look
penetrable - on the surface - it merely looks like challenging puzzles
(coloring maps, FLT, etc.) - with the flavor of chess. A kind of climbing
of Mt. Everest in an age of airplanes.
I'll stop here for no good reason. One interesting question: suppose P=NP
is solved as an afterthought by people working on elliptic curves and zeta
functions? How would that affect my position? The answer is: not at all,
although given the way I am saying all of this, I could look real bad. Lou
- are you going to use your knowledge and experience with elliptic curves
and the zeta function to go try and solve P=NP? Or are you going to leave
it for the rest of us on the fom?
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1997-December/000528.html","timestamp":"2014-04-20T20:59:14Z","content_type":null,"content_length":"8195","record_id":"<urn:uuid:6b596dc2-8c89-4636-bc57-867221b5acf1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math::PlanePath::Base::Digits -- helpers for digit based paths
use Math::PlanePath::Base::Digits 'digit_split_lowtohigh';
foreach my $digit (digit_split_lowtohigh ($n, 16)) {
...
}
This is a few generic helper functions for paths based on digits or powering.
They're designed to work on plain Perl integers and floats and there's some special case support for Math::BigInt.
Nothing is exported by default but each function below can be imported in the usual Exporter style,
use Math::PlanePath::Base::Digits 'round_down_pow';
(But not parameter_info_radix2(), for the reason described below.)
Return the largest power of $radix less than or equal to $n. For example
($pow, $exp) = round_down_pow (260, 2);
# $pow==256 # the next lower power
# $exp==8 # the exponent in that power
# 2**8=256 is next below 260
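The documented behaviour of round_down_pow() can be sketched in Python (an illustration of the semantics described above, not the Perl module's actual code):

```python
def round_down_pow(n, radix):
    """Largest power of `radix` <= n, together with its exponent
    (mirrors the documented behaviour of round_down_pow)."""
    pow_, exp = 1, 0
    while pow_ * radix <= n:
        pow_ *= radix
        exp += 1
    return pow_, exp

print(round_down_pow(260, 2))  # (256, 8) -- 2**8 is the next power below 260
```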
Return a list of digits from $n in base $radix, or in binary. For example,
@digits = digit_split_lowtohigh (12345, 10);
# @digits = (5,4,3,2,1) # decimal digits low to high
If $n==0 then the return is an empty list. The current code expects $n >= 0.
"lowtohigh" in the name tries to make it clear which way the digits are returned. reverse() can be used to get high to low instead (see "reverse" in perlfunc).
bit_split_lowtohigh() is the same as digit_split_lowtohigh() called with radix=2.
Return a value made by joining digits from $arrayref in base $radix. For example,
@digits = (5,4,3,2,1) # decimal digits low to high
$n = digit_join_lowtohigh (\@digits, 10);
# $n == 12345
Optional $zero can be a 0 of an overloaded number type such as Math::BigInt to give a returned $n of that type.
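The split/join round trip described above can be sketched in Python (illustrative only; the module itself is Perl, and this omits the Math::BigInt special casing):

```python
def digit_split_lowtohigh(n, radix):
    """Digits of n in base radix, least significant first (empty list for n == 0)."""
    digits = []
    while n > 0:
        n, d = divmod(n, radix)
        digits.append(d)
    return digits

def digit_join_lowtohigh(digits, radix):
    """Inverse operation: rebuild n from low-to-high digits."""
    n = 0
    for d in reversed(digits):
        n = n * radix + d
    return n

print(digit_split_lowtohigh(12345, 10))          # [5, 4, 3, 2, 1]
print(digit_join_lowtohigh([5, 4, 3, 2, 1], 10)) # 12345
```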
Return an arrayref of a radix parameter, default 2. This is designed to be imported into a PlanePath subclass as its parameter_info_array() method.
package Math::PlanePath::MySubclass;
use Math::PlanePath::Base::Digits 'parameter_info_array';
The arrayref is
[ { name => 'radix',
share_key => 'radix_2',
display => 'Radix',
type => 'integer',
minimum => 2,
default => 2,
width => 3,
description => 'Radix (number base).',
} ]
Return the single radix parameter hashref from the info above. This can be used when a subclass wants the radix parameter and other parameters too,
package Math::PlanePath::MySubclass;
use constant parameter_info_array =>
[ { name => 'something_else',
type => 'integer',
default => '123',
},
Math::PlanePath::Base::Digits::parameter_info_radix2() ];
If the "description" part should be more specific or more detailed then it could be overridden with for example
{ %{Math::PlanePath::Base::Digits::parameter_info_radix2()},
description => 'Radix, for both something and something.',
}
This function is not exportable since it's meant for a one-off call in an initializer and so no need to import it for repeated use.
Math::PlanePath, Math::PlanePath::Base::Generic
Copyright 2010, 2011, 2012, 2013, 2014 Kevin Ryde
This file is part of Math-PlanePath.
Math-PlanePath is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your
option) any later version.
Math-PlanePath is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License along with Math-PlanePath. If not, see <http://www.gnu.org/licenses/>. | {"url":"http://search.cpan.org/~kryde/Math-PlanePath/lib/Math/PlanePath/Base/Digits.pm","timestamp":"2014-04-21T00:08:59Z","content_type":null,"content_length":"19028","record_id":"<urn:uuid:6d1938a2-719b-4b39-901f-f0cfcfcb8829>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chicago Heights ACT Tutor
Find a Chicago Heights ACT Tutor
...I've had several jobs where I used Excel daily on a professional level. Beyond the typical workplace application of the program, I've also used Excel for extensive statistical and modeling
projects, pushing the boundaries of what Excel (and my poor laptop) is capable of. In short, I can help you learn and master Excel!
28 Subjects: including ACT Math, reading, writing, English
...Moreover, I have edited over 50 college application essays. In my analysis, I review content (making sure the story is cohesive and compelling), style (making sure the writing is unique, articulate,
and structured), and grammar (elimination of improper language and diction). I scored a 670 on the SAT...
17 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...For the ACT Reading section, I show students how to read each passage quickly while grasping the main ideas. Then, I guide them in using the test questions to understand where to read more
closely. I scored a 34 on the ACT Reading section.
11 Subjects: including ACT Math, writing, SAT math, GMAT
...I am First-Aid, CPR and Red Cross Lifeguard certified. I earned a Bachelor's degree with a specialty in speech and communication. I have won awards in public speaking events as well as improv.
23 Subjects: including ACT Math, reading, English, writing
...Since I am actively teaching High School Mathematics to students that may not have a proper foundation from their earlier Mathematics classes, I have experience in breaking down Mathematical
problems and concepts that are often tested in standardized assessments. As a tutor and a teacher, I will...
11 Subjects: including ACT Math, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/chicago_heights_act_tutors.php","timestamp":"2014-04-20T06:48:53Z","content_type":null,"content_length":"23888","record_id":"<urn:uuid:d983b1c6-0ea1-4efd-bd30-1868c0a81ecc>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Euclidean space
Euclidean space, In geometry, a two- or three-dimensional space in which the axioms and postulates of Euclidean geometry apply; also, a space in any finite number of dimensions, in which points are
designated by coordinates (one for each dimension) and the distance between two points is given by a distance formula. The only conception of physical space for over 2,000 years, it remains the most
compelling and useful way of modeling the world as it is experienced. Though non-Euclidean spaces, such as those that emerge from elliptic geometry and hyperbolic geometry, have led scientists to a
better understanding of the universe and of mathematics itself, Euclidean space remains the point of departure for their study. | {"url":"http://www.britannica.com/print/topic/194913","timestamp":"2014-04-18T09:02:29Z","content_type":null,"content_length":"7430","record_id":"<urn:uuid:927279c7-5138-4474-8b8c-29f2e89313a0>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector problem
May 3rd 2011, 02:55 PM #1
Vector problem
Q: an airplane is flying at an airspeed of 600 km/hr in a cross-wind that is 30 degrees west of north at a speed of 50 km/hr. In what direction should the plane fly to end up going due west?
It's been years since I did this, but I have a test coming up and this problem will show up on it
air vector + wind vector = ground vector
let $\theta$ = angle relative to due West for the Air vector
using the method of components ...
$A_x + W_x = G_x$
$600\cos{\theta} + 50\cos(60) = G$
$A_y + W_y = G_y$
$600\sin{\theta} + 50\sin(60) = 0$
solve for $\theta$ using the 2nd equation. If you need to find the groundspeed, $G$, use the value found for $\theta$ and the 1st equation.
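Plugging numbers into the component equations above gives a quick numeric check (a Python sketch added for illustration; angles converted to radians):

```python
import math

# From 600 sin(theta) + 50 sin(60 deg) = 0: theta measured relative to due west
theta = math.asin(-50 * math.sin(math.radians(60)) / 600)

# Groundspeed from the x-component equation: G = 600 cos(theta) + 50 cos(60 deg)
ground = 600 * math.cos(theta) + 50 * math.cos(math.radians(60))

print(math.degrees(theta))  # about -4.14, i.e. roughly 4.1 degrees south of west
print(ground)               # about 623 km/h groundspeed
```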
May 3rd 2011, 03:39 PM #2 | {"url":"http://mathhelpforum.com/pre-calculus/179420-vector-problem.html","timestamp":"2014-04-20T20:14:58Z","content_type":null,"content_length":"34714","record_id":"<urn:uuid:e705c2e3-cb0b-4774-8e10-d502a3999cf1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: cluster and F test
Re: st: cluster and F test
From Steven Samuels <sjhsamuels@earthlink.net>
To statalist@hsphsun2.harvard.edu
Subject Re: st: cluster and F test
Date Mon, 14 Jul 2008 17:26:53 -0400
On Jul 11, 2008, at 4:19 AM, Ángel Rodríguez Laso wrote:
Dear Steven,
From my readings I've understood that the design effect comprises all
loss of precision due to clustering and weighting. Once the sample
size is corrected by the design effect, what matters is the number of
observations. These are the results for the proportion of a variable
in a complex design survey:
In your example, the DEFT is <1, indicating that the cluster sample is more precise than a SRS. This would happen, for example, if in every cluster, the proportion of "si" is about 10%, the
population proportion. Essentially, the "between-cluster" SD would zero in the formula I previously presented. In such a case, the total sample size matters, not the number of clusters.
In the general case, however, the between-cluster SD is not zero. This would happen if the trait you were studying was unevenly distributed among clusters. The most extreme case: if all subjects in
127 clusters had "si", and all subjects in the remaining 1,139 clusters had "non", then the effective sample size would be the number of clusters. In the absence of stratification contributions to
the design effect, the approximate value of DEFF would be "n", where "n" is the average cluster size.
What is surprising for me is that in regression in this context, only
the number of clusters count and not the number of individuals per
cluster (or the total number of individuals), as it's been said by
Austin. That amounts to saying that having 1000 observations per
cluster would yield the same precision than having 1.
You are misinterpreting Austin's statement (I could not find the one you mean). Of course, the number of observations per cluster matters, but only up to a point. The approximate formula for the
variance of a mean that I gave previously was:
var = [(s_b)^2]/m + [(s_w)^2]/nm.
where m = no. clusters, n = number of observations /cluster.
You can see that increasing n does decrease the variance, but this decrease affects only the 2nd term. On the list we occasionally see examples where investigators took a small number of clusters and
a huge sample size in some of them, and then were surprised at the big standard errors. For more details, find the formulas for the design effect and for choosing the sample size for clusters in one
of the texts I referred to.
(Aside: In your example DEFF = -5863. This is a number that should be positive! According to the Stata manual, the value for DEFF is valid only if original population weights are used. In your
example the weights are scaled to total the sample size, not the population size, and this may have caused the wild value.)
svyset psu [pweight=pesodef2007], strata(areasalud)fpc(secperarea)
pweight: pesodef2007
VCE: linearized
Strata 1: areasalud
SU 1: psu
FPC 1: secperarea
. svy:prop p45
(running proportion on estimation sample)
Survey: Proportion estimation
Number of strata = 11 Number of obs = 12174
Number of PSUs = 1266 Population size = 12172,5
Design df = 1255
| Linearized Binomial Wald
| Proportion Std. Err. [95% Conf. Interval]
p45 |
sí | ,0994565 ,0023199 ,0949052 ,1040077
no | ,9005435 ,0023199 ,8959923 ,9050948
. estat effects
| Linearized
| Proportion Std. Err. Deff Deft
p45 |
sí | ,0994565 ,0023199 -5863 ,855246
no | ,9005435 ,0023199 -5863 ,855246
Note: Weights must represent population totals for deff to be correct
when using an FPC; however, deft is
invariant to the scale of weights.
end of do-file
So the standard error is calculated on the effective sample size (16648;
p(1-p)/se*se) that, if corrected by deft*deft becomes
(16648*0.855246*0.855246) 12177, much closer to the number of
observations than to the number of clusters. That´s the reason why I
comment that for precision, the sample size is a very important
determinant. In fact, there is no disagreement between both points of
views because the total sample size is determined by the number of
clusters and the number of observations per cluster.
What is surprising for me is that in regression in this context, only
the number of clusters count and not the number of individuals per
cluster (or the total number of individuals), as it's been said by
Austin. That amounts to saying that having 1000 observations per
cluster would yield the same precision than having 1.
2008/7/8, Steven Samuels <sjhsamuels@earthlink.net>:
Angel, the primary determinant of precision is the number of clusters, and
degrees of freedom are based on these.
To compute the sample size needed in a cluster sample, you need to estimate
the number of clusters needed *and* the number of observations per cluster.
Consider an extreme case: everybody in a cluster has the same value of an
outcome "Y", but the means differ between clusters. Here one observation
will completely represent the cluster and only the number of clusters
matters. At the other extreme, if each cluster is a miniature of the
original population and cluster are very similar, then relatively few
clusters are needed and more observations can be taken per cluster.
In practice, the actual choice of clusters/observations per cluster is made
on the basis of the budget, on the relative costs of adding a cluster and of
adding an additional observation within a cluster, and the ratios the SD's
for the main outcomes between and within clusters. As there are usually
several outcomes, a compromise sample size is chosen. See: Sharon Lohr,
Sampling: Design and Analysis, Duxbury, 1999, Chapter 5; WG Cochran,
Sampling Techniques, Wiley, 1977; L Kish, Survey Sampling, Wiley, 1965.
There are many internet references.
Key concepts: the intra-class correlation, which measures how similar
observations in the same clusters are compared to observations in different
clusters; the "design effect", which shows how the standard error of a
complex cluster sample is inflated compared to a simple random sample of the
same number of observations. Joanne Garret's program -sampclus-, (findit
sampclus), requires the investigator to input the correlation. It is most
easily calculated by a variance components analysis of similar data.
A *theoretical* nested model can make some concepts clearer (Lohr). Suppose
there are observations Y_ij = c + a_i + e_ij. There are m random effects a_i
from a distribution with between-cluster SD s_b and, for each a_i, there are
n e_ij's drawn from a distribution with "within-cluster" SD s_w. The a's and
e's are independent. The total sample size is nm, and the variance of the
sample mean is:
var = [(s_b)^2]/m + [(s_w)^2]/nm. You can see that, holding m fixed,
increasing the number of observations per cluster decreases only the 2nd term.
The actual formulas for sampling from finite populations are more
complicated, but the same principles apply.
On Jul 8, 2008, at 5:07 AM, Ángel Rodríguez Laso wrote:
Following the discussion, I don´t understand very well how degrees of
freedom (number of clusters-number of strata) and the actual number of
observations are used in svy commands (which are related to cluster
regression). I say so because when I calculate the sample size needed
in a survey to get a proportion with a determined confidence level,
the number I get is the number of observations and not the number of
degrees of freedom. So I assume that the number of observations is
what conditions the standard error and then I don´t know what degrees
of freedom are used for.
Ángel Rodríguez
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Merrimack Precalculus Tutor
Find a Merrimack Precalculus Tutor
...Human Systems 9. The characteristics, distribution, and migration of human populations on Earth's surface. 10. The characteristics, distributions, and complexity of Earth's cultural mosaics.
44 Subjects: including precalculus, chemistry, writing, calculus
...This has trained me in most of the sections in the exam. The exam is divided into several parts. One section is the Physical Sciences, essentially physics and chemistry.
47 Subjects: including precalculus, chemistry, calculus, reading
...I have worked with elementary students to help bolster their basics, middle school students to build their algebra skills, and high school students to understand the fine points of geometry. I
use my love of math to find the best approach for each student. That being said, I try to make math fun.
8 Subjects: including precalculus, calculus, linear algebra, trigonometry
...Thank you for your interest and preference, Alexander. I taught an Algebra college course for twelve participating (future) nurses. It was very successful. I have MS degrees in Physics
(University of Stuttgart, Stuttgart, Germany) and in Electrical Engineering (University of Florida, Gainesville, Florida).
6 Subjects: including precalculus, physics, algebra 1, prealgebra
...Try as I might, I could not avoid a great deal of memorization in order to succeed. By the end of the course in which I struggled for A's, I felt I had a pretty good handle on things. Through
the remainder of high school, I continued to work very hard (harder than I should have had to work) to achieve good grades.
6 Subjects: including precalculus, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/merrimack_nh_precalculus_tutors.php","timestamp":"2014-04-18T11:27:34Z","content_type":null,"content_length":"23880","record_id":"<urn:uuid:33ec673f-239a-4287-a306-4a7de0cb7002>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Palos Heights Calculus Tutor
Find a Palos Heights Calculus Tutor
...The level is not important---I'm just here to help you learn! I'm also available to tutor music theory. My grades in my first two semesters of the IU theory sequence were both A+; I earned A's
in subsequent honors theory courses.
13 Subjects: including calculus, statistics, geometry, algebra 1
...My teaching style is one of: * LISTENING to see what your student is doing; to learn how he or she thinks. * ENCOURAGING students to push a little farther; to show them what they're really
capable of. * EXPLAINING to teach what students need, in ways they can understand and remember! I have o...
21 Subjects: including calculus, chemistry, statistics, geometry
...During my time there, I took AP Calculus, Physics, Chemistry, and Biology, and played on the baseball team. I have worked at Flossmoor Country Club for 10 years, so I have met many of the South
Suburbs' most influential people. I have also helped many kids become great caddies at this club, by teaching them and helping them if they had any troubles.
28 Subjects: including calculus, chemistry, geometry, algebra 1
...They seem to march on, concerned not with whether they have left you far behind. Rather, I find it much more efficient to aide learning with conversation, marching side by side and never to
lead too far ahead. I do my best to pause and retrace our route so that when you inevitably ask the same ...
7 Subjects: including calculus, physics, geometry, algebra 1
...In addition, I also work with test preparation including district wide tests and will be working with students preparing to improve their scores on NWEA and the new PARCC test which will be
replacing the ISAT next academic year. I you want someone with a record of success with established growth...
76 Subjects: including calculus, English, Spanish, reading | {"url":"http://www.purplemath.com/Palos_Heights_calculus_tutors.php","timestamp":"2014-04-19T17:29:41Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:4a3f040e-2ec1-4b3f-a5a1-d0c0e4758f8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by Kim
Total # Posts: 1,974
Science ; help pleaase
1. In what way are the ocean currents in the Southern Hemisphere 'mirror images' of those in the Northern Hemisphere? 2. On which side of Australia (east or west) would you expect to find deserts?
rain forests? Explain why. 3. In what other regions of the world would you...
Science - please please help
never mind about the third one. i just thought of it. Is it cause rocks and soil have low albedo?
Science - please please help
1. Explain why conduction, convection, and advection cannot occur in space. ^is it cause there is no gravity in space? 2. Does warm water rise or fall in cold water? Explain why this happens. 3.
Explain why rocks and soil are poor heat sinks. Thanks in advance.
please help! asap. thank you!
so for your 18th birthday, you hold a party like a 16th birthday party? ..are there any traditions of some sort? For those couples.. do you mean the 100th and 365th day they are together? Thanks so
much for your help!
please help! asap. thank you!
I need to find out events that are part of adolescents' lives in Korea. ex. birthday.. thanks! .. btw, would Koreans celebrate their 16th birthday party like how they celebrate it in North America?
Algebra 1
Thank you!
Algebra 1
Hi, Could anyone tell me how I would do: 4 out of 15 is ____% Thanks, Kim
you mean to divide 595 by 60 if so that comes out to 9.91 9 91/60 then I would need to break that down.. I am so confused
3 1/2 x 5/6 x 3 2/5 3x2+1=7/2 6x2+5=17/5 7/2x5/6x17/5 = 596/60 = 59 5/12 Is this right? I feel like I am doing something wrong.. thanks
Thank you
Can someone explain to me the different appeals used in writing (ethos, pathos, and logos)?
The charge and identity of an ion with 16 protons, 18 electrons and 17 neutrons would be?
Chemistry - please help
1. Write the balanced chemical equation for the single displacement reaction of magnesium and sodium thiosulfate. Is this right? 6Mg + (S*2*O*3)*2 -> 6MgO + 2S*2 2. Write the balanced chemical
equation for the double displacement reaction of hydrogen sulphide and potassium ...
how is the concept of empathy illustrated and developed in Raymond Carver's "cathedral"
discuss how the methaphor of death operate in Chinua Achebe's "dead men's path" with specific reference to culture identity?
Written Commuincations
How can I write a business solution for the following question. The trip schedule for Mexico during spring break has been cancelled due to bankruptcy of the bus company. You must tell 25 of your
classmates that the trip has been cancelled and that have lost their $100 deposit....
Thanksgiving Day is coming. Do you have some ideas about it? For Western people, it is more important than for us Asians. But this kind of custom influences us gradually. So, this Thanksgiving Day I
want to send some interesting gifts to my family, friends and others in order...
math in general
but will i even need them after high school?
Proofs HELP
what is coplanar?
i need help with this problem: If the square of 2 less than an integer is 16, find the number(s).
Hello...I just had a quick question...it's probably something easy to get...maybe I am thinking too hard...but the question is: estimate the volume of the solid that lies above the square R = [0,2] x [0,2]
and below the elliptical paraboloid z = 16 - x^2 - 2y^2 ..divide R into four equal squ...
I'm sorry, it's supposed to be √(5x+5) - 5 = 0
I need help solving: √(5x+5) - 5 = 0 Thanks for the help!!
social studies
There were many machines. One machine was the cotton gin. It picked and cleaned the cotton
Can someone help me with this word problem. How far up a wall will an 11m ladder reach, if the foot of the ladder must be 4m from the base of the wall? Round your answer to the nearest tenth.
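The ladder question above is a Pythagorean-theorem setup: height = √(11² − 4²). A quick check in Python (my own sketch, not from the original thread):

```python
import math

# Ladder 11 m long, foot 4 m from the wall: height up the wall = sqrt(11^2 - 4^2)
height = math.sqrt(11**2 - 4**2)
print(round(height, 1))  # 10.2 (metres, to the nearest tenth)
```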
Partial derivative
k tnx for ur help
Partial derivative
Hello, I am trying to do a problem on partial derivatives. I know the partial derivative of 3x+2y with respect to x would be 3 and with respect to y would be 2. But what if it is the square root of 3x+2y, what is
the partial derivative of it with respect to x and y
actually, in addition to the question i posted a few minutes ago, can someone please help me with a few actual specific questions? thanks so much i'm very confused and i've been working on these
all day. *Literal Equations* ~solve for x ax + bx = 8ab ~solve for y my ...
thank you so much that was so helpful i wish i could repay you! since you seem to know what you are doing can you take a look at the more recent question i posted because i am getting anxious that
nobody else will answer my comment and i'm SO stuck on the problems!
what does it mean when you are solving an equation, and, the variable that you are trying to solve for cancels itself out?
CALC..plz help
hello again..i have another question... it says sketch the curve <3cost,t,2sint> i know the parametric equations are x=3cost y=t z=2sint but what do i do to sketch them..plz help me
Vector Calculus
Hello...I had a question. I would really appreciate it if someone could help me...thanks in advance! My question is: find T(t), N(t), B(t) for r(t)=<2sint,2cost,t>; t=π/4 ...which are the normal
vector, the binormal vector, and i am not sure about T(t)...anyhow...i have the...
Given: line segment AD is congruent to line segment CD, <1 is congruent to <2 Prove: triangle ADB is congruent to triangle CDB
select a web site that contains e-commerce applications. What are the advantages and disadvantages in using this web site?
62.8 m/min
What fallacy type is this? In one of her columns, Abigail Van Buren printed the letter of "I'd rather be a widow." The letter writer, a divorcee, complained about widows who said they had a hard
time coping. Far better, she wrote, to be a widow than to be a div...
list as many ways as u can how are crystalline solids connected to science? how are crystalline solids connected to math? how are crystalline solids connected to the world? how are crystalline solids
connected to you? why is it important?
Although most salamanders have four legs, the aquatic salamander resembles an eel. It lacks hind limbs and has very tiny forelimbs. Explain how limbless salamanders evolved according to Darwin's
theory of natural selection.
Tomorrow i will be interviewed by some highschools and i want to have a list of questions ready to ask them. Does anyone know any good websites where i can get questions to ask? Thanks for helping
prove Given: line r is parallel to line s prove: <1 is congruent to <2
determine which lines or segments are parallel. Give a reason for your answer. measure of angle 5= 65 degrees measure of angle 9= 65 degrees
how do you change fractions into decimals?
i am an eighth grader and i need help because we have to write about how there is a loyalist and a patriot and how they are arguing about why they should have a war and why they shouldn't and also why
they should declare independence? but it has to include the boston tea party and ...
physical science
how does matter connect to science? how does matter connect to the world? how does matter connect to you? how does matter connect to math?
im guessing but 7(2x-6)=x-1
Hum 130
Are the indigenous religions still being practiced today? If so, how have the practices changed over time?
Hum 130
How modern civilization has impacted the spiritual lives of the indigenous peoples?
1.Describe how a population differs from a community using your own examples. 2.Describe how an ecosystem differs from a community using your own examples. Please please help. thanks.
percentages: to use a machine, you must do work on the machine, called input work. the machine then does work on an object, called output work. not all the input work gets transmitted to output
work. some work is lost due to friction in the machine. if a machine has an inp...
percentages: a serving of cereal has 22 calories from fat. twenty percent of the cereal's calories are from fat. how many total calories are in a serving of this cereal?
How fast is a bicycle traveling in feet per second if a wheel has a 21-in. diameter and the angular speed of the wheel is 33 radians per second? The speed of the bicycle is the same as the linear
speed of a wheel.
What is the angular speed of a 19-in. diameter bicycle wheel if the bicycle is traveling at 29 ft/s?
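Both wheel questions above come down to v = r·ω, with the radius converted to consistent units. A quick sketch (the function names are mine, not from the thread):

```python
def linear_speed_ft_s(diameter_in, omega_rad_s):
    # v = r * omega; radius converted from inches to feet (12 in = 1 ft)
    radius_ft = (diameter_in / 2) / 12
    return radius_ft * omega_rad_s

def angular_speed_rad_s(diameter_in, v_ft_s):
    # omega = v / r
    radius_ft = (diameter_in / 2) / 12
    return v_ft_s / radius_ft

v = linear_speed_ft_s(21, 33)    # 21-in wheel at 33 rad/s -> 28.875 ft/s
w = angular_speed_rad_s(19, 29)  # 19-in wheel at 29 ft/s -> about 36.6 rad/s
```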
Approximately 3 out of every 10 students at Greg's college live on campus. The college has 6,000 students. How many of them live on campus?
the coordinates of the endpoints A and B of segment AB are 6 and 21, respectively. find the coordinate of point C between A and B such that AC = 2/3(CB)
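The segment question can be checked directly: since AC + CB = B − A and AC = (2/3)CB, we get CB = (B − A)/(1 + 2/3) = 9, so AC = 6 and C sits at coordinate 12. A sketch (helper name is mine):

```python
from fractions import Fraction

def point_between(a, b, ratio):
    # C between A and B with AC = ratio * CB:
    # AC + CB = b - a, so CB = (b - a) / (1 + ratio) and C = a + ratio * CB
    cb = Fraction(b - a) / (1 + ratio)
    return a + ratio * cb

c = point_between(6, 21, Fraction(2, 3))  # C is at coordinate 12
```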
The directors and their wives met the stockholders for a night on the town. rewrite this sentence.
Math game called 24
Art... Complementary colors
That's what I was wondering...
TC = 50 + 16Q - 2Q^2 + 0.2Q^3 a. plot this curve for quantities 1 to 10 b. calculate the average total cost, average variable cost, and marginal cost for these quantities, and plot them on another graph c.
discuss your results in terms of decreasing, constant, and increasing marginal costs.
Is Tallahassee the midpoint of Pensacola and St. A
Yes! we can set up the equation 3x+40+4x+5=395 and solve to get 7x+45=395 7x=350 so x=50 When we plug it in to check, we get 3(50)+40+4(50)+5=395 so the answer must be correct!
Env Science
Distinguish between high-quality energy and low-quality energy, and give an example of each. Relate these terms to energy efficiency High-quality energy is energy that can be converted to work, with
a high conversion efficiency. Two examples are electrical energy and gravitati...
Env Science
Distinguish between high-quality energy and low-quality energy, and give an example of each. Relate these terms to energy efficiency Since this is not my area of expertise, I searched Google under
the key words "'high quality' 'low quality' energy" to...
I am so lost with this assignment.I hope that someone can help me. For my business class we are to: Plan for an interview for a job you would like to have. Consider how you might manage the résumé,
job application letter, follow-up letter, and interview. C...
business communication
What are seven things you must do in both oral and written messages? You need to have a good idea of what the message is that you are trying to send, have a knowledge of your audience, and
communicate in ways that are clear and understandable for that audience. The other facto...
Consider the choices of Native Americans who decide to stay on their tribe's native land (reservation) and those who choose to relocate to a city. If you were presented with this decision, which
would you choose and why? What I would do is not relevant: My culture and etho...
quotation marks
"Notice," the professor told the class, "Cassius' choice of imagery when he asks, 'upon what meat doth this our Caesar feed that he is grown so great?'" http://ca.answers.yahoo.com/question/index.php?qid=20070524215848AAk2aqq This is the correct punctuation for quotes and quotes ...
I have a table of standard potentials in water that gives a number of reactions and their e*reduction. Say I'm looking for the reaction Al(s) --> Al+3 + 3e-, but the book gives me the reaction Al+3 +
3e- --> Al(s) and a E*red value of -1.68. If this were to react wit...
Chem--- last lab question!!
My lab partner performed one section of a lab on her own while I worked on something else. This section involved first adding a drop of 6 M NaOH to Zn(NO3)2. A precipitate forms. This is repeated in
three separate test tubes. In each test tube, a different substance is added. ...
I have a few questions about an equilibrium lab we performed. Approximately .1 g of CoCl2 * 6H2O was mixed with 2 mL of 12 M HCl. The solution turned blue. Then, water was added and stirred into the
solution in 2 mL increments until no further color change occurred- the final co...
We performed a lab in class about a month ago concerning different types of equilibrium. One particular portion of the lab adding 5 mL of .3M HCl to 5 mL of .3 M Pb(NO3)2. After a precipitate finally
formed, 8 mL of water was added to dissolve PbCl2. One of my lab questions as...
What type of bond exists between molecules in a homogeneous mixture such as air, sugar water etc is known as a(n)
Elements that tend to lose electrons to achieve the electron dot structure of noble gases below them in atomic number on the periodic table are known as metals?
Explain how each of these affects climate. -latitude -ocean currents -winds and air masses -elevation(altitude) -relief(mountain barriers) -nearness to water Please help. I need straightforward
answers. I read some info online about these...but I do not understand. So can you p...
Couple of questions. Why does the earth keep moving? The engine of a rocket applies a continual force to push the rocket along. When the engine stops what will happen? I think the rocket will stop
accelerating. You need to be aware of the concept of inertia. Inertia is the tend...
2 cars of equal mass m collide at an intersection. Driver E traveled eastward and driver N northward. After the collision, 2 cars remain joined together and slide, before coming to a rest. Police
measured the skid mark length d to be 9metres. Coefficient of friction is 0.9. ba...
how do you find the area of a hexagon? Here is the formula for a regular hexagon: Area = (sqrt(3)/2) * W^2, where sqrt = square root and W = the smallest width of the hexagon.
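That width formula agrees with the more common side-length formula A = (3·sqrt(3)/2)·s², because the smallest width of a regular hexagon (across the flats) is sqrt(3) times the side. A quick consistency check:

```python
import math

def hexagon_area_from_width(w):
    # A = (sqrt(3)/2) * w**2, with w the smallest width (across the flats)
    return (math.sqrt(3) / 2) * w ** 2

def hexagon_area_from_side(s):
    # standard formula in terms of the side length s
    return (3 * math.sqrt(3) / 2) * s ** 2

# width across the flats of a regular hexagon is sqrt(3) * side
```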
a chemist is asked to identify 2 solutions whose labels have peeled off. One is known to contain 1.0 mol of NaCl, the other 1 mol of Na2CO3. Both solutes are dissolved in 1 kg of water. If the
chemist measures the freezing point of each solution, can he or she identify which is wh...
French. Please edit this for me.
This is the english version: She targets men. She gives them the kiss of death. Her favourite targets are lying, cheating boyfriends and husbands The french translation I got online is: Elle cible
des hommes. Elle leur donne le baiser de mort. Ses cibles préfér&e...
factoring numbers
If 6 and 10 are factors of a number, name the four other factors of the number. 5, 3, 2, 12 - how do you get that?
I need help rounding 423,607,492 to the following place values. ten thousands million Please tell us what you think, and we'll be glad to comment on your answers. Starting at the ten thousands digit,
it is now 07,492, so rounding it to the nearest, is one ten thousand 10,0...
how many ions do each produce when in a 1.0M solution CH3COOH Ca(NO3)2 NaNO3 NaCl H2SO4 To figure out this you have to be able to dissociate each one and multiply the amount of ions you get by the
molarity. I believe NaCl would look like: Na+ + Cl- and because the molarity is ...
common factors
I don't understand common factors. can someone show me how to find the greatest common factor of 385 and 1365? 385 = 5 x 7 x 11 1365 = 3 x 5 x 7 x 13 which factors are found in both? 5 x 7 so 35 is
the HCF, (it's like taking the intersection of the elements of two sets...
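The "intersection of prime factors" idea in the answer above is exactly how a greatest-common-factor routine can work, and Python's built-in `math.gcd` gives the same result:

```python
import math

def prime_factors(n):
    # trial division; returns the prime factorization as a list
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# 385 = 5 * 7 * 11 and 1365 = 3 * 5 * 7 * 13 share the factors 5 and 7
gcf = math.gcd(385, 1365)  # 35
```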
prime numbers
Which of these is a prime number? 407, 409, 411, or 413 409 how did you get that? Suppose you are given some number N. If x<sqrt[N] then N/x will be larger than sqrt[N]. This means that if N has any
factor other than 1 and itself, it must have one that is smaller than or equal to the square root of N. This follows beca...
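The square-root argument above translates directly into a trial-division primality test:

```python
import math

def is_prime(n):
    # a composite n must have a factor no larger than sqrt(n),
    # so trial division only needs to go that far
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

primes = [n for n in (407, 409, 411, 413) if is_prime(n)]  # [409]
```

Indeed 407 = 11 × 37, 411 = 3 × 137, and 413 = 7 × 59, so 409 is the only prime in the list.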
Can anyone help me come up with an engaging title for my research paper regarding mood swings during pregnancy? Baby Blues? Ups and Downs of Pregnancy Many Months -- Many Moods
chemistry structures
what is the structure for heptanoic acid i need to draw it CH3CH2CH2CH2CH2CH2COOH http://en.wikipedia.org/wiki/Enanthic_acid Molarity: Calculate the molarity of 9.25 mol HCl in 2.25 L of solution.
mols/L = Molarity
what does amines plus acids yield? salts. RNH2 + HCl ==> RNH2*HCl ==> RNH3Cl
algebra 1
in the equation 5x+y=-1,m= I will be happy to critique your work on this. Please note that we don't do students' homework for them. Once YOU have come up with attempts to answer YOUR questions,
please re-post and let us know what you think. Then someone here will be ha...
algebra 1
factor 2x^3y^2 + 12xy divided by 2xy. Divide 2xy into each term, term by term: x^2y + 6
algebra 1
factor 6ab-8b: Common factor: 2b 2b (3a-4)
algebra 1
the difference between 2x^2 + 4xy - 3 and x^2 - 2xy - 4 is I will be happy to critique your work on this.
algebra 1
change the equation x-y=4 to the form y=mx+b I will be happy to critique your work on this. y = x - 4
algebra 1
in the equation y=3x-4, a point on the line is I will be happy to critique your work on this. When x is zero, what is y? (1,-1)
algebra 1
in equation 5x+y=-1, m = ? I will be happy to critique your work on this. Solving for y gives y = -5x - 1, so m = -5.
I am having trouble with -4 less than or equal to -3x-13 less than or equal to 26 -4 < -3x-13 < 26 Add 13 to all three parts of the inequality. 9 < -3x < 39 Divide all three parts by -3; dividing by a
negative number reverses the inequalities. -3 > x > -13, that is, -13 < x < -3 The < signs should all be "less than or equal, but I can...
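Worth flagging in the worked answer above: dividing all three parts of 9 ≤ −3x ≤ 39 by −3 reverses the inequalities, giving −13 ≤ x ≤ −3. A quick numeric check:

```python
def satisfies(x):
    # the original compound inequality
    return -4 <= -3 * x - 13 <= 26

# endpoints of -13 <= x <= -3 work; values from the un-flipped
# range 3 <= x <= 13 do not
assert satisfies(-13) and satisfies(-3)
assert not satisfies(3) and not satisfies(13)
```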
A train company has signed a contract to deliver a new fuel driven train 3 years from now. The price they will receive at the end of 3 years is $20 million. If the firm's cost of capital is 6%, what
is the present value of this payment? A train company has signed a contrac...
organic chemistry
one mole of triphenylmethanol lowers the freezing point of 1000g of 100% sulfuric acid twice as much as one mole of methanol. How do you account for this observation? the only thing I can think of
that would explain that would be intermolecular forces. Perhaps that has someth...
how long it would take to pay off a credit card balance of $1000 if you pay the minimum of $35 a month at a flat yearly interest rate of 18%. Is interest charged monthly? Compounded? Under monthly
compounding, the answer is three years and two months. http://calculators.intere...
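The quoted figure of about three years and two months can be reproduced with a month-by-month simulation, assuming the 18% is charged monthly (1.5% per month) on the unpaid balance:

```python
def months_to_pay(balance, payment, annual_rate):
    # each month: add one month of interest, then subtract the payment
    monthly_rate = annual_rate / 12
    months = 0
    while balance > 0:
        balance = balance * (1 + monthly_rate) - payment
        months += 1
    return months

n = months_to_pay(1000, 35, 0.18)  # 38 months, i.e. 3 years and 2 months
```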
An important process for the production of acrylonitrile C3H3N is given by the following reaction 2C3H6 + 2NH3 + 3O2 --> 2C3H3N + 6H2O A 150-L reactor is charged to the following partial pressures
at 25 C P(C3H6) = .5 MPa P(NH3) = .8 MPa P(O2) = 1.5 MPa What mass of acrylo...
What is the equation for the line of symmetry for the graph of this function? function y = x^2 - 4x - 5, if you have a graphing calculator you can put it in there, or just graph it on a piece of paper
using a T chart. find it by one of those.
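No graphing is actually needed here: for y = ax² + bx + c the vertical line of symmetry is x = −b/(2a), which gives x = 2 for this function. A quick check:

```python
def axis_of_symmetry(a, b):
    # the line of symmetry of y = a*x**2 + b*x + c is x = -b / (2a)
    return -b / (2 * a)

x0 = axis_of_symmetry(1, -4)  # x = 2 for y = x^2 - 4x - 5

def f(x):
    return x ** 2 - 4 * x - 5

# points equally spaced about x = 2 have equal y values
```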
, 2000
Cited by 139 (16 self)
Abstract: This is a history of relevant and substructural logics, written for the Handbook of the History and Philosophy of Logic, edited by Dov Gabbay and John Woods. 1 1
, 1999
Cited by 16 (0 self)
Motivated by description logics, we investigate what happens to the complexity of modal satisfiability problems if we only allow formulas built from literals, ∧, ✸, and ✷. Previously, the only known
result was that the complexity of the satisfiability problem for K dropped from PSPACE-complete to coNP-complete (Schmidt-Schauss and Smolka [8] and Donini et al. [3]). In this paper we show that not
all modal logics behave like K. In particular, we show that the complexity of the satisfiability problem with respect to frames in which each world has at least one successor drops from
PSPACE-complete to P, but that in contrast the satisfiability problem with respect to the class of frames in which each world has at most two successors remains PSPACE-complete. As a corollary of the
latter result, we also solve the open problem from Donini et al.’s complexity classification of description logics [2]. In the last section, we classify the complexity of the satisfiability problem
for K for all other restrictions on the set of operators. 1
, 1997
Cited by 12 (4 self)
Although negation-free languages are widely used in logic and computer science, relatively little is known about their expressive power. To address this issue we consider kinds of non-symmetric
bisimulations called directed simulations, and use these to analyse the expressive power and model theory of negation-free modal and temporal languages. We first use them to obtain preservation,
safety and definability results for a simple negation-free modal language. We then obtain analogous results for stronger negation-free languages. Finally, we extend our methods to deal with languages
with non-Boolean negation. Keywords: Expressive power, modal logic, negation-free languages. 1
, 1996
Cited by 11 (3 self)
We present a framework for machine implementation of families of non-classical logics with Kripke-style semantics. We decompose a logic into two interacting parts, each a natural deduction system: a
base logic of labelled formulae, and a theory of labels characterizing the properties of the Kripke models. By appropriate combinations we capture both partial and complete fragments of large
families of non-classical logics such as modal, relevance, and intuitionistic logics. Our approach is modular and supports uniform proofs of correctness and proof normalization. We have implemented
our work in the Isabelle Logical Framework.
- Coalgebraic Methods in Computer Science (CMCS’03), volume 82.1 of ENTCS , 2002
Cited by 8 (2 self)
Positive Modal Logic is the restriction of the modal local consequence relation defined by the class of all Kripke models to the propositional negation-free modal language. The class of positive
modal algebras is the one canonically associated with PML according to the theory of Abstract Algebraic Logic. In [4], a Priestley-style duality is established between the category of positive modal
algebras and the category of K-spaces. In this paper, we establish a categorical equivalence between the category of K-spaces and the category Coalg(V) of coalgebras of a suitable endofunctor V on the
category of Priestley spaces. 2000 Mathematics Subject Classification: 06D22. Keywords and Phrases: Positive Modal Logic, Positive Modal Algebra, Priestley space, coalgebra, Vietoris space,
equivalence of categories.
- Annals of Mathematics and Artificial Intelligence , 2007
Cited by 8 (4 self)
This paper is an overview of a variety of results, all centered around a common theme, namely embedding of non-classical logics into first order logic and resolution theorem proving. We present
several classes of non-classical logics, many of which are of great practical relevance in knowledge representation, which can be translated into tractable and relatively simple fragments of
classical logic. In this context, we show that refinements of resolution can often be used successfully for automated theorem proving, and in many interesting cases yield optimal decision procedures.
, 2002
Cited by 4 (2 self)
We give a uniform presentation of representation and decidability results related to the Kripke-style semantics of several nonclassical logics. We show that a general representation theorem (which
has as particular instances the representation theorems as algebras of sets for Boolean algebras, distributive lattices and semilattices) extends in a natural way to several classes of operators and
allows us to establish a relationship between algebraic and Kripke-style models. We illustrate the ideas on several examples. We conclude by showing how the Kripke-style models thus obtained can be used
(if first-order axiomatizable) for automated theorem proving by resolution for some non-classical logics.
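For a concrete flavour of such embeddings (this sketch is mine, not from the paper): the classical "standard translation" maps modal formulas to first-order formulas over an accessibility relation R, with the box becoming a universal quantifier over R-successors and the diamond an existential one.

```python
def standard_translation(formula, world="x"):
    # formulas are tuples: ("atom", "P"), ("not", f), ("and", f, g),
    # ("box", f), ("dia", f)
    op = formula[0]
    if op == "atom":
        return f"{formula[1]}({world})"
    if op == "not":
        return f"~({standard_translation(formula[1], world)})"
    if op == "and":
        lhs = standard_translation(formula[1], world)
        rhs = standard_translation(formula[2], world)
        return f"({lhs} & {rhs})"
    y = world + "'"  # fresh world variable for the modal cases
    if op == "box":
        return f"forall {y} (R({world},{y}) -> {standard_translation(formula[1], y)})"
    if op == "dia":
        return f"exists {y} (R({world},{y}) & {standard_translation(formula[1], y)})"
    raise ValueError(op)

fol = standard_translation(("box", ("atom", "P")))
```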
, 1996
Cited by 2 (2 self)
We present a framework for machine implementation of both partial and complete fragments of large families of non-classical logics such as modal, relevance, and intuitionistic logics. We decompose a
logic into two interacting parts, each a natural deduction system: a base logic of labelled formulae, and a theory of labels characterizing the properties of the Kripke models. Our approach is
modular and supports uniform proofs of correctness and proof normalization. We have implemented our work in the Isabelle Logical Framework. 1 INTRODUCTION The origins of natural deduction (ND) are
both philosophical and practical. In philosophy, it arises from an analysis of deductive inference in an attempt to provide a theory of meaning for the logical connectives [24, 33]. Practically, it
provides a language for building proofs, which can be seen as providing the deduction theorem directly, rather than as a derived result. Our interest is on this practical side, and a development of
our work on ap...
- The Completeness of Propositional Dynamic Logic. LNCS 64:403–415 , 1978
Cited by 1 (1 self)
We develop an algebraic modal logic that combines epistemic modalities with dynamic modalities with a view to modelling information acquisition (learning) by automated agents in a changing world.
Unlike most treatments of dynamic epistemic logic, we have transitions that “change the state ” of the underlying system and not just the state of knowledge of the agents. The key novel feature that
emerges is the need to have a way of “inverting transitions” and distinguishing between transitions that “really happen ” and transitions that are possible. Our approach is algebraic, rather than
being based on a Kripke-style semantics. The semantics are given in terms of quantales. We study a class of quantales with the appropriate inverse operations and prove soundness and completeness
theorems. We illustrate the ideas with a simple game as well as a toy robot-navigation problem. The examples illustrate how an agent discovers information by taking actions. 1
, 2004
This paper is an exploration in the light of modal logic of Dunn’s ideas about two treatments of negation in non-classical logics: perp and star. We take negation as an impossibility modal operator
and choose the base positive logic to be distributive lattice logic (DLL). It turns out that, if we add one De Morgan law and contraposition to DLL (call this system K−), then we can prove a natural
completeness and hence treat perp in this modal setting. Moreover, star can be dealt with in the extensions of K−. Based on these results, a complete table of star and perp semantics for Dunn’s kite
of negations is given. In the last section, we discuss perp and star in relevance logic and their related logics. The Routley star is interpreted at the end of this paper. Keywords: perp, Routley
star, modal logic, relevance logic, Meyer-Routley semantics
16.3 Steady-State One-Dimensional Conduction
Figure 16.3: One-dimensional heat conduction
For one-dimensional heat conduction (temperature depending on one variable only), we can devise a basic description of the process. The first law in control volume form (steady flow energy equation)
with no shaft work and no mass flow reduces to the statement that $\dot{Q}$ is the same for all surfaces (no heat transfer on top or bottom of Figure 16.3). From Equation (16.6), the heat transfer rate in at the left (at $x$) is

$$\dot{Q}(x) = -\left(kA\frac{dT}{dx}\right)_x. \qquad (16.9)$$

The heat transfer rate on the right (at $x + dx$) is

$$\dot{Q}(x + dx) = \dot{Q}(x) + \left.\frac{d\dot{Q}}{dx}\right|_x dx + \cdots \qquad (16.10)$$

Using the conditions on the overall heat flow and the expressions in (16.9) and (16.10),

$$\dot{Q}(x) = \dot{Q}(x) + \frac{d}{dx}\left(-kA\frac{dT}{dx}\right)dx + \cdots$$

Taking the limit as $dx$ approaches zero we obtain

$$\frac{d}{dx}\left(kA\frac{dT}{dx}\right) = 0.$$

If $k$ is constant (i.e. if the properties of the bar are independent of temperature), this reduces to

$$\frac{d}{dx}\left(A\frac{dT}{dx}\right) = 0, \qquad (16.14)$$

or (using the chain rule)

$$\frac{d^2T}{dx^2} + \left(\frac{1}{A}\frac{dA}{dx}\right)\frac{dT}{dx} = 0. \qquad (16.15)$$

Equation (16.14) or (16.15) describes the temperature field for quasi-one-dimensional steady state (no time dependence) heat transfer. We now apply this to an example.

16.3.1 Example: Heat transfer through a plane slab

Figure 16.4: Temperature boundary conditions for a slab

For this configuration (Figure 16.4), the area is not a function of $x$, i.e. $A = \text{constant}$. Equation (16.15) thus becomes

$$\frac{d^2T}{dx^2} = 0. \qquad (16.16)$$

Equation (16.16) can be integrated immediately to yield

$$\frac{dT}{dx} = a \qquad (16.17)$$

and

$$T = ax + b. \qquad (16.18)$$

Equation (16.18) is an expression for the temperature field where $a$ and $b$ are constants of integration. For a second order equation, such as (16.16), we need two boundary conditions to determine $a$ and $b$.
One such set of boundary conditions can be the specification of the temperatures at both sides of the slab as shown in Figure 16.4, say $T(0) = T_1$; $T(L) = T_2$.

The condition $T(0) = T_1$ implies that $b = T_1$. The condition $T(L) = T_2$ implies that $T_2 = aL + T_1$, or $a = (T_2 - T_1)/L$. With these expressions for $a$ and $b$ the temperature distribution can be written as

$$T(x) = T_1 + \left(\frac{T_2 - T_1}{L}\right)x.$$

This linear variation in temperature is shown in Figure 16.5 for a situation in which $T_1 > T_2$.

Figure 16.5: Temperature distribution through a slab

The heat flux $\dot{q}$ is also of interest. This is given by

$$\dot{q} = -k\frac{dT}{dx} = -k\left(\frac{T_2 - T_1}{L}\right) = \text{constant}.$$
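The slab solution can be checked numerically; a minimal sketch, using the text's symbols with face temperatures T1 at x = 0 and T2 at x = L:

```python
def slab_temperature(x, T1, T2, L):
    # linear steady-state profile: T(x) = T1 + (T2 - T1) * x / L
    return T1 + (T2 - T1) * x / L

def slab_heat_flux(k, T1, T2, L):
    # q = -k dT/dx = k (T1 - T2) / L, positive toward +x when T1 > T2
    return k * (T1 - T2) / L

# example: 0.1 m slab, k = 50 W/(m K), faces at 400 K and 300 K
q = slab_heat_flux(50.0, 400.0, 300.0, 0.1)  # 50 kW/m^2
```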
Muddy Points
How specific do we need to be about when the one-dimensional assumption is valid? Is it enough to say that is small? (MP 16.2)
Why is the thermal conductivity of light gases such as helium (monoatomic) or hydrogen (diatomic) much higher than heavier gases such as argon (monoatomic) or nitrogen (diatomic)? (MP 16.3)
Not a Twisty Puzzle
All times are UTC - 5 hours
Post subject: Not a Twisty Puzzle
Richard Posted:
Fri Oct 10, 2003 3:54 am
This is not a Twisty Puzzle, but is an entertaining 'puzzle' aside:
Joined: Mon Aug 18, 2003 11:44 am
Location: Leicester. United Kingdom.
Try this
Perhaps the mathematical bods amongst us could explain to a simpleton like me how this works?
Post subject: Re: Not a Twisty Puzzle
darryl Posted:
Sat Oct 11, 2003 3:54 am
Weird, someone posted this to rec.puzzles the other day. If you go to that newsgroup, there is a discussion about it, the subject is something like: "7-up" puzzle
Joined: Fri Feb 18, 2000 8:50 am
Hope that helps.
Location: chicago, IL area U.S.A
Post subject: Re: Not a Twisty Puzzle
Sandy Posted:
Mon Oct 13, 2003 4:48 am
Here's my half-assed go at an explanation.
Joined: Thu Jan 24, 2002 1:10 am
Location: Toronto, Canada
First off, whatever your number is, after you subtract the scrambled version from the original (or vice versa), one crucial condition has to be met for the problem to work: the subtraction must
result in a non-zero number. They don't really tell you this in the problem, but the game won't work if you choose a number and a scramble of it that subtract to 0.
Next, no matter how you scramble it up, when you subtract the smaller from the larger, the digits in the result will always add up to a multiple of 9. Knowing that, the rest is
easy. To detect which number from 1-9 was removed from a group of digits, simply calculate the difference between the sum of the digits and the next highest multiple of 9.
Why does the sum of the digits in the difference always add up to 9? I dunno!
Post subject: Re: Not a Twisty Puzzle
TM-curtmack Posted:
Tue Oct 14, 2003 4:48 am
The sum of the digits of a multiple of 9 always adds up to a multiple of 9, because of a strange quirk in algebra that mathematicians call 'casting out nines.' The condensed version of
this law, is that whenever you start subtracting a bunch of nines from any two-or-more digit number, you'll eventually get the sum of the digits of your original number.
So, the trick works like this: When you do the first bit of subtraction, you get a multiple of nine. This means that the digits have to add up to a multiple of nine. (Why? Here's a hint:
Every time you subtract nine from a multiple of nine, you still have a multiple of nine...)
When you remove a non-zero digit, you end up with another number. You enter in this number. It first checks to see if the digits add up a multiple of nine. If so, your number has to be
nine, because of casting out nines and the fact that you can't remove a zero. Otherwise, it finds out the smallest multiple of nine that is greater than the sum it generated, and uses
simple subtraction to figure out your number.
If you didn't follow a word of that, don't worry... it's not that important.
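In code, the whole trick is only a few lines (a sketch I wrote to check the thread's explanation); the key fact is that a number and any scramble of its digits are congruent mod 9, so their difference is a multiple of 9:

```python
def missing_digit(remaining_digits):
    # the removed digit is the gap up to the next multiple of 9;
    # if the remaining digits already sum to a multiple of 9, the
    # removed (non-zero) digit must have been 9 itself
    r = sum(remaining_digits) % 9
    return 9 if r == 0 else 9 - r

# 7421 scrambled to 2147: the difference 5274 is a multiple of 9
diff = 7421 - 2147
assert diff % 9 == 0

# remove the digit 5 from 5274, leaving 2, 7, 4 -> the trick recovers 5
guess = missing_digit([2, 7, 4])
```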
This journal is archived by the American Mathematical Society. The master copy is available at http://www.ams.org/era/
Invariant sets with zero measure and full Hausdorff dimension
Luis Barreira and Jörg Schmeling
For a subshift of finite type and a fixed Hölder continuous function, the zero measure invariant set of points where the Birkhoff averages do not exist is either empty or carries full Hausdorff
dimension. Similar statements hold for conformal repellers and two-dimensional horseshoes, and the set of points where the pointwise dimensions, local entropies, Lyapunov exponents, and Birkhoff
averages do not exist simultaneously.
Copyright 1997 American Mathematical Society
Article Info
• ERA Amer. Math. Soc. 03 (1997), pp. 114-118
• Publisher Identifier: S 1079-6762(97)00035-8
• 1991 Mathematics Subject Classification. Primary 58F15, 58F11
• Received by the editors September 2, 1997
• Posted on October 29, 1997
• Communicated by Svetlana Katok
Luis Barreira
Departamento de Matemática, Instituto Superior Técnico, 1096 Lisboa, Portugal
E-mail address: barreira@math.ist.utl.pt
URL: http://www.math.ist.utl.pt/~barreira/
Jörg Schmeling
Fachbereich Mathematik und Informatik, Freie Universität Berlin, Arnimallee 2-6, D-14195 Berlin, Germany
E-mail address: shmeling@math.fu-berlin.de
Luis Barreira was partially supported by the projects PRAXIS XXI, 2/2.1/MAT/199/94 and JNICT, PBIC/C/MAT/2140/95. Jörg Schmeling was supported by the Leopoldina-Förderpreis.
P. Cousot, Méthodes itératives de construction et d'approximation de points fixes d'opérateurs monotones sur un treillis, analyse sémantique des programmes [Iterative methods for constructing and approximating fixed points of monotone operators on a lattice; semantic analysis of programs].
Semantic program analysis consists in the determination of the conditions in which the run-time execution of a program terminates, does not terminate or leads to an error (whether because the rules
of good usage of a programming language have not been respected, or because the program does not correspond to its specification). The semantic analysis of a program must also allow us to determine,
in each point of the program, the properties of the objects manipulated by the program.
We propose a theory of the semantic analysis of programs, providing a unified framework to carry out some analyses, from the most precise ones, like the ones that are performed to justify the total
correctness of programs, to the coarsest, like the ones that are used in compilation. In our opinion, there is no lack of continuity between these two extremes, and the theory we propose allows the
construction of a continuous range of applications, from exact analysis to the most approximate analyses. Since we care about practical applications, we have devoted part of our efforts to build up a
model leading to automatized solutions, (some automatizations having effectively been realized), to economically relevant problems.
• In most systems for program verification, the semantic analysis of the program to be justified must be done by the programmer, who has to provide a documentation of the program, often heavy
because of the amount of details. Now, if we exclude the specific issue describing the problem to solve, a fair amount of this documentation can be offered by the text of the program (with the
certainty that this documentation and the program itself are in accordance).
• The program debugging techniques, still widely used in computer software industry can be partly avoided (at least for the programming mistakes, if not for the design mistakes), using our ideas
for an automatic semantic analysis of the programs, and this, without waiting for the ten, or more, years, necessary for the theorem prover based techniques of program verification to be made
practical. On the other hand, it is for certain that the methods we propose are complementary and offer, for some kinds of analyses, a very profitable cost/benefit ratio.
• In high-level languages, the programmer is encouraged to formulate his/her algorithms in abstract terms, appropriate to the problem to be solved. To make an automatic choice for an effective
program implementation, one has to make a rather precise semantic analysis of it.
• Almost all the definitions of classic programming languages contain various restrictions which are necessary for the programs to be meaningful, but which generally cannot be checked
syntactically. One should in fact know the domain of variable values. The classic solution of runtime tests is generally considered unacceptable because of its cost. Only an automatic semantic
analysis of programs can provide an economically profitable solution.
• Most of the optimization techniques used in the compilation of programs can only be implemented when the conditions ensuring the equivalence between the transformed program and the original one
are satisfied, as well as a real improvement in their performance. When there is a doubt, the classical option is that of considering the most pessimistic hypothesis, but a more in-depth semantic
analysis of the program may allow us to avoid this.
Generally, the development of a theory for the semantic analysis of programs leading to automatized applications, is justified by the technical resolution of the software reliability and software
efficiency problems. It is, in our opinion, complementary to the efforts, which are currently made to turn the art of programming into the status of a science.
Let us now give a very short summary of the content of this thesis:
We will reduce the problem of the determination of semantic properties of a program, to the problem of building up the extreme fixed points of monotone operators on a complete lattice. After this
introduction, the second chapter is then dedicated to the mathematical study of fixed-point theorems in complete lattices. We provide a constructive demonstration of Tarski's theorem, showing that
the set of fixed points of a monotone operator F on a complete lattice L is the image of L by the pre-closures defined by means of transfinite iteration limits. It is a matter of showing how classic
iterative methods can be adapted to converge starting from any starting point and also, to reach fixed points other than the least and the greatest one. This also allows us to define the union and
the intersection in the lattice of the fixed points of F in a constructive way, that is by recurrences using F. We then obtain, as a particular case, the theorem of construction of the least fixed
point of a continuous operator. We will then consider some systems of monotone fixed-point equations in a complete lattice. After recalling the formal resolution method by elimination of variables,
we will prove a convergence result for chaotic iterative methods, both asynchronous and asynchronous with memory. This opens the way to solving systems of monotone equations on a lattice,
using different processors, calculating in parallel, without the necessity of any synchronization.
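The constructive content of the chapter can be sketched in a few lines: on a finite lattice, the ascending Kleene iteration ⊥, F(⊥), F(F(⊥)), ... reaches the least fixed point of a monotone operator F. The toy lattice below (subsets of {0, 1, 2, 3} ordered by inclusion) is our own illustration, not the thesis's notation.

```python
def least_fixed_point(f, bottom):
    """Ascending Kleene iteration: bottom, f(bottom), f(f(bottom)), ...
    For a monotone f whose ascending chains stabilize (e.g. on a
    finite lattice), this reaches the least fixed point."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Toy lattice: subsets of {0, 1, 2, 3} ordered by inclusion.
# F is monotone: it adds 0 and the successor of every element < 3.
F = lambda s: frozenset({0} | {n + 1 for n in s if n < 3})
print(sorted(least_fixed_point(F, frozenset())))  # [0, 1, 2, 3]
```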
In the third chapter, the problem of semantic analysis of programs is studied independently of the problems of language definition, within the very general framework of studying the behavior of a
discrete dynamic system. A program is a discrete dynamic system insofar as it defines a transition relation (or a transition function, if it is deterministic) between the memory states preceding
or following the execution of any elementary instruction. To study the behavior of a discrete dynamic system, it is necessary to characterize the set of reachable states satisfying a given entry
specification, or else, to characterize the set of ascendants satisfying a given exit specification. In other words, it is necessary to determine the weakest pre-condition, regarding the entry
states, so that the system may evolve towards a state satisfying a given postcondition, or the strongest post-condition characterizing the states towards which the system evolves, starting from any
entry state, satisfying a given pre-condition. We will show that these conditions are obtained as solutions of fixed-points equations or of equation systems, when the set of states of the dynamic
discrete system is partitioned. We will then formalize the operational semantics of a simple programming language, corresponding to sequential iterative programs, and we will show how a program
defines a discrete dynamic system. We will then apply the results obtained by the analysis of discrete dynamic systems behavior to the semantic analysis of programs. This leads us to define forward
and backward deductive semantics of programs, generalizing the classic forward program verification method of Floyd-Naur, and the backward method of Hoare-Dijkstra, to techniques for formalizing the
semantics of programming languages. In fact, the forward and backwards deductive semantics define the conditions in which a program is correctly carried out, is not carried out at all, or leads to an
error as a solution of semantic equation systems, associated to the program. Both semantics can be used to characterize, in each point of the program, the set of descendants of the entry states and
the set of ascendants of the exit states. As a consequence, they are equivalent, as both allow us to make an exact semantic analysis of programs.
After having shown that the exact semantic analysis of programs consists in solving equation systems, keeping in mind that the solutions to these equations are not automatically computable, and
wishing, at the same time, to find some automatic techniques of semantic analysis of programs, we are forced to limit ourselves to approximate automatic analyses of programs. So in chapter four we
will study some calculation methods for approximating fixed points of monotone operators on a lattice. To effectively calculate lower and upper approximations of the solutions of an equation system,
we essentially propose two complementary methods. They consist, on the one hand, in simplifying the equations to solve and, on the other, in accelerating the convergence of iterative methods of
fixed-point construction. To accelerate the convergence of an iteration which does not stabilize itself naturally in a fixed number of steps, we propose to extrapolate, while calculating the terms of
the sequence of iterates to obtain an approximation of its limit in a finite number of steps. As in iterative methods with convergence acceleration, the simplification of equations is widely used in
numerical analysis but, for what our problems need, we have to study them in a purely algebraic framework. To simplify the semantic equation systems associated to the programs, for each particular
problem of semantic analysis of programs, we propose to ignore a priori certain properties and only keep the program properties which are meaningful for this specific application. From an algebraic
viewpoint, this approximation is formalized by a closure in the domain of the equations to solve, a closure that we will define, in an equivalent way, by a Moore family, as relations of congruence,
or pairs of adjoint functions. Different approximate analyses can be combined by combining the corresponding closures, and in particular the lattice of closures formalizes the creation of a hierarchy
of approximations, depending on their precision.
In chapter five we will develop automatic semantic programs analysis methods, therefore necessarily approximate. In order to perform an approximate semantic analysis of programs, we propose to
compute an approximation of the forward and backward systems of semantic equations associated to this program. Having chosen a particular class of program properties providing useful answers to a
given problem, we will show how the results of chapter four allow us to design an algorithm which can automatically carry out the analysis of any program for this class of properties. The design of
this algorithm is based on the choice of a closure allowing us to define a space of approximate properties, as well as the rules for constructing the simplified equation systems associated with a
program. To solve these equations, we will use an iterative method. Extrapolation operations will also have to be designed when convergence must be accelerated. We will illustrate our approach by
giving some examples of the conception of an approximate semantic analysis of programs. After having briefly examined many different classic examples within program optimization, we will deal with
some applications to the discovery of pointers properties, the determination of the types of variables in high level languages without declarations, the analysis of the interval of values of numeric
variables, and also to the discovery of linear relations of equality or inequality between the variables of a program.
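One of the listed applications, the analysis of the interval of values of a numeric variable, can be sketched concretely: the loop-head invariant of a counting loop is the least solution of a semantic equation, and a widening step stands in for the convergence acceleration of chapter four. The encoding below is our own illustration, not the thesis's notation.

```python
import math

# Analyze "i = 0; while i < 100: i += 1" over the interval lattice.
# The loop-head invariant is the least solution of
#     X = [0,0] join ((X meet (-inf, 99]) + [1,1])
def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def f(x):
    lo, hi = x
    hi_in = min(hi, 99)        # meet with the loop guard i < 100
    if lo > hi_in:             # body unreachable: only the entry [0,0]
        return (0, 0)
    return join((0, 0), (lo + 1, hi_in + 1))

def widen(a, b):
    # extrapolate any unstable bound to infinity so that the
    # iteration terminates in finitely many steps
    return (a[0] if a[0] <= b[0] else -math.inf,
            a[1] if a[1] >= b[1] else math.inf)

x = (0, 0)
while True:                    # ascending iteration with widening
    nxt = widen(x, f(x))
    if nxt == x:
        break
    x = nxt

while f(x) != x:               # decreasing (narrowing) iterations
    x = f(x)

print(x)  # (0, 100): at the loop head, i always lies in [0, 100]
```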
Chapter six deals with recursive procedures, whose analysis is more complex than that of sequential iterative programs, since one must consider functional equations of the form f(x) = F(f)(x) rather than equations of the type x = f(x). We will use the same approach as for iterative sequential programs, by defining a deductive semantics and then introducing approximation methods which, in fact, generalize the study of chapter four to the case of systems of functional equations.
Identifying the 30 – 60 – 90 Degree Triangle
The 30 – 60 – 90 degree triangle is in the shape of half an equilateral triangle, cut straight down the middle along its altitude. It has angles of 30°, 60°, and 90° and sides in the ratio of x : x√3 : 2x (short leg : long leg : hypotenuse).
The following figure shows an example.
Get acquainted with this triangle by doing a couple of problems. Find the lengths of the unknown sides in triangle UMP and triangle IRE in the following figure.
You can solve 30°- 60°- 90° triangles with the textbook method or the street-smart method.
Using the textbook method
The textbook method begins with the ratio of the sides from the first figure: x : x√3 : 2x.
In triangle UMP, the hypotenuse is 10, so you set 2x equal to 10 and solve for x, getting x = 5. Now just plug 5 in for the x’s, and you have triangle UMP:
Plug in the value of x, and you’re done:
Using the street-smart method
Here’s the street-smart method for the 30°- 60°- 90° triangle.
Using that fact, do the following:
• The relationship between the short leg and the hypotenuse is a no-brainer: the hypotenuse is twice as long as the short leg. So if you know one of them, you can get the other in your head.
• If you know the short leg and want to compute the long leg (a longer thing), you multiply by the square root of 3. If you know the long leg and want to compute the length of the short leg (a
shorter thing), you divide by the square root of 3.
Try out the street-smart method with the triangles in the second figure. The hypotenuse in triangle UMP is 10, so first you cut that in half to get the length of the short leg, which is thus 5.
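The two street-smart rules translate directly into code; the function name and return convention here are just for illustration:

```python
import math

def sides_30_60_90(hypotenuse):
    """Return (short leg, long leg, hypotenuse), using the two rules:
    hypotenuse = 2 * short leg, and long leg = short leg * sqrt(3)."""
    short = hypotenuse / 2
    long_leg = short * math.sqrt(3)
    return short, long_leg, hypotenuse

# Triangle UMP from the figure: hypotenuse 10,
# so the short leg is 5 and the long leg is 5 * sqrt(3).
print(sides_30_60_90(10))
```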
The 30°- 60°- 90° triangles almost always have one or two sides whose lengths contain a square root. In either case, the long leg is the odd one out. All three sides could contain square roots, but
it’s impossible that none of the sides would — which leads to the following warning.
Because at least one side of a 30°- 60°- 90° triangle must contain a square root, a 30°- 60°- 90° triangle cannot belong to any of the Pythagorean triple triangle families. So don’t make the mistake
of thinking that a 30°- 60°- 90° triangle is in, say, the 8 : 15 : 17 family or that any triangle that is in one of the Pythagorean triple triangle families is also a 30°- 60°- 90° triangle. There’s
no overlap between the 30°- 60°- 90° triangle and any of the Pythagorean triple triangles and their families.
Static Converter Optimization
C. Larouci, J.P. Ferrieux, L. Gerbaud, and J. Roudet are a group of professors and researchers at the Laboratoire d'Electrotechnique de Grenoble. Their principal research orientations include power electronics structures, electromagnetic compatibility (EMC), and the development of tools dedicated to sizing in electrical engineering.
Simulation programs such as Saber, Pspice, and Simplorer are effective tools for time-domain analysis of power circuits. However, if these structures have an AC input and a high switching frequency
(various time scales), time-domain simulation becomes a long, memory-intensive process before you get a steady-state solution. The use of analytical models is a good compromise between the tool speed
and the model accuracy.
This article presents the time-domain study of the Flyback structure. We will demonstrate the optimization of the total volume of this structure under EMC and loss constraints using analytical models.
Time-Domain Study of a Flyback Circuit
You use the Flyback circuit (Figure 1) as a single-stage converter. To ensure a sinusoidal input current, you use a mixed-control circuit that combines two conduction modes: discontinuous and
continuous. This circuit controls the instantaneous average value of the input current (I1 in Figure 1) to a sinusoidal reference. On the other hand, you obtain the regulation of the output voltage
by using a bulky capacitor C.
Figure 1: Schematic of the flyback circuit
Figures 2 and 3 present the filtered input current and the output voltage simulated by Pspice and Gentiane software with the following parameters:
• Switching frequency: Fs = 50kHz
• Primary inductance: L1 = 2mH
• Transformation ratio: m = 0.5
• Output capacitance: C = 7mF (sized to have 1% ripple at 48V)
• The inductance and the capacitance of the input filter are Lf = 2mH and Cf = 0.1µF.
Figure 2: Input current for the flyback circuit shown in Figure 1
Figure 3: Output voltage for the flyback circuit shown in Figure 1
In this case, you obtain the steady state after three to four hours of simulation with Pspice, which confirms the difficulty of the time-domain study for such an application. The analytical modeling
approach provides results in less time than simulation.
Analytical Method for Optimizing the Flyback Circuit
The optimization technique consists of varying the parameters of interest in an analytical model.
Analytical Model of the Mixed-Control Circuit
During discontinuous conduction of the flyback circuit, the duty cycle a is constant and is a function of the output power Po, the switching frequency Fs, the primary inductance L1, and the input
voltage amplitude Vmax:
However, this duty cycle is variable in the continuous conduction mode according to Equation 2:
The EMC Spectrum Analytical Model
The analytical model of the mixed-control circuit lets you estimate the primary current spectrum (in other words, the current spectrum in the switch). You use this spectrum as a differential-mode
generator to perform frequency modeling.
Figure 4: This schematic shows an equivalent diagram of the Flyback circuit in the differential mode. In this circuit p is the Laplace operator; Z1(p) and Z2(p) are the Line Impedance Stabilization
Networks (LISNs); Z3(p) and Z4(p) are input filters; and Ih(p) is the differential mode generator.
From Figure 4, the EMC spectrum V[LISN](p) in the Laplace space is expressed as:
Figures 5 and 6 present the superposition of the analytically estimated and simulated primary current spectrum and the difference between these two spectrums.
Figure 5: Simulated and analytically estimated results for the primary current spectrum
Figure 6: The difference between the simulated and estimated spectrums shown in Figure 5
The results of Figure 6 validate the analytical estimation of the primary current spectrum for the mixed-control circuit and differential mode generator.
Volume Analytical Models for Optimizing the Objective Function
The volumes of the transformer and input-filter inductor are related by the component winding areas and other parameters. These volumes are expressed with an analytical formula (Equation 4) by
• Electrical quantities (the primary and secondary RMS and maximum current values)
• Technological quantities (the current density and the peak flux density)
• Geometrical quantities (physical coefficients that depend on the magnetic shaping circuit and conductor insulation).
From the constructor abacus data, the volume of the input filter capacitor for the nominal voltage is analytically evaluated by (Equation 5):
Having an analytical model of volumes, the sum is an objective function. The aim of the optimization algorithm is to minimize this objective function under EMC and loss constraints.
Optimizing the Flyback Circuit
The developed analytical models are integrated in an optimization process. The optimization parameters are:
• Primary inductance (L1)
• Transformer ratio (m)
• Inductance and the capacitance of the input filter (Lf and Cf)
• Switching frequency (Fs).
The objective is to seek the best combination of these parameters to minimize the total volume of the structure, meet EMC requirements, and operate with good efficiency (minimize total circuit
losses). These models have been introduced into two environments that offer different optimization algorithms (EDEN, Mathcad).
We use the parameters of the time-domain study as an initial set of values to start optimization (Equation 6). After the optimization procedure, the algorithm converges towards a new set of values (Equation 7) where the EMC constraint is met (Max_EMC_spectrum < 79 dBµV, imposed by the ISM 55011 standard) and circuit volume is 1.66 times smaller with almost the same efficiency.
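The structure of the optimization loop can be sketched as follows. The paper's analytical models (Equations 3 to 5) are not reproduced in this article, so the two functions below are hypothetical placeholders with plausible shapes (larger filter components attenuate the emission peak but cost volume); only the shape of the constrained search reflects the text. A real implementation would substitute the analytical volume and EMC-spectrum models and search over all five parameters (L1, m, Lf, Cf, Fs).

```python
import random

# Hypothetical placeholder models -- NOT the paper's Equations 3-5.
def emc_peak_dbuv(Lf_mH, Cf_uF):
    # larger input-filter values attenuate the conducted-emission peak
    return 120.0 - 12.0 * Lf_mH - 8.0 * Cf_uF

def filter_volume(Lf_mH, Cf_uF):
    # component volume grows with the L and C values
    return 2.0 * Lf_mH + 1.5 * Cf_uF

def optimize(limit_dbuv=79.0, trials=50_000, seed=42):
    """Random search: minimize volume subject to the EMC limit
    (79 dBuV, the ISM 55011-style constraint cited in the text)."""
    random.seed(seed)
    best = None
    for _ in range(trials):
        Lf = random.uniform(0.1, 10.0)    # mH
        Cf = random.uniform(0.01, 5.0)    # uF
        if emc_peak_dbuv(Lf, Cf) > limit_dbuv:
            continue                      # EMC constraint violated
        v = filter_volume(Lf, Cf)
        if best is None or v < best[0]:
            best = (v, Lf, Cf)
    return best

best = optimize()
```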
In addition to in-class projects, Math Modeling graduate students at Humboldt State must complete a thesis project as part of their degree. Projects tend to be practical and directly applicable to a
specific scientific question. This gives students a broader context for the math they are studying and a deeper understanding of the application problem their model can help solve.
Thesis projects allow students to work with other departments on problems to solve. Each student’s thesis committee must have an outside member, whether from another discipline or from outside the
university. And Math Modeling students take courses with those from other programs within environmental systems, expanding the scope of problems their work can help solve. This interdisciplinary
focus is a unique attribute of HSU’s program.
Thesis projects in Math Modeling have consistently been honored with top awards. For his thesis involving a model for landmine removal, Paul Burgess and his thesis advisor Ken Owens received the 2004
Intel Environment award, which came with a $50,000 prize. Three Math Modeling theses in recent years have also won the McConkey Outstanding Thesis Award, which recognizes distinguished scholarly
achievement at the master’s level. Nominations are accepted from all disciplines, so Math Modeling projects stand out for their high level of accomplishment.
Another area of research examines the use of individual-based models (IBMs) for applied and theoretical ecology, led by mathematics Prof. Steve Railsback. He collaborates with ecologists, biologists,
environmental engineers, and software professionals on research goals including:
• Developing a conceptual basis for individual-based ecology
• Applying fish IBMs to river management questions such as: How do changes in flow and temperature affect fish populations? What effects do loss of pools, increased turbidity, competition,
predation, and habitat connectivity have on population dynamics?
• Using IBMs to test and develop ecological theory
• Developing software and software engineering approaches for IBMs.
• Integrating IBMs in ecological and modeling courses at HSU and other institutions.
Thesis projects in Mathematical Modeling
Recent theses in Math Modeling have examined a wide diversity of topics. They range from measuring how plants sense gravity to working with foresters on understanding how fires spread. Such research
and critical thinking prepares students for the real problem-solving challenges they will face in their career. The following thesis projects are not the only theses completed in recent years but
they are representative of research within our program.
Energy and Sustainability Research
Darrell Ross, "A Distributed Renewable Energy System Meeting 100% of Electricity Demand in Humboldt County: A Feasibility Study." The study compared energy demand in Humboldt County with the
availability of wind, wave, solar and biomass energy. It found that enough renewable energy is available to supply Humboldt County but not in a timely fashion. Without suitable energy storage
capability, the county would have to import power in times of shortages and sell power in times of excess.
Steven Walker, "Using Transfer Functions to Explain Turbidity in Humboldt Bay, California." This thesis used statistical analyses to determine the degree to which water quality in streams that feed
into Humboldt Bay influence the bay's water quality.
Daniel Kanewske, "A Mathematical Model for the Onset of Water Flooding in the Cathode of a Proton Exchange Membrane Fuel Cell." Here a PDE model of water diffusion was developed to predict the onset
of water flooding in fuel cell membranes. This project won the Patricia O. McConkey Award for Outstanding Thesis.
Climate Change Research
Thé Thé Kyaw, "Modeling the Effect of Marine Snow Fragmentation by Euphausia Pacifica on Carbon Flux." Carbon is naturally sequestered in the deep ocean when carbon-containing particles called marine
snow settle out of the upper layer of the ocean. Small crustaceans called krill have been shown to fragment these particles. This thesis developed a mathematical model to quantify the effect of
marine snow fragmentation by krill on the rate of carbon storage. It found that the effect can be substantial in some cases.
Daniele Rosa, "Implementing a Dynamic Allocation Scheme for the Lund- Potsdam-Jena Dynamic Global Vegetation Model." This thesis implemented a new scheme to model how plants respond to elevated CO2
and incorporated the scheme into a global vegetation computer model. This is an important part of understanding how our planet will respond to high levels of CO2.
Conservation Biology
Benjamin Holt, "Stochastic spatial model for the consumption of organic forest soils in a smoldering ground fire." A spatial model for the consumption of organic forest soil (duff) by smoldering
combustion is developed. Smoldering ground fires have an enormous impact upon the ecology and management practice of forest lands throughout the temperate zone. This project won the Patricia O.
McConkey Award for Outstanding Thesis.
Emily Hobelmann, "Plant Invasion Models—Road Effects." This project involved creating a model to determine conditions in which a road could facilitate the invasion of non-native plants into areas
that would otherwise resist invasion.
Chris Panza, "A Model to Assess the Use of Nest Exclosures for Local Population Recovery of the Western Snowy Plover (Charadrius Alexandrinus Nivosus)." While nest exclosures protect nests from
predation, they may sometimes lead to increased predation of the adult parents. This thesis developed a mathematical model to assess these two factors in local populations of the endangered Western
Snowy Plover.
Intersection of the environment, society, and technology
Paul Burgess, "A Statistical Model of the Area Cleared by a Landmine Removal Vehicle Using Real-Time Kinematic Differential GPS and Inertial Sensing Technologies." The project included developing
software to guide landmine-clearing robots and to map the ground cleared of landmines. The main mathematical result was using conditioning to compute the probable location of the landmine-clearing
device. This project won the Patricia O. McConkey Award for Outstanding Thesis and the 2004 Intel Environment Award, which came with a $50,000 prize.
Stephanie Souza, "Using the Hough Transform to Detect Fish in Freshwater Creek." A Hough transform is capable of detecting lines in noisy images. We used this technique to detect fish outlines in
underwater video. We hope to use this technique to some day automate the estimation of salmon and steelhead populations.
Ari Kornfeld, "Measuring and Modeling the Gravitropic Response of Oat Shoots (Avena Sativa)." This thesis investigated the biochemical mechanisms behind plants' ability to sense a gravitational field
and orient themselves in it. The project combined experimental data collection and modeling of the bending angles of oat shoots. The data analysis relied on automation and sophisticated
image-processing techniques.
Wolfram Demonstrations Project
Reduction Formulas for Integrals
The use of reduction formulas is one of the standard techniques of integration taught in a first-year calculus course. This Demonstration shows how substitution, integration by parts, and algebraic manipulation can be used to derive a variety of reduction formulas. Selecting the "illustrate with fixed n" box lets you see how the reduction formulas are used for small values of n and shows more detail for the algebraic manipulations needed for some of the examples.
When "illustrate with fixed n" is selected and the integrand is one of the two base cases, the value of n is not available; in those two cases, no further reduction is needed.
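The Demonstration's own integrands are not reproduced on this page, but the idea can be illustrated with the classic reduction formula I_n = ((n-1)/n) I_{n-2} for the Wallis integrals I_n = ∫₀^{π/2} sinⁿ x dx, which bottoms out at the base cases I₀ = π/2 and I₁ = 1:

```python
import math

def wallis_integral(n):
    """I_n = integral of sin^n(x) from 0 to pi/2, computed via the
    reduction formula I_n = (n - 1)/n * I_{n-2}."""
    if n == 0:          # base case: integral of 1
        return math.pi / 2
    if n == 1:          # base case: integral of sin(x)
        return 1.0
    return (n - 1) / n * wallis_integral(n - 2)

print(wallis_integral(2))  # pi/4
print(wallis_integral(3))  # 2/3
```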
Review of Introductory Mathematics
FOUR BASIC ARITHMETIC OPERATIONS
This chapter reviews the basic mathematical operations of addition, subtraction, multiplication, and division of whole numbers.
EO 1.2 APPLY one of the arithmetic operations of addition, subtraction, multiplication, and division using whole numbers.
Calculator Usage, Special Keys
This chapter requires the use of the +, -, x, ÷, and = keys. When using a TI-30 calculator, the number and operation keys are entered as they are written. For example, the addition of 3 plus 4 is entered as follows: 3 key, + key, 4 key, = key; the answer, 7, is displayed.
Parentheses
The parentheses keys allow a complicated equation to be entered as written. This saves the time and effort of rewriting the equation so that multiplication/division is performed first and addition/subtraction is performed second, allowing the problem to be worked from left to right in one pass.
The Decimal Numbering System
The decimal numbering system uses ten symbols called digits, each digit representing a number. These symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The symbols are known as the numbers zero, one, two, three, etc. By using combinations of 10 symbols, an infinite amount of numbers can be created. For example, we can group 5 and 7 together for the number 57 or 2 and 3 together for the number 23. The place values of the digits are multiples of ten and given place titles as follows:
Football lessons - Big Data, Plainly Spoken (aka Numbers Rule Your World)
Erik left a comment on a different post, pointing us to this informative article about the "data revolution" in football (soccer). This is a topic I have written about before as I follow the sport (
here and here).
Given the breadth of the article, I shall limit my comments to what I think are the most interesting points.
The example of baseball à la Moneyball has been cited often. More slowly, the same kind of data analytics has spread to basketball, which is regarded as an example of a "fluid game". Billy Beane,
the hero of Moneyball, is cited as saying "If it can be done there, it can be done on the soccer field."
Billy, it ain't so. Statistics is, as Andrew Gelman likes to say, about methods. There is truly a plurality of statistical methods. The reason is, however strange, that one method may work superbly well on a particular problem but fail miserably when applied to another. Linear regression is probably one of the most widely applied methods, but even it does not solve all problems. Researchers have so far been stumped in learning why a given method succeeds or fails for a given problem.
I believe football analytics needs its own methods. Basketball is different from football in several important ways: the number of points scored is much higher than the number of goals scored; the
number of games played is much higher than the number of matches played; and the number of players on the court is half the number of footballers. In each case, the basketball problem is made easier
by virtue of larger sample size.
Similarly, baseball cannot be compared to football because it is a static game: baseball is as close as you get to a series of somewhat independent trials, which is the kind of thing for which
probability and statistics were originally discovered.
I'm glad to hear that the budding football statisticians have realized
many of the stats they had been trusting for years were useless. In any industry, people use the data they have. The data companies had initially calculated passes, tackles and kilometres per
player, and so the clubs had used these numbers to judge players. However, it was becoming clear that these raw stats... mean little.
I wish the engineers at Google, Yahoo!, AOL, Groupon, LinkedIn, Netflix, etc. were reading this. It's not about how "big" the data is, and it's not about how fast data can be processed; it's about how relevant it is. It's about knowing what you want to measure, and going out to measure those things. See also my previous post on football statistics.
Correlation is not causation. All of the analytics in sports have to do with correlations. It is very easy to confuse the two. A coach was cited as saying "there is a correlation between the number
of sprints and winning". This might lead one to think "let's get our players to sprint more". Think about that conclusion for a moment. One has moved from correlation to causation.
As I argued in Chapter 2 of Numbers Rule Your World, correlational models are fine in many real-world applications. So I'm not debunking the whole field. I'm just cautioning against using an
assumption of causality without recognizing it or validating it.
The most promising anecdote in the article concerns analyzing "sociograms", which describes who passes the ball to whom, who tends to start dangerous attacks and so on. I truly believe that in
football, it's the pattern of interactions that is the key. If only I had the time, I would surely delve into this stuff.
I strongly agree, and have long argued with friends, that network effects have a much stronger influence on the outcome of football games than the traditionally-measured factors of shots. It will be
interesting to see how this research pans out.
Excellent article on the use of statistics in basketball.
Even the ESPN announcers are realizing that the statistics being printed on the screen have little to do with what is transpiring in the World Cup matches. In a recent match, I noted that ESPN showed
or spoke about these statistics: number of shots, number of shots on target, possession time (as proportion of playing time), total number of touches (or passes), number of corners, number of miles
One-Sample and Two-Sample Proportions
The Sample Size windows and computations to test sample sizes and power for proportions are similar to those for testing means. You enter a true Proportion and choose an alpha level. Then, for the one-sample proportion case, enter the Sample Size and Null Proportion to obtain the Power. Or, enter the Power and Null Proportion to obtain the Sample Size. Similarly, to obtain a value for Null Proportion, enter values for Sample Size and Power. For the two-sample proportion case, either the two sample sizes or the desired Power must be entered. (See Power and Sample Window for One-Sample Proportions and Difference Between Two Proportions for a Two-Sided Test.)
Clicking the One Sample Proportion option on the Sample Size and Power window yields a One Proportion window. In this window, you can specify the alpha level and the true proportion. The sample size, power, or the hypothesized proportion is calculated. If you supply two of these quantities, the third is computed, or if you enter any one of the quantities, you see a plot of the other two.
For example, if you have a hypothesized proportion of defects, you can use the One Sample Proportion window to estimate a large enough sample size to guarantee that the risk of accepting a false hypothesis (beta) is small. That is, you want to detect, with reasonable certainty, a difference in the proportion of defects.
The null hypothesis is p = p0, where p is the population proportion and p0 is the null proportion to test against. Note that if you are interested in testing whether the population proportion is greater than or less than the null proportion, you use a one-sided test. The one-sided alternative is either p > p0 or p < p0.
Null Proportion is the proportion to test against (p0), or is left blank for computation. The default value is 0.2.
Sample Size is the sample size, or is left blank for computation. If Sample Size is left blank, then the values for Proportion and Null Proportion must be different.
The Power is calculated and is shown as approximately 0.7 (see Power and Sample Window for One-Sample Proportions). Note the Actual Test Size is 0.0467, which is slightly less than the desired 0.05.
The Two Sample Proportions option computes the power or sample sizes needed to detect the difference between two proportions, where p1 and p2 are the population proportions from two populations, and D0 is the hypothesized difference in proportions.
The proportion difference (D0) to test against may be entered, or left blank for computation. The default value is 0.2.
Difference Between Two Proportions for a One-Sided Test shows the Two Proportions windows with the estimated Power calculation of 0.82.
[MetaCRS] mercator scale calculation q
Mark Amend mamend at grsensing.com
Sat Apr 25 23:26:45 EDT 2009
Hi there-
This might not be an appropriate post for this list, but I'm struggling
to find a solution. Any ideas would be greatly appreciated.
I have a Mercator chart I'm building, at a scale of 1:5000 (natural
scale, at the equator; in my case the projection has a defined
standard parallel of 31N, where the scale factor is 1.00). And yes, I
have to use this projection. ;-)
The chart, however, is located north of 31N, somewhere around 34N in the
middle of the chart (for clarity of my question, let's just say it is
34N). I have been asked what is the True Scale of my chart at the
mid-latitude of that chart. So, it will not be exactly 1:5000 but
something ± that, like 1:5043.1 or something (hypothetically speaking).
I need to find an answer.
How do I calculate this "true scale" at the mid-latitude of my chart?
It seems that Mercator scale is defined in various references as the
secant of the latitude. If that is the case, then ACOS(34 degrees?
radians?) should give 1.xxx or so, right? But the scale for the
latitude of interest (center of chart) is really 34 - 31 = 3, if 31
is where the scale factor is 1. Right? So it should be ACOS(3
degrees)? I then multiply 5000 by this number. Correct?
Am I anywhere close to the solution?
Mark Amend
More information about the MetaCRS mailing list
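A note on the question above: the Mercator point scale involves the secant of latitude, and sec(lat) = 1/cos(lat), not ACOS(lat) (arccos is the inverse cosine, a different function). With a standard parallel where the scale factor is 1, the scale factor at another latitude is cos(std)/cos(lat) on a spherical earth. A small sketch of that calculation (spherical approximation only; the function name is ours):

```python
from math import cos, radians

def local_scale_denominator(nominal_den, std_parallel_deg, lat_deg):
    """Denominator of the true scale at lat_deg for a Mercator chart whose
    scale factor is 1.0 at std_parallel_deg (spherical-earth approximation)."""
    k = cos(radians(std_parallel_deg)) / cos(radians(lat_deg))  # point scale factor
    return nominal_den / k

# 1:5000 chart with standard parallel 31N, evaluated at 34N
print(round(local_scale_denominator(5000, 31.0, 34.0)))  # roughly 1:4836
```

An ellipsoidal answer would differ slightly; for precise work the projection library's own scale-factor computation should be used.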
Search Materials
"At first blush one might think that of all areas of mathematics certainly arithmetic should be the simplest, but it is a..."
Material Type: Open Textbook | Author: W. Edwin Clark | Date Added: Mar 10, 2010 | Date Modified: Sep 13, 2010

This is a free online course offered by the Saylor Foundation. "Everything is numbers." This phrase was uttered by the lead...
Material Type: Online Course | Author: The Saylor Foundation | Date Added: Feb 22, 2013 | Date Modified: Apr 10, 2014

This free and open online course in Linear Algebra was produced by the WA State Board for Community & Technical Colleges...
Material Type: Online Course | Author: Tom Caswell | Date Added: Dec 01, 2011 | Date Modified: Aug 22, 2012

This free and open online course in Logic was produced by the WA State Board for Community & Technical Colleges...
Material Type: Online Course | Author: Tom Caswell | Date Added: Nov 29, 2011 | Date Modified: Aug 22, 2012

This online course comes from the Open Learning Initiative (OLI) by Carnegie Mellon. "The course includes self-guiding...
Material Type: Online Course | Author: Open Learning Initiative | Date Added: Feb 04, 2011 | Date Modified: Feb 04, 2011

This is a recording of a webinar by the authors of the material, "Demos with Positive Impact" ("...
Author: David Hill, Lila Roberts | Date Added: May 25, 2010 | Date Modified: Mar 24, 2011

According to The Orange Grove, "This book covers the following: Foundations of Trigonometry, Angles and their Measure,...
Material Type: Open Textbook | Author: Jeff Zeager, Carl Stitz | Date Added: Jan 06, 2011 | Date Modified: Nov 30, 2011

This free and open online course in Precalculus was produced by the WA State Board for Community & Technical Colleges...
Material Type: Online Course | Author: Tom Caswell | Date Added: Dec 01, 2011 | Date Modified: Aug 22, 2012

This free and open online course in Precalculus was produced by the WA State Board for Community & Technical Colleges...
Material Type: Online Course | Author: Tom Caswell | Date Added: Nov 30, 2011 | Date Modified: Aug 22, 2012

From the MAA review of this book: "The discussions and explanations are succinct and to the point, in a way that pleases...
Material Type: Open Textbook | Author: David Guichard | Date Added: Apr 27, 2012 | Date Modified: Dec 02, 2012
Neutron, proton collision problem
The following problem appeared on the A2 Edexcel Physics unit 4 exam paper January 2012 question 18. The solution, as given by the exam board, is attached.
Question: 18. James Chadwick is credited with discovering the neutron in 1932.
Beryllium was bombarded with alpha particles, knocking neutrons out of the beryllium atoms. Chadwick placed various targets between the beryllium and a detector. Hydrogen and nitrogen atoms were
knocked out of the targets by the neutrons and the kinetic energies of these atoms were measured by the detector.
(a) The maximum energy of a nitrogen atom was found to be 1.2 MeV.
Show that the maximum velocity of the atom is about 4 x 10^6 m/s.
mass of nitrogen atom = 14u, where u = 1.66 x 10^-27 kg
Solution:
The set up as I understand it is,
alpha --> Be --> neutron --> target --> N or H --> detector
v = sqrt(2E/m) = sqrt(2(1.2 x 10^6 x 1.6 x 10^-19 J)/(14 x 1.66 x 10^-27 kg)) = 4.06 x 10^6 m/s
No problems here.
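As a quick numerical check of part (a) (our own sketch; it takes 1 eV = 1.6 x 10^-19 J):

```python
from math import sqrt

E = 1.2e6 * 1.6e-19      # 1.2 MeV converted to joules
m = 14 * 1.66e-27        # nitrogen atom mass in kg
v = sqrt(2 * E / m)      # from E = (1/2) m v^2
print(f"{v:.2e} m/s")    # about 4.1e6 m/s, matching the "show that" value
```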
Question (b) The mass of a neutron is Nu (where N is the relative mass of the neutron) and its initial velocity is x. The nitrogen atom, mass 14u, is initially stationary and is then knocked out of the target with a velocity, y, by a collision with a neutron.
(i) Show that the velocity, z, of the neutron after the collision can be written as
z = (Nx - 14y)/N
momentum before = momentum after
Nux = Nuz + 14uy
rearranging gives
z = (Nx - 14y)/N
No problems here.
Question (ii) The collision between this neutron and the nitrogen atom is elastic. What is meant by an elastic collision?
Solution
In an elastic collision the kinetic energy is conserved. No problems here.
Question (iii) Explain why the kinetic energy E_k of the nitrogen atom is given by
E_k = Nu(x^2 - z^2)/2
Solution
Using conservation of kinetic energy,
E_before = E_after
E_before = (1/2)Nux^2
E_after = (1/2)Nuz^2 + (1/2)(14Nu)y^2
so E_k = (1/2)(14Nu)y^2 = (1/2)Nu(x^2 - z^2) = Nu(x^2 - z^2)/2
For this calculation to work, the mass of the nitrogen atom has to be 14Nu, but in the question it is given as 14u. That is the first thing I don't understand.
Question (c) The two equations in (b) can be combined and z can be eliminated to give
y = (2Nx)/(N + 14)
The question does not ask how this is done but I'd like to know and can't figure it out. I tried substituting z = (Nx - 14y)/N into E_k = Nu(x^2 - z^2)/2 and this gives
(2E_k)/(Nu) = x^2 - ((Nx - 14y)/N)^2
But this has an E_k in it, so I don't see how to get to the required y = (2Nx)/(N + 14). This is the second problem I have, not understanding where this equation comes from.
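One way to eliminate z (a sketch we worked out, not from the exam board's answer): replace E_k by the nitrogen atom's kinetic energy (1/2)(14u)y^2, factor the energy equation, and feed in the momentum result so that the awkward terms cancel.

```latex
\begin{align*}
\text{momentum:}\quad & Nx = Nz + 14y \;\Rightarrow\; x - z = \tfrac{14y}{N},\\
\text{energy:}\quad & N(x^2 - z^2) = 14y^2 \;\Rightarrow\; N(x - z)(x + z) = 14y^2
  \;\Rightarrow\; x + z = y,\\
\text{hence}\quad & Nx = N(y - x) + 14y \;\Rightarrow\; 2Nx = (N + 14)y
  \;\Rightarrow\; y = \frac{2Nx}{N + 14}.
\end{align*}
```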
Question (i) The maximum velocity of hydrogen atoms knocked out by neutrons in the same experiment was 30 x 10^7 m/s. The mass of a hydrogen atom is 1u.
Show that the relative mass N of the neutron is 1.
Solution
There is an error in the question here. Instead of 30 x 10^7 m/s it should be 3.0 x 10^7 m/s.
The equation given in the question applies to nitrogen and can be rearranged to give
2Nx = y_N(N + 14) = 4.1 x 10^6 (N + 14)
where y_N = 4.1 x 10^6 m/s is the maximum nitrogen velocity obtained from part (a).
For hydrogen, then,
2Nx = y_H(N + 1) = 3.0 x 10^7 (N + 1)
These two equations can be combined, giving
4.1 x 10^6 (N + 14) = 3.0 x 10^7 (N + 1)
from which N can be solved:
N = (3.0 x 10^7 - 14 x 4.1 x 10^6)/(4.1 x 10^6 - 3.0 x 10^7) = 1.05, which is approximately 1.
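A quick numerical check of the last step (our own):

```python
yN = 4.1e6   # max nitrogen atom speed from part (a), m/s
yH = 3.0e7   # corrected max hydrogen atom speed, m/s

# From 2Nx = yN*(N + 14) = yH*(N + 1), solve for N:
N = (yH - 14 * yN) / (yN - yH)
print(round(N, 2))  # close to 1, as required
```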
Question (ii) This equation cannot be applied to all collisions in this experiment. Suggest why.
Solution
As the atoms approach the speed of light their mass does not remain constant; it increases.
Research@StAndrews:FullText: Generating uncountable transformation semigroups
Please use this identifier to cite or link to this item: http://hdl.handle.net/10023/867
Title: Generating uncountable transformation semigroups
Authors: Péresse, Yann
Supervisors: Quick, Martyn
Mitchell, James David
Issue Date: 2009
Abstract: We consider naturally occurring, uncountable transformation semigroups S and investigate the following three questions. (i) Is every countable subset F of S also a subset of a finitely generated subsemigroup of S? If so, what is the least number n such that for every countable subset F of S there exist n elements of S that generate a subsemigroup of S containing F as a subset. (ii) Given a subset U of S, what is the least cardinality of a subset A of S such that the union of A and U is a generating set for S? (iii) Define a preorder relation ≤ on the subsets of S as follows. For subsets V and W of S write V ≤ W if there exists a countable subset C of S such that V is contained in the semigroup generated by the union of W and C. Given a subset U of S, where does U lie in the preorder ≤ on subsets of S? Semigroups S for which we answer question (i) include: the semigroups of the injective functions and the surjective functions on a countably infinite set; the semigroups of the increasing functions, the Lebesgue measurable functions, and the differentiable functions on the closed unit interval [0, 1]; and the endomorphism semigroup of the random graph. We investigate questions (ii) and (iii) in the case where S is the semigroup Ω^Ω of all functions on a countably infinite set Ω. Subsets U of Ω^Ω under consideration are semigroups of Lipschitz functions on Ω with respect to discrete metrics on Ω and semigroups of endomorphisms of binary relations on Ω such as graphs or preorders.
URI: http://hdl.handle.net/10023/867
Type: Thesis
Publisher: University of St Andrews
Appears in: Pure Mathematics Theses
Please explain
March 9th 2011, 06:36 AM #1
Junior Member
Nov 2010
Please explain
The question:
What is wrong with the following "proof"? Let x=y. Then
$x^2 = xy$
$x^2-y^2= xy-y^2$
$(x+y)(x-y)= y(x-y)$
$x+y= y$
$2y= y$
Solution: I know that 2 does not equal 1, and my answer was that there is an incorrect substitution in step 5; however, that makes no sense, as x = y. The book says that in step 3 'they' incorrectly divided by (x-y) = 0. I do not understand: if (x-y) is a whole number, then would it not be valid to perform the division?
i.e. $\frac{y(x-y)}{(x-y)} = y$?
Now, perhaps this post would be better suited in the set theory/logic section but as the concept under discussion is of elementary character I felt that would be an unnecessary aggrandizement.
Also I am unsure as to the formal system of 'proof' being employed; is it deduction?
Last edited by Foxlion; March 9th 2011 at 06:53 AM. Reason: clarity
since x = y, then clearly x - y = 0
It is clear isn't it, and yet here I am. Thank you
And it is NOT valid to divide by the whole number 0.
haha no, it isn't, though it is tempting to a layman such as I. It's an English thinker's problem in attempting to reconcile the irreconcilable.
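The book's point, that cancelling (x - y) when x = y is a division by zero, can be made concrete with a short illustration (ours, not from the thread):

```python
x = y = 3          # "let x = y"
try:
    # going from step 3 to step 4 silently divides both sides by (x - y), which is 0
    (y * (x - y)) / (x - y)
except ZeroDivisionError:
    print("division by x - y is undefined when x = y")
```

Every line of the "proof" up to step 3 is a true equation; the flaw is purely in the cancellation step.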
The Discrete Cosine Transform (DCT)
Next: Differential Encoding Up: Source Coding Techniques Previous: Relationship between DCT and
The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance (with respect to the image's visual quality). The DCT is similar to the
discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain (Fig 7.8).
DCT Encoding
The general equation for a 1D (N data items) DCT is defined by the following equation:

$$F(u) = \sqrt{\tfrac{2}{N}}\, C(u) \sum_{i=0}^{N-1} f(i)\cos\left[\frac{(2i+1)u\pi}{2N}\right], \qquad C(u) = \begin{cases} \tfrac{1}{\sqrt{2}} & u = 0 \\ 1 & u > 0 \end{cases}$$

and the corresponding inverse 1D DCT transform is simply $F^{-1}(u)$, i.e.:

$$f(i) = \sqrt{\tfrac{2}{N}} \sum_{u=0}^{N-1} C(u)\, F(u)\cos\left[\frac{(2i+1)u\pi}{2N}\right]$$

The general equation for a 2D (N by M image) DCT is defined by the following equation:

$$F(u,v) = \frac{2}{\sqrt{NM}}\, C(u)\, C(v) \sum_{i=0}^{N-1}\sum_{j=0}^{M-1} f(i,j)\cos\left[\frac{(2i+1)u\pi}{2N}\right]\cos\left[\frac{(2j+1)v\pi}{2M}\right]$$

and the corresponding inverse 2D DCT transform is simply $F^{-1}(u,v)$, i.e.:

$$f(i,j) = \frac{2}{\sqrt{NM}} \sum_{u=0}^{N-1}\sum_{v=0}^{M-1} C(u)\, C(v)\, F(u,v)\cos\left[\frac{(2i+1)u\pi}{2N}\right]\cos\left[\frac{(2j+1)v\pi}{2M}\right]$$
The basic operation of the DCT is as follows:
• The input image is N by M;
• f(i,j) is the intensity of the pixel in row i and column j;
• F(u,v) is the DCT coefficient in row u and column v of the DCT matrix.
• For most images, much of the signal energy lies at low frequencies; these appear in the upper left corner of the DCT.
• Compression is achieved since the lower right values represent higher frequencies, and are often small - small enough to be neglected with little visible distortion.
• The DCT input is an 8 by 8 array of integers. This array contains each pixel's gray scale level;
• 8 bit pixels have levels from 0 to 255.
• Therefore an 8-point DCT would be:

$$F(u) = \frac{1}{2}\, C(u) \sum_{i=0}^{7} f(i)\cos\left[\frac{(2i+1)u\pi}{16}\right]$$
Question: What is F[0,0]?
Answer: F[0,0] is the DC coefficient, proportional to the average gray level of the block; the remaining coefficients are the AC components.
• The output array of DCT coefficients contains integers; these can range from -1024 to 1023.
• It is computationally easier and more efficient to regard the DCT as a set of basis functions which, given a known input array size (8 x 8), can be precomputed and stored. This involves simply computing values for a convolution mask (8 x 8 window) that gets applied (sum the products of the mask values and the pixels the window overlaps, applying the window across all rows/columns of the image). The values are simply calculated from the DCT formula. The 64 (8 x 8) DCT basis functions are illustrated in Fig 7.9.
DCT basis functions
• Why DCT not FFT?
DCT is similar to the Fast Fourier Transform (FFT), but can approximate lines well with fewer coefficients (Fig 7.10)
DCT/FFT Comparison
• Computing the 2D DCT
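A naive O(N^2) implementation of the 1D transform above makes the scaling concrete (our sketch; real codecs use fast factorizations and the separable 2D form):

```python
from math import cos, pi, sqrt

def dct_1d(f):
    """Naive 1D DCT-II with orthonormal scaling: C(0) = 1/sqrt(2), C(u>0) = 1."""
    N = len(f)
    out = []
    for u in range(N):
        c = 1 / sqrt(2) if u == 0 else 1.0
        s = sum(f[i] * cos((2 * i + 1) * u * pi / (2 * N)) for i in range(N))
        out.append(sqrt(2 / N) * c * s)
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of 8-bit gray levels
coeffs = dct_1d(row)
# coeffs[0] is the DC term; the higher-index (AC) coefficients are comparatively small
```

Applying `dct_1d` to each row and then to each column of an 8 x 8 block gives the 2D DCT, since the 2D basis functions are separable.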
Dave Marshall