Here's the question you clicked on:
A number decreased by 32 is -58. Find the number. The number is ____. (Give only the value of the number as your answer.)
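For the record, the wording translates directly into an equation, and a quick check confirms the answer (a minimal sketch; the variable name is just for illustration):

```python
# "A number decreased by 32 is -58" translates to the equation x - 32 = -58.
# Solve for x by adding 32 to both sides.
x = -58 + 32
print(x)  # -26

# Verify the answer satisfies the original statement.
print(x - 32 == -58)  # True
```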
Learning Monotonic Linear Functions
- Foundations of Computational Mathematics
Cited by 2 (0 self)
Abstract. We consider the regression problem and describe an algorithm approximating the regression function by estimators piecewise constant on the elements of an adaptive partition. The partitions
are iteratively constructed by suitable random merges and splits, using cuts of arbitrary geometry. We give a risk bound under the assumption that a “weak learning hypothesis” holds, and characterize
this hypothesis in terms of a suitable RKHS.
, 2005
Cited by 2 (0 self)
Generalized additive models are a powerful generalization of linear and logistic regression models. In this paper we show that a natural regression graph learning algorithm efficiently learns
generalized additive models. Efficiency is proven in two senses: the estimator’s future prediction accuracy approaches optimality at rate inverse polynomial in the size of the training data, and its
runtime is polynomial in the size of the training data. Furthermore, the guarantees are nearly linear in terms of the dimensionality (number of regressors) of the problem, and hence the algorithm
does not suffer from the “curse of dimensionality.” The algorithm is a simple generalization of Mansour and McAllester’s classification algorithm that generates decision graphs, i.e., decision trees
with merges. Our analysis can also be viewed as defining a natural extension of the original classification boosting theorems (Schapire, 1990) to the regression setting. Loosely speaking, we define a
weak correlator to be a real-valued predictor that has a correlation coefficient with the target function that is bounded from zero. We show how to efficiently boost weak correlators to get
predictions with correlation arbitrarily close to 1 (error arbitrarily close to 0). Our boosting analysis is a natural extension of the classification boosting analysis of Kearns and Mansour (1999)
and Mansour and McAllester (2002).
Cited by 1 (0 self)
Abstract. Predicting class probabilities and other real-valued quantities is often more useful than binary classification, but comparatively little work in PAC-style learning addresses this issue. We
show that two rich classes of real-valued functions are learnable in the probabilistic-concept framework of Kearns and Schapire. Let X be a subset of Euclidean space and f be a real-valued function on
X. We say f is a nested halfspace function if, for each real threshold t, the set {x ∈ X|f(x) ≤ t}, is a halfspace. This broad class of functions includes binary halfspaces with a margin (e.g., SVMs)
as a special case. We give an efficient algorithm that provably learns (Lipschitz-continuous) nested halfspace functions on the unit ball. The sample complexity is independent of the number of
dimensions. We also introduce the class of uphill decision trees, which are real-valued decision trees (sometimes called regression trees) in which the sequence of leaf values is non-decreasing. We
give an efficient algorithm for provably learning uphill decision trees whose sample complexity is polynomial in the number of dimensions but independent of the size of the tree (which may be
exponential). Both of our algorithms employ a real-valued extension of Mansour and McAllester’s boosting algorithm.
The Perceptron algorithm elegantly solves binary classification problems that have a margin between positive and negative examples. Isotonic regression (fitting an arbitrary increasing function in
one dimension) is also a natural problem with a simple solution. By combining the two, we get a new but very simple algorithm with strong guarantees. Our ISOTRON algorithm provably learns Single
Index Models (SIM), a generalization of linear and logistic regression, generalized linear models, as well as binary classification by linear threshold functions. In particular, it provably learns
SIMs with unknown mean functions that are nondecreasing and Lipschitz-continuous, thereby generalizing linear and logistic regression and linear-threshold functions (with a margin). Like the
Perceptron, it is straightforward to implement and kernelize. Hence, the ISOTRON provides a very simple yet flexible and principled approach to regression.
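The ISOTRON recipe described in this abstract (a Perceptron-style weight update interleaved with a one-dimensional isotonic fit) can be sketched roughly as follows. This is only my illustrative reading of the abstract, using a standard Pool Adjacent Violators (PAV) routine; it is not the authors' code, and the iteration count and tie handling are arbitrary choices.

```python
def pav(z, y):
    """Isotonic (non-decreasing) least-squares fit of y against z,
    via the standard Pool Adjacent Violators algorithm."""
    order = sorted(range(len(z)), key=lambda i: z[i])
    blocks = []  # each block is [mean, weight]
    for i in order:
        blocks.append([float(y[i]), 1.0])
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fitted = [0.0] * len(z)
    k = 0
    for mean, w in blocks:
        for _ in range(int(w)):
            fitted[order[k]] = mean
            k += 1
    return fitted

def isotron(X, y, iters=50):
    """Perceptron-like update where the 'predictions' come from an
    isotonic fit along the current direction (sketch of the idea only)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        u = [sum(wj * xj for wj, xj in zip(w, xi)) for xi in X]
        g = pav(u, y)  # current estimate of the monotone mean function
        for i, xi in enumerate(X):
            for j in range(d):
                w[j] += (y[i] - g[i]) * xi[j] / n
    return w
```

For example, `pav([1, 2, 3], [3, 2, 1])` pools the violating values to their common mean and returns `[2.0, 2.0, 2.0]`.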
An isosceles triangle has one angle with a measure greater than 100 degrees and another with a measure of x. which is true - WyzAnt Answers
The question has answer choices. Here they are:
a. x>80
b. x=80
c. x>40
d. x<40
e. x=40
Given that an isosceles triangle is a triangle with two equal sides, the angles opposite the equal sides are also equal.
The angle greater than 100 cannot be one of the equal pair, because two such angles alone would already sum to more than 180. So the two equal angles each measure x.
The sum of the angles of any triangle is 180. If the large angle were exactly 100, we'd have
180 = 100 + x + x
180 = 100 + 2x
180 - 100 = 2x
80 = 2x
dividing both sides by 2
80 / 2 = 2x / 2
x = 40
But the given angle is strictly greater than 100, so there is less than 80 degrees left to split between the two equal angles, and each must be strictly less than 40.
Thus x < 40 (d)
Hope this helps
First off, you should know that for any triangle, the 3 angles add up to 180: a + b + c = 180.
Also, if a triangle is isosceles, that means 2 of the 3 angles (as well as 2 of the 3 sides) are the same.
If one of the angles is greater than 100 degrees, neither of the other two can also be greater than 100 (because then just those two angles would add up to more than 200, way more than 180). The
other two angles (x) are equal to each other and are way smaller than 100.
If the angles add up to 180 and one of them is more than 100, the other two must be less than 80 total (again, because they all add to 180). Since the twin angles are the same and add to less than
80, each must be less than 40.
Which means it's d). Hope this helps!
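A quick numeric sanity check of the reasoning above (purely illustrative):

```python
# For an isosceles triangle whose unique (apex) angle measures `apex` degrees,
# the two equal base angles each measure x = (180 - apex) / 2.
def base_angle(apex):
    return (180.0 - apex) / 2.0

# At exactly 100 degrees the base angles would be exactly 40...
print(base_angle(100))  # 40.0

# ...and for any apex angle strictly greater than 100, x falls below 40.
print(all(base_angle(a) < 40 for a in (100.1, 120, 150, 179)))  # True
```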
P-Branes as Antisymmetric Nonabelian Tensorial Gauge Field Theories of Diffeomorphisms in P + 1 Dimensions
Authors: Carlos Castro
Long ago, Bergshoeff, Sezgin, Tanni and Townsend have shown that the light-cone gauge-fixed action of a super p-brane belongs to a new kind of supersymmetric gauge theory of p-volume preserving
diffeomorphisms (diffs) associated with the p spatial dimensions of the extended object. These authors conjectured that this new kind of supersymmetric gauge theory must be related to an infinite-dimensional nonabelian antisymmetric gauge theory. It is shown in this work how this new theory should be part of an underlying antisymmetric nonabelian tensorial gauge field theory of (p+1)-dimensional diffs
(upon supersymmetrization) associated with the world volume evolution of the p-brane. We conclude by embedding the latter theory into a more fundamental one based on the Clifford-space geometry of
the p-brane configuration space.
Comments: 44 pages, This article has been submitted to the Journal of Mathematical Physics, Aug 2009
Download: PDF
Submission history
[v1] 21 Aug 2009
Unique-IP document downloads: 109 times
implicit diff.
October 9th 2007, 05:51 PM #1
Ok, the question is: Find dy/dx by implicit differentiation.
(x^2)(y^2) + x sin y = 7
I tried and didn't get it right.
So, I used product rule first for (x^2)(y^2) and the same for xsiny.
(x^2)(2y dy/dx) + (y^2)(2x) + x(cosy dy/dx)+ siny = 0
For the part with the cosy dy/dx, I wasn't sure if I did that right. Should I have the dy/dx there like that? Is there anything else wrong from this point on?
nope, this seems right so far. your mistake was probably made when you tried to solve for dy/dx
try again
Oh wait, I got it! Thanks anyway.
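For anyone who wants to double-check their final answer: solving the differentiated equation for dy/dx gives dy/dx = -(2xy^2 + sin y) / (2x^2 y + x cos y), and a finite difference along the curve should agree with it. Here's a quick numerical check (standard library only; the bracketing interval is chosen by hand for one branch of the curve):

```python
import math

def F(x, y):
    # The implicit curve: x^2*y^2 + x*sin(y) - 7 = 0
    return x * x * y * y + x * math.sin(y) - 7.0

def solve_y(x, lo=1.0, hi=3.0, iters=80):
    # Bisection for the root y of F(x, y) = 0, assuming F(x, lo) < 0 < F(x, hi).
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(x, lo) * F(x, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x0 = 1.5
y0 = solve_y(x0)
h = 1e-5
numeric = (solve_y(x0 + h) - solve_y(x0 - h)) / (2 * h)
# dy/dx = -(2xy^2 + sin y) / (2x^2*y + x*cos y), from the worked solution above.
formula = -(2 * x0 * y0**2 + math.sin(y0)) / (2 * x0**2 * y0 + x0 * math.cos(y0))
print(abs(numeric - formula) < 1e-6)  # True
```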
Direct summands of direct sum of line bundles on projective varieties
Serre-Swan's theorem (see the MO discussion) says that any locally free sheaf over an affine variety is a direct summand of a free sheaf. However, this is not true on projective varieties. It is not
hard to check that a non-trivial line bundle with non-zero global sections can not be a direct summand of a free sheaf. The reason is that, being a direct summand of a free sheaf implies that the
dual line bundle also has non-zero global sections. But that implies the line bundle is trivial.
I am wondering if the same holds for vector bundles, i.e. if a vector bundle and its dual over a projective variety both have non-zero global sections, then the vector bundle is trivial.
Another I think related question is the following:
Is a direct summand of a direct sum of line bundles on a projective variety also a direct sum of line bundles?
1) The vector bundle $E=\mathcal O(1) \oplus \mathcal O(-1)$ over $\mathbb P^1$ has non-trivial global sections, and so has its dual (which happens to be the same bundle $E$ ).
However $E$ is non-trivial: since $\mathcal O(-1)$ has no non-zero global sections, every global section of $E$ is of the form $(s,0)$ with $s$ a section of $\mathcal O(1)$, hence has a zero, whereas a trivial rank-$2$ bundle has nowhere-vanishing sections.
2) Yes, a direct summand of a direct sum of line bundles on a complete variety is also a direct sum of line bundles.
This follows from Atiyah's general version of the Krull-Schmidt theorem .
Edit:
Since Fei asks, let me remark that the trick in 1) works also for bundles which are not direct sums of line bundles:
If $T$ is the tangent bundle to $\mathbb P^2$, then $E=T \oplus T^* $ has non-trivial global sections , and so has $E^*=E$
However, since $T$ is indecomposable $E$ is not a sum of line bundles by Atiyah's result and is thus a fortiori not trivial.
Thank you so much for your answer. For the first question, do you have a counterexample that $E$ is not a direct sum of line bundles? – Fei YE Mar 25 '12 at 11:07
Dear @Fei,yes: I have added such a counterexample in an Edit. – Georges Elencwajg Mar 25 '12 at 11:26
Dear George, Thank you so much for the example. I saw you used the fact $E^\ast=E$ for $E=T\oplus T^\ast$. Is there a general relation between $E$ and $E^\ast$? I saw somewhere that $E^\ast\otimes \det(E)=E$. Is this really true in general? Thanks again for your help! – Fei YE Mar 25 '12 at 12:14
Dear @Fei, I have just used that the dual of a direct sum is the direct sum of the duals. The formula you quote is already false for every non trivial line bundle $L$ since $L^* \otimes det(L)=L^* \otimes L$ is trivial. – Georges Elencwajg Mar 25 '12 at 12:24
Dear @Fei, yes, for rank $2$ it is correct: Hartshorne, Ex.II 5.16 – Georges Elencwajg Mar 25 '12 at 14:06
Categorical models for local names
Results 1 - 10 of 28
- In 14th Annual Symposium on Logic in Computer Science , 1999
Cited by 146 (14 self)
Syntax Involving Binders Murdoch Gabbay Cambridge University DPMMS Cambridge CB2 1SB, UK M.J.Gabbay@cantab.com Andrew Pitts Cambridge University Computer Laboratory Cambridge CB2 3QG, UK
ap@cl.cam.ac.uk Abstract The Fraenkel-Mostowski permutation model of set theory with atoms (FM-sets) can serve as the semantic basis of meta-logics for specifying and reasoning about formal systems
involving name binding, ff-conversion, capture avoiding substitution, and so on. We show that in FM-set theory one can express statements quantifying over `fresh' names and we use this to give a
novel set-theoretic interpretation of name abstraction. Inductively defined FM-sets involving this name-abstraction set former (together with cartesian product and disjoint union) can correctly
encode object-level syntax modulo ff-conversion. In this way, the standard theory of algebraic data types can be extended to encompass signatures involving binding operators. In particular, there is
an associated n...
- Millennial Perspectives in Computer Science , 2000
Cited by 107 (5 self)
Drawing upon early work by Burstall, we extend Hoare's approach to proving the correctness of imperative programs, to deal with programs that perform destructive updates to data structures containing
more than one pointer to the same location. The key concept is an "independent conjunction" P & Q that holds only when P and Q are both true and depend upon distinct areas of storage. To make this
concept precise we use an intuitionistic logic of assertions, with a Kripke semantics whose possible worlds are heaps (mapping locations into tuples of values).
- Proc. FOSSACS 2002, Lecture Notes in Computer Science 2303 , 2002
Cited by 54 (7 self)
We give semantics for notions of computation, also called computational effects, by means of operations and equations. We show that these generate several of the monads of primary interest that have
been used to model computational effects, with the striking omission of the continuations monad, demonstrating the latter to be of a different character, as is computationally true. We focus on
semantics for global and local state, showing that taking operations and equations as primitive yields a mathematical relationship that reflects their computational relationship.
- THEORETICAL COMPUTER SCIENCE , 2005
Cited by 26 (7 self)
A standard monad of continuations, when constructed with domains in the world of FM-sets [4], is shown to provide a model of dynamic allocation of fresh names that is both simple and useful. In
particular, it is used to prove that the powerful facilities for manipulating fresh names and binding operations provided by the “Fresh ” series of metalanguages [15,17,18] respect α-equivalence of
object-level languages up to meta-level contextual equivalence.
- Science of Computer Programming , 2003
Cited by 23 (8 self)
While the semantics of local variables in programming languages is by now well-understood, the semantics of pointer-addressed heap variables is still an outstanding issue. In particular, the commonly
assumed relational reasoning principles for data representations have not been validated in a semantic model of heap variables. In this paper, we de ne a parametricity semantics for a Pascal-like
language with pointers and heap variables which gives such reasoning principles. It is found that the correspondences between data representations are not simply relations between states, but more
intricate correspondences that also need to keep track of visible locations whose pointers can be stored and leaked.
- Information and Computation , 2002
Cited by 22 (5 self)
Many object-oriented languages used in practice descend from Algol. With this motivation, we study the theoretical issues underlying such languages via the theory of Algollike languages. It is shown
that the basic framework of this theory extends cleanly and elegantly to the concepts of objects and classes. An important idea that comes to light is that classes are abstract data types, whose
theory corresponds to that of existential types. Equational and Hoare-like reasoning methods, and relational parametricity provide powerful formal tools for reasoning about Algol-like object-oriented
programs.
- In Proc. of POPL , 2011
Cited by 19 (10 self)
Over the last decade, there has been extensive research on modelling challenging features in programming languages and program logics, such as higher-order store and storable resource invariants. A
recent line of work has identified a common solution to some of these challenges: Kripke models over worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the
scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct
models of complicated language features, as we demonstrate in our semantics of Charguéraud and Pottier’s type-and-capability system for an ML-like higher-order language. Moreover, the method provides
a high-level understanding of the essence of recent approaches based on step indexing.
- of Lecture Notes in Computer Science , 2004
Cited by 18 (4 self)
Abstract. We describe a game semantics for local names in a functional setting. It is based on a category of dialogue games acted upon by the automorphism group of the natural numbers; this allows
properties of names such as freshness and locality to be characterized semantically. We describe a model of the nu-calculus in this category, and extend it with named references (without bad
variables) using names as pointers to a store. After refining the semantics via a notion of garbage collection, we prove that the compact elements are definable as terms, and hence obtain a full
abstraction result.
1 Introduction
Local names are a pervasive and subtle feature of programming languages and other calculi. Not only are they used for manipulating important constructs such as locally bound references and exceptions, name-passing is itself a very expressive computational paradigm, as demonstrated by the π-calculus, for example. Local names can also represent items of secret information which are dynamically generated, passed between agents and used to access further information or activity. They therefore have a key role in specifying properties of secure
systems [1, 24].
, 1995
Cited by 16 (7 self)
We present a new semantics for Algol-like languages that combines methods from two prior lines of development: ffl the object-based approach of [21,22], where the meaning of an imperative program is
described in terms of sequences of observable actions, and ffl the functor-category approach initiated by Reynolds [24], where the varying nature of the run-time stack is explained using functors
from a category of store shapes to a category of cpos. The semantics
- In LICS ’07: Proceedings of the 22nd Annual IEEE Symposium on Logic in Computer Science (Wroclaw, 2007), IEEE Computer
"... Vol. 5 (3:8) 2009, pp. 1–69 www.lmcs-online.org ..."
A Gnarly Journal
EurUsd restored the supply balance, for now.
Gold is showing a bearish divergence.
4H /GC
Play the players, not the cards.
Always a pleasure to visit this journal...
Keep it up
Thank you.
I don't know why, but I've been playing with MAs and RSIs. There's a pattern I've spotted that uses a 50 EMA crossing a 200 SMA after RSI 7 (weighted) hits an overbought/oversold level set at 80/20.
I'm not done with this, as I'm still trying to set some rules and whatnot, but it's very important that the MA crossing happens only after the RSI has reached its OB/OS levels. After the MA crossing,
look for the RSI to make convergence or divergence around the same level. During this time, it cannot go back to 50; that would invalidate the move. This is the trigger.
Once you're in, taking profit can happen once the RSI hits 50, or the other OB/OS level, or when price hits either MA, or wherever else you can think of. There are two instances in the chart below.
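The rules above can be sketched in code like this (a rough illustration only: the Wilder-style RSI smoothing and the "arm on RSI extreme, fire on MA cross" logic are my own guesses at an implementation, not a tested system):

```python
def sma(values, n):
    # Simple moving average; None until n bars are available.
    return [None] * (n - 1) + [sum(values[i - n + 1:i + 1]) / n
                               for i in range(n - 1, len(values))]

def ema(values, n):
    # Exponential moving average seeded with the first value.
    k = 2.0 / (n + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def rsi(values, n=7):
    # RSI with Wilder-style smoothing of average gains/losses.
    out = [None] * len(values)
    gains = losses = 0.0
    for i in range(1, len(values)):
        change = values[i] - values[i - 1]
        gains = (gains * (n - 1) + max(change, 0)) / n
        losses = (losses * (n - 1) + max(-change, 0)) / n
        if i >= n:
            out[i] = 100.0 if losses == 0 else 100 - 100 / (1 + gains / losses)
    return out

def signals(prices):
    """Yield bar indexes where EMA(50) crosses SMA(200), but only after
    RSI(7) has first hit an 80/20 extreme ("armed")."""
    e, s, r = ema(prices, 50), sma(prices, 200), rsi(prices, 7)
    armed = False
    for i in range(1, len(prices)):
        if r[i] is not None and (r[i] >= 80 or r[i] <= 20):
            armed = True                      # RSI extreme came first
        if armed and s[i] is not None and s[i - 1] is not None:
            if (e[i - 1] - s[i - 1]) * (e[i] - s[i]) < 0:  # the MA cross
                yield i
                armed = False                 # require a fresh RSI extreme
```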
In the meantime, I suspect the Aud/Usd may hit around 1.035 this week or next. After that, parity looks probable.
15M AU
Play the players, not the cards.
If price does not return to 1.022 before reaching 1.0365, then I think price will be strongly rejected after reaching 1.0365 and if I'm correct in that price will turn south from there.
1H A/U
1.022 will now be used as a potential TP area in the future. The move to 1.0245 has softened the R/R of any potential trade once price revisits 1.022.
I am thinking of going short at 1.0365, but also 1.035, with maybe just a micro lot each. I am also thinking of using 1.039 as a SL. With this, the first position may incur a 55 pip loss, while the second 40. This gives a total of a 95 pip loss, which would be almost 2% of my equity.
Profit can occur at 1.022 or 1.0245. The first target, 245, may give 90 pips for the first position and 75 for the second. A total of 165 pips, which is a little over 3% gain.
I know there are some traders on here that are interested in the trade balance on the CNY...
Play the players, not the cards.
Jeez, I get so wrapped up in my thoughts that I forget what I'm doing. Forget the % gain numbers and total pips, I just know the price levels are correct. I don't even feel like messing with the
numbers at the moment since I don't think the position will be entered until next week.
Play the players, not the cards.
Aud/Usd closed for the week at 1.023. I will have to update my trade parameters as I believe this will change the behavior of price if price goes to my entry target and falls back down to 1.022.
No mixed up math this time! Cognitive bias sucks!
Play the players, not the cards.
You're getting there mate
FF journal: Peaks and Troffs
So close to entry. Maybe in a couple hours...
Play the players, not the cards.
Taken out exactly to the tenth of a fucking pip. What the fuck is that shit? So demotivating.
Play the players, not the cards.
The worst part is that on my demo account, I'm in the money! So, taking that I lost on my real account, and won on my fake account, what did I do wrong? Manage my risk? That's all I can think of.
Play the players, not the cards.
Been over a month since I've posted in this thread. I know I've posted about a "half-star pattern" on here, and I'm here to somewhat update that idea. The chart attached shows two "half-stars" that
mirror each other.
I have been playing with circles, lately, too. When I draw them, I put that circle into two quadrants, or squares for the top/bottom half that price is moving in, and then start drawing lines from
end to end and then see what's up.
In the chart I have attached, there are only two main points: the top-left and bottom-right corners of the first square. From there, after having drawn the circle, the squares, and the first line, I start dissecting the squares individually and then together, and sometimes draw lines from the line originating in the squares that intersect with the circle through said intersection. I wish I had a tool or indicator for this so it would be less time consuming, but I'm no programmer. (Sometimes, price will respect the outside of the circle.)
I see the Euro going up abroad.
4H Eur/Gbp
Play the players, not the cards.
I hate getting stopped out to the pip, but worse, to the tenth of a pip. Although I'm not a big believer in retail brokers doing stop hunts, I wouldn't say an outright no to any price manipulation. (I am a little paranoid.) (Or maybe I'm just that good at picking tops/bottoms, even if I don't profit!)
Interesting to note that the big surge on the attached chart happened only once price hit the edge of the square/circle thingy I've been drawing.
(I need to work on better entries.)
15M A/U
Play the players, not the cards.
On the Usd/Jpy, I can see price falling back down to 80.2 or 79.3. If this does not occur before December, then the idea is invalid.
If price falls down to 80.2, but not 79.3, then price may become stuck in a pennant/mini ascending triangle which would have a high chance of an upward breakout. If price falls down to 79.3, with
80.2 being hit before December, before the new year, this would result in a descending triangle, which may cause price to become stuck in the bottom-right point of the star. If this happens, the
"fiscal cliff" deadline may very well provide serious momentum.
The first day of the new year lines up with a circle-TL intersection. The edge of the square is Feb 8th. If the "fiscal cliff" doesn't actually do anything on New Year's, then Feb 8th would be the
next date to watch. If Feb 8th is the day that price leaves the bottom-right point, then 91 will be a target.
Edit: Daily
Is it a good time to go long Gbp/Usd? I guess we'll see!
Update for G/U 4H. It was a good time to go long, however, I did not actually go long. Aw shucks.
If price does not break the current resistance area, I'll be looking to get in if price touches the circular line.
Edit: Hold on, shit fucked up with my platform. It keeps moving my lines constantly. Getting annoying, actually.
Okay, this should be correct now. Stupid platform playing tricks on me.
6th Grade Math Games
6th grade math can be quite challenging! But with a host of free online math games for 6th graders, it can also be very interesting. The cool math games in the virtual world here at Math Blaster are
the perfect combination of learning and fun. Engage 6th graders with these math games and watch them grow to love the subject!
Math for 6th Graders – What They Should Know
6th grade can be an academically challenging year. Whether the kids love math or hate it, they have to learn a lot of advanced concepts to keep up with the class.
Generally between 11 and 12 years of age, 6th graders have to learn certain complex math concepts. If they have a good foundation in all the concepts taught till Grade 5, they will be able to grasp
the new ones with ease. There are certain math skills that kids should have before starting 6th grade.
In this grade, students get a better understanding of geometrical shapes and can classify triangles and calculate their missing angles as well. Apart from knowing complementary and supplementary
angles, 6th graders learn ratio and proportion, evaluate formulas, solve two-step equations, learn probability and permutations and combinations, and calculate mean, median and mode.
Free Online Math Games for Grade 6
It is easy for some 6th graders to be overwhelmed by the sheer volume of new and advanced concepts that are introduced to them. Encouraging them to solve problems and practice math is probably one of the most important ways to ensure that they have understood all the concepts well. The availability of a wide range of online math games also makes it easier for 6th graders to not get intimidated and to try their hands at tough math problems.
The virtual worlds at Math Blaster and JumpStart are filled with all kinds of cool math games for kids of all ages. Get them hooked to these games and watch them pick up and learn new concepts with
ease. A great way to practice math and to develop a love for the subject, these math games will make math fun for 6th graders! | {"url":"http://www.mathblaster.com/parents/math-games/6th-grade-math-games","timestamp":"2014-04-19T19:33:36Z","content_type":null,"content_length":"83487","record_id":"<urn:uuid:08abcaf9-3f56-4a7d-97fb-7732843388e2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
NB. PVODE has been superseded by SUNDIALS.
PVODE actually refers to a trio of closely related solvers:
• PVODE, for systems of ordinary differential equations,
• KINSOL, for systems of nonlinear algebraic equations, and
• IDA, for systems of differential-algebraic equations.
These solvers have some code modules in common, primarily a module of vector kernels, and a generic linear system solver based on a Scaled Preconditioned GMRES method.
PVODE is a solver for large systems of ordinary differential equations on parallel machines. It contains methods for the solution of both stiff and non-stiff initial value problems. Integration
methods include the variable coefficient forms of the Adams and Backward Differentiation Formula methods. The linear systems that must be solved during the implicit time stepping are solved with
iterative, preconditioned Krylov solvers. The user can either supply a preconditioner or use one that is included in the PVODE package.
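The stiff/nonstiff split above is the whole reason PVODE pairs BDF-type formulas with a linear solve at each step. As a self-contained illustration in plain Python (a toy experiment, not PVODE's API), compare explicit and implicit Euler on the stiff test equation y' = -50y:

```python
# Stiff test problem: y' = -50*y, y(0) = 1; the exact solution decays rapidly.
LAM = -50.0

def explicit_euler(h, steps):
    """Forward Euler: y_{n+1} = y_n + h*f(y_n)."""
    y = 1.0
    for _ in range(steps):
        y = y + h * LAM * y
    return y

def implicit_euler(h, steps):
    """Backward Euler: solve y_{n+1} = y_n + h*LAM*y_{n+1} for y_{n+1}."""
    y = 1.0
    for _ in range(steps):
        y = y / (1.0 - h * LAM)
    return y

h = 0.1  # far outside the explicit method's stability region: |1 + h*LAM| = 4
print(abs(explicit_euler(h, 20)))  # blows up: 4**20, about 1.1e12
print(abs(implicit_euler(h, 20)))  # decays toward 0, like the true solution
```

The implicit step is stable at any step size, but each step costs a solve; for large systems that solve is exactly where the preconditioned Krylov machinery described above comes in.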
PVODE is an extension of the sequential package known as CVODE which has been widely distributed and used. CVODE is available from Netlib. Both PVODE and CVODE are written in C but are callable from
Fortran. The parallelization of CVODE to PVODE was accomplished through the modification of the vector kernels, allowing them to operate on vectors that have been distributed across processors. The
message passing calls are made through MPI.
Two of the four original CVODE methods are currently available in PVODE: the nonstiff method and the stiff method with iterative solution of the linear systems (the dense and banded direct solvers
from CVODE are not included in PVODE). Parallel preconditioners are also being developed for various major system classes. The initial release of PVODE includes the first such preconditioner which is
a block-banded preconditioner based on domain decomposition.
A closely related solver called KINSOL has also been developed for systems of nonlinear algebraic equations. It uses inexact Newton methods with line search strategies for accelerated convergence.
KINSOL exists in both a serial and a parallel version, by virtue of serial and parallel versions of the vector module.
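To make "inexact Newton methods with line search strategies" concrete, here is a minimal damped-Newton sketch for a 2-by-2 system in plain Python. It only illustrates the idea; the function names and the example problem are invented here and are not KINSOL's interface.

```python
import math

def newton_line_search(F, J, x, tol=1e-10, max_iter=50):
    """Damped Newton for a 2x2 system: take the Newton step, then backtrack
    (halving the step) until the residual norm actually decreases."""
    for _ in range(max_iter):
        f0, f1 = F(x)
        norm = math.hypot(f0, f1)
        if norm < tol:
            return x
        # Solve the 2x2 Newton system J * dx = -F(x) by Cramer's rule.
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx = ((-f0 * d + f1 * b) / det, (-f1 * a + f0 * c) / det)
        t = 1.0
        while t > 1e-8:  # backtracking line search
            xt = (x[0] + t * dx[0], x[1] + t * dx[1])
            if math.hypot(*F(xt)) < norm:
                break
            t *= 0.5
        x = xt
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the line x = y.
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4.0, p[0] - p[1])
J = lambda p: ((2 * p[0], 2 * p[1]), (1.0, -1.0))
root = newton_line_search(F, J, (3.0, 1.0))  # converges to (sqrt(2), sqrt(2))
```

An "inexact" Newton method would solve the linear system only approximately (for example, a few GMRES iterations); here the 2x2 system is solved exactly for clarity.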
The third member of the trio, IDA, is also closely related to the CVODE/PVODE pair. It addresses systems of differential-algebraic equations, and uses Backward Differentiation Formula methods. IDA
also exists in both serial and parallel versions. The serial version includes both direct and iterative (GMRES) methods for the linear systems, while the parallel version includes only the GMRES method.
The PVODE trio is most useful for solving large differential and nonlinear algebraic systems that arise in a variety of applications. Important DOE applications include chemical kinetics, atmospheric
chemistry, semiconductors, and structural or mechanical systems. It has not yet been widely used outside of groups with direct ties to the developers.
PVODE, KINSOL, and IDA are all being used for the solution of 3D Boltzmann transport equations. For this type of application, the problem size can exceed 600 million unknowns.
A beta release of PVODE with KINSOL is being used in the ParFlow groundwater flow model to solve a nonlinear pressure equation. The ParFlow Project applies high performance computing techniques to
the three-dimensional modeling of fluid flow and chemical transport through heterogeneous porous media to enable more realistic simulations of subsurface contaminant migration. These simulations are
used to improve the design, analysis, and management of engineered remediation strategies.
PVODE with KINSOL has also been used in the Tokamak Edge Plasma modeling effort UEDGE. The UEDGE project aims to provide a better understanding of plasma edge physics which could help to resolve
several major issues such as core confinement and prevention of core fuel dilution.
PVODE was developed in the Center for Applied Scientific Computing (CASC), at the Lawrence Livermore National Laboratory, beginning with a parallel extension to the popular CVODE package for the
solution of large systems of ordinary differential equations. Its principal developers were Alan Hindmarsh and Allan Taylor. | {"url":"http://acts.nersc.gov/formertools/pvode/","timestamp":"2014-04-17T15:51:46Z","content_type":null,"content_length":"8413","record_id":"<urn:uuid:47e3fb51-a1c5-4ebf-b271-19b5b1351eab>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00074-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - what is the proof of riemann integral?
Look on page 6, they use absolute convergence (not absolute continuity, I never said anything about that) in the definition of what is or is not integrable.
They use absolute convergence when referring to the integrability of a simple function, which makes some sense, since the Lebesgue integral of a simple function is defined with a summation, and you want that summation to converge absolutely.
But saying that integrability requires absolute convergence makes no sense when referring to an arbitrary function. Absolute convergence is a property of a series, not of a function.
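For concreteness, here is the standard definition being discussed (a sketch of the usual setup, not quoted from the linked notes), for a simple function taking countably many values:

```latex
\varphi = \sum_{i} a_i \,\chi_{A_i},
\qquad
\int \varphi \, d\mu = \sum_{i} a_i \,\mu(A_i),
\qquad
\varphi \text{ integrable} \iff \sum_{i} |a_i| \,\mu(A_i) < \infty .
```

For a general measurable f, integrability is instead the condition that the integral of |f| with respect to the measure is finite: a property of the function, with no series involved, which is exactly the distinction the post is drawing.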
Bounded-independence derandomization of geometric partitioning with applications to parallel fixed-dimensional linear programming. Discrete Comput
- In Proc. 20th ACM Sympos. Comput. Geom., 2004
We present memory-efficient deterministic algorithms for constructing #-nets and #-approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide
guaranteed bounds on their approximation factors. We show how our deterministic samples can be used to answer approximate online iceberg geometric queries on data streams. We use these techniques to
approximate several robust statistics of geometric data streams, including Tukey depth, simplicial depth, regression depth, the Thiel-Sen estimator, and the least median of squares. Our algorithms
use only a polylogarithmic amount of memory, provided the desired approximation factors are inverse-polylogarithmic. We also include a lower bound for non-iceberg geometric queries.
. We present an implementation of Luby's algorithm for the calculation of maximal independent sets in graphs on the Cray T3E. 1 Introduction Due to the increasing practical availability of powerful parallel architectures, the investigation of parallel algorithms has become a major research topic in the field of theoretical computer science. A widely used model for the high level description of parallel algorithms is the PRAM-model, see e.g. [8]. A computational problem is considered to be efficiently solvable in parallel, if it can be solved in polylogarithmic time, i.e., time O(log^k(n)) for a fixed k ≥ 0, with polynomially many processors on a PRAM. The class of all these computational problems is called NC, see e.g. [16]. The development of NC-algorithms for practically and theoretically relevant problems is a major research field in theoretical computer science. In this paper we consider the problem of calculating a maximal independent set in a given graph, briefly the MIS problem. A ma...
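The algorithm the abstract implements is short enough to sketch. Below is a sequential Python rendering of Luby's randomized rounds (on a PRAM, each round's comparisons run in parallel); the graph and the names here are illustrative, not taken from the paper:

```python
import random

def luby_mis(adj, rng=None):
    """Maximal independent set via Luby's randomized rounds.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    rng = rng or random.Random(0)
    live = set(adj)   # vertices not yet decided
    mis = set()
    while live:
        # Each live vertex draws a random priority (ties have probability 0).
        prio = {v: rng.random() for v in live}
        # A vertex enters the MIS if it beats every live neighbour;
        # on a PRAM all of these comparisons happen simultaneously.
        winners = {v for v in live
                   if all(prio[v] > prio[u] for u in adj[v] if u in live)}
        mis |= winners
        # Winners and their neighbours drop out of further rounds.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live
        live -= removed
    return mis

# 5-cycle: every maximal independent set has exactly two vertices.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
mis = luby_mis(adj)
```

Each round removes at least the global priority maximum and its neighbours, so the loop terminates; Luby's analysis shows the expected number of rounds is logarithmic in the number of edges.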
Study of Long-Term Pavement Performance (LTPP): Pavement Deflections
Appendix M. The Generalized Likelihood Ratio For H48 Over H60
If two hypotheses are available for a model for data, a standard way to compare them is the generalized likelihood ratio. Briefly, this is a computation of the highest possible probability for the
data under each hypothesis, or the maximum likelihood estimates of the parameters for the data within each hypothesis. The H48 hypothesis is that sensor 7 was offset 121.9 cm (48 inches); the H60
hypothesis is that sensor 7 was offset 152.4 cm (60 inches).
In each of the three cases examined below, the data to be modeled are the predicted positions of sensor 7, using all of the nonjoint associated deflection data for a particular site and day. These
data fail tests of the hypothesis that they come from a normal distribution, but are approximately symmetric. The central limit theorem would then imply that the sample means have approximate normal
distributions, with the approximation improving as the sample size (i.e., the number of dates) increases.
Thus the sample mean, x̄, is approximately normally distributed with mean μ_H and unknown standard deviation s/√n. Here μ_H is the appropriate value dictated by the hypothesis; if H is H48, then μ_H is 48, and if H is H60, then μ_H is 60. The nuisance parameter s = s_H is estimated by its maximum likelihood estimate shown in the equation given in figure 57.
Figure 57. Equation. Maximum likelihood estimate: s_H = √( (1/n) Σ_i (x_i − μ_H)² ).
The likelihood under either hypothesis is shown in the equation in figure 58.
Figure 58. Equation. The likelihood: L(H) = ( √n / ( √(2π) s_H ) ) exp( −n (x̄ − μ_H)² / (2 s_H²) ).
For example, if the hypothesis is H48, then the value of µ[H] is replaced by 48, and the value of s[H] is replaced as shown in the equation in figure 59.
Figure 59. Equation. Maximum likelihood estimate with μ_H = 48: s_48 = √( (1/n) Σ_i (x_i − 48)² ).
This likelihood value is computed for each hypothesis, and the ratio of these two numbers is called the likelihood ratio, a computation of the relative likelihood of the data under the two competing hypotheses.
Table 12 captures all of the relevant information for the three datasets presented in the report. MLE of SD is the maximum likelihood estimate of s, the standard deviation, referenced above.
Table 12. Likelihood ratio stats for protocol versus nonprotocol d7 sensor positions.
│ FWD Serial Number │ Sample Mean, │ MLE of SD under H48 │ MLE of SD under H60 │ Sample Size, n │ Likelihood under H48 │ Likelihood under H60 │ Likelihood Ratio │
│ 8002–129 │ 47.99 │ 1.88 │ 12.16 │ 12 │ 0.7350095 │ 0.0003252 │ 2,260 │
│ 8002–132 │ 48.12 │ 2.33 │ 12.10 │ 29 │ 0.8840417 │ 1.531E–07 │ 5,773,093 │
│ 8002–061 │ 47.20 │ 1.44 │ 12.87 │ 65 │ 9.4002E–05 │ 2.789E–15 │ 33,706,837,035 │ | {"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/ltpp/03093/appendm.cfm","timestamp":"2014-04-18T00:50:56Z","content_type":null,"content_length":"15834","record_id":"<urn:uuid:b4db4d29-2920-4a27-b435-2faea7edc3de>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
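The likelihood columns can be checked directly from the model described above (the sample mean treated as normal with standard deviation s/√n). A short Python check, using the rounded values as printed in the first row, so the results match the table only approximately:

```python
import math

def likelihood(xbar, mu, s, n):
    """Normal density of the sample mean: xbar ~ N(mu, s/sqrt(n))."""
    se = s / math.sqrt(n)
    z = (xbar - mu) / se
    return math.exp(-0.5 * z * z) / (math.sqrt(2.0 * math.pi) * se)

# First row of table 12 (FWD serial number 8002-129), values as printed.
xbar, n = 47.99, 12
L48 = likelihood(xbar, 48.0, 1.88, n)   # about 0.735, matching the table
L60 = likelihood(xbar, 60.0, 12.16, n)  # about 3.3e-4
ratio = L48 / L60                       # about 2,250; the table reports 2,260
```

The small discrepancy in the ratio comes from the table's inputs being rounded to two decimal places.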
Long division
In arithmetic, long division is a standard division algorithm suitable for dividing multidigit numbers that is simple enough to perform by hand. It breaks down a division problem into a series of
easier steps. As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving
arbitrarily large numbers to be performed by following a series of simple steps.^[1] The abbreviated form of long division is called short division, which is almost always used instead of long
division when the divisor has only one digit.
Place in education
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise, and decreasing the educational opportunity to show
how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms). In the United States, long division has been especially targeted for de-emphasis,
or even elimination from the school curriculum, by reform mathematics, though traditionally introduced in the 4th or 5th grades.
In English-speaking countries, long division does not use the slash (/) or obelus (÷) signs, instead displaying the dividend, divisor, and (once it is found) quotient in a tableau.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated
(this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the
remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
  125      (Explanations)
4)500
  4        (4 × 1 = 4)
  10       (5 - 4 = 1)
   8       (4 × 2 = 8)
   20      (10 - 8 = 2)
   20      (4 × 5 = 20)
    0      (20 - 20 = 0)
In the above example, the first step is to find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once; this shortest sequence in
this example is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient.
Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number (4 in this case) that is a multiple of the divisor 4 without exceeding the 5; this product of 1 times 4 is 4, so 4 is
placed underneath the 5. Next the 4 under the 5 is subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5. This remainder 1 is necessarily smaller than the divisor 4.
Next the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10. At this point
the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above the 0 that is next to the
5 – that is, directly above the last digit in the 10. Then the latest entry to the quotient, 2, is multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so
8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8. This remainder 2 is necessarily smaller than the divisor 4. The next digit of
the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2, to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20 is
ascertained; this number is 5, so 5 is placed above the last dividend digit that was brought down (i.e., above the rightmost 0 in 500). Then this new quotient digit 5 is multiplied by the divisor 4
to get 20, which is written at the bottom below the existing 20. Then 20 is subtracted from 20, yielding 0, which is written below the 20. We know we are done now because two things are true: there
are no more digits to bring down from the dividend, and the last subtraction result was 0.
If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action. (1) We could just stop there and say that the dividend
divided by the divisor is the quotient written at the top with the remainder written at the bottom; equivalently we could write the answer as the quotient followed by a fraction that is the remainder
divided by the divisor. Or, (2) we could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the
dividend), in order to get a decimal answer, as in the following example.
   31.75
 4)127.00
   124
     3
     0      (0 is written because 4 does not go into 3, using whole numbers.)
    30      (0 is added in order to make 3 divisible by 4; the 0 is accounted for by adding a decimal point in the quotient.)
    28      (7 × 4 = 28)
    20      (an additional zero is brought down)
    20      (5 × 4 = 20)
     0
In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend.
This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead
performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1.
Basic procedure for long division by longhand
1. When dividing two numbers, for example, n divided by m, n is the dividend and m is the divisor; the answer is the quotient.
2. Find the location of all decimal points in the dividend and divisor.
3. If necessary, simplify the long division problem by moving the decimals of the divisor and dividend by the same number of decimal places, to the right, (or to the left) so that the decimal of the
divisor is to the right of the last digit.
4. When doing long division, keep the numbers lined up straight from top to bottom under the tableau.
5. After each step, be sure the remainder for that step is less than the divisor. If it is not, there are three possible problems: the multiplication is wrong, the subtraction is wrong, or a greater
quotient is needed.
6. In the end, the remainder, r, is added to the growing quotient as a fraction, r/m.
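The six steps above can be condensed into a few lines of illustrative Python (our sketch, not part of the article): each iteration "brings down" one digit, takes the largest multiple of the divisor that fits, and keeps the running remainder below the divisor.

```python
def long_division(dividend, divisor):
    """Digit-by-digit long division of non-negative integers.

    Returns (quotient, remainder), mirroring the paper-and-pencil procedure.
    """
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        q = remainder // divisor                 # largest multiple that fits
        quotient_digits.append(str(q))
        remainder -= q * divisor                 # now strictly < divisor
    return int("".join(quotient_digits)), remainder

print(long_division(500, 4))        # (125, 0)
print(long_division(1260257, 37))   # (34061, 0)
print(long_division(127, 4))        # (31, 3)
```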
Example with multi-digit divisor
A divisor of any number of digits can be used. In this example, 37 is to be divided into 1260257. First the problem is set up as follows:

   37)1260257
Digits of the number 1260257 are taken until a number greater than 37 occurs. So 1 and 12 are less than 37, but 126 is greater. Next, the greatest multiple of 37 less than 126 is computed. So 3 × 37
= 111 < 126, but 4 × 37 > 126. This is written underneath the 126 and the multiple of 37 is written on the top where the solution will appear:
        3
   37)1260257
      111
Note carefully which columns these digits are written into - the 3 is put in the same column as the 6 in the dividend 1260257.
The 111 is then subtracted from the above line, ignoring all digits to the right:
        3
   37)1260257
      111
       15
Now digits are copied down from the dividend and appended to the result of 15 until a number greater than 37 is obtained. 150 is greater so only the 0 is copied:
        3
   37)1260257
      111
       150
The process repeats: the greatest multiple of 37 less than 150 is subtracted. This is 148 = 4 × 37, so a 4 is added to the solution line. Then the result of the subtraction is extended by digits
taken from the dividend:
        34
   37)1260257
      111
       150
       148
        225
Notice that two digits had to be used to extend 2, as 22 < 37.
This is repeated until 37 divides the last line exactly:
        34061
   37)1260257
      111
       150
       148
        225
        222
          37
Mixed mode long division
For non-decimal currencies (such as the British £sd system before 1971) and measures (such as avoirdupois) mixed mode division must be used. Consider dividing 50 miles 600 yards into 37 pieces:
      m  -    yd - ft -  in
      1  -   634 -  1 -   9  r. 15"
37)  50  -   600 -  0 -   0
     37    22880   66   348
     ==    =====   ==   ===
     13    23480   66   348
            222    37   333
  17600     128    29    15
   5280     111   348
  22880     170   ===
  =====     148
             22
             ==
Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards, the
result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend giving 23,480. Long division of 23,480 / 37 now proceeds as normal yielding 634 with
remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29 which is then multiplied by twelve to get 348 inches. Long
division continues with the final remainder of 15 inches being shown on the result line.
Non-decimal radix
The same method and layout is used for binary, octal and hexadecimal. An address range of 0xf412df divided into 0x12 parts is:
     0d8f45 r. 5
12 ) f412df
     ea
      a1
      90
      112
      10e
        4d
        48
         5f
         5a
          5
Binary is of course trivial because each digit in the result can only be 1 or 0:
          1110 r. 11
1101) 10111001
       1101
       10100
        1101
        1110
        1101
          11
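The procedure is radix-independent, which is easy to demonstrate in code. A hypothetical Python helper that long-divides a number given as a digit list in any base, checked against the hexadecimal example above:

```python
def long_divide_base(digits, divisor, base):
    """Long division of a number given as a list of base-`base` digits.

    Returns (quotient_digits, remainder); the divisor is a small integer.
    """
    quot, rem = [], 0
    for d in digits:
        rem = rem * base + d          # bring down the next digit
        quot.append(rem // divisor)
        rem %= divisor
    while len(quot) > 1 and quot[0] == 0:  # strip leading zeros
        quot.pop(0)
    return quot, rem

# 0xf412df divided by 0x12, as in the example above.
hex_digits = [int(c, 16) for c in "f412df"]
q, r = long_divide_base(hex_digits, 0x12, 16)
print("".join(format(d, "x") for d in q), "r.", r)   # d8f45 r. 5
```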
Interpretation of decimal results
When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen. (1) The process can terminate, which means that a remainder of 0 is
reached; or (2) a remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be
pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever.
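This remainder argument doubles as an algorithm: record where each remainder first appears, and the repeating block starts at the first recurrence. An illustrative Python sketch:

```python
def decimal_expansion(n, d):
    """Long-divide n/d, marking a repeating cycle with parentheses."""
    integer, r = divmod(n, d)
    digits, seen = [], {}
    while r and r not in seen:
        seen[r] = len(digits)   # remember where this remainder occurred
        r *= 10                 # bring down a zero
        digits.append(str(r // d))
        r %= d
    if not r:                   # terminated: the remainder reached 0
        frac = "".join(digits)
        return f"{integer}.{frac}" if frac else str(integer)
    i = seen[r]                 # cycle starts where r first appeared
    return f"{integer}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

print(decimal_expansion(127, 4))   # 31.75
print(decimal_expansion(1, 3))     # 0.(3)
print(decimal_expansion(1, 7))     # 0.(142857)
```

Since there are only d possible remainders, the loop always halts, which is the same pigeonhole observation made above.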
Notation in non-English-speaking countries
China, Japan and India use the same notation as English-speakers. Elsewhere, the same general principles are used, but the figures are often arranged differently.
Latin America
In Latin America (except Argentina, Mexico, Colombia, Venezuela and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used
above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations.
500 ÷ 4 = 125      (Explanations)
4                  (4 × 1 = 4)
10                 (5 - 4 = 1)
 8                 (4 × 2 = 8)
 20                (10 - 8 = 2)
 20                (4 × 5 = 20)
  0                (20 - 20 = 0)
127 ÷ 4 = 31.75
124
  3
  0       (0 is written because 4 does not go into 3, using whole numbers.)
 30       (a 0 is added in order to make 3 divisible by 4; the 0 is accounted for by adding a decimal point in the quotient)
 28       (7 × 4 = 28)
 20       (an additional zero is added)
 20       (5 × 4 = 20)
  0
In Mexico, the US notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below:
  125     (Explanations)
4)500
  10      (5 - 4 = 1)
   20     (10 - 8 = 2)
    0     (20 - 20 = 0)
In Brazil, Venezuela and Colombia, the European notation (see below) is used, except that the quotient is not separated by a vertical line, as shown below:
 127 |4
−124  31,75
   3
−  0
  30
− 28
  20
− 20
   0
The same procedure applies in Mexico: only the result of the subtraction is annotated, and the calculation is done mentally.
In Spain, Italy, France, Portugal, Romania, Turkey, Greece, Belgium, and Russia, the divisor is to the right of the dividend, separated by a vertical bar. The division also proceeds column by column, but the quotient (result) is written below the divisor, separated by a horizontal line.
 127 |4
−124 |31,75
   3
−  0
  30
− 28
  20
− 20
   0
In France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1.
6 3 5 9 | 1 7
−5 1    | 3 7 4
 1 2 5
−1 1 9
   6 9
  −6 8
     1
Decimal numbers are not divided directly, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4
(commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above.
In Germany, the notation of a normal equation is used for dividend, divisor and quotient (cf. first section of Latin American countries above, where it's done virtually the same way):
127 : 4 = 31,75
−124
   3
 − 0
  30
 −28
  20
 −20
   0
The same notation is adopted in Denmark, Norway, Macedonia, Poland, Croatia, Slovenia, Hungary, Czech Republic, Slovakia, Vietnam and in Serbia.
In the Netherlands, the following notation is used:
12 / 135 \ 11,25
     12
      15
      12
       30
       24
        60
        60
         0
Rational numbers
Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure
can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the
appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above.
A generalised version of this method called polynomial long division is also used for dividing polynomials (sometimes using a shorthand version called synthetic division).
External links

• Long Division and Euclid's Lemma
Can a Mathematician Be All Things to All People?
March 8, 1998
Students learn mathematics "in courses that have been in existence for 30 or 40 years without much change in the curriculum," says Fan Chung Graham; "there is a nontrivial gap between classroom
mathematics and the math used in current technology."
In an undergraduate course she has developed at the University of Pennsylvania, Fan Chung Graham emphasizes the flexibility and reachout mentality that she sees as critical to anyone practicing
mathematics today.
"Mathematics is the foundation of science and technology," says Fan Chung Graham. "A student with solid mathematical training has an advantage in dealing with all sorts of tasks in this information
age. Teaching is just one of the good things for mathematicians to do." Chung would also like to see, for example, "many more CEOs who are mathematicians."
Currently a professor of mathematics and computer science at the University of Pennsylvania, Chung is exceptionally broad-minded about the paths taken by her students, counting as success stories not
only those who go on to do graduate work in combinatorics (her own field), but also a student who became an executive in a (family-owned) recycling company and another who accepted a position at
Salomon Brothers.
Since her arrival at Penn three years ago, Chung has been working toward what she believes should be the goal of any successful mathematics department today: the generation of "dynamic students." She
has pursued that goal in part by developing and teaching the predominantly undergraduate course whose graduates included the recycling executive and the Wall Street employee. Called Topics in Applied
Mathematics, the course is being offered at Penn as part of the university's participation in the National Science Foundation's Mathematical Sciences and Their Applications Throughout the Curriculum
undergraduate education initiative. The course, she tells SIAM News one afternoon over lunch, had its genesis in her 19 years at AT&T Bell Laboratories and Bell Communications Research.
Arriving at Bell Labs more than 20 years ago with a degree in mathematics, she recalls, she soon found her research enriched by problems arising in many areas of communication and computation. She
collaborated with engineers (on switching networks, dynamic routing, optical codes, network design), with computer scientists (on algorithmic design and analysis, parallel architectures and
computation), and with chemists and physicists (on metal clustering and on Buckminsterfullerene). Her research in combinatorics has also involved collaborations, with researchers in number theory,
spectral geometry, and group representations, among other areas.
At Bellcore, when research vice president Alfred Aho requested an in-depth study of software reliability, Chung was selected to lead the project. She pulled together a group of experts and immersed
herself for six months in both the global perspectives and the technical issues of protocols, networking, software engineering, and their mathematical foundations. She talked with hundreds of people
in both engineering and research communities, and she read many software engineering papers-easier, she believes, given her background in mathematics and computer science, than it would have been for
an engineer to acquire the necessary background in mathematics. She wrote the executive summary of the report, which later appeared in IEEE Communications. (Chung's prolific research, and especially the extent to which collaboration with other researchers has been important to her career, is discussed in depth in a new book on women in mathematics; see review.)
The best part about working at industrial labs, says Chung, who headed the mathematics division at Bellcore before being named a fellow toward the end of her ten years there, is that there are "no
boundaries between disciplines. The mathematicians at the labs are dynamic-they can work on mathematical problems arising in all phases of communications." Some of the new mathematics graduates who
were applying for jobs, in contrast, appeared to Chung to be "very narrow." Although in most cases "very intelligent, they are not able to apply their ideas to other related areas."
Why was this happening? Students learn mathematics "in courses that have been in existence for 30 or 40 years without much change in the curriculum," Chung points out; "there is a nontrivial gap
between classroom mathematics and the math used in current technology." The world has changed, she says; "mathematics has its beauty and truth, but it also has power and impact, which are often
revealed by its connecting with real-world problems. Sometimes this takes the form of connecting several different disciplines within mathematics."
Having been in positions at both Bell Labs and Bellcore to make sure that the mathematics budgets were justified, Chung has a large collection of real-world connections that couple theory with the
practice of mathematics. Mathematics plays an essential role in permutation networks and routing, for example. Her mathematics group at Bellcore produced such specific products as a network
optimization package for optical-fiber networks. Having also been responsible for building up a group in cryptography, she cites advances in time-stamping and cryptographic protocols as examples of
that group's contributions.
Chung had her "dynamic" former colleagues in mind when she first began to develop her course at Penn, and in some cases she calls on them now to address her classes in person. In the one-semester
course she has now taught twice, with a theme of probability and random processes, she invites four or five mathematicians from industry to visit the class. Speakers to date have discussed
mathematical finance, applications of probability in communications, string-matching, and electronic digital cash. She prepares the students for each speaker and works with the speakers to make sure
that they will address the students' interests at an appropriate level. Much to her gratification, the speakers have been impressed by the students' interest and by the nature of their questions, in
some cases asking to return and sometimes even recruiting students on the spot. Looking ahead to next semester, when the theme of the course will be optimization, Chung envisions again a speaker in
mathematical finance, another in game theory or combinatorial chemistry, and probably one in network optimization.
Enrollment in the course has increased, is in fact at the maximum she can handle, given the extent to which she interacts with the students individually, especially in relation to their projects.
Project selection itself is a learning process, she explains; instead of giving a midterm exam, she meets with the students to help them figure out what they'd like to work on, which in many cases
turns out to be a topic addressed by a visiting speaker; once the students' areas of interest have been determined, she gives them materials to read and refers them to other papers. The students
later submit titles and outlines to her and finally make oral presentations-which must take the form of a story, with a beginning, a well-delineated research part, and an ending. After such
preparations, it is quite easy to complete the written report. "Working on projects is an effective way for the students to have a taste of creative thinking and independent study. Such experience
can be very useful for job interviews or graduate school applications."
Chung's own flexibility seems to have made her equally capable of succeeding in the corporate research laboratory environment of Bell Labs and Bellcore and in the very different world of academia
(before joining the Penn faculty, she had spent leaves from the labs at the Institute for Advanced Study and Harvard). The soft-spoken, gentle manner that has helped endear her to her students,
though, belies some forthright opinions that would be considered controversial and even subversive by many in academia. "Mathematical research means using existing knowledge to tackle the unknown.
Having first-hand experience in using mathematics can greatly help teaching," she says. "The mathematics community needs to reach out"; otherwise, she believes, its current decline will continue.
It is not a gloomy prospect that she holds out, however. Today, she says, the rapidity with which technology is moving is pushing mathematicians to use all available techniques: "The time of the
Renaissance mathematician is coming back." Her own work can be offered as evidence: In addition to the serious work on applications for which she is well known to the applied and computational
mathematics community, people in group representations, finite fields, differential geometry, analysis, as well as computer science and communication systems . . . "all think I am one of them," she
says; "I think mathematicians can be all things to all people."
Success in mathematics usually requires an individual to have a specialty in one topic, and to keep up with advances in that area; to that prescription, Chung would add, "Advances are often made by
focusing on one special aspect. However, the research will have very little impact if it cannot be transported to more than one place." Depth and breadth are not conflicting goals for a
mathematician, she believes; rather, they reinforce and complement each other. "Making connections is the key." | {"url":"https://www.siam.org/news/news.php?id=812","timestamp":"2014-04-18T11:53:34Z","content_type":null,"content_length":"16325","record_id":"<urn:uuid:26264783-9503-49a1-bb82-0a086236e48d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tiny Genetic Programming Library in AS3 and C++
GP Art Demo
Imagine there is a population of animals with different patterns on their skin. Members of the population that don't blend in with their environment are eaten by predators, and members of the population with good camouflage survive and reproduce, passing the genes that encode their skin pattern on to their offspring (possibly with slight mutations). Over time, the population will evolve toward a well-camouflaged skin pattern. In the demo below, each image is like the skin of an animal in an evolving population. When you click on one of the images, you play the role of the predator, and effectively eat all of the other members of the population. The one surviving member reproduces, creating the next generation of the population.
Inspired by the Tiny GP Contest, I decided to write my own tiny implementation of standard genetic programming (there are many other kinds of GP, but standard GP usually means GP as presented by John Koza). The demo above uses the TinyGP library to generate random evolving art. Each image above comes from a randomly generated mathematical expression. Mouse over an image to see the expression that generates it. Click on an image to replace the entire population of images with mutations of the image you clicked. Click the randomize button to randomize all of the images. You can download the AS3 source for the tiny gp art demo.
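The actual demo is written in ActionScript 3, but the underlying idea (build a random expression tree, then evaluate it at each pixel coordinate to get a brightness) can be sketched in Python. Everything below, including the function set, the 30% chance of stopping early, and the sample expression, is an illustrative assumption rather than the demo's real code:

```python
import math
import random

FUNCS = {'+': 2, '-': 2, '*': 2, 'sin': 1, 'cos': 1}  # name -> arity
TERMS = ['x', 'y', '1']

def grow(depth, max_depth):
    """Build a random expression tree (nested tuples) with the 'grow' method."""
    if depth >= max_depth or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op,) + tuple(grow(depth + 1, max_depth) for _ in range(FUNCS[op]))

def evaluate(expr, x, y):
    """Evaluate a tree at pixel coordinates (x, y)."""
    if expr == 'x': return x
    if expr == 'y': return y
    if expr == '1': return 1.0
    op, *args = expr
    vals = [evaluate(a, x, y) for a in args]
    if op == '+': return vals[0] + vals[1]
    if op == '-': return vals[0] - vals[1]
    if op == '*': return vals[0] * vals[1]
    if op == 'sin': return math.sin(vals[0])
    return math.cos(vals[0])

# Each pixel's brightness comes from evaluating the expression at (x, y).
expr = ('*', ('sin', 'x'), ('+', 'y', '1'))
print(evaluate(expr, 0.5, 0.25))  # sin(0.5) * (0.25 + 1)
```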
GP Regression Demo
This next demo requires a slightly more technical explanation. One of the many uses of genetic programming is non-linear regression—finding an equation that fits a set of example input and output
data. In this demo there are two fields where you can enter a series of inputs and expected outputs.
Click the 'Start New Problem' button, then click the 'Run' button.
It will generate some output that looks like this:
best: (+ x x) depth=2 size=3 fitness=0 evals=100 avgFitness=32.66962
Let's look at what this output means piece by piece. The first part, best: (+ x x), means that (+ x x) is the best expression it found to match the example data. The expression is written in prefix notation, and means the same as x + x or 2x. Notice that the example data we gave it exactly matches the function y=2x or y=x+x. Next, we have depth=2 size=3, which describes the size of the best expression it found. After that, fitness=0 indicates that the error of the best expression with respect to the example data is 0, i.e. it exactly matches. After that, evals=100 is the number of expressions it has generated and tested so far. In this case, it stopped at 100, even though we told it to run 500 iterations, because it found a solution in the initial random population (which has 100 members). Finally, avgFitness=32.66962 is the average error of all members of the population.
You can experiment with entering your own example input and output data. Keep a few things in mind:
• Remember to press the 'Start New Problem' button before you click the 'Run' button, and whenever you change the example input and output data.
• The input and output data must have the same number of items.
• This demo is configured to make only expressions that use +, -, *, / and the number 1. If you give it data that cannot be modeled by an expression consisting only of these primitives, it will
have no hope of finding a perfect match. However, it is not hard to modify the code to add more functions.
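As a rough sketch of what happens internally, the following Python fragment parses a prefix expression such as (+ x x) and scores it against example data. The parser, the protected division, and the absolute-error fitness below are illustrative assumptions, not the library's actual code:

```python
def parse(tokens):
    """Parse a prefix expression like '( + x x )' into nested tuples."""
    tok = tokens.pop(0)
    if tok == '(':
        op = tokens.pop(0)
        args = []
        while tokens[0] != ')':
            args.append(parse(tokens))
        tokens.pop(0)  # drop the closing ')'
        return (op,) + tuple(args)
    return 'x' if tok == 'x' else float(tok)

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, float):
        return expr
    op, a, b = expr
    va, vb = evaluate(a, x), evaluate(b, x)
    if op == '+': return va + vb
    if op == '-': return va - vb
    if op == '*': return va * vb
    return va / vb if vb != 0 else 1.0  # protected division avoids crashes

def fitness(expr, xs, ys):
    """Sum of absolute errors over the example data; 0 means a perfect fit."""
    return sum(abs(evaluate(expr, x) - y) for x, y in zip(xs, ys))

expr = parse('( + x x )'.split())
print(fitness(expr, [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0]))  # 0.0
```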
Genetic Programming Implementation Details
Now that we have seen some demos, let's get down to the technical details of this code. This GP library uses the standard Koza expression tree program representation. It uses the 'grow' algorithm to generate random expressions. Mutation is performed by selecting a random subexpression in an expression tree, and replacing it with a new random expression (which satisfies the maximum tree depth constraint). Crossover (mating) between two expressions is performed by selecting a random subexpression in each parent, then exchanging them (although it only makes one child, not two).
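A minimal sketch of that subtree crossover on tuple-based trees might look like the following Python. This is illustrative only: the library itself is AS3/C++, and the helper names nodes, replace, and crossover are my own. Mutation works the same way, except the incoming subtree is freshly grown at random rather than taken from a second parent:

```python
import random

def nodes(expr, path=()):
    """Yield (path, subtree) pairs for every node of a tuple-based tree."""
    yield path, expr
    if isinstance(expr, tuple):
        for i, child in enumerate(expr[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(expr, path, sub):
    """Return a copy of expr with the subtree at path swapped for sub."""
    if not path:
        return sub
    i = path[0]
    return expr[:i] + (replace(expr[i], path[1:], sub),) + expr[i + 1:]

def crossover(a, b):
    """Pick a random point in a and a random subtree of b; exchange them.

    As described above, this produces one child, not two. Mutation is
    the same operation with a freshly grown random subtree in place of sb.
    """
    pa, _ = random.choice(list(nodes(a)))
    _, sb = random.choice(list(nodes(b)))
    return replace(a, pa, sb)

random.seed(2)
child = crossover(('+', 'x', ('*', 'x', '1')), ('-', 'x', '1'))
print(child)
```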
In addition to the core code for creating, mutating, mating and evaluating expressions, the library includes a steady-state genetic algorithm with tournament selection, and a worst-out, elitist replacement policy (i.e. when a new child is created, it replaces the worst member of the population, only if it is better).
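That steady-state loop can be sketched generically. In this illustrative Python, individuals are plain numbers, breed is a stand-in for crossover, and the tournament size, the toy fitness, and all names are my own assumptions for demonstration:

```python
import random

def tournament(pop, fitnesses, k=2):
    """Index of the best of k randomly sampled members (lower fitness wins)."""
    picks = random.sample(range(len(pop)), k)
    return min(picks, key=lambda i: fitnesses[i])

def steady_state_step(pop, fitnesses, breed, evaluate):
    """One steady-state iteration with worst-out, elitist replacement:
    the child replaces the worst member only if the child is better."""
    a = tournament(pop, fitnesses)
    b = tournament(pop, fitnesses)
    child = breed(pop[a], pop[b])
    child_fit = evaluate(child)
    worst = max(range(len(pop)), key=lambda i: fitnesses[i])
    if child_fit < fitnesses[worst]:
        pop[worst], fitnesses[worst] = child, child_fit

# Toy run: "individuals" are numbers and fitness is distance from 10.
random.seed(0)
pop = [0.0, 3.0, 7.0, 20.0]
fits = [abs(x - 10) for x in pop]
for _ in range(50):
    steady_state_step(pop, fits, breed=lambda x, y: (x + y) / 2,
                      evaluate=lambda x: abs(x - 10))
print(min(fits))
```

Because replacement is elitist, the worst fitness in the population can never get worse from one iteration to the next.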
Note that the art demo does not use crossover or the steady-state genetic algorithm; it just uses random expression creation and mutation (which is like a stochastic hill climbing algorithm).
Implementation in C++
I originally wrote the TinyGP library in C++, then ported it to ActionScript 3. You can download the C++ source code for TinyGP. One nice thing about the C++ version is that it is all contained within one file, so it is easy to add to other projects (this wasn't possible in AS3, because each class must be in a file with the same name).
Aside from differences in language syntax, the two implementations are algorithmically identical. I timed both implementations running through 1000 iterations, and the C++ implementation is about 10 times faster than the Flash implementation.
Strongly Typed, Functional Genetic Programming
TinyGP is simple as far as genetic programming systems go. If you want to see some really fancy genetic programming, check out my research on strongly typed functional genetic programming with combinator expressions (shameless plug): Genetic Programming Bibliography for Forrest Briggs. It can evolve expressions in a subset of Standard ML, and functions that operate on arbitrary recursive algebraic data types, rather than just numbers.
Abstract Algebra: Rings with zero divisors involving Cartesian Product
October 7th 2010, 03:42 PM #1
Oct 2010
The problem states:
Let R and S be nonzero rings. Show that R x S contains zero divisors.
I had to look up what a nonzero ring was. This means the ring contains at least one nonzero element.
R x S is the Cartesian Product so if we have two rings R and S
If r1, r2 belong to R and s1, s2 belong to S
(r1, s1) + (r2,s2) = (r1+r2, s1 + s2)
I am using * to denote multiplication here.
(r1, s1)*(r2,s2) = (r1r2,s1s2)
Since we are talking about zero divisors I am going to need the definition of multiplication in the Cartesian Product.
I mean I am going to need that (r1r2, s1s2) = (0,0)
so r1r2 = 0 and s1s2 = 0. But neither r1 = 0 = r2 nor s1 = 0 = s2.
Can anyone help?
October 7th 2010, 07:38 PM #2
Oct 2009
$(r,0)\cdot(0,s)=(0,0)\,,\,\,0\neq r\in R\,,\,0\neq s\in S$
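The argument can also be checked concretely. This illustrative Python (my own example, not part of the thread) searches the small product ring Z_3 x Z_4 for pairs of nonzero elements whose componentwise product is (0, 0):

```python
from itertools import product

def product_ring_zero_divisors(n, m):
    """Nonzero pairs of Z_n x Z_m whose componentwise product is (0, 0)."""
    elems = [e for e in product(range(n), range(m)) if e != (0, 0)]
    hits = []
    for (a1, b1), (a2, b2) in product(elems, repeat=2):
        # Multiplication in the product ring is componentwise, mod n and mod m.
        if ((a1 * a2) % n, (b1 * b2) % m) == (0, 0):
            hits.append(((a1, b1), (a2, b2)))
    return hits

# The pair from the answer above: (r, 0) * (0, s) = (0, 0) with r, s nonzero.
pairs = product_ring_zero_divisors(3, 4)
print(((1, 0), (0, 1)) in pairs)  # True
```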
Vacaville Prealgebra Tutor
Find a Vacaville Prealgebra Tutor
...Overall I feel that MAST has enhanced my tutoring capabilities and enriched my knowledge of teaching. I believe that one of the common problems that students have with math and science subjects
is that they do not understand what the problems are even asking them. In addition, I have found that...
34 Subjects: including prealgebra, chemistry, calculus, writing
...I teach in part to a large percentage of students who are college bound. The mathematics on the ACT and SAT is primarily Algebra and Geometry, and I find the problems on the exams exciting (especially those on the SAT). I am a teacher who had to pass the CBEST prior to becoming a student teacher. ...
13 Subjects: including prealgebra, calculus, geometry, ASVAB
...I have tutored adults and children and have taught for several years in High School and Community College. I am also a Special Education teacher, with experience teaching dyslexic students in
High School and Community College. I hold California teaching certificates in Elementary School Subjects, Special Education, and teaching English to ESL students.
21 Subjects: including prealgebra, reading, grammar, English
...It teaches students thinking, communicating, and problem solving skills. Applying axioms, postulates, and theorems in solving problems are practices that help students to become independent
learners. In Geometry, the more theorems are stored in the students' repertoire, the easier is the course.
17 Subjects: including prealgebra, calculus, statistics, geometry
...High School students will be prepared for pre-calculus and calculus instruction, and will be in position to confidently pass their AP tests. ALL students are fully capable of high academic
achievement. However, many students are not confident in their abilities.
13 Subjects: including prealgebra, calculus, physics, geometry | {"url":"http://www.purplemath.com/Vacaville_prealgebra_tutors.php","timestamp":"2014-04-17T16:12:36Z","content_type":null,"content_length":"24043","record_id":"<urn:uuid:280d28b7-a489-45b3-9964-2fe554f59e4f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
intermediate algebra-check my answer
Posted by Jena on Tuesday, February 15, 2011 at 7:57pm.
Find the equation of the line.
find equation of form y=ax+b, where a is slope, and b is intercept, which passes through points (x1, y1) = (-8, -6) and (x2, y2) = (0, 0)
slope = (0 - (-6))/(0 - (-8)) = 6/8 = 3/4
how would I write it?
y=3/4x or y=3/4x+0 ?
• intermediate algebra-check my answer - helper, Tuesday, February 15, 2011 at 8:03pm
y = ax + b
m = 3/4
y = 3/4 x + b
You need to find b for this equation.
Substitute one of the given points into your equation and solve for b.
Then use that b value for your equation.
• intermediate algebra-check my answer - Jena, Tuesday, February 15, 2011 at 8:10pm
I don't get a value for b. What am I missing?
• intermediate algebra-check my answer - helper, Tuesday, February 15, 2011 at 8:25pm
You were given the points,
(x1, y1) = (-8, -6) and (x2, y2) = (0, 0)
y = 3/4 x + b
Use one of the given points, for example,
(-8, -6)
Solve for b
-6 = 3/4(-8) + b
b = 0
Therefore, the equation is
y = 3/4 x + 0
y = 3/4 x
I guess I misunderstood you. I didn't realize that you had already found the b value to be zero (since you didn't state that b = 0 originally). I shouldn't have assumed.
• intermediate algebra-check my answer - Jena, Tuesday, February 15, 2011 at 9:38pm
Ok thanks
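For reference, the computation worked out in this thread can be verified in a couple of lines (illustrative Python, not part of the original exchange):

```python
def line_through(p1, p2):
    """Slope-intercept form y = m*x + b through two points.

    Assumes the line is not vertical (x1 != x2).
    """
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # rise over run
    b = y1 - m * x1             # solve y1 = m*x1 + b for b
    return m, b

m, b = line_through((-8, -6), (0, 0))
print(m, b)  # 0.75 0.0
```

Since b comes out to 0, the line y = 3/4 x from the thread checks out.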
A. Formulation for the analysis
B. Optimal low-rank approximation of the resolvent for inhomogeneous coordinates
C. Response of radially varying traveling waves and the connection with critical layer theory
D. Closing the loop: An explicit treatment of the nonlinearity
E. Where to look in 3D spectral space?
A. Near-wall cycle
B. Characteristics of the very large scale motions
C. Hairpin vortices and structural organization
D. Summary of results regarding unperturbed wall turbulence
A. Turbulent boundary layer excitation using a dynamic roughness impulse
B. Non-ideal forcing: Separation of roughness effects and dynamic forcing
C. Identification of the dynamic perturbation through phase-locking
D. Predicting the synthetic large-scale motion
A. Summary
B. Limitations of the approach
C. Future trends | {"url":"http://scitation.aip.org/content/aip/journal/pof2/25/3/10.1063/1.4793444","timestamp":"2014-04-21T07:56:46Z","content_type":null,"content_length":"122289","record_id":"<urn:uuid:7ce8d31f-9a1c-476e-a765-ba2847ae3b46>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
2D Perspective xform using CSS 3D matrix?
Hey SporkInTucson and welcome to the forums.
Are you trying to transform a plane from one orientation to another? Do you want to use the same projection for both objects?
It seems to me that from the images you are taking a quadrilateral (which can be defined using two triangles since every triangle defines a unique plane) and then you are translating the two triangles using a transform.
If that is all you are doing (transforming the two triangles by using translations, rotations, and scaling matrices), then you need to basically concatenate a few matrix transforms together to get
your final position and then apply a standard projection.
I'm not completely sure about what you want to do though, so I based this response on your images that you have attached.
In terms of deriving the transformation matrices in the above context, you need to understand arbitrary rotations and translations. For translations, this won't be much of an issue (as will scaling),
but for rotations, especially ones that are not axis aligned, this will be a slight issue.
The rotation matrix you should use should be based on rotation about a specified axis. Here is something for you to read:
Once you have these down, the next part is choosing the right concatenation of the matrices. The order is important because rotating and then translating is completely different to translating and
then rotating. | {"url":"http://www.physicsforums.com/showthread.php?t=553078","timestamp":"2014-04-18T10:53:38Z","content_type":null,"content_length":"36842","record_id":"<urn:uuid:cae810fa-8e0e-4a4a-b544-6e3125866a23>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
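To make that advice concrete, here is an illustrative Python sketch of rotation about an arbitrary unit axis (Rodrigues' rotation formula) together with matrix concatenation. The helper names and the pure-Python 3x3 representation are my own choices, not from the thread:

```python
import math

def rotation_matrix(axis, theta):
    """3x3 matrix rotating by theta radians about a unit axis (Rodrigues)."""
    x, y, z = axis
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
        [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
        [z*x*C - y*s, z*y*C + x*s, c + z*z*C],
    ]

def matmul(A, B):
    """Concatenate two transforms: matmul(A, B) applies B first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0).
Rz = rotation_matrix((0, 0, 1), math.pi / 2)
print([round(c, 6) for c in apply(Rz, [1, 0, 0])])  # [0.0, 1.0, 0.0]
```

The order of the arguments to matmul is exactly the "rotating then translating is different to translating then rotating" point made above: matrix concatenation does not commute.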
Expression Generics
Steve Samarov is a principal architect at Foliage Software. Over his career, Steve has held technical leadership and software development positions at Xerox, Microsoft, and other companies.
In my .NET development work, I always look for ways to apply lessons and techniques learned as a C++ programmer. There are substantial differences between C++ and C#, making it an interesting
challenge to find in C# a conceptual equivalent to a C++ construct or idiom.
Expression templates are one such example. Described by Todd Veldhuizen in 1995, they have been addressed by the likes of Angelika Langer and Klaus Kreft in C++ Expression Templates and Thomas Becker
in Expression Templates. Veldhuizen's goal was to create a C++ class library of numeric algorithms with "performance on par with Fortran 77/90." Expression templates are a useful tool in imaging
applications that calculate algebraic expressions with bitmaps as input. The focus of this article is finding a C# technique conceptually equivalent to C++ expression templates that could be used by
.NET programmers in imaging, numeric calculations, and other areas.
What are Expression Templates?
In this article, I follow the example in Veldhuizen's original paper -- computation of expressions over vectors of double-precision values. With a user-defined class Vector representing a container
of doubles, my goal is to calculate algebraic expressions that take Vector objects as arguments. For example, given vectors v1 and v2, I can write in C++:
v = Sqrt(v1 + v2)/Log(v1);
The Vector class behaves like built-in types, a desired attribute in object-oriented programming. For this to work, I need a complete set of binary and unary operators, as well as mathematical
functions that I intend to use in the expressions. With conventional operator overloading, evaluation of such expressions requires creation of temporary objects and multiple loops -- one for each
operator and function. In the previous example, four temporary objects and five loops (including the final assignment) are needed to evaluate the expression, making calculations costly in time and space.
Expression templates solve both problems: They eliminate temporary objects and fuse all the loops in the expression into one, resulting in a considerably smaller and faster code. At compile time, the
expression is parsed into a tree-like object. The leaf nodes are iterators positioned at the first elements of the input Vectors, and the inner nodes represent binary and unary operations, and
functions. At runtime, the expression is evaluated in a single loop. In each iteration, the result is calculated for current iterator positions, then all iterators are incremented.
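The core idea (operators build a parse tree, and a single loop then evaluates it element by element) can be sketched in a few lines of Python. This is a conceptual illustration only; the class names below are mine and do not appear in the article's C++ code:

```python
import math

class Expr:
    """Operator overloads build a parse tree instead of computing a result."""
    def __add__(self, other):
        return BinExpr(self, other, lambda a, b: a + b)
    def __truediv__(self, other):
        return BinExpr(self, other, lambda a, b: a / b)

class Vec(Expr):
    def __init__(self, data):
        self.data = data
    def at(self, i):            # leaf node: read the i-th element
        return self.data[i]

class BinExpr(Expr):
    def __init__(self, lhs, rhs, op):
        self.lhs, self.rhs, self.op = lhs, rhs, op
    def at(self, i):            # inner node: combine both children at i
        return self.op(self.lhs.at(i), self.rhs.at(i))

class Map(Expr):
    def __init__(self, f, operand):
        self.f, self.operand = f, operand
    def at(self, i):            # unary function node, e.g. Sqrt or Log
        return self.f(self.operand.at(i))

def assign(expr, n):
    # The single fused loop: no temporary vectors are ever built.
    return Vec([expr.at(i) for i in range(n)])

v1, v2 = Vec([1.0, 6.0]), Vec([3.0, 10.0])
v = assign(Map(math.sqrt, v1 + v2) / Vec([1.0, 2.0]), 2)
print(v.data)  # [2.0, 2.0]
```

In C++ the same shape is expressed with templates so that the tree is built and inlined at compile time, which is what makes the technique fast.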
How Does It Work In C++?
Class Vector, the container of double-precision numbers, must have an iterator and functions begin() and end() that return iterators positioned at both ends of the container. For example, my iterator is:
typedef double* Iter;
Next, I need two classes that represent binary and unary operations in the parse tree of the expression. The binary expression class, DVBinExpr (DV stands for "double vector"), is a template with
three parameters: the iterator type of the left operand, the iterator type of the right operand, and the type of the operation. The last parameter is an "applicative" class; it defines a static
function apply() that takes two iterators as arguments and returns the result. The binary expression class must behave as an iterator and define operator*() and operator++(). Example 1 is the
definition of DVBinExpr.
template<class IterA, class IterB, class Op>
class DVBinExpr
{
public:
    typedef DVBinExpr<IterA, IterB, Op> Iter;
    DVBinExpr(IterA ia, IterB ib) : _ia(ia), _ib(ib) {}
    double operator*() const { return Op::apply(_ia, _ib); }
    void operator++() { ++_ia; ++_ib; }
    Iter begin() const { return *this; }
private:
    IterA _ia;
    IterB _ib;
};
The unary expression class, DVUnaryExpr, is similar, except it has only one operand to work with. Example 2 shows the addition applicative class DVOpAdd; similar classes are defined for the other binary and unary operations, as well as for functions such as Log() or Sin().
template<class IterA, class IterB>
class DVOpAdd
{
public:
    static double apply(const IterA& ia, const IterB& ib)
    { return *ia + *ib; }
};
Finally, Example 3 is the overloaded operator+().
template<class OperandA, class OperandB>
DVBinExpr<typename OperandA::Iter, typename OperandB::Iter,
          DVOpAdd<typename OperandA::Iter, typename OperandB::Iter> >
operator+(const OperandA& a, const OperandB& b)
{
    return DVBinExpr<typename OperandA::Iter,
                     typename OperandB::Iter,
                     DVOpAdd<typename OperandA::Iter,
                             typename OperandB::Iter> >(a.begin(), b.begin());
}
The rest of the operators follow the same pattern. As you can see, instead of executing addition of the two operands, I construct a binary expression with the iterators positioned at the beginning of
its operands and DVOpAdd as its applicative. The operands can be either Vector variables or other expressions. In turn, the constructed expression can be an operand of another expression higher in
the parse tree. Figure 1 illustrates the resulting structure.
WordWeb
Noun: math teacher
1. Someone who teaches mathematics
- mathematics teacher
Derived forms: math teachers
Type of: educator, instructor, teacher
Integrated Arithmetic and Basic Algebra Plus MyMathLab Student Access Kit (4th Edition)
A combination of a basic mathematics or prealgebra text with an introductory algebra text, Integrated Arithmetic and Basic Algebra provides a unique, integrated presentation of the material for these
courses that is extremely beneficial to students. As opposed to traditional texts that present arithmetic at the beginning and algebra at the end, this text integrates the two whenever possible, so
that students see how concepts are interrelated rather than learning them in isolation and missing the "big picture." The ideas and algorithms shared by arithmetic and algebra are introduced in an
arithmetic context and then developed through the corresponding algebraic concept. For example, operations with rational numbers and rational expressions are discussed together, whereas most texts
discuss operations with rational numbers early on and operations with rational expressions much later. The Jordan/Palow text helps students recognize algebra as a natural extension of arithmetic
using variables. This approach aligns directly with NCTM and AMATYC standards, which suggest playing upon students' previous knowledge to teach new concepts.
Merchant Format Price
Amazon US Paperback $140.99 - $162.67 | {"url":"http://pdfcast.org/paid/9780321566607","timestamp":"2014-04-18T08:06:19Z","content_type":null,"content_length":"27743","record_id":"<urn:uuid:3ff045dd-8b34-441a-9e83-6c125cf1f164>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
(04023) - Minor Planet Name
Named in memory of Vojtěch Jarník (1897-1970), Czech mathematician and professor at the Charles University in Prague. He studied number theory and was also well known for his work on the theory of
real variable functions. His excellent textbooks on the differential and integral calculus have been used by several generations of mathematicians and physicists. | {"url":"http://www.klet.org/names/view.php3?astnum=4023","timestamp":"2014-04-17T15:43:01Z","content_type":null,"content_length":"3433","record_id":"<urn:uuid:263b8eca-6566-4220-8f27-5f51bc6edb78>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Available Project Titles
Dr Stephen Baigent
1. Lotka-Volterra systems
2. Coupled oscillator models (mostly using Matlab/ Mathematica)
Dr Timo Betcke
1. Fast boundary integral equation methods and their applications
2. Spectral properties of boundary integral operators
Please note that both projects require a good deal of programming in Python.
It is therefore essential that candidates have some programming background and are willing to invest effort into learning Python development.
Dr Robert Bowles
1. Stability of shear flows over compliant surfaces
2. Modelling social interactions with differential equations
3. Motion of a slender tethered blade beneath a free surface or in a compliant channel
Dr Christian Boehmer
1. Continuum mechanics and general relativity
2. Dynamical systems and cosmology
Professor Erik Burman
1. Investigations of numerical methods for two dimensional incompressible flow at high Reynolds number
2. Different regularization methods applied to finite element methods for inverse problems
Dr Gavin Esler
1. Stochastic methods in advection-diffusion problems
A flexible numerical method for solving advection-diffusion problems, relevant to atmospheric transport problems, involves using ensembles of trajectories generated by solving stochastic differential
equations. An aspect of this method will be explored.
2. Statistical mechanics of point vortices
The equations of motion of point vortices are Hamiltonian, and have some interesting properties. The methods of statistical physics can be used to predict the behaviour of the point vortex system,
as a function of energy, when the number of vortices N is large. We will investigate these predictions in domains generated using conformal maps to the unit circle.
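The trajectory-ensemble idea in the first project above can be sketched with an Euler-Maruyama step (my illustration only; the velocity field, diffusivity and step size are invented, and only the pure-diffusion case is checked):

```python
import numpy as np

def trajectory_ensemble(u, kappa, x0, t_end, dt, n_particles, seed=0):
    """Euler-Maruyama ensemble for the SDE dX = u(X) dt + sqrt(2*kappa) dW,
    the stochastic-trajectory view of 1D advection-diffusion."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        # deterministic advection step plus a Gaussian diffusion kick
        x += u(x) * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)
    return x

# Pure-diffusion check: with u = 0 the ensemble variance should grow as 2*kappa*t.
positions = trajectory_ensemble(u=lambda x: 0.0 * x, kappa=0.5, x0=0.0,
                                t_end=2.0, dt=0.01, n_particles=20_000)
```

The final positions are samples of the advection-diffusion concentration field; binning them into a histogram recovers the tracer distribution.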
Dr Jonny Evans
1. Mapping class groups of surfaces
This project concerns symmetries of two-dimensional surfaces. The group of these symmetries (diffeomorphisms considered up to isotopy) is called the mapping class group of the surface and is one of
the most interesting algebraic objects in low-dimensional topology. Special cases include braid groups and SL(2,Z). Thurston proved a classification theorem for elements of mapping class groups
(generalising the classification of elements of SL(2,Z) into elliptic, parabolic and hyperbolic) and the goal of this project would be to explain his proof (though you could take it in other various
A good place to start (available in the maths library):
B. Farb and D. Margalit (2012) " A primer on mapping class groups", Princeton University Press
Good places to continue:
A. Fathi, F. Laudenbach, V. Poenaru (2012) "Thurston's Work on Surfaces", Princeton University Press (Engl. transl. by D. Kim and D. Margalit of "Travaux de Thurston"). Though this is not yet in the
maths library, I have ordered it.
W. Thurston, "On the geometry and dynamics of diffeomorphisms of surfaces", Bulletin of the AMS (New Series), Volume 19, Number 2, 1988 (Open access online).
Professor Rod Halburd
1. Differential equations in the complex domain
Dr Richard Hill
1. Verify the Birch-Swinnerton-Dyer conjecture modulo 2 for a family of elliptic curves (prerequisites 3705 and 3703)
2. Calculate some Iwasawa invariants, and make some corresponding deductions about the class groups of algebraic number fields (prerequisites 3704 and 3703)
Professor Ted Johnson
1. Simple inviscid flows with geophysical applications
Professor Francis EA Johnson
1. Cohomology groups of finite groups
2. Algebraic Topology
Dr Ilia Kamotski
1. Spectral problems for periodic graphs
2. Topics in Homogenisation Theory
Professor Yaroslav Kurylev
1. Topics in inverse and ill-posed problems
Dr Jason Lotay
1. De Rham Cohomology
2. Holonomy
Dr Christian Luebbe
1. Geometrical concepts in general relativity
2. Projective and conformal differential geometries
Professor Robb McDonald
1. Exact solution methods for Laplacian growth
2. Vortex motion around a sharp edge
Dr Nick Ovenden
1. Biomedical Flows
2. Sound transmission and propagation
Dr Karen Page
1. Trust, reputation and the Ultimatum Game
2. Students wishing to do a project in mathematical biology are welcome to come and discuss potential projects directly with the tutor
Professor Leonid Parnovski
1. Periodic spectral problems
2. Variational approach in spectral theory
Dr Yiannis Petridis
1. Gaps between prime numbers
2. The Hardy-Littlewood circle method
3. Other topics in analytic number theory
4. Random matrices and moments of L-functions
5. Counting problems relating to infinite groups, lattices and graphs
Dr Mark Roberts
1. Non-commutative unique factorisation domains
2. Other projects in algebra
Dr Felix Schulze
1. Minimal surfaces and Bernstein's Theorem
Dr Nadia Sidorova
1. Topics in probability
Professor Michael Singer
1. Riemann surfaces and/or algebraic curves
2. Differential topology
3. Topics in geometric analysis: Hodge theory
Professor Frank Smith
1. Industrial modelling problems
2. Biomedical flows
3. Modelling of social dynamics
Professor Valery Smyshlyaev
1. High frequency scattering: asymptotic methods and analysis
2. Multi-scale problems and homogenisation: asymptotics and analysis
Professor Alex Sobolev
1. Spectra of compact operators
Dr Isidoros Strouthos
1. Algebraic K-theory
2. Whitehead torsion
3. Thurston's eight three-dimensional geometries
Dr Sergei Timoshin
1. Instabilities in weakly non-homogeneous systems
2. Turing Instability
Dr John Talbot
1. Cliques in graphs
A good place to start:
V. Nikiforov (2010) The number of cliques in graphs.
C. Reiher (2012) The clique density theorem.
Professor Jean-Marc Vanden-Broeck
1. Analytical and numerical studies of gravity waves of large amplitude
Professor Dmitri Vassiliev
1. Topics in spectral theory of partial differential operators and microlocal analysis
Dr Chris Wendl
1. Morse homology
The general idea of Morse theory is to recover information about the topology of a smooth manifold from the critical points of a smooth real-valued function on that manifold. Morse homology is a
formalisation of this idea, where one studies the spaces of gradient-flow lines of a generic function in order to define algebraic invariants that are independent of the choice of function but depend
on the topology of the domain. This idea has been extremely popular among symplectic and differential topologists since the 1980s, as it inspired a powerful new set of geometric invariants known as
"Floer homologies", which remain an active area of research.
Dr Helen Wilson
1. Non-Newtonian Fluid Mechanics
Dr Henry Wilton
1. Free groups and topology of finite graphs
Dr Andrei Yafaev
1. Topics in arithmetic algebraic geometry
Professor Alexey Zaikin
1. Stochastic modelling of coupled repressilators
Recent advances in synthetic biology have made it possible to construct synthetic genetic networks which demonstrate oscillations. One such classical example is the
repressilator, which consists of three mutually repressive genes. Recently it was shown that coupled repressilators can demonstrate very rich dynamics, e.g. in-phase or anti-phase synchronized
oscillations, oscillatory death or clustering. The project is of computational nature. It is proposed to simulate coupled repressilators with the Gillespie algorithm and study the influence of noise on
coexisting dynamical attractors.
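The Gillespie algorithm mentioned above can be sketched on a single birth-death gene-expression process (my illustration only, not the full three-gene repressilator; the rate constants are invented):

```python
import numpy as np

def gillespie_birth_death(k, gamma, t_end, rng):
    """Exact stochastic simulation (Gillespie SSA) of the reactions
    0 -> X at rate k, and X -> 0 at rate gamma * X."""
    t, x = 0.0, 0
    times, states = [0.0], [0]
    while t < t_end:
        a_birth, a_death = k, gamma * x       # reaction propensities
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)   # waiting time to next event
        if rng.random() < a_birth / a_total:  # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)
```

The stationary mean copy number should be k/gamma; a full repressilator simulation would track three such species, with each gene's birth propensity repressed by another gene's product.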
Dr Sarah Zerbes
1. Brauer groups
Page last modified on 25 mar 13 10:08 | {"url":"https://www.ucl.ac.uk/maths/courses/undergraduates/year4-project/titles","timestamp":"2014-04-17T10:17:27Z","content_type":null,"content_length":"30015","record_id":"<urn:uuid:37772bd6-f693-4a94-8b6b-868ca7609c37>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
What's so super about Supersymmetry?
The headline discovery out of the LHC was, of course, the Higgs Boson. But the LHC is no one-trick pony. The search is on to find hints of Supersymmetry. What's that, you ask? In this week's column,
we'll find out.
All images via NASA/ESA/Hubble.
There's been a deafening silence from the "Ask a Physicist" desk over the last few months, but there's Good News, everyone! We're back. In even better news, there's going to be an exciting offering
in your local bookstores next summer. In the meanwhile, send me your questions about the universe, physics news, new particles, sci-fi contraptions, or whatever you like.
This week, we'll talk about some of the fallout following the discovery of the Higgs (the particle, you'll recall, that gives other particles their mass). Physics seems to be in pretty good shape.
We've basically confirmed all of the main features of the "Standard Model" of physics. We've now discovered every particle in the model, with no leftovers. But now isn't the time to get complacent.
There are still lots of unanswered questions.
You may have heard some murmurs about a popular idea known as Supersymmetry (or "SUSY" to its friends). There's a lot riding on the possibility of SUSY, including a few problems that are — ironically
— caused by the discovery of the Higgs itself. So today we're going to figure out:
What's so super about supersymmetry?
The Particle Zoo
This being io9, the particle zoo is probably second nature to most of you, but in case it isn't, let me give you a 10 second backgrounder to get you ready for SUSY.
The world of fundamental particles is filled with cool sounding things like quarks and gluons, but at the end of the day, most of the interesting distinctions between particles can be found by
putting them very neatly, and quite unambiguously, into two different piles known as Fermions and Bosons. The Fermions are the most familiar particles: electrons, quarks, neutrinos and the like.
These are the particles that make up matter. Quarks, for instance, make up protons and neutrons which are, themselves, also Fermions.
Bosons, on the other hand, are the particles of force: photons, the W and Z Bosons, gluons, and the Higgs.
The difference between the two groups comes down to a very weird property — one that I've written about before — known as spin. All known particles have an intrinsic, unchanging spin to them. Except
for the Higgs, which doesn't spin at all.
Fermions have a spin of 1/2 (one half, that is, of a number known as the "Reduced Planck Constant"), while Bosons have an integer multiple. Since the Higgs has zero spin and zero is an integer, the
Higgs gets lumped in with the other Bosons.
Those differences seem like the sort of trivia only the nerdiest physicist could get excited about, but they have enormous consequences. The distinction manifests itself when you switch one particle
with another of the exact same type. If you switch two Bosons, then the quantum wavefunction of the universe is multiplied by a +1 (totally unchanged), but if you do the same thing with Fermions, you
get a -1.
That's it. That's literally the most important difference between the two, and yet, that -1 is ultimately responsible for something known as the "Pauli Exclusion Principle," which gives rise to
everything, from all of chemistry to the behavior of White Dwarves.
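That plus-or-minus 1 can be made concrete in a few lines (a toy illustration with made-up three-component amplitudes, not real wavefunctions):

```python
import numpy as np

# Made-up single-particle states, just to show the symmetry bookkeeping.
phi_a = np.array([1.0, 0.0, 0.0])
phi_b = np.array([0.0, 1.0, 0.0])

def two_particle(p1, p2, sign):
    """psi(x1, x2) = p1(x1) p2(x2) + sign * p2(x1) p1(x2);
    sign = +1 for bosons, -1 for fermions."""
    return np.outer(p1, p2) + sign * np.outer(p2, p1)

fermion_pair = two_particle(phi_a, phi_b, -1.0)  # antisymmetric under swap
fermion_same = two_particle(phi_a, phi_a, -1.0)  # both fermions in one state
boson_same   = two_particle(phi_a, phi_a, +1.0)  # bosons happily share
```

Here fermion_same is identically zero: two fermions cannot occupy the same state, which is the Pauli exclusion principle in miniature, while boson_same is not zero.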
What does this have to do with the Higgs?
The Higgs is a pretty important particle in the scheme of things. Did you see the excitement from the Physics community when it was discovered? It's like finding a mint condition Millennium Falcon
still in its original packaging.
We were fairly confident that the Higgs would be discovered. It is a linchpin in the Standard Model, something that seemed absolutely essential to explaining why the weak force was so weak.
Related to that, the Higgs also explained where the masses of particles come from. That's because the Higgs interacts with just about everything.
But these interactions are a two-way street. Remember Newton's 3rd Law, "Every Action has an equal and opposite Reaction." Since "interaction" is simply a fancy word for "energy" and energy and mass
are interchangeable (E=mc^2, remember), the Higgs doesn't just give mass to other particles; other particles give mass to the Higgs. But here's the weird part. The contributions from other
particles can either add extra or subtract from the total. The Higgs mass that we measure at the LHC isn't necessarily the real mass that it would have if we could strip away all of those
This is roughly equivalent to when you go to the doctor's office and they let you leave your clothes on when they weigh you. Whatever weight the scale reads — the weight that's measured by the rest
of the world — is actually more than your "bare" mass. To get your bare mass, you'd need to subtract the weight of your clothes.
One of the strange things about the universe is that particles and antiparticles constantly pop into existence. For the most part, we don't notice them since they don't last for very long, but when
they interact with particles, those interactions can add (or subtract) energy (or what we measure as mass) from other particles.
For the Higgs, this correction should be huge, generally of order the Planck Mass — a hugely gargantuan mass (by particle standards) that basically sits at the limits of our ability to reconcile
quantum mechanics and general relativity.
To put some numbers on it, suppose the bare mass of the Higgs is something like 2,430,000,000,000,000,125 GeV, the interaction with electrons and positrons might subtract 2,430,000,000,000,000,000
GeV, yielding the observed value of 125 GeV.
The fact that the numbers come so close to matching – but don't exactly match – is too much to accept by chance. This means that the true mass of the Higgs would have to be incredibly finely
tuned so that the correction and the bare mass almost (but don't exactly) cancel each other to about 1 part in 10^17. The odds of something like that happening in nature by mere chance is so remote
as to be laughable.
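To see how severe that tuning is, here's a toy check using the article's made-up numbers (exact integer arithmetic; nothing here is a real physics calculation):

```python
# Toy illustration of the fine-tuning claim, with the article's toy numbers.
# Python integers are exact, so the near-total cancellation is done precisely.
bare_mass  = 2_430_000_000_000_000_125   # hypothetical "bare" Higgs mass, GeV
correction = 2_430_000_000_000_000_000   # hypothetical vacuum correction, GeV

observed = bare_mass - correction        # what an experiment would measure
tuning   = observed / bare_mass          # fractional mismatch allowed

print(observed)          # 125
print(f"{tuning:.0e}")   # 5e-17: the two numbers must agree to ~17 digits
```

Note that an ordinary 64-bit float carries only about 16 significant digits, so it couldn't even tell these two numbers apart; that's the scale of coincidence the Standard Model seems to demand.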
I only gave you the correction for electrons and positrons, but there are lots of other types of particles out there. Each and every one is going to interact with the Higgs and add a correction to
the mass.
There's a weird wrinkle to all of this. We saw earlier that Fermions are associated with a -1, and Bosons got a +1 when you switched two identical particles. Those plus and minus 1's are going to be
drafted into service again; they just play a slightly different role this time around.
For each species of Fermion, we subtract from the bare mass to get the observed mass –- that's why I subtracted when talking about electrons –- and with Bosons we add. And for each, we add or
subtract roughly the same amount of mass.
But here's the thing: the pluses and minuses don't add up.
In the Standard Model at least, there aren't equal numbers of Fermions and Bosons. Including all of the combinations of spin and color, there are 28 different Bosons and 90 different Fermions. If you
work through the numbers, this means that the bare mass — the real Higgs mass that you'd measure if you could somehow erase the vacuum of the universe — needs to be something like 62 times the Planck
mass in order to see what we see at the LHC. The bare mass and the correction need to match to something like 18 digits.
If you have to do that level of fine tuning, then you are almost certainly cheating. It's a dirty little secret that a lot of what theoretical physicists do is to try to make infinities (or
near-infinities) go away.
Why the Higgs needs SUSY
No matter. The solution is simply to hypothesize more particles. This is the central idea of SUSY. SUSY supposes that even Bosons and Fermions are just different sides of the same coin. For every
Boson there should be a Fermion and vice-versa. If there are exactly the same number of Fermions and Bosons, then the plus and minus corrections to the Higgs should exactly cancel. It's as though you
attach just enough helium balloons to exactly cancel the weight of your clothes when you step onto the scale.
I realize that the solution "make up a bunch of new particles," sounds a) So easy that you don't need an advanced physics degree to come up with it, and b) So silly that it's not clear that it'll do
anything, but bear with me for a moment.
First off, the idea of coming up with symmetries — in this case between Fermions and Bosons — is really important in physics. The way we understand the weak and electromagnetic forces is ultimately
by supposing that the electron and the neutrino (also the up and down quarks) are just different aspects of the same fundamental particle. It's this symmetry that ultimately gives rise to our
understanding of the Higgs.
And of course supersymmetry is more than just staring off into space and supposing that there might be lots of other unknown particles out there. There's a fairly hairy mathematical structure to it
all; one that predicts interactions between all sorts of particles and their "Superpartners." (I call dibs on the animated series).
Every particle gets a partner of the opposite type. An electron is a Fermion. On the other side is a Boson called a selectron. All of the Bosons get partners with fun, pasta-sounding names. The
photon gets a partner called the photino, while the partner of the W Boson, incidentally, is known as the Wino, which if you want to save yourself some embarrassment at your next physicist party,
you'll want to pronounce "weeno."
If every particle gets a partner it does seem strange that we've never seen one, doesn't it? Maybe.
One of the generic predictions of supersymmetric models is that the supersymmetric partners should be hundreds or even thousands of times larger than our familiar versions of them. And very massive
particles, as you know, don't stick around for long.
There may be a whole bunch of particle states called "neutralinos" which are (as you might guess) electrically neutral. This means that even if we were to make them in an accelerator, they would be
very, very tough to detect directly.
And, um, we haven't even detected them indirectly.
Where we are, and what comes next
The initial experimental results from the LHC, as well as experiments to detect SUSY particles directly don't look terribly promising. The problem is that based on everything we know, these particles
should have about the same mass as the Higgs, but the current experimental limits suggest that they are, at minimum, several times heavier. That's a pretty big hurdle to clear by just tweaking parameters.
Now, the true believers will point out that there are lots of possible SUSY models, many of which fall into a group called the Minimal Supersymmetric Standard Model (MSSM). These simple models may be
on life-support, but more complicated schemes could still be viable.
It'd be a shame if SUSY turned out to be wrong, because it would give us huge hints about a lot of outstanding problems.
For example, one of the common assumptions is that the Lightest Superpartner is the neutralino. Hmm…. A massive, abundant particle that's stable because there's nothing for it to decay into? Sounds
like Dark Matter. If only SUSY turns out to be a real thing...
Even if supersymmetry is a fact of our universe, it must be at least a little bit broken. If it weren't, all the partners would be the same mass as the originals. But if that were the case, we would
have discovered them long ago.
One final note. Supersymmetry is often tied to string theory, and, in particular, people will talk about "superstrings," for the simple reason that most versions of string theory, with their load of
extra dimensions, require supersymmetry as part of the model. The converse does not hold. Supersymmetry could well be right without string theory.
Just putting that out there. Ruling out supersymmetry might help to clear out some of the theoretical cobwebs in physics-land.
Dave Goldberg is a Physics Professor at Drexel University, and is the author of "A User's Guide to the Universe" and the forthcoming "The Universe in the Rearview Mirror," (Dutton, 2013) which will
be all about symmetry. In the meanwhile, follow him on twitter, send a question, or become a fan on facebook.
A new probabilistic primality test using Lucas sequences
- Math. Comp , 1997
Abstract. We give bounds on the number of pairs (P, Q) with 0 ≤ P, Q < n such that a composite number n is a strong Lucas pseudoprime with respect to the parameters (P, Q).
Cited by 3 (0 self)
Fractions - Vocabulary Crossword Puzzle - Math
Tickle your students' brains with a crossword puzzle.
1 largest factor that two numbers have in common
6 _____ fractions - the numerator is larger than the denominator
7 tells how many equal parts are used, shaded, etc.
8 common denominators are not required when ____ 2 fractions
10 fractions that have the same value
12 the product of a number and its ____ equals one
15 whole number with a proper fraction is called a ____ number
2 prime _____ is when prime numbers are multiplied together to get the original number
3 a whole number with more than two factors
4 part / whole
5 tells how many equal parts make one whole
9 to change a fraction to its lowest terms
11 is useful in finding the least common denominator when adding unlike denominators
13 a whole number greater than 1 with only two factors
14 when adding fractions the denominators must be the ____.
Answers at the end of this page
1 GCF
6 Improper
7 Numerator
8 Multiplying
10 Equivalent
12 Reciprocal
15 Mixed
2 Factorization
3 Composite
4 Fraction
5 Denominator
9 Reduce
11 LCM
13 Prime
14 Same
[SOLVED] integrability of f and f^2
October 11th 2008, 12:33 AM #1
I need help with this problem.
Let $f(x)$ be a bounded function on $[a,b]$. Let $B>0$ with $|f(x)| \leq B$ for all $x \in [a,b]$.
Show that $U(f^2,P)-L(f^2,P) \leq 2B[U(f,P)-L(f,P)]$ for all partitions P of [a,b]
October 11th 2008, 03:28 AM #2
Difference of two squares: $\left|f^2(x_1) - f^2(x_2)\right| = |f(x_1) + f(x_2)|\,|f(x_1) - f(x_2)|\leqslant 2B|f(x_1) - f(x_2)|$.
October 11th 2008, 06:59 AM #3
Ohh man! I can't believe it! Thank you very much!!
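For completeness, here is a sketch of how the hint finishes the argument (my reconstruction, not part of the original thread):

```latex
% On each subinterval [x_{i-1}, x_i] of the partition P, write
% M_i(g) = \sup g and m_i(g) = \inf g. The difference-of-squares bound gives
\begin{align*}
M_i(f^2) - m_i(f^2)
  &= \sup_{x_1,\,x_2 \in [x_{i-1},\,x_i]} \bigl|f^2(x_1) - f^2(x_2)\bigr| \\
  &\le \sup_{x_1,\,x_2 \in [x_{i-1},\,x_i]} 2B\,\bigl|f(x_1) - f(x_2)\bigr|
   = 2B\,\bigl(M_i(f) - m_i(f)\bigr).
\end{align*}
% Multiplying by \Delta x_i and summing over i yields
% U(f^2,P) - L(f^2,P) \le 2B\,[U(f,P) - L(f,P)].
```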
10 search hits
Periodic structure in nuclear matter (1992)
István Lovas Lívia Molnár Kornél Sailer Walter Greiner
The properties of nuclear matter are studied in the framework of quantum hadrodynamics. Assuming an ω-meson field, periodic in space, a self-consistent set of equations is derived in the
mean-field approximation for the description of nucleons interacting via σ-meson and ω-meson fields. Solutions of these self-consistent equations have been found: The baryon density is
constant in space, however, the baryon current density is periodic. This high density phase of nuclear matter can be produced by anisotropic external pressure, occurring, e.g., in relativistic
heavy ion reactions. The self-consistent fields developing beyond the instability limit have a special screw symmetry. In the presence of such an ω field, the energy spectrum of the
relativistic nucleons exhibits allowed and forbidden bands, similar to the energy spectrum of the electrons in solids.
Formation of heavy quarks in ultrarelativistic heavy-ion collisions (1992)
S. M. Schneider Walter Greiner G. Soff
We investigate the production of heavy quarks in continuum and bound states in nuclear collisions. Creation rates for free bb̅ and tt̅ quark pairs and for bottomonium and toponium in
the ground state are computed at energies of the BNL Relativistic Heavy Ion Collider, CERN Large Hadron Collider (LHC), and Superconducting Super Collider. Central and peripheral heavy-ion
collisions are discussed. For top-quark creation we assumed a mass range of 90≤mt≤250 GeV. The creation rate for top quarks in peripheral collisions is estimated to be smaller by a factor of 40
to 130 compared with corresponding central collisions. For mt=130 GeV we calculated a creation rate of about 4760 top-quark pairs per day at the LHC (3.5 TeV/nucleon) for Pb-Pb collisions.
Multiplicity distribution of electron-positron pairs created by strong external fields (1992)
Christoph Best Walter Greiner Gerhard Soff
We discuss the multiplicity distribution of electron-positron pairs created in the strong electromagnetic fields of ultrarelativistic heavy-ion transits. Based on nonperturbative expressions for
the N-pair creation amplitudes, the Poisson distribution is derived by neglecting interference terms. The source of unitarity violation is identified in the vacuum-to-vacuum amplitude, and a
perturbative expression for the mean number of pairs is given.
Delbrück scattering in a strong external field (1992)
Alexander Scherdin Andreas Schäfer Walter Greiner Gerhard Soff
We evaluate the Delbrück scattering amplitude to all orders of the interaction with the external field of a nucleus employing nonperturbative electron Green's functions. The results are given
analytically in form of a multipole expansion.
Pion chemical equilibration in heavy ion collisions : relativistic quantum molecular dynamic analysis (1992)
Debades Bandyopadhyay Mark I. Gorenstein Horst Stöcker Walter Greiner Heinz Sorge
In the framework of relativistic quantum molecular dynamics the authors find that the pion system produced in central heavy-ion collisions at E_lab/A ≈ 1 GeV/nucleon is out of chemical
equilibrium. The pion chemical potential is large and decreases during the expansion stage.
Dynamical treatment of Fermi motion in a microscopic description of heavy ion collisions (1992)
G. Peilert J. Konopka Horst Stöcker Walter Greiner M. Blann M. G. Mustafa
A quasiclassical Pauli potential is used to simulate the Fermi motion of nucleons in a molecular dynamical simulation of heavy ion collisions. The thermostatic properties of a Fermi gas with and
without interactions are presented. The inclusion of this Pauli potential into the quantum molecular dynamics (QMD) approach yields a model with well defined fermionic ground states, which is
therefore also able to give the excitation energies of the emitted fragments. The deexcitation mechanisms (particle evaporation and multifragmentation) of the new model are investigated. The
dynamics of the QMD with Pauli potential is tested by a wide range of comparisons of calculated and experimental double-differential cross sections for inclusive p-induced reactions at incident
energies of 80 to 160 MeV. Results at 256 and 800 MeV incident proton energy are presented as predictions for completed experiments which are as yet unpublished.
Intermediate mass fragment emission in Fe+Au collisions (1992)
G. Peilert Horst Stöcker Walter Greiner
Experimental results are presented on the charge, velocity, and angular distributions of intermediate mass fragments (IMFs) for the reaction Fe+Au at bombarding energies of 50 and 100 MeV/
nucleon. Results are compared to the quantum molecular dynamics (QMD) model and a modified QMD which includes a Pauli potential and follows the subsequent statistical decay of excited reaction
products. The more complete model gives a good representation of the data and suggests that the major source of IMFs at large angles is due to multifragmentation of the target residue.
The role of quantum effects and nonequilibrium transport coefficients for relativistic heavy ion collisions (1992)
M. Berenguer C. Hartnack G. Peilert Horst Stöcker Walter Greiner Jörg Aichelin A. Rosenhauer
Stopping power and thermalization in relativistic heavy ion collisions is investigated employing the quantum molecular dynamics approach. For heavy systems stopping of the incoming nuclei is
predicted, independent of the energy. The influence of the quantum effects and their increasing importance at low energies, is demonstrated by inspection of the mean free path of the nucleons and
the n-n collision number. Classical models, which neglect these effects, overestimate the stopping and the thermalization as well as the collective flow and squeeze out. The sensitivity of the
transverse and longitudinal momentum transfer to the in-medium cross section and to the pressure is investigated.
Anti-proton production and annihilation in nuclear collisions at 15-A/GeV (1992)
André Jahns Horst Stöcker Walter Greiner Heinz Sorge
We present a calculation of antiproton yields in Si+Al and Si+Au collisions at 14.5A GeV in the framework of the relativistic quantum molecular dynamics approach (RQMD). Multistep processes lead
to the formation of high-mass flux tubes. Their decay dominates the initial antibaryon yield. However, the subsequent annihilation in the surrounding baryon-rich matter suppresses the antiproton
yield considerably: Two-thirds of all antibaryons are annihilated even for the light Si+Al system. Comparisons with preliminary data of the E802 experiment support this analysis.
Energy and baryon flow in nuclear collisions at 15-A-GeV (1992)
Heinz Sorge R. Mattiello Horst Stöcker Walter Greiner
Strong correlations between baryon stopping in the projectile rapidity hemisphere and target excitation have been found in the light-ion-induced reactions at the BNL Alternating Gradient
Synchrotron (AGS) (E814 group). Results in the framework of the relativistic molecular dynamics approach (RQMD) describe recent E814 data quite well. We discuss the RQMD results together with
proton and pion data from the E802 group near midrapidity. They have raised the question of whether partial transparency could be seen in these experiments. The RQMD results indicate strong
transverse baryon flow in central Si+Au collisions after the projectile has been stopped in the target. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Walter+Greiner%22/start/0/rows/10/yearfq/1992/sortfield/year/sortorder/asc","timestamp":"2014-04-21T05:21:08Z","content_type":null,"content_length":"40785","record_id":"<urn:uuid:e6b9f74d-2307-41b0-bff7-284b7ab80b32>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
The College of Engineering - Electrical & Computer Engineering
Tough Talk Graduate Seminar
Tough Talk is a weekly seminar given by graduate or undergraduate students, as well as faculty and distinguished guests, on their active research projects. Tough Talk is no ordinary seminar series. It is a lively discussion where we provide constructive criticism to the speaker on everything from the technical content of their presentation to their fluency, language skills, and ability to engage the audience. We reward not only the best presenters but also the audience members who provide the most insightful comments and questions deemed useful and helpful to the project being conducted. Tough Talk is open to the public. The speakers, their topics, and short abstracts will be posted here as they are scheduled.
Fall 2013 Tough Talk Schedule
This page will be constantly updated as new talks are scheduled.
October 25, 2013
Title: Face Recognition based on Sparse Representation
Speaker: Turan Can Artunc
Abstract: Parsimony has a rich history as a guiding principle for inference, and its role in human perception has also been strongly supported by studies of human vision. One of its most celebrated
instantiations, the principle of minimum description length in model selection, stipulates that within a hierarchy of model classes, the model that yields the most compact representation should be
preferred for decision making tasks such as classification. In the statistical signal processing community, the algorithmic problem of computing sparse linear representations with respect to an over
complete dictionary of base elements or signal atoms has seen a recent surge of interest. This excitement emanates from the discovery that whenever the optimal representation is sufficiently sparse,
it can be efficiently computed by convex optimization, even though this problem can be extremely difficult in the general case. In this presentation, the discriminative nature of sparse
representation is exploited to perform automatic face recognition. We will address the question of which features are best for classification and show that the theory of compressed sensing implies
that the precise choice of feature space is no longer critical: even random features contain enough information to recover the sparse representation and hence correctly classify any test image. The
theory is validated by experiments on the Extended Yale Face B database.
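A toy sketch of the residual-based classification step (not taken from the talk): real SRC solves an l1 minimisation over the whole dictionary, while this illustration substitutes a 1-sparse matching-pursuit step, and the "face" vectors and class labels below are made up for illustration.

```python
# Toy sketch of sparse-representation classification (SRC).
# Real SRC solves min ||x||_1 s.t. A x = y and then compares
# class-wise reconstruction residuals; here a 1-sparse
# matching-pursuit step stands in for the l1 solver.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def classify(dictionary, labels, y):
    """Pick the atom that reconstructs y with smallest residual
    and report that atom's class."""
    best = None
    for atom, label in zip(dictionary, labels):
        a = [c / norm(atom) for c in atom]          # unit-normalise the atom
        coeff = dot(a, y)                           # best 1-sparse coefficient
        residual = norm([yi - coeff * ai for yi, ai in zip(y, a)])
        if best is None or residual < best[0]:
            best = (residual, label)
    return best[1]

# Two classes, two toy "images" each (3-pixel vectors, invented data).
atoms = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 1], [0, 0.8, 1.2]]
labels = ["A", "A", "B", "B"]
print(classify(atoms, labels, [0.95, 0.05, 0]))  # → A
print(classify(atoms, labels, [0, 0.9, 1.1]))    # → B
```

The test sample is assigned the class of the atom that reconstructs it best — the 1-sparse analogue of SRC's class-wise residual rule.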
October 11, 2013
Title: Active Learning in Initially Labeled Nonstationary Environments
Speaker: Rob Capo
Abstract: The recent surge in online connectivity allows us to collect more data than ever before. This data can represent anything: email messages, bank transactions, online trends, sensor data, and
many more. The raw data, however, doesn’t usually tell us much unless it has been classified (i.e. spam or not spam, fraudulent or not fraudulent, etc.). To further complicate things, the class
distributions of the incoming data are often changing by nature. We approach the problem of nonstationary classification, which is a common thread in machine learning. One challenging version of this
problem that has not received as much attention, however, is classifying nonstationary data when only a small initial set of data are labeled. After the initial labeled data, we have access to
primarily unlabeled data, which ideally changes only gradually. We refer to these environments as initially labeled nonstationary streaming environments (ILNSEs). Some challenges that are introduced
in ILNSEs are sudden changes (i.e. the addition of a new class or an abrupt drift of an existing class) and mixing class distributions. We present COMPOSE.AL, an active batch learning method that
works well even when the aforementioned challenges are imposed. COMPOSE.AL requires a batch of unlabeled data at each timestep. It analyzes the data and may request the class labels of carefully
selected, informative instances from the current batch; this is referred to as active learning. Since labeled data is generally expensive, the algorithm will request labels only when it needs
them. COMPOSE.AL then classifies all of the unlabeled data in the batch and uses it to help classify the next batch of data it receives. The process continues as long as unlabeled data is available.
We are also interested in exploring online classification methods for ILNSEs where streaming data can be classified as it is received (rather than in batches). Potential solutions for such online
classification are presented briefly.
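The label-requesting step can be illustrated with a minimal uncertainty-sampling sketch; COMPOSE.AL's actual selection criterion is more elaborate, and the 1-D points and decision boundary here are invented.

```python
# Minimal sketch of the active-learning query step: from a batch
# of unlabelled points, request labels only for the most uncertain
# ones.  The toy "classifier" scores a 1-D point by its distance
# from a decision boundary.

def select_queries(batch, boundary, budget):
    """Return the `budget` points closest to the boundary,
    i.e. the ones the classifier is least sure about."""
    by_uncertainty = sorted(batch, key=lambda x: abs(x - boundary))
    return by_uncertainty[:budget]

batch = [0.1, 0.45, 0.9, 0.52, 0.3]
print(select_queries(batch, boundary=0.5, budget=2))  # → [0.52, 0.45]
```

Only the two points straddling the boundary would have their labels requested; the rest are classified without cost.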
October 14, 2013
Title: Temporal Memory Learning
Speaker: Tony Samaritano
Abstract: A truly intelligent machine has been the Holy Grail of machine learning and pattern recognition research field since its modern day inception more than a half century ago. Intelligence
itself is a controversial topic with mixed ideas from neuroscientists, machine learning researchers and various other experts in dozens of fields. Recent breakthroughs in brain imaging have allowed
researchers to fundamentally understand how signals are routed and memories are formed in the human neocortex. Jeff Hawkins, an entrepreneur, computer scientist and neuroscience researcher at
Berkeley, has outlined the fundamental learning algorithm for what himself and many others believe identifies how the human neocortex learns and how that relates to intelligence. In this talk, we
will outline the Hierarchical Temporal Memory (HTM) system as defined by Jeff Hawkins, our interpretation of intelligence and implementation of this cortical learning algorithm (CLA). Furthermore, we
will share where the algorithm excels and falters in relation to the field of machine learning and share our initial results.
September 27, 2013
Title: Probabilistic Non-negative Matrix Factorization: Theory and Application to Microarray Data Analysis
Speaker: Belhassan Bayar
Abstract: Non-negative matrix factorization (NMF) has proven to be a useful decomposition for multivariate data, where the non-negativity constraint is necessary to have a meaningful physical
interpretation. It has been widely applied in clustering and feature extraction of microarray data where the genomic data should be nonnegative. The NMF algorithm, however, assumes a deterministic
framework. In particular, the effect of the data noise on the stability of the factorization and the convergence of the algorithm are unknown. Collected data, on the other hand, is stochastic in
nature due to measurement noise and sometimes inherent variability in the physical process. In this talk, we propose new theoretical and applied developments for the problem of non-negative matrix factorization. We first extend the NMF framework to the probabilistic case (PNMF). We show that the Maximum A Posteriori estimate of the non-negative factors is the solution to a weighted regularized non-negative matrix factorization problem. We subsequently derive update rules that converge towards an optimal solution. Finally, we apply the PNMF to cluster and classify DNA microarray data. The
proposed PNMF is shown to outperform the deterministic NMF and the sparse NMF algorithms in clustering stability and classification accuracy.
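For readers unfamiliar with the deterministic baseline, here is a bare-bones NMF with the classical Lee-Seung multiplicative updates; the PNMF of the talk adds a probabilistic prior on top of updates of this general shape. The data, sizes, and initialisation scheme below are illustrative only.

```python
# Classical (deterministic) NMF, V ≈ W H with W, H >= 0,
# via Lee-Seung multiplicative updates minimising ||V - WH||_F.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, r, steps=200, eps=1e-9):
    n, m = len(V), len(V[0])
    # Positive, slightly asymmetric initialisation (arbitrary choice).
    W = [[0.5 + 0.01 * (i + j) for j in range(r)] for i in range(n)]
    H = [[0.5 + 0.01 * (i - j) for j in range(m)] for i in range(r)]
    for _ in range(steps):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(m)] for i in range(r)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(r)] for i in range(n)]
    return W, H

V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy non-negative data
W, H = nmf(V, r=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2
          for i in range(3) for j in range(2))
print("reconstruction error:", round(err, 5))
```

Because the updates are multiplicative, non-negativity of W and H is preserved automatically — the property that makes NMF attractive for genomic data.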
September 20, 2013
Title: Compressive Kalman Filtering for Recovering Temporally-Rewiring Genetic Networks
Speaker: Jehandad Khan
Abstract: Genetic regulatory networks undergo rewiring over time in response to cellular developments and environmental stimuli. The main challenge in estimating time-varying genetic interactions is
the limited number of observations at each time point; thus making the problem unidentifiable. We formulate the recovery of temporally-rewiring genetic networks as a tracking problem, where the
target to be tracked over time consists of the set of genetic interactions. We circumvent the observability issue (due to the limited number of measurements) by taking into account the sparsity of
genetic networks. With linear dynamics, we use a compressive Kalman filter to track the interactions as they evolve over time. Our simulation results show that the compressive Kalman filter achieves
good tracking performance even with one measurement available at each time point; whereas the classical (unconstrained) Kalman filter completely fails in obtaining meaningful tracking. | {"url":"http://www.rowan.edu/colleges/engineering/programs/electricalcomputer/research/gradseminars.html","timestamp":"2014-04-20T01:56:32Z","content_type":null,"content_length":"18077","record_id":"<urn:uuid:2c4f6dfe-93f0-4265-bf1b-3a02a370efb7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
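The tracking machinery can be illustrated with a plain scalar Kalman filter; the compressive variant discussed in the talk additionally enforces a sparsity (compressed-sensing) constraint on the state vector, which this sketch omits. Noise levels and measurements are toy values.

```python
# Plain scalar Kalman filter for a random-walk state model.
# The "compressive" Kalman filter adds an l1/sparsity step on the
# state estimate, omitted here for brevity.

def kalman_step(x, P, z, q=0.01, r=0.1):
    """One predict/update cycle.
    x, P: state estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances (toy values)."""
    # Predict: state unchanged, uncertainty grows by process noise.
    P = P + q
    # Update: blend prediction with the measurement.
    K = P / (P + r)            # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                            # initial guess and variance
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:      # noisy readings of a value near 1
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))
```

After a handful of measurements the estimate settles near the true value while the posterior variance P shrinks — the behaviour the compressive variant retains even with very few measurements per time point.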
What if Napoleon hadn’t abolished decimal time?
SINCE the start of 2010, your correspondent has amused himself by interpreting the date as a binary number, and then converting that into its decimal equivalent. Expressed internationally as
dd.mm.yy, the first day of this year was 010110. In decimal form, that works out to be 2^4+2^2+2^1=22. The game is pointless, of course. But it has made him ponder the whole date and time arrangement
people take for granted.
There are only four years a century when you can play this little game. In the current century, two years (2000 and 2001) have already passed. Like the previous pair, the two that remain (2010 and
2011) contain only three days (1st, 10th and 11th) in three months (January, October and November) that lend themselves to this phoney binary treatment.
Obviously, the smallest binary number in this century's set was January 1st at the turn of the millennium (010100). The largest will be November 11th next year, when all the bits in the six-digit
sequence are present (111111). In decimal terms, that is equal to 2^5+2^4+2^3+2^2+2^1+2^0= 63. There is nothing magical about such a number, though November 11th does happen to be the birthday of a
member of your correspondent's family. For his own amusement, 63 of something will figure in the celebration.
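For readers who care to check the arithmetic, a few lines of Python reproduce the game (the function name is invented for the occasion):

```python
# The date game from the text: read the dd.mm.yy digits as a
# binary numeral and convert to decimal.

def binary_date_value(ddmmyy):
    digits = ddmmyy.replace(".", "")
    if set(digits) - {"0", "1"}:
        raise ValueError("date is not a binary numeral")
    return int(digits, 2)

print(binary_date_value("01.01.10"))  # 1 January 2010 → 22
print(binary_date_value("11.11.11"))  # 11 November 2011 → 63
```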
All this playing around with binary numbers has made him wonder why binary time—or, for that matter, decimal time—never caught on in the Western world. Decimal time has been tried on many occasions.
Indeed, a decimal calendar based on a ten-month year was used by Romans during the time of Romulus and Remus. Their calendar ran from March to December. The two missing months needed to make up a
solar year were dismissed as winter when nothing grew or happened—and therefore not worth worrying about.
The ancient Egyptians were far smarter. For three millennia before Christ, they used a 12-month calendar, with each month comprising three ten-day weeks. Five rogue days were tacked on the end of the
cycle to complete the solar year. By the time of Augustus, the so-called Alexandrian calendar had even incorporated an additional day for leap years. This was essentially the decimal calendar that
the French introduced during the revolution. But the French Republic's official ten-day week lasted for little more than a dozen years before Napoleon abolished it in 1806.
Ironically, when the French invented the metric system in 1795, two years after they changed the calendar, they decimalised everything except time. There were base units for length, area, volume,
weight and even currency. But seconds and minutes, hours and days, weeks and months were left unscathed.
That, along with the failure to decimalise the compass, was perhaps the metrication commission's biggest setback. Its august president, the noted mathematician Joseph-Louis Lagrange, tried in vain to
get the Republic to adopt the déci-jour and centi-jour (a tenth and a hundredth of a day, respectively). But the decimal calendar was deemed enough of a gesture to the new age of rationalism, even
though it did not comply with the strict divisions and multiples of ten, and used none of the metric system's prefixes (milli-, centi-, deci-, deca-, hecto-, kilo-, etc).
Even so, the idea of a centi-jour (14.4 minutes) has cropped up on several occasions since. One reason is that a ten-hour clock, with each hour divided into 100 decimal minutes, and each decimal
minute sub-divided into 100 decimal seconds, would make navigation easier—provided, of course, you had a decimal sextant and compass to go with it. Decimal time and longitude would then correlate
directly without the need for logarithmic conversion tables. Even the Royal Geographical Society in Victorian England was keen on decimal navigation, and published tables to convert sexagesimal
angles and hours into centi-jours and their decimal subdivisions.
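The conversion to the Republic's ten-hour clock is simple proportion: a day has 10 × 100 × 100 = 100,000 decimal seconds against the usual 86,400. A short sketch (the function name is invented):

```python
# Convert a conventional clock reading to the ten-hour decimal
# clock: 10 decimal hours of 100 decimal minutes of 100 decimal
# seconds per day.

def to_decimal_time(h, m, s):
    day_fraction = (h * 3600 + m * 60 + s) / 86400
    total = round(day_fraction * 100000)     # decimal seconds since midnight
    dh, rest = divmod(total, 10000)
    dm, ds = divmod(rest, 100)
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))   # noon → (5, 0, 0)
print(to_decimal_time(18, 0, 0))   # 6 pm → (7, 50, 0)
print(to_decimal_time(0, 14, 24))  # one centi-jour → (0, 10, 0)
```

Note that 14 minutes 24 seconds — one centi-jour — comes out as exactly ten decimal minutes, as it should.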
Numerous clocks were made in France and elsewhere during the 19th century with faces showing both the numbers 1-12 for standard time and 1-10 for decimal time. The supposed advantage was that any
observer with a decimal chronometer and a view of the sun's height above the horizon would then know instantly where on the planet he was. With 100 decimal degrees (or “gons” as they became known) to
a right-angle, and the distance from the pole to the equator being almost exactly 10,000 kilometres, 1 km along the surface subtends an angle of one centigon (a hundredth of a decimal degree) at the
centre of the Earth. Had it come to pass, decimal time and decimal angular measurement might have done for the 19th century what GPS did for the 20th.
But the French were not the first to think of the ten-hour day, nor even the centi-jour. Like the Egyptians with their decimal calendar, the Chinese used decimal (not to mention duodecimal) time
several millennia before Christ. Since the beginning of history, they have divided the day into 100 equal parts called ke (14.4 minutes), and split each of those into 60 fen (14.4 seconds). When
Jesuit missionaries introduced Western clocks to China in the 17th century, the local inhabitants simply changed the number of divisions in a day from 100 to 96, making a ke equal to exactly 15 minutes.
To this day, the term ke is used in China to denote “a quarter of an hour”. In Japan, the same character (pronounced either “koku” or “kizamu”) translates roughly into “carving out a small amount of
time” and was used, until the Meiji era, to signify “hour”, while the character for fen (pronounced “fûn” in Japanese) is used to this day to denote “minute”.
Ultimately, the only unit of time that really matters is the second. Originally, the internationally accepted system of units known as SI (Système International d'Unités) defined the second as 1/
86,400 of a mean solar day—simply the inverse of the number of seconds in 24 hours. But irregularities in the rotation of the Earth made that unreliable. Thus, in 1967, SI adopted a more precise
definition based on the frequency of the radiation a caesium atom emits when it flips between two energy states. No ifs and buts, at absolute zero temperature, this is exactly 9,192,631,770 hertz.
Physicists have no trouble using, on the one hand, picoseconds (trillionths of a second) or even femtoseconds (quadrillionths of a second) to discuss time at the atomic scale. They also talk
cheerfully of 10^18 seconds needed for light to travel from the farthest reaches of the universe. Likewise, in computing, “Unix Time” gives the date and time in terms of the number of seconds since
January 1st 1970, and Microsoft's “Filetime” is recorded as multiples of 100-nanosecond units since January 1st 1601.
But computer scientists are just as likely to divide their day into hexadecimal hours, with each hour broken up into hexadecimal minutes. (The 16-base hexadecimal system uses the numbers 0-9 followed
by the letters A-F.) The hexadecimal day begins at midnight at .0000. One second after midnight, the time is .0001. Half a day later, noon arrives at .8000. A second before the next midnight is
.FFFF. Got it? Your correspondent neither.
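The hexadecimal clock works the same way, splitting the day into 16^4 = 65,536 parts. A sketch, again with an invented function name:

```python
# Hexadecimal time of day: the day is divided into 16^4 = 65,536
# parts, written as four hex digits after a point.

def hex_time(h, m, s):
    day_fraction = (h * 3600 + m * 60 + s) / 86400
    return ".%04X" % int(day_fraction * 0x10000)

print(hex_time(0, 0, 0))     # midnight → .0000
print(hex_time(12, 0, 0))    # noon → .8000
print(hex_time(23, 59, 59))  # just before midnight → .FFFF
```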
In normal life, people have to go to work, arrange schedules, catch planes and trains, and pick up children from school at given times. The number of seconds needed to do such useful things has to be
given names everyone recognises and agrees upon. It would be nice if such units of time were decimal multiples of one another. Unfortunately, here on planet Earth, with its decidedly undecimal
sidereal year of 365-and-a-quarter days, that is just not going to happen. | {"url":"http://www.economist.com/node/15311296","timestamp":"2014-04-19T06:13:54Z","content_type":null,"content_length":"132368","record_id":"<urn:uuid:e38d4119-c0d0-42dd-a89f-a86cca2a07bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 4: Malaria life cycle model. The inner model of malaria transmission parameters is based on a diagram and parameters from Smith et al. 2007 [97]. Parameter definitions: R0 (basic reproductive number) = m a^2 b c e^(-gn) / (r g); a (human feeding rate): the number of bites on a human, per mosquito, per day; b (transmission efficiency): the probability that a human becomes infected from a bite by an infectious mosquito; c (transmission efficiency): the probability that a mosquito becomes infected from a bite on an infected human; g (death rate of mosquitoes): the expected lifespan of a mosquito is 1/g days; m: ratio of mosquitoes to humans; n (incubation period): number of days required for the parasite to develop within the mosquito; 1/r: duration of infection in humans.
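As a check on the caption's formula R0 = m a^2 b c e^(-gn) / (r g), here is a short computation with illustrative parameter values (the numbers below are not taken from the paper):

```python
# Basic reproductive number from the Ross-Macdonald-style formula
# in the caption.  All parameter values here are illustrative only.
import math

def r0(m, a, b, c, g, n, r):
    return m * a**2 * b * c * math.exp(-g * n) / (r * g)

value = r0(m=10, a=0.3, b=0.5, c=0.5, g=0.1, n=10, r=0.01)
print(round(value, 2))  # → 82.77
```

Because a enters squared and g appears both in the exponent and the denominator, R0 is far more sensitive to the biting rate and mosquito lifespan than to the transmission efficiencies.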
Publications and Presentations
This page lists papers and talks in my various areas of interest. Because I tend to work on the borders between areas, it's hard to split the topics up meaningfully. However, one clear distinction,
at least by intended audience, is between computer science and mathematics on the one hand, and my recent (and, I hope, future) forays into natural language on the other, so those are the two
Within each major division, the organization is reverse chronological.
To indicate the general type of the papers and talks, I use the following codes:
• J article refereed to journal standards (whether in a journal or a one-off book).
• BC book chapter in surveys, handbooks etc.
• RC/RW refereed full conference or workshop paper.
• IC/IW invited conference or workshop talk/paper.
• UC/UW unrefereed conferences or workshops, including those with selection only by short abstract. This also includes Dagstuhl Seminar presentations.
• TR technical report, usually an incipient journal paper.
• S presentations given as departmental/faculty seminars in various institutions -- including recreational or tutorial talks.
• MS anything else, whether ultimately intended to become a journal article, or just something I felt like writing.
Rather few of the slides from my talks are currently up here, as I only recently came round to the idea of making slides generally available -- if you remember a talk that I gave and would like to
see the slides, drop me an email.
For things published in the last decade or so, I took care to retain publication rights. For very old materials, the copies here may be a technical violation of the publishers' copyrights. I don't
suppose they care any more than I do. In the case of published papers, the file here is usually the last version I sent, rather than the published version -- there should be no differences other than
formatting and the publisher's logo.
Recent papers are all PDF; older papers may be gzipped PostScript.
Natural Language
(J) Clicks, Concurrency, and Khoisan
This paper, to appear in Phonology 31(1), is the article arising from the MFM talk below. The abstract is
I propose that the notions of segment and phoneme be enriched to allow, even in classical theories, some concurrent clustering. My main application is the Khoisan language ǃXóõ, where by treating
clicks as phonemes concurrent with phonemic accompaniments, the inventory size is radically reduced, so solving the problems of many unsupported contrasts. I show also how phonological processes
of ǃXóõ can be described more elegantly in this setting, and provide support from metalinguistic evidence and experimental evidence from production tasks. I describe a new allophony in ǃXóõ. I go on
to discuss other, some rather speculative, applications of the concept of concurrent phoneme. The article also provides a comprehensive review of the segmental phonetics and phonology of ǃXóõ,
together with previous analyses.
This version is not quite the version sent to the journal; it has minor differences in typography, and one or two additional paragraphs and tables that were cut to save space. It's available in three versions:
clickscon-a4.pdf is suitable for printing on A4 paper.
clickscon-2up.pdf is formatted for A5 paper and then arranged 2-up on A4 -- the type is larger than that obtained by doubling up the previous version, so should be comfortable to read in print.
clickscon-a5.pdf is the single page A5 format, and is probably the most comfortable for reading on screen.
(UC) Stress-testing GP - the phonology of Taa
This poster was presented at the 20th Manchester Phonology Meeting in May 2012.
This is a different take on !Xoo or Taa - here I wonder what it looks like trying to do a GP-based analysis. The tentative conclusion (the work is early) is that the representation does not raise
serious problems, but describing the phonological processes is challenging.
Here is the poster.
(UC) Where's the contrast?
This poster was presented at the 19th Manchester Phonology Meeting in May 2011.
It's a simple simulation experiment, aiming to show that effects that have been claimed to require features, can instead arise from purely phonetic effects. It is a development of part of the
following talk.
Here is the poster.
This was a talk at the Phonetic Universals conference in Leipzig in November 2010. It discussed aspects of simulation, and some experiments also presented in the poster above. Slides
are available, but not very useful as the presentation relied on live simulations.
(UC) Acquisition and Complexity of Phonemes and Inventories
This poster was presented at the workshop on Computational Modelling of Sound Pattern Acquisition in Edmonton (February 2010), and then at the 18th Manchester Phonology Meeting in May 2010.
It's a follow-up to the talk at the previous MFM. Using simulations, it looks at how the difficulty of learning large vowel systems (such as that of !Xoo) depends on whether they are learned simply
as lots of vowels, or with internal structure (such as the voice qualities used in the !Xoo system). This gives some support to the claim that the !Xoo vowel system is just too complicated to learn,
if given the traditional analysis.
Here is the poster.
(UC) OT or not OT - is that a question?
This talk was presented at the 7th Old World Conference on Phonology in Nice (January 2010). It's a talk designed for the end of a long day (as it was scheduled), which goes through some (well known)
arguments for and against OT, and then looks at how these apply to OT's original poster child, Imdlawn Tashlhiyt Berber syllabification. It concludes with a simple re-writing account of ITB
syllabification that encapsulates what, in my view, is a very plausible account of how speakers might actually do syllabification (as opposed to solving a constraint set over several million
Here are the slides.
(UC) Clicks, Concurrency and the Complexity of Khoisan
This talk was presented at the 17th Manchester Phonology Meeting in May 2009. It argues that the notoriously large consonant and vowel inventories of Khoisan languages are really due to an
inappropriate analysis: rather, one should slightly broaden the notion of phoneme to include segments that occur concurrently with other segments as well as sequentially. A fair part of the argument
is, of course, aesthetic, but phonological and psycholinguistic arguments are also adduced.
Here is the printed handout; the jokes won't make too much sense without the context of voice-over and conference, and you can't see the gratuitous special effects, but the argument should be clear.
A formal article is in progress - if you are interested in seeing (and commenting on!) the draft, send me an email.
(UC) Modelling, Formality and the Phonetics-Phonology Interface
Originally, this was to be a talk at the 14th MFM, but owing to illness I was unable to present it. Instead, it became a poster (A4 version, really needs colour) presented at the 5th Old World
Conference on Phonology in Toulouse (January 2008). It's really a reply to the Port and Leary paper ‘Against Formal Phonology’, but is also an indication of things I might do. (See above for a
concrete instantiation.) There is also a four page abstract which goes into a little more detail, but still without any actual applications.
Computer Science, Logic, Mathematics
(RC) Team building in dependence
This paper appeared in CSL'13.
Hintikka and Sandu's Independence-Friendly Logic was introduced as a logic for partially ordered quantification, in which the independence of (existential) quantifiers from previous (universal)
quantifiers is written by explicit syntax. It was originally given a semantics by games of imperfect information; Hodges then gave a (necessarily) second-order Tarskian semantics. More recently,
Väänänen (2007) has proposed that the many curious features of IF logic can be better understood in his Dependence Logic, in which the (in)dependence of variables is stated in atomic formulae,
rather than by changing the definition of quantifier; he gives semantics in Tarskian form, via imperfect information games, and via a routine second-order perfect information game. He then
defines Team Logic, where classical negation is added to the mix, resulting in a full second-order expressive logic. He remarks that no game semantics appears possible (other than by playing at
second order).
In this article, we explore an alternative approach to game semantics for DL, where we avoid imperfect information, yet stay locally apparently first-order, by sweeping the second-order
information into longer games (infinite games in the case of countable models). Extending the game to Team Logic is not possible in standard games, but we conjecture a move to transfinite games
may achieve a `natural' game for Team Logic.
Here is the official version.
For those who find the LIPICS style distasteful, here is a version (with identical content) in my standard format for A4 paper: backtrack.pdf.
(RC) Recursive checkonly QVT-R transformations with general when and where clauses via the modal mu calculus
This paper, with Perdita Stevens, discusses and solves a problem with the semantics of recursive relations in the modelling language QVT-R. It appeared in FASE 2012.
In earlier work we gave a game-based semantics for checkonly QVT-R transformations. We restricted when and where clauses to be conjunctions of relation invocations only, and like the OMG
standard, we did not consider cases in which a relation might (directly or indirectly) invoke itself recursively. In this paper we show how to interpret checkonly QVT-R - or any future model
transformation language structured similarly - in the modal mu calculus and use its well-understood model-checking game to lift these restrictions. The interpretation via fixpoints gives a
principled argument for assigning semantics to recursive transformations. We demonstrate that a particular class of recursive transformations must be ruled out due to monotonicity considerations.
We demonstrate and justify a corresponding extension to the rules of the QVT-R game.
Here is the PDF.
(TR) Fixpoint alternation and the Wadge Hierarchy
This paper, written with Sandra Quickert and Jacques Duparc, takes up a problem raised in the TIA paper below. It will be (we hope) the journal version of the CSL paper below; however, it currently
relies on unpublished results of Duparc, and we would like to wait until those are published before submitting.
It is perhaps worth warning that this is a long and very difficult paper, especially the second part, which has been greatly expanded since the conference version. If you just want to know the
results, go to the conference paper.
In earlier work Bradfield found a link between finite differences formed by Σ^0_2 sets and the mu-arithmetic introduced by Lubarsky. We extend this approach into the transfinite: by allowing countable disjunctions we show that this kind of extended mu-calculus matches neatly to the transfinite difference hierarchy of Σ^0_2 sets. The difference hierarchy is intimately related to parity games. When passing to infinitely many priorities, it might no longer be true that there is a positional winning strategy. However, if such games are derived from the difference hierarchy, this property still holds true. In the second part, we use the more refined Wadge hierarchy to understand further the links established in the first part, by connecting game-theoretic operations to operations on Wadge degrees.
Here is the paper.
This version can be cited as:
Bradfield, J., Duparc, J. and Quickert, S.: Fixpoint Alternation and the Wadge Hierarchy, University of Edinburgh Informatics Report Series EDI-INF-RR-1366 (2010).
(J) A general definition of malware
This paper, with Simon Kramer, uses temporal logic to try to characterize the notion of malware (as in computer viruses). Note that Simon is the main and corresponding author, so if you have queries
contact him - unless they're about the formal details, in which case you can also ask me.
We propose a general, formal definition of the concept of malware (malicious software) as a single sentence in the language of a certain modal logic. Our definition is general thanks to its
abstract formulation, which, being abstract, is independent of—but nonetheless generally applicable to—the manifold concrete manifestations of malware. From our formulation of malware, we derive
equally general and formal definitions of benware (benign software), anti-malware (“antibodies” against malware), and medware (medical software or “medicine” for affected software). We provide
theoretical tools and practical techniques for the detection, comparison, and classification of malware and its derivatives. Our general defining principle is causation of (in)correctness.
Here is the published paper, which is available on open access from SpringerLink.
The reference is:
Kramer, S. and Bradfield, Julian C.: A general definition of malware. Journal in Computer Virology 6(2): 105-114 (2010).
(RC) Model-checking games for fixpoint logics with partial order models
This paper, with Julian Gutierrez, discusses model-checking for the logic SFL that Julian develops in his Ph.D. A journal version is in final revision.
We introduce model-checking games that allow local second-order power on sets of independent transitions in the underlying partial order models where the games are played. Since the one-step
interleaving semantics of such models is not considered, some problems that may arise when using interleaving semantics are avoided and new decidability results for partial orders are achieved.
The games are shown to be sound and complete, and therefore determined. While in the interleaving case they coincide with the local model-checking games for the μ-calculus (Lμ), in a
non-interleaving setting they verify properties of Separation Fixpoint Logic (SFL), a logic that can specify in partial orders properties not expressible with Lμ. The games underpin a novel
decision procedure for model-checking all temporal properties of a class of infinite and regular event structures, thus improving previous results in the literature.
Here is the preprint.
The reference is:
Gutierrez, J. and Bradfield, J.: Model-checking games for fixpoint logics with partial order models, in: Bravetti, M. and Zavattaro, G. (Eds.): Proc. CONCUR 2009 LNCS 5710, Springer 2009, 354-368.
(J) The complexity of independence-friendly fixpoint logic
This paper, written with Stephan Kreutzer, studies what it says.
We study the complexity of model-checking for the fixpoint extension of Hintikka and Sandu's independence-friendly logic. We show that this logic captures EXPTIME; and by embedding PFP, we show
that its combined complexity is EXPSPACE-hard, and moreover the logic includes second order logic (on finite structures).
A preliminary version appeared as the CSL '05 paper below; this version was published in the post-proceedings of a meeting on Infinite Games in the series Foundations of the Formal Sciences.
Here is the published PDF file.
The reference is:
Bradfield, J. and Kreutzer, S.: The complexity of independence-friendly fixpoint logic, in: Stefan Bold, Benedikt Löwe, Thoralf Räsch, Johan van Benthem (eds.), Foundations of the Formal Sciences V,
Infinite Games 39-62. [Studies in Logic 11]. London: College Publications (2007).
(BC) Modal Mu-Calculi
This is a chapter for the (very expensive) Handbook of Modal Logic, published by Elsevier in 2006. It is written together with Colin Stirling. While it includes some considerable chunks from the
Handbook of Process Algebra chapter, it is more concerned with logical, game and automata theoretic issues in mu-calculi than with applications to processes. It covers: background; syntax and
semantics of modal mu-calculus; expressive power; satisfiability; axiomatization; alternation; bisimulation invariance; generalized mu-calculi.
Here is the PDF text (300kB).
The reference is:
Bradfield, Julian and Stirling, Colin. Modal mu-calculi. In: P. Blackburn, J. van Benthem and F. Wolter (eds.), The Handbook of Modal Logic pp. 721-756. Elsevier (2006).
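For orientation, the syntax the chapter covers can be summarised as follows. This is the standard positive-form presentation found in textbooks, not a quotation from the chapter itself:

```latex
% Standard positive-form syntax of the modal mu-calculus
% (a textbook presentation, not quoted from the chapter):
\[
  \phi \;::=\; P \mid \neg P \mid X \mid \phi \wedge \phi \mid \phi \vee \phi
        \mid \langle a \rangle \phi \mid [a]\phi \mid \mu X.\phi \mid \nu X.\phi
\]
% Example: "P holds at every state reachable by a-transitions" --
% a greatest fixpoint expresses this invariance property:
\[
  \nu X.\,(P \wedge [a]X)
\]
```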
(J) Independence: logics and concurrency
This is a considerably revised and expanded version of the CSL '00 paper below. It appeared in a Festschrift for Gabriel Sandu. It concentrates more on 'design issues' around the application of IF
concepts to modal logics than on the mathematical properties of such logics, though it includes and extends some of the results from the CSL paper.
Here is a gzipped PostScript preprint.
The reference is:
Bradfield, J. C. Independence: logics and concurrency. In: Tuomo Aho and Ahti-Veikko Pietarinen (eds). Truth and Games: Essays in Honour of Gabriel Sandu (Acta Philosophica Fennica 78) 47-70. Helsinki: Societas
Philosophica Fennica. (2006)
(RC) Transfinite extension of the mu-calculus
This paper, written with Sandra Quickert and Jacques Duparc, takes up a problem raised in the TIA paper below. It appeared in CSL '05 (proceedings in LNCS).
In [Bra03] Bradfield found a link between finite differences formed by Σ^0_2 sets and the mu-arithmetic introduced by Lubarsky [Lub93]. We extend this approach into the transfinite: by allowing
countable disjunctions, we show that this kind of extended mu-calculus matches neatly the transfinite difference hierarchy over Σ^0_2. The difference hierarchy is intimately related to parity
games. When passing to infinitely many priorities, it may no longer be true that there is a positional winning strategy. However, if such games are derived from the difference hierarchy, this
property still holds true.
gzipped PostScript text
The reference is:
Bradfield, J. C., Duparc, J. and Quickert, S., Transfinite extension of the mu-calculus. In: Proc. 14th Int. Conf. on Computer Science Logic (CSL '05)} LNCS 3634 384-396. Springer (2005).
A much expanded version is currently available as a technical report.
(RC) The complexity of independence-friendly fixpoint logic
This paper, in CSL '05 (proceedings in LNCS), was the extended abstract of the full paper above, with some simplifications to the logic. Please work from the full paper. For the record, here is the
gzipped PostScript text of the conference version.
(J) On independence-friendly fixpoint logics
This is the somewhat revised and expanded journal version of the CSL'03 paper below, invited for an issue of Philosophia Scientiae devoted to Logic and Game Theory. The paper is part of the programme
of developing modal and temporal versions of independence-friendly logic. It is mostly concerned with the difficulties of adding fixpoint constructors to IF logics.
We introduce a fixpoint extension of Hintikka and Sandu's IF (independence-friendly) logic. We obtain some results on its complexity and expressive power. We relate it to parity games of
imperfect information, and show its application to defining independence-friendly modal mu-calculi.
Here is the published version (PDF) (280 kB), and the current version (gzipped PostScript) (150kB), which fixes a couple of minor errors.
The reference is:
Bradfield, J. C. On independence-friendly fixpoint logics. Philosophia Scientiae 8(2) 125-144 (2004).
(RC) Parity of Imperfection or Fixing Independence
The cutesy title was a feeble attempt at recalling the spirit of the 18th century with some even feebler punning. Oh well.
This paper, which appeared in CSL '03 (proceedings in LNCS), is the preliminary version of the full paper immediately above. Please work from the journal version. For the record, here is the gzipped
PostScript text of the conference version.
(J) Fixpoints, games and the difference hierarchy
This is the journal version of the CSL '99 and FICS '01 papers, without significant changes.
Drawing on an analogy with temporal fixpoint logic, we relate the arithmetic fixpoint definable sets to the winning positions of certain games, namely games whose winning conditions lie in the
difference hierarchy over Δ^0_2. This both provides a simple characterization of the fixpoint hierarchy, and refines existing results on the power of the game quantifier in descriptive set
theory. We raise the problem of transfinite fixpoint hierarchies.
The problem of transfinite fixpoint hierarchies was finally dealt with in the CSL '05 paper above.
Here is the gzipped PostScript text (139 kB).
The reference is:
Bradfield, J. C. Fixpoints, games and the difference hierarchy. Theoretical Informatics and Applications 37 1-15 (2003).
(J) Independence-friendly modal logic and true concurrency
This paper, with Sibylle Fröschle, studies the relationships between traditional concurrent equivalences and those induced by a modal analogue of independence-friendly logic. It is the extended
journal version of the EXPRESS paper below.
We consider modal analogues of Hintikka et al.'s 'independence-friendly first-order logic', and discuss their relationship to equivalences previously studied in concurrency theory.
gzipped PostScript text (134 kB).
The reference is:
Bradfield, J. C. and Fröschle, S. B. Independence-friendly modal logic and true concurrency. Nordic Journal of Computing 9 102-117 (2002).
(RC) Enriching OCL using observational mu-calculus
This paper, with Juliana Küster Filipe (now Juliana Bowles) and Perdita Stevens, was presented at FASE 2002, with proceedings published in LNCS.
The Object Constraint Language is a textual specification language which forms part of the Unified Modelling Language [UML 1.4]. Its principal uses are specifying constraints such as
well-formedness conditions (e.g. in the definition of UML itself) and specifying contracts between parts of a system being modelled in UML. Focussing on the latter, we propose a systematic
way to extend OCL with temporal constructs in order to express richer contracts. Our approach is based on observational mu-calculus, a two-level temporal logic in which temporal features at the
higher level interact cleanly with a domain specific logic at the lower level. Using OCL as the lower level logic, we achieve much improved expressiveness in a modular way. We present a unified
view of invariants and pre/post conditions, and we show how the framework can be used to permit the specification of liveness properties.
gzipped PostScript text (162kB).
The reference is:
Bradfield, J.C., Küster Filipe, J. and Stevens, P. Enriching OCL using observational mu-calculus, Proc. of the 5th International Conference on Fundamental Approaches to Software Engineering (FASE)
(R.-D. Kutsche and H. Weber, eds.), LNCS 2306 203--217 (2002).
(RW) On logical and concurrent equivalences
This paper, with Sibylle Fröschle, appeared in the EXPRESS '01 workshop in Aalborg; its journal version is the NJC paper above. Please work from the journal version. Here is the original workshop
paper, as gzipped PostScript text.
(RW) Some remarks on transfinite fixpoint alternation
A note for the Fixpoints in Computer Science 2001 workshop in Florence. It asks questions about how transfinite fixpoint alternation might be defined, and points out some problematic issues. It was
included (with improvements) as the final section of the TIA paper above. The questions were answered in the CSL '05 paper above. Here is the gzipped PostScript text.
(RC) Independence: logic and concurrency
This paper appeared in CSL 2000, published in LNCS. A considerably revised and expanded version appeared several years later as the Sandu Festschrift paper above - please work from that. Here is the
original conference paper, as gzipped PostScript text.
(BC) Modal logics and mu-calculi: an introduction
This is a preprint of a chapter by Colin Stirling and myself in the (very expensive) Handbook of Process Algebra, published by Elsevier.
Abstract: We briefly survey the background and history of modal and temporal logics. We then concentrate on the modal mu-calculus, a modal logic which subsumes most other commonly used logics. We
provide an informal introduction, followed by a summary of the main theoretical issues. We then look at model-checking, and finally at the relationship of modal logics to other formalisms.
The style of this article is relatively high-level and untechnical: my aim while writing was to make it as much like bedtime reading as the subject can manage!
Here is the text (144 kB).
The reference is:
Bradfield, J. C. and Stirling, C. P. Modal logics and mu-calculi: an introduction. In Handbook of Process Algebra (eds. J. Bergstra, A. Ponse and S. Smolka) 293--330. Elsevier (2001).
(J) Fixpoint alternation: Arithmetic, transition systems, and the binary tree
This paper is the journal version of the STACS 98 and FICS papers.
We provide an elementary proof of the fixpoint alternation hierarchy in arithmetic, which in turn allows us to simplify the proof of the modal mu-calculus alternation hierarchy. We further show
that the alternation hierarchy on the binary tree is strict, resolving a problem of Niwinski.
gzipped PostScript text (95 kB).
The reference is:
Bradfield, J.C. Fixpoint alternation: Arithmetic, transition systems, and the binary tree. Theoretical Informatics and Applications 33(4/5) 341-356 (1999).
(RC) Fixpoint alternation and the game quantifier
This paper appeared in CSL '99, published in LNCS. It's essentially filling in a link that has been more or less implicit for a long time; not deep, but quite pretty, I think. Some of the open
questions are quite interesting. This version has been superseded by the journal version above. The text here is not quite what appeared in LNCS: it contains fewer typos and better notation! gzipped
PostScript text (85 kB).
(RW) Fixpoint alternation on trees
This short paper appeared in the Fixpoints in Computer Science workshop in Brno, 1999. It was absorbed into the TIA journal paper above. Here is the original gzipped PostScript text.
(RC) Simplifying the modal mu-calculus alternation hierarchy
This paper, which appeared in STACS 98, provided the simple basis for the proof of the alternation hierarchy theorem that I had unaccountably failed to see in the original paper. A journal version
appeared as the TIA paper above - please work from that. Here is the original gzipped PostScript text.
(J) The modal mu-calculus alternation hierarchy is strict
This paper settles one of the (then) three major open problems about the modal mu-calculus, namely, whether the alternation hierarchy is a strict hierarchy in expressive power. The file given here is
a preprint of the journal version.
ABSTRACT: One of the open questions about the modal mu-calculus is whether the alternation hierarchy collapses; that is, whether all modal fix-point properties can be expressed with only a few
alternations of least and greatest fix-points. In this paper, we resolve this question by showing that the hierarchy does not collapse.
This paper should be read in conjunction with the TIA paper above, which removes the quite unnecessary use in this paper of a very technical theorem of Lubarsky.
Here is gzipped PostScript text (111 kB).
The reference is:
Bradfield, J.C. The modal mu-calculus alternation hierarchy is strict. Theoretical Computer Science 195 133-153 (1998).
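As an illustration of what alternation means (a standard example from the literature, not drawn verbatim from the paper), a property needing genuine nesting of the two fixpoints is "there is an a-path along which P holds infinitely often":

```latex
% "There is an a-path along which P holds infinitely often":
% a nu/mu formula of alternation depth 2. The hierarchy theorem
% shows that, in general, no fixed alternation depth suffices
% to express all modal fixpoint properties.
\[
  \nu Y.\, \mu X.\, \bigl( (P \wedge \langle a \rangle Y) \vee \langle a \rangle X \bigr)
\]
```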
(RC) An effective tableau system for the linear time mu-calculus
This paper appeared in ICALP '96. The text given here is a slightly earlier tech report (ECS-LFCS-95-337). The paper was written with Javier Esparza and Angelika Mader. It describes - well, the title
says it all. Probably only of interest to the specialist.
ABSTRACT: We present a tableau system for the model checking problem of the linear time mu-calculus. It improves the system of Stirling and Walker by simplifying the success condition for a
tableau. In our system success for a leaf is determined by the path leading to it, whereas Stirling and Walker's method requires the examination of a potentially infinite number of paths
extending over the whole tableau.
gzipped PostScript text (59 kB).
The reference for the published version is:
Bradfield, J. C., Esparza, J. and Mader, A. An effective tableau system for the linear time mu-calculus, Proc. 23rd Int. Coll. on Automata, Languages and Programming (ICALP '96) LNCS 1099 98--109 (1996).
(RC) The modal mu-calculus alternation hierarchy is strict
This CONCUR '96 paper was the conference announcement. It is an extended abstract of the TCS paper. For old time's sake, here is the conference version as gzipped PostScript.
(RC) On the expressivity of the modal mu-calculus
The paper published in STACS '96 was a shortened version of a technical report, the text of which is given here. The differences are that in the published version, all references to Petri nets have
been removed and replaced by non-deterministic register machines; this reduces the number of definitions required, but also removes one or two mildly entertaining propositions.
ABSTRACT: We analyse the complexity of the sets of states, in certain classes of infinite systems, that satisfy formulae of the modal mu-calculus. Improving on some of our earlier results, we
establish a strong upper bound (namely Δ^1_2). We also establish various lower bounds and restricted upper bounds, incidentally providing another proof that the mu-calculus alternation hierarchy
does not collapse at level 2.
Within a couple of months of submission, this paper became redundant when I finally proved the alternation hierarchy theorem. So it's here purely for historical interest.
The tech report is available as gzipped PostScript text (55 kB).
The reference for the conference version is:
Bradfield, J.C. On the expressivity of the modal mu-calculus. Proc. 13th Annual Symposium on Theoretical Aspects of Computer Science (STACS '96) LNCS 1046 479-490 (1996).
(MS) Verifying temporal properties of systems
This text was provided to accompany two lectures given at the XXI-st International Winter School on Theoretical and Practical Aspects of Computer Science (SOFSEM), Milovy (Czech Republic) in December
1994. It is basically an updated version of the next paper, concentrating on the practical side, but also giving more explanation of the underlying theory. It is probably the best quick introduction
to the theory and practice of tableau model-checking as practised at Edinburgh in the early 1990s.
ABSTRACT: The modal mu-calculus is a powerful logic with which to express properties of concurrent systems. There are algorithms which allow one to check whether a finite system satisfies a
formula of this logic; but many interesting systems are infinite, or at least potentially infinite. In this paper we present an approach to verifying infinite systems. The method is a tableau
style proof system, using the modal mu-calculus as the logic. We also describe a software tool to assist humans in using the method.
gzipped PostScript text (78 kB).
(RC) A proof assistant for symbolic model checking
This paper, presented at CAV '92 (Proceedings: Springer LNCS 663, 316-329), describes a brief excursion into actual implementation. See also the previous paper.
ABSTRACT: We describe a prototype of a tool to assist in the model-checking of infinite systems by a tableau-based method. The tool automatically applies those tableau rules that require no user
intervention, and checks the correctness of user-applied rules. It also provides help with checking the well-foundedness conditions required to prove liveness properties. The tool has a general
tableau-manager module, and may use different reasoning modules for different models of systems; a module for Petri nets has been implemented.
gzipped PostScript text (70 kB).
The reference is:
Bradfield, J.C. A proof assistant for symbolic model-checking. Proc. Int. Conf. on Computer Aided Verification '92. LNCS 663 316-329 (1993).
(J) Local model checking for infinite state spaces
This paper, written with Colin Stirling, appeared in Theoretical Computer Science 96 (1992) 157-174. It is our standard journal reference for the technique of verifying modal mu-calculus properties
by tableau-style local model-checking. It contains (most of) the material in the Advances in Petri Nets paper and in the CONCUR '90 paper.
If you're interested in the soundness and completeness proofs for the tableau system, this paper contains Stirling's versions of the proofs; my versions are (tersely) in the Advances paper and
(verbosely) in my thesis. Some people prefer one, some the other...
ABSTRACT: We present a sound and complete tableau proof system for establishing whether a set of elements of an arbitrary transition system model has a property expressed in (a slight extension
of) the modal mu-calculus. The proof system, we believe, offers a very general verification method applicable to a wide range of computational systems.
gzipped PostScript text (81 kB).
The reference is:
Bradfield, J.C. and Stirling, C.P. Local model checking for infinite state spaces. Theoret. Comput. Sci. 96 157--174 (1992).
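The tableau method above targets infinite-state systems; on a finite transition system the mu-calculus semantics can instead be computed directly by naive fixpoint iteration. The following sketch is my own illustration of that textbook algorithm (it is not the tableau system of the paper, and the formula encoding is mine):

```python
# Naive fixpoint evaluation of modal mu-calculus formulas on a finite
# labelled transition system. Formulas are tagged tuples:
#   ('prop', p), ('var', X), ('and', f, g), ('or', f, g),
#   ('dia', a, f)  -- <a>f: some a-successor satisfies f
#   ('box', a, f)  -- [a]f: every a-successor satisfies f
#   ('mu', X, f), ('nu', X, f)

def sat(f, states, trans, props, env=None):
    """Return the set of states satisfying formula f."""
    env = env or {}
    tag = f[0]
    if tag == 'prop':
        return props.get(f[1], set())
    if tag == 'var':
        return env[f[1]]
    if tag == 'and':
        return sat(f[1], states, trans, props, env) & sat(f[2], states, trans, props, env)
    if tag == 'or':
        return sat(f[1], states, trans, props, env) | sat(f[2], states, trans, props, env)
    if tag in ('dia', 'box'):
        g = sat(f[2], states, trans, props, env)
        succ = lambda s: {t for (u, t) in trans.get(f[1], set()) if u == s}
        if tag == 'dia':
            return {s for s in states if succ(s) & g}
        return {s for s in states if succ(s) <= g}
    if tag in ('mu', 'nu'):
        # Knaster-Tarski: iterate from bottom (mu) or top (nu);
        # monotonicity guarantees termination on a finite lattice.
        cur = set() if tag == 'mu' else set(states)
        while True:
            nxt = sat(f[2], states, trans, props, {**env, f[1]: cur})
            if nxt == cur:
                return cur
            cur = nxt
    raise ValueError('unknown formula tag: %r' % tag)

# A 3-state a-cycle 0 -> 1 -> 2 -> 0, with an escape 1 -> 3 to a deadlock.
states = {0, 1, 2, 3}
trans = {'a': {(0, 1), (1, 2), (2, 0), (1, 3)}}
props = {'p': {3}}

inf_path = ('nu', 'X', ('dia', 'a', ('var', 'X')))       # some infinite a-path exists
reach_p = ('mu', 'X', ('or', ('prop', 'p'), ('dia', 'a', ('var', 'X'))))
print(sat(inf_path, states, trans, props))   # {0, 1, 2}: the deadlock state fails
print(sat(reach_p, states, trans, props))    # {0, 1, 2, 3}: all can reach p
```

The two example formulas show the characteristic least/greatest fixpoint split: reachability (mu) versus an invariant infinite behaviour (nu).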
(Book) Verifying Temporal Properties of Systems
This is my thesis, subsequently published. It has its own page.
(RC) Verifying temporal properties of processes
This paper, written with Colin Stirling, was presented at CONCUR '90 (Proceedings: ed. Baeten and Klop, Springer LNCS 458, 115-125). It describes the tableau system for local model-checking as
applied to CCS processes. It is strictly contained in the TCS paper. (I'm afraid the typography is fairly disgusting, as the paper was written in LaTeX, and that was years before I switched to LaTeX
and coerced it into producing something reasonable.)
gzipped PostScript text (57 kB).
(J) Proving Temporal Properties of Petri Nets
This paper was published in Advances in Petri Nets 1991, (ed. G. Rozenberg), Springer LNCS 524. It is strictly contained in my thesis.
ABSTRACT: We present a sound and complete tableau system for proving temporal properties of Petri nets, expressed in a propositional modal mu-calculus which subsumes many other temporal logics.
The system separates the checking of fix-points from the rest of the logic, which allows the use of powerful reasoning, perhaps specific to a class of nets or an individual net, to prove liveness
and fairness properties. Examples are given to illustrate the use of the system. The proofs of soundness and completeness are given in detail.
KEYWORDS: Petri nets, temporal logic, tableau systems, model-checking.
Here is the text (80 kB).
The reference is:
Bradfield, J. C. Proving temporal properties of Petri nets. Advances in Petri Nets 1991, LNCS 524, 29--47 (1991).
Sample Problem
In the following discussion and solutions the derivative of a function h(x) will be denoted by D{h(x)} or h'(x). The following problems require the use of these six basic ...
Differentiation of Inverse Trigonometric Functions
None of the six basic trigonometry functions is a one-to-one function. However, in the following list, each trigonometry function is listed with an appropriately ...
Trigonometric Functions
Trigonometric Functions: arbitrary angles and the unit circle. We've used the unit circle to define the trigonometric functions for acute angles so far.
Trigonometry/PreCalculus Sample Pages MathUSee
Lesson 6 Angles of Elevation and Depression Now we get a chance to apply all of our newly acquired skills in real life applications, otherwise known as word problems.
Math 1115 - College Algebra and Trigonometry
PRAIRIE VIEW AM UNIVERSITY Department of Mathematics Fall 2010 Math 1115-P05 COLLEGE ALGEBRA and TRIGONOMETRY Math 1115 - College Algebra and Trigonometry
Southern Regional High School District
PreCalculus/Trigonometry Honors Polynomial Rational Functions Page 2 of 4 Content: 1. Analyze functions numerically, graphically and algebraically 2.
Advanced Algebra Trigonometry
Updated 7/22/2009 3 Quality Core Curriculum Standards Advanced Algebra Trigonometry 1 Topic: Problem Solving, Reasoning, Estimation Standard: Solves problems throughout ...
Graphing Calculator
Applying TRIGONOMETRY using the SHARP EL-9600 GRAPHING CALCULATOR. David P. Lawrence, Southwestern Oklahoma State University. This Teaching ...
Algebra III
Updated 9/3/2009 3 Textbook Correlation to QCCs Unit Topics from QCC Chapter (Course III: Integrated Mathematics) Unit 1: Equations Inequalities (review equations ...
Trigonometric functions - Wikipedia, the free encyclopedia
In mathematics, the trigonometric functions (also called circular functions) are functions of an angle. They are used to relate the angles of a triangle to the lengths of ...
Prentice Hall Algebra and Trigonometry
Algebra and Trigonometry 2010 (Sullivan) Correlated to: North Carolina Essential Standards Draft dated 3/26/09 for Advanced Functions and Modeling 1 SE = Student Edition ...
GaDOE Mathematics Curriculum Comparison of QCC GPS Course Content
polynomial identity algebra
Let $R$ be a commutative ring with 1. Let $X$ be a countable set of variables, and let $R\langle X\rangle$ denote the free associative algebra generated by $X$ over $R$. If $X$ is finite, we can also write $R\langle X\rangle$ as $R\langle x_{1},\ldots,x_{n}\rangle$, where each $x_{i}\in X$. Because the algebra is free, the variables do not commute with one another; they do, however, commute with the elements of $R$. A typical element $f$ of $R\langle X\rangle$ is a polynomial over $R$ in finitely many non-commuting variables from $X$.
Definition. Let $A$ be an $R$-algebra and $f=f(x_{1},\ldots,x_{n})\in R\langle X\rangle$. For any $a_{1},\ldots,a_{n}\in A$, the element $f(a_{1},\ldots,a_{n})\in A$ is called an evaluation of $f$ at the $n$-tuple $(a_{1},\ldots,a_{n})$. If the evaluation vanishes (equals 0) for every $n$-tuple in $A^{n}$, then $f$ is called a polynomial identity for $A$.
Definition. An algebra $A$ over a commutative ring $R$ is said to be a polynomial identity algebra over $R$, or a PI-algebra over $R$, if there is a proper polynomial $f\in R\langle x_{1},\ldots,x_{n}\rangle$ such that $f$ is a polynomial identity for $A$. A polynomial identity ring, or PI-ring, is a ring that is a polynomial identity algebra over $\mathbb{Z}$.
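A classical family of examples: by the Amitsur-Levitzki theorem, the algebra of $n\times n$ matrices over a commutative ring satisfies the standard identity $s_{2n}(x_{1},\ldots,x_{2n})=\sum_{\sigma}\mathrm{sgn}(\sigma)\,x_{\sigma(1)}\cdots x_{\sigma(2n)}$. The sketch below (my own illustration, not part of the entry) checks numerically that $s_{4}$ vanishes on $2\times 2$ integer matrices:

```python
import itertools
import numpy as np

def sgn(perm):
    """Sign of a permutation, computed via inversion count."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def standard_poly(mats):
    """s_k(a_1,...,a_k) = sum over all permutations s of sgn(s) * a_{s(1)} ... a_{s(k)}."""
    total = np.zeros_like(mats[0])
    for perm in itertools.permutations(range(len(mats))):
        prod = np.eye(mats[0].shape[0], dtype=mats[0].dtype)
        for i in perm:
            prod = prod @ mats[i]
        total = total + sgn(perm) * prod
    return total

rng = np.random.default_rng(1)
A = [rng.integers(-9, 10, size=(2, 2)) for _ in range(4)]
print(standard_poly(A))   # the zero matrix: s_4 is a polynomial identity for 2x2 matrices
```

Integer matrices are used so the vanishing is exact rather than up to floating-point error.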
Synonyms: PI-algebra, algebra with polynomial identity
Added: 2004-04-29 - 19:15 | {"url":"http://planetmath.org/PolynomialIdentityAlgebra","timestamp":"2014-04-20T18:27:21Z","content_type":null,"content_length":"94245","record_id":"<urn:uuid:5a8265e5-5720-4975-a5fb-59bb10115f09>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Principle of Least Action
PRINCIPLE OF LEAST ACTION INTERACTIVE
Java programming by Slavomir Tuleja
Text by Edwin F. Taylor and Slavomir Tuleja
Draft of March 12, 2003
Throw an apple vertically upward from the ground (zero height). We demand that 3 seconds later the apple return to our hand at the same height (zero) from which we launched it. What is the motion of
this apple between the events of launch and catch? At what height can the apple be found at any given time? Or to express the question more technically: What is the worldline of the apple between
launch and catch? We use the principle of least action to find answers to these questions.
The principle of least action defines the action S for motion along a worldline between two fixed events:

    S = ∫ L dt,    the integral of L over time from the initial event to the final event.
Here L is called the Lagrangian. In simple cases the Lagrangian is equal to the difference between the kinetic energy T and the potential energy V, that is, L = T – V. In this interactive document we
will approximate a continuous worldline with a worldline made of straight connected segments. The computer then multiplies the value of (T – V) on each segment by the time lapse Δt for that segment
and adds up the result for all segments, giving us an approximate value for the action S along the entire worldline. Our task is then to move the connected segments of the worldline so that they
result in the minimum total value of the action S.
In the following we assume a mass of 0.2 kilogram for the apple.
We assume that you are acquainted with the concepts of acceleration, energy, and worldline.
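The discrete sum described above is easy to compute directly. The sketch below is my own reconstruction of the applet's arithmetic: it takes the mass m = 0.2 kg and the 3-second flight from the text, assumes g = 9.8 m/s², and uses the midpoint height of each segment for V (the program's exact discretization may differ in detail):

```python
# Discretized action S = sum over segments of (T - V) * dt for a
# piecewise-linear worldline z[0..N] sampled at equal time steps.
M, G, T_FLIGHT = 0.2, 9.8, 3.0   # mass from the text; g is assumed standard gravity

def action(z, dt):
    S = 0.0
    for i in range(len(z) - 1):
        v = (z[i + 1] - z[i]) / dt                      # segment velocity
        kinetic = 0.5 * M * v * v
        potential = M * G * 0.5 * (z[i] + z[i + 1])     # V at the midpoint height
        S += (kinetic - potential) * dt
    return S

N = 16
dt = T_FLIGHT / N
t = [i * dt for i in range(N + 1)]

flat = [0.0] * (N + 1)                 # apple left sitting on the ground
v0 = G * T_FLIGHT / 2                  # launch speed of the true up-and-down flight
parabola = [v0 * ti - 0.5 * G * ti * ti for ti in t]

print(action(flat, dt))       # 0.0
print(action(parabola, dt))   # about -21.5: lower ("more negative") than the flat worldline
```

This matches the text's observation that for this motion the minimum action is the most negative value.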
Display #1: Manual hunting for the worldline of least action
Find the worldline of the apple that has the minimum value of the action. To do this quickly, use the interactive display to manipulate intermediate points of the worldline. With the cursor, drag
each of the three black intermediate points on the worldline up and down. Notice the following features of the display:
• You can find the height and time of any location (any event) on the screen by pointing the cursor at that location. The coordinate values appear at the lower right.
• The display shows the current value of the total action S at the bottom left of the display. Below it is displayed the minimum value S[min] of the action found so far as you have dragged that dot
up and down.
• Your goal is to find the worldline for which the total action is the minimum. For this particular motion, the minimum value of the action is the most negative.
Start with the initial worldline that lies straight across the bottom of the spacetime diagram. (RESET if you have been trying out the display.)
Put the cursor on one of the three intermediate point-events and drag it upward. What happens to the value of the action shown below the display? Continue dragging that same point upward.
Drag the dot to the height for which the value of the total action is minimum. Then drag another intermediate point up and down. You can tell immediately when you have passed the height for minimum
action, because the S[min] display stops changing.
Set the second dot to the minimum-action value and move along to drag the third dot up and down. When you have set the third dot to the height for minimum action, make a prediction about the answer
to the following question:
Try it by dragging the first dot up and down again. What is the answer to the question? Did you predict correctly?
Now cycle repeatedly through the three dots, moving each one to find the minimum (most negative) value of the action. PLEASE BE PATIENT. It is worth going through this tedious manual process
completely at least once. After many cycles through the three dots, you reach a condition in which added cycles through the dots does not further reduce the value of the total action. Record the
value of the minimum action for your final worldline.
The worldline you have constructed satisfies the condition of least action for this special case of a worldline of four straight segments equally spaced along the time axis.
Display #2: Automatic hunting for worldline of least action
Of course a worldline with only three intermediate points is not realistic. The second display is a bit better, with many intermediate points. If you want, you can start the process of finding the
worldline of least action by dragging each of these many event-dots up and down. But life is short, and computers are designed to do routine work quickly. So let the computer find the worldline of
minimum action by repeated cycling through the dot-events at the ends of each segment of the worldline.
Click on the button labeled HUNT. The computer begins cycling quickly through the dots, hunting for the worldline of least action. You can interrupt the process at any time by clicking on PAUSE and
then either continue by pressing HUNT again or return to the initial horizontal worldline by clicking on RESET. When the action stops changing as the program hunts, record the value of the action for
this worldline and the value of the maximum height of the trajectory.
Of course even a worldline consisting of many straight segments is not the same as a continuous worldline, made up of an infinite number of event-points. The computer is not able to carry out the
minimization of action for an infinite number of points. But you can imagine that as more and more dots are added, the value of the resulting action gets closer and closer to the value for a
continuous worldline.
This second program differs from the first one in that it allows you to change the heights of the initial and final events. Drag the final event to a height of 5 meters. Notice that the program has rescaled
the vertical axis to make it possible for the worldline to fit into the window. Before you press the HUNT button, answer the following question:
Display #3: Constant acceleration under uniform gravitation.
Use the program below to show that for the worldline of least action the acceleration of the apple, corresponding to each pair of successive segments, is constant, i.e. the motion of the apple is
governed by Newton's second law of motion.
The acceleration for each pair of successive segments of the current test worldline is displayed in the table at the right. Click the HUNT button to start automatic hunting for the minimum-action
worldline. When the worldline stops changing, click on PAUSE. What is the value of the constant acceleration?
Display #4: Constant mechanical energy for the least-action worldline
Use the program below to show that for the worldline of least action the mechanical energy of the apple along each segment is a constant of the motion.
The total energy E = T + V for each segment of the current trial worldline is displayed in the table at the right of the display. Click the HUNT button to find the worldline of least action. Write down
the value of the constant energy.
Display #5: The Incremental Principle of Least Action
Mechanics is often taught using Newton's second law: F = ma. What is the relation between F = ma and the principle of least action that we have been describing here? Richard Feynman answers this
question clearly:
There is quite a difference in the characteristic of a law which says a certain integral from one place to another is a minimum—which tells something about the whole path—and of a law which says that
as you go along, there is a force that makes it accelerate. The second way tells how you inch your way along the path, and the other is a grand statement about the whole path. ... Let's suppose that
we have the true path and that it goes through some point a in space and time, and also through another nearby point b. Now if the entire integral from t[1] to t[2] is a minimum, it is also necessary
that the integral along the little section from a to b is also a minimum. It can't be that the part from a to b is a little bit more. Otherwise you could just fiddle with just that piece of the path
and make the whole integral a little lower. So every subsection of the path must also be a minimum. And this is true no matter how short the subsection. Therefore the principle that the whole path
gives a minimum can be stated also by saying that an infinitesimal section of path also has a curve such that it has a minimum action. ... So the statement about the gross property of the whole path
becomes a statement of what happens for a short section of the path—a differential statement. ... That's the qualitative explanation of the relation between the gross law and the differential law.
—Richard P. Feynman et al., The Feynman Lectures on Physics, Volume II, page 19-8
In brief, Feynman is saying that if the entire worldline has minimum action compared with possible nearby worldlines, then every small segment along the worldline must have minimum action compared to
possible nearby segments.
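Feynman's "differential statement" can be made concrete on the segmented worldlines used in these displays. A sketch of the calculation (assuming mass m, equal time steps Δt, and approximating the potential energy on each straight segment by mg times the segment's average height — an assumption about the discretization, not something stated on this page): hold every event fixed except one intermediate event with height y_i, and demand that the action of the two segments meeting there be a minimum:

```latex
S_i \;=\; \frac{m}{2}\,\frac{(y_i - y_{i-1})^2}{\Delta t}
      \;+\; \frac{m}{2}\,\frac{(y_{i+1} - y_i)^2}{\Delta t}
      \;-\; mg\,\frac{y_{i-1} + y_i}{2}\,\Delta t
      \;-\; mg\,\frac{y_i + y_{i+1}}{2}\,\Delta t ,
\qquad
\frac{\partial S_i}{\partial y_i}
   \;=\; m\,\frac{2y_i - y_{i-1} - y_{i+1}}{\Delta t} \;-\; mg\,\Delta t \;=\; 0 .
```

Rearranging the second equation gives (y_{i+1} − 2y_i + y_{i−1})/Δt² = −g: the discrete acceleration across every pair of segments is the constant −g, which is Newton's second law F = ma for a uniform gravitational field.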
This final display allows you to check out Feynman's description by moving one of the point-events on the worldline and watching the resulting change in the action for segments on either side of that
point and also the total action of the entire worldline.
Start by playing with the Zoom feature that allows you to look closely at any moveable point-event on the worldline.
This diagram demonstrates that when the worldline minimizes action, every segment also minimizes action. Why is this important? Because, as Feynman says, it connects the principle of least action to
Newton's acceleration law F = ma. By analyzing the action on a pair of segments, as in this display, you can derive F = ma using elementary calculus. See details in the paper "Classroom derivation of
Newtonian mechanics from the principle of least action" by Jozef Hanc, Slavomir Tuleja, and Martina Hancova at the website:
But there is a further payoff: the so-called Lagrange equations are another expression of Newton's laws which are in some ways more general and powerful than F = ma. For an introduction to
Lagrange's equations and a paper deriving the Lagrange equations from the principle of least action, see the same website.
Unbalanced distribution power flow analysis using sequence and phase components
Abdel Akher, Mamdouh and Mohamed Nor, Khalid (2008) Unbalanced distribution power flow analysis using sequence and phase components. In: Recent Developments In Three Phase Load Flow Analysis.
Penerbit UTM, Johor, pp. 107-137. ISBN 978-983-52-0680-1
Official URL: http://www.penerbit.utm.my/bookchapterdoc/FKE/book...
Three-phase distribution networks are traditionally modeled in a phase-coordinates frame of reference. This is because the mutual inductances between different phases of asymmetrical transmission
lines are not equal to each other. Besides, the sequence networks are coupled together and cannot be broken into independent circuits, and distribution systems contain multi-phase unbalanced laterals.
A variety of three-phase power-flow algorithms have been developed based on phase components for solving unbalanced power systems. Some of these algorithms solve a general network structure, such as the
standard Newton-Raphson method or its variants [1-5]. However, the three-phase Newton-Raphson method is computationally expensive for large systems due to the size of the Jacobian matrix, and its
fast decoupled version is sensitive to high line R/X ratios. The admittance or impedance methods [6-9] have convergence characteristics that are highly dependent on the number of PV nodes in the
electrical network [8]. There are also unbalanced power-flow methods which consider primarily the radial structure of distribution networks. Therefore, these power-flow methods can solve only radial
or weakly meshed systems, such as the methods given in [10-11].
Repository Staff Only: item control page | {"url":"http://eprints.utm.my/28062/","timestamp":"2014-04-17T12:37:24Z","content_type":null,"content_length":"23228","record_id":"<urn:uuid:fbdab15d-8bc3-4982-ad5f-3a3c6858120c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
1st Download Center - free download ESBPDF Analysis - Probability Software 2.4.1
Description from the Publisher
ESBPDF Analysis is Probability Analysis Software that provides everything needed for using Discrete and Continuous Probability Distributions in a single Windows application. Most tables and supplied
functions (such as in MS Excel) give only P(X less than A), and other results must then be found using algebra; ESBPDF Analysis handles all the combinations for you.
Features include Binomial, Poisson, Hypergeometric, Normal, Exponential, Student t, Chi Squared, F, Beta and Lognormal Distributions; Inverses of Normal, Student t, Chi Squared, F, Beta and Lognormal
Distributions; Lists of Binomial Coefficients, Factorials and Permutations; Calculations of Gamma and Beta Functions; Printing of Standard Normal Tables and Critical t Values; Fully Customisable;
Integrated Help System which includes a Tutorial. We also plan on adding many more Distributions and features.
Also includes Graphing of Distributions, additional Information on Distributions and registered users also get Electronic Documentation and a PDF version of the documentation designed for printing.
Ideal for the Maths/Stats Student who wishes to understand Probability Distributions better, as well as the Maths Buff who wants a well designed calculating tool.
Designed for Windows and optimised for Windows XP.
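As a point of reference, the kind of calculation such a tool automates can be sketched for the normal distribution using the error function from Python's standard library. This is generic, illustrative probability code, not ESBPDF itself: the point is that a table gives only P(X < a), and every other probability follows from it by algebra.

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# The table value, and the "combinations" derived from it by algebra:
p_below = normal_cdf(1.0)                        # P(X < 1)
p_above = 1.0 - normal_cdf(1.0)                  # P(X > 1)
p_between = normal_cdf(1.0) - normal_cdf(-1.0)   # P(-1 < X < 1)
```

For the standard normal these come out near 0.841, 0.159, and 0.683 respectively; software like the package described above packages such identities (and the inverse CDFs) behind a single interface.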
Most popular product from this category
AshSofDev Math Tables
AshSofDev Math Tables is a small program written for my granddaughter to help with her math. Maybe someone else can get some use from it.
FC-Win (tm) is a front-end program for Fortran Calculus (tm). The Fortran Calculus (FC) language is for math modeling, simulation, and optimization. FC is based on Automatic Differentiation that
simplifies computer code to an absolute minimum.
CurveFitter performs statistical regression analysis to estimate the values of parameters for linear, multivariate, polynomial, exponential and nonlinear functions.
ScienCalc is a convenient and powerful scientific calculator. ScienCalc calculates mathematical expression. It supports the common arithmetic operations (+, -, *, /) and parentheses.
Automaton Lab 3D
The automata which are modeled in this application are composed of a set of spheres whose size and axis are relative to one another, and where each sphere is rolling upon the surface of one other
sphere in a fully deterministic pattern.
Calc Pro
Calc Pro is a handy program for standard calculations and operations with numbers. It's as easy to use as an ordinary handheld calculator, but has quite a few advantages. Log window you can activate
by space key.
trendingBot is a powerful numerical simulation tool able to find the equation explaining the behaviour of the given data with no external help (user-defined parameters)
keyCulator is the simplest calculator software in the world. However, the simplicity is both a concept and a power! This is the only calculator software without a program window. Actually, there is
no window at all!
Orneta Calculator Mobile
Calculator Mobile, by Orneta is a simple and easy to use application that acts like a standard scientific handheld calculator for Windows Mobile based Pocket PC and Smartphone. Solve your math and
science problems at work, school, the lab, wherever.
STATOOL Statistic and Probability Tools
StaTool - Statistic and Probability Tools for Windows: hypothesis testing, confidence interval estimation, probability distributions, one-variable statistics, two-variable statistics, Total
Probability Law and Bayes' Theorem.
Mplus Discussion >> Reads data incorrectly
Anonymous posted on Saturday, June 28, 2003 - 9:34 pm
I am trying to fit a SEM model with a number of categorical y variables. In reading the data, though, it says "Categorical variable Y4 contains less than 2 categories," whereas I can clearly see that
the .txt file that I am supplying contains the values 1, 2, 3, 4, and 5 (ordered categories). I tried creating that file in Excel and SPSS, but the message is still the same. What might have gone wrong?
Linda K. Muthen posted on Saturday, July 05, 2003 - 9:02 am
With categorical outcomes, any observation with a missing value on one or more analysis variables is not included in the analysis (listwise deletion). I would imagine that this is what is happening.
After listwise deletion, y4 contains only one category. If this is not the case, send the data and input and I will find the explanation.
Anonymous posted on Monday, February 16, 2004 - 12:04 pm
Could you please see what I am doing wrong here (i am trying to create a categorical group variable)?
if (gleason gt 6) then group5=1;
if (gleason le 6) and (gleason>0) then group5=0;
NAMES ARE ID..oareax group4;
CATEGORICAL is group5;
MISSING ARE .;
age1x wst_hipx trigplx hdlx
cpepx insulinx glucosex hgbx group5;
ANALYSIS: type=meanstructure;
MODEL: group5 ON age1x wst_hipx IR;
IR ON age1x wst_hipx;
IR BY trigplx* hdlx cpepx insulinx
glucosex hgbx;
age1x WITH wst_hipx;
OUTPUT: stand tech2;
*** ERROR in Variable command
Unknown variable(s) in CATEGORICAL option:
Linda K. Muthen posted on Monday, February 16, 2004 - 12:52 pm
In Version 2.14, a variable listed on the CATEGORICAL statement cannot be a new variable created in DEFINE. See pages 55-56 of the Mplus User's Guide. Use the name from one of the other variables in
the NAMES statement that is not on the USEV statement. That should work. This will be changed in Version 3.
May Guo posted on Tuesday, July 08, 2008 - 1:33 pm
I am trying to import data from SPSS to Mplus. I changed all the missing values to -9 and saved the SPSS file to .dat format. When I check the .dat file with Notepad, it seems that the data are read
correctly. However, after I imported the data into Mplus and ran the frequencies, the means of any variables with missing values are different from what I get from SPSS. The difference is not due to
decimal rounding (e.g., M = 4.02 in SPSS vs. M = 4.33 in Mplus).
I also tried to run a CFA using Mplus, but the model did not converge. Using exactly the same data to run the CFA in AMOS, the model fit is reasonably good (CFI = .963, RMSEA = .064, NFI = .958).
Can anyone suggest what might be the problem?
Linda K. Muthen posted on Tuesday, July 08, 2008 - 2:03 pm
The default in Mplus is to use all available data (TYPE=MISSING). I believe the sample statistics in SPSS use the number of observations for each variable that are not missing. Different n's is the
most likely reason for the discrepancy.
The convergence problem may be due to large variances. See your sample statistics. If you have large variances, you can rescale the variances by dividing by a constant using the DEFINE command. We
recommend keeping variances between one and ten. If this does not help, please send your input, data, output, and license number to support@statmodel.com.
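To see why means computed from different cases need not agree, here is a toy sketch contrasting per-variable ("available case") means with listwise-deleted means. The data are hypothetical and this mimics neither program's actual estimation algorithm:

```python
# Four rows of two variables; None marks a missing value.
rows = [
    [1.0, 2.0],
    [9.0, None],   # y2 missing on this row
    [5.0, 4.0],
    [None, 6.0],   # y1 missing on this row
]

def available_case_mean(rows, col):
    """Mean over every row where THIS variable is observed."""
    vals = [r[col] for r in rows if r[col] is not None]
    return sum(vals) / len(vals)

def listwise_mean(rows, col):
    """Mean over only the rows complete on ALL variables."""
    complete = [r for r in rows if all(v is not None for v in r)]
    vals = [r[col] for r in complete]
    return sum(vals) / len(vals)
```

Here the available-case means are 5.0 and 4.0, but after listwise deletion only two complete rows remain and both means drop to 3.0 — the same variable, summarized over different n's.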
Nikolaos Stavrakakis posted on Thursday, August 12, 2010 - 1:52 am
Hello Linda,
I am trying to run the model fit for a SEM with 5 imputed datasets, where I get this error message: "Test of model fit, standard errors and sample statistics are not computed. This is due to zero
successful imputations. Check TECH9."
When I check tech9 for all imputed datasets this error message appears "The degrees of freedom for this model are negative. the model is not identified....check your model. The model estimation
terminated normally. The standard errors of the model parameter estimates could not be computed....problem involving parameter 28...."
Can you please explain what the problem is here? Is there something wrong with the imputed datasets (a quick inspection did not show anything there)? or is there not enough information in the data to
estimate all of the parameters that i have specified?
thanks for your time and efforts
Linda K. Muthen posted on Thursday, August 12, 2010 - 9:27 am
It sounds like the problem is with your model not the data. Please send the full output and your license number to support@statmodel.com.
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=11&page=297","timestamp":"2014-04-19T17:07:42Z","content_type":null,"content_length":"27640","record_id":"<urn:uuid:6c045d08-4a22-416f-9d84-279d7f7b2521>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
3.1 Field equations in spherical symmetry
The Einstein equations are
$$G_{ab} = 8\pi \left( \nabla_a \phi \, \nabla_b \phi - \frac{1}{2} g_{ab}\, \nabla_c \phi \, \nabla^c \phi \right),$$
and the matter equation is
$$\nabla^a \nabla_a \phi = 0.$$
Note that the matter equation of motion is contained within the contracted Bianchi identities. Choptuik chose Schwarzschild-like coordinates,
$$ds^2 = -\alpha^2(r,t)\, dt^2 + a^2(r,t)\, dr^2 + r^2\, d\Omega^2,$$
with r giving the surface area of 2-spheres as $4\pi r^2$ and t orthogonal to r (polar-radial coordinates). One more condition is required to fix the coordinates completely. Choptuik chose $\alpha = 1$ at r = 0, so that t is
the proper time of the central observer.
In the auxiliary variables
$$\Phi = \phi', \qquad \Pi = \frac{a}{\alpha}\,\dot\phi,$$
the wave equation becomes a first-order system,
$$\dot\Phi = \left(\frac{\alpha}{a}\,\Pi\right)', \qquad \dot\Pi = \frac{1}{r^2}\left(r^2\,\frac{\alpha}{a}\,\Phi\right)'. \qquad (22,\,23)$$
In spherical symmetry there are four algebraically independent components of the Einstein equations. Of these, one is a linear combination of
derivatives of the others and can be disregarded. The other three contain only first derivatives of the metric: a constraint (24) determining $a'$, a slicing condition (25) determining $\alpha'$, and an equation (26) for $\dot a$. Because of spherical symmetry, the only dynamics is in the scalar field equations
(22, 23). The metric can be found at each time by integrating the ODEs (24) and (25) for a and $\alpha$, given the matter variables, so that Eq. (26) can be ignored in this "fully constrained" evolution scheme.
Solid Geometry
Silver, Burdett and Company, 1934
In this text solid geometry is so presented that the student will gain (1) a knowledge of the properties of planes and lines in space; of polyhedrons, cones, cylinders, and spheres; (2) further
practice in the processes of logic which were begun in plane geometry; (3) a realization that he must understand solid geometry if he wishes to comprehend nature and art and the fascinating and
diversified developments of modern means of communication, transportation, construction, and fabrication; and (4) skills and attitudes that will be helpful as a part of the training essential in
meeting his own personal problems of everyday living.
As examples of these attitudes the following are cited: (1) geometry is a science of necessary conclusions; (2) it wrestles with difficult problems; (3) it tolerates no guessing or jumping at
conclusions; and (4) it requires strict attention to the whole problem with discriminating use of the means at hand to obtain a solution.
In this text the student's attention is directed repeatedly to the relations of solid geometry to other school subjects, such as algebra, trigonometry, geography, and physics. The functional
relations of geometric quantities are emphasized, thus tying up solid geometry with modern scientific thinking. Relations of solid geometry to various phases of modern life outside the classroom are
also dealt with in exercises. Attention should be directed especially to the problems which contain data of actual work of present-day steel fabrication, refractories, and public utilities.
BOOK VI. LINES AND PLANES IN SPACE
Lines Parallel or Perpendicular to a Plane
Parallel Planes
Dihedral Angles
Locus and Projection
Polyhedral Angles
General Polyhedrons
Circles of a Sphere
Area and Volume of a Sphere
Spherical Triangles and Polygons
Useful Reference Material
Alternative Proofs
Helpful Material for Teachers | {"url":"http://www.agnesscott.edu/Lriddle/women/abstracts/cowley_solidgeometry.htm","timestamp":"2014-04-17T13:05:35Z","content_type":null,"content_length":"4819","record_id":"<urn:uuid:08cbab60-24d9-4703-8062-319c4a19a24e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating Percentiles in Memory-bound Applications
Suppose you have a long list of numbers and you want to find the 5^th and 95^th percentiles. If the list is small enough, you can read the list into memory, sort it, and see which numbers are 5% of
the way from each end. Simple. But what do you do if the list is being generated sequentially and the entire list is too large to fit into memory at once? This article presents a simple C++ class for
solving this problem. The class takes template parameters so it can be used with any data type.
Motivating Application
The solution presented here could be used in a variety of memory-bound problems, but I'll describe briefly the problem that originally motivated the code. As part of its processing, the statistical
package WFMM generates a huge matrix one row at a time, then reports a couple percentiles of each column. It would be nice if it were possible to produce the data one column at a time, but that's not
the case. The nature of the problem is that rows can be computed independently but columns cannot. The size of this matrix varies according to the input, and the amount of available memory determines
the maximum problem size the software can handle. The code presented here was used in the WFMM project to conserve memory.
To make the problem tangible, suppose we have a 1000 numbers and we want to know what the 50^th number would be if we were to sort the list in increasing order. Imagine we want to do this using as
little memory as possible. We could read in the first 50 numbers and sort them. Then when we read in the 51^st number, we compare it to the largest of the 50 numbers we've saved. If the new number is
larger, we throw it away because we know it cannot be the 50^th smallest number of the full list. If the new number is smaller, we throw away the largest number we were keeping and insert the new
number into our cache. This gives us a way to compute the 50^th smallest number using 50 memory locations. Obviously we could use a similar approach to find the 50^th largest number as well.
Could we do any better? No. Until we see the last 50 numbers, we can't rule out the possibility of any one of the numbers we're saving being the one we're after. When we see the 951^st number, we can
rule out one of the numbers in our cache. When we see the 952^nd number we can throw out another number from the cache, etc. But for the majority of the runtime of the algorithm, we need to hold on
to 50 numbers.
In general, if you want to find the n^th smallest number in a list of M numbers, you need to save n numbers along the way. (Unless n is larger than half of M. Then you could turn the problem around
and find the (M-n)^th largest number in the list.) So if you want to calculate the 5^th or 95^th percentile of a list of M numbers, you need to store 0.05 M numbers. If you want to find both the 5^th
and the 95^th percentiles you'd need to store 0.10 M numbers along the way. By storing just the numbers you need, you can solve a problem 10 times larger than you could if you were to load everything
into memory first and then sort.
Using the Code
The class TailKeeper keeps track of the “tails” of a sequence of numbers, the largest and smallest values, in order to compute upper and lower percentiles as described above. Values are inserted into
a hash data structure as they arrive. This allows us to keep the stored numbers sorted and make fast inserts without requiring additional memory.
The class uses templates to work with any data type that supports comparison, not just numeric types. Also, the input and storage types may differ. In our application, the input data had type double
but was stored as type float. This allowed us to save twice as much data in the same memory. There was enough noise in the data that the loss of precision in casting double to float did not matter.
To use TailKeeper in your application, #include the files TailKeeper.h and FixedSizeHeap.h. The class needs two constants to specify which values you're looking for. If you want to find the m^th
smallest and n^th largest values, you can specify m and n either as arguments to the constructor or as arguments to the Initialize function.
For example, suppose you're wanting to find the 40
smallest and the 10
largest element in a list of
s and you are willing to cast your input to
s to save space. You could declare a
class as follows:
TailKeeper<double, float> tk(40, 10);
The input type defaults to double and the storage type defaults to float and so <double, float> could be left out in this case.
As each value in the list becomes available, insert it into TailKeeper by calling the AddSample method:
tk.AddSample( x );
To find the 40^th smallest element along the way, call GetMaxLeftTail(). Similarly, to find the 10^th largest element, call GetMinRightTail(). (The smallest 40 elements seen at any point constitute
the left tail, so the maximum of these is the 40^th smallest. The largest 10 elements are the right tail, so the minimum of these is the 10^th largest.)
Testing the Code
The demo project first tests the FixedSizeHeap class that the TailKeeper depends on by checking and inserting several values and comparing the internal state of the heap to the results of manual
The project then creates a list of 1000 random integers and finds the 5^th and 96^th percentiles using TailKeeper directly by sorting the list.
// Generate a list of random integers.
int length = 1000;
std::vector<int /> data(length);
data[0] = 137; // arbitrary seed
for (int i = 1; i != length; ++i)
// This is George Marsaglia's "CONG" random number generator.
data[i] = 69069*data[i-1] + 1234567;
// Find the 50th smallest and 40th largest elements using TailKeeper.
int lower = 50, upper = 40;
TailKeeper<int, /><int, int> tk(lower, upper);
for (int i = 0; i != length; ++i)
tk.AddSample( data[i] );
int leftTailTK = tk.GetMaxLeftTail();
int rightTailTK = tk.GetMinRightTail();
// Find the same values directly by sorting the entire list.
std::sort(data.begin(), data.end());
int leftTailSort = data[lower-1];
int rightTailSort = data[length - upper];
// Compare the results.
assert(leftTailTK == leftTailSort);
assert(rightTailTK == rightTailSort);
For further testing, you could vary the random number generator seed value data[0] or the values of the length, upper, and lower parameters.
• 28^th April, 2008: Initial post | {"url":"http://www.codeproject.com/Articles/25656/Calculating-Percentiles-in-Memory-bound-Applicatio","timestamp":"2014-04-20T19:46:29Z","content_type":null,"content_length":"96336","record_id":"<urn:uuid:78631357-6a89-4d61-a5f5-e7d9ab14bfb0>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Which cannot be the third side of a triangle which has two sides as 4 cm and 9 cm?
2. Select the measurements that match the sides of a triangle.
3. Which of the following does not represent the lengths of the sides of a triangle?
4. Which cannot be the third side of a triangle which has two sides as 6 cm and 8 cm?
5. Select the wrong statement/statements for a right triangle.
   1. Sum of the legs is always greater than the hypotenuse.
   2. One leg = square root of the product of the sum and difference of the hypotenuse and the other leg.
   3. One leg is always greater than the other.
6. The difference of any two sides of a triangle is ____.
7. Choose the way(s) that would make ΔABC isosceles.
   1. By increasing ∠A
   2. By decreasing ∠C
   3. By increasing ∠C
8. Select the correct statement(s) with respect to a triangle.
   I. Sides containing the smallest angle will be larger than the third side.
   II. Sides containing the largest angle will be longer than the third side.
   III. Sum of the lengths of the smaller sides will be less than the length of the larger side.
9. What is the ascending order for the lengths of the sides of ΔABC?
10. Select the correct statement/statements.
    1. BC is always greater than 8 cm.
    2. BC is always equal to 8 cm.
    3. BC is always greater than 2 cm.
    4. BC is always less than 8 cm.
11. If ∠A is the largest angle in ΔABC, then which of the following can be the length of BC?
12. Sides PQ and PR are produced and ∠SQR > ∠TRQ. What is the relationship between PQ and PR?
13. Arrange the angles of ΔABC in descending order.
14. The side lengths of a triangle are 5 cm, 7 cm and z cm respectively. Which of the following is true?
15. Which of the following cannot be the length of BC?
16. In the figure, x is a whole number. What is the smallest possible value for x?
17. Which of the following can be the length of BC?
18. Which of the following can be the length of BC?
19. AB > BC > CA. What could be the measures of ∠A, ∠B and ∠C respectively?
20. In a triangle, the smallest angle is 40°. The largest angle is ____.
21. In ΔABC, p cm and h cm are the perimeter and the sum of the altitudes of the triangle. Then what is the relationship between p and h?
22. In ΔABC, let p cm and r cm denote the perimeter and the sum of its medians respectively. What is the relationship between p and r?
23. In ΔABC, ∠A = 68°, ∠B = 52°. Which is the greatest side?
24. In a triangle, which is the side opposite to the obtuse angle?
25. In ΔPQR, PQ = PR, ∠Q = 65°. Find the relationship between QR and PR.
26. m∠A = 50, m∠C = 50, m∠B = 80, AB = 6 cm, BC = 7 cm. These measures represent ____.
27. In ΔABC, m∠A = 42, m∠B = 78. What is the relationship between AC and AB?
28. In ΔXYZ, ∠X = 42° and ∠Y = 52°. The bisectors of ∠X and ∠Y meet at O. Which of the following is true?
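Several of the questions above hinge on the triangle inequality: three lengths form a triangle exactly when the sum of every pair exceeds the third side, so given two sides a and b, the third side c must lie strictly between |a − b| and a + b. A quick sketch:

```python
def can_be_triangle(a, b, c):
    """True iff sides a, b, c can form a (non-degenerate) triangle."""
    return a + b > c and b + c > a and c + a > b

def third_side_range(a, b):
    """Open interval (low, high) of valid lengths for the third side."""
    return (abs(a - b), a + b)
```

For the first question's sides of 4 cm and 9 cm, `third_side_range(4, 9)` gives the open interval (5, 13): a length of 5 cm cannot be the third side, while 6 cm can.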
Gain calculations - diyAudio
I'm just working through Douglas Self's book and running through some of the calculations.
At the moment I'm looking at LF and HF gain
He gives LF gain to be gm*Beta*Rc
Could someone just clarify that the following assumptions are correct please?
gm in mA/V?
Beta is that of the VAS transistor? And if a Darlington stage is used, does that need to be used as beta*beta in the equation?
Rc is the output resistance of the VAS current source? I know it's going to be quite high, but can someone tell me how to calculate it for a common 2 transistor CC? I know I should be able to work
this one out, but I think I've been looking at textbooks for too long and I can't see the wood for the trees now!
At the moment I'm getting a horrifyingly large number for a circuit set up with gm = 95 mA/V, beta = 14400 for two 2SD669s in the VAS, and 5 Mohm for the VAS current source. I've done a lot of
searching, but have failed to find anything that can help yet.
Thanks for looking | {"url":"http://www.diyaudio.com/forums/solid-state/117045-gain-calculations.html","timestamp":"2014-04-24T15:17:29Z","content_type":null,"content_length":"73131","record_id":"<urn:uuid:8ff075f4-5e57-4c0e-9b56-9943f6a5f192>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
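For reference, plugging the figures quoted in the post into the gm*Beta*Rc formula does indeed give a very large number (the values below are the poster's, not verified design figures):

```python
import math

gm = 95e-3      # input-stage transconductance, A/V (95 mA/V, from the post)
beta = 14400    # compound current gain quoted for the two 2SD669s (Darlington)
rc = 5e6        # VAS current-source output resistance, ohms (5 Mohm, from the post)

lf_gain = gm * beta * rc              # Self's LF open-loop gain estimate
lf_gain_db = 20 * math.log10(lf_gain)

print(f"LF gain = {lf_gain:.3g} ({lf_gain_db:.1f} dB)")
# → LF gain = 6.84e+09 (196.7 dB)
```

A figure near 10^10 is not an arithmetic error: this is the open-loop gain before negative feedback, and in a real amplifier the effective Rc (very roughly the Early voltage over the collector current for a single transistor, higher for cascoded or two-transistor sources) is shunted by the output stage's input impedance, which brings the usable number down considerably.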
fast_floor :: Double -> Word8
This is a special version of the regular floor function. It works by directly calling the low-level internal GHC primitives, and thus is as fast as you'd expect for such a trivial operation.
(The standard floor function does something crazy like converting a Double to a numerator/denominator Integer pair and then computing the integer part of the quotient as an Integer, then truncating
that to a Word8. Which, obviously, is ludicrously slow.)
Hopefully one day the need for this low-level hackery will disappear. | {"url":"http://hackage.haskell.org/package/AC-Colour-1.1.3/docs/Data-Colour-FastFloor.html","timestamp":"2014-04-17T21:49:15Z","content_type":null,"content_length":"3537","record_id":"<urn:uuid:7480115f-e36f-4d61-ab31-8c4a7d1a4f5a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ordered geometries from convex subsets of the plane
In the Klein disk model of the hyperbolic plane, the points are the interior of the disk, and the lines in $H^2$ correspond to lines intersecting the interior.
Similarly, the Euclidean plane can be modeled by the interior of a hemisphere of $S^2$ (or $\mathbb {RP}^2$ minus a line) so that lines in $\mathbb R^2$ are the intersections of geodesics of the
sphere with the hemisphere.
In both cases, the angles aren't preserved, but the orderings of points on lines are preserved.
For any open convex set in $\mathbb R^2$, consider the nonempty intersections of lines with the set as lines in an ordered geometry with the induced ordering from $\mathbb R^2$. Two such geometries
are equivalent if there is a bijection between them preserving lines and the ordering on each line.
1. Which convex open sets produce geometries equivalent to $H^2$?
2. Which pairs of convex open sets produce equivalent geometries?
Either line-segment preserving maps are quite flexible, or else there should be ways to recover much of the information about convex sets from their incidence geometries.
Some weak results
You can distinguish the interior of a triangle from $H^2$ (or any other bounded convex open set) through the incidence relations. In the triangle, there are three lines such that every other line
intersects at least one of the three. Any three lines through the vertices work. In $H^2$, you can always find a line disjoint from any finite collection of lines.
Similarly, if a line segment makes up part of the boundary of a set in the plane, then the incidence geometry is not $H^2$.
The incidence relation plus ordering is enough to construct ideal points of the boundary of the set. These correspond to maximal sets of rays so that for any two disjoint rays $R_1$ and $R_2$ in the
set, the set of points $p$ so that for some $x_1 \in R_1$ and $x_2 \in R_2$, $p$ is between $x_1$ and $x_2$, is a triangle subgeometry.
mg.metric-geometry geometry
Surely if one convex set is a projective transformation of another, then they define equivalent geometries. In particular, the convex side of a parabola gives $H^2$. – Konrad Swanepoel Feb 3 '10 at
Yes, that's correct, and a triangle is equivalent to a half-strip. – Douglas Zare Feb 3 '10 at 22:01
1 Answer
Convex sets give the same geometries if and only if they are projectively equivalent. In particular, it is only conics that give $H^2$.
It is more natural to work in the projective plane $P^2(\mathbb{R})$. Then we define a set to be convex if its intersection with any line is empty or connected. We are given two open
convex sets $C_1$ and $C_2$ in the plane with a bijection $\phi:C_1\to C_2$ which is order preserving in the sense that $a$ and $b$ separates $c$ and $d$ (on some projective line) if and
only if $\phi(a)$ and $\phi(b)$ separates $\phi(c)$ and $\phi(d)$ (on some projective line).
We would like to prove that $\phi$ is a projective transformation.
The following attempt uses the theorem of Desargues together with the fundamental theorem of projective geometry.
Claim. $\phi$ can be extended to the whole of $P^2(\mathbb{R})$ such that lines are mapped to lines.
Consider any point $x\notin C_1$. We locate $\phi(x)$ by using the Theorem of Desargues as follows. Choose three lines through $x$ that intersect $C_1$ in three connected sets $c_1$,
$c_2$, $c_3$. Choose points $a_i$ and $b_i$ on chord $c_i$. Then the triangles $\triangle a_1 a_2 a_3$ and $\triangle b_1 b_2 b_3$ are in perspective, so by the theorem of Desargues, the
intersection points $p_{ij}$ of the lines $a_i a_j$ and $b_i b_j$ are collinear. This can be done in such a way that the $p_{ij}$ are all in $C_1$. For instance, the $c_i$ has to be
chosen sufficiently close together, and each triangle has to be chosen so that its points are "almost collinear", with the two lines of collinearity intersecting inside $C_1$.
Then this picture of the triangles in perspective, without the point $x$, can be transferred to $C_2$ using $\phi$. All incidences are preserved, so by the converse of the theorem of
Desargues, the three connected sets $\phi(c_i)$ lie on concurrent lines. Define $\phi(x)$ to be the point of concurrency. It is easy to see that the definition is independent of which
chords through $x$ are used.
We are halfway there. It remains to prove that the extended $\phi$ preserves collinearity.
So let $x,y,z$ be collinear in $\mathbb{R}^2$. We would like to show that $\phi(x), \phi(y), \phi(z)$ are collinear.
If at least two of $x,y,z$ are in $C_1$, it is clear that their images will also be collinear. So assume without loss of generality that $y,z\notin C_1$.
The case $x\in C_1$ is simple: the chord of $C_1$ through $x,y,z$ maps to the chord of $C_2$ through $\phi(x)$ and $\phi(y)$, and also to the chord through $\phi(x)$ and $\phi(z)$. Thus $
\phi(x),\phi(y),\phi(z)$ are collinear.
If on the other hand, $x\notin C_1$, we again use the theorem of Desargues. Find triangles inside $C_1$ such that the points in which their corresponding sides intersect, all lie on the
line through $x,y,z$. (As before, it is easy to see that this is possible.) By the converse of Desargues, the triangles are in perspective. Transfer this picture with $\phi$ to the plane
in which $C_2$ lives. We again get two triangles in perspective, and by Desargues, $\phi(x)$, $\phi(y)$, $\phi(z)$ are collinear.
We have shown that lines are mapped onto lines. By (a very special case of) the fundamental theorem of projective geometry, $\phi$ is a projective transformation.
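A quick numeric sanity check of the fact this final step rests on — a projective transformation carries collinear points to collinear points. The matrix and points below are arbitrary choices for illustration:

```python
def apply_proj(M, p):
    # act on (x, y) via homogeneous coordinates (x, y, 1), then dehomogenize
    x, y = p
    X, Y, W = (sum(row[i] * v for i, v in enumerate((x, y, 1))) for row in M)
    return (X / W, Y / W)

def collinearity_det(p, q, r):
    # zero iff p, q, r are collinear
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

M = [[2, 1, 0], [0, 1, 1], [1, 0, 3]]   # some invertible 3x3 matrix
pts = [(0, 1), (1, 3), (2, 5)]          # three points on the line y = 2x + 1
imgs = [apply_proj(M, p) for p in pts]
print(collinearity_det(*imgs))          # ~0: the images are again collinear
```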
Nice proof. Thanks! – Douglas Zare Feb 6 '10 at 2:57
I tried to find a reference, but failed. My guess is that this should be about 100 years old. Hilbert introduced his metric on these geometries in 1895. – Konrad Swanepoel Feb 6 '10 at
Solving for X
Date: 5/14/96 at 23:44:24
From: Tanya Chongchit
Subject: X^2 - 7X + 10 = 0
I am having a totally hard time trying to figure out this equation and
it is getting on my nerves! Please help me and show me
each step to solving X. Thank you!
X^2 - 7X + 10 = 0
Is this what I am supposed to do?
Then is the answer X = 5 and X = 2?
Thank you again!
Date: 5/16/96 at 19:41:14
From: Doctor Syd
Subject: Re: X^2 - 7X + 10 = 0
Hey! We're glad you wrote. Sometimes it can certainly be frustrating
when you don't think you know how to do a problem, I know! But, hey,
you got this problem!
Do you understand why and how you factored the polynomial? You were
hoping that x^2 - 7x + 10 would factor into something nice, in the
form (x + a)(x + b) where a and b are integers, right? Well, how do
you figure out what a and b are? Let's expand the product:
(x + a)(x + b) = x^2 + (a+b)x + ab
We want to find out for what values of a and b we have that
(x + a)(x + b) = x^2 + (a+b)x + ab = x^2 -7x + 10
This means that we must have a+b = -7 and ab = 10.
You have two equations now and two unknowns. You can either solve for
a and b algebraically or you can try some numbers until you find some
that work. You can see that a = -5 and b = -2 work, and thus you can
say with confidence that
(x-2)(x-5)=x^2 - 7x +10
If this product is equal to 0, then one of the terms in the product is
zero. In other words, either x-2 = 0 or x-5 =0. Solving for x, this
means that either x = 2 or x = 5. Thus we have found the solution to
the equation.
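The "try some numbers" step can also be done by a tiny brute-force search; here is an illustrative sketch in Python:

```python
def find_ab(p, q):
    # search integer pairs with a + b == p and a * b == q,
    # so that x^2 + p*x + q factors as (x + a)(x + b)
    for a in range(-abs(q), abs(q) + 1):
        for b in range(a, abs(q) + 1):
            if a + b == p and a * b == q:
                return a, b
    return None

a, b = find_ab(-7, 10)
print(a, b)   # -5 -2, so x^2 - 7x + 10 = (x - 5)(x - 2): roots 5 and 2
```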
You had all of the right steps...I just filled in an explanation. I
hope this helps! Good luck with other problems like this. Once
you've practiced a little it will all seem much easier, I promise!
-Doctor Syd, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/58533.html","timestamp":"2014-04-19T09:40:53Z","content_type":null,"content_length":"6776","record_id":"<urn:uuid:6542efe3-f105-4b85-ab3d-ce7d972fad17>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
The derivative of f(x)=(x^4/3)-(x^5/5) attains its maximum value at what x value? a. -1 b. 0 c. 1 d. 4/3 e. 5/3
• one year ago
can you explain it to me, please?
Hold on
find f''(x)
f''(x) = 4x^2 - 3x^3 what next?
Isn't that it
Ummm, looks like your \(f''(x)\) is a bit off.
The condition To attain Max is f''(x) > 0
That is true
They want the max of the derivative.
So you want to find the critical numbers \(f''(x)=0\) or undefined. Then you'd plug them back into \(f'(x)\) to see where the max is.
nope @wio they ask..For where the max is attained
"The derivative of f(x)=(x^4/3)-(x^5/5) attains its maximum" "f'(x) attains its maximum"
aha! f"(x) = 4x^2 - 4x^3. i got my critical numbers. x = 0 & x = 1?
That looks good. So your local maximum value is at either x=0 or x=1. Also, since your first derivative is a negative quartic (has a \(-x^4\) term), you have a parabola type shape opening
downwards, so the local maximum is the global maximum. Now you just need to determine which one is the maximum.
make sense so far?
i think. i got f'(0) = 0 and f'(1) = 1/3. so the answer is... 1?
All riiiiiiiiight. thank you. and wio. but s/he left.
You're welcome.
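The arithmetic above is easy to double-check with a short sketch (the function and candidate points are taken from the thread):

```python
def f_prime(x):
    # f(x) = x**4/3 - x**5/5  =>  f'(x) = (4/3)*x**3 - x**4
    return (4/3) * x**3 - x**4

# f''(x) = 4x^2 - 4x^3 = 4x^2(1 - x) vanishes at x = 0 and x = 1
print(f_prime(0), f_prime(1))   # 0.0 and 1/3: the maximum of f' is at x = 1
```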
I was asked to write a program that finds the solutions, if any, to a given quadratic equation. My code is below. I have two issues: First, when I input, for example, a=3, b=4, c=5, the result prints
"The roots are -nan and -nan". Secondly, I am to ensure that division by 0 is not permitted and the discriminant b^2-4ac cannot be negative. I am having a hard time with this bit. Could you help me
solve this part?
• one year ago
HERE IS MY CODE:

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    /* Here I have initialized some variables a, b and c that you would see in a
       standard quadratic equation. positive_root and negative_root are the roots
       you get when using the quadratic formula to solve a quadratic equation
       given the values of a, b and c */
    double a, b, c, positive_root, negative_root;
    cout << " enter value of a,b and c : " << endl;
    cin >> a >> b >> c;
    // using the functions sqrt() and pow(), we
    positive_root = (-b + sqrt((pow(b,2) - 4*a*c))) / 2*a;
    negative_root = (-b - sqrt((pow(b,2) - 4*a*c))) / 2*a;
    cout << " The roots are:" << positive_root << " and " << negative_root << endl;
    return 0;
}
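Both issues trace to the same few lines. The -nan comes from taking sqrt of a negative discriminant (for a=3, b=4, c=5, b^2-4ac = 16-60 < 0), and there is a third, silent bug: in C++, `/ 2*a` parses as `(... / 2) * a`, so the division by 2a needs parentheses. Here is the guard logic sketched in Python (the checks carry over line-for-line to the C++):

```python
import math

def real_roots(a, b, c):
    if a == 0:
        return None        # guard 1: would divide by zero (not quadratic)
    disc = b * b - 4 * a * c
    if disc < 0:
        return None        # guard 2: negative discriminant => no real roots
    r = math.sqrt(disc)
    # note the parentheses around (2 * a) -- `/ 2*a` means `(... / 2) * a`
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

print(real_roots(3, 4, 5))     # None, instead of (-nan, -nan)
print(real_roots(1, -7, 10))   # (5.0, 2.0)
```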
On-line learning of linear functions
Results 1 - 10 of 31
- ARTIFICIAL INTELLIGENCE , 1997
"... In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting
relevant features, and the problem of selecting relevant examples. We describe the advances that have been mad ..."
Cited by 423 (1 self)
Add to MetaCart
In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant
features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a
general framework that we use to compare different methods. We close with some challenges for future work in this area.
- JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY , 1997
"... We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the ..."
Cited by 317 (66 self)
Add to MetaCart
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit
sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum
achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching
leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in
this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
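The experts setting sketched in this abstract can be illustrated with the classic deterministic weighted-majority forecaster (an illustrative baseline, not the randomized algorithm the paper analyzes):

```python
def weighted_majority(rounds, outcomes, beta=0.5):
    """rounds[t][i] is expert i's 0/1 prediction at time t.
    Predict by weighted vote; multiply a wrong expert's weight by beta.
    Returns the learner's mistake count."""
    w = [1.0] * len(rounds[0])
    mistakes = 0
    for preds, y in zip(rounds, outcomes):
        vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_for_1 >= sum(w) / 2 else 0
        mistakes += (guess != y)
        w = [wi * beta if p != y else wi for wi, p in zip(w, preds)]
    return mistakes

outcomes = [0, 1, 0, 1, 1, 0]
rounds = [[y, 1 - y, 1] for y in outcomes]   # expert 0 perfect, 1 always wrong, 2 constant
print(weighted_majority(rounds, outcomes))   # 1: close to the best expert's 0 mistakes
```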
- Information and Computation , 1995
"... this paper, we concentrate on linear predictors. To any vector u ∈ R ..."
, 2000
"... We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in
this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt al ..."
Cited by 203 (43 self)
Add to MetaCart
We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in this
framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt algorithms designed for one problem to the other. For both problems, we give new algorithms and
explain their potential advantages over existing methods. These algorithms can be divided into two types based on whether the parameters are iteratively updated sequentially (one at a time) or in
parallel (all at once). We also describe a parameterized family of algorithms which interpolates smoothly between these two extremes. For all of the algorithms, we give convergence proofs using a
general formalization of the auxiliary-function proof technique. As one of our sequential-update algorithms is equivalent to AdaBoost, this provides the first general proof of convergence for
AdaBoost. We show that all of our algorithms generalize easily to the multiclass case, and we contrast the new algorithms with iterative scaling. We conclude with a few experimental results with
synthetic data that highlight the behavior of the old and newly proposed algorithms in different settings.
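As one concrete instance of the sequential (one-at-a-time) updates discussed, here is the standard AdaBoost reweighting step (illustrative only; the paper's Bregman-distance treatment is more general):

```python
import math

def adaboost_round(dist, preds, labels):
    """One boosting round: reweight examples by exp(-alpha * y * h(x))
    and renormalize, where alpha = (1/2) ln((1 - eps) / eps)."""
    eps = sum(d for d, p, y in zip(dist, preds, labels) if p != y)
    alpha = 0.5 * math.log((1 - eps) / eps)
    new = [d * math.exp(-alpha * (1 if p == y else -1))
           for d, p, y in zip(dist, preds, labels)]
    z = sum(new)
    return [d / z for d in new], alpha

dist = [0.25] * 4
preds = [1, 1, -1, -1]
labels = [1, -1, -1, -1]   # the weak learner errs only on the second example
new_dist, alpha = adaboost_round(dist, preds, labels)
print(new_dist)   # the misclassified example now carries half the mass
```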
- IEEE Transactions on Information Theory , 1998
"... Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both th ..."
Cited by 136 (11 self)
Add to MetaCart
Abstract — This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the
self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem
are described with emphasis on the analogy and the differences between results in the two settings. Index Terms — Bayes envelope, entropy, finite-state machine, linear prediction, loss function,
probability assignment, redundancy-capacity, stochastic complexity, universal coding, universal prediction. I.
- MACHINE LEARNING , 2000
"... We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is
essentially an average of the past examples. After receiving an example the algorithm incurs a loss, which is the n ..."
Cited by 116 (11 self)
Add to MetaCart
We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is essentially an
average of the past examples. After receiving an example the algorithm incurs a loss, which is the negative loglikelihood of the example with respect to the past parameter of the algorithm. An off-line
algorithm can choose the best parameter based on all the examples. We prove bounds on the additional total loss of the on-line algorithm over the total loss of the best off-line parameter. These
relative loss bounds hold for an arbitrary sequence of examples. The goal is to design algorithms with the best possible relative loss bounds. We use a Bregman divergence to derive and analyze each
algorithm. These divergences are relative entropies between two exponential distributions. We also use our methods to prove relative loss bounds for linear regression.
- Advanced Lectures on Machine Learning, LNCS , 2003
"... ..."
- Machine Learning , 1994
"... In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that
depend on properties of both the prior distribution over concepts and the sequence of instances seen by the l ..."
Cited by 108 (12 self)
Add to MetaCart
In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that depend
on properties of both the prior distribution over concepts and the sequence of instances seen by the learner, and to smoothly unite in a common framework the popular statistical physics and VC
dimension theories of learning curves. To achieve this, we undertake a systematic investigation and comparison of two fundamental quantities in learning and information theory: the probability of an
incorrect prediction for an optimal learning algorithm, and the Shannon information gain. This study leads to a new understanding of the sample complexity of learning in several existing models. 1
Introduction Consider a simple concept learning model in which the learner attempts to infer an unknown target concept f, chosen from a known concept class F of {0, 1}-valued functions over an
instance space X....
- Journal of Computer and System Sciences , 1997
"... We consider the following problem. At each point of discrete time the learner must make a prediction; he is given the predictions made by a pool of experts. Each prediction and the outcome,
which is disclosed after the learner has made his prediction, determine the incurred loss. It is known that, u ..."
Cited by 106 (7 self)
Add to MetaCart
We consider the following problem. At each point of discrete time the learner must make a prediction; he is given the predictions made by a pool of experts. Each prediction and the outcome, which is
disclosed after the learner has made his prediction, determine the incurred loss. It is known that, under weak regularity, the learner can ensure that his cumulative loss never exceeds cL+ a ln n,
where c and a are some constants, n is the size of the pool, and L is the cumulative loss incurred by the best expert in the pool. We find the set of those pairs (c; a) for which this is true.
- IEEE Transactions on Information Theory , 1998
"... We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence
nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction st ..."
Cited by 75 (7 self)
Add to MetaCart
We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as
well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction strategies, called experts. By using a general loss function, we generalize previous work on
universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total
loss on the entire sequence suffered by the adaptive sequential predictor, minus the total loss suffered by the predictor in the comparison class that performs best on that particular sequence. We
show that for a large class of loss functions, the minimax regret is either \Theta(log N) or \Omega(sqrt(l log N)), depending on the loss function, where N is the number of predictors in the
comparison class a... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=57444","timestamp":"2014-04-16T21:44:30Z","content_type":null,"content_length":"37792","record_id":"<urn:uuid:955feaf5-5d4b-4663-8948-1980a2e38e05>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jamesburg, NJ Math Tutor
Find a Jamesburg, NJ Math Tutor
As a Chemistry and Math tutor, I am excited and committed to motivate students in order for them to succeed. After a successful career as a Ph.D. Chemist in the Pharmaceutical Industry, I
mentored several chemists, and my experience is a great asset in my tutoring sessions.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have also worked with a lot of 1st to 3rd grade students focusing on phonics and phonemic awareness. I have also worked with high school students on the SAT Math, Reading and Writing
sections. I have experience in tutoring in line with the Core Curriculum standards.
21 Subjects: including algebra 1, ESL/ESOL, English, prealgebra
...Army Applicants for Army OCS (Officer Candidate School) must score a minimum of 110 on the Army's General Technical (GT) Line Score of the ASVAB. Marine Corps Candidates for Marine OCC
(Officer Candidate Class) or PLC (Platoon Leaders Course), must score a minimum of 115 on the Marine's GT line ...
43 Subjects: including trigonometry, discrete math, ASVAB, Java
...Spent another semester aiding MATLAB instructor at Rutgers for recitation purposes. I am currently a Junior undergraduate at Rutgers University School of Engineering for Chemical and
Biochemical Engineering. I have taken all my basic chemical engineering courses, which in my opinion should suffice students who seek aide on this tutoring website.
8 Subjects: including SAT math, calculus, ACT Math, Chinese
...I have had numerous algebra classes and passed them all with A's. I have tutored college and high school and middle school algebra. My many math courses and engineering courses required me to
know algebra proficiently, since it is used consistently throughout both my career, classes and life.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
Related Jamesburg, NJ Tutors
Jamesburg, NJ Accounting Tutors
Jamesburg, NJ ACT Tutors
Jamesburg, NJ Algebra Tutors
Jamesburg, NJ Algebra 2 Tutors
Jamesburg, NJ Calculus Tutors
Jamesburg, NJ Geometry Tutors
Jamesburg, NJ Math Tutors
Jamesburg, NJ Prealgebra Tutors
Jamesburg, NJ Precalculus Tutors
Jamesburg, NJ SAT Tutors
Jamesburg, NJ SAT Math Tutors
Jamesburg, NJ Science Tutors
Jamesburg, NJ Statistics Tutors
Jamesburg, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Jamesburg_NJ_Math_tutors.php","timestamp":"2014-04-21T05:09:36Z","content_type":null,"content_length":"23923","record_id":"<urn:uuid:1a858e5f-14fc-4b68-aca5-55140d855b47>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 18
, 1994
"... : We survey 25 years of research on decidability issues for Petri nets. We collect results on the decidability of important properties, equivalence notions, and temporal logics. 1. Introduction
Petri nets are one of the most popular formal models for the representation and analysis of parallel proc ..."
Cited by 90 (5 self)
Add to MetaCart
: We survey 25 years of research on decidability issues for Petri nets. We collect results on the decidability of important properties, equivalence notions, and temporal logics. 1. Introduction Petri
nets are one of the most popular formal models for the representation and analysis of parallel processes. They are due to C.A. Petri, who introduced them in his doctoral dissertation in 1962. Some
years later, and independently from Petri's work, Karp and Miller introduced vector addition systems [47], a simple mathematical structure which they used to analyse the properties of "parallel
program schemata', a model for parallel computation. In their seminal paper on parallel program schemata, Karp and Miller studied some decidability issues for vector addition systems, and the topic
continued to be investigated by other researchers. When Petri's ideas reached the States around 1970, it was observed that Petri nets and vector addition systems were mathematically equivalent, even
though thei...
- STACS 2002, LNCS 2030 , 2002
"... Abstract. High-level Message Sequence Charts are a well-established formalism to specify scenarios of communications in telecommunication protocols. In order to deal with possibly unbounded
specifications, we focus on star-connected HMSCs. We relate this subclass with recognizability and MSO-definab ..."
Cited by 27 (4 self)
Add to MetaCart
Abstract. High-level Message Sequence Charts are a well-established formalism to specify scenarios of communications in telecommunication protocols. In order to deal with possibly unbounded
specifications, we focus on star-connected HMSCs. We relate this subclass with recognizability and MSO-definability by means of a new connection with Mazurkiewicz traces. Our main result is that we
can check effectively whether a star-connected HMSC is realizable by a finite system of communicating automata with possibly unbounded channels. Message Sequence Charts (MSCs) are a popular model
often used for the documentation of telecommunication protocols. They profit by a standardized visual and textual presentation (ITU-T recommendation Z.120 [11]) and are related to other formalisms
such as sequence diagrams of UML. An MSC gives a graphical description of communications between processes. It usually abstracts away from the values of variables and the actual contents of messages.
However, this formalism can be used at a very early stage of design to detect errors in the specification
- Petri Nets Newsletter , 1994
"... Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. See back inner page for a list of recent
publications in the BRICS Report Series. Copies may be obtained by contacting: BRICS ..."
Cited by 19 (0 self)
Add to MetaCart
Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. See back inner page for a list of recent
publications in the BRICS Report Series. Copies may be obtained by contacting: BRICS
- In ATVA ’05
"... Abstract. This paper argues that flatness appears as a central notion in the verification of counter automata. A counter automaton is called flat when its control graph can be “replaced”,
equivalently w.r.t. reachability, by another one with no nested loops. From a practical view point, we show that ..."
Cited by 19 (6 self)
Add to MetaCart
Abstract. This paper argues that flatness appears as a central notion in the verification of counter automata. A counter automaton is called flat when its control graph can be “replaced”,
equivalently w.r.t. reachability, by another one with no nested loops. From a practical view point, we show that flatness is a necessary and sufficient condition for termination of accelerated
symbolic model checking, a generic semi-algorithmic technique implemented in successful tools like FAST, LASH or TREX. From a theoretical view point, we prove that many known semilinear subclasses of
counter automata are flat: reversal bounded counter machines, lossy vector addition systems with states, reversible Petri nets, persistent and conflict-free Petri nets, etc. Hence, for these
subclasses, the semilinear reachability set can be computed using a uniform accelerated symbolic procedure (whereas previous algorithms were specifically designed for each subclass). 1
- THEORETICAL COMPUTER SCIENCE , 2004
"... We look at 1-region membrane computing systems which only use rules of the form Ca Cv, where C is a catalyst anoncatalW:k and v is a(possiblW:kky string ofnoncatal sts. There are norulk of the
form a v. Thus, we can think of these systems as"purelx catalxyMWe consider two types: (1) when thein ..."
Cited by 16 (3 self)
Add to MetaCart
We look at 1-region membrane computing systems which only use rules of the form Ca → Cv, where C is a catalyst, a is a noncatalyst, and v is a (possibly null) string of noncatalysts. There are no rules of the form a
→ v. Thus, we can think of these systems as "purely catalytic". We consider two types: (1) when the initial configuration contains only one catalyst, and (2) when the initial configuration contains multiple
catalysts. We show that systems of the first type are equivalent to communication-free Petri nets, which are also equivalent to commutative context-free grammars. They define precisely the semilinear sets.
This partially answers an open question (in: WMC-CdeA'02, Lecture Notes in Computer Science, vol. 2597, Springer, Berlin, 2003, pp. 400 -- 409; Computationally universal P systems without priorities: two
catalysts are sufficient, available at http://psystems.disco.unimib.it, 2003). Systems of the second type define exactly the recursively enumerable sets of tuples (i.e., Turing machine computable). We also study an
extended model where the rules are of the form q : (p; Ca → Cv) (where q and p are states), i.e., the application of the rules is guided by a finite-state control. For this generalized model, type (1) as well as type
(2) with some restriction correspond to vector addition systems. Finally, we briefly investigate the closure properties of catalytic systems.
- Journal of Computer and System Sciences , 1989
"... We examine both the modeling power of normal and sinkless Petri nets and the computational complexities of various classical decision problems with respect to these two classes. We argue that
although neither normal nor sinkless Petri nets are strictly more powerful than persistent Petri nets, th ..."
Cited by 11 (5 self)
Add to MetaCart
We examine both the modeling power of normal and sinkless Petri nets and the computational complexities of various classical decision problems with respect to these two classes. We argue that
although neither normal nor sinkless Petri nets are strictly more powerful than persistent Petri nets, they nonetheless are both capable of modeling a more interesting class of problems. On the other
hand, we give strong evidence that normal and sinkless Petri nets are easier to analyze than persistent Petri nets. In so doing, we apply techniques originally developed for conflict-free Petri nets
--- a class defined solely in terms of the structure of the net --- to sinkless Petri nets --- a class defined in terms of the behavior of the net. As a result, we give the first comprehensive
complexity analysis of a class of potentially unbounded Petri nets defined in terms of their behavior. 1 Introduction Many aspects of the fundamental nature of computation are often studied via
formal m...
- International Journal of Foundations of Computer Science , 2004
"... 1. For 1-membrane catalytic systems (CS's), the sequential version is strictlyweaker than the parallel version in that the former defines (i.e. generates) exactly the semilinear sets, whereas
the latter is known to define nonrecursivesets. 2. For 1-membrane communicating P systems (CPS's), the seque ..."
Cited by 10 (7 self)
Add to MetaCart
1. For 1-membrane catalytic systems (CS's), the sequential version is strictly weaker than the parallel version in that the former defines (i.e. generates) exactly the semilinear sets, whereas the
latter is known to define nonrecursive sets. 2. For 1-membrane communicating P systems (CPS's), the sequential version can only define a proper subclass of the semilinear sets, whereas the parallel
version is known to define nonrecursive sets. 3. Adding a new type of rule of the form: ab → a_x b_y c_come d_come to the CPS (a natural generalization of the rule ab → a_x b_y c_come in the original model),
where x, y ∈ {here, out}, to the sequential 1-membrane CPS makes it equivalent to a vector addition system.
"... Abstract. In [13], Yen defines a class of formulas for paths in Petri nets and claims that its satisfiability problem is EXPSPACE-complete. In this paper, we show that in fact the satisfiability
problem for this class of formulas is as hard as the reachability problem for Petri nets. Moreover, we sa ..."
Cited by 9 (0 self)
Add to MetaCart
Abstract. In [13], Yen defines a class of formulas for paths in Petri nets and claims that its satisfiability problem is EXPSPACE-complete. In this paper, we show that in fact the satisfiability
problem for this class of formulas is as hard as the reachability problem for Petri nets. Moreover, we salvage almost all of Yen’s results by defining a fragment of this class of formulas for which
the satisfiability problem is EXPSPACE-complete by adapting his proof. 1
"... Summary. Motivated by the intriguing complexity of biochemical circuitry within individual cells we study Stochastic Chemical Reaction Networks (SCRNs), a formal model that considers a set of
chemical reactions acting on a finite number of molecules in a well-stirred solution according to standard c ..."
Cited by 8 (2 self)
Add to MetaCart
Summary. Motivated by the intriguing complexity of biochemical circuitry within individual cells we study Stochastic Chemical Reaction Networks (SCRNs), a formal model that considers a set of
chemical reactions acting on a finite number of molecules in a well-stirred solution according to standard chemical kinetics equations. SCRNs have been widely used for describing naturally occurring
(bio)chemical systems, and with the advent of synthetic biology they become a promising language for the design of artificial biochemical circuits. Our interest here is the computational power of
SCRNs and how they relate to more conventional models of computation. We survey known connections and give new connections between SCRNs and
"... Abstract. In the last fifteen years, several research efforts have been directed towards the representation and the analysis of metabolic pathways by using Petri nets. The goal of this paper is
twofold. First, we discuss how the knowledge about metabolic pathways can be represented with Petri nets. ..."
Cited by 8 (3 self)
Add to MetaCart
Abstract. In the last fifteen years, several research efforts have been directed towards the representation and the analysis of metabolic pathways by using Petri nets. The goal of this paper is
twofold. First, we discuss how the knowledge about metabolic pathways can be represented with Petri nets. We point out the main problems that arise in the construction of a Petri net model of a
metabolic pathway and we outline some solutions proposed in the literature. Second, we present a comprehensive review of recent research on this topic, in order to assess the maturity of the field
and the availability of a methodology for modelling a metabolic pathway by a corresponding Petri net. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=528300","timestamp":"2014-04-24T20:27:32Z","content_type":null,"content_length":"38220","record_id":"<urn:uuid:c3e235af-5044-4fa2-9965-8700c494c708>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bristol, PA Calculus Tutor
Find a Bristol, PA Calculus Tutor
I am a fun, helpful, and experienced tutor for the Sciences (biology and chemistry), Math (geometry, pre-algebra, algebra, and pre-calculus), English/Grammar, and the SATs. For the SAT, I implement
a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have...
26 Subjects: including calculus, chemistry, English, reading
...This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. I am especially personable and I know I have the ability to inspire
students to have success beyond their expectations especially with the creative method I use for teachin...
16 Subjects: including calculus, Spanish, physics, algebra 1
...Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design. In between formal
tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems...
14 Subjects: including calculus, physics, geometry, ASVAB
...It does not deal with the real numbers and it's continuity. I have studied discrete math as I obtained my BS in mathematics from Ohio University. I have studied logic as an integral part of my
mathematics education.
14 Subjects: including calculus, geometry, ASVAB, algebra 1
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including calculus, physics, geometry, algebra 2 | {"url":"http://www.purplemath.com/Bristol_PA_Calculus_tutors.php","timestamp":"2014-04-19T12:32:46Z","content_type":null,"content_length":"24036","record_id":"<urn:uuid:1d3359cc-abea-46cb-a2d9-98e3210c6919>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
3x - 4 + 4x = 24 - WyzAnt Answers
3x - 4 + 4x = 24
how do I solve this equation?
First add all the (x)s together on one side of the equation. So 3x +4x is 7x
Now the problem becomes 7x -4 = 24 or -4 +7x = 24
Either way is fine.
7x -4 = 24 you need to get the x alone on one side away from the number -4 so you add 4 to each side of the = sign.
If you add +4 to -4 you get 0 so the 4 goes away, but you can't add 4 to one side unless you add 4 to the other so the problem becomes:
7x = 28 (because you added the 4 to the left side of the = sign and the right side of the = sign)
7x = 28 (now to take the 7 off of the x, you need to do the same thing to each side, but this time you divide to make the x be by itself.) 7x divided by 7 = 1x so divide 28 by 7 too and you get 4.
Just remember what you do to one side (add, subtract, divide, etc.) to make the x alone, you need to do to whatever is on the other side.
Here we go: first always write out the equation. Always show all your work and you will make less mistakes.
3x - 4 + 4x = 24
Combine like variables and isolate any single numbers. Remember these equations are like a scale. What you do to one side, you do to the other. Always do the opposite of any operation on the left
side and continue the same to the right, remember a scale or balance.
3x + 4x -4 = 24
7x -4 = 24
7x -4 + 4 = 24 + 4
7x = 28
Isolate X
7x/7 =28/7
x = 4
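The same balance-scale steps can be checked mechanically. Here is a quick Python sketch of the solution described in the answers above (the variable names are my own):

```python
# Solve 3x - 4 + 4x = 24 by the same steps described above.

coeff = 3 + 4          # combine like terms: 3x + 4x -> 7x
constant = -4          # the equation is now 7x - 4 = 24
rhs = 24

rhs = rhs - constant   # add 4 to both sides: 7x = 28
x = rhs / coeff        # divide both sides by 7: x = 4

print(x)                   # 4.0
print(3 * x - 4 + 4 * x)   # substitute back: 24.0
```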
The first thing you do is figure out the parts that can be combined. In this case you are able to add 3x and 4x to get 7x.
Now you are able to isolate the "x" by adding 4 to each side: 7x - 4 + 4 = 24 + 4
The last part to figure out what x equals and to solve the equation is to divide both sides by 7: | {"url":"http://www.wyzant.com/resources/answers/4959/3x_4_4x_24","timestamp":"2014-04-19T08:21:05Z","content_type":null,"content_length":"40227","record_id":"<urn:uuid:a570073e-10cd-4c5c-8f2e-bd7631e0694b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compatible with AIAG
Measurement System Analysis - Gage R&R
Gage R&R Compatible with AIAG MSA 4th Edition
Measurement System Analysis (MSA) involves Gage R&R (repeatability and reproducibility) studies to evaluate your measurement systems.
When I first got involved with quality, I learned about the "five M's" that constituted most root causes: man, machine, materials, methods, and measurement.
Because I worked in a predominantly service industry, I couldn't quite grasp how measurement could be a common cause of variation. But, if you work in manufacturing, you know that gages and how they
are used can be a key cause of variation.
Measurement Systems Analysis (MSA)
MSA is actually quite simple, but even seasoned SPC veterans don't seem to understand it. So I thought I'd simplify it for you.
First, Gage R&R studies are usually performed on variable data - height, length, width, diameter, weight, viscosity, etc.
Second, when you manufacture products, you want to monitor the output of your machines to make sure that they are producing products that meet the customer's specifications. This means that you have
to measure samples coming off the line to determine if they are meeting your customer's requirements.
Third, when you measure, three factors come into play:
1. Part variation (differences between individual pieces manufactured)
2. Appraiser variation (aka, reproducibility) -
Can two different people get the same measurement using the same gage?
3. Equipment variation (aka, repeatability) -
Can the same person get the same measurement using the same gage on the same part in two or more trials?
You want most of the variation to be between the parts, and less than 10% of the variation to be caused by the appraisers and equipment. Makes sense, doesn't it? If the appraiser can't get the same
measurement twice, or two appraisers can't get the same measurement, then your measurement system becomes a key source of error.
Conducting a Gage R&R Study
To conduct a Gage R&R study, you will need:
1. five to ten parts (number each part) that span the distance between the upper and lower spec limits. The parts should represent the actual or expected range of process variation. Rule of thumb: if
you're measuring to 0.0001, the range of parts should be 10 times the resolution (e.g., 0.4995 to 0.5005).
2. two appraisers (people who measure the parts)
3. one measurement tool or gage
4. and a minimum of two measurement trials, on each part, by each appraiser
5. a Gage R&R tool like the Gage R&R excel template in the QI Macros.
QI Macros for Excel Gage R&R Template
Here are samples of the Gage R&R template input sheet and results sections using sample data from the AIAG Measurement Systems Analysis Third Edition.
Gage R&R System Acceptability
• % R&R<10% - Gage System Okay
(Most variation caused by parts, not people or equipment)
• % R&R<30% - May be acceptable based on importance of application and cost of gage or repair
• % R&R>30% - Gage system needs improvement
(People and equipment cause over 1/3 of variation)
What To Look For
Repeatability: Percent Equipment Variation
(%EV - Can the same person using the same gage measure the same thing consistently)
If you simply look at the measurements, can each appraiser get the same result on the same part consistently, or is there too much variation?
Example (looking at measurements from one appraiser only):
• No Equipment Variation: (Part 1: 0.65, 0.65; Part 2: 0.66, 0.66)
• Equipment Variation: (Part 1: 0.65, 0.67; Part 2: 0.67, 0.65)
If repeatability (Equipment variation) is larger than reproducibility (appraiser variation), reasons include:
1. Gage needs maintenance (gages can get corroded)
2. Gage needs to be redesigned to be used more accurately
3. Clamping of the part or gage, or where it's measured needs to be improved (imagine measuring a baseball bat at various places along the tapered contour; you'll get different results.)
4. Excessive within-part variation (Imagine a steel rod that's bigger at one end than the other. If you measure different ends each time, you'll get widely varying results.)
Reproducibility: Percent Appraiser Variation
(% AV - can two appraisers measure the same thing and get the same answer?)
Example (looking at measurements of the same part by two appraisers):
• No Appraiser Variation: (Appraiser 1, Part 1: 0.65, 0.65; Appraiser 2, Part 1: 0.65, 0.65)
• Appraiser Variation: (Appraiser 1, Part 1: 0.65, 0.65; Appraiser 2, Part 1: 0.66, 0.66)
If you look at the line graph of appraiser performance, you'll be able to tell if one person over reads or under reads the measurement.
If reproducibility (appraiser variation) is larger than repeatability (equipment variation), reasons include:
1. Operators need to be better trained in a consistent method for using and reading the gage
2. Calibrations on gage are unclear
3. Fixture required to help the operator use gage more consistently
Mistakes People Make
Many people call us because they don't like the answer they get using the Gage R&R template. Most of the time, it's because they didn't follow the instructions for conducting the study. Here are some
of the common mistakes I've seen:
1. Forgetting that the Gage R&R study is evaluating their measurement system and NOT their products. Gage R&R does not care about how good your products are. It only cares about how good you measure
your products.
2. Using only one part. If you only use one part, THERE CAN'T BE ANY PART VARIATION, so people and equipment are the ONLY source of variation.
3. Using the one part measurement for all 10 parts (again, there won't be any part variation, so it all falls on the people and equipment).
4. Using too many trials (if you use five trials, you have more opportunity for equipment variation).
5. Using too many appraisers (if you use all three, you have more opportunity for appraiser variation).
6. Using fake data. Try using the AIAG SPC data the QI Macros loads on your computer at c:\qimacros\testdata.
7. Using a gage that measures in too much detail. If your part is 74mm +/- 0.05, then you don’t need a gage that measures to a thousandth of an inch (0.001); you only need one that measures to the
hundredth of an inch (0.01).
Challenges You Will Face
One customer faced an unusual challenge: they were producing parts so precisely that there was little or no part variation even when measured down to 1/10,000th of an inch. Their existing gages
ceased to detect any variation from part to part.
As your process improves and your product approaches the ideal target measurement, you'll have less part variation and more chance for your equipment or people to become the major source of
variation. As your product and your process improve, your measurement system will need to improve as well.
Your goal is to minimize the amount of variation and error introduced by measurement, so that you can focus on part variation. This, of course, leads you back into the other root causes of variation:
process, machines, and materials.
If you manufacture anything, measurement system analysis can help you improve the quality of your products, get more business from big customers, and baffle your competition. Enjoy.
Gage R&R Video Series
Gage R&R Training Resources
To create an Aiag Msa Gage R And R in Excel using the QI Macros... | {"url":"http://www.qimacros.com/gage-r-and-r-study/aiag-msa-gage-r-and-r/","timestamp":"2014-04-21T09:35:52Z","content_type":null,"content_length":"29264","record_id":"<urn:uuid:c08da581-59dc-4ad1-b831-de2fb0caea15>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Mathematics
A Guide for the Perplexed On the College Calculus Sequences
This guide is meant to dispel a widely held misconception about calculus at the U. of C. The misconception is that the mathematical theory of calculus is taught only in the 16000's. In a related
matter, some people think that the 13000's sequence is for those who have "never been any good at mathematics." Neither of these views is correct (as hinted by the use of 'misconception'). Here are
the facts (or some facts--a kind of distinction you should get used to at the U. of C.).
The sequence Mathematics 16100-16200-16300 is called Honors Calculus; it might be better referred to as Introduction to Mathematical Analysis. The course starts with the mathematical definition of
the real numbers, and concentrates on theory all year. The mechanics of differentiation and integration and all the standard applications are covered, but the emphasis throughout the year is on
rigorous proof. Students in the 16000's are expected to prove things in their homework assignments and on tests. You don't have to be a mathematics major to take the 16000's, and you don't have to
take the 16000's to be a mathematics major.
Mathematics 15100-15200-15300 and Mathematics 13100-13200-13300 focus less on theory, in the sense that students don't prove as many things for themselves. However, it is not the case that these
courses only provide students with recipes and formulas (plug and chug). Students are expected to understand the definitions of key concepts (limit, derivative, integral) and to be able to apply
definitions and theorems to solve problems. In particular, ALL calculus courses require students to do epsilon-delta limit proofs.
The major difference between the 13000's and 15000's is the amount of mathematics students have seen and mastered before coming to the U. of C. The 13000's sequence is designed for those who are less
familiar with pre-calculus mathematics, specifically trigonometry, logarithms, and exponential functions, and to a lesser extent, high school algebra. Therefore, these subjects are covered more
thoroughly in the 13000's than in the 15000's. The 13000's and 15000's sequences end up covering almost the same material, but 15000's classes usually see more advanced applications.
Placement in these courses depends on tests you took during Orientation Week. Many students entering the U. of C. took calculus in high school; this fact alone will not determine your placement.
Despite our best efforts, some students will find themselves placed in the wrong level of calculus. It's easy to switch from the 16000's to the 15000's. Switching from the 15000's to the 13000's
requires a little more persuasion. Going the other way is unusual but can be done. If you are convinced that you are in the wrong class, talk to your instructor and to Diane Herrmann.
The College only requires two quarters of calculus. Many departments (including the physical sciences, HiPSS, and economics) require concentrators to take three quarters of calculus. So, even if you
don't like calculus, it may be worth taking all three quarters your first year so that you don't have to worry about taking the third quarter later on when you find out that your concentration
requires it. | {"url":"http://www.math.uchicago.edu/undergraduate/calculus_guide.shtml","timestamp":"2014-04-19T04:19:30Z","content_type":null,"content_length":"10612","record_id":"<urn:uuid:3f6bf22a-a817-46ec-940a-a4b4e0721b89>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume of Prisms - Concept
We can calculate the volume of any prism simply by knowing the height of the prism and the area of one of its bases. When calculating prism volume, this volume formula can be applied to both right
and oblique prisms with bases of any shape, such as triangles, quadrilaterals, or other polygons. A prism volume is a measurement of the space occupied by such solid.
If you want to calculate the volume of any prism, there are only two things that you need to know: one, what is the height of that prism, and two, what is the area of one of your bases. So I'm going to
shade in our bottom base here, and I'm going to label this as capital B. So when I write my volume formula, I'm going to say the volume, "V" of this prism, is equal to its base area times its capital
H, its height. Where capital B is your base area and capital H is the height of the prism.
So the reason why this formula is useful is because you might have a triangular prism, a trapezoidal prism, a hexagonal prism. This formula will work no matter what kind of prism you have.
So whatever your base area is, and I guess I should write base area, you're going to substitute in that formula. So if this was a trapezoid, then you would substitute in B1 plus B2 times H, all
divided by two. And that's how you would calculate your base area.
If you had, let's say, a regular hexagon, you're going to use apothem times side length times the number of sides, divided by two. So this way, this formula volume, equals base area times height, can
be applied to any kind of prism.
One other thing, this is a right prism. If you had an oblique prism, as long as you know the height, and you can calculate its base area, that will be the same. You can use the same formula. It works
for right prisms and oblique prisms.
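The "base area times height" idea translates directly into code. Here is a small Python sketch (function names are my own) using the two base shapes mentioned in the lesson:

```python
def prism_volume(base_area, height):
    """V = B * H, valid for right and oblique prisms alike."""
    return base_area * height

def trapezoid_area(b1, b2, h):
    # Base area of a trapezoid: (b1 + b2) * h / 2.
    return (b1 + b2) * h / 2

def regular_polygon_area(apothem, side, n_sides):
    # Base area of a regular polygon: apothem * side * number of sides / 2.
    return apothem * side * n_sides / 2

# Trapezoidal prism: parallel sides 3 and 5, base height 4, prism height 10.
print(prism_volume(trapezoid_area(3, 5, 4), 10))            # 160.0

# Regular hexagonal prism: apothem 1.732, side 2, prism height 10.
print(prism_volume(regular_polygon_area(1.732, 2, 6), 10))  # about 103.9
```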
volume base area height trapezoid regular polygon oblique prism | {"url":"https://www.brightstorm.com/math/geometry/volume/volume-of-prisms/","timestamp":"2014-04-19T10:10:45Z","content_type":null,"content_length":"57150","record_id":"<urn:uuid:215a2e91-05ed-4967-bcaa-94958e3fa9dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Windows of periodicity scaling
Windows of periodicity
It is a commonly observed feature of chaotic dynamical systems [1] that, as a system parameter is varied, a stable period-n orbit appears (by a tangent bifurcation) which then undergoes a
period-doubling cascade to chaos and finally terminates via a crisis. This parameter range between the tangent bifurcation and the final crisis is called a period-n window. Note that the central
part of the picture to the left is similar to the whole bifurcation diagram (see also at the bottom of the page).
For c = -1.75 in the period-3 window stable and unstable period-3 orbits appear by a tangent bifurcation. The stable period-3 orbit is shown to the left below. If N = 3 is set (see to the right), we
get 8 intersections (fixed points of f[c]^o3) which correspond to two unstable fixed points and 6 points of the stable and unstable period-3 orbits of f[c ].
On the left picture the stable period-3 orbit goes through two "linear" and one central quadratic regions of the blue parabola. Therefore in the vicinity of x = 0 the map f[c]^o3 is "quadratic-like"
and iterations of the map repeat bifurcations of the original quadratic map f[c ]. This sheds light on the discussed similarity of windows of periodicity.
f[c]^on map renormalization. The "linear" approximation
Consider a period-n window. Under iterations the critical orbit consecutively cycles through n narrow intervals S[1] → S[2] → S[3] → ... → S[1] each of width s[j] (we choose S[1] to include the
critical point x = 0). Following [1,2] we expand f[c]^on(x) for small x (in the narrow central interval S[1]) and c near its value c[c] at superstability of the period-n attracting orbit. We see that
the s[j] are small and the map in the intervals S[2], S[3], ... S[n] may be regarded as approximately linear; the full quadratic map must be retained for the central interval. One thus obtains
x[j+n] ~ Λ[n] [x[j]^2 + β(c - c[c] )] ,
where Λ[n] = λ[2 ]λ[3 ] ...λ[n] is the product of the map slopes, λ[j] = 2x[j] in (n-1) noncentral intervals and
β = 1 + λ[2]^-1 + (λ[2 ]λ[3 ])^-1 + ... + Λ[n]^-1 ~ 1
for large Λ[n ]. We take Λ[n] at c = c[c] and treat it as a constant in narrow window.
Introducing X = Λ[n] x and C = β Λ[n]^2 (c - c[c] ) we get quadratic map
X[j+n] ~ X[n]^2 + C
Therefore the window width is ~ (9/4β)Λ[n]^-2 while the width of the central interval scales as Λ[n]^-1. This scaling is called f[c]^on map renormalization.
For the biggest period-3 window Λ[3] = -9.29887 and β = 0.60754. So the central band is reduced ~ 9 times and reflected with respect to the x = 0 line as we have seen before. The width of the window
is reduced β Λ[3]^2 = 52.5334 times. On the left picture below you see the whole bifurcation diagram of f[c]. A similar image to the right is located in the central band of the biggest period-3
window and is stretched by 9 times in the horizontal x and by 54 times in the vertical c directions.
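The quoted values Λ[3] = -9.29887 and β = 0.60754 are easy to reproduce. The short Python sketch below (my own code, not from the cited papers) locates the superstable period-3 parameter c[c] by bisection on f[c]^o3(0) = 0 and then evaluates Λ[3] = λ[2]λ[3] = 4c(c^2 + c) and β:

```python
def f3_at_zero(c):
    """Third iterate of f_c(x) = x**2 + c starting at the critical point 0."""
    x = 0.0
    for _ in range(3):
        x = x * x + c
    return x

# Bisection for the superstable period-3 parameter c_c: f3_at_zero is
# positive at c = -1.76 and negative at c = -1.75, so the root lies between.
lo, hi = -1.76, -1.75
for _ in range(60):
    mid = (lo + hi) / 2
    if f3_at_zero(mid) > 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2                    # c_c ~ -1.7548777

lam2 = 2 * c                         # slope at the orbit point x1 = c
lam3 = 2 * (c * c + c)               # slope at the orbit point x2 = c**2 + c
Lambda3 = lam2 * lam3                # ~ -9.29887
beta = 1 + 1 / lam2 + 1 / Lambda3    # ~ 0.60754

print(Lambda3, beta, beta * Lambda3 ** 2)
```

The last printed value, β Λ[3]^2 ≈ 52.53, matches the quoted shrink factor of the window width.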
[1] J.A.Yorke, C.Grebogi, E.Ott, and L.Tedeschini-Lalli "Scaling Behavior of Windows in Dissipative Dynamical Systems" Phys.Rev.Lett. 54, 1095 (1985)
[2] B.R.Hunt, E.Ott Structure in the Parameter Dependence of Order and Chaos for the Quadratic Map J.Phys.A 30 (1997), 7067.
updated 29 Dec 2013 | {"url":"http://www.ibiblio.org/e-notes/MSet/windows.html","timestamp":"2014-04-21T13:50:42Z","content_type":null,"content_length":"6797","record_id":"<urn:uuid:cbef134e-45f0-4a9b-83ee-fc75265b6c39>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Software IHPC
The Institute of High Performance Computing provides its software under a number of flexible licences, designed to meet the usage needs and distribution requirements of different types of users. In
addition, we also provide technical reports under an Attribution-NonCommercial-Share Alike 2.5.
By: Advanced Computing Programme
A log-analysis toolkit for pre-processing semi-structured syslogs and for establishing probable cause-and-effect relationships when given a symptom event has been developed and released.
By: Advanced Computing Programme
Lightdraw allows one or more users to communicate with the computer by moving light points on a display surface. The motions of these light points are then interpreted by the computer system to
trigger appropriate responses on the display. The light tracking is done with a commodity-off-the-shelf webcam and the light points can be generated by a number of devices including laser pointers,
torchlight and bright hand phone screens.
By: Computational Electronics and Photonics
FeMOS is a finite element simulator to solve Poisson-NEGF equation. The simulator solves the Poisson equation and the Non-equilibrium Green's function self-consistently and is written in Octave, a
free high-level language for numerical calculation.
FDS Optimisation for Itanium II Architecture
By: Advanced Computing Programme
FDS is a computational fluid dynamics (CFD) model of fire-driven fluid flow. The software solves numerically a form of the Navier-Stokes equations appropriate for low-speed, thermally-driven flow with an
emphasis on smoke and heat transport from fire. In this report, we evaluate the effectiveness of the Intel compiler optimisations on the Itanium II architecture and investigate the improvements of
parallelizing the hot spots using OpenMP.
Matrix Multiplication on GPU in Octave
By: Advanced Computing Programme
In this document, we will discuss how to perform matrix-matrix multiplication using the GPU from the Octave command line. We make use of Nvidia's CUBLAS library, which is an implementation of a
subset of single precision BLAS functions that runs on the GPU. The CUBLAS library was implemented using the CUDA API, and the function call of interest is cublasSgemm.
Literature Review on a Matlab Alternative - Octave
By: Advanced Computing Programme
Due to the increase in license fee and an effort to cut cost, alternatives to Matlab emerge and are gaining attention among the researcher. More and more researchers have started using these
alternatives. More tools are being developed for these alternatives. Octave is one of the promising Matlab replacements. This study seeks to briefly outline the competitive advantage of using Octave.
This page is last updated at: 01-OCT-2008 | {"url":"http://www.ihpc.a-star.edu.sg/opensource@ihpc.php","timestamp":"2014-04-21T04:32:56Z","content_type":null,"content_length":"27575","record_id":"<urn:uuid:2cd674eb-72c7-454a-abac-2dcab2e92a50>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus : An Applied Approach
ISBN: 9780618547180 | 0618547185
Edition: 7th
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 3/14/2005
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
Holonomy of compact Ricci-flat Kaehler manifold
I have come across the following apparent contradiction in the literature. In "Joyce D.D., Compact manifolds with special holonomy" I find on page 125 the claim that if M is a compact Ricci-flat
Kaehler manifold, then the global holonomy group of M is contained in SU(m) if and only if the canonical bundle of M is trivial.
In "Candelas, Lectures on Complex manifolds" however, on page 61 I read that any Ricci-flat Kaehler manifold has global holonomy in SU(m) and there is no mentioning of any condition on the the
canonical bundle.
Note that I am assuming M to be multiply connected and I am talking about the global holonomy group, i.e. not the restricted holonomy group.
So my question is: what is going on? Who is right? (Both somehow provide proofs of their claims.)
I hope someone can help me out.
Difference Between Amplitude and Frequency
Amplitude vs Frequency
Amplitude and frequency are two of the basic properties of periodic motion. A proper understanding of these concepts is required in the study of motions such as simple harmonic motion and damped harmonic motion. In this article, we discuss what frequency and amplitude are, their definitions, how they are measured and what they depend on, and finally the difference between amplitude and frequency.
Frequency is a concept used to describe the periodic motion of objects. To understand frequency, a proper understanding of periodic motion is required. A periodic motion is any motion that repeats itself in a fixed time period. A planet revolving around the sun is a periodic motion; a satellite orbiting the earth is a periodic motion; even the motion of a balance ball set is a periodic motion. Most of the periodic motions we encounter are circular, linear, or semi-circular. Every periodic motion has a frequency, which describes how "frequent" the event is; for simplicity, we take frequency to be the number of occurrences per second. Periodic motions can be either uniform or non-uniform; a uniform periodic motion has a constant angular velocity. Functions such as amplitude modulation can have double periods: they are periodic functions encapsulated in other periodic functions. The inverse of the frequency of a periodic motion gives the period, so the frequency can also be obtained from the time difference between two successive similar occurrences. Simple harmonic motion and damped harmonic motion are also periodic motions. For small oscillations, the frequency of a simple pendulum depends only on the length of the pendulum and the gravitational acceleration.
Amplitude is also a very important property of a periodic motion. To understand amplitude, the properties of harmonic motions must be understood. A simple harmonic motion is a motion in which the relationship between the acceleration and the displacement takes the form a = -ω^2x, where "a" is the acceleration and "x" is the displacement. The acceleration and the displacement are antiparallel, meaning the net force on the object is in the direction of the acceleration. This relationship describes a motion where the object oscillates about a central point. When the displacement is zero, the net force on the object is also zero; this is the equilibrium point of the oscillation. The maximum displacement of the object from the equilibrium point is known as the amplitude of the oscillation. The amplitude of a simple harmonic oscillation strictly depends on the total mechanical energy of the system. For a simple spring-mass system with total mechanical energy E, the energy at maximum displacement is E = ½kA², so the amplitude is equal to √(2E/k), where k is the spring constant of the spring. At that amplitude the instantaneous velocity is zero, so the kinetic energy is also zero and the total energy of the system is in the form of potential energy. At the equilibrium point, the potential energy becomes zero.
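For a spring-mass oscillator the standard relations are E = ½kA² (so A = √(2E/k)) and f = (1/2π)√(k/m); a quick numerical sketch with illustrative values:

```python
import math

def amplitude_from_energy(E, k):
    # E = (1/2) k A^2  =>  A = sqrt(2E/k)
    return math.sqrt(2.0 * E / k)

def frequency(k, m):
    # f = (1/(2*pi)) * sqrt(k/m): a property of the oscillator itself,
    # independent of the energy (and hence of the amplitude)
    return math.sqrt(k / m) / (2.0 * math.pi)

E, k, m = 4.0, 2.0, 0.5           # illustrative values (J, N/m, kg)
A = amplitude_from_energy(E, k)   # 2.0 m
f = frequency(k, m)               # 1/pi Hz, about 0.318 Hz
print(A, f)
```

Doubling E raises the amplitude by √2 but leaves the frequency untouched, which is exactly the distinction the table below draws.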
│What is the difference between amplitude and frequency? │
│ │
│• Amplitude strictly depends on the total energy of the system, whereas frequency of an oscillation depends on the properties of the oscillator itself.│
│ │
│• For a given system, the amplitude can be changed but frequency cannot. │
*** International Conference on Information Security and Cryptology ***
Invited Talk 1
Damien Stehlé,
École Normale Supérieure de Lyon,
"Making NTRUEncrypt and NTRUSign as Secure as Worst-Case Problems over Ideal Lattices"
Abstract. NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key-sizes, excellent asymptotic performance and
conjectured resistance to quantum computers make it a desirable alternative to factorization- and discrete-log-based encryption schemes. However, since its introduction, doubts have regularly arisen about its security. In this talk, I will describe a modification of NTRUEncrypt that is semantically secure (IND-CPA). The security holds under the assumption that quantum computers cannot efficiently
solve some standard worst-case problems on euclidean lattices, when they are restricted to a family of lattices related to cyclotomic number fields. The main component of the security proof is to
show that if the secret key polynomials are selected from discrete Gaussians, then the public key, which is their ratio in a polynomial ring over a prime field, is statistically close to uniform
over its range. I will also describe the first key generation algorithm for NTRUSign that provably runs in polynomial time. Combined with the above results on the NTRUEncrypt key generation
algorithm, this leads to a variant of NTRUSign that is provably unforgeable (in the random oracle model). The security of the modified NTRUEncrypt and NTRUSign schemes then follows from the already
proven hardness of the Ring-SIS and Ring-LWE problems.
Affiliation and short biography. Damien Stehlé is a professor at the École Normale Supérieure de Lyon, France. He received his Ph.D. degree in computer science from the Université Nancy 1, France,
in 2005, and his Habilitation from ENS de Lyon in 2011. His research interests include cryptography, algorithmic number theory, computer algebra and computer arithmetic, with emphasis on the
computational aspects of Euclidean lattices.
Invited Talk 2
Jeong Woon Choi,
Quantum Tech. Lab, SK Telecom,
"Introduction to Quantum Cryptography and its Technology Trends"
Abstract. With the advance of semiconductor fabrication technology, processor speeds have been increasing exponentially. This has contributed greatly to market growth and to the performance of security technology. Ironically, however, it has also facilitated various hacking techniques and increased security breaches. In particular, in 1994 it was theoretically proved that a quantum computer could break any public-key cryptosystem based on integer factoring or the discrete logarithm.
Quantum cryptography provides unconditional security, with no limit on the computing power of adversaries, by using fundamental principles of quantum mechanics such as uncertainty and no-cloning. The most representative area of quantum cryptography is quantum key distribution (QKD), and in fact several foreign companies are already selling QKD products. In this talk, I will introduce the concepts and importance of quantum cryptography and its global technology trends.
Affiliation and short biography. Jeong Woon Choi is a research associate at the Quantum Tech. Lab in SK Telecom, Korea. He received his Ph.D. degree from the Department of Mathematical Sciences, Seoul National University, Korea, in 2006, where he majored in quantum information and computation. He worked at the Cryptography Research Team, Electronics and Telecommunications Research Institute, from 2008 to 2011. His research interests include developing a signal processing board for QKD systems.
This is an activity about rockets. Learners will research facts about Atlas V rockets, which will launch the MMS satellites. Afterward, they will compute the speed of the launch rocket, given a data chart of time vs. distance from lift-off. Then, they will write a report synthesizing their researched information. This lesson requires student access to internet-accessible computers. This is lesson two of the MMS Mission Educator's Instructional Guide.
During the last sunspot cycle between 1996-2008, over 21,000 flares and 13,000 clouds of plasma exploded from the Sun's magnetically active surface. These events create space weather. Students will learn more about space weather and how it affects Earth through reading a NASA press release and viewing a NASA eClips video segment. Then students will explore the statistics of various types of space weather storms by determining the mean, median and mode of a sample of storm events. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.
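The three statistics in this activity can be sketched with Python's standard library; the storm counts below are hypothetical, not the activity's data:

```python
from statistics import mean, median, mode

# Hypothetical monthly counts of space weather storm events
# (made-up numbers, not the activity's actual data)
events = [3, 7, 7, 2, 9, 7, 4]

m = mean(events)     # arithmetic average of the counts
md = median(events)  # middle value of the sorted counts
mo = mode(events)    # most frequent count
print(m, md, mo)
```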
During the last sunspot cycle between 1996-2008, over 21,000 flares and 13,000 clouds of plasma exploded from the Sun's magnetically active surface. Students will learn more about space weather through reading a NASA press release and viewing a NASA eClips video segment. Then students will explore the statistics of various types of space weather storms by determining the mean, median and mode of different samples of storm events. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.
Students will learn about the Transit of Venus through reading a NASA press release and viewing a NASA eClips video that describes several ways to observe transits. Then students will study angular measurement by learning about parallax and how astronomers use this geometric effect to determine the distance to Venus during a Transit of Venus. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.
Students will learn about the twin STEREO spacecraft and how they are being used to track solar storms through reading a NASA press release and viewing a NASA eClips video segment. Then students will examine data to learn more about the frequency and speed of solar storms traveling from the Sun to Earth. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.
Students will learn about NASA's Radiation Belt Storm Probes (RBSP), Earth's van Allen Radiation Belts, and space weather through reading a NASA press release and viewing a NASA eClips video segment. Then students will use simple linear functions to examine the scale of the radiation belts and the strength of Earth's magnetic field. This activity is part of the Space Math multimedia modules that integrate NASA press releases, NASA archival video, and mathematics problems targeted at specific math standards commonly encountered in middle school textbooks. The modules cover specific math topics at multiple levels of difficulty with real-world data and use the 5E instructional sequence.
This is an activity about assessing magnetic activity on the Sun as astronomers do. Learners will select and compare five visible light solar images and identify and label each individual sunspot group. Then, learners will count all possible sunspots from each group and use both counts in a standard equation to calculate the Relative Sunspot Number for each respective solar image. This activity requires access to the internet to obtain images from the SOHO image archive. This is Activity 8 of the Space Weather Forecast curriculum.
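The "standard equation" referenced here is presumably the Wolf relative sunspot number, R = k(10g + s); the sketch below assumes that formula and uses made-up counts:

```python
def relative_sunspot_number(groups, spots, k=1.0):
    """Wolf relative sunspot number R = k(10g + s): g sunspot groups,
    s individual spots, k an observer-dependent correction factor."""
    return k * (10 * groups + spots)

# Example: 5 groups containing 23 individual spots in total
print(relative_sunspot_number(5, 23))  # 73.0
```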
This is an activity about cause and effect. Learners will calculate the approximate travel time of each solar wind event identified in the previous activity in this set to estimate the time at which the disturbance would have left the Sun. Then, they will examine solar images in an attempt to identify the event on the Sun that may have caused the specific solar wind episode. This is Activity 12 of the Space Weather Forecast curriculum.
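The travel-time estimate in this activity amounts to dividing the Sun-Earth distance by the measured wind speed, assuming a constant speed along the way; a sketch:

```python
AU_KM = 1.496e8  # mean Sun-Earth distance in km

def travel_time_hours(speed_km_s):
    """Approximate Sun-to-Earth travel time for a solar wind disturbance,
    assuming it moves at a constant speed the whole way."""
    return AU_KM / speed_km_s / 3600.0

# Typical ~400 km/s wind versus a fast ~800 km/s event
print(round(travel_time_hours(400.0), 1))  # 103.9 hours, over four days
print(round(travel_time_hours(800.0), 1))  # 51.9 hours
```

Subtracting the travel time from the arrival time gives the estimated departure time at the Sun, which is then matched against solar images.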
This is an activity about solar flare activity. Learners will use whole-Sun maps of magnetic activity in order to identify possible future magnetic activity. They will take into account the rotation of the Sun and make day-to-day predictions of the overall Earth-side magnetic activity as suspected farside features rotate onto the Earth-side, and as Earth-side features rotate out of view onto the farside. Finally, learners will check the accuracy of their predictions. This activity requires access to the internet to obtain images from the Stanford University solar magnetic map archive from 1996 to 2011 and the GOES X-ray image archive. This is Activity 9 of the Space Weather Forecast curriculum.
This is an activity about searching online data archives for solar wind events. Learners will find at least three episodes of increased solar wind activity impacting Earth using direct measurements of solar wind velocity and density. Then, they will characterize each event by its rise time, the time it takes for the solar wind speed to rise from normal levels to the peak speed of the event, and the percentage increase in solar wind velocity. This is Activity 11 of the Space Weather Forecast curriculum.
Consumer Spending and Customer Satisfaction: Untying the Knot
Economics Research International
Volume 2012 (2012), Article ID 534209, 7 pages
Research Article
School of Business, Queen’s University, Kingston, ON, Canada K7L 3N6
Received 1 August 2011; Accepted 23 October 2011
Academic Editor: Donald Lien
Copyright © 2012 Peter Sephton. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The recession of 2007–2009 has led to renewed interest in forecasting discretionary consumer spending and whether marketing variables contain predictive content. Using the ACSI customer satisfaction
index and both linear and nonlinear methods, this note suggests the index fails to enhance our understanding of the temporal evolution of discretionary spending.
1. Introduction
Throughout 2009, there was renewed interest in forecasting business cycle turning points as the economy struggled with the forces of creative destruction. As financial markets began a period of
sustained growth, market-watchers wondered aloud whether recovery was in sight or if the upward trend was just a “dead cat bounce.” How would we know when consistent growth had returned? When would
discretionary spending begin to rise and how should businesses react? Should firms raise production now in anticipation of higher levels of demand or wait until stockouts signalled the worst was over
and an increase in production was warranted? Put differently, could marketing variables such as customer satisfaction help predict discretionary spending? Consumer confidence and customer
satisfaction sometimes appear to dominate the effects of variables traditionally thought to drive consumption. Could measures of customer satisfaction help predict consumption above and beyond the
usual panoply of economic data on disposable income and debt-service ratios?
Fornell et al. [1] reported preliminary success in using measures of satisfaction to predict discretionary expenditures, extending related work by Gupta and Zeithaml [2] on linking satisfaction to
financial performance and Keiningham et al. [3] on the relationship between satisfaction and customer retention and recommendation. However, Fornell et al. [1] employ an ad hoc specification
involving nonlinearities without sufficient testing to ensure that their model is properly specified. The purpose here is to determine whether there are long-run linear and/or nonlinear relationships
linking satisfaction and other economic data to discretionary spending. If so, forecasting models can be improved by incorporating the restrictions implied by these relationships. To anticipate the empirical results: perhaps unfortunately, customer satisfaction has little, if any, predictive content for discretionary spending. The search for an effective forecasting model linking measures of
marketing performance to discretionary consumption continues.
The next section contains single equation test results that indicate there is little merit in considering satisfaction when modeling the determinants of discretionary spending whereas a systems
approach to testing provides some evidence that satisfaction “matters.” Tests for threshold cointegration, which allows asymmetric adjustment back to the long-run relationship, uniformly reject the
view that satisfaction measures affect consumption. A nonlinear nonparametric approach provides clear evidence of threshold effects that can improve marketers’ abilities to predict spending, albeit
with no input from measures of customer satisfaction.
2. Cointegration and Error Correction Models
The central question here is whether measures of customer satisfaction are cointegrated with discretionary consumption spending. If so, our forecasting models should incorporate those effects, along
with those coming from the variables traditionally considered to be primary determinants of consumption. To that end, data on real personal disposable income and real personal consumption
expenditures were obtained from the Bureau of Economic Analysis. Real discretionary personal expenditures (DCEs) are constructed as total expenditures less spending on housing, food, and medical
care. The American Customer Satisfaction Index (ACSI) was obtained from the ACSI website, http://www.theacsi.org/, while data on the all-items CPI, the debt-service ratio (DSR), and consumer credit
outstanding was obtained from the St. Louis Federal Reserve Bank’s FRED database. All series span 1994 Q4 through 2008 Q3, with monthly data adjusted to quarterly figures by taking monthly averages.
Real credit outstanding was constructed by deflating outstanding credit by the CPI.
The current financial crisis was precipitated by an overreliance on credit—particularly revolving lines of credit—in anticipation of continually higher housing prices. The collapse of those prices
and the associated effects on consumer credit outstanding suggest that it might be interesting to break total outstanding consumer credit into both revolving and nonrevolving credit to determine
whether discretionary expenditures are more highly related to one form of credit than the other. To that end, both real outstanding consumer credit based on revolving (CCR) and nonrevolving credit
(CCNR) will be included in the analysis.
The specification of the cointegrating regression is given by (1), a linear long-run relationship between real discretionary expenditures and real personal disposable income, the ACSI index, the debt-service ratio, and revolving and nonrevolving consumer credit. Note that at this stage there are no asymmetries in the relationship and, if the series are cointegrated, each must individually be integrated of the same order (i.e., they must each be difference stationary). Unit root and stationarity tests indicated that each series is integrated of order one. For brevity, those test results will not be reported.
Given there are six variables under examination, there could be as many as five cointegrating vectors. The next section begins with single-equation tests of cointegration as well as estimates based
on the Johansen and Juselius [4] methodology. Given the empirical findings, this is followed by a single equation analysis to determine whether there is asymmetric adjustment in the system. A
nonparametric approach using nonlinear methods follows.
3. Empirical Results
The cointegration analysis will employ data spanning 1994 Q4 to 2008 Q3. Engle and Granger [5] introduced the concept of cointegration and provided one of the first tests in the single equation
environment. Their test examines the null hypothesis of noncointegration by testing the residual from the cointegrating regression to see whether it contains a unit root. To save space, I will simply
report that, for tests including a constant or a constant and a trend and for lag adjustments between zero and four (and even up to eight), one cannot reject the null hypothesis of a unit root at the
five percent level of significance. This suggests the series are not cointegrated and there is no meaningful long-run link between real discretionary consumption and the other series, a result that
is at odds with what one might expect. Closer inspection of the cointegrating regression indicates that some variables should be dropped from the system. Consider the estimated regression (2), in which t-statistics based on robust standard errors appear in parentheses. Clearly, the only variables that should be retained are real personal disposable income and revolving consumer credit outstanding. Customer satisfaction, nonrevolving outstanding consumer credit, and the debt-service ratio have coefficients that are statistically indistinguishable from zero at the five percent level. Deleting these series from the cointegrating regression and repeating the test does not qualitatively affect inference, at or about the five percent level of significance, independently of lag length and deterministic components. It would appear that single-equation cointegration analysis is particularly unhelpful in modeling discretionary spending.
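As a sketch of the Engle-Granger two-step logic on simulated data (not the paper's series, and judged against Engle-Granger critical values rather than standard t tables):

```python
import numpy as np

rng = np.random.default_rng(0)

def engle_granger_tstat(y, x):
    """Two-step Engle-Granger sketch: regress y on x (with a constant),
    then fit Delta(e_t) = rho * e_{t-1} + u_t on the residuals and return
    the t-statistic on rho.  Large negative values are evidence against
    the null of no cointegration."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    u = de - rho * lag
    se = np.sqrt((u @ u) / (len(u) - 1) / (lag @ lag))
    return rho / se

# Simulated cointegrated pair: x is a random walk, y = 2x + stationary noise
x = np.cumsum(rng.normal(size=500))
y = 2.0 * x + rng.normal(size=500)
tstat = engle_granger_tstat(y, x)
print(tstat)  # strongly negative: the pair is cointegrated by construction
```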
However, the systems approach to cointegration testing does indicate the six variables are cointegrated with one cointegrating vector (at the five percent level of significance, given a lag length of
two). The associated error correction models (excluded for brevity) suggest that real personal disposable income and the ACSI index are weakly exogenous in that they do not help restore the system
back to its attractor. These findings are intuitively pleasing, since it might be reasonable to assert that the ACSI index and real disposable income can be taken to be given. The ACSI index measures
customer satisfaction, and while (in natural logarithms) its correlation with real GDP is roughly 0.61, one might assert that the causal link is from real GDP to the ACSI index, with adjustment in
real GDP rather than customer satisfaction, absent psychological and international and industry-specific effects such as those identified by Johnson et al. [6]. Similarly, personal disposable income
is not under the direct control of most consumers—it is taken as given. Discretionary spending and measures of outstanding consumer credit should respond to any disequilibrium in the system, as
should the debt-service ratio if it is related to macroeconomic conditions, which are, in part, determined by discretionary consumer expenditures and their effects on inflation—and as the central
bank’s policy response filters through the term structure of interest rates.
These results provide a role for using the ACSI index in setting marketing plans, but they fail to provide the asymmetric or threshold effects identified by Fornell et al. [1]. Asymmetric adjustment
in the error correction mechanism was examined by Enders and Siklos [7]. In essence, when testing whether the residual from the cointegrating regression has a unit root or not (the Engle-Granger null
of noncointegration), Enders and Siklos [7] allow for asymmetry; positive and negative (or above and below a threshold) departures from the attractor garner different patterns of adjustment back to
equilibrium. They specified threshold autoregressive (TAR) and momentum threshold autoregressive (MTAR) models to test whether there was evidence of asymmetric adjustment, extending previous work by
Balke and Fomby [8].
Cook [9] provided an improved TAR and MTAR testing regime that has better statistical properties than the original Enders and Siklos [7] approach due to the fact that the series are initially “GLS
detrended” to eliminate deterministic components. Application of the Cook [9] tests failed to identify any evidence of threshold cointegration, either around a set threshold of zero or around a
consistent threshold chosen using the methods outlined in Chan [10]. This was true for any lag length considered, and whether constant or constant-and-trend deterministics were detrended from the series. It was
also true for the non-GLS detrended versions of the Enders and Siklos [7] TAR and MTAR tests. It would appear that there is little evidence in favor of asymmetric adjustment in the linear
relationship between discretionary spending and its primary determinants.
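A rough sketch of the TAR idea (my own simplified illustration, not the Enders-Siklos or Cook test statistics, which require non-standard critical values):

```python
import numpy as np

def tar_coefficients(e, tau=0.0):
    """Sketch of threshold autoregression in the spirit of Enders-Siklos:
    Delta(e_t) = rho1*I_t*e_{t-1} + rho2*(1 - I_t)*e_{t-1} + u_t,
    with I_t = 1 when e_{t-1} >= tau.  Equal rho1 and rho2 means
    symmetric adjustment back to the attractor."""
    de, lag = np.diff(e), e[:-1]
    I = (lag >= tau).astype(float)
    X = np.column_stack([I * lag, (1.0 - I) * lag])
    (rho1, rho2), *_ = np.linalg.lstsq(X, de, rcond=None)
    return rho1, rho2

# Simulated asymmetric series: fast decay above zero, slow decay below
rng = np.random.default_rng(1)
e = np.zeros(2000)
for t in range(1, 2000):
    rho = -0.8 if e[t - 1] >= 0.0 else -0.1
    e[t] = e[t - 1] + rho * e[t - 1] + rng.normal()
r1, r2 = tar_coefficients(e)
print(r1, r2)  # roughly -0.8 and -0.1
```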
4. Nonlinear Cointegration
While the multivariate cointegration test identified one cointegrating vector, using linear models, there is little evidence of threshold effects in the link between discretionary spending and its
determinants. It seems appropriate then to consider nonlinear relationships to see whether they can provide additional insight.
Nonlinear cointegration has been examined by many authors (see, e.g., Bierens [12] and Breitung [13]). The methods of Sephton [14] seem most appropriate for the present analysis. Multivariate
adaptive regression spline (MARS) models due to Friedman [15] are used to fit the nonlinear cointegrating regression to which the Engle-Granger cointegration test is applied. The MARS routine finds
the optimal transformations of series and their interactions by creating a large set of basis functions—transformations of series above and below certain threshold values, or knots—fitting a
relatively large model and then paring down the size of the fitted relationship based on a variation of a degrees of freedom adjusted sum-of-squared residuals metric. The final model can be linear in
the transformed variables or it can involve products of basis functions relating several variables so that both additive and interactive effects of series on the target variable are allowed. Plots of
the transformed series show the contributions of the basis functions to the fitted model, which is determined by an ordinary regression of the dependent variable against the various basis functions
chosen by the routine.
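The basis functions MARS builds are hinge functions of the form max(0, x − knot) and max(0, knot − x); a minimal sketch:

```python
import numpy as np

def hinge(x, knot, direction=+1):
    """A MARS basis function: max(0, x - knot) for direction=+1,
    max(0, knot - x) for direction=-1.  A fitted MARS model is a linear
    combination of such terms (and their products), so each knot acts
    as an estimated threshold in the predictors."""
    return np.maximum(0.0, direction * (x - knot))

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(hinge(x, 0.0, +1))  # [0. 0. 0. 1. 2.]
print(hinge(x, 0.0, -1))  # [2. 1. 0. 0. 0.]
```

Products of two such hinges give the two-way interactions reported in the fitted model below.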
MARS models have been applied successfully across a variety of disciplines, including economics, pharmaceuticals, customer relationship management, and other intensive data-mining applications (see http://www.salford-systems.com/ and the section “Product Overview” for over fifty published examples and applications across the natural and social sciences and management). One could consider the MARS
routine as the optimal search for thresholds or knots amongst the various predictors of discretionary spending. Indeed, a model that allows two-way multiplicative interactions is in the spirit of
Fornell et al. [1], since they allowed changes in customer satisfaction to interact with the debt-service ratio. A finding of nonlinear cointegration would lead to the specification of an error
correction model using the transformed variables relative to their threshold values and the error correction term. Such a model would endogenize the search for threshold cointegration.
The MARS routine contains a large number of “tuning parameters” which set the size of the largest model (from which the optimal model is chosen), the order of variable interaction, and the penalty
function (related to the sum-of-squared residuals) used to choose the final specification. As part of the output one finds the relative importance of each variable in the final model, measures of
goodness of fit, and plots of the optimal transformations of the series. Table 1 presents the estimated model characterizing the nonlinear cointegrating regression. There are seven basis functions
involving all of the predictors except the customer satisfaction index. Revolving credit enters on its own, while there are interactions between the debt-service ratio and nonrevolving
credit, and personal disposable income and nonrevolving credit.
MARS is a very flexible algorithm and the fact that it chose a model that excludes the ACSI index suggests the index has no power to predict discretionary spending. Indeed, one of the challenges of
nonparametric routines such as MARS is that there is a potential to overfit the relationship by building a model that is “too” large. Experimentation with the model parameters to find a specification
that did include the ACSI index required a very large model that was clearly overparameterized, given that the dataset includes only 56 observations, a relatively short sample that may strain the data-mining capabilities of the MARS algorithm.
We now examine two tests for whether the series are cointegrated. The first is based on the cointegrating regression residuals to see if the variables which were chosen by the routine provide
evidence of nonlinear cointegration. Sephton [14] provides estimates of critical values of the Engle and Granger [5] approach to testing for cointegration in MARS models using the unit root test on
the cointegrating residuals for a variety of sample sizes. Given advances in computational power, it is a trivial matter to update these critical values for current use. They were constructed from a
simulation experiment based on 50,000 replications under the null of no cointegration. The test statistic has a simulated five percent critical value of −6.028. The calculated test value was −6.593,
leading us to conclude that the series do appear to be nonlinearly cointegrated (the probability value associated with the calculated test statistic is approximately 1.9 percent based on the
simulated test values).
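The logic behind simulated critical values of this kind can be sketched directly. The toy version below is not a replication of Sephton's experiment (it uses a plain two-variable regression rather than a MARS model, fewer replications, and a Dickey-Fuller statistic without augmentation terms), but it shows the mechanics: generate independent random walks under the null of no cointegration, run the cointegrating regression, compute the unit root statistic on the residuals, and read the critical value off the empirical distribution. The resulting figure is therefore in the neighbourhood of the standard two-variable Engle-Granger value, not the −6.028 quoted above for the MARS setting.

```python
import random, statistics

def df_tstat(e):
    """Dickey-Fuller t-statistic: regress (e_t - e_{t-1}) on e_{t-1}, no lags."""
    lagged = e[:-1]
    diffs = [e[i] - e[i - 1] for i in range(1, len(e))]
    sxx = sum(v * v for v in lagged)
    b = sum(v * w for v, w in zip(lagged, diffs)) / sxx
    resid = [w - b * v for v, w in zip(lagged, diffs)]
    s2 = sum(r * r for r in resid) / (len(diffs) - 1)
    return b / (s2 / sxx) ** 0.5

random.seed(1)
tstats = []
for _ in range(1000):                       # replications under the null
    x, y = [0.0], [0.0]
    for _ in range(100):                    # two independent random walks
        x.append(x[-1] + random.gauss(0, 1))
        y.append(y[-1] + random.gauss(0, 1))
    mx, my = statistics.mean(x), statistics.mean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    resid = [b - my - beta * (a - mx) for a, b in zip(x, y)]
    tstats.append(df_tstat(resid))
tstats.sort()
cv = tstats[int(0.05 * len(tstats))]
print("simulated 5 percent critical value:", round(cv, 2))
```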
A second approach to testing for cointegration involves a direct examination of the error correction model, testing the statistical significance of the lagged error correction term. The null of no
cointegration is not rejected if the lagged error correction term’s coefficient is insignificantly different from zero. In the context of the MARS algorithm it is not clear that the usual t-statistic on this error correction term follows its standard distribution, because the variables in the final MARS model involve generated regressors (combinations of basis functions, variables measured relative to their threshold values). Palm et al. [16] provide an attractive approach to inference in this situation, developing a sieve bootstrap Wald test for
non-cointegration using the conditional error correction model. This model takes as its regressors the lagged changes in the predictors relative to their thresholds, as well as the lagged residual
from the cointegrating regression (the so-called error correction term). The multivariate sieve bootstrap allows dependence in the series, and their test is shown to be asymptotically valid. Palm et
al. [16] demonstrate that it performs well even in small samples such as those considered here. In all versions of their tests (either based on the vector error correction model or the conditional/
marginal model), one rejects the null of no cointegration at about the 5 percent level of significance, further indicating the presence of nonlinear cointegration. The Palm et al. [16] test of
noncointegration was also applied to the logarithms of the series in the ECM derived from the linear specification in (1). Test statistics including a constant or constant and trend were 18.402 and
13.219, respectively, with five percent critical values well above the calculated test statistics (values ranged from 22.48 to 71.96 depending on the alternative hypothesis). At the five percent
level, one does not reject the null of noncointegration in the linear specification.
The exclusion of the ACSI index from the final model does not support the view that customer satisfaction helps to predict discretionary spending. Since the model was estimated over the entire sample
period, 1994 Q4 to 2008 Q3, one might be concerned with the structural stability of the estimated parameters in the final regression of discretionary spending against the basis functions. Instability
may point to the need to include the ACSI index because of an “omitted variable problem.” Results from the Hansen [11] stability test appear in Table 1, whence it is clear that all parameter
estimates, including those of the variance of the errors, are structurally stable, both individually and jointly. Using nonlinear methods, one need not look to customer satisfaction to predict
discretionary spending.
This finding was robust across a large number of settings of the MARS routine parameters. Only when the number of basis functions was allowed to be relatively large (23) did the ACSI index play a role in the final model. In that case, the goodness of fit of the larger model was 0.999 relative to 0.998 in the smaller model. Parsimony and the principle of Occam’s razor dictate the choice of the smaller specification.
Figures 1 through 3 contain the contributions of each variable to the final fitted MARS relationship, with two views provided for each of the surface plots. Figure 1 shows that outstanding revolving
consumer credit has a unique effect by itself, with values below the first knot or threshold having little effect on discretionary spending. The higher the level of revolving credit, the greater is
its effect on spending, with values above the second knot having no effect (estimated using the logarithms of the series at 0.74 and 1.06, resp., or in terms of levels, $2.09 trillion and $2.88
trillion, resp.).
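The conversion between the log-scale knots and the dollar thresholds quoted above is simple exponentiation (the series is evidently measured in trillions of dollars; the slight differences from the quoted $2.09 and $2.88 suggest the reported knots were themselves rounded):

```python
import math

for knot in (0.74, 1.06):
    print(f"log knot {knot:.2f} -> about {math.exp(knot):.2f} trillion dollars")
```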
Figure 2 shows the joint contributions of the debt-service ratio and nonrevolving credit—as each series rises, so too does their contribution to spending. The second view of this figure demonstrates
this interaction is strongly nonlinear. Figure 3 demonstrates that personal disposable income and nonrevolving lines of credit appear to interact directly, with higher levels raising discretionary spending.
Given the nonlinear relationship between discretionary spending and its predictors, we can estimate error correction models to determine whether any of the series is weakly exogenous. These models
are linear in the parameters and involve the lagged changes in the dependent variable and the predictors relative to their estimated knots as well as the error correction term. Rather than present
voluminous output, Table 2 presents the estimated speeds of adjustment coefficients—the coefficients on the lagged cointegrating regression residuals—and their associated t-statistics. At the five percent level
of significance, discretionary spending and revolving credit appear to respond to restore the system back to equilibrium while disposable income, the debt-service ratio, and nonrevolving credit do
not move in response to disequilibria. These results are very similar to those obtained from the Johansen and Juselius [4] multivariate linear cointegration results and suggest that discretionary
spending and the stock of outstanding revolving credit appear to play significant roles in reequilibrating the markets. Perhaps unfortunately, consumer satisfaction does not play an important role
when one is willing to adopt nonlinear nonparametric methods to predict discretionary spending.
5. Conclusion
The purpose here was to revisit the information in the ACSI customer satisfaction index to determine whether it should be included amongst the variables thought to affect discretionary consumption
expenditures. Using single equation methods, there was little support for a link from satisfaction to spending, and this result carried through to testing for asymmetries in the relationships. When a
multivariate approach to specifying a cointegrating relationship was examined, there did appear to be a stable link amongst the various predictors that could be used to enhance forecasts of consumer spending.
A nonlinear nonparametric routine failed to find evidence of interaction effects of satisfaction with any of the other predictors, all of which were shown to contribute to the temporal behavior of
discretionary expenditures. This highly flexible specification provided insight into the manner in which nonrevolving consumer credit interacted with personal disposable income and the debt-service
ratio, and it identified a nonlinear effect of revolving consumer credit on discretionary spending. The MARS routine failed to find a statistically significant role for customer satisfaction in
modeling discretionary spending.
Though perhaps somewhat discouraging, the finding is that the ACSI index is not a useful predictor of discretionary spending. This suggests that future work might examine the roles that satisfaction may
play in a spatially separated environment. For example, retail sales in some states may be significantly affected by measures of customer satisfaction but their net effect in economy-wide data may be
difficult to identify. It may be helpful for marketing managers to examine geographically distinct areas to determine whether local economic activity is sensitive to satisfaction on a regional basis.
It might also be interesting to examine whether there are industrial sensitivities to customer satisfaction that can enhance marketing strategies.
References
1. C. Fornell, R. T. Rust, and M. G. Dekimpe, “The effect of customer satisfaction on consumer spending growth,” Journal of Marketing Research, vol. 47, no. 1, pp. 28–35, 2010.
2. S. Gupta and V. Zeithaml, “Customer metrics and their impact on financial performance,” Marketing Science, vol. 25, no. 6, pp. 718–739, 2006.
3. T. L. Keiningham, B. Cooil, L. Aksoy, T. W. Andreassen, and J. Weiner, “The value of different customer satisfaction and loyalty metrics in predicting customer retention, recommendation, and share-of-wallet,” Managing Service Quality, vol. 17, no. 4, pp. 361–384, 2007.
4. S. Johansen and K. Juselius, “Maximum likelihood estimation and inference on cointegration—with applications to the demand for money,” Oxford Bulletin of Economics and Statistics, vol. 52, pp. 169–210, 1990.
5. R. Engle and C. Granger, “Cointegration and error correction: representation, estimation and testing,” Econometrica, vol. 55, pp. 251–276, 1987.
6. M. D. Johnson, A. Herrmann, and A. Gustafsson, “Comparing customer satisfaction across industries and countries,” Journal of Economic Psychology, vol. 23, no. 6, pp. 749–769, 2002.
7. W. Enders and P. L. Siklos, “Cointegration and threshold adjustment,” Journal of Business and Economic Statistics, vol. 19, no. 2, pp. 166–176, 2001.
8. N. S. Balke and T. B. Fomby, “Threshold cointegration,” International Economic Review, vol. 38, no. 3, pp. 627–645, 1997.
9. S. Cook, “A threshold cointegration test with increased power,” Mathematics and Computers in Simulation, vol. 73, no. 6, pp. 386–392, 2007.
10. K. Chan, “Consistency and limiting distributions of the least squares estimator of a threshold autoregressive model,” The Annals of Statistics, vol. 21, pp. 520–533, 1993.
11. B. E. Hansen, “Testing for parameter instability in linear models,” Journal of Policy Modeling, vol. 14, no. 4, pp. 517–533, 1992.
12. H. J. Bierens, “Nonparametric nonlinear co-trending analysis, with an application to inflation and interest in the U.S.,” Journal of Business and Economic Statistics, vol. 18, no. 3, pp. 323–337, 2000.
13. J. Breitung, “Rank tests for nonlinear cointegration,” Journal of Business and Economic Statistics, vol. 19, no. 3, pp. 331–340, 2001.
14. P. S. Sephton, “Cointegration tests on MARS,” Computational Economics, vol. 7, no. 1, pp. 23–35, 1994.
15. J. Friedman, “Multivariate adaptive regression splines,” Annals of Statistics, vol. 19, no. 1, pp. 1–67, 1991.
16. F. C. Palm, S. Smeekes, and J. P. Urbain, “A sieve bootstrap test for cointegration in a conditional error correction model,” Econometric Theory, vol. 26, no. 3, pp. 647–681, 2010.
You might be a mathematician if
Re: You might be a mathematician if
You do not need to^.
Engineer wrote:
I can write 70 words per minute but i cant read my own handwriting.
I spend more time with my mobile than with my family.
My IQ is greater than my weight.
I explain to 3 yr olds why the sky is blue using terms like scattering, interference & diffraction.
I have no life & can prove it mathematically.:->
I think in "maths".
I can translate english into binary.
I consider any non-science course "easy".
A 40/100 is heaven to me.
My xerox bills soar higher than my pocket money.!!
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Physics Forums - View Single Post - Smoothing of rectified voltage with capacitor
Interesting result. How did you measure voltage and current?
Added to that: why are the wavetops of the yellow graph abruptly cut off, compared to the rounded tops of the orange and red graphs?
The yellow graph follows the power supply - probably similar to a sine wave (minus some offset for the diode).
The peak values of the voltage increases with the capacitance, so that kind of explains it
No, as the additional current is needed to charge the capacitor. It discharges itself via the lamp when the power supply is at a negative voltage.
One guess: The capacitor increases current and therefore the average load of the lamp - it gets hotter, and its resistance might increase. If the power supply is not perfect and its voltage depends
on the current, this might lead to a higher peak voltage. I would not expect this, but something has to explain your observations and I don't see anything better at the moment.
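The behaviour discussed in this thread is easy to reproduce numerically. The sketch below uses invented component values (a 10 V, 50 Hz supply, a 100 ohm resistive load standing in for the lamp, and an ideal diode), not values from the original experiment; it shows that a larger capacitor raises the minimum of the smoothed waveform and so shrinks the ripple, while the peak stays pinned near the supply amplitude.

```python
import math

def ripple(C, R=100.0, amplitude=10.0, freq=50.0, t_end=0.2, dt=1e-5):
    """Half-wave rectifier with ideal diode, load R in parallel with cap C.
    Returns (min, max) of the capacitor voltage over the last full cycle."""
    vc = 0.0
    trace = []
    t = 0.0
    while t < t_end:
        vs = amplitude * math.sin(2 * math.pi * freq * t)
        if vs > vc:
            vc = vs                      # diode conducts: cap tracks the supply
        else:
            vc -= vc / (R * C) * dt      # diode blocks: cap discharges through load
        trace.append((t, vc))
        t += dt
    last_cycle = [v for (tt, v) in trace if tt >= t_end - 1.0 / freq]
    return min(last_cycle), max(last_cycle)

for C in (100e-6, 1000e-6):              # 100 uF vs 1000 uF
    lo, hi = ripple(C)
    print(f"C = {C*1e6:.0f} uF: ripple ~ {hi - lo:.2f} V (min {lo:.2f}, max {hi:.2f})")
```

With these values the ripple shrinks markedly when the capacitance is increased tenfold, since the discharge between peaks follows exp(-t/RC).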
P G Tait's obituary of Listing
Peter Guthrie Tait wrote an obituary of Johann Benedict Listing which was published in Nature on 1 February 1883. Clerk-Maxwell had discovered Listing's work on knots in 1868 although by that time
Listing's work was twenty years old. Maxwell showed Listing's book to Tait, and lectured on it to the London Mathematical Society in February 1869. It was not until 1876 that Tait began his work
classifying knots. Tait's obituary of Listing is interesting for the details he relates but it is also interesting in giving us an insight into Tait's own thoughts. We reproduce the obituary:-
One of the few remaining links that still continued to connect our time with that in which Gauss had made Göttingen one of the chief intellectual centres Of the civilised world has just been broken
by the death of Listing.
If a man's services to science were to be judged by the mere number of his published papers, Listing would not stand very high. He published little, and (it would seem) was even indebted to another
for the publication of the discovery by which he is most widely known. This is what is called, in Physiological Optics, Listing's Law. Stripped of mere technicalities, the law asserts that if a
person whose head remains fixed turns his eyes from an object situated directly in front of the face to another, the final position of each eye-ball is such as would have been produced by rotation
round an axis perpendicular alike to the ray by which the first object was seen and to that by which the second is seen.
Let us call that line in the retina, upon which the visible horizon is portrayed when we look, with upright head, straight at the visible horizon, the horizon of the retina. Now any ordinary
person would naturally suppose that if we, keeping our head in an upright position, turn our eyes so as to look, say, up and to the right, the horizon of the retina would remain parallel to the
real horizon. This is, however, not so. If we turn our eyes straight up or straight down, straight to the right or straight to the left, it is so, but not if we look tip or down, and also to the
right or to the left. In these cases there is a certain amount of what Helmholtz calls 'wheel-turning' (Raddrehung) of the eye, by which the horizon of the retina is tilted so as to make an angle
with the real horizon. The relation of this 'wheel-turning' to the above-described motion of the optic axis is expressed by Listing's law, in a perfectly simple way, a way so simple that, it is
only by going back to what we might have thought nature should have done, and from that point of view, looking at what the eye really does, and considering the complexity of the problem, that we
see the ingenuity of Listing's law, which is simple in the extreme, and seems to agree with fact quite exactly, except in the case of very short-sighted eyes.
The physiologists of the time, unable to make out these things for themselves, welcomed the assistance of the mathematician. And so it has always been in Germany, Few are entirely ignorant of the
immense accessions which physical science owes to Helmholtz. Yet few are aware that he became a mathematician in order that he might be able to carry out properly his physiological researches. What a
pregnant comment on the conduct of those "British geologists" who, not many years ago, treated with outspoken contempt Thomson's thermodynamic investigations into the admissible lengths of geological time!
Passing over about a dozen short notes on various subjects (published chiefly in the Göttingen Nachrichten), we come to the two masterpieces, on which (unless, as we hope may prove to be the case, he
have left much unpublished matter) Listing's fame must chiefly rest. They seem scarcely to have been noticed in this country, until attention was called to their contents by Clerk-Maxwell.
The first of these appeared in 1847, with the title Vorstudien zur Topologie. It formed part of a series, which unfortunately extended to only two volumes, called Göttinger Studien. The term Topology
was introduced by Listing to distinguish what may be called qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated. The subject of knots furnishes a
typical example of these merely qualitative relations. For, once a knot is made on a cord, and the free ends tied together, its nature remains unchangeable, so long as the continuity of the string is
maintained, and is therefore totally independent of the actual or relative dimensions and form of any of its parts. Similarly when two endless cords are linked together. It seems not unlikely, though
we can find no proof of it, that Listing was led to such researches by the advice or example of Gauss himself; for Gauss, so long ago as 1833, pointed out their connection with his favourite
electromagnetic inquiries.
After a short introductory historical notice, which shows that next to nothing had then been done in his subject, Listing takes up the very interesting questions of Inversion (Umkehrung) and
Perversion (Verkehrung) of a geometrical figure, with specially valuable applications to images as formed by various optical instruments. We cannot enter into details, but we paraphrase one of his
examples, which is particularly instructive:-
A man on the opposite bank of a quiet lake appears in the watery mirror perverted, while in an astronomical telescope he appears inverted. Although both images show the head down and the feet up,
it is the dioptric one only which - if we could examine it - would, like the original, show the heart on the left side; for the catoptric image would show it on the right side. In type there is a difference between inverted letters and perverted ones. Thus the Roman V becomes, by inversion, the Greek Λ; the Roman R perverted becomes the Russian Я; the Roman L, perverted and inverted, becomes the Greek Γ. Compositors read perverted type without difficulty: many newspaper readers in England can read inverted type. ... The numerals on the scale of Gauss' Magnetometer
must, in order to appear to the observer in their natural position, be both perverted and inverted, in consequence of the perversion by reflection and the inversion by the telescope.
Listing next takes up helices of various kinds, and discusses the question as to which kind of screws should be called right-handed. His examples are chiefly taken from vegetable spirals, such as
those of the tendrils of the convolvulus, the hop, the vine, &c., some from fir-cones, some 'from snail-shells, others from the "snail" in clockwork. He points out in great detail the confusion which
has been introduced in botanical works by the want of a common nomenclature, and finally proposes to found such a nomenclature on the forms of the Greek δ and λ.
The consideration of double-threaded screws, twisted bundles of fibres, &c., leads to the general theory of paradromic winding. From this follow the properties of a large class of knots which form
"clear coils." A special example of these, given by Listing for threads, is the well-known juggler's trick of slitting a ring-formed band up the middle, through its whole length, so that instead of
separating into two parts, it remains in a continuous ring. For this purpose it is only necessary to give a strip of paper one half-twist before pasting the ends together. If three half-twists be
given, the paper still remains a continuous band after slitting, but it cannot be opened into a ring, it is in fact a trefoil knot. This remark of Listing's forms the sole basis of a work which
recently had a large sale in Vienna: showing how, in emulation of the celebrated Slade, to tie an irreducible knot on an endless string!
Listing next gives a few examples of the application of his method to knots. It is greatly to be regretted that this part of his paper is so very brief; and that the opportunity to which be deferred
farther development seems never to have arrived. The methods he has given are, as is expressly stated by himself, only of limited application. There seems to be little doubt, however, that he was the
first to make any really successful attempt to overcome even the preliminary difficulties of this unique, and exceedingly perplexing subject.
The paper next gives examples of the curious problem: - Given a figure consisting of lines, what is the smallest number of continuous strokes of the pen by which it can be described, no part of a
line, being gone over more than once? Thus, for instance, the lines bounding the 64 squares of a chess-board can be drawn at 14 separate pen strokes. The solution of all such questions depends at
once on the enumeration of the points of the complex figure at which an odd number of lines meet.
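Listing's criterion can be checked directly. For a connected figure, the minimum number of strokes is half the number of points at which an odd number of lines meet (or a single stroke if there are no such points); the sketch below counts vertex degrees on the lattice of an n × n board:

```python
def min_strokes(n=8):
    """Minimum pen strokes to draw all unit-square boundary lines of an n x n board.
    For a connected figure this is max(1, (number of odd-degree points) / 2)."""
    odd = 0
    for i in range(n + 1):
        for j in range(n + 1):
            d = 0                       # lines meeting at lattice point (i, j)
            if i > 0: d += 1
            if i < n: d += 1
            if j > 0: d += 1
            if j < n: d += 1
            if d % 2 == 1:
                odd += 1
    return max(1, odd // 2)

# The chess-board: 28 boundary points of degree 3 pair off into 14 strokes.
print(min_strokes(8))
```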
Then we have the question of the "area" of the projection of a knotted curve on a plane; that of the number of interlinkings of the orbits of the asteroids; and finally some remarks on hemihedry in
crystals. This paper, which is throughout elementary, deserves careful translation into English very much more than do many German writings on which that distinction has been conferred.
We have left little space to notice Listing's greatest work, Der Census räumlicher Complexe (Göttingen Abhandlungen, 1861). This is the less to be regretted, because, as a whole, it is far too
profound to be made popular; and, besides, a fair idea of the nature of its contents can be obtained from the introductory Chapter of Maxwell's great work on Electricity. For there the importance of
Listing's Cyclosis, Periphractic Regions, &c., is fully recognised.
One point, however, which Maxwell did not require, we may briefly mention.
In most works on Trigonometry there is given what is called Euler's Theorem about polyhedra: - viz. that if S be the number of solid angles of a polyhedron (not self-cutting), F the number of its
faces, and E the number of its edges, then
S + F = E + 2.
The puzzle with us, when we were beginning mathematics, used to be "What is this mysterious 2, and how came it into the formula?" Listing shows that this is a mere case of a much more general theorem
in which corners, edges, faces, and regions of space, have a homogeneous numerical relation. Thus the mysterious 2, in Euler's formula, belongs to the two regions of space: - the one enclosed by the
polyhedron, the other (the Amplexum, as Listing calls it) being the rest of infinite space. The reader, who wishes to have an elementary notion of the higher forms of problems treated by Listing, is
advised to investigate the modification which Euler's formula would undergo if the polyhedron were (on the whole) ring-shaped: - as, for instance, an anchor-ring, or a plane slice of a thick
cylindrical tube.
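Listing's point is easy to verify by counting. For the cube, S + F = E + 2, i.e. S − E + F = 2, while for a ring-shaped solid the same count gives 0. The ring-shaped example below is a square "picture frame" (a block with a square hole through it, in the spirit of the anchor-ring above); its counts of 16 corners, 32 edges, and 16 quadrilateral faces are worked out by hand for this illustration.

```python
def euler_characteristic(S, E, F):
    """Listing-style count: S solid angles (corners), E edges, F faces."""
    return S - E + F

# A cube, a simple (non-self-cutting) polyhedron: S + F = E + 2.
print(euler_characteristic(8, 12, 6))      # -> 2

# A square "picture frame", a ring-shaped solid: S + F = E, not E + 2.
print(euler_characteristic(16, 32, 16))    # -> 0
```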
JOC/EFR March 2006
The URL of this page is: http://www-groups.dcs.st-and.ac.uk/~history/Extras/Tait_Listing_obituary.html
(325 worksheets available to subscribers)
Funsheets 4 Math are created for Middle School Math and Pre-Algebra students.
Funsheets are unique, fun worksheets that integrate middle school math and Pre-Algebra skills with fun activities including sudoku, word finds, riddles, color patterns, crosswords, games, matching
cards, etc. Although math worksheets are not the primary activity in the classroom, there is still value in practicing math skills with paper and pencil. These math worksheets are designed to provide
variation in work assigned to students beyond the standard worksheet.
Most funsheets also have a Standard worksheet counterpart. Use these as regular worksheet exercises, or as a resource for interactive quizzing systems.
You may browse all the worksheets that are available to subscribers by clicking on each Math unit listed.
It is suggested that teachers require students to show their work on a separate sheet of paper.
SIMULTANEOUS EQUATION BIAS (Social Science)
Simultaneous equation bias is a fundamental problem in many applications of regression analysis in the social sciences that arises when a right-hand side, X, variable is not truly exogenous (i.e., it
is a function of other variables). In general, ordinary least squares (OLS) regression applied to a single equation from a system of simultaneous equations will produce biased, that is,
systematically wrong, parameter estimates. Furthermore, the bias from OLS does not decrease as the sample size increases. Estimating parameters from a simultaneous equation model requires advanced
methods, of which the most popular today is two-stage least squares (2SLS).
Consider the following single-equation regression model:

y = β0 + β1x + ε.

Given a sample of x, y observations, we fit a line using ordinary least squares, so named because coefficients are chosen to minimize the sum of squared residuals. A residual is the vertical distance between the actual and predicted value. The equation of the fitted line is

ŷ = b0 + b1x,

and b1, the slope coefficient from the OLS fitted line, is actually a random variable.
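The OLS recipe itself is a short computation. The sketch below (with invented toy data) returns the intercept and slope that minimize the sum of squared residuals:

```python
def ols_fit(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]          # roughly y = 2x with noise
b0, b1 = ols_fit(xs, ys)
print(round(b0, 2), round(b1, 2))
```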
Figure 1
Figure 1 provides a concrete example of the abstract ideas underlying OLS. The points in the graph correspond to those in the table. The estimated slope, 4.2, does not equal the true slope, 5,
because of the random error term, which in this case is normally distributed with mean zero and standard deviation of 50. A new sample of ten observations would have the same X values, but the Ys
would be different and, thus, the estimated slope from the fitted line would change.
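The thought experiment in the last sentence, redrawing the Y values and refitting, can be run many times. The sketch below mimics the figure's setup (true slope 5, errors with standard deviation 50, ten fixed X values; the particular X values are invented since the text does not list them) and shows that the OLS slope estimates scatter around the true value of 5:

```python
import random, statistics

def draw_slope(xs, beta1=5.0, sd=50.0):
    """One sample from the DGP, returning the OLS slope estimate."""
    ys = [beta1 * x + random.gauss(0, sd) for x in xs]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

random.seed(42)
xs = list(range(10, 110, 10))             # ten fixed X values
slopes = [draw_slope(xs) for _ in range(5000)]
print("mean of estimated slopes:", round(statistics.mean(slopes), 2))
print("sd of estimated slopes:  ", round(statistics.stdev(slopes), 2))
```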
There are other estimators (recipes for fitting the line) besides OLS. The circle in Figure 2 represents all of the possible estimators. The vertical oval contains all of the linear estimators. This
does not refer to the fitted line itself, which can have a curved or other nonlinear shape, but to the algorithm for computing the estimator. All of the unbiased estimators are included in the
horizontal oval. Unbiasedness is a desirable property referring to the accuracy of an estimator. Unbiased estimators produce estimates that are, on average, equal to the parameter value. Bias means
that the estimator is systematically wrong, that is, its expected value does not equal the parameter value. The area where the ovals overlap in Figure 2 is that subset of estimators, including OLS,
which are both linear and unbiased.
According to the Gauss-Markov Theorem, when the DGP obeys certain conditions, OLS is the best, linear, unbiased estimator (BLUE). Of all of the linear and unbiased estimators, OLS is the best because
it has the smallest variance. In other words, there are other estimators that are linear and unbiased (centered on β1), but they have greater variability than OLS. The goal is unbiased estimators
with the highest precision, and the Gauss-Markov Theorem guarantees that OLS fits the bill.
Figure 2
Figure 3
Figure 3 shows histograms for three rival linear estimators for a DGP that conforms to the Gauss-Markov conditions. The histograms reflect the estimates produced by each estimator. Rival 1 is biased.
It produces estimates that are systematically too low. Rival 2 and OLS are unbiased because each one is centered on the true parameter value. Although both are accurate, OLS is more precise. In other
words, using OLS rather than Rival 2 is more likely to give estimates near the true parameter value. The Gauss-Markov Theorem says that OLS is the most precise estimator in the class of linear,
unbiased estimators.
Suppose one faces a simultaneous equation DGP like this:

y1 = β0 + β1y2 + ε1
y2 = α0 + α1y1 + ε2.

There are two dependent (or endogenous) variables, y1 and y2. Each equation has a regressor (a right-hand side variable) that is a dependent variable.
If one is interested in the effect of y1 on y2, can one toss out the first equation and treat the second equation as a single-equation model? In other words, what happens if one ignores the
simultaneity and simply runs an OLS regression on an individual equation? One gets simultaneous equation bias. The OLS estimator of α1, the slope parameter in the second equation, will be biased; that is, it will not be centered on α1. With every sample to which one applies the OLS recipe, the resulting estimates will be systematically wrong. OLS is now behaving like the Rival 1 estimator in Figure 3 (although one does not know if the bias centers OLS above or below the true parameter value).
Consider the following concrete example. A researcher is interested in estimating the effect of the crime rate (number of crimes per 100,000 people per year) on enforcement spending (dollars per
person per year). As the crime rate rises, more police officers and prison guards are needed, so enforcement spending will rise. The researcher is interested in estimating the slope coefficient, β1, in the following model:

Enforcement Spending = β0 + β1 × Crime Rate + ε
Unfortunately, in this situation, as in most social science applications, the real world does not follow a single-equation DGP. Although it is true that government policy makers allocate resources to
enforcement spending depending on the crime rate, criminals make decisions based on enforcement spending (and other variables). Increased crime causes more enforcement spending, but more enforcement
spending causes less crime. This kind of feedback loop is common in the social sciences. The appropriate model is not a single-equation DGP because the crime rate is not a truly exogenous variable.
Instead, the researcher must cope with a simultaneous system of equations where both enforcement spending and crime rate are dependent variables.
If the researcher naively applies OLS to the single equation, her estimate of the effect of crime on enforcement spending, β1, will be biased. Because ignoring the fact that the crime rate is
actually a dependent variable with its own DGP equation causes this bias, it is called simultaneous equation (or simultaneity) bias.
The source of the poor performance of the OLS estimator lies in the fact that we have a violation of the conditions required for the Gauss-Markov Theorem: The crime rate is a right-hand side variable
that is not independent of the error term. In a given year a high crime rate will result in high enforcement spending, but that will trigger a low crime rate. Conversely, a low enforcement spending
year will lead to more crime. When the error term is correlated with a regressor, OLS breaks down and is no longer an unbiased estimator.
Estimating an equation with dependent variables on the right-hand side requires advanced methods. It is important to recognize that increasing the sample size or adding explanatory variables to the
single-equation regression will not solve the problem.
The approach typically taken is called two-stage least squares (2SLS). In the first stage, an OLS regression utilizes truly exogenous variables (called instrumental variables) to create artificial
variables. In the second stage, these artificial variables are then used in place of the endogenous, right-hand side variables in each equation in the system.
In the enforcement spending and crime rate example, the researcher would first regress the crime rate on a set of truly exogenous variables to create a Predicted Crime Rate variable. Determining the
instruments to be used in the first stage regression is a crucial step in the 2SLS procedure. In the second stage, she would substitute the Predicted Crime Rate for the Crime Rate variable and run
OLS. It can be shown that as the sample size increases, the expected value of the 2SLS estimator gets closer to the true parameter value. Thus, unlike OLS, 2SLS is a consistent estimator of a
parameter in a simultaneous equation model.
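The logic of simultaneity bias and its two-stage remedy can be illustrated with a small simulation. This is only a sketch: the parameter values, the feedback equation, and the exogenous instrument z below are invented for illustration, not taken from the article. With a single instrument, the instrumental-variable ratio cov(z, y2)/cov(z, y1) is numerically identical to running the two 2SLS stages.

```python
import random

random.seed(42)
n = 20_000
a0, a1 = 2.0, 0.5            # equation of interest: y2 = a0 + a1*y1 + e2
b0, b1, b2 = 1.0, 0.4, 1.0   # feedback equation:    y1 = b0 + b1*y2 + b2*z + e1

y1s, y2s, zs = [], [], []
for _ in range(n):
    z = random.gauss(0, 1)                   # truly exogenous instrument
    e1, e2 = random.gauss(0, 1), random.gauss(0, 1)
    # solve the two equations simultaneously (reduced form for y1)
    y1 = (b0 + b1 * a0 + b2 * z + b1 * e2 + e1) / (1 - a1 * b1)
    y2 = a0 + a1 * y1 + e2
    y1s.append(y1); y2s.append(y2); zs.append(z)

def mean(xs): return sum(xs) / len(xs)
def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

beta_ols = cov(y1s, y2s) / cov(y1s, y1s)   # naive single-equation OLS
beta_iv  = cov(zs, y2s) / cov(zs, y1s)     # IV/2SLS with one instrument

print(f"true a1 = {a1}, OLS = {beta_ols:.3f}, IV = {beta_iv:.3f}")
```

With these parameters the naive OLS slope converges to roughly a1 + 0.15 (biased upward, because y1 is correlated with e2 through the feedback loop), while the IV estimate is consistent for a1; a larger sample shrinks only the IV sampling error, never the OLS bias.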
In practice, two separate regressions are not actually run. Modern statistical software packages have an option for 2SLS that performs the calculations, computing appropriate standard errors and
other regression statistics, in one step. As a practical matter, even if there are strong theoretical reasons to suspect the presence of simultaneous equation bias, it need not be a particularly
large bias.
Attempts to estimate demand curves in the first quarter of the twentieth century led economists to model supply and demand equations as a simultaneous system. This work culminated in the
probabilistic revolution in the 1940s. In "The Probability Approach in Econometrics," Trygve Haavelmo called for explicit description of the data generation process, including the source of variation
in the error term and the use of a simultaneous system of equations to model complicated interrelationships among variables.
Haavelmo’s program was supported by Tjalling Koopmans and others at the Cowles Commission, a research think tank housed at the University of Chicago from 1939 to 1955. These econometricians made
progress in several key areas, including the identification problem, understanding the nature of simultaneous equation bias, and methods for properly estimating an equation embedded in a simultaneous
system. They concentrated their simultaneous equation estimation efforts on full- and limited-information maximum likelihood. Two-stage least squares, a much more efficient computational approach,
was not discovered—independently by Henri Theil and Robert Basmann—until the 1950s.
Simultaneous equation bias occurs when an ordinary least squares regression is used to estimate an individual equation that is actually part of a simultaneous system of equations. It is extremely
common in social science applications because almost all variables are determined by complex interactions with each other. The bias lies in the estimated coefficients, which are not centered on their
true parameter values. Advanced methods, designed to eliminate simultaneous equation bias, use instrumental variables in the first stage of a two-stage least squares procedure.
Probabilistic description of ice-supersaturated layers in low resolution profiles of relative humidity
^1 Centre for Atmospheric Science, Department of Chemistry, University of Cambridge, Cambridge, UK
^2 Deutsches Zentrum für Luft- und Raumfahrt, Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany
Abstract. The global observation, assimilation and prediction in numerical models of ice super-saturated (ISS) regions (ISSR) are crucial if the climate impact of aircraft condensation trails
(contrails) is to be fully understood, and if, for example, contrail formation is to be avoided through aircraft operational measures. A robust assessment of the global distribution of ISSR will
further this debate, and ISS event occurrence, frequency and spatial scales have recently attracted significant attention. The mean horizontal path length through ISSR as observed by MOZAIC aircraft
is 150 km (±250 km). The average vertical thickness of ISS layers is 600–800 m (±575 m) but layers ranging from 25 m to 3000 m have been observed, with up to one third of ISS layers thought to be
less than 100 m deep. Given their small scales compared to typical atmospheric model grid sizes, statistical representations of the spatial scales of ISSR are required, in both horizontal and
vertical dimensions, if global occurrence of ISSR is to be adequately represented in climate models.
This paper uses radiosonde launches made by the UK Meteorological Office, from the British Isles, Gibraltar, St. Helena and the Falkland Islands between January 2002 and December 2006, to investigate
the probabilistic occurrence of ISSR. Specifically each radiosonde profile is divided into 50- and 100-hPa pressure layers, to emulate the coarse vertical resolution of some atmospheric models. Then
the high resolution observations contained within each thick pressure layer are used to calculate an average relative humidity and an ISS fraction for each individual thick pressure layer. These
relative humidity pressure layer descriptions are then linked through a probability function to produce an s-shaped curve describing the ISS fraction in any average relative humidity pressure layer.
An empirical investigation has shown that this one curve is statistically valid for mid-latitude locations, irrespective of season and altitude; pressure layer depth, however, is an important variable. Using this empirical understanding of the s-shaped relationship, a mathematical model was developed to represent the ISS fraction within any arbitrary thick pressure layer. Here the
statistical distributions of actual high resolution RHi observations in any thick pressure layer, along with an error function, are used to mathematically describe the s-shape. Two models were
developed to represent both 50- and 100-hPa pressure layers with each reconstructing their respective s-shapes within 8–10% of the empirical curves. These new models can be used, to represent the
small scale structures of ISS events, in modelled data where only low vertical resolution is available. This will be useful in understanding, and improving the global distribution, both observed and
forecasted, of ice super-saturation.
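The s-shaped link between layer-mean relative humidity and ISS fraction described above can be sketched with an error function. The following is purely an illustration, not the paper's fitted model: it assumes the high-resolution RHi values inside a thick layer scatter normally about the layer mean, with an invented 12% spread.

```python
import math

def iss_fraction(mean_rhi, sd_rhi=12.0):
    """Fraction of sub-layer RHi observations exceeding 100% (ice saturation),
    assuming a Gaussian spread of width sd_rhi about the layer-mean RHi.
    The 12% spread is an illustrative guess, not a fitted value."""
    return 0.5 * math.erfc((100.0 - mean_rhi) / (sd_rhi * math.sqrt(2.0)))

for rhi in (70, 90, 100, 110, 130):
    print(f"layer-mean RHi {rhi:3d}%  ->  ISS fraction {iss_fraction(rhi):.2f}")
```

The resulting curve rises smoothly from near 0 to near 1 as the layer-mean RHi crosses saturation, which is the qualitative s-shape the abstract describes.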
Citation: Dickson, N. C., Gierens, K. M., Rogers, H. L., and Jones, R. L.: Probabilistic description of ice-supersaturated layers in low resolution profiles of relative humidity, Atmos. Chem. Phys.
Discuss., 10, 2357-2395, doi:10.5194/acpd-10-2357-2010, 2010.
Paintball Spin Physics - Getting to the final Answer
11-20-2002, 09:42 AM
Can we even measure the Magnus effect?
Woo hoo!
I found a webpage which describes the mathematical foundation for the Magnus effect. It is located at the following URL, and describes the effect for rotating cylinders of infinite length:
I think it could be applied to our paintball, at least as a first cut approximation. After staring blearily at the equations, we come up with
L = - (rho) * v0 * (gamma), where
rho is the density of the fluid,
v0 is the velocity of the fluid (relative to the object in question), and
gamma is something called the circulation, defined as the line integral of the flow velocity around a closed loop.
What gamma basically means for us is "take the surface speed of the object and multiply it by the object's circumference". Thus,
(gamma) = vc * 2 * (pi) * r, where
vc is the circumferential (surface) velocity of the object,
pi is 3.14159 or thereabouts, and
r is the object's radius.
Let's plug in some of the data derived from the test 101 picture.
rho = 1.293 kg/m^3 (reasonable value for the density of air)
v0 = 85.34 m/s (280 fps)
vc = 1.25 m/s (4.1 fps surface speed)
r = 0.0086 m (radius of a paintball 0.68 inches in diameter)
So from this we get
L = -1.293 * 85.34 * 1.25 * 2 * 3.14159 * 0.0086
L = -7.45 (units?)
I'm not sure what the units should be. Doing a bit of unit analysis on the equation for L, we have the following:
L-units = (kg/m^3) * (m/s) * (m/s) * (unitless) * (unitless) * (m)
L-units = kg/s^2 ???
That sure isn't a unit of force I'm familiar with. We're missing a distance unit in the numerator somewhere, and darn if I can find it. Any help, anyone? :)
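Incidentally, the mystery unit resolves itself once you notice the infinite-cylinder formula is the Kutta–Joukowski theorem, which gives lift per unit span: kg/s^2 is just N/m. A quick sketch reproducing the arithmetic with the post's numbers:

```python
import math

rho = 1.293      # air density, kg/m^3
v0  = 85.34      # ball velocity, m/s (280 fps)
vc  = 1.25       # surface speed from the spin, m/s
r   = 0.0086     # ball radius, m (0.68 inch diameter)

gamma = vc * 2 * math.pi * r   # circulation, m^2/s
L = -rho * v0 * gamma          # Kutta-Joukowski lift per unit span, N/m
print(f"circulation = {gamma:.5f} m^2/s, L = {L:.2f} N/m")
```

Unit check: (kg/m^3)(m/s)(m^2/s) = kg/s^2 = N/m, i.e. the force on each metre of an infinitely long spinning cylinder, which is why a plain newton never falls out of this particular equation.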
I also have happened across the following webpage, which gives a different equation (of similar form) for the Magnus effect:
Here they describe the effect as follows:
M = cM * (rho) * D^3 * f * v, where
cM is the Magnus force coefficient (1.23 works pretty well, according to the webpage),
rho is the density of the fluid,
D is the diameter of the object,
f is the object's rotational frequency (rotations per second), and
v is the velocity of the fluid (relative to the object).
If the Magnus force coefficient is unitless, then a unit analysis of this equation actually ends up with units of force coming out of it. Let's plug in some numbers derived from test 101:
cM = 1.23 (unitless?)
rho = 1.293 kg/m^3 (reasonable value for the density of air)
D = 0.0173 m (0.68 inch diameter paintball)
f = 23.1 rotations/s (15 degrees per strobe flash, one strobe flash every 1.8 milliseconds)
v = 85.34 m/s (280 fps)
M = 1.23 * 1.293 * 0.0173^3 * 23.1 * 85.34
M = 0.016 N
So for that amount of spin we end up with a Magnus effect force of 0.016 newtons. For a 3 gram paintball, this force results in an acceleration of 5.3 m/s^2, or right around 0.54 g's. Let's
assume this equation has given us a correct answer, and see what the picture can tell us based on what we've calculated.
I am going to define the "first" strobe as the first image of the ball after it has exited the barrel, and incrementally name each successive ball image to the right of the first strobe (second,
third, etc.).
An acceleration of 5.3 m/s^2 should deflect the ball approximately 8.6 microns between the first and second strobes. This deflection is nearly two orders of magnitude smaller than the spatial sampling in the image (around 0.6 millimeters per pixel for the "bottom view" portion of test image 101). So even if the Magnus effect is at work, we simply can't see it from strobe to strobe. So let's look at the first and last strobes in the image (strobes 1 and 5).
In this situation, the time interval is four times as large. Assuming the acceleration resulting from the Magnus effect is constant throughout the measured time, we should expect a deflection 16
times greater than that predicted between strobes 1 and 2, or around 140 microns (0.14 millimeters). This deflection is still smaller than the spatial sampling in the image.
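The second formula and the strobe-deflection estimates above can be put into one script. The numbers are the post's test-101 values; the 3 gram ball mass is the post's assumption.

```python
cM  = 1.23       # Magnus force coefficient (unitless, per the webpage)
rho = 1.293      # air density, kg/m^3
D   = 0.0173     # ball diameter, m
f   = 23.1       # spin rate, rotations per second
v   = 85.34      # ball velocity, m/s
m   = 0.003      # ball mass, kg (assumed 3 g)
dt  = 0.0018     # strobe interval, s

M = cM * rho * D**3 * f * v          # Magnus force, N
a = M / m                            # resulting acceleration, m/s^2
d_12 = 0.5 * a * dt**2               # deflection between strobes 1 and 2
d_15 = 0.5 * a * (4 * dt)**2         # deflection between strobes 1 and 5

print(f"M = {M:.4f} N, a = {a:.2f} m/s^2")
print(f"deflection 1->2 = {d_12*1e6:.1f} um, 1->5 = {d_15*1e6:.1f} um")
```

The acceleration comes out near 5.4 m/s^2 rather than the 5.3 quoted above only because the post rounds M to 0.016 N before dividing by the mass; either way, both deflections stay well below the ~0.6 mm-per-pixel sampling, so the conclusion is unchanged.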
If the ball is spinning at 15 degrees per strobe flash (and based on my other spin calculations and Tom's additional comments, I think we can safely assume the ball is not spinning at 195 or -165
degrees per strobe flash), and if the formula I used above is reasonable and accurate, then the high resolution pictures we have of the test are simply not sufficient to detect the resultant
Magnus effect.
I looked at the "bottom view" portion of the test 101 picture to see if I could see any horizontal deviation in the ball's path, and I noticed something interesting... the laser aligned string is
not straight. It curves slightly in the image. This leads me to believe we either have a camera/lens perspective effect going on here, or the string is vibrating during firing. The blast of air
that escapes the barrel could be moving the string around.
I attempted to correct the slightly curved string by using Photoshop's transformation tools, but had little success. It seems that the transformation I'm looking for just isn't available in my
version of Photoshop. Instead of measuring each ball's position from a single reference line, I generated local references corresponding to the string's location at each ball's position. I
measured the following offsets:
Strobe 1: -4 pixels (-2.48 mm)
Strobe 2: -2 pixels (-1.24 mm)
Strobe 3: -2 pixels (-1.24 mm)
Strobe 4: -1 pixel (-0.62 mm)
Strobe 5: 0 pixels (0 mm)
These offsets are greater than the Magnus effect alone would suggest, unless the Magnus equation I used was incorrect. They may be the result of the escaping gas buffeting the ball around during
the first few milliseconds of flight.
Well, that's about all I've got for now. Time to go do some work that actually fills up a paycheck. :)
11-20-2002, 10:56 AM
Originally posted by flanders
as for airpoctes they allow a ball to not be so birrtal it pops in teh gun, while not makeing a shell even harder, if a bakl infact has no spin forward or back then the airpocet won't matter if
it's facing backwards, or if it's roatating at a constant rate so tat it's never forward back up down etc then it will be ok do to compensation
As I struggle to read this and make sense of all your spelling errors / typos my only thoughts are that if you ever wish to be taken seriously you should read your post before submitting it.
Could you please explain how an airpocket in the ball will make the shell more brittle?
Originally posted by flanders
barrel do ahve something to do with accurac, not lengthj ( i dun know if theres a barrel to big or 2 small) but quality, if they are rifled to much and the ball has spin goes pop, if the barrel
has burs pop, if the barrel is to big, bouncy bouncy pop bad accuracy, if the barrel aint supper clean pop, if the barrel has odd porting can cause poor release of air causing spin, to tight air
can't get through in the right way or ball gets stuck or shell peels off
Here you are making some statements about barrels and accuracy. Were I to make claims like this I would also present some data to support my claims.
Originally posted by flanders
the ball can distort with high pressure bursts but it would have a reverse translation (as in all liquid filled objects) impact on oen side creats lump on other side this may or may not be a good
thing (depending on spin) if spin occurs left or right ball witll be good havign a bullet effect, if ball spins up or down bad ball is lopided pops or screws it up more
An impact on one side of the ball will not cause a lump on the opposite side. The ball's diameter will swell before a lump is created... unless of course there is a thin spot in the shell... it
will be the weakest area and will deform first from the hydrostatic pressure.
If the spin occurs left or right the ball will hook left or right.
Originally posted by flanders
as for spin not hapening with liquid, if there was no air pocets then the ball would be hard like a solid object or atleast it doesn't impact or adjust every thign is filled, ball still spins, if
there is an air pocket, and the liquid can still move then yes will cause lopsided pickle ball spin. but if the buble does not move it creats a weighted zone creating a similer odd spin, if spin
Originally posted by flanders
General hypothesis
If spin lateral spin (left to right)occurs then one or many of the factros will effect accuracy
If spin longitudanal spin (up and down)occurs then one or many of the factros will effect accuracy
You could also say if the ball was blue then one or many of the factors will effect the accuracy.
Anyhow... I don't mean to be a jerk but, please review your posts before submitting them. Make sure they contain useful information that is similar to that being posted by the other people.
11-20-2002, 11:05 AM
This is about getting the facts and involves more than just an open or closed bolt issue. In paintball this is as close as we get to publishing.
Nice job, that's the kind of effort I was looking for. The result of your investigation points out that the magnus effect is SMALL. So if the expected deviation from the Magnus effect is 1% and
in tracking the paintball we see a deviation of 10% then we can safely say that another force besides Magnus is acting on the ball. We don't have to actually see it to make sense of it.
Do not let the string fool you, it was stretched tight between two posts and only looks curved because the mirror was not perfect.
Everyone concentrate on test number 114 because we have the additional data for the flight path on that one.
11-20-2002, 11:19 AM
If the Magnus force is dependent on spin, then what is involved to cause a ball to randomly move when little to no spin is applied? I take this from volleyball, a sport in which if you serve the ball with a "flat" or minimized spin the ball will (for lack of a better word) "wiffle" in the air, which will directly affect its trajectory. Is this a possibility in paintball?
11-20-2002, 12:05 PM
Originally posted by AGD
The result of your investigation points out that the magnus effect is SMALL.
I am withholding that conclusion until I receive some validation of the equation I used to calculate the force resulting from the Magnus effect. It could well be that the equation I used applies
well to things like baseballs and basketballs, but not necessarily to small, paintball sized spheres.
Originally posted by AGD
Do not let the string fool you, it was stretched tight between two posts and only looks curved because the mirror was not perfect.
Ah, a slight sag in the mirror would definitely explain the string curvature I was seeing. Hadn't thought of that.
Originally posted by AGD
Everyone concentrate on test number 114 because we have the additional data for the flight path on that one.
I'll take a look at that one this evening and see what I can come up with.
While you're reading this, Tom, I've got a question I've been meaning to ask you. In the video tour of AGD's facility, you show off a large mirror you were grinding to make a large aperture
telescope. Has the mirror been finished, and if so, what kind of figure does it have? I ask because if it still has a spherical figure it would be awesome to use it in some large scale Schlieren
photography of the airflow behind the paintball as it exits the barrel. Even if it's got a parabolic figure it could still be used with a correction lens for such imaging. :)
11-20-2002, 02:58 PM
Sadly I cannot post the evidence, but the raw egg theory does not apply to paintballs. Paintballs will spin while in flight regardless of the liquid fill. Once the shell is set in motion it will
not stop spinning as it moves through the air. As I stood leaning out of my window with a camcorder i filmed and egg pushed off the 5th story of an apartment building. A slight spin was imparted
upon the egg as it fell. Earlier I had drawn a red line around the egg and in slow motion I could clearly see the line rotate. I would post the video if I could but this camcorder will not let me.
Eggs spin in the air. I think this is due to the lack of friction on the egg. When you spin an egg on a table it is grinding against the table, causing friction. In the air there is a much smaller
amount of friction on the egg. In addition, I think that because the shell of a paintball accounts for more of the total mass and volume when compared to the fill than an eggs shell to it's
"fill" the paintball would probably spin better.
11-20-2002, 06:00 PM
california why?
*turns around with blunt object in hand*
11-20-2002, 07:05 PM
Here is a site with formulas. It is about the physics of paintball and he explains his formulas. bjjb99, you could see if your magnus effect formulas match his theory. You will find that he
claims that the magnus effect must be determined experimentally. He does give a "guess", I believe.
There is also an applet that attempts to predict the trajectory of a spinning paintball.
11-20-2002, 08:10 PM
I've read significant portions of the site you mentioned before and have found it interesting if somewhat challenging to wrap one's mind around. I've computed the Reynolds number for a 0.68 inch
diameter ball moving at 280 fps through air, and it comes out around 1.03x10^5. Judging from the graphs on the site you mentioned, this results in a positive lift coefficient under all
conditions, instead of the reverse Magnus effect the site describes. It ends up following near to the curve fitted to the triangular shaped datapoints. Thus, I don't necessarily agree with the
anti-Magnus conclusions the author describes.
Using test 101 data, the value for V/U used on that site would be (1.25/85.34), or 0.015. This puts our data point darn close to the left edge of the graph shown, and thus the lift (read
"Magnus") coefficient is very very small to the point that it is difficult to even estimate it from the plot. I'd estimate the coefficient at around 0.02.
The site states that the Magnus effect force is characterized in a similar fashion to that of the drag force, and thus I used the drag equation to determine the Magnus force with a Magnus
coefficient of 0.02. It works out to be around 0.04 newtons if I did my math right, which is about 2.5 times larger than the value I calculated using the equation on the
carini.physics.indiana.edu site. Given the difficulty in estimating the Magnus coefficient from the plot, I think the result is close enough to say that either treatment is "good enough" for a
first cut look at this phenomenon. I think the two methods agree with each other sufficiently to convince me that my original calculations are a reasonable approximation of the amount of
displacement to be expected in test 101 (i.e. unmeasurable given the spatial sampling of the pictures taken during testing).
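The Reynolds number and spin-ratio figures quoted above are easy to verify. A quick sketch using the post's numbers; the dynamic viscosity of air is my assumption (any textbook value near 1.8e-5 Pa·s gives the same order of magnitude):

```python
rho = 1.293       # air density, kg/m^3
v   = 85.34       # ball velocity, m/s
D   = 0.0173      # ball diameter, m
mu  = 1.85e-5     # dynamic viscosity of air, Pa*s (assumed textbook value)
vc  = 1.25        # surface speed from spin, m/s

Re = rho * v * D / mu     # Reynolds number
spin_ratio = vc / v       # the V/U parameter on the lift-coefficient plot

print(f"Re = {Re:.3g}, V/U = {spin_ratio:.3f}")
```

Re lands right around 1.0e5, matching the post's 1.03e5, and V/U comes out at roughly 0.015, i.e. hard against the left edge of the lift-coefficient plot, which is why the coefficient is so hard to read off.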
I am curious what this conclusion means in terms of the Flatline barrel system... I mean, I've seen floaters that just go and go and go coming out of that barrel, and I've seen horrendous curve
balls when a Flatline gun is held sideways. I suppose the spin induced on Flatline-launched paintballs is much much greater than the mere 23 revs per second that test 101 exhibited, pushing the V
/U point to the right and increasing the Magnus coefficient significantly.
11-21-2002, 12:04 PM
Re: Paintball Spin Physics - Getting to the final Answer
Well, I've stared at the pictures until I'm cross-eyed and there seems to be very little on which I can reach any valid conclusions that would fit into the ongoing discussion.
I cannot see anything here that addresses either the issue of closed bolt versus open bolt or what effect internal ballistics has on the flight of the ball. However, I did do a bunch of shooting
through powdered barrels as Tom suggested and arrived at the same conclusions that I reached through reading the scuff marks on paintballs shot through an un-powdered barrel. Results do seem to
indicate that the ball does in fact distort or compress linearly from the forces of acceleration, causing it to tighten against the bore of the barrel for a period of time when launched.
My tests were done with a Blazer (closed bolt) operating with 400 to 450 psi input to the gun and firing Pro-ball paint at velocities of between 305 fps and 220 fps.
Only balls that demonstrated a consistent, loose fit in the barrel were used and each was blown through the unpowdered barrel with breath alone to relatively ensure consistent fit in the barrels
before moving to higher pressures for launch.
In addition, each ball was chambered manually to ensure consistent positioning of the ball in the chamber prior to launch. Positioning of the ball was only made relative to the face of the bolt
and did not address the position of the seam in relation to the axis of the bore.
Two barrels used: One with a straight bore of .690 and the other with an elliptically honed barrel, also with .690 base bore size but the center section of the barrel tapers out to .694 at 6" from
the chamber and back down to .690 at 10" from the chamber. I did not have any Desenex brand powder on hand but Gold Bond, medicated powder seems close in consistency to Desenex.
The results: Almost every shot fired wiped the powder from the complete perimeter of the bore during the acceleration in the first part of the bore and then showed only a two-point track in the
front half of the barrel. The length of time that the ball made full perimeter contact with the bore decreased as velocity was lowered. Also noted that the transition from full perimeter contact
to a two-point track was more abrupt in the elliptically shaped barrel.
To further verify my results, the target was a bed sheet setup to catch the paint so I could read the balls as well as the barrel bore. Looking at the balls showed that the powder that was wiped
from the bore was built up on the ball well forward of the center line of the ball with a wide "scuff" mark of embedded powder. Thus indicating a significantly wide contact of the surface of the
ball with the surface of the barrel bore.
Also, I did not see anything to indicate that any "spin" was happening inside the bore but I was not looking real close in that regard. My focus was on "data" that would indicate whether or not
the ball would upset under acceleration. All indications to me are that the ball does in fact change its shape to a somewhat cylindrical form when pressure is applied and acceleration begins and
the amount of distortion is relative to the velocity achieved. Your results may vary with different types of valving that might generate different rates of acceleration or different blast impact
Now, to address some of the issues "on the table":
"Spin is the only major factor accounting for paintball inaccuracy. Promoted by Pbjosh"
In essence, a true statement IMHO. In this regard, I can only go by what I've seen which indicates to me that the less spin seen on a ball in flight, the more likely it is to go where it is
intended. Less spin = tighter shot groups.
"Closed bolt operation has an effect on overall accuracy. Promoted by Glen Palmer." (two N's for this Glenn please :p
Actually, my contention is that closed bolt firing gives me the best opportunity to tune my gun (the whole gun) to maximize the effectiveness of the shot. Without appropriate setup and tuning,
closed bolt firing is not likely to be any more effective than any other mode of operation.
"The paintball flight is subject to "knuckleball effect"."
It certainly is and I'd bet that we have all seen it at one time or another.
"Spin may or may not be possible because of the liquid in the paintball."
Spin is certainly possible but in relation to the effect on the flight of a paintball, is this discussion based on the lateral spin as would be imparted by rifling a barrel or the random spin
generated by numerous other factors ??
"Barrels have something to do with accuracy."
This is one of the few points that I can see as being addressed in the "data" posted. The data in both pictures of "shot patterns" indicates that different barrels will achieve different shot
groups. In both "shotpattern" jpgs, the Smart Parts barrel shows a tighter shot group than either, the Crown Point or Rail ? barrel. However, the tightest grouping seems to be shown in the lower
left of the "shotpatterns2" jpg and I cannot make out what that barrel is.
"Seams have something to do with accuracy."
I believe that the seams themselves have less to do with overall accuracy than the size/condition and position of the seam on the ball, which can and does affect the consistency of the flight
"Balls distort with the impact of the air blast."
A certain amount of distortion seems to be a fact. However, either as a result of impact of the air blast or the "G" forces of acceleration?
"Balls distort when leaving the barrel."
Let's hope not. Hopefully, the forces that cause distortion are relieved before the ball leaves the barrel. Although, the "smoke" tests seem to indicate a very interesting shape to the ball after it leaves the barrel.
The "smoke" shots also seem to show that there is not a collum of air being forced out ahead of the ball but that there is a small cushion of air around the ball. This brings up a question or
three about the barrel and valving used for the smoke shots. Automag valving? Length of barrel? Venting in the barrel ?
The biggest problem that I run into with all of this is that I don't/can't use scientific calculations to prove out what I see and can measure. When you guys start spouting all sorts of scientific jargon and presenting formulas that often seem to me to be lacking in variables and such, you put the conversation well out of reach for me. I learned the old-style math where 2+2=4, so I am relegated to believing what I see. When the things that I do can demonstrate consistent velocity readings and a tight shot group, I'm pretty well convinced that I'm doing something right. Then when something is changed and the results change too, it is quite easy to see whether or not the change was an improvement. Common sense and K.I.S.S.-principle engineering have served me quite well for a long time. Unfortunately those things are just not accepted here.
Originally posted by AGD
Currently on the table:
Spin is the only major factor accounting for paintball inaccuracy. Promoted by Pbjosh
Closed bolt operation has an effect on overall accuracy. Promoted by Glen Palmer.
The paintball flight is subject to "knuckleball effect".
Spin may or may not be possible because of the liquid in the paintball.
Barrels have something to do with accuracy.
Seams have something to do with accuracy.
Balls distort with the impact of the air blast.
Balls distort when leaving the barrel.
Ok, let's have a pointed discussion on the subject. For those of you just reading this, this thread is a continuation of the "closed bolt" thread found here.
original thread
11-21-2002, 12:09 PM
Speaking of the Flatline barrel...
Tom, do you know if anyone other than Tippmann has performed tests to measure the amount of spin imparted to a paintball fired from the Flatline barrel system? It would give us another set of
datapoints to work with for a barrel that is supposed to put spin on the ball.
Got sidetracked last night and didn't get a chance to look at test 114. I'll see if I can work on it this evening instead. :)
11-21-2002, 02:23 PM
Thanks for doing the math. My college physics class was a LONG time ago, and I have never had any practical application for what I learned (I took the class for fun; I know, I'm strange). I can't even factor equations anymore! As the site I gave you stated, the Magnus effect must be determined experimentally. Maybe we will get enough data to determine what it is.
Based on bjjb's calculations (I'm using his) we have the following answers (for test 101):
1. What is the ball's RPM? = 23
2. Is there only one spin axis or does it corkscrew on two spin axis? = One axis
3. Does the spin maintain, speed up or slow, down range? = Maintain*
4. What is difference in surface speed? = 8.2 fps
* = I do not have the equipment to measure the spin rate (I couldn't find anywhere that bjjb did) of the downrange paintballs. I also do not know what the difference is in spacing between the
uprange and downrange strobes. Based on my observations, the downrange strobes are approx. twice the distance of the uprange strobes. Based on my observations the spin rate is the same downrange.
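The 8.2 fps surface-speed figure above can be sanity-checked with a little geometry. A quick sketch — note that the .68 inch ball diameter is my assumption (the standard paintball caliber), not a value stated in the post:

```python
import math

# Sanity check of the numbers above. The .68 inch ball diameter is an
# assumption (standard paintball caliber), not a value from the post.
spin_rate = 23.0                  # rotations per second, from test 101
radius_ft = (0.68 / 2.0) / 12.0   # ball radius in feet

# Surface speed at the equator of the spinning ball.
surface_speed = 2.0 * math.pi * radius_ft * spin_rate

# Opposite sides of the ball move in opposite directions relative to
# the airstream, so the top/bottom difference is twice the surface speed.
difference = 2.0 * surface_speed
print(round(difference, 1))  # -> 8.2 (fps), matching the figure above
```

With those inputs the two sides of the ball differ by about 8.2 fps, which agrees with the value quoted.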
11-21-2002, 03:17 PM
Originally posted by hitech
3. Does the spin maintain, speed up or slow, down range? = Maintain*
* = I do not have the equipment to measure the spin rate (I couldn't find anywhere that bjjb did) of the downrange paintballs. I also do not know what the difference is in spacing between the
uprange and downrange strobes. Based on my observations, the downrange strobes are approx. twice the distance of the uprange strobes. Based on my observations the spin rate is the same downrange.
Exactly the problem I ran into...
The balls are chronoed at around 280 fps and travel around 6 inches between successive strobe flashes for the near-barrel case in test 101. This gave me a strobe rate of one flash every 1.8 milliseconds.
If we use the same strobe rate for the downrange image, we get a ball travel of around 10 inches in the same 1.8 milliseconds, for a ball velocity of 463 fps... which of course makes no sense...
no ball is going to miraculously speed up to dangerous velocities during the downrange portion of its flight, no matter how many elves are behind it pushing with all their might. Thus, the strobe
rate downrange must be different.
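The arithmetic behind that strobe-rate estimate is easy to reproduce. A small sketch using the rounded figures from the post (6 and 10 inches of travel, 280 fps):

```python
# Reproducing the strobe arithmetic from the post, using its rounded inputs.
ball_speed_fps = 280.0           # chronographed velocity near the barrel
near_spacing_ft = 6.0 / 12.0     # ball travel between flashes, near barrel
far_spacing_ft = 10.0 / 12.0     # ball travel between flashes, downrange

# Strobe interval implied by the near-barrel image.
flash_interval = near_spacing_ft / ball_speed_fps
print(round(flash_interval * 1000, 2))  # -> 1.79 ms, the ~1.8 ms above

# Naively reusing that interval downrange implies an impossible speed-up
# (the post quotes ~463 fps from its exact measurements), which is why
# the downrange strobe rate must have been different.
implied_far_speed = far_spacing_ft / flash_interval
print(round(implied_far_speed))  # -> 467 fps with these rounded inputs
```

The slight difference from the 463 fps quoted in the post comes from the post using its exact measured spacings rather than the rounded 6 and 10 inch values here.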
We don't know the ball velocity in the downrange images, as it is likely to have slowed down significantly from the 280 fps chronoed value. All we have is a distance of around 10 inches between
strobes. I suppose one could estimate what the speed should be by using air resistance, but I think that's just adding more errors into the mix.
So I've basically ignored the downrange data until I can get a handle on how fast the ball is moving at that point, or else get a value for the strobe rate from someone. Without one of those two values I think we're just shooting in the dark for the downrange data.
Tom, did the strobe unit have a readout for the strobe rate? I'm assuming it didn't since you didn't put those numbers up here for us to utilize. How did you go about figuring the downrange
velocity or strobe rate back when the test was conducted?
11-21-2002, 11:10 PM
Tippmann, as far as I know, just did empirical testing and never measured anything.
The ball is moving slower downrange; I would have to try and find the exact FPS. You can get close by shooting over a chrono at 40 ft with a ball that leaves the barrel at 280.
Now that I think about it we most likely spaced the strobes wider apart so we could measure the spin in the same place.
Glenn (sorry about the n),
Interesting that you scraped off most of the powder half way down the barrel. Did you dry fire the excess powder out first? We have never seen that happen in our tests; I might have to repeat them. We have seen the ball on initial launch bang sideways into the bore, but most of the time we just got two streaks.
Yes we are departing from the closed bolt issue temporarily. We are trying to come to terms with the influences on the ball so we can sort them out and rank them.
11-22-2002, 08:34 AM
On Powder testing
What barrels are you guys using and what paint? I would be interested to see these tests with a tight barrel to paint match. Try turning the bead so that it is parallel to the direction of the barrel, i.e. it would leave 2 streaks down the bore, and find a match such that the entire ball was touching as much as possible around the
I get my best accuracy when shooting paint with this fit characteristic. The consistency is best I think with your fit method, but mine takes you from +/- 1/2 fps to +/- 3/4 fps in most of the better guns.
I will dig around and see if I have the equipment to duplicate any of this testing, but we have a tourney this weekend.
You guys (Glenn and Tom) put a lot of thought and work into this, thanks.
-rob from clemson
11-22-2002, 11:59 AM
You could (if you have the supplies) use video cameras to video the ball as it moves downrange. Just set them up side by side. Then afterwards you could just measure the spin in slow motion, like what I did with the egg.
I am going to try and acquire 4 stock barrels for my 98c. I have heard of a bad flatline idea that some people have done. If you polish the barrel and then put a piece of tape along the top of the barrel it will increase range a bit. I was thinking of using tape on the sides and bottom to see how much change there is. I'll give you grouping size and how far the groupings are from the no-tape grouping. Plus I'll get 4 stock barrels to play with.
11-22-2002, 12:02 PM
Hey crimson:
check out the official data thread in the deep blue. Those pics are posted from like 1992 :-)
11-22-2002, 12:03 PM
ehhh, oops.
11-22-2002, 01:19 PM
A normal everyday videocamera does not have a high enough frame rate to capture the spinning paintball's behavior. A videocamera captures thirty full frames per second (60 fields per second), or
one full frame every 0.033 seconds. A ball moving 280 fps will travel 9 feet in that amount of time... quite possibly clear out of the camera's field of view. The ball in test 101 was spinning at
a little over 20 rotations per second. To capture this spin you really want to sample the scene (i.e. take a snapshot or a video frame) at least 10 times as fast, or 200 images per second. The
strobe used in test 101 resulted in a capture rate of around 550 images per second, which is enough to see and measure the spin of the ball.
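The frame-rate argument above can be put in a few lines. A sketch of the same arithmetic (the 10x oversampling factor is the rule of thumb from the post, not a hard requirement):

```python
ball_speed_fps = 280.0   # ball velocity in feet per second
video_rate = 30.0        # full frames per second for standard video
spin_rate = 20.0         # rotations per second, roughly, from test 101

# Distance the ball covers between consecutive video frames -- likely
# clear out of an ordinary camera's field of view.
travel_per_frame = ball_speed_fps / video_rate
print(round(travel_per_frame, 1))  # -> 9.3 feet

# Oversample the rotation by ~10x to actually resolve the spin.
required_rate = 10.0 * spin_rate
print(required_rate)  # -> 200.0 images per second, minimum
```

This is why the ~550 images-per-second strobe rate used in test 101 was comfortably fast enough, while consumer video is not even close.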
If you want to use video, it's going to have to be high speed video on the order of 500 frames per second. As has been stated earlier in this thread, such videocameras are not cheap; they can
cost upwards of tens to hundreds of thousands of dollars depending on the size of your image frame, the amount of collection time you want (they suck up memory like you wouldn't believe), the
maximum framerate, and so on.
11-22-2002, 01:31 PM
I doubt ill be able to do it but my step father is a director of photography. Next time I'm in LA he might be able to hook me up with something really nice.
11-22-2002, 02:50 PM
Originally posted by AGD
Tippmann, as far as I know, just did empirical testing and never measured anything.
Thanks for "empirical",, I needed that. The definition that I found is: "Based on obsevation or experiment and relying on practical experience rather than theory." Good to have a way to
accurately define one's views. Fortunately, it does not preclude the use of measuring devises like a pretty good "calibrated eyeball". ;)
Interesting that you scraped off most of the powder half way down the barrel. Did you dry fire the excess powder out first? We have never seen that happen in our tests; I might have to repeat them. We have seen the ball on initial launch bang sideways into the bore, but most of the time we just got two streaks.
Actually, I did not dry fire air through the barrels before shooting paint through it, but I did blow shop air through them first. I was careful to minimize the amount of powder in the barrel and maintain a level of consistency by thorough cleaning and re-powdering for each test shot. I even tried smoke-coating the inside of the bore with soot from an oil lamp (lamp black) and saw the same basic results.
I kind of figured that you would have seen only the "two streaks" results in your tests, and I think I know why. However, there are a couple more things that I want to try/test before I stick my neck out with a plain-language, "empirical" definition. :D
Yes we are departing from the closed bolt issue temporarily. We are trying to come to terms with the influences on the ball so we can sort them out and rank them.
AGD
Isn't the most dominating influence on the effective accuracy of a paintgun the nature of paintballs themselves, and the fact that they are relatively inconsistent in size, shape and seam position in every batch of balls encountered? What is actually to be gained by this sorting and ranking of the "influences on the ball"? Aren't we really looking for a definition of what it takes for a paintgun to make the most of what we have to work with (paintballs and velocity limits) and be forgiving of the inconsistencies that it is fed?
11-22-2002, 05:35 PM
I have been a big fan ever since my faculty advisor here had one of your typhoons. Then when I got into cockers and used your components there, I was impressed.
I posted earlier about what I thought about mechanical consistency and projectile systems. I won't recap much of it here other than to hit a highlight or two.
If we can put to rest the idea that a paintball gets some magic spin in most conditions (i.e. not a flatline barrel/zbody/cooper bolt), then I think we should look at the following:
A lot of people don't fit the paint to the barrel in the best way.
Paintball guns should fire the ball under the most consistent, repeatable circumstances every time.
If you read my earlier 2 posts in this thread there is a lot more said on both these issues.
-rob from clemson
11-22-2002, 11:35 PM
There is a good reason to rank them. If you find out that a force acts on the ball ONLY after it leaves the barrel and that the barrel has no influence on this force, this is important.
If that force makes up a large percentage of the inaccuracy of a marker, then whatever you do in the gun could not make a significant improvement.
11-23-2002, 04:13 AM
Is there a relationship between the amount and direction of spin and the impact point?
Do the balls that are spinning faster tend to land further from the target? If so, is the effect consistent? How about the direction of the spin? Do balls that are spinning to the right consistently hit to the right of the target?
What about balls with no spin? do they always hit the target?
What about velocity drop off? do spinning balls lose velocity faster or slower than non-spinning balls?
While all this is very interesting and fun to think about... don't most people just carry an extra pod or two of paint onto the field? Perhaps an accuracy vs. rate of fire test would be more useful.
Finally... does AGD plan on doing any more of these tests?
Sorry I cannot contribute more than just questions at this time.
11-23-2002, 03:46 PM
I haven't got a deep understanding about paintballs, but I know a little about aerodynamics and shooting, particularly round ball (lead) shooting. It seems that the fact that Tippmann flatline barrels will curve a paintball when held sideways is prima facie evidence of the Magnus effect at work. The Clairaut theorem for rotating fluids should also come into play, which would mean that a paintball will distort into an ellipsoid if spun fast enough, increasing the surface speed and the lift produced. If the ball is in solid contact with the (flatline) barrel and not slipping (theoretical here) the maximum imparted spin should be on the order of 1500 rpm. The real number is undoubtedly lower, but that does represent more than an order of magnitude beyond the 26 rpm noted above. Has anyone compared the velocity drop downrange between a flatline-launched ball and a standard one? The flatlined ball should slow down much faster than a standard one, as the lift imparted by the rotation induces extra drag. So, the flatline ball may drop less at, say, 45 feet, but it is most likely moving slower when it gets there. The only way I can see that spin would be of real benefit would be if the spin is imparted with the spin axis along the same vector as the flight path, as rifles do. The effect here is to curve the relative wind around the ball, effectively decreasing the ball's CD. The egg/paintball analogy isn't very good BTW; the egg has a thin fluid (the white) surrounding a thicker fluid (the yolk). The yolk is suspended along the long axis by energy-absorbing elastic bands, which makes it hard to spin along the long axis. Try laying the egg on its side and spinning it that way (go ahead and try it, I'll wait). You'll find that not only can you impart spin that way, but it sustains the spin nicely as well.
In regards to ball deformation, that is one thing that I'm afraid I have always taken for granted. The only thing different about paintballs and lead balls is that the paintballs may return to
spherical shape once the pressure goes down. I was surprised to see the Angel still keeping positive pressure as the ball leaves the barrel. I can envision all kinds of bad things that this would
cause. The Dark Angel's chart is more what I expected.
Manned Flight Simulation
Naval Air Warfare Center, Aircraft Division
NAS Patuxent River MD
11-24-2002, 10:25 AM
Originally posted by Thurman
If the ball is in solid contact with the (flatline) barrel and not slipping (theoretical here) the maximum imparted spin should be on the order of 1500 rpm. The real number is undoubtedly lower,
but that does represent more than an order of magnitude beyond the 26 rpm noted above.
Careful with the units there. I'm sure you mean 1500 and 26 rotations per second, not per minute.
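With the units fixed, Thurman's ~1500 rotations-per-second ceiling is easy to reproduce under the no-slip assumption. A sketch — the ball diameter is again my assumed standard .68 inch caliber, not a figure from the thread:

```python
import math

# Upper bound on spin from a rifled bore, assuming the ball's surface
# rolls along the barrel without slipping. The .68 inch diameter is an
# assumed standard caliber, not a figure from the thread.
ball_speed_fps = 280.0
circumference_ft = math.pi * 0.68 / 12.0

max_spin = ball_speed_fps / circumference_ft  # rotations per second
print(round(max_spin))  # -> 1573, i.e. "on the order of 1500" as quoted
```

The measured ~23-26 rotations per second from test 101 is thus nearly two orders of magnitude below this theoretical no-slip ceiling.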
Originally posted by Thurman
The egg/paintball analogy isn't very good BTW, the egg has a thin fluid (the white) surrounding a thicker fluid (the yolk). The yolk is suspended along the long axis by energy absobing elastic
bands, which makes it hard to spin along the long axis. Try laying the egg on it's side and spinning it that way (go ahead and try it, I'll wait). You'll find that not only can you impart spin
that way, but it sustains the spin nicely as well.
An egg is resistant to transients in rotational force. It is entirely possible to get a raw egg to spin on a table, particularly if one applies a gradual increase in the force applied. As an interesting note, spin a raw egg and, while it's spinning, quickly stop it from spinning and then remove your hand immediately; the egg will resume spinning because of the decoupled nature of the white/yolk and the shell.
How does this apply to paintball? Not sure if it does. It is difficult to estimate whether the fill/shell of a paintball will be sufficiently decoupled at the rotational accelerations we're
looking at. Going from zero to as much as 1500 rotations per second in only a few milliseconds is one heck of a transient force. In this regime, the shell/fill may well act like a raw egg does at
lower transient levels.
I've got a couple of ancient (1980s vintage) paintballs sitting in a shot glass. The fill has completely separated in them and the shells are clear. I can spin them by hand and see that the fill
remains coupled to the shell at the low end of the transient force scale; so at least at the low end of things a paintball does not behave like a raw egg.
11-24-2002, 02:00 PM
Originally posted by AGD
There is a good reason to rank them. If you find out that a force acts on the ball ONLY after it leaves the barrel and that the barrel has no influence on this force, this is important.
Well now, if the barrel itself was the only thing responsible for getting the ball moving in the general direction of downrange, that could make a little more sense to me. The barrel is just one of the forces to be dealt with. It is not going to change the actions of other forces, but what goes on inside the barrel will always be a factor in the results of the influence of the several forces that act on a paintball in flight. If not, anything or everything that could move a paintball to desired velocities would show the exact same results in the size and shape of a shot group as long as the external conditions stayed the same. I kind of doubt that many people will buy into that hypothesis, regardless of how it comes out on paper.
Originally posted by AGD
If that force makes up a large percentage of the inaccuracy of a marker, then whatever you do in the gun could not make a significant improvement.
Now I have to ask: at what point does a "large percentage" become so dominant in the equation that the balance of the percentage should be overlooked? Also, what is the percentage of improvement that must be gained in order for it to be deemed "significant"? :confused:
For the sake of argument, let's say you come up with a determination that some external force is (let's say) 80% of the total influence on the flight characteristics of a paintball. Now, if changes to the barrel and/or valving made to a piece of equipment demonstrated a change in the shot group size of (let's say) 10%, would that be considered "significant"? How about 5%, or even just 1%? To my way of thinking, 10% is extremely significant, and I'm elated when I can get even 1% improvement from something I might do. As an analogy: a professional drag racer might spend thousands or tens of thousands of dollars for as little as .1% improvement in his elapsed time for a 1/4 mile. The little things don't just ADD up, they multiply up.
11-24-2002, 02:30 PM
Originally posted by bjjb99
Careful with the units there. I'm sure you mean 1500 and 26 rotations per second, not per minute.
Whoa geez... NEVER post before the coffee's up! Yes, it's 1500 rotations per second, not minute. And, as a way of confusing the issue still more...
Anyone else ever own an AT-85? Nice, big, fat-holed barrel. The kind balls just fall right through. Very accurate. The difference was a neoprene gland in the breech end of the barrel that replaced the earlier screw-in sizing plugs. Apparently, a good barrel-to-paint fit didn't make much difference in that gun.
older than dirt
stingray (yup, still got it)
spyder compact
just bought a sentinel
thinking about a mag
11-24-2002, 03:11 PM
If somebody would send me a rifled and straight bore barrel and a way to secure a marker so that it won't move when fired, I would be perfectly willing to test the grouping differences between rifled and straight bore. I would use an Automag with a Max-Flo air tank. So if I can get the stuff I will do the test.....
11-25-2002, 02:31 AM
If a barrel did its job perfectly every time and 100% of the spread was due to external forces, then it would be a waste of time to try and improve it.
The problem, as I see it, is that people spend 300 dollars for a barrel not knowing if it will make a 1% or 50% difference.
In general I have to ask you: how much of an increase in accuracy have we really seen in 15 years? Given the fact that barrel prices have increased by 10x and are now honed and sized to perfection, what are we getting for the money?
While you may be willing to spend big dollars on a 1% improvement, most will not, or at least would like to know what they are getting.
- J. London Math. Soc., 1995
"... One of the main goals in the study of the automorphism group Aut {Jt) of a countable, recursively saturated model Jt of Peano Arithmetic is to determine to what extent (the isomorphism type of)
Jt is recoverable from (the isomorphism type of) Aut(^). A countable, recursively saturated model Jt of PA ..."
Cited by 1 (0 self)
Add to MetaCart
One of the main goals in the study of the automorphism group Aut(M) of a countable, recursively saturated model M of Peano Arithmetic is to determine to what extent (the isomorphism type of) M is recoverable from (the isomorphism type of) Aut(M). A countable, recursively saturated model M of PA is characterized up to isomorphism by two invariants: its first-order theory Th(M) and its standard system SSy(M). At present, there seems to be no indication of how to recover any information about Th(M) from Aut(M), with the exception of whether or not Th(M) is True Arithmetic. We define the notion of arithmetically saturated in Definition 1.7; however, a model M of PA is arithmetically saturated if and only if it is recursively saturated and the standard cut is a strong cut. The following is our main theorem. THEOREM. Suppose that M1 and M2 are countable, arithmetically saturated models of PA such that Aut(M1) ≅ Aut(M2). Then SSy(M1) = SSy(M2). In the Theorem, it suffices to assume that M2 is just recursively saturated. For, as shown by Lascar [8], if M1 and M2 are countable, recursively saturated models of PA
Abstract. We observe that the classification problem for countable models of arithmetic is Borel complete. On the other hand, the classification problems for finitely generated models of arithmetic
and for recursively saturated models of arithmetic are Borel; we investigate the precise complexity of each of these. Finally, we show that the classification problem for pairs of recursively
saturated models and for automorphisms of a fixed recursively saturated model are Borel complete. 1.
"... ABSTRACT. We study some generalizations of the notion of a definable type, first in an abstract setting in terms of ultrafilters on certain Boolean algebras, and then as applied to model theory.
The notion of a weakly definable ultrafilter or type was developed by one of the authors [K] in a study o ..."
Add to MetaCart
ABSTRACT. We study some generalizations of the notion of a definable type, first in an abstract setting in terms of ultrafilters on certain Boolean algebras, and then as applied to model theory. The
notion of a weakly definable ultrafilter or type was developed by one of the authors [K] in a study of models of arithmetic. It generalizes the notion of a definable type; and just as this latter
notion has interesting properties in a much more general context, especially in stability theory, it seemed worthwhile to investigate weakly definable types in a general model-theoretic setting. A
goal of this paper is to present the results of our investigations on these lines. It is natural to ask why such notions turn up both in arithmetic and in elementary stability theory. Ressayre, for
example, in a review [R] of Gaifman's paper [G], says...although the notion of definable type was introduced by Gaifman in the study of PA, which is the most unstable theory, this notion turned out
to be a fundamental one for stable theories. And minimal as well as uniform types also correspond more or less to properties important in the stable case. I expect | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1038888","timestamp":"2014-04-17T06:50:02Z","content_type":null,"content_length":"19460","record_id":"<urn:uuid:e1aceee4-1597-468a-ad86-e17bcfcf50a8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Berkeley, CA
Find a Berkeley, CA Calculus Tutor
...Other technical subjects that I am available to tutor are middle-school and high-school and basic undergraduate math (algebra, geometry, pre-calc, calculus), and MATLAB programming. I am a
native Spanish speaker, so I can tutor Spanish and English as a Second Language. Additionally, I can tutor any of the technical subjects that I mentioned above in Spanish as well as English.
15 Subjects: including calculus, Spanish, geometry, ESL/ESOL
...As a high school student, I was a private math tutor for a 5th grader, and I also regularly helped students in our school's after school tutoring program. I have a lot of experience helping
students of all levels! In addition to teaching the material, I also like to emphasize study strategies and skills.
27 Subjects: including calculus, chemistry, physics, geometry
...I have years of experience tutoring and teaching classes related to undergraduate and MBA level courses in business. I have taught economics, operations research and finance related courses. I
have acted as a tutor for MBA students in every course they took in their graduate school curriculum.
49 Subjects: including calculus, physics, geometry, statistics
...At different universities I taught my own courses that built on linear algebra. I also taught groups with aspiring teachers in linear algebra.
"Great Tutor" - Elizabeth from Moraga, CA: Andreas is a very thorough and patient tutor. After tutoring with Andreas, my test scores went up by two whole letter grades in college-level Linear Algebra, which led to an overall B grade in the class.
41 Subjects: including calculus, geometry, statistics, algebra 1
...I began tutoring for these tests as an instructor for a couple of premier Test Prep companies. In the 10+ years since then, I honed my skills and knowledge by helping hundreds of students
one-on-one. I can help students improve their test scores whether they are quite new to the test, already had a lot of prep, or somewhere in between.
14 Subjects: including calculus, statistics, geometry, algebra 2
A recursive descent parser with an infix expression evaluator
March 20th, 2009 at 6:01 pm
Last week I wrote about some of the inherent problems of recursive-descent parsers. An elegant solution to the operator associativity problem was shown, but another problem remained: the unwieldy handling of expressions, mainly performance-wise.
Here I want to present one alternative to the pure-RD approach, and that is intermixing RD with another parsing method.
The code
I’ll begin by pointing to the code for this article. It contains several Python files and a readme.txt explaining what is what. Throughout the article I’ll present short snippets from the code, but
it’s encouraged to run it on your own. The code is self-contained and only requires Python (version 2.5) to run.
Extending the grammar
To illuminate some of the points I’m presenting better, I’ve greatly extended the EBNF grammar we’ll be parsing. Here’s the new grammar (taken from the top of the rd_parser_ebnf.py in the code .zip):
# EBNF:
# <stmt> : <assign_stmt>
# | <if_stmt>
# | <cmp_expr>
# <assign_stmt> : set <id> = <cmp_expr>
## Note 'else' binds to the innermost 'if', like in C
# <if_stmt> : if <cmp_expr> then <stmt> [else <stmt>]
# <cmp_expr> : <bitor_expr> [== <bitor_expr>]
# | <bitor_expr> [!= <bitor_expr>]
# | <bitor_expr> [> <bitor_expr>]
# | <bitor_expr> [< <bitor_expr>]
# | <bitor_expr> [>= <bitor_expr>]
# | <bitor_expr> [<= <bitor_expr>]
# <bitor_expr> : <bitxor_expr> {| <bitxor_expr>}
# <bitxor_expr> : <bitand_expr> {^ <bitand_expr>}
# <bitand_expr> : <shift_expr> {& <shift_expr>}
# <shift_expr> : <arith_expr> {<< <arith_expr>}
# | <arith_expr> {>> <arith_expr>}
# <arith_expr> : <term> {+ <term>}
# | <term> {- <term>}
# <term> : <power> {* <power>}
# | <power> {/ <power>}
# <power> : <power> ** <factor>
# | <factor>
# <factor> : <id>
# | <number>
# | - <factor>
# | ( <cmp_expr> )
# <id> : [a-zA-Z_]\w+
# <number> : \d+
As you can see, this simple calculator is starting to approach a real programming language, as it supports a plethora of mathematical and logical expressions, as well as conditional statements (if
... then ... else) and assignments. I’ve added a simplistic "prompt" so you can experiment with the calculator from the command line:
D:\zzz\rd_parser_calc>rd_parser_ebnf.py -p
Welcome to the calculator. Press Ctrl+C to exit.
--> set x = 2 + 2 * 3
--> set y = (x - 1) * (x - 2)
--> if y > x then set y = x else set x = y
--> x
--> y
--> x ** ((y - 10) * -3)
--> ... Thanks for using the calculator.
Note that since a separate expression "level" is required for each precedence, the resulting code is somewhat repetitive. I’ll get back to this point later on.
Evaluating infix expressions
An alternative method of evaluating expressions is required, then. Luckily, such a need arose early enough (in the 1950s and 60s, when the first compilers and interpreters were constructed) and some luminaries examined this problem in detail. In particular, Edsger W. Dijkstra proposed an efficient and intuitive algorithm for converting from infix notation to RPN, called the Shunting Yard algorithm.
I will not describe the algorithm here, as it's been done several times already. If the Wikipedia article is not enough, here's another good source (which I've actually used as the basis for my implementation).
The algorithm employs two stacks to resolve the precedence dilemmas of infix notation. One stack is for storing operators of relatively low precedence that await results from computations with higher
precedence. The other stack keeps the result accumulated so far. The result can either be a RPN expression, an AST or just the computed result (a number) of the computation.
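To make the two-stack idea concrete, here is a minimal stand-alone sketch of my own (a simplified illustration, not the article’s code): it evaluates a pre-tokenized expression with left-associative binary operators only.

```python
# Minimal shunting-yard evaluator -- a simplified stand-alone illustration,
# not the article's implementation. Handles only left-associative binary
# operators over a pre-tokenized input list.
import operator

# operator symbol -> (precedence, function)
OPS = {
    '+': (1, operator.add),
    '-': (1, operator.sub),
    '*': (2, operator.mul),
    '/': (2, operator.truediv),
}

def infix_eval(tokens):
    """Evaluate a flat token list like [2, '+', 3, '*', 4]."""
    op_stack, res_stack = [], []

    def apply_top():
        # Pop one operator and two operands, push the result.
        fn = OPS[op_stack.pop()][1]
        rhs, lhs = res_stack.pop(), res_stack.pop()
        res_stack.append(fn(lhs, rhs))

    for tok in tokens:
        if tok in OPS:
            # Reduce while the stacked operator has >= precedence
            # (>= is what gives left associativity).
            while op_stack and OPS[op_stack[-1]][0] >= OPS[tok][0]:
                apply_top()
            op_stack.append(tok)
        else:
            res_stack.append(tok)   # numbers go on the result stack
    while op_stack:
        apply_top()
    return res_stack[-1]

print(infix_eval([2, '+', 3, '*', 4]))   # -> 14
```

A right-associative operator like ** would use a strict > in the reduction test; the full operator table in the article carries exactly that kind of per-operator flag.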
In my code, the file rd_parser_infix_exper.py implements a hybrid parser, using Shunting Yard to evaluate expressions and a top-level RD parser for statements and combining everything together. It’s
instructive to examine the implementation and see how things fit together.
The grammar this parser accepts is exactly the same as the pure RD EBNF parser presented earlier. The statements (assign_stmt, if_stmt, and stmt) are evaluated by traditional RD, but getting deeper
into expressions is done with an "infix evaluator", the gateway to which is the _infix_eval method [1]:
def _infix_eval(self):
    """ Run the infix evaluator and return the result.
    """
    self.op_stack = []
    self.res_stack = []
    # _infix_eval_expr (described below) consumes tokens and fills res_stack
    self._infix_eval_expr()
    return self.res_stack[-1]
This method prepares the Shunting Yard stacks and begins evaluating the expression, terminating with returning its results.
Note that the connection to the RD parser is seamless. When _infix_eval is called, it assumes that the current token is the beginning of an expression (just like any RD rule), and consumes as many
tokens as required to parse the full expression before returning the result.
The rest of the implementation (the _infix_eval_expr, _infix_eval_atom, _push_op and _pop_op methods) is pretty much a word-by-word translation of the algorithm described in this article into Python.
Adding expressions
Here’s a big advantage of this hybrid parser: adding new expressions and/or changing precedence levels is much simpler and requires far less code. In the pure RD parser, the operators and their
precedences are determined by the structure of recursive calls between methods. Adding a new operator requires a new method, as well as modifying some of the other methods [2]. Changing the
precedence of some operator is also troublesome and requires moving around lots of code.
Not so in the infix expression parser. Once the Shunting Yard machinery is in place, all we have to do to add new operators or modify existing ones is update the _ops table:
_ops = {
    'u-': Op('unary -', operator.neg, 90, unary=True),
    '**': Op('**', operator.pow, 70, right_assoc=True),
    '*':  Op('*', operator.mul, 50),
    '/':  Op('/', operator.div, 50),
    '+':  Op('+', operator.add, 40),
    '-':  Op('-', operator.sub, 40),
    '<<': Op('<<', operator.lshift, 35),
    '>>': Op('>>', operator.rshift, 35),
    '&':  Op('&', operator.and_, 30),
    '^':  Op('^', operator.xor, 29),
    '|':  Op('|', operator.or_, 28),
    '>':  Op('>', operator.gt, 20),
    '>=': Op('>=', operator.ge, 20),
    '<':  Op('<', operator.lt, 20),
    '<=': Op('<=', operator.le, 20),
    '==': Op('==', operator.eq, 15),
    '!=': Op('!=', operator.ne, 15),
}
I also find this table much more descriptive, in the sense of understanding how the operators relate to one another, than the parallel 9 methods required to implement them in the pure RD version.
Now here is the funny thing. My initial motivation for examining the infix expression hybrid was the allegedly poor performance of the RD parser for parsing expressions (as described in the previous
article). But the performance hasn’t improved! In fact, the new hybrid parser is a bit slower than the pure RD parser!
And the annoying thing is that it’s entirely unclear to me how to optimize it, since profiling shows that the runtime divides rather evenly between the various methods of the algorithm. Yes, the pure
RD parser requires the full precedence-chain of methods called for each single terminal, but the infix version has more method calls in total.
If anything, this has been a lesson in optimization, as profiling initially showed that the vast majority of the time is spent in the lexer [3]. So I’ve managed to optimize my lexer (by precompiling
all its regexes into a single large one using alternation), which greatly reduced the runtime.
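The single-regex alternation trick looks roughly like this (a generic sketch of the technique, not the article’s actual lexer):

```python
# Sketch of the "one big regex" lexer optimization: instead of trying each
# token pattern in turn, join them into a single alternation with named
# groups and let one finditer pass do the work. Generic illustration, not
# the article's actual lexer.
import re

TOKEN_SPECS = [
    ('NUMBER', r'\d+'),
    ('ID',     r'[a-zA-Z_]\w*'),
    ('OP',     r'\*\*|<<|>>|[-+*/()=<>]'),
    ('SKIP',   r'\s+'),
]
MASTER_RE = re.compile('|'.join('(?P<%s>%s)' % spec for spec in TOKEN_SPECS))

def tokenize(text):
    for m in MASTER_RE.finditer(text):
        if m.lastgroup != 'SKIP':
            yield (m.lastgroup, m.group())

print(list(tokenize('set x = 2 ** 10')))
```

The order of alternatives matters: longer operators like ** must be listed before their single-character prefixes so the engine matches them first.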
This article has presented an alternative to the pure recursive-descent parser. The hybrid parser developed here combines RD with infix expression evaluation using the Shunting Yard algorithm.
We’ve seen that the new code is more manageable for operator-rich grammars. If even more operators are to be added (such as the full set of operators supported by C), they’re much simpler
to implement, and the operator table is a single place summarizing the operators, their associativities and precedences, making the parser more readable.
However, this has not made the parser any faster. The pure-RD implementation is lean enough to be efficient even when the grammar consists of many precedence levels. This is an important lesson in
optimization – it’s difficult to assess the relative runtimes of complex chunks of code in advance, without actually trying them out and profiling them.
[1] It would be a swell idea to read the description of the algorithm and have an intuitive understanding of it from this point on in the article.
[2] Suppose we had no multiplication and division and had to add the term rule. In addition to writing the code for the new rule, we must modify the arith_expr rule to now call term instead of power.
[3] Which makes lots of sense, as it’s well known that lexing/tokenization is usually the most time-consuming stage of parsing. This is because the lexer has to examine every single character of the
input, while the parser above it works on the level of whole tokens.
Eric Olivier LEBIGOT (EOL) Says:
March 25th, 2009 at 00:15
Thanks for sharing!
Wesley Kerfoot Says:
May 25th, 2013 at 05:58
Nice article. I did something similar with a parser I’m working on, except i used the precedence-climbing algorithm instead of the shunting yard algorithm. It worked out pretty well as a way of
allowing a mixture of infix and prefix notation in my grammar.
Stephan Beal Says:
March 3rd, 2014 at 11:09
Thanks very much for this. A very informative article. While i would _love_ to sit down and rewrite my current project’s script engine based on this, it’s more work than i’d like to commit to… so
maybe i’ll have to start a new project to justify it :/. It would solve some of my precedence/associativity problems, though. | {"url":"http://eli.thegreenplace.net/2009/03/20/a-recursive-descent-parser-with-an-infix-expression-evaluator/","timestamp":"2014-04-19T19:39:36Z","content_type":null,"content_length":"30692","record_id":"<urn:uuid:770cbc29-cf45-46f2-b672-949e3063b83f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00584-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simply efficient functional reactivity
fundamental choices
They're both fruit
I think these are the most important sort of comparisons. It's easy to say "We're just like the FrobNozzle system [12], but with the extra blarg feature". The interesting comparison talks about the
impact of the fundamental choices you're making. For example, is purity important to the semantics of FRP in your system? Is it important to the optimizations? Or is the reason for purity just the
underlying use of Haskell? Is synchronous update easy for the programmer to understand? Does asynchrony mean their system is faster? Slower?
These questions are obviously harder to answer than most of what we write for related work. But they are the ones that make a real difference in the end.
Sam Tobin-Hochstadt
at Tue, 2008-04-08 00:08
FrTime comparison
Thanks. I unintentionally omitted FrTime from the Related Work section, and I'll add some remarks. It's perhaps something of an apples-and-oranges comparison, as the semantics are quite different:
pure & synchronous vs impure & asynchronous.
Conal Elliott
at Mon, 2008-04-07 21:42
FrTime (in PLT Scheme) is an FRP implementation that combines data-driven and demand-driven evaluation. I can see there is a difference between this paper's approach and FrTime's. But I'm not convinced the approach here is
at Mon, 2008-04-07 20:08
Thanks, Sam. I like these questions.
The most fundamental choice for me is simple denotational semantics. Determinacy (in spite of the multi-threaded implementation) and lack of side-effects both have a huge impact on the semantic
simplicity. The denotational semantics literally is as simple as
B a = Time -> a
E a = [(Time,a)]
and semantic definitions are as simple as
at (fmap f b) = f . at b
Beside simplicity, another fundamental semantic issue for me is supporting continuously (not just frequently) varying values. I don't know if FrTime had continuously varying behaviors or just
discrete ones. I think its temporal primitives seconds and milliseconds were integer-valued, so perhaps it had only discretely-changing behaviors (step functions).
After formal simplicity, my next goals are responsiveness and efficiency of implementation. For these goals, the purity & simplicity of the formal semantics are both challenging (especially
determinacy) and helpful (to easily and rigorously justify optimizations).
After re-reading the FrTime paper today, I see that the term "synchronous" is probably being used in (at least) two distinct ways. I used the word to refer to a semantic property, e.g., that fmap f b
undergoes transitions exactly when b changes. I did not mean the implementation property of sampling all behaviors at the same time, as in most previous FRP systems, which is very inefficient.
So: no, the semantic purity (and more generally, simplicity) is not just a by-product of using Haskell; and yes, I believe semantic purity is much simpler for the programmer and user to understand. I
don't know about the net impact on performance, though I wouldn't sacrifice the semantics to get a faster implementation.
Conal Elliott
at Tue, 2008-04-08 05:34
I am not familiar with FRP.
I am not familiar with FRP. I see there is a construct for a continuous function of time, and one for a series of discrete events. Are they these?
B a = Time -> a
E a = [(Time,a)]
How do you combine them? Can you make a series of events into a function of time?
Denis Bredelet -jido
at Tue, 2008-04-08 08:29
Continuous & discrete
B and E are the semantic domains for the library type (constructor)s Behavior and Event, which capture the continuous and discrete aspects of FRP. The domains relate to the data types via semantic functions:
at :: Behavior a -> B a
occs :: Event a -> E a
For making a behavior from an event (from what you called "a series of events"), the basic tool is
switcher :: Behavior a -> Event (Behavior a) -> Behavior a
The first argument is the initial behavior, and the second provides a timed stream of new behaviors to switch to.
The other main connection between behaviors and events is the ability to take a boolean behavior b and construct an event that occurs each time b changes from false to true.
Conal Elliott
at Tue, 2008-04-08 17:09
Sounds analogous to Lucid
Lucid and its related languages use an evaluation model called eduction, a form of demand driven dataflow, coupled with a caching mechanism nominally termed a "warehouse".
Although I'm sure that Conal's new FRP formulation is not isomorphic, from the basic description it seems to address the same issues.
at Tue, 2008-04-08 00:17
I wonder why I don't see comparisons of FRP to LabView. Is there a large difference between FRP and the dataflow-driven user interaction of LabView? Or is it because LabView is proprietary? Or because the
graphical nature leads researchers to think of it as a toy? Or has it just been around too long (I guess no one really compares their shiny new languages to C)?
Greg Buchholz
at Tue, 2008-04-08 02:15
If LabView is mentioned, I must plug in the Hardware Description Languages and the way their simulators work: VHDL, Verilog, SystemC, ...
It is essentially PUSH based event-driven interactions, with the availability of arbitrary scheduling of future events
Clock = NOT Clock after 10 ns;
The mixed mode analog+digital variants (like VHDL-AMS) are somewhat more functional with the continuous behaviour of analog parts.
[rant](btw, yes, LabView is a toy ;-)[/rant]
at Wed, 2008-04-09 10:13
not really
These are similar to other early data flow languages: you define the next time step's value in terms of the current. This leads to tight cycles - I don't think these systems typically feature
reasoning to determine whether fixpoints exist in such cases, breaking synchrony assumptions. Regardless, citing them seems redundant to me. [Not to say they're not important; for example, there's
some fun work going on in the FPGA world to more directly support them].
I haven't really seen good survey papers providing sufficient depth on the language properties of modern data flow / reactive systems. By that I mean the couple I've read both skipped FRP and Lucid/
Synchrone styles of languages, which, to me, were fairly significant departures from Esterel et al.
at Thu, 2008-04-10 05:53
How Do You Find the Slant Height of a Cone?
Trying to find the slant height of a cone? Use the height of the cone and the radius of the base to form a right triangle. Then, use the Pythagorean theorem to find the slant height. Watch this
tutorial to see this process step-by-step! | {"url":"http://www.virtualnerd.com/pre-algebra/perimeter-area-volume/cone-slant-height-example.php","timestamp":"2014-04-18T09:46:53Z","content_type":null,"content_length":"29549","record_id":"<urn:uuid:8b21f5f8-da4c-4794-98db-7b80c8144234>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
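In other words, with height h and base radius r, the slant height is sqrt(h^2 + r^2). A quick sketch of the computation (illustrative only):

```python
# Slant height of a cone: the height, base radius, and slant form a right
# triangle, so slant = sqrt(height**2 + radius**2) by the Pythagorean theorem.
import math

def slant_height(height, radius):
    return math.hypot(height, radius)

print(slant_height(4, 3))   # 3-4-5 right triangle -> 5.0
```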
From Latin: sector "a cutter"
Definition: The part of a circle enclosed by two
of a circle and their
intercepted arc
. A pie-shaped part of a circle.
Try this Drag one of the orange dots that define the endpoints of the blue arc. The sector of the circle is shown in yellow.
As you can see from the figure above, a sector is a pie-shaped part of a circle. It has two straight sides (the two radius lines), the curved edge defined by the arc, and touches the center of the
Radius The radius of the circle of which the sector is a part
Central Angle The angle subtended by the sector to the center of the circle. See Central Angle of an Arc for more.
Arc length The length around the curved arc that defines the sector (shown in red here). For more on this see Arc length definition.
The area of a sector can be found if you know the radius and the central angle or arc length. For more on this see Area of a sector.
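For reference, with the central angle theta in radians, arc length = r * theta and sector area = (1/2) * r^2 * theta. A small sketch (illustrative only):

```python
# Sector of a circle with radius r and central angle theta (in radians):
#   arc length = r * theta
#   area       = (1/2) * r**2 * theta
# Illustrative sketch only.
import math

def arc_length(r, theta):
    return r * theta

def sector_area(r, theta):
    return 0.5 * r * r * theta

# Quarter circle (theta = pi/2) of radius 2: both come out to pi.
print(arc_length(2, math.pi / 2), sector_area(2, math.pi / 2))
```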
(C) 2009 Copyright Math Open Reference. All rights reserved | {"url":"http://www.mathopenref.com/arcsector.html","timestamp":"2014-04-19T07:30:58Z","content_type":null,"content_length":"12060","record_id":"<urn:uuid:07f57948-120f-4fcf-bb87-77efe1b42696>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
what does this math statement mean?..
There are at least two reasons for the introduction of "in x" when describing these polynomials;
1) In a more formal setting, polynomials are treated like strings of symbols where some of the symbols (the coefficients) come from one set and the other symbols (the powers of the "variable(s)")
from another. In order to avoid problems, it is usually stipulated that the variable(s) can't also be coefficient symbols, and so one must stipulate what the "variable" symbol is.
In these settings the "variable" usually isn't meant to actually be a variable/place-holder, and the polynomial isn't intended to represent a "function" per se. This idea can be extended to formal
"power series over __ in __", which are quite powerful in a somewhat "abstract nonsense" sorta way; one can develop quite a bit of complex analysis, for example, without even talking about complex
numbers and functions. Heck, one particularly common way of developing the complex numbers uses formal polynomials.
2) To to those familiar with the "polynomial in __" terminology, one can talk about polynomials in ##e^x## ( ##e^{2x}+2e^x+1##), polynomials in ##\sin\theta## (##\sin^2\theta+2\sin\theta+1##),
polynomials in ##y^2## (##y^4+2y^2+1##), etc. Methods for solving equations of these types then become more obvious; i.e. we can use techniques for solving polynomial equations to help us solve
equations involving things that wouldn't normally be considered according-to-Hoyle polynomials. | {"url":"http://www.physicsforums.com/showthread.php?s=b538e8bd12e44a1f799f5b57dc35dd1a&p=4637952","timestamp":"2014-04-23T12:18:29Z","content_type":null,"content_length":"32889","record_id":"<urn:uuid:ab12d2db-7e64-49b1-9e74-c3f049120113>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
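For instance, the substitution idea in point 2 can be carried out mechanically: treat y^4 + 2y^2 + 1 as a polynomial in y^2, solve for u = y^2, then undo the substitution. A small sketch (my own illustration, not code from the thread):

```python
# The "polynomial in y^2" trick: substitute u = y^2, solve the ordinary
# quadratic u^2 + 2u + 1 = 0, then undo the substitution. My own
# illustration, not code from the thread.
import cmath

def solve_quadratic(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

# u^2 + 2u + 1 = (u + 1)^2, so u = -1 (a double root)
u_roots = solve_quadratic(1, 2, 1)

# y^2 = u  =>  y = +/- sqrt(u); here that gives y = +/- i
y_roots = [s * cmath.sqrt(u) for u in u_roots for s in (1, -1)]
print(y_roots)
```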
Posts by
Total # Posts: 147
you need to clean the gutters of your home. the gutters are 24 FT above the ground. for safety, the distance a ladder reaches up a wall should be four times the distance from the bottom of the ladder
to the base of the side of the house. Therefore the ladder must be 6 ft from ...
Accounting. Present Value.
Suppose you want to have $5,000 saved at the end of five years. The bank will pay you 2% interest on your money. How much would you have to deposit today to have the $5,000 you want at the end of
five years.
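Assuming annual compounding at 2% (the problem doesn't state the compounding frequency), the present value is 5000 / 1.02^5. A quick check:

```python
# Present value of $5,000 due in 5 years at 2% interest, assuming annual
# compounding (an assumption -- the problem doesn't state the frequency):
#   PV = FV / (1 + r)**n
future_value = 5000.0
rate = 0.02
years = 5

pv = future_value / (1 + rate) ** years
print(round(pv, 2))   # the deposit needed today
```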
How many moles of Aluminium Bromide are formed from 4.0 moles of Aluminium in the equation: 2Al + 3 Br2 = 2 AlBr3?
The Fourier series expansion for the periodic function,f(t) = |sin t|is defined in its fundamental interval. Taking π = 3.142, calculate the Fourier cosine series approximation of f(t), up to the 6th
harmonics when t = 1.09. Give your answer to 3 decimal places. Anybody c...
A 2.5 kg block sits on an inclined plane with a 30 degree inclination. A light cord attached to the block passes up over a light frictionless pulley at the top of the plane and is tied to a second
2.5 kg mass freely hanging vertically. The coefficients of static and kinetic fr...
10.7 L of a gas at 1.75 atm are expanded to 20.0 L at constant temperature. What is the new gas pressure?
any kind soul please help me. thanks.
A function of two variables is given by f(x,y) = e^(2x-3y). Find the tangent approximation to f(0.989,1.166) near (0,0), giving your answer to 4 decimal places. Any kind soul to help on the problem. My
professor raise this problem to me. i am so stuck.
English Composition 1
I need to change the style of writing from a chapter in a book, from whatever style it is written in to one of: gothic romance, pulp fiction, or horror story. I don't know how to write in any of
those styles; I don't know how to write!
Using the given zero, find one other zero of f(x). Explain the process you used to find your solution. 1 - 6i is a zero of f(x) = x^4 - 2x^3 + 38x^2 - 2x + 37.
Calculate the return and standard deviation for the following stock, in an economy with five possible states. If a Boom (Probability=25%) economy occurs, then the expected return is 30%. If a Good
(Probability=25%) economy occurs, then the expected return is 15%. If a Normal (...
Consider the titration of 100.0 mL of 0.260 M propanoic acid (Ka = 1.3 x 10^-5) with 0.130 M KOH. Calculate the pH of the resulting solution after each of the following volumes of KOH has been added.
(Assume that all solutions are at 25°C.) (a) 0.0 mL (b) 50.0 mL (c) 100...
Consider the titration of 41.5 mL of 0.213 M HCl with 0.123 M KOH. Calculate the pH of the resulting solution after the following volumes of KOH have been added. (Assume that all solutions are at
25°C.) (a) 0.0 mL (b) 10.0 mL (c) 40.0 mL (d) 80.0 mL (e) 100.0 mL
Calculate the pH of a solution formed by mixing 124.8 mL of 0.481 M KF and 130.7 mL of 0.084 M HClO3.
What volumes of 0.47 M HF and 0.47 M NaF must be mixed to prepare 1.00 L of a solution buffered at pH = 3.50? HF NaF
Calculate the pH after 0.013 mole of gaseous HCl is added to 272.0 mL of each of the following buffered solutions. (Assume that all solutions are at 25°C.) (a) 0.080 M C5H5N and 0.19 M C5H5NHCl (b)
0.72 M C5H5N and 1.52 M C5H5NHCl
You make 1.00 L of a buffered solution (pH = 5.10) by mixing propanoic acid and potassium propanoate. You have 1.00 M solutions of each component of the buffered solution. What volume of each
solution do you mix to make such a buffered solution? propanoic acid potassium propan...
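One way to set this problem up (a sketch, using the Ka = 1.3 x 10^-5 value for propanoic acid quoted in the titration problem above): apply Henderson-Hasselbalch, pH = pKa + log([propanoate]/[propanoic acid]), then split the 1.00 L total in that ratio.

```python
# Buffer mixing via Henderson-Hasselbalch: pH = pKa + log10([base]/[acid]).
# Assumes Ka = 1.3e-5 for propanoic acid (the value quoted in the earlier
# titration problem in this list).
import math

Ka = 1.3e-5
pKa = -math.log10(Ka)

target_pH = 5.10
ratio = 10 ** (target_pH - pKa)     # [propanoate] / [propanoic acid]

# Equal 1.00 M stocks totalling 1.00 L, so V_base / V_acid = ratio:
total = 1.00
V_acid = total / (1 + ratio)
V_base = total - V_acid
print(round(V_acid, 3), round(V_base, 3))   # liters of acid, liters of base
```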
How many moles of NaOH must be added to 1.0 L of 2.2 M HF to produce a solution buffered at each pH? (a) pH = pKa (b) pH = 4.24 (c) pH = 4.60
Consider a solution that contains both C6H5NH2 and C6H5NH3+. Calculate the ratio [C6H5NH2]/[C6H5NH3+] if the solution has the following pH values. (Assume that the solution is at 25°C.) (a) pH = 4.70
(b) pH = 5.26 (c) pH = 5.42 (d) pH = 4.96
A buffered solution is made by adding 48.6 g C2H5NH3Cl to 1.00 L of a 0.77 M solution of C2H5NH2. Calculate the pH of the final solution. (Assume no volume change. Assume that all solutions are at
Calculate the pH after 0.016 mole of NaOH is added to 1.05 L of a solution consisting of 0.138 M HONH2 and 0.127 M HONH3Cl, and calculate the pH after 0.016 mole of HCl is added to 1.05 L of the same
solution of HONH2 and HONH3Cl. (Assume that all solutions are at 25°C.) 0...
Calculate the pH after 0.14 mole of NaOH is added to 1.07 L of a solution that is 0.54 M HF and 1.15 M NaF, and calculate the pH after 0.28 mole of HCl is added to 1.07 L of the same solution of HF
and NaF. 0.14 mole of NaOH 0.28 mole of HCl
Calculate the pH of a solution that is 0.124 M C2H5NH2 and 0.124 M C2H5NH3Cl. (Assume that the solution is at 25°C.)
Calculate the mass of HONH2 required to dissolve in enough water to make 255.5 mL of solution having a pH of 10.02 (Kb = 1.1 x 10^-8).
Would you expect the pressure of a 12.5 mL syringe of natural gas to be the same as the pressure of 12.5 mL of air? In other words, is Boyle's Law dependent on the type of gas that is being measured?
I'm sorry but I am still confused.
Given the following values, answer the questions:
Temperature (°C)    Kw
0                   1.14 x 10^-15
25                  1.00 x 10^-14
35                  2.09 x 10^-14
40                  2.92 x 10^-14
50                  5.47 x 10^-14
Is the autoionization of water exothermic or endothermic? (b) What is the pH of pure water at 35°C? ...
Determine if the solubility would increase, decrease, or stay the same: 1) A saturated solution of sugar water is cooled from room temp to 0 degrees Celsius. 2) A solution of ocean water with
dissolved oxygen is heated from 10 degrees Celsius to 15 degrees Celsius. 3) A satura...
What is the % M/M concentration of 5.00 mL of methanol in 25 g solution of methanol in water? The density of methanol is 0.791 g/mL. The density of water is 1.000 g/mL. What is the % V/V
concentration for the solution? Please show work.
How many grams of magnesium sulfate would you need to make 0.500 L of a 10.0% M/V solution in water? Please show work.
If you want to make 75 mL of a 0.015 M solution of B(OH)3 in water, how many grams of B(OH)3 would you need?
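For this kind of molarity problem: grams = volume (L) x molarity (mol/L) x molar mass (g/mol). A sketch using standard atomic masses (B 10.81, O 16.00, H 1.008):

```python
# Grams of B(OH)3 for 75 mL of a 0.015 M solution:
#   grams = volume_L * molarity * molar_mass
# Atomic masses assumed: B 10.81, O 16.00, H 1.008 (standard values).
molar_mass = 10.81 + 3 * (16.00 + 1.008)   # g/mol for B(OH)3
moles = 0.075 * 0.015                      # L * mol/L
mass = moles * molar_mass
print(round(mass, 4))                      # grams of B(OH)3 needed
```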
NO(g) + NO2(g) <=> N2O3(g); N2O4(g) <=> 2 NO2(g). Sorry about that
The following equilibrium constants are given at 377°C:
1/2 N2(g) + 1/2 O2(g) <=> NO(g)      K1 = 1 x 10^-17
1/2 N2(g) + O2(g) <=> NO2(g)         K2 = 1 x 10^-11
N2(g) + 3/2 O2(g) <=> N2O3(g)        K3 = 2 x 10^-33
N2(g) + 2 O2(g) <=> N2O4(g)          K4 = 4 x 10^-17
Determine the values for the equilib...
At a particular temperature, 13.7 mol of SO3 is placed into a 3.9-L rigid container, and the SO3 dissociates by the reaction given below. 2 SO3(g) <=> 2 SO2(g) + O2(g) At equilibrium, 3.8 mol of SO2 is
present. Calculate K for this reaction.
At 25°C, Kp = 2.9 x 10^-3 for the following reaction. NH4OCONH2(s) <=> 2 NH3(g) + CO2(g) In an experiment carried out at 25°C, a certain amount of NH4OCONH2 is placed in an evacuated rigid container and
allowed to come to equilibrium. Calculate the total pressure in the ...
A sample of gaseous PCl5 was introduced into an evacuated flask so that the pressure of pure PCl5 would be 0.54 atm at 425 K. However, PCl5 decomposes to gaseous PCl3 and Cl2, and the actual pressure
in the flask was found to be 0.85 atm. Calculate Kp for the decomposition rea...
Determine the theoretical yield of aspirin for a student who started the experiment with 2.489 g of salicylic acid. Show your work.
Two students measured the outside temperature at noon from a thermometer hung in the playground. The two students recorded very different measurements. Give two possible reasons for why the students
got such different measurements. I know one reason is one student is reading C...
7. contend 8. poignant thank you so much i am ready for a break from school...
4. conversely 7. poignant 8. contend 9. charisma
4. conversely 7. poignant 8. contend 9. charisma
ty i was headed down the right path now... i just got and typed 1 and 2 backwards.
1. contempory 2. prevalent 4. contend I did look them up 1. charisma - personal appeal 2. contemporary current 3. contend to declare 4. conversely in contrast 5. extrovert a sociable person
6. poignant affecting the emotions 7. prevalent ...
Many people are surprised to learn how __1__ poverty is __2__ America. Today millions live below the poverty line, and the number seems to escalate daily. Judy and Martin Reed exemplify the old
saying, Opposites attract. A __3__ Judy chooses work that brings her in...
2) Tube (b) is also filled with water, A1 is 0.05m2 and A2 is 0.08m2 Two pistons are placed in both ends of the tube and forces, F1 and F2 are exerted on the pistons so that they remain at the same
height. If F1 = 20N what is F2? (I got the answer to this question) 3) Tube (c)...
The right-angle uniform slender bar AOB has mass m. Assuming a freely hinged pivot O, determine the magnitude of the normal force at A and the magnitude of the pin reaction at O.
What causes depression? A. Genetic issues B. Biochemical issues C. There is no known cause D. Environmental issues
In black and white photography, a photon energy of about 4.00 x 10^-19 J is needed to bring about the changes in the silver compounds used in the film. Explain why a red light
used in a darkroom does not affect the film during developing.
How many minutes are required for a radio signal to travel from Earth to a space station on Mars if Mars is 7.83 x 10^7 km away?
Jessie's time: 7 minutes. Michael's time is 2.5 times slower, so multiply 7 minutes by 2.5: Michael = 2.5 x 7 = 17.5 minutes. To check the answer, divide 17.5 by 2.5; you get 7, Jessie's time.
How does the market price of a good in a monopoly market compare with the market price of the same good in a perfectly competitive market? A. The price is higher. B. The price is lower. C. The prices
cannot be compared. D. The prices are the same.
To improve its standard of living, a nation's economy must A. remain stable. B. grow through innovation. C. reach economic equity. D. allow the central government to make economic decisions.
I need to write a paragraph why i choose medical billing. It has to have different experiences that motivate you to choose that field
Suppose the market for the magazine is in equilibrium. Some students insist on raising the cover price by $1 and printing the same quantity. What is likely to happen? A. The demand for the magazine
will go up. B. There will be a shortage of 150 magazines. C. There will be a su...
Advances in technology have reduced the cost of manufacturing MP3 players. If demand does not change, A. more MP3 players will be sold at a higher price. B. fewer MP3 players will be sold at a higher
price. C. more MP3 players will be sold at a lower price. D. fewer MP3 player...
A major characteristic of monopolistic competition is that prices will be A. higher than in perfect competition. B. lower than in perfect competition. C. higher than in a true monopoly. D. unrelated
to the type of competition.
what are the behaviors associated with editing
English 12
In "Porphyria's Lover," how does the speaker feel when Porphyria says she loves him? A. angry B. sad C. surprised and angry D. surprised and happy
What is the definition of marginal proficiency consumer?
cells - science
Ohh haha thanks anyways
A red bead released from rest at height H slides down a wire without sticking to a green bead initially at rest. The mass of the green bead is half that of the stuck-together beads; both continue to
slide down the wire and then slide back up. With the starting height of the green...
Explain how the differences in the solubility products of AgI and AgCl make this experiment possible. Include in your discussion a narrative of what is happening to the electrode potentials and the
I-, Cl-, and Ag+ ions.
Math Help Please
that should read A intersection B and union C
Math Help Please
Let U = {9, 10, 11, 12, 13, 14, 15, 16, 17, 18}, A = {10, 12, 14, 16}, B = {13, 17}, and C = {10, 13, 16, 17, 18}. List all the members of each of the sets. (Enter solutions from smallest to largest.
If there are any unused answer boxes, enter NONE in the last boxes.) A ...
tenth grade science
What can you conclude about the effect of the thermal expansion of water on sea level?
I did what you said but I keep getting the wrong answer no matter what I do.
How many grams of Fe+3 are required to make 1.0 L of an aqueous solution of 10 ppm Fe+3? (Assume the density of water is 1.00 g/mL.)
engineering physics
An object of mass m is dropped at t=0 from the roof of a building of height h. A wind blowing parallel to the face of the building exerts a constant force F on the object. At what time t does the
object strike the ground? Express t in terms of g and h.
A landscape architect is planning an artificial waterfall. Water flowing at 1.11 m/s will leave the end of a horizontal channel at the top of a vertical wall h=4.00 m high. Will the space behind the
waterfall be wide enough for a pedestrian walkway? b. To sell her plan to the ...
A landscape architect is planning an artificial waterfall. Water flowing at 1.11 m/s will leave the end of a horizontal channel at the top of a vertical wall h=4.00 m high. Will the space behind the
waterfall be wide enough for a pedestrian walkway? To sell her plan to the cit...
A ball is thrown upward. After reaching a maximum height, it continues falling back toward Earth. On the way down, the ball is caught at the same height at which it was thrown upward. If the time
(up and down) the ball remains in the air is 2.3 s, find its speed when it is caught...
A stone is thrown straight up from the ground with an initial speed of 30.5 m/s . At the same instant, a stone is dropped from a height of h meters above ground level. The two stones strike the
ground simultaneously. Find the height h. Gravity is 9.8 m/s^2. Answer in units of m...
Solve using the multiplication principle: 5x ≥ -4
Physics 112
Two stationary point charges of 5.00 and 2.00 are separated by a distance of 60.0 . An electron is released from rest at a point midway between the charges and moves along the line connecting them.
What is the electric potential energy of the electron when it is at the midpoin...
Physics 112
Three point charges with values = 4.00 , = 1.00 , and = 7.00 are placed on three consecutive corners of a square whose side measures = 5.00 . Point A lies at the fourth corner of the square,
diagonally across from . Point B lies at the center of the square. What are the values...
still stuck on simplifying 3x^3+5x^5... please assist
is x to the 4th power plus x to the 4th power equal to x to the 8th power?
how is the following simplified 3x to the third power plus 5x to the fifth power
3x to the 3rd power plus 5x to the fifth power
A bill has been approved in the House and Senate, albeit in slightly different versions. The bill now goes to ??
A 21.0-kg box rests on a frictionless ramp with a 15.1° slope. The mover pulls on a rope attached to the box to pull it up the incline. If the rope makes an angle of 43.6° with the horizontal, what
is the smallest force F the mover will have to exert to move the box up...
4NH3 + 5O2 = 4NO + 6H2O. If a container were to have 10 molecules of O2 and 10 molecules of NH3 initially, how many total molecules (reactants plus products) would be present in the container after
this reaction goes to completion?
Using the 68-95-99.7 rule: Assume that a set of test scores is normally distributed with a mean of 100 and a standard deviation of 20. Use the 68-95-99.7 rule to find the following quantities:
the percentage of scores less than 100, the relative frequency of scores less than 120, the perc...
Marketing Foundation
Select a purchase of a refrigerator which you recently made and please describe your buying process, including any other parties involved in the buying process and their roles
Marketing Foundation
Select a purchase you personally have made recently and describe your buying process, including any other parties involved in the buying process and their roles in your purchase decision. To do this,
please complete the following template and make sure to mention what your n...
Math 7th grade
Susan has red, blue, and yellow sweaters. Joanne has green, red, and white sweaters. Diane's sweaters are red, blue, and mauve. Each girl has only one sweater of each color and will pick a sweater to
wear at random. Find the probability that each girl chooses a different c...
College Algebra
car traveling on a road perpendicular to railroad track, car is 30 meters from the crossing, train is 50 meters from the car; how far is the train from the crossing?
confidentiality of health information
The answer is c.
5x - y = 36
5x + 6y = -6
Multiply the second equation by -1 and distribute:
5x - y = 36
-5x - 6y = 6
Add these two equations right on top of each other:
0x - 7y = 42, so -7y = 42 and y = -6.
Plug -6 in for y in either equation and solve for x:
5x - (-6) = 36, so 5x + 6 = 36, 5x = 30, x = 6.
As an ordered pair, the answer is (6, -6).
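The elimination above can be sanity-checked by substituting the claimed pair back into both equations (plain Python):

```python
# substitute the claimed solution (6, -6) back into both original equations
x, y = 6, -6
assert 5 * x - y == 36        # first equation
assert 5 * x + 6 * y == -6    # second equation
print("(6, -6) satisfies both equations")
```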
Pages: 1 | 2 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=margaret","timestamp":"2014-04-20T02:47:37Z","content_type":null,"content_length":"31000","record_id":"<urn:uuid:4cb0deeb-3854-4d9a-9a9b-f4c44bd334cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Holliston SAT Math Tutor
Find a Holliston SAT Math Tutor
...In Massachusetts I am certified to teach Chemistry in high school and planning to join the teaching profession very soon. I like to interact with students. I provide clear and simple
explanations of concepts, which helps my students a lot. Hi everyone, I have a keen interest in cooking Indian food. I...
13 Subjects: including SAT math, chemistry, geometry, biology
...I have given workshops to high school groups on selecting the best college and have used online tools to help students find schools that match their selection criteria. I have proofread and
assisted students with their college application essays as well. I work for Boston Tutoring Services and have been trained to tutor the ISEE.
20 Subjects: including SAT math, Spanish, reading, English
...The other went successfully through a very rigorous elite scholarship process, and, in spite of being legally disabled, did receive a full scholarship in STEM. I have great contacts in
international placements as well, and a good understanding of scholarship programs (Academic and athletic) and ...
90 Subjects: including SAT math, chemistry, English, reading
...I also have lived in a number of foreign countries in Asia and in South America, and have family members who have English as their second language. I have years of experience teaching math as a
classroom teacher, as a tutor, as a math club coach, and in coaching other teachers. I focus on helpi...
33 Subjects: including SAT math, reading, English, GRE
...Please contact me for more details, past results, and references. Looking forward to working with you! Thanks.I have helped several students in this area.
41 Subjects: including SAT math, reading, English, Spanish | {"url":"http://www.purplemath.com/Holliston_SAT_math_tutors.php","timestamp":"2014-04-18T08:17:26Z","content_type":null,"content_length":"23846","record_id":"<urn:uuid:77dc4b96-a3c1-480c-87b1-8b370c38119b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why multiplication is commutative, but exponentiation is not?
December 8th 2011, 06:05 PM #1
Dec 2011
A definition of exponentiation as an operation built from multiplication seems similar to a definition of multiplication built from addition.
I.e., 2*5 is 2 + 2 + 2 + 2 + 2, a total of five twos. Similarly, 2^5 is 2 * 2 * 2 * 2 * 2, a total of five twos.
Yet multiplication is commutative, but similarly defined exponentiation is not. Why?
Re: Why multiplication is commutative, but exponentiation is not?
Well, exponentiation is commutative and associative in the sense that $(x^k)^l = x^{(kl)} = (x^l)^k$. But if you're asking why $x^y \neq y^x$ (provided that $x \neq y$), it is because you cannot consider
the exponent as a number the way you consider the base a number. The exponent is more of a descriptor.
Re: Why multiplication is commutative, but exponentiation is not?
Can you elaborate on why I can't consider the exponent a number?
My concern is what the difference is between making multiplication out of addition and making exponentiation out of multiplication. Yes, I'm asking why $x^y \neq y^x$.
Re: Why multiplication is commutative, but exponentiation is not?
One little thing I want to clarify first... when I say exponents are associative and commutative I only mean when they are on the same "level", so $a^{b^{c}} \neq a^{bc}$.
On to your question... Let us assume we are working in the real numbers (hopefully this set suffices for your needs). The real numbers are a field, which by definition means that multiplication and
addition are commutative. Exponentiation came along later and is a definition of convenience more than anything else. There really is no deep answer to your question; it's just that the way
exponentiation was defined does not allow for commutativity among the different "levels".
Re: Why multiplication is commutative, but exponentiation is not?
one reason is that 1 and 0 are "badly behaved" with regard to exponentiation.
for example, 1^n = 1, but n^1 = n, and 0^n = 0 (let's just agree for the moment n is not 0), while n^0 = 1.
since for natural numbers, n is defined as +1 done to 0 n times, if 0 and 1 do not commute via exponents, it's unreasonable to expect anything else will, either.
and of course, 0^1 = 0, while 1^0 = 1.
but i am afraid you're drawing the wrong conclusion from this. in actual fact, it's the commutativity of multiplication that is the real mystery. for example, with matrices,
A+B = B+A, but AB usually doesn't equal BA.
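The matrix example is easy to check numerically. A throwaway Python sketch with 2×2 matrices as nested lists (the particular A and B are mine, not from the thread):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    """2x2 matrix sum."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

assert matadd(A, B) == matadd(B, A)   # A + B = B + A, always
assert matmul(A, B) != matmul(B, A)   # AB != BA for this pair
print(matmul(A, B), matmul(B, A))     # -> [[2, 1], [1, 1]] [[1, 1], [1, 2]]
```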
Re: Why multiplication is commutative, but exponentiation is not?
Thank you guys.
Let me rephrase a bit. Suppose one has an operation - call it A - and numbers.
Then he creates operation AA in the following manner - defining "x AA y" as "x A x A x .. x" so that the total number of x-s is y.
The question is - can we find out if AA is commutative? Turns out if A is addition, then the answer is yes, and if A is multiplication, the answer is no.
Now what information do we need to predict the commutativity of AA, made out of A? What is different between addition and multiplication that leads to different results?
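The construction described here can be tested empirically. A small Python sketch (`hyper` builds AA from A exactly as defined above, for y ≥ 1; the ranges tested are arbitrary):

```python
from functools import reduce
from operator import add, mul

def hyper(op):
    """Build AA from A: x AA y = x A x A ... A x, with y copies of x (y >= 1)."""
    return lambda x, y: reduce(op, [x] * y)

times = hyper(add)   # repeated addition: recovers multiplication
power = hyper(mul)   # repeated multiplication: recovers exponentiation

pairs = [(x, y) for x in range(1, 7) for y in range(1, 7)]
assert all(times(x, y) == times(y, x) for x, y in pairs)   # AA from addition commutes
assert any(power(x, y) != power(y, x) for x, y in pairs)   # AA from multiplication does not
print(power(2, 3), power(3, 2))                            # -> 8 9
```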
Re: Why multiplication is commutative, but exponentiation is not?
Looks like the problem is either somehow incorrectly stated or less well known than I thought.
Is there a branch dealing with properties of operations such as commutativity? For example, the Cayley-Dickson construction allows creation of algebras with various properties. Are there ideas about
why particular properties exist or not in each case?
Dec 2011 | {"url":"http://mathhelpforum.com/number-theory/193843-why-multiplication-commutative-but-exponentiation-not.html","timestamp":"2014-04-19T02:34:14Z","content_type":null,"content_length":"47021","record_id":"<urn:uuid:2dfcfe76-6dfc-4874-8b65-6b4a2ee1bf28>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
An outside circular ring has a circumference of 90 cm. What is the circumference of an inner ring which is 8 cm shorter in radius? Both circles have the same center. I came to 39.8, but I'm barely
learning this, so I'm just making sure; if it's wrong you don't need to explain, just say it's wrong. Thx
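A quick check of the arithmetic (plain Python; the only inputs are the 90 cm outer circumference and the 8 cm radius difference from the question):

```python
import math

outer = 90.0                        # outer circumference, cm
# the radius shrinks by 8 cm, so the circumference shrinks by 2*pi*8 cm
inner = outer - 2 * math.pi * 8
print(round(inner, 1))              # -> 39.7
```

So carrying full precision gives about 39.7 cm; 39.8 is the same answer obtained with the coarser approximation π ≈ 3.14.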
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fa4ff63e4b029e9dc35364a","timestamp":"2014-04-18T00:46:32Z","content_type":null,"content_length":"40374","record_id":"<urn:uuid:da1404fa-c0ba-4af9-a541-f9f714b8c809>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Mclean ACT Tutor
Find a West Mclean ACT Tutor
...My promise is to be supportive and to ensure that you have the best chance to do well. Yours in math, ~DexterReviewing basic algebraic concepts (e.g. variables, order of operations, and
exponents); understanding and graphing functions; solving linear (with one variable 'x') and quadratic equati...
15 Subjects: including ACT Math, chemistry, calculus, geometry
...Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I work as a professional economist, where I utilize
econometric models and concepts regularly using both STATA and Excel. I have also had extensive course...
16 Subjects: including ACT Math, calculus, statistics, geometry
I am an experienced math tutor who has worked at George Mason University Math Learning Center for about three years, where I tutored students in variety of math courses such as algebra,
probability, and calculus. I have a strong passion for teaching and more importantly I have been highly successfu...
34 Subjects: including ACT Math, physics, calculus, statistics
...His BS in Civil Engineering Technology is complemented with a MS in Operations Management. He is especially strong in SAT/ACT Math and worked extensively with both his son and daughter at an
early age to where they both attended TJ (Thomas Jefferson High School for Science & Technology). This t...
17 Subjects: including ACT Math, chemistry, calculus, physics
...Let me help you learn to express your ideas.My experience with Word includes many different aspects from writing outlines and drafts through final manuscript preparation of my own work, and a
substantial amount of editing of other people's manuscripts for publication. I have been using the progr...
38 Subjects: including ACT Math, reading, English, ESL/ESOL | {"url":"http://www.purplemath.com/West_Mclean_ACT_tutors.php","timestamp":"2014-04-16T13:42:14Z","content_type":null,"content_length":"23799","record_id":"<urn:uuid:7aed287a-c537-4238-af4f-de5fb84e1bcb>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topics in Abstract Algebra/Non-commutative rings
A ring is not necessarily commutative but is assumed to have the multiplicative identity.
Proposition. Let $R$ be a simple ring. Then
• (i) Every morphism $R \to R$ is either zero or an isomorphism. (Schur's lemma)
• (ii)
Theorem (Levitzky). Let $R$ be a right noetherian ring. Then every (left or right) nil ideal is nilpotent.
Last modified on 8 November 2012, at 22:46 | {"url":"http://en.m.wikibooks.org/wiki/Topics_in_Abstract_Algebra/Non-commutative_rings","timestamp":"2014-04-19T19:58:19Z","content_type":null,"content_length":"14393","record_id":"<urn:uuid:1eb49eb4-c752-4451-98c6-54e7a1b4950f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
(Specific detailed sources for individual functions and distributions are given at the end of each individual section).
DLMF (NIST Digital Library of Mathematical Functions) is a replacement for the legendary Abramowitz and Stegun's Handbook of Mathematical Functions (often called simply A&S),
M. Abramowitz and I. A. Stegun (Eds.) (1964) Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, National Bureau of Standards Applied Mathematics Series, U.S.
Government Printing Office, Washington, D.C.
NIST Handbook of Mathematical Functions
Edited by: Frank W. J. Olver, University of Maryland and National Institute of Standards and Technology, Maryland, Daniel W. Lozier, National Institute of Standards and Technology, Maryland, Ronald
F. Boisvert, National Institute of Standards and Technology, Maryland, Charles W. Clark, National Institute of Standards and Technology, Maryland and University of Maryland.
ISBN: 978-0521140638 (paperback), 9780521192255 (hardback), July 2010, Cambridge University Press.
NIST/SEMATECH e-Handbook of Statistical Methods
Mathematica Documentation: DiscreteDistributions The Wolfram Research Documentation Center is a collection of online reference materials about Mathematica, CalculationCenter, and other Wolfram
Research products.
Mathematica Documentation: ContinuousDistributions The Wolfram Research Documentation Center is a collection of online reference materials about Mathematica, CalculationCenter, and other Wolfram
Research products.
Statistical Distributions (Wiley Series in Probability & Statistics) (Paperback) by N.A.J. Hastings, Brian Peacock, Merran Evans, ISBN: 0471371246, Wiley 2000.
Extreme Value Distributions, Theory and Applications Samuel Kotz & Saralees Nadarajah, ISBN 978-1-86094-224-2 & 1-86094-224-5 Oct 2000, Chapter 1.2 discusses the various extreme value distributions.
pugh.pdf: Pugh's MSc thesis on the Lanczos approximation to the gamma function.
N1514, 03-0097, A Proposal to Add Mathematical Special Functions to the C++ Standard Library (version 2), Walter E. Brown
We found (and used to create cross-check spot values, as far as their accuracy allowed):
The Wolfram Functions Site - Providing the mathematical and scientific community with the world's largest (and most authoritative) collection of formulas and graphics about
mathematical functions.
100-decimal digit calculator provided some spot values.
http://www.adsciengineering.com/bpdcalc/ Binomial Probability Distribution Calculator.
Cephes library by Stephen Moshier and his book:
Methods and programs for mathematical functions, Stephen L B Moshier, Ellis Horwood (1989) ISBN 0745802893 0470216093 provided inspiration.
CDFLIB Library of Fortran Routines for Cumulative Distribution functions.
DCDFLIB C++ version DCDFLIB is a library of C++ routines, using double precision arithmetic, for evaluating cumulative probability density functions.
NAG libraries.
JMSL Numerical Library (Java).
John F Hart, Computer Approximations, (1978) ISBN 0-88275-642-7.
William J Cody, Software Manual for the Elementary Functions, Prentice-Hall (1980) ISBN 0138220646.
Nico Temme, Special Functions, An Introduction to the Classical Functions of Mathematical Physics, Wiley, ISBN: 0471-11313-1 (1996), who also gave valuable advice.
Statistics Glossary, Valerie Easton and John H. McColl.
R Development Core Team (2010). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.
For use of R, see:
Jim Albert, Bayesian Computation with R, ISBN 978-0-387-71384-7.
C++ Statistical Distributions in Boost - QuantNetwork forum discusses using Boost.Math in finance.
Quantnet Boost and computational finance. Robert Demming & Daniel J. Duffy, Introduction to the C++ Boost Libraries - Volume I - Foundations and Volume II ISBN 978-94-91028-01-4, Advanced Libraries
and Applications, ISBN 978-94-91028-02-1 (to be published in 2011). discusses application of Boost.Math, especially in finance.] | {"url":"http://www.boost.org/doc/libs/1_53_0_beta1/libs/math/doc/sf_and_dist/html/math_toolkit/backgrounders/refs.html","timestamp":"2014-04-20T19:40:54Z","content_type":null,"content_length":"12488","record_id":"<urn:uuid:f2750097-bdb4-47f3-a935-81b1670f898a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00616-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computing the Mahalanobis distance with the Perl Data Language
Many machine learning and data analysis tasks involve calculating distances between items. The Mahalanobis distance is a very popular distance because it is scale invariant.
In this snippet, I present how to compute the Mahalanobis distance using the Perl Data Language. The inputs are two or three piddles (see comment below for a definition). The first
piddle is a p-dimensional vector. The second piddle could be either a p-dimensional vector (when a third input is provided) or a matrix with N rows of p-dimensional vectors. If the
second piddle is a matrix, the distance is computed between the center of the second piddle and the first piddle (if only two inputs are provided, the second piddle is used to compute
the covariance needed to determine the Mahalanobis distance). The third piddle, which is optional, represents the covariance matrix of the distribution from which the two other piddles
were drawn. Note: to compute the covariance matrix, I use the snippet presented in Computing Covariance Matrices with PDL
What are Piddles?
They are a new data structure defined in the Perl Data Language. As indicated in RFC: Getting Started with PDL (the Perl Data Language):
Piddles are numerical arrays stored in column major order (meaning that the fastest varying dimension represent the columns following computational convention rather than the rows as
mathematicians prefer). Even though piddles look like Perl arrays, they are not. Unlike Perl arrays, piddles are stored in consecutive memory locations, facilitating the passing of
piddles to the C and FORTRAN code that handles the element-by-element arithmetic. One more thing to note about piddles is that they are referenced with a leading $
use warnings;
use strict;
use PDL;

# ================================
# mahalanobis:
#   $distance = mahalanobis( $x, $y, $cov )
# computes the mahalanobis distance from a point
# $x to another point $y (from the same
# distribution) or from a point $x to
# the centre of a group of values $y
# ================================
sub mahalanobis {
    my ( $x, $y, $cov, $diff );

    if ( @_ < 3 ) {
        ( $x, $y ) = @_;
        # covariance() comes from "Computing Covariance Matrices with PDL"
        $cov = covariance( $y );
    } else {
        ( $x, $y, $cov ) = @_;
    }

    if ( $y->getdim(1) > 1 ) {
        $diff = $x - average( $y->xchg(0,1) );
    } else {
        $diff = $x - $y;
    }

    my @dist = list( $diff x inv( $cov ) x transpose( $diff ) );
    return $dist[0];
}
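As a cross-check outside PDL, the same quadratic form can be computed in plain Python for the 2-D case. Note that the sub above returns diff · inv(cov) · diffᵀ, i.e. the squared Mahalanobis distance; take a square root if you want the distance proper. The demo numbers below are made up:

```python
def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance for 2-D points: (x - mu) cov^-1 (x - mu)^T."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # closed-form 2x2 inverse
    dx = [x[0] - mu[0], x[1] - mu[1]]
    row = [dx[0] * inv[0][0] + dx[1] * inv[1][0],      # dx . inv(cov)
           dx[0] * inv[0][1] + dx[1] * inv[1][1]]
    return row[0] * dx[0] + row[1] * dx[1]

# identity covariance: reduces to the squared Euclidean distance
d2 = mahalanobis_sq([1, 2], [0, 0], [[1, 0], [0, 1]])
print(d2)   # -> 5.0
```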
Results (551 votes), past polls | {"url":"http://www.perlmonks.org/?node_id=625719","timestamp":"2014-04-23T18:09:54Z","content_type":null,"content_length":"22175","record_id":"<urn:uuid:3cb8b6a9-3b47-4d95-9bdc-baf32638a676>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
A plane flies with a speed of 278 mps along a course which passes over an anti-aircraft gun on the ground. The airplane is at an altitude of 5390 m. If the muzzle velocity of the projectile is 735
mps with a slope of 4 vertical to 3 horizontal, determine the angle between the horizontal and the line of sight at which the projectile must be fired in order to hit the airplane during the upward
motion of the projectile.
Is it meters per second or miles per second?
meters per second
I know you may not like this, but I'm going to say that the bullet never touches the aircraft. I may be off on something, but I think I'm right.
is ur conclusion based on any solution or....?
I think I see where I'm wrong. It has to do with the bullet velocity. I'll redo.
I need to know what this "muzzle velocity of the projectile is 735 mps" means.
the muzzle velocity is the velocity of the projectile (the bullet) as it leaves the muzzle of the gun. it is 735 mps
Is it a horizontal component or a vertical component or both or what? It tells me the slope but you can't use it directly because it's probably a reduced form of the slope. Slope is velocity, and they
say 4 up and 3 right, but you can't take those values literally. 8/6 is the same as 4/3, so we don't know.
Is it 735 in the vertical or horizontal? That's what I need to know.
well, i think it is both the horizontal and vertical v.
and that's the problem.
but i think that we can get the horizontal and vertical v through the slope. use the slope to get the angle, then after u get the angle, use the 735 mps and the angle to get the vx and vy... im not
sure with that
i think i got it hang on
I looked at this question earlier. Honestly, it doesn't really make sense. They stipulate the angle and then they ask for the angle. Make sure you got the wording right, or, if there's a diagram
that somehow explains this, post it.
well, i agree with u @Algebraic! because u can simply get the angle with the given slope. im also confused with that, but maybe the point is that we will use the angle to get the vx and vy, and
then use the equations to get the theta
changing the angle changes Vx and Vy.
yet they stipulate Vy/Vx ... doesn't make sense.
is this from an online problem set? or if it's from a text book, can you scan and post the pic. or take an ss?
it is from a book, but there's no picture. i posted exactly the same problem as it is in the book
Well, I've tried interpreting this problem statement in a way that somehow makes sense.. but nothing has come to me.
or maybe we won't be using the slope....haha, i just don't know
please help =(
this is our homework to be submitted tomorrow....=(
=( i just don't know what to do
and i don't think the speed of the plane matters at all
the angle between the muzzle and the horizontal plane is tan^-1(4/3), or roughly 53.13°. thus the initial vertical velocity of the missile is 735*sin(53.13°), or 588 mps, and the acceleration is
9.8 mps^2 towards the ground. thus, using the formula s = ut - 0.5gt^2, the time taken to reach 5390 m comes to be 10 secs. In these 10 secs the projectile moved horizontally 10*735*cos(53.13°), or
4410 metres, and the plane has moved 2780 metres. Thus the plane was initially at (4410 - 2780), or 1630 metres, horizontal distance from the muzzle. thus the initial angle becomes tan^-1(5390/1630), or 73.174°.
thnx a lot =)))))))))))
How can the angle be arctan(4/3) = 53 and then at the end it's 73 ???
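On the last question: the two angles are different quantities. arctan(4/3) ≈ 53.13° is the muzzle elevation, fixed by the 4:3 slope of the barrel; 73.174° is the angle of the line of sight from the gun to the plane at the instant of firing, which is what the problem asks for. The arithmetic of the accepted solution can be checked numerically (Python, g = 9.8 m/s² as in the solution above):

```python
import math

g = 9.8                                  # m/s^2, as used in the solution
v = 735.0                                # muzzle speed, m/s
theta = math.atan2(4, 3)                 # muzzle elevation from the 4:3 slope (~53.13 deg)
vx, vy = v * math.cos(theta), v * math.sin(theta)   # 441 and 588 m/s

# time to reach 5390 m on the way up: 5390 = vy*t - g*t^2/2 (smaller root)
disc = math.sqrt(vy * vy - 2 * g * 5390.0)
t = (vy - disc) / g                      # -> 10.0 s

shell_x = vx * t                         # 4410 m travelled by the projectile
plane_x = 278.0 * t                      # 2780 m travelled by the plane
line_of_sight = math.degrees(math.atan2(5390.0, shell_x - plane_x))
print(round(t, 2), round(line_of_sight, 2))   # -> 10.0 73.17
```

The smaller quadratic root is the upward crossing of 5390 m, matching the "during the upward motion" condition in the problem.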
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5070d2fde4b060a360ffffb8","timestamp":"2014-04-24T01:00:52Z","content_type":null,"content_length":"178160","record_id":"<urn:uuid:c10fd7bf-e8ee-4768-8f29-e4e471e5cb08>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Difference-differential Analogue of the Burgers Equation and Some Models of Economic Development
Polterovich, Victor and Henkin, Gennadi (1998): A Difference-differential Analogue of the Burgers Equation and Some Models of Economic Development.
The paper is devoted to the investigation of a number of difference-differential equations, among which the following one plays the central role: dF_n/dt = φ(F_n)(F_{n-1} - F_n) (*) where, for every t,
{F_n(t), n = 0, 1, 2, ...} is a probability distribution function, and φ is a positive function on [0, 1]. The equation (*) arose as a description of industrial economic development taking into account
processes of creation and propagation of new technologies. The paper contains a survey of the earlier results, including a multi-dimensional generalization and an application to economic growth theory.
If φ is decreasing then solutions of the Cauchy problem for (*) approach a family of wave-trains. We show that diffusion-wise asymptotic behavior takes place if φ is increasing. For the
nonmonotonic case a general hypothesis about asymptotic behavior is formulated and an analogue of a theorem of Weinberger (1990) is proved. It is argued that the equation can be considered as an
analogue of the Burgers equation.
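To get a feel for the dynamics of (*), here is a minimal forward-Euler sketch in Python. Everything beyond the equation itself is an assumption made for illustration: the choice φ(F) = 0.5 + F, the boundary value F_{-1} = 0, the initial condition F_n(0) = 1 (all mass at the lowest level), and the grid and step sizes. The run checks the qualitative behavior: each F_n(t) stays a distribution-function value in [0, 1], monotonicity in n is preserved, and a monotone front travels toward higher n.

```python
def simulate(N=30, T=10.0, dt=0.01, phi=lambda F: 0.5 + F):
    """Forward-Euler integration of dF_n/dt = phi(F_n) * (F_{n-1} - F_n)."""
    F = [1.0] * N                     # assumed initial condition: all mass at level 0
    for _ in range(round(T / dt)):
        prev, new = 0.0, []           # assumed boundary F_{-1} = 0
        for Fn in F:
            new.append(Fn + dt * phi(Fn) * (prev - Fn))
            prev = Fn                 # use the old value of F_{n-1} (simultaneous update)
        F = new
    return F

F = simulate()
assert all(0.0 <= v <= 1.0 for v in F)                           # values stay in [0, 1]
assert all(F[i] <= F[i + 1] + 1e-9 for i in range(len(F) - 1))   # still monotone in n
assert F[0] < 0.05 and F[-1] > 0.95                              # a travelling front formed
```

Since the φ chosen here is increasing, the paper's results predict diffusion-wise spreading of the profile rather than convergence to a fixed wave-train; distinguishing the two regimes numerically would take longer runs than this sketch.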
Item Type: MPRA Paper
Original Title: A Difference-differential Analogue of the Burgers Equation and Some Models of Economic Development
Language: English
Keywords: difference-differential equations; Burgers equations; non-linear diffusion; long-time asymptotic of Cauchy problem; evolution of industries; economic growth; innovation and imitation
Subjects: O - Economic Development, Technological Change, and Growth > O4 - Economic Growth and Aggregate Productivity > O41 - One, Two, and Multisector Growth Models;
O - Economic Development, Technological Change, and Growth > O3 - Technological Change; Research and Development; Intellectual Property Rights > O33 - Technological Change: Choices and
Consequences; Diffusion Processes
Item ID: 21031
Depositing User: Victor Polterovich
Date Deposited: 04 Mar 2010 03:22
Last Modified: 20 Feb 2013 23:13
Belenky, V., 1990a, A New Version of Evolutionary Model of Technology Diffusion. In: "Veroyatnostnye modeli mathematicheskoi economiki", CEMI Academy of Science of the USSR (in
Russian), 19-54
Belenky, V., 1990b, Diagram of growth of a monotonic function and a problem of their reconstruction by the diagram. Preprint, CEMI, Academy of Science of the USSR, Moscow, 1-44 (in
Burgers, J.M., 1948, A mathematical model illustrating the theory of turbulence. Advances in Applied Mechanics, ed. R.V.Mises and T.V.Karman, v. 1, 171-199.
Davies, S., 1979, The diffusion of process innovations. Cambridge Univ. Press, Cambridge.
Fisher, R.A., 1937, The wave of advance of advantageous genes. Amm. Eugen. 7. 355-369.
Gelman L.M., Levin M.I., Polterovich V.M., Spivak V.A., 1993, Modelling of Dynamics of Enterprises Distribution by Efficiency Levels for the Ferrous Metallurgy. Economic and Math.
Methods, v. 29, # 3. 1071-1083 (in Russian).
Glazjev, S.Y., Karimov, I.A., 1988, On a nontraditional model of technology substitution. In: Mathematical Modelling of Social, Economic and Demographic Processes. All-Union Scientific
Conference: Institute of Economics and forecasting. Erevan. 89-92 (in Russian).
Griliches, Z., 1957, Hybrid Corn: An Exploration in the Economics of Technological Change. Econometrica, 25. # 4.
Henkin, G.M., Polterovich, V.M, 1991, Schumpeterian dynamics as a nonlinear wave theory. .J. Math. Econ., v. 20, 551-590.
Henkin, G.M., Polterovich, V.M., 1994, A Difference-Differential Analogue of the Burgers Equation: Stability of the Two-Wave Behavior. J. Nonlinear Sci., v. 4, 497-517.
Hopf, E., 1950, The partial differential equation u_t + u u_x = μ u_xx. Comm. on Pure and Appl. Math., v. 3, 201-230.
Iljin, A., Olejnik, O.A., 1960, Asymptotic long-time behavior of the Cauchy problem for some quasilinear equation, Mat. Sbornik, v. 51, 191-216 (in Russian).
Iwai K., 1984a, Schumpeterian Dynamics, Part I: An evolutionary model of innovation and imitation, Journal of Economic Behavior and Organization, v. 5, 159-190.
Iwai K., 1984b, Schumpeterian Dynamics, Part II: Technological Progress, Firm Growth and "Economic Selection", Journal of Economic Behavior and Organization, v. 5, 287-320.
Kolmogoroff, A., Petrovsky, I., Piskunoff, N., 1937, Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. Bull. Univ. Moscou, Sér. Internat., Sect. A, v. 1, 1-25.
Lax, P.D., 1954, Weak solutions of nonlinear hyperbolic equation and their numerical computation, Comm. Pure Appl. Math., v. 7, 159-193.
Levi, D., Ragnisco, O., Bruschi, M., 1983, Continuous and Discrete Matrix Burgers' Hierarchies. Il Nuovo Cimento, v. 74, # 1, 33-51.
Polterovich, V., Henkin, G., 1988a, An Evolutionary Model of the Interaction of the Processes of Creation and Adoption of Technologies. Economics and Mathematical Methods, v. 24, # 6,
1071-1083 (in Russian).
Polterovich, V., Henkin, G., 1988b, Diffusion of Technologies and Economic Growth. Preprint. CEMI Academy of Sciences of the USSR, 1-44 (in Russian).
Polterovich, V., Henkin, G., 1989, An Evolutionary Model of Economic Growth. Economics and Mathematical Methods, v. 25, # 3, 518-531 (in Russian).
Sato, K., 1975, Production functions and aggregation. North-Holland, New York.
Weinberger, H.F., 1990, Long-time behavior for a regulated scalar conservation law in the absence of genuine nonlinearity. Annales de l'Institut Henri Poincaré, Analyse Non Linéaire.
Whitham, G.B., 1974, Linear and nonlinear waves. Wiley, New York.
Schumpeter, J.A., 1939, Business Cycles: A Theoretical, Historical and Statistical Analysis of the Capitalist Process, McGraw-Hill, New York.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/21031 | {"url":"http://mpra.ub.uni-muenchen.de/21031/","timestamp":"2014-04-19T22:13:06Z","content_type":null,"content_length":"26774","record_id":"<urn:uuid:9b1fc427-c319-4fcf-af34-4688ba35b816>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Power Systems Objective Questions: Part1
Power System objective questions from competitive examinations (GATE, IES)
[1] Power is transferred from system A to system B by an HVDC link as shown in the figure. If the voltages V[AB] and V[CD] are as indicated in the figure, and I>0, then
A. V
B. V
C. V
D. V
[2] Consider a step voltage wave of magnitude 1 pu travelling along a lossless transmission line that terminates in a reactor. The voltage magnitude across the reactor at the instant the travelling wave reaches the reactor is
A. -1pu
B. 1pu
C. 2pu
D. 3pu
[3] Consider two buses connected by an impedance of (0+j5)Ω. The bus 1 voltage is 100∠30° V, and bus 2 voltage is 100∠0° V. The real and reactive power supplied by bus 1, respectively are
A. 1000W,268Var
B. -1000W,-134Var
C. 276.9W,-56.7Var
D. -276.9W,56.7Var
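A quick numerical check of question [3] (a sketch, not part of the original question set): the sending-end complex power is S1 = V1 · conj(I) with line current I = (V1 − V2)/Z.

```python
import cmath

# Bus voltages (phasors) and line impedance from question [3]
V1 = cmath.rect(100, cmath.pi / 6)   # 100 at 30 degrees
V2 = cmath.rect(100, 0.0)            # 100 at 0 degrees
Z = complex(0, 5)                    # (0 + j5) ohms

I = (V1 - V2) / Z                    # current flowing from bus 1 toward bus 2
S1 = V1 * I.conjugate()              # complex power supplied by bus 1

P1, Q1 = S1.real, S1.imag
# P1 is about 1000 W and Q1 about 268 Var, matching option A.
```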
[4] A three-phase, 33kV oil circuit breaker is rated 1200A, 2000MVA, 3s. The symmetrical breaking current is
A. 1200A
B. 3600A
C. 35kA
D. 104.8kA
[5] Consider a stator winding of an alternator with an internal high-resistance ground fault. The currents under the fault condition are as shown in the figure. The winding is protected using a differential current scheme with current transformers of ratio 400/5 A as shown. The current through the operating coil is
A. 0.17875A
B. 0.2A
C. 0.375A
D. 60kA
[6] A 50 Hz synchronous generator is initially connected to a long lossless transmission line which is open circuited at the receiving end. With the field voltage held constant, the generator is disconnected from the transmission line. Which of the following may be said about the steady state terminal voltage and field current of the generator?
A. The magnitude of terminal voltage decreases,and the field current does not change
B. The magnitude of terminal voltage increases,and the field current does not change
C. The magnitude of terminal voltage increases,and the field current increases
D. The magnitude of terminal voltage does not change,and the field current decreases
[7] Consider a three-phase, 50 Hz, 11 kV distribution system. Each of the conductors is suspended by an insulator string having two identical porcelain insulators. The self capacitance of the insulator is 5 times the shunt capacitance between the link and the ground, as shown in the figure. The voltage across the two insulators is
A. e1=3.74kV,e2=2.61kV
B. e1=3.46kV,e2=2.89kV
C. e1=6.0kV,e2=4.23kV
D. e1=5.5kV,e2=5.5kV
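Question [7] can likewise be checked from the current balance at the junction of the two insulator units (a sketch; which unit the figure labels e1 versus e2 is not recoverable from this text):

```python
# Two-unit insulator string: self capacitance C = 5 * Cg, where Cg is the
# shunt capacitance from the junction to ground. Phase-to-ground voltage
# of an 11 kV system:
V = 11.0 / 3 ** 0.5           # about 6.35 kV

# Current balance at the junction (tower grounded, conductor at V):
# C*(V - vj) = C*vj + (C/5)*vj  =>  vj = V / 2.2
vj = V / 2.2

e_tower = vj                  # across the unit nearest the tower
e_conductor = V - vj          # across the unit nearest the conductor
# e_conductor is about 3.46 kV and e_tower about 2.89 kV, matching option B.
```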
[8] Consider a three-core, three-phase, 50 Hz, 11 kV cable whose conductors are denoted as R, Y and B in the figure. The inter-phase capacitance (C1) between each pair of conductors is 0.2 μF and the capacitance between each line conductor and the sheath is 0.4 μF. The per-phase charging current is
A. 2.0A
B. 2.4A
C. 2.7A
D. 3.5A
[9] For the power system shown in the figure below, the specifications of the components are the following:
G1: 25kV,100MVA,X=9%
G2: 25kV,100MVA,X=9%
T1: 25kV/220kV,90MVA,X=12%
T2: 220kV/25kV,90MVA,X=12%
Line1: 220kV, X=150 ohms.
Choose 25 kV as the base voltage at the generator G1 and 200 MVA as the MVA base. The impedance diagram is ..... Options A, B, C, D are given below.
[10] For enhancing the power transmission in a long EHV transmission line, the most preferred method is to connect a
A. Series inductive compensator in the line
B. Shunt inductive compensator at the receiving end
C. Series capacitive compensator in the line
D. Shunt capacitive compensator at the sending end
Manipulating Lists by Their Indices
You can always reset one or more pieces of a list by doing an assignment to the corresponding parts.
It is sometimes useful to think of a nested list as being laid out in space, with each element at a coordinate position given by its indices. There is then a direct geometrical interpretation for Part specifications. If a given index is a single integer, then it represents extracting a single slice in the corresponding dimension; the result is then the collection of elements obtained by slicing in each successive dimension.
Part is set up to make it easy to pick out structured slices of nested lists. Sometimes, however, you may want to pick out arbitrary collections of individual parts. You can do this conveniently with Extract.
Getting slices versus lists of individual parts.
An important feature of Extract is that it takes lists of part positions in the same form as they are returned by functions like Position.
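Mathematica's own notation is elided in this extract, but the slice-versus-positions distinction can be sketched in ordinary Python (an analogy, not Wolfram Language syntax):

```python
m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Structured slice: rows 0-1 crossed with columns 1-2 (like Part with spans)
block = [row[1:3] for row in m[0:2]]       # [[2, 3], [5, 6]]

# Arbitrary individual parts: a list of positions (like Extract with a
# list of part positions, the same form returned by Position)
positions = [(0, 0), (1, 2), (2, 1)]
parts = [m[i][j] for i, j in positions]    # [1, 6, 8]
```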
Taking and dropping sequences of elements in lists.
Much like Part, Take and Drop can be viewed as picking out sequences of slices at successive levels in a nested list; you can use them to work with blocks of elements in arrays.
Adding and deleting elements in lists.
It is important to understand that ReplacePart always creates a new list. It does not modify a list that has already been assigned to a symbol the way a direct part assignment does.
A textbook on linear algebra where involutions on linear spaces are considered
Let us call an involution on a complex linear space $X$ an arbitrary $\mathbb R$-linear map $x\in X\mapsto x^*\in X$ that satisfies the following identities: $$ x^{**}=x,\qquad (\lambda\cdot x)^*=\
overline{\lambda}\cdot x^*\qquad (\lambda\in{\mathbb C},\quad x\in X). $$ This is strange, but I can't find a textbook on linear algebra where this notion is considered. Can anybody recommend
something? I need a reference for some elementary facts like "$X$ is a complexification of the subspace of real elements" (i.e. elements satisfying the equality $x^*=x$), or "the dual space (of $\
mathbb C$-linear functionals) also has a natural involution", and so on.
I posted this in math.stackexchange, but without success.
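For concreteness, the elementary facts asked for can be written down directly (a sketch that follows from the axioms above; set $X_{\mathbb R} = \{x \in X : x^* = x\}$):

```latex
x \;=\; \underbrace{\tfrac{1}{2}\,(x + x^*)}_{\in\, X_{\mathbb R}}
\;+\; i\cdot\underbrace{\tfrac{1}{2i}\,(x - x^*)}_{\in\, X_{\mathbb R}},
\qquad\qquad
f^*(x) \;:=\; \overline{f(x^*)} \quad (f \in X').
```

The first identity exhibits $X = X_{\mathbb R} \oplus iX_{\mathbb R}$ as the complexification of the subspace of real elements; the second defines an involution of the same kind on the dual space of $\mathbb C$-linear functionals, since one checks directly that $f^{**}=f$ and $(\lambda f)^* = \overline{\lambda} f^*$.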
reference-request linear-algebra
I can't help with your actual question, since I never saw this in a linear algebra textbook, but I saw these mentioned in passing in some monograph on an advanced topic, as "real structures on a
complex vector space". Perhaps that phrase will yield more search results – Yemon Choi Jul 7 '13 at 6:16
You don't remember details? Title, author? – Sergei Akbarov Jul 7 '13 at 6:20
I think it was somewhere in the middle of John Roe's book on the Atiyah-Singer Index Theorem - I don't even remember why it was mentioned or what role it played, to be honest, this was at least
five years ago. Sorry. – Yemon Choi Jul 7 '13 at 6:24
This is indeed the notion of a real structure on a complex vector space, and does the opposite of complexification. If you replace your first equation with $x^{**} = -x$ then you get a quaternionic structure on a complex vector space. These are discussed in Adams' book 'Lie Groups' for representations, but everything he says is relevant for mere vector spaces as well. He calls both of the above objects 'structure maps'. I suppose I could put this as an answer but I've typed it here now. – Paul Reynolds Jul 7 '13 at 8:48
It should be apparent to you by now that there is no standard linear algebra textbook which does what you need, and if you find some more or less obscure counterexample it will be of little use; that someone somewhere wrote this down does not mean that giving that as a reference will be useful for anyone! Maybe it is worth stating the properties you want, possibly —since the proofs should not be hard— omitting all details about the proofs? – Mariano Suárez-Alvarez♦ Jul 9 '13 at 10:01
3 Answers
Fulton and Harris, Representation Theory: A First Course, p. 444, section 26.3, the definition of real representation.
I do not understand. I need a source for reference, the book must contain a list of properties of this notion. – Sergei Akbarov Jul 7 '13 at 11:05
Which properties do you need? As I wrote above, a complex vector space is the complexification of a real vector space just when it bears a conjugate linear involution operator; the real vector space is the set of fixed points of the involution. So the category of conjugate linear involutions is isomorphic to the category of real vector spaces, and the properties are just those of real vector spaces. There can't be any more or fewer properties. – Ben McKay Jul 7 '13 at 13:39
"a complex vector space is the complexification of a real vector space just when it bears a conjugate linear involution operator" -- I need this and the construction of involution on the
dual space. But I think it's not nice to write this without reference. Or you mean that this is stated in the Fulton-Harris book? – Sergei Akbarov Jul 7 '13 at 13:51
This equivalence is stated in Fulton and Harris, p. 444. But it is easy to prove anyway. – Ben McKay Jul 7 '13 at 13:54
It is easy to prove of course, but it is not good to write this proof in a paper, or to use this result without reference. Ben, I don't like the formulation of this fact that Fulton and Harris give at p. 444. This is not for a normal mathematician; you must have some background in representation theory to recognize what I need in what they say. I believe there are texts with simpler formulations. And, by the way, I need also the constructions of real and imaginary parts, and their properties. Of course this is trivial, but I'd like to have a book on my table. – Sergei Akbarov Jul 7 '13 at 14:07
Perhaps you might be interested in Section 4.3 of Linear Algebra and Geometry by Shafarevich and Reznikov (which is my favourite linear algebra textbook, by the way), in which a complex structure on a vector space is introduced in a coordinate-free way starting on page 150.
Remizov, actually... I deduce from this mistake that you are Russian. Приятно познакомиться, сейчас погляжу. ("Nice to meet you, I'll have a look now.") – Sergei Akbarov Jul 7 '13 at 11:29
I think this is not what I need. They introduce the operation of taking the complex conjugate of a vector, but only in the case when $X$ is a complexification of a given real vector space $Y$, not for an arbitrary complex vector space $X$... Or did I miss something? – Sergei Akbarov Jul 7 '13 at 11:59
A complex vector space is the complexification of a real vector space just when it bears a conjugate linear involution operator; the real vector space is the set of fixed points of
the involution. – Ben McKay Jul 7 '13 at 13:37
No doubts, but which book to cite? – Sergei Akbarov Jul 7 '13 at 13:47
The result you want is a special case of Lemme 26, V, no 21 in Serre, Groupes Algebriques... 1959; also Borel, Linear Algebraic Groups, I, 14.1; also Milne's online Algebraic Geometry notes, 16.14.
Excuse me, the numbers I, 14.1 in the reference to Borel, what do they mean? I can't find... – Sergei Akbarov Jul 9 '13 at 11:05
Cryptology ePrint Archive: Report 2005/393
Multivariate Quadratic Polynomials in Public Key Cryptography

Christopher Wolf

Abstract: This thesis gives an overview of Multivariate Quadratic polynomial equations and their use in public key cryptography.
In the first chapter, some general terms of cryptography are introduced. In particular, the need for public key cryptography and alternative schemes is motivated, i.e., systems which neither use
factoring (like RSA, Rivest-Shamir-Adleman) nor the discrete logarithm (like ECC, elliptic curve cryptography).
This is followed by a brief introduction of finite fields and a general discussion about Multivariate Quadratic systems of equations and ways of representing them. In this context, affine
transformations and their representations are also discussed. After these tools are introduced, they are used to show how Multivariate Quadratic equations can be used for signature and encryption
applications. In addition, the problem of Multivariate Quadratic polynomial equations is put into perspective and a link with the theory of NP-completeness is established. The second chapter
concludes with the two related problems "isomorphism of polynomials" and "minimal rank" of the sum of matrices. Both prove useful in the cryptanalysis of Multivariate Quadratic systems.
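As a toy illustration of the underlying MQ problem (a sketch only; this is not one of the thesis's trapdoor constructions): a random system of m quadratic polynomials in n variables over GF(2), invertible here only because n is tiny enough for brute force.

```python
import itertools
import random

random.seed(1)
n, m = 6, 6  # toy sizes; real MQ schemes use many more variables

def random_poly():
    """Random quadratic polynomial over GF(2): terms x_i*x_j (i <= j),
    linear terms x_i, and a constant."""
    quad = {(i, j): random.randrange(2) for i in range(n) for j in range(i, n)}
    lin = [random.randrange(2) for _ in range(n)]
    const = random.randrange(2)
    return quad, lin, const

def evaluate(poly, x):
    quad, lin, const = poly
    v = const
    for (i, j), c in quad.items():
        v ^= c & x[i] & x[j]
    for i, c in enumerate(lin):
        v ^= c & x[i]
    return v

system = [random_poly() for _ in range(m)]
secret = tuple(random.randrange(2) for _ in range(n))
target = [evaluate(p, secret) for p in system]

# Brute-force inversion: feasible only because n is tiny.
solutions = [x for x in itertools.product((0, 1), repeat=n)
             if all(evaluate(p, x) == t for p, t in zip(system, target))]
```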
The main part of this thesis is about concrete trapdoors for the problem of Multivariate Quadratic public key systems. We can show that all such systems fall in one of the following four classes:
unbalanced oil and vinegar systems (UOV), stepwise triangular systems (STS), Matsumoto-Imai Scheme A (MIA), and hidden field equations (HFE). Moreover, we demonstrate the use of several modifiers. In
order to evaluate the security of these four basic trapdoors and their modifiers, we review some cryptanalytic results. In particular, we were able to develop our own contributions in this field by
demonstrating an affine approximation attack and an attack using Gröbner basis computations against the UOV class. Moreover, we derived a key recovery and inversion attack against the STS class.
Using our knowledge of the HFE class, we develop two secure versions of the signature scheme Quartz.
Another important part of this thesis is the study of the key space of Multivariate Quadratic public key systems. Using special classes of affine transformations, denoted "sustainers", we are able
to show that all four basic classes have some redundancy in their key spaces and hence, have a smaller key space than previously expected. In particular for the UOV and the STS class, this reduction
proves quite dramatic. For HFE and MIA, we only find some minor redundancies. Moreover, we are able to show that our results for MIA are the only ones possible, i.e., there are no other redundancies
than the one we describe in this thesis. In addition, we extend our results to several important variations of HFE and MIA, namely HFE-, HFEv, HFEv-, and MIA-. They have been used in practice for the
construction of signature schemes, namely Quartz and Sflash.
In order to demonstrate the practical relevance of Multivariate Quadratic constructions and also of our taxonomy, we show some concrete examples. In particular, we consider the NESSIE submissions
Flash, Sflash, and Quartz and discuss their advantages and disadvantages. Moreover, we describe some more recent developments, namely the STS-based schemes enhanced TTS, Tractable Rational Maps, and
Rainbow. Then we move on to some application domains for Multivariate Quadratic public key systems. In particular, we see applications in the area of product activation keys, electronic stamps and
fast one-way functions. Finally, we suggest some new schemes. In particular, we give a generalisation of MIA to odd characteristics and also investigate some other trapdoors like STS and UOV with the
branching and the homogenisation modifiers.
All in all, we believe that Multivariate Quadratic polynomial systems are a very practical solution to the problem of public key cryptography. At present, it is not possible to use them for
encryption. However, we are confident that it will be possible to overcome this problem soon and use Multivariate Quadratic constructions both for encrypting and signing.
Category / Keywords: public-key cryptography / multivariate quadratic, overview, Quartz, Sflash, taxonomy, signature schemes, alternative schemes, isomorphism of polynomials, one-way functions, cryptanalysis
Publication Info: Ph.D. thesis, Katholieke Universiteit Leuven, B. Preneel (supervisor), 156+xxiv pages, November 2005
Date: received 31 Oct 2005, last revised 1 Nov 2005
Contact author: Christopher Wolf at esat kuleuven be
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20051101:131045 (All versions of this report)
Making Primes More Random
January 26, 2013
Mathematical tricks
Ben Green grew up about 100 meters from the house in Bristol where Paul Dirac was born and lived until age 10. He has followed Dirac to Cambridge University as a Fellow of Trinity College, which
adjoins St. John’s College where Dirac studied and taught. Both also held appointments at Princeton. Green is known for many things including, proving with Terence Tao that for every natural number $
{k}$ there exist ${k}$-many primes in arithmetic progression. That is, there is a prime ${p}$ and an integer ${c}$ such that for all ${i < k}$, the number ${p + ic}$ is also prime.
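For small ${k}$ the statement is easy to witness by brute-force search; here is a minimal sketch:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_in_progression(k, limit=1000):
    """Return the first (p, c) with p, p+c, ..., p+(k-1)c all prime."""
    for p in range(2, limit):
        if not is_prime(p):
            continue
        for c in range(1, limit):
            if all(is_prime(p + i * c) for i in range(k)):
                return p, c
    return None

# k = 3 yields (3, 2): the primes 3, 5, 7.
# k = 4 yields (5, 6): the primes 5, 11, 17, 23.
```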
Today I want to talk about mathematical tricks—not magic, not games, but small insights that are very useful. Tricks.
There are two kinds of tricks. One is a device for understanding a theorem, the other is a shortcut to getting it in the first place. The Dirac belt trick is the former kind: it illustrates physical
systems that are preserved under two turns of a circle—a 720-degree rotation—but not under one turn. The Dirac delta function is the latter kind: It allows you to postulate a function ${f}$ that is
zero except at one point, such that ${\int f = 1}$, and also whose Fourier transform is the constant-1 function. This can speed up calculations, and certain “meta-theorems” often remove the need to
do anything more to make a proof involving such calculations rigorous.
Green’s co-author Tao maintains tricks as a category on his blog. Both of them assisted Tim Gowers in setting up the Tricki Wiki for collecting and discussing mathematical tricks, along with Alex
Frolkin and Olof Sisask. In a followup, Gowers asked whether it needed more examples of research-level tricks compared to undergraduate-level tricks. Let’s talk a little more about what tricks are.
I really like “tricks.” One recurrent theme in mathematics is that big results are easy to find, usually are well known—especially to experts—and beginners can find them in almost any textbook. This
is true whether the area is graph theory, number theory, or complexity theory; real analysis, complex analysis, or functional analysis; algebraic geometry, differential geometry, or topology. How did
the last manage to avoid being called “-theory” or “-analysis”?
A trick is different from an ordinary “tool” in that it has some element of surprise. It achieves something by a way that is at first counter-intuitive. That’s what makes it fun. Magic tricks without
surprise, without the watcher’s expectation being deceived, are nothing.
The trouble with tricks is that they often have no name, or an un-descriptive name, as with the “W-trick” below, and consequently are hard to find. The tricks often are not written down, or if they
are, they are hard to find even if you have a general idea of what to search for. A trick may be used in a long proof, causing a causal reader to miss it. Or even if you see them, you may not
internalize them and add them to your own bag. Another way that tricks are learned is through oral conversations, lectures, and technical talks. When I teach a class I try to highlight the tricks—at
least I try.
The Tricki Wiki was an attempt to make tricks, mathematical tricks, better documented. I thought it was, and still is, a brilliant idea; yet it seems to not have become as popular as it should have.
Maybe in time it will.
For now let me explain one trick that is simple, useful, and powerful: the W-trick. Then Ken will ask whether research-level power can be obtained by extending an old and simple trick of arithmetic.
The W-Trick
The trouble with the prime numbers is that while they embody considerable randomness, they are biased in many obvious ways. For one they are all odd, except the poor number ${2}$. Even among the odd
numbers the bias is progressive: only one prime is divisible by ${5}$, and so on. This messes up lots of simple arguments, and makes some proofs about primes extra complicated. A second trouble is
that their density is close-but-not-quite constant: the number of primes below ${n}$ goes as ${n/\log n}$.
Enter the W-trick. Consider the mapping
$\displaystyle x \mapsto Wx+b$
where ${W}$ is the product of all the primes less than some threshold ${w}$ and ${b}$ is relatively prime to ${W}$. Often ${b=1}$ is just fine.
The numbers that primes map to are called “W-tricked primes.” They behave more like “pseudorandom numbers” than primes do. W-tricked primes are well distributed modulo ${q}$ provided ${w}$ is greater
than or equal to all the prime factors of ${q}$.
The key to using W-tricked primes in the Green-Tao theorem on arithmetic progressions is that such progressions are preserved by this linear map: if the W-tricked primes have a progression of length
${3}$, for example, then so do the real primes.
Recently I have used this same idea to show that W-tricked numbers are more likely to be relatively prime: to have no factor in common. If ${x}$ and ${y}$ are chosen randomly in a long interval, then it is well known that they will be relatively prime about ${6/{\pi^{2}}}$ of the time, about ${61\%}$. But W-tricked numbers will be relatively prime almost all the time: as ${w}$ goes to infinity the probability of a common factor goes to zero.
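This coprimality effect is easy to observe empirically. The sketch below compares plain pairs with W-tricked pairs for the small threshold w = 11, so W = 2·3·5·7 = 210 and b = 1:

```python
from math import gcd

W = 2 * 3 * 5 * 7   # product of the primes below w = 11

def coprime_fraction(values):
    """Fraction of ordered pairs from `values` with gcd 1."""
    pairs = [(a, b) for a in values for b in values]
    return sum(gcd(a, b) == 1 for a, b in pairs) / len(pairs)

plain = coprime_fraction(range(1, 201))
tricked = coprime_fraction([W * x + 1 for x in range(1, 201)])

# plain   is about 0.61 (the classical 6/pi^2),
# tricked is about 0.97: a common factor must exceed w, so it is rare.
```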
Duality is an umbrella term for a host of tricks. For example, the proof of the Four-Color Map Theorem begins by forming the dual graph in which every “country” becomes a vertex, and two vertices are
connected if their countries share a boundary. Repeating the process would restore the original map, which makes the graph strictly a dual. In linear programming and other optimization techniques the
presence of dual programs is often important. The Fourier transform gives a dual form to any function, and the Hadamard transform provides an important dual basis in quantum computation.
Change-of-basis is another umbrella term for many tricks in linear algebra—or maybe many of them are just tools.
A kind of duality may operate with the Green-Tao theorem. Instead of the set of primes being “tricked up” to be denser, one can also “trick down” the integers into a suitably pseudorandom subset ${X}
$. Whereas the primes have density approaching zero in ${\mathbb{N}}$, their density in ${X}$ may stay above some constant. The trick is set up so that one can carry over Szemerédi's Theorem that
every subset of ${\mathbb{N}}$ of positive density has arbitrarily long arithmetic progressions, from ${\mathbb{N}}$ to ${X}$. This view was developed in a paper by Omer Reingold, Luca Trevisan,
Madhur Tulsiani, and Salil Vadhan, people we know well in complexity theory. Gowers proved something similar independently.
Often the dual is “the same” as the original but just viewed in a mirror. This may be the case here, since this is actually proved by enlarging ${P \subset X}$ to ${M \subset \mathbb{N}}$ where ${M}$
stays having positive density in ${\mathbb{N}}$. The W-trick is a concrete instance of the latter. Still, these papers show that the dual view of the Green-Tao “transference principle” has its own
uses, all derived from tools of pseudorandomness developed in complexity theory. Cool and worth deeper study.
The Square Trick
We have previously discussed an old trick known to the Babylonians for reducing any integer multiplication to the lookup of three squares, or alternatively two squares:
$\displaystyle a\cdot b = \frac{1}{2} ((a+b)^2 - a^2 - b^2) = \frac{1}{4}((a+b)^2 - (a-b)^2).$
Ken uses this as a first example of a Turing reduction that needs more than one oracle call. In the post I stopped at consulting a table of large enough squares, but Ken wants to go further with
simulating the squaring part. The hitch is in the denominators of the fractions—if we could pretend they don’t exist, we’d be home.
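Concretely, the two-squares form of the identity above turns every multiplication into two table lookups, a subtraction, and a division by 4; a minimal sketch:

```python
LIMIT = 2000
squares = [k * k for k in range(2 * LIMIT)]   # precomputed "table of squares"

def mul(a, b):
    """Multiply 0 <= a, b < LIMIT using only the squares table.

    (a+b)^2 - (a-b)^2 = 4ab, so the difference of two lookups,
    divided by 4, recovers the product."""
    return (squares[a + b] - squares[abs(a - b)]) // 4

# mul(12, 34) == 408
```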
The reason is another trick that allows replacing squaring by shifting. Regard the finite field ${\text{GF}(2^n)}$ as a vector space of dimension ${n}$ over ${\mathbb{Z}_2}$. A vector ${v}$ in this
space is normal if its iterated squares
$\displaystyle v, v^2, v^4 = (v^2)^2, v^8 = (v^4)^2, v^{16}, v^{32}, \dots, v^{2^{n-1}}$
are all linearly independent, and hence form a basis of the vector space. Then for any field element ${w}$ represented over this basis, the left-shift (wrapping the first bit around to be the last)
represents ${w^2}$. This trick is exploited to implement codes that do frequent squaring.
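Here is a minimal verification of the shift behavior in GF(2^3), assuming the reduction polynomial x^3 + x + 1 and the normal element α = x + 1 (whether the rotation reads as a left or a right shift depends on how the basis is ordered):

```python
def gf8_mul(a, b):
    """Multiply two GF(2^3) elements in the polynomial basis, mod x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011      # reduce by x^3 + x + 1
    return r

alpha = 0b011                                # alpha = x + 1 is a normal element
basis = [alpha, gf8_mul(alpha, alpha)]
basis.append(gf8_mul(basis[1], basis[1]))    # basis = [alpha, alpha^2, alpha^4]

def from_normal(bits):
    v = 0
    for bit, e in zip(bits, basis):
        if bit:
            v ^= e
    return v

def to_normal(w):
    """Coordinates of w in the normal basis, found by brute force."""
    for c in range(8):
        bits = [(c >> i) & 1 for i in range(3)]
        if from_normal(bits) == w:
            return bits
    raise ValueError(w)

# Squaring is a cyclic shift of normal-basis coordinates:
# (c0, c1, c2) -> (c2, c0, c1), since alpha^8 = alpha.
for w in range(8):
    c0, c1, c2 = to_normal(w)
    assert from_normal([c2, c0, c1]) == gf8_mul(w, w)
```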
Alas this only works in characteristic 2, so that the final dividing by 2 or 4 in the formulas for ${a\cdot b}$ is like dividing by zero. Over ${\text{GF}(p^n)}$ the corresponding concept involves $
{p}$-th powers of ${v}$, not squares. The vanishing of multiples of the characteristic is needed anyway so that expressions of the form ${(w_1 z_1 + \cdots + w_n z_n)^p}$ leave only the terms in $
{w_i^p z_i^p}$, which is what gives the shifting behavior for ${p=2}$ to begin with. Unfortunately, the formula above for ${a\cdot b}$ becomes undefined in characteristic 2. Ken still wonders,
though, whether there is some consistent way to postulate it, so that—as with the Dirac delta function—it can be used in computations.
Open Problems
The general question is how do we document and pass on tricks to others? This is an important issue especially for new researchers to an area of mathematics.
A specific question that I think arises is, can we create a “non-linear” W-trick? The idea would be to build a nonlinear mapping that allowed us to map primes to another set and use the existence of
progressions there to conclude something useful about primes themselves? Is there any way to create a W-tricked set of primes that allows progress on open questions in the structure of primes? I am
sure that Green and Tao, among many others, have thought about this. But perhaps you might have a new idea here?
1. January 26, 2013 10:54 am
There is a study called generalized primes that investigates in a more general way the relationship between arbitrary things called primes and all the things called composites that can be
composed from them according to specified laws of composition. Comparisons can be made between different numerical systems or any kind of combinatorial species that one might imagine. I seem to
recall at least one old monograph by Rademacher on the subject.
My own fascination with questions like that led me many years ago to the Riff & Rote trick, a special case of the Make A Picture (MAP) trick, and there is a bit on that here.
2. January 26, 2013 6:04 pm
“The numbers that primes map to are called ‘W-tricked primes.’ ”
Or are they the numbers mapped to the primes?
3. January 27, 2013 2:03 am
“The trouble with tricks is that they often have no name, or an un-descriptive name… and consequently are hard to find.”
I probably emphasize learning the names of things more than any other instructor I know of, for the purposes of (a) enabling conversation, (b) engaging the student’s language-center, and (c)
writing justifications. Sometimes in desperation I use very obscure names or make up my own. Interesting that an added point is to make topics searchable in the current context.
4. January 27, 2013 11:48 am
Some mathematicians are good at finding tricks, others at using them, and yet others at converting them into theorems.
5. January 27, 2013 1:55 pm
Geometric (thermo)dynamics — both classical and quantum — is replete with natural dualities, of which a partial (and not-too-formal) list is:
$\begin{array}{rcl}
\hline
\text{Riemannian ``sharp'' } \sharp & \leftrightarrows & \text{Riemannian ``flat'' } \flat \\
\text{symplectic ``sharp'' } \sharp & \leftrightarrows & \text{symplectic ``flat'' } \flat \\
\text{complex structure (tangents)} & \leftrightarrows & \text{complex structure (forms)} \\
\hline
\text{Hamiltonian flows} & \leftrightarrows & \text{Carmichael unravellings} \\
\text{local informatic causality} & \leftrightarrows & \text{nonlocal quantum entanglement} \\
\text{thermodynamic potentials} & \leftrightarrows & \text{thermodynamic densities} \\
\hline
n\text{-forms} & \leftrightarrows & \text{Hodge-dual } (n-k)\text{-forms} \\
\text{exterior derivatives } d & \leftrightarrows & \text{exterior coderivatives } d^\ast \\
\hline
\end{array}$
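For readers less familiar with the first pair: these are the musical isomorphisms of a Riemannian metric $g$, defined by

$v^\flat(w) = g(v, w), \qquad g(\alpha^\sharp, w) = \alpha(w),$

so $\flat$ lowers an index (sending tangent vectors to one-forms) and $\sharp$ raises it; the symplectic case replaces $g$ by the symplectic form $\omega$.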
These interlocking dualities serve to “geometrize” various wonderful conjectures in computational complexity theory (for example, that matrix multiplication and matrix inversion have the same
numerical complexity). Moreover, these dualities provide a unifying mathematical naturality to texts that otherwise seem disparate:
• Jack Lee’s Introduction to Smooth Manifolds
• Vladimir Arnold’s Mathematical Methods of Classical Mechanics
• Michael Nielsen and Isaac Chuang’s Quantum Computation and Quantum Information
• Howard Carmichael’s Statistical Methods in Quantum Optics (I and II)
• BSL’s Transport Phenomena
Thus what initially seems like a mathematical “trick” ofttimes can be appreciated more broadly.
In summary, the progression $\text{``tricks''}\Rightarrow\text{dualities}\Rightarrow\text{naturality}\Rightarrow\text{unity}$ is fun to unravel :)
□ January 27, 2013 4:42 pm
Golly, here are two further crucial dualities:
• Ito increments (informatic naturality) $\leftrightarrows$ Stratonovich increments (geometric naturality)
• the pushforward of trajectories (and tangents) $\leftrightarrows$ the pullback of functions (and $n$-forms)
So ubiquitous are these dualities that they have given rise not just to new mathematical notations, but even new art (per Hoffman’s Commutative Diagrams in the Fine Arts, for example).
The standard texts of the 20th century seldom employ diagrammatic notation (e.g. the above references by Arnold, Nielsen & Chuang, Carmichael, BSL) … this constitutes a substantial — and
needless — barrier to cultivating a unified appreciation of these texts … because it makes these texts look like unstructured compendia of tricks … when in fact they are no such thing!
☆ January 27, 2013 7:17 pm
Two more dualizing transforms are time reversal and Legendre transforms (formally both are involutions), which are associated to the celebrated computational/physical “tricks” of Onsager
reciprocity and the duality of (entropy)$\,\leftrightarrows\,$(free energy).
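Concretely (a standard identity, not specific to the discussion above): the Legendre transform is the involution

$f^\ast(p) = \sup_x \, [\, p x - f(x) \,],$

and in thermodynamics it carries the internal energy $U(S, V)$ to the Helmholtz free energy $F(T, V) = U - T S$, with $T = \partial U / \partial S$, which is exactly the (entropy) $\leftrightarrows$ (free energy) duality just mentioned.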
6. January 28, 2013 1:40 am
You probably forgot that topology used to be called analysis situs.
□ January 28, 2013 10:57 am
As Poincaré was aware, analysis situs – as he called topology – is pervasive in math. I think that's why it ended up with a single-word name. Most of the other subjects Dick quotes actually make use of topology.
7. January 28, 2013 11:40 am
One of my favorite tricks — it seems almost too tricky to be true — is the Propositions As Types Analogy. And I seem to see hints that the 2-part analogy can be extended to a 3-part analogy.
• proof hint : proof : proposition :: untyped term : typed term : type
See the “Propositions As Types” link on this page.
□ January 29, 2013 7:49 pm
For me, the proof-program equivalence is much more than a trick – it’s as fundamental as the wave-particle duality in quantum mechanics. It’s no coincidence that in both domains, uncertainty
holds. In physics we have Heisenberg’s relations and in complexity theory, the impossible separation of some well-known complexity classes.
☆ January 30, 2013 1:40 pm
Perhaps, but other times I think that mode-node duality, and all its ilk, are just a manifestation — the word implies being struck with a fist, or maybe just a facepalm, but never mind
that now — of the fact that our currently received concepts are just too clumsy to conceive of reality as it really is.
☆ January 30, 2013 4:08 pm
You’re probably right. In any case, making sense of even the above-mentioned analogy requires a kind of math that doesn’t exist as of yet.
8. January 31, 2013 12:50 am
It’s been a long time since I could consider myself (almost) a mathematician. I am wondering whether, since W-tricked numbers are relatively prime almost all the time, there might be some way to use them to improve upon the Miller-Rabin primality test?
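For readers who want the baseline being referred to, here is a minimal sketch of the standard Miller-Rabin test in Python (the unmodified textbook algorithm, with nothing W-tricked about it):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: False means n is certainly composite;
    True means n is prime with error probability <= 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):          # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                          # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)       # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a witnesses that n is composite
    return True
```

The question above then asks whether restricting attention to residues coprime to W could sharpen or speed up this baseline.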
9. January 31, 2013 4:46 pm
Dualities like these point to a higher unity — a calculus of forms whose expressions can be read in two different ways by switching the meanings assigned to a pair of primitive terms.
□ February 6, 2013 11:12 am
Update —
C.S. Peirce explored a variety of De Morgan type dualities in logic that he treated on analogy with the dualities in projective geometry. This gave rise to abstract formal systems where the
initial constants — and later on their geometric or graph-theoretic representations — had no fixed meaning but could be given dual interpretations in logic.
It was in this context that his systems of logical graphs developed, issuing in dual interpretations of the same formal axioms that Peirce referred to as “entitative graphs” and “existential
graphs”. It was only the existential interpretation that he developed very far, since the extension from propositional to relational calculus seemed easier to visualize there, but whether
there is some truly logical reason for the symmetry to break at that point is not yet known to me.
When I have explored how Peirce’s way of doing things might be extended to “differential logic” I have run into many themes that are analogous to differential geometry over GF(2). Naturally,
there are many surprises.
Results 1 - 10 of 51
- In Proceedings of the 4th Annual Symposium on Combinatorial Pattern Matching (CPM), volume 684 of Lecture Notes in Computer Science, 1993
Cited by 75 (2 self)
A perfect single tandem repeat is defined as a nonempty string that can be divided into two identical substrings, e.g. abcabc. An approximate single tandem repeat is one in which the substrings are
similar, but not identical, e.g. abcdaacd.
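To illustrate the definition (this is just the definition spelled out, not the paper's detection algorithm):

```python
def is_tandem_repeat(s):
    """A perfect single tandem repeat is a nonempty string of the form xx."""
    half, rem = divmod(len(s), 2)
    return len(s) > 0 and rem == 0 and s[:half] == s[half:]
```

For example, is_tandem_repeat("abcabc") is True, while the approximate repeat "abcdaacd" fails the exact test.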
, 1994
Cited by 71 (5 self)
The problem of finding sections of code that either are identical or are related by the systematic renaming of variables or constants can be modeled in terms of parameterized strings (p-strings) and parameterized matches (p-matches) [Baker93a]. P-strings are strings over two alphabets, one of which represents parameters. Two p-strings are a parameterized match (p-match) if one p-string is obtained by renaming the parameters of the other by a one-to-one function. In this paper, we investigate parameterized pattern matching via parameterized suffix trees (p-suffix trees), defined in [Baker93a]. We give two algorithms for constructing p-suffix trees: one (eager) that runs in linear time for fixed alphabets, and another that uses auxiliary data structures and runs in O(n log n) time for variable alphabets, where n is input length. We show that using a p-suffix tree for a pattern p-string P, it is possible to search for all p-matches of P within a text p-string T in space linear in |P|...
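A direct way to check the p-match relation between two equal-length strings, spelling out the definition rather than the paper's p-suffix-tree machinery (the `params` argument, which marks which symbols belong to the parameter alphabet, is an assumption of this sketch):

```python
def p_match(s, t, params=None):
    """Do s and t match under a one-to-one renaming of parameter symbols?

    params: the set of parameter symbols; symbols outside it must match exactly.
    If params is None, every symbol is treated as a parameter.
    """
    if len(s) != len(t):
        return False
    fwd, bwd = {}, {}                    # the renaming and its inverse
    for a, b in zip(s, t):
        if params is not None and (a not in params or b not in params):
            if a != b:                   # fixed-alphabet symbols must agree
                return False
            continue
        if fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False                 # renaming fails to be one-to-one
    return True
```

So "abcabc" p-matches "xyzxyz", but "ab" does not p-match "xx", since a renaming must be one-to-one.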
, 2002
Cited by 56 (4 self)
The classical algorithm for computing the similarity between two sequences [36, 39] uses a dynamic programming matrix, and compares two strings of size n in O(n²) time. We address the challenge of computing the similarity of two strings in sub-quadratic time, for metrics which use a scoring matrix of unrestricted weights. Our algorithm applies to both local and global alignment computations. The speed-up is achieved by dividing the dynamic programming matrix into variable sized blocks, as induced by Lempel-Ziv parsing of both strings, and utilizing the inherent periodic nature of both strings. This leads to an O(n²/log n) algorithm for an input of constant alphabet size. For most texts, the time complexity is actually O(hn²/log n), where h ≤ 1 is the entropy of the text.
- In Symposium on Foundations of Computer Science, 1999
Cited by 50 (4 self)
A repetition in a word is a subword with period at most half of the subword length. We study maximal repetitions occurring in a word w, that is, those for which any extended subword of w has a bigger period. The set of such repetitions represents in a compact way all repetitions in w. We first prove a combinatorial result asserting that the sum of exponents of all maximal repetitions of a word of length n is bounded by a linear function in n. This implies, in particular, that there is only a linear number of maximal repetitions in a word. This allows us to construct a linear-time algorithm for finding all maximal repetitions. Some consequences and applications of these results are discussed, as well as related works.
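A naive quadratic check of the definitions involved (the paper's point being that all maximal repetitions can in fact be found in linear time):

```python
def smallest_period(s):
    """Smallest p >= 1 with s[i] == s[i - p] for all i >= p
    (returns len(s) if s has no shorter period)."""
    n = len(s)
    for p in range(1, n):
        if all(s[i] == s[i - p] for i in range(p, n)):
            return p
    return n

def is_repetition(s):
    """A repetition: a word whose period is at most half its length."""
    return len(s) > 0 and smallest_period(s) <= len(s) // 2
```

For instance, "abab" has period 2 and is a repetition, while "abcab" has period 3 and is not.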
- TREES, AND SEQUENCES: COMPUTER SCIENCE AND COMPUTATIONAL BIOLOGY, 1998
Cited by 34 (2 self)
A tandem repeat (or square) is a string αα, where α is a nonempty string. We present an O(|S|)-time algorithm that operates on the suffix tree T(S) for a string S, finding and marking the endpoint in T(S) of every tandem repeat that occurs in S. This decorated suffix tree implicitly represents all occurrences of tandem repeats in S, and can be used to efficiently solve many questions concerning tandem repeats and tandem arrays in S. This improves and generalizes several prior efforts to efficiently capture large subsets of tandem repeats.
, 1992
Cited by 32 (13 self)
Alberto Apostolico (Purdue University and Università di Padova), Dany Breslauer (Columbia University), Zvi Galil (Columbia University and Tel-Aviv University). Summary of results. Optimal concurrent-read concurrent-write parallel algorithms for two problems are presented:
• Finding all the periods of a string. The period of a string can be computed by previous efficient parallel algorithms only if it is shorter than half of the length of the string. Our new algorithm computes all the periods in optimal O(log log n) time, even if they are longer. The algorithm can be used to compute all initial palindromes of a string within the same bounds.
• Testing if a string is square-free. We present an optimal O(log log n) time algorithm for testing if a string is square-free, improving the previous bound of O(log n) given by Apostolico [1] and Crochemore and Rytter [12]. We show matching lower bounds for the optimal parallel algorithms that solve the problems above on a general alphab...
- J. Algorithms, 1992
Cited by 27 (8 self)
Some quantities associated with periodicities in words are analyzed within the Bernoulli probabilistic model. In particular, the following problem is addressed. Assume that a string X is given, with
symbols emitted randomly but independently according to some known distribution of probabilities. Then, for each pair (W , Z) of distinct suffixes of X, the expected length of the longest common
prefix of W and Z is sought. The collection of these lengths, that are called here self-alignments, plays a crucial role in several algorithmic problems on words, such as building suffix trees or
inverted files, detecting squares and other regularities, computing substring statistics, etc. The asymptotically best algorithms for these problems are quite complex and thus risk to be unpractical.
The present analysis of self-alignments and related measures suggests that, in a variety of cases, more straightforward algorithmic solutions may yield comparable or even better performances. Key
words and ph...
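The self-alignment quantity analyzed here is, for each pair of distinct suffixes, the length of their longest common prefix; spelled out naively (the probabilistic analysis in the paper is about the expected values of these lengths):

```python
def lcp_length(w, z):
    """Length of the longest common prefix of strings w and z."""
    n = 0
    for a, b in zip(w, z):
        if a != b:
            break
        n += 1
    return n

def self_alignments(x):
    """All pairwise suffix-LCP lengths of x (the naive cubic computation)."""
    suf = [x[i:] for i in range(len(x))]
    return {(i, j): lcp_length(suf[i], suf[j])
            for i in range(len(suf)) for j in range(i + 1, len(suf))}
```

For "banana", the suffixes "anana" and "ana" share the common prefix "ana", so the self-alignment of that pair is 3.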
- SIAM J. Comput., 1993
Cited by 26 (3 self)
In this paper we present an O(N² log² N) algorithm for finding the two non-overlapping substrings of a given string of length N which have the highest-scoring alignment between them. This significantly improves the previously best known bound of O(N³) for the worst-case complexity of this problem. One of the central ideas in the design of this algorithm is that of partitioning a matrix into pieces in such a way that all submatrices of interest for this problem can be put together as the union of very few of these pieces. Other ideas include the use of candidate-lists, an application of the ideas of Apostolico et al. [1] to our problem domain, and divide and conquer techniques. 1. Introduction. Let A = a_1 a_2 ... a_N be a sequence of length N, and let A[p..q] denote the substring a_p a_{p+1} ... a_q of A. The problem we consider is that of finding the score of the best alignment between two substrings A[p..q] and A[r..s] under the generalized Levenshtein model of alignmen...
, 1994
Cited by 24 (8 self)
There are many solutions to the string matching problem which are strictly linear in the input size and independent of alphabet size. Furthermore, the model of computation for these algorithms is
very weak: they allow only simple arithmetic and comparisons of equality between characters of the input. In contrast, algorithm for two dimensional matching have needed stronger models of
computation, most notably assuming a totally ordered alphabet. The fastest algorithms for two dimensional matching have therefore had a logarithmic dependence on the alphabet size. In the worst case,
this gives an algorithm that runs in O(n log m) with O(m log m) preprocessing. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=128701","timestamp":"2014-04-21T12:37:49Z","content_type":null,"content_length":"38190","record_id":"<urn:uuid:740a0031-371e-4e2d-9141-53f81c190005>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
March 25th 2011, 09:03 AM #1
Hi, can you help me solve the following problems? Thanks for your help.
1. A system of four charges -3q, 2q, -3q and 2q sits at the vertices of a rhombus of side a (equal charges at opposite vertices). Find: (a) the potential energy of the system as a function of b (where b is the angle between adjacent sides of the rhombus); (b) the angle b for which the configuration is stable; (c) for the configuration of part (b), the net force that each charge experiences.
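For part (a) of problem 1, the standard starting point (a formula, not a full solution) is the electrostatic potential energy of a system of point charges,

$U = \frac{1}{4\pi\varepsilon_0} \sum_{i<j} \frac{q_i q_j}{r_{ij}},$

summed over the six pairs of the four charges, with each pair distance $r_{ij}$ expressed in terms of the side $a$ and the angle $b$.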
2. A parallel-plate capacitor has two square plates of area A = 100 cm^2 separated by a distance d = 1 cm, and the space between them is filled with two dielectrics, k1 = 2 and k2 = 3, arranged as a wedge. The plates are held at a voltage of 120 V. (a) What is the capacitance of the system? (b) What is the charge density on the plates as a function of the distance x? (c) What is the polarization charge on the surface between the dielectrics?
3. A capacitor is made of three long coaxial conducting cylinders with radii r1 (inner cylinder), r2 (middle cylinder) and r3 (outer cylinder). The conductors are separated by dielectrics with constants k1 = 3 and k2 = 8. The inner and outer conductors are electrically connected. Find, as a function of r1 and r2, the value of r3 that minimizes the capacitance of the system.
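A hint for problem 3 (standard results, assuming the cylinders are long enough to neglect end effects): a cylindrical capacitor of length $L$ filled with a dielectric of constant $k$ between radii $r_a < r_b$ has

$C = \frac{2\pi\varepsilon_0 k L}{\ln(r_b/r_a)},$

and because the inner and outer conductors are connected, the two gaps act as capacitors in parallel, so

$C_{\text{total}} = 2\pi\varepsilon_0 L \left[ \frac{k_1}{\ln(r_2/r_1)} + \frac{k_2}{\ln(r_3/r_2)} \right].$

Minimizing over $r_3$ then reduces to studying the second term.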