How did the Romans Calculate?
If the Roman Empire is really so far removed from us in time, why is it that Roman numerals were still in commercial use until the 14th century? Before we throw our own guess into the debate, let us
look at the nature of these much maligned numerals. How could anyone calculate with them? Well, how can anyone compute "three hundred and seventy-six times two hundred and thirty-seven"? You type
these data into your pocket calculator and press the "x" button, that's how. You certainly would not fill page after page with number words. Neither did the Romans: they would load CCCLXXVI and
CCXXXVII onto their counting board or abacus and manipulate the pebbles and beads until they had the result. We shall presently do such a multiplication, but first we'll look at addition and subtraction.
The counting board shown here is divided into two vertical strips, the right hand one for addition, the other for subtraction. The top number shown on the right is MDCCCCLXV, the lower one is
MCCCCXXV. To add them, we just pile everything together into a mess which is shown in the third field from the top. To make it readable we have to reduce it: any five beads on a line are converted to
one button in the space to the left of that line, any two buttons in a space turn into one bead on the next line over. The answer is MMMCCCLXXXX as shown in the bottom field.
Note: we are using the word "beads" to remind you of an abacus; our "buttons" would be found in the separate top compartment (called "heaven" by the Chinese) of the abacus. We are also ignoring
the medieval convention of writing IV, XL, CD instead of the longer but clearer IIII, XXXX, CCCC, used by the ancients.
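The reduce step — five beads on a line becoming one button, two buttons becoming one bead on the next line — is just carrying in disguise. Here is a small Python sketch (ours, not from the original page) that adds two ancient-style, purely additive numerals and then re-normalizes the pile the way the board does:

```python
# Values of the board's lines (I, X, C, M) and the spaces between them (V, L, D).
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
ORDER = "MDCLXVI"  # largest first, for rebuilding a numeral

def to_int(numeral):
    # Purely additive reading (IIII-style, as the ancients wrote).
    return sum(VALUES[ch] for ch in numeral)

def to_roman(n):
    # Greedy rebuild: equivalent to fully reducing the piled-up beads.
    out = []
    for ch in ORDER:
        count, n = divmod(n, VALUES[ch])
        out.append(ch * count)
    return "".join(out)

print(to_roman(to_int("MDCCCCLXV") + to_int("MCCCCXXV")))  # MMMCCCLXXXX
```

The same two helpers reproduce the subtraction example as well: MCCCCXXV minus DCLIII comes out as DCCLXXII.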
In the subtraction on the left strip, the first number MCCCCXXV must first be expanded in order to have enough tokens on every line and space to allow the second number DCLIII to be subtracted. The
expansion, which is reduction in reverse, is shown in the second field from the top. It need not be done all at once, but can be performed as needed for subtracting. Answer: DCCLXXII.
The power and flexibility of the Roman system is best shown in how it handles multiplication: because of the numerals V, L, D, etc. you need not memorize any multiplication table beyond five. But five
itself is just ten halves, and halving is an easy operation. Doubling is another easy operation, quadrupling is doubling twice -- so the hardest multiplier is three. If you do happen to know the
ten-by-ten table, you can read every line together with its preceding space as a single decimal digit, and thus increase your speed.
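The doubling-and-halving idea can be pushed all the way: any multiplication reduces to halvings, doublings, and additions. A short sketch of that scheme (our illustration, not the page's), applied to the 376 × 237 question from the opening paragraph:

```python
def multiply(a, b):
    """Multiply using only halving, doubling and addition."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # odd half: keep a copy of the running double
            total += b
        a //= 2          # halve one factor...
        b *= 2           # ...double the other
    return total

print(multiply(376, 237))  # 89112
```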
The multiplication shown here is CLXXXXVIIII times DCLIII. There are four partial products (in the blue and yellow fields) corresponding to the four digits of the multiplier: by three, by five
(shifted), by one (shifted twice), and by five (shifted twice). As you pile all that into the first of the fields marked green, something special happens on the M-line: three sets of four. Since
there is no space for that many, you turn them into a twelve (cf. blue beads) and carry on. After reducing this you get C*X*X*V*MMMMDCCCCXXXXVII, as shown in the bottom field. If you find this too
long, compare it to "one hundred twenty-nine thousand nine hundred and forty-seven". By the way, we have changed the Roman bars to asterisks: they mean "thousand".
A Roman wine merchant would have done this in his head: CLXXXXVIIII is one less than CC, so double DCLIII to MCCCVI, shift to C*X*X*X*DC, and subtract DCLIII, and you'll be LIII short of C*X*X*X* --
factus est.
After all this, you must be dying to see a division, and here it is: MMMMDCXXVIIII divided by XIII (the divisor is not written in). It goes just as you expect. Since XIII takes up two lines, you
look at the first two lines (plus spaces) of the number to be divided, and you see XXXXVI, which can accommodate three times XIII. So you write a III on the line where your XXXXVI had its I. Then
you subtract III times XIII and are left with VII, which is really DCC in disguise. Then you repeat the game, this time taking aim at what looks like LXXII -- and so on, always wandering toward the
smaller values on the right.
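This wandering-rightward game is precisely schoolbook long division. A sketch of the same procedure in Python (ours; the Roman-numeral parsing is left out and we work with the values directly):

```python
def board_divide(dividend, divisor):
    """Digit-at-a-time long division, moving toward smaller values on the right."""
    quotient_digits = []
    remainder = 0
    for d in str(dividend):
        remainder = remainder * 10 + int(d)   # bring in the next line's beads
        q = remainder // divisor              # how many divisors fit here?
        quotient_digits.append(str(q))
        remainder -= q * divisor              # subtract them, carry the rest right
    return int("".join(quotient_digits)), remainder

print(board_divide(4629, 13))  # (356, 1): MMMMDCXXVIIII / XIII is CCCLVI, remainder I
```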
To appreciate the ease and freedom of this simple gadget, you owe it to yourself to try one. For starters, why not take a chessboard and a supply of pennies? You can start your calculations on the
right or on the left, change direction when you spot an opportunity for an easy move -- as long as you keep track of where you are in the calculation, it cannot go wrong. You can add or subtract
tokens to undo a lousy move -- you never need an eraser.
The Indo-Arabic numeral system was supposedly introduced to Europe in the early 13th century by a book called Liber Abaci (book of the abacus) written by the widely travelled Leonardo di Pisa (alias
Fibonacci), himself no mean mathematician. Present-day scholars say that it was known in the West much earlier -- though still regarded as a Levantine curiosity -- but that the 13th century
introduction of paper from China, as a cheap medium for writing, made it the system of choice for all auditors and tax-collectors who wanted to see the details of every calculation.
The pen-on-paper computation with Indo-Arabic numerals -- including the famous zero (originally a punctuation mark) -- made it possible to check calculations for errors, but also penalised false
starts and other trivial mistakes by ugly and confusing erasures. To avoid these, you had to follow certain very tight algorithms, which to this day make elementary arithmetic an incomprehensible and
unpleasant discipline to many people. As Scott Carlson points out in the article preceding Kasparov's, the paper method makes little sense when a calculator is at hand -- although mental arithmetic
is something he evidently likes. To build the bridge between the two, how about re-introducing the counting board?
This ancient and user-friendly tool was still being used in Europe long after people had begun writing numbers in the more compact Indo-Arabic style. As late as 1550, a German textbook was published
by Adam Ries, in which the multiplication shown above would be written as 199 times 653 equals 129947, but the intermediate steps would be left as unnamed patterns on the board. Even the Chinese and
Japanese write input and output of their abacus work in this style, and this would probably be the right way to bridge the gap between mental arithmetic and the calculator.
In conclusion: the counting board survived (at least) until the 16th century, and for a while (we guess) just carried the Roman numerals along with it. The fact that they are harder to falsify may
also have helped.
Maths in a minute: Countable infinities
An infinite set is called countable if you can count it. In other words, it's called countable if you can put its members into one-to-one correspondence with the natural numbers 1, 2, 3, ... . For
example, a bag with infinitely many apples would be a countable infinity because (given an infinite amount of time) you can label the apples 1, 2, 3, etc.
Two countably infinite sets A and B are considered to have the same "size" (or cardinality) because you can pair each element in A with one and only one element in B so that no elements in either set
are left over. This idea seems to make sense, but it has some funny consequences. For example, the even numbers are a countable infinity because you can link the number 2 to the number 1, the number
4 to 2, the number 6 to 3 and so on. So if you consider the totality of even numbers (not just a finite collection) then there are just as many of them as natural numbers, even though intuitively
you'd think there should only be half as many.
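The pairing is easy to make explicit; a throwaway sketch:

```python
# Pair each natural number n with the even number 2n; the map is invertible,
# so nothing in either set is left over.
def pair(n):
    return (n, 2 * n)

print([pair(n) for n in range(1, 6)])  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```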
Something similar goes for the rational numbers (all the numbers you can write as fractions). You can list them as follows: first write down all the fractions whose denominator and numerator add up
to 2, then list all the ones where the sum comes to 3, then 4, etc. This is an unfailing recipe to list all the rationals, and once they are listed you can label them by the natural numbers 1, 2, 3,
... . So there are just as many rationals as natural numbers, which again seems a bit odd because you'd think that there should be a lot more of them.
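That recipe is easy to mechanize. A small sketch (ours) that lists positive rationals by increasing numerator-plus-denominator sum, skipping duplicates such as 2/2 that equal an earlier fraction:

```python
from fractions import Fraction

def first_rationals(count):
    """List positive rationals by increasing numerator + denominator."""
    seen, out = set(), []
    s = 2                                  # numerator + denominator
    while len(out) < count:
        for num in range(1, s):
            q = Fraction(num, s - num)
            if q not in seen:              # skip repeats like 2/2 == 1/1
                seen.add(q)
                out.append(q)
        s += 1
    return out[:count]

print(first_rationals(5))  # [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 1)]
```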
It was Galileo who first noticed these funny results and they put him off thinking about infinity. Later on the mathematician Georg Cantor revisited the idea. In fact, Cantor came up with a whole
hierarchy of infinities, one "bigger" than the other, of which the countable infinity is the smallest. His ideas were controversial at first, but have now become an accepted part of pure mathematics.
You can find out more about all this in our collection of articles on infinity.
statistics - WyzAnt Answers
I have done a survey and I need to calculate the mean, median and mode for my data. To try and determine an average divorce rate I interviewed 15 married people, 8 females, 7 males. I asked each
person how many times they had been divorced. 10 people said 0, and 5 people said 1. I am unsure how to calculate the mean, median and mode with this data. Please help.
The mean is the average. This is where you find the sum of your data, and then divide by the number of people you surveyed.
(10x0) + (5x1) = 5
5/15 = 1/3 ≈ 0.33
To find the median you list your data in ascending order, and find the number that is in the middle.
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1
In this case, the median would be 0.
If there were an even number of data points, for example if there were 4 who said 0, 6 who said 2, you would find the mean of the two center numbers.
0, 0, 0, 0, 2, 2, 2, 2, 2, 2
2 + 2 = 4
4/2 = 2
In this case the median would be 2.
The mode is found by determining which data point is most frequently expressed. So, in the case where 10 said 0, and 5 said 1, the mode would be 0.
In the second example I suggested where 4 said 0, and 6 said 2, the mode would be 2.
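As a quick check, Python's standard library confirms all three numbers for the survey data (ten zeros, five ones):

```python
import statistics

data = [0] * 10 + [1] * 5   # 10 people answered 0 divorces, 5 answered 1

print(statistics.mean(data))     # 0.333...
print(statistics.median(data))   # 0 (the 8th of the 15 sorted values)
print(statistics.mode(data))     # 0 (the most frequent answer)
```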
2 Answers
Hi Pam. I see you edited your question. Let me adjust my answer.
The mean is the sum of all your data divided by the number of data points. You have 15 answers. The sum is 10 x 0 + 5 x 1 = 5 divorces.
5/15 = 1/3 ≈0.33. This is your mean divorce per person.
To get the median, line up the data in order and find the middle data point:
0, 0,0,0,0,0,0,0,0,0, 1, 1,1,1,1
0 is in the middle and is the median. The mode is the most frequent occurring data point--in this case, 0 occurs 10 times, which is more than any other value. 0 is your mode.
Old answer (when you had yes/no data):
You have categorical data. No mean or median can be determined. The mode is simply the most frequent response. In your case, the mode is "NO, I have not been divorced." That's all there is to it.
The median and the mean can ONLY be determined if there is a quantitative component to your variables. For example, you could have asked "How many times have you been divorced?" The people surveyed
could say 0, 1, 2, 3, 4, and so on. This would be discrete quantitative data that does have a mean and a median. But that is not what you presented here with your "yes/no" data.
Thank you.
This data set has no numerical values, just qualitative responses ('divorced/ not divorced') so it does not really lend itself to the traditional understandings of mean, median, and mode that you
would use to calculate an average divorce rate. But a very liberal interpretation is below!!
The mean is the average. Since you interviewed 15 people and 5 said they had been divorced, the mean divorce rate is 5 of 15 = 5/15 ≈ 33.3%. But more generally, to calculate a mean of a set of
numbers add up all the numbers and divide by the number of values.
The median of a numerical data set (that is organized from least to greatest) is the middle value. Since this data set does not have numbers, the median is not really well-defined. The only way that
I see to force a median here is to assign numbers. Divorced = 1, Not divorced = 0. So your data set is {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1} to represent 10 non-divorced, and 5 divorced. The
median (middle number) is 0. So your median is "Not divorced." (Again, not a good measure here.)
Likewise, the nature of your survey does not lend itself to a natural mode, which is just the most frequently occurring value. Your most frequent response was "not divorced." So that is your mode.
(But like the median (and mean), the mode would work better with number values rather than qualitative responses like "not divorced.")
If you want to design a survey to calculate average divorce rate, you might look at divorce rates of several different countries. For example, for six countries, the divorce rates, in percents, are
given by:
{20, 25, 26, 30, 38, 50}
• Mean: (20 + 25 + 26 + 30 + 38 + 50) / 6 = 31.5
• Median: 26 & 30 are both in the "middle", so the median is the average of 26 & 30, or 28
• Mode: No number occurs more frequently than any other, so there is no mode. If there were 2 26's and only one of every other number, the mode would be 26.
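The same standard-library functions check the six-country example (note that `multimode` needs Python 3.8 or later):

```python
import statistics

rates = [20, 25, 26, 30, 38, 50]   # divorce rates, in percent, for six countries

print(statistics.mean(rates))        # 31.5
print(statistics.median(rates))      # 28.0, the average of 26 and 30
print(statistics.multimode(rates))   # every value ties, so there is no single mode
```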
FOM: Re: Godel, f.o.m.
charles silver silver_1 at mindspring.com
Tue Jan 25 17:31:55 EST 2000
> Vladimir Sazonov writes:
> > <Snip Matt Insall's Comment>
> > Here is not so much of contradiction because mathematics is just
> > a kind of formal engineering. In a sense, computers as some
> > (physically realized) formal systems belong to mathematics.
Steve Stevenson wrote:
> James Fetzer has a really great article on this particular view of
> computing.
Geez, I remember thinking about 8 years ago when I read the article that
it had little content. Wasn't the paper an extended argument against
program verification, and wasn't his main point just the standard Cartesian
criticism that we can always make an error, no matter how much we try to
verify a program with a formal proof? The proof that the program is
correct may itself be wrong, and other things can go wrong as well. The reason
I thought his paper had little content--supposing I remember it
correctly--is that his argument seemed to apply to *everything*. For
example, suppose I offer the following argument schema as valid in standard
All A are B
x is an A
Therefore: x is a B
And, suppose some large number of people--like the people in FOM--were to
examine the above schema and pronounce it correct. It *could* be the case
that we're all mistaken. It's *possible* for all of us to be wrong, even
in judging the validity of so simple an argument. That is, it isn't
*necessarily* the case we can't make such a mistake. But, I don't think
this overly general fact about our human fallibility has any specific
importance to the proofs of program correctness. That is, our fallibility
applies to *everything*, doesn't it?
>The gist of his article (please read it for all the other
> things) is what I have termed the Fetzer boundary. There is a point at
> which computing is no longer mathematics because it is no longer
> formal.
Please correct me if I am wrong, but as I recall, his arguments apply
equally well to mathematics (except for his specific comments about
compilers, of course). That is, consider a formal proof checker that's been
created to check the accuracy of math proofs. It is possible for a proof to
be wrong and for the proof checker to err (for any number of reasons) and
incorrectly label it as a good proof. (And, if we thought we could shore
this up with a proof checker to check the proof checker, then *this* very
proof checker would need a proof checker, etc. etc.) Again, this overly
general fact about fallibility doesn't seem to cut any ice about computer
science specifically. Here, the Fetzer conclusion would be that math itself
is not formal, since one can't (in Descartes's sense) be *absolutely
certain* that a mistake in a math proof hasn't been made.
(The more general criticism of program correctness, which seems valid to
me, is that proofs of program correctness are themselves much harder to be
sure of than the original programs. The proofs of correctness of even very
simple programs can become extremely unwieldy. This raises the question of
the worth of a proof that could be wrong, if the program itself is easier to
check by standard methods without a proof. On the other hand, having a
proof may increase one's degree of confidence in the program's correctness,
despite the fact there's still a chance both the program and the proof could
be wrong. One proposed solution is to not waste time verifying individual
programs, but to have *programs* that prove correctness--though of course
these programs need to be checked. And, one program can check another,
though we know by Turing that there's an absolute limit to this.)
On the other hand, I don't want to claim computer science *is* a formal,
mathematical discipline. I tend to think there was an interaction between
the engineering and the formal sides of computer science as it developed.
I think the mathematical discoveries in the 30's gave c.s. a tremendous
boost, but I think historically the engineering side came first. If it's
to be argued whether c.s. is really engineering or whether it's really, say,
an implementation of recursive function theory, then I think the argument
would depend upon what is meant by "really".
Charlie Silver
Variation of parameters
Well, can't say that expression looks familiar...
But you won't be able to find two linearly independent solutions to the homogeneous equation. There's only one, since it's first order and you found it: Kexp(-3x)
Variation of parameters is where you let the arbitrary constants in the solution to the homogeneous equation be functions of the independent variable in order to find the general solution.
Did a search and found
. So it's the same thing, but your expression only works for 2nd-order equations. My advice: just let y(x)=K(x)exp(-3x) and plug it in the equation. It'll work out perfectly, but you'll be solving an
annoying integral with lots of integration by parts.
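To make the suggestion concrete: with y(x) = K(x)exp(-3x), the equation y' + 3y = f(x) collapses to K'(x) = f(x)exp(3x), so K is a single (possibly annoying) integral. A SymPy sketch, using a made-up right-hand side f(x) = x·eˣ since the thread's actual f is not shown:

```python
import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(x)   # hypothetical right-hand side; not the one from the thread

# y = K(x) * exp(-3x) turns y' + 3y = f into K'(x) = f(x) * exp(3x)
K = sp.integrate(f * sp.exp(3 * x), x)
y = K * sp.exp(-3 * x)

# verify the particular solution satisfies y' + 3y = f
print(sp.simplify(sp.diff(y, x) + 3 * y - f))  # 0
```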
Solve an IVP and give the largest interval
June 10th 2008, 08:17 AM #1
I really need help with this problem, can someone show me how to work it out step by step if possible.
Thank You so much
Solve an IVP and give the largest interval I over which the solution is defined
(x+1) dy/dx + y = ln x ; y(1) = 10
If you put this in standard form and get the integrating factor, you will find that it is (x+1). Multiplying back through turns it into what you started with, and we notice that the left hand side is the implicit
derivative $\frac{d}{dx}[(x+1)y]$
So we get
$\frac{d}{dx}[(x+1)y]=\ln|x|$ so now we integrate both sides to get
$(x+1)y=x\ln|x|-x+c \iff y=\frac{x\ln|x|}{(x+1)}-\frac{x}{(x+1)}+\frac{c}{(x+1)}$
using the initial condition we get
$10=\frac{1\ln(1)}{(1+1)}-\frac{1}{(1+1)}+\frac{c}{(1+1)} \iff c=21$
So we get $y=\frac{x\ln|x|-x+21}{x+1}$
For the interval, it depends on whether you had $\ln(x) \mbox{ or } \ln|x|$
If it is the first, then the solution only exists for $x \in (0,\infty)$
If it is the 2nd, the singular points are x=-1 and x=0 (where $\ln|x|$ is undefined), so the intervals would be
$(-\infty,-1),\ (-1,0) \text{ and } (0,\infty)$
Since the initial condition is given at x=1, the largest interval containing it is $x \in (0,\infty)$
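This is worth double-checking with a CAS, remembering that ∫ln x dx = x ln x − x. A SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# (x+1) y' + y = ln(x), with y(1) = 10
ode = sp.Eq((x + 1) * y(x).diff(x) + y(x), sp.log(x))
sol = sp.dsolve(ode, y(x), ics={y(1): 10})

# the constant comes out to 21, since (1*ln 1 - 1 + c)/2 = 10 gives c = 21
print(sp.simplify(sol.rhs - (x * sp.log(x) - x + 21) / (x + 1)))  # 0
```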
Differentiation question
October 15th 2008, 07:25 PM
Differentiation question
The equation of a curve is xy(x+y)=2a^3 where a is a non-zero constant. Show that there is only one point on the curve at which the tangent is parallel to the x-axis, and find the coordinates
of this point.
Can someone please help me with this question?
thanks in advance
October 15th 2008, 07:47 PM
$\begin{array}{ccl}xy(x+y) & = & 2a^3 \\ x^2y + xy^2 & = & 2a^{3} \\ {\color{red}(x^2y)'} + {(\color{blue} xy^2)' } & = & {\color{magenta}(2a^3)'} \\ {\color{red}\left[(x^2)'y + x^2(y)'\right]} +
{\color{blue} \left[ (x)'y^2 + x(y^2)'\right]} & = & {\color{magenta}0} \end{array}$
So far, I've multiplied the xy into the brackets. Then, I differentiate both sides. Since y is a function of x, we have to use the product rule on each product on the left hand side. On the right
hand side, since a is just a constant, its derivative is simply 0.
See where you can go from this and come back with any more questions.
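For checking your final answer afterwards: implicit differentiation gives dy/dx = −(2xy + y²)/(x² + 2xy), so a horizontal tangent forces y(2x + y) = 0 on the curve, which leads to the point (a, −2a). A SymPy sketch (ours) confirming that this point works:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
F = x * y * (x + y) - 2 * a**3         # the curve, written as F(x, y) = 0

Fx, Fy = sp.diff(F, x), sp.diff(F, y)  # dy/dx = -Fx/Fy wherever Fy != 0

# candidate from y = -2x: substituting into F gives 2x^3 = 2a^3, so x = a
point = {x: a, y: -2 * a}
print(sp.simplify(F.subs(point)))      # 0: the point lies on the curve
print(sp.simplify(Fx.subs(point)))     # 0: the tangent there is horizontal
```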
Mplus Discussion >> Single Group CFA
Anonymous posted on Tuesday, November 30, 2004 - 10:15 am
Dear Dr. Muthen,
I am running a single group CFA with categorical observed indicators. My ultimate goal is to do a multiple group comparison. But first, I am trying to fit my model for each group. To do so, in mplus
I just wrote Y ON X1-X5 and ran the model for each group separately. The results produced factor loadings, thresholds, factor variance, and residual variance. My questions:
1- Do I need to constrain my thresholds in a single group CFA?
2- I did not get scale factors in my single group CFA. Do I need to specify scale factors and/or maybe constrain them?
3- In the output, I also did not see the factor mean. Should I specify it in my model?
In general, could you please let me know what should I specify in my model for a single group CFA with categorical outcomes?
Linda K. Muthen posted on Tuesday, November 30, 2004 - 10:51 am
You should use the BY statement to specify a CFA model. See Example 5.2 in the Mplus User's Guide.
1. No.
2. No. They are fixed at one as the default.
3. Factor means are zero.
Provides a domain specific language (DSL) for math engineering (matlab-like syntax).
Module Overview
GroovyLab is a set of Groovy classes and Java libraries. It provides common linear algebra and plot static methods easily usable in any groovy script or class.
GroovyLab is fully usable, but still in development status. It is based on JMathTools Java API (based on JAMA and RngPack).
Team Members
• Yann Richet - Contributor to JMathTools Java project
GroovyLab is just provided to start a math engineering DSL sub-project of Groovy. If you need GroovyLab, GroovyLab also needs you...
Source release available at GroovyLab website
Just extract the GroovyLab archive, and try to run the example cases using the groovylab.bat or groovylab script:
• groovylab examples/simpleTest.gvl
• groovylab examples/moreTest.gvl
GroovyLab is based on Groovy 1.1 and Java 1.5.
The following example shows GroovyLab in action:
Code Block
import static org.math.array.Matrix.*
import static org.math.plot.Plot.*
def A = rand(10,3)
println A
Code Block
import static org.math.array.Matrix.*
import static org.math.plot.Plot.*
def A = rand(10,3) // random Matrix of 10 rows and 3 columns
def B = fill(10,3,1.0) // one Matrix of 10 rows and 3 columns
def C = A + B // support for matrix addition with "+" or "-"
def D = A - 2.0 // support for number addition with "+" or "-"
def E = A * B // support for matrix multiplication or division
def F = rand(3,3)
def G = F**(-1) // support for matrix power (with integers only)
println A // display Matrix content
plot("A",A,"SCATTER") // plot Matrix values as ScatterPlot
def M = rand(5,5) + id(5) //Eigenvalues decomposition
println "M=\n" + M
println "V=\n" + V(M)
println "D=\n" + D(M)
println "M~\n" + (V(M) * D(M) * V(M)**(-1))
Wang, Landau, Markov, and others...
On Thursday, the “Big’MC” seminar welcomes two talks (at 3pm and 4pm, resp., in IHP, Amphi Darboux):
• Speaker: Pierre Jacob (ENSAE) and Robin Ryder (CEREMADE)
• Title: Some aspects of the Wang-Landau algorithm.
• Abstract: The Wang-Landau algorithm is an adaptive MCMC algorithm which generates a Markov chain designed to move efficiently in the state space, by constantly penalizing already-visited regions.
It hence falls into the class of exploratory algorithms, especially when the chosen regions correspond to different levels of density values. We explore two novel aspects of the Wang-Landau
algorithm. First, we show that the algorithm reaches the so-called Flat Histogram criterion in finite time, which ensures convergence properties. Second, we examine the effect of using multiple
chains, interacting through a common component. That component essentially represents the history of already-visited regions, computed on all the chains. We show numerically the benefit of using
parallel chains even if a single processing unit is available, in terms of stabilization of the schedule used in the adaptation process. If time permits, we shall present an ongoing attempt to
study theoretically the effect of parallelization using Feynman-Kac semigroups.
• References: http://arxiv.org/abs/1110.4025 and http://arxiv.org/abs/1109.3829
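The mechanism described in the abstract — penalize the current region at every step, and relax the penalty once the visit histogram is flat — can be sketched in a few lines. The following Python toy is purely illustrative (not the speakers' code): the ring of regions, the 80% flatness threshold, and the halving schedule for the penalty are all assumptions made for the sketch.

```python
import math
import random

def wang_landau(n_regions=8, flat=0.8, log_f_final=1e-3, max_steps=200000, seed=0):
    """Single-chain Wang-Landau sketch over n_regions states arranged on a ring."""
    rng = random.Random(seed)
    log_g = [0.0] * n_regions   # running log-weights: the accumulated visit penalties
    hist = [0] * n_regions      # visit histogram for the current stage
    log_f = 1.0                 # penalty increment, halved after each flat stage
    state = 0
    for _ in range(max_steps):
        if log_f <= log_f_final:
            break
        prop = (state + rng.choice([-1, 1])) % n_regions   # ring random-walk proposal
        # accept with probability min(1, g[state]/g[prop]):
        # heavily-visited regions (large log_g) are penalized, pushing the chain away
        if math.log(rng.random()) < log_g[state] - log_g[prop]:
            state = prop
        log_g[state] += log_f
        hist[state] += 1
        # Flat Histogram criterion: every region visited at least
        # `flat` times the average count during this stage
        if min(hist) > flat * sum(hist) / n_regions:
            hist = [0] * n_regions
            log_f /= 2.0
    return log_g, log_f

log_g, log_f = wang_landau()
```

The returned `log_f` shrinks toward zero as the algorithm reaches successive flat histograms, which is the finite-time property the talk addresses.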
• Speaker: Nick Whiteley (Univ. Bristol, UK)
• Title: A particle method for approximating principal eigen-functions and related quantities
• Abstract: Perron-Frobenius theory treats the existence of a positive eigen-vector associated with the principal eigen-value \lambda_{\star} of a non-negative matrix, say Q. A simple method for
approximating this eigen-vector involves computing the iterate \lambda_{\star}^{-n}Q^{(n)}, for large n. In the more general case that Q is a non-negative integral kernel, an extended
Perron-Frobenius theory applies, but it is typical that neither the principal eigen-function nor the iterate \lambda_{\star}^{-n}Q^{(n)} can be computed exactly. In this setting we introduce an
interacting particle algorithm which yields a numerical approximation of the principal eigen-function and the associated twisted Markov kernel. Some of its theoretical properties will be
discussed and applications will be outlined. In particular, the algorithm allows approximation of an optimal importance sampling method for Markov chain rare event estimation.
Joint work with Nikolas Kantas.
• Reference: http://arxiv.org/abs/1202.6678
How to Divide
Take a look at this mnemonic to help your child remember and understand the steps or algorithm for long division. In our house, it is not a good idea to come home late when the family’s favorite dish
is served for dinner. However, you can cross your fingers and leave Mom a note. The note may read, “Dear Mom Save Cheese Burgers.” The first letter of each word represents a step listed below.
Dear ........ Divide
Mom ......... Multiply
Save ........ Subtract
Cheese ...... Check
Burgers ..... Bring Down
The following illustrations will demonstrate step by step how to do long division.
Step 1: Divide. Ask how many times four goes into eight. Write the answer, 2, directly above the 8.
Step 2: Multiply four times two and write the product directly under the number divided. In this case, it was eight.
Step 3 : Subtract and Check. The purpose of the Check step is to verify the previous steps were done correctly. In order to check, verify that the difference is smaller than the divisor. In this
problem, is 0 < 4? True! So, continue to the next step. If false, check your subtraction and/or go back to the previous Divide step and check your work from that point.
The arrow is not required, but it is helpful for the beginner.
Repeat steps from the beginning.
Step 5 Divide: 02 divided by 4 is zero. (Imagine 2 gummy bears. Can you group them into groups of four? No. So, zero is written above the two.) Many students allow this situation to confuse or trick
them. It’s okay to get zero for an answer.
Step 6 Multiply: Multiply the divisor by zero. (4 x 0 = 0)
Thereafter, continue to follow the steps.
After step 13, notice there is nothing else to bring down.
Step 14 The Last Check: Multiply the quotient (answer) by the divisor. Then add the remainder, if any. The result should equal the dividend. 205 x 4 = 820, which is the same number as the dividend.
So, this problem is correct.
Next, let us work a problem with larger numbers and a remainder.
649 divided by 15
Study the example below and try to identify each step.
Notice the first digit of the dividend, 6, is smaller than the divisor, fifteen. Thus, fifteen cannot be divided into six. Normally, if the first number in the dividend isn’t large enough, the space
above the number is left blank or an “x” is placed above it without going through the remaining steps. Next, place your thumb over the last digit in the dividend, 9, exposing only “64.” Divide 15
into 64 or ask yourself, “fifteen times what number gets me closest to sixty-four without going over?” The answer is 4. Multiply fifteen times four and subtract. When sixty is subtracted from
sixty-four, the answer is four. Since four is less than fifteen, we are dividing properly so far. Bring down the next number and repeat the process. Note: Sometimes, it is helpful to round two digit
divisors to the nearest ten to make dividing easier and to get a starting point.
To Check: Multiply the quotient (answer) by the divisor. Then add the remainder to the product. The result should equal the dividend. 43 x 15 = 645, and 645 + 4 = 649. True; thus the problem is correct.
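For readers comfortable with a little programming, the Divide-Multiply-Subtract-Check-Bring-Down cycle maps directly onto a short routine. This Python sketch is an illustration, not part of the article; it processes the dividend one digit at a time:

```python
def long_divide(dividend, divisor):
    """Long division via the Divide-Multiply-Subtract-Check-Bring-Down cycle."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # Bring Down the next digit
        q = remainder // divisor                  # Divide
        product = q * divisor                     # Multiply
        remainder = remainder - product           # Subtract
        assert remainder < divisor                # Check: difference < divisor
        quotient = quotient * 10 + q
    return quotient, remainder

print(long_divide(820, 4))   # (205, 0)
print(long_divide(649, 15))  # (43, 4)
```

The `assert` line plays the role of the Check step: the running difference must always be smaller than the divisor before the next digit is brought down.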
In order to become comfortable with long division, you must practice. I recommend Richard Fisher’s 80 page workbook,
Mastering Essential Math Skills WHOLE NUMBERS AND INTEGERS
Sin-Min Lee
Dept. of Maths. and Computer Science San Jose State University, San Jose, California 95192, USA
Abstract: The present paper is concerned with the study of $\Aut(B(n))$, the automorphism group of a non-associative Boolean ring $B(n)$, where $B(n)$ is a free 2-group on $n$ generators $\{x_i\}$, $i=1,\dots,n$, subject to $x_i\circ x_j=x_i+x_j$ for $i\neq j$. It is shown that for $n$ even, $\Aut(B(n))=S_{n+1}$, and for $n$ odd, $\Aut(B(n))=S_n$. An example of a non-associative Boolean ring $R$ of
order 8 is provided which shows that in general $\Aut(R)$ is not a symmetric group.
Classification (MSC2000): 17A36
Full text of the article:
Electronic fulltext finalized on: 2 Nov 2001. This page was last modified: 16 Nov 2001.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition
Programming Praxis - Benford’s Law
In today’s Programming Praxis exercise, our task is to see if Benford’s law (lower digits are more likely to be the first digit of numbers found in large collections of data) holds for the littoral
areas of lakes in Minnesota. Let’s get started, shall we?
Some imports:
import Data.List
import Text.Printf
The algorithm for calculating the distribution of leading digits given a group of numbers is virtually identical to the Scheme version, despite the fact that I didn’t look at the solution beforehand.
It’s not too surprising though, since it’s simply the most obvious method. Note that the function argument is a list of floats. Initially I assumed all areas were integers, which resulted in
incorrect results until I found that there were 10 floats hidden in the input (thank god for regular expressions).
firstDigits :: [Float] -> [(Char, Double)]
firstDigits xs = map (\ds -> (head ds, 100 * toEnum (length ds) /
toEnum (length xs))) . group . sort $ map (head . show) xs
With that function out of the way, the problem becomes trivial: just call firstDigits on all the appropriate numbers.
shriram :: [[String]] -> [(Char, Double)]
shriram xs = firstDigits [n | [(n,_)] <- map (reads . (!! 3)) xs]
Of course we need to run the algorithm over the given data, using the parser from two exercises ago:
main :: IO ()
main = either print (mapM_ (uncurry $ printf "%c %f\n") . shriram)
=<< readDB csv "csv.txt"
This produces the same list of percentages as the Scheme version. Looks like Benford’s Law holds in this case as well.
Posts by Kimi
Total # Posts: 49
One bottle of juice is 1/4 filled. Another smaller bottle holds 20 fluid ounces. if the contents from the larger bottle are poured into the smaller bottle, the bottle is 7/8 filled. how many fluid
ounces does the larger bottle hold?
A 0.5 kg block is sliding along a table top with an initial velocity of 0.2 m/s. It slides to rest in a distance of 70 cm. Find the frictional force that is slowing its motion.
What writing convention does Jonathan Edwards use to persuade his audience?
A salesman took 1/2 h to drive from Tuas to Bedok at an average speed of 72km/h. When he returned from Bedok to Tuas, he travelled at an average speed of 90km/h along the same road. How long did the
salesman take to reach Tuas?
Mr Thomas took 2hr to drive from Tokyo to Osaka.He then took another 4hr to complete the remaining 2/3 of the journey to Fukushima.His average speed for the 210km long journey was 35km/h. 1. If he
reached Osaka at 12:30,at what time did Mr Thomas arrive at Fukushima? 2. What w...
Chitra used wheat flour&plain flour to bake some cakes.19%of the flour used was wheat flour.if she used 475g of flour,find: A) the total mass of flour she used for the cakes B) the mass of plain
flour she used
35%of the number of books Vick owns are fiction books.if Vick has 70 fiction books,find: A) the total numbe of books he owns. B) the number of non-fiction books he owns
Rahayu spent 2h 25m in a library.she spent 30m of the time reading newspapers.what percentage of her time in the library was spent reading newspapers?
Mr Srinivasan invests 55000$ in a fixed deposit account.the interest rate is 3.3% per year.how much money will he have in the account after 1year? please show your operations clearly.
Ruzita bought 3kg of chicken.she used 0.75kg of the chicken to cook curry and another 0.13kg to stew w/ vegetables.what percentage of the chicken was not used?
express each fraction as a percentage and round off to 2 decimal places. a)79/148 b)58/379 c)15/52 d)43/95
Math word problem(algebra)
Carol is y years old&her daughter is 25 years younger.Find Carol's present age if the sum of their ages in 6years' time is 75years
Math word problem(algebra)
Diana buys 20 apples at x cents each&40 oranges&sells the bags for 20x cents each.Find out the amount that Diana paid for each apple if she obtained a total of RM 24 from selling all the fruits
Math word problem
The length&breadth of a rectangle are (x+5)cm&3cm respectively.Find the length if the area of the rectangle is 42 cm2.
Math word problem
if the sum of three consecutive numbers is 72, what is the largest number?
word problem(algebra)
in 2001,there were g white storks migrating fron europe to africa in winter.The next year, only half of the white storks made the journey to africa.in 2003,the number of migrating white storks
decreased by another 45. (a) find the number of white storks that migrated in 2003 i...
word problem(algebra)
a magazine costs half as much as a book.The book costs $p.A pen costs $2 more than the magazine. (a) how much is the pen in terms of p? (b) if the book costs $5,how much is the pen?
word problem(algebra)
mary has y m of cloth.she used 2 m to sew a skirt.she used the remaining cloth to make 5 jackets. (a) find the amount of material that was used to make each jacket in terms of y. (b) if she has 17 m
of cloth.how much material was used for each jacket?
word problem(algebra)
a shopkeeper bought 16 boxes of pens.each box contained m pens.10 pens were sold on the first day. (a) how many pens were left after the first day?give ur answer in terms of m. (b) if m=5,how many
pens were left after the first day?
word problem(algebra)
a recycling company collected d kg of waste paper in february&2400 kg of waste paper in march.The amount of waste paper collected in both february&march was 3 times that of the amount collected in
january in terms of d. (a) Find the amount of waste paper collected in january i...
word problem(algebra)
Govin is w years old.His mother is 4 times his age.His father is 3 years older than his mother. (a) How old is Govin's father in terms of w? (b) If w=9,how old is Govin's father?
word problem
Rahim has 5 boxes of sweets.Each box contains y sweets.His teacher gives him another 8 sweets. (a) Find the total number of sweets Rahim has in terms of y. (b) If y=4,how many sweets does Rahim have
word problem(algebra)
at the market, a pear cost b cents&an apple cost 7 cents less than a pear.Mrs Ravi bought 4 pears&an apple.Find the total amount Mrs Ravi paid in terms of b. Help me!
word problem(algebra)
on monday,Linus made 5k paper cranes&gave 2k paper cranes to his friends.on tuesday,he made another 4k paper cranes.his friend gave him 5 paper cranes.how many paper cranes does he have now in terms
of k? Help me please !
Math,Word problems(algebra)
Kai Ling has 4m kg of flour.She bought 2 more packets of flour, each of mass m kg.How much does she has now in terms of m? Please help me
math,word problem(algebra)
The length of a piece of cloth is 8y m.Mr Lim cut 7 m from it to sew some cushion covers.Then,he cut 3y m to sew a curtain.The remaining material was cut into 4 equal pieces.How long was each piece
in terms of y? please help me to complete this&thanks for advance
Mr Ahmad starts work at 11.40 on Tuesday.He works for 12h45min.When does Mr Ahmad finish his work?Express your answer usering the 24-hour clock
A cuboid has a square base of side 18cm.The height of the cuboid is 10.4cm.Find its volume
Thank you , Helper. You and Ms. Sue have been a major part in my completing this. I cannot begin to tell you guys how thankful I am. Both of you have been a blessing. THANK YOU SO VERY MUCH!!
f(x)=-4x-3 Find the slope and y-intercept
Thank you, Ms Sue. I tried scanning through before I started posting. I either looked over these or I am so frustrated my eyes really are starting to cross. I have about come to the end of this. I
could not have done it without the help of you and Helper. You guys have been a ...
The function H described by H(x) = 2.75x + 71.48 can be used to predict the height, in centimeters, of a woman whose humerus is x cm long. Predict the height of a woman whose humerus is 38 cm long.
Helper, You are correct it is -1777. I have been at this for about 8 hours and I am pulling my hair out. Thanks for pointing that out for me. Ms. Sue Thank you again. You guys have been a life saver
today. THANKS A BUNCH!!!
Um,I have never had to see the answers link can you please help me out with that? I have no clue what the answers link is nor how to get to it.
The equation y = -1777x + 27,153 can be used to predict the number of gun deaths in the united states x years after 2000, that is, x=0 corresponds with 2000, x=3 corresponds to 2003, x=5 corresponds to
2005, and so on. Predict the number of gun deaths in 2005 and 2007. In what ye...
Evaluate x+y/9 for x=51 and y=3
Thank you. for some reason these type problems blow my mind. I cannot figure out how to put them together to solve them. I am so thankful to everyone on this site that has helped me at one time or
another. THANKS!!!
Trains A and B are traveling in the same direction on parallel tracks. Train A is traveling at 100 miles per hour and train B is traveling at 120 miles per hour. Train A passes a station at 5:10pm.
If train B passes the same station at 5:25pm, at what time will train B catch t...
Thank you so so much. This was the first tme I have had to solve a problem of this sort, I had no clue where to begin. You have saved the day once again. I apprecite all your help!
The length of a rectangle is fixed at 27cm. What widths will make the perimeter greater than 100cm?
Thank you so much for the help. Although still slightly confused I now have something to work with to figure out what I am doing. I have had such a difficult time with this class. If it wasnt for all
of you guys I would be lost. Thanks again
In 1995, the life expectancy of males in a certain country was 64.8 years. In 1999, it was 67.2 years. Let E repersent the life ecpectancy in year t and let t repersent the number of years since
1995. E(t)= t+ (round to nearest tenth Use the function to predict the life expect...
I need help finding the the indicated outputs for f(x)=4x^-5x f(0) f(-1) f(2)
Social Studies
Thank You so much Ms. Sue you were alot of help. God Bless You!The websites you gave me really helped me out
Social Studies
Like facts on how many teens drop out of school each year. Facts like percentages.
Facts Sorry for the mistake
Sorry for the mistake the subject is facts
Social Studies
Could you someone give me facts on education? thanks
How do you pronounce "Traver" as in the city in california? Is it like Tray-ver or Traw-ver or Traa-ver?
2008.55: Deflating Quadratic Matrix Polynomials
2008.55: Christopher J. Munro (2007) Deflating Quadratic Matrix Polynomials. Masters thesis, The University of Manchester.
In this thesis we consider algorithms for solving the quadratic eigenvalue problem, (lambda^2*A_2 + lambda*A_1 + A_0)x=0 when the leading or trailing coefficient matrices are singular. In a finite
element discretization this corresponds to the mass or stiffness matrices being singular and reflects modes of vibration (or eigenvalues) at zero or ``infinity''. We are interested in deflation
procedures that enable us to utilize knowledge of the presence of these (or any) eigenvalues to reduce the overall cost in computing the remaining eigenvalues and eigenvectors of interest. We first
give an introduction to the quadratic eigenvalue problem and explain how it can be solved by a process called linearization.
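To make the linearization step concrete, here is a hedged numpy sketch (not code from the thesis) using the first companion form. It assumes the leading coefficient A_2 is nonsingular — precisely the assumption that fails in the singular cases the thesis addresses:

```python
import numpy as np

def quad_eigvals(A2, A1, A0):
    """Eigenvalues of lambda^2*A2 + lambda*A1 + A0 via first companion
    linearization. Assumes A2 is nonsingular -- exactly the assumption
    that fails in the singular cases studied in the thesis."""
    n = A0.shape[0]
    M0 = np.linalg.solve(A2, A0)   # A2^{-1} A0
    M1 = np.linalg.solve(A2, A1)   # A2^{-1} A1
    companion = np.block([[np.zeros((n, n)), np.eye(n)],
                          [-M0, -M1]])
    return np.linalg.eigvals(companion)

# scalar sanity check: lambda^2 - 3*lambda + 2 has roots 1 and 2
vals = quad_eigvals(np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]]))
print(np.sort(vals.real))  # ~ [1. 2.]
```

When A_2 (or A_0) is singular this construction breaks down — eigenvalues at infinity (or zero) appear — which is what motivates the deflation procedures studied here.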
We present two types of algorithms, firstly a modification of an algorithm published by Kublanovskaya, Mikhailov, and Khazanov in the 1970s that has recently been translated into English. Using these
ideas we present algorithms that are able to reduce the size of the problem by ``deflating'' infinite and zero eigenvalues that arise when the mass or stiffness matrix (or both) are singular.
Secondly we look at methods that deflate zero and infinite eigenvalues by the use of Householder reflectors; this requires a basis for the null space of the mass or stiffness matrix (or both), so we
also summarize various decompositions that can be used to give this information. We consider different applications that yield a quadratic eigenvalue problem with singular leading and trailing
coefficients and after testing the implementations of the algorithms on some of these problems we comment on their stability.
Re: Sequences!
I thought so. You kid because you do not joke.
Forecasting yield volatility, Financial Management
There are several methods available to forecast yield volatility. But before that, let us look into the calculation of forecasted standard deviation.
Assume that a trader wants to forecast volatility at the end of 07/08/2007, using the 20 most recent days of trading, and to update the forecast at the end of each trading day. To do this, the
trader can compute a 20-day moving average X̄ of the daily percentage yield change X_t and estimate the variance as

Variance = Σ (X_t − X̄)² / (T − 1)    (1)
So far it has been assumed that the moving average is the appropriate value to use for the expected change in yield. Some experts, however, view it as more appropriate to assume
the expected change in yield to be zero. Substituting zero in place of the moving average X̄ in equation (1), we get

Variance = Σ X_t² / (T − 1)    (2)
The daily standard deviation given by equation (2) assigns an equal weight to every observation. Therefore, if the trader is calculating volatility
based on the most recent 20 days of trading, each day receives a weight of 1/20, or 5%.
Greater weightage is given to recent movements in the yield or price while determining volatility, and less weightage is given to the observations that are farther in the past. Revising equation 2 to
include the weightages we get,
Variance = Σ W_t X_t²    (3)

where W_t is the weight assigned to observation t; the weights sum to 1.
A time-series characteristic of financial assets is that a high-volatility period tends to be followed by another high-volatility period, and a low-volatility period by another low-volatility period.
From this observation we can tell that recent past volatility influences current volatility. This time-series property of volatility can be estimated with the help of statistical models such as
autoregressive conditional heteroskedasticity (ARCH).
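A short Python sketch of the two estimators discussed above (variable names and the smoothing constant are illustrative assumptions, not from the text):

```python
import numpy as np

def equal_weight_variance(changes, window=20):
    """Equation (2) style: zero expected change, equal weights. The estimate
    is the sum of the last `window` squared yield changes over (window - 1)."""
    x = np.asarray(changes, dtype=float)[-window:]
    return np.sum(x ** 2) / (window - 1)

def ewma_variance(changes, lam=0.94):
    """Weighted version: exponentially declining weights give recent moves more
    influence, capturing volatility clustering (lam = 0.94 is a common choice)."""
    var = changes[0] ** 2
    for x in changes[1:]:
        var = lam * var + (1 - lam) * x ** 2
    return var
```

The exponentially weighted estimator is one simple way of implementing the idea that "a high volatility period is followed by a high volatility period"; ARCH-type models formalize it further.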
Čech cover
Čech covers
A Čech cover is a Čech nerve $C(U)$ that comes from a cover $U \to X$.
Let $C$ be a site and $\{U_i \to X\}$ a covering sieve. Write $U = \sqcup_i Y(U_i)$ for the coproduct of the patches in the presheaf category $PSh(C)$ ($Y$ is the Yoneda embedding).
Write $X$ for $Y(X)$, for short.
Then the Čech nerve $C(U)$ of $U \to X$ in $PSh(C)$, i.e. the simplicial presheaf
$\cdots U \times_X U \times_X U \stackrel{\stackrel{\to}{\to}}{\to} U \times_X U \stackrel{\to}{\to} U$
is called a Čech cover.
Consider the local model structure on simplicial presheaves on $C$. If the sheaf topos $Sh(C)$ has enough points, then the weak equivalences (called local weak equivalences for emphasis) are the
stalk-wise weak equivalences of simplicial set (with respect to the standard model structure on simplicial sets).
$\pi_0(C(U)) = colim_{[n] \in \Delta^{op}} C(U)_n$
for the presheaf of connected components (see simplicial homotopy group) of $C(U)$. Regard this as a simplicially constant simplicial presheaf.
Remark. By the discussion in the section “Interpretation in terms of descent and codescent” at sieve this $\pi_0(C(U))$, regarded as an ordinary presheaf, is precisely the subfunctor of $Y(X)$ that
corresponds to the sieve $\{U_i \to X\}$.
For every Čech cover $C(U)$ the morphism of simplicial presheaves
$C(U) \to \pi_0(C(U))$
is a local weak equivalence.
Being a simplicially discrete simplicial sheaf, for every test object $V$ the simplicial set $\pi_0(C(U))(V)$ has all simplicial homotopy groups trivial except possibly the set of connected components. But by the very
definition of $\pi_0(C(U))$ the morphism $C(U) \to \pi_0(C(U))$ is a bijection on $\pi_0$.
Over each test domain $V$ the simplicial set $C(U)(V)$ is just the nerve of the Čech groupoid
$\left( C(V, U \times_X U) \stackrel{\to}{\to} C(V,U) \right) \,.$
The nerve of that groupoid is readily seen to have vanishing first simplicial homotopy group. Being the nerve of a 1-groupoid, also all higher simplicial homotopy groups vanish.
So $C(U) \to \pi_0(C(U))$ induces for each object $V \in C$ an isomorphism of simplicial homotopy groups. It therefore is an objectwise weak equivalence of simplicial sets.
See also for instance lemma 3.3.5 in
• Daniel Dugger, Sheaves and homotopy theory (web, pdf)
Every Čech cover
$C(U) \to X$
is a stalkwise weak equivalence.
From the above we know that $C(U) \to X$ factors as
$C(U) \to \pi_0(C(U)) \to X$
and that the first morphism is an objectwise, hence also a stalkwise weak equivalence. It therefore suffices to show that $\pi_0(C(U)) \to X$ is a stalkwise weak equivalence.
But by the remark above, $\pi_0(C(U)) \to X$ is actually the local isomorphism corresponding to the cover $U$. It is therefore even a stalkwise isomorphism.
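As a toy sanity check of this proposition (purely illustrative, far below the generality of the site-theoretic statement): for a cover of a finite set by subsets, the Čech groupoid has exactly one connected component per covered point, so $\pi_0(C(U)) \to X$ is a bijection.

```python
def cech_components(cover):
    """Objects of the Cech groupoid: pairs (i, x) with x in cover[i];
    a morphism identifies (i, x) with (j, x). Returns the component roots
    via a small union-find."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    objs = [(i, x) for i, U in enumerate(cover) for x in U]
    for o in objs:
        parent[o] = o
    for i, U in enumerate(cover):
        for j, V in enumerate(cover):
            for x in U & V:                 # x lies in both patches
                parent[find((i, x))] = find((j, x))
    return {find(o) for o in objs}

cover = [{1, 2}, {2, 3}, {3, 4}]       # a cover of X = {1, 2, 3, 4}
print(len(cech_components(cover)))     # 4: components biject with points of X
```
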
See also for instance lemma 3.4.9 in
• Daniel Dugger, Sheaves and homotopy theory (web, pdf)
May 1998
A moving charged particle, such as an electron, experiences electric and magnetic forces. In the middle of the 19th century, the Scottish physicist James Clerk Maxwell wrote down a set of equations
which unified these two forces into a single theory. This led him to understand light waves (and radio waves) as electromagnetic oscillations propagating through a vacuum.
Light as electromagnetic waves
Waves on the surface of a pond are a sequence of peaks and troughs that move along in the shape of a sine wave. Light waves are mathematically similar, except that they are electromagnetic: the
sinusoidal oscillation is in the amplitude of the electromagnetic field.
Figure 1: Light waves. In each box the sum of the first and second waves is the third wave.
When two waves of the same wavelength meet, their combined electromagnetic field depends on their relative phase. If the peaks of the two waves coincide, we say that the waves are in phase, and then
the two waves reinforce each other, as in the left-hand box in the figure. But if the peaks of one wave coincide with the troughs of the other the waves are said to be out of phase, and they cancel
each other, as in the right-hand box.
Try it: with a graphing calculator, compare the graphs of sin(x) and sin(x)+sin(x). Then compare the graphs of sin(x), sin(x+pi) and sin(x)+sin(x+pi). What do you see?
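If a graphing calculator isn't to hand, the same comparison takes a few lines of Python (numpy assumed):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
in_phase = np.sin(x) + np.sin(x)              # peaks coincide: amplitudes add
out_of_phase = np.sin(x) + np.sin(x + np.pi)  # peaks meet troughs: cancellation

print(np.max(np.abs(in_phase)))      # close to 2: the waves reinforce
print(np.max(np.abs(out_of_phase)))  # essentially 0: the waves cancel
```
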
Figure 2: The grooves on a CD are only 0.5 micro-metres wide and make a good diffraction grating, resulting in coloured interference patterns.
Because of the wave-like character of light, it is diffracted when it is shone through a pair of closely separated slits: if a screen is placed some way behind the slits, a pattern of light and dark
fringes appears on it. The spacing of these fringes is calculated from the wavelength lambda of the light. Dark fringes appear at points on the screen where the light received from the two slits is
exactly out of phase. These points have distances from the two slits that differ by:
(n + 1/2)λ

where n is an integer.
Figure 3: The double-slit experiment
Figure 4: Diffraction patterns are often used to analyse the structure of materials. Subtle differences in structure can change the properties of a material dramatically. Researchers at Heriot-Watt
University are using the technique to help Cadbury make sure their chocolate always solidifies in the tastiest way.
Light as particles
So, by the beginning of the 19th century, it was a well-established notion that light is wave-like. But in the early years of the 20th century it became apparent that it is also particle-like. A key
experiment was the photoelectric effect, in which electrons escape from the surface of a metal when it is bombarded with light.
A metal consists of a huge number of atoms effectively anchored to fixed sites by the electric forces caused by all the other atoms. The outermost orbital electrons of the atoms can easily be pulled
off when an electric field is applied. They then move through the metal and form what we call an electric current.
When light is shone on the metal, some of the electrons can actually escape from the surface. The number that escape rises with the intensity of the light, but their energy of escape does not. Rather
it depends on the colour of the light or, equivalently, its frequency nu (pronounced new).
The explanation of these properties by Albert Einstein in 1905 (the year in which he also produced the theory of relativity) was really the beginning of quantum theory. A beam of light can be thought
of as a collection of particles, called photons. The number of photons is proportional to the intensity of the light, and the energy E of each photon is proportional to its frequency:

E = hν
This formula had already been guessed in 1900 by the German physicist Max Planck, and the constant h is named after him. In ordinary units it is very small:

h ≈ 6.63 × 10⁻³⁴ joule seconds
The electron is ejected from the metal when one of the photons hits it and gets absorbed by it, so that the photon's energy is transferred to the electron. The number of escaping electrons increases
with the intensity of the light because when there are more photons there is a greater chance of one hitting an electron.
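As a concrete sketch of E = h nu, here is the energy of a single photon of visible light and the photon rate of a small laser; the frequency and beam power are assumed example values, not from the article.

```python
# Sketch: energy of one photon, E = h*nu, and the photon rate of a small laser.
# The frequency and beam power below are assumed example values.
h = 6.626e-34          # Planck's constant, J s (value quoted later in the article)
nu_green = 5.6e14      # roughly the frequency of green light, Hz

E_photon = h * nu_green                  # ~3.7e-19 J per photon
photons_per_second = 1e-3 / E_photon     # a 1 mW beam: ~2.7e15 photons/s
print(E_photon, photons_per_second)
```

The tiny energy per photon explains why light normally looks continuous: even a weak beam delivers an enormous number of photons every second.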
How heavy are photons?
Photons move with the speed of light, so when we think of them as particles we must use special relativity instead of Newton's mechanics. The energy of a particle whose speed is v (not to be confused
with the Greek letter nu) and whose rest mass is m is, according to Einstein:

E = mc^2 / sqrt(1 - v^2/c^2)
In this equation m is the particle's mass, v is its speed and c is the speed of light. For a particle at rest we can substitute 0 for v, giving Einstein's famous formula:

E = mc^2
For a particle moving with the speed of light, v = c, the denominator vanishes, so it can have finite energy only if the numerator vanishes too, that is m = 0. Experiments have tested that photons
have zero mass to very high accuracy: we know that their mass is less than 10^-18 times that of an electron.
Photons may not have mass but they do have momentum. In Newton's mechanics the momentum of a particle is simply its mass times its velocity; p = mv. However, in special relativity the momentum of a
particle is:

p = mv / sqrt(1 - v^2/c^2)
Try it: the Earth's mass is approximately 6x10^24 kg, and it moves around the Sun once per year at a distance of about 1.5x10^11 metres. Use a calculator to compare Newton's classical value for
the momentum of the Earth with Einstein's relativistic value. The speed of light is approximately 3x10^8 ms^-1.
What do you notice about the two values?
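A minimal sketch of this "Try it" exercise, using the approximate values quoted above (the year-to-seconds conversion is my assumption):

```python
import math

# Sketch of the "Try it" exercise: Newton's p = m*v versus Einstein's relativistic
# momentum for the Earth, using the approximate values quoted in the article
# (the year-to-seconds conversion is my assumption).
m = 6e24                      # Earth's mass, kg
r = 1.5e11                    # orbital radius, m
c = 3e8                       # speed of light, m/s
year = 365.25 * 24 * 3600     # one year, seconds

v = 2 * math.pi * r / year    # orbital speed, about 3e4 m/s
p_newton = m * v
p_einstein = m * v / math.sqrt(1 - (v / c) ** 2)

print(p_newton, p_einstein)
print((p_einstein - p_newton) / p_newton)   # fractional difference ~5e-9
```

Because the Earth's speed is only about one ten-thousandth of c, the two momenta agree to a few parts in a billion.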
Using the relativistic momentum equation we can express the energy of a particle in terms of its momentum as follows:

E^2 = (pc)^2 + (mc^2)^2
For a photon this is simply:

E = pc
But as we have seen already the energy is also related to the frequency of the light by:

E = h nu
When we combine these two equations, we find that:

p = h nu / c
This tells us the momentum of a photon for light of frequency nu.
For waves whose speed is c, the wavelength is:

lambda = c / nu
Putting these formulae together we find that:

p = h / lambda
This is one of the fundamental relations of quantum theory!
It turns out that this equation is so fundamental that it applies to all particles, not just photons. (See "Quantum uncertainty" elsewhere in this issue.)
Try it: using an approximation for the momentum of the Earth in its orbit round the Sun, calculate its wavelength, lambda. The value of h, Planck's constant, is given above as approximately 6.626x10^-34 Js.
What do you notice about the value?
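A sketch of this second exercise, using the Newtonian momentum (the relativistic correction is only a few parts in a billion for the Earth):

```python
import math

# Sketch of the second "Try it": the wavelength lambda = h/p of the orbiting Earth.
# Newton's momentum is used, since the relativistic correction is ~5 parts in 1e9.
h = 6.626e-34
m = 6e24
v = 2 * math.pi * 1.5e11 / (365.25 * 24 * 3600)   # orbital speed, m/s

lam = h / (m * v)
print(lam)   # ~3.7e-63 m, unimaginably smaller than any laboratory scale
```

The answer is absurdly small, which is why quantum wave behaviour is invisible for macroscopic objects.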
Biographies of the mathematicians mentioned in the article are available from the "MacTutor history of mathematics archive":
The author
Professor Peter Landshoff is doing research on quarks (what we and everything in the world are made of) and is a lecturer in quantum mechanics at the University of Cambridge.
Brian D.
My name is Brian and I majored in Electrical Engineering at the University of California, Riverside. In middle school and high school, I took every Honors and Advanced Placement math and science
class offered. In college I have taken differential equations, single and multi-variable calculus and physics for engineers. I have always excelled in math and in high school I found that I am also
good at teaching it to others.
I do not have a lot of formal tutoring experience. I mainly have just helped out friends and friends' siblings whenever it was needed. In high school I helped classmates when they did not understand
certain concepts in our math classes. I also tutored a friend's younger brother for about a year and a half, in Geometry and Algebra 2. His parents asked if I could help him because he was struggling
in geometry and we succeeded in raising his grade from a D on his progress report to a B on his first report card, and then to an A on his second report card. Due to the immense progress that he made,
his parents asked me to help him the following year in Algebra 2. I have not tutored very much during college, but I have helped my younger sister, who is in high school, in Algebra 2 and
Pre-Calculus the past two years and I know I will be helping her in Calculus next year.
I think that math is a subject that is easiest to learn through repetition. Because of this, I will not simply do your homework for you. I help walk the student through the steps that need to be
taken to solve a problem, and if necessary I make up problems on the spot to ensure that the student really understands what steps need to be taken. I truly think that people can learn anything. It
is simply a matter of how much work is put in. School can be very hard for some, but the right help can make it much easier and less stressful.
Thank you for your interest!
Brian's subjects
March 2012
Application and Theory of Dielectric Materials in RF/Microwave Systems
By Paul Dixon, Emerson & Cuming Microwave Products
Maxwell's equations define two quantities which determine the response of a material to electromagnetic fields: the electric permittivity ε and the magnetic permeability μ. If these quantities are known for a material, then the reaction of a wave to the material is completely determined.
The permeability is a measure of a material's response to the magnetic portion of an EM wave. This paper assumes the material has no magnetic properties; the permeability is therefore equal to that of free space.
What are Dielectrics?
In the broad sense, dielectrics are materials that can influence and be influenced by the electric portion of an electromagnetic field. While all objects exhibit dielectric properties to differing
degrees, this paper will concern itself with simple dielectric materials with low conductivity and no ‘semi-conducting’ properties. Also, certain dielectrics have high loss and are used to attenuate
a propagating wave. These dielectric absorbers will not be treated in this paper.
Dielectric Theory
A capacitor when connected to a voltage source will store charge proportional to the area A of the capacitor plates. When subjected to a voltage, the capacitor will draw a charging current which
leads the voltage by 90°. The voltage creates an electric field E between the plates. The charge is stored on the capacitor plates.
If a dielectric material is inserted between the capacitor plates the capacitance of the capacitor will increase. The relation between the electric field E and the dielectric flux density D is given by

D = ε0E + P

where P is the polarization in the dielectric induced by the electric field and ε0 is the permittivity of free space, equal to 8.854x10^-12 farads/m.
D = ε*ε0E

where ε* is the permittivity of the dielectric material. In general the permittivity is a complex quantity, ε* = ε' - jε'', and is commonly quoted relative to the permittivity of free space. The real part ε' is generally called the dielectric constant and the imaginary part ε'' is called the loss factor. No passive material can exhibit a permittivity less than that of free space, hence dielectric constants are always greater than 1. The ratio ε''/ε' is called the loss tangent.
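As a small illustration of this bookkeeping, here is how the loss tangent (and the resonator Q it bounds, discussed later in this article) might be computed for an assumed low-loss laminate; the numbers are invented for the example.

```python
# Sketch: loss-tangent bookkeeping for an assumed low-loss laminate,
# eps* = eps' - j*eps'' (the numbers are invented for the example).
eps = complex(2.2, -0.002)

eps_real = eps.real                   # dielectric constant eps'
loss_factor = -eps.imag               # loss factor eps''
tan_delta = loss_factor / eps_real    # loss tangent eps''/eps'

print(tan_delta)       # ~9.1e-4
print(1 / tan_delta)   # upper bound on the Q of a resonator made of this material
```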
As can be seen, the dielectric serves to increase the electric flux density through the dielectric. Note that the electric field in a capacitor remains constant. The additional flux density is due to
the polarization of the material.
Molecular basis of dielectrics
A dielectric material becomes polarized in response to an electric field because it contains charge carriers that can be displaced. This polarization can take four forms. The first is due to bound
charge in the electron cloud surrounding the nucleus becoming displaced slightly upon the application of an electric field. This displacement results in a virtual electric dipole formed by the
positive nucleus and the negative electron cloud. A second source of polarization arises in molecules formed by atoms from different elements. In the absence of an electric field the electrons will
be displaced towards the stronger binding atoms. The presence of an external electric field will shift the equilibrium position of the atoms, thus creating a dipole moment. A third source arises from the asymmetric charge distribution in some molecules, which produces permanent dipole moments that reorient themselves in response to an external field. A fourth source is due to free charge carriers whose motion through the material is restricted, which also results in an increase in capacitance. Note that the first three polarization mechanisms involve charge carriers bound to atoms,
while the fourth consists of free electrons with restricted motion. Also note that the presence of these free electrons will contribute to material loss since their motion requires work which will
attenuate the energy of the electric field. Desirable low loss dielectrics minimize free electrons. In the macroscopic world we are less concerned with the polarization mechanisms and lump them
together in the material polarization to determine the permittivity.
Dielectric Operation
All materials on earth can be considered dielectrics. They all have a permittivity greater than that of free space and will become polarized to varying extents due to an external electric field. In
this paper we are concerned primarily with a class of dielectrics with very small loss factors, in which an electromagnetic wave will propagate (or resonate) with a minimum amount of attenuation.
Naturally occurring materials have dielectric constants ranging from as low as two to as high as several thousand. Artificial dielectrics consist of combinations of materials with various dielectric constants which macroscopically exhibit the desired effective dielectric constant.
A dielectric will reduce the wavelength of an electromagnetic wave by a factor proportional to the square root of the dielectric constant. This factor is used in applications to reduce the physical
size of components. In circuit board or patch antenna applications, a high-dielectric substrate enables the designer to reduce the overall size of the component. This doesn't come without cost, however: the bandwidth or efficiency can be reduced because the higher dielectric stores more energy. A high-dielectric material can also substantially reduce the size of a resonator.
A dielectric will also cause a wave in free space to reflect and refract when it impinges on its surface. Dielectrics are often used as lenses to shape antenna beams or to focus energy as with
Luneberg lenses. A lens can be used to transform a spherical wave into a plane wave which will increase the directivity of the antenna.
In the RF/microwave realm, dielectrics are primarily used to modify, compress or redirect electromagnetic energy. They can be used to reduce the size of antennas or other components or to change the
path of electromagnetic energy through controlled reflections or lenses.
Reflection and Transmission of Waves at a Dielectric Boundary
Dielectrics can be used to modify a wave by exploiting its reflection/transmission characteristics. At a dielectric interface, the incident, reflected and refracted waves must obey the boundary
condition that the sum of E and H fields of the waves must be continuous. Requiring continuity of the amplitudes leads to Fresnel’s equations. Continuity of phase leads to Snell’s Law. Reflection
from a dielectric interface depends on the polarization. There are two polarization states defined. Parallel polarization occurs when the E field vector is parallel to the plane of incidence. The
plane of incidence is defined by the vector normal to the material and the propagation direction of the incident wave. Perpendicular polarization occurs when the E field vector is perpendicular to
the plane of incidence.
The phase delay experienced by the wave in propagating a distance d is given by

φ = (2πd/λ0)√(ε*μ*)

where λ0 is the free-space wavelength.
Note that for a non-magnetic material these equations are simplified by μ*=1
The interface reflection coefficients are only half the story though. Eventually the wave will reach the other side of the dielectric and reflect. The total reflection is then derived from the sum of
the reflected waves.
The voltage reflection coefficient for a thickness d of a material is

R = r(1 - e^(-2jφ)) / (1 - r²e^(-2jφ))

where r is the appropriate interface reflection coefficient and φ is the (complex) phase delay defined above.
The voltage transmission coefficient is given by

T = (1 - r²)e^(-jφ) / (1 - r²e^(-2jφ))
If the sample is metal backed, the total reflection coefficient becomes

R = (r - e^(-2jφ)) / (1 - re^(-2jφ))
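These slab coefficients can be checked numerically. The sketch below assumes the standard normal-incidence (Airy) formulas for a slab in air, which is my assumption for the expressions the article refers to; for a lossless slab the computed reflection and transmission conserve energy.

```python
import cmath
import math

# Hedged sketch: normal-incidence reflection/transmission of a dielectric slab in
# air, using the standard Airy-summation formulas (assumed; the article's own
# expressions are not reproduced here). eps may be complex (eps' - j*eps'').
def slab(eps, d, f, metal_backed=False):
    c = 3e8
    lam0 = c / f
    n = cmath.sqrt(eps)                 # complex index, non-magnetic material
    r = (1 - n) / (1 + n)               # air -> dielectric interface coefficient
    phi = 2 * math.pi * d * n / lam0    # one-way (complex) phase delay
    e2 = cmath.exp(-2j * phi)
    if metal_backed:
        return (r - e2) / (1 - r * e2)  # total reflection with a short behind
    R = r * (1 - e2) / (1 - r * r * e2)
    T = (1 - r * r) * cmath.exp(-1j * phi) / (1 - r * r * e2)
    return R, T

R, T = slab(eps=4.0, d=0.005, f=10e9)   # lossless slab
print(abs(R) ** 2 + abs(T) ** 2)        # -> 1.0: energy is conserved
```

For a lossless metal-backed sample the same routine returns a reflection coefficient of magnitude 1, as it must.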
Wave Propagation in Dielectrics
For waves in a dielectric medium two of Maxwell’s equations can be written as
Differentiating each equation by t and substituting yields
If it is assumed that E and H are functions of x and t only the solution is a plane wave
A nonzero α leads directly to an exponential attenuation of the wave. The complex exponential leads to a time period of
and a space period
For all materials with β>1 (all dielectric materials) the wavelength will be compressed inside the dielectric compared to free space by a factor of β. For a low loss material, a very good approximation to β is

β ≈ √ε'
Hence for nonmagnetic material, wavelength compression inside a dielectric is proportional to the square root of the dielectric constant.
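For example, the wavelength compression at 10 GHz in a material with dielectric constant 4 (both values chosen for illustration):

```python
import math

# Sketch: wavelength compression inside a dielectric, lam_d = lam0 / sqrt(eps').
# Frequency and dielectric constant are assumed example values.
c = 3e8
f = 10e9
eps_r = 4.0

lam0 = c / f                      # free-space wavelength: 0.03 m (30 mm)
lam_d = lam0 / math.sqrt(eps_r)   # inside the dielectric: 0.015 m (15 mm)
print(lam0, lam_d)
```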
Dielectric Polarization - The microscopic and the macroscopic forms of Maxwell's equation for Gauss's Law are

∇·E = ρ/ε0 (microscopic)    and    ∇·D = ρf (macroscopic)
Where the electric field in a volume is dependent on the total charge density. In the presence of matter Maxwell introduced a displacement field D which is dependent only on the free charge density
ρb=bound charge density
ρf=free charge density
These equations are equivalent if we define

D = ε0E + P    and    ρb = -∇·P
Where P=polarization of the matter induced by the applied electric field
Artificial Dielectrics
Permittivity of a material generally arises from the effect on electrons within individual atoms or molecules in the material. An artificial dielectric is created when two or more dissimilar
materials are mixed together. The combination will exhibit an effective permittivity somewhere intermediate between the permittivity of the materials. In order for the material to exhibit this
macroscopic permittivity the spacing between the filler components must be less than a wavelength which is not an issue in the RF/microwave realm. Generally artificial dielectrics are created by
combining particles or fibers (inclusions) with a matrix material. The dielectric constant of the composite will increase or decrease relative to the matrix depending on the dielectric of the
inclusions. Inclusions of hollow particles can serve to substantially lower the dielectric constant (and weight) of the composite; these materials are called syntactic foams.
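The article does not specify a mixing rule, but one classical choice for dilute spherical inclusions, the Maxwell Garnett formula, illustrates how low-permittivity inclusions such as hollow microspheres pull the composite's dielectric constant down; all numbers here are illustrative.

```python
# Hedged sketch: one classical mixing rule (Maxwell Garnett) for the effective
# dielectric constant of spherical inclusions in a matrix. The article does not
# specify a mixing rule; this is an illustration only.
def maxwell_garnett(eps_m, eps_i, f):
    """eps_m: matrix permittivity, eps_i: inclusion permittivity, f: volume fraction."""
    beta = (eps_i - eps_m) / (eps_i + 2 * eps_m)
    return eps_m * (1 + 2 * f * beta) / (1 - f * beta)

# Hollow microspheres (eps ~ 1) lower the composite's dielectric constant:
print(maxwell_garnett(eps_m=3.0, eps_i=1.0, f=0.4))   # noticeably below 3.0
```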
Dielectric Metamaterials- No naturally occurring material can have a dielectric constant less than 1. Metamaterials can exploit the macroscopic behavior of materials to create an effective dielectric
constant less than 1 or even less than zero. These materials are made with high dielectric inclusions with specific shapes and sizes (rods, donuts, etc) in a lower dielectric matrix. Under certain
conditions, these materials can exhibit a dielectric constant less than zero. What exactly does this mean? Recall the equation D=εE. A positive ε implies that the displacement field is in phase with
the electric field. A negative ε means it is 180º out of phase with the electric field. A metamaterial can introduce a phase delay into the material response which will result in the displacement
field being out of phase with the applied electric field.
Resonator Concepts
It is well known that a dielectric material can act as a waveguide, supporting TE (transverse electric) and TM (transverse magnetic) modes like a more conventional hollow metallic waveguide. Like the
metallic waveguide, if the dielectric waveguide is truncated, standing waves will exist and it will behave as a resonant cavity.
Analysis of hollow metallic cavities is straightforward as matching the boundary conditions of zero tangential E field on the metallic boundaries leads to exact solutions for simple shapes. For a
dielectric resonator, it is a little more complex. The air dielectric boundary confines most of the field to the dielectric material but the boundary conditions are less straightforward. Often the
dielectric is used inside a metallic enclosure and hence serves to reduce the size of the enclosure. The quality factor Q can be very high for a dielectric resonator and is proportional to the
inverse of the material loss tangent.
A dielectric resonator when not enclosed by metal can be an efficient antenna with wider bandwidth than comparably sized patch antennas at the cost of the antenna no longer having a low profile.
Dielectric Resonator Antennas (DRAs) also have qualities useful in creating antenna arrays. A single dielectric resonator will resonate at many different frequencies with different field
configurations. This can be exploited with different feed locations and mechanisms in a DRA enabling dual band performance.
Dielectric Forms
Low K Materials (1<K<2)
Materials with a dielectric constant very close to 1 are available in two forms. Foams are available with very low dielectric constants. The matrix material provides structural support while the foam
cavities have a dielectric of 1. Alternatively, the inclusions would consist of tiny glass spheres (microspheres). The inside of these spheres is a vacuum which gives them a very low effective
dielectric constant. When used as an inclusion, microspheres can substantially lower the dielectric constant of a composite.
Medium K materials (2<K<30)
Wide ranges of material types are available over this range of dielectric constants. Circuit board laminates which are made with layers of materials interspersed with higher dielectric inclusions are
available in K values up to 10. These materials are limited in their thickness to less than 0.125". Thicker dielectrics made of different plastics are available for higher thicknesses.
High K materials (K>30)
To achieve the very high dielectric constants needed for dielectric resonators, ceramic materials must be used. These are sintered materials with dielectrics as high as 80. For a precision resonator,
the dimensions of the material are crucial which can be a problem due to the difficulty in machining most ceramic materials.
Flexible Dielectrics
Traditionally dielectric materials used in electronics have been rigid. Some applications (conformal antennas) require a dielectric material that can be applied around a radius. Flexible dielectrics
composed of an elastomer are available with k values from 2-30.
Injection Moldable Dielectrics
Many dielectric applications require a custom shape. Machining cost of traditional dielectric materials can be prohibitive. Dielectrics from 2-15 are available in pelletized form suitable for
injection molding.
Dielectric Applications
Dielectrics have many applications in the RF/microwave world. The primary uses are as circuit board materials, radomes, and antennas. Desired properties differ in each case.
Circuit Board Laminates
Dielectrics are used as circuit board laminates. The laminate is plated with a conductive material on two sides. One side serves as the ground plane while circuit pathways are etched into the other
side. The dielectric constant of the laminate material is key to determining the etch pattern. Higher dielectric constants will shrink the needed circuit size, enabling lower overall size and weight.
Radomes
Antennas which are outdoors must be protected from the weather with a radome. However, since all dielectric materials will reflect some incident energy, the radome must be carefully designed to minimize reflections. Well-designed radomes exploit the fact that a low loss wall that is half a wavelength thick, measured inside the dielectric, will exhibit almost perfect transmission.
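A low-loss wall reflects least when its thickness is a multiple of a half wavelength measured inside the dielectric, because the front- and back-face reflections then cancel. A sketch of sizing such a wall, using the same standard slab expression assumed earlier (frequency and permittivity are illustrative choices):

```python
import cmath
import math

# Sketch: a monolithic radome wall sized to half a wavelength *inside* the
# dielectric. Frequency and permittivity are assumed example values.
c = 3e8
f = 10e9
eps_r = 4.0

n = math.sqrt(eps_r)
lam0 = c / f
lam_d = lam0 / n            # wavelength inside the wall
d = lam_d / 2               # half-wave wall thickness: 7.5 mm here
print(d)

# Check with the standard normal-incidence slab formula: reflections cancel.
r = (1 - n) / (1 + n)
phi = 2 * math.pi * d * n / lam0          # one-way phase delay = pi
e2 = cmath.exp(-2j * phi)
R = r * (1 - e2) / (1 - r * r * e2)
print(abs(R))   # ~0
```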
All antennas use dielectrics. The value of the dielectric constant is critical for correct design of an antenna. If the dielectric constant is different, the antenna resonant frequency will change.
For many antennas it is critical to have known, consistent dielectric materials.
Patch Antennas
Patch antennas are a class of antenna which uses a microstrip circuit board substrate with a rectangular patch etched into the surface. Patch antennas are used where some gain is desired and wide
bandwidth is not critical. The patch resonates at the center frequency.
Dielectric Resonator Antennas (DRAs)
Dielectric Resonator Antennas exploit the property of dielectrics to resonate at precise frequencies. Since the DRA is unbounded it will radiate, making a rather efficient antenna. DRAs have
comparable beam patterns to patch antennas but have been shown to have much wider bandwidths. Also, they can be made to be dual (or more) frequency as a given resonator has many resonant frequencies.
The actual frequency radiated would depend on the properties of the feed system.
Dielectric Resonators
Dielectric resonators are used as precision frequency sources for various RF/microwave components. Used in dielectric resonator oscillators (DROs), they can store high levels of energy at resonance at frequencies comparable to a metal cavity. DROs usually use a very high dielectric constant (>40) material, which enables them to oscillate at frequencies in the RF/microwave band while maintaining a very small size. Small variance of dielectric constant with temperature is critical for proper operation of DROs. The most common uses for DROs are in oscillator or filter circuits.
Potting Material
Often a circuit will need protection from damage. A cure-in-place potting compound consisting of a low loss, low dielectric material could be poured over the circuit. It must be non conductive and
should cause a minimum of interference to circuit operation, hence a low dielectric material is used.
Low Dielectric Material
In many cases it is desirable to create physical space between components without the use of a material that would interfere with the electromagnetic operation. Materials with very low dielectric
constants are used. Generally some sort of foam, in which the primary constituent is air, these low-dielectric materials are virtually transparent to microwaves. They also provide some structural strength.
Dielectric Test Methods
Free space - Insertion loss/phase: A quick nondestructive method for measuring the dielectric constant of a flat sheet material is to use insertion loss and phase. In this test two antennas are set up and boresighted. A calibration measurement (magnitude and phase) is taken with a network analyzer to establish the baseline. The material under test is placed between the antennas and the signal
magnitude and phase are measured. Computer software is then used to determine the dielectric constant. Note that it is not possible to directly calculate the permittivity from the measured magnitude
and phase so iterative techniques must be employed. Also, a given value for magnitude and phase does not have a unique solution for permittivity: a phase measurement of, for example, –60º may represent –60º, –420º, –780º, etc., because it is unknown how many complete wavelengths have passed through the material. This becomes particularly relevant for thick samples or materials with very high dielectric constant, and can be resolved by measuring two or more thicknesses of material. This free-space method does not, however, yield accurate values for the loss factor of the material.
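A sketch of the wavelength ambiguity described above: for an assumed measured insertion phase, several dielectric constants are consistent until a second thickness is measured. All values are illustrative, and the simple non-magnetic phase-delay relation is assumed.

```python
import math

# Hedged sketch of the 2*pi ambiguity in free-space phase measurements. The simple
# non-magnetic relation delta_phi = (2*pi*d/lam0)*(sqrt(eps') - 1) is assumed, and
# all numbers are illustrative.
c = 3e8
f = 10e9
lam0 = c / f

def candidates(phase_deg, d, n_max=4):
    """Dielectric constants consistent with a phase reading (known only mod 360)."""
    out = []
    for n in range(n_max):
        delay = math.radians(-phase_deg) + 2 * math.pi * n   # possible true delays
        root = 1 + delay * lam0 / (2 * math.pi * d)          # sqrt(eps')
        out.append(root ** 2)
    return out

# A reading of -60 degrees through a 10 mm sheet admits several answers;
# repeating the measurement at a second thickness singles out the real one.
print(candidates(-60, 0.01))
```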
Slotted line - An empty slotted line waveguide terminated in a short, when excited by a CW signal, will exhibit a standing wave caused by the superposition of the incident and reflected wave. This
standing wave behavior will recur every half wavelength. Also, if the line is lossless, the nulls of the standing waves will be very deep and narrow. A precision probe can be used to measure the
location and width of the nulls to a high degree of accuracy. If a sample of dielectric material is placed next to the short the location of the nulls will shift. If the material has loss the width
of the nulls will increase. The new locations and widths of the nulls are measured, and very accurate values for the dielectric constant and loss tangent can be calculated. Note that this procedure has the same issues with multiple wavelengths inside the material as the free-space method but, again, measuring multiple thicknesses can resolve this issue. The slotted line method can give
very accurate results for both dielectric constant and loss tangent.
Resonator Methods
Various resonator methods exist which share the common characteristic of measuring the resonant frequency and resonant width of a high Q resonator both with and without the dielectric sample.
Split Post Dielectric Resonator
The SPDR utilizes closely spaced dielectric resonators which couple to create a very high Q resonator. The resonant frequency and Q (3 dB bandwidth) are measured and recorded. The sample is then inserted between the resonators and the frequency and Q are measured again. This results in very accurate values for dielectric constant and loss tangent without stringent requirements on machining samples.
Resonant Cavity
Like the SPDR, resonant cavity methods measure the resonant frequency and Q of a cavity. The cavity is then filled with the material to be tested and the dielectric constant and loss tangent are
determined by the shift. This procedure can be very accurate but suffers from the drawback of error introduced if the material does not completely fill the cavity, often requiring expensive,
precision machining.
Cavity Perturbation
The cavity perturbation method relies on partially filling a cavity with the material to be tested. If the exact volume of the inserted material is known relative to the full volume of the cavity,
the dielectric constant can be accurately derived. The value attained for loss tangent is less accurate, particularly for very low loss materials as there is insufficient loss in the partially filled
cavity. This technique also does not require precision machined samples and is very cost effective.
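A hedged sketch of such a perturbation estimate. The exact expression depends on the cavity mode and sample placement; the common small-perturbation form for a thin sample at the E-field maximum is assumed here, and all numbers are invented.

```python
# Hedged sketch of a cavity-perturbation estimate. The exact expression depends on
# the cavity mode and sample placement; the common small-perturbation form for a
# thin sample at the E-field maximum is assumed, and all numbers are invented.
def perturbation_eps(f_empty, f_sample, v_cavity, v_sample):
    """Approximate eps' from the downward shift of the resonant frequency."""
    return 1 + (f_empty - f_sample) / f_sample * v_cavity / (2 * v_sample)

eps_est = perturbation_eps(f_empty=10.000e9, f_sample=9.990e9,
                           v_cavity=1e-5, v_sample=1e-8)   # volumes in m^3
print(eps_est)   # ~1.5
```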
Paul Dixon can be reached at pdixon@eccosorb.com
Re: st: Specifying categorical factors & GLMM parameter estimates
From Richard Goldstein <richgold@ix.netcom.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Specifying categorical factors & GLMM parameter estimates
Date Wed, 14 Nov 2007 14:26:20 -0500
If you have more than 2 categories, then you must form the set of variables yourself, either prior to estimating the model (e.g., via tab or via xi) or by using xi at the time of the estimation
I am not clear about what you expect Stata or any other software to do that is special regarding a categorical variable that is ordinal; perhaps you could clarify what you are expecting.
You do not show any output, but from your description it appears that you have let Stata default to treating each right-hand-side variable as continuous.
Susan Lingle wrote:
Hi All,
I am new (1-day) to STATA and purchased it to run a specific test: a GLMM with a binary response variable (or logistic regression with mixed effects). I have a few specific questions.
(1) Categorical and ordinal variables: I re-named my categorical variables (which will be used as fixed factors in the GLMM) to numbers before pasting them into a STATA dta file. I now defined
and assigned data labels to each number, and they appear okay.
But how can I make sure that STATA recognizes these as categorical variables? I see the format of "byte" and "%8.0g", but does that tell me enough? And how can I tell STATA to distinguish between
truly discrete variables (Europe, North America, Africa) and ordered variables (e.g., low, medium and high)? I assume there is no problem when there are only two categories (indicator variables), since
there should be no confusion with a continuous or ordinal category. But I have a YEAR variable with 8 years (not to be ordered) and an AREA variable with four categories.
(2) Specific parameter estimates: The GLMM/binary response produces parameter estimates for each variable, but it gives no degrees of freedom (which would help me find out whether STATA is
recognizing certain variables as categorical). It also does not produce specific parameter estimates, e.g., t-values with accompanying p-values for each level of a category. I am accustomed to
seeing that output in most statistical programs. Advice on how to get this? Or does the absence of this output suggest that STATA ‘thinks’ my variables are continuous rather than categorical?
I have been using the menus (GUI) as much as possible.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Xilin Zhang
Indiana University
Weak Pion and Photon Production off Nucleons and Nuclei in a Chiral Effective Field Theory
Neutrino-induced pion and photon production from nucleons and nuclei are important for the interpretation of neutrino-oscillation experiments, and these processes are potential backgrounds in the
MiniBooNE experiment [A. A. Aguilar-Arevalo et al. (MiniBooNE Collaboration), Phys. Rev. Lett. 100, 032301 (2008)]. We have been working on these problems in the Lorentz-covariant effective field
theory (known as QHD EFT) which contains nucleons, pions, Deltas, isoscalar scalar (sigma) and vector (omega) fields, and isovector vector (rho) fields and has chiral symmetry built in. In the first
part of the talk, we will compare our results for weak production of pions from nucleons with data. This serves as the calibration of our theory. And the weak production of photons from nucleons will
also be shown. The convergence of the two calculations based on the EFT will be discussed. More interestingly, the preliminary results for the incoherent and coherent weak production of pion and
photon from nuclei will be discussed. We will mention the possible relevance between our result and the MiniBooNE low energy excess event puzzle. The results for incoherent electron scattering off
nuclei and coherent photoproduction of pion from nuclei will be compared with data as the benchmark for the previous nuclei calculations. In the second part of the talk, we will focus on the theory.
For the scattering off nucleon problem, technical details including power counting, the Delta propagator (related to pathologies of introducing high-spin particles in field theory), and redundant interaction terms involving the Delta and form factors will be mentioned. For the scattering off nuclei problem, the approximation scheme and $\Delta$ dynamics in the medium will be discussed. Finally, possible
improvements on these calculations will be mentioned.
Back to the theory seminar page. | {"url":"http://www.phy.anl.gov/theory/semabstracts2010/zhang.html","timestamp":"2014-04-17T19:00:51Z","content_type":null,"content_length":"2424","record_id":"<urn:uuid:98c1c9d0-54bb-452b-a8f5-1ff271b1121e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Identity for cos(wt+phase)cos(wt)
December 29th 2010, 07:29 PM
Identity for cos(wt+phase)cos(wt)
I am looking for an identity to simplify the following:
I assume it will be something similar to the power reduction identity:
December 29th 2010, 09:14 PM
Dear laguna92651,
You can use the product to sum identity.
$\cos A\cos B=\frac{1}{2}\left[\cos(A+B)+\cos(A-B)\right]$
For a complete list of trignometric identities you can refer List of trigonometric identities - Wikipedia, the free encyclopedia
Hope this will help you.
December 29th 2010, 09:48 PM
I looked at that identity many times, but having the phase as part of one of the functions threw me. All of a sudden, with your reply, I could see it clearly. I guess it didn't dawn on me what A and B represented; I made it way harder than it was.
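With A = wt + phase and B = wt, the identity gives cos(wt + phase)cos(wt) = ½[cos(2wt + phase) + cos(phase)] — a constant term plus a double-frequency term, as expected in amplitude modulation. A quick numerical sanity check (the values of w, t, and the phase are arbitrary):

```python
import math

def lhs(w, t, phi):
    # Product of the carrier with a phase-shifted copy of itself
    return math.cos(w * t + phi) * math.cos(w * t)

def rhs(w, t, phi):
    # Product-to-sum: cos A cos B = (1/2)[cos(A+B) + cos(A-B)],
    # with A = wt + phi and B = wt, so A+B = 2wt + phi and A-B = phi
    return 0.5 * (math.cos(2 * w * t + phi) + math.cos(phi))

# Check agreement on a grid of arbitrary sample points
for w in (1.0, 5.0):
    for t in (0.0, 0.3, 1.7):
        for phi in (0.0, 0.5, 2.0):
            assert abs(lhs(w, t, phi) - rhs(w, t, phi)) < 1e-12
print("identity holds")
```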
I got what I would expect, this is an amplitude modulation problem I was working on. | {"url":"http://mathhelpforum.com/trigonometry/167103-identity-cos-wt-phase-cos-wt-print.html","timestamp":"2014-04-20T09:29:21Z","content_type":null,"content_length":"5518","record_id":"<urn:uuid:8e29f991-b4bc-4a5b-9703-3e2da7b4f94e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mr. Modenou World of Mathematics
Due to school cancellation, we will learn and review:
1. How to solve optimization problems using optimization strategy
2. How to find linear approximation
3. How to use mean value theorem, intermediate value theorem, and Rolle’s theorem.
4. How to approximate using linearization.
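As a small worked example of items 2 and 4 (the function and point are chosen for illustration, not taken from the class materials), the linearization of f at a is L(x) = f(a) + f'(a)(x − a):

```python
import math

def linearize(f, fprime, a):
    """Return the linearization L(x) = f(a) + f'(a)(x - a)."""
    return lambda x: f(a) + fprime(a) * (x - a)

# Approximate sqrt near a = 4, where f(4) = 2 and f'(4) = 1/4
L = linearize(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0)

approx = L(4.1)          # formula gives 2 + (1/4)(0.1) = 2.025
exact = math.sqrt(4.1)
assert abs(approx - exact) < 1e-3
print(approx)
```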
Also, we will have Application of derivative test2 on Thursday that will cover optimization, linearization, mean value theorem, intermediate value theorem, and Rolle’s theorem. | {"url":"http://kodjovi2005.edublogs.org/category/ap-calculus-ab/","timestamp":"2014-04-20T05:43:15Z","content_type":null,"content_length":"55147","record_id":"<urn:uuid:ce1a5fce-0607-48c7-85ca-1d44b8547bdb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use of RORA for Complex Ground-Water Flow Conditions
The RORA computer program for estimating recharge is based on a condition in which ground water flows perpendicular to the nearest stream that receives ground-water discharge. The method, therefore,
does not explicitly account for the ground-water-flow component that is parallel to the stream. Hypothetical finite-difference simulations are used to demonstrate effects of complex flow conditions
that consist of two components: one that is perpendicular to the stream and one that is parallel to the stream. Results of the simulations indicate that the RORA program can be used if certain
constraints are applied in the estimation of the recession index, an input variable to the program. These constraints apply to a mathematical formulation based on aquifer properties, recession of
ground-water levels, and recession of streamflow.
The RORA computer program estimates recharge to the water table on the basis of daily streamflow (Rutledge, 1998; 2000). The program is intended for analysis of a ground-water-flow system driven by
diffuse areal recharge to the water table and by ground-water discharge to a gaining stream. When the program is used, regulation and diversion of streamflow should be negligible.
The formulations that RORA uses are derived from a cross-sectional-flow model that calculates ground-water discharge per unit of stream length on the basis of designated transmissivity, storage
coefficient, distance from the stream to the hydrologic divide, and increase in head caused by recharge to the ground-water system (Rorabaugh, 1964, p. 433). The program is based on simple geometry,
as illustrated in figure 1A, which represents a segment of a basin. The bold line on the left border represents the stream, and the right border represents the hydrologic divide. Because the stream
extends along the length of the model, and because its altitude is uniform, the direction of ground-water flow is one-dimensional and perpendicular to the stream. A finite-difference-flow model,
MODFLOW-96, then is used to generate the hydrographs of ground-water level and ground-water discharge to the stream (fig. 1B; details about this simulation are given in the “Finite-Difference Simulation Design” section).
Before applying RORA, the user must designate the recession index (K). The input variable K is a measure of the time required for ground-water discharge to recede by one log cycle when the recession
becomes linear (or nearly linear) on the semilog hydrograph. In the ideal model (fig. 1), K can be determined from the following equation, which is derived from Rorabaugh and Simons (1966, p. 12) as:

K = 0.933(a^2)S/T     (1)
where a is the distance from the stream to the hydrologic divide (same as arrows in fig. 1); S is storage coefficient; and T is transmissivity. Hydrographs of ground-water level and ground-water
discharge (fig. 1) show linearity of recession on the semilog plot. When RORA is used in this ideal simulation, the determination of K (and subsequent application of RORA) is straightforward because:
1. the value of “a” is not ambiguous;
2. the recession of ground-water level (when expressed as vertical distance above the altitude of a known outflow boundary) is clearly definable and is equal to K;
3. there are no departures from linearity in the semilog plot of recession.
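Equation 1, K = 0.933(a^2)S/T in days per log cycle, is simple enough to evaluate directly. A small sketch (the storage coefficient and transmissivity are representative values chosen so that the hydraulic diffusivity T/S matches the 40,000 and 10,000 ft^2/d cases used in figures 3 and 4):

```python
def recession_index(a_ft, S, T_ft2_per_day):
    """Equation 1: K = 0.933 * a^2 * S / T, in days per log cycle."""
    return 0.933 * a_ft**2 * S / T_ft2_per_day

a = 2000.0  # transverse distance from stream to hydrologic divide, in feet

# Only the ratio T/S (hydraulic diffusivity) matters here, so
# representative values of S and T are used for each case.
k_fig3 = recession_index(a, S=0.01, T_ft2_per_day=400.0)   # T/S = 40,000 ft^2/d
k_fig4 = recession_index(a, S=0.01, T_ft2_per_day=100.0)   # T/S = 10,000 ft^2/d

print(round(k_fig3))  # 93 days per log cycle
print(round(k_fig4))  # 373 days per log cycle
```

These are the values of K (93 and 373 days per log cycle) tabulated for the simulations later in the report.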
Figure 1. (A) Finite-difference-flow model (plan view) and (B) simulated flow to drains and simulated ground-water level. (Water levels are shown for the finite-difference cell in the middle of the model.)
When a component of ground-water flow is parallel to the stream, in the “downvalley” direction toward the next stream in the network, that component will cause departures from the ideal conditions
described in items 1-4. This report demonstrates the use of the RORA program for these complex flow conditions. Finite-difference simulations are used to evaluate flow conditions. In one simulation,
flow is perpendicular to the stream (fig. 1). In other simulations, there is an additional flow component that is parallel to the stream. The latter simulations include a variety of outflow-boundary
Finite-Difference Simulation Design
Each of the three finite-difference simulations (fig. 2) includes drain cells that represent streams. In simulation A, drain cells are configured so that all flow will be one-dimensional, in the
“west” direction, perpendicular to the stream (same as fig. 1). The altitude of all drain cells is zero in simulation A. In simulations B and C, drain cells are configured so that a flow component
will be parallel to the stream. This configuration includes an absence of drain cells in the northern part of the model and additional drain cells along the southern boundary. The altitude of the
additional drains along that boundary is zero. The altitude of drains along the western boundary increases from zero at the southern end to some higher altitude at the northern boundary. Slopes of
drain lines are uniform and are expressed as S[B] and S[C], in feet per 100 feet (ft/100 ft). Included among simulations are two variations on the flow condition illustrated in figure 2. In the first
variation, S[B] = 0.05 and S[C] = 0.25 ft/100 ft; and in the second variation, S[B] = 0.4 and S[C] = 2.0 ft/100 ft. (These slopes correspond to about 3, 13, 21, and 106 feet per mile.) The
conductance of all drain cells is uniform (0.1 feet squared per second).
Figure 2. The three finite-difference simulations used to evaluate flow conditions (plan view): (A) simulation with flow perpendicular to the stream, (B and C) simulations that include a flow
component that is parallel to the stream. (Each simulation measures 8,000 ft in the north-south direction and 2,000 ft in the east-west direction. Aquifer properties are uniform. Bold lines indicate
drains. The altitude of all drains is zero, except for the segments S[B] and S[C], which are sloping drains. The altitude of these segments is zero at the southern boundary.)
The program MODFLOW-96 (Harbaugh and McDonald, 1996) is used to simulate flow. Each model measures 8,000 ft in the north-south direction and 2,000 ft in the east-west direction. All finite-difference
cells are squares measuring 100 ft on the side. In each simulation, storage coefficient and transmissivity are uniform. Drain segments are intended to represent streams that receive ground-water
discharge most of the time, with the exception of extreme drought. The initial hydraulic-head distribution in each simulation is calculated on the basis of a preliminary steady-state simulation in
which ground water recharges at a constant rate of 10 in/yr (inches per year). Subsequently, all simulations include 13 recharge events, each lasting 2 days, and each uniformly inducing 1 inch of recharge.
Simulation Results
Simulation results are shown for different transmissivities and storage coefficients (figs. 3 and 4). In each group of three simulations, recession characteristics of simulations B and C are similar
to recession characteristics of simulation A. Characteristics in simulation A must be dominated by the flow component that is perpendicular to the length of the stream; therefore, it appears that
component also is dominating recession characteristics in simulations B and C. Consequently, if equation 1 is used to determine the recession index before the RORA program is applied, the value of “a
” might be measured in the “east-west” direction in each simulation. To test this hypothesis, the RORA program is used to calculate recharge from each model-generated data set of ground-water
discharge. Before application of RORA, the recession index is calculated from equation 1, designating a = 2,000 ft. The following tabulation shows the recession index and recharge estimated using
Figure 3

Simulation    Recession index          Recharge
              (days per log cycle)     (inches)
A             93                       11.6
B             93                       11.3
C             93                       11.2

Figure 4

Simulation    Recession index          Recharge
              (days per log cycle)     (inches)
A             373                      11.7
B             373                      11.5
C             373                      11.5
The RORA program does not detect the last recharge event, but this is not ordinarily important because the program is intended for analysis of long periods (years) of streamflow record. The recharge
simulated by MODFLOW, for which RORA is giving estimates, is equal to 12 inches.
All simulations described include an initial head distribution that was calculated by using a preliminary steady-state simulation, in which recharge is 10 in/yr. To test effects of initial
conditions, all simulations were run again, but with a steady-state recharge of 20 in/yr (not illustrated). The RORA program was used to estimate recharge for each of the six simulated flow records.
Each result varied from its corresponding result previously presented, by 2 percent or less.
Figure 3. Ground-water discharge in the three simulations, with hydraulic diffusivity = 40,000 feet squared per day (ft^2/d). (Simulation A is identical to the simulation in figure 1. Variables are transmissivity (T), storage coefficient (S), and slopes of drain lines in simulations B and C (S[B] and S[C], respectively).)
Figure 4. Ground-water discharge in the three simulations, with hydraulic diffusivity = 10,000 feet squared per day (ft^2/d). (Variables are transmissivity (T), storage coefficient (S), and slopes of drain lines in simulations B and C (S[B] and S[C], respectively).)
Model-generated hydraulic head for a simulation that includes complex flow will indicate considerable variation in the direction of flow, depending on the time elapsed since the last recharge event (
fig. 5). In some hydrologic studies, the length of a ground-water-flow path might be used to designate “a” in equation 1. In this simulation, the length of a flow path might vary from 2,000 to 8,000
ft, depending on the time that is represented by the water-table map, and the location of the specific flow path. The RORA program was used to estimate recharge for all simulations illustrated in
figures 3 and 4, substituting 4,000 and 8,000 ft for the variable “a” in equation 1. When these longer flow paths are considered to represent “a,” the estimates of recharge calculated by RORA
demonstrated greater error than the errors when “a” was designated as 2,000 ft. Errors were particularly large when 8,000 ft was substituted. In this case, the estimate of recharge calculated by RORA
exceeds the MODFLOW-designated recharge by a factor from 1.7 to 2.9. As indicated above, it appears that if the recession index is calculated from equation 1, then “a” should be measured in the
direction that is perpendicular to the stream, or transverse from the stream to the hydrologic divide, not from flow-path analysis.
Figure 5. Ground-water levels in the finite-difference simulation illustrated in figure 2B, at time = 105, 150, and 190 days (plan view). (Numbered contours represent equal ground-water level, in
feet. The hydrograph of ground-water discharge is figure 3B.)
For the ideal condition in which all ground water flows perpendicular to the stream, the recession of the ground-water level will equal the recession index. Importantly, the variable considered
should be defined as the altitude of the water level in the well minus the altitude of the outflow boundary. Because the altitude of the outflow boundary in figure 1 is uniformly equal to zero,
subtraction is not necessary. Simulations of complex flow conditions, however, include variation in altitude of the outflow boundary. It could be inferred from such simulations (fig. 5) that the
orientation and direction of flow paths will change over time. This change, in turn, would lead to various interpretations of the location of the outflow boundary. Recession rates, expressed in days
per log cycle, vary depending on selection of the location of the outflow boundary for a simulation in which the altitude of this boundary changes along its length (fig. 6). It appears that if
ground-water-level recession is used to ascertain the recession index, results can vary appreciably.
Figure 6. (A) A location at which ground-water level is considered and five drain locations (plan view), and (B) ground-water-level recession expressed as the difference between water level at the
point of interest and the altitude of the drain. (The five different drain locations are indicated with numbers on frame A, and the recession curves are labeled with the corresponding number on frame
B. The simulation shown is the same as the simulation shown in figures 2B and 3B.)
The most practical method for determining K is measuring recession rates directly from the semilog flow hydrograph. Rutledge (2000, p. 14-18) includes discussion of this method and potential
problems. The method, which was used in a regional application of RORA (Rutledge and Mesko, 1996), consists of locating a part of the flow hydrograph that shows continuous recession, finding a
segment of recession that begins a sufficient amount of time after recharge so that it will exhibit linearity on the semilog plot, then determining the slope in days per log cycle. To test this
method for the hydrographs in figure 3, the flow from day 120 to 130 is considered to represent a linear-recession segment. As an approximation of the graphical method of calculating the recession
index, the difference between the logarithm of flow on these 2 days is calculated, then this rate of change is converted to days per log cycle. A method using daily flows at 150 and 160 days was used
for the hydrographs in figure 4. The following tabulation shows the recession index, thus determined, and the recharge calculated using RORA.
Figure 3

Simulation    Recession index          Recharge
              (days per log cycle)     (inches)
A             89                       11.5
B             94                       11.3
C             94                       11.2

Figure 4

Simulation    Recession index          Recharge
              (days per log cycle)     (inches)
A             314                      11.4
B             319                      11.3
C             319                      11.3
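The two-day approximation just described — taking the difference of the logarithms of flow on two days of a linear-recession segment and converting to days per log cycle — can be sketched as follows (the flow values are hypothetical, not the simulated hydrographs):

```python
import math

def recession_index_from_flows(q1, q2, dt_days):
    """Days per log cycle from flows q1 (earlier) and q2 (later), dt_days
    apart, assuming the segment lies on the linear part of the semilog
    recession."""
    return dt_days / (math.log10(q1) - math.log10(q2))

# Hypothetical flows at day 120 and day 130 of a recession receding at
# 94 days per log cycle: log10(q) drops by 10/94 over the interval.
q_day_120 = 50.0
q_day_130 = q_day_120 * 10 ** (-10.0 / 94.0)

k = recession_index_from_flows(q_day_120, q_day_130, dt_days=10.0)
print(round(k))  # recovers 94
```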
The estimates for the recession index (K) from the hydrograph in figure 3 are almost equal to the result of equation 1 (the equation gives 93 days/log cycle). The estimates for the recession index
from the hydrograph in figure 4, however, vary considerably from the result of the equation (the equation gives 373 days/log cycle). Nonetheless, the estimates of recharge from this method are
similar to estimates of recharge presented earlier.
Given the quantitative similarity between hydrographs affected by complex flow conditions and hydrographs not affected by them, recession characteristics under extreme low-flow conditions should not
be used when K is determined. This would apply to departures from the linear model of recession, such as exhibited by curves B and C in figure 3, in the time between 170 and 190 days.
As discussed, the ideal condition (fig. 1) exhibits a perfect log-linear relation in the recession of the streamflow hydrograph, after a period since the last recharge event. Complex flow causes
slight nonlinearity (figs. 3 and 4). The program user should evaluate variation in recession characteristics for the basin of interest. In many cases, the recharge estimate may not be extremely
sensitive to variation in K.
The RORA computer program for estimating recharge is based on a condition in which ground water flows perpendicular to the nearest stream that receives ground-water discharge. The program does not
explicitly account for the component of flow that is parallel to the stream. The program can be used for these flow conditions, if certain constraints are applied in the estimation of the recession
index (K), an input variable to the program. These constraints are as follows:
1. If the equation 0.933(a^2)S/T is used to determine K, the value of “a” should be measured as the transverse distance from the stream to the hydrologic divide. Estimation of “a” based on flow-path
lengths may introduce errors.
2. If K is determined from the recession of ground-water levels, results can vary depending on the user’s selection of the outflow boundary location. The boundary location can be open to
interpretation because there may be multiple flow paths.
3. If K is determined from streamflow data, extreme low-flow conditions should not be used.
The most practical method for estimating K is from analysis of the streamflow data set. Program users should be cautious when attempting to evaluate or replace this estimate on the basis of flow
paths or ground-water levels.
References Cited
Harbaugh, A.W., and McDonald, M.G., 1996, User’s documentation for MODFLOW-96, an update to the U.S. Geological Survey modular finite-difference ground-water flow model: U.S. Geological Survey
Open-File Report 96-485, 56 p.
Rorabaugh, M.I., 1964, Estimating changes in bank storage and ground-water contribution to streamflow: International Association of Scientific Hydrology, Publication 63, p. 432-441.
Rorabaugh, M.I., and Simons, W.D., 1966, Exploration of methods relating ground water to surface water, Columbia River Basin–second phase: U.S. Geological Survey Open-File Report 66-117, 62 p.
Rutledge, A.T., 1998, Computer programs for describing the recession of ground-water discharge and for estimating mean ground-water recharge and discharge from streamflow data–update: U.S. Geological
Survey Water-Resources Investigations Report 98-4148, 43 p. url: http://pubs.water.usgs.gov/wri98-4148
Rutledge, A.T., 2000, Considerations for use of the RORA program to estimate ground-water recharge from streamflow records: U.S. Geological Survey Open-File Report 00-156, 44 p. url: http://
Rutledge, A.T., and Mesko, T.O., 1996, Estimated hydrologic characteristics of shallow aquifer systems in the Valley and Ridge, the Blue Ridge, and the Piedmont Physiographic Provinces based on
analysis of streamflow recession and base flow: U.S. Geological Survey Professional Paper 1422-B, 58 p.
The citation for this report, in USGS format, is as follows:
Rutledge, A.T., 2003, Use of RORA for complex ground-water flow conditions: U.S. Geological Survey Water-Resources Investigations 03-4304, 5 p.
For more information: http://water.usgs.gov/ogw/rora/ | {"url":"http://pubs.usgs.gov/wri/wri034304/","timestamp":"2014-04-20T20:56:47Z","content_type":null,"content_length":"31670","record_id":"<urn:uuid:30a4f7a1-492f-46eb-843e-7599faed0cdb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
indefinite integral
January 29th 2009, 09:58 AM #1
Nov 2008
indefinite integral
Which of the following is an indefinite integral of
I know that $\int \frac{dx}{a^2-x^2}=\frac{1}{a}\tanh^{-1} \frac{x}{a}+C$ and then I manipulated it to $\int \frac{6}{244-227x^2}dx=6 \int \frac{dx}{244-227x^2}$ but I got stuck after that.
$\frac{1}{244-227x^2}=\frac{1}{227 \left(\frac{244}{227}-x^2\right)}=\frac{1}{227} \cdot \frac{1}{\frac{244}{227}-x^2}$
So $a=\sqrt{\frac{244}{227}}$, and the integral is $\frac{6}{227}\cdot\frac{1}{a}\tanh^{-1}\frac{x}{a}+C$.
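As a numerical check (a sketch, not part of the original thread): differentiating the candidate antiderivative F(x) = (6/227)(1/a) tanh⁻¹(x/a), with a = √(244/227) so that 227a² = 244, should recover the integrand at any |x| < a:

```python
import math

a = math.sqrt(244.0 / 227.0)

def F(x):
    # Candidate antiderivative: (6/227) * (1/a) * atanh(x/a)
    return (6.0 / 227.0) * (1.0 / a) * math.atanh(x / a)

def integrand(x):
    return 6.0 / (244.0 - 227.0 * x**2)

# Central-difference derivative of F at an arbitrary point inside |x| < a
x, h = 0.5, 1e-6
derivative = (F(x + h) - F(x - h)) / (2 * h)
assert abs(derivative - integrand(x)) < 1e-8
print("antiderivative checks out")
```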
January 29th 2009, 10:03 AM #2 | {"url":"http://mathhelpforum.com/calculus/70630-indefinite-integral.html","timestamp":"2014-04-20T12:29:55Z","content_type":null,"content_length":"34082","record_id":"<urn:uuid:e0776493-41d5-44eb-a154-3724e81ca44f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - If no singularity, what’s inside a big black hole?
I know... but that transform is non-trivial! E.g. the part of the world line of the infalling observer which is inside the horizon is not even in the spacetime of the asymptotic observer, so the
transform must have some singularities; my question is whether they are physical. Searching for literature on this gives remarkably thin results (i.e. none). I really would like to know the answer,
but I don't think anyone has it --- I would be happy to be shown otherwise.
This is why I don't believe in black holes. To make Hawking radiation compatible with an asymptotic observer, an infalling observer would receive an infinitely strong blast of radiation when crossing the horizon. This is why I think the fuzzball is better than the LQG solution, at least as it is interpreted. The infinite blast should actually be the leaking gas of a hot sphere made of whatever entity a fundamental theory of quantum gravity regards as fundamental.
EDIT.: Just noticed what suprised said above. So, what I mean is a killer fuzzball.
Amit Gupta, Author at Pivotal Labs
So You Still Don’t Understand Hindley-Milner? Part 3
Saturday, June 8, 2013
In Part 2, we finished defining all the formal terms and symbols you see in the StackOverflow question on the Hindley-Milner algorithm, so now we’re ready to translate what that question was asking
about, namely the rules for deducing statements about type inference. Let’s get down to it!
The rules for deducing statements about type inference
Read on at my blog (since these blogs don’t support MathJax) →
So You Still Don’t Understand Hindley-Milner? Part 2
Saturday, June 8, 2013
In Part 1, we said what the building blocks of the Hindley-Milner formalization would be, and in this post we’ll thoroughly define them, and actually formulate the formalization:
Formalizing the concept of an expression
We’ll give a recursive definition of what an expression is; in other words, we’ll state what the most basic kind of expression is, we’ll say how to create new, more complex expressions out of
existing expressions, and we’ll say that only things made in this way are valid expressions.
Read on at my blog (since these blogs don’t support MathJax) →
Entropy: How Password Strength Is Measured
Saturday, June 8, 2013
Mike Sierchio wrote a cool post on password strength, and the concept of entropy. As he points out, entropy isn’t always entropy. That confusion is apparently not uncommon, as it’s been asked about
on IT Security Stack Exchange as well. So what’s really going on?
Let’s step back for a sec and fill in some context. What are we trying to do? We’d like some way to measure how hard it is to guess our passwords, a number that serves as a heuristic standard of
password strength. But there are two fundamentally different things we might want to measure:
1. How hard would it be for someone to guess your password with essentially no knowledge of how you created your password?
2. How hard would it be for someone to guess your password if they knew the process used to generate it? This is of course assuming that there is a process, for example some script that does some
Math.rand-ing and produces a password string.
The term “entropy” has been used to refer to both kinds of calculations, but they’re clearly entirely different things: the former essentially takes a string as input, the latter takes a random
process as input. Hence, “entropy is not entropy.”
Alright, well if entropy isn’t entropy, let’s see what entropies are. We’ll look at the standard mathematical formulation of the random-process-entropy which comes from information theory. And we’ll
look at the function used to calculate particular-string-entropy in one of the most popular password strength testers. And that’s all we’re going to do, we’ll look at how the calculations are done,
without dwelling too much on the differences between the two approaches or what their use cases are.
Read on at my blog (since these blogs don’t support MathJax) →
Yo Dawg, I Herd You Like Math
Saturday, June 8, 2013
I’ve been learning a bit of statistical computing with R lately on the side from Chris Paciorek’s Berkeley course. I just got introduced to knitr and it’s damned sweet! It’s an R package which takes
a LaTeX file with embedded R, and produces a pure LaTeX file (similar to how Rails renders an .html.erb file into an .html file), where the resulting LaTeX file has the output of the R code. It makes
it super easy to embed statistical calculations, graphs, and all the good stuff R gives you right into your TeX files. It lets you put math in your math, so you can math while you math.
I’ve got a little project which:
1. Runs a Python script which will use Selenium to scrape a web page for 2012 NFL passing statistics.
2. “Knits” a TeX file with embedded R that cleans the raw scraped data, produces a histogram of touchown passes for teams, and displays the teams with the least and greatest number of touchdowns.
3. Compiles the resulting TeX file and opens the resulting PDF.
4. Cleans up any temporary work files. Continue reading
Is It Possible to Be 15% Swedish?
Saturday, June 8, 2013
This question came up as a joke during a team standup a few months ago. Although the obvious answer is “no,” if you’re willing to play fast and loose with your metaphysics for a bit, the answer can
be “yes” and there’s a cute solution that ties together binary numbers and binary trees. This post itself is a bit of a joke in that it’s just for fun, but it might be nice to see the familiar
concepts of binary numbers and binary trees in a new light.
The obvious answer is “no”
Let’s quickly see why the real life answer is “no.” But first we should lay out the assumptions implicit in the problem. We’re going to assume that at some point in time, everyone was either entirely
Swedish or entirely non-Swedish. There’s a chicken-and-egg problem that we’re sweeping under the rug here, but that’s what rugs are for. Next we’re assuming that every person after that point in time
has their Swedishness wholly and equally determined by their parents' Swedishness. So if mom is 17% Swedish and dad is 66% Swedish, then baby is ½ x 17% + ½ x 66% = 41.5% Swedish.
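One way to make the "no" precise (an illustration, not taken from the original post): with everyone starting at 0 or 1 and each child's fraction the average of its parents', every reachable fraction is a dyadic rational k/2^n — and 15% = 3/20 has a 5 in its denominator, so it is never hit exactly:

```python
from fractions import Fraction

def child(mom, dad):
    # Baby's Swedishness is the average of the parents' fractions
    return (mom + dad) / 2

# Everyone starts out entirely Swedish (1) or entirely non-Swedish (0);
# generate all fractions reachable within a few generations.
reachable = {Fraction(0), Fraction(1)}
for _ in range(6):
    reachable |= {child(a, b) for a in reachable for b in reachable}

# Every reachable fraction has a power-of-two denominator...
assert all(f.denominator & (f.denominator - 1) == 0 for f in reachable)

# ...but 15% = 3/20 does not, so it is never reached exactly.
target = Fraction(15, 100)
assert target not in reachable
print(target, "is not dyadic")
```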
Read on at my blog (since these blogs don’t support MathJax) →
So You Still Don’t Understand Hindley-Milner? Part 1
Saturday, June 8, 2013
I was out for drinks with Josh Long and some other friends from work, when he found out I “speak math.” He had come across this StackOverflow question and asked me what it meant:
Before we figure out what it means, let’s get an idea for why we care in the first place. Daniel Spiewak’s blog post gives a really nice explanation of the purpose of the HM algorithm, in addition to
an in-depth example of its application:
Functionally speaking, Hindley-Milner (or “Damas-Milner”) is an algorithm for inferring value types based on use. It literally formalizes the intuition that a type can be deduced by the
functionality it supports.
Okay, so we want to formalize an algorithm for inferring types of any given expression. In this post, I’m going to touch on what it means to formalize something, then describe the building blocks of
the HM formalization. In Part 2, I’ll flesh out the building blocks of the formalization. Finally in Part 3, I’ll translate that StackOverflow question.
Read on at my blog (since these blogs don’t support MathJax) →
Making math make sense to programmers
Saturday, October 22, 2011
Whether you’re learning math for pleasure or profit (jumping on the Big Data bandwagon), there are times when it may seem intimidating, overwhelming, confounding, etc. My assertion is that if you
think like a programmer, you already have a leg up when it comes to learning math.
I gave today’s Tech Talk at Pivotal Labs SF on this very topic. The outline of the talk is listed below, and these are the slides. I’ll be following up with a series of mini blog posts extracting the
contents of the talk.
1. Program-y translations of math notation
□ Why is math hard to read? Conventions!
□ Translation 1: Mathematical functions (lines, sinusoids)
□ Translation 2: Sigma notation, and other indexed operations
□ Translation 3: Set notation and quantifiers
□ Resources for learning on your own
2. Program-y proof that the infinity of the reals is bigger than the infinity of the naturals (yes, there are different sizes of infinity)
□ Groundwork: The real numbers
□ Groundwork: Cardinality, the math way to say “how many”
□ Groundwork: Proof by contradiction, the math way to say “when pigs fly”
□ The proof that |R| > |N| for programmers (read: finitists) | {"url":"http://pivotallabs.com/author/agupta/","timestamp":"2014-04-20T21:03:43Z","content_type":null,"content_length":"54608","record_id":"<urn:uuid:623508bd-8df3-4a2a-b485-6b56b2dc1b84>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unsupervised methods for learning [artificial] support vectors for kernel methods?
Kernel methods are interesting in that, given a small enough data-set you can achieve interesting results. In the past I've experimented with a convolutional architecture for an SVM, but ultimately
abandoned it because the computational cost was so immense. It worked, but it was painfully slow.
Using GPUs and other fancy tricks for improving training time only helps so much, since the training time and memory requirements appear to scale quadratically with the size of the training set.
There may be efficient methods of winnowing out examples that don't require O(n^2) (or worse) time; but I moved on to unsupervised methods and stopped researching SVMs, so I'm not familiar with
them if they exist.
My question is: could an unsupervised model not just learn the distribution of inputs, but be used to generate distinct -- in a sense, 'conjugate' -- examples from the data distribution that could
be used as the basis for a kernel-based method of associating new data (e.g., clustering, classification, regression)?
Would it be better to use a latent representation and feed that into a kernel-based method, or would there be any value in what I've described?
It seems to me that feeding the latent representation into a kernel method just shifts the problem from using raw data to using another representation of the same samples. While you might end up
with a more terse representation, the computational burden isn't in the inner product of the samples with each other, but rather in the number of samples you need to evaluate to identify 'good' support
vectors and the memory requirements that go with this evaluation.
This question is marked "community wiki".
asked Dec 22 '10 at 15:49
Brian Vandenberg
I suppose there's also the option of using the weights as support vectors, provided the inner product used in the kernel method is [isomorphic?] to the linear combination of weights & data used in
the unsupervised method.
For example, if the unsupervised method were a convolutional RBM, you might use the filters it learns as support vectors as long as the kernel method also uses a convolutional inner product.
Can you clarify the convolutional SVM architecture you mentioned? In answer to your question, there are some specialized algorithms for ultra-fast SVM solving (particularly in the linear kernel
case). People try all sorts of wacky hacks to "kernelize" these things, many of which seem to involve applying KPCA to get a set of "basis kernels" that approximate the real kernel. Using a subset
of the original data, chosen by clustering and picking exemplars, is also done. Neither of these approaches seems fully robust when dealing with very-large-scale data and sufficiently
high-dimensional or complex kernels, but they can get the job done.
Interesting question; I've been thinking along these lines too. I think that the goal of unsupervised learning is to discover better features in the absolute sense, not only with respect to some
artificial set of labels. Then, from these intrinsic features, one should be able to train a possibly simpler model in a supervised fashion with respect to any set of reasonable labels for
that dataset. As in kernel methods: first map the data into a higher-dimensional space and implicitly train a linear model in this higher-dimensional space. So the distances in the intrinsic
feature space should be more semantic, and therefore I expect the proximity matrix to be better for plugging into a kernel SVM. In terms of speed there's no gain, but there may be some quality gains.
But theory and intuition are one thing and practice another, and I cannot support this with empirical evidence. The discussion is interesting, though. I think Geoffrey Hinton is discussing this at 7:40 here
answered Dec 22 '10 at 20:11
Oliver Mitevski
This sounds like radial basis function neural networks. I like the treatment in Bishop's neural networks book. In short, there are many ways of doing that, most of them nonconvex, but even things
like k-means work well enough in some cases.
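A minimal sketch of that k-means route (illustrative pure-Python code, not tied to any particular library): pick centers with plain k-means, then map each point to its RBF activations against those centers, which can feed any downstream linear model.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 1-D points; returns k cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old center.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def rbf_features(x, centers, gamma=1.0):
    """Map a point to its RBF activations against the chosen centers."""
    return [math.exp(-gamma * (x - c) ** 2) for c in centers]

# Two well-separated clumps: k-means recovers one center per clump,
# and the RBF features then act like soft cluster memberships.
data = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]
centers = sorted(kmeans(data, k=2))
print([round(c, 1) for c in centers])  # → [0.0, 5.0]
```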
This answer is marked "community wiki".
answered Dec 22 '10 at 23:02
Alexandre Passos ♦
An idea occurred to me tonight: if a model were constructed for a DBN auto-encoder, the detail necessary to reconstruct the original input is likely lost, but the hash could then be used to sample
from the distribution of other samples that are semantically similar. The model could then be used to generate a much smaller set of artificial samples in either the data distribution, the
auto-encoder distribution, or perhaps the highest layer in the DBN that has not undergone any dimensionality reduction.
This answer is marked "community wiki".
answered Dec 24 '10 at 01:27
Brian Vandenberg
But you don't need a DBN for this purpose. An RBM will do just fine for generating ("dreaming up") some good samples of the distribution. Or if you used an auto-encoder, use a shallow one, one or two
layers at most.
The reason for the auto-encoder is to implicitly give an approximate size to the kernel matrix. For example, if the auto-encoder layer is limited to 16 bits and is kept somewhat sparse (say, around
8 bits on at a time on average), then I'd hope to get approximately 12,870 support vectors, which would obviously have to be winnowed down from a larger sample, but hopefully a process like this
could select a smaller sample set than the original set.
arrange letters in words SNOWO such that??
March 24th 2010, 09:12 PM
arrange letters in words SNOWO such that??
Help. Question is: arrange the letters in the word SNOWO such that the two indistinguishable O's are NEVER together. How many ways are there to do this?
I got 36 ways. I did: (5!/2!) - 4! = 36
Is this correct?
March 25th 2010, 06:09 AM
Yes, that is correct.
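For anyone who wants to double-check the counting argument by brute force, enumerating the distinct arrangements confirms it:

```python
from itertools import permutations

# Distinct arrangements of SNOWO, with the two O's never adjacent.
words = {"".join(p) for p in permutations("SNOWO")}   # 5!/2! = 60 distinct words
apart = [w for w in words if "OO" not in w]           # drop the 4! = 24 with OO together
print(len(words), len(apart))  # → 60 36
```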
Rounding Profiles
SAP F&R requires logistical rounding profiles for requirement quantity optimization to round quantities to multiples of a unit of measure, for example, or to switch between units of measure.
• You assign the rounding profile in the transportation lanes for products. On the SAP Easy Access screen, you choose .
• You can import rounding profiles. For more information, see Inbound Interface for Rounding Profiles.
To perform rounding, you must have defined the rounding parameters in the profile for requirement quantity optimization. Rounding must be active in the basic settings of the profile. The tables below
illustrate how the Rounding Type and Rounding Mode parameters affect rounding behavior:
Rounding Type and Rounding Behavior:
• Total requirement: The threshold values are calculated using the following formula: 1 - | (rounded quantity – requirement quantity) / rounded quantity |. [The quantity calculated within the absolute value brackets ("| ... |") is always used as a positive value, for example, | -10 | = 10.]
• Individual logistical unit of measure: Depends on the percentage missing to the nearest multiple of a unit.
Rounding Mode and Rounding Behavior:
• Rounding up and down to the largest unit of measure is preferred: The system uses the largest possible unit of measure as the order unit.
• Always prioritize rounding up over rounding down: The system always tries to round up to a unit of measure. If this is not possible, the system rounds down.
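To make the threshold idea concrete, here is a simplified sketch (the function name, parameters, and exact behavior are hypothetical illustrations, not SAP F&R's actual implementation): round a requirement quantity to a multiple of a unit of measure, rounding up only when the partial unit reaches a given fraction of a full unit.

```python
def round_to_unit(qty, unit_size, round_up_threshold=0.75):
    """Round qty to a multiple of unit_size.

    Rounds up when the partial unit reaches round_up_threshold of a
    full unit; otherwise rounds down (possibly all the way to zero,
    which is the zero-rounding case described above).
    """
    full_units, remainder = divmod(qty, unit_size)
    if remainder == 0:
        return qty
    if remainder / unit_size >= round_up_threshold:
        full_units += 1
    return full_units * unit_size

print(round_to_unit(170, 100))  # remainder 70 → 0.70 < 0.75, round down to 100
print(round_to_unit(180, 100))  # remainder 80 → 0.80 >= 0.75, round up to 200
print(round_to_unit(40, 100))   # rounds down to zero: nothing would be ordered
```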
For more information, see Profile for Requirement Quantity Optimization.
• You can define rounding rules that specify the percentage from which the system rounds for a given unit of measure. You can specify threshold values for rounding quantities up and down. Based on
these rules, requirement quantity optimization determines which of the permitted units of measure is most suited as the requirement quantity.
• You can use zero rounding. It determines what happens when the rounding result is zero and no purchase order can therefore be created. You can choose which of the following rules you want the
system to use:
□ Do not round to zero
□ Round every possible rounding unit down to zero
□ Only round the unit of measure with the smallest base unit down to zero
To maintain rounding profiles, go to the SAP Easy Access screen and choose .
Pallets and boxes are possible rounding units. Suppose rounding shows that the order quantity is so small that it will not fill a whole pallet; the system rounds down to zero, which means nothing is ordered.
You have the following options for zero rounding:
• If you select the rounding mode in which SAP F&R rounds every possible rounding unit down to zero, no purchase order is created because the system rounds the rounding unit pallet down to zero.
• If you select the option in which SAP F&R only rounds the unit of measure with the smallest base unit of measure down to zero, the system cannot round down to zero, as the unit of measure pallet
is not the smallest possible unit. The system checks whether a box can be ordered. If so, it orders the box; if the quantity is too small for a box as well, nothing is ordered, because no smaller
rounding unit is defined.
• If you choose the option that SAP F&R cannot round down to zero, the system orders a box.
Hudson, NH Statistics Tutor
Find a Hudson, NH Statistics Tutor
...I then did 1 year of graduate work at Dartmouth College, but decided to switch to a Master's in Computer Science program at UNH. Most recently I have been a stay-at-home dad, and taught my
eldest daughter to read using positive reinforcement and the DISTAR method. She scored very high in her ki...
12 Subjects: including statistics, calculus, physics, precalculus
I am an experienced teacher of maths and physics. I have taught many students from elementary level up to university students. I was an associate professor and I have an advanced degree in
Engineering but I love teaching.
21 Subjects: including statistics, physics, French, algebra 1
...I have several years part-time experience holding office hours and working in a tutorial office. I have worked with students who are taking the GED specifically. As an undergraduate I read
extensively in philosophy, literature, and sociology.
29 Subjects: including statistics, reading, English, writing
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have
tutored a wide range of students - from middle school to college level.
14 Subjects: including statistics, geometry, algebra 1, algebra 2
...The SAT is a test that with some simple techniques and practice of specific aspects of high school math content students can improve their scores and their appeal to prospective colleges. I
will help students identify and address their mathematical issues on the SAT. I took PROMYS (Program in M...
23 Subjects: including statistics, physics, calculus, geometry
Unit 2
Unit 2 Foundation: Number and Algebra
Non-calculator assessment has a higher proportion of straight-forward assessment from Assessment Objective 1.
The specification requires students to be familiar with:
• Working with numbers and the number system
• Fractions, decimals and percentages
• Ratio and proportion
• The language of algebra
• Expressions and equations
• Sequences, functions and graphs.
There are interlinking strands between these skills so content will impact on teaching approaches. Here, content is split into:
Depending on your teaching order, some content may have already been covered in Unit 1. However, the Assessment Guidance does cover all the specification references in detail. Unit 1 will usually
assess number skills in a statistical context; this unit, however, may test these skills discretely. Although the unit comprises both number and algebra work, the weighting at Foundation tier focuses
on assessing the number skills. Consequently, some teachers may choose to teach this unit first to ensure that the number work for all three units is addressed early in the study.
In areas where content is common across units, you are advised to look closely at the Assessment Guidance for details of where specific content will be assessed. For example, N6.12, which refers to
plotting graphs, appears in Unit 2 and Unit 3, but assessment of distance-time graphs only appears in Unit 2.
Results 1 - 10 of 14
- JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR , 2002
"... Optimal decision criterion placement maximizes expected reward and requires sensitivity to the category base rates (prior probabilities) and payoffs (costs and benefits of incorrect and correct
responding). When base rates are unequal, human decision criterion is nearly optimal, but when payoffs are ..."
Cited by 26 (12 self)
Add to MetaCart
Optimal decision criterion placement maximizes expected reward and requires sensitivity to the category base rates (prior probabilities) and payoffs (costs and benefits of incorrect and correct
responding). When base rates are unequal, human decision criterion is nearly optimal, but when payoffs are unequal, suboptimal decision criterion placement is observed, even when the optimal decision
criterion is identical in both cases. A series of studies are reviewed that examine the generality of this finding, and a unified theory of decision criterion learning is described (Maddox & Dodd,
2001). The theory assumes that two critical mechanisms operate in decision criterion learning. One mechanism involves competition between reward and accuracy maximization: The observer attempts to
maximize reward, as instructed, but also places some importance on accuracy maximization. The second mechanism involves a flat-maxima hypothesis that assumes that the observer’s estimate of the
reward-maximizing decision criterion is determined from the steepness of the objective reward function that relates expected reward to decision criterion placement. Experiments used to develop and
test the theory require each observer to complete a large number of trials and to participate in all conditions of the experiment. This provides maximal control over the reinforcement history of the
observer and allows a focus on individual behavioral profiles. The theory is applied to decision criterion learning problems that examine category discriminability, payoff matrix multiplication and
addition effects, the optimal classifier’s independence assumption, and different types of trial-by-trial feedback. In every case the theory provides a good account of the data, and, most important,
provides useful insights into the psychological processes involved in decision criterion learning.
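The optimal criterion the abstract refers to has a closed form for equal-variance Gaussian categories, and the flatness of the objective reward function near its maximum (the flat-maxima idea) is easy to see numerically. The sketch below uses assumed numbers (d' = 1, equal base rates, a 3:1 payoff ratio), not the paper's actual task:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_reward(c, d=1.0, p_a=0.5, v_a=1.0, v_b=3.0):
    """Expected reward when responding 'A' for x < c; A ~ N(0,1), B ~ N(d,1)."""
    p_b = 1.0 - p_a
    return p_a * v_a * phi(c) + p_b * v_b * (1.0 - phi(c - d))

# Grid-search the objective reward function for its (flat) maximum.
grid = [i / 1000.0 for i in range(-3000, 3001)]
c_hat = max(grid, key=expected_reward)

beta = (0.5 * 3.0) / (0.5 * 1.0)           # base-rate x payoff ratio
c_star = 1.0 / 2.0 - math.log(beta) / 1.0  # analytic optimum: d/2 - ln(beta)/d
print(round(c_hat, 3), round(c_star, 3))   # both ≈ -0.599
```

Plotting `expected_reward` over the grid would show the shallow plateau around the maximum that makes decision criterion placement hard to learn from reward feedback alone.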
, 2001
"... Observers completed a series of simulated medical diagnosis tasks that differed in category discriminability and base-rate/costbenefit ratio. Point, accuracy, and decision criterion estimates
were closer to optimal (a) for category d' = 2.2 than for category d' = 1.0 or 3.2, (b) when base-rates, as ..."
Cited by 17 (13 self)
Add to MetaCart
Observers completed a series of simulated medical diagnosis tasks that differed in category discriminability and base-rate/cost-benefit ratio. Point, accuracy, and decision criterion estimates were
closer to optimal (a) for category d' = 2.2 than for category d' = 1.0 or 3.2, (b) when base-rates, as opposed to cost-benefits were manipulated, and (c) when the cost of an incorrect response
resulted in no point loss (non-negative cost) as opposed to a point loss (negative cost). These results support the "flat-maxima" (von Winterfeldt & Edwards, 1982) and COmpetition Between Reward and
Accuracy (COBRA; Maddox & Bohil, 1998a) hypotheses. A hybrid model that instantiated simultaneously both hypotheses was applied to the data. The model parameters indicated that (a) the
reward-maximizing decision criterion quickly approached the optimal criterion, (b) the importance placed on accuracy maximization early in learning was larger when the cost of an incorrect response
was negative as opposed to non-negative, and (c) by the end of training the importance placed on accuracy was equal for negative and non-negative costs.
- Perception & Psychophysics , 2001
"... (i.e., d ¢ level), base rates, and payoffs was examined. Base-rate and payoff manipulations across two category discriminabilities allowed a test of the hypothesis that the steepness of the
objective reward function affects performance (i.e., the flat-maxima hypothesis), as well as the hypothesis th ..."
Cited by 10 (7 self)
Add to MetaCart
(i.e., d′ level), base rates, and payoffs was examined. Base-rate and payoff manipulations across two category discriminabilities allowed a test of the hypothesis that the steepness of the objective
reward function affects performance (i.e., the flat-maxima hypothesis), as well as the hypothesis that observers combine base-rate and payoff information independently. Performance was (1) closer to
optimal for the steeper objective reward function, in line with the flat-maxima hypothesis, (2) closer to optimal in base-rate conditions than in payoff conditions, and (3) in partial support of the
hypothesis that base-rate and payoff knowledge is combined independently. Implications for current theories of base-rate and payoff learning are discussed.
- Memory & Cognition , 2001
"... Two experiments were conducted in which the effects of different feedback displays on decision criterion learning were examined in a perceptual categorization task with unequal cost–benefits. In
Experiment 1, immediate versus delayed feedback was combined factorially with objective versus optimal cl ..."
Cited by 8 (5 self)
Add to MetaCart
Two experiments were conducted in which the effects of different feedback displays on decision criterion learning were examined in a perceptual categorization task with unequal cost–benefits. In
Experiment 1, immediate versus delayed feedback was combined factorially with objective versus optimal classifier feedback. Immediate versus delayed feedback had no effect. Performance improved
significantly over blocks with optimal classifier feedback and remained relatively stable with objective feedback. Experiment 2 used a within-subjects design that allowed a test of model-based
instantiations of the flat-maxima (von Winterfeldt & Edwards, 1982) and competition between reward and accuracy (Maddox & Bohil, 1998a) hypotheses in isolation and of a hybrid model that incorporated
assumptions from both hypotheses. The model-based analyses indicated that the flat-maxima model provided a good description of early learning but that the assumptions of the hybrid model were
necessary to account for later learning. An examination of the hybrid model parameters indicated that the emphasis placed on accuracy maximization generally declined with experience for optimal
classifier feedback but remained high, and fairly constant for objective classifier feedback. Implications for cost–benefit training are discussed.
- J EXP PSYCHOL LEARN MEM COGN , 2003
"... ..."
- PERCEPTION & PSYCHOPHYSICS , 2003
"... this article are based on the decision boundmodel in Equation 5. Specifically, each model includes one "noise" parameter that represents the sum of perceptual and criterial noise (Ashby, 1992a;
Maddox& Ashby, 1993). Each model assumes that the observer has accurate knowledge of the category structur ..."
Add to MetaCart
this article are based on the decision bound model in Equation 5. Specifically, each model includes one "noise" parameter that represents the sum of perceptual and criterial noise (Ashby, 1992a;
Maddox & Ashby, 1993). Each model assumes that the observer has accurate knowledge of the category structures [i.e., l_o(x_pi)]. To ensure that this was a reasonable assumption, each observer
completed a number of baseline trials and was required to meet a stringent performance criterion (see Method section). Finally, each model allows for suboptimal decision criterion placement, where the
decision criterion is determined from the flat-maxima hypothesis, the COBRA hypothesis, or both, following Equation 6. To determine whether the flat-maxima and COBRA hypotheses are important in
accounting for each observer's data, we developed four models. Each model makes different assumptions about the k_r and w values used. The nested structure of the models is represented in Figure 5,
with each arrow pointing to a more general model.
[Figure 4 caption: Decision criterion [ln(β)] predicted from the flat-maxima hypothesis plotted against the decision criterion [ln(β)] predicted from the independence assumption of the optimal
classifier for the six simultaneous base-rate/payoff conditions. (A) 2:1B/2:1P condition. (B) 3:1B/3:1P condition.]
- PERCEPTION & PSYCHOPHYSICS , 2004
"... ..."
, 2003
"... Accepted for publication in Perception & Psychophysics Observers completed perceptual categorization tasks that included 25 base-rate/payoff conditions constructed from the factorial combination
of 5 base-rate ratios (1:3, 1:2, 1:1, 2:1, and 3:1) with 5 payoff ratios (1:3, 1:2, 1:1, 2:1, and 3:1). T ..."
Add to MetaCart
Accepted for publication in Perception & Psychophysics Observers completed perceptual categorization tasks that included 25 base-rate/payoff conditions constructed from the factorial combination of 5
base-rate ratios (1:3, 1:2, 1:1, 2:1, and 3:1) with 5 payoff ratios (1:3, 1:2, 1:1, 2:1, and 3:1). This large database allowed an initial comparison of the competition between reward and accuracy
maximization (COBRA) hypothesis with a competition between reward maximization and probability matching (COBRM) hypothesis, and an extensive and critical comparison of the flat-maxima hypothesis with
the independence assumption of the optimal classifier. Model-based instantiations of the COBRA and COBRM hypotheses provided good accounts of the data, but there was a consistent advantage for the
COBRM instantiation early in learning, and the COBRA instantiation later in learning. This pattern held in the current study, and in a re-analysis of Bohil and Maddox (in press). Strong support was
obtained for the flat-maxima hypothesis over the independence assumption, especially as the observers gained experience with the task. Model parameters indicated that observers’ reward-maximizing
decision criterion rapidly approaches the optimal value, and that more weight is placed on accuracy maximization in separate base-rate/payoff conditions than in simultaneous base-rate/payoff
conditions. The superiority of the flat-maxima hypothesis suggests that violations of the independence assumption are to be expected, and are well captured by the flat-maxima hypothesis without
requiring any additional assumptions.
Fort Mcdowell Precalculus Tutors
...Prealgebra is such an important part of math. It is the fundamental building block of all advanced mathematics. Without a strong understanding of prealgebra, all other math is much more
20 Subjects: including precalculus, calculus, computer programming, C
...I have been teaching at the college level for 15 years. I am currently a professor of mathematics at Scottsdale Community College. I have taught and tutored everything from basic mathematics up
through Calculus, Differential Equations and Mathematical Structures.
9 Subjects: including precalculus, calculus, geometry, algebra 1
I am currently a math teacher at a public high school. I have taught for 14 years in both public and private schools. I have taught everything from Pre-Algebra to Calculus.
12 Subjects: including precalculus, calculus, GED, trigonometry
...I view my goal as a tutor is to put myself out of a job by teaching students the skills for success in their given subject. Calculus is the first math class I fell in love with. The beauty of
the math in addition immense number of things it can be applied to make calculus exciting.
10 Subjects: including precalculus, chemistry, calculus, geometry
...Master's Degree in Mathematics from Youngstown State University. 2. Have taught children at all levels from 8th grade through 12th, and also college level courses at YSU. 3. Reputation as very
patient teacher with the ability to make difficult concepts easy to understand.
10 Subjects: including precalculus, calculus, geometry, algebra 1
Wolfram Demonstrations Project
The Trisectrix as the Locus of Points of Intersection
A trisectrix (red) is the locus of the points of intersection (red point) of a moving horizontal line (blue) and a rotating line (black).
Initially, the line lies along the line and the line lies along the line . The line rotates clockwise about the origin at 1 radian per time unit, and the line moves toward the axis at a constant
rate of units per time unit, so that both lines reach the axis at the same time. The line meets the axis at an angle .
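The locus is easy to trace numerically. The snippet below assumes, purely for illustration, that the rotating line starts along the y-axis and the horizontal line starts at height 1, so that both reach the x-axis at t = π/2 as the description requires; these starting positions are assumptions, not taken from the Demonstration itself.

```python
import math

T = math.pi / 2  # time at which both lines reach the x-axis

def locus_point(t, y0=1.0):
    """Intersection of the horizontal line y = h(t) with the line
    through the origin at angle phi(t) = pi/2 - t (1 rad per time unit)."""
    phi = math.pi / 2 - t        # rotating line's angle to the x-axis
    h = y0 * (1 - t / T)         # horizontal line descends at a constant rate
    return (h / math.tan(phi), h)

# Sample the red curve (the locus) at a few times in (0, T).
for t in (0.2, T / 2, 1.2):
    x, y = locus_point(t)
    print(round(x, 3), round(y, 3))
```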
[1] T. Heard, D. Martin, and B. Murphy, A2 Further Pure Mathematics, 3rd ed., London: Hodder Education, 2005, p. 202, question no. 2.
NetLogo - How to show the current coordinates of a turtle
I have been trying for quite some time to show the current coordinates of a turtle in NetLogo. I'm aware that to show one of the coordinates, I could use:
show [xcor] of turtle 0
show [ycor] of turtle 0
But how do I show both coordinates?
Thank you.
2 Answers
You can show [list xcor ycor] of turtle 0.
Or, fancier: show [(word "(" xcor ", " ycor ")")] of turtle 0. (accepted answer)
From http://ccl.northwestern.edu/netlogo/docs/programming.html#syntax
you can do
show [xcor + ycor] of turtle 5
But not sure if that helps?
sorry.. it doesnt.. 1 sec!
yup dude, it doesnt. Tried that out too! haha! What I want is both the coordinates, that adds them up – aHaH Jan 31 '12 at 11:47
yep.. I know I misread your Q! Not sure if you can in 1 statement? – kenam Jan 31 '12 at 11:49
i think there is just enquired from a frd earlier. Could do with patch-here. ask turtle [ patch-here ] it returns the coordinates xy – aHaH Jan 31 '12 at 12:41
@aHaH: if the turtle is on a patch center, then its coordinates are integers, and the same as the coordinates of the patch. but if the turtle isn't on the patch center, then its
coordinates are different – Seth Tisue Nov 19 '12 at 13:01
Find the maximum volume, from an a4 sheet of paper folded into a box
November 1st 2012, 03:48 AM
Find the maximum volume, from an a4 sheet of paper folded into a box
Grab a piece of paper. Find the maximum volume obtainable by cutting squares out of the corners and folding the sheet up into an open box.
I haven't done this math before, and It was part of a puzzle based question for a topic I'm doing.
and the funny thing is this is a particular component of math I'm doing next year.
We know that all cuts must be identical to make an open cut box.
Assuming dimensions of 297mm and 210mm.
The maximum allowable cut is 210mm/2 = 105mm (although anything near would be practically useless)
Basically I have so far. where do I go from here?
I've plugged the values in and I get 24x-2028 = 2028/24 = 84.5mm.
I plugged this into wolfram, but I did get the 2028 somewhere in my working out. oh yeah it was -2x * by length and by width. am I on the right track?
I'm not right the answer is around about 40mm according to my spreadsheet.
Attachment 25502
November 1st 2012, 02:35 PM
Re: Find the maximum volume, from an a4 sheet of paper folded into a box
V = x(210-2x)(297-2x)
volume in mm^3
November 1st 2012, 03:45 PM
Re: Find the maximum volume, from an a4 sheet of paper folded into a box
Sorry, I do mean maximum volume, but I need to determine the length of cuts (or the height)
November 1st 2012, 03:52 PM
Re: Find the maximum volume, from an a4 sheet of paper folded into a box
To find the actual value for x which yields the maximal volume will require differential calculus. Otherwise, graph the volume function (as given by skeeter) and estimate the value of x that is
at the maximum value for the volume.
November 1st 2012, 10:41 PM
Re: Find the maximum volume, from an a4 sheet of paper folded into a box
Thanks....I've done that in excel, graphed it and estimated the max point to be 40mm.
What do I need to do to get an accurate answer using calculus. I've got my formula in the first post but I don't know where to go from there.
Alternatively I came up with another solution, I got from an example on Youtube.
And I got v(x) = 62370 -2028x + 12x^2
But I'm not sure how to plug this in to get my answer. The example on youtube was a nice neat problem with the width and length =
November 1st 2012, 10:56 PM
Re: Find the maximum volume, from an a4 sheet of paper folded into a box
To find the exact answer using calculus, we would equate the derivative of the volume function to zero:
V'(x) = 12x^2 - 2028x + 62370 = 0
Take the appropriate root (the quadratic formula gives two, but only the one inside 0 < x < 105 makes a physical box):
x = (169 - sqrt(7771))/2 ≈ 40.4 mm
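For anyone who wants to check numerically, here is the quadratic formula applied to V'(x) = 12x^2 - 2028x + 62370 = 0 (the derivative of V(x) = x(210 - 2x)(297 - 2x)):

```python
import math

# V(x) = x(210 - 2x)(297 - 2x)  =>  V'(x) = 12x^2 - 2028x + 62370
a, b, c = 12.0, -2028.0, 62370.0
disc = b * b - 4.0 * a * c
roots = [(-b - math.sqrt(disc)) / (2.0 * a), (-b + math.sqrt(disc)) / (2.0 * a)]

# Only a root inside 0 < x < 105 mm gives a physical box.
x = next(r for r in roots if 0.0 < r < 105.0)
volume = x * (210.0 - 2.0 * x) * (297.0 - 2.0 * x)
print(round(x, 2))  # → 40.42, matching the ~40 mm spreadsheet estimate
```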
Fermat's Last Theorem
Évariste Galois was born on October 25, 1811 in Bourg-la-Reine, France. Galois was educated by his mother up until he was 12. There was some discussion about sending him to college when he was 10, but in the end it was decided that he should stay at home. In 1815, his father was elected mayor of Bourg-la-Reine.
In 1823, he enrolled in school. In 1824-1825, he received good grades, but in 1826, he was forced to repeat a grade because he failed rhetoric. In 1827, Galois enrolled in his first math class.
In 1828, just one year after his first math class, Galois took the entrance examination for the Ecole Polytechnique, the top university. He failed.
Despite the setback, Galois continued his studies of mathematics. In April of 1829, he published a paper on continued fractions which included the proof that reduced quadratic equations are represented by purely periodic continued fractions.
Later that year, a scandal erupted when vulgar poems were distributed and attributed to Galois's father. The result was more than Galois's father could stand, and he committed suicide on July 2, 1829. Just a few weeks later, Galois made his second attempt at entrance to the Ecole Polytechnique. Again, he failed. In December of 1829, Galois entered the Ecole Normale.
Galois submitted a paper on the theory of equations to be published. He learned that the same topic had just been covered in a posthumous article written by
Niels Henrik Abel
. Galois rewrote the article on the conditions whereby an equation is soluble by radicals and resubmitted it. The paper was very well received and was submitted to Fourier, who was secretary of the Paris Academy, for the Grand Prize. Unfortunately, Fourier died in April of 1830 and Galois's paper got lost. In June, the Grand Prize of the Paris Academy was awarded to Niels Henrik Abel and Carl Gustav Jacob Jacobi.
In July of 1830, there was revolution in France. Charles X quickly departed and riots broke out. The head of the Ecole Normale locked the students in the school to prevent them from joining in the unrest. In 1830, the director of the Ecole Normale wrote an editorial criticizing the students for their behavior. Galois wrote a response defending the students and criticizing the decision to lock them up. After writing this reply, Galois was expelled.
Galois next entered the Artillery of the National Guard. In December of 1830, King Louis Philippe disassembled the Artillery of the National Guard because he saw them as a threat to his power. 19 of
the guards had been accused of conspiracy but were later released. On May 9, 1831, a great celebration was put together. Galois was there. At one point, Galois raised his glass to make a toast and
held up a dagger at the same time. This was taken as a threat against the king. That same evening, Galois was arrested. He was held in prison until June 15 when he was acquitted.
On July 14, Galois was arrested for wearing the uniform of the Artillery of the National Guard which had been outlawed. While in prison, he found out that one of his math papers had been rejected and
he attempted suicide. He was stopped by the other prisoners. Finally, on April 29, he was released.
By this time, he was in love with a young woman he had met. On May 30, he entered into a duel. It is believed that it was over the young woman. During the fight, he was severely wounded and died on
May 31, 1832 at the age of 20.
Galois's papers were collected and sent out. Eventually, they made their way to Joseph Liouville who was deeply impressed. Liouville presented them to the French Academy in September 1843. These
papers were published in 1846 and form the basis of what is today known as Galois Theory.
Today, Galois is considered to be one of the most original and talented mathematicians of all time and Galois Theory is one of the great gems of modern mathematics.
No comments: | {"url":"http://fermatslasttheorem.blogspot.com/2006/02/variste-galois.html","timestamp":"2014-04-20T19:22:12Z","content_type":null,"content_length":"89318","record_id":"<urn:uuid:421d7570-7206-448a-9f96-f3abf4632338>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rem: Revista Escola de Minas
Services on Demand
Related links
Print version ISSN 0370-4467
Rem: Rev. Esc. Minas vol.65 no.2 Ouro Preto Apr./June 2012
MINING MINERAÇÃO
Simulation of the mineral breakage using a fractal approach
Simulação computacional da quebra mineral usando uma abordagem fractal
André Carlos Silva^I; Américo Tristão Bernardes^II; José Aurélio Medeiros da Luz^III
^IUniversidade Federal de Goiás - UFG Departamento de Engenharia de Minas - DEMIN. andre@iceb.ufop.br
^IIUniversidade Federal de Ouro Preto - UFOP Instituto de Ciências Exatas e Biológicas - Departamento de Física - DEFIS. atb@iceb.ufop.br
^IIIUniversidade Federal de Ouro Preto - UFOP Departamento de Engenharia de Minas Escola de Minas - DEMIN. jaurelio@demin.ufop.br
Hukki's law is an empirical law which does not take into account several kinds of energy loss during mineral fragmentation processes. Since experimental results are very difficult to obtain for a large range of fragment sizes, verifying the law is very difficult. The relation between fracture and fragmentation processes and fractal geometry was proposed some decades ago. Empirical laws in this context show basic features of fractal geometry, mainly self-affinity and power-law behavior. Thus, in this paper, a model was developed to simulate the fragmentation process and to check the relationship between energy consumption and fragment sizes. The model is represented on a regular lattice where links represent pathways for fracture processes. The energy of fragmentation events was modeled by a probability distribution function. In the proposed model there is no mass loss, and fracture propagation occurs as self-avoiding random walks on the regular lattice.
Keywords: Simulation, mineral breakage, fractal.
A Lei de Hukki é uma lei empírica que não leva em conta vários tipos de perda de energia durante o processo de fragmentação mineral. Uma vez que resultados experimentais são muito difíceis de obter para uma ampla faixa de tamanhos de fragmentos, a verificação desta lei se torna muito difícil. A relação entre as fraturas e o processo de fragmentação com a geometria fractal foi proposta algumas décadas atrás. Leis empíricas conhecidas neste contexto mostram características básicas da geometria fractal, principalmente autoafinidade e comportamento de lei de potência. Desta forma, neste artigo é apresentado um modelo para simular o processo de fragmentação e verificar a relação entre o consumo de energia e o tamanho dos fragmentos gerados. O modelo é representado por uma rede regular onde os arcos da rede representam os caminhos gerados no processo da geração da fratura. A energia dos eventos de fragmentação foi modelada por uma função de distribuição de probabilidades. No modelo proposto não existe perda de massa e a propagação das fraturas ocorre seguindo uma caminhada aleatória autoevitável em uma rede regular.
Palavras-chave: Simulação, quebra mineral, fractal.
1. Introduction
Mineral comminution represents a significant portion of energy consumption around the world, so precise knowledge of the structural properties of the products is necessary to control energy expenditure. By structural properties we mean the quantitative description of morphological and architectural features: primarily the system of genuine discontinuities in the mineral particle before comminution and, secondly, in the product of comminution, as shown by Thomas and Filippov (1999).
Several authors have published papers on the propagation of fractures in metal alloys, but only a few have worked on the same problem applied to minerals, and always using metallurgical methods and hypotheses. In this context, many of the approximations made are consistent, but since the hypotheses are greatly simplified they almost always end up neglecting a rather obvious observation: the mineral breakage process involves several different processes occurring simultaneously or successively.
The energy balance commonly used in analyses of this kind refers to a single, arbitrarily selected mechanical process, without estimating the thermal energy lost from the comminution system to the surrounding environment. Therefore, models relating the energy spent to the surface created are not available.
The aim of this paper is to present a model to simulate the process of mineral comminution, as well as to verify the relationship between energy consumption and the size of fragments generated in the
process of breaking.
2. Material and methods
Global experimental model
The most general expression, and also the simplest, relating the energy consumption per unit mass (E) to the reduction of the average size x of a particle of the broken material is the differential equation proposed by Kapur and explained by Lynch (1977), given by:
The expressions of Kick, Rittinger and Bond used today are simply integrations of the Kapur equation for different values of the exponent n (see Figure 1). The more general expression is Hukki's Law, which is a generalization of Kapur's expression, given by:
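The equation images did not survive extraction. As a hedged reconstruction (standard notation as found in the comminution literature, not copied from this paper), Kapur's differential form and Hukki's generalization are usually written as:

```latex
\frac{\mathrm{d}E}{\mathrm{d}x} = -K\,x^{-n} \quad \text{(Kapur)}, \qquad
\frac{\mathrm{d}E}{\mathrm{d}x} = -K\,x^{-f(x)} \quad \text{(Hukki)}
```

with n = 1, 2 and 3/2 recovering the laws of Kick, Rittinger and Bond, respectively.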
Hukki's Law is an empirical law which does not take into account the various types of energy losses in the comminution process. Even if it were theoretically possible to estimate the energy dissipation (mechanical, kinetic, thermal and ultrasonic) through sophisticated experiments, this would be expensive. All energy losses depend on the comminution devices and the products themselves, which Hukki's Law does not identify exactly.
Calculation of the constants of Hukki's Law
According to Thomas and Filippov (1999), Hukki did not propose an analytical expression for the function f(x) in his law. However, it is possible to consider an approximate graph of Hukki's Law which, according to Lynch (1977), has a slope given by:
Figure 1 (Thomas and Filippov, 1999) shows the experimental curves found by Lynch for Hukki's Law.
Thomas and Filippov (1999) demonstrated that a linear regression of Hukki's Law in a logarithmic coordinate system for particle sizes between 1 and 10^4 µm correlates strongly with the curve and is given by:
where B and C are constants from the linear regression. The integration of equation (4) results in:
where A is an integration constant. The three constants (A, B and C) define all properties of Hukki's Law. To introduce these constants into Hukki's Law (equation 2), Silva and Luz (2007) proposed differentiating equation (5) with respect to the variable x. Thus:
Comparing equation (6) with equation (2), it follows that:
Modelling the comminution process
To model the comminution process of a mineral particle, a simple model was developed based on the Monte Carlo computer simulation technique. The model consists of the following steps:
1. A regular lattice is created to represent one of the faces of a mineral particle (or a mineral grain). The lattice sites are the points where the impact energy can be transferred to the lattice.
2. All external sites over the lattice are interconnected to form the outer surface of the mineral face (or border of a mineral grain).
3. A lattice site and the amount of energy to be transferred by the impact are randomly selected. The energy to be transferred is given by a Weibull distribution. The energy transferred to the
particle is stored in a variable.
4. If the energy is high enough the fracture percolates to the adjacent sites of the lattice. Each time a fracture is established between two lattice sites the total impact energy is decreased. This
step repeats until the total energy becomes equal to zero or the fracture cannot find any adjacent intact sites to continue the breaking process.
5. Steps 3 and 4 are repeated while there are still intact sites in the lattice.
The fourth step of the algorithm characterizes a self-avoiding random walk, i.e., a path that can be walked only once. For the model in question this is equivalent to saying that once a fracture has been established, a new border is created, preventing the creation of another fracture along the same path. The result of running the algorithm is the quantification of the energy applied to the lattice to break it.
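The five steps above can be sketched in a few lines of Python (a toy version of mine, not the authors' code; it breaks sites rather than links, and assumes a unit energy cost per broken site against Weibull-distributed impact energies):

```python
import random

def fragment(n=8, shape=2.0, scale=4.0, seed=1):
    """Toy version of steps 1-5: an n*n lattice of sites, random impacts with
    Weibull-distributed energy, fractures growing as self-avoiding walks."""
    rng = random.Random(seed)
    intact = {(i, j) for i in range(n) for j in range(n)}
    total_energy = 0.0
    while intact:
        # step 3: random impact point and Weibull-distributed impact energy
        site = rng.choice(sorted(intact))
        energy = rng.weibullvariate(scale, shape)
        total_energy += energy
        # step 4: the fracture percolates as a self-avoiding walk; breaking
        # a site costs one unit (an assumption), leftover energy is wasted
        while energy >= 1.0 and site in intact:
            intact.discard(site)
            energy -= 1.0
            i, j = site
            nxt = [(i + di, j + dj)
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (i + di, j + dj) in intact]
            if not nxt:
                break  # the fracture reached an existing border
            site = rng.choice(nxt)
    return total_energy  # energy needed to break the whole lattice

print(fragment())
```

Impacts whose drawn energy stays below one unit break nothing, which mimics the paper's point that small particles tend to "escape" the impact; averaging `fragment(...)` over many runs is the analogue of the paper's averaged simulations.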
In the proposed model the total mass of the system is conserved, since the mineral particle is only subdivided into smaller ones. Figure 2 shows a graphical representation of the operation of the proposed algorithm on a square lattice with four sites. Figure 2A shows a newly created lattice, without any fracture. Figure 2B shows the final result of the first fracture, where the site shown in grey was hit by the impact and, from this site, the fracture percolated to the adjacent sites. The algorithm ends when the entire lattice has been fractured (Figure 2C).
To simulate the fact that large particles require smaller amounts of energy to fragment, all simulations used the same square lattice with 1,600 sites, in which the particle to be broken was located. Therefore, large particles almost always receive the impact directly, whereas small particles tend to escape the impact, consuming more energy in their breakage process.
For the operation of the proposed model it is not necessary that the lattice be square; it may have any configuration. Since the shape of the lattice imposes the form of the breakage products, the lattice can be understood as the projection of the cleavage planes of the mineral. To model the breakage of minerals with preferential breakage planes, a higher probability of fracture percolation can be assigned along these planes, so that breakage is favored in certain crystallographic planes. This assumption is similar to the hypothesis adopted by Turcotte (1986), who considers that the natural discontinuities of the mineral, which primarily induce the size distribution of the fragments, may be cut out into sets of quasi-planar elements presenting a scaling architecture defined by a self-similar fractal law.
3. Results
Simulations were conducted varying the particle size, always on a lattice with a total of 1,600 sites. For each particle size, 1,000 simulations were performed and the average energy spent on comminution was considered. Figure 3 shows the results of the proposed model and the comparison with the fractal equation proposed by Silva and Luz (2007). As can be seen in Figure 3, the Silva and Luz equation fits the simulated data with approximately 98 per cent adherence, a result considered satisfactory.
4. Conclusions
Two conclusions can be drawn from the data generated by the simulation: first, mineral breakage, as expected, follows a power law, and the same holds for the proposed model; secondly, the simulation data fit the Silva and Luz (2007) equation well (R^2 ≈ 0.98). The Monte Carlo technique was tested to simulate, in a simple model, the mineral comminution process. Although the process itself is very complex, the model agrees with the proposed equation.
However, further experiments and simulations are needed for a more extensive validation of the proposed model and to better understand the possibility of practical uses of the equation proposed by Silva and Luz (2007).
5. References
LYNCH, A. J. Mineral crushing and grinding circuits. Developments in Mineral Processing, Elsevier, v. 1, p. 1-25, 1977. [ Links ]
SILVA, A. C., LUZ, J. A. M. Abordagem fractal à quebra de partículas minerais. In: ENTMMET, 23. Anais... Ouro Preto: 2007, v. 1, p. 127-132. [ Links ]
THOMAS, A., FILIPPOV, L. O. Fractures, fractals and breakage energy of mineral particles. International Journal of Mineral Processing, v. 57, p. 285-301, 1999. [ Links ]
TURCOTTE, D. L. Fractals and fragmentation. Journal of Geophysical Research, v. 91, p. 1921-1926, 1986. [ Links ]
Article received on July 28, 2011.
Approved on April 3, 2012.
Math2[Please Check Again]
Posted by Margie on Saturday, February 24, 2007 at 6:14pm.
(a^2-9)/a^2 * (a^2-3a)/(a^2+a-12)
What I Got:
*Please Check My Work.*
Factor the denominator to a^2(a+4)(a-3).
The last factor divides out in the numerator (the second term factors to a(a-3)). The a left in the numerator reduces the a^2 term in the denominator.
I lost you after "the last factor divides out in the numerator". Which one?
In the numerator, either term will give you a-3 when factored.
So then: (a+3)(a-3) / (a(a+4))
Did you get your answer yet? Do you understand now?
No. Can you help me please, Mack?
Margie,do you needy helpy?Do you speak espanol?
Yes, I speak Spanish, why?
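As a numeric double-check of the simplification (my own sketch, not part of the thread), one can compare the original product with the factored result at a few sample values of a:

```python
from fractions import Fraction

def original(a):
    return Fraction(a**2 - 9, a**2) * Fraction(a**2 - 3*a, a**2 + a - 12)

def simplified(a):
    return Fraction((a + 3) * (a - 3), a * (a + 4))

# The two forms agree everywhere both are defined (a not in {0, 3, -4})
for a in (2, 5, 7, -2, 10):
    assert original(a) == simplified(a)
print("(a^2-9)/a^2 * (a^2-3a)/(a^2+a-12) simplifies to (a+3)(a-3)/(a(a+4))")
```

This only spot-checks values, but a mismatch at any sample would immediately expose an algebra error.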
A Fifth Smarandache Friendly Prime Pair
Authors: Philip Gibbs
A Smarandache friendly prime pair (SFPP) is a pair of prime numbers (p, q), p < q, such that the product pq is equal to the sum of all primes from p to q inclusive. Previously four such pairs were known: (2,5), (3,13), (5,31) and (7,53). Now a fifth one has been found by a brute-force computer search. A heuristic approximation can be used to estimate the expected number of SFPPs in a given interval. The result suggests that the probability of further pairs existing is about 0.07.
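The definition invites a direct brute-force check (a sketch of my own, not the author's search program): walk over prime pairs (p, q) and compare the running sum of primes from p to q with the product pq.

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def friendly_pairs(limit):
    """All Smarandache friendly prime pairs (p, q) with q <= limit:
    p * q equals the sum of all primes from p to q inclusive."""
    ps = primes_up_to(limit)
    pairs = []
    for i, p in enumerate(ps):
        total = p
        for q in ps[i + 1:]:
            total += q          # running sum of primes from p up to q
            if total == p * q:
                pairs.append((p, q))
    return pairs

print(friendly_pairs(100))  # [(2, 5), (3, 13), (5, 31), (7, 53)]
```

This quadratic scan over the primes is fine for recovering the four classical pairs; locating a fifth pair requires searching far larger ranges, which is the point of the paper.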
Comments: 4 pages
Download: PDF
Submission history
[v1] 28 Apr 2010
[v2] 30 Apr 2010
[v3] 2 May 2010
[v4] 4 May 2010
What is the formula for pressure?
The formula for pressure is
P = F/A
In this equation, P stands for pressure while F stands for force and A stands for area. Pressure is typically measured in a unit called the Pascal.
So what this means is that the amount of pressure that is exerted on an area is determined by the size of that area and by the amount of force that is pressing against it.
Therefore, if you have a given amount of force, you will have more pressure if it presses down on a small area than if it presses down on a large area.
Pressure is defined as the normal force per unit area of a surface. If F is the force acting normal to the surface of area A, then the pressure P is given by:
P = F/A, in newtons per square meter, i.e. P = F/A pascals.
The dimensional analysis of pressure from this definition is: Force/Area = [M^1][L^1][T^(-2)]/[L^2] = [M^1][L^(-1)][T^(-2)], where M represents mass, L represents length and T represents time in fundamental standard units.
Pressure (symbol: p) is the force applied per unit area, in the direction perpendicular to the surface.
p = F/A, where F is the normal force and A is the area.
Pressure is a scalar, which in SI is measured in pascals.
1 Pa = 1 N/m^2
The pressure is transmitted to surrounding areas or sections of the fluid, acting in the normal direction at any point on those areas or sections.
It is a fundamental parameter in thermodynamics and is the conjugate variable to volume.
Characteristic Cases
Static pressure
Static pressure, usually denoted SP, is the inner pressure of a fluid, measured with a device that moves at the same speed as the fluid. For example, the static pressure of a fluid flowing through a pipeline is the pressure exerted on its walls.
Dynamic Pressure
Dynamic pressure is the additional pressure a moving fluid would exert on a surface that forced it to give up all of its kinetic energy. It is expressed by the relation:
p_dynamic = rho * v^2 / 2
where rho is the fluid density in kg/m^3 and v is the velocity in m/s.
Stagnation pressure
Stagnation pressure is the pressure that a moving fluid would exert if it were forced to stop. If a fluid moves faster, its stagnation pressure increases. Static pressure and stagnation pressure are related through the Mach number of the flow. See also Bernoulli's equation, which however is valid only for incompressible fluids. The pressure of a fluid in motion can be measured with a Pitot tube connected to a manometer.
Hydrostatic pressure
Hydrostatic pressure is the pressure due to the weight of a fluid: p = ρ·g·h, where
ρ (rho) is the density of the fluid (e.g. the density of water is almost 1000 kg/m^3);
g is the acceleration due to gravity (by convention, 9.80665 m/s^2 at sea level);
h is the height of the column of liquid (in meters).
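As a quick numeric illustration (my own sketch, not from the answer), the hydrostatic formula is one line of code:

```python
def hydrostatic_pressure(depth_m, density=1000.0, g=9.80665):
    """p = rho * g * h, in pascals (gauge pressure, i.e. excluding the
    atmospheric pressure already acting on the fluid's surface)."""
    return density * g * depth_m

# 10 m under water adds about 98 kPa, roughly one extra atmosphere
print(round(hydrostatic_pressure(10.0), 1))  # 98066.5
```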
Pressure of explosion or deflagration
Explosion or deflagration pressures are created by igniting an explosive gas, aerosol, or suspension in a closed or open space. These pressures propagate as a shock wave.
Negative pressures
While pressures are generally positive, in some cases negative pressures are met:
- When discussing relative pressures. For example, an absolute pressure of 80 kPa may be expressed as a relative pressure of -21 kPa (i.e. 21 kPa under the atmospheric pressure of 101 kPa). Technically this is called "a depression of 21 kPa".
- When attractive forces (e.g. Van der Waals forces) between particles of a fluid exceed the repulsive forces. This scenario, however, is unstable, because the particles move closer and closer together until the repulsive forces balance the attractive ones.
- Negative pressures occur during plant transpiration.
- The Casimir effect can create small attractive forces by interacting with vacuum energy. Sometimes this is called "vacuum pressure" (not to be confused with depression).
- Depending on the orientation of the surface in the reference system, a positive pressure on one side of a surface can be considered negative on the other side.
- In cosmology (dark energy, the expansion of the universe).
Pressure is often listed in psi (pounds per square inch). The formula to measure pressure is pressure equals force in pounds divided by the area in square inches. The written abbreviated formula is P = F/A.
By looking at the formula one can see that to obtain the pressure on an object, one must divide the force by the area of the object.
All fluids, both liquids and gases, exert pressure. If a fluid is not moving it exerts even pressure. Air pressure is measured by using a barometer.
The formula for pressure is P = F/A.
Pressure is the amount of force acting perpendicularly per unit area. The symbol of pressure is p.
p is the pressure
F is the normal force
A is the area of the surface in contact.
Pressure is a scalar quantity. It relates the vector surface element (a vector normal to the surface) to the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors.
Visualization of Complex
NDimViewer: Visualization of High-Dimensional Dynamical Systems (1998-1999)
NDimViewer is a web-based visualization system for numerical solutions of dynamical systems and representation of the calculated data using one out of three techniques: Extruded Parallel Coordinates, Linking With Wings, or Three-Dimensional Parallel Coordinates. It is specialized for high-dimensional dynamical systems with a dimension count up to 25, depending on the chosen technique. The implementation itself is separated into a calculation and a visualization part, which are treated independently. (more)

Color-table animation for vector fields (1997-1998)
FROLIC is a fast variant of Line Integral Convolution (LIC) and illustrates 2D vector fields by approximating streamlets by a set of disks with varying intensity. Color-table animation is a very fast way of animating FROLIC images. Various color-table compositions are investigated. When animating FROLIC images, visual artifacts (pulsation, synchronization) must be avoided. Several strategies in this respect are dealt with.
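The FROLIC idea of depositing intensity-ramped marks along short streamlets can be sketched in a few lines (my own toy illustration, not the project's code; single pixels stand in for the disks, and the vortex field is an assumed example):

```python
import numpy as np

def frolic(n=128, seeds=300, steps=12, h=0.02, rngseed=0):
    """Deposit short streamlets of a 2D vector field as runs of pixels whose
    intensity grows along the streamlet, encoding the flow's orientation."""
    rng = np.random.default_rng(rngseed)
    img = np.zeros((n, n))
    vec = lambda x, y: (-y, x)          # example field: a simple vortex
    for x0, y0 in rng.uniform(-1, 1, size=(seeds, 2)):
        x, y = x0, y0
        for k in range(steps):          # Euler integration of one streamlet
            dx, dy = vec(x, y)
            x, y = x + h * dx, y + h * dy
            i = int((y + 1) / 2 * (n - 1))   # map [-1, 1]^2 to pixel indices
            j = int((x + 1) / 2 * (n - 1))
            if 0 <= i < n and 0 <= j < n:
                # intensity ramp: brighter toward the streamlet's head
                img[i, j] = max(img[i, j], (k + 1) / steps)
    return img

img = frolic()
print(img.shape, float(img.max()))
```

Cycling a color table over the intensity values, instead of re-integrating the streamlets, is what makes the color-table animation described above so cheap.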
┃ │ Visualizing Dynamical Systems near Critical Points (1996-1998) The visualization of dynamical systems, e.g., flow fields, already provides quite a reasonable number of useful techniques. Many ┃
┃ │ approaches seen so far either facilitate the visualization of the abstract skeleton of flow topology, or directly represent flow dynamics by the use of integral cues, such as stream lines, ┃
┃ │ stream surfaces, etc. In this paper we present two visualization techniques which feature both approaches and demonstrate that combining both techniques, synergetic advantages are gained. ┃
┃ │ Enhancing the Visualization of Characteristic Structures in Dynamical Systems (1997-1998) We present a thread of streamlets as a new technique to visualize dynamical systems in three-space. A ┃
┃ │ trade-off is made between solely visualizing a mathematical abstraction through lower-dimensional manifolds, i.e., characteristic structures such as fixed point, separatrices, etc., and ┃
┃ │ directly encoding the flow through stream lines or stream surfaces. Bundlers of streamlets are selectively placed near characteristic trajectories. An over-population of phase space with ┃
┃ │ occlusion problems as a consequence is omitted. On the other hand, information loss is minimized since characteristic structures of the flow are still illustrated in the visualization. ┃
┃ │ Visualizing the Behavior of Higher Dimensional Dynamical Systems (1996-) The project deals with various techniques to visualize trajectories of high-dimensional dynamical systems. ┃
┃ │ The Virtual Ink Droplet Method (1997) The virtual ink droplets method is an efficient visualization technique for two-dimensional dynamical systems. It is based on a physical model of ┃
┃ │ smearing ink over a sheet of paper. Due to this intuitive metaphor images of flow fields can be easily understood and interpreted. ┃
┃ │ Java Exploration Tool for Dynamical Systems (1997) Various texture-based techniques to visualize 2D analytical dynamical systems are implemented as Java-applet. These techniquese include LIC, ┃
┃ │ OLIC, FROLIC, etc. ┃
┃ │ Animating Flowfields: Rendering of Oriented Line Integral Convolution (1997) Oriented Line Integral Convolution: Texture based flow visualization to illustrate direction and orientation of ┃
┃ │ flow (extension of Line Integral Convolution). ┃
┃ │ Collaborative Augmented Reality: Exploring Dynamical Systems (October 1996 - March 1997) In this paper we present collaborative scientific visualization in Studierstube. Dynamical systems are ┃
┃ │ investigated in a multi user setting by the use of augemented reality. ┃
┃ │ Visualizing Poincaré Maps together with the underlying flow (June 1996 - March 1997) Advanced visualization techniques for 2D Poincaré maps embedded within standard visualization techniques ┃
┃ │ for the underlying 3D flow. ┃
┃ │ Hierarchical Streamarrows for the Visualization of Dynamical Systems (April 1996 - March 1997) Hierarchical streamarrows are an extension to the streamarrows technique. It is not affected by ┃
┃ │ problematic cases as, e.g., such of high divergence or convergence. ┃
┃ │ Examples for Visualization using DynSys3D (1996 -) DynSys3D: A workbench for developing advanced visualization techniques in the field of three-dimensional dynamical systems under AVS. ┃
┃ │ Streamarrows -- Results (1996 - 1997) Streamarrows are a novel technique developed to illustrate multiple layers of streamsurfaces. Arrow shaped portions of a streamsurface are rendered ┃
┃ │ semitransparently to make portions of phase space structures perceivable which would otherwise have been occluded. Streamarrows are well suited for, e.g., highly curled streamsurfaces as ┃
┃ │ produced by the above mentioned mixed-mode oscillations. ┃
Visualization of Mixed Mode Oscillations (1995 - 1997) Mixed-mode oscillations are a phenomenon quite often encountered in chemical systems. They owe their name to the alternating large and small amplitudes in the observed time series. Another characteristic feature of mixed-mode oscillations is the alternation of chaotic and periodic responses as a parameter is varied.
A Guided Tour to Wonderland: Visualizing the Slow-Fast Dynamics of an Analytical Dynamical System (1994 - 1996) The Wonderland model is an econometric model which describes the interaction between population size, economic activity and environmental implications. Various visualization techniques were applied to illustrate the phase space behaviour of this nonlinear system.
Visualization of the Dynastic Cycle (1994 - 1995) The Dynastic Cycle is an econometric model which was designed to model the periodic alternation of society between despotism and anarchy in ancient China. It considers a three-class society of farmers, bandits and soldiers. We give some phase-space visualizations to illustrate the long-term behaviour of the model, which is characterized by pronounced slow-fast dynamics.
Graphical Nonlinear Time Series Analysis (1994 - 1995) With Graphical Nonlinear Time Series Analysis a user investigates, in phase space, time series that may have originated from underlying nonlinear systems. The program enables the user to quickly focus on interesting portions of the data. In a subsequent step, numerical tools can be used for further analysis.
Nonlinear Iterated Function Systems (1993 - 1994) Iterated Function Systems describe (typically fractal) objects by a set of contractive affine transformations. Nonlinear Iterated Function Systems are defined by contractive nonlinear functions, which greatly increases the modeling flexibility of Iterated Function Systems. We have done some work on modeling and rendering of 3D Nonlinear Iterated Function Systems. Furthermore, interactive programs for the specification of 2D (Nonlinear) Iterated Function Systems have been implemented.
Visualization of Strange Attractors (1993 - 1994) The Visualization of Strange Attractors facilitates the understanding of the long-term behavior of chaotic dynamical systems. Typically, a simple one-dimensional trajectory in phase space is used to approximate a strange attractor. We have investigated more complex visualization techniques for illustrating both the global and local properties of strange attractors.
Editor for Strange Attractors (1993) An Editor for Strange Attractors allows an interactive and intuitive specification and modification of dynamical systems defined by differential or difference equations.
residue classes of primes, covering intervals and bounds on the different ways
Take the first $n$ primes $p_1,...,p_n$ and the primorial $P_n$. Denote by $p_i$ every prime bigger than $p_n$ and smaller than $P_n$.
1) Is it true that there will always be a number in any interval of consecutive integers of length $P_n$ not divisible by any $p_i$? (It's the same as taking a residue class $r_i\bmod p_i$ for every $p_i$ in every possible way and asking whether you can cover all the numbers in the interval $[0,P_n-1]$.)
2) Even if we do not know if we can cover this interval, can we have any good upper bound on the number of ways?
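For the smallest nontrivial case, question 1 can be checked exhaustively. A sketch (brute force over every residue choice; this is only feasible for tiny $n$, since for $n=3$ there are already $7\cdot 11\cdots 29$ choices):

```python
from itertools import product

def has_uncovered_point(primes, length):
    """True if, for EVERY choice of one residue class per prime, every
    window of `length` consecutive integers contains an integer missed
    by all the classes (the hit pattern is periodic, so scanning two
    periods suffices)."""
    period = 1
    for p in primes:
        period *= p
    for residues in product(*(range(p) for p in primes)):
        hits = [any(x % p == r for p, r in zip(primes, residues))
                for x in range(2 * period)]
        run = best = 0
        for h in hits:
            run = run + 1 if h else 0
            best = max(best, run)
        if best >= length:
            return False   # some residue choice covers a full window
    return True

# n = 2: p_2 = 3, P_2 = 6, and the only prime strictly between 3 and 6 is 5.
print(has_uncovered_point([5], 6))  # → True
```

One residue class mod 5 hits at most one of any five consecutive integers, so no choice can cover a window of length 6, in agreement with the affirmative answer for $n=2$ mentioned below.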
prime-numbers arithmetic-progression nt.number-theory
Your question is equivalent to asking if there are more integers in the interval $[0,P_n]$ than primes in the interval $[p_n, P_n]$. If this is what you really meant to ask, I suggest you post
your questions on math.stackexchange.com instead of MathOverflow. – S. Carnahan♦ Mar 6 '11 at 15:19
In what way are these equivalent? For each prime you take a whole residue class, not a number... – asterios gantzounis Mar 6 '11 at 16:01
@Scott Carnahan: Please, if you don't understand the question, ask for clarification, not just close it. – asterios gantzounis Mar 6 '11 at 16:04
Thank you for fixing the question. – S. Carnahan♦ Mar 6 '11 at 16:46
Is there a motivation for this particular question? I'd be interested to know where it comes from. – Mark Bennet Mar 6 '11 at 21:44
5 Answers
I did some computer programming to check plausibility. In future I request that you do this step yourself.
For $p_n = 3$ and $P_n = 6,$ the only prime in between is 5, and any interval of length 6 contains an integer not congruent to any prescribed value mod 5.
In C++ I was able to check up to 10,000,000. For definiteness I took the residue classes to all be 0, that is I checked multiples of the primes between $p_n$ and $P_n.$ For the $p_n$ I
checked, I was able to find only relatively short intervals of consecutive numbers, each of which is divisible by at least one prime between $p_n$ and $P_n.$ That is, these intervals
have lengths much shorter than $P_n$ itself. Thus in any interval of length $P_n,$ it should be quite easy to find numbers that are not divisible by any of those primes. Indeed, the
probability of picking a success at random appears to increase with $p_n.$
For example, for $p_n = 5, P_n = 30,$ I tried to find long intervals where each number had at least one divisor in the set 7, 11, 13, 17, 19, 23, 29.
691558 = 2 * 7 * 47 * 1051
691559 = 11 * 62869
691560 = 2^3 * 3^2 * 5 * 17 * 113
691561 = 13 * 53197
691562 = 2 * 19 * 18199
691563 = 3 * 29 * 7949
691564 = 2^2 * 23 * 7517
691565 = 5 * 7 * 19759
The bound of 10,000,000 is not on primes, it is on the output, such as 691565 < 10,000,000.
p_n = 5 P_n = 30 0.592302 2.96151
length = 8
p_n = 7 P_n = 210 0.454539 3.18177
length = 20
p_n = 11 P_n = 2310 0.348014 3.82815
length = 43
p_n = 13 P_n = 30030 0.283807 3.68949
length = 207
p_n = 17 P_n = 510510 0.236611 4.0224
length = 1 + 435005 - 433756 = 1250
434996 434997 434998 434999 435000 435001 435002 435003 435004 435005.
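The run at 691558 above can be reproduced with a short brute-force search. A sketch (assuming, as in this answer, that all residues are 0):

```python
def longest_divisible_run(primes, limit):
    """Longest run of consecutive integers in [2, limit], each divisible
    by at least one prime in `primes`; returns (length, start)."""
    best_len, best_start, run = 0, 0, 0
    for x in range(2, limit + 1):
        if any(x % p == 0 for p in primes):
            run += 1
            if run > best_len:
                best_len, best_start = run, x - run + 1
        else:
            run = 0
    return best_len, best_start

# primes strictly between p_3 = 5 and P_3 = 30
middle_primes = [7, 11, 13, 17, 19, 23, 29]
length, start = longest_divisible_run(middle_primes, 700_000)
print(length)  # → 8
```

By the counting argument in the comments below (at most 6 integers knocked out by 11..29 plus at most 2 by 7 in any 9 consecutive integers), 8 is in fact the global maximum for this prime set, not just the maximum below the search bound.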
When you say you checked up to $10,000,000$, do you mean you checked all $n$ up to that number? or all $n$ such that $p_n\lt10,000,000$? or all $n$ such that $P_n\lt10,000,000$? I'm
not sure, anyway, that the congruence class $0$ is representative. This resembles a problem about covering congruences, and in those the choice of residue class is crucial. – Gerry
Myerson Mar 6 '11 at 23:13
Do you mean that you checked all primes up to 10000000? – Aaron Meyerowitz Mar 6 '11 at 23:22
Will, thanks. I'm not surprised the problem is computationally huge. I think the problem of deciding whether there is a set of covering congruences with given moduli is NP-complete. –
Gerry Myerson Mar 7 '11 at 2:37
Hi, Gerry. As to your comment after Aaron's answer, I was mostly trying to figure out what the question might be. I don't have any clear sense of how things might vary with a different choice of residues, although I did want to try a few changes just in case. For that matter, I'm not sure that 10,000,000 is enough to stand in for infinity. The actual finite bound for prime 5 and primorial 30, by CRT, is 7*11*13*17*19*23*29, and I cannot get near that. – Will Jagy Mar 7 '11 at 3:23
Will, length 8 (which you found) seems clearly optimal. Consider 9 consecutive integers. The residue classes (however chosen) for 11,13,17,19,23 and 29 together can only knock out 6 of
these. mod 7 you can knock out 2 more for a total of 8. Start with 56724458 which is [0,-1,-2,-3,-4,-5,-6] mod [7,11,13,17,19,23,29] to get multiples in that order. Obviously the
analysis is not that easy for larger $p_n$ – Aaron Meyerowitz Mar 7 '11 at 7:07
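Aaron's witness can be checked directly; a small sketch verifying that the eight integers starting at 56724458 are each divisible by one of the seven primes, with 7 covering both ends, and that the run cannot be extended:

```python
N = 56724458
primes = [7, 11, 13, 17, 19, 23, 29]
# N + k is a multiple of primes[k] for k = 0..6, and 7 also divides N + 7,
# so the eight integers N..N+7 are each divisible by one of the primes:
run_ok = all((N + k) % primes[k] == 0 for k in range(7)) and (N + 7) % 7 == 0
# ...while N - 1 and N + 8 escape all seven primes, so the run has length
# exactly 8:
ends_ok = all((N - 1) % p and (N + 8) % p for p in primes)
print(run_ok, ends_ok)  # → True True
```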
1) is equivalent to asking if it is true that property $J(n)$, which is the assertion $j(P_N/P_n) \leq P_n$, holds for all integers $n$, where $P_n$ is the product of the first $n$ primes,
$P_N$ is the product of the first $N$ primes where $N$ is the largest index such that $p_N < P_n$, and $j()$ is Jacobsthal's function $j(m)$ which gives the smallest integer $j$ such that any
interval of $j$ or more consecutive integers contains at least one integer which is coprime to $m$. The original formulation involving covering an interval with residue classes, one for each
prime $p_i$ with $n < i \leq N$, can be translated by the Chinese Remainder Theorem into one where the residues are 0, i.e. the classes are multiples of the prime instead of being an arithmetic sequence with common difference $p_i$. Aaron Meyerowitz showed that $J(n)$ was true for $n=3$ and claimed it was for $n=4$, Will Jagy observed that $J(n)$ was true and easy for $n=2$, and I showed that $j(P_N/P_n) = 9$ for $n = 3$ by a method similar to one outlined by Aaron in one comment, and that $74 < j(P_N/P_n) \leq 85$ for $n=4$ by an undisclosed but elementary method. I also suggested that $J(n)$ is false for all $n > 4$.
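For small $m$, Jacobsthal's function $j(m)$ can be computed directly from its definition: it equals the largest gap between consecutive integers coprime to $m$, and the coprimality pattern repeats mod $m$. A sketch:

```python
from math import gcd

def jacobsthal(m):
    """Smallest j such that every interval of j consecutive integers
    contains at least one integer coprime to m. This is the maximal gap
    between consecutive coprimes; scanning two periods of the pattern
    catches the gap that straddles a period boundary."""
    coprime = [x for x in range(2 * m) if gcd(x, m) == 1]
    return max(b - a for a, b in zip(coprime, coprime[1:]))

print(jacobsthal(30), jacobsthal(210))  # → 6 10
```

For instance $j(30)=6$: the five integers 2..6 are all divisible by 2, 3, or 5, while any six consecutive integers contain a coprime of 30.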
A method of showing for $n < 5$ that $j(P_N/P_n) < P_n$ comes from the fact that $\sum_{n < i \leq N} 1/p_i < 1$, and some simple estimates on $j(m)$ which are applicable to any $m$ such that
the sum of the inverses of $m$'s distinct prime factors add up to less than 1. This method is no longer applicable for $n \geq 5$, as the indicated sum grows roughly as $\log(\log(p_N)/\log
(p_n))$, but it does grow, suggesting that $J(n)$ is eventually false for sufficiently large $n$.
I had hoped to show upper or lower bounds to resolve the matter, but the upper bounds I have at my disposal, while explicit, are too weak to show $J(n)$ is true, and the best asymptotic bounds
are also too weak, even making favorable assumptions on the (as yet unknown) multiplicative constants, while the best known lower bounds in the literature can probably be used to show $j(P_N/P_n) > c P_N$ for $c$ some constant less than 1, so the lower bounds are tantalizingly close to showing that $J(n)$ is false, that is that the interval $[0, P_N -1]$ can be covered by $N-n$-many residue classes, one for each prime $p_i$.
If one were to tweak things slightly, say allowing a couple smaller primes less than $p_n$ to help cover, or allowing not very many primes larger than $p_N$ to help (probably less than $n^6$
primes), then the answer to the modified question $J'(n)$ would be no, there would always be enough primes to cover.
One cover which shows how near a miss this is uses a midpoint sieve. Choose $L$ odd less than $P_n$ and forget even numbers for a while, and pretend you are covering the odd numbers in $[-L,L]$ with the classes centered about the missing point 0. For $n=4$ I used $L=73$ and covered both endpoints with the class belonging to 73, the next with the class belonging to 71, the next with the class belonging to 23 ($= 69/3$) all the way down to 11, then I filled in the holes (odd numbers less than 74 which were 7-smooth) with 26 primes.
For $n=5$ one can use a midpoint sieve to cover something like 1700 numbers with the primes from 13 to 2309, and for $n=6$ something like 25000 with the primes from 17 up to 30030. (I have yet
to double check the figures, so I am being purposely vague.) In particular, it seems that the coverage ratio for the set using the midpoint sieve is increasing, and this suggests one can do
better by tweaking the midpoint sieve to show $J(n)$ true. Such tweaking is either random, so hit-or-miss, or computationally expensive, and I have no good heuristics at present for making
substantial improvements on the midpoint sieve. The fact that the midpoint sieve does far better than 50% coverage for $n>4$ is one of my reasons for believing $J(n)$ is false.
I am trying to develop a technique to refine upper bounds, especially in the cases that the sum of the reciprocals of the distinct prime divisors is larger than 1 but still close to one. It is related to the following problem, which perhaps someone here can shed light upon. For $M$ small, I was able to use a relative of this problem to show $j(P_{46}/P_4) < 83$.
I want to see how poorly I can cover small portions of the number line according to the following constraints: 1) I only need to cover some subset around 0 of the number line $[-M, M]$, so I
can set my boundaries for computation to numbers not exceeding $2M$; 2) I have 0 already covered by something other than a tile, so no need to worry about that; 3) I have $k$ tiles of distinct
lengths, the lengths ranging from 2 to $l$ where $l$ is larger than $k$ but not by much, and $l < 2M$; 4) each tile has to cover exactly one positive integer $p$ and exactly one negative
integer $n$, and only a tile of length $p - n$ can cover both $p$ and $n$; and 5) I want to minimize simultaneously the amount of overlap and maximize the number of tiles I use. For an example
cost function this could be maximizing $j - o$, where $j$ is the number of tiles I use and $o$ represents overlap; $o$ itself could be $(2j-u)$ where $u$ is the number of integers in $[-M,M]$
that are covered by at least one tile in the arrangement of $j$ tiles.
In this problem, if I could prove that if I used $j$ tiles all of distinct lengths less than $3j/2$, that the overlap would be (say) at least $j/4$, I could use that in improving upper bounds
on $j(m)$ (different from but related to $j$) for some useful class of numbers $m$. Part of the challenge is that I can use all odd or all even tiles to create a partial cover with no overlap,
and there are some mixes of odd and even length tiles I could use with no overlap, so using less than $k/2$ tiles is useless to me unless most of their lengths are sufficiently small.
To summarize: the tile problem might help in showing that $J(n)$ is true for more $n$. My guess is still that $J(n)$ is false for $n > 4$.
So much for my latest attempt at 1). For 2), I am guided by the following scenario: let us suppose I am right and that for $n=6$, say, one needs only the primes from $p_7$ to $p_{N-8}$ to use
in a cover. Based on my studies of near-prime gaps, I expect (but cannot prove) that there would be 2 gaps of size larger than $P_n$ in the sequence of integers relatively prime to all the
primes from $p_7$ to $p_{N-8}$ inclusive. Suppose these gaps were each of size $P_n +d$; that would give $2d+2$ ways (including reflections) of covering the interval $[0, P_n-1]$ with residue
classes using the primes from $p_7$ to $p_{N-8}$. Now multiplying the whole set by $r=p_{N-7}$, this gives at least $r(2d+2)$ ways to cover the interval by residue classes which now allow the use of the prime $p_{N-7}$, plus at least $2r$ more ways, since each of the 2 gaps before corresponds to $r$ different gaps in the sequence of numbers relatively prime to all the primes from $p_7$ to $p_{N-7}$ inclusive, and each of the new gaps would be larger by some amount $d'$, whose average is most likely related to the average gap size in the sequence ($M/\phi(M)$, where $M$ is a product of the primes involved). Continuing up this way, we get at least $(2d+4)R$, where $R$ is the product of the last 8 primes before and including $p_N$.
This scenario ignores distribution of gaps in general, and assumes the largest gap is rare and (for sufficiently large N) the next largest is much smaller and far removed from the largest,
which is what is commonly seen. So I would expect (but cannot yet prove) that a good upper bound on the number of ways to cover would be something like $\prod_{0 \leq d < s} p_{N-d}$ where $s$
is small, conjecturally $s \in O(\log(\log(\log(N))))$.
Gerhard "Will Guess For Bounty Points" Paseman, 2011.03.16
I am sorry, I didn't have internet for a few days and the bounty went automatically... – asterios gantzounis Mar 17 '11 at 14:13
Well, I can take comfort in that the bounty would not buy me a Venti Mocha anyway. If I find your email address, I will send you some followup information on this particular problem. Gerhard "Ask Me About System Design" Paseman, 2011.03.21 – Gerhard Paseman Mar 22 '11 at 5:55
my e-mail is minasteris@gmail.com , thank you very much.. – asterios gantzounis Apr 5 '11 at 17:47
To clarify Aaron's observations: since the original post asked for something to be true in every interval (of consecutive integers) of length P_n, the residue classes are indeed a red
herring. The Chinese Remainder Theorem says that there will be a number common to all those residue classes, and therefore the problem will look the same whether the classes are nonzero
residues or not, since we are dealing with finitely many primes.
This now turns into a problem of Jacobsthal's function on numbers of the form (P_N/P_n), where P_N is the product of all primes less than P_n, and P_n in turn is the product of the first n primes. Jacobsthal's function asks for the length j(m) of the smallest interval of integers which guarantees at least one integer coprime to m, i.e. one lying outside the chosen residue classes.
Aaron is right when he requests that the sum of the reciprocals of the primes involved should be greater than 1. My computation suggests this starts to happen when n=5, p_n = 11, P_n = 2310, and there are roughly 340 (+ or - 20) primes involved in the product (P_N/P_n). I am trying to refine estimates to decide if j(P_N/P_n) is less than, equal to, or greater than 2310. My instinct tells me greater, and that this will hold true for n > 4. I will update this later with the computations. In the meantime, you can try to use the upper bound estimates in the recent
answer I posted to my Westzynthius question on MathOverflow.
Gerhard "Ask Me About Coprime Intervals" Paseman, 2011.03.07
Gerhard, I think you and Aaron have something. In Hardy and Wright, fifth edition, Theorem 427 on page 351. The harmonic sum for the primes up to positive real x is log log x + B_1 + o(1).
The constant B_1 is identified in Theorem 428 on the same page, but I still lack an approximate value. – Will Jagy Mar 7 '11 at 19:42
Mark Villarino has an arXiv paper on Mertens's proof of these estimates. B is about 0.26, but Mertens's error term is like 4/ln(x+1). I will include a reference in my next draft. Gerhard "Ask Me
About System Design" Paseman, 2011.03.07 – Gerhard Paseman Mar 7 '11 at 20:41
I got my estimates by dividing 10^6 by each prime and printing out whenever the sum passed a multiple of 10^5. Gerhard "Ask Me About System Design" Paseman, 2011.03.07 – Gerhard Paseman Mar
7 '11 at 20:44
Gerhard, good work. Also, the use of the word "dollar" follows from the discovery of silver in Jacobsthal. – Will Jagy Mar 7 '11 at 21:13
Edited version: Just to be clear, the question is: can we find integers $n$ and $s$ and arithmetic progressions $r_{n+i} \bmod p_{n+i}$, where $i$ ranges from 1 to the largest $j$ such that $p_{n+j} \lt P_n$, such that the arithmetic progressions cover the interval $[s,s+P_n-1]$? I would guess that it IS possible (but I could be wrong). However it would likely require a huge number of progressions. A better question might be: given a certain set of primes (or pairwise relatively prime moduli), what is the longest interval you can cover?
There is no harm in assuming that $s=0$ since we can instead look at the progressions $r_{n+i}-s \mod p_{n+i}$.
If it can be done at all then the number of ways to do it (provided that we do get to pick $s$ to be what we want) is simply the product of all the primes in the range: Whatever residue
classes you choose will create a pattern of covered and uncovered integers which is periodic but with an extremely long period. Picking different residue classes creates the same pattern,
just shifted.
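The shift observation can be illustrated concretely: by the Chinese Remainder Theorem, any choice of residues is a translate of the all-zero choice. A sketch with the toy moduli 3 and 5 (the CRT solution of $x \equiv 1 \pmod 3$, $x \equiv 2 \pmod 5$ is $x = 7$):

```python
def pattern(primes, residues, length):
    # True at x iff x lies in some chosen residue class.
    return [any(x % p == r for p, r in zip(primes, residues))
            for x in range(length)]

primes, period = [3, 5], 15
base = pattern(primes, (0, 0), 2 * period)      # residues all 0
shifted = pattern(primes, (1, 2), 2 * period)   # residues (1, 2)
# The (1,2)-pattern is the (0,0)-pattern translated by 7:
print(all(shifted[x] == base[(x - 7) % period] for x in range(period)))  # → True
```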
A question which makes some sense is: bound the number of ways to choose residue classes and cover $[0,P_n-1]$. Even that would be huge. One would have some carefully chosen progressions for the smaller primes and also an enormous number of one-member residue classes filling in the holes. Those one-member progressions could be shuffled around at will.
We would certainly need to have $\sum_1^j\lceil\frac{P_n}{p_{n+i}}\rceil \gt P_n$ but this is on the overly optimistic chance that we could have all the progressions distinct (At least for
$p_{n+i} \lt \sqrt{P_n}$ the progressions will not be completely disjoint) and that every one could be positioned to get in the maximum number of occurrences.
A condition which does not make the second assumption is $\sum_1^j\frac{1}{p_{n+i}} \gt 1$ so $\sum_1^{n+j}\frac{1}{p_{k}} \gt 1+\sum_1^{n}\frac{1}{p_{k}}$. By my calculations using this estimate $$\sum_{p \lt x}\frac1p \ge \ln \ln (x+1) - \ln\frac{\pi^2}6$$ that condition requires that $p_n$ is at least $23$, meaning that $P_n$ is greater than $2.23\cdot 10^8$ and $j>1.2 \cdot 10^7$. This makes the greedy strategy unattractive (start with $s=0$ and pick residue classes repeatedly to take care of the smallest uncovered integer).
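The threshold can also be checked with the actual reciprocal sums rather than the $\ln\ln$ lower estimate. A sketch (the only assumption is the reading "primes strictly between $p_n$ and $P_n$"); it shows the sum already exceeds 1 at $n=5$, consistent with the computation reported in another answer:

```python
def primes_upto(n):
    # Simple sieve of Eratosthenes.
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

def middle_reciprocal_sum(p_n, P_n):
    # Sum of 1/p over primes strictly between p_n and P_n.
    return sum(1.0 / p for p in primes_upto(P_n - 1) if p > p_n)

print(middle_reciprocal_sum(7, 210) > 1)     # n = 4 → False
print(middle_reciprocal_sum(11, 2310) > 1)   # n = 5 → True
```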
Aaron, I edited my answer to show the experiment I actually did. – Will Jagy Mar 7 '11 at 1:35
Also, I have been running a few related experiments, varying the seven residue classes mod 7,11,13,17,19,23,29 by hand. The longest intervals I can find stay just about the same length as
the original (all residues 0). Note that, if I take all residues the same constant c, I have simply shifted my initial intervals right by c, so I get exactly the same best length. – Will
Jagy Mar 7 '11 at 1:44
Will, consider the congruences $x\equiv a\pmod2$, $x\equiv b\pmod3$, $x\equiv c\pmod4$, $x\equiv d\pmod6$, $x\equiv e\pmod{12}$. There are $1728$ ways to choose the parameters
$a,b,c,d,e$, and only $24$ of these yield a cover for the integers. You're looking for a needle in a haystack, and you won't find it by sampling the haystack at random. – Gerry Myerson
Mar 7 '11 at 2:44
Gerry, true but in your example the moduli are not relatively prime. Here they obviously are. In this case, whatever the choice of 7 residue classes, the entire pattern of covered and
uncovered integers is periodic mod 7*11*13*17*19*23*29=215656441 so naively, that is how far one might have to look to be sure of finding the longest covered interval. The choice of
residue classes only affects where this cycle starts. The proportion of this interval left uncovered by any choice of 7 residue classes is 6/7 * 10/11 * ...* 28/29 or about 60%. – Aaron
Meyerowitz Mar 7 '11 at 6:51
Aaron, thanks for your comments. The nature of the problem is gradually becoming clear. I like your method of designing a set of residue classes to cover a long interval, as in length 70
for prime 7 and primorial 210. – Will Jagy Mar 7 '11 at 20:13
Let $M_4 = \prod_{p \text{ prime }, 10 < p < 210} p$. Another answer referred to computing $j(M_4)$, where $j(m)$ is Jacobsthal's function, and said that elementary estimates established
the inequalities $74 < j(M_4) < 85$. It is possible to tighten those inequalities, and perhaps produce a hand-checkable proof that $j(M_4)=79$. Right now though, the assertion is being made
with computer assistance, and some theory may need to be developed to bring it to the level of human verification.
One part of the verification is easy. Earlier I had a program find the following partial covering, where the integers to be covered range from $0$ to $77$, and the pairs are of the form $(d,a)$, meaning they cover the arithmetic progression $a+nd$ that lies within the interval of interest: $(11,0), (13,4), (17,8), (19,7), (23,5), (29,10), (31,9), (37,1), (41,16), (43,18), (47,15), (53,12), (59,13), (61,6), (67,3), (71,2)$.
Except for the class represented by $19$, all of these are covering the maximum amount possible for sets of this type, i.e. they are disjoint and the size of each set represented by $(d,a)$ is $\lceil 78/d \rceil$. This near-extreme situation is crucial in being able to find a partial cover of this size. The uncovered numbers in the interval start at $14$ and go up to $63$; they are covered by trivial progressions, one for each of the primes from $73$ up to $199$, the other prime factors of $M_4$. This example thus gives $j(M_4)>78$.
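This partial cover can be verified mechanically; a sketch that reconstructs the covered set and counts the holes:

```python
pairs = [(11, 0), (13, 4), (17, 8), (19, 7), (23, 5), (29, 10), (31, 9),
         (37, 1), (41, 16), (43, 18), (47, 15), (53, 12), (59, 13),
         (61, 6), (67, 3), (71, 2)]
covered = set()
for d, a in pairs:
    covered.update(range(a, 78, d))   # progression a, a+d, ... inside [0,77]
uncovered = sorted(set(range(78)) - covered)
# 26 holes, from 14 up to 63 -- exactly one hole for each of the 26 primes
# from 73 to 199 that fill them with trivial one-member progressions:
print(len(uncovered), uncovered[0], uncovered[-1])  # → 26 14 63
```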
The program also found many other partial covers. I have not checked them to see if any cover even more elements.
The best I could do by hand turned out to cover $72$ elements, instead of $74$ as I had claimed earlier. Once a nice search order was determined, the example above was found by a laptop in
less than an hour. The same program did not find any partial covers for either of the intervals $[0,78]$ or $[0, 79]$ that could be extended to full covers.
I was able by hand to come up with a proof that $[0, 80]$ could not be covered, giving $j(M_4) < 82$, but to go any further seemed impossible with the tools I was developing, so I resorted
to computer search to get to $j(M_4) < 80$. I invite verification of the assertion $j(M_4)=79$.
It is my hope to refine the techniques so that good estimates of $j(M_5)$ are possible, where $M_5$ is the product of primes from $13$ up to just below $P_5=2310$. One such goal is to
determine whether $2310 < j(M_5)$.
Gerhard "Ask Me About System Design" Paseman, 2011.04.26
Patent application title: MODELING MOTION CAPTURE VOLUMES WITH DISTANCE FIELDS
A k-covered motion capture volume is modeled. The modeling includes representing the k-covered motion capture volume with a distance field.
A method comprising modeling a k-covered motion capture volume based on a configuration of motion capture sensors, the modeling including representing the k-covered capture volume with a distance field.
The method of claim 1, wherein the distance field is an adaptive distance field, the adaptive distance field providing a signed scalar field indicating whether a point is within the capture volume,
on a boundary of the capture volume, or outside the capture volume.
The method of claim 1, wherein the modeling includes defining a k-coverage function based on the sensor configuration, and sampling the function.
The method of claim 2, wherein the adaptive distance field is a signed distance function, and wherein the function describes intersections of viewing volumes of the sensors.
The method of claim 4, wherein the sensors include motion capture cameras, and the viewing volumes have the shapes of frusta.
The method of claim 4, wherein the sampling includes encompassing the viewing volumes of the sensors with a root cell, performing a recursive subdivision of the root cell.
The method of claim 6, wherein a k-th largest value of f_i is used as an approximation to the distance to the capture volume's boundary, and wherein the recursive subdivision is guided by a sampling error.
The method of claim 6, wherein the distance function is evaluated only once for each viewing volume, whereby the model has O(n) complexity to compute.
The method of claim 6, wherein each cell in the modeled capture volume has a set of vertices, and wherein each vertex corresponds to a distance field.
The method of claim 6, wherein evaluations of h(p) and ∇h(p) are combined for each leaf cell in the recursive subdivision.
The method of claim 3, wherein sampling is performed only for that portion of the capture volume above a floor plane.
The method of claim 1, further comprising storing the modeled capture volume in computer memory.
The method of claim 1, further comprising extracting an iso-surface of the modeled capture volume; and modeling the iso-surface with an adaptive distance field.
A motion capture system comprising a computer programmed to perform the method of claim 1.
The motion capture system of claim 14, wherein the computer renders the modeled capture volume; and wherein the computer is also programmed to provide a user interface that allows sensor
configurations to be varied.
The motion capture system of claim 15, wherein the computer determines a bounding box for the modeled capture volume, and also determines the sensor configuration with respect to the bounding box.
Apparatus comprising a computer for generating a computational model of a k-covered motion capture volume based on a configuration of motion capture sensors, including representing the k-covered
capture volume with a distance field.
A vehicle control system comprising a motion capture system for covering a capture volume; a computational model of k-covered motion capture volume, the model represented by a distance field; and a
control for operating a vehicle within the modeled volume.
The system of claim 18, wherein if the vehicle moves within a preset distance of the capture volume's boundary, gradients from the model are accessed and used to generate commands for moving the
vehicle back inside the capture volume.
The system of claim 18, wherein the motion capture system includes a motion capture computer for sending tracking information to the control; and wherein the control includes an intelligent agent for using the tracking information to generate control values for the vehicle's inner control loop and servos.
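Claims 18 and 19 describe steering a vehicle back inside the volume using gradients of the modeled distance field. A minimal sketch of that idea, using a hypothetical spherical stand-in for the modeled volume (a real system would query the ADF of claim 2; all names and values here are illustrative):

```python
from math import sqrt

def signed_distance(p):
    # Stand-in for the modeled capture volume: a sphere of radius 5 m.
    # Positive inside, negative outside, zero on the boundary.
    x, y, z = p
    return 5.0 - sqrt(x * x + y * y + z * z)

def gradient(p, h=1e-5):
    # Central-difference gradient; points toward increasing clearance.
    g = []
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        g.append((signed_distance(q_plus) - signed_distance(q_minus)) / (2 * h))
    return g

def geofence_command(p, margin=1.0):
    # Within `margin` of the boundary: return a unit vector back inside.
    if signed_distance(p) < margin:
        g = gradient(p)
        n = sqrt(sum(c * c for c in g))
        return [c / n for c in g]
    return [0.0, 0.0, 0.0]

cmd = geofence_command([4.5, 0.0, 0.0])   # only 0.5 m of clearance
print([round(c, 3) for c in cmd])         # → [-1.0, 0.0, 0.0]
```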
BACKGROUND [0001]
Motion capture ("mocap") is a technique of digitally recording movements for entertainment, sports, and medical applications. In the context of filmmaking (where it is sometimes called performance
capture), motion capture involves recording the actions of human actors, and using that information to animate digital character models in 3D animation.
In a typical motion capture system, an array of sensors is distributed around a motion capture stage, where a person's motion is recorded. The person wears either passive markers or active beacons
that are individually tracked by the sensors. A central processor fuses data from the sensors and computes the person's body position based on the marker data and a-priori information about the
marker position on the person's body.
The movements are recorded within a "capture volume." This volume refers to the workspace where the person can be tracked continuously by a certain sensor configuration. For instance, if a person can
be tracked continuously within a capture volume by three cameras, that capture volume is said to have 3-coverage.
A motion capture system may be used for closed loop control of a vehicle (e.g., an electric helicopter) within a capture volume. However, visualization and analysis of the capture volume are limited.
SUMMARY [0005]
According to an embodiment of the present invention, a k-covered motion capture volume is modeled. The modeling includes representing the k-covered motion capture volume with a distance field.
BRIEF DESCRIPTION OF THE DRAWINGS [0006]
FIG. 1 is an illustration of a method in accordance with an embodiment of the present invention.
[0007] FIG. 2 is an illustration of a method of modeling a capture volume in accordance with an embodiment of the present invention.
FIG. 3 is an illustration of an example of a viewing volume of a sensor.
FIGS. 4a-4c are illustrations of recursive subdivision of a root cell, and FIG. 4d is an illustration of a data structure representing the recursive subdivision.
FIG. 5 is an illustration of a method of using a data structure to determine a true signed distance function to a boundary of a motion capture volume in accordance with an embodiment of the present invention.
FIG. 6 is an illustration of an apparatus in accordance with an embodiment of the present invention.
[0012] FIG. 7 is an illustration of a system in accordance with an embodiment of the present invention.
[0013] FIG. 8 is a screen shot of a motion capture tool in accordance with an embodiment of the present invention.
FIG. 9 is an illustration of a vehicle control system in accordance with an embodiment of the present invention.
FIG. 10 is an illustration of a method of controlling a vehicle in accordance with an embodiment of the present invention.
FIG. 11 is an illustration of a leaf cell.
DETAILED DESCRIPTION [0017]
Reference is made to FIG. 1. At block 110, a computational model of a k-covered motion capture volume is generated. The model is based on a configuration of motion capture sensors. The model
generation includes representing the k-covered motion capture volume with a distance field.
Let C_k denote a k-covered motion capture volume for a motion capture system comprising a set C of sensors (e.g., cameras) in a specific configuration. This set C represents information about a portion of a workspace (the "viewing volume") that is viewed by each sensor (e.g., parameters such as the position, orientation and field of view of each camera with respect to the workspace). A distance field is a scalar function f_k : R^3 → R denoting the minimum distance from a point p in R^3 to a boundary of the capture volume C_k. The distance field may be signed such that f_k(p) is positive if the point p is inside of the capture volume, and negative if the point p is outside the capture volume C_k. The distance field is zero if the point p lies on the boundary of the capture volume C_k.
An algorithm for evaluating f_k at a given point is computationally intensive due to the complex geometry of the capture volume (e.g., a multi-faceted boundary with very little symmetry). However, this function f_k may be approximated with an interpolation technique using regularly or adaptively sampled values of f_k within a workspace containing the capture volume. One embodiment of the present invention may use adaptive distance fields (ADFs) to model the capture volume C_k as an approximation h_k of f_k. In an adaptive distance field, sampling may be performed at higher rates in regions where the distance field contains fine detail and at lower sampling rates where the distance field varies smoothly.
At block 120, the model can be used for a wide range of applications, examples of which will be provided below. These examples include a motion capture system and a system for controlling a
vehicle to operate within a confined volume.
Reference is made to FIG. 2, which illustrates an example of modeling a motion capture volume based on a sensor configuration. The configuration includes a plurality of sensors distributed around a workspace. The sensors are
not limited to any particular type. The sensors may include, but are not limited to, magnetic, ultrasonic, and optical sensors.
Each sensor has parameter settings that include field of view, placement, and orientation. The settings of a sensor can be used to identify that portion (that is, the viewing volume) of the workspace
sensed by the sensor.
The volume viewed by a sensor such as a camera can be modeled as a frustum. The frustum illustrated in FIG. 3 is bounded by a set of six clipping planes P = {Left, Right, Top, Bottom, Front, Back}. The eyepoint o defines the origin of the camera's local coordinate system, where the z axis supports the camera's line of sight. The Back and Front planes are perpendicular to z, and the Left and Right planes intersect at the eyepoint o along the y axis. Their dihedral angle is the camera's horizontal field of view. The Top and Bottom planes intersect at o along the x axis. Their dihedral angle corresponds to the camera's vertical field of view.
Each clipping plane in P
defines a half-space and is oriented with its normal pointing inside the frustum. Thus, the frustum is formed by the intersection of all six half-spaces.
At block 210, a k-coverage function f_k is defined, based on the sensor configuration. Consider the example of 3-coverage by a set C of n cameras, where each camera has a viewing volume that is modeled as a frustum. Given an oriented plane P and its associated half-space H_P, a signed distance function d_P : R^3 → R is defined such that

d_P(q) = dist_P(q) if q ∈ H_P, and d_P(q) = −dist_P(q) otherwise,

where dist_P(q) computes the Euclidean distance between P and a point q.
Let f_i : R^3 → R be the coverage function of camera c_i, such that f_i(q) is the minimum of d_P(q) over the clipping planes P of the camera's frustum. This function is positive for points inside the frustum, null for those at its boundary, and negative outside of it. Furthermore, because the frustum is convex, the positive values of f_i are actual Euclidean distances to the boundary of the frustum. This does not hold for negative values of f_i.
Let covers_i : R^3 → {true, false} be the predicate that is true for a point q if and only if q is in the field of view of camera c_i. This is equivalent to covers_i(q) ⇔ f_i(q) ≥ 0.

Let cover_k : R^3 → {true, false} be the predicate that is true for a point q if and only if q is in the field of view of at least k distinct cameras of C. This is equivalent to
|C[q]| ≥ k and ∀c_i ∈ C[q], covers_i(q),
where C[q] is the subset of cameras covering point q.
Let f_k : R^3 → R be the function that for a point q returns the k-th largest value of f_i(q) for i ∈ {0, . . . , n−1}. If f_k(q) ≥ 0 then there exists a set C' = {c_0, . . . , c_{k−1}} of k cameras where ∀c_i ∈ C', f_i(q) ≥ 0. Thus, cover_k(q) ⇔ 0 ≤ f_k(q), where the value of f_k indicates whether a point is k-covered by C. B = {q | f_k(q) = 0} is the boundary of the capture volume where k-coverage is required. The k-th largest value is used as an approximation of the distance to the capture volume's boundary.
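The chain of definitions above (per-plane signed distances d_P, per-camera coverage f_i as a minimum over clipping planes, and f_k as the k-th largest per-camera value) can be sketched as follows. This is illustrative code, not the patent's implementation; for simplicity, axis-aligned unit boxes stand in for real view frusta.

```python
def plane_signed_distance(plane, q):
    """d_P(q): signed distance to an oriented plane, positive on the
    inside half-space. A plane is (normal, point_on_plane) with the
    normal pointing inside the frustum."""
    (nx, ny, nz), (px, py, pz) = plane
    return nx * (q[0] - px) + ny * (q[1] - py) + nz * (q[2] - pz)

def camera_coverage(planes, q):
    """f_i(q) = min over clipping planes of d_P(q): positive inside the
    (convex) frustum, zero on its boundary, negative outside. Only the
    positive values are exact Euclidean distances."""
    return min(plane_signed_distance(p, q) for p in planes)

def f_k(cameras, q, k):
    """k-th largest per-camera coverage value; f_k(q) >= 0 iff q is
    k-covered by the camera set."""
    values = sorted((camera_coverage(c, q) for c in cameras), reverse=True)
    return values[k - 1]

# Toy configuration: three identical unit-cube "frusta" (each face normal
# points inward), so the whole cube is 3-covered.
cube = [((1, 0, 0), (0, 0, 0)), ((-1, 0, 0), (1, 1, 1)),
        ((0, 1, 0), (0, 0, 0)), ((0, -1, 0), (1, 1, 1)),
        ((0, 0, 1), (0, 0, 0)), ((0, 0, -1), (1, 1, 1))]
cameras = [cube, cube, cube]

print(f_k(cameras, (0.5, 0.5, 0.5), k=3) > 0)   # True: center is 3-covered
print(f_k(cameras, (2.0, 0.5, 0.5), k=3) < 0)   # True: outside point is not
```

Sorting the n coverage values is enough here; a selection algorithm would give the k-th largest value in O(n) as the text's complexity remark suggests.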
Thus, a point q is k-covered if there is a k-tuple of cameras whose frustum intersection contains that point. Those k-tuple intersections have convex boundaries because they are the convex hulls of
all the planes forming the frusta of the cameras in the k-tuple.
The capture volume is the union of these intersections. Thus, the distance function f_k describes a capture volume that is possibly concave prismatic and that might have disconnected components.

The distance function f_i of each sensor is evaluated only once. Since the function f_k evaluates each camera only once, it has O(n) complexity to compute. The function f_k is a pseudo signed distance function to B because it only does k-tuple-wise computations and does not consider their union. Its zero-contour (iso-surface for field value 0) is the boundary of the capture volume C_k.
At block 220, the k-coverage function f_k is sampled. Let h be the ADF obtained by sampling f_k with a user-defined sampling error ε and maximum depth d_max. The sampling begins with a root cell 410 that contains the frusta of the camera set C so as to encapsulate the entire capture volume. The root cell 410 has a depth of d = 0.
Starting from the root cell, the following process is applied to each cell. The function f_k is evaluated at the eight vertices located at the corners of the cell. Let h be a scalar field defined within the boundaries of the cell. This field may be computed via trilinear interpolation of the eight samples as described below. Next, it is verified that, within the cell, the scalar field h approximates the function f_k within the maximum sampling error ε. This may be done by computing the absolute value of the difference of the two fields at predefined test points such as the center of the cell. If one of these differences exceeds the sampling error ε and the depth d of the cell is less than d_max, then the cell is subdivided into eight child cells. The depth of these child cells is d+1. This algorithm is applied recursively to these child cells. A cell without children is referred to as a "leaf" cell.

This subdivision is repeated until the leaf cells can interpolate the distance function f_k within the predefined sampling error ε or until the maximum depth is reached. Thus, the sampling process adaptively subdivides the cells to keep the local interpolation error bounded by a user-defined tolerance.
Additional reference is made to FIGS. 4a-4d, which illustrate a recursive subdivision. FIG. 4a illustrates the root cell 410 of the ADF. FIG. 4b illustrates the root cell 410 subdivided into eight
child cells 420 having a depth d=1, and FIG. 4c illustrates one of the cells 420 further subdivided into cells 430 having a depth of d=2. Cells at the same depth (siblings) have the same size.
This recursive subdivision can be represented by a data structure such as a tree structure. For example, the recursive subdivision of FIGS. 4a-4c can be represented by an octree as illustrated in
FIG. 4d. The evaluation process traverses the tree quickly down to the leaf containing the input point and linearly interpolates the field at that location based on the sampled field values at the
eight vertices of the leaf.
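The subdivision loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the error test uses only the cell center, and the tree is a nested dict rather than a real octree structure.

```python
def trilinear_center(corner_values):
    # At the cell center every trilinear weight is 1/8.
    return sum(corner_values) / 8.0

def corners(origin, size):
    x, y, z = origin
    return [(x + dx * size, y + dy * size, z + dz * size)
            for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]

def build_adf(f, origin, size, eps, max_depth, depth=0):
    """Adaptively sample f over a cubic cell: keep the 8 corner samples
    if trilinear interpolation at the center is within eps of f, else
    split into 8 children (until max_depth)."""
    samples = [f(c) for c in corners(origin, size)]
    center = tuple(o + size / 2.0 for o in origin)
    if depth < max_depth and abs(trilinear_center(samples) - f(center)) > eps:
        half = size / 2.0
        children = [build_adf(f, (origin[0] + dx * half,
                                  origin[1] + dy * half,
                                  origin[2] + dz * half),
                              half, eps, max_depth, depth + 1)
                    for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
        return {"children": children}
    return {"samples": samples}

# A linear field is reproduced exactly, so the root stays a leaf ...
flat = build_adf(lambda p: p[0] + p[1], (0.0, 0.0, 0.0), 1.0, 1e-6, 4)
print("samples" in flat)     # True
# ... while a curved field forces subdivision.
curved = build_adf(lambda p: p[0] ** 2, (0.0, 0.0, 0.0), 1.0, 1e-3, 4)
print("children" in curved)  # True
```

This shows the detail-directed behavior the text describes: flat regions stop at shallow depth, curved regions subdivide until the error bound or the depth limit is met.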
Additional reference is made to FIG. 11. A distance field or distance field gradient (or both) may be interpolated within a cell. Each cell l has eight vertices located at its corners (q_0, q_1, q_2, q_3, q_4, q_5, q_6, q_7). These vertices are shared with the adjacent cells. A value of the distance function f_k is stored for each vertex location. This value was computed during the sampling step.
Distance fields may be interpolated as follows. Given a point p contained in the domain of an ADF h, h(p) is evaluated by trilinear interpolation of the sample values associated with the deepest leaf l containing p. Let Q = {q_0, q_1, . . . , q_7} and H = {h_0, h_1, . . . , h_7} be the set of vertices and associated field values of leaf l. The distance field h_l of leaf l is of the form

h_l(p) = Σ_{i ∈ [0,7]} w_i h_i

where (w_0, w_1, . . . , w_7) are the barycentric coordinates of p with respect to Q. A leaf l is mapped to the unit cube [0,1]^3 by defining a normalized coordinate system where its vertices have the following coordinates:

q_0 = (0,0,0), q_1 = (1,0,0), q_2 = (1,1,0), q_3 = (0,1,0),
q_4 = (0,0,1), q_5 = (1,0,1), q_6 = (1,1,1), q_7 = (0,1,1).

Let (x,y,z) be the normalized coordinates of point p in the leaf l. The formulae for the weights are:

w_0 = (1−x)(1−y)(1−z),  w_1 = x(1−y)(1−z),  w_2 = xy(1−z),  w_3 = (1−x)y(1−z),
w_4 = (1−x)(1−y)z,      w_5 = x(1−y)z,      w_6 = xyz,      w_7 = (1−x)yz,

with Σ_{i ∈ [0,7]} w_i = 1. The function h_l is defined only inside leaf l.
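A minimal sketch of these trilinear weights and the interpolation h_l(p) = Σ w_i h_i, using the vertex ordering listed above (illustrative function names, not from the patent):

```python
def trilinear_weights(x, y, z):
    """Barycentric weights of p = (x, y, z) in the unit cube [0,1]^3."""
    return [
        (1 - x) * (1 - y) * (1 - z),  # w0 at q0 = (0,0,0)
        x * (1 - y) * (1 - z),        # w1 at q1 = (1,0,0)
        x * y * (1 - z),              # w2 at q2 = (1,1,0)
        (1 - x) * y * (1 - z),        # w3 at q3 = (0,1,0)
        (1 - x) * (1 - y) * z,        # w4 at q4 = (0,0,1)
        x * (1 - y) * z,              # w5 at q5 = (1,0,1)
        x * y * z,                    # w6 at q6 = (1,1,1)
        (1 - x) * y * z,              # w7 at q7 = (0,1,1)
    ]

def interpolate(field_values, x, y, z):
    """h_l(p) = sum_i w_i * h_i over the eight corner samples."""
    return sum(w * h for w, h in zip(trilinear_weights(x, y, z), field_values))

w = trilinear_weights(0.2, 0.7, 0.4)
print(abs(sum(w) - 1.0) < 1e-12)   # True: barycentric weights sum to 1
# Interpolating corner samples of a linear field reproduces it exactly:
h = [0, 1, 2, 1, 3, 4, 5, 4]       # samples of x + y + 3z at q0..q7
print(abs(interpolate(h, 0.2, 0.7, 0.4) - (0.2 + 0.7 + 3 * 0.4)) < 1e-12)
```

The exact reproduction of linear fields is why ADF cells over flat regions of the distance field need no further subdivision.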
Distance field gradients may be interpolated as follows. Evaluating the gradient ∇h(p) is very similar to evaluating h(p). The leaf l containing point p is located. The gradient of h_l, which is of the following form, is evaluated:

∇h(p) = [∂h/∂x(p), ∂h/∂y(p), ∂h/∂z(p)]^T

where

∂h/∂x(p) = Σ_{i ∈ [0,7]} (∂w_i/∂x) h_i,  ∂h/∂y(p) = Σ_{i ∈ [0,7]} (∂w_i/∂y) h_i,  ∂h/∂z(p) = Σ_{i ∈ [0,7]} (∂w_i/∂z) h_i,

with the weight derivatives given in Table 1.

TABLE 1
∂w_0/∂x = y + z − (yz + 1)    ∂w_4/∂x = yz − z
∂w_1/∂x = yz + 1 − (y + z)    ∂w_5/∂x = z − yz
∂w_2/∂x = y − yz              ∂w_6/∂x = yz
∂w_3/∂x = yz − y              ∂w_7/∂x = −yz
∂w_0/∂y = x + z − (xz + 1)    ∂w_4/∂y = xz − z
∂w_1/∂y = xz − x              ∂w_5/∂y = −xz
∂w_2/∂y = x − xz              ∂w_6/∂y = xz
∂w_3/∂y = xz + 1 − (x + z)    ∂w_7/∂y = z − xz
∂w_0/∂z = x + y − (xy + 1)    ∂w_4/∂z = xy + 1 − (x + y)
∂w_1/∂z = xy − x              ∂w_5/∂z = x − xy
∂w_2/∂z = −xy                 ∂w_6/∂z = xy
∂w_3/∂z = xy − y              ∂w_7/∂z = y − xy
The evaluation of h(p) and ∇h(p) can be efficiently combined in the same routine since the distance fields and gradients share the same octree traversal and barycentric coordinates computation.
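As a sketch, the x-column of the Table 1 derivatives can be checked against numerical differentiation of the trilinear weights (illustrative code with arbitrary sample values, not from the patent):

```python
def trilinear_weights(x, y, z):
    """Trilinear weights for vertices q0..q7 of the unit cube."""
    return [(1-x)*(1-y)*(1-z), x*(1-y)*(1-z), x*y*(1-z), (1-x)*y*(1-z),
            (1-x)*(1-y)*z,     x*(1-y)*z,     x*y*z,     (1-x)*y*z]

def dweights_dx(x, y, z):
    """Table 1, x column: e.g. dw0/dx = y + z - (y*z + 1)."""
    return [y + z - (y*z + 1), y*z + 1 - (y + z), y - y*z, y*z - y,
            y*z - z,           z - y*z,           y*z,     -y*z]

def grad_h_x(h, x, y, z):
    """dh/dx(p) = sum_i (dw_i/dx) * h_i."""
    return sum(d * hi for d, hi in zip(dweights_dx(x, y, z), h))

h = [0.3, 1.1, 0.7, 2.0, 0.9, 1.4, 0.2, 0.5]  # arbitrary corner samples
x, y, z, eps = 0.3, 0.6, 0.8, 1e-6
numeric = (sum(w*hi for w, hi in zip(trilinear_weights(x + eps, y, z), h)) -
           sum(w*hi for w, hi in zip(trilinear_weights(x - eps, y, z), h))) / (2 * eps)
print(abs(grad_h_x(h, x, y, z) - numeric) < 1e-6)   # True
```

Since the weights and their derivatives share the same (x, y, z) products, a combined routine can compute h(p) and ∇h(p) in one pass, as the text notes.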
ADFs are only C^0
continuous because of their linear interpolation scheme. They can faithfully reproduce fields with zero or small local curvatures. Areas of high curvature force the sampling algorithm to perform fine
subdivision to meet the approximation tolerance.
The combination of detail directed sampling and the use of a spatial hierarchy for data storage allows ADFs to represent complex shapes to arbitrary precision while permitting efficient processing.
ADFs can approximate complex distance fields by sampling them as a preprocessing step and replacing them in computations that require fast distance evaluations. ADFs are also a convenient neutral
format to represent distance fields sampled from various solid representations such as polygonal meshes, parametric surfaces and constructive geometry.
At block 230, the data structure can be stored in computer memory for later use. For instance, the data structure may be saved as an XML file.
The sampling method just described is for adaptive distance fields. However, a method according to the present invention is not so limited. In some embodiments, non-adaptive (i.e., uniform) sampling could be performed by setting the maximum sampling error to zero. In some embodiments, sampling could be performed using a uniform grid. In some embodiments, cells might have shapes (e.g., tetrahedra) other than cubes. In some embodiments, interpolation other than trilinear interpolation may be used. In some embodiments, gradients could be stored and fit to quadric surfaces. In general, other sampling methods may be used to approximate the distance function f_k.
A method according to an embodiment of the present invention is not limited to signed distance functions. In some embodiments, positive distance functions may be used. However, positive distance
functions do not indicate whether a point is inside or outside a motion capture volume.
The method of FIG. 2 may be modified in other ways. In some embodiments, that part of the capture volume below the floor plane (z=0) may be clipped off since it is not in free space. Thus, the sampling is performed on g(q) = min(f_k(q), q.z), which describes the intersection between the half-space above z=0 and the capture volume described by f_k.
The stored data structure is an approximation of the distance to a boundary of the motion capture volume. In some embodiments, however, the data structure may be further refined to determine a true
signed distance function to a boundary of a motion capture volume.
Reference is made to FIG. 5, which illustrates a method of using a data structure to determine a true signed distance function to a boundary of a motion capture volume. Let f* be the true signed distance function to the capture volume boundary B. The distance field can be derived by re-sampling a representation of the boundary B. To do so, the boundary B is assumed to be orientable within the domain (the root voxel) of h. This means that each point in the domain is either inside or outside the capture volume and that every path between an inside point and an outside point goes through the boundary B.
At block 510, one or more iso-surfaces are extracted from the modeled capture volume. A suitable iso-surface extraction method can produce a triangular mesh that approximates the iso-surface. In some
embodiments, however, this step may simply involve accessing a mesh model that was generated by some other means.
At block 520, the true distance function to the capture volume boundary is approximated by modeling the iso-surface with a distance field. In certain embodiments this intermediate distance function could use an oriented bounding box tree (OBBTree) to find the nearest triangle to the point at which the function f* is evaluated. This re-sampling yields a second ADF h*. This second ADF h* provides a true signed distance function to a boundary of a motion capture volume.
Reference is made to FIG. 6. A method in accordance with an embodiment of the present invention may be implemented in a machine such as a computer 610. The computer 610 includes memory 620 that is encoded with a program 630. The program 630, when executed, causes the computer 610 to generate a computational model of a k-covered motion capture volume based on a configuration of motion capture sensors.

A modeled capture volume can be used for a wide range of applications. Two applications will be described below: a tool for motion capture, and a control for containing a vehicle within a closed volume.
Reference is made to FIGS. 7 and 8, which illustrate a system 710 and tool 810 for motion capture. The system 710 includes a plurality of sensors 720, a motion capture computer 730, and a display
740. The sensors 720 have a particular configuration (placement, field of view, orientation, etc.), and the computer 730 stores a parameter file 732 describing the sensor configuration.
A desired coverage (i.e., the value of k) can be inputted to the computer 730. The computer 730 executes a program 734 that accesses the parameter file 732, models a k-coverage capture volume based
on that file 732, and renders the modeled k-coverage volume.
ADF allows quick reconstruction of the model. For a model that is stored in an octree, distance values within a cell may be reconstructed from the eight corner distance values stored per cell using
standard trilinear interpolation. In addition to distance values, an operation such as rendering may also require a method for estimating surface normals from the sampled data. For distance fields,
the surface normal is equal to the normalized gradient of the distance field at the surface.
The program 734 can also cause the computer 730 to display a user interface of a motion capture tool 810. The user interface, in turn, may display an iso-surface of the modeled motion capture volume.
The user interface allows a user to vary the sensor configuration. In response, the computer 730 may update the model and render the updated model.
The motion capture tool 810 can also compute the volume and bounding box of the motion capture volume. The computer 730 can then execute an algorithm that determines a sensor configuration with
respect to the bounding box. For example, the computer 730 determines a number of sensors 720 that will be used, and also identifies sensor locations on the bounding box of the desired capture
volume. For a motion capture stage, these locations may be on the floor plane for tripod mounted cameras or above ground, on walls, ceilings, dedicated struts, etc.
The motion capture tool 810 can be used to assess the quality of camera placement for various levels of k-coverage. Users can use the feedback of this tool 810 to adjust the number, location and
orientation of the cameras.
The motion capture tool 810 allows overlapping sensor coverage (or lack thereof) to be visualized. By varying the configuration, disjoint volumes can be merged to create a single volume with
continuous coverage. And by varying the configuration, sensor overlap can be reduced to increase the volume that is covered.
Generating a computational model has other uses. For instance, motion capture activities can be limited to the computed volume. Moreover, sensor configurations can be evaluated prior to setup.
Reference is now made to FIG. 9. A computational model of a k-covered volume can be used by a vehicle control system 910. The system 910 includes an intelligent agent 922 for controlling the
operation of a vehicle 920 within a k-covered capture volume. In some embodiments, the intelligent agent 922 may include a microprocessor-based control that is on-board the vehicle 920 (as shown in
FIG. 9). In other embodiments, the intelligent agent 922 may be in a computer (e.g., a flight computer) that is not on-board the vehicle 920.
The system 910 further includes a plurality of motion capture cameras 930 for collecting real time images of the vehicle 920, and a motion capture computer 940 for processing the real time images of
the vehicle 920. Active or passive markers on the vehicle 920 may assist with the tracking.
The motion capture computer 940 tracks the vehicle 920 and computes tracking information such as real-time position and attitude. For instance, the motion capture computer 940 may triangulate the
position of the markers attached to the vehicle 920 by intersecting lines of sight between several cameras 930. The motion capture computer 940 may then cluster the markers, and compare the clustered
markers to known marker configurations (i.e., vehicle models). The motion capture computer 940 may perform filtering and other signal processing as well.
The motion capture computer 940 sends the tracking information to the intelligent agent 922. A transmitter 950 may be used to up-link the tracking information to an agent 922 that is on-board the
vehicle 920.
The intelligent agent 922 uses the tracking information to control the operation of the vehicle 920. The intelligent agent 922 may use position, orientation, linear and angular velocity data, etc.,
to calculate error signals, which are then multiplied by feedback gain values, and then used to generate the control values for the vehicle's inner control loop and servos.
Reference is now made to FIG. 10. The intelligent agent 922 may store one or more k-coverage models of the motion capture volume and use those models to confine the vehicle 920 to move autonomously or under remote operation inside the capture volume. When the vehicle 920 comes within a preset distance of the boundary of the capture volume, the intelligent agent 922 accesses the distance and gradient vector from the cell corresponding to the location of the vehicle 920 (block 1010). A command is generated from the distance field and field gradient (block 1020). The command causes the vehicle to move in the direction of the gradient, and its magnitude increases as the vehicle approaches the boundary. The gradient provides guidance to stay in the capture volume. It can be construed as a control input because the control loop will command the vehicle's velocity to line up with the gradient.
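As a rough illustration of blocks 1010 and 1020, the command generation might look like the following. This is a sketch with made-up gains and thresholds, not the patent's control law:

```python
def containment_command(distance, gradient, activation_dist=1.0, gain=2.0):
    """Return a velocity command (vx, vy, vz) pushing the vehicle back
    toward the interior of the capture volume. 'distance' is the signed
    distance to the boundary (positive inside); the command is zero when
    the vehicle is comfortably inside, and its magnitude grows as the
    distance shrinks toward the boundary."""
    if distance >= activation_dist:
        return (0.0, 0.0, 0.0)
    norm = sum(g * g for g in gradient) ** 0.5 or 1.0
    magnitude = gain * (activation_dist - distance)  # grows near boundary
    return tuple(magnitude * g / norm for g in gradient)

print(containment_command(2.0, (0.0, 0.0, 1.0)))  # deep inside: (0.0, 0.0, 0.0)
cmd = containment_command(0.1, (0.0, 0.0, 1.0))
print(cmd[2] > 0)                                 # True: pushed along the gradient
```

In a real system this velocity command would feed the vehicle's inner control loop, and the distance/gradient pair would come from the interpolated ADF cell rather than being passed in directly.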
Each k-coverage model represents a different coverage. For instance, the intelligent agent 922 might store 3-coverage, 4-coverage and 5-coverage models of the capture volume. A user may choose a
distance field and field gradient from a model with a conservative k-coverage setting, such as five cameras. If the vehicle 920 escapes the 5-covered capture volume, it may still remain controllable
within a k-1 or k-2 covered volume.
The system 910 is not limited to any type of vehicle. As but one example, the system 910 can be used to control the flight of an electric helicopter within a motion capture volume. Helicopter
operation is confined to a k-covered motion capture volume because its control relies on at least k cameras to continuously track the helicopter and maintain stable operation. Typically, a minimum of
three cameras is used to track the helicopter at any point in 3D space. The motion capture computer 940 can provide real-time position and attitude information of the helicopter. The intelligent
agent 922 uses one or more k-coverage models to keep the helicopter within the capture volume.
In another embodiment of the vehicle control system, the vehicle may be teleoperated instead of controlled by an on-board intelligent agent. Such a vehicle includes servos and a control loop that are
responsive to commands transmitted by an RC transmitter. The commands may be generated in response to joystick movements.
However, when the vehicle moves outside or near the boundary of the capture volume, the motion capture computer uses the modeled capture volume to compute a command based on the field intensity and gradient at the cell where the vehicle is located. The command is up-linked to the vehicle's servos and overrides any other command.
In both embodiments, the inner control loop and the containment field form a fly-by-wire system that provides augmented stability and envelope protection. Envelope protection limits the velocity of
the vehicle and keeps it in the capture volume. Limiting the velocity prevents the helicopter from ramming through the envelope.
Patent applications by Charles A. Erignac, Seattle, WA US
Patent applications by The Boeing Company
Constant Acceleration Motion
Calculus Application for Constant Acceleration
The motion equations for the case of constant acceleration can be developed by integration of the acceleration. The process can be reversed by taking successive derivatives.
On the left hand side above, the constant acceleration is integrated to obtain the velocity. For this indefinite integral, there is a constant of integration. But in this physical case, the constant
of integration has a very definite meaning and can be determined as an initial condition on the movement. Note that if you set t=0, then v = v[0], the initial value of the velocity. Likewise the
further integration of the velocity to get an expression for the position gives a constant of integration. Checking the case where t=0 shows us that the constant of integration is the initial
position x[0]. It is true as a general property that when you integrate a second derivative of a quantity to get an expression for the quantity, you will have to provide the values of two constants
of integration. In this case their specific meanings are the initial conditions on the distance and velocity.
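A numerical sketch of the same integrations (with freely chosen values for a, v[0] and x[0]): trapezoidal integration of a constant acceleration recovers v = v[0] + at and x = x[0] + v[0]t + at²/2.

```python
# Constant acceleration, integrated numerically from the initial conditions.
a, v0, x0 = 3.0, 2.0, 5.0
t_end, n = 4.0, 100000
dt = t_end / n

v, x = v0, x0
for _ in range(n):
    x += (v + (v + a * dt)) / 2.0 * dt  # trapezoid rule for the position
    v += a * dt                         # exact for constant acceleration

# Compare with the closed-form motion equations:
print(abs(v - (v0 + a * t_end)) < 1e-6)                        # True
print(abs(x - (x0 + v0 * t_end + a * t_end**2 / 2)) < 1e-4)    # True
```

The trapezoid rule is exact here because the velocity is linear in time; the only residual error is floating-point accumulation.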
CNN - Convolutional neural network class - File Exchange - MATLAB Central
Comments and Ratings (37)
27 Jan 2014: I have a problem with understanding the actual architecture of this network. Usually, LeCun et al have used different weights for the connections from different feature maps of a previous layer (something that looks like a 3D kernel). Therefore, the number of weights of a convolution layer (assuming a full map of connections) is kernelHeight*kernelWidth*numFeatMapsLayer(k)*numFeatMapsLayer(k-1). I am not sure, but it seems that there are kernelHeight*kernelWidth*numFeatMapsLayer(k) different weights used in this program. Does it mean that the connections from different feature maps of a previous layer to a particular feature map of the next layer have the same weights? Or maybe I misunderstand something?

11 Jul 2013: I want to use this toolbox on my own sound data set. Each spectrogram, equivalent to 1 image, has dimensions 13x500, whereas each image in MNIST has dimensions 28x28. I want to change the input width and height to 13 and 500 respectively. How do I go about doing so?

24 May 2013: I applied it to traffic sign recognition; it classified all classes to the same class. Have you had any experience with this - if so what parameters might you suggest I change? How can I train the cnn with the features extracted from the images instead of the images themselves? Where should I change?

29: Hello, I have run train_cnn on Octave; however, I got an error message "max_recursion_depth exceeded". Does anybody know/experience the same problem? How to solve it? Thanks.

07: In this pure matlab version there's no max pooling. But you can find it in the C++/CUDA version, which is on the bitbucket repository.

06: I've been searching around for a bit but I can't find the max pooling layer code. Can someone point me in the right direction?

29 Oct 2012: Hi, I am running the demo train_cnn which is provided to train the network from Dr. LeCun's paper. I do full training for over 20 epochs. The best MCR I get is .12, which corresponds to 12% error. How can I improve this? (The goal is to get .8% error.) Thanks.

23 Oct 2012: 2 Eliud Gonzalez:
1. The output of CNN is saved to workspace variable sinet. You can use matlab function "save" to save it on disk.
2. Hessian computation is controlled on line 147:
sinet.HcalcMode = 0;
Default value 0 means a running estimate of the hessian. In this case you'll not notice it in the gui. If you change it to 1, the gui progress bar will show the hessian recomputation.

22: Thanks, when training the CNN, how do I save it, or does it save itself? And why is the Hessian calculation always 0?

16 Oct 2012: 2 Eliud Gonzalez: usually this means that you don't have the MNIST dataset. It is not distributed with the library, but you can download it from http://yann.lecun.com/exdb/mnist/ Please download it and put the files into the MNIST folder.

16 Oct 2012: I get these errors when running train_cnn.m:
??? Error using ==> fread
Invalid file identifier. Use fopen to generate a valid file identifier.
Error in ==> readMNIST at 25
magicNum = fread(fid,1,'int32'); %Magic number
Error in ==> train_cnn at 30
[I,labels,I_test,labels_test] = readMNIST(1000);

16: What can I do if I open train_cnn.m and click run, and nothing happens? Do I need to install CUDA? I'm sorry, I'm new in this world :(

15 Oct 2012: 2 yin: yes, it can. Actually you can feed in any data, not only images.
2 Xiaolei Hu: As far as I know there's no support of Conv Nets in the latest Matlab NN Toolbox.

12: Does anybody know whether the latest Matlab has changed its NN toolbox so CNN can be implemented?

27: Can the cnn train with the features extracted from the images instead of the images themselves?

16 Mar 2012: Unfortunately I don't have Matlab 2011b. It seems that they changed something in the NN toolbox. I'm going to remove the dependencies on the NN toolbox in the next release, which I'm trying to finalize now.

02 Mar 2012: I try to train the network but I get an error:
Undefined function or variable 'boiler_transfer'.
Error in tansig_mod (line 64)
I am using MATLAB R2011b. Any help?

06: A full training cycle needs about 10 epochs on the full training set with progressively decreasing theta. As far as I understand you did 1 epoch.

28 Jan 2012: This is a great job. However I have a question. I tested this program using the MNIST handwritten digit database. The mcr rate is very high (about 15%) even when I train the cnn using 10000 inputs. If I tried to train the cnn using 60000 inputs, the program would take a fairly long time, about several hours, to finish. Anyway, the mcr is always about 15%. I wonder what parameter we should change to make the prediction accuracy higher. Thanks a lot.

12 Jan 2012: Regarding several last questions: I'm going to put out a new release of the lib which will contain an example with faces and will be more convenient to use with various data. It already works in general, but I need a week or two more to finalize it since I do it in my free time.

11 Jan 2012: hi, please can any one help us to adapt this code to face recognition? thanks

13: Seeing it now... looks great :) Is it possible to load a jpg or png in some way? What would be required?

03 Aug 2011: 2 Gaurav: It means that file train_cnn.m has comments for almost every line of code, so you can find the parameter you want to change and actually change it before starting training. Basically train_cnn.m contains settings for the network architecture (number of layers, number of neurons etc).

02 Aug 2011: In the read me file, to train the network: "If you are interested in training you should open train_cnn.m, set all parameters following the comments and start learning by running it." I can't understand the "set all parameters following the comments and start learning by running it" part. Please help.

10: Is it possible to train using color images? (From the example it says it is not.) However, the version from Nikolay Chumerin can be trained using color patches.

15 Oct 2010: Thanks! Actually I had plans to adapt this to face recognition, but unfortunately they are still only plans, because now I have absolutely no time to develop it. So I'd advise looking closely at preprocessing algorithms and trying to make the architecture similar to what the face detection papers advise.

Oct 2010: Excellent work! I've attempted to get this working on face recognition but without success. The MCR remains very high. Have you had any experience with this - if so what parameters might you suggest I change?

29: It's ok, but only numbers. High quality.

23 Nov 2009: Thanks a lot for the valuable comment. The first bug is a result of merging different versions of CNN used in my experiments. About the second bug, the problem is in the error calculation. In the new version I'll fix these bugs.

22 Nov 2009: 22.10.2009 - in current Ver_0.72:
- in file train_cnn.m Conv. Layer 4 should have 16 kernels, not 6 (sinet.CLayer{4}.numKernels = 6), or sinet.CLayer{4}.ConMap shouldn't be 6x16 in size.
- net training result doesn't return the "minimal error" net state, i.e. I can see lower error during training than the final one is. However, I apply CNN to a different task with my own image set.

30 Sep 2009: Well done! Good job. I think this is the first publicly available implementation of CNN training in Matlab. The source code is written in a pretty good style with extensive comments, which are really useful for such complex classes. This submission is an asset for the computer vision Matlab community.

28 May 2009: New version coming soon. It will include:
1. Trained network for experiments.
2. Simple GUI, visualising digit recognition.
3. Improved performance.
4. Independence from NN Toolbox.
Odd prime division problem
January 30th 2008, 09:34 AM
Odd prime division problem
Prove that if $p \neq 5$ is an odd prime, then either $p^2-1$ or $p^2+1$ is divisible by 10.
My proof so far:
If $10 | p^2-1$, then we are done. Assume 10 doesn't divide $p^2-1$.
Let p=2n+1 for some integer n. Then $p^2+1 = (2n+1)^2+1=4n^2+4n+1+1=4n^2+4n+2$, so it is divisible by 2.
January 30th 2008, 09:49 AM
Prove that if $p \neq 5$ is an odd prime, then either $p^2-1$ or $p^2+1$ is divisible by 10.
My proof so far:
If $10 | p^2-1$, then we are done. Assume 10 doesn't divide $p^2-1$.
Let p=2n+1 for some integer n. Then $p^2+1 = (2n+1)^2+1=4n^2+4n+1+1=4n^2+4n+2$, so it is divisible by 2.
For an odd prime, $p \equiv 1,\ 3,\ 5,\ 7,\ \mbox{ or }\ 9 \mod 10$; so $p^2 \equiv 1,\ 5,\ \mbox{ or }\ 9 \mod 10$.
But as $p$ is a prime other than $5,\ p \not\equiv 5 \mod 10$ and so $p^2 \not\equiv 5 \mod 10$. Hence
$p^2 \equiv 1\ \mbox{ or }\ 9 \mod 10,$
which proves the required result.
January 30th 2008, 10:19 AM
So far our course have not introduce mod yet, is it possible to do this by other means? thank you.
January 30th 2008, 02:48 PM
Prove that if $p \neq 5$ is an odd prime, then either $p^2-1$ or $p^2+1$ is divisible by 10.
My proof so far:
If $10 | p^2-1$, then we are done. Assume 10 doesn't divide $p^2-1$.
Let p=2n+1 for some integer n. Then $p^2+1 = (2n+1)^2+1=4n^2+4n+1+1=4n^2+4n+2$, so it is divisible by 2.
A number has one of the forms: $10k,10k+1,...,10k+9$.
Now examine each of these forms.
1) $10k$ this is never prime.
2) $10k+2 = 2(5k+1)$ this is only prime for $k=0$, but the prime is odd so it cannot have this form.
3) $10k+3$ since $3,10$ have no common factors this is a possible prime.
4) $10k+4 = 2(5k+2)$ this is never prime.
5) $10k+5 = 5(2k+1)$ this is only prime at $k=0$ and that prime is $5$, which cannot be because $p \neq 5$.
6) $10k+6 = 2(5k+3)$ this is never prime.
7) $10k+7$ a possible prime.
8) $10k+8 = 2(5k+4)$ this is never prime.
9) $10k+9$ a possible prime.
Which means $p$ has one of three forms: $10k+3,10k+7,10k+9$. If you substitute that into $p^2 \pm 1$ you will see that one of $p^2-1$ and $p^2+1$ is
divisible by $10$. For example, $(10k+3)^2 = 10(10k^2+6k)+9$, so $p^2 + 1$ is divisible. And so on.
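The case analysis can also be checked mechanically. Here is a small brute-force script (Python, a sketch that is not part of the original thread) verifying the claim for all odd primes below 1000:

```python
# Exhaustive check: for every odd prime p != 5, one of
# p^2 - 1 or p^2 + 1 is divisible by 10.

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for p in range(3, 1000, 2):          # odd numbers only
    if p == 5 or not is_prime(p):
        continue
    assert (p * p - 1) % 10 == 0 or (p * p + 1) % 10 == 0, p

print("verified for all odd primes p != 5 below 1000")
```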
I just realized I forgot $10k+1$ - but you get the idea. And do not be afraid of this argument, it might look long but that is because I explained it in detail. | {"url":"http://mathhelpforum.com/number-theory/27100-odd-prime-division-problem-print.html","timestamp":"2014-04-17T05:10:26Z","content_type":null,"content_length":"15125","record_id":"<urn:uuid:112d4a42-6b44-4bab-8ef5-a253cdd4d30f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
Regular languages are testable with a constant number of queries
Results 1 - 10 of 58
- Combinatorica
"... Let P be a property of graphs. An ε-test for P is a randomized algorithm which, given the ability to make queries whether a desired pair of vertices of an input graph G with n vertices are
adjacent or not, distinguishes, with high probability, between the case of G satisfying P and the case that it h ..."
Cited by 157 (44 self)
Add to MetaCart
Let P be a property of graphs. An ε-test for P is a randomized algorithm which, given the ability to make queries whether a desired pair of vertices of an input graph G with n vertices are adjacent or
not, distinguishes, with high probability, between the case of G satisfying P and the case that it has to be modified by adding and removing more than εn² edges to make it satisfy P. The property P
is called testable, if for every ε > 0 there exists an ε-test for P whose total number of queries is independent of the size of the input graph. Goldreich, Goldwasser and Ron [8] showed that certain graph
properties admit an ε-test. In this paper we make a first step towards a logical characterization of all testable graph properties, and show that properties describable by a very general type of
coloring problem are testable. We use this theorem to prove that first order graph properties not containing a quantifier alternation of type "∀∃" are always testable, while we show that some
properties containing this alternation are not. Our results are proven using a combinatorial lemma, a special case of which, that may be of independent interest, is the following. A graph H is called
ε-unavoidable in G if all graphs that differ from G in no more than ε|G|² places contain an induced copy of H. A graph H is called δ-abundant in G if G contains at least δ|G|^|H| induced copies of H.
If H is ε-unavoidable in G then it is also δ(ε, |H|)-abundant.
- Science , 2001
"... Property testing is a new field in computational theory, that deals with the information that can be deduced from the input where the number of allowable queries (reads from the input) is
significantly smaller than its size. ..."
Cited by 128 (20 self)
Add to MetaCart
Property testing is a new field in computational theory, that deals with the information that can be deduced from the input where the number of allowable queries (reads from the input) is
significantly smaller than its size.
- Proc. of FOCS 2005 , 2005
"... The problem of characterizing all the testable graph properties is considered by many to be the most important open problem in the area of property-testing. Our main result in this paper is a
solution of an important special case of this general problem; Call a property tester oblivious if its decis ..."
Cited by 91 (16 self)
Add to MetaCart
The problem of characterizing all the testable graph properties is considered by many to be the most important open problem in the area of property-testing. Our main result in this paper is a
solution of an important special case of this general problem; Call a property tester oblivious if its decisions are independent of the size of the input graph. We show that a graph property P has an
oblivious one-sided error tester, if and only if P is (almost) hereditary. We stress that any "natural" property that can be tested (either with one-sided or with two-sided error) can be tested by
an oblivious tester. In particular, all the testers studied thus far in the literature were oblivious. Our main result can thus be considered as a precise characterization of the "natural" graph
properties, which are testable with one-sided error. One of the main technical contributions of this paper is in showing that any hereditary graph property can be tested with one-sided error. This
general result contains as a special case all the previous results about testing graph properties with one-sided error. These include the results of [20] and [5] about testing k-colorability, the
characterization of [21] of the graph-partitioning problems that are testable with one-sided error, the induced vertex colorability properties of [3], the induced edge colorability properties of
[14], a transformation from two-sided to one-sided error testing [21], as well as a recent result about testing monotone graph properties [10]. More importantly, as a special case of our main result,
we infer that some of the most well studied graph properties, both in graph theory and computer science, are testable with one-sided error. Some of these properties are the well known graph
properties of being Perfect, Chordal, Interval, Comparability, Permutation and more. None of these properties was previously known to be testable. 1
- Handbook of Randomized Computing, Vol. II , 2000
"... this technical aspect (as in the bounded-degree model the closest graph having the property must have at most dN edges and degree bound d as well). ..."
Cited by 76 (10 self)
Add to MetaCart
this technical aspect (as in the bounded-degree model the closest graph having the property must have at most dN edges and degree bound d as well).
- Proc. of STOC 2006 , 2006
"... A common thread in all the recent results concerning testing dense graphs is the use of Szemerédi’s regularity lemma. In this paper we show that in some sense this is not a coincidence. Our
first result is that the property defined by having any given Szemerédi-partition is testable with a constant ..."
Cited by 69 (14 self)
Add to MetaCart
A common thread in all the recent results concerning testing dense graphs is the use of Szemerédi’s regularity lemma. In this paper we show that in some sense this is not a coincidence. Our first
result is that the property defined by having any given Szemerédi-partition is testable with a constant number of queries. Our second and main result is a purely combinatorial characterization of the
graph properties that are testable with a constant number of queries. This characterization (roughly) says that a graph property P can be tested with a constant number of queries if and only if
testing P can be reduced to testing the property of satisfying one of finitely many Szemerédi-partitions. This means that in some sense, testing for Szemerédi-partitions is as hard as testing any
testable graph property. We thus resolve one of the main open problems in the area of property-testing, which was first raised in the 1996 paper of Goldreich, Goldwasser and Ron [24] that initiated
the study of graph property-testing. This characterization also gives an intuitive explanation as to what makes a graph property testable.
- In Proc. 41th Annu. IEEE Sympos. Found. Comput. Sci , 2000
"... A set X of points in ℝ^d is (k, b)-clusterable if X can be partitioned into k subsets (clusters) so that the diameter (alternatively, the radius) of each cluster is at most b. We present
algorithms that by sampling from a set X, distinguish between the case that X is (k, b)-clusterable and the ca ..."
Cited by 60 (13 self)
Add to MetaCart
A set X of points in ℝ^d is (k, b)-clusterable if X can be partitioned into k subsets (clusters) so that the diameter (alternatively, the radius) of each cluster is at most b. We present algorithms
that by sampling from a set X, distinguish between the case that X is (k, b)-clusterable and the case that X is ε-far from being (k, b′)-clusterable for any given 0 < ε ≤ 1 and for b′ ≥ b. By
ε-far from being (k, b′)-clusterable we mean that more than ε·|X| points should be removed from X so that it becomes (k, b′)-clusterable. We give algorithms for a variety of cost
measures that use a sample of size independent of |X|, and polynomial in k and 1/ε. Our algorithms can also be used to find approximately good clusterings. Namely, these are clusterings of all but
an ε-fraction of the points in X that have optimal (or close to optimal) cost. The benefit of our algorithms is that they construct an implicit representation of such clusterings in time
- In Proc. 35th ACM Symp. on Theory of Computing , 2003
"... Abstract. For a Boolean formula ϕ on n variables, the associated property Pϕ is the collection of n-bit strings that satisfy ϕ. We study the query complexity of tests that distinguish (with high
probability) between strings in Pϕ and strings that are far from Pϕ in Hamming distance. We prove that th ..."
Cited by 58 (11 self)
Add to MetaCart
Abstract. For a Boolean formula ϕ on n variables, the associated property Pϕ is the collection of n-bit strings that satisfy ϕ. We study the query complexity of tests that distinguish (with high
probability) between strings in Pϕ and strings that are far from Pϕ in Hamming distance. We prove that there are 3CNF formulae (with O(n) clauses) such that testing for the associated property
requires Ω(n) queries, even with adaptive tests. This contrasts with 2CNF formulae, whose associated properties are always testable with O(√n) queries [E. Fischer et al., Monotonicity testing over
general poset domains, in Proceedings of the 34th Annual ACM Symposium on Theory of Computing, ACM, New York, 2002, pp. 474–483]. Notice that for every negative instance (i.e., an assignment that
does not satisfy ϕ) there are three bit queries that witness this fact. Nevertheless, finding such a short witness requires reading a constant fraction of the input, even when the input is very far
from satisfying the formula that is associated with the property. A property is linear if its elements form a linear space. We provide sufficient conditions for linear properties to be hard to test,
and in the course of the proof include the following observations which are of independent interest: 1. In the context of testing for linear properties, adaptive two-sided error tests have no more
power than nonadaptive one-sided error tests. Moreover, without loss of generality, any test for a linear property is a linear test. A linear test verifies that a portion of the input satisfies a set
of linear constraints, which define the property, and rejects if and only if it finds a falsified constraint. A linear test is by definition nonadaptive and, when applied to linear properties, has a
one-sided error. 2. Random low density parity check codes (which are known to have linear distance and constant rate) are not locally testable. In fact, testing such a code of length n requires Ω(n)
- Proc. of the 35 th Annual Symp. on Theory of Computing (STOC , 2003
"... Let H be a fixed directed graph on h vertices, let G be a directed graph on n vertices and suppose that at least εn² edges have to be deleted from it to make it H-free. We show that in this case
G contains at least f(ε, H)·n^h copies of H. This is proved by establishing a directed version of Sz ..."
Cited by 51 (17 self)
Add to MetaCart
Let H be a fixed directed graph on h vertices, let G be a directed graph on n vertices and suppose that at least εn² edges have to be deleted from it to make it H-free. We show that in this case G
contains at least f(ε, H)·n^h copies of H. This is proved by establishing a directed version of Szemerédi's regularity lemma, and implies that for every H there is a one-sided error property tester
whose query complexity is bounded by a function of ε only for testing the property P_H of being H-free.
- STOC'02 , 2002
"... The field of property testing studies algorithms that distinguish, using a small number of queries, between inputs which satisfy a given property, and those that are ‘far’ from satisfying the
property. Testing properties that are defined in terms of monotonicity has been extensively investigated, pr ..."
Cited by 48 (22 self)
Add to MetaCart
The field of property testing studies algorithms that distinguish, using a small number of queries, between inputs which satisfy a given property, and those that are ‘far’ from satisfying the
property. Testing properties that are defined in terms of monotonicity has been extensively investigated, primarily in the context of the monotonicity of a sequence of integers, or the monotonicity
of a function over the £-dimensional hypercube ¤¥¦§§ § ¦¨©�. These works resulted in monotonicity testers whose query complexity is at most polylogarithmic in the size of the domain. We show that in
its most general setting, testing that Boolean functions are close to monotone is equivalent, with respect to the number of required queries, to several other testing problems in logic and graph
theory. These problems include: testing that a Boolean assignment of variables is close to an assignment that satisfies a specific 2-CNF formula, testing that a set of vertices is close to one that
is a vertex cover of a specific graph, and testing that a set of vertices is close to a clique. We then investigate the query complexity of monotonicity testing of both Boolean and integer functions
over general partial orders. We give algorithms and lower bounds for the general problem, as well as for some interesting special cases. In proving a general lower bound, we construct graphs with
combinatorial properties that may be of independent interest.
- RANDOM-APPROX 2003
"... Abstract. We describe an efficient randomized algorithm to test if a given binary function f : {0,1}^n → {0,1} is a low-degree polynomial (that is, a sum of low-degree monomials). For a given
integer k ≥ 1 and a given real ε > 0, the algorithm queries f at a number of points that depends only on k and ε. If f is a ..."
Cited by 46 (7 self)
Add to MetaCart
Abstract. We describe an efficient randomized algorithm to test if a given binary function f : {0,1}^n → {0,1} is a low-degree polynomial (that is, a sum of low-degree monomials). For a given
integer k ≥ 1 and a given real ε > 0, the algorithm queries f at a number of points that depends only on k and ε. If f is a polynomial of degree at most k, the algorithm always accepts, and if the value of f has to be
modified on at least an ε fraction of all inputs in order to transform it to such a polynomial, then the algorithm rejects with probability at least 2/3. Our result is essentially tight: any
algorithm for testing degree-k polynomials must perform a number of queries exponential in k. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=275143","timestamp":"2014-04-16T14:14:45Z","content_type":null,"content_length":"41500","record_id":"<urn:uuid:f75dfe8f-3075-4795-aec7-7e9dba82d092>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
[Axiom-developer] Re: [Aldor-l] [Axiom-math] Are Fraction and Complex domains.
From: Ralf Hemmecke
Subject: [Axiom-developer] Re: [Aldor-l] [Axiom-math] Are Fraction and Complex domains.
Date: Mon, 15 May 2006 03:36:11 +0200
User-agent: Thunderbird 1.5.0.2 (X11/20060420)
Please, guys, explain clearly what you're after.
Trying to clarify what the Aldor compiler does and what it should do.
I do not take the compiler's behaviour as "God given".
Of course.
I need clear semantics.
Of course. But what Christian's program from
does is to exhibit a bug either in the Aldor compiler or the documentation. I don't think that one can find any better documentation than Section 7.3 "Type context" in http://www.aldor.org/
Interestingly, if the documentation were better, I would even say that despite the lines
local a == inc NUM;
local b == inc dec inc NUM;
stdout << "a : "; stdout << f()$Dom(a); stdout << newline;
stdout << "b : "; stdout << f()$Dom(b); stdout << newline;
give different output in Christian's program, the compiler still behaves functionally. The problem is that the documentation should be more precise about what "a equals b" in the above context actually means.
If we replace Integer by TextWriter in Christian's program and do something like
stdout << f()$Dom(stdout);
stdout << f()$Dom(stderr);
of course most people would say that it is clear that the output might be different because
"stdout is not equal to stderr". (*)
But how can one say that? stdout and stderr are of type TextWriter and that type does not export any equality. So the only chance one has to give meaning to (*) is to mean pointer equality. I cannot
remember that I have ever read this in the Aldor User Guide.
> If you're after a non-functional type system, please explain
clearly what they are useful for, with clear examples. Explain also
how one reasons with such type system, how one writes relaible program
with such a type system.
I, for my part, did never claim that I want a non-functional type system. I actually learned through this thread that a functional one would be a good thing.
I am sure you could give a reference to illustrative examples that show serious problems. Not everyone has studied those things in his/her career.
PS: Apart from clarification of functionality of the type system Christian's program also shows that its behaviour depends on the implementation of AldorInteger. One might get different results on 32
and 64 bit machines. It's already interesting that one has to take quite big integers to make a and b into "different" things.
| {"url":"http://lists.gnu.org/archive/html/axiom-developer/2006-05/msg00183.html","timestamp":"2014-04-17T00:52:17Z","content_type":null,"content_length":"12021","record_id":"<urn:uuid:e191c4b2-80a8-41c5-aac2-93807b721999>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Theory
In my last post on game theory, I said that you could find an optimal
probabilistic grand strategy for any two-player, simultaneous move, zero-sum game.
It's done through something called linear programming. But linear programming
is useful for a whole lot more than just game theory.
Linear programming is a general technique for solving a huge family of
optimization problems. It's incredibly useful for scheduling, resource
allocation, economic planning, financial portfolio management,
and a ton of of other, similar things.
The basic idea of it is that you have a linear function,
called an objective function, which describes what you'd like
to maximize. Then you have a collection of inequalities, describing
the constraints of the values in the objective function. The solution to
the problem is a set of assignments to the variables in the objective function
that provide a maximum.
Computing Strongly Connected Components
As promised, today I'm going to talk about how to compute the strongly connected components
of a directed graph. I'm going to go through one method, called Kosaraju's algorithm, which is
the easiest to understand. It's possible to do better than Kosaraju's by a factor of 2, using
an algorithm called Tarjan's algorithm, but Tarjan's is really just a variation on the theme
of Kosaraju's.
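For reference, the two-pass structure of Kosaraju's algorithm can be sketched like this (Python; a sketch that assumes the graph dict lists every vertex as a key, and uses recursion for clarity):

```python
def kosaraju_scc(graph):
    """graph: dict mapping each vertex to a list of successor vertices.
    Returns the strongly connected components as a list of lists."""
    # Pass 1: DFS on the original graph, recording finish order.
    order, seen = [], set()

    def dfs1(v):
        seen.add(v)
        for w in graph.get(v, []):
            if w not in seen:
                dfs1(w)
        order.append(v)

    for v in graph:
        if v not in seen:
            dfs1(v)

    # Pass 2: DFS on the reversed graph, in reverse finish order.
    rev = {v: [] for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev.setdefault(w, []).append(v)

    comps, assigned = [], set()

    def dfs2(v, comp):
        assigned.add(v)
        comp.append(v)
        for w in rev.get(v, []):
            if w not in assigned:
                dfs2(w, comp)

    for v in reversed(order):
        if v not in assigned:
            comp = []
            dfs2(v, comp)
            comps.append(comp)
    return comps
```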
Colored Petri Nets
Colored Petri Nets
The big step in Petri nets - the one that really takes them from a theoretical toy to a
serious tool used by protocol developers - is the extension to colored Petri nets (CPNs). Calling them "colored" is a bit of a misnomer; the original idea was to assign colors to tokens, and allow
color-based expressions on the elements of the net. But study of analysis models quickly showed
that you could go much further than that without losing the basic analyzability properties
that are so valuable. In the full development of CPNs, you can associate essentially arbitrary
collections of data with Petri net tokens, so long as you use a strong type system, and keep
appropriate restrictions on the expressions used in the net. The colors thus become
data types, describing the types of data that can be carried by tokens, and the kinds of tokens that
can be located in a place in the net.
So that's the fundamental idea of CPNs: tokens have types, and each token type has some data value associated with it. Below the fold, we'll look at how we do that,
and what it means.
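A rough illustration of "colors as data types" (Python; the Packet color and the guard are hypothetical examples, not the syntax of any CPN tool):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:            # a token "color": a data type carried by tokens
    seq: int
    payload: str

# A place holds a multiset of tokens of its declared color (here: Packet).
place = {"in_flight": [Packet(1, "hello"), Packet(2, "world")]}

# A transition guard is an expression over the data a token carries.
def deliverable(tok: Packet) -> bool:
    return tok.seq == 1  # e.g. deliver only the next expected sequence number

ready = [t for t in place["in_flight"] if deliverable(t)]
print(ready)   # [Packet(seq=1, payload='hello')]
```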
Counted Petri Nets, and Useless Protocol Specification Tools
There's one variant of Petri nets, called counted Petri nets, which I'm fond of for personal reasons. As Petri net variants go, it's a sort of sloppy but simple one, but as I said, I'm fond of it.
As a warning, there's a bit of a diatribe beneath the fold, as I explain why I know about this obscure, strange Petri net variant.
Modeling Concurrency with Graphs: Petri Nets
Among many of the fascinating things that we computer scientists do with graphs
is use them as a visual representation of computing devices. There are many subtle problems that can come up in all sorts of contexts where being able to see what's going on can make a huge
difference. Graphs are, generally, the preferred metaphor for most computing tasks. For example, think of finite state machines, flowcharts, UML diagrams, etc.
One of the most interesting cases of this for one of the biggest programming problems today is visual representations of concurrency. Concurrent programming is incredibly hard, and making a concurrency
regime correct is a serious challenge - communicating that regime and its properties to another developer is even harder.
Which brings us to todays topic, Petri nets. Actually, I'm probably going to spend a couple of days writing about Petri nets, because they're fun, and there are a lot of interesting variations. The
use of Petri nets really isn't just an academic exercise - they are seriously used by people in a lot of real, significant fields - for one example, they're very commonly used to specify behaviors of
network communication protocols.
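A minimal place/transition net can be sketched in a few lines (Python; the handshake example is made up):

```python
class PetriNet:
    """Minimal place/transition net: the marking maps each place to a
    token count; a transition is (inputs, outputs), each a dict mapping
    place -> arc weight."""

    def __init__(self, marking):
        self.marking = dict(marking)

    def enabled(self, transition):
        inputs, _ = transition
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, transition):
        inputs, outputs = transition
        assert self.enabled(transition), "transition not enabled"
        for p, w in inputs.items():      # consume input tokens
            self.marking[p] -= w
        for p, w in outputs.items():     # produce output tokens
            self.marking[p] = self.marking.get(p, 0) + w

# Hypothetical handshake: 'send' moves a token from 'ready' to 'sent'.
net = PetriNet({"ready": 1, "sent": 0})
send = ({"ready": 1}, {"sent": 1})
net.fire(send)
print(net.marking)   # {'ready': 0, 'sent': 1}
```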
Critical Paths, Scheduling, and PERT
Yet another example of how graphs can be used as models to solve real problems comes from the world of project management. I tend to cringe at anything that involves management; as a former IBMer,
I've dealt with my share of paper-pushing pinheaded project managers. But used well, this demonstrates using graphs as a model for a valuable planning tool, and a really good example of how to apply
math to real-world problems.
Project managers often define something called PERT charts for planning and scheduling a project and its milestones. A PERT chart is nothing but a labeled, directed graph. I'm going to talk about the
simplest form of PERT, which considers only time, but not resources. More advanced versions of PERT exist that also include things like required resources, equipment, etc.
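The time-only version amounts to a longest-path computation over the task graph; a sketch (Python, with a made-up mini-project):

```python
def critical_path(tasks):
    """tasks: dict name -> (duration, [prerequisite names]).
    Returns (total_time, earliest_finish_per_task) for an acyclic plan."""
    finish = {}

    def ef(name):                        # earliest finish time, memoized
        if name not in finish:
            dur, prereqs = tasks[name]
            finish[name] = dur + max((ef(p) for p in prereqs), default=0)
        return finish[name]

    total = max(ef(t) for t in tasks)
    return total, finish

# Hypothetical mini-project:
tasks = {
    "design": (3, []),
    "code":   (5, ["design"]),
    "docs":   (2, ["design"]),
    "test":   (2, ["code"]),
}
print(critical_path(tasks)[0])   # 10  (design -> code -> test)
```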
Using Graphs to Represent Information: Lattices and Semi-Lattices
There's a kind of graph which is very commonly used by people like me for analysis applications, called a lattice. A lattice is a graph with special properties that make it
extremely useful for representing information in an analysis system.
Games and Graphs: Searching for Victory
Last time, I showed a way of using a graph to model a particular kind of puzzle via
a search graph. Games and puzzles provide a lot of examples of how we can use graphs
to model problems. Another example of this is the most basic state-space search that we do in computer science problems.
Puzzling Graphs: Problem Modeling with Graphs
As I've mentioned before, the real use of graphs is as models. Many real problems can be
described using graphs as models - that is, to translate the problem into a graph, solve some problem on the graph, and then translate the result back from the graph to the original problem. This
kind of
solution is extremely common, and can come up in some unexpected places.
For example, there's a classic chess puzzle called the Knight's tour.
In the Knight's tour, you have a chessboard, completely empty except for a knight on one square. You can
move the knight the way you normally can in chess, and you want to find a sequence of moves in which is
visits every square on the chessboard exactly once. There are variations of the puzzle for non-standard
chessboards - boards larger or smaller than normal, toroidal boards (where you can wrap around the left edge to the right, or the top to the bottom), etc. So - given a particular board, how can you
(1) figure out if it's possible to do a knights tour, and (2) find the sequence of moves in a tour
if one exists?
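One straightforward answer to (2) is to search the move graph directly; here is a backtracking sketch (Python) using Warnsdorff's least-degree-first ordering to keep the search fast on small boards:

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knights_tour(n, start=(0, 0)):
    """Backtracking search for an open knight's tour on an n x n board,
    trying low-degree squares first (Warnsdorff's heuristic)."""
    def neighbors(sq, visited):
        r, c = sq
        return [(r + dr, c + dc) for dr, dc in MOVES
                if 0 <= r + dr < n and 0 <= c + dc < n
                and (r + dr, c + dc) not in visited]

    path = [start]
    visited = {start}

    def extend():
        if len(path) == n * n:
            return True
        cur = path[-1]
        # Prefer squares with the fewest onward moves left.
        for nxt in sorted(neighbors(cur, visited),
                          key=lambda s: len(neighbors(s, visited))):
            path.append(nxt)
            visited.add(nxt)
            if extend():
                return True
            visited.remove(path.pop())
        return False

    return path if extend() else None

tour = knights_tour(5)
print(len(tour))   # 25
```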
Graph Searches and Disjoint Sets: the Union-Find Problem
Suppose you've got a huge graph - millions of nodes. And you know that it's not connected - so the graph actually consists of some number of pieces (called the connected components of the graph). And
there are constantly new vertices and edges being added to the graph, but nothing is ever removed. Some questions you might want to ask about this graph at a particular point in time are:
• How many components are there in the graph?
• Which component is vertex X in?
• Are vertices X and Y in the same component?
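One standard structure that supports all of these queries as edges arrive is a disjoint-set forest; a minimal sketch (Python):

```python
class DisjointSet:
    """Union-find with path compression and union by size."""

    def __init__(self):
        self.parent = {}
        self.size = {}
        self.components = 0

    def add(self, x):                    # a new vertex is its own component
        if x not in self.parent:
            self.parent[x] = x
            self.size[x] = 1
            self.components += 1

    def find(self, x):                   # which component is x in?
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:    # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):               # a new edge x-y merges components
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]
        self.components -= 1

ds = DisjointSet()
for v in "abcde":
    ds.add(v)
ds.union("a", "b")
print(ds.components)   # 4
```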
All of these questions are variants of a classic computer science problem, called
union-find, which also comes up in an astonishing number of different contexts. The reason for the name is that in the representation of the solution, there
are two basic operations: union, and find. Basically, the division of the graph into
components is also a partition of the vertices of the graph into disjoint sets: union find
is a problem which focuses on a particular kind of disjoint set problem, where you can modify
the sets over time. | {"url":"http://scientopia.org/blogs/goodmath/category/good-math/graph-theory/","timestamp":"2014-04-17T15:30:01Z","content_type":null,"content_length":"98313","record_id":"<urn:uuid:a3270736-537b-4be4-a462-4fa5bada7a6a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paul Khuong mostly on Lisp
Many tasks in compilation and program analysis (in symbolic computation in general, I suppose) amount to finding solutions to systems of the form \(x = f(x)\). However, when asked to define
algorithms to find such fixed points, we rarely stop and ask “which fixed point are we looking for?”
In practice, we tend to be interested in fixed points of monotone functions: given a partial order \((\prec)\), we have \(a \prec b \Rightarrow f(a)\prec f(b)\). Now, in addition to being a fairly
reasonable hypothesis, this condition usually lets us exploit Tarski’s fixed point theorem. If the domain of \(f\) (with \(\prec\)) forms a complete lattice, so does the set of fixpoints of \(f\) !
As a corollary, there then exists exactly one least and one greatest fixed point under \(\prec\).
This is extremely useful, because we can usually define useful meet and join operations, and enjoy a complete lattice. For example, for a domain that's the power set of a given set, we can use \(\subset\) as the order relation, \(\cup\) as join, and \(\cap\) as meet. However, what I find interesting to note is that, when we don't pay attention to which fixpoint we wish to find, humans seem to
consistently develop algorithms that converge to the least or greatest one, depending on the problem. It’s as though we all have a common blind spot covering one of the extreme fixed points.
A simple example is dead value (useless variable) elimination. When I ask people how they’d identify such variables in a program, the naïve solutions tend to be very similar. They exploit the
observation that a value is useless if it’s only used to compute values that are themselves useless. The routines start out with every value live (used), and prune away useless values, until there’s
nothing left to remove.
These algorithms converge to solutions that are correct, but suboptimal (except for cycle-free code). We wish to identify as many useless values as possible, to eliminate as many computations as
possible. Yet, if we start by assuming that all values are live, our algorithm will fail to identify some obviously-useless values, like x in:
for (...)
    x = x;
We could keep adding more special cases. However, the correct (simplest) solution is to try and identify live values, rather than dead ones. A value is live if it’s used to compute a live value.
Moreover, return values and writes to memory are always live. Our routine now starts out by assuming that only the latter values are live, and adjoins live values as it finds them, until there’s
nothing left to add.
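The corrected routine just described can be sketched as a worklist iteration (the dict-of-uses program encoding here is invented for illustration):

```python
def live_values(uses, roots):
    """uses[v] = set of values read when computing v;
    roots = values that are live a priori (returns, memory writes).
    Grows the live set from the roots until nothing is left to add."""
    live = set(roots)
    worklist = list(roots)
    while worklist:
        v = worklist.pop()
        for u in uses.get(v, ()):
            if u not in live:
                live.add(u)
                worklist.append(u)
    return live

# The self-referential `x = x` loop from the text: x reads only itself,
# so unless x is a root it is never adjoined and is correctly dead.
uses = {"x": {"x"}, "r": {"a"}, "a": {"b"}, "b": set()}
print(sorted(live_values(uses, roots={"r"})))  # ['a', 'b', 'r'] -- x is dead
```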
In this case, the intuitive solution converges to the greatest fixed point, but we’re looking for the least fixed point. Setting the right initial value ensures convergence to the right fixed point.
Other common instances of this pattern are reference counting instead of marking, or performing type propagation by initially assigning the top type to all values (like SBCL).
I recently found a use for fixed point computations outside of math and computer science.
Most university or CEGEP student unions in Québec will vote (or already have voted) on strike mandates to help organize protests against rising university tuition fees this winter and spring. There
are hundreds of such unions across the province representing, in total, around four hundred thousand students. The vast majority of these unions comprise a couple hundred (or fewer) students, and
many feel it would be counter-productive for only a tiny number of students to be on strike. Thus, strike mandates commonly include conditions regarding the minimal number of other students who also
hold strike mandates, along with additional lower bounds on the number of unions and universities or colleges involved. As far as I know, all the mandates adopted so far are monotone: if they are
satisfied by a set striking unions, they are also satisfied by all of its supersets.
Tarski’s theorem applies (again, with \((\subset, \cup, \cap)\) on the power set of the set of student unions). Which fixed point are we looking for?
It’s clear to me that we’re looking for the fixed point with the largest set of striking unions. In some situations, the least fixed point could trivially be the empty set (or all unions that did not
adopt any lower bound). Moreover, the mandates are usually presented with an explanation to the effect that, if unions representing at least \(n_0\) students adopt the same mandate, then all unions
that have adopted the mandate will go on strike simultaneously.
I asked fellow graduate students in computer science to sketch an algorithm to determine which unions should go on strike given their mandates; they started with the set of student unions currently
on strike, and adjoined unions for which all the conditions were met. Such algorithms will converge toward the least fixed point. For example, there could be two unions, each comprising 5 000
students, with the same strike floor of 10 000 students, and these algorithms would have both unions deadlocked, waiting for the other to go on strike.
Instead, we should start by assuming that all the unions (with a strike mandate) are on strike, and iteratively remove unions whose conditions are not all met, until we hit the greatest fixed point.
I’m fairly sure this will end up being a purely theoretical concern, but it’s a pretty neat case of abstract mathematics helping us interpret a real-world situation.
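The deadlock example can be made concrete in a few lines of code (the union names and the monotone condition are invented to match the 5 000/10 000 scenario above):

```python
def greatest_fixed_point(unions, condition):
    """Start with every union on strike; repeatedly drop any union whose
    condition is not met by the current striking set, until stable."""
    striking = set(unions)
    changed = True
    while changed:
        changed = False
        for u in list(striking):
            if not condition(u, striking):
                striking.discard(u)
                changed = True
    return striking

# Two unions of 5 000 students each, both with a 10 000-student floor.
members = {"A": 5000, "B": 5000}
floor = {"A": 10000, "B": 10000}

def condition(u, striking):
    return sum(members[v] for v in striking) >= floor[u]

print(sorted(greatest_fixed_point(members, condition)))  # ['A', 'B']
```

Starting instead from the currently striking unions (here, none) and adjoining those whose conditions are met never gets off the ground: that iteration converges to the empty set, which is exactly the deadlock described.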
This pattern of intuitively converging toward a suboptimal solution seems to come up a lot when computing fixed points. It’s not necessarily a bad choice: conservative initial values tend to lead to
faster convergence, and often have the property that intermediate solutions are always correct (feasible). When we need quick results, it may make sense to settle for suboptimal solutions. However,
it ought to be a deliberate choice, rather than a consequence of failing to consider other possibilities.
Anatolii Volodymyrovych Skorokhod
This page is dedicated to the memory of Anatolii Volodymyrovych Skorokhod, one of the most outstanding mathematicians of our time. The world has lost a celebrated scientist, one of the
world-recognized authorities in the field of probability theory and mathematical statistics. His scientific and pedagogical work has had a considerable impact upon the development of the whole
mathematical culture in Ukraine during the second half of the XX century.
SHEKEL
Name of (1) a weight and of (2) a silver coin in use among the Hebrews.
1. Weight:
It has long been admitted that the Israelites derived their system of weights and coins from the Babylonians, and both peoples divided the talent (ad loc.). In fact, it actually states that the mina
contained 50 shekels, which would make the talent equal to 3,000 shekels, so that a mina equals 818.6 grams, and a talent equals 49.11 kilograms. A similar talent is found among other peoples, for
the Greeks and Persians likewise divided the mina into 50 shekels, while the division of the talent into 60 minas was universal. This division into 50 is evidently a consequence of the conflict of
the decimal and the sexagesimal system, the Egyptian influence making itself felt side by side with the Babylonian.
It may possibly be inferred from Ezek. xlv. 12 that in the exilic period and the time which immediately preceded it the division of the mina into 50 shekels became customary among the Jews, and that
this was simultaneous with the division of the shekel into 20 gerahs
2. Coin:
The shekel was the unit of coinage as well as of weight, and the pieces of metal which served for currency were either fractions or multiples of the standard shekel. As already noted, the struggle of
the Egyptian decimal and the Babylonian sexagesimal system for supremacy was especially evident in the gold and silver weights, and the fact that the mina of 50 shekels became the standard was
probably due to Phenician influence. The gold shekel was originally 1/60 of the weight of the mina, and the silver shekel, which was intended to correspond in value to the gold one, should
consequently have been 40/3 x 1/60 = 2/9 of the weight of the mina, since the ratio between gold and silver had gradually become as 40 to 3. Since this shekel could not have been commonly used as
currency, however, a demand arose for a smaller coin of practical size, which might be made either by dividing the silver equivalent of the gold shekel into ten parts, thus giving a silver shekel of
2/90 = 1/45 of the weight of the mina, or by dividing the silver equivalent into fifteen parts, giving a silver shekel of (2/9) ÷ 15 = 2/135 of the weight of the mina. When the decimal system had
become established the gold and the silver mina each were reckoned at 50 of these shekels. Hence there were (1) the Babylonian silver mina, equal to 50 x 1/45 = 10/9 of the weight of the mina, and
(2) the Phenician silver mina, equal to 50 x 2/135 = 100/135 = 20/27 of the weight of the mina.
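As an arithmetic check of the fractions above, with the gold-to-silver ratio at 40:3:

```latex
\text{silver equivalent of the gold shekel}
  = \frac{40}{3}\cdot\frac{1}{60} = \frac{2}{9}\ \text{mina};
\qquad
\frac{2}{9}\div 10 = \frac{1}{45},
\qquad
\frac{2}{9}\div 15 = \frac{2}{135};
\qquad
50\cdot\frac{1}{45} = \frac{10}{9},
\qquad
50\cdot\frac{2}{135} = \frac{100}{135} = \frac{20}{27}.
```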
In the original Babylonian silver currency the silver shekel was divided into thirds, sixths, and twelfths, while in the Phenician currency it was divided into halves, fourths, and eighths. These
Phenician silver shekels were current among the Jews also, as is shown by the fact that the same division is found among them, a quarter of a shekel being mentioned in I Sam. ix. 8, while a
half-shekel is mentioned as the Temple tax in the Pentateuch. The extant shekels of the Maccabean period vary between 14.50 and 14.65 grams, and are thus equivalent to 2/135 of the great "common"
Babylonian mina—14.55 grams. The mina was equivalent, therefore, to 725.5 grams, and the talent to 43.659 kilograms. The Babylonian shekel, which was equal to 1/45 of the weight of the mina (so that 50 of them made the 10/9 silver mina noted above), was
introduced in the Persian time, for Nehemiah fixed the Temple tax at a third of a shekel. This Persian monetary system was based on the small mina, its unit being the siglos, which was equal to
one-half of the Babylonian shekel, its ratio to the Jewish shekel being 3 to 8. It was considered the hundredth instead of the fiftieth part of the mina, and weighed between 5.61 and 5.73 grams,
while the mina weighed between 565 and 573 grams, and the talent between 33.66 and 34.38 kilograms.
In the Maccabean period the Phenician silver shekel was again current, the Temple tax once more being a half-shekel (Matt. xvii. 24-27, R. V.). See Numismatics.
E. G. H. W. N.
TJ Deep Sump Transmission Pan
**This is a companion article to one printed previously in 4x4 Low Down and available in the SHR website Tech Section - Understanding Gear Ratio’s vs. RPM http://www.southernhighrollers.com/tech/
All the gears, including 1st, 2nd, 3rd, 4th, and 5th (or even D, R, etc on an Automatic) will be lower when using the 4:1 kit and lowering your low side in the transfer case. The transfer case and
the transmission are two different pieces of machinery. The transmission takes rotation of the engine crankshaft as input. You change that rotational speed with your throttle. The rotational speed
comes in, and the transmission alters it based on the gearing. Suppose 1000 rpms come into the transmission. If your gear was a 2:1 ratio, that means that it reduces the input by that ratio - for
each 2 revolutions that come in, one revolution is passed to the output shaft, 1000 rpms out of the engine and into the transmission, 500 rpms out of the transmission. A gear of 4:1 would take the
same 1000 rpms out of the engine and output 250 rpms from the transmission.
Over the years different gears have been given specific names. "Direct" or "high" gear is normally associated with a 1:1 ratio (1000 rpms in - 1000 rpms out), "overdrive" with any gear less than 1:1.
A .79:1 (1000 rpms in - 1265.82 rpms out) would be an overdrive gear. On the low side any gear that is lower than one normally used for starting from a dead stop is called a "granny" gear, "low low",
"compound low or just compound"; although “compound” is often associated with another low gear lower than the "granny". Heavy equipment, trucks etc., may have several gears that are lower than the
"low" gear or "first" gear.
Ok, now we have the operation of the transmission. On a normal 2-wheel drive vehicle the output from the transmission goes directly to the axle where the rotational speed is changed again. Suppose
the transmission is exporting 1000 rpms. If the gear in the axle is 2:1, we have a further reduction of rotational speed - 1000 rpms into the differential, 500 rpms out to the wheels.
Go back to the engine. 1000 rpms out of the engine into a 2:1 transmission produces 500 rpms out to the axle. Using the 2:1 in the axle again causes the 500 input to be reduced by half to 250 rpms
from 1000 down to 250 is a 4:1 ratio. If you multiply the axle ratio by the transmission ratio, you get 4:1.
And finally, we introduce the transfer case. The transfer case is a 2-speed transmission. It's just another transmission between the main box and the axle. In addition it outputs from the front and
the rear so that you can operate 2 axles from it. It takes input from the main transmission, alters it, and transfers it to the axle or axles. The transfer case has 2 gears, low and high, and again
the "high" conforms to the above mentioned specific gear name. High gear is 1:1 or direct. It does not alter the input, so it is as if the transmission was connected "directly" (direct) to the axles.
If you are in high gear in the main transmission, it is as if the engine is connected "directly" (direct) to the axle because you could accomplish the same thing with no transmission at all - a direct connection.
The final bit of math. We have 1000 rpms coming out of the engine. The transmission is in a 2:1 gear. 500 rpms out of the transmission and into the transfer case. The transfer case is in low gear,
and for this explanation, it will be another 2:1 ratio. So, 500 in, 250 out. The axle is also 2:1. 250 in 125 out. 1000 rpms out of the engine and 125 rpms to the wheels. That is an 8:1 ratio. If you
multiply all the gears - 2 X 2 X 2 you get 8. If you reduce the 1000:125 ratio to it's lowest form, you get 8:1.
A real life example: I have a 4.3:1 ratio on the low side of my t/c and 4.56:1 gears in the axles. The low gear ratio in the transmission is 3.83:1. If you multiply that out, you find that the
overall ratio, the "crawl" ratio in low gear, is 75.09:1. For each 1000 rpms of the engine the wheels turn 13.31 rpms. A stock Jeep with a 3.73:1 axle ratio would have a 37.57:1 crawl ratio - 2.63
(transfer case) X 3.73 (axle) X 3.83 (low gear in the transmission). In overdrive and the low transfer case gear my Jeep will only run about 18 mph. I often have to change into the high side of the
transfer case on a flat trail.
Back to the big trucks for a moment. Where the Jeep has a 2 speed transfer case or second transmission, big trucks have a 5 or 6-speed second transmission, and sometimes even 3 transmissions as well
as selectable gears in the axles. (Note: Sometimes on off-road vehicles especially it seems on Toyotas you will see a setup with multiple transfer cases sometimes referred to as “Crawl Boxes”. These
adapters and cases come from many sources but Marlin and Klune are two of the better known ones.)
I might add that the term "crawl ratio" is one invented by off roaders or 4 wheelers. It has no official or semi official meaning that I know of. More correctly the term should be "final ratio" or
"overall ratio" because it refers to the cumulative ratio of the various gears from the power source to the point where the work occurs - engine to wheels - regardless of which gear is used.
This ratio is useful to 4 wheelers to determine the lowest overall gearing of the vehicle, but it is just as useful to answer your original question. You questioned the drivability of your vehicle in
the higher gears on a relatively smooth trail where low speed was not a factor.
Let's do a bit more math and combine the results with real world data. The following are not real numbers, but they will make the point.
The 4th gear of a standard shift 5 speed Jeep transmission is 1:1 or "direct" or "high gear". That is a real number, BTW. Another real number is the 2.63:1 gear ratio in the low side of your transfer
case. A Jeep TJ Transfer Case has a ratio of 2.73:1. Suppose you are able to drive comfortably at 30 mph in 4th gear/low side transfer case. Your concern was whether you could continue to drive on
the smooth trail in low range, or whether you would have to change into high range if you changed to a 4:1 gear in the transfer case.
Here's the math. The 30 mph is a number from thin air, but will make the point. The overall or final gear ratio in your current vehicle using 4th gear, the low side of the transfer case and we will
assume a 3.73:1 axle ratio (that's a common ratio. I don't know what yours is.) is 1 (transmission gear) X 2.63 (transfer case) X 3.73 (rear axle) or 9.8099:1. So, at a particular rpm you can drive
30 mph on the trail. The question is - at the same rpm how fast would you be going with a 4:1 low in the transfer case instead of a 2.63:1.
The easy way to figure that is - a ratio of ratios, or 4:1 is to 2.63:1 as "X" is to 30, or 2.63 is to 30 as 4 is to "X". The math: 2.63 divided by 4 is .6575 multiplied by 30 is 19.725. If you
changed from 2.63:1 to 4:1 in the transfer case, and you formally went 30 mph in 4th gear, you can now go 19.725 mph in 4th gear at the same rpms. If you changed to a 4.3:1 ratio, your speed at the
same rpms would drop to 18.35 mph. OR you could do it using the "overall ratio". The final or overall ratio of a 4:1 under the same circumstances would be 4 (transfer case) X 1 (transmission) X 3.73
(axles) or 14.92:1. Again we look at the relationship of the ratios. 9.8099:1 is to 30 as 14.92:1 is to "X". The math: 9.8099 divided by 14.92 times 30 equals 19.725 - same result.
Having the real world data of your own Jeep - all the transmission ratios, the axle ratios, the transfer case ratios and the speeds at various rpms in various gears will allow you to mathematically
determine the speeds under identical conditions if you only change one factor - a single gear ratio - the transfer case gear ratio, for example. You can do it very quickly as you drive if you use a
calculator. Predetermine the relationship of the two transfer case ratios. If it is 2.63:1 vs. 4:1 the number you will use is .6575 - 2.63 divided by 4. If the ratio is 2.73:1, the number would be
.6825. As you drive attain a comfortable speed in each gear. That would be a speed that does not over-rev the engine. If it is comfortable to drive 27 mph in 4th, simply multiply 27 by .6575 or
whatever the correct number is. The answer will be the speed at which you will be comfortable in the same transmission gear with a lower gear in the transfer case.
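The ratio-of-ratios shortcut above is a one-line computation; a sketch using the article's own numbers:

```python
def speed_with_new_low(speed_mph, old_low, new_low):
    """Same engine rpm, same transmission gear, same axle ratio:
    road speed scales by old_low / new_low when the transfer-case
    low-range gear changes."""
    return speed_mph * old_low / new_low

print(round(speed_with_new_low(30, 2.63, 4.0), 3))  # 19.725
print(round(speed_with_new_low(30, 2.63, 4.3), 2))  # 18.35
```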
How to figure your vehicle crawl ratio:
1st Gear (Transmission) x Low Range (Transfer Case) x Axle Ratio = Crawl Ratio
Of course the one thing overlooked on this overall ratio is tire size. Tire size plays a lot into the off-road performance of a particular crawl ratio.
For example if I had a Jeep TJ with a stock 5-Speed, a stock t-case. 4.10 axle gears and 33” tires. I would have a crawl ratio of 43.98849.
In order to “upgrade” to 35” tires and keep the same performance I would need a “crawl ratio” closer to 46.6545.
I can determine this by a the following formula:
(Current Crawl Ratio x New Tire Size) / Old Tire Size = New Crawl Ratio
(43.98849x35)/ 33 = ~46.6545
In order to achieve that crawl ratio one could either modify the transfer case to a numerically lower gear or the axles.
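Both formulas can be wrapped up in a few lines (a sketch; the numbers are the article's own examples):

```python
def crawl_ratio(first_gear, tcase_low, axle_ratio):
    # Multiply the gear reductions from engine to wheels.
    return first_gear * tcase_low * axle_ratio

def ratio_for_new_tires(current_crawl, new_tire, old_tire):
    """Crawl ratio needed to keep the same effective gearing
    after a tire-size change."""
    return current_crawl * new_tire / old_tire

print(round(crawl_ratio(3.83, 4.3, 4.56), 1))           # 75.1 (the ~75.09:1 above)
print(round(ratio_for_new_tires(43.98849, 35, 33), 4))  # 46.6545
```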
One more bit of info. The lower gears multiply the power delivered to the wheels. In the 4 wheeling community the goal is to drive VERY slowly. High gears will only allow a vehicle to drive at a
certain low speed before the engine stalls. Lower gears allow the vehicle to drive at a lower speed before stalling the engine. Our goal is to drive SLOW. Trucks do the same thing. They gear LOW, but
their goal is not slow speed. Their goal is to increase the power to the wheels. Different goals, but the results of each satisfy the goal of the other. In some respects they are counter productive
though. The 4 wheeler starts breaking things because of all the extra power that results from his "go slow" low gears, and the trucker has to shift a whole bunch of gears to get up to speed because
his quest for power has resulted in a very low maximum speed in the low gears. Some of the very low gears in a big trucks only have a speed range of a very few miles per hour - the lowest, from 0 to
about 2 or 3 mph, the next from 0 to about 5 mph etc. When you get into the higher gears they are only good for ranges of speed like from 34 mph to 45 mph, but that's a factor of the engine power
band. The engines, despite their size and power, cannot "pull the load" if the engine revolutions are allowed to go too low. Same is true of your Jeep. Overdrive will not "pull the load" below a
certain speed. The engine is not strong enough.
CR singular immersions of complex projective spaces.
(English) Zbl 1029.32020
Consider the rational mappings $f$ from $\mathbb{C}P^1$ into $\mathbb{C}P^2$ of the form $[z] \mapsto P \circ P_0([z])$, where $P_0([z_0, z_1]) = [z_0\overline{z}_0, z_0\overline{z}_1, z_1\overline{z}_0, z_1\overline{z}_1]$ and $P: \mathbb{C}P^3 \to \mathbb{C}P^2$ is an onto complex linear map. Two such mappings $f$ are equivalent if they are equal after biholomorphic transformations of the domain and target, respectively.
The author gives a complete classification for the equivalence problem. For the proof, the author associates to each mapping $f$ a $2 \times 2$ complex matrix $K$, and shows that the equivalence of two such mappings implies the similarity of the corresponding $2 \times 2$ matrices under matrix transformations satisfying a reality condition. The converse is also true, which is however a consequence of the geometric interpretation of the normal form of the matrix $K$. The complete set of equivalence classes of mappings $f$ consists of a continuous family of embeddings of $\mathbb{C}P^1$ in $\mathbb{C}P^2$ with exactly two elliptic complex tangents, a continuous family of totally real immersions of $\mathbb{C}P^1$ in $\mathbb{C}P^2$ with one point of self-intersection, and finitely many others.
The author also studies the equivalence problem for analogous mappings from $\mathbb{C}P^2$ into $\mathbb{C}P^5$. The complete classification for this case and for the higher-dimensional case is unsettled in this paper. However, the author gives some interesting examples.
32V40 Real submanifolds in complex manifolds
14P05 Real algebraic sets
14E05 Rational and birational maps
32Q40 Embedding theorems
15A22 Matrix pencils
32S20 Global theory of singularities (analytic spaces)
Tensor analysis/Differential forms outside physics
There are many "geometric systems" like tensor analysis or differential forms calculus, which offer more or less different perspectives on the same abstract relations.
Most applications are physical, like electromagnetism. I wonder whether there are applications of these geometric systems beyond physics. Can you show me some active areas of research in that direction?
A related question is whether there are partial differential equations whose origin is not directly physical and that can be meaningfully stated in terms of tensor and vector fields.
reference-request dg.differential-geometry ap.analysis-of-pdes
3 Differential geometry isn't outside of physics? – Qiaochu Yuan Jul 16 '12 at 18:06
I guess he means outside of pure maths? – Paul Reynolds Jul 16 '12 at 18:08
1 Maybe you should clarify what you mean by "beyond physics"? – Kevin Jul 16 '12 at 18:10
2 This should be Community Wiki anyhow. Voting to close until it is. – Igor Rivin Jul 16 '12 at 18:17
1 Do you count medical imaging, computer vision, pattern-recognition, computer graphics, signal-processing and control-theory as outside physics ? Possibly using discrete-differential-geometry and
partial-difference equations instead of continuous versions. – user19172 Jul 17 '12 at 18:33
1 Answer
While I do disagree with the premises, some explicit, many implicit, of the question, the question and these premises are surely fairly popular.
While not arguing about whether differential equations and vector fields and tensors "are" physics or not, I would agree that they have huge historical/experiential base of "physical
intuition", whether this is "physics" or not, notably.
In fact, it has been found profitable to transport from, or abstract from, physically meaningful situations in "mechanics", say, to "number theory" (as manifest in "automorphic forms",
especially). That is, physically unsurprising, if non-trivial, ideas sometimes seem to have non-trivial potential impact on "number theory" suitably translated into harmonic analysis,
understandably on special objects, not generic.
A widely-understood cliche, and wonderful it is, is the proof that $\sum 1/n^2=\pi^2/6$ via Plancherel applied to the sawtooth function made periodic, that is, on the circle as $\mathbb R/
\mathbb Z$. This is easy to explain, and does touch my aesthetic sense, though I understand it might not touch others'.
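That identity is easy to spot-check numerically (a sketch; the partial sums converge slowly, with error on the order of $1/N$):

```python
import math

# Partial sum of 1/n^2 up to N = 100 000 versus pi^2 / 6.
partial = sum(1 / n**2 for n in range(1, 100_001))
print(abs(partial - math.pi**2 / 6) < 1e-4)  # True
```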
If one doubted that the "guts" of differential geometry mattered, I'd volunteer that the proofs that automorphic forms do conform to expectations (about global Sobolev indices and such) do
depend on the particulars, so are indeed dependent upon "geometry", whatever our conceits make of the latter.
Relativistic Electron Wave
The dispersion relation for the free relativisitic electron wave is [tex] \omega (k) = \sqrt{c^2 k^2 + (m_e c^2/ \hbar)^2}[/tex]. Obtain expressions for the phase velocity and group velocity of these
waves and show that their product is a constant, independent of k. From your result, what can you conclude about the group velocity if the phase velocity is greater than the speed of light?
The group velocity will be easy to find because I can just differentiate with respect to k. I am not really sure what to do for the phase velocity. I figure that since [tex]v_p = f \lambda = E/p [/
tex] then I could use the relativistic energy expression [tex]E = (p^2 c^2 + m^2 c^4)^{\frac{1}{2}}[/tex]. I am unsure about how to tackle the momentum. Does an electron have a de Broglie wave | {"url":"http://www.physicsforums.com/showthread.php?t=140103","timestamp":"2014-04-17T15:33:21Z","content_type":null,"content_length":"22169","record_id":"<urn:uuid:45bf33e2-c6f1-446b-9c80-2ce27ea94894>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
Thanks, Bob. Amazed at your three solutions. I did not have your solution 1 and 2. I just had your solution 3 but I thought in solution 3, the upper right 4 cells can be reversed, that is, Amy +
Eastenders can be the 5th and Eilish + Neighbors to the 4th. Then there would be 4 solutions. As a novice, I thought so-called Einstein Puzzles are supposed to have only "one" solution like Sudoku.
It seems I was wrong, then.
Inference vs. Implication
Difference Between Inference And Implication
In the science of statistics, inference is the process of using information from observed phenomena to derive conclusions about the underlying probability distribution of the observations (see
distribution, statistics). Suppose a coin is known to have some unknown probability p of coming up heads. (In most cases, p will not be ½ and the coin will be biased.) Assume that in a coin-tossing
experiment, 70 heads were observed out of 100 tosses. Two typical problems of inference are (1) to decide whether or not the coin is biased and (2) to estimate the value of p. The first of these
questions is an example of hypothesis testing: the null hypothesis that the coin is unbiased is being tested. The second question is a problem in estimation; it does not seek a simple yes or no
answer but rather an estimated value of a parameter of interest. In estimation, some measure of the precision of the estimate is also sought. This may take the form of the variance of the estimate in
(hypothetical) repeated sampling. Alternatively, instead of giving a point estimate and the variance, there are ways of giving a confidence interval, with ends computed from the data, that will
include the true value in some specified fraction of hypothetical repetitions.
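The coin example can be carried out numerically; a sketch using only the binomial distribution (the one-tailed p-value here is a simplification of what a statistics package would report):

```python
from math import comb

def binom_pmf(k, n, p):
    # Probability of exactly k successes in n trials with success chance p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothesis test: P(at least 70 heads in 100 tosses of a fair coin).
tail = sum(binom_pmf(k, 100, 0.5) for k in range(70, 101))
print(tail < 0.001)  # True: strong evidence against the unbiased-coin null

# Estimation: point estimate and an approximate 95% confidence interval.
p_hat = 70 / 100
se = (p_hat * (1 - p_hat) / 100) ** 0.5
print(round(p_hat - 1.96 * se, 2), round(p_hat + 1.96 * se, 2))  # 0.61 0.79
```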
Implication, a term in logic usually denoting "logical" (or "strict") implication, although it sometimes denotes the weaker relation of "material" implication. To say that a statement, A, logically implies another statement, B, is to say that it is not possible for A to be true while B is false. Likewise, a group of statements logically implies B if, and only if, it is not possible for every statement in the group to be true while B is false. The statement "Mount McKinley is taller than Mount Logan, which is taller than Mount Rainier" logically implies the statement "Mount McKinley is taller than Mount Rainier." It is not possible for the first of those statements to be true while the second is false.
Logical implication differs from material implication. To say that a statement, A, materially implies another statement, B, is to say merely this: it is not the case that A is true and B false. Any
pair of true statements materially imply each other because, given that both are true, it is not the case that the first is true and the second false. Nevertheless, not every pair of true statements
logically imply each other, for in many cases either of the two could be true even if the other were false. | {"url":"http://vspages.com/inference-vs-implication-23848/","timestamp":"2014-04-19T22:05:45Z","content_type":null,"content_length":"21424","record_id":"<urn:uuid:cea39760-3fa2-4c01-bda4-a31653286d53>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Function Notation: If f(x) = 2x+1, g(x) = 3-x, find f(a),
jolie3000 wrote:If ƒ(x) = 2x + 1, Find 2ƒ(x)
I originally worked it out like this:
ƒ(2(x))= 2(2(x)+ 1)
But 2 * f(x) is not at all the same thing as f(2*x).
This is, in part, a result of the fact that the parentheses, in function notation, have nothing to do with multiplication.
jolie3000 wrote:Does this make more sense:
Yes; two times f(x) means 2(f(x)) = 2(the formula for f) = 2(2x + 1).
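The distinction can be seen by evaluating both expressions at a sample input; a quick sketch added for illustration (not from the original thread):

```python
def f(x):
    return 2 * x + 1

x = 5
two_f_of_x = 2 * f(x)   # 2(2x + 1) = 4x + 2, so 22 at x = 5
f_of_2x = f(2 * x)      # 2(2x) + 1 = 4x + 1, so 21 at x = 5

# Not at all the same thing:
assert two_f_of_x == 22 and f_of_2x == 21
```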
Good job! | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=9&t=428&p=1364","timestamp":"2014-04-19T17:40:06Z","content_type":null,"content_length":"21774","record_id":"<urn:uuid:1f5d2308-e037-4974-ad80-a4e5ef0568d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
The product of the sum and difference of the same two terms is equal to what? - Yahoo Answers
i need help with summer school.
Other Answers (3)
Rated Highest
• (x + y)(x - y)
x^2 - xy + xy - y^2
The middle terms cancel
x^2 - y^2
The product of the sum and the difference of the same two terms is equal to the difference of the squares of the terms. Notice that the order of the terms is relevant: if y were the first term,
then the outcome would be y^2 - x^2. This pattern is called:
The Difference of Squares
[check: (2+3)(2-3) = (5)(-1) = -5 and (2)^2-(3)^2 = 4 - 9 = -5 ]
[note: x^2 - y^2 is not the same as (x - y)^2 ]
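A quick numeric sanity check of the identity and of the order-matters note above (a sketch added for illustration):

```python
# (x + y)(x - y) == x^2 - y^2 for any numbers, and swapping the terms
# flips the sign: (y + x)(y - x) == y^2 - x^2.
for x in range(-5, 6):
    for y in range(-5, 6):
        assert (x + y) * (x - y) == x**2 - y**2
        assert (y + x) * (y - x) == y**2 - x**2

# The worked check from the answer: (2+3)(2-3) = -5 = 2^2 - 3^2
assert (2 + 3) * (2 - 3) == 2**2 - 3**2 == -5
```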
• Suppose we call the two terms a and b. Then the product of the sum and difference is:
(a + b)(a - b)
Expand (FOIL):
= a² - ab + ab - b²
The ab's cancel out, so you're left with:
= a² - b², which is the difference of the squares.
• The difference of squares.
(a+b)(a-b) = a^2 - b^2 | {"url":"https://answers.yahoo.com/question/index?qid=20100724130153AAEWQdu","timestamp":"2014-04-19T14:31:27Z","content_type":null,"content_length":"56556","record_id":"<urn:uuid:c1b6e5d2-82ee-4620-bf92-f846c30a2ea9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequence Help
June 6th 2011, 06:21 PM #1
Junior Member
Dec 2010
I'm studying for my final and my teacher gave me a worksheet. I have two sequence problems I do not understand at all. If anyone could please help I'd appreciate it!
1. Find the sum of the sequence, note i=symbol for factorial, I couldn't find a factorial symbol
((2n-1)i) / ((2n+1)i)
2. find sum of sequence (That's exactly how written, I have no idea...) :
2, -1/2 + 1/8 - ... 1/2048
Try [shift]+1
Have you noticed that $\frac{(2n-1)!}{(2n+1)!} = \frac{1}{2n(2n+1)}$?
Expand the factorials. Only two terms differ.
"fully evaluated" means nothing.
Finding a parsimonious expression for a sum can be an art. Wonderful accolades have been awarded because of such discoveries. Other than self-awesomeness, though, you need a bag of references. In
this case, notice further that:
$\frac{1}{2n(2n+1)}\;=\;\frac{1}{2n} - \frac{1}{2n+1}$
Write out the expression for n = 1, 2, and 3 and see if you notice anything. (Resist the temptation to add anything.)
I don't quite understand what you're getting at. I made a mistake in the original post, it doesn't want the sum of the expression, but rather it says to evaluate it.
also, anything on the second question?
2, -1/2 + 1/8 - ... +1/2048
This looks like a geometric sequence with first term "2" and ratio "-1/4"...
Last edited by TheChaz; June 6th 2011 at 08:07 PM. Reason: sign!
You probably should differentiate between a sequence (a list) and a series (add things up).
Did you list the terms associated with n = 1, 2, and 3? Really, it will be quite enlightening. You will fail this course if you can't list three terms of a series.
n = 1 leads to $\frac{1}{2(1)} - \frac{1}{2(1)+1}\;=\;\frac{1}{2}-\frac{1}{3}$
How about n = 2?
Note: That ratio on #2 looks like "-1/4" and it appears to be finite.
Never noticed that, how did you get it? and how does that lead to the sum?
Telescoping series - Wikipedia, the free encyclopedia
This series is not telescoping
$\sum^{\infty}_{k=1}\frac{1}{2k} -\frac{1}{2k+1}=\sum^{\infty}_{k=0}\frac{1}{2k+2} -\frac{1}{2k+3}=\sum^{\infty}_{k=0} \int^{1}_{0} x^{2k+1}(1-x)dx =$
$=\int^{1}_{0} (1-x)x\sum^{\infty}_{k=0} x^{2k}dx=\int^{1}_{0}\frac{x(1-x)}{1-x^2}dx = 1 - \ln (2)$
so $\sum^{\infty}_{k=1}\frac{1}{(2k)(2k+1)} = 1 - \ln (2) .$
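Both answers in the thread can be checked numerically; a small sketch of my own (not from the thread):

```python
import math
from fractions import Fraction

# Problem 1: partial sums of sum_{n>=1} 1/(2n(2n+1)), which the thread
# evaluates in closed form as 1 - ln 2.
s = sum(1.0 / (2 * n * (2 * n + 1)) for n in range(1, 200_001))
assert abs(s - (1 - math.log(2))) < 1e-5   # tail is O(1/N), well under 1e-5

# Problem 2: geometric series 2 - 1/2 + 1/8 - ... + 1/2048,
# first term 2, ratio -1/4, seven terms in total.
a, r = Fraction(2), Fraction(-1, 4)
total = a * (1 - r**7) / (1 - r)           # finite geometric sum formula
assert total == sum(a * r**k for k in range(7)) == Fraction(3277, 2048)
```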
| {"url":"http://mathhelpforum.com/pre-calculus/182549-sequence-help.html","timestamp":"2014-04-17T20:53:28Z","content_type":null,"content_length":"66799","record_id":"<urn:uuid:1d1817a4-8dc8-457f-af3c-bf9c305d4dc8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Sir Isaac Newton
Sir Isaac Newton was born in Lincolnshire, England in 1643, the same year that Galileo died. He was born premature and was not expected to live. His father, a farmer also named Isaac, had died three
months before his birth. His father had owned animals and property. By the standards of the day, he was born into a wealthy family. Still, his father had been uneducated and unable to sign his name.
It was the intention of his mother, Hanna, that young Isaac would become a farmer.
When Isaac was two years old, his mother married Barnabas Smith, a church minister from a neighboring town. From this point on, Isaac lived with his grandparents. He was raised as an orphan and it is
said that it was not a happy childhood.
In 1653, Barnabas Smith died and Isaac moved back with his mother. Isaac began attending a grammar school in Grantham. As a child, he did not do well in school. His teachers considered him to be
'idle' and 'inattentive.' Rather than socialize with other students, he kept to himself.
When he was 17, his mother called him home in order to run the farm. Newton seemed to show little interest in learning the farm responsibilities. He preferred to experiment and build gadgets rather
than watch the sheep. Eventually, it was decided that farming was not the career for young Newton. Instead, in 1661, he was sent to Cambridge.
Despite the property owned by his mother, Newton was sent to Cambridge as a subsizar, that is, a student who had to perform labor for other students in order to go to Cambridge. His mother refused to
pay any money for Newton's schooling.
At Cambridge, Newton began to study law. In addition, he took time to study mathematics and philosophy. At the time, philosophy was dominated by Aristotle. Despite this, Newton took a strong interest in
the works of Gassendi, Boyle, Kepler, and Descartes.
Newton's interest in mathematics supposedly began in 1663, when he purchased a book on astrology and was unable to understand its mathematical details. He started on Euclid's Elements but found the
initial proofs too simple, until he came upon the principle that parallelograms on the same base and between the same parallels are equal in area (for details on this proof, see
Euclid's Book I, Proposition 35
). This gave Newton a new respect for the concept of proof, and he read Euclid's Elements very thoroughly from beginning to end.
In the summer of 1665, an outbreak of the Bubonic Plague forced the closure of Cambridge. For the next two years, Newton returned to his mother's house where he worked on his independent projects
which included ideas that would later become his theory of calculus and his laws of motion. It is often said that 1666 was the year that Newton came up with his most important ideas.
Cambridge reopened in 1667 and Newton returned. In 1666, Newton wrote three very important papers on calculus. He was 24. Before these papers, no one had known who Newton was. These papers covered
slopes on curves (differentiation) and areas under curves (integration) and the relationship between these two methods (fundamental theorem of calculus). These results had been applied to very
specific cases before Newton but nothing before matched Newton's generalized methods and nothing compared to the depth of Newton's understanding of the two methods and their relationship. In 1667,
Newton became a minor fellow of Trinity College. One might wonder with his accomplishments with calculus why he would receive only a minor fellowship. The reason was probably because Newton kept to
himself for the most part. He did not talk during dinner and did not join the other scholars in social activities.
In 1668, Newton built the first ever reflecting telescope. Previous to this, all telescopes were refracting telescopes; that is, light was gathered and focused by a lens that the observer looked
through directly. In a reflecting telescope, the light is instead gathered and focused by a curved mirror. This resulted in an unprecedented clarity of image and enabled telescopes to be made
smaller with significantly greater power of magnification.
Newton was so proud of his telescope which he had built himself that he started to demonstrate it to others. Word spread and it created a sensation. Newton was then became a member of the Royal
Society of Science. The most prestigious group in science at the time.
In 1669, Newton became the Cambridge Lucasian Professor of Mathematics. He received this office primarily through the strong support of Dr. Isaac Barrow, who had held the post previously and decided to
step down. At the time, all professors were expected to be ordained as ministers. Newton asked to be free from this requirement so that he could spend more time studying mathematics. His request was
approved by King Charles II.
From 1670 to 1672, Newton focused his time on optics. It was at this time that he did his very famous experiment with the prism, which demonstrated that white light is composed of a spectrum of
colors. It was also at this time that he presented his theory of light as a particle and the concept of the ether.
In 1671, Newton presented his telescope to the Royal Society. He submitted a paper on optics detailing the ideas that led to the reflecting telescope. At the time, Robert Hooke was seen as the
leading expert on optics. He aggressively attacked many of the ideas of in Newton's paper.
In 1678, Newton experienced a mental breakdown. He was accused of plagiarism and later on, his mother died. Newton withdrew from his colleagues and became focused on alchemy. It is believed by
scholars that it was Newton's deep interest in alchemy that helped him with his ideas about universal gravitation. Gravity, after all, is a mysterious force at a distance that is not explained so
much as it is described mathematically.
In 1684, the problem of Kepler's planetary motions had become a very important topic of discussion for the Royal Society. The astronomer Edmund Halley had begun discussions with many of the leading
scientists about their ideas of whether Kepler's laws were correct and whether they implied an inverse square force between planets. It was at this time that Halley spoke about this with Newton. He
was very surprised to hear that Newton had already worked out the details and could demonstrate Kepler's principles with a proof. This work would become Newton's most famous work, the Principia.
Interestingly, Newton did not have the funds to publish this work and it was mostly through the contributions of Halley that the Principia was completed and released to the world.
In 1689, Newton was elected to Parliament to represent Cambridge. Thus began his career in London. In 1699, he became Master of the Mint. With this position, he became a very wealthy man. He took
the position very seriously and was responsible for a significant "recoining" that occurred.
In 1703, he was elected President of the Royal Society. Each year, he was reelected until his death in 1727. In 1705, he was knighted by Queen Anne.
Sir Isaac Newton is perhaps the most important scientist and mathematician who has ever lived.
| {"url":"http://fermatslasttheorem.blogspot.com/2005/09/sir-isaac-newton.html","timestamp":"2014-04-17T18:23:09Z","content_type":null,"content_length":"92248","record_id":"<urn:uuid:531a3aeb-174c-4d2c-88b9-e5b7d5a87298>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Canton, GA Math Tutor
Find a Canton, GA Math Tutor
Hello, my name is Kyle. I am currently a senior at KSU studying Chemistry; this concentration along with my personal study leads to a number of higher level math courses that demand thorough
knowledge of more basic maths. I tutor Algebra, Pre-Calculus, Calculus, Geometry,General Chemistry and more from Elementary-Undergrad.
25 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...I do have experience in the school and with my two daughters. I have a bachelor's degree and am currently obtaining a master's degree in education. I can verify my qualifications through my
school and diplomas.
5 Subjects: including algebra 1, prealgebra, elementary math, biostatistics
...I have learned a great deal about the Visual Basic background functions of Access, and am just starting to use SQL statements to make my programs smaller. I have developed programs that
successfully: * catalog, organize, and play mp3s as well as write playlists for an mp3 collection * determ...
32 Subjects: including ACT Math, economics, SAT math, discrete math
...Before WyzAnt I worked with some of the highest quality personal educational service companies available. My areas of expertise include the following: standardized test preparation techniques
(SAT, SSAT, ISEE, state exams), mathematics (general, algebra, geometry, trigonometry, pre-calculus), En...
31 Subjects: including calculus, chemistry, physics, study skills
...My name is Leonora. I have worked with kids for 6 years now and plan on attending college to become an elementary school teacher. I began as a teacher's assistant in 6th grade working with
k-1st graders.
18 Subjects: including prealgebra, grammar, geometry, algebra 1
Related Canton, GA Tutors
Canton, GA Accounting Tutors
Canton, GA ACT Tutors
Canton, GA Algebra Tutors
Canton, GA Algebra 2 Tutors
Canton, GA Calculus Tutors
Canton, GA Geometry Tutors
Canton, GA Math Tutors
Canton, GA Prealgebra Tutors
Canton, GA Precalculus Tutors
Canton, GA SAT Tutors
Canton, GA SAT Math Tutors
Canton, GA Science Tutors
Canton, GA Statistics Tutors
Canton, GA Trigonometry Tutors
Nearby Cities With Math Tutor
Acworth, GA Math Tutors
Austell Math Tutors
Ball Ground Math Tutors
Cartersville, GA Math Tutors
Doraville, GA Math Tutors
Duluth, GA Math Tutors
Holly Springs, GA Math Tutors
Kennesaw Math Tutors
Lebanon, GA Math Tutors
Milton, GA Math Tutors
Norcross, GA Math Tutors
Rome, GA Math Tutors
Suwanee Math Tutors
Tucker, GA Math Tutors
Woodstock, GA Math Tutors | {"url":"http://www.purplemath.com/Canton_GA_Math_tutors.php","timestamp":"2014-04-19T14:50:08Z","content_type":null,"content_length":"23575","record_id":"<urn:uuid:718531b5-14ce-49a6-b8d0-b34065c12155>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the smallest two-digit palindrome?
A Lychrel number is a natural number that cannot form a palindrome through the iterative process of repeatedly reversing its digits and adding the resulting numbers. This process is sometimes called
the 196-algorithm, after the most famous number associated with the process. In base ten, no Lychrel numbers have been yet proved to exist, but many, including 196, are suspected on heuristic and
statistical grounds. The name "Lychrel" was coined by Wade VanLandingham as a rough anagram of Cheryl, his girlfriend's first name.
The reverse-and-add process produces the sum of a number and the number formed by reversing the order of its digits. For example, 56 + 65 = 121. As another example, 125 + 521 = 646.
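The reverse-and-add iteration (the 196-algorithm) is easy to sketch in code. This is an illustrative sketch of mine, not part of the article; the function names are my own:

```python
def reverse_and_add(n: int) -> int:
    """One step of the 196-algorithm: add n to its digit reversal."""
    return n + int(str(n)[::-1])

def palindrome_after(n: int, max_steps: int = 100):
    """Iterate reverse-and-add until a palindrome appears, returning
    (palindrome, steps); give up after max_steps, as a suspected Lychrel
    number like 196 would force us to."""
    for step in range(max_steps):
        s = str(n)
        if s == s[::-1]:
            return n, step
        n = reverse_and_add(n)
    return None  # no palindrome found within max_steps

# The article's examples: 56 + 65 = 121, 125 + 521 = 646.
assert reverse_and_add(56) == 121 and reverse_and_add(125) == 646
assert palindrome_after(56) == (121, 1)
```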
A phonetic palindrome is a portion of sound or phrase of speech which is identical or roughly identical when reversed.
Some phonetic palindromes must be mechanically reversed, involving the use of sound recording equipment or reverse tape effects. Another, more abstract type are words which are identical to the
original when separated into their phonetic components (according to a system such as the International Phonetic Alphabet) and reversed.
Constrained writing is a literary technique in which the writer is bound by some condition that forbids certain things or imposes a pattern.
Constraints are very common in poetry, which often requires the writer to use a particular verse form.
| {"url":"http://answerparty.com/question/answer/what-is-the-smallest-two-digit-palindrome","timestamp":"2014-04-20T03:13:45Z","content_type":null,"content_length":"23550","record_id":"<urn:uuid:90692879-db21-4609-888b-22c8f14e5b6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Is Godel's Theorem surprising?
ignacio natochdag at elsitio.net.uy
Mon Dec 11 09:42:36 EST 2006
Charles silver wrote:
" First, thanks very much for all the interesting and enlightening
responses to my question. A couple of comments:
Diagonalization is not central to Godel's (first) theorem, as shown
by Kripke's proof of G's theorem that was published by Putnam, which
does not *require* diagonalization.
I believe this proof also shows--please correct me if I'm wrong--
that a specifically *mathematical* proposition (though an unusual
one) cannot be proved nor can its negation."
It is possible to prove Gödel's first theorem without using diagonalization,
that's right: but any other technique you use to prove it is, from a formal
mathematical point of view, equivalent to diagonalization: see for instance
the method of concatenation developed by Quine and later on by Smullyan. The
heart of the question here seems to be that there is a "recursively
inseparable" center within any given formal system, no matter how simple or
trivial the system is: so if a system is complete it is inconsistent, and if
it is consistent it is incomplete; there is no way out, it seems.
See Smullyan: "Languages in which self reference is possible", JSL;
"Recursion theory for metamathematics", Oxford Logic Guides.
I. Nattochdag
-----Original Message-----
From: fom-bounces at cs.nyu.edu [mailto:fom-bounces at cs.nyu.edu] On behalf of
Charles Silver
Sent: Sunday, December 10, 2006 11:20 a.m.
To: Foundations of Mathematics
Subject: Re: [FOM] Is Godel's Theorem surprising?
First, thanks very much for all the interesting and enlightening
responses to my question. A couple of comments:
Diagonalization is not central to Godel's (first) theorem, as shown
by Kripke's proof of G's theorem that was published by Putnam, which
does not *require* diagonalization.
I believe this proof also shows--please correct me if I'm wrong--
that a specifically *mathematical* proposition (though an unusual
one) cannot be proved nor can its negation.
On Dec 8, 2006, at 4:58 AM, Harvey Friedman wrote:
> On 12/7/06 8:12 AM, "Charles Silver" <silver_1 at mindspring.com> wrote:
>> Why should it be so surprising that PA is incomplete, and even (in a
>> sense) incompletable?
>> Or put the other way, why should we have thought PA (or, for Godel,
>> the system of Principia Mathematica and related systems) would have
>> to be complete? It has been alleged, for example, that at the time
>> of Godel's proof John von Neumann had been working on proving
>> *completeness* for PM or some related system? Why would von Neumann
>> have thought *intuitively* that the system could be proved complete?
>> I'm not intending the above to be questions of mathematical fact.
>> I'm just wondering what accounts for the shock so to speak of Godel's
>> Theorem. One answer I've read is to the effect that everyone at
>> the time thought PM was complete. But for me, that's not
>> satisfactory. I'd like to know *why* they should have thought it was
>> complete. Did they have *intuitions* for thinking it had to be
>> complete?
>> I'm also wondering, though this is a separate point, whether today
>> the theorem is not only not surprising, but perhaps even intuitively
>> obvious. One unsatisfactory answer would be that incompleteness is
>> now not surprising because we now know it holds. But do we now have
>> distinctly different *intuitions*, aside from the proof itself
>> (though of course the proof can't entirely be discounted), that
>> establish, let's say the "obviousness" of the result?
> I assume you are talking exclusively about Godel's First Theorem. Not
> Godel's Second Theorem.
> One reason it was regarded as surprising is that, up to that time,
> every
> single example of an arithmetic theorem was easily seen to be
> provable in
> PA. At that time, there was no idea that arbitrary arithmetic
> statements
> might differ fundamentally from the arithmetic statements that came
> up in
> the course of mathematics.
> Of course, it is still true that PA may be complete for all
> arithmetical
> sentences that obey certain intellectual criteria - criteria which are
> normally left informal. I have no doubt that a lot of people will
> be very
> surprised by examples of statements independent of PA that meet
> certain such
> informal criteria. Much more surprised if PA can be improved to ZFC.
> In fact, following a general line that I have discussed on the FOM
> fairly
> recently, one can simply ask if PA is complete for sentences that
> are not
> very long in primitive notation. This is of course a very difficult
> problem.
> ALSO: There are two senses of surprise for a theorem. One is that
> one is
> surprised by the fact that the theorem is true. The other is that
> one is
> surprised by the fact that anyone was able to prove that the
> theorem is
> true. There were probably many people who weren't too surprised
> that it is
> true, but who were shocked that anyone was able to prove such a thing.
> Harvey
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
FOM mailing list
FOM at cs.nyu.edu
No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.1.409 / Virus Database: 268.15.15/581 - Release Date: 09/12/2006
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-December/011163.html","timestamp":"2014-04-19T14:55:23Z","content_type":null,"content_length":"9297","record_id":"<urn:uuid:fc80391b-ad85-4f50-9e77-7a42dff6519d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to find dimension of any physical quantity?
@mukushla @mathslover @shruti @SUROJ @ashna @Preetha @PaxPolaris
I guess there are 3 dimensions: length, breadth and height
and a fourth dimension, time, if you consider it
In the SI system, the dimension of all physical quantities is a combination of the 7 fundamental quantities, such as:
Distance: Meter (L); Mass: Kilogram (M); Time: Second (T); Current: Ampere (I); Light: Candela (J); Heat: Kelvin (K); Concentration: Mole (N)
Ok so there are two types of units: 1) Basic (given above by ashna) 2) Derived. For example: speed = distance / time; the unit of distance is the metre and the unit of time is the second, hence the unit of speed is m/s
representation: [SPEED] = \(\large{[LT^{-1}]}\)
is this what u wanted @Ruchi. ?
@Ruchi. candela symbol is cd, sorry!
right, J is the symbol of the joule (unit of energy)
Any physical quantity Q can then be dimensionally expressed as [Q] = L^a M^b T^c
What seem to be powers in math are called dimensions in physics
P.S. though not everywhere
do you have any books to refer to ?
@mathslover i don't know how to find the dimension of any physical quantity.
Of any physical quantity? You mean like find the dimensions of velocity, speed, etc.?
Area = length * length = L^2. Understood, @Ruchi.?
by "any" i mean: plz tell me how we can find the dimension of any quantity.
ok
Volume = Length * area = Length * Length * Length = L^3
understood that ?
yes
do you intend someone else to explain, or am i to continue ?
plz continue
ok
Angle = Arc / Radius = no dimensions
you know why ?
no
arc = L, radius = L, so both get cancelled
understood ?
yes
Density = Mass / Volume = M / L^3 = M^1 L^-3
ok
OKAY ?
ok
velocity = displacement / time = LT^-1
since displacement = distance = L
how? i haven't understood it
okay, displacement = distance = L (length), divided by time (T)
ok
now okay ?
you are free to say if you didn't understand it still :)
can i ask u 1 thing
sure :)
okay, why have u taken distance = L?
distance is nothing but length.
distance travelled is the length travelled, understood ?
yes
displacement is just the other word for distance: displacement is the shortest distance travelled, while distance is the actual path length; either way, the dimension is L (length)
can i proceed ?
okay, above u have posted a table; i have found that table in many books but i don't know how and where i have to use it :(
Distance: Meter (L); Mass: Kilogram (M); Time: Second (T); Current: Ampere (I); Light: Candela (cd); Heat: Kelvin (K); Concentration: Mole (mol)
this is nothing, i'll explain, dont worry yourself now :)
ok
now i said distance = L, that is the first one, okay? and its unit is the meter.
anything you want to know ?
@Ruchi. you there ?
The dimension of a body is determined by the coordinate axes attached to it. There are 1D, 2D, 3D and 4D (which includes time). The existence of 4D is beyond our imagination.
hm
@ashna have u studied the chapter on work, energy & power?
hey ruchi .. tell me a time when you can stay on in open study for like hours .. not mins , i'll help you with dimensions then ;)
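The bookkeeping walked through in this thread (multiplying quantities adds their dimensional exponents; dividing subtracts them) is mechanical enough to code up. A small sketch of mine, not from the thread, with base symbols following the table above:

```python
# Dimensions as exponent maps over SI base quantities:
# L (length), M (mass), T (time), I (current), K (temperature),
# N (amount of substance), J (luminous intensity).

def dim(**exps):
    """Keep only nonzero exponents, so dimensionless results come out empty."""
    return {k: v for k, v in exps.items() if v != 0}

def mul(a, b):
    """Multiplying quantities adds their dimensional exponents."""
    keys = set(a) | set(b)
    return dim(**{k: a.get(k, 0) + b.get(k, 0) for k in keys})

def div(a, b):
    """Dividing quantities subtracts exponents."""
    keys = set(a) | set(b)
    return dim(**{k: a.get(k, 0) - b.get(k, 0) for k in keys})

L, M, T = dim(L=1), dim(M=1), dim(T=1)

speed = div(L, T)               # [speed]   = L T^-1
area = mul(L, L)                # [area]    = L^2
density = div(M, mul(area, L))  # [density] = M L^-3
angle = div(L, L)               # dimensionless: the L's cancel

assert speed == {"L": 1, "T": -1}
assert density == {"M": 1, "L": -3}
assert angle == {}
```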
| {"url":"http://openstudy.com/updates/5024af71e4b040cb090b88b4","timestamp":"2014-04-20T00:46:08Z","content_type":null,"content_length":"188630","record_id":"<urn:uuid:5585f702-d63b-4412-9722-e08863d8dfe2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Trees question
December 17th 2006, 05:43 PM #1
Junior Member
Dec 2006
Ok, I have 2 questions.
1) So the center of a tree is defined as the set of vertices that have the smallest eccentricity (meaning the number of edges in a longest path that begins at a vertex x). How do you prove by
induction that every tree has a center that consists of either one or two vertices?
2) Also I was wondering if T is a tree that has more than one edge, how do you prove that the center of T equals the center of p(T)? p(T) is defined as the tree that has all its leaves (leaves
are vertices of degree 1) taken out (as well as the edges attached to those degree 1 vertices).
Thanx to anyone who can help!
Last edited by faure72; December 18th 2006 at 08:46 AM. Reason: wanted to make the question more clear
I am actually studying trees at the moment, and have been working on a slight variant of your problem, so I'll give it a shot. In what follows, denote by d(A,B) the distance (length of the path)
from a vertex A to a vertex B.
Oddly, it seems the way to do this would be to prove 2) in order to prove 1). Proving 2) would go as follows: if A is any vertex on P(T), let B be the vertex on P(T) such that a longest path in
P(T) starting at A ends at B (so that A has eccentricity d(A,B)). We know that B is a leaf on P(T), for if it were not, we could just keep going and get a longer path from A. And since B is a leaf
on P(T), T is gotten from P(T) by adjoining some number of new leaves to B (as well as possibly some other leaves on other vertices). Take any new leaf C connected to B, and we have that, in T,
d(A,C) = d(A,B) + 1. We claim that d(A,C) is now the eccentricity of A in T. This is because, as we just argued, the distance from A to any other new leaf is simply the distance to that leaf's
"branch" plus 1.
Thus, we've shown that the eccentricity of a vertex A inside T is equal to the eccentricity of A inside P(T), plus 1. So if the eccentricity of A inside P(T) is smaller than that of another
vertex inside P(T), it must also be smaller inside T as well.
We have to finally argue that none of the new leaves on T can qualify as centers for T. Let L be a leaf on T, and let B be the vertex (branch) that it is connected to. Then the eccentricity of L
is equal to the eccentricity of B plus 1, so L immediately has greater eccentricity than another vertex in T. Thus L is not a center of T.
Thus we have proved 2).
Now, how to prove 1). I have not gone through this yet, but I'm pretty sure it can be proved using an induction argument. The trick is to realize that, starting with a tree T, and going to P(T),
and then to P(P(T)), etc, you will eventually end up with a tree having exactly one or two vertices. The theorem is obviously true for such trees. And from part 2), we know that the center of T =
center of P(T) = center of P(P(T)) = ... = center of the tree we finally end up with, which consists of 1 or 2 vertices.
Hope that was clear, let me know if any of it was not.
By the way, I'm glad you posted this, part 2) I think will be a big hint for me in solving a similar problem (the problem I'm working on defines the "eccentricity" of A to be the sum of all the
distances to all other vertices in T, and the center again to be the vertex with the smallest eccentricity).
January 6th 2007, 07:56 AM #2
Junior Member
Nov 2005
January 6th 2007, 08:00 AM #3
Junior Member
Nov 2005 | {"url":"http://mathhelpforum.com/discrete-math/9017-trees-question.html","timestamp":"2014-04-20T00:50:12Z","content_type":null,"content_length":"35836","record_id":"<urn:uuid:9e39ecb3-3a33-45da-b8d6-7f016c1488b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plastic potentials/ Flow rules
Submitted by tuhinsinha.25 on Thu, 2009-06-18 10:13.
I have a fundamental question regarding flow rules of finite plasticity models especially those used in soil mechanics. In most of the papers and books, I have seen the usage of an associated flow
rule with the plastic potential similar to the yield surface. However, I am unable to understand the means of obtaining a non-associative flow rule. I am using Abaqus with cap plasticity model
(modified Drucker Prager Cap model) to simulate powder compaction process. Abaqus uses a non-associative plastic flow rule definition in the shear yield space but doesn't provide any valid
explanation for the equation of the flow potential used. The non-associated flow rule definition results in a lower dilation than that predicted by an associated flow rule. Is the flow rule equation
something that is obtained arbitrarily and then fit using experiments, or is it driven by plastic strain data obtained from compaction experiments?
If it is the latter case, then I am confused about how one obtains the plastic flow potential surface by plotting incremental plastic strains in the stress space. Wouldn't it just give a qualitative
estimate of the surface, so that one would have to come up with a proper surface by trial and error?
Submitted by
on Thu, 2009-06-18 23:40.
1) if you work with soil, you will find out that not all materials obey an associated flow rule. Most do not dilate that much. To get closer to reality, you need to use a non-associated flow rule.
2) material parameters cannot come from "trial and error", period. For a complicated problem, you probably won't get the same response at all. However, you can propose an envelope that satisfies all
physics requirements.
You should take a look at Prof. Brannon's manual for the Sandia Geomodel where calibration issues are discussed. The physicality of a non-associated flow rule is still disputed, though it is a very
useful abstraction for certain features of pressure-dependent plasticity.
Prof. Brannon's work can be found at http://www.mech.utah.edu/~brannon/public/GeoModel8.pdf
-- Biswajit
Hi Biswajit,
Why is the physicality of non-associative plasticity disputed? Is it because of the violation of the maximum dissipation theorem, or the lack of normality? Would you mind giving us some references
on this topic?
WaiChing sun
I too am interested in the reasoning behind your statement that "The physicality of a non-associated flow rule is still disputed..." I know that such theories aren't as "nice" as associative theories
and can "cause problems" from a thermodynamics standpoint, but I think there are physical justifications. In crystalline plasticity, consider the case of bcc metals whose plastic deformation is
dominated by screw dislocations with "star-shaped" non-planar dislocation cores. In such materials, the "yield function" (stress state required to recombine the sessile dissociated dislocation into a
full glissile one) depends on the resolved shear stress on several intersecting slip planes, but the "flow potential" is the usual one for crystal plasticity depending on which slip plane the
recombined partials move in. In this case, the derivative of the yield function with respect to the stress is not the flow direction, and is therefore non-associative - see reference 1 for more
details. This is just one example for metallic systems, but there are other physical reasons as well. The interested reader could refer to Section 3.1.2 "Non-associative flow and non-Schmid effects
at various length scales" of the review article given in reference 2 below.
1. "Complex macroscopic flow arising from non-planar dislocation core structures," (2001) Bassani, Ito and Vitek, Mat Sci Eng A.
2. "Viscoplasticity of heterogeneous metallic materials," (2008), McDowell, D.L., Mat Sci Eng R.
For granular materials, usually you need to use non-associate plastic flow law, since many of them do not obey the associate flow rules. If you want to check the physical background, it will be
better that you switch from the ABAQUS manual to the original paper about this model. Like this one,
Drucker and Prager (1953) Soil mechanics and plastic analysis or limit design. Quarterly of Applied Mathematics, 10:157-164.
If you look at the number of the parameters in the DPC model, e.g. slope and intercept of the shear surface, and the shape and size of the cap surface, it is difficult to determine all the parameters
from simple test data. But you could find some correlations between these parameters...
Thanks a lot for your comments. I will definitely look into the above mentioned references about the flow rules.
In order to understand what is going on here you have to look at the problem in a different way. The easiest way is to start with a dissipation function and a dilatancy rule and then produce a yield
function from them. I will explain how this can be done in a minute. The special property of granular materials (like soil) is that the dissipation function is a function of the current stress and
not just the strain increments. If it is not a function of the current stress then the procedure I will outline gives the associated flow rule.
When plastic deformation takes place the work done equals the energy dissipated. If you guess values for the components of the strain increment that obey the dilatancy rule then you have a surface
in stress space. Pick other values of the strain increment and you get another curve - pick enough and you get an envelope that represents the yield surface. Inside the yield surface the work done is too
low to match the dissipation, whatever the strain increment. This procedure can be done mathematically.
You can also see non-associativity easily in friction problems. Here, an associative rule would expect sliding to lift up with respect to the sliding plane.
Clearly, all plasticity coming from friction, like in geomaterials, is expected to show some effects of this.
Some references in the context of this start from Drucker 1954, to the Drucker medal lecture of Jim Barber in 2009, which Jim sent me and I can forward to you if you ask me !
Shakedown in elastic contact problems with Coulomb friction
International Journal of Solids and Structures, Volume 44, Issues 25-26, 15 December 2007, Pages 8355-8365
A. Klarbring, M. Ciavarella, J.R. Barber
Shakedown of coupled two-dimensional discrete frictional systems
Journal of the Mechanics and Physics of Solids, Volume 56, Issue 12, December 2008, Pages 3433-3440
Young Ju Ahn, Enrico Bertocchi, J.R. Barber
Submitted by
on Sat, 2011-09-17 08:39.
Thanks a lot for your comments, overall.
| {"url":"http://imechanica.org/node/5679","timestamp":"2014-04-17T03:57:05Z","content_type":null,"content_length":"38026","record_id":"<urn:uuid:e081d6d1-a9b4-4d37-9533-87907bfc389b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
list comprehension
set abstraction ==>
list comprehension
<functional programming> An expression in a functional language denoting the results of some operation on (selected) elements of one or more lists. An example in Haskell:
[ (x,y) | x <- [1 .. 6], y <- [1 .. x], x+y < 10]
This returns all pairs of numbers (x,y) where x is an element of the list 1, 2, ..., 6, y <= x, and their sum is less than 10.
A list comprehension is simply "syntactic sugar" for a combination of applications of the functions concat, map and filter. For instance the above example could be written:
filter p (concat (map (\ x -> map (\ y -> (x,y))
                      [1..x]) [1..6]))
       where
       p (x,y) = x+y < 10
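For comparison (an addition, not part of the original dictionary entry), the same comprehension and its desugared map/filter form look like this in Python, where concat corresponds to itertools.chain.from_iterable:

```python
from itertools import chain

# Direct translation of the Haskell list comprehension.
pairs = [(x, y) for x in range(1, 7) for y in range(1, x + 1) if x + y < 10]

# Desugared into filter, concat (chain.from_iterable) and map.
def p(t):
    x, y = t
    return x + y < 10

desugared = list(filter(p, chain.from_iterable(
    map(lambda x: [(x, y) for y in range(1, x + 1)], range(1, 7)))))

assert pairs == desugared
print(pairs[:4])  # [(1, 1), (2, 1), (2, 2), (3, 1)]
```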
According to a note by Rishiyur Nikhil <nikhil@crl.dec.com>, (August 1992), the term itself seems to have been coined by Phil Wadler circa 1983-5, although the programming construct itself goes back
much further (most likely Jack Schwartz and the SETL language).
The term "list comprehension" appears in the references below.
The earliest reference to the notation is in Rod Burstall and John Darlington's description of their language, NPL.
David Turner subsequently adopted this notation in his languages SASL, KRC and Miranda, where he has called them "ZF expressions", set abstractions and list abstractions (in his 1985 FPCA paper
[Miranda: A Non-Strict Functional Language with Polymorphic Types]).
["The OL Manual" Philip Wadler, Quentin Miller and Martin Raskovsky, probably 1983-1985].
["How to Replace Failure by a List of Successes" FPCA September 1985, Nancy, France, pp. 113-146].
Last updated: 1995-02-22
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/set+abstraction","timestamp":"2014-04-20T06:02:30Z","content_type":null,"content_length":"6338","record_id":"<urn:uuid:963c09d8-d3da-40dd-9747-c01ddaa4ea38>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using a cubit equal to 18 inches, calculate the dimensions of Noah's Ark in feet
Gives ark dimensions as 450ft long, 75 feet wide and 45 feet high which implies an 18 inch cubit. ... "For present purposes I will assume the cubit equal to 18 inches" .... the Royal Cubit standard
to calculate the area of the palace foyer. .... [ Using the 18" cubit (p50), he concludes that the animals would require ... - 134k - Cached - Similar pages ]
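The arithmetic behind that snippet — Genesis 6:15 gives the ark as 300 x 50 x 30 cubits — is a single multiplication per dimension:

```python
CUBIT_FT = 18 / 12                      # an 18-inch cubit is 1.5 feet

length, width, height = 300, 50, 30     # the ark's dimensions in cubits
dims_ft = [d * CUBIT_FT for d in (length, width, height)]
print(dims_ft)  # [450.0, 75.0, 45.0]
```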
| {"url":"http://www.weegy.com/home.aspx?ConversationId=AE49EE79","timestamp":"2014-04-21T05:43:44Z","content_type":null,"content_length":"39388","record_id":"<urn:uuid:56f218c9-d000-4613-aef4-350a730732d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Joy of Mathematics
Ian Stewart
Ian Stewart is a Professor of Mathematics at the University of Warwick and the author of over 140 scientific papers and numerous textbooks. He is also a highly successful popular-science and
science-fiction writer and was the first recipient of the Christopher Zeeman Medal for his work on the public promotion of mathematics.
The Joy of Mathematics
For many of us, there is little on Earth that could be considered less creative, less inspiring and, quite simply, less interesting than mathematics.
But Ian Stewart – mathematician, popularizer and highly prolific writer – strongly disagrees. For Stewart, mathematics is far more than dreary arithmetic, while mathematical thinking is one of the
most important – and overlooked – aspects of contemporary society.
We caught up with Ian at the University of Warwick’s Mathematics Institute.
A conversation with Ian Stewart
Some Additional Resources:
Letters to a Young Mathematician
by Ian Stewart
In a series of letters to a fictional correspondent who aspires to be a mathematician, Ian takes up subjects ranging from the philosophical to the practical including what mathematics is and why it’s
worth doing, the relationship between logic and proof, the role of beauty in mathematical thinking, and the future of mathematics.
Visions of Infinity: The Great Mathematical Problems
by Ian Stewart
Ian’s most recent book, a history of mathematics as told through fourteen of its greatest problems, reveals how mathematicians the world over are rising to the challenges set by their predecessors.
Why Beauty Is Truth: The History of Symmetry
by Ian Stewart
Following the life and work of famous mathematicians from antiquity to the present, Professor Stewart traces how mathematics developed and handles the concept of Symmetry.
The Mathematics of Life
by Ian Stewart
This book provides an overview of the vital but little-recognized role mathematics has played in elucidating the complexities of the natural world. Professor Stewart explains how mathematicians and
biologists have come to work together on some of the most difficult scientific problems, including the nature and origin of life itself.
In Pursuit of the Unknown: 17 Equations That Changed the World
by Ian Stewart
In this book Professor Stewart uses a handful of mathematical equations to explore the vitally important connections between math and human progress.
Counting Sheep (Commentary Excerpt)
An engineer, a physicist and a mathematician are on a train that has just crossed into Scotland when outside of their window they see a black sheep standing in a field.
“How odd,” says the engineer. “All the sheep in Scotland are black”.
“No no!” counters the physicist indignantly. “You can’t just generalize like that. All we can say is that only some Scottish sheep are black.”
“You idiots,” sighs the mathematician, exasperated. “Always jumping to conclusions. The only thing we can say with any certainty is that there is at least one sheep in Scotland which is black on one side.”
This joke might surprise you for two reasons.
For starters, you may be shocked to discover that there is such a thing as “mathematical humour” at all. It’s not the sort of thing that most people are even aware of.
But more significantly, it might make you suspect that what mathematicians are – what they do, how they think and what motivates them – is very, very, different from what any long buried tussles with
long division might have naively led you to believe.
Of course, we frequently have a pretty superficial image of other professions. Policemen do more than chase bad guys, while doctors do more than dispense medicine. On the other hand, policemen do
chase bad guys and doctors do dispense medicine. Mathematicians, on the other hand, typically spend no more time doing long division than the rest of us. And most aren’t any better at it – or any
more excited at the prospect of doing it – than anyone else.
So what do they do all day?
According to Ian Stewart, Professor of Mathematics at the University of Warwick and a highly acclaimed writer of both popular math books and science fiction, real mathematics is far removed from
“It’s about form and structure and logical connections. If certain things happen, if a problem is set up in a particular way, it has certain ingredients. What does that tell you? How can you answer
it? It’s about problem solving, but it’s also about seeing the kind of elegant structure that opens up a better understanding of whatever it is you’re working on...”
For the full Commentary, purchase this issue from our site, or buy the eBook from Amazon.com or iBookstore, or download our app off Apple Newsstand. Each issue comes with the commentary, the full
conversation, a biography of our guest, as well as references for further exploration. | {"url":"http://www.ideasroadshow.com/issues/ian-stewart-2013-08-02","timestamp":"2014-04-18T00:55:56Z","content_type":null,"content_length":"17428","record_id":"<urn:uuid:fdad7c14-18eb-4fb0-a2d9-177b32be6304>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
catastrophe theory
A theory, developed by the French mathematician René Thom (1923–2003), that attempts to explain the behavior of complex dynamical systems by relating it to topology. The evolution of such systems
consists of steady continuous change interspersed with sudden major jumps, or "catastrophes," when the topology of the set changes.
Catastrophe theory has been applied, with varying degrees of success, to phenomena as diverse as earthquakes, stock market crashes, prison riots, and human conflicts, at the personal, group, and
societal level. The theory was first developed by Thom in a paper published in 1968 but became well known through his book Structural Stability And Morphogenesis (1972).^1 Many mathematicians took up
the study of catastrophe theory and it was in tremendous vogue for a while, yet it never achieved the success that its younger cousin chaos theory has because it failed to live up to its promise of
useful predictions.
Late in his career, the surrealist Salvador Dali painted Topological Abduction Of Europe: Homage To René Thom (1983), an aerial view of a seismically fractured landscape juxtaposed with the equation
that strives to explain it.
1. Thom, R. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Mass.: Addison-Wesley, 1993.
| {"url":"http://www.daviddarling.info/encyclopedia/C/catastrophe_theory.html","timestamp":"2014-04-19T01:57:22Z","content_type":null,"content_length":"7809","record_id":"<urn:uuid:d2348374-fb16-4010-a32a-75b2b3189410>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
Earth Systems Science
at Fort Lewis College
Course: Kim Hannula
Enrollment: 40-50
Challenges to using math in introductory geoscience
Fort Lewis College is a 4-year public liberal arts college in Durango, Colorado. The student population includes both traditional and non-traditional students, and is ~20% Native American. Our
students enter college with varied math backgrounds; some need two years of remediation before college algebra, whereas others have already taken calculus. Experience with computers and the internet
also varies - some students have grown up with Facebook and texting, but others come from rural communities that have no high-speed internet access. The math requirements in science majors can be a
barrier to graduation for some students.
More about your geoscience course
Earth Systems Science is an introductory geoscience course with a lab. It primarily serves general education students (who are all required to take two science courses in order to graduate). It is
also one of three introductory courses that can lead into a major in Geology or Environmental Geology, and recruits students who did not initially intend to major in geology (or in a science). It is
a required course for future secondary school science majors (biology, chemistry, and physical science as well as earth science), and is one of several geoscience options for students majoring in
Environmental Studies and Adventure Education.
The Math You Need is integrated into the required lab portion of the class. Labs and lectures are taught by the same instructor. There are no teaching assistants.
Inclusion of quantitative content pre-TMYN
Because many students take Earth Systems Science before taking any college-level math, I have avoided using much math in class. However, students did participate in a group research project, which
involves collecting, graphing, and interpreting data.
Which Math You Need Modules will/do you use in your course?
• Graphing (winter 2010)
• Plotting Points (future plans)
• Best Fit Line (future plans)
• Topographic Profile (future plans)
• Rates (future plans)
• Rearranging Equations (winter 2010)
• Slopes (winter 2010)
• Unit Conversions (winter 2010)
Strategies for successfully implementing The Math You Need
I required participation in The Math You Need as part of a pre-lab grade (5% of the total course grade). Students took an ungraded pre-test, completed modules (and post-module online quizzes) before
several of the labs, and took a graded post-test (identical to the pre-test). Students were able to try each question as many times as necessary, but were only able to take each quiz once. Most
weeks' assignments included material from whichever modules fit the content of the lab.
Week 1: Introduction/Measurements, accuracy, & precision exercise (pre-test)
Week 2: Topographic Maps (unit conversions, slopes)
Week 6: Plate Tectonics (unit conversions, slopes, rearranging equations)
Week 10: Sampling for Florida River project (unit conversions)
Week 12: Weather (unit conversions)
Week 14: No Lab (final assessment)
The Math You Need is integrated into a semester-long group research project (the
Florida River Project
, also archived on SERC).
Reflections and Results
Anecdotally, it appeared that the students' ability to use math during fieldwork improved over previous years. During the week 10 lab ("Sampling for Florida River Project"), students measure
discharge and collect water samples in a local stream. Our current meter measures flow in m/s, but many students are familiar with discharge measurements in cubic feet per second (because they raft
or kayak). Therefore, students need to do a unit conversion in order to compare their measurements with values that they understand. In the past, I have had to redo discharge calculations for many
groups. After using The Math You Need modules, there were at least some students who were comfortable with unit conversions in each group, and I overheard students arguing amongst themselves about
how to do the calculations.
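The conversions that tripped students up can be sketched in a few lines (my own illustration, not part of the course materials):

```python
M_TO_FT = 3.28084                  # 1 meter is about 3.28084 feet

def discharge_m3s_to_cfs(q):
    """Convert discharge from cubic meters/second to cubic feet/second."""
    return q * M_TO_FT**3           # the length factor must be cubed

def cm_per_yr_to_km_per_myr(v):
    """e.g. plate motion: 1 cm/yr = 1e-5 km/yr = 10 km per million years."""
    return v * 10.0

print(round(discharge_m3s_to_cfs(1.0), 1))   # 35.3
print(cm_per_yr_to_km_per_myr(2.5))          # 25.0
```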
The most frequent problems in using The Math You Need involved students forgetting their username or password for the WAMAP assessment site. I plan to assign usernames that are identical to the
students' college e-mail addresses in the future, to make it easier for students to remember how to get into the site. (2011 note: using college e-mail addresses solved many of the problems, although
some students didn't use their college e-mail accounts.)
I am not certain whether the scaffolding approach (assigning the same module for several labs, and repeating the same math skills in several contexts) was successful or not. Students still had
trouble with more complicated unit conversions (such as converting cm/year to km/million years, or converting cubic meters to cubic feet), and may not have looked through the modules for examples
that fit the kind of problem they were doing. (2011 note: the scaffolding approach seemed to work better the second time, perhaps because I mentioned the kinds of problems the class would be solving
when reminding the class of upcoming assignments.)
Earth Systems Science 2010 syllabus (Microsoft Word 98kB Jul14 10) Earth Systems Science 2011 syllabus (Microsoft Word 92kB Jul28 11) Topographic Maps lab (2010) (Microsoft Word 79kB Jul15 10)
Discharge/Water Sampling Lab (Microsoft Word 33kB Jul15 10) | {"url":"http://serc.carleton.edu/mathyouneed/implementations/47493.html","timestamp":"2014-04-18T21:05:10Z","content_type":null,"content_length":"26419","record_id":"<urn:uuid:d91e7e09-80b9-4855-8fe3-abc00f480268>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
Silverdale, WA Algebra 1 Tutor
Find a Silverdale, WA Algebra 1 Tutor
...Algebra I and II are among one of my favorites. I have studied these subjects thoroughly relating to linear equations, polynomials, word problems, graphing, etc. Since high school I have
worked hard to have a good foundation in Algebra II.
38 Subjects: including algebra 1, reading, English, physics
...I approach the material in a patient, non-threatening manner, and try to accommodate the student's needs and time-table to the best of my ability. It is my goal to make math an interesting, if
not enjoyable subject. I am detail oriented, and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience.
12 Subjects: including algebra 1, chemistry, geometry, algebra 2
...After that we can agree on course work and scheduling. I look forward to investing in your student's education and developing the building blocks for confident success. In the state of Oregon
I was certified to teach K-9, enclosed classroom, all subjects, until I retired in 2011 and my certification expired.
26 Subjects: including algebra 1, reading, writing, SAT math
...My methods of teaching are forever adaptable in that it is my number one goal to reach the student and work with them in a way that empowers them and allows them to feel safe to learn. In my
years of teaching, I have been praised as being a highly dedicated and diverse teacher that incorporates ...
11 Subjects: including algebra 1, reading, writing, geometry
...I have experience in the use of the DOM as well as user defined objects. I have experience in various aspects of programming with this and other other languages. I should be capable in the
writing of JQuery.
26 Subjects: including algebra 1, geometry, computer science, computer programming
| {"url":"http://www.purplemath.com/Silverdale_WA_algebra_1_tutors.php","timestamp":"2014-04-20T21:33:26Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:4212953a-7379-4375-ba8c-3fa420e1cc38>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
High rate of transition to transversion ??
Rasmus Nielsen rasmus at mws4.biol.berkeley.edu
Fri Mar 24 14:34:15 EST 1995
In article <3kbn07$5ht at montespan.pasteur.fr> Hassan Badrane
<hbadrane at pasteur.fr> writes:
> Hello evolutionists,
> I have many non-coding sequences. I have compared each one to the others.
> Among the results that I've got, there is the ratio of the transitions to
> the transversions. This number was almost the same for most pairs of
> sequences, but for a few pairs this ratio was very high (x3 to x11).
> In these last pairs the sequences came from isolates with a close
> relationship (close geographic origin).
> My question is: why and how can we explain this? Does it mean anything?
> I thank you very much for your help
> Hassan
> hbadrane at pasteur.fr
It may indicate that you are approaching saturation in the number of
substitutions in the other pairwise comparisons.
If the ratio of the rate of transition to transversions is high in your
data set, the inferred ratio of the number of transitions to transversions
(based on the number of nucleotide differences between pairs of sequences)
will level of, as the level of divergence increases. This is due to the
simple fact that transitions will be masked by transversions when the
level of divergence is sufficiently high (enough transversions have
Therefore, in your case, you would observe a high ts/tv ratio between the
closely related sequences (x3 and x11) but not between the other pairs.
However, the effect could of course also be due to stochastic factors. It
is an entirely different story how to distinguish between these two alternatives.
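The masking effect is easy to see in a toy simulation (my sketch, not from the original post): evolve a sequence under a Kimura-style two-class model in which transitions are 10x as likely as transversions, and compare the observed ts/tv ratio at low versus high divergence:

```python
import random

TS = {"A": "G", "G": "A", "C": "T", "T": "C"}       # transition partner
TV = {"A": "CT", "G": "CT", "C": "AG", "T": "AG"}   # transversion partners

def evolve(seq, n_subs, kappa=10.0):
    """Apply n_subs random substitutions; ts:tv rate ratio is kappa."""
    seq = list(seq)
    for _ in range(n_subs):
        i = random.randrange(len(seq))
        if random.random() < kappa / (kappa + 1.0):
            seq[i] = TS[seq[i]]
        else:
            seq[i] = random.choice(TV[seq[i]])
    return "".join(seq)

def observed_ts_tv(a, b):
    """Classify each differing site as a transition or a transversion."""
    ts = sum(1 for x, y in zip(a, b) if y == TS[x])
    tv = sum(1 for x, y in zip(a, b) if x != y and y != TS[x])
    return ts / max(tv, 1)

random.seed(0)
anc = "".join(random.choice("ACGT") for _ in range(20000))
low = observed_ts_tv(anc, evolve(anc, 400))       # ~2% divergence
high = observed_ts_tv(anc, evolve(anc, 200000))   # ~10 hits per site
print(low, high)  # the apparent ratio collapses at saturation
```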
More information about the Mol-evol mailing list | {"url":"http://www.bio.net/bionet/mm/mol-evol/1995-March/002533.html","timestamp":"2014-04-21T11:02:54Z","content_type":null,"content_length":"4330","record_id":"<urn:uuid:644e4228-f3ca-4e3e-a572-92327c07ecfa>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dense-Subset Break-the-Bank Challenge
I’m preparing for my first global travel for global health, but the net is paying attention to a paper that I think I’ll like, and I want to mention it briefly before I fly.
Computational Complexity and Information Asymmetry in Financial Products is 27 pages of serious TCS, but it is so obviously applicable that people outside of our particular ivory tower, and even
outside of academia entirely are blogging and twittering about it, and even reading it!
Freedom to Tinker has a nice summary of this paper, if you want to know what it’s about in a hurry.
Mike Trick makes the salient observation that NP-hard doesn’t mean computers can’t do it. But the assumption that this paper is based on is not about worst-case complexity; it is, as it should be,
based on an assumption about the average-case complexity of a particular optimization problem over a particular distribution.
As it turns out, this is an average-case combinatorial optimization problem that I know and love, the densest subgraph problem. My plan is to repeat the problem here, and share some Python code for
generating instances of it. Then you, me, and everyone can have a handy instance to try optimizing. I think that this problem is pretty hard, on average, but there is a lot more chance of making
progress on an algorithm for it than for cracking the P versus NP nut.
First, the Densest subgraph problem (bottom of p. 5):
Fix M, N, D, m, n, and d to be some parameters. The (average case, decision) densest subgraph problem with these parameters is to distinguish between the following two distributions R and P on
(M, N, D)-graphs, where R is obtained by choosing for every top vertex D random neighbors on the bottom; and P is obtained by first choosing random hidden subsets S from [N] and T from [M] with |S| =
n and |T| = m, and then choosing D random neighbors for every vertex outside of T, and D - d random neighbors for every vertex in T. We then choose d random additional neighbors in S for every
vertex in T.
Then the Densest subgraph assumption (middle of p. 6) is:
Let (N; M; D; n; m; d) be such that N = o(MD), $(m d^2/n)^2 = o(MD^2/N)$, then there is no $\epsilon > 0$ and poly-time algorithm that distinguishes between R and P with advantage $\epsilon$.
Or, to say the same thing in Python, with a little help from networkx:
import random
from networkx import Graph

def planted_dense_subgraph(M=1000, N=1000, D=500, m=25, n=25, d=15):
    """ Generate a bipartite graph with a planted dense subgraph
    (distribution P)

    Parameters
    ----------
    M, N, D, m, n, d : int, optional
        M and N are the sizes of the bipartitions and m and n are the
        sizes of the planted node sets. D is the degree of the M-vertices
        and d is the number of edges from an m-vertex to n-vertices

    Returns
    -------
    G : Graph
        A bipartite graph, with vertices T_1, ..., T_M and B_1, ..., B_N
    T_hidden, B_hidden : lists
        The vertex sets of size m and n that are hidden in the T and B
        bipartitions
    """
    T = ['T_%d' % i for i in range(M)]
    B = ['B_%d' % i for i in range(N)]
    T_hidden = random.sample(T, m)
    B_hidden = random.sample(B, n)

    G = Graph()
    for t in T:
        if t in T_hidden:
            G.add_star([t] + random.sample(B, D - d))
            G.add_star([t] + random.sample(B_hidden, d))
        else:
            G.add_star([t] + random.sample(B, D))
    return G, T_hidden, B_hidden

def random_graph(M=1000, N=1000, D=500):
    """ Generate a bipartite graph without a planted dense subgraph
    (distribution R)

    Parameters
    ----------
    M, N, D : int, optional
        M and N are the sizes of the bipartitions and D is the degree of
        the M-vertices

    Returns
    -------
    G : Graph
        A bipartite graph, with vertices T_1, ..., T_M and B_1, ..., B_N
    """
    T = ['T_%d' % i for i in range(M)]
    B = ['B_%d' % i for i in range(N)]
    G = Graph()
    for t in T:
        G.add_star([t] + random.sample(B, D))
    return G
If I give you the graph produced by one of these functions, you can’t tell me which function I used with any more accuracy than if you flip a coin to decide.
As the authors say, this is an assumption. It could be proven false by a clever algorithm tomorrow.
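To make the challenge concrete, here is a deliberately naive distinguisher (an added sketch, not from the post; the function names and the toy parameters are invented): score a graph by the sum of its n largest bottom-vertex degrees. With small toy parameters the planted set is plainly visible; with parameters in the regime of the assumption, like the defaults above, the same gap is swamped by sampling noise.

```python
import random

def bottom_degrees(M, N, D, m=0, n=0, d=0, seed=0):
    """Degree sequence of the bottom vertices, with an optional
    planted dense subgraph (m, n, d as in the functions above)."""
    rng = random.Random(seed)
    deg = [0] * N
    hidden = rng.sample(range(N), n) if n else []
    for t in range(M):
        in_T = t < m  # treat the first m top vertices as the hidden set T
        for b in rng.sample(range(N), D - d if in_T else D):
            deg[b] += 1
        if in_T:
            for b in rng.sample(hidden, d):
                deg[b] += 1
    return deg

def score(deg, n):
    # sum of the n largest bottom-vertex degrees
    return sum(sorted(deg, reverse=True)[:n])

# Toy parameters, chosen so the planted set sticks out:
planted_score = score(bottom_degrees(400, 400, 20, m=20, n=20, d=15), 20)
random_score = score(bottom_degrees(400, 400, 20), 20)
```

Here each hidden bottom vertex gains roughly d extra edges over a baseline degree of about MD/N = 20, which is several standard deviations of the binomial noise; scale the parameters toward the assumption's regime and the two scores become statistically indistinguishable.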
4 responses to “Dense-Subset Break-the-Bank Challenge”
1. One direction for such a clever algorithm is the tri-linear form optimization from A new approach to the planted clique problem.
2. And, in case it’s a pain to get Python/NetworkX running for you, here are some edge lists for the hidden graph and the random graph.
3. The planted clique is one of my favorite problems too. And a new paper on it is here:
Filed under combinatorial optimization, cryptography, TCS
Another Physics Question
A green 4 kg toy car with a speed of 5 m/s collides head-on with a stationary yellow 1-kg car. After the collision, the cars are locked together with a speed of 4 m/s. How much kinetic energy is
lost in the collision?
a. What is the kinetic energy of the green 4-kg toy car BEFORE the collision?
b. What is the kinetic energy of the yellow 1-kg toy car BEFORE the collision?
c. What is the kinetic energy of the two cars AFTER the collision?
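Worked through numerically (an added sketch; it assumes only KE = ½mv² and the figures given in the problem):

```python
# Kinetic-energy bookkeeping for the collision described above.
m_green, v_green = 4.0, 5.0      # kg, m/s
m_yellow, v_yellow = 1.0, 0.0    # the yellow car is stationary
v_final = 4.0                    # the cars are locked together afterwards

# Momentum check: 4*5 + 1*0 = 20 = (4+1)*4, so the data are consistent.
ke_green_before = 0.5 * m_green * v_green**2             # 50 J
ke_yellow_before = 0.5 * m_yellow * v_yellow**2          # 0 J
ke_after = 0.5 * (m_green + m_yellow) * v_final**2       # 40 J
ke_lost = ke_green_before + ke_yellow_before - ke_after  # 10 J
```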
Proceedings of PASSHEMA
Conference Proceedings of PASSHEMA by year
Dr. Muhammad Aslam - Lock Haven University Iterative Regularization and its Applications
Dr. Donna Dietz - Mansfield University/ University of Pennsylvania Cell Complexes Arising from Bouncing Light Rays
Dr. Michael Ecker - PSU Wilkes-Barre Unifying Results Via L'Hopital's Rule
Dr. Xianrui Meng - Bloomsburg University Perfect Distance Tree
Dr. Richard Mikula - Lock Haven University An Introduction to Curvature
Dr. Adam Roberts - Clarion University Reluctant Geometer
Dr. Michael Robinson - University of Pennsylvania Localization of Mobile Receivers using Opportunistic Signal Sources
Dr. Philippe Savoye - Mansfield University Using Analog Circuits to Motivate the Laplace Transform (also in pdf)
Dr. Carl Smith - Mansfield University Mathematics Learning for the Economics and Finance Student
Dr. Venkatesh Tamraparni - Fullbright Scholar to Mansfield University Ramanujan: the man and his mathematics
Anthony Berard and Joseph Evan - King's College Logic and Axiomatics, the foundation for the mathematics major at King's
Matthew Davis and Shiv K. Gupta (West Chester University) A small correction to a paper of Vandermonde
Dr. Donna Dietz (Mansfield University) Logic circuits laboratory for an undergraduate course in Discrete Mathematics
Noel Heitmann and Stephen Peurifoy (Millersville University) Stabilization of the Evolutionary Convection-Diffusion Problem: Introduction and Experiments
Dr. Premalatha Junius (Mansfield University) Proofs in the Classroom
Marc Renault (Shippensburg University) Andre and the Ballot Problem - History and a Generalization
Amy Bressler (Edinboro University of Pennsylvania) The Unit Circle by Right Triangles
Dr. Kevin Ferland (Bloomsburg University) Building the Toughest Networks
Dr. Akhtar Mahmood (Assistant Professor of Physics) Jack Dougherty (Undergraduate Research Assistant) Edinboro University of Pennsylvania Application of Group Theory in Particle Physics using the
Young Tableaux Method
Dr. Lyn Phy (Kutztown University) The Process of Motivation in the Mathematics Classroom
Rick White (Edinboro University) Reflection Groups and Spacetimes
Dr. Gregory Wisloski(Indiana University of Pennsylvania) Continuity in Semi-Metric Spaces
Kevin Ferland - Bloomsburg University The What, How, and Why of Pi
Dr. Donna Dietz (Mansfield University) Three-dimensional manipulatives for undergraduate geometry classes
Kevin Ferland - Bloomsburg University What is Discrete Mathematics?
Francis J. Vasko (Kutztown University) and Dennis D. Newhart Does Marilyn Know her Game Theory? (from SIAM News)
Kevin Ferland - Bloomsburg University Why Maximum Toughness is Tougher than Maximum Connectivity
Paul A. Loomis and Matthew B. Severcool - Bloomsburg University Bounding the density of abundant numbers
John B. Polhill - Bloomsburg University A Brief Survey of Difference Sets, Partial Difference Sets, and Relative Difference Sets
Jen Bergman, Doug Hogan, and Martin Schettler (Juniata College) I'm Leaving on an Overbooked Jet Plane
Andrew Clark, Megan Holben, and Mike Scatton (Bloomsburg University) Spouting Off About Fountains
Kevin Ferland (Bloomsburg University) Optimal Networks Have Maximum Toughness
Amadou Guisse (Edinboro University) A Seven Step Algorithm for Integration by Substitution
Paul Hartung and Elizabeth Mauch (Bloomsburg University) Content Mathematics Course for Elementary Teachers
Howard Iseri (Mansfield University) Impulse Gauss Curvatures (Also published in the Smarandache Notions Journal)
Lisa Lister and John Polhill (Bloomsburg University) Developments in Precalculus and Calculus at Bloomsburg University
Youmin Lu (Bloomsburg University) Asymptotics of Solutions to a Non-Linear Differential Equation
Elizabeth Mauch (Bloomsburg University) Incorporating Geometry and Statistics into the Elementary Classroom
Scott McClintock (West Chester University) A Radical Expression for cos(2*Pi / p)
Earl D. Packard (Kutztown University) Preparing Mathematical Animations for the Web with Maple 6
Dr. Lyn Phy (Kutztown University) Curriculum Infusion Project: Is Kutztown University a Party School?
Elaine Carbone (Clarion University) and Frank Marzano (Edinboro University) Sharing Assessment Ideas
Niandong Shi (East Stroudsburg University) Discrete Mathematics
Marc Fowler, Ryan Love, and Scott Savidge, Students; Advisor Professor Kevin Ferland, Bloomsburg University - Playing Darts with Elephants
John Polhill and Reza Noubary, Bloomsburg University - A Statistical Analysis of Major League Baseball's Single-Season Home Run Record
Kevin Ferland, Bloomsburg University - Toughness of Graphs
Steve Gendler, Clarion University - Mathematical Connections: Modeling in Business Calculus
Carol Rehn, Lock Haven University - Samples of Collaborative Learning Activities taken from Intermediate Algebra, Introductory Algebra, Basic Mathematics, and Consumer Mathematics
Kaddour Boukaabar, Ph.D., California University of Pennsylvania - Discrete Mathematics with the Towers of Hanoi
N. Paul Schembari, East Stroudsburg University - Fun with Fourier Series
Steven Gendler, Clarion University of Pennsylvania - How I Use MAPLE in Teaching and Research
Howard Iseri, Mansfield University - Minimal Surfaces and Plateau's Problem
Paul R. Wolfson, West Chester University - Teaching Mathematics Using the Genetic Approach
Steven Gendler, Clarion University of Pennsylvania - Precalculus- Where Is It and Where Is It Going?
Clifford Johnston, West Chester University - Evasion and Pursuit on a Football Field
Gene Fiorini, Shippensburg University - Extremal Properties of Generalized 4-Gons
N. Paul Schembari, East Stroudsburg University - Mathematics of Finance as General Education
1997- Volume One
Dr. Youmin Lu, Wilson College, Chambersburg PA - The Asymptotics of the Negative Solutions to the Fifth Painleve Equation -- III
Deborah Gochenaur, Shippensburg University - A Distribution Problem
Dr. Jean Werner, Mansfield University - Are the students in Elementary Statistics classes using statistical reasoning?
Mike McConnell, Clarion University - Using Writing Assignments in Teaching Mathematics to Non-Science Majors
Zhoude Shao, Millersville University - Approximate inertial manifolds for Fitzhugh-Nagumo type systems
Cindy Martin, Carlisle Area High School, Carlisle PA - On a Class of Counterexamples to a Generalization of Ore's Theorem
John H. Riley, Jr., Bloomsburg University - Precalculus Revelations
Stephen I. Gendler, Clarion University - The New Precalculus-Experiences of an Old Precalculus Teacher
E. Dennis Huthnance, Bloomsburg University - Using Computers to Notate Music
Reza Noubary, Bloomsburg University - Some statistical paradoxes
Clifford A. Johnston, West Chester University - Regularity of the Viscosity Solution of Hamilton-Jacobi-Bellman Equations with Large Discount Factor
N. Paul Schembari, East Stroudsburg University - Arizona Differential Equations at ESU
Frank Marzano, Edinboro University - Vedic Mathematics: The Scientific Heritage of Ancient India
Compiler's Note:
These Proceedings include papers from some of the presentations and discussion groups given at the PASSHEMA Conferences. The compiler has made no attempt to edit these papers.
John R. Ragazzini is credited with coining the term "Operational Amplifier".
The June 2005 Vol25 No3 issue of the IEEE's Control Systems Magazine has an excellent retrospective and current state of the Analog Computer. Dr. Kent Lundberg of MIT, edited the 9 articles on this
topic in this issue. On page 67 of this issue there is the following excerpt:
"As an amplifier so connected can perform the mathematical operations of arithmetic and calculus on the voltages applied to its input, it is hereafter termed an 'Operational Amplifier'."
-John R. Ragazzini et al., "Analysis of problems in dynamics by electronic circuits", Proceedings of the IRE, vol. 35, p. 444, May 1947.
The following scans of this Ragazzini article came from the famous Palimpsest mentioned in the Recommended Reading section.
By GEORGE JOHNSON; George Johnson, the author of ''Machinery of the Mind: Inside the New Science of Artificial Intelligence,'' is an editor of The Week in Review of The New York Times.
Published: June 21, 1987
FOREVER UNDECIDED A Puzzle Guide to Godel. By Raymond Smullyan. 257 pp. New York: Alfred A. Knopf. $17.95.
IN the domain of mathematics, Kurt Godel's Incompleteness Theorem enjoys the same notoriety Werner Heisenberg's Uncertainty Principle has in physics. Both of these curious, often misunderstood ideas
are seized upon by writers, scientists and the occasional mystic as founts of paradox, demonstrating that at its heart reality is twisted in astonishing ways.
Kurt Godel ended the quest for a perfect, all-encompassing mathematics by showing in 1931 that any formal system that is at least as complex as arithmetic is (like a diamond) inherently flawed. It
must be either incomplete or inconsistent. To be complete such a system must be able to prove that any formula expressible in its language (a mathematical equation, for example, or a statement in
symbolic logic) is either true or false; nothing can be undecidable. To be consistent it must contain no errors. One would think that a system lacking either quality should be declared in violation
of its warranty and returned to the manufacturer. But Godel showed that if you have one of these characteristics you can't have the other; they are mutually exclusive.
Godel proved his theorem by demonstrating that our various systems of mathematics and logic have the ability to talk about themselves. Because of this self-referential quality, paradoxes arise,
rendering them unreliable. More specifically, Godel proved that for every sufficiently complex formal system (and only the complex ones are interesting) there is a proposition that says, in effect,
''This statement is not provable.'' If the statement is true, then the system is incomplete - there is one statement that it is incapable of proving. If the statement is false, then it must be
provable - so we can prove that it is unprovable. Thus the system contradicts itself.
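The self-reference at the heart of the construction has a neat programming analogue (an added aside, not part of the review): a quine, a program that prints its own source by applying a template to a quoted copy of itself - the same diagonal trick behind Godel's sentence.

```python
# A two-line quine: the template is filled in with its own repr, so the
# program's output is exactly the program's source.
template = 'template = %r\nprint(template %% template)'
print(template % template)
```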
In ''Forever Undecided'' Raymond Smullyan - magician, musician, logician, mathematician and professor of philosophy at Indiana University - continues the explorations of the Godelian universe he
pursued in other books, such as ''The Lady or the Tiger?'' and ''To Mock a Mockingbird.'' But this time he adds a dimension that enriches our understanding of the Incompleteness Theorem while
enhancing its maddening elusiveness. This is a difficult book to describe, and so it is probably best to approach it with Mr. Smullyan's favorite rhetorical device, the logical puzzle.
Say you come to class on Monday and the professor announces she will give a surprise examination some time that week. ''On the morning of the exam,'' she tells the class, ''you won't know that this
is the day of the test.'' Well, the test can't be on Friday, you reason, because if you haven't had it by then you'll know it's inevitable, and then it won't be a surprise. But for the same reason
the test can't be on Thursday. For you've already ruled out Friday, making Thursday the last possible day. Similarly you rule out Wednesday, Tuesday and Monday. So, you conclude, there won't be a
test after all. Just as you've confidently decided that the professor is lying, she announces, ''The test will now begin.'' By doubting her, you've helped her fulfill her threat.
This story, like the others in the book, plays on the strange things that happen when the notion of belief is injected into logic. Through a series of increasingly demanding puzzles, Mr. Smullyan
introduces the reader to the curious situations that arise when we deal with ''conceited'' reasoners, who believe that they are infallible (even if they are not); ''peculiar'' reasoners, who believe
a statement but believe that they don't; ''timid'' reasoners; ''modest'' reasoners, and other similar types. Mr. Smullyan also deals (ever so logically) with the implications of self-awareness - in
reasoners who are aware that they are reasonable, and who are aware that they are aware. WITH these self-referential loops, Mr. Smullyan ties readers in Godelian knots. Before he is done he has
brought us near the forefront of logic (where, yes, research is still going on). He introduces the strange fields of modal logic and ''possible worlds'' semantics. He examines the notion of
self-fulfilling beliefs (''Could this have anything to do with religion?'' he asks). Ultimately what he offers is a painstakingly formal approach to the idea of belief systems. The implications to
psychology and artificial intelligence don't escape him though he chooses not to confront them here.
Raymond Smullyan is a master at translating difficult ideas into stories and puzzles that require no formal background, only patience and a passion to learn. For despite its whimsical tone, this is
not an easy book. A page of Mr. Smullyan requires a great deal of thinking. His many previous works have attracted a loyal, enchanted following. This book can only add to his readership and acclaim.
Inhibition theory
Inhibition theory is based on the basic assumption that, during the performance of any mental task, which requires a minimum of mental effort, the subject actually goes through a series of
alternating states of distraction (state 0) and attention (state 1) are latent states, which cannot be observed and of which the subject is completely unaware.
Additionally, the concept of inhibition is introduced, which is also latent. The assumption is made that during states of attention inhibition linearly increases with a certain slope a[1], and
during states of distraction inhibition linearly decreases with a certain slope a[0]. According to this view the distraction states can be considered as a sort of recovery states. It is further
assumed, that when the inhibition increases during a state of attention, depending on the amount of increase, the inclination to switch to a distraction state also increases and when the inhibition
decreases during a state of distraction, depending on the amount of decrease, the inclination to switch to an attention state increases. The inclination to switch from one state to the other is
mathematically described as a transition rate or hazard rate, which makes the whole process of alternating distraction times and attention times a stochastic process.
If one thinks of a non-negative continuous random variable T as representing the time until some event will take place, then the hazard rate λ(t) for that random variable is defined to be the
limiting value of the probability that the event will take place in a small interval [t, t+Δt], given the event has not occurred before time t, divided by Δt. Formally, the hazard rate is defined
by the following limit:
λ(t) = lim[Δt→0] P(t ≤ T < t+Δt | T ≥ t) / Δt
The transition rates λ[1](t), from state 1 to state 0, and λ[0](t), from state 0 to state 1, depend on inhibition Y(t): λ[1](t) = l[1](Y(t)) and λ[0](t) = l[0](Y(t)), where l[1] is a non-decreasing
function and l[0] is a non-increasing function. Note that l[1] and l[0] are dependent on Y, whereas Y is dependent on t. Specification of the functions l[1] and l[0] leads to the various inhibition
models. What can be observed in the test are the actual reaction times. A reaction time is the sum of a series of alternating distraction times and attention times, which both cannot be observed.
However, it is nevertheless possible to estimate from the observable reaction times some properties of the latent process of distraction times and attention times, such as the average distraction
time, the average attention time and the ratio a[1]/a[0].
In order to be able to simulate the consecutive reaction times, inhibition theory has been specified into various inhibition models. One is the so-called beta inhibition model. In the beta-inhibition
model, it is assumed that the inhibition Y(t) oscillates between two boundaries which are 0 and M (M for Maximum), where M is positive. In this model l[1] and l[0] are as follows:
l[1](y) = c[1] / (M - y), with c[1] > 0
l[0](y) = c[0] / y, with c[0] > 0.
Note that, according to the first assumption, as y goes to M (during a work interval), l[1](y) goes to infinity and this forces a transition to a state of rest before the inhibition can reach M. Note
further that, according to the second assumption, as y goes to zero (during a distraction), l[0](y) goes to infinity and this forces a transition to a state of work before the inhibition can reach zero.
For a work interval starting at t[0] with inhibition level y[0]=Y(t[0]) the transition rate at time t[0]+t is given by λ[1](t) = l[1](y[0]+a[1]t). For a non-work interval starting at t[0] with
inhibition level y[0]=Y(t[0]) the transition rate is given by λ[0](t) = l[0](y[0]-a[0]t). Therefore
λ[1](t) = c[1] / (M - (y[0] + a[1]t)), with c[1] > 0
λ[0](t) = c[0] / (y[0] - a[0]t), with c[0] > 0.
The model has Y fluctuating in the interval between 0 and M. The stationary distribution of Y/M in this model is a beta distribution (reason to call it the beta inhibition model).
The total real working time until the conclusion of the task (or the task unit in case of a repetition of equivalent unit tasks, such as is the case in the ACT) is referred to as A. The average
stationary response time E(T) may be written as
E(T) = A + (a[1]/a[0])A.
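A crude time-stepping simulation of the beta inhibition model (an added illustration; all constants, the starting inhibition level, and the step size are arbitrary choices, not from the source) shows reaction times built from alternating attention and distraction intervals:

```python
import random

def simulate_reaction_time(A=1.0, M=10.0, a1=5.0, a0=5.0,
                           c1=1.0, c0=1.0, dt=1e-3, rng=None):
    """One reaction time under the beta inhibition model (Euler steps).

    Inhibition y rises with slope a1 while attending and falls with
    slope a0 while distracted; the hazard of switching state is
    c1/(M - y) during attention and c0/y during distraction.  The
    trial ends once the accumulated attention (work) time reaches A.
    """
    rng = rng or random.Random(0)
    y = M / 2.0            # arbitrary starting inhibition level
    attending = True
    work_done = 0.0
    t = 0.0
    while work_done < A:
        if attending:
            hazard = c1 / max(M - y, 1e-9)
            y = min(y + a1 * dt, M)
            work_done += dt
        else:
            hazard = c0 / max(y, 1e-9)
            y = max(y - a0 * dt, 0.0)
        if rng.random() < min(hazard * dt, 1.0):  # state switch
            attending = not attending
        t += dt
    return t

mean_t = sum(simulate_reaction_time(rng=random.Random(s))
             for s in range(50)) / 50
```

Every simulated reaction time exceeds the pure work time A, and the surplus is the recovery time that the formula E(T) = A + (a[1]/a[0])A accounts for in the stationary regime.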
In the limit as M goes to infinity, λ[1](t) reduces to the constant c[1]. This model is known as the gamma- or Poisson inhibition model (see Smit and van der Ven, 1995).
Smit, J.C. and van der Ven, A.H.G.S. (1995). Inhibition in Speed and Concentration Tests: The Poisson Inhibition Model. Journal of Mathematical Psychology, 39, 265-273.
Nullspaces of a Matrix
The standard way to prove that two sets, A and B, say, are equal is to prove A is a subset of B and then prove that B is a subset of A. And you prove A is a subset of B by starting "if x is in A" and
then using the properties of A and B to conclude "x is in B".
Here, the two sets are Null(M) and Null(M^T M). If x is in Null(M) then Mx = 0. It then follows immediately that M^T(Mx) = M^T(0) = 0. That's the easy way. If x is in Null(M^T M) then M^T(Mx) = 0.
Obviously, if Mx = 0 we are done. What can you say about a non-zero vector v = Mx such that M^T v = 0? (Hint: such a v would lie in both the column space of M and the null space of M^T.)
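The converse direction can also be closed with one line of algebra: if M^T M x = 0 then x^T M^T M x = ||Mx||² = 0, so Mx = 0. A quick numerical sanity check of the conclusion (an added sketch; the matrix here is arbitrary):

```python
import numpy as np
from numpy.linalg import matrix_rank

# Since Null(M) = Null(M^T M), the two null spaces have the same
# dimension, which we can check numerically on a rank-deficient matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4, rank 3

n_cols = M.shape[1]
dim_null_M = n_cols - matrix_rank(M)
dim_null_MtM = n_cols - matrix_rank(M.T @ M)
```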
Math Help
January 18th 2010, 09:21 PM #1
1) If a chord of the parabola y²=4ax subtends a right angle at the vertex, show that the tangents at the extremeties meet on the line x+4a=0.
2) A normal at point 't₁' of the parabola y²=4ax cuts the parabola again at point 't₂'. Then prove that t₂ ∈ (-∞, -2√2] ∪ [2√2, ∞).
1. The tangent point T lies on the positive branch of the parabola (y > 0) and the line y = x. Calculate T. (for confirmation only: T(4a, 4a))
2. Calculate the slope of the tangent using implicit differentiation. (for confirmation only: $y'=\frac{2a}{2\sqrt{ax}}$ )
3. Calculate the equation of the tangent in T. (for confirmation only: $y = \frac12 x +2a$ )
4. According to the method shown for the first tangent determine the equation of the second tangent. (for confirmation only: $y = -\frac12 x -2a$ )
5. Determine the point of intersection between the two tangents and show that it satisfies the given equation.
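The steps above treat the symmetric chord along y = x; the general case of problem 1 can be checked with a short sympy script (an added sketch using the standard parametrisation (at², 2at), not part of the original reply):

```python
from sympy import symbols, solve, simplify

a, t1, t2, x, y = symbols('a t1 t2 x y')

# Parametrise y^2 = 4ax by (a t^2, 2 a t).  The chord joining t1 and t2
# subtends a right angle at the vertex when the slopes 2/t1 and 2/t2 of
# the two radii multiply to -1, i.e. when t1*t2 = -4.
# The tangent at parameter t is t*y = x + a*t^2; intersect two of them:
meet = solve([t1*y - x - a*t1**2, t2*y - x - a*t2**2], [x, y], dict=True)[0]
x_meet = simplify(meet[x])              # comes out as a*t1*t2
x_right_angle = x_meet.subs(t2, -4/t1)  # impose the right-angle condition
```

Since x_meet = a·t1·t2 and the right-angle condition forces t1·t2 = -4, the intersection point satisfies x = -4a, i.e. it lies on the line x + 4a = 0.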
January 19th 2010, 12:10 AM #2
Fraunhofer (ITWM)
29 search hits
Customer loads correlation in truck engineering (2009)
K. Dreßler M. Speckert R. Müller Ch. Weber
Safety and reliability requirements on the one side and short development cycles, low costs and lightweight design on the other side are two competing aspects of truck engineering. For safety
critical components essentially no failures can be tolerated within the target mileage of a truck. For other components the goals are to stay below certain predefined failure rates. Reducing
weight or cost of structures often also reduces strength and reliability. The requirements on the strength, however, strongly depend on the loads in actual customer usage. Without sufficient
knowledge of these loads one needs large safety factors, limiting possible weight or cost reduction potentials. There are a lot of different quantities influencing the loads acting on the vehicle
in actual usage. These ‘influencing quantities’ are, for example, the road quality, the driver, traffic conditions, the mission (long haulage, distribution or construction site), and the
geographic region. Thus there is a need for statistical methods to model the load distribution with all its variability, which in turn can be used for the derivation of testing specifications.
An improved multiaxial stress-strain correction model for elastic FE postprocessing (2009)
H. Lang K. Dreßler
In this paper, the model of Köttgen, Barkey and Socie, which corrects the elastic stress and strain tensor histories at notches of a metallic specimen under non-proportional loading, is improved.
It can be used in connection with any multiaxial σ-ε law of incremental plasticity. For the correction model, we introduce a constraint for the strain components that goes back to the work of
Hoffmann and Seeger. Parameter identification for the improved model is performed by Automatic Differentiation and an established least squares algorithm. The results agree accurately both with
transient FE computations and notch strain measurements.
A generic geometric approach to territory design and districting (2009)
J. Kalcsics S. Nickel M. Schröder
Territory design and districting may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable
according to relevant planning criteria. The availability of GIS on computers and the growing interest in Geo-Marketing leads to an increasing importance of this area. Despite the wide range of
applications for territory design problems, when taking a closer look at the models proposed in the literature, a lot of similarities can be noticed. Indeed, the models are many times very
similar and can often be, more or less directly, carried over to other applications. Therefore, our aim is to provide a generic application-independent model and present efficient solution
techniques. We introduce a basic model that covers aspects common to most applications. Moreover, we present a method for solving the general model which is based on ideas from the field of
computational geometry. Theoretical as well as computational results underlining the efficiency of the new approach will be given. Finally, we show how to extend the model and solution algorithm
to make it applicable for a broader range of applications and how to integrate the presented techniques into a GIS.
An energy conserving numerical scheme for the dynamics of hyperelastic rods (2009)
Th. Fütterer A. Klar R. Wegener
A numerical method for special Cosserat rods based on Antman’s description [1] is developed for hyperelastic materials and potential forces. This method preserves the relevant properties of the
underlying PDE system, namely the orthonormality of the directors and the conservation of the energy.
Design of pleated filters by computer simulations (2009)
A. Wiegmann L. Cheng E. Glatt O. Iliev S. Rief
Four aspects are important in the design of hydraulic filters. We distinguish between two cost factors and two performance factors. Regarding performance, filter efficiency and filter capacity are of
interest. Regarding cost, there are production considerations such as spatial restrictions, material cost and the cost of manufacturing the filter. The second type of cost is the operation cost,
namely the pressure drop. Albeit simulations should and will ultimately deal with all 4 aspects, for the moment our work is focused on cost. The PleatGeo Module generates three-dimensional
computer models of a single pleat of a hydraulic filter interactively. PleatDict computes the pressure drop that will result for the particular design by direct numerical simulation. The evaluation
of a new pleat design takes only a few hours on a standard PC compared to days or weeks used for manufacturing and testing a new prototype of a hydraulic filter. The design parameters are the shape
of the pleat, the permeabilities of one or several layers of filter media and the geometry of a supporting netting structure that is used to keep the outflow area open. Besides the underlying
structure generation and CFD technology, we present some trends regarding the dependence of pressure drop on design parameters that can serve as guidelines for the design of hydraulic filters.
Compared to earlier two-dimensional models, the three-dimensional models can include a support structure.
Hierarchy of mathematical models for production processes of technical textiles (2009)
A. Klar N. Marheineke R. Wegener
In this work we establish a hierarchy of mathematical models for the numerical simulation of the production process of technical textiles. The models range from highly complex three-dimensional
fluid-solid interactions to one-dimensional fiber dynamics with stochastic aerodynamic drag, and further to efficiently tractable stochastic surrogate models for fiber lay-down. The models are
theoretically and numerically analyzed and coupled via asymptotic analysis, similarity estimates and parameter identification. The model hierarchy is applicable to a wide range of industrially
relevant production processes and enables the optimization, control and design of technical textiles.
Pricing American call options under the assumption of stochastic dividends – An application of the Korn-Rogers model (2009)
S. Kruse M. Müller
In financial mathematics, stock prices are usually modelled directly as a result of supply and demand, under the assumption that dividends are paid continuously. In contrast, economic theory gives
us the dividend discount model, which assumes that the stock price equals the present value of its future dividends. These two models need not contradict each other: in their paper, Korn and Rogers
(2005) introduce a general dividend model in which the stock price follows a stochastic process and equals the sum of all its discounted dividends. In this paper we specify the model
of Korn and Rogers in a Black-Scholes framework in order to derive a closed-form solution for the pricing of American call options under the assumption of a known next dividend followed by
several stochastic dividend payments during the option's time to maturity.
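The closed-form solution derived in the paper is not reproduced in this abstract. As background only, here is the standard Black-Scholes price of a plain European call without dividends, the building block such Black-Scholes specifications start from; the function names and parameter values below are illustrative, not taken from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative: at-the-money call, one year to maturity
price = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```

Pricing an American call under stochastic dividends, as in the paper, requires handling early exercise around the dividend dates on top of this baseline.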
Multibody dynamics simulation of geometrically exact Cosserat rods (2009)
H. Lang J. Linn M. Arnold
In this paper, we present a viscoelastic rod model that is suitable for fast and sufficiently accurate dynamic simulations. It is based on Cosserat’s geometrically exact theory of rods and is
able to represent extension, shearing (’stiff’ dof), bending and torsion (’soft’ dof). For inner dissipation, a consistent damping potential from Antman is chosen. Our discrete model is based on
a finite difference discretisation on a staggered grid. The right-hand side function f and the Jacobian ∂f/∂(q, v, t) of the dynamical system q̇ = v, v̇ = f(q, v, t) – after
index reduction from three to zero – are free of higher algebraic (e.g. root) or transcendental (e.g. trigonometric or exponential) functions and are therefore cheap to evaluate. For the time
integration of the system, we use well-established stiff solvers like RADAU5 or DASPK. As our model yields computation times within milliseconds, it is suitable for interactive manipulation in
’virtual reality’ applications. In contrast to common fast VR rod models, our model reflects the structural mechanics solutions sufficiently correctly, as a comparison with ABAQUS finite element
results shows.
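The system q̇ = v, v̇ = f(q, v, t) above is integrated with stiff solvers such as RADAU5 or DASPK. To illustrate why an implicit method is used, here is backward Euler, the simplest implicit scheme, applied to a hypothetical stiff linear test system; the constants K and C are illustrative stand-ins, not the rod model.

```python
# Backward (implicit) Euler on a stiff linear test system q' = v, v' = -K*q - C*v.
# RADAU5 and DASPK are far more sophisticated; this is merely the simplest scheme
# that stays stable at step sizes where explicit Euler would blow up.
K, C = 1.0e4, 1.0e2   # illustrative stiffness and damping
h, steps = 0.01, 100  # step size deliberately too large for explicit Euler

q, v = 1.0, 0.0       # initial displacement and velocity
for _ in range(steps):
    # Solve the implicit step (I - h*A) y_{n+1} = y_n, A = [[0, 1], [-K, -C]],
    # in closed form for this 2x2 linear system.
    denom = (1.0 + h * C) + h * h * K
    q_new = ((1.0 + h * C) * q + h * v) / denom
    v = (v - h * K * q_new) / (1.0 + h * C)
    q = q_new
```

The exact solution decays rapidly; the implicit step damps it without any step-size restriction from stiffness.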
Discrete Lagrangian mechanics and geometrically exact Cosserat rods (2009)
P. Jung S. Leyendecker J. Linn M. Ortiz
Inspired by Kirchhoff’s kinetic analogy, the special Cosserat theory of rods is formulated in the language of Lagrangian mechanics. A static rod corresponds to an abstract Lagrangian system where
the energy density takes the role of the Lagrangian function. The equilibrium equations are derived from a variational principle. Noether’s theorem relates their first integrals to
frame-indifference, isotropy and uniformity. These properties can be formulated in terms of Lie group symmetries. The rotational degrees of freedom, present in the geometrically exact beam
theory, are represented in terms of orthonormal director triads. To reduce the number of unknowns, Lagrange multipliers associated with the orthonormality constraints are eliminated using
null-space matrices. This is done both in the continuous and in the discrete setting. The discrete equilibrium equations are used to compute discrete rod configurations, where different types of
boundary conditions can be handled.
Calculating invariant loads for system simulation in vehicle engineering (2009)
M. Burger K. Dreßler A. Marquardt M. Speckert
For the numerical simulation of a mechanical multibody system (MBS), dynamical loads are needed as input data, such as a road profile. With given input quantities, the equations of motion of the
system can be integrated. Output quantities for further investigations are calculated from the integration results. In this paper, we consider the corresponding inverse problem: we assume that a
dynamical system and some reference output signals are given. The general task is to derive an input signal such that the system simulation produces the desired reference output. We present the
state-of-the-art method in industrial applications, the iterative learning control (ILC) method, and give an application example from the automotive industry. Then, we discuss three alternative
methods based on optimal control theory for differential algebraic equations (DAEs) and give an overview of their general scheme. | {"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/16196/start/0/rows/10/yearfq/2009","timestamp":"2014-04-18T18:57:29Z","content_type":null,"content_length":"47858","record_id":"<urn:uuid:f8c0272c-b0a0-4b9d-b563-103f3e64da8f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
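The paper's specific ILC variant is not detailed in the abstract; the textbook update u_{k+1} = u_k + gamma * (r - y_k), run here against a hypothetical static plant, gives the flavour of the iteration.

```python
# Minimal iterative learning control (ILC) loop on a hypothetical static plant
# y = G(u). This is the textbook update rule, not the industrial method of the
# paper; the plant gain, reference signal and learning gain are all made up.
def plant(u):
    return [0.5 * ui for ui in u]  # assumed plant gain of 0.5

reference = [1.0, 2.0, 3.0, 2.0, 1.0]  # desired output signal
u = [0.0] * len(reference)
gamma = 0.8  # learning gain; convergence needs |1 - gamma*G| < 1

for _ in range(50):
    y = plant(u)
    error = [r - yi for r, yi in zip(reference, y)]
    u = [ui + gamma * ei for ui, ei in zip(u, error)]

max_err = max(abs(r - yi) for r, yi in zip(reference, plant(u)))
```

Each iteration shrinks the tracking error by the factor |1 - gamma*G|, so the input converges to one that reproduces the reference output.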
Link to Article: Helical Gear Mathematics, Formulas and Examples Part II
Tell your friends and customers you were featured in the article Helical Gear Mathematics, Formulas and Examples Part II. Just copy one of the highlighted code examples below and paste it into your
website's HTML
Simple Link
Example: Helical Gear Mathematics, Formulas and Examples Part II (Gear Technology, July/August 1988)
Copy the highlighted code and put it into your HTML:
<a href="http://www.geartechnology.com/articles/0788/Helical Gear Mathematics, Formulas and Examples Part II" target="_new" title="Helical Gear Mathematics, Formulas and Examples Part II">Helical
Gear Mathematics, Formulas and Examples Part II</a> (<em>Gear Technology</em>, July/August 1988)
Link with Magazine Cover
Helical Gear Mathematics, Formulas and Examples Part II (Gear Technology, July/August 1988)
Copy the highlighted code and put it into your HTML:
<a href="http://www.geartechnology.com/articles/0788/Helical Gear Mathematics, Formulas and Examples Part II" target="_new" title="Helical Gear Mathematics, Formulas and Examples Part II"><img src=
"http://www.geartechnology.com/issues/0788/gt0788.jpg" width="182" height="245" alt="July/August 1988"><br>Helical Gear Mathematics, Formulas and Examples Part II</a> (<em>Gear Technology</em>, July/
August 1988)
Embed the PDF on Your Own Page
The article "Helical Gear Mathematics, Formulas and Examples Part II" (Gear Technology, July/August 1988) should appear in the box below, but if you do not see it, you can download it here.
Copy the highlighted code and put it into your HTML:
<p>The article "Helical Gear Mathematics, Formulas and Examples Part II" (<em>Gear Technology</em>, July/August 1988) should appear in the box below, but if you do not see it, you can <a href="/
issues/0788/buckingham.pdf" target="_new">download it here</a>.</p> <div style="border:40px solid black;"> <object width="895" height="500" data=http://www.geartechnology.com/issues/0788x/
buckingham.pdf type="application/pdf"> <p>It appears you don't have a PDF plugin for this browser. No biggie... you can <a href="http://www.geartechnology.com/issues/0788/buckingham.pdf">click here
to download the PDF file.</a></p></object></div> | {"url":"http://www.geartechnology.com/issues/link.php?issue=0788&article=Helical%20Gear%20Mathematics,%20Formulas%20and%20Examples%20Part%20II","timestamp":"2014-04-19T14:30:41Z","content_type":null,"content_length":"9709","record_id":"<urn:uuid:ed1a84ed-7380-478a-afd2-c3c0cad6d8ca>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
Latest releases of Math_Matrix: 0.8.7, 0.8.6, 0.8.5, 0.8.0
http://pear.php.net/ - pear-webmaster@lists.php.net - en-us - The latest releases for the package Math_Matrix.

Math_Matrix 0.8.7 (2010-10-05) - http://pear.php.net/package/Math_Matrix/download/0.8.7/
QA release. Updated minimum version of Math_Vector to 0.7.0.

Math_Matrix 0.8.6 (2010-10-04) - http://pear.php.net/package/Math_Matrix/download/0.8.6/
QA release; Package 2.0.
Bug #884 PEAR QA: improvement for get_class() usage (jmcastagnetto)
Bug #4959 Wrong dependency on PHPUnit
Bug #9248 Parse error in PHP5 (jmcastagnetto)
Bug #10728 multiply() method fou
Bug #11942 setData creates an error even when it isn't raised (jmcastagnetto)
Bug #13209 Math_Matrix has a method 'clone' which became a keyword as of PHP5 (jmcastagnetto)
Bug #14986 PHP5 Integration
Bug #16838 subMatrix has an error where checking for the number of columns (kguest)
Bug #16999 Support for PHP 5 (kguest)

Math_Matrix 0.8.5 (2003-11-01) - http://pear.php.net/package/Math_Matrix/download/0.8.5/
Fixed some bugs in matrix multiplication reported by John Pye (john@curioussymbols.com) and Marcel Brunner (marcel@palmer.li). Fixed some minor documentation inconsistencies. Modified the setData() method to accept a Math_Matrix object or an array of arrays of numbers. Added setZeroThreshold() and getZeroThreshold() to set and get the value used as an upper bound to minimize round-off errors. Also added two static methods to generate famous matrices: Math_Matrix::makeHilbert() for a square Hilbert matrix, and Math_Matrix::makeHankel() for an m by n Hankel matrix. Reorganized the directories to comply with the current directory organization proposal. It is recommended to uninstall the older version of Math_Matrix (and Math_Vector) before installing this new release to avoid lingering files (there is also a new release of Math_Vector):
$ pear uninstall Math_Matrix Math_Vector
$ pear install Math_Vector
$ pear install Math_Matrix
Explicitly included the optional dependency on the PHPUnit package, version 0.6.2 or older, as current versions of that package need PHP5.

Math_Matrix 0.8.0 (2003-05-16) - http://pear.php.net/package/Math_Matrix/download/0.8.0/
Initial release under PEAR. Restructured the package so the main class contains both instance and class methods to make things simpler. | {"url":"http://pear.php.net/feeds/pkg_math_matrix.rss","timestamp":"2014-04-16T04:37:37Z","content_type":null,"content_length":"4476","record_id":"<urn:uuid:3e23a6fa-c543-48e1-8478-6c601f54958d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
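The 0.8.5 notes mention Math_Matrix::makeHilbert() and Math_Matrix::makeHankel(); their exact PHP signatures are not shown in the feed, so here is a sketch (in Python) of what these classic matrices contain.

```python
# Hilbert and Hankel matrix constructors, mirroring what makeHilbert()/makeHankel()
# presumably produce. Function names and the sequence-based Hankel signature are
# assumptions for illustration, not the PEAR API.
from fractions import Fraction

def make_hilbert(n):
    """Square Hilbert matrix: H[i][j] = 1 / (i + j + 1), 0-based indices."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def make_hankel(s, m, n):
    """m-by-n Hankel matrix from a sequence s of length m + n - 1:
    constant along each anti-diagonal, A[i][j] = s[i + j]."""
    assert len(s) == m + n - 1
    return [[s[i + j] for j in range(n)] for i in range(m)]
```

Using exact fractions for the Hilbert matrix sidesteps the round-off issues that the package's zero-threshold methods are designed to control.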
Results 1 - 10 of 121
- IEEE Transactions on Computers , 1988
"... VLSI communication networks are wire limited. The cost of a network is not a function of the number of switches required, but rather a function of the wiring density required to construct the
network. This paper analyzes communication networks of varying dimension under the assumption of constant wi ..."
Cited by 296 (16 self)
VLSI communication networks are wire limited. The cost of a network is not a function of the number of switches required, but rather a function of the wiring density required to construct the
network. This paper analyzes communication networks of varying dimension under the assumption of constant wire bisection. Expressions for the latency, average case throughput, and hot-spot throughput
of k-ary n-cube networks with constant bisection are derived that agree closely with experimental measurements. It is shown that low-dimensional networks (e.g., tori) have lower latency and higher
hot-spot throughput than high-dimensional networks (e.g., binary n-cubes) with the same bisection width. Keywords Communication networks, interconnection networks, concurrent computing,
message-passing multiprocessors, parallel processing, VLSI. 1 Introduction The critical component of a concurrent computer is its communication network. Many algorithms are communication rather than
processing limited. Fi...
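As a small numeric companion to the abstract, average hop distance in a k-ary n-cube (assuming bidirectional torus links) can be computed by brute force. Note that raw hop count by itself favours high dimension; the paper's point is that the ordering reverses once wire bisection is held constant, which this sketch does not model.

```python
# Brute-force average shortest-path hop count in a k-ary n-cube, assuming
# bidirectional links so per-dimension distance is the ring distance.
from itertools import product

def avg_ring_distance(k):
    """Average shortest-path distance between distinct nodes on a k-node ring."""
    return sum(min(d, k - d) for d in range(1, k)) / (k - 1)

def avg_cube_distance(k, n):
    """Average hop distance over all distinct node pairs of a k-ary n-cube."""
    nodes = list(product(range(k), repeat=n))
    total = pairs = 0
    for u in nodes:
        for v in nodes:
            if u != v:
                total += sum(min(abs(a - b), k - abs(a - b)) for a, b in zip(u, v))
                pairs += 1
    return total / pairs

# Same node count N = 64, different dimension: binary 6-cube vs 8-ary 2-cube
hypercube_avg = avg_cube_distance(2, 6)
torus_avg = avg_cube_distance(8, 2)
```

With N fixed, the 2-D torus has the longer average path, which is exactly why the constant-bisection argument (shorter wires, wider channels) is needed to make the case for low dimension.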
, 1992
"... We provide the first optimal algorithms in terms of the number of input/outputs (I/Os) required between internal memory and multiple secondary storage devices for the problems of sorting, FFT,
matrix transposition, standard matrix multiplication, and related problems. Our two-level memory model is n ..."
Cited by 236 (32 self)
We provide the first optimal algorithms in terms of the number of input/outputs (I/Os) required between internal memory and multiple secondary storage devices for the problems of sorting, FFT, matrix
transposition, standard matrix multiplication, and related problems. Our two-level memory model is new and gives a realistic treatment of parallel block transfer, in which during a single I/O each of
the P secondary storage devices can simultaneously transfer a contiguous block of B records. The model pertains to a large-scale uniprocessor system or parallel multiprocessor system with P disks. In
addition, the sorting, FFT, permutation network, and standard matrix multiplication algorithms are typically optimal in terms of the amount of internal processing time. The difficulty in developing
optimal algorithms is to cope with the partitioning of memory into P separate physical devices. Our algorithms' performance can be significantly better than those obtained by the well-known but
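The abstract does not state the bound itself; the optimal external-memory sorting cost quoted in the standard literature for this parallel-disk model is, up to constants, (N/(P·B)) · log_{M/B}(N/B) I/Os for N records, internal memory M, block size B and P disks. A tiny calculator for the leading term (the formula is quoted from the literature, not derived here):

```python
# Leading-order term of the optimal external-memory sorting I/O count in the
# parallel-disk model, as an assumed illustration of the kind of bound discussed.
from math import ceil, log

def sort_io_bound(N, M, B, P=1):
    """Approximate optimal I/O count: (N/(P*B)) * ceil(log_{M/B}(N/B))."""
    passes = max(1, ceil(log(N / B) / log(M / B)))
    return (N / (P * B)) * passes

# e.g. 1e9 records, 1e7 records of memory, blocks of 1e4 records, 4 disks
cost = sort_io_bound(1e9, 1e7, 1e4, P=4)
```

Intuitively, each "pass" reads and writes all N/(P·B) striped blocks once, and log_{M/B}(N/B) passes suffice for a multiway merge sort.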
- Proceedings of the IEEE , 1990
"... tures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their volatility, size, parasitic capacitance, resistance, and
process technology complexity. FPGA architectures are divided into two constituents: logic block architect ..."
Cited by 109 (13 self)
tures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their volatility, size, parasitic capacitance, resistance, and
process technology complexity. FPGA architectures are divided into two constituents: logic block architectures and routing architectures. A classification of logic blocks based on their granularity is
proposed and several logic blocks used in commercially available FPGAs are described. A brief review of recent results on the effect of logic block granularity on logic density and performance of an
FPGA is then presented. Several commercial routing architectures are described in the context of a general routing architecture model. Finally, recent results on the tradeoff between the flexibility
of an FPGA routing architecture, its routability and density are reviewed. I.
- in “Proc. 14th Annual ACM Sympos. on Theory of Comput , 1982
"... A variety of models have been proposed for the study of synchronous parallel computation. These models are reviewed and some prototype problems are studied further. Two classes of models are
recognized, fixed connection networks and models based on a shared memory. Routing and sorting are prototype ..."
Cited by 105 (3 self)
A variety of models have been proposed for the study of synchronous parallel computation. These models are reviewed and some prototype problems are studied further. Two classes of models are
recognized, fixed connection networks and models based on a shared memory. Routing and sorting are prototype problems for the networks; in particular, they provide the basis for simulating the more
powerful shared memory models. It is shown that a simple but important class of deterministic strategies (oblivious routing) is necessarily inefficient with respect to worst case analysis. Routing
can be viewed as a special case of sorting, and the existence of an O(log n) sorting algorithm for some n processor fixed connection network has only recently been established by Ajtai, Komlos, and
Szemeredi (“15th ACM Sympos. on Theory of Comput.,” Boston, Mass., 1983, pp. 1-9). If the more powerful class of shared memory models is considered then it is possible to simply achieve an O(log n
log log n) sort via Valiant’s parallel merging algorithm, which it is shown can be implemented on certain models. Within a spectrum of shared memory models, it is shown that log log n is asymptotically
optimal for n processors to merge two sorted lists containing n elements. © 1985 Academic Press, Inc.
- TECHNICAL REPORT , 1980
"... The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a “VLSI model of
computation” and derives upper and lower bounds on the silicon area and time required to solve the proble ..."
Cited by 105 (1 self)
The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a “VLSI model of
computation” and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken
by any VLSI chip using any algorithm to perform an $N$-point Fourier transform must satisfy $AT^2 \geq c N^2 \log^2 N$, for some fixed $c > 0$. A more general result for both sorting and Fourier
transformation is that $AT^{2x} = \Omega(N^{1+x} \log^{2x} N)$ for any $x$ in the range $0 < x < 1$. Also, the energy dissipated by a VLSI chip during the solution of either of these problems is at
least $\Omega(N^{3/2} \log N)$. The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the
shuffle-exchange interconnection pattern are fast but large: $T = O(\log^2 N)$ for Fourier transformation, $T = O(\log^3 N)$ for sorting; both have area $A$ of at most $O(N^2 / \log{1/2} N)$. The
circuits based on the mesh interconnection pattern are slow but small: $T = O(N^{1/2} \log\log N)$, $A = O(N \log^2 N)$.
, 1981
"... In this paper we implement several basic operating system primitives by using a "replace-add" operation, which can supersede the standard "test and set", and which appears to be a universal
primitive for efficiently coordinating large numbers of independently acting sequential processors. We also pr ..."
Cited by 89 (2 self)
In this paper we implement several basic operating system primitives by using a "replace-add" operation, which can supersede the standard "test and set", and which appears to be a universal primitive
for efficiently coordinating large numbers of independently acting sequential processors. We also present a hardware implementation of replace-add that permits multiple replace-adds to be processed
nearly as efficiently as loads and stores. Moreover, the crucial special case of concurrent replace-adds updating the same variable is handled particularly well: If every PE simultaneously addresses
a replace-add at the same variable, all these requests are satisfied in the time required to process just one request.
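"Replace-add" is what later literature usually calls fetch-and-add: atomically add a value to a shared variable and return the resulting sum. A lock-based Python sketch conveys only the semantics; the paper's contribution is a hardware implementation with combining, which a lock cannot capture.

```python
# Lock-based sketch of the replace-add semantics (atomically add, return the
# new sum). This only models the primitive's behaviour, not the combining
# hardware that makes simultaneous replace-adds on one variable fast.
import threading

class ReplaceAdd:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def replace_add(self, delta):
        with self._lock:
            self._value += delta
            return self._value  # the updated sum, as in replace-add

counter = ReplaceAdd()

def worker():
    for _ in range(1000):
        counter.replace_add(1)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = counter.replace_add(0)  # read back the final value
```

Eight threads of a thousand increments each always yield exactly 8000, which is precisely the coordination guarantee the primitive provides.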
- IEEE Trans. on Circuits and Systems , 1990
"... ..."
- Lectures on Parallel Computation , 1993
"... A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of
various aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing ..."
Cited by 77 (5 self)
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various
aspects of this work. A long, but by no means complete, bibliography is given. 1. Introduction Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be
designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent
practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose
sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental
principles of...
- Journal of Computer and System Sciences , 1996
"... This paper presents a deterministic sorting algorithm, called Sharesort, that sorts n records on an n-processor hypercube, shuffle-exchange, or cube-connected cycles in O(log n (log log n)²)
time in the worst case. The algorithm requires only a constant amount of storage at each processor. Th ..."
Cited by 67 (10 self)
This paper presents a deterministic sorting algorithm, called Sharesort, that sorts n records on an n-processor hypercube, shuffle-exchange, or cube-connected cycles in O(log n (log log n)²) time
in the worst case. The algorithm requires only a constant amount of storage at each processor. The fastest previous deterministic algorithm for this problem was Batcher's bitonic sort, which runs in
O(log² n) time. Supported by an NSERC postdoctoral fellowship, and DARPA contracts N00014-87-K-825 and N00014-89-J-1988. 1 Introduction Given n records distributed uniformly over the n
processors of some fixed interconnection network, the sorting problem is to route the record with the ith largest associated key to processor i, 0 ≤ i < n. One of the earliest parallel sorting
algorithms is Batcher's bitonic sort [3], which runs in O(log² n) time on the hypercube [10], shuffle-exchange [17], and cube-connected cycles [14]. More recently, Leighton [9] exhibited a
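Batcher's bitonic sort mentioned above is a fixed comparison network of depth O(log² n) for n = 2^k keys. A sequential Python rendering of the network is sketched here; on the hypercube, each compare-exchange stage would execute as one parallel step.

```python
# Batcher's bitonic sort for lists whose length is a power of two. Sorting the
# first half ascending and the second half descending yields a bitonic sequence,
# which bitonic_merge then splits and sorts with fixed compare-exchanges.
def bitonic_sort(a, ascending=True):
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    first = bitonic_sort(a[:half], True)
    second = bitonic_sort(a[half:], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    a = list(a)
    for i in range(half):
        # Compare-exchange between halves; swap if out of order.
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return (bitonic_merge(a[:half], ascending) +
            bitonic_merge(a[half:], ascending))

result = bitonic_sort([7, 3, 1, 8, 6, 2, 5, 4])
```

The comparison pattern is data-independent, which is exactly what makes it implementable as an oblivious network on the hypercube and its relatives.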
, 1991
"... Interprocessor communication (IPC) overheads have emerged as the major performance limitation in parallel processing systems, due to the transmission delays, synchronization overheads, and
conflicts for shared communication resources created by data exchange. Accounting for these overheads is essenti ..."
Cited by 67 (11 self)
Interprocessor communication (IPC) overheads have emerged as the major performance limitation in parallel processing systems, due to the transmission delays, synchronization overheads, and conflicts
for shared communication resources created by data exchange. Accounting for these overheads is essential for attaining efficient hardware utilization. This thesis introduces two new compile-time
heuristics for scheduling precedence graphs onto multiprocessor architectures, which account for interprocessor communication overheads and interconnection constraints in the architecture. These
algorithms perform scheduling and routing simultaneously to account for irregular interprocessor interconnections, and schedule all communications as well as all computations to eliminate shared
resource contention. The first technique, called dynamic-level scheduling, modifies the classical HLFET list scheduling strategy to account for IPC and synchronization overheads. By using dynamically
changing priorities to match nodes and processors at each step, this technique attains an equitable tradeoff between load balancing and interprocessor communication cost. This method is fast,
flexible, widely targetable, and displays promising performance. The second technique, called declustering, establishes a parallelism hierarchy upon the precedence graph using graph-analysis
techniques which explicitly address the tradeoff between exploiting parallelism and incurring communication cost. By systematically decomposing this hierarchy, the declustering process exposes
parallelism instances in order of importance, assuring efficient use of the available processing resources. In contrast with traditional clustering schemes, this technique can adjust the level of
cluster granularity to suit the characteristics of the specified architecture, leading to a more effective solution. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=260482","timestamp":"2014-04-20T08:04:06Z","content_type":null,"content_length":"40401","record_id":"<urn:uuid:da657ebc-d5b1-42f6-b47b-72bd4839e2f4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic electrical engineering formulas
Electrical Formulas, Calculations, Basic Electronic Engineering ...
Electronics engineering reference online- electrical formulas, circuit theory and design guides, theorems, electricity and magnetism basics. Calculation of impedances ...
Electrical Engineering Formulas
Basic Electrical Engineering Formulas: circuit element impedances, Ohm's law, impedances for series and parallel connections. Created Date: 5/21/2010 4:14:08 PM
transmission lines design | transmission lines | substation design | pls cadd tutorials | protective relaying | electrical engineering design tips | electricity by ...
Electrical Formulas - Engineering ToolBox
Resources, Tools and Basic Information for Engineering and Design of Technical Applications!
Electrical Engineering formulas. - Metric conversion, Calculation ...
Electrical Engineering formulas for ac and dc circuits ... BASIC MATH; ALGEBRA; GEOMETRY; TRIGONOMETRY; METRIC UNITS; PRE CALCULUS ...
Electrical Engineering Formulas | EEP
In practice however, circuit designers normally use simplified equations of electricity and magnetism and theorems that use circuit theory terms
electrical engineering formulas - Scribd
Here you will also find electricity and magnetism reference, basic electrical engineering formulas, calculators, and other related information.
Math Formula, general and engineering.
Math formula, general and engineering ... BASIC MATH; ALGEBRA; GEOMETRY ... ELECTRICAL ENGINEERING FORMULAS. Current; Resistance & Impedance
Engineering: Electrical Engineering
disciplines of electrical engineering that were identified in the previous section. Manipulation, solution, and analysis of real and complex algebraic equations Basic ...
Electrical Engineering Calculators - iFigure: online calculators ...
online electrical engineering and electronics calculators ... Basic Formulas. Ohm's Law Calc-- Calculate watts, amps, volts or ohms. Enter any two values and ...
Free Engineering Formulas/Equations ...
Here are some of the basic engineering formulas/equations related to energy conversion systems which are built into the Engineering Software product line:
Solve over 160 basic electrical and electronics formulas with the ...
Solve basic electrical and electronics formulas quickly and easily with the Electronics Toolkit! ... Electrical Engineering-- The Electronics Toolkit can be used initially to ...
Electrical Engineering Formulas (principles and technology ...
A.C Theory: all Formulas for a.c circuits Single phased Circuits Formulas D.c. transients - all Formulas Op amp formulas Three phased Systems formulas and equations
Formulas Of Subject Basic Electrical And Electronics Engineering
formulas of subject basic electrical and electronics engineering smps-us - tutorials electronics reference formulas Electrical Engineering Formulas Author Lazar ...
Solve over 160 basic electrical and electronics formulas with the ...
Electrical Engineering-- The Electronics Toolkit can be used initially to ... will serve as a reference and a refresher of all the basic electrical and electronics formulas.
Simplify the Calculation Process with Electrical Formulas Software
Use electrical/electronic formulas software to calculate complex ... re a student who is learning about electrical engineering ... to the next, but here are some basic formula ...
Eformulae.com: Maths, Science and Engineering Formulas and Tables
Eformulae.com is a online resource of engineering formulas, science formulas, math ... Electrical Circuits In Engineering Formulas
Engineering ToolBox
Tools and Basic Information for Design, Engineering and Construction of Technical Applications ... Amps and electrical wiring, AWG - wire gauge, electrical formulas, motors ...
formula sheet equation sheet - Get Homework Help in Math, Algebra ...
Energy, Electrical Engineering, Analog Circuits, Power, Resistors, RC Circuit, RLC circuit, ... Formula Sheet
Circuits, Formulas and Tables Electrical Engineering - Basic ...
influence of the temperature on the resistance of the conductor ...
electrical engineering formula - Electrical Resource - About ...
Basic Electrical Engineering Formulas (PDF) BASIC ELECTRICAL ENGINEERING FORMULAS. BASIC ELECTRICAL CIRCUIT FORMULAS. IMPEDANCE VOLT-AMP . R- electrical resistance in ohms ...
Electronics Links - Basic Electronics
Electronics Formulas http://en.wikibooks.org/wiki/Electronics:Formulas; Basic Electronics Formulas http://yb0ah.tripod.com/tech/main.html; Electrical Engineering http://www ...
Electrical and Electronic Formulas version 2.02
This program contains a collection of basic electrical and electronic formulas, which can be found in any electrical engineering handbook. The software was developed ...
Electrical Formulas for electrical engineers
Useful Electrical formulas to calculate Current, Voltage, Power Factor, KVA, KW. Check out our new job listings .
Electrical Engineering Notes, Formulas, Facts, and Study Guides
Engineering > Electrical Engineering ... I am a 2nd year student at an engineering school. I searched google and other sources ... Electrical engineering is an ...
Electrical Engineering Formula? - Yahoo! Answers
Best Answer: The most basic formula in electrical engineering is Ohm's law. 90% of the formulas used can be related back to that one.
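Most of the basic formulas listed in the directories above do reduce to Ohm's law (V = I * R) and the power relation (P = V * I). A minimal sketch:

```python
# Ohm's law and electrical power, the two relations from which most entries in
# the formula directories above can be derived. Values below are illustrative.
def voltage(current, resistance):
    """V = I * R (volts from amps and ohms)."""
    return current * resistance

def power(volts, current):
    """P = V * I (watts from volts and amps)."""
    return volts * current

V = voltage(current=2.0, resistance=12.0)  # 2 A through 12 ohms
P = power(V, 2.0)
```

Combining the two also gives the familiar derived forms P = I² * R and P = V² / R.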
Basic Electric Circuit Theory - Terp Connect
... does reflect the current state of instruction of basic circuit theory in Electrical Engineering ... It is known that the ac steady state equations and the basic equations ...
Become an electrical engineer - Basic mathematics
In electrical engineering, the study of electricity and electromagnetism are the ... Basic math test Algebra test Basic mathematics worksheets Basic math formulas ...
Basic Electrical Engineering Concepts | Engineersphere.com
While there are some very important equations that you need to know ... below into your web site (Ctrl+C to copy) It will look like this: Basic Electrical Engineering ...
Basic Probability Formulas | Reference.com Answers
The probability is the chances that something has or will occur. The basic formula for finding the probability is: the number of favorable events/the number of total ...
Handbook of Formulas and Tables for Signal Processing (Electrical ...
Amazon.com: Handbook of Formulas and Tables for Signal Processing (Electrical Engineering Handbook) (9780849385797): Alexander D. Poularikas: Books
Mike Holt Mike Holt Code Resources
Manachos Engineering's Calculator for design and ... Conversion Formulas Electrical Formulas Based on 60 Hz Parallel Circuits Series Circuits
Good civil engineering basic calculations formulas to study ...
Best Answer: Here are generalized formulas for load,shear, bending moment, deflection angle and deflection of simply supported beams which you may want to ...
Useful Formulas - Electricians Toolbox Etc...
Here are some common formulas that are frequently used in the field.
Circuits, Formulas and Tables Electrical Engineering - Basic ...
Three-phase motor with star-delta connection (star connection) Three-phase motor with star-delta connection (delta connection) terminal boards (clockwise sense of ...
EEE393 Basic Electrical Engineering
EEE393 Basic Electrical Engineering Kadir A. Peker kpeker@bilkent.edu.tr Tel: x5406 ... capacitor, inductor, voltage and current sources, etc. Element equations.
Electrical Formulae AC Three-Phase
Title: Electrical Formulas Author: CTi Automation Subject: Electrical Formulas Keywords: Electrical Formulas Created Date: 11/19/2007 3:42:09 AM
Basic Electrical Engineering Lessons - Welcome to Bucknell ...
Basic Concepts Chapter; Electrical Elements Chapter; Digital Signals and Logic Chapter ... Welcome To Exploring Electrical Engineering. Table of Contents (Reproduced in ...
Electrical Equations ( Alternating Current ) AC - Engineers Edge
Engineering Analysis Engineering Basics Engineering Calculators Engineering ... Electrical Equations ( Alternating Current ) AC
Test this site now | {"url":"http://sitenalytic.com/basic/basic-electrical-engineering-formulas.html","timestamp":"2014-04-18T05:30:19Z","content_type":null,"content_length":"27668","record_id":"<urn:uuid:6a7b9b40-2f89-4574-aae3-c8063cdf9a26>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Methuen SAT Math Tutor
Find a Methuen SAT Math Tutor
...Sometimes, it is difficult for students to understand the relevance and importance of science in their lives and struggle with the material. As your tutor, I will help you to understand the
intricacies of the subject and visualize the many connections that exist between the subtopics. I will a...
23 Subjects: including SAT math, chemistry, writing, biology
Hello, I am a former Elementary Teacher with a passion for Teaching. I now have the time to do something I really love and, at the same time, I can help others to learn. I would love the chance to
help anyone who is struggling in any subject in which I am certified. Thank you, Janet
15 Subjects: including SAT math, reading, English, writing
...Some of the aspects of the program that I would be happy to teach you include: making an outline, using styles to ensure formatting consistency, generating a table of contents, table of figures
and table of tables, and creating a bibliography and storing your references in the program. I was a s...
9 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Successful SAT tutoring requires the student to complete practice on his/her own time, so that we can focus on improving content knowledge and test-taking strategies during our tutoring
session. I therefore recommend that my SAT tutoring clients purchase an SAT prep book with practice tests. St...
28 Subjects: including SAT math, English, algebra 1, algebra 2
...I have the patience and experience to help explain high level mathematics like differential equations. I was a full-time math teacher for twelve years. I have taught linear systems of
equations, including using matrices to solve them.
24 Subjects: including SAT math, calculus, GRE, algebra 1 | {"url":"http://www.purplemath.com/Methuen_SAT_Math_tutors.php","timestamp":"2014-04-18T19:02:26Z","content_type":null,"content_length":"23694","record_id":"<urn:uuid:f113c2eb-cbe7-4ab9-b20f-a2f2a2a2493b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convex Geometry
Wikipedia : Uniformly convex space | Milman-Pettis theorem (1938 & 1939)
Technically, the width of a convex shape depends on the direction with respect to which it is measured: It's the least possible distance between two parallel planes perpendicular to that direction
which surround the body. It's what sliding calipers measure.
The height of the shape can be given either of the following definitions:
□ The least width (the measurement direction is called vertical ).
□ The width along a prescribed direction, identified as vertical.
The former definition is called absolute, the latter is dubbed relative.
In either case, the length of the body is then defined as the largest width along an horizontal direction (a direction is said to be horizontal when it's perpendicular to the aforementioned
vertical direction).
The aspect ratio of a body is the ratio of its height to its length so defined.
The diameter of a body is simply its largest width. For example, the diameter of a rectangle is equal to its diagonal.
Either flavor of aspect ratio can be a convenient parameter to use when discussing the geometry of a family of objects. It makes little sense for nonconvex things, except by considering their
convex hulls. An absolute aspect ratio is never less than unity. A relative aspect ratio can be.
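These directional notions are easy to compute for a convex polygon given as a list of vertices: the width along a unit direction is the spread of the vertices' projections onto that direction, and the diameter is attained between two vertices. A minimal sketch in plain Python (the function names are ours, for illustration only):

```python
import math

def width(vertices, direction):
    """Width of a convex polygon along a unit direction:
    the spread of the vertex projections onto that direction."""
    dots = [x * direction[0] + y * direction[1] for x, y in vertices]
    return max(dots) - min(dots)

def diameter(vertices):
    """Largest width over all directions; for a polygon this is
    the largest distance between two of its vertices."""
    return max(math.dist(p, q) for p in vertices for q in vertices)

# A 3-by-4 rectangle: its diameter is its diagonal, 5.
rect = [(0, 0), (3, 0), (3, 4), (0, 4)]
print(width(rect, (1, 0)))   # 3 (horizontal width)
print(width(rect, (0, 1)))   # 4 (vertical width)
print(diameter(rect))        # 5.0 (the diagonal)
```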
Playing Cards
The unit ball associated with a norm is defined as the set of all vectors whose norm is less than or equal to 1. The basic properties of a norm always make this a closed convex set, symmetric
about the origin (i.e., it contains -V if it contains V) and containing more than just the zero vector (except in the trivial case of the space {0} of dimension zero).
Conversely (Minkowski) any such set B uniquely specifies a norm of which it is the unit ball, the norm of a vector V being defined as:
|| V ||  =  inf { |x|  :  V ∈ x B }
The above definition of a norm from a convex, origin-symmetric, closed body B is called Minkowski's functional.
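Minkowski's functional can be evaluated numerically from nothing more than a membership test for B: since V ∈ xB iff V/x ∈ B (for x > 0), the norm is found by bisecting on the scale factor. A rough sketch, assuming B is convex, symmetric and bounded (for a symmetric B this agrees with the inf { |x| } form above; all names here are illustrative):

```python
def minkowski_norm(v, in_B, hi=1e6, tol=1e-9):
    """Norm induced by a convex, symmetric, bounded body B, given only
    a membership test in_B:  ||v|| = inf { x > 0 : v is in x*B }.
    Found by bisection, since v in x*B iff v/x in B (for x > 0).
    Illustration only; assumes ||v|| < hi."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_B(tuple(c / mid for c in v)):
            hi = mid      # v already lies inside mid*B: shrink
        else:
            lo = mid      # v still outside mid*B: grow
    return hi

# With B the closed unit disk, the induced norm is the Euclidean one:
disk = lambda p: p[0]**2 + p[1]**2 <= 1
print(minkowski_norm((3.0, 4.0), disk))  # 5.0 (up to tolerance)
```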
The notation x B was introduced by Minkowski himself. It denotes the set of all vectors that are equal to the scalar x multiplied into some element of B. Likewise, Minkowski defined the sum A+B
of two sets of vectors as the set of all vectors that can be obtained by adding together an element of A and an element of B. Similarly, any well-defined operation on the elements of sets induces
a natural extension of that operation to sets themselves, which is sometimes called a Minkowski operation on sets (the most common is Minkowski addition, which has nice convexity properties).
Minkowski functional
The intersection of any family of convex sets is itself convex. (HINT: If two points are in that intersection, so is the segment between them.)
Therefore, the intersection of all convex sets that contain a set S of vectors is a convex set containing S. It's clearly the smallest such set. It's called the convex hull of S and it's usually
denoted Conv (S).
The convex hull of a sum of sets is the sum of their convex hulls:
Conv ( S₁ + S₂ )  =  Conv ( S₁ ) + Conv ( S₂ )
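This identity can be checked numerically. The sketch below computes 2-D convex hulls with Andrew's monotone-chain algorithm (not part of the text above, just one standard hull procedure) and verifies that the hull of a Minkowski sum has the same vertices as the sum of the hulls:

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o): sign tells the turn direction."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def conv(points):
    """Vertices of the 2-D convex hull (Andrew's monotone chain),
    listed counter-clockwise, collinear points discarded."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def msum(A, B):
    """Minkowski sum of two finite point sets."""
    return [(a[0]+b[0], a[1]+b[1]) for a in A for b in B]

S1 = [(0, 0), (2, 0), (1, 1), (1, 3)]
S2 = [(0, 0), (1, 0), (0, 1)]
# Conv(S1 + S2) has the same vertices as Conv(S1) + Conv(S2):
lhs = sorted(conv(msum(S1, S2)))
rhs = sorted(conv(msum(conv(S1), conv(S2))))
print(lhs == rhs)  # True
```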
A convex hull is not necessarily closed. (Consider, for example, an open halfspace together with a single point on its boundary.)
The closure of Conv (S) is the convex hull of the closure of S :
Cl ( Conv ( S ) )  =  Conv ( Cl ( S ) )      where  Cl  denotes the topological closure.
As discussed next, a closed convex set is the intersection of all [closed] halfspaces that contain it. The closure of the convex hull of S (or, equivalently, the convex hull of the closure of S)
is the intersection of all halfspaces that contain S.
Shapley-Folkman lemma : The [Minkowski] sum of many sets is nearly convex.
An hyperplane separates space into three disjoint regions: itself and two open halfspaces. A closed halfspace is obtained as the union of the hyperplane with either of the two open halfspaces it borders.
When we say that any closed convex set is the intersection of halfspaces, we're normally thinking about closed ones. It's "more economical" to do so, but it's not strictly necessary (since a
closed halfspace containing S is clearly the intersection of infinitely many open halfspaces).
The converse proposition only holds for closed halfspaces, though. An intersection of any family of closed sets is guaranteed to be closed (an infinite intersection of open sets could be open,
closed or neither).
{ closed convex sets } = { intersections of closed halfspaces }
This proposition is the geometric Hahn-Banach theorem.
Convex Optimization by Stephen P. Boyd & Lieven Vandenberghe.
In Euclidean space (i.e., real linear space endowed with a positive-definite scalar product) a linear hyperplane can be defined as the set of all vectors orthogonal to a prescribed nonzero
vector. An affine hyperplane (or, simply, an hyperplane) is obtained by adding to some point every vector from such a linear hyperplane.
An hyperplane which does not go through the origin can be characterized by an othogonal vector H pointing to it from the origin with a length equal to the inverse of the Euclidean distance from
the origin. That hyperplane is the set of all vectors whose dot-product into H is equal to 1 :
{ V | V.H = 1 }
The hyperplane is the border of a closed half-plane containing the origin:
{ V | V.H ≤ 1 }
As stated in the previous section, we may always define any closed convex set C as the intersection of (possibly infinitely many) such closed half-spaces, namely:
C  =  { V  |  ∀H ∈ C' ,  V.H ≤ 1 }
All sets C' that yield C in this way have the same convex hull C* which is called the polar of C. The bodies C and C* are polars of each other :
C*  =  { V  |  ∀H ∈ C ,  V.H ≤ 1 }
C   =  { V  |  ∀H ∈ C* ,  V.H ≤ 1 }
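For a polytope C, the universally quantified condition only needs to be checked at the vertices of C (a linear functional attains its maximum over a convex polytope at a vertex), which makes polar membership easy to test. A small sketch; the example recovers the classical fact that the polar of the square with corners (±1, ±1) is the diamond |x| + |y| ≤ 1:

```python
def in_polar(v, C_vertices):
    """Membership in the polar body C* = { V : V.H <= 1 for all H in C }.
    For a polytope C it suffices to test the vertices of C."""
    return all(v[0]*h[0] + v[1]*h[1] <= 1 for h in C_vertices)

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
# The polar of this square is the diamond |x| + |y| <= 1:
print(in_polar((0.5, 0.5), square))   # True  (0.5 + 0.5 = 1, on the boundary)
print(in_polar((0.6, 0.6), square))   # False (0.6 + 0.6 > 1)
print(in_polar((1, 0), square))       # True  (a vertex of the diamond)
```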
If C and C* are polytopes (resulting from a finite C' ) then their networks of vertices, edges and faces are topological duals of each other.
The above describes duality with respect to a sphere (or hypersphere) of unit radius centered at the origin. Any center and any radius could be used in practice.
If the two disjoint convex sets are neither open nor closed, the hyperplane at the border of the two halfspaces may intersect both convexes...
Consider, for example, the following bounded planar regions:
{ (x,y) | x^2+y^2 ≤ 1 and either y > 0 or [ y = 0 & x > 0 ] }
{ (x,y) | x^2+y^2 ≤ 1 and either y < 0 or [ y = 0 & x < 0 ] }
Each region is convex and the two are disjoint. Only one straight line (of equation y = 0) can be drawn "between" them, but it intersects both.
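A few membership tests make the failure concrete. The functions below encode the two regions directly from their definitions; points of both sets lie on the only candidate separating line y = 0 (an illustrative spot check, not a proof):

```python
def in_D1(x, y):
    """Upper half-disk, including only the positive-x half of its diameter."""
    return x*x + y*y <= 1 and (y > 0 or (y == 0 and x > 0))

def in_D2(x, y):
    """Lower half-disk, including only the negative-x half of its diameter."""
    return x*x + y*y <= 1 and (y < 0 or (y == 0 and x < 0))

# Both sets meet the line y = 0, yet they share no point:
print(in_D1(1, 0), in_D2(-1, 0))      # True True: both touch the line y = 0
print(any(in_D1(x, y) and in_D2(x, y)
          for x in (-1, -0.5, 0, 0.5, 1)
          for y in (-0.5, 0, 0.5)))   # False: no common point in this sample
```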
If both sets are closed, one of them can be contained in a closed halfspace which doesn't intersect the other. [ Conjecture. ]
If at least one of them is compact, then two disjoint closed convex sets can always be separated by an hyperplane which doesn't intersect either set (in fact, infinitely many such hyperplanes
exist, in that case).
Two open disjoint convex sets can always be separated by at least one hyperplane that doesn't intersect either of them.
Hyperplane separation theorem | Hermann Minkowski (1864-1909)
MIT 18.409, by Jonathan Kelner : Convex geometry | Separating hyperplanes
The theorem doesn't apply for closed sets if neither is compact. Example: | {"url":"http://www.numericana.com/answer/convex.htm","timestamp":"2014-04-19T19:35:06Z","content_type":null,"content_length":"30830","record_id":"<urn:uuid:bd780278-7388-4127-a804-aa1598d45e67>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00118-ip-10-147-4-33.ec2.internal.warc.gz"} |
Improving light and heat spectra measurements
Whether you want to investigate objects in space, characterize the quality of light sources, optimize photovoltaics modules or analyze chemical compounds, measuring the spectrum of light- or heat
sources is often the method of choice. Conventional procedures thereby generate radiation distribution curves which are distorted and have to be subsequently corrected. The Physikalisch-Technische
Bundesanstalt (PTB) has now developed a mathematical procedure which yields clearly improved results and can be applied in numerous fields of radiometry and photometry. The software required can be
downloaded free of charge from PTB's website.
Measuring systems for optical or thermal radiation such as, e.g., radiometers, spectrometers and photometers, generate spectral distribution curves which shed light on the characteristics of the
measured radiation (e.g. its luminance, its colour quality, its temperature or its wavelength). These distribution curves, however, exhibit distortions which are caused by the measuring instrument
used. There are correction procedures, but these are reliable to a certain extent only. Scientists at PTB have found a new approach to this problem: they have, for the first time, considered the
occurring distortions as mathematical convolution and used the Richardson-Lucy method - an iterative procedure - for the deconvolution.
An issue which has often been discussed with regard to the Richardson-Lucy method is the need for a criterion for stopping the iterations. In this context, a novel approach has been developed
at PTB which works, in principle, automatically and independently of additional parameters. This new approach has turned out to be very robust, both in comprehensive simulations and in investigations
of measurement data. The scientists hereby investigated numerous scenarios with diverse line spread functions and signal-to-noise ratios. The procedure developed in this way is suitable both to
improve broadband spectral distribution curves (as occurring, e.g., in heat radiators) and narrowband distribution curves (as occurring in LEDs).
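The Richardson-Lucy iteration itself is short: re-blur the current estimate with the instrument's line-spread function, divide the measurement by the re-blurred estimate, back-project that ratio through the mirrored kernel, and apply it as a multiplicative correction. A toy one-dimensional sketch in plain Python (this is not PTB's software; the kernel and test signal are made up for illustration):

```python
def convolve(signal, kernel):
    """'Same'-size discrete correlation with a centred kernel
    (identical to convolution here, since the kernel is symmetric)."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            if 0 <= i + j - k < len(signal):
                s += w * signal[i + j - k]
        out.append(s)
    return out

def richardson_lucy(measured, psf, iterations=50):
    """Richardson-Lucy deconvolution: multiplicative updates that
    compare the measurement with the re-blurred current estimate."""
    est = [1.0] * len(measured)              # flat starting guess
    mirrored = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [m / b if b > 1e-12 else 0.0
                 for m, b in zip(measured, blurred)]
        est = [e * c for e, c in zip(est, convolve(ratio, mirrored))]
    return est

true = [0.0] * 10
true[5] = 4.0                                # an ideal narrow spectral line
psf = [0.25, 0.5, 0.25]                      # made-up symmetric bandpass
measured = convolve(true, psf)               # what the instrument records
est = richardson_lucy(measured, psf)
print(est.index(max(est)))                   # 5: the line is re-sharpened in place
```

The deconvolved estimate concentrates back toward the original narrow line while conserving the total signal, which is the behaviour the PTB procedure exploits for spectrometer bandpass correction.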
More information: Eichstädt, S. (2013). Comparison of the Richardson-Lucy method and a classical approach for spectrometer bandpass correction, Metrologia 50, 107 - 118. | {"url":"http://phys.org/news/2013-10-spectra.html","timestamp":"2014-04-18T05:50:25Z","content_type":null,"content_length":"67887","record_id":"<urn:uuid:2aaf877c-692a-4da5-b8eb-f911b0bf80a1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |